- ParMETIS decomposes the grid (sketched after this list)
- Write grid pieces to file
- Each domain loads its grid piece
- Final solution gather (ignore): we can skip gathering the final solution UNLESS we want to visualize
- Norm of the solution requires a reduction
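A minimal sketch of the decompose-and-write step, assuming the stencil connectivity is already distributed in the CSR form ParMETIS expects (vtxdist/xadj/adjncy). ParMETIS_V3_PartKway and its argument order are the real API; the function name decompose_and_write, the buffer setup, and the per-rank file naming are illustrative assumptions:

```c++
// Sketch: partition the stencil graph with ParMETIS, then each rank
// writes its piece to disk.
#include <mpi.h>
#include <parmetis.h>
#include <vector>
#include <cstdio>

void decompose_and_write(std::vector<idx_t>& vtxdist,
                         std::vector<idx_t>& xadj,
                         std::vector<idx_t>& adjncy,
                         idx_t nparts, MPI_Comm comm)
{
    int rank;  MPI_Comm_rank(comm, &rank);
    idx_t nlocal  = vtxdist[rank + 1] - vtxdist[rank];
    idx_t wgtflag = 0, numflag = 0, ncon = 1, edgecut = 0;
    idx_t options[3] = {0, 0, 0};           // default ParMETIS options
    std::vector<real_t> tpwgts(ncon * nparts, 1.0f / nparts);
    std::vector<real_t> ubvec(ncon, 1.05f); // 5% load-imbalance tolerance
    std::vector<idx_t>  part(nlocal);       // output: owning part per node

    ParMETIS_V3_PartKway(vtxdist.data(), xadj.data(), adjncy.data(),
                         nullptr, nullptr, &wgtflag, &numflag, &ncon,
                         &nparts, tpwgts.data(), ubvec.data(), options,
                         &edgecut, part.data(), &comm);

    // Each rank writes its assignment; a real run would redistribute the
    // nodes by part[] first and write actual grid pieces.
    char fname[64];
    std::snprintf(fname, sizeof(fname), "grid_piece_%d.txt", rank);
    FILE* f = std::fopen(fname, "w");
    for (idx_t i = 0; i < nlocal; ++i)
        std::fprintf(f, "%lld %lld\n",
                     (long long)(vtxdist[rank] + i), (long long)part[i]);
    std::fclose(f);
}
```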
In addition to my idea for scaling epsilon as a function of the minimum enclosing sphere/circle, I think I have a good idea for hyperviscosity.
The reason my solutions on the sphere fail for the CVT meshes is partly due to uncertainty about how accurate a stable solution could be (need to check), and partly due to hyperviscosity. Because the nodes are not distributed the same way as the MD nodes, the eigenvalues are perturbed, which means hyperviscosity is unlikely to behave the same as it does for the MD nodes. What I suggest is that we take hyperviscosity of the form
du/dt = diag(vel_u) D_lambda u + diag(vel_v) D_theta u - H u
and rewrite it as
du/dt = diag(vel_u) (D_lambda u - H_lambda u) + diag(vel_v) (D_theta u - H_theta u)
This will allow us to tune parameters for H_theta in the vortex roll-up case that are
1) independent of the velocity scaling, and
2) reusable for the cosine bell problem.
Then we can tune H_lambda for the cosine bell. What this would really show is that hyperviscosity can be used to smooth the DMs BEFORE velocity scaling, which means we could change the velocity parameter without retuning HV. I hope this works!
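A minimal sketch of the split right-hand side, using dense matrices for readability (the real DMs would be sparse RBF-FD operators); the matvec helper, the Vec/Mat aliases, and the rhs signature are illustrative assumptions:

```c++
// Sketch: RHS with hyperviscosity folded inside each velocity scaling,
//   du/dt = diag(vel_u) (D_lambda - H_lambda) u
//         + diag(vel_v) (D_theta  - H_theta ) u
// so H_lambda/H_theta can be tuned before the velocity scaling is applied.
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<std::vector<double>>; // dense row-major, illustrative

// y = A * u  (a real code would use a sparse matvec)
static Vec matvec(const Mat& A, const Vec& u) {
    Vec y(A.size(), 0.0);
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t j = 0; j < u.size(); ++j)
            y[i] += A[i][j] * u[j];
    return y;
}

Vec rhs(const Mat& D_lambda, const Mat& H_lambda,
        const Mat& D_theta,  const Mat& H_theta,
        const Vec& vel_u, const Vec& vel_v, const Vec& u)
{
    Vec dl = matvec(D_lambda, u), hl = matvec(H_lambda, u);
    Vec dt = matvec(D_theta,  u), ht = matvec(H_theta,  u);
    Vec dudt(u.size());
    for (size_t i = 0; i < u.size(); ++i)
        dudt[i] = vel_u[i] * (dl[i] - hl[i])   // lambda direction, pre-tuned H_lambda
                + vel_v[i] * (dt[i] - ht[i]);  // theta direction, pre-tuned H_theta
    return dudt;
}
```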
-----------------------------
- Need VCL RK4 working
-- NO support for queues yet; assume all work is in one queue.
-- That means we need a simple RK4 built from SAXPYs (sketched after this list).
-- Then, assuming we have vector views, build RK4 on subset SAXPYs (NO overlap).
-- Benchmark and compare Cosine Bell performance in thesis.
-- Add overlapping comm/comp (requires rewriting VCL with queues).
-- Benchmark and compare performance in thesis.
- Extend GMRES with overlap
-- Benchmark current for thesis
-- Overlap comm/comp
-- Benchmark for thesis
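A minimal sketch of classic RK4 written purely as SAXPY-style updates, so each line maps onto one GPU vector kernel. A plain std::vector stands in for the VCL vector type; axpy, the rhs callback, and the stage layout are illustrative assumptions:

```c++
// Sketch: one RK4 step expressed only as saxpy-style vector updates,
//   u <- u + dt/6 (k1 + 2 k2 + 2 k3 + k4).
#include <vector>
#include <functional>

using Vec = std::vector<double>;

// y += a * x  -- the single kernel everything below is built from
static void axpy(double a, const Vec& x, Vec& y) {
    for (size_t i = 0; i < y.size(); ++i) y[i] += a * x[i];
}

// rhs(u) evaluates du/dt; tmp holds the staged input u + c*dt*k.
void rk4_step(Vec& u, double dt, const std::function<Vec(const Vec&)>& rhs)
{
    Vec k1 = rhs(u);

    Vec tmp = u;  axpy(0.5 * dt, k1, tmp);   // tmp = u + dt/2 k1
    Vec k2 = rhs(tmp);

    tmp = u;      axpy(0.5 * dt, k2, tmp);   // tmp = u + dt/2 k2
    Vec k3 = rhs(tmp);

    tmp = u;      axpy(dt, k3, tmp);         // tmp = u + dt k3
    Vec k4 = rhs(tmp);

    axpy(dt / 6.0, k1, u);                   // accumulate weighted stages
    axpy(dt / 3.0, k2, u);
    axpy(dt / 3.0, k3, u);
    axpy(dt / 6.0, k4, u);
}
```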
----> Thesis introduces OVERLAPPED comm/comp AND GPU. Most other libraries CANNOT match this feature because MPI-2 has no nonblocking MPI_Alltoallv; MPI-3 adds nonblocking collectives (MPI_Ialltoallv) that would support it. In the meantime we could use MPI_Isend/MPI_Irecv to program a custom overlapped comm/comp, as sketched below.
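A minimal sketch of the Isend/Irecv approach, assuming one send/recv buffer per neighbor rank and an interior/boundary split of the local work; the neighbor lists and the compute callbacks are illustrative assumptions:

```c++
// Sketch: custom overlapped comm/comp with MPI_Isend/MPI_Irecv.
// Post the halo exchange, compute on interior nodes (which need no halo
// data), then wait and finish the boundary nodes.
#include <mpi.h>
#include <vector>
#include <functional>

void halo_exchange_overlap(
    const std::vector<int>& neighbors,          // neighbor ranks
    std::vector<std::vector<double>>& sendbufs, // one buffer per neighbor
    std::vector<std::vector<double>>& recvbufs,
    const std::function<void()>& compute_interior,
    const std::function<void()>& compute_boundary,
    MPI_Comm comm)
{
    const int n = static_cast<int>(neighbors.size());
    std::vector<MPI_Request> reqs(2 * n);

    for (int i = 0; i < n; ++i) {
        MPI_Irecv(recvbufs[i].data(), (int)recvbufs[i].size(), MPI_DOUBLE,
                  neighbors[i], /*tag=*/0, comm, &reqs[i]);
        MPI_Isend(sendbufs[i].data(), (int)sendbufs[i].size(), MPI_DOUBLE,
                  neighbors[i], /*tag=*/0, comm, &reqs[n + i]);
    }

    compute_interior();  // overlap: no halo data needed here

    MPI_Waitall(2 * n, reqs.data(), MPI_STATUSES_IGNORE);

    compute_boundary();  // halo values in recvbufs are now valid
}
```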