Gathering D.O.F.s onto one process
Hello,

I am working on a project where I need to save the NumPy arrays of degrees of freedom of FE functions and use them in another script. I am wondering how to get consistent results when using MPI.

Here's a minimal example of what I would like to do:

----------------------------------------
from firedrake import *
import numpy as np

mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, "CG", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Function(V)
x, y = SpatialCoordinate(mesh)
f.interpolate((1 + 8*pi*pi)*cos(2*pi*x)*cos(2*pi*y))

a = (dot(grad(v), grad(u)) + v * u) * dx
L = f * v * dx

u = Function(V)
solve(a == L, u, solver_parameters={'ksp_type': 'cg'})

U = u.vector().gather()

if u.comm.rank == 0:
    print(U)
    name = 'U_global_' + str(u.comm.size) + '.npy'
    np.save(name, U)
----------------------------------------

If I run this in serial, and then with two MPI processes, I get two files, 'U_global_1.npy' and 'U_global_2.npy', but they do not agree. The first (serial run) has values:

[-0.29551776 -0.67467152 -0.67467152 -0.15155862  0.06606523  0.06606523  0.36508954  0.36508954  1.33298501  1.33298501  0.06606523 -0.15155862  0.06606522 -0.67467153 -0.67467153 -0.29551776]

while the second has values:

[-0.67467152 -0.67467152 -0.29551776  0.06606523 -0.15155862  0.06606523  0.06606523 -0.67467152 -0.15155862 -0.29551776 -0.67467152  0.06606523  1.33298501  0.36508954  0.36508954  1.33298502]

How do I get the parallel run to match the ordering of the d.o.f.s in the serial case? Avoiding the .gather() approach would also be good, since I really only need the array on a single process.

Thanks,
Peter
On Thu, Sep 26, 2019 at 8:15 PM Sentz, Peter <sentz2@illinois.edu> wrote:
Hello,
I am working on a project where I need to save the NumPy arrays of degrees of freedom of FE functions and use them in another script. I am wondering how to get consistent results when using MPI.
Here's a minimal example of what I would like to do:
----------------------------------------
from firedrake import *
import numpy as np
mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, "CG", 1)

u = TrialFunction(V)
v = TestFunction(V)
f = Function(V)
x, y = SpatialCoordinate(mesh)
f.interpolate((1 + 8*pi*pi)*cos(2*pi*x)*cos(2*pi*y))

a = (dot(grad(v), grad(u)) + v * u) * dx
L = f * v * dx
u = Function(V)
solve(a == L, u, solver_parameters={'ksp_type': 'cg'})
U = u.vector().gather()
Here I think you can just use u.vector().view(viewer) with a petsc4py Viewer of the appropriate type (maybe binary?).
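Something along these lines might work (a minimal, untested sketch: it assumes the underlying PETSc Vec is reachable via u.dat.vec_ro, and the filename 'u.dat' is arbitrary):

----------------------------------------
from firedrake.petsc import PETSc

# Write the solution vector to a PETSc binary file.
viewer = PETSc.Viewer().createBinary('u.dat', mode='w', comm=u.comm)
with u.dat.vec_ro as vec:  # read-only handle on the underlying PETSc Vec
    vec.view(viewer)
viewer.destroy()

# Read it back later, e.g. in another (serial) script.
viewer = PETSc.Viewer().createBinary('u.dat', mode='r', comm=PETSc.COMM_SELF)
w = PETSc.Vec().create(comm=PETSc.COMM_SELF)
w.load(viewer)        # VecLoad takes the size and type from the file
U = w.getArray()      # NumPy array of the values
----------------------------------------

Note that the binary viewer writes the entries in the Vec's global ordering for that particular run; I have not checked whether that ordering coincides between serial and parallel runs.

Thanks,

  Matt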
if u.comm.rank == 0:
    print(U)
    name = 'U_global_' + str(u.comm.size) + '.npy'
    np.save(name, U)
----------------------------------------
If I run this in serial, and then with two MPI processes, I get two files, 'U_global_1.npy' and 'U_global_2.npy', but they do not agree. The first (serial run) has values:
[-0.29551776 -0.67467152 -0.67467152 -0.15155862 0.06606523 0.06606523 0.36508954 0.36508954 1.33298501 1.33298501 0.06606523 -0.15155862 0.06606522 -0.67467153 -0.67467153 -0.29551776]
while the second has values:
[-0.67467152 -0.67467152 -0.29551776 0.06606523 -0.15155862 0.06606523 0.06606523 -0.67467152 -0.15155862 -0.29551776 -0.67467152 0.06606523 1.33298501 0.36508954 0.36508954 1.33298502]
How do I get the parallel run to match the ordering of the d.o.f.s in the serial case? Avoiding the .gather() approach would also be good, since I really only need the array on a single process.
Thanks,
Peter
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
participants (2)
- Matthew Knepley
- Sentz, Peter