Re: [firedrake] Gathering D.O.F.s onto one process
On Mon, Sep 30, 2019 at 3:45 PM Sentz, Peter <sentz2@illinois.edu> wrote:
Hi,
u.vector().view() does not work because a firedrake.vector.Vector has no such method.
However, I did try the following:
from firedrake.petsc import PETSc

Q = PETSc.Viewer().createBinary('name.dat', 'w')
with u.dat.vec_ro as vu:
    vu.view(Q)
Then, in a separate .py file, I loaded this .dat file and looked at the resulting vector. Unfortunately, it does not maintain the serial ordering. This is perplexing, because according to https://www.firedrakeproject.org/demos/parprint.py.html, shouldn't .view() maintain the serial ordering?
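For reference, a minimal sketch of what such a loading script could look like (this is an illustration, not the exact script used in the thread; it assumes petsc4py is installed and reads back the 'name.dat' file written above):

from petsc4py import PETSc
import numpy as np

# Open the binary file written above for reading and load the stored Vec.
viewer = PETSc.Viewer().createBinary('name.dat', 'r')
vec = PETSc.Vec().load(viewer)

# Copy the entries into a NumPy array and inspect them.
arr = vec.getArray().copy()
print(arr)

PETSc binary files store the entries in the Vec's global (parallel) ordering, which is why the values read back this way reflect the distributed layout discussed below.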
I think we may have a misunderstanding about what maintaining an ordering means. If you output using the PETSc parallel viewer, then it will definitely output in the same order in which the vector is distributed in parallel, meaning the dofs from proc0, then proc1, etc. However, it now sounds like this order does not match the order you have in serial, which is completely understandable. When you distribute meshes, it is normal to partition them to reduce communication, which necessitates reordering.

Thanks,

Matt
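One possible workaround (a sketch, not something prescribed in this thread): since each CG1 dof is attached to a unique mesh point, one can gather the dof coordinates alongside the values and sort by coordinate, which gives an ordering independent of the parallel partition, up to floating-point comparison of the coordinates. The setup below is just for illustration and mirrors the example later in the thread.

from firedrake import *
import numpy as np

mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, "CG", 1)
x, y = SpatialCoordinate(mesh)
u = Function(V)
u.interpolate(cos(2*pi*x)*cos(2*pi*y))

# Interpolate the mesh coordinates into a vector-valued space built on the
# same element, so that block i of 'coords' holds the location of dof i of u.
W = VectorFunctionSpace(mesh, "CG", 1)
coords = Function(W)
coords.interpolate(SpatialCoordinate(mesh))

# Gather values and coordinates, then sort lexicographically by (x, y);
# the sorted array should be the same regardless of how the mesh was
# partitioned.
vals = u.vector().gather()
pts = coords.vector().gather().reshape(-1, 2)
order = np.lexsort((pts[:, 1], pts[:, 0]))
if u.comm.rank == 0:
    print(vals[order])

Sorting by coordinates sidesteps the dof numbering entirely; for spaces where several dofs share a point (higher-order or discontinuous spaces), this simple sort would no longer be enough.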
Peter

------------------------------
From: Matthew Knepley <knepley@gmail.com>
Sent: Saturday, September 28, 2019 12:14 AM
To: Sentz, Peter <sentz2@illinois.edu>
Cc: firedrake@imperial.ac.uk <firedrake@imperial.ac.uk>
Subject: Re: [firedrake] Gathering D.O.F.s onto one process
On Thu, Sep 26, 2019 at 8:15 PM Sentz, Peter <sentz2@illinois.edu> wrote:
Hello,
I am working on a project where I need to save the NumPy arrays of degrees of freedom of FE functions and use them in another script. I am wondering how to get consistent results when using MPI.
Here's a minimal example of what I would like to do:
----------------------------------------
from firedrake import *
import numpy as np

mesh = UnitSquareMesh(3, 3)

V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
f = Function(V)
x, y = SpatialCoordinate(mesh)
f.interpolate((1 + 8*pi*pi)*cos(x*pi*2)*cos(y*pi*2))
a = (dot(grad(v), grad(u)) + v * u) * dx
L = f * v * dx

u = Function(V)
solve(a == L, u, solver_parameters={'ksp_type': 'cg'})
U = u.vector().gather()
Here I think you can just use
u.vector().view(viewer)
with a petsc4py Viewer of the appropriate type (maybe binary?).
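A sketch of what that might look like (note that, as mentioned at the top of this page, a firedrake Vector has no .view() method, so the underlying PETSc Vec is reached through u.dat.vec_ro instead; the file name here is arbitrary):

from firedrake.petsc import PETSc

# Create a petsc4py binary viewer and dump the PETSc Vec that backs u.
viewer = PETSc.Viewer().createBinary('u.dat', 'w')
with u.dat.vec_ro as vec:
    vec.view(viewer)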
Thanks,
Matt
if u.comm.rank == 0:
    print(U)
    name = 'U_global_' + str(u.comm.size) + '.npy'
    np.save(name, U)
---------------------------------------------------------------------
If I run this in serial and then with two MPI processes, I get two files, 'U_global_1.npy' and 'U_global_2.npy', but they do not agree. The first (serial) run has values:
[-0.29551776 -0.67467152 -0.67467152 -0.15155862 0.06606523 0.06606523 0.36508954 0.36508954 1.33298501 1.33298501 0.06606523 -0.15155862 0.06606522 -0.67467153 -0.67467153 -0.29551776]
while the second has values:
[-0.67467152 -0.67467152 -0.29551776 0.06606523 -0.15155862 0.06606523 0.06606523 -0.67467152 -0.15155862 -0.29551776 -0.67467152 0.06606523 1.33298501 0.36508954 0.36508954 1.33298502]
How do I get the parallel implementation to match the ordering of the d.o.f.s in the serial case? It would also be good to avoid this .gather() approach, because I really only need the array gathered on a single process.
Thanks,
Peter
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/