Re: [firedrake] Gathering D.O.F.s onto one process
On Mon, Sep 30, 2019 at 10:46 PM Sentz, Peter <sentz2@illinois.edu> wrote:
So is there a way to return how the global degrees of freedom are distributed across processors?
I would recommend changing your workflow so that you do not need this, since it complicates everything. However, if you still really want to do this, you can:

1) Tell the DM to construct mappings back to the original ordering: https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DM/DMSetUseNatur...

2) After distribution, map a vector back to that ordering: https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexGlo...

3) Output that vector using view().

Note that DMDA outputs in the natural ordering by default, since that is not expensive in the structured case.

Thanks,
Matt
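At the petsc4py level, those three steps look roughly like the sketch below. This is not from the thread and is untested: it assumes an undistributed DMPlex dm, a global Vec vglobal produced by the (elided) section set-up and solve, and that setUseNatural and the globalToNatural calls are exposed by your petsc4py build. Note also Lawrence's caveat further down the thread that Firedrake does not set a Section on the DMPlex before distribution, so this may not work out of the box there.

from petsc4py import PETSc

# dm is assumed to be a not-yet-distributed DMPlex (placeholder, see lead-in).
dm.setUseNatural(True)                       # 1) keep the mapping to the original ordering
dm.distribute()                              #    (request it before distributing the mesh)

# ... attach a section, assemble and solve, giving a global Vec vglobal ...

vnatural = dm.createGlobalVec()              # holds the naturally ordered values
dm.globalToNaturalBegin(vglobal, vnatural)   # 2) permute back to the original ordering
dm.globalToNaturalEnd(vglobal, vnatural)

viewer = PETSc.Viewer().createBinary('u_natural.dat', 'w')
vnatural.view(viewer)                        # 3) write out the naturally ordered vector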
-Peter

From: Matthew Knepley <knepley@gmail.com>
Sent: Monday, September 30, 2019 6:33 PM
To: Sentz, Peter <sentz2@illinois.edu>
Cc: Firedrake Project <firedrake@imperial.ac.uk>
Subject: Re: [firedrake] Gathering D.O.F.s onto one process
On Mon, Sep 30, 2019 at 3:45 PM Sentz, Peter <sentz2@illinois.edu> wrote:
Hi,
u.vector().view() does not work, because a firedrake.vector.Vector has no such method.
However, I did try the following:
Q = PETSc.Viewer().createBinary('name.dat','w')
with u.dat.vec_ro as vu:
    vu.view(Q)
Then, in a separate .py file, I loaded this .dat file and looked at the resulting vector. Unfortunately it does not maintain the serial ordering. This is perplexing: according to https://www.firedrakeproject.org/demos/parprint.py.html, shouldn't .view() maintain the serial ordering?
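For reference, the read-back side of that separate script is essentially the following sketch, assuming petsc4py's Vec.load and a binary viewer opened in read mode:

from petsc4py import PETSc
import numpy as np

viewer = PETSc.Viewer().createBinary('name.dat', 'r')  # open the file written above
v = PETSc.Vec().load(viewer)                           # read the stored Vec
U = np.copy(v.getArray())                              # this rank's portion as a NumPy array
print(U)                                               # run in serial, this prints the whole vector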
I think we may have a misunderstanding about what maintaining an ordering means. If you output using the PETSc parallel viewer, then it will definitely output in the same order in which the vector is distributed in parallel, meaning the dofs from proc0, then proc1, etc. However, it now sounds like this order does not match the order you have in serial, which is completely understandable. When you distribute meshes, it is normal to partition them to reduce communication, which necessitates reordering.
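To see concretely which slice of the viewer output each rank contributes, one can print the Vec's ownership range (a minimal sketch, reusing the u.dat.vec_ro access from earlier in the thread):

with u.dat.vec_ro as vu:
    lo, hi = vu.getOwnershipRange()  # contiguous block of global indices owned by this rank
    print("rank %d owns global indices [%d, %d)" % (u.comm.rank, lo, hi))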
Thanks,
Matt
Peter

From: Matthew Knepley <knepley@gmail.com>
Sent: Saturday, September 28, 2019 12:14 AM
To: Sentz, Peter <sentz2@illinois.edu>
Cc: firedrake@imperial.ac.uk
Subject: Re: [firedrake] Gathering D.O.F.s onto one process
On Thu, Sep 26, 2019 at 8:15 PM Sentz, Peter <sentz2@illinois.edu> wrote:
Hello,
I am working on a project where I need to save the Numpy arrays of degrees of freedom of FE functions and use them in another script. I am wondering how to get consistent results when using MPI.
Here's a minimal example of what I would like to do:
----------------------------------------
from firedrake import *
import numpy as np
mesh = UnitSquareMesh(3, 3)
V = FunctionSpace(mesh, "CG", 1)
u = TrialFunction(V)
v = TestFunction(V)
f = Function(V)
x, y = SpatialCoordinate(mesh)
f.interpolate((1 + 8*pi*pi) * cos(x*pi*2) * cos(y*pi*2))
a = (dot(grad(v), grad(u)) + v * u) * dx
L = f * v * dx
u = Function(V)
solve(a == L, u, solver_parameters={'ksp_type': 'cg'})
U = u.vector().gather()
Here I think you can just use
u.vector().view(viewer)
with a petsc4py Viewer of the appropriate type (maybe binary?).
Thanks,
Matt
if u.comm.rank == 0:
    print(U)
    name = 'U_global_' + str(u.comm.size) + '.npy'
    np.save(name, U)
---------------------------------------------------------------------
If I run this in serial, and then with two processes in MPI, I get two files 'U_global_1.npy' and 'U_global_2.npy' but they do not agree. The first (in serial) has values:
[-0.29551776 -0.67467152 -0.67467152 -0.15155862 0.06606523 0.06606523 0.36508954 0.36508954 1.33298501 1.33298501 0.06606523 -0.15155862 0.06606522 -0.67467153 -0.67467153 -0.29551776]
while the second has values:
[-0.67467152 -0.67467152 -0.29551776 0.06606523 -0.15155862 0.06606523 0.06606523 -0.67467152 -0.15155862 -0.29551776 -0.67467152 0.06606523 1.33298501 0.36508954 0.36508954 1.33298502]
How do I get the parallel implementation to match the ordering of the d.o.f.s in the serial case? Also, avoiding this .gather() approach altogether would be good, because I really only need the array gathered on a single process.
Thanks,
Peter
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
On 1 Oct 2019, at 10:48, Matthew Knepley <knepley@gmail.com> wrote:
On Mon, Sep 30, 2019 at 10:46 PM Sentz, Peter <sentz2@illinois.edu> wrote: So is there a way to return how the global degrees of freedom are distributed across processors?
I would recommend changing your workflow so that you do not need this, since it complicates everything.
However, if you still really want to do this, you can
1) Tell the DM to construct mappings back to the original ordering https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DM/DMSetUseNatur...
2) After distribution, map a vector back to that ordering https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexGlo...
3) Output that vector using view()
FWIW, I would be a little bit careful checking that this all DTRT. We don't set a Section on the DMPlex before distribution, so (as I understand the code) the global-to-natural SF won't be built.

Aside: why does the interface in PETSc not build an SF for the permutation of the topology to natural ordering, and then you would do:

DMGlobalToNatural(dm, Vec global, Section layout, Vec natural);

Peter, perhaps you can explain why you need a consistent ordering and we can figure out a way to achieve the same thing without doing this dance. As Matt says, it does make lots of things more complicated.

Thanks,
Lawrence
On Tue, Oct 1, 2019 at 6:35 AM Lawrence Mitchell <wence@gmx.li> wrote:
On 1 Oct 2019, at 10:48, Matthew Knepley <knepley@gmail.com> wrote:
On Mon, Sep 30, 2019 at 10:46 PM Sentz, Peter <sentz2@illinois.edu> wrote: So is there a way to return how the global degrees of freedom are distributed across processors?
I would recommend changing your workflow so that you do not need this, since it complicates everything.
However, if you still really want to do this, you can
1) Tell the DM to construct mappings back to the original ordering
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DM/DMSetUseNatur...
2) After distribution, map a vector back to that ordering
https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexGlo...
3) Output that vector using view()
FWIW, I would be a little bit careful checking that this all DTRT. We don't set a Section on the DMPlex before distribution, so (as I understand the code) the global-to-natural SF won't be built.
Ah, you are correct. Crap.
Aside: why does the interface in PETSc not build an SF for the permutation of the topology to natural ordering, and then you would do:
We could have if we had thought of it. I did not write it and I missed that in review.

Thanks,
Matt
DMGlobalToNatural(dm, Vec global, Section layout, Vec natural);
Peter, perhaps you can explain why you need a consistent ordering and we can figure out a way to achieve the same thing without doing this dance. As Matt says, it does make lots of things more complicated.
Thanks,
Lawrence
Hi Lawrence,

I am working on Non-Intrusive Reduced Order modeling techniques for PDEs. For such a method to be non-intrusive, the PDE solver should only be used to compute training data and test data (the d.o.f.s of the PDE), while developing the reduced order model should be independent of how the data was computed. In order to make sense of the data (plotting and L^2 error computation) I need the mesh ordering that the d.o.f.s actually correspond to.

- Peter
Hi Peter,

I think this misconstrues what constitutes "non-intrusive". What you are actually saying is that you don't want to use actual published API features of Firedrake, like plotting and computing the L^2 norm, but instead want to exploit unpublished implementation details in order to use a different tool for these things. Just using Firedrake to accomplish those goals (which would literally be a couple of lines of code) would in no way detract from the non-intrusiveness of your reduced order model, because you still wouldn't be using any information about how the PDE was solved.

In fact, it's a bit worse than that. If you actually want to be non-intrusive, then your code cannot possibly plot or compute an L^2 error, because both of those things require knowing what the basis functions used in the finite element method were. Those things can only be the purview of the PDE solver. What you are proposing isn't actually non-intrusive; it's just conveying basis function information implicitly through a numbering convention rather than explicitly by calling the Firedrake API.

Regards,
David
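For concreteness, those couple of lines would look roughly like the sketch below. It is not from the thread; it reuses the manufactured problem from the original snippet, whose exact solution is cos(2*pi*x)*cos(2*pi*y):

from firedrake import *
import numpy as np

# u is the Function computed in the original example.
x, y = SpatialCoordinate(u.function_space().mesh())
u_exact = cos(2*pi*x) * cos(2*pi*y)                  # exact solution of the model problem

l2_error = np.sqrt(assemble((u - u_exact)**2 * dx))  # L^2 error as an assembled integral
File("u.pvd").write(u)                               # write u out for plotting (e.g. in ParaView)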
participants (4)
- Ham, David A
- Lawrence Mitchell
- Matthew Knepley
- Sentz, Peter