Hi Floriane,
On 05/09/17 12:43, Floriane Gidel [RPG] wrote:
> Hi David,
> 
> 
> This is a big piece of code where I solve nonlinear equations in a 2D
> domain with mixed systems involving vector functions. 
> 
> I'd like to run the whole code in parallel except for some lines, such
> as saving txt files, to avoid overwriting them or saving to several
> files. Is there a command to select a specific process? Andrew once
> mentioned using op2.MPI.comm.rank. Is that a solution? How can I use it?
The terminology MPI uses is that you have a *communicator*, comprising
some number of processes, each of which is identified by its *rank*.
All Firedrake objects should have the communicator they are defined on
accessible as the .comm attribute (if you find one that is not, please
report a bug).
When you build a mesh, and don't provide a communicator, it is built
on the default COMM_WORLD communicator.  Objects built on top of the
mesh inherit the mesh's communicator.
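For example, a quick sketch (the mesh sizes are arbitrary):

from firedrake import *

mesh = UnitSquareMesh(8, 8)        # no comm given, so COMM_WORLD is used
V = FunctionSpace(mesh, "CG", 1)   # inherits the mesh's communicator
f = Function(V)

# Every object exposes the communicator it was built on:
print("size", mesh.comm.size, "rank", f.comm.rank)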
So if you have a mesh and only want to print something on rank 0, you
can write:
if mesh.comm.rank == 0:
    print("I am on rank 0")
An important thing to note:
Most operations that you do in Firedrake are *collective* over the
communicator of the participating objects.  That means that all ranks
need to participate, otherwise you will get errors or your code might
hang.
So for example you can't do this:
mesh = Mesh(...)
V = FunctionSpace(mesh, ...)
v = TestFunction(V)
f = assemble(v*dx)
if f.comm.rank == 0:
    # This line collectively accesses the data in f
    # it therefore will hang.
    print(f.dat.data)
You can do:
# Everyone accesses data, rank 0 prints something
fdata = f.dat.data
if f.comm.rank == 0:
    print(fdata)
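Applied to your txt-file question, a sketch of the pattern (the numpy
calls and the file name are only illustrative): every rank takes part
in the collective data access and the gather, and only rank 0 touches
the file.

import numpy

fdata = f.dat.data_ro                    # collective: each rank reads its local values
gathered = f.comm.gather(fdata, root=0)  # rank 0 gets a list of per-rank arrays
if f.comm.rank == 0:
    # Note: the concatenated ordering is the parallel dof ordering, not
    # the ordering a serial run would give you.
    numpy.savetxt("h_values.txt", numpy.concatenate(gathered))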
> 
> The error when appending the array is not such an issue as I need it
> only for visualisation purposes.
> 
> For instance, I want to save a 1D function h_1D
> 
> mesh_1D = IntervalMesh(Nx,Lx)
> 
> V_1D = FunctionSpace(mesh_1D, "CG", 1)
> 
> h_1D = Function(V_1D)
> 
> 
> on a 2D surface. So I define a 2D mesh as follows:
> 
> mesh_2D = RectangleMesh(Nx,1,Lx,1.0,quadrilateral=True)
> 
> V_2D = FunctionSpace(mesh_2D, "CG", 1)
> 
> h_2D = Function(V_2D)
> 
> 
> Then, I save in Indx[i] the indices for which x_2D[i] = x_1D[i]: 
> 
> Indx = []
> 
>     for i in range(Nx+1):
> 
>         Indx.append([item for item
>                      in range(len(mesh_2D.coordinates.dat.data[:,0]))
>                      if mesh_2D.coordinates.dat.data[item,0]
>                      == mesh_1D.coordinates.dat.data[i]])
> 
> 
> The idea is that I can then project h_1D to the 2D mesh through h_2D:
> 
> for i in range(Nx+1):
> 
>     h_2D.dat.data[Indx[i]] = h_1D.dat.data[i]
> 
> This works fine when I run the code on one core, but when running in
> parallel I get an index error:
> 
> File "main.py", line 932, in <listcomp>
> 
>     Indx.append([item for item in
> range(len(mesh_2D.coordinates.dat.data[:,0])) if
> mesh_2D.coordinates.dat.data[item,0] == mesh_1D.coordinates.dat.data[j]])
> 
> IndexError: index 125 is out of bounds for axis 0 with size 125
Both the 2D mesh and the 1D mesh are distributed amongst the MPI
processes, so you have no guarantee that there are Nx+1 entries in the
1D coordinate array.  You probably want:
for i in range(len(mesh_1D.coordinates.dat.data)):
    Indx.append(...)
The same is true of the write.  Once you have built the Indx array, I
would just do the whole assignment in one go with numpy fancy indexing
(assuming every entry of Indx has the same length):

h_2D.dat.data[Indx] = h_1D.dat.data_ro[:, None]
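Putting both fixes together, something like the sketch below (same
variable names as yours; untested, so treat it as a starting point):

import numpy as np

x2d = mesh_2D.coordinates.dat.data_ro[:, 0]   # locally owned 2D x-coordinates
x1d = mesh_1D.coordinates.dat.data_ro         # locally owned 1D x-coordinates

# For each local 1D coordinate, the indices of the matching local 2D nodes.
Indx = [np.where(x2d == xi)[0] for xi in x1d]

# Broadcast each 1D value onto its matching 2D nodes.  If the rows of
# Indx have different lengths on some process, fall back to a per-row
# loop rather than this single fancy-indexed assignment.
h_2D.dat.data[np.asarray(Indx)] = h_1D.dat.data_ro[:, None]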
> I can comment it, but then I get other (Firedrake-related) errors that
> I don't get otherwise, for example:
> 
>   File "main.py", line 970, in <module>
> 
>     mesh_WM= CubeMesh(1,1,1,1.0)
...
>     raise RuntimeError("Mesh must have at least one cell on every
> process")
> 
> RuntimeError: Mesh must have at least one cell on every process
> 
> 
> I don't understand why running in parallel causes errors for the
> mesh definition?
You made a 1x1x1 cube mesh which has 6 cells.  If you tried to run on
more than 6 processes (mpiexec -n 7, for example), then you would get
this error, because every process must own at least one cell.
Possibly even with fewer than 6 processes you might get this error
depending on the exact mesh decomposition.
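Two easy ways around it, as a sketch (pick whichever suits what you use
mesh_WM for; the comm keyword is accepted by the utility mesh
constructors in an up-to-date Firedrake):

# Option 1: give the mesh enough cells to distribute.
mesh_WM = CubeMesh(2, 2, 2, 1.0)   # 48 cells rather than 6

# Option 2: keep it tiny, but give every process its own copy instead
# of distributing it, by passing an explicit communicator.
# (COMM_SELF comes with "from firedrake import *", or from mpi4py.MPI.)
mesh_WM = CubeMesh(1, 1, 1, 1.0, comm=COMM_SELF)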
Cheers,
Lawrence