On 6 Oct 2014, at 14:08, Eike Mueller <e.mueller@bath.ac.uk> wrote:
Dear all,
I'm wondering whether this isn't an error in a PETSc call after all, since it complains about a segfault.
PETSc installs an error handler that reports errors as PETSc errors even when the abort originated elsewhere.
I provide callbacks to PETSc KSP solvers in my code, and for this I have to set the sizes of the vectors that the KSP operates on. In the mixed preconditioner I want it to operate on a vector which contains both the pressure and the velocity dofs. To count the total number of dofs I do this:
helmholtz.py: 60

  self.ndof_phi = self.V_pressure.dof_dset.size
  self.ndof_u = self.V_velocity.dof_dset.size
  self.ndof = self.ndof_phi + self.ndof_u
and then I set up the operator for the KSP like this:
helmholtz.py: 69

  op = PETSc.Mat().create()
  op.setSizes(((self.ndof, None), (self.ndof, None)))
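The rest of the operator setup follows the usual petsc4py Python-matrix (shell) pattern, roughly like the sketch below; the context class name and the mult body here are only placeholders, not my actual code:

  from petsc4py import PETSc

  class MixedOperatorCtx(object):
      # placeholder shell context: mult() would apply the mixed
      # operator matrix-free to x and write the result into y
      def mult(self, mat, x, y):
          x.copy(y)  # stand-in for the real operator application

  ndof_phi, ndof_u = 10, 20   # placeholder process-local dof counts
  ndof = ndof_phi + ndof_u

  op = PETSc.Mat().create()
  # ((local rows, global rows), (local cols, global cols)); passing None
  # lets PETSc sum the local sizes over the communicator
  op.setSizes(((ndof, None), (ndof, None)))
  op.setType(PETSc.Mat.Type.PYTHON)
  op.setPythonContext(MixedOperatorCtx())
  op.setUp()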
In the preconditioner class I copy the dofs in and out of the PETSc vectors encapsulated in the firedrake pressure and velocity Functions, calling my matrix-free solver routine in between:
helmholtz.py: 365

  with self.phi_tmp.dat.vec as v:
      v.array[:] = x.array[:self.ndof_phi]
  with self.u_tmp.dat.vec as v:
      v.array[:] = x.array[self.ndof_phi:]
  self.solve(self.phi_tmp, self.u_tmp, self.P_phi_tmp, self.P_u_tmp)
  with self.P_phi_tmp.dat.vec_ro as v:
      y.array[:self.ndof_phi] = v.array[:]
  with self.P_u_tmp.dat.vec_ro as v:
      y.array[self.ndof_phi:] = v.array[:]
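(This context is hooked up to the KSP's preconditioner in the standard petsc4py shell fashion, roughly as below; the class name is just a placeholder and the apply body stands in for the snippet above:)

  from petsc4py import PETSc

  class MixedPC(object):
      # placeholder shell context: apply() holds the copy-in / solve /
      # copy-out shown above
      def apply(self, pc, x, y):
          x.copy(y)  # stand-in so this sketch runs on its own

  ksp = PETSc.KSP().create()
  ksp.getPC().setType(PETSc.PC.Type.PYTHON)
  ksp.getPC().setPythonContext(MixedPC())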
Is this the correct way of doing it, in particular the use of self.V_pressure.dof_dset.size?
Yes, that's right. If the sizes were wrong you would see a numpy error about mismatching array sizes.
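If you want to double-check on every process, an assertion along these lines (just a sketch, written against the temporaries from your snippet) confirms that the slicing offsets match the local sizes of the underlying Vecs:

  def check_local_sizes(phi_tmp, u_tmp, ndof_phi, ndof_u):
      # assert that the process-local dof counts used for slicing match
      # the local sizes of the PETSc Vecs wrapped by the Functions
      with phi_tmp.dat.vec_ro as v:
          assert v.getLocalSize() == ndof_phi, (v.getLocalSize(), ndof_phi)
      with u_tmp.dat.vec_ro as v:
          assert v.getLocalSize() == ndof_u, (v.getLocalSize(), ndof_u)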
Could it be that it runs without problems for smaller grid sizes, but for larger grid sizes it crashes on one processor where there is an out-of-bounds access, while the compilation error is reported on the master processor?
If you're getting (or were getting) compilation errors, can you please run with "export PYOP2_DEBUG=1"? It's possible that the code is not the same on all processes, which would be caught by that setting.

Please note as well that the halo regions are going to be massive. I have some branches that build the correct shrunk halos, but they haven't landed yet and I'm somewhat incapacitated with a broken collarbone. You'd need:

firedrake: multigrid-parallel
pyop2: local-par_loop
petsc4py: bitbucket.org/mapdes/petsc4py branch moar-plex
petsc: mlange/plex-distributed-overlap

Functionality similar to the latter should hopefully arrive in petsc master this week.

Lawrence