Dear all,
With Lawrence's fix it now works with 10 levels (20,971,520 cells) on two full nodes. If I increase both the number of cells and the number of nodes by a factor of 4 (i.e. the same #dof/core), I get a crash again. However, the position in the code where it crashes varies; here are two examples:
In the 1st and 3rd run, I had cleared the PyOP2/firedrake caches, whereas in the second case I'm not entirely sure what the state of the cache was.
In the 3rd run I also set PYOP2_DEBUG=1, whereas it was 0 in the first two runs.
Another weird thing is that the 3rd run seems to complete, but still ends with the dreaded PETSc seg fault.
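For reference, the kind of run I mean looks roughly like the sketch below. This is only an illustrative Poisson solve written against the current MeshHierarchy API, not the actual script; the base mesh, discretisation and solver options are placeholders.

    from firedrake import *

    # Illustrative placeholders: the real base mesh, refinement depth and solver differ.
    base = UnitSquareMesh(4, 4)
    hierarchy = MeshHierarchy(base, 10)   # 10 refinement levels
    mesh = hierarchy[-1]                  # assemble and solve on the finest level

    V = FunctionSpace(mesh, "CG", 1)
    u = TrialFunction(V)
    v = TestFunction(V)
    a = inner(grad(u), grad(v))*dx
    L = Constant(1.0)*v*dx
    bc = DirichletBC(V, 0, "on_boundary")

    uh = Function(V)
    solve(a == L, uh, bcs=bc,
          solver_parameters={"ksp_type": "cg", "pc_type": "mg"})

This gets run under MPI in the usual way (mpiexec -n <ncores> python <script>), with the number of cores scaled alongside the number of cells as described above.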
Thanks,
Eike
--
Dr Eike Hermann Mueller
Research Associate (PostDoc)
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 5803
e.mueller@bath.ac.uk
http://people.bath.ac.uk/em459/
 
Dear all,
I rebuilt 'my' PETSc with debugging enabled, and then also rebuilt petsc4py and firedrake. I could run on 48 cores with 7 levels once, but this was not reproducible: when I tried again, it crashed with a PETSc error.
Is there any chance that the branches below help? Do I need mlange/plex-distributed-overlap, or can I use PETSc master now?
Thanks a lot,
Eike
firedrake: multigrid-parallel
pyop2: local-par_loop
petsc4py: bitbucket.org/mapdes/petsc4py branch moar-plex
petsc: mlange/plex-distributed-overlap
Functionality similar to the latter should hopefully arrive in petsc master this week.
Lawrence
-- 
Dr Eike Hermann Mueller
Research Associate (PostDoc)
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 5803
e.mueller@bath.ac.uk
http://people.bath.ac.uk/em459/