firedrake on ARCHER: PETSc issue
Dear firedrakers,

after installing PETSc and petsc4py in my own $WORK, I can run sequentially, but if I run on more than one core it hangs when importing firedrake. I traced this down to the call of PETSc._initialise(args, comm) in the method init() in petsc4py/build/lib.linux-x86_64-2.7/petsc4py/__init__.py, which just does not return. It does pick up my PETSc installation correctly (it prints out the path and arch in the ImportPETSc method in lib/__init__.py).

I built the PETSc branch mlange/plex-distributed-overlap (the same as in $FDRAKE_DIR) with the same configure options used there, and then built petsc4py with make.

Any ideas?

Thanks,

Eike

--
Dr Eike Hermann Mueller
Research Associate (PostDoc)
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 5803
e.mueller@bath.ac.uk
http://people.bath.ac.uk/em459/
On 11 Oct 2014, at 11:41, Eike Mueller <e.mueller@bath.ac.uk> wrote:
> Dear firedrakers,
>
> after installing PETSc and petsc4py in my own $WORK, I can run sequentially, but if I run on more than one core it hangs when importing firedrake. I traced this down to the call of PETSc._initialise(args, comm) in the method init() in petsc4py/build/lib.linux-x86_64-2.7/petsc4py/__init__.py, which just does not return. It does pick up my PETSc installation correctly (it prints out the path and arch in the ImportPETSc method in lib/__init__.py).
>
> I built the PETSc branch mlange/plex-distributed-overlap (the same as in $FDRAKE_DIR) with the same configure options used there, and then built petsc4py with make.
I think the problem, since I noticed the same thing yesterday, is that the installed module version of mpi4py was linked against an older version of the MPI library, different from the one you've just built PETSc against. My solution was to build my own version of mpi4py and push that to the front of PYTHONPATH.

Set up the build environment as for petsc/petsc4py, then:

$ git clone git@bitbucket.org:mpi4py/mpi4py.git
$ cd mpi4py
$ export CC=cc
$ export CXX=CC
$ python setup.py install --prefix=/somewhere/in/work

Then update PYTHONPATH appropriately.

Cheers,

Lawrence
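[Editor's note] A quick way to confirm that the updated PYTHONPATH really makes the freshly built copy win is to ask Python which installation it will import. A minimal sketch; it uses the stdlib module "json" as a stand-in, since mpi4py may not be installed everywhere, but on ARCHER you would pass "mpi4py":

```python
import importlib.util

# Ask Python which copy of a module it will import first from sys.path.
# "json" is a stand-in here; replace it with "mpi4py" to confirm that
# the build in $WORK shadows the centrally installed module.
spec = importlib.util.find_spec("json")
print(spec.origin)  # filesystem path of the copy that wins
```

If the printed path points at the system module tree rather than your prefix in $WORK, the PYTHONPATH ordering is wrong.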
Hi Lawrence,

thanks, that helps and it gets past the point where it froze before. However, now it complains that PETSc needs the chaco package. I had removed the --download-chaco option from the PETSc build; if I add it back in, I get:

===============================================================================
TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:146)
*******************************************************************************
UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
-------------------------------------------------------------------------------
You cannot use Chaco package from Sandia as it contains an incorrect ddot() routine that conflicts with BLAS
Use --download-chaco
*******************************************************************************

That is weird, since I did specify the --download-chaco option. PETSc's configure log is attached.

Cheers,

Eike

On 11 Oct 2014, at 12:08, Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk> wrote:
> On 11 Oct 2014, at 11:41, Eike Mueller <e.mueller@bath.ac.uk> wrote:
>> Dear firedrakers,
>>
>> after installing PETSc and petsc4py in my own $WORK, I can run sequentially, but if I run on more than one core it hangs when importing firedrake. I traced this down to the call of PETSc._initialise(args, comm) in the method init() in petsc4py/build/lib.linux-x86_64-2.7/petsc4py/__init__.py, which just does not return. It does pick up my PETSc installation correctly (it prints out the path and arch in the ImportPETSc method in lib/__init__.py).
>>
>> I built the PETSc branch mlange/plex-distributed-overlap (the same as in $FDRAKE_DIR) with the same configure options used there, and then built petsc4py with make.
> I think the problem, since I noticed the same thing yesterday, is that the installed module version of mpi4py was linked against an older version of the MPI library, different from the one you've just built PETSc against. My solution was to build my own version of mpi4py and push that to the front of PYTHONPATH.
>
> Set up the build environment as for petsc/petsc4py, then:
>
> $ git clone git@bitbucket.org:mpi4py/mpi4py.git
> $ cd mpi4py
> $ export CC=cc
> $ export CXX=CC
> $ python setup.py install --prefix=/somewhere/in/work
>
> Then update PYTHONPATH appropriately.
> Cheers,
> Lawrence
>
> _______________________________________________
> firedrake mailing list
> firedrake@imperial.ac.uk
> https://mailman.ic.ac.uk/mailman/listinfo/firedrake
On 11 Oct 2014, at 12:42, Eike Mueller <e.mueller@bath.ac.uk> wrote:
> Hi Lawrence,
>
> thanks, that helps and it gets past the point where it froze before. However, now it complains that PETSc needs the chaco package. I had removed the --download-chaco option from the PETSc build; if I add it back in, I get:
>
> ===============================================================================
> TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:146)
> *******************************************************************************
> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
> -------------------------------------------------------------------------------
> You cannot use Chaco package from Sandia as it contains an incorrect ddot() routine that conflicts with BLAS
> Use --download-chaco
> *******************************************************************************
>
> That is weird, since I did specify the --download-chaco option.
FWIW, this is fixed as of yesterday in PETSc master (I reported the issue then). But not in the branch you're building.

Cheers,

Lawrence
On 11/10/14 13:10, Lawrence Mitchell wrote:
> On 11 Oct 2014, at 12:42, Eike Mueller <e.mueller@bath.ac.uk> wrote:
>> Hi Lawrence,
>>
>> thanks, that helps and it gets past the point where it froze before. However, now it complains that PETSc needs the chaco package. I had removed the --download-chaco option from the PETSc build; if I add it back in, I get:
>>
>> ===============================================================================
>> TESTING: check from config.libraries(config/BuildSystem/config/libraries.py:146)
>> *******************************************************************************
>> UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
>> -------------------------------------------------------------------------------
>> You cannot use Chaco package from Sandia as it contains an incorrect ddot() routine that conflicts with BLAS
>> Use --download-chaco
>> *******************************************************************************
>>
>> That is weird, since I did specify the --download-chaco option.
> FWIW, this is fixed as of yesterday in PETSc master (I reported the issue then). But not in the branch you're building.
In the short term you can just comment out the offending config test:

diff --git a/config/BuildSystem/config/packages/Chaco.py b/config/BuildSystem/config/packages/Chaco.py
index 3667eaf..d059901 100644
--- a/config/BuildSystem/config/packages/Chaco.py
+++ b/config/BuildSystem/config/packages/Chaco.py
@@ -42,6 +42,6 @@ class Configure(config.package.Package):
   def configureLibrary(self):
     config.package.Package.configureLibrary(self)
-    if self.dfunctions.check('ddot_',self.lib):
-      raise RuntimeError('You cannot use Chaco package from Sandia as it contains an incorrect ddot() routine that conflicts with BLAS\nUse --download-chaco')
+    # if self.dfunctions.check('ddot_',self.lib):
+    #   raise RuntimeError('You cannot use Chaco package from Sandia as it contains an incorrect ddot() routine that conflicts with BLAS\nUse --download-chaco')

Lawrence, do you already have access to the fdrake package account?

Florian
> Cheers,
>
> Lawrence
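[Editor's note] For background on the test being commented out above: PETSc's configure check is essentially a symbol probe, asking whether the built Chaco library exports a BLAS-style ddot_ symbol. A minimal sketch of that kind of probe, using symbols already loaded into the running process (strlen from libc) as a stand-in, since libchaco only exists inside a PETSc build tree:

```python
import ctypes

def has_symbol(lib, symbol):
    """Return True if the loaded library object exports `symbol`.

    ctypes looks the symbol up lazily on attribute access, so a failed
    lookup surfaces as AttributeError -- the same yes/no answer a
    configure-style check is after.
    """
    try:
        getattr(lib, symbol)
        return True
    except AttributeError:
        return False

# CDLL(None) gives a handle to all globally visible symbols of the
# process (POSIX only): libc and friends, but no BLAS.
process = ctypes.CDLL(None)
print(has_symbol(process, "strlen"))  # True: libc exports strlen
print(has_symbol(process, "ddot_"))   # False: no stray BLAS ddot_ here
```

A probe like this fires the PETSc error when a Chaco build ships its own (broken) ddot_; the patch above simply skips the probe for the --download-chaco case the branch mishandles.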
participants (3)

- Eike Mueller
- Florian Rathgeber
- Lawrence Mitchell