Re: [firedrake] Troubles when trying to update firedrake
I found a workaround. The problem seems to be known in the preCICE community (https://github.com/precice/precice/pull/299): importing MPI from mpi4py before firedrake seems to fix the problem. Does this inspire a more generic solution?

Python 3.7.0 (default, Oct 15 2020, 11:45:50) [GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from mpi4py import MPI
>>> import firedrake
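A minimal sketch of what a more generic fix could look like (the module name firedrake_mpi.py and its contents are assumptions for illustration, not something proposed in this thread): a thin wrapper that imports mpi4py.MPI for its side effect of initialising MPI before firedrake is loaded, so user scripts only need a single import.

# firedrake_mpi.py -- hypothetical wrapper module, illustration only
from mpi4py import MPI   # noqa: F401  imported first so MPI_Init_thread runs before firedrake/PETSc load
from firedrake import *  # noqa: F401,F403  re-export firedrake's public API

# A user script would then start with:
#   from firedrake_mpi import *
# instead of:
#   from firedrake import *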
On Thu 15 Oct 2020 at 13:39, Karin&NiKo <niko.karin@gmail.com> wrote:
unfortunately not...
On Thu 15 Oct 2020 at 13:17, Lawrence Mitchell <wence@gmx.li> wrote:
On 15 Oct 2020, at 12:15, Karin&NiKo <niko.karin@gmail.com> wrote:
Dear David, I installed Python 3.7.0 using the Archer script (slightly modified) and the installation procedure succeeded. Nevertheless, when trying to import firedrake, I get the following error:

Python 3.7.0 (default, Oct 15 2020, 11:45:50) [GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import firedrake
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):
setting topology failed --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):
orte_ess_init failed --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer):
ompi_mpi_init: ompi_rte_init failed --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[machineName:31007] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
It clearly comes from Open MPI, but I do not see why it happens, since I can run other MPI programs (like "mpirun -n 2 ls").
Does
mpirun -n 2 python -c "import firedrake"
work?
On some supercomputers you always have to launch MPI-enabled programs via mpirun.
Lawrence
Hi again, were you aware of such problems with Open MPI? Do you have in mind a more robust solution than this dirty patch? Thanks for your help, Nicolas
On 16 Oct 2020, at 13:00, Karin&NiKo <niko.karin@gmail.com> wrote:
Hi again, were you aware of such problems with Open MPI? Do you have in mind a more robust solution than this dirty patch? Thanks for your help, Nicolas
It seems strange that it is required (firedrake does import mpi4py like that, but perhaps in the wrong order?). Generally the impression seems to be that MPICH has fewer issues than Open MPI. But this is not a terrible hack (although I agree it is somewhat ugly). Thanks, Lawrence
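As a minimal illustration of the import-order point above (an assumption about the mechanism, not a description of firedrake's internals): with default settings, mpi4py initialises MPI as a side effect of importing mpi4py.MPI, which is why doing that import first means MPI is already up when firedrake and PETSc are loaded.

from mpi4py import MPI
print(MPI.Is_initialized())  # True: mpi4py calls MPI_Init_thread when mpi4py.MPI is imported
import firedrake             # firedrake/PETSc now attach to the already-initialised MPI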
participants (2)
- Karin&NiKo
- Lawrence Mitchell