Dear all,

When trying to run any of my Firedrake codes, I get the MPI error below. This happens even if my script contains only the line "from firedrake import *". I have recently modified my PATH and suspect the error comes from there, as MPI worked previously. My current PATH is:

/Users/mmfg/firedrake/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Library/TeX/texbin

Any hint?

Thanks a lot,
Floriane

PMIx has detected a temporary directory name that results
in a path that is too long for the Unix domain socket:

    Temp dir: /var/folders/7h/wbj8xp7n3g5cfbr32ctcmwzcy3jf53/T/openmpi-sessions-1010350243@math-mc1096_0/63253

Try setting your TMPDIR environmental variable to point to
something shorter in length

[math-mc1096.local:00541] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 582
[math-mc1096.local:00541] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 166
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[math-mc1096.local:541] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Dear Floriane,

The error below tells you exactly what the problem is: the temporary directory path is too long. Try:

export TMPDIR=/tmp

Regards,
Anna.
Thanks Anna!

________________________________
From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Anna Kalogirou <A.Kalogirou@leeds.ac.uk>
Sent: Wednesday 3 May 2017 15:48:26
To: firedrake@imperial.ac.uk
Subject: Re: [firedrake] MPI error
Thanks Lawrence, indeed it works now! I didn't know how to shorten the path; thanks for your fast answer!

Floriane

________________________________
From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Floriane Gidel [RPG] <mmfg@leeds.ac.uk>
Sent: Wednesday 3 May 2017 16:10:46
To: firedrake@imperial.ac.uk
Subject: Re: [firedrake] MPI error
Dear Floriane,

On 03/05/17 15:44, Floriane Gidel [RPG] wrote:
PMIx has detected a temporary directory name that results
in a path that is too long for the Unix domain socket:
Temp dir: /var/folders/7h/wbj8xp7n3g5cfbr32ctcmwzcy3jf53/T/openmpi-sessions-1010350243@math-mc1096_0/63253
Try setting your TMPDIR environmental variable to point to
something shorter in length
If you do:

export TMPDIR=/tmp

do things start to work again? (This is an annoyance with macOS, whereby the default directory for temporary files is too long for UNIX domain sockets.) A newer version of Open MPI has fixed this, but I do not think it is yet available via Homebrew.

Lawrence
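The workaround above can be sanity-checked from a shell. This is a minimal sketch, not part of the original thread; it assumes the usual BSD/macOS limit of roughly 104 bytes for a UNIX domain socket path (the size of sun_path in sockaddr_un), which is why the long /var/folders/... temp dir breaks Open MPI:

```shell
# macOS puts per-user temp dirs under /var/folders/..., which can push
# Open MPI's session-directory socket path past the ~104-byte limit.
# Print the length of the current temp dir, then point TMPDIR at /tmp.
printf '%s' "${TMPDIR:-/tmp}" | wc -c
export TMPDIR=/tmp
echo "TMPDIR is now $TMPDIR"
```

Putting the export in your shell profile makes the fix persist across terminal sessions.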
participants (3)
- Anna Kalogirou
- Floriane Gidel [RPG]
- Lawrence Mitchell