Dear all,


When I try to run any of my Firedrake scripts, I get the MPI error below. This happens even if the script contains only the line "from firedrake import *". I recently modified my PATH and I suspect the error comes from that change, since MPI worked fine before. My current PATH is:


/Users/mmfg/firedrake/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Library/TeX/texbin
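For reference, the message below suggests pointing TMPDIR at a shorter location. I assume (but have not verified) that the workaround would look something like the snippet below, with /tmp chosen purely as an example of a short, writable directory:

    import os
    # Point Open MPI at a shorter temp directory before it initialises;
    # using /tmp and doing this from Python rather than the shell are my assumptions.
    os.environ["TMPDIR"] = "/tmp"
    from firedrake import *

but I would rather understand whether my PATH change is the actual cause.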


Any hints?

Thanks a lot,

Floriane


PMIx has detected a temporary directory name that results
in a path that is too long for the Unix domain socket:

    Temp dir: /var/folders/7h/wbj8xp7n3g5cfbr32ctcmwzcy3jf53/T/openmpi-sessions-1010350243@math-mc1096_0/63253

Try setting your TMPDIR environmental variable to point to
something shorter in length

[math-mc1096.local:00541] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 582
[math-mc1096.local:00541] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 166
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[math-mc1096.local:541] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!