OK, I fixed it. It is a problem with Open MPI on macOS Sierra (see https://www.open-mpi.org/faq/?category=osx).
It was fixed by setting a shorter path for the Open MPI temporary directory:
export TMPDIR=/tmp
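In case it is useful to anyone else, a minimal sketch of making the setting persist across shells (this assumes a bash login shell; adjust the startup file name for other shells):

# Assumption: bash login shell, so ~/.bash_profile is read on startup.
# Append the shorter TMPDIR so every new shell (and anything launched from
# it, e.g. mpirun) picks it up, then reload the current shell.
echo 'export TMPDIR=/tmp' >> ~/.bash_profile
source ~/.bash_profile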
Thank you, David,
Your suggestion indeed solved the issue and the installation finished. However, when I test the installation I get the following output with no further information:
make alltest
Building extension modules
Linting firedrake codebase
Linting firedrake test suite
Linting firedrake scripts
Running all regression tests
make: *** [test] Error 1
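For what it is worth, a sketch of how one might get more detail than the bare "Error 1", assuming the default firedrake-install layout where the Firedrake source checkout ends up under the virtualenv at src/firedrake (that path is an assumption on my part):

# Assumption: the Firedrake virtualenv is activated and the source lives at
# $VIRTUAL_ENV/src/firedrake. Run the test suite directly with pytest in
# verbose mode, stopping at the first failure.
cd "$VIRTUAL_ENV/src/firedrake"
python -m pytest tests -v -x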
Also, when running the helmholtz.py example, I get something strange:
[UEADGKT4HGBGG7F:12658] [[29900,0],0] ORTE_ERROR_LOG: Bad parameter in file orted/pmix/pmix_server.c at line 262
[UEADGKT4HGBGG7F:12658] [[29900,0],0] ORTE_ERROR_LOG: Bad parameter in file ess_hnp_module.c at line 666
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
pmix server init failed
--> Returned value Bad parameter (-5) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
[UEADGKT4HGBGG7F:12651] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 575
[UEADGKT4HGBGG7F:12651] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 165
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):
orte_ess_init failed
--> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
ompi_mpi_init: ompi_rte_init failed
--> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[UEADGKT4HGBGG7F:12651] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Dear all,
I am installing Firedrake on my new laptop from scratch (running macOS Sierra).
The installation script runs and installs a virtualenv, up to the point where I get this:
Virtual env installed. Please run firedrake-install again.
But when I run
python firedrake-install
again, it just does the same thing (similar to the previous step) and does not install Firedrake (or PETSc, etc.). What should I do?
Best, Anna.