netCDF4 is in gusto/requirements.txt, so I think that means it is taken care of by Firedrake.
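For what it's worth, this can be checked from inside a Gusto checkout (the working directory is an assumption) with something like:
$ grep -i netcdf requirements.txt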
> They think that as long as we let the Firedrake Python virtual environment 'fire up' properly, by loading the relevant Python module before activating the virtual environment, the correctly activated virtual environment should then take care of the rest of the required software.
I'm not sure I can fully believe that. I mean, the Firedrake virtual environment is self-contained with respect to Python, so it should take care of all the Python dependencies. We also take care of PETSc and some other compiled libraries, but some compiled libraries are considered "system": for example, we rely on whatever compilers and MPI implementation the host system provides; we only install Cython and mpi4py ourselves in the Python virtualenv.
I don't know what the deal with NetCDF is; perhaps some Gusto people know the answer: is compiled NetCDF expected to be present as a pre-installed library, or does Gusto have an installer that takes care of that?
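One way to check which netCDF the venv actually picks up, assuming the venv is active and the usual netCDF4-python attribute names, would be something like:
$ python -c "import netCDF4; print(netCDF4.__file__)"
$ python -c "import netCDF4; print(netCDF4.__netcdf4libversion__)"
The first line shows where the Python binding lives; the second reports the version of the compiled netCDF library it was built against.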
Hi,
So I discussed this with the Isca support team.
They think that as long as we let the Firedrake Python virtual environment 'fire up' properly, by loading the relevant Python module before activating the virtual environment, the correctly activated virtual environment should then take care of the rest of the required software. The exception is that we also need to load an OpenMPI module, as in the submission script. So they don't think we need to load a netCDF4 module - they think this should be sorted out by the Firedrake virtual environment.
To answer the other question more completely: we found that we had to pass the environment variables to mpirun because otherwise the spawned MPI processes were missing the virtual environment and used the system Python instead.
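A quick way to see what the spawned ranks actually pick up, assuming OpenMPI's mpirun and the same -x syntax as in the submission script below, is something along the lines of:
$ mpirun -np 2 -x PATH -x VIRTUAL_ENV sh -c 'echo "$(hostname): $(which python)"'
which should print the interpreter each rank resolves; if that is the system Python rather than the venv one, the environment is not being propagated.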
> Presumably we need to load all modules required by the Firedrake virtual environment prior to activating the virtual environment. I guess this is just a quirk of using clusters.
I believe that is correct, yes.
I did try loading that netCDF4 module (in my submission script) prior to activating the Firedrake virtual environment; however, it then throws a version error at me.
I have checked the list of available modules on the cluster and it does not include the netCDF4 version installed by the Firedrake installer (as below), so possibly this is the problem.
Presumably we need to load all modules required by the Firedrake virtual environment prior to activating the virtual environment. I guess this is just a quirk of using clusters.
I am currently checking your other question - previously we found that the code only worked when we explicitly passed those environment variables to mpirun.
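To compare the versions, something like the following should show what the cluster provides versus what the Firedrake venv installed, assuming the standard module command and the venv path used in the submission script below:
$ module avail netCDF
$ . ~/firedrake-20171026/bin/activate
$ pip show netCDF4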
Wait, what do you need the PYTHONPATH for? And why don't you load netCDF?
Yes, if you do that then python does run from within the Firedrake virtual environment... but this is essentially what we do when we submit a job to the scheduler:
#!/bin/sh
#PBS -d .
#PBS -q ptq
#PBS -l walltime=00:05:00
#PBS -A Research_Project-183035
#PBS -l nodes=1:ppn=16
#PBS -m e -M p.burns2@exeter.ac.uk
echo '<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>'
echo PBS_O_HOST = $PBS_O_HOST
echo PBS_ENVIRONMENT = $PBS_ENVIRONMENT
echo PBS_NODEFILE = $PBS_NODEFILE
echo '<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>'
module purge
#module load OpenMPI/2.0.0-GCC-4.8.5-torque
#module load Python/2.7.11-foss-2016a
module load OpenMPI/1.10.2-GCC-4.9.3-2.25
module load Python/3.5.1-foss-2016a
#module load netCDF/4.4.0-foss-2016a
. ~/firedrake-20171026/bin/activate
source ~/.bash_profile
#exec_address="./examples/boussinesq_2d_lab.py"
exec_address="./examples/sk_nonlinear.py"
echo -e 'Running Firedrake...'
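# OpenMPI's -x exports each named environment variable from this submission shell to the launched MPI processes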
mpirun \
-x FIREDRAKE_TSFC_KERNEL_CACHE_DIR \
-x PYOP2_CACHE_DIR \
-x LIBRARY_PATH \
-x LD_LIBRARY_PATH \
-x CPATH \
-x VIRTUAL_ENV \
-x PATH \
-x PKG_CONFIG_PATH \
-x PYTHONHOME \
-x PYTHONPATH \
-np 16 python $exec_address > log
So maybe the problem is that the python path is incorrect...
P
OK, now what if you load that Python module, and then activate the Firedrake venv, and then run
$ python
?
Confirmed - python does not run from within the virtual environment.
Note that for your second test to make any sense, you would first need to load a Python module. So I did:
module load Python/3.5.1-foss-2016a
(foobar) [pb412@login02 ~]$ python
Python 3.5.1 (default, Jul 24 2017, 16:03:47)
[GCC 4.9.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
You don't even have a running Python in your venv! To confirm, can you just run
$ python
while the Firedrake venv is still _active_?
And outside the Firedrake venv, try this:
$ python3 -mvenv foobar
$ . foobar/bin/activate
$ python
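In either case, it is also worth checking which interpreter actually resolves; assuming a POSIX shell:
$ which python
$ python -c "import sys; print(sys.executable, sys.prefix)"
sys.prefix should point into the venv directory when a venv is genuinely active.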
This is what happens when I execute the first line (from within the virtual environment):
(firedrake-20171026) [pb412@login02 ~]$ pip uninstall netCDF4
/gpfs/ts0/home/pb412/firedrake-20171026/bin/python: error while loading shared libraries: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory
Paul
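For reference, an error of the form "error while loading shared libraries: libpython3.5m.so.1.0" usually means the shared libpython that the venv's interpreter was linked against is not on LD_LIBRARY_PATH. On a module-based cluster, one thing to try - an assumption rather than a confirmed fix - is to load the matching Python module before activating the venv:
$ module load Python/3.5.1-foss-2016a
$ . ~/firedrake-20171026/bin/activate
$ python -c "import sys; print(sys.prefix)"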
What if you try to reinstall netCDF4 from source inside the virtualenv? That is,
$ pip uninstall netCDF4
$ pip uninstall netCDF4
$ pip uninstall netCDF4
$ pip install --no-binary netCDF4 netCDF4
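If the source build goes through, a quick sanity check that the rebuilt binding imports, with the venv still active, would be:
$ python -c "import netCDF4; print(netCDF4.__version__)"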
They are indeed syntax errors: that is what you get from running Python 2 code under Python 3.
So the first important error message I can see in the log is:
-----------------------------------------------------------------------------------------------------
checking for library 'lmpe' ...
/gpfs/ts0/shared/software/OpenMPI/1.10.2-GCC-4.9.3-2.25/bin/mpicc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -O3 -march=native -fPIC -fPIC -c _configtest.c -o _configtest.o
/gpfs/ts0/shared/software/OpenMPI/1.10.2-GCC-4.9.3-2.25/bin/mpicc _configtest.o -llmpe -o _configtest
/gpfs/ts0/shared/software/binutils/2.25-GCCcore-4.9.3/bin/ld.gold: error: cannot find -llmpe
collect2: error: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o
building 'mpe' dylib library
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/src
creating build/temp.linux-x86_64-3.5/src/lib-pmpi
/gpfs/ts0/shared/software/OpenMPI/1.10.2-GCC-4.9.3-2.25/bin/mpicc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -O3 -march=native -fPIC -fPIC -c src/lib-pmpi/mpe.c -o build/temp.linux-x86_64-3.5/src/lib-pmpi/mpe.o
creating build/lib.linux-x86_64-3.5/mpi4py/lib-pmpi
...
To me it looks like the installer managed to work around the error, but I am not certain.
Then further down the log there are a number of syntax-type errors. Here are the first few:
-----------------------------------------------------------------------------------------------------
Removing source in /tmp/pip-build-b2dkectq/wrapt
*** Error compiling '/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/all.py'...
File "/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/all.py", line 7
def func(): print 'yo'
^
SyntaxError: invalid syntax
*** Error compiling '/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/invalid_encoding.py'...
File "/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/invalid_encoding.py", line 0
SyntaxError: unknown encoding: lala
*** Error compiling '/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/module.py'...
File "/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/module.py", line 32
except ValueError, ex:
^
SyntaxError: invalid syntax
*** Error compiling '/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/module2.py'...
File "/tmp/pip-build-b2dkectq/astroid/astroid/tests/testdata/python2/data/module2.py", line 78
exec 'c = 3'
^
SyntaxError: Missing parentheses in call to 'exec'
P
OK, great.
A quick Google of the import error suggests this could be either a Cython issue or a Python version issue.
I wonder if it would be better to start from the errors that the usual Firedrake install log gives?
[I'm at the end of my expertise on such matters now, by the way! What do others think?]
Jemma
By passing the flag '--install gusto' to the installer...
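That is, something along the lines of the following, where the script name and exact syntax are assumptions:
$ python3 firedrake-install --install gusto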
P
Hi Paul,
How did you install Gusto?
Jemma