This works on my smaller university cluster, but I wonder whether there is a better directory for this on a system like Edison at NERSC (http://www.nersc.gov/users/computational-systems/edison/).
Also, from what I read, Edison's SLURM scheduler loads the executable onto the allocated compute nodes from the current working directory, which can apparently be quite slow; they recommend something like:
srun --bcast=/tmp/$SLURM_JOB_ID --compress=lz4 ...
when 2000 or more nodes are requested. But even on jobs that need no more than a single compute node (24 cores), the firedrake/python modules seem to load very slowly.
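For context, a minimal sketch of how the `--bcast` staging might look inside a batch script. The script name and resource settings are hypothetical placeholders, not taken from any NERSC documentation; only the `srun` flags themselves come from the recommendation above:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Hypothetical job script: --bcast copies the launched executable to a
# node-local path (here under /tmp, keyed by job ID) before execution,
# so each task reads it from local storage rather than the shared
# filesystem; --compress=lz4 compresses the transfer.
srun --bcast=/tmp/${SLURM_JOB_ID} --compress=lz4 ./my_firedrake_script.py
```

Note that `--bcast` only stages the single executable passed to `srun`; it does not help with the many shared-library and Python-module files that an import of firedrake touches, which may be why single-node startup is still slow.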
Any help or thoughts appreciated.