David,
The error is still there even with the firedrake-install from install-cython-workaround. For some strange reason I cannot get the MPI library to link against Cython when installing via pip install ..., but when I compile Cython 0.23 from source (./configure --prefix=/home/jchang23/.local && make && make install) it links properly. I suspect this has more to do with how my local cluster's path dependencies behave.
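For what it's worth, a quick sanity check for which Cython the Python in question actually picks up is something like:

    which cython
    cython --version
    python -c "import Cython; print(Cython.__version__)"

After the source build these all point into ~/.local for me.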
But now I get another error, shown in the attached firedrake-install.log. I am guessing this has to do with not having an HDF5 library available within my custom-installed Python?
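If so, would pointing the h5py build at an existing HDF5 install be the right fix? Something like the following, where the prefix is just a placeholder for wherever HDF5 lives on the cluster:

    export HDF5_DIR=/path/to/hdf5
    pip install h5py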
Also, I note from inspection that apt-get normally installs the BLAS/LAPACK libraries for you, so I added the flag '--download-fblaslapack' to the PETSc configure options. Is that the right thing to do, or should I somehow install BLAS/LAPACK as a whole so that all packages depend on it?
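Concretely, what I set before running the install script was along these lines:

    export PETSC_CONFIGURE_OPTIONS="--download-fblaslapack"
    python firedrake-install --user --no_package_manager --disable_ssh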
Finally, I got firedrake to install properly when I loaded the Intel MPI-compiled Python libraries. They happened to have all of the most up-to-date dependencies, and firedrake installed with 'python firedrake-install --user --no_package_manager --disable_ssh'. When I run my programs on the head node (and avoid getting yelled at), the attached RT0 code runs fine, but when I submit a batch script as shown in runCompare, I get the output shown in output_error/test. Can you guys dissect what's going on, or would this be more of an issue to discuss with the system admins?
Thanks,
Justin
On Fri, Sep 4, 2015 at 4:14 AM, David Ham <David.Ham@imperial.ac.uk> wrote:
Hi Justin,
The prospective fix is in install-cython-workaround. Can you try that and tell me if it works for you? It will be merged to master as soon as Travis gives the green light.
Cheers,
David
On Fri, 4 Sep 2015 at 09:42 Ham, David A <david.ham@imperial.ac.uk> wrote:
Hi Justin,
Ah yes. I saw something like this when attempting to set up travis. The problem appears to be that the dependency analysis in pip is broken so it doesn't realise it has to build and install Cython before it builds h5py. I think the workaround is to force Cython installation in the install script before looking at the requirements file. I will push a branch presently and merge it as soon as it passes tests.
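In effect the workaround just forces the ordering by hand, i.e. the moral equivalent of:

    pip install Cython
    pip install -r requirements.txt

rather than trusting pip to work out that h5py's build needs Cython first.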
Cheers,
David
On Fri, 4 Sep 2015 at 08:17 Justin Chang <jychang48@gmail.com> wrote:
Hi David,

Thanks for the updated script. So I am using the system-provided openmpi/1.8.3 module but built my own Python 2.7.10 and pip from source. I am getting the error shown in the attached firedrake-install.log.
I also got a similar issue when I built my own OpenMPI library, although in that case I didn't have the Cython-related errors, only the ones related to hdf5/h5py.
Any idea what this is?
Thanks,
Justin
On Sep 3, 2015, at 8:54 AM, David Ham <David.Ham@imperial.ac.uk> wrote:
The branch is merged now, so I believe this is fixed.
David
On Thu, 3 Sep 2015 at 12:09 David Ham <David.Ham@imperial.ac.uk> wrote:
It turns out that this is for the usual reason: I am an idiot. A fix for this is in the further-install-fixes branch and I will merge the branch as soon as tests pass.
(What's actually going on is that I was abusing CalledProcessError when I should really raise a special exception for this case. I have now created a new exception and raise that instead.)
Cheers,
David
On Thu, 3 Sep 2015 at 01:56 Justin Chang <jychang48@gmail.com> wrote:
Thanks David,
I actually forgot about the --no_package_manager option. But when I tried installing with it, I get this error:
$ python firedrake-install --no_package_manager
Traceback (most recent call last):
  File "firedrake-install", line 265, in <module>
    raise subprocess.CalledProcessError
TypeError: __init__() takes at least 3 arguments (1 given)
Even if I add the --disable_ssh and --developer flags I get the above error. Am I supposed to do something else with this?
Thanks,
Justin
On Tue, Sep 1, 2015 at 3:57 AM, David Ham <David.Ham@imperial.ac.uk> wrote:
Thanks,
On Sun, 30 Aug 2015 at 08:15 Justin Chang <jychang48@gmail.com> wrote:
I agree that hypre and a sparse direct solver like mumps are necessary. The latter needs --download-metis --download-parmetis --download-scalapack.
Added and currently testing.
More comments:
4) If mumps is to be included, the install script has to ensure that
one has cmake >=2.5 when configuring metis, otherwise PETSc configure
will return an error. Perhaps include "brew install cmake" and "sudo
apt-get install cmake" in the script?
Also added. Thanks for catching this.
5) Should --download-exodusii (and its dependencies --download-netcdf --download-hdf5) also be included in the PETSC_CONFIGURE_OPTIONS? I imagine that if one wants to solve problems on large-scale unstructured grids, .exo files would be more practical than .msh files. Or is exodusii already accounted for in firedrake/PyOP2/etc.?
And this.
6) If I run the script on a pristine MacOSX, I get an error saying
"OSError: Unable to find SWIG installation. Please install SWIG
version 2.0.0 or higher." I am guessing "brew install swig" was
somehow missed in the firedrake-install script?
Ah yes. Fecking swig. I've added the dependency to the new branch. Hopefully the swig dependency is going away very soon.
7) Perhaps not as important, but do y'all think it's possible to make this script friendlier to non-Ubuntu HPC systems? For instance, either a) require the user to install his or her own local OpenMPI and Python libraries from source, b) let the script download the tarballs/git repositories remotely and install them for you (kind of like how PETSc handles external packages through --download-<package>), or c) enable the user to point to the system-provided OpenMPI and Python libraries. The same would need to be done for CMake, SWIG, and PCRE, as these packages cannot simply be obtained from pip.
I think (c) is the case right now. If you run the script on a non-Ubuntu system or pass the --no_package_manager option, then it will just assume that you have installed the dependencies somewhere visible and will get on with the pip installs.
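So on a typical cluster the intent is that something along these lines works (the module names here are purely illustrative; use whatever your site provides):

    module load openmpi cmake swig
    python firedrake-install --user --no_package_manager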
I think (b) is [a] hard and [b], in the case of MPI, probably a bad idea. Supercomputers are such a diverse bunch of hacked-up distributions that installing the compiled dependencies automatically in a portable way would require a lot of install-script work. In the case of MPI, you really need to use the MPI build which talks to the fast interconnect on your supercomputer, so automatically building vanilla OpenMPI is a Bad Thing.
Cheers,
David
Thanks,
Justin
On Fri, Aug 28, 2015 at 7:20 AM, Lawrence Mitchell
<lawrence.mitchell@imperial.ac.uk> wrote:
>
>> On 28 Aug 2015, at 14:01, David Ham <David.Ham@imperial.ac.uk> wrote:
>>
>> Hi All,
>>
>> I have a branch going through testing now which will handle PETSC_CONFIGURE_OPTIONS in a smarter way. Basically we'll just make sure that the things which are required are there, and we'll also honour anything the user has already set. I've also updated docs to say that that's what happens.
>>
>> I am also open to the suggestion that we could add more configuration options to the default set. Would anyone like to suggest what they should be?
>
> I think --download-hypre=1 at least. Maybe also a sparse direct solver (mumps?) which I think needs --download-metis --download-parmetis --download-mumps (maybe some others?)
>
> Lawrence
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake