I agree that hypre and a sparse direct solver like mumps are necessary.
The latter needs --download-metis --download-parmetis
--download-scalapack.
Added and currently testing.
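For the record, the flag set under test looks roughly like this (a sketch only; the exact list may still change before the branch lands):

    export PETSC_CONFIGURE_OPTIONS="--download-hypre --download-mumps \
        --download-scalapack --download-metis --download-parmetis"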
More comments:
4) If mumps is to be included, the install script has to ensure that
one has cmake >=2.5 when configuring metis, otherwise PETSc configure
will return an error. Perhaps include "brew install cmake" and "sudo
apt-get install cmake" in the script?
Also added. Thanks for catching this.
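The check boils down to something like the following (a sketch of the idea, not the literal code in the branch):

    # Install CMake via the platform package manager if it isn't already
    # present; the METIS configure stage needs it.
    if ! command -v cmake > /dev/null; then
        if [ "$(uname)" = "Darwin" ]; then
            brew install cmake
        else
            sudo apt-get install -y cmake
        fi
    fi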
5) Should --download-exodusii (and its dependencies --download-netcdf
--download-hdf5) also be included in the PETSC_CONFIGURE_OPTIONS? I
imagine that if one wants to solve problems on large-scale
unstructured grids, .exo files would be more practical than .msh
files. Or is exodusii already accounted for in firedrake/PyOP2/etc?
And this.
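If those do go in by default, it is just a matter of extending the same variable, roughly:

    export PETSC_CONFIGURE_OPTIONS="$PETSC_CONFIGURE_OPTIONS \
        --download-exodusii --download-netcdf --download-hdf5"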
6) If I run the script on a pristine MacOSX, I get an error saying
"OSError: Unable to find SWIG installation. Please install SWIG
version 2.0.0 or higher." I am guessing "brew install swig" was
somehow missed in the firedrake-install script?
Ah yes. Fecking swig. I've added the dependency to the new branch. Hopefully the swig dependency is going away very soon.
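For now the workaround is the obvious one (again a sketch, with the version check left out):

    # The error above asks for SWIG >= 2.0.0; install it if it's missing
    # (on Ubuntu, substitute "sudo apt-get install -y swig").
    command -v swig > /dev/null || brew install swig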
7) Perhaps not as important, but do y'all think it's possible to make
this script friendlier to non-Ubuntu HPC systems? For instance, either
a) require the user to install his or her own local OpenMPI and Python
libraries from source, b) let the script download the tarballs/git
repositories remotely and install them for you (kind of like how PETSc
handles external packages through --download-<package>), or c) enable
the user to point to the system-provided OpenMPI and Python libraries.
The same would need to be done for CMake, SWIG, and PCRE, as these
packages cannot simply be obtained from pip.
I think (c) is the case right now. If you run the script on a non-Ubuntu system, or pass the --no_package_manager option, then it will just assume that you have installed the dependencies somewhere visible and will get on with the pip installs.
I think (b) is both hard and, in the case of MPI, probably a bad idea. Supercomputers are such a diverse bunch of hacked-up distributions that installing the compiled dependencies automatically in a portable way would require a lot of install-script work. In the case of MPI, you really need to use the MPI build which talks to the fast interconnect on your supercomputer, so automatically building a vanilla OpenMPI is a Bad Thing.
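So on a typical cluster the (c) workflow would look roughly like this. The module names, the MPI path and the --with-mpi-dir option are purely illustrative (the latter is my suggestion for pointing PETSc at the system MPI, not something the script sets for you); the only thing the script itself needs is --no_package_manager:

    module load gcc openmpi python cmake swig
    export PETSC_CONFIGURE_OPTIONS="--with-mpi-dir=/path/to/system/openmpi"
    python firedrake-install --no_package_manager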
Cheers,
David
Thanks,
Justin
On Fri, Aug 28, 2015 at 7:20 AM, Lawrence Mitchell
<lawrence.mitchell@imperial.ac.uk> wrote:
>
>> On 28 Aug 2015, at 14:01, David Ham <David.Ham@imperial.ac.uk> wrote:
>>
>> Hi All,
>>
>> I have a branch going through testing now which will handle PETSC_CONFIGURE_OPTIONS in a smarter way. Basically we'll just make sure that the things which are required are there, and we'll also honour anything the user has already set. I've also updated docs to say that that's what happens.
>>
>> I am also open to the suggestion that we could add more configuration options to the default set. Would anyone like to suggest what they should be?
>
> I think --download-hypre=1 at least. Maybe also a sparse direct solver (mumps?) which I think needs --download-metis --download-parmetis --download-mumps (maybe some others?)
>
> Lawrence
>
>
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake