Hi Dave, all,


This is an excellent new feature!

I will be testing it soon.



Cheers,

Gian


Gianmarco Mengaldo

Department of Aeronautics and Mathematics

Imperial College London

SW7 2AZ, London, UK

*Currently at ECMWF


From: nektar-developers-request@sci.utah.edu <nektar-developers-request@sci.utah.edu> on behalf of David Moxey <d.moxey@imperial.ac.uk>
Sent: 15 August 2016 09:30:10
To: nektar-users; <nektar-developers@sci.utah.edu>
Subject: [NEKTAR-DEVELOPERS] HDF5 support
 
Dear all,

The latest master now contains a new parallel output format based on HDF5. We hope this will make our I/O more efficient at very large core counts.

The default format is still our directory-based structure. I'd appreciate it if you could test this out on your simulations and let me know (a) the speed difference in reading/writing and (b) any bugs you encounter.

- To enable HDF5, configure CMake with the NEKTAR_USE_HDF5 option, which also requires NEKTAR_USE_MPI. Note that most clusters have HDF5 modules available, which are usually detected by the CMake configuration (e.g. on ARCHER: module load cray-hdf5-parallel).
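  As a rough sketch, a configure line might look like the following (the module name is the ARCHER example above; module names and source paths will vary on other systems):

```shell
# Load a parallel HDF5 module if your cluster provides one (name varies by system).
module load cray-hdf5-parallel

# Configure from a build directory; both options are needed for HDF5 output.
cmake -DNEKTAR_USE_MPI=ON -DNEKTAR_USE_HDF5=ON /path/to/nektar++
```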

- To run a simulation using the new format:

    IncNavierStokesSolver --io-format Hdf5 session.xml

  or put <I PROPERTY="IOFormat" VALUE="Hdf5" /> into your <SOLVERINFO> section.
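  For reference, the SOLVERINFO property sits inside the CONDITIONS block of the session file; a minimal sketch, with the surrounding elements as in a standard Nektar++ session file:

```xml
<NEKTAR>
  <CONDITIONS>
    <SOLVERINFO>
      <I PROPERTY="IOFormat" VALUE="Hdf5" />
    </SOLVERINFO>
  </CONDITIONS>
</NEKTAR>
```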

- To convert from the XML-based format (either serial or parallel) to HDF5:

    FieldConvert in.fld out.fld:fld:format=Hdf5

  Note you don't need the session file for this.

- Or the other way around:

    FieldConvert in.fld out.fld:fld:format=Xml

Thanks also to Rupert Nash and Michael Bareford at EPCC for helping get this done as part of our eCSE funding.

Cheers,

Dave


--
David Moxey (Research and Teaching Fellow)
d.moxey@imperial.ac.uk | www.imperial.ac.uk/people/d.moxey

Room 364, Department of Aeronautics,
Imperial College London,
London, SW7 2AZ, UK.