Hi everybody,

As a follow-up question: which output formats do you prefer or recommend to minimize the disk space used? Although I haven't checked systematically, my impression is that .fld files take more space than .plt and .csv files. Do you have a go-to output file format? What other considerations guide your choice of output format (mine, for example, are that the files should be readable by ParaView, or by Python with simple modules such as pandas)?

Best,
Ilteber

--
İlteber R. Özdemir



On Thu, 11 May 2023 at 22:05, Isaac Rosin <isaac.rosin1@ucalgary.ca> wrote:
Hi Mohsen,

I followed your advice and it appears to be working now. Thank you very much!

Best,
Isaac

From: Mohsen Lahooti <Mohsen.Lahooti@newcastle.ac.uk>
Sent: Tuesday, May 9, 2023 4:30 AM
To: Isaac Rosin <isaac.rosin1@ucalgary.ca>; nektar-users@imperial.ac.uk <nektar-users@imperial.ac.uk>
Subject: Re: Reducing File Count of .fld Outputs in-situ
 
Hi Isaac,

 

I don't use the FieldConvert filter directly in the session file, so I have no experience with that. My usual practice is simply to output the fields during the run and process them later, which I find much easier. I set IO_CheckSteps in the PARAMETERS section to write the .chk files during the simulation.
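For example, writing a checkpoint every 100 steps (the value is just an illustration) looks like this in the session file:

      <PARAMETERS>
            <P> IO_CheckSteps = 100 </P>
      </PARAMETERS>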

 

To reduce the file count you can use the HDF5 format: it encapsulates all the partitions in a single file, so you will have one file per output.

To use HDF5, the HDF5 support must first be enabled when you build the code. You can either use the HDF5 library installed on the HPC system or build it as a third-party dependency.
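A rough sketch of the configure step (NEKTAR_USE_HDF5 and THIRDPARTY_BUILD_HDF5 are the CMake switch names as I remember them; check with ccmake if your version differs):

      cd build
      cmake -DNEKTAR_USE_HDF5=ON -DTHIRDPARTY_BUILD_HDF5=ON ..
      make -j 8 install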

When running the code you can either set IOFormat = Hdf5 in the SOLVERINFO section of your session file or, more conveniently, just pass "-i Hdf5" as a command-line argument, e.g.

 

mpirun -np 10000 $dir/IncNavierStokesSolver  mesh.xml session.xml -i Hdf5 &> runlog

 

or

 

mpirun -np 10000 $dir/IncNavierStokesSolver  mesh.xml session.xml --io-format Hdf5 &> runlog

 

If you specify the IO format in the session file, you don't need to pass the command-line argument.
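For reference, the corresponding SOLVERINFO entry in the session file looks like:

      <SOLVERINFO>
            <I PROPERTY="IOFormat" VALUE="Hdf5" />
      </SOLVERINFO>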

 

Hope this helped.

 

Cheers,

Mohsen

 

From: nektar-users-bounces@imperial.ac.uk <nektar-users-bounces@imperial.ac.uk> on behalf of Isaac Rosin <isaac.rosin1@ucalgary.ca>
Date: Tuesday, 9 May 2023 at 11:03
To: nektar-users@imperial.ac.uk <nektar-users@imperial.ac.uk>
Subject: [Nektar-users] Reducing File Count of .fld Outputs in-situ


 

Dear Nektar++,

 

I am using the FieldConvert filter in my session file to output vorticity. I then isolate regions of my domain using the range option (-r) of FieldConvert in post-processing. To enable the isolation of these regions, I have found that I should produce .fld files with my filter as opposed to .plt, .dat, etc. Here is my filter:

 

      <FILTER TYPE="FieldConvert">
            <PARAM NAME="OutputFile">Fields.fld</PARAM>
            <PARAM NAME="OutputFrequency">1</PARAM>
            <PARAM NAME="Modules">vorticity</PARAM>
      </FILTER>
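For reference, the post-processing range extraction mentioned above looks roughly like this (the bounds and file names are placeholders; add zmin,zmax for a 3D domain):

      FieldConvert -f -r -1,1,-1,1 session.xml Fields_10_fc.fld Fields_10_region.fld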

 

When running in parallel, this filter outputs .fld directories containing one .fld file per processor. This is where the issue arises: I am running simulations on a cluster with many processors, and I need to save a large number of time steps. My cluster has a one-million-file limit, which I will quickly hit unless I consolidate each .fld directory into a single .fld file in situ. I know the command to do this in post-processing:

 

FieldConvert -f session.xml Fields_#_fc.fld/ Fields_#_fc.fld

 

but so far I have not been successful in implementing this in situ.
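As a post-processing stopgap, a shell loop over the per-step directories can do the consolidation in one pass. This is only a sketch and assumes the Fields_*_fc.fld naming pattern produced by the filter above:

      # Merge each per-rank .fld directory into a single .fld file of the same
      # name; the directory is removed only if the conversion succeeds.
      for d in Fields_*_fc.fld; do
          [ -d "$d" ] || continue
          FieldConvert -f session.xml "$d" "${d%.fld}_merged.fld" \
              && rm -r "$d" \
              && mv "${d%.fld}_merged.fld" "$d"
      done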

 

I would be very grateful if you could offer a solution or at least steer me in the right direction with this issue. Thank you very much in advance, and I look forward to hearing from you.

 

Sincerely,

Isaac

_______________________________________________
Nektar-users mailing list
Nektar-users@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/nektar-users