Hi Syavash, 

As Alexandra said, if you can provide the output of the solver when it crashes, that would be extremely helpful for diagnosing the issue. 

In terms of the mesh size - these are small meshes relative to some of the meshes we have run internally. Can you provide any details on how you generated the mesh? 
Additionally, the mesh file plus the session file and a description of your case would be useful, as they will help us diagnose the issue. 
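In the meantime, one workflow that often helps with larger meshes is to pre-partition the mesh once in serial, so that each MPI rank only reads its own partition at start-up rather than every process parsing the full XML file. This is only a sketch - the --part-only option and the file names below are taken from your command line and the user guide, so please check the options available on your installation (e.g. via IncNavierStokesSolver --help):

```shell
# Pre-partition the mesh into 128 pieces in serial; this writes a
# directory of per-rank partitions and exits without running the solve.
IncNavierStokesSolver --part-only 128 condition.xml grid.xml

# Launch the parallel run; each rank now loads only its own partition,
# which reduces the memory and I/O load at start-up.
mpirun -np 128 IncNavierStokesSolver condition.xml grid.xml
```
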

Kind regards, 
James Slaughter. 

From: nektar-users-bounces@imperial.ac.uk <nektar-users-bounces@imperial.ac.uk> on behalf of Ehsan Asgari <eh.asgari@gmail.com>
Sent: 08 March 2023 17:19
To: nektar-users <nektar-users@imperial.ac.uk>
Subject: [Nektar-users] Standard way of handling large mesh in parallel run
 


 

Hi Everyone

I've been trying to import a mesh generated in a third-party package and run it in the incompressible solver. First, I tried a smaller mesh with a size of 10MB on 32 cores on an HPC, with 3 GB of RAM specified per core. 
That worked, so I proceeded to the main, larger mesh with a size of 75MB on 128 cores. However, I have found it quite challenging to get the simulation running, as I often get MPI-related errors at the very beginning.
I was wondering if this is the standard way of handling large mesh files in parallel using mpirun:
mpirun -np 128 IncNavierStokesSolver condition.xml grid.xml

I am new to Nektar++, so my question might seem trivial.

Kind regards
syavash