Hi Syavash,

Is the mesh you are using converted to a format compatible with Nektar++?

The command should be fine; however, I would recommend giving the mesh first and then the conditions, so the command should look like:

    mpirun -np 128 IncNavierStokesSolver grid.xml condition.xml

Another good practice when dealing with large meshes is to build Nektar++ with HDF5 support and run your simulations using that format for exporting the fields (a short sketch of the full sequence is given below, after the quoted message).

I hope this helps! If you are running into specific errors, feel free to share the error message and the output of the solver so that we can help you further.

Kind regards,
Alexandra

From: nektar-users-bounces@imperial.ac.uk <nektar-users-bounces@imperial.ac.uk> On Behalf Of Ehsan Asgari
Sent: 08 March 2023 17:19
To: Nektar-users@imperial.ac.uk
Subject: [Nektar-users] Standard way of handling large mesh in parallel run

Hi Everyone,

I have been trying to import a mesh generated in a third-party package and run it in the incompressible solver. First, I tried a smaller mesh (10 MB) on 32 cores on an HPC system with 3 GB of RAM per core. That worked, and I proceeded to the main, larger mesh (75 MB) on 128 cores. However, I have found it quite challenging to get a simulation running, as I often get MPI-related errors at the very beginning.

I was wondering whether this is the standard way of handling large mesh files in parallel using mpirun:

    mpirun -np 128 IncNavierStokesSolver condition.xml grid.xml

I am new to Nektar++, so my question might seem trivial.

Kind regards,
syavash
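
A minimal sketch of the full sequence, as promised above. I am assuming the mesh comes from Gmsh in .msh format (adjust to whatever your package actually exports); the flag and option names are the ones documented in the Nektar++ user guide and may differ slightly between versions:

    # Configure and build Nektar++ with HDF5 support
    cmake -DNEKTAR_USE_HDF5=ON ..
    make install

    # Convert the third-party mesh (assumed Gmsh here) to
    # Nektar++'s native XML format
    NekMesh grid.msh grid.xml

    # Run in parallel: mesh first, then conditions, writing field
    # output in HDF5 instead of the default per-process XML files
    mpirun -np 128 IncNavierStokesSolver grid.xml condition.xml --io-format Hdf5

With HDF5 each checkpoint should arrive as a single file rather than a directory of per-rank XML files, which scales much better on a parallel file system at 128 ranks.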