Dear All,

Has any of you ever had problems with GSMPI when running in parallel? I am having trouble running Nektar in parallel on SHARCNET (https://www.sharcnet.ca). I have run Nektar in parallel on other clusters before (https://c-cfd.meil.pw.edu.pl/hpc/) without any problems.

Let me describe the issue. If I run:

    IncNavierStokesSolver geom.xml cond.xml

all is good. If I run (also through the PBS system):

    mpirun -n 2 IncNavierStokesSolver geom.xml cond.xml

I get a segmentation fault. If I delay process communication (e.g. by loading pre-partitioned meshes), I can see some output, so I think the problem starts with the first MPI send/receive. That is why I suspect GSMPI, since it wraps the MPI communication (right?).

I have already tried different compilers (Intel and GCC), different Boost distributions, etc. Are you aware of any problems with certain OpenMPI versions, or anything like that?

Best Regards,
Stan Gepner