Hi Spencer, hi All,
I am finding that the CompressibleFlowSolver with EulerCFE in 1D is really slow. A computation on a grid of 5000 elements, with P=5 and a self-implemented 5th-order SSP Runge-Kutta scheme, needs ~1 s per time step on an up-to-date workstation (single-core execution). A calculation of 1e5 steps (1 s of simulated time with a time step of 1e-5 s) therefore needs 27 hours...
The profiler shows that the code spends 68% of its time releasing shared pointers, constructing and destructing Array objects, in lock()/unlock(), and in the = operator.
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls ms/call ms/call name
32.05 1.38 1.38 10032 0.14 0.14 boost::detail::sp_counted_base::release()
9.56 1.79 0.41 20514 0.02 0.02 Nektar::Array<Nektar::OneD, double const>::Array(unsigned int, double const&)
9.09 2.18 0.39 7693730 0.00 0.00 Nektar::Array<Nektar::OneD, double const>::~Array()
8.86 2.56 0.38 137658633 0.00 0.00 boost::unique_lock<boost::mutex>::lock()
8.04 2.90 0.35 137608137 0.00 0.00 boost::mutex::unlock()
7.69 3.23 0.33 20528 0.02 0.02 Nektar::Array<Nektar::OneD, double const>::operator=(Nektar::Array<Nektar::OneD, double const> const&)
7.23 3.54 0.31 2500500 0.00 0.00 Nektar::ExactSolverToro::v_PointSolve(double, double, double, double, double, double, double, double, double, double, double&, double&, double&, double&, double&)
4.31 3.73 0.19 74253 0.00 0.01 Nektar::MemPool::Allocate(unsigned long)
2.33 3.83 0.10 Nektar::Array<Nektar::OneD, int const>::~Array()
...
Running the CompressibleFlowSolver with MPI gives a segmentation fault somewhere in the mesh partitioning (1D mesh):
MeshPartition::MeshPartition()
MeshPartition::ReadGeometry()
0x15d70d0
0x15d71a0
0x15d71a0
0
[node92:28879] *** Process received signal ***
[node92:28879] Signal: Segmentation fault (11)
[node92:28879] Signal code: Address not mapped (1)
[node92:28879] Failing at address: 0x38
[node92:28879] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x7f7e7a781d40]
[node92:28879] [ 1] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities13MeshPartition12ReadGeometryERKN5boost10shared_ptrINS0_13SessionReaderEEE+0x1268) [0x7f7e7be44118]
[node92:28879] [ 2] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities13MeshPartitionC1ERKN5boost10shared_ptrINS0_13SessionReaderEEE+0x40f) [0x7f7e7be44a5f]
[node92:28879] [ 3] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities18MeshPartitionMetisC2ERKN5boost10shared_ptrINS0_13SessionReaderEEE+0x17) [0x7f7e7be519f7]
[node92:28879] [ 4] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities18MeshPartitionMetis6createERKN5boost10shared_ptrINS0_13SessionReaderEEE+0xc5) [0x7f7e7be53505]
[node92:28879] [ 5] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities10NekFactoryISsNS0_13MeshPartitionERKN5boost10shared_ptrINS0_13SessionReaderEEENS0_4noneES9_S9_S9_E14CreateInstanceESsS8_+0x96) [0x7f7e7be77c26]
[node92:28879] [ 6] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities13SessionReader13PartitionMeshEv+0x3fd) [0x7f7e7be6cb5d]
[node92:28879] [ 7] /home/hkuehnelt/nektar++/build/library/LibUtilities/libLibUtilities.so.4.3.0(_ZN6Nektar12LibUtilities13SessionReader11InitSessionEv+0x55) [0x7f7e7be6de55]
[node92:28879] [ 8] /home/hkuehnelt/nektar++/build/solvers/CompressibleFlowSolver/CompressibleFlowSolver(_ZN6Nektar12LibUtilities13SessionReader14CreateInstanceEiPPc+0x14c) [0x4396bc]
[node92:28879] [ 9] /home/hkuehnelt/nektar++/build/solvers/CompressibleFlowSolver/CompressibleFlowSolver(main+0x4d) [0x4291ed]
[node92:28879] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f7e7a76cec5]
[node92:28879] [11] /home/hkuehnelt/nektar++/build/solvers/CompressibleFlowSolver/CompressibleFlowSolver() [0x430b99]
[node92:28879] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 28879 on node node92 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
Do you have any advice on how to speed up the code and how to fix the MPI issue?
Best regards,
Helmut
________________________________________
Helmut Kühnelt
Scientist
Mobility Department
Electric Drive Technologies