Error when compiling Nektar++ with BLAS and LAPACK
Dear All,

I'm trying to install Nektar++ 4.0 on a cluster with the options

NEKTAR_USE_BLAS_LAPACK ON
NEKTAR_USE_SYSTEM_BLAS_LAPACK ON

and with the paths of the libraries set as follows:

NATIVE_BLAS /truba/sw/centos6.4/lib/blas/netlib-gcc/blas_LINUX.a
NATIVE_LAPACK /truba/sw/centos6.4/lib/lapack/netlib-3.5.0-gcc/liblapack.a

However, I'm getting the following error:

Linking CXX shared library libLibUtilities.so
/usr/bin/ld: /truba/sw/centos6.4/lib/lapack/netlib-3.5.0-gcc/liblapack.a(dgbtrf.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/truba/sw/centos6.4/lib/lapack/netlib-3.5.0-gcc/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[2]: *** [library/LibUtilities/libLibUtilities.so.4.0.0] Error 1
make[1]: *** [library/LibUtilities/CMakeFiles/LibUtilities.dir/all] Error 2
make: *** [all] Error 2

What may be the reason for this problem and how can I solve it?

Regards,
Kamil
Hi Kamil,

The issue is related to linking. We generate shared libraries in Nektar++, but here we are trying to link a shared library against a static library. You can only do this if the static library was compiled with the -fPIC option, which generates the position-independent code that shared libraries need in order to work at runtime.

If you have a shared version of the library, you should use that. Otherwise, you should recompile your BLAS/LAPACK installation with the -fPIC option. This may be something that your cluster system administrators can help with.

Thanks,

Dave
-- David Moxey (Research Associate) d.moxey@imperial.ac.uk | www.imperial.ac.uk/people/d.moxey Room 363, Department of Aeronautics, Imperial College London, London, SW7 2AZ, UK.
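As a rough illustration, checking a static archive for non-PIC objects and rebuilding the netlib reference libraries with -fPIC might look like the following. This is a sketch only, assuming GNU binutils and the standard netlib make.inc variables (OPTS/NOOPT); adapt the flags and paths to your own compilers and source tree.

    # Any R_X86_64_32 relocations in the archive members (such as dgbtrf.o in
    # the error above) indicate objects that were not compiled with -fPIC:
    readelf -r /truba/sw/centos6.4/lib/lapack/netlib-3.5.0-gcc/liblapack.a | grep R_X86_64_32

    # To rebuild netlib BLAS/LAPACK as position-independent code, add -fPIC to
    # the compiler options in make.inc before building, for example:
    #   OPTS  = -O2 -fPIC
    #   NOOPT = -O0 -fPIC
    # and then rebuild the archives from the LAPACK source tree:
    make blaslib lapacklib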
Dear Dr. Moxey,

I got in contact with the system administrator. He said that it is not possible to recompile BLAS and LAPACK system-wide with the -fPIC option. Instead, he copied the BLAS and LAPACK libraries from the system directory to a folder in my home directory and recompiled them there with the -fPIC option.

However, when I tried to reinstall Nektar++ pointing it at the paths of the new BLAS and LAPACK libraries in my home directory, I got the same error. Is there any other way to overcome this problem?

Regards,
Kamil
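For reference, re-pointing the build at libraries rebuilt in a home directory usually involves clearing the CMake cache, so that the previously detected system archives are not reused. A sketch only, assuming NATIVE_BLAS and NATIVE_LAPACK can be set as CMake cache variables (as the settings quoted at the top of the thread suggest) and using illustrative home-directory paths:

    cd build                      # your Nektar++ build directory
    rm -f CMakeCache.txt          # drop cached paths to the system BLAS/LAPACK
    cmake -DNEKTAR_USE_BLAS_LAPACK=ON \
          -DNEKTAR_USE_SYSTEM_BLAS_LAPACK=ON \
          -DNATIVE_BLAS=$HOME/lib/blas/blas_LINUX.a \
          -DNATIVE_LAPACK=$HOME/lib/lapack/netlib-3.5.0-gcc/liblapack.a \
          ..
    make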
Dear all,

When I try to compile the “MovingBodies” branch of Nektar++ on cx2, I get errors such as the following:

CMakeFiles/ExtractMeanModeFromHomo1DFld.dir/ExtractMeanModeFromHomo1DFld.cpp.o: file not recognized: File truncated
make[2]: *** [utilities/PostProcessing/ExtractMeanModeFromHomo1DFld-3.4.0] Error 1
make[1]: *** [utilities/PostProcessing/CMakeFiles/ExtractMeanModeFromHomo1DFld.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs….

However, the “master” branch compiles very smoothly on cx2, and I did not get any such errors when compiling my branch on the victoria/euston nodes. How can I fix this? Many thanks.

Cheers,
Yan
Hi Yan,

It sounds like you might have exceeded your disk quota when previously compiling the code, leaving a truncated object file.

If you are well within your quota, deleting the ExtractMeanModeFromHomo1DFld.cpp.o file should force it to be recompiled and resolve the issue.

Cheers,
Chris
-- Chris Cantwell Imperial College London South Kensington Campus London SW7 2AZ Email: c.cantwell@imperial.ac.uk www.imperial.ac.uk/people/c.cantwell
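In concrete terms, that fix amounts to something like the following, run from the top of the build tree; a sketch, with the object path inferred from the make targets in the error above, and quota -s given only as one way to check your usage first:

    quota -s      # confirm you are comfortably within your disk quota
    rm utilities/PostProcessing/CMakeFiles/ExtractMeanModeFromHomo1DFld.dir/ExtractMeanModeFromHomo1DFld.cpp.o
    make          # the deleted object file is recompiled on the next build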
Hi Chris,

Thanks for your email. The code now compiles successfully. However, when I qsub my job on cx2, I get the following message:

MPI: r3i3n0: 0x27c6000054417657: /home/ybao/CX2/nektar++/builds/dist/bin/IncNavierStokesSolver: error while loading shared libraries: libfftw3.so.3: cannot open shared object file: No such file or directory
MPI: could not run executable (case #4)

In fact, I had loaded fftw/3.3.2-double when running my case. Could you please help me fix this problem? Many thanks!

Regards,
Yan
Hi Yan,

It looks like there is no longer an fftw/3.3.2-double module on cx2. Try loading the fftw/3.3.3-double module, recompiling and relinking Nektar++, and trying again.

You can check whether it is finding the library by running, on the login node and from your build directory:

ldd dist/bin/IncNavierStokesSolver | grep fftw

and seeing whether you get a line like:

libfftw3.so.3 => /apps/fftw/3.3.3/lib/libfftw3.so.3

Cheers,
Chris
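Put together, that sequence might look like the following on cx2; a sketch, using the module name and build directory that appear elsewhere in this thread, and assuming that re-running cmake from the build directory is enough to pick up the new FFTW location (cached FFTW paths may also need clearing):

    module load fftw/3.3.3-double
    cd ~/CX2/nektar++/builds
    cmake ..                                  # re-detect FFTW under the newly loaded module
    make
    ldd dist/bin/IncNavierStokesSolver | grep fftw
    # expected output: libfftw3.so.3 => /apps/fftw/3.3.3/lib/libfftw3.so.3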
Hi Chris,

Many thanks for your help! I can now find libfftw3.so.3 among the linked libraries, but I do not find Boost among them, please see below:

ybao@cx2:~/CX2/nektar++/builds> ldd dist/bin/IncNavierStokesSolver | grep fftw
libfftw3.so.3 => /apps/fftw/3.3.3/lib/libfftw3.so.3 (0x00007fffeb32a000)

ybao@cx2:~/CX2/nektar++/builds> ldd dist/bin/IncNavierStokesSolver | grep boost
libboost_thread.so.1.55.0 => not found
libboost_iostreams.so.1.55.0 => not found
libboost_date_time.so.1.55.0 => not found
libboost_program_options.so.1.55.0 => not found
libboost_filesystem.so.1.55.0 => not found
libboost_system.so.1.55.0 => not found
libboost_thread.so.1.55.0 => not found
libboost_iostreams.so.1.55.0 => not found
libboost_date_time.so.1.55.0 => not found
libboost_program_options.so.1.55.0 => not found
libboost_filesystem.so.1.55.0 => not found
libboost_system.so.1.55.0 => not found
libboost_thread.so.1.55.0 => not found

And when I run the case, I get the following error:

MPI: r3i2n0: 0x447c000054609b62: /home/ybao/CX2/nektar++/builds/dist/bin/IncNavierStokesSolver: error while loading shared libraries: libboost_thread.so.1.55.0: cannot open shared object file: No such file or directory

How can I fix this? Thanks again!

Regards,
Yan
Hi Yan,

There is now a Boost module on cx2 (version 1.57). Since it looks like you have linked against 1.55 (perhaps a copy in your home directory?), you will need to rerun make to link against the correct version of the libraries.

Cheers,
Chris
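The corresponding check-and-relink might look like this; a sketch, assuming the cx2 Boost module is simply named boost (only the version, 1.57, is mentioned above) and the same build directory as earlier in the thread:

    module load boost                          # module name is an assumption; version 1.57 per the message above
    cd ~/CX2/nektar++/builds
    make                                       # relink against the newly available Boost libraries
    ldd dist/bin/IncNavierStokesSolver | grep boost   # entries should no longer report 'not found'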
Hi Spencer, Chris, Jean,

I've fixed the problem in my case just by doing:

CC=mpicc CXX=mpic++ cmake \
    -DTHIRDPARTY_BUILD_BOOST=OFF \
    -DBoost_USE_MULTITHREADED=OFF \
    -DNEKTAR_USE_FFTW=ON \
    -DNEKTAR_USE_MKL=ON \
    -DNEKTAR_USE_MPI=ON \
    -DNEKTAR_USE_SYSTEM_BLAS_LAPACK=OFF \
    -DTHIRDPARTY_BUILD_FFTW=OFF ..

Thanks again,

Cheers,
Yan
Dear Kamil,

Can you confirm whether the library mentioned in the original error message (/truba/sw/centos6.4/lib/lapack/netlib-3.5.0-gcc/liblapack.a) has now been replaced by the version in your home directory in the latest error message?

Cheers,
Chris
Dear Dr. Cantwell,

Yes, it is the version of the library compiled with the -fPIC option.

Regards,
Kamil
Dear Kamil,

Could you send the current error message you get when you specify the version of the library compiled with -fPIC?

Cheers,
Chris
Dear Dr. Cantwell,

Here is the error message I got:

[ 19%] Building CXX object library/LibUtilities/CMakeFiles/LibUtilities.dir/GitRevision.cpp.o
Linking CXX shared library libLibUtilities.so
/usr/bin/ld: /truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a(dlamch.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[2]: *** [library/LibUtilities/libLibUtilities.so.4.0.0] Error 1
make[1]: *** [library/LibUtilities/CMakeFiles/LibUtilities.dir/all] Error 2
make: *** [all] Error 2

Regards,
Kamil
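A quick way to check whether a static archive really was rebuilt with -fPIC is to look for absolute relocations in its object files. A minimal check against the library path above (the grep pattern is the relocation type reported by the linker):

  # Any R_X86_64_32 hits mean the archive still contains non-position-independent
  # objects and will keep triggering the linker error shown above.
  readelf --relocs /truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a | grep R_X86_64_32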
Dear Kamil,

This still seems to suggest that the version in your home directory is not compiled with -fPIC.

Try deleting all library files (*.a) and all compiled object code (*.o) from within the LAPACK source tree and compiling again from scratch. Also note that you need to add the -fPIC flag to both the OPTS and NOOPT variables in your LAPACK make.inc file (which presumably is what your system administrator altered).

Cheers,
Chris
-- Chris Cantwell Imperial College London South Kensington Campus London SW7 2AZ Email: c.cantwell@imperial.ac.uk www.imperial.ac.uk/people/c.cantwell
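For reference, the make.inc change suggested above might look roughly like this for a netlib LAPACK 3.5.0 tree, followed by a clean rebuild (the path and optimisation levels are illustrative; only the -fPIC additions matter):

  # make.inc fragment: keep the existing flags and append -fPIC to both lines
  OPTS  = -O2 -fPIC
  NOOPT = -O0 -fPIC

  # then rebuild from a clean source tree
  cd ~/lib/lapack/netlib-3.5.0-gcc        # illustrative location
  find . -name '*.o' -delete
  rm -f *.a
  make blaslib lapacklib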
Dear Dr. Cantwell,

Thanks for your help. I'll try this and let you know the result.

Meanwhile, I made another installation with ACML on the same cluster, with the following ACML and MPI configuration:

ACML                       /truba/sw/centos6.4/lib/acml/4.4.0/gfortran64/lib/libacml.so
ACML_INCLUDE_PATH          /truba/sw/centos6.4/lib/acml/4.4.0/gfortran64/include
ACML_SEARCH_PATHS          /truba/sw/centos6.4/lib/acml/4.4.0/gfortran64/include
ACML_USE_OPENMP_LIBRARIES  OFF
ACML_USE_SHARED_LIBRARIES  ON

MPIEXEC                    /usr/mpi/gcc/openmpi-1.6.5/bin/mpiexec
MPIEXEC_MAX_NUMPROCS       2
MPIEXEC_NUMPROC_FLAG       -np
MPIEXEC_POSTFLAGS
MPIEXEC_PREFLAGS
MPI_CXX_COMPILER           /usr/mpi/gcc/openmpi-1.6.5/bin/mpicxx
MPI_CXX_COMPILE_FLAGS
MPI_CXX_INCLUDE_PATH       /usr/mpi/gcc/openmpi-1.6.5/include
MPI_CXX_LIBRARIES          /usr/mpi/gcc/openmpi-1.6.5/lib64/libmpi_cxx.so;/usr/mpi/gcc/openmpi-1.6.5/lib64/libmpi.so;/usr/lib64/libdl.so;/usr/lib64/libm.so;/usr/lib64/librt.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so;/usr/lib64/libdl.so
MPI_CXX_LINK_FLAGS         -Wl,--export-dynamic
MPI_C_COMPILER             /usr/mpi/gcc/openmpi-1.6.5/bin/mpicc
MPI_C_COMPILE_FLAGS
MPI_C_INCLUDE_PATH         /usr/mpi/gcc/openmpi-1.6.5/include
MPI_C_LIBRARIES            /usr/mpi/gcc/openmpi-1.6.5/lib64/libmpi.so;/usr/lib64/libdl.so;/usr/lib64/libm.so;/usr/lib64/librt.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so;/usr/lib64/libdl.so
MPI_C_LINK_FLAGS           -Wl,--export-dynamic
MPI_EXTRA_LIBRARY          /usr/mpi/gcc/openmpi-1.6.5/lib64/libmpi.so;/usr/lib64/libdl.so;/usr/lib64/libm.so;/usr/lib64/librt.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so;/usr/lib64/libdl.so
MPI_LIBRARY                /usr/mpi/gcc/openmpi-1.6.5/lib64/libmpi_cxx.so

Nektar++ seems to install successfully. However, when I submit a job with the mpirun command through a script to the cluster's AMD processors (the cluster uses the SLURM resource manager), I run into the following issue.

When I run with 4 processors, the initial conditions are read and the first .chk directory starts to be written, as seen below:

=======================================================================
EquationType: UnsteadyNavierStokes
Session Name: Re_1_v2_N6
Spatial Dim.: 3
Max SEM Exp. Order: 7
Expansion Dim.: 3
Projection Type: Continuous Galerkin
Advection: explicit
Diffusion: explicit
Time Step: 0.01
No. of Steps: 300
Checkpoints (steps): 30
Integration Type: IMEXOrder1
=======================================================================
Initial Conditions:
- Field u: 0
- Field v: 0
- Field w: 0.15625
- Field p: 0
Writing: Re_1_v2_N6_0.chk

But after that the analysis ends with the error below:

Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
slurmd[mercan115]: Job 405433 exceeded memory limit (22245156 > 20480000), being killed
slurmd[mercan115]: Exceeded job memory limit
slurmd[mercan115]: *** JOB 405433 CANCELLED AT 2014-11-30T23:15:28 ***

However, when I run the analysis with 8 processors, it ends immediately with the error below:

Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
Warning: Conflicting CPU frequencies detected, using: 2300.000000.
--------------------------------------------------------------------------
mpirun noticed that process rank 2 with PID 24004 on node mercan146.yonetim exited on signal 11 (Segmentation fault).

What may be the reason for this problem?

Regards,
Kamil
Dear Kamil,

The first error is simply that more memory was needed than the amount you allocated to the job (as you probably realised). The second error is a segmentation fault.

Can you reproduce the problem using a (much) smaller job?

Cheers,
Chris
-- Chris Cantwell Imperial College London South Kensington Campus London SW7 2AZ Email: c.cantwell@imperial.ac.uk www.imperial.ac.uk/people/c.cantwell
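On the memory point above: the SLURM log shows the job being killed for exceeding its allocation, so the batch script needs a larger memory request. A minimal sketch of the relevant directives (the values are illustrative, not taken from the thread):

  #SBATCH --ntasks=4
  #SBATCH --mem=32G            # total memory for the job; raise until it fits
  # or, per MPI rank:
  #SBATCH --mem-per-cpu=8G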
Dear Dr. Cantwell,

I tried to run the Nektar++ test file KovaFlow_m8.xml via a script file and got the same segmentation fault error.

I then copied the same file to the directory nektar++-4.0.0/build/solvers/IncNavierStokesSolver/ and tried to run it from the command line with

./IncNavierStokesSolver KovaFlow_m8.xml

but I got the following error:

./IncNavierStokesSolver: error while loading shared libraries: libacml_mv.so: cannot open shared object file: No such file or directory

Regards,
Kamil
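The "cannot open shared object file" message means the dynamic linker cannot locate the ACML runtime at run time. Assuming libacml_mv.so sits in the same lib directory as the libacml.so given in the configuration above, exporting that directory before running the solver would typically resolve it:

  export LD_LIBRARY_PATH=/truba/sw/centos6.4/lib/acml/4.4.0/gfortran64/lib:$LD_LIBRARY_PATH
  ./IncNavierStokesSolver KovaFlow_m8.xml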
Dear Dr. Cantwell,

As additional information, the KovaFlow_m8.xml analysis runs from the command line using the mpirun command, but does not run when submitted to the cluster through a script; it gives the error below.

mpirun noticed that process rank 2 with PID 32190 on node mercan155.yonetim exited on signal 11 (Segmentation fault).

Is there any option in the Nektar++ configuration that needs to be changed to run the analysis on the cluster?

NOTE: I have tried both the mpirun and mpiexec commands in the script and get the same error. If you want, I can also send you the script.

Regards,
Kamil
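One common difference between an interactive mpirun and a batch run is the environment the job starts with (library paths, working directory). A minimal submission script along these lines, with the same LD_LIBRARY_PATH export as above, makes that explicit (paths and resource values are illustrative):

  #!/bin/bash
  #SBATCH --ntasks=4
  #SBATCH --output=kovaflow_%j.log

  export LD_LIBRARY_PATH=/truba/sw/centos6.4/lib/acml/4.4.0/gfortran64/lib:$LD_LIBRARY_PATH

  cd $SLURM_SUBMIT_DIR
  mpirun -np $SLURM_NTASKS ./IncNavierStokesSolver KovaFlow_m8.xml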
Dear Dr. Cantwell,

The latest situation with the ACML installation of Nektar++ on the cluster is that the KovaFlow_m8.xml analysis runs with 1 and 2 processors, but when I try to run it with 4 processors I get the segmentation fault error.

Regards,
Kamil
Dear Kamil,

Your problem sounds specific to the cluster you are using, or to the use of ACML.

Do you use ACML on the workstation where KovaFlow_m8.xml ran successfully using mpirun? How many cores was that on, and how many were you using on the cluster?

We will need to see a backtrace at the point where the segmentation fault occurs to be able to diagnose what is going wrong and help further. How you do this will depend on what debugging software is available on your cluster. Your system administrator should be able to help you with this.

Cheers,
Chris
-- Chris Cantwell Imperial College London South Kensington Campus London SW7 2AZ Email: c.cantwell@imperial.ac.uk www.imperial.ac.uk/people/c.cantwell
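A generic way to obtain the backtrace requested above, if gdb is available on the compute nodes, is to let the crashing rank write a core file and inspect it afterwards (the core-file name depends on the system's core pattern; the one below is illustrative):

  # in the job script, before launching the solver
  ulimit -c unlimited
  mpirun -np 4 ./IncNavierStokesSolver KovaFlow_m8.xml

  # afterwards, open the core file left by the rank that crashed
  gdb ./IncNavierStokesSolver core.24004
  # then type 'bt' at the gdb prompt to print the backtrace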
Dear Dr. Cantwell,

I installed Nektar++ on the cluster with the NEKTAR_USE_ACML option switched ON. There are 92 nodes, with 24 processors on each node. When I use 1 or 2 processors on one node the analysis runs; however, when I increase the number of processors to 4 on one node I get the error.

Regards,
Kamil
Dear Dr. Cantwell,

I followed your suggestions, added -fPIC to the variables and compiled Nektar++ against BLAS and LAPACK again. As a result, I no longer get the previous error, but I now have a different one:

[ 55%] Built target SolverUtils
Linking CXX executable UnitTests
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_stop_numeric'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_transfer_character'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_compare_string'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_st_write_done'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_concat_string'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_string_len_trim'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_transfer_integer'
../LibUtilities/libLibUtilities.so.4.0.0: undefined reference to `_gfortran_st_write'
collect2: ld returned 1 exit status
make[2]: *** [library/UnitTests/UnitTests-4.0.0] Error 1
make[1]: *** [library/UnitTests/CMakeFiles/UnitTests.dir/all] Error 2
make: *** [all] Error 2

Is there anything else that I need to change in the make.inc file of BLAS and LAPACK to overcome this problem?

Regards, Kamil
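The undefined _gfortran_* symbols belong to the GNU Fortran runtime that a gfortran-built LAPACK depends on, so the Nektar++ link now also needs libgfortran. A quick, read-only way to confirm this (the library path is copied from the messages in this thread; the rest is a generic sketch):

    # List the unresolved Fortran-runtime symbols the static LAPACK expects.
    nm -u /truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a | grep _gfortran_ | sort -u

    # Show which libgfortran the cluster's gfortran would use.
    gfortran -print-file-name=libgfortran.so

If the symbols do come from libgfortran, the usual remedy is to make sure that library ends up on the link line (for example through the linker-flag variables passed to CMake) or to use a BLAS/LAPACK build that already links the Fortran runtime, such as a shared liblapack.so built with gfortran; which option is most convenient depends on the cluster setup.

On 30.11.2014 13:08, Chris Cantwell wrote: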
Dear Kamil,
This still seems to suggest that the version in your home directory is not compiled with -fPIC.
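For what it is worth, a quick heuristic check of whether an archive really was rebuilt with -fPIC is to look for the offending relocation type directly; the path below is the one from the latest error message:

    # Non-PIC objects on x86_64 typically carry R_X86_64_32 relocations;
    # if this prints anything, the archive was not built with -fPIC.
    readelf -r /truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a | grep R_X86_64_32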
Try deleting all library files (*.a) and all compiled object code (*.o) from within the LAPACK source tree and try compiling from fresh again. Also note that you need to add the -fPIC flag to both the OPTS and NOOPT variables in your LAPACK make.inc file (which presumably is what your system administrator altered).
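Spelled out as a sketch, the clean rebuild might look like the following; the source location and the exact OPTS/NOOPT values are assumptions, and only the requirement that -fPIC appear in both variables comes from the advice above:

    cd $HOME/lapack-3.5.0            # hypothetical location of the LAPACK source tree
    find . -name '*.o' -delete       # remove all compiled object code
    find . -name '*.a' -delete       # remove all previously built libraries

    # In make.inc, -fPIC must appear in both variables, for example:
    #   OPTS  = -O2 -fPIC
    #   NOOPT = -O0 -fPIC

    make blaslib lapacklib           # library targets in the netlib 3.5.0 top-level Makefile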
Cheers, Chris
On 29/11/14 22:29, Kamil ÖZDEN wrote:
Dear Dr. Cantwell,
Here is the error message I got :
[ 19%] Building CXX object library/LibUtilities/CMakeFiles/LibUtilities.dir/GitRevision.cpp.o
Linking CXX shared library libLibUtilities.so
/usr/bin/ld: /truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a(dlamch.o): relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
/truba/home/kozden/lib/lapack/netlib-3.5.0-gcc/liblapack.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
make[2]: *** [library/LibUtilities/libLibUtilities.so.4.0.0] Error 1
make[1]: *** [library/LibUtilities/CMakeFiles/LibUtilities.dir/all] Error 2
make: *** [all] Error 2
Regards, Kamil
On 30.11.2014 00:08, Chris Cantwell wrote:
Dear Kamil,
Could you send the current error message you get when you specify the version of the library compiled with -fPIC?
Cheers, Chris
On 29/11/14 21:55, Kamil ÖZDEN wrote:
Dear Dr. Cantwell,
Yes, it is the same version of the library compiled with -fPIC version.
Regards, Kamil
On 29.11.2014 23:48, Chris Cantwell wrote:
Dear Kamil,
Can you confirm the library mentioned in the original error message (/truba/sw/centos6.4/lib/lapack/netlib-3.5.0-gcc/liblapack.a) is now the version of the library in your home directory in the latest error message?
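A straightforward way to check this is to look at the CMake cache in the Nektar++ build directory; if the paths were changed after the first configure, it can also help to clear the cache so the new locations are picked up. A small sketch, with the build directory location assumed:

    cd $HOME/nektar++/build                          # hypothetical build directory
    grep -E 'NATIVE_(BLAS|LAPACK)' CMakeCache.txt    # show which libraries CMake has recorded

    # If these still point at the system copies, clear the cache and reconfigure,
    # passing the new paths again via -DNATIVE_BLAS=... and -DNATIVE_LAPACK=...
    rm CMakeCache.txt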
Cheers, Chris
On 27/11/14 13:57, Kamil Ozden wrote:
Dear Dr. Moxey,
I got in contact with the system administrator. He told that it is impossible to recompile blas and lapack on the system with -fPIC option.
Alternatively, he copied the Blas and Lapack libraries from the directory in the system to another folder in my home directory and recompiled them there with -fPIC option.
However, when I tried to reinstall Nektar by showing the path of new Blas and Lapack libraries in my home directory I got the same error. Is there any other alternative way to overcome this problem?
Regards, Kamil
participants (5)
- Bao, Yan
- Chris Cantwell
- David Moxey
- Kamil Ozden
- Kamil ÖZDEN