Warning about number of OMP threads
Dear Firedrake,

I recently tried to run Firedrake using "mpiexec -n 80", so 80 threads, to improve performance. But I got the warning "OMP_NUM_THREADS is not set or is set to a value greater than 1, we suggest setting OMP_NUM_THREADS=1 to improve performance", which doesn't really make sense to me, since I want 80 threads. Perhaps I am misunderstanding what's going on here.

So, is this a problem, and if so, how can I set OMP_NUM_THREADS=1?

Andrew Hicks
Andrew --

One must distinguish between threads and MPI processes; they are different things. One may do computations with P MPI processes and N OpenMP threads per process; see, for example, https://openmp.org/wp-content/uploads/HybridPP_Slides.pdf

By using mpiexec you are setting the number of MPI processes. Each MPI process has an independent memory space; by contrast, OpenMP threads share a memory space. (And I just exhausted my knowledge of OpenMP threads.) In any case, running

    export OMP_NUM_THREADS=N
    mpiexec -n P ./program.py

might yield the hybrid computation you want.

Ed
--
Ed Bueler
Dept of Mathematics and Statistics
University of Alaska Fairbanks
Fairbanks, AK 99775-6660
306C Chapman
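A minimal sketch of how to see the process/thread split from inside a script (the filename program.py is illustrative; mpi4py is assumed importable, which is the case in a standard Firedrake virtual environment):

    # program.py -- illustrative sketch, not code from this thread.
    import os
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    # Every MPI process has its own memory space and executes this print;
    # OpenMP threads, by contrast, would share one process's memory.
    print(f"MPI rank {comm.rank} of {comm.size}; "
          f"OMP_NUM_THREADS={os.environ.get('OMP_NUM_THREADS', 'unset')}")

Invoked as

    export OMP_NUM_THREADS=1
    mpiexec -n 4 python program.py

this prints four lines, one per MPI process: the parallelism comes from processes, not threads.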
On Tue, Aug 3, 2021 at 3:36 AM Cotter, Colin J <colin.cotter@imperial.ac.uk> wrote:

> > In any case, running
> >
> >     export OMP_NUM_THREADS=N
> >     mpiexec -n P ./program.py
>
> It won’t, because PETSc, and hence Firedrake, is pure MPI, no threading.

https://figshare.com/articles/journal_contribution/Exascale_Computing_withou...

Thanks,
Matt
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
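Concretely, since PETSc and Firedrake are pure MPI, the warning is asking for exactly the combination Andrew wants: 80 MPI processes with one thread each. A hedged sketch of such a run (the Poisson problem below is a generic Firedrake example for illustration, not code from this thread):

    # solve.py -- a generic Firedrake Poisson solve, illustrative only.
    from firedrake import *

    # Firedrake partitions the mesh across however many MPI processes
    # mpiexec starts; no threading is involved.
    mesh = UnitSquareMesh(64, 64)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    a = inner(grad(u), grad(v)) * dx
    L = Constant(1.0) * v * dx
    bc = DirichletBC(V, 0.0, "on_boundary")
    uh = Function(V)
    solve(a == L, uh, bcs=bc)

Run with all 80-way parallelism coming from MPI processes, none from threads:

    export OMP_NUM_THREADS=1
    mpiexec -n 80 python solve.py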
> https://figshare.com/articles/journal_contribution/Exascale_Computing_withou...

Thanks Matt, that’s useful for accelerating these discussions!

all the best
cjc
participants (4)

- Andrew Hicks
- Cotter, Colin J
- Ed Bueler
- Matthew Knepley