optimisation of the running time
Dear all,

I would like some advice on decreasing the running time of my code 3D_NL.py. Details are given in the attached pdf; below is a quick summary.

In the attached files, I run a 3D nonlinear code for the potential-flow water-wave equations. The vertical and horizontal space discretisations are separated, so that Firedrake solves the equations in the horizontal plane only, and the vertical dependence enters through matrices computed by evaluating the z-dependent integrals with the functions defined in the file 'discr_sigma.py'. Basically, the potential phi(x,y,z,t) is written as phi(x,y,t)*phi(z), where the z-dependence is discretised as an n-th order Lagrange expansion in one element, with basis functions

    phi_i(z) = product_{k=0, k!=i}^{n} (z - z_k)/(z_i - z_k).

There are therefore 3 unknowns:
- h (the total depth of water), defined in the horizontal plane;
- phi_s (the potential at the surface), defined in the horizontal plane;
- hat_psi, a vector function containing all the phi_i for i = 1..n, i.e. the values of phi at the vertical nodes (excluding the surface one), where n is the order of the Lagrange expansion. Each phi_i is defined in the horizontal plane.

The time scheme is defined as follows (these steps are written in the pdf file as well):
step 1: update phi_s and hat_psi simultaneously (that is, a sum of n+1 weak formulations);
step 2: update h and hat_psi simultaneously (that is, a sum of n+1 weak formulations);
step 3: update phi_s (one weak formulation).

At the moment the equations are solved in a small domain with resolution 0.1, but the code runs very slowly, and running in parallel does not improve it since the number of nodes is too small. Is there another way to increase the simulation speed? Maybe by changing the iteration criteria in the solver? I hope this is clear enough; otherwise I can send you more details.

Best regards,
Floriane
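[Editorial note: the vertical Lagrange basis described above can be sketched in plain Python. This is illustrative only, with made-up node values; it is not taken from 3D_NL.py or discr_sigma.py.]

```python
def lagrange_basis(z_nodes, i, z):
    """Evaluate the i-th Lagrange basis function at z:
    L_i(z) = prod_{k != i} (z - z_k) / (z_i - z_k),
    so that L_i(z_i) = 1 and L_i(z_k) = 0 for k != i.
    """
    result = 1.0
    for k, zk in enumerate(z_nodes):
        if k != i:
            result *= (z - zk) / (z_nodes[i] - zk)
    return result

# Hypothetical vertical nodes in one element (order n = 2):
nodes = [0.0, 0.5, 1.0]
```

With these basis functions, phi(z) = sum_i phi_i * L_i(z) interpolates the nodal values phi_i, and the z-dependent integrals can be precomputed into matrices as described above.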
Hi Floriane, apologies for the slow reply. On 21/06/16 16:03, Floriane Gidel [RPG] wrote:
Dear all,
I would like some advice to decrease the running time in my code 3D_NL.py . Details are given in the attached pdf and below is a quick summary:
...
At the moment, the equations are solved in a small domain with resolution 0.1, but it's running very slowly. Running in parallel does not improve it since the number of nodes is too small.
Is there another way to increase the simulation speed? Maybe by changing the iteration criteria in the solver ?
My first suggestion, looking at the code, would be to replace the direct calls to solve in the timestepping loop, for example:

    solve(WF1 == 0, w1)

by a reusable solver that you can call again and again. So, rather than having:

    while t < Tend:
        ...
        solve(WF1 == 0, w1)
        ...

you do:

    wf1_problem = NonlinearVariationalProblem(WF1, w1)
    wf1_solver = NonlinearVariationalSolver(wf1_problem)

    while t < Tend:
        ...
        wf1_solver.solve()
        ...

This should reduce the amount of symbolic processing and other things significantly.

Once you've done that, you should try profiling your code. To get some breakdowns of the time you can import some convenience timers from PyOP2:

    from pyop2.profiling import timed_stage

For the different "stages" in your code, e.g. the different solvers, or the visualisation output, you can use these as a context manager to agglomerate timings:

    while t < Tend:
        ...
        with timed_stage("WF1 solve"):
            wf1_solver.solve()
        ...

Now, you can run your program with:

    python simulation.py -log_view

and you will get a breakdown of timings split apart by stages. We can look at this and decide where to go from there.

Cheers,
Lawrence
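[Editorial note: the idea behind these stage timers can be sketched in plain Python. This is illustrative only, not the PyOP2 implementation: a context manager that accumulates wall-clock time per named stage across repeated calls in a loop.]

```python
import time
from contextlib import contextmanager

# Accumulated wall-clock time per stage name, summed over all calls.
_stage_totals = {}

@contextmanager
def timed_stage(name):
    """Accumulate time spent inside the `with` block under `name`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        _stage_totals[name] = _stage_totals.get(name, 0.0) + elapsed
```

Because the totals are keyed by name, calling the same stage once per timestep aggregates the whole loop's cost into a single entry, which is what makes the per-stage breakdown readable.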
Hi Lawrence,

Thank you very much for your answer. Using nonlinear solvers indeed improved the running speed a lot! I tried to profile my code, but I get the following error concerning timed_stage:

    Traceback (most recent call last):
      File "3D_NL.py", line 9, in <module>
        from pyop2.profiling import timed_stage
    ImportError: cannot import name timed_stage

I found online something called "timed_region"; would that be the same as timed_stage?

Thanks,
Floriane

On Thursday 7 July 2016 at 16:58, Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk> wrote: ...
So would this (using NL solvers) also be the case for the BL solver, or was that one already optimised?

On Tuesday 12 July 2016 at 08:56, Floriane Gidel [RPG] <mmfg@leeds.ac.uk> wrote: ...
No, unfortunately: I was already using this in the BL code.

On Tuesday 12 July 2016 at 09:15, Onno Bokhove <O.Bokhove@leeds.ac.uk> wrote: ...
participants (3)
- Floriane Gidel [RPG]
- Lawrence Mitchell
- Onno Bokhove