Re: [firedrake] Parallel issues and projection issues
Hi Will,

I'll have a quick look at the parallel issue - I suspect it's just solver tolerances. Can you please provide a maths write-up of the variational problem you are solving so we can think more easily about number 2?

Regards,
David

On Fri, 20 Jan 2017 at 13:09 William Booker <scwb@leeds.ac.uk> wrote:
Dear Firedrakers,
Attached is code for a skew-symmetric FEM scheme that should conserve the discrete energy of the system.
I have two issues:
1) When I run the code in serial it conserves energy as expected, but when I run it in parallel there seems to be an error. Attached is a graph of the energy drift from runs on 1 and 2 cores.
2) To make the code skew-symmetric, I have to project the physical variables (density, velocity) onto their variational derivatives. Currently this is done by extending the system from (vel, dens) to (vel, dens, vel_deriv, den_deriv), so the projection is included as part of the mixed system. Obviously this is rather expensive, as it doubles the number of unknowns, and I'm looking to use this approach for more complicated systems.
I believe Colin mentioned to Onno that there may be a better way of including this projection step without extending the mixed system.
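One pattern that avoids doubling the mixed system (an assumption on my part, not something confirmed in this thread) is to eliminate the projected variables: since they are defined by a mass-matrix projection d = M^-1 B q, they can be substituted into the main system and recovered afterwards by a cheap standalone solve. A scalar toy analogue in plain Python, where M, B, A, C, f are illustrative stand-ins for the operators, not values from the actual scheme:

```python
# Toy scalar analogue of eliminating the projected (variational
# derivative) variables instead of carrying them in an enlarged
# mixed system. M plays the role of the mass matrix, B the operator
# defining the projection d = (B/M) q, and (A, C, f) stand in for
# the rest of the system. All numbers are illustrative.

def solve2x2(a, b, c, d, r1, r2):
    """Solve [[a, b], [c, d]] x = [r1, r2] by Cramer's rule."""
    det = a * d - b * c
    return ((r1 * d - b * r2) / det, (a * r2 - c * r1) / det)

M, B, A, C, f = 2.0, 3.0, 4.0, 1.0, 10.0

# Enlarged system: A q + C d = f, plus the projection -B q + M d = 0
q_big, d_big = solve2x2(A, C, -B, M, f, 0.0)

# Eliminated system: substitute d = (B/M) q, solve for q alone,
# then recover d by a cheap projection afterwards
q_small = f / (A + C * B / M)
d_small = B * q_small / M

print(q_big, q_small)   # both 20/11
print(d_big, d_small)   # both 30/11
```

The same elimination goes through with matrices (a Schur complement), which is presumably what was meant by avoiding the extended mixed system; whether it pays off depends on how cheaply M can be inverted.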
Thanks Will
-- Dr David Ham Department of Mathematics Imperial College London
Hi Will,

The problem does not immediately appear to be solver tolerances, but rather in your definition of the discrete divergence. You have:

    # Declare flux indicator function
    theta = Constant(0.6)

    def div_u(u, p):
        return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
            + (jump(p)*dot((u('-')*(1-theta) + u('+')*theta), n('-')))*dS

The second term is the problematic one:

    dot((u('-')*(1-theta) + u('+')*theta), n('-'))

is not symmetric under interchange of the "+" and "-" restrictions. Remember that these restrictions are *arbitrary*, and their value is local to a particular MPI process. As a result, you can end up in a situation where this integral "double counts": on one process you pick up a contribution from the local "n('-')" restriction, while on the other process, where in serial you would be guaranteed an "n('+')" (and therefore no contribution), you can also get an "n('-')".

Make sense? Can you show how you derived this discrete divergence?

Cheers,
Lawrence
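To see the point concretely, here is a toy 1D analogue of that facet term in plain Python (no Firedrake; the values are made up). Swapping the arbitrary "+"/"-" labels also flips n('-'), and a well-defined facet term must be invariant under that relabelling:

```python
# Toy 1D check of restriction symmetry for the facet term
#   jump(p) * dot(theta*u('+') + (1-theta)*u('-'), n('-'))
# The '+'/'-' labels are arbitrary, so the value must not change
# when they are swapped (which also flips the facet normal).

def flux_term(p_plus, p_minus, u_plus, u_minus, n_minus, theta):
    jump_p = p_plus - p_minus
    return jump_p * (theta * u_plus + (1 - theta) * u_minus) * n_minus

def swapped(p_plus, p_minus, u_plus, u_minus, n_minus, theta):
    # Relabel '+' <-> '-'; the '-' normal becomes its negative.
    return flux_term(p_minus, p_plus, u_minus, u_plus, -n_minus, theta)

vals = dict(p_plus=2.0, p_minus=1.0, u_plus=3.0, u_minus=5.0, n_minus=1.0)

a = flux_term(theta=0.6, **vals)
b = swapped(theta=0.6, **vals)
print(a, b)   # 3.8 vs 4.2: theta = 0.6 is label-dependent

c = flux_term(theta=0.5, **vals)
d = swapped(theta=0.5, **vals)
print(c, d)   # 4.0 4.0: the centred flux is label-invariant
```

This matches the diagnosis above: the theta-weighted part with theta != 1/2 genuinely depends on which side each process happens to label "-", which is exactly what produces the double counting in parallel.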
On 20 Jan 2017, at 14:01, David Ham <david.ham@imperial.ac.uk> wrote:
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake
Dear David and Lawrence,

Derivation is attached, with the explanation for DIV_u and the projection issue. The DIV operator is just to avoid repeating code; it is simply the sum of my element-internal terms and the boundary flux.

Is there any way to communicate the same left and right sense to the processors, so that it is guaranteed?

Thanks Will

________________________________
From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk>
Sent: 20 January 2017 16:47:22
To: firedrake
Subject: Re: [firedrake] Parallel issues and projection issues
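On the question of communicating a left/right sense: the usual remedy (sketched here as a general technique, not a prescription from this thread) is not to synchronise the labels but to make the facet term depend only on geometric information that every process agrees on, e.g. upwinding on the sign of avg(u).n. A toy 1D analogue in plain Python showing that such a term is invariant under relabelling:

```python
# Toy 1D analogue of a label-invariant upwind facet term.
# Instead of weighting by the arbitrary '+'/'-' restrictions,
# the upwind side is chosen by the sign of avg(u) . n('-'),
# a geometric quantity every process computes identically.

def upwind_flux(p_plus, p_minus, u_plus, u_minus, n_minus):
    un = 0.5 * (u_plus + u_minus) * n_minus   # avg(u) . n('-')
    u_up = u_minus if un > 0 else u_plus      # side the flow leaves
    # jump(p) * dot(u_upwind, n('-'))
    return (p_plus - p_minus) * u_up * n_minus

# Swapping the labels (and hence flipping the normal) picks the same
# physical upwind cell, so the value is unchanged on every process.
a = upwind_flux(2.0, 1.0, 3.0, 5.0, 1.0)
b = upwind_flux(1.0, 2.0, 5.0, 3.0, -1.0)
print(a, b)   # 5.0 5.0
```

In UFL this kind of selection is typically written with `conditional` and `avg`; the other label-invariant choice, as the toy check above suggests, is the centred flux theta = 1/2.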
participants (3)
- David Ham
- Lawrence Mitchell
- William Booker