Dear Will,

I think I understand your formulation a bit better now. My first question is: why would I choose theta to be any value other than 0.5? Note that theta is not an upwinding value: the preferred direction is totally arbitrary at each face and does not depend on any aspect of the system state. I therefore don't understand why a value other than 0.5 would have better accuracy or stability than 0.5 itself, and at 0.5 the discretisation is invariant under renumbering the mesh (and therefore under changing the arbitrary normal orderings).
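
To make the theta = 0.5 case concrete (using the names from your div_u further down the thread), the facet term then reduces to

jump(p)*dot(avg(u), n('-'))*dS

and since jump(p) and n('-') both change sign when the '+' and '-' labels are swapped, while avg(u) does not, the integrand is unchanged under relabelling.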

If there is indeed a good reason to introduce this asymmetry into the discretisation, then we do hit the problem that Firedrake assumes your discretisation is invariant under renumbering, and may therefore choose inconsistent numberings on different processors. To avoid this, you need to use an (arbitrary) orientation which does not depend on the mesh numbering. One way to do this is to create a vector field which is guaranteed to have a non-zero normal component on every edge:

fs = FunctionSpace(mymesh, "RT", 1)  # lowest-order Raviart-Thomas space on your mesh
f = Function(fs)
f.dat.data[:] = 1.  # RT dofs are facet normal fluxes, so f has a non-zero normal component on every facet

Then your "upwinding" code can use

conditional(ge(dot(f, n), 0.), upwind_expr, downwind_expr)

conditional and ge are UFL functions designed for this sort of thing.
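
For example, your flux term could then be written as in the sketch below (untested, and assuming the n, theta, u and p from your div_u further down the thread, together with f as above):

def div_u(u, p):
    # Give the weight theta to the side of each facet that f points out of.
    # Because f has a continuous, non-zero normal component, that choice of
    # side does not depend on which cell happens to be labelled '+', so the
    # form no longer depends on the arbitrary restriction labels.
    u_theta = conditional(ge(dot(f('+'), n('+')), 0.),
                          theta*u('+') + (1 - theta)*u('-'),
                          theta*u('-') + (1 - theta)*u('+'))
    return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
        + (jump(p)*dot(u_theta, n('-')))*dS

With theta = 0.5 this reduces to the avg(u) form above, as you would expect.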

Regards,

David 


On Fri, 27 Jan 2017 at 07:11 William Booker <scwb@leeds.ac.uk> wrote:

Dear David and Lawrence,


Derivation is attached, with the explanation for DIV_u and the projection issue.


The DIV operator is just there to avoid repeating code; it is simply the sum of my element-internal terms and the boundary flux.

Is there any way to communicate the same left and right sense to all of the processors, and to guarantee that it is consistent?


Thanks

Will


From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk>
Sent: 20 January 2017 16:47:22
To: firedrake
Subject: Re: [firedrake] Parallel issues and projection issues
 
Hi Will,

the problem does not immediately appear to be solver tolerances, but rather lies in your definition of the discrete divergence.

You have:

# Declare flux indicator function
theta = Constant(0.6)

def div_u(u, p):
    return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
        + (jump(p)*dot((u('-')*(1-theta)+u('+')*theta), n('-')))*dS


The second term is the problematic one:

dot((u('-')*(1-theta)+u('+')*theta), n('-'))

is not symmetric under interchange of "+" and "-" restrictions.  Remember that these restrictions are *arbitrary* and their value is local to a particular MPI process.
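
Concretely (keeping the jump(p) factor so that the signs work out): interchanging the restrictions flips the signs of both jump(p) and n('-'), so the facet integrand becomes

jump(p)*dot((u('+')*(1-theta) + u('-')*theta), n('-'))

which only agrees with what you wrote when theta = 0.5.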

As a result, this integral can end up "double counting": on one process you pick up a contribution from the local "n('-')" restriction, while on the neighbouring process, where in serial you would be guaranteed an "n('+')" (and therefore no contribution), you can also get an "n('-')".

Make sense?

Can you show how you derived this discrete divergence?

Cheers,

Lawrence

> On 20 Jan 2017, at 14:01, David Ham <david.ham@imperial.ac.uk> wrote:
>
> Hi Will,
>
> I'll have a quick look at the parallel issue - I suspect it's just solver tolerances. Can you please provide a maths write-up of the variational problem you are solving so we can think more easily about number 2?
>
> Regards,
>
> David
>
>
> On Fri, 20 Jan 2017 at 13:09 William Booker <scwb@leeds.ac.uk> wrote:
> Dear Firedrakers,
>
> Attached is code for a skew-symmetric FEM scheme that should conserve the discrete energy of the system.
>
> I have two issues:
>
> 1) When I run the code in serial, it conserves energy as expected, but when I run it in parallel there seems to be an error. Attached is a graph of the energy drift from runs on 1 and 2 cores.
>
> 2) To make the code skew-symmetric, I have to project the physical variables (density, velocity) into their variational derivatives. Currently this is done by extending the system from (vel, dens) to (vel, dens, vel_deriv, den_deriv) and including the projection as part of the mixed system. Obviously this is rather expensive, as it doubles the number of variables used, and I'm looking to use this approach for more complicated systems.
>
> I believe Colin mentioned to Onno that there may be a better way of including this projection step without extending the mixed system.
>
>
> Thanks
> Will
>
> --
> Dr David Ham
> Department of Mathematics
> Imperial College London


_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake
--
Dr David Ham
Department of Mathematics
Imperial College London