Re: [firedrake] Parallel issues and projection issues
Dear Will,

I think I understand your formulation a bit better now. My first question is: why would I choose theta to be any value other than 0.5? Note that theta is not an upwinding value: the preferred direction is totally arbitrary at each face and does not depend on any aspect of the system state. I therefore don't understand why a value other than 0.5 would have better accuracy or stability than 0.5 itself, and at 0.5 the discretisation is invariant under renumbering the mesh (and therefore changing the arbitrary normal orderings).

If there is indeed some good reason to introduce this asymmetry into the discretisation, then we do indeed hit the problem that Firedrake assumes your discretisation is invariant under renumbering and therefore may choose inconsistent numberings on different processors. In order to avoid this, you need to use an (arbitrary) numbering which is not dependent on the mesh numbering. One way to do this is to create a vector field which is guaranteed to have non-zero normal component on every edge:

    fs = FunctionSpace(mymesh, "RT", 1)
    f = Function(fs)
    f.dat.data[:] = 1.

Then your "upwinding" code can use

    conditional(ge(dot(f, n), 0.), upwind_expr, downwind_expr)

conditional and ge are UFL functions designed for this sort of thing.

Regards,
David

On Fri, 27 Jan 2017 at 07:11 William Booker <scwb@leeds.ac.uk> wrote:
Dear David and Lawrence,
Derivation is attached, with the explanation for DIV_u and the projection issue.
The DIV operator is just to avoid repeating code, and is simply a sum of my element internal terms and boundary flux.
Is there any way to communicate the same left and right sense to the processors and to guarantee it?
Thanks
Will

------------------------------
From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk>
Sent: 20 January 2017 16:47:22
To: firedrake
Subject: Re: [firedrake] Parallel issues and projection issues
Hi Will,
The problem does not immediately appear to be solver tolerances, but rather lies in your definition of the discrete divergence.
You have:
    # Declare flux indicator function
    theta = Constant(0.6)

    def div_u(u, p):
        return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
            + (jump(p)*dot((u('-')*(1 - theta) + u('+')*theta), n('-')))*dS
The second term is the problematic one:
    dot((u('-')*(1 - theta) + u('+')*theta), n('-'))
is not symmetric under interchange of "+" and "-" restrictions. Remember that these restrictions are *arbitrary* and their value is local to a particular MPI process.
As a result, you can end up in a situation where this integral "double counts": on one process you pick up a contribution from the local "n('-')" restriction, while on the other process, where in serial you would be guaranteed that this is an "n('+')" (and therefore does not contribute), you can also get an "n('-')".
Make sense?
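For contrast, here is a minimal hedged sketch (placeholder mesh and spaces, not the actual code from the attachment) showing that at theta = 0.5 the facet term reduces to an avg() expression, which is invariant under swapping the restrictions because jump(p) and n('-') both change sign while avg(u) does not:

    from firedrake import *

    mesh = UnitSquareMesh(4, 4)
    V = VectorFunctionSpace(mesh, "DG", 1)   # placeholder velocity space
    Q = FunctionSpace(mesh, "DG", 1)         # placeholder scalar space
    n = FacetNormal(mesh)

    x, y = SpatialCoordinate(mesh)
    u = Function(V).interpolate(as_vector([y, -x]))
    p = TestFunction(Q)
    theta = Constant(0.5)

    # The theta-weighted facet term from div_u ...
    term_theta = (jump(p)*dot(u('-')*(1 - theta) + u('+')*theta, n('-')))*dS
    # ... and the equivalent symmetric form at theta = 0.5.
    term_avg = (jump(p)*dot(avg(u), n('-')))*dS

    # Both assemble to the same vector (up to roundoff); the avg() form makes
    # the invariance under renumbering explicit.
    a1 = assemble(term_theta)
    a2 = assemble(term_avg)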
Can you show how you derived this discrete divergence?
Cheers,
Lawrence
On 20 Jan 2017, at 14:01, David Ham <david.ham@imperial.ac.uk> wrote:
Hi Will,
I'll have a quick look at the parallel issue - I suspect it's just solver tolerances. Can you please provide a maths write-up of the variational problem you are solving so we can think more easily about number 2?
Regards,
David
On Fri, 20 Jan 2017 at 13:09 William Booker <scwb@leeds.ac.uk> wrote:

Dear Firedrakers,
Attached is the code for a skew-symmetric FEM scheme that should conserve the discrete energy of the system.
I have two issues:
1) When I run the code in serial, it conserves energy as expected, but when I run it in parallel there seems to be an error. Attached is a graph of the energy drift from running on 1 and 2 cores.
2) To make the code skew-symmetric, I have to project physical variables (density, velocity) into their variational derivatives. Currently this is done by extending the system from (vel, dens) to (vel, dens, vel_deriv, den_deriv), and I include the projection as part of the mixed system. Obviously this is rather expensive as I double the number of variables used, and I'm looking to use this for more complicated systems.
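A hedged sketch of the kind of extended mixed system described here (space names and degrees are illustrative placeholders, not taken from the attached code):

    from firedrake import *

    mesh = UnitSquareMesh(8, 8)
    V = VectorFunctionSpace(mesh, "DG", 1)   # velocity-like space (placeholder)
    Q = FunctionSpace(mesh, "DG", 1)         # density-like space (placeholder)

    # Doubling the unknowns: (vel, dens) extended to
    # (vel, dens, vel_deriv, dens_deriv), with the projection onto the
    # variational derivatives included in the one mixed system.
    W = V * Q * V * Q
    w = Function(W)
    u, rho, du, drho = split(w)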
I believe Colin mentioned to Onno, there may be a better way of including this projection step without extending the mixed system.
Thanks Will
Dear David,

The choice of theta is meant to be arbitrary in the scheme, and we have had thoughts of looking into an optimal theta for implementations. So I would prefer to keep the asymmetry in the scheme.

I'll implement the conditional form in my DIV definition and get back to you about the results. Just a quick query: would the facet normal n not have to be assigned to - or + in that expression?

Also, is there a simpler way to implement the projections that I have? Going forward with a system that effectively doubles the number of variables is a bit cumbersome.

Thanks
Will
On 31/01/17 12:49, William Booker wrote:
Dear David,
The choice of theta is meant to be arbitrary in the scheme, and we have had thoughts of looking into an optimal theta for implementations.
So I would prefer to keep the asymmetry in the scheme.
So the point is that if theta = 0.5 is not optimal, then the optimal value will be *mesh dependent* (because if theta != 0.5 then the best choice is not invariant under renumbering of the mesh).
I'll implement the conditional form in my DIV definition, and get back to you about the results.
Just a quick query, would the facet normal n not have to be assigned to - or + in that expression?
Yes, you want to take an (arbitrarily) restricted side of the expression: so use: dot(f, n)('+')
Also, is there a simpler way to implement the projections that I have ? Going forward with a system that effectively doubles the amount of variables I have is a bit cumbersome.
So the physical variables are effectively diagnosed from the variational derivatives? But then the evolution equation says how the physical variable evolves according to the derivatives?

Presumably then the elimination of one set, or the other, of these variables requires that you invert some operator. Is this operator cell-local?

Cheers,
Lawrence
Having taken a closer look, I think I see what you have: given trial functions u, rho, du, drho and test functions v, sigma, dv, dsigma you have a matrix:

    [A 0 0 B]
    [0 C D 0]
    [E 0 F 0]
    [0 G 0 H]

if I read the code right, where:

    A = <u, v>
    C = <rho, sigma>
    F = <du, dv>
    H = <drho, dsigma>

and, writing th(f) = theta*dot(f, n)('+') + (1 - theta)*dot(f, n)('-'),

    B = <grad drho, v> + << jump(drho), th(u) >>
    D = <du, grad sigma> + << th(du), jump(sigma) >>

and

    E = <u, dv>
    G = <rho, dsigma>

But you'd like to solve a system only for u and rho. So writing

    P = [A 0]    Q = [0 B]    R = [E 0]    S = [F 0]
        [0 C]        [D 0]        [0 G]        [0 H]

you have:

    [P Q]
    [R S]

Block eliminating the latter row gives (I may have got this the wrong way round)

    P - Q S^{-1} R

which is a system you can solve, and only contains the u, rho variables, but includes the discrete projections as part of Q and R. Fortunately, since you're all DG and S is just a mass matrix, S^{-1} is cell local.

This kind of stuff is exactly what Thomas Gibson has been working on recently for hybridisation methods, but it's possible that it will work for you. I will let him chime in on whether everything could be done, noting that P, Q, R and S are all mixed matrices.

I don't see a way other than this, because this elimination is only weak, not strong (the relationships between u and du, and rho and drho do not hold quad-point-wise). Perhaps this is what Colin had thought?

Lawrence
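As a hedged illustration of the algebra only (plain NumPy with random placeholder blocks, nothing Firedrake-specific), the elimination P - Q S^{-1} R and the backward reconstruction of the eliminated variables look like this:

    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2 = 6, 6   # sizes of the (u, rho) block and the (du, drho) block
    P = rng.standard_normal((n1, n1)) + 10*np.eye(n1)
    Q = rng.standard_normal((n1, n2))
    R = rng.standard_normal((n2, n1))
    S = rng.standard_normal((n2, n2)) + 10*np.eye(n2)  # stands in for the (cell-local) mass matrix
    b1 = rng.standard_normal(n1)
    b2 = rng.standard_normal(n2)

    # Full system [[P, Q], [R, S]] [x1; x2] = [b1; b2].
    # Eliminate x2 = S^{-1} (b2 - R x1) to get the Schur complement system for x1.
    Sinv = np.linalg.inv(S)
    x1 = np.linalg.solve(P - Q @ Sinv @ R, b1 - Q @ Sinv @ b2)
    x2 = Sinv @ (b2 - R @ x1)   # backward reconstruction of the eliminated variables

    # Agrees with the monolithic solve.
    reference = np.linalg.solve(np.block([[P, Q], [R, S]]),
                                np.concatenate([b1, b2]))
    assert np.allclose(np.concatenate([x1, x2]), reference)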
What Lawrence is suggesting looks accurate. Using his definitions of P, Q, R, and S, the Schur complement of the block S in the matrix [P Q; R S] gives you a system for u and rho, with LHS operator (P - Q S^-1 R). The spaces are discontinuous and hence the inverse can be computed cell-locally. This is precisely what the Slate package is for in Firedrake.

It's not a problem that some of the matrices are mixed; the problem is that P - Q S^-1 R is still mixed. For Slate to solve this, we need PyOP2 to write into mixed dats (which it currently can't do). However, if you write out P - Q S^-1 R in terms of A, B, C, D, E, F and G (as defined by Lawrence) and eliminate, say, rho, then you would have a system in terms of one unknown (u). And unless I'm missing something, you can just use the computed u to recover rho. Both of these steps (forward elimination and backwards reconstruction) can be done in Slate.

I would have to see the full finite element problem before I claim Slate can do this. Further edits may be required for handling the th(f) = theta*dot(f, n)('+') + (1 - theta)*dot(f, n)('-') term. If you're interested in trying this, then we can have a more detailed discussion.

Best regards,
- Thomas
Dear Thomas,

The full problem I'm considering has additional terms, and adds another unknown for the pressure in the system. Would we be able to extend the forward elimination and backwards reconstruction to 3 variables?

I'll write up the full problem and send the compressible stratified code along with it.

Thanks
Will
Dear David and Lawrence,

I updated my DIV definition to

    # Define discrete divergence operator
    def div_u(u, p):
        return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
            + (jump(p)*dot(conditional(ge(dot(f, n)('+'), 0.),
                                       u('+')*theta,
                                       u('-')*(1 - theta)),
                           n('-')))*dS

Did I misunderstand how I was supposed to implement the direction conditional? This gave me the following error:

    File "2d_compressible_stratified_2.py", line 180, in <module>
      solve(a == L, out, solver_parameters={'ksp_rtol': 1e-14})
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/solving.py", line 122, in solve
      _solve_varproblem(*args, **kwargs)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/solving.py", line 151, in _solve_varproblem
      appctx=appctx)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/variational_solver.py", line 262, in __init__
      super(LinearVariationalSolver, self).__init__(*args, **kwargs)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/variational_solver.py", line 132, in __init__
      appctx=appctx)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/solving_utils.py", line 213, in __init__
      appctx=appctx)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/assemble.py", line 114, in allocate_matrix
      allocate_only=True)
    File "<decorator-gen-279>", line 2, in _assemble
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/utils.py", line 62, in wrapper
      return f(*args, **kwargs)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/assemble.py", line 182, in _assemble
      inverse=inverse)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/tsfc_interface.py", line 189, in compile_form
      number_map).kernels
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/PyOP2/pyop2/caching.py", line 200, in __new__
      obj = make_obj()
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/PyOP2/pyop2/caching.py", line 190, in make_obj
      obj.__init__(*args, **kwargs)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/firedrake/firedrake/tsfc_interface.py", line 110, in __init__
      tree = tsfc_compile_form(form, prefix=name, parameters=parameters)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/tsfc/driver.py", line 51, in compile_form
      kernels.append(compile_integral(integral_data, fd, prefix, parameters))
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/tsfc/driver.py", line 160, in compile_integral
      ir = fem.compile_ufl(integrand, interior_facet=interior_facet, **config)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/tsfc/fem.py", line 380, in compile_ufl
      result = map_expr_dags(translator, expressions)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/ufl/ufl/corealg/map_dag.py", line 86, in map_expr_dags
      r = handlers[v._ufl_typecode_](v, *[vcache[u] for u in v.ufl_operands])
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/tsfc/ufl2gem.py", line 86, in conditional
      indices)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/gem/gem.py", line 49, in __call__
      obj = super(NodeMeta, self).__call__(*args, **kwargs)
    File "/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/gem/gem.py", line 555, in __new__
      assert set(multiindex) <= set(expression.free_indices)
    AssertionError

Thanks
Will
Dear Will, On 01/02/17 11:17, William Booker wrote:
Dear David and Lawrence,
I updated my DIV definition to
    # Define discrete divergence operator
    def div_u(u, p):
        return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
            + (jump(p)*dot(conditional(ge(dot(f, n)('+'), 0.),
                                       u('+')*theta,
                                       u('-')*(1 - theta)),
                           n('-')))*dS
Did I misunderstand how I was supposed to implement the direction conditional?
"/usr/not-backed-up/firedrake/2016-12-14/firedrake/src/tsfc/gem/gem.py", line 555, in __new__ assert set(multiindex) <= set(expression.free_indices) AssertionError
You found a bug in the form compiler. Fixed here:

https://github.com/firedrakeproject/tsfc/pull/95

This will be merged soon, hopefully, and you can then firedrake-update to get the fix.

I note in passing that your conditional expression doesn't do what you want. You want

    u('+')*theta + u('-')*(1 - theta)

to look the same under interchange of + and -, so you should write:

    dot(conditional(ge(dot(f, n)('+'), 0),
                    u('+')*theta + u('-')*(1 - theta),
                    u('-')*theta + u('+')*(1 - theta)),
        n('-'))

That way, if the selection expression is true, you get the first expression, and if it is false (you're "looking from the other side") you get the second one. With the form compiler fix, and that change, you appear to conserve energy in parallel too.

I note in passing that your code will be very slow because you call solve in a loop over-and-over. You should refactor it to use a LinearVariationalSolver. This notebook https://nbviewer.jupyter.org/github/firedrakeproject/firedrake/blob/master/d... describes how (and why) you need to do this.

Lawrence
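Putting David's orientation field and this corrected conditional together, a self-contained hedged sketch (mesh, spaces, theta and the velocity field are placeholders, not Will's actual code):

    from firedrake import *

    mesh = UnitSquareMesh(8, 8)
    V = VectorFunctionSpace(mesh, "DG", 1)   # placeholder velocity space
    Q = FunctionSpace(mesh, "DG", 1)         # placeholder scalar space
    n = FacetNormal(mesh)
    theta = Constant(0.6)

    # Orientation field with non-zero normal component on every facet,
    # independent of the mesh (re)numbering.
    fs = FunctionSpace(mesh, "RT", 1)
    f = Function(fs)
    f.dat.data[:] = 1.

    def div_u(u, p):
        # The two branches swap when '+' and '-' are interchanged, so the
        # weighting looks the same from both sides of every facet.
        w = conditional(ge(dot(f, n)('+'), 0),
                        u('+')*theta + u('-')*(1 - theta),
                        u('-')*theta + u('+')*(1 - theta))
        return (dot(u, grad(p)))*dx(domain=p.ufl_domain()) \
            + (jump(p)*dot(w, n('-')))*dS

    # Example: assemble the resulting linear form for a placeholder velocity.
    x, y = SpatialCoordinate(mesh)
    u = Function(V).interpolate(as_vector([y, -x]))
    p = TestFunction(Q)
    b = assemble(div_u(u, p))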
On 01/02/17 12:32, Lawrence Mitchell wrote:
You found a bug in the form compiler. Fixed here:
https://github.com/firedrakeproject/tsfc/pull/95
This will be merged soon, hopefully, and you can then firedrake-update to get the fix.
Now merged. Please run firedrake-update to get the fix.

Lawrence
Dear Lawrence,

"I note in passing that your code will be very slow because you call solve in a loop over-and-over. You should refactor it to use a LinearVariationalSolver. This notebook https://nbviewer.jupyter.org/github/firedrakeproject/firedrake/blob/master/d... describes how (and why) you need to do this."

Would it be possible to use this solver's .solve() method if a part of the weak form were time dependent? I have a body force included in the weak form now. If I created a solver, and then called .solve() while using .assign to change the value of my body force, would you still expect the run time to be faster, or is this approach not suited to non-autonomous matrices?

Regards
Will
On 16/02/17 18:08, William Booker wrote:
Dear Lawrence,
"I note in passing that your code will be very slow because you call solve in a loop over-and-over. You should refactor it to use a LinearVariationalSolver. This notebook https://nbviewer.jupyter.org/github/firedrakeproject/firedrake/blob/master/d...
describes how (and why) you need to do this."
Would it be possible to use this solver's .solve() method if a part of the weak form were time dependent? I have a body force included in the weak form now.
Yes.
If I created a solver, and then called .solve() while using .assign to change the value of my body force, would you still expect the run time to be faster, or is this approach not suited to non-autonomous matrices?
It will still be faster. You will only be recomputing values, not allocating and deallocating the matrices and rederiving symbolic information (which is not free).

Lawrence
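A hedged sketch of that pattern, with an illustrative time-dependent body force held in a Constant (the form itself is a placeholder, not Will's system):

    import math
    from firedrake import *

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "DG", 1)
    u = TrialFunction(V)
    v = TestFunction(V)
    uh = Function(V)

    t = Constant(0.0)                          # time, updated in the loop
    x, y = SpatialCoordinate(mesh)
    force = sin(2*math.pi*x)*cos(t)            # illustrative time-dependent body force

    a = u*v*dx
    L = force*v*dx

    problem = LinearVariationalProblem(a, L, uh)
    solver = LinearVariationalSolver(problem,
                                     solver_parameters={'ksp_rtol': 1e-14})

    dt = 0.1
    for step in range(10):
        t.assign((step + 1)*dt)   # changes the body force in place
        solver.solve()            # reuses the cached solver: only values are
                                  # recomputed, no reallocation or recompilation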
participants (4)

- David Ham
- Gibson, Thomas
- Lawrence Mitchell
- William Booker