Re: [firedrake] FEniCS implementation works but Firedrake does not, why?
Heh, ok. It's just that it quickly alerted me to some situations where COFFEE was actually producing code with more FLOPS! --cjc On 1 June 2016 at 18:22, Ham, David A <david.ham@imperial.ac.uk> wrote:
On the contrary. Please file a bug against Coffee. If enough people complain Fabio might shut the bloody thing up! ;)
David On Wed, 1 Jun 2016 at 18:19, Cotter, Colin J <colin.cotter@imperial.ac.uk> wrote:
Hi Justin, We'd prefer you not to disable these messages since they help to notify us if there are problems with the COFFEE optimisations.
all the best --cjc
On 1 June 2016 at 18:12, Justin Chang <jychang48@gmail.com> wrote:
A couple more related questions:
1) Every time I run this (or any firedrake code) I get these notifications:
COFFEE finished in XXX seconds (flops: X -> X)
Is there a way to disable these?
2) Also, if I use quadrilateral elements, I also get this notification (I believe it happens during the errornorm() call):
Discontinuous Lagrange element requested on quadrilateral, creating DQ element.
Is there a way to also disable this?
Thanks, Justin
On Wed, Jun 1, 2016 at 11:56 AM, Justin Chang <jychang48@gmail.com> wrote:
Ah this works wonderfully, thanks Lawrence!
On Wed, Jun 1, 2016 at 4:23 AM, Lawrence Mitchell < lawrence.mitchell@imperial.ac.uk> wrote:
Hi Justin,
On 01/06/16 08:44, Justin Chang wrote:
...
#=====================
# Medium properties
#=====================
D = Function(Q)
alpha = Constant(0.0)
d1 = 1.0
d2 = 0.0001
theta = pi/6.0
co = cos(theta)
si = sin(theta)
D.interpolate(Expression(("co*co*d1+si*si*d2","-co*si*(d1-d2)","-co*si*(d1-d2)","si*si*d1+co*co*d2"),co=co,si=si,d1=d1,d2=d2))
FWIW, I recommend moving to using interpolation of UFL expressions for these kinds of initialisations (you get, amongst other things, better error messages when mistyping something). See http://firedrakeproject.org/interpolation.html#ufl-expressions
#=====================
# Volumetric source
#=====================
def f(u):
    return Constant(10)*u*u*u

def df(u):
    return Constant(30)*u*u
#=====================
# Variational form
#=====================
a = alpha*inner(u, v)*dx + inner(D*nabla_grad(u), nabla_grad(v))*dx + inner(df(u_k)*u, v)*dx
So a is a bilinear form, but it depends on the value of the previous solution.
A = assemble(a, bcs=bcs)
solver = LinearSolver(A, options_prefix="solver_")
Here you assemble A and build a LinearSolver.
#========================
# Optimization
#========================
lb = Function(V)
ub = Function(V)
ub.assign(1000)
taoSolver = PETSc.TAO().create(PETSc.COMM_WORLD)
with lb.dat.vec_ro as lb_vec, ub.dat.vec as ub_vec:
    taoSolver.setVariableBounds(lb_vec, ub_vec)
def ObjGrad(tao, petsc_x, petsc_g):
    with b.dat.vec as b_vec:
        A.M.handle.mult(petsc_x, petsc_g)
        xtHx = petsc_x.dot(petsc_g)
        xtf = petsc_x.dot(b_vec)
        petsc_g.axpy(-1.0, b_vec)
    return 0.5*xtHx - xtf

taoSolver.setObjectiveGradient(ObjGrad)
taoSolver.setType(PETSc.TAO.Type.BLMVM)
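As a sanity check on the objective/gradient callback, here is a plain NumPy sketch of the same computation (the function name and test data are mine, not from the thread): for f(x) = 0.5 x^T A x - b^T x with symmetric A, the gradient is A x - b, which is exactly what the mult/axpy pair in ObjGrad produces.

```python
import numpy as np

# NumPy stand-in for the TAO callback above (illustrative only):
# f(x) = 0.5 x^T A x - b^T x, with A symmetric, has gradient g = A x - b.
def obj_grad(A, b, x, g):
    g[:] = A @ x          # g <- A x            (the mult)
    xtHx = x @ g          # x^T A x
    xtf = x @ b           # x^T b
    g -= b                # g <- A x - b        (the axpy(-1, b))
    return 0.5 * xtHx - xtf

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T               # symmetrise so grad f = A x - b holds
b = rng.standard_normal(5)
x = rng.standard_normal(5)
g = np.empty_like(x)
f = obj_grad(A, b, x, g)  # g now holds the gradient at x
```

At the unconstrained minimiser x* = A^{-1} b the gradient vanishes, which is why driving g to zero (subject to the bounds) in TAO's BLMVM solves the linear system.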
#========================
# Picard iteration
#========================
while it < maxit and eps > tol:
    A = assemble(a, bcs=bcs)
Here you reassemble a and assign it to a new variable "A".
    # Standard solver
    if opt == 0:
        b = assemble(L, bcs=bcs)
        solver.solve(u_k1, b)
This reassembled A is never seen inside the solver, so the solver always runs with the originally assembled operator (built with the initial value of u_k).
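The effect of the frozen operator is easy to see in a toy scalar analogue (purely illustrative, not Firedrake code): linearise 10*u**3 = 1 using the derivative 30*u**2, and compare re-linearising every iteration against keeping the linearisation from the initial guess.

```python
# Toy scalar analogue of the loop above: solve 10*u**3 = 1, whose exact
# root is 0.1**(1/3).  f(u) = 10*u**3 and df(u) = 30*u**2 mirror the
# source terms in the thread.
def solve_rebuilt(u, iters):
    """Re-linearise about the current iterate each step (the effect of
    rebuilding the operator): quadratic convergence."""
    for _ in range(iters):
        u -= (10*u**3 - 1) / (30*u**2)
    return u

def solve_frozen(u, iters):
    """Keep the operator built at the initial guess (what the original
    LinearSolver setup effectively did): slow, at best linear convergence."""
    J = 30*u**2  # "assembled" once, never updated
    for _ in range(iters):
        u -= (10*u**3 - 1) / J
    return u

exact = 0.1 ** (1.0 / 3.0)
err_rebuilt = abs(solve_rebuilt(1.0, 8) - exact)   # ~machine precision
err_frozen = abs(solve_frozen(1.0, 8) - exact)     # still ~1e-2
```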
    # Optimization solver
    else:
        b = assemble(L)
        b -= rhs_opt
        for bc in bcs:
            bc.apply(b)
        with u_k1.dat.vec as u_k1_vec:
            taoSolver.solve(u_k1_vec)

I'm actually slightly surprised that this one works, but that appears to be because I don't understand how Python's scoping rules work.
The LinearSolver object isn't designed to work if your Jacobian changes, hence the problem.
I recommend you use a LinearVariationalSolver, but you need to be careful, since by default it doesn't rebuild the Jacobian at every solve.
If I do:
problem = LinearVariationalProblem(a, L, u_k1, bcs=bcs,
                                   constant_jacobian=False)
solver = LinearVariationalSolver(problem, options_prefix="solver_")

if opt == 0:
    solver.solve()
Then I get:
python foo.py 50 50 0
Error norm: 2.193e-01
Error norm: 1.705e-02
Error norm: 5.105e-04
Error norm: 4.980e-07
Error norm: 5.332e-13
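For what it's worth, those norms are consistent with quadratic convergence. A quick check using the standard observed-order estimate p_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1}) (the formula is a textbook one, not something from the thread):

```python
import math

# Error norms as printed above
errors = [2.193e-01, 1.705e-02, 5.105e-04, 4.980e-07, 5.332e-13]

# Observed order of convergence between consecutive triples:
#   p_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1})
orders = [
    math.log(errors[k + 1] / errors[k]) / math.log(errors[k] / errors[k - 1])
    for k in range(1, len(errors) - 1)
]
# The later estimates settle near 2, i.e. quadratic convergence.
```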
We could add the option to rebuild the Jacobian inside a LinearSolver (maybe if you've called something like solver.reset_operator() first), but it's not available right now.
I've attached a modified version that gets quadratic convergence in both cases. I've also modified the taoSolver setup slightly so that you reuse the same storage for the operator and RHS (rather than building a new one every time).
Cheers,
Lawrence
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake
-- http://www.imperial.ac.uk/people/colin.cotter
www.cambridge.org/9781107663916