On the contrary. Please file a bug against COFFEE. If enough people complain, Fabio might shut the bloody thing up! ;)
David
Hi Justin,

We'd prefer you not to disable these messages since they help to notify us if there are problems with the COFFEE optimisations.

All the best,
--cjc
On 1 June 2016 at 18:12, Justin Chang <jychang48@gmail.com> wrote:
A couple more related questions:

1) Every time I run this (or any firedrake code) I get these notifications:

COFFEE finished in XXX seconds (flops: X -> X)

Is there a way to disable these?

2) Also, if I use quadrilateral elements, I also get this notification (I believe it happens during the errornorm() call):

Discontinuous Lagrange element requested on quadrilateral, creating DQ element.

Is there a way to also disable this?

Thanks,
Justin
On Wed, Jun 1, 2016 at 11:56 AM, Justin Chang <jychang48@gmail.com> wrote:
Ah this works wonderfully, thanks Lawrence!
On Wed, Jun 1, 2016 at 4:23 AM, Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk> wrote:
Hi Justin,
On 01/06/16 08:44, Justin Chang wrote:
...
> #=====================
> # Medium properties
> #=====================
> D = Function(Q)
> alpha = Constant(0.0)
> d1 = 1.0
> d2 = 0.0001
> theta = pi/6.0
> co = cos(theta)
> si = sin(theta)
> D.interpolate(Expression(("co*co*d1+si*si*d2","-co*si*(d1-d2)","-co*si*(d1-d2)","si*si*d1+co*co*d2"),co=co,si=si,d1=d1,d2=d2))
FWIW, I recommend moving to interpolation of UFL expressions for these
kinds of initialisations (you get, amongst other things, better error
messages when mistyping something). See
http://firedrakeproject.org/interpolation.html#ufl-expressions
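For example, a minimal sketch of the same initialisation as a UFL
expression, assuming Q is the 2x2 TensorFunctionSpace your four
Expression components suggest, and theta, d1, d2 as in your script:

import math

co, si = math.cos(theta), math.sin(theta)
D = Function(Q)
# as_tensor builds the 2x2 diffusion tensor directly in UFL, so a typo
# fails with a readable UFL error instead of a C compilation error.
D.interpolate(as_tensor([[co*co*d1 + si*si*d2, -co*si*(d1 - d2)],
                         [-co*si*(d1 - d2), si*si*d1 + co*co*d2]]))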
> #=====================
> # Volumetric source
> #=====================
> def f(u):
> return Constant(10)*u*u*u
>
> def df(u):
> return Constant(30)*u*u
>
> #=====================
> # Variational form
> #=====================
> a = alpha * inner(u,v) * dx + inner(D*nabla_grad(u), nabla_grad(v))*dx + inner(df(u_k) * u,v)*dx
So a is a bilinear form, but it depends on the value of the previous
solution.
> A = assemble(a,bcs=bcs)
> solver = LinearSolver(A,options_prefix="solver_")
Here you assemble A and build a LinearSolver.
> #========================
> # Optimization
> #========================
> lb = Function(V)
> ub = Function(V)
> ub.assign(1000)
> taoSolver = PETSc.TAO().create(PETSc.COMM_WORLD)
> with lb.dat.vec_ro as lb_vec, ub.dat.vec as ub_vec:
> taoSolver.setVariableBounds(lb_vec,ub_vec)
>
> def ObjGrad(tao, petsc_x, petsc_g):
> with b.dat.vec as b_vec:
> A.M.handle.mult(petsc_x,petsc_g)
> xtHx = petsc_x.dot(petsc_g)
> xtf = petsc_x.dot(b_vec)
> petsc_g.axpy(-1.0,b_vec)
> return 0.5*xtHx - xtf
>
> taoSolver.setObjectiveGradient(ObjGrad)
> taoSolver.setType(PETSc.TAO.Type.BLMVM)
>
> #========================
> # Picard iteration
> #========================
> while it < maxit and eps > tol:
> A = assemble(a,bcs=bcs)
Here you reassemble a and assign it to a new variable "A".
> # Standard solver
> if opt == 0:
> b = assemble(L,bcs=bcs)
> solver.solve(u_k1,b)
This new A is never seen inside the solver, so the solver always runs
with the originally assembled A (built with the initial value of u_k).
> # Optimization solver
> else:
> b = assemble(L)
> b -= rhs_opt
> for bc in bcs:
> bc.apply(b)
> with u_k1.dat.vec as u_k1_vec:
> taoSolver.solve(u_k1_vec)
I'm actually slightly surprised that this one works, but that appears
to be because I don't understand how Python's scoping rules work.
The LinearSolver object isn't designed to work if your jacobian
changes, hence the problem.
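To illustrate the rebinding issue (a toy sketch, nothing
Firedrake-specific): the solver holds a reference to the object A was
bound to when it was built, and rebinding the name A later doesn't
change what the solver sees.

class Solver:
    def __init__(self, op):
        self.op = op        # keeps a reference to the object passed in

op = ["original A"]         # stands in for the first assembled matrix
solver = Solver(op)
op = ["reassembled A"]      # rebinds the *name* only
print(solver.op)            # still ['original A']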
I recommend you use a LinearVariationalSolver, but you need to be
careful, since by default it doesn't rebuild the jacobian every solve.
If I do:
problem = LinearVariationalProblem(a, L, u_k1, bcs=bcs,
                                   constant_jacobian=False)
solver = LinearVariationalSolver(problem, options_prefix="solver_")

if opt == 0:
    solver.solve()
Then I get:
python foo.py 50 50 0
Error norm: 2.193e-01
Error norm: 1.705e-02
Error norm: 5.105e-04
Error norm: 4.980e-07
Error norm: 5.332e-13
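For completeness, here's roughly how that solver sits in your Picard
loop (a sketch; maxit, tol, u_k and u_k1 as in your script):

it, eps = 0, 1.0
while it < maxit and eps > tol:
    solver.solve()               # constant_jacobian=False, so a is reassembled
    eps = errornorm(u_k1, u_k)   # change relative to the previous iterate
    u_k.assign(u_k1)             # new linearisation point for a and L
    it += 1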
We could add the option to rebuild the jacobian inside a LinearSolver
(maybe if you've called something like solver.reset_operator() first),
but it's not available right now.
I've attached a modified version that gets quadratic convergence in
both cases. I've also modified the taoSolver setup slightly so that
you reuse the same storage for the operator and RHS (rather than
building a new one every time).
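The pattern is roughly this (a sketch of the general idea, using
assemble's tensor argument; A, b, a, L and bcs as in your script):

A = assemble(a, bcs=bcs)   # allocate once, before the loop
b = assemble(L)
while it < maxit and eps > tol:
    assemble(a, tensor=A, bcs=bcs)   # reassemble in place
    assemble(L, tensor=b)
    # ... solve as before ...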
Cheers,
Lawrence