

firedrake

firedrake@imperial.ac.uk

June 2016

  • 18 participants
  • 53 discussions
Re: [firedrake] FEniCS implementation works but Firedrake does not, why?
by Colin Cotter, 01 Jun '16
Hi Justin,

We'd prefer you not to disable these messages, since they help to notify us if there are problems with the COFFEE optimisations.

all the best
--cjc

On 1 June 2016 at 18:12, Justin Chang <jychang48(a)gmail.com> wrote:
> A couple more related questions:
>
> 1) Every time I run this (or any firedrake code) I get these notifications:
>
>     COFFEE finished in XXX seconds (flops: X -> X)
>
> Is there a way to disable these?
>
> 2) Also, if I use quadrilateral elements, I also get this notification (I
> believe it happens during the errornorm() call):
>
>     Discontinuous Lagrange element requested on quadrilateral, creating DQ element.
>
> Is there a way to also disable this?
>
> Thanks,
> Justin
>
> On Wed, Jun 1, 2016 at 11:56 AM, Justin Chang <jychang48(a)gmail.com> wrote:
>> Ah, this works wonderfully, thanks Lawrence!
>>
>> On Wed, Jun 1, 2016 at 4:23 AM, Lawrence Mitchell
>> <lawrence.mitchell(a)imperial.ac.uk> wrote:
>>> Hi Justin,
>>>
>>> On 01/06/16 08:44, Justin Chang wrote:
>>>
>>> ...
>>>
>>> > #=====================
>>> > # Medium properties
>>> > #=====================
>>> > D = Function(Q)
>>> > alpha = Constant(0.0)
>>> > d1 = 1.0
>>> > d2 = 0.0001
>>> > theta = pi/6.0
>>> > co = cos(theta)
>>> > si = sin(theta)
>>> > D.interpolate(Expression(("co*co*d1+si*si*d2", "-co*si*(d1-d2)",
>>> >                           "-co*si*(d1-d2)", "si*si*d1+co*co*d2"),
>>> >                          co=co, si=si, d1=d1, d2=d2))
>>>
>>> FWIW, I recommend moving to using interpolation of UFL expressions for
>>> these kinds of initialisations (you get, amongst other things, better
>>> error messages when mistyping something).
>>> See http://firedrakeproject.org/interpolation.html#ufl-expressions
>>>
>>> > #=====================
>>> > # Volumetric source
>>> > #=====================
>>> > def f(u):
>>> >     return Constant(10)*u*u*u
>>> >
>>> > def df(u):
>>> >     return Constant(30)*u*u
>>> >
>>> > #=====================
>>> > # Variational form
>>> > #=====================
>>> > a = alpha*inner(u,v)*dx + inner(D*nabla_grad(u), nabla_grad(v))*dx \
>>> >     + inner(df(u_k)*u, v)*dx
>>>
>>> So a is a bilinear form, but it depends on the value of the previous
>>> solution.
>>>
>>> > A = assemble(a, bcs=bcs)
>>> > solver = LinearSolver(A, options_prefix="solver_")
>>>
>>> Here you assemble A and build a LinearSolver.
>>>
>>> > #========================
>>> > # Optimization
>>> > #========================
>>> > lb = Function(V)
>>> > ub = Function(V)
>>> > ub.assign(1000)
>>> > taoSolver = PETSc.TAO().create(PETSc.COMM_WORLD)
>>> > with lb.dat.vec_ro as lb_vec, ub.dat.vec as ub_vec:
>>> >     taoSolver.setVariableBounds(lb_vec, ub_vec)
>>> >
>>> > def ObjGrad(tao, petsc_x, petsc_g):
>>> >     with b.dat.vec as b_vec:
>>> >         A.M.handle.mult(petsc_x, petsc_g)
>>> >         xtHx = petsc_x.dot(petsc_g)
>>> >         xtf = petsc_x.dot(b_vec)
>>> >         petsc_g.axpy(-1.0, b_vec)
>>> >         return 0.5*xtHx - xtf
>>> >
>>> > taoSolver.setObjectiveGradient(ObjGrad)
>>> > taoSolver.setType(PETSc.TAO.Type.BLMVM)
>>> >
>>> > #========================
>>> > # Picard iteration
>>> > #========================
>>> > while it < maxit and eps > tol:
>>> >     A = assemble(a, bcs=bcs)
>>>
>>> Here you reassemble a and assign it to a new variable "A".
>>>
>>> >     # Standard solver
>>> >     if opt == 0:
>>> >         b = assemble(L, bcs=bcs)
>>> >         solver.solve(u_k1, b)
>>>
>>> This is never seen inside the solver, so this solver always runs with
>>> the originally assembled A (with the initial value for u_k).
>>> >     # Optimization solver
>>> >     else:
>>> >         b = assemble(L)
>>> >         b -= rhs_opt
>>> >         for bc in bcs:
>>> >             bc.apply(b)
>>> >         with u_k1.dat.vec as u_k1_vec:
>>> >             taoSolver.solve(u_k1_vec)
>>>
>>> I'm actually slightly surprised that this one works, but that appears
>>> to be because I don't understand how python's scoping rules work.
>>>
>>> The LinearSolver object isn't designed to work if your jacobian
>>> changes, hence the problem.
>>>
>>> I recommend you use a LinearVariationalSolver, but you need to be
>>> careful, since by default it doesn't rebuild the jacobian every solve.
>>>
>>> If I do:
>>>
>>>     problem = LinearVariationalProblem(a, L, u_k1, bcs=bcs,
>>>                                        constant_jacobian=False)
>>>     solver = LinearVariationalSolver(problem, options_prefix="solver_")
>>>
>>>     if opt == 0:
>>>         solver.solve()
>>>
>>> Then I get:
>>>
>>>     python foo.py 50 50 0
>>>
>>>     Error norm: 2.193e-01
>>>     Error norm: 1.705e-02
>>>     Error norm: 5.105e-04
>>>     Error norm: 4.980e-07
>>>     Error norm: 5.332e-13
>>>
>>> We could add the option to rebuild the jacobian inside a LinearSolver
>>> (maybe if you've called something like solver.reset_operator() first),
>>> but it's not available right now.
>>>
>>> I've attached a modified version that gets quadratic convergence in
>>> both cases. I've also modified the taoSolver setup slightly so that
>>> you reuse the same storage for the operator and RHS (rather than
>>> building a new one every time).
>>>
>>> Cheers,
>>>
>>> Lawrence
>>>
>>> _______________________________________________
>>> firedrake mailing list
>>> firedrake(a)imperial.ac.uk
>>> https://mailman.ic.ac.uk/mailman/listinfo/firedrake

--
http://www.imperial.ac.uk/people/colin.cotter
www.cambridge.org/9781107663916
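The stale-operator failure Lawrence diagnoses (the LinearSolver keeps using the originally assembled A while u_k moves on) can be reproduced outside Firedrake. The sketch below is a hypothetical scalar toy problem, not the list's diffusion code: Newton iteration on F(u) = u^3 - 2, comparing a Jacobian refreshed each iteration (as with constant_jacobian=False) against one frozen at the initial guess.

```python
# Toy illustration (not the thread's code): Newton on F(u) = u**3 - 2.
# A frozen Jacobian mimics a LinearSolver that reuses the operator
# assembled at the initial guess; refreshing it each iteration mimics
# LinearVariationalProblem(..., constant_jacobian=False).

def newton(u, rebuild_jacobian, maxit=50, tol=1e-12):
    J = 3 * u * u              # "assembled" Jacobian at the initial guess
    for _ in range(maxit):
        F = u ** 3 - 2.0
        if abs(F) < tol or abs(u) > 1e6:   # converged, or clearly diverging
            break
        if rebuild_jacobian:
            J = 3 * u * u      # refresh the operator at the current iterate
        u = u - F / J
    return u

fresh = newton(0.5, rebuild_jacobian=True)   # converges to 2**(1/3)
stale = newton(0.5, rebuild_jacobian=False)  # operator never updated: blows up
print(fresh, stale)
```

With the Jacobian rebuilt, the iterates converge quadratically to the root; with the initial-guess Jacobian kept, the iteration diverges, much like the growing "Error norm" sequence reported further down this thread.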
Re: [firedrake] FEniCS implementation works but Firedrake does not, why?
by David Ham, 01 Jun '16
I confirm I can reproduce the firedrake results. I'll try to have a poke to see if I can work out what is going wrong.

David

On Wed, 1 Jun 2016 at 09:52 Mitchell, Lawrence
<lawrence.mitchell(a)imperial.ac.uk> wrote:
>
> On 01/06/16 09:44, Fabio Luporini wrote:
> > Hi Justin,
> >
> > since you're talking about possible issues with
> > optimised/non-optimised Firedrake, I checked if this had something to
> > do with COFFEE, but it really seems it doesn't, so I'm not sure what's
> > going on.
>
> Different type of optimisation!
>
> Lawrence
Re: [firedrake] FEniCS implementation works but Firedrake does not, why?
by Fabio Luporini, 01 Jun '16
Hi Justin,

since you're talking about possible issues with optimised/non-optimised Firedrake, I checked if this had something to do with COFFEE, but it really seems it doesn't, so I'm not sure what's going on. Maybe the others can help.

--
Fabio

2016-06-01 9:44 GMT+02:00 Justin Chang <jychang48(a)gmail.com>:
> Hi all,
>
> So we have been attempting to convert a FEniCS implementation of a
> semi-linear diffusion code into Firedrake, mainly because we want to
> employ PETSc/TAO's optimization routines, which FEniCS does not let us
> do. However, the Firedrake implementation is not working, namely our
> consistent Newton-Raphson approach.
>
> Attached are the FEniCS code (P3_Galerkin_NR.py) and the Firedrake code
> (NR_Nonlinear_poisson.py). The FEniCS code may be run simply as
> "python P3_Galerkin_NR.py", but the Firedrake code must be run as:
>
>     python NR_Nonlinear_poisson.py 50 50 0
>
> For the FEniCS code, this is my solver output:
>
>     >> python P3_Galerkin_NR.py
>     Calling FFC just-in-time (JIT) compiler, this may take some time.
>     Calling FFC just-in-time (JIT) compiler, this may take some time.
>     Solving linear variational problem.
>     iter=1: norm=1
>     Solving linear variational problem.
>     iter=2: norm=0.0731224
>     Solving linear variational problem.
>     iter=3: norm=0.00217701
>     Solving linear variational problem.
>     iter=4: norm=1.64398e-06
>     Solving linear variational problem.
>     iter=5: norm=8.89289e-13
>
> but for Firedrake, I have this:
>
>     >> python NR_Nonlinear_poisson.py 50 50 0
>     COFFEE finished in 0.00142097 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00103188 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00173187 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00173306 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.0015831 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.000972033 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00118279 seconds (flops: 2 -> 2)
>     COFFEE finished in 0.000946999 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00165105 seconds (flops: 1 -> 1)
>     Error norm: 2.193e-01
>     Error norm: 4.697e-02
>     Error norm: 4.830e-02
>     Error norm: 8.437e-02
>     Error norm: 2.740e-01
>     Error norm: 2.872e+00
>     Error norm: 8.993e+02
>     Error norm: 1.912e+10
>     Error norm: 1.919e+32
>     Error norm: 1.981e+98
>     Traceback (most recent call last):
>       File "NR_Nonlinear_poisson.py", line 119, in <module>
>         solver.solve(u_k1,b)
>       File "/home/justin/Software/firedrake/src/firedrake/firedrake/linear_solver.py", line 153, in solve
>         raise RuntimeError("LinearSolver failed to converge after %d iterations with reason: %s", self.ksp.getIterationNumber(), solving_utils.KSPReasons[r])
>     RuntimeError: ('LinearSolver failed to converge after %d iterations with reason: %s', 0, 'DIVERGED_NANORINF')
>
> I strongly believe there are no inconsistencies between our FEniCS and
> Firedrake codes (in terms of what we want to solve), but for some reason
> the latter blows up.
> However, if I run the optimization version of the Firedrake code, my
> solver converges quadratically as expected:
>
>     >> python NR_Nonlinear_poisson.py 50 50 1
>     COFFEE finished in 0.00143099 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00164199 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00169611 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00169706 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.00153899 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.000952005 seconds (flops: 0 -> 0)
>     COFFEE finished in 0.000900984 seconds (flops: 1 -> 1)
>     COFFEE finished in 0.000989914 seconds (flops: 0 -> 0)
>     Error norm: 2.193e-01
>     Error norm: 1.626e-02
>     Error norm: 4.870e-04
>     Error norm: 9.332e-07
>     Error norm: 0.000e+00
>
> When I compare the pvd plots between FEniCS and Firedrake (with
> optimization), Firedrake seems correct qualitatively (i.e., the negative
> concentrations are gone). But I am confused why the non-optimization
> implementation of Firedrake does not converge whereas the FEniCS one
> does.
>
> Any thoughts?
>
> Thanks!
> Justin
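For context on the optimization path that does converge: for a symmetric positive definite A, solving Ax = b is equivalent to minimizing f(x) = 0.5*x'Ax - b'x, which is exactly what the ObjGrad callback in this thread computes for TAO's BLMVM. The sketch below is a minimal stand-in using SciPy's L-BFGS-B in place of PETSc/TAO; the matrix A, vector b, and lower bound of zero are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2x2 SPD system standing in for the assembled operator and RHS.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, -2.0])

def obj_grad(x):
    # Same quantities as the thread's ObjGrad: f = 0.5*x'Ax - b'x, g = Ax - b.
    Ax = A @ x
    return 0.5 * x @ Ax - b @ x, Ax - b

# Bound-constrained quasi-Newton solve, x >= 0 (analogue of lb/ub in TAO).
res = minimize(obj_grad, np.zeros(2), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None), (0.0, None)])
print(res.x)  # the unconstrained solution A^{-1} b has a negative component,
              # so the bound becomes active in the second entry
```

This is why the bounded TAO solve removes the negative concentrations mentioned above: components that would go negative in the plain linear solve are clamped at the bound instead.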

Powered by HyperKitty version 1.3.12.