Re: [firedrake] FEniCS implementation works but Firedrake does not, why?
Hi Justin,

Since you're talking about possible issues with optimised/non-optimised Firedrake, I checked if this had something to do with COFFEE, but it really seems it doesn't, so I'm not sure what's going on. Maybe the others can help.

-- Fabio

2016-06-01 9:44 GMT+02:00 Justin Chang <jychang48@gmail.com>:
Hi all,
We have been attempting to convert a FEniCS implementation of a semi-linear diffusion code into Firedrake, mainly because we want to employ PETSc/TAO's optimization routines, which FEniCS does not let us do. However, the Firedrake implementation, specifically our consistent Newton-Raphson approach, is not working.
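[For context, a minimal sketch of such a hand-rolled consistent Newton-Raphson loop in Firedrake is shown below; the residual (a semi-linear diffusion model problem), the source term f, and the boundary condition are placeholder assumptions, not the contents of the attached scripts.]

    # Sketch of a consistent Newton-Raphson loop in Firedrake; the residual,
    # forcing f, and boundary condition are assumptions, not the attached code.
    from firedrake import *

    mesh = UnitSquareMesh(50, 50)
    V = FunctionSpace(mesh, "CG", 1)
    u_k = Function(V)                 # current Newton iterate (zero initial guess)
    du, v = TrialFunction(V), TestFunction(V)

    f = Constant(1.0)                 # placeholder forcing
    F = inner(grad(u_k), grad(v))*dx + u_k**2*v*dx - f*v*dx  # nonlinear residual
    J = derivative(F, u_k, du)        # consistent Jacobian of F about u_k

    bc = DirichletBC(V, 0.0, "on_boundary")
    update = Function(V)
    for it in range(25):
        solve(J == -F, update, bcs=bc)    # Newton step: J(u_k) du = -F(u_k)
        u_k.assign(u_k + update)
        nrm = norm(update)
        print("iter=%d: norm=%g" % (it + 1, nrm))
        if nrm < 1e-10:
            break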
Attached are the FEniCS code (P3_Galerkin_NR.py) and the Firedrake code (NR_Nonlinear_poisson.py). For FEniCS you may just run the code as "python P3_Galerkin_NR.py", but the Firedrake code must be run as:
python NR_Nonlinear_poisson.py 50 50 0
For the FEniCS code, this is my solver output:
python P3_Galerkin_NR.py
Calling FFC just-in-time (JIT) compiler, this may take some time.
Calling FFC just-in-time (JIT) compiler, this may take some time.
Solving linear variational problem.
iter=1: norm=1
Solving linear variational problem.
iter=2: norm=0.0731224
Solving linear variational problem.
iter=3: norm=0.00217701
Solving linear variational problem.
iter=4: norm=1.64398e-06
Solving linear variational problem.
iter=5: norm=8.89289e-13
but for Firedrake, I have this:
python NR_Nonlinear_poisson.py 50 50 0
COFFEE finished in 0.00142097 seconds (flops: 0 -> 0)
COFFEE finished in 0.00103188 seconds (flops: 0 -> 0)
COFFEE finished in 0.00173187 seconds (flops: 0 -> 0)
COFFEE finished in 0.00173306 seconds (flops: 0 -> 0)
COFFEE finished in 0.0015831 seconds (flops: 0 -> 0)
COFFEE finished in 0.000972033 seconds (flops: 0 -> 0)
COFFEE finished in 0.00118279 seconds (flops: 2 -> 2)
COFFEE finished in 0.000946999 seconds (flops: 0 -> 0)
COFFEE finished in 0.00165105 seconds (flops: 1 -> 1)
Error norm: 2.193e-01
Error norm: 4.697e-02
Error norm: 4.830e-02
Error norm: 8.437e-02
Error norm: 2.740e-01
Error norm: 2.872e+00
Error norm: 8.993e+02
Error norm: 1.912e+10
Error norm: 1.919e+32
Error norm: 1.981e+98
Traceback (most recent call last):
  File "NR_Nonlinear_poisson.py", line 119, in <module>
    solver.solve(u_k1,b)
  File "/home/justin/Software/firedrake/src/firedrake/firedrake/linear_solver.py", line 153, in solve
    raise RuntimeError("LinearSolver failed to converge after %d iterations with reason: %s", self.ksp.getIterationNumber(), solving_utils.KSPReasons[r])
RuntimeError: ('LinearSolver failed to converge after %d iterations with reason: %s', 0, 'DIVERGED_NANORINF')
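[For reference, the failing call in that traceback corresponds to a hand-assembled linear solve along the following lines; this is only a sketch in which u_k1, b, and solver are the names from the traceback and the forms are placeholder assumptions. A DIVERGED_NANORINF reason after 0 iterations means PETSc encountered NaN or Inf before the Krylov iteration started, consistent with the blow-up in the error norms above.]

    # Sketch of the LinearSolver usage implied by the traceback; only the
    # names u_k1, b and solver come from the traceback, the forms are assumed.
    from firedrake import *

    mesh = UnitSquareMesh(50, 50)
    V = FunctionSpace(mesh, "CG", 1)
    u_k1 = Function(V)
    du, v = TrialFunction(V), TestFunction(V)

    a = inner(grad(du), grad(v))*dx         # placeholder Jacobian form
    L = Constant(1.0)*v*dx                  # placeholder right-hand side
    bc = DirichletBC(V, 0.0, "on_boundary")

    A = assemble(a, bcs=bc)                 # boundary conditions recorded on A
    b = assemble(L)
    solver = LinearSolver(A)
    solver.solve(u_k1, b)                   # fails with DIVERGED_NANORINF if b carries NaN/Inf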
I strongly believe there are no inconsistencies between our FEniCS and Firedrake codes (in terms of what we want to solve), but for some reason the latter blows up. However, if I run the optimization version of the Firedrake code, the solver converges quadratically, as expected:
python NR_Nonlinear_poisson.py 50 50 1
COFFEE finished in 0.00143099 seconds (flops: 0 -> 0)
COFFEE finished in 0.00164199 seconds (flops: 0 -> 0)
COFFEE finished in 0.00169611 seconds (flops: 0 -> 0)
COFFEE finished in 0.00169706 seconds (flops: 0 -> 0)
COFFEE finished in 0.00153899 seconds (flops: 0 -> 0)
COFFEE finished in 0.000952005 seconds (flops: 0 -> 0)
COFFEE finished in 0.000900984 seconds (flops: 1 -> 1)
COFFEE finished in 0.000989914 seconds (flops: 0 -> 0)
Error norm: 2.193e-01
Error norm: 1.626e-02
Error norm: 4.870e-04
Error norm: 9.332e-07
Error norm: 0.000e+00
When I compare the .pvd plots between the FEniCS run and the Firedrake run (with optimization), Firedrake seems qualitatively correct (i.e., the negative concentrations are gone). But I am confused as to why the non-optimization implementation of Firedrake does not converge whereas the FEniCS one does.
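[For reference, a bound-constrained TAO solve through petsc4py, the kind of PETSc/TAO optimization routine mentioned above as the motivation for the port, might look like the following sketch; the toy quadratic objective and every name in it are hypothetical placeholders, not from the attached scripts. The lower bound is what would keep concentrations non-negative.]

    # Sketch of a bound-constrained TAO solve on the toy objective
    # f(x) = 0.5*||x - 1||^2 subject to x >= 0; all names are placeholders.
    from petsc4py import PETSc
    import numpy as np

    n = 10
    x = PETSc.Vec().createSeq(n)            # solution vector, initialised to zero
    lb = x.duplicate(); lb.set(0.0)         # lower bound enforces non-negativity
    ub = x.duplicate(); ub.set(PETSc.INFINITY)  # no upper bound

    def objgrad(tao, x, g):
        # Fill g with the gradient x - 1 and return the objective value.
        xa = x.getArray(readonly=True)
        g.setArray(xa - 1.0)
        return 0.5 * np.sum((xa - 1.0) ** 2)

    tao = PETSc.TAO().create()
    tao.setType(PETSc.TAO.Type.BLMVM)       # bound-constrained quasi-Newton method
    tao.setObjectiveGradient(objgrad)
    tao.setVariableBounds((lb, ub))
    tao.solve(x)
    print(x.getArray())                     # converges to the vector of ones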
Any thoughts?
Thanks! Justin
On 01/06/16 09:44, Fabio Luporini wrote:
> Hi Justin,
>
> since you're talking about possible issues with optimised/non-optimised Firedrake, I checked if this had something to do with COFFEE, but it really seems it doesn't, so I'm not sure what's going on.
Different type of optimisation!

Lawrence