On 7 Nov 2018, at 09:56, Matthew Knepley <knepley@gmail.com> wrote:
On Wed, Nov 7, 2018 at 4:02 AM Lawrence Mitchell <wence@gmx.li> wrote:
On 7 Nov 2018, at 08:30, Justin Chang <jychang48@gmail.com> wrote:
Hi everyone,
I am attempting to solve the electrolyte domain of a li-ion battery model. The concentration problem is somewhat like a nonlinear diffusion/heat equation, but it gets messy because it has a div(grad(ln(c))) term. So I am looking at something like this:
-div(k1*grad(c)) + div(k2*grad(ln(c))) = 0
Which we can rewrite, using grad(ln(c)) = grad(c)/c, as:
-div((k1 - k2/c)*grad(c)) = 0
If k2 is small enough, the Newton solver should converge with little problem. The problem I am having is with the linear solve. If I discretize the above using CG1, no solver works so long as k2 >> 0. With DG1, I can only use direct solvers like LU to solve the resulting linear system of equations. I can't seem to find a good iterative solver/PC for this: HYPRE gives me a DIVERGED_NANORINF, and jacobi/bjacobi do not converge at all.
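For reference, the option sets I'm comparing look roughly like this (standard PETSc/Firedrake option names; the exact parameters in the attached code may differ):

    lu = {"ksp_type": "preonly", "pc_type": "lu"}        # works (DG1 only)
    hypre = {"ksp_type": "gmres", "pc_type": "hypre",
             "pc_hypre_type": "boomeramg"}               # DIVERGED_NANORINF
    jacobi = {"ksp_type": "gmres", "pc_type": "jacobi"}  # no convergence

    solve(F == 0, u_, solver_parameters=lu)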
Not sure about preconditioning immediately. But how are you ensuring that you never have c=0 in your solves?
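E.g., an untested sketch of one way to enforce c > 0 during the solve, using PETSc's reduced-space VI Newton through Firedrake's bounds support (all names here assumed):

    u_.assign(Constant(1.0))                   # strictly positive initial guess
    lb = Function(V).assign(Constant(1.0e-8))  # pointwise lower bound on c
    ub = Function(V).assign(Constant(1.0e8))
    problem = NonlinearVariationalProblem(F, u_)
    solver = NonlinearVariationalSolver(problem,
                 solver_parameters={"snes_type": "vinewtonrsls"})
    solver.solve(bounds=(lb, ub))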
I've attached the code for you guys to play around with. I don't know what a good PC for grad(ln(c)) is; my guess is that I'm going to need to provide an assembled preconditioner which approximates the inverse of the k2/c*grad(c) term, but it isn't clear to me how I can provide an explicit PC to the nonlinear solver without resorting to hacking at petsc4py.
You can use an ExplicitSchurPC (see e.g. usage here: https://firedrakeproject.org/demos/geometric_multigrid.py.html) along with pc type python. No need to write petsc4py code directly.
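Untested sketch of the pattern (in current Firedrake the ready-made base class for this is AuxiliaryOperatorPC; the class and option names below are my assumptions, not from Justin's code):

    from firedrake import *

    class FrozenDiffusionPC(AuxiliaryOperatorPC):
        # Precondition with the diffusion operator whose coefficient is
        # frozen at k1, i.e. dropping the awkward k2/c contribution.
        def form(self, pc, test, trial):
            a = Constant(100)*dot(grad(test), grad(trial))*dx
            return (a, None)

    params = {"ksp_type": "gmres",
              "pc_type": "python",
              "pc_python_type": "__main__.FrozenDiffusionPC",
              "aux_pc_type": "hypre"}  # inner PC on the assembled auxiliary operator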
I must be reading the code wrong because I do not see the coefficient. For the system in the file, Hypre should work fine. More broadly, does the coarse grid solve converge? If so, can you show the smoother iterates for a 2-level problem?
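Something like this two-level option set would show them (standard PETSc option names, untested here; the mesh needs to come from a MeshHierarchy so geometric multigrid is available):

    twolevel = {"ksp_type": "gmres",
                "pc_type": "mg",
                "pc_mg_levels": 2,
                "mg_coarse_ksp_type": "preonly",
                "mg_coarse_pc_type": "lu",
                "mg_levels_ksp_type": "chebyshev",
                "mg_levels_pc_type": "jacobi",
                # print the residual after each smoother iterate
                "mg_levels_ksp_monitor_true_residual": None}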
If we ignore the IP scheme and just use the C0 scheme Justin posted, here's the residual, with u_ the state to be solved for:

    F = dot(grad(v), D(u_)*grad(u_))*dx - v*f*dx - g*v*ds(3) - g*v*ds(4)

and D is defined as:

    def D(c):
        return Constant(100) - Constant(1.0)/c

which gives you the coefficient variation. I worry about this, given that the linearisation about u* looks like:

    grad(v) . (grad(u)*(k1 - k2/u*) + grad(u*)*k2*u/u*^2)

So if u* is zero, this blows up, no?

Cheers,

Lawrence
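P.S. If you want to check the linearisation, UFL will compute it symbolically. A minimal self-contained sketch (mesh and function space made up for illustration):

    from firedrake import *

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "CG", 1)
    u_ = Function(V).assign(Constant(1.0))  # keep u* away from zero
    v = TestFunction(V)

    def D(c):
        return Constant(100) - Constant(1.0)/c

    F = dot(grad(v), D(u_)*grad(u_))*dx
    # Gateaux derivative: grad(v) . (D(u*)*grad(du) + (k2/u*^2)*du*grad(u*))
    J = derivative(F, u_)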