Hi everyone,

Attached is a tentative working code I have for an advective-diffusive-reactive system using operator splitting. I tried to express the system of equations as a mixed finite element form (not sure if this is the right approach). When I run the attached code, I get an error saying "Nonlinear solver failed to converge after 0 nonlinear iterations. Reason: DIVERGED_LINE_SEARCH". Can you guys have a look and see if this is even a logical approach? The parts in question begin at line 133, and the solver is at line 217 (if you comment out line 217, the advection-diffusion part runs well). If this doesn't work, I will explore the PyOP2 option.

Thanks,
Justin

On Fri, Sep 4, 2015 at 4:35 AM, David Ham <David.Ham@imperial.ac.uk> wrote:
I'm not totally sure that's obvious. I agree that it's not the theoretically optimal solution, however it is well engineered which may make it a convenient solution, and if it's fast enough, then that's just fine.
On Fri, 4 Sep 2015 at 11:22 Cotter, Colin J <colin.cotter@imperial.ac.uk> wrote:
PETSc shouldn't be used for this, as the problem is solved independently at each node.
--cjc
On 4 September 2015 at 10:40, Ham, David A <david.ham@imperial.ac.uk> wrote:
OK. I think I understand. In essence, assembling PETSc matrices is a core PyOP2 competence, so it *can* do this, although it'll be a little clunky because we don't have the full glory of Firedrake on top.
I think it would help if you try to read some of the PyOP2 documentation for a bit of background:
http://op2.github.io/PyOP2/concepts.html http://op2.github.io/PyOP2/kernels.html http://op2.github.io/PyOP2/linear_algebra.html http://op2.github.io/PyOP2/mixed.html
The last one is important in this case because you're actually solving a mixed system over the mixed function space (A, B, C).
Inconveniently, that means we need 3 kernels to assemble the residual and 7 kernels (one for each nonzero entry) to assemble the Jacobian.
Once you have the code for assembling, I think it should be straightforward to provide PETSc4py with the right callbacks so that its variational inequality solver works.
(Another possibility would be to bypass PyOP2 and write your own code accessing PETSc4py directly. That might actually be easier but assembly will be slower - which you may or may not care about).
I don't have time right this instant to start writing out the Kernels (I have to teach on Monday and I'm not ready yet.) I'll try to get back to this later, or maybe someone else on the list will dive in and save me ;)
Cheers,
David
On Fri, 4 Sep 2015 at 09:37 Justin Chang <jychang48@gmail.com> wrote:
Hi everyone,
Thanks for the input. So I was leaning more towards employing the Newton method for solving this. And yes, all the vectors live in the same function space. If the system of equations were solved at each node i, the residual algebraic vector of size 3x1 would be:
r = [ c_A(i) + c_C(i) - psi_A(i);
      c_B(i) + c_C(i) - psi_B(i);
      k1*c_A(i)*c_B(i) - c_C(i) ]
and the Jacobian 3x3 matrix would be:
J = [ 1,      0,      1;
      0,      1,      1;
      k1*c_B, k1*c_A, -1 ]
Ideally I would like to have these expressed as a PETSc matrix and vector because I would like to employ PETSc’s variational inequality solver to enforce lower/upper bounds for c_A/B/C.
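For what it's worth, here is a minimal NumPy sketch of that per-node Newton iteration (the function name `newton_node` and the zero initial guess are my own choices; it does not enforce the lower/upper bounds, which is what the PETSc VI solver would add):

```python
import numpy as np

def newton_node(psi_A, psi_B, k1, tol=1e-12, maxit=50):
    """Newton iteration for one node of the reaction system.

    Unknowns c = (c_A, c_B, c_C); r and J are exactly the 3x1 residual
    and 3x3 Jacobian written out above.
    """
    c = np.zeros(3)  # initial guess: all concentrations zero
    for _ in range(maxit):
        c_A, c_B, c_C = c
        r = np.array([c_A + c_C - psi_A,
                      c_B + c_C - psi_B,
                      k1*c_A*c_B - c_C])
        if np.linalg.norm(r) < tol:
            break
        J = np.array([[1.0,    0.0,    1.0],
                      [0.0,    1.0,    1.0],
                      [k1*c_B, k1*c_A, -1.0]])
        # Newton update: solve J * dc = r, then c <- c - dc
        c = c - np.linalg.solve(J, r)
    return c
```

For example, with psi_A = psi_B = k1 = 1 this converges in a handful of iterations to c_A = c_B ≈ 0.618 and c_C ≈ 0.382.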
As for the pyop2 option:
void solve_system(double *c_A, double *c_B, double *c_C, double *k1,
                  double *psi_A, double *psi_B) {
  // In here, you can dereference these variables to get the components
  // of each variable at the node, e.g.

  // Set first component of c_A to sum of first two components of psi_A.
  c_A[0] = psi_A[0] + psi_A[1];
}
I guess you can either write the Newton (or fixed point) iteration out by hand in the kernel (it shouldn't be too difficult), or else make library calls (getting the correct link lines is possible I think).
Now in your code you do:
solve_kernel = op2.Kernel(kernel_code, "solve_system")
op2.par_loop(solve_kernel, c_A.dof_dset.node_set,
             c_A.dat(op2.WRITE), c_B.dat(op2.WRITE), c_C.dat(op2.WRITE),
             k1.dat(op2.READ), psi_A.dat(op2.READ), psi_B.dat(op2.READ))
Does this make sense?
I am guessing void solve_system(…) goes in a separate C file called solve_system.c, but how exactly do I get the components of each variable at the node? So here's how I did this with scipy:
================================
# Advection-diffusion firedrake code
# Compute psi_A, psi_B, and k1
...

import numpy as np
from scipy.optimize import fsolve

# Solve system of nonlinear reactions at a single node
def equations(p, psi_A, psi_B, k_1):
    c_A, c_B, c_C = p
    return (c_A + c_C - psi_A, c_B + c_C - psi_B, c_C - k_1*c_A*c_B)

# define vectors for primary species
c_A_vec = np.zeros(len(psi_A.vector()))
c_B_vec = np.zeros(len(psi_A.vector()))

# define vector for secondary species
c_C_vec = np.zeros(len(psi_A.vector()))

# Iterate through each node
for i in range(len(psi_A.vector())):
    c_A_i, c_B_i, c_C_i = fsolve(equations, (0.0, 0.0, 0.0),
                                 args=(psi_A.vector()[i], psi_B.vector()[i], k1))
    c_A_vec[i] = c_A_i
    c_B_vec[i] = c_B_i
    c_C_vec[i] = c_C_i

# Output
c_A = Function(Q)
c_B = Function(Q)
c_C = Function(Q)
c_A.vector()[:] = c_A_vec
c_B.vector()[:] = c_B_vec
c_C.vector()[:] = c_C_vec
File("Solution_c_A.pvd") << c_A
File("Solution_c_B.pvd") << c_B
File("Solution_c_C.pvd") << c_C
===================================
The PyOP2 equivalent of the above would be a good starting point I suppose.
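As an aside, because c_A = psi_A - c_C and c_B = psi_B - c_C, the per-node system collapses to a single quadratic in c_C, k1*c_C^2 - (k1*(psi_A + psi_B) + 1)*c_C + k1*psi_A*psi_B = 0, so a closed-form solve is also possible (a sketch assuming k1 > 0 and psi_A, psi_B >= 0; the helper name is my own):

```python
import math

def solve_node_closed_form(psi_A, psi_B, k1):
    """Solve c_A + c_C = psi_A, c_B + c_C = psi_B, c_C = k1*c_A*c_B
    at one node by eliminating c_A and c_B; returns (c_A, c_B, c_C)."""
    b = k1*(psi_A + psi_B) + 1.0
    # Discriminant is non-negative whenever psi_A, psi_B >= 0
    disc = b*b - 4.0*k1*k1*psi_A*psi_B
    # The smaller root keeps c_A and c_B non-negative
    c_C = (b - math.sqrt(disc)) / (2.0*k1)
    return psi_A - c_C, psi_B - c_C, c_C
```

Whether the physical bounds always pick out this root for your parameter ranges is worth checking, but it would remove the iterative solve from the kernel entirely.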
Thanks all,
Justin
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake