PETSc shouldn't be used for this, as the problem is solved independently at each node.--cjc
On 4 September 2015 at 10:40, Ham, David A <david.ham@imperial.ac.uk> wrote:
OK. I think I understand. In essence, assembling PETSc matrices is a core PyOP2 competence, so it *can* do this, although it'll be a little clunky because we don't have the full glory of Firedrake on top.
I think it would help if you try to read some of the PyOP2 documentation for a bit of background:
The last one is important in this case because you're actually solving a mixed system over the mixed function space (A, B, C).
Inconveniently, that means we need 3 kernels to assemble the residual and 7 kernels (one for each nonzero entry) to assemble the Jacobian.
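A quick sanity check on that count (not in the original message): the per-node Jacobian written out later in the thread has exactly seven nonzero entries, one per Jacobian kernel. A few lines of NumPy with made-up sample values confirm this:

```python
import numpy as np

# Assumed sample values, purely for illustration (not problem data).
k1, c_A, c_B = 2.0, 0.5, 0.3

# Jacobian of the per-node residual r = (c_A + c_C - psi_A,
#                                        c_B + c_C - psi_B,
#                                        k1*c_A*c_B - c_C)
J = np.array([[1.0,    0.0,     1.0],
              [0.0,    1.0,     1.0],
              [k1*c_B, k1*c_A, -1.0]])

print(np.count_nonzero(J))  # 7 nonzero entries -> 7 Jacobian kernels
```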
Once you have the code for assembling, I think it should be straightforward to provide PETSc4py with the right callbacks so that its variational inequality solver works.
(Another possibility would be to bypass PyOP2 and write your own code accessing PETSc4py directly. That might actually be easier but assembly will be slower - which you may or may not care about).
I don't have time right this instant to start writing out the Kernels (I have to teach on Monday and I'm not ready yet.) I'll try to get back to this later, or maybe someone else on the list will dive in and save me ;)
Cheers,
David

On Fri, 4 Sep 2015 at 09:37 Justin Chang <jychang48@gmail.com> wrote:
Hi everyone,
Thanks for the input. So I was leaning more towards employing Newton's method for solving this. And yes, all the vectors live in the same function space. If the system of equations were solved at each node i, the residual algebraic vector of size 3x1 would be:
r = [c_A(i) + c_C(i) - psi_A(i);
     c_B(i) + c_C(i) - psi_B(i);
     k1*c_A(i)*c_B(i) - c_C(i)]
and the Jacobian 3x3 matrix would be:
J = [1,         0,         1;
     0,         1,         1;
     k1*c_B(i), k1*c_A(i), -1]
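(Editorial sketch, not from the original mail: writing the per-node Newton iteration out by hand using exactly this residual and Jacobian is only a few lines of NumPy. The psi_A, psi_B, and k1 values below are made-up test data.)

```python
import numpy as np

def newton_node(psi_A, psi_B, k1, tol=1e-12, maxit=50):
    """Solve the 3x3 per-node system with a hand-written Newton iteration."""
    c = np.zeros(3)  # (c_A, c_B, c_C), starting from zero
    for _ in range(maxit):
        c_A, c_B, c_C = c
        # Residual, exactly as written above.
        r = np.array([c_A + c_C - psi_A,
                      c_B + c_C - psi_B,
                      k1*c_A*c_B - c_C])
        if np.linalg.norm(r) < tol:
            break
        # Jacobian, exactly as written above.
        J = np.array([[1.0,    0.0,     1.0],
                      [0.0,    1.0,     1.0],
                      [k1*c_B, k1*c_A, -1.0]])
        c = c - np.linalg.solve(J, r)
    return c

# With psi_A = psi_B = 1 and k1 = 2 the solution is c_A = c_B = c_C = 0.5,
# since 2*x*x = 1 - x has the root x = 0.5.
c = newton_node(psi_A=1.0, psi_B=1.0, k1=2.0)
```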
Ideally I would like to have these expressed as a PETSc matrix and vector because I would like to employ PETSc’s variational inequality solver to enforce lower/upper bounds for c_A/B/C.
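(Editorial aside: PETSc's variational inequality solver, SNESVINEWTONRSLS, does a proper reduced-space active-set solve, and the petsc4py route above is the right way to get it. As a rough stand-in illustration only — not PETSc's algorithm — one can see the effect of the bounds by clipping each Newton iterate of the per-node system to lb <= c <= ub, again with made-up data.)

```python
import numpy as np

def clipped_newton(psi_A, psi_B, k1, lb=0.0, ub=np.inf, tol=1e-10, maxit=100):
    """Toy bounded Newton iteration for the 3x3 per-node system.

    Illustrative only: each update is clipped to [lb, ub], which mimics
    the feasibility constraint a real VI solver enforces.
    """
    c = np.full(3, 0.1)  # start strictly inside the feasible region
    for _ in range(maxit):
        c_A, c_B, c_C = c
        r = np.array([c_A + c_C - psi_A,
                      c_B + c_C - psi_B,
                      k1*c_A*c_B - c_C])
        if np.linalg.norm(r) < tol:
            break
        J = np.array([[1.0,    0.0,     1.0],
                      [0.0,    1.0,     1.0],
                      [k1*c_B, k1*c_A, -1.0]])
        # Clip the Newton update so the iterate stays within the bounds.
        c = np.clip(c - np.linalg.solve(J, r), lb, ub)
    return c

c = clipped_newton(psi_A=1.0, psi_B=1.0, k1=2.0)
```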
As for the pyop2 option:
void solve_system(double *c_A, double *c_B, double *c_C, double *k1,
                  double *psi_A, double *psi_B)
{
    // In here, you can dereference these variables to get the components
    // of each variable at the node, e.g. set the first component of c_A
    // to the sum of the first two components of psi_A:
    c_A[0] = psi_A[0] + psi_A[1];
}
I guess you can either write the Newton (or fixed-point) iteration out by hand in the kernel (it shouldn't be too difficult), or else make library calls (getting the correct link lines is possible, I think).
Now in your code you do:
solve_kernel = op2.Kernel(kernel_code, "solve_system")
op2.par_loop(solve_kernel,
             c_A.dof_dset.node_set,
             c_A.dat(op2.WRITE),
             c_B.dat(op2.WRITE),
             c_C.dat(op2.WRITE),
             k1.dat(op2.READ),
             psi_A.dat(op2.READ),
             psi_B.dat(op2.READ))
Does this make sense?
I am guessing void solve_system(…) goes in a separate C file called solve_system.c, but how exactly do I get the components of each variable at the node? Here's how I did this with scipy:
================================
# Advection-diffusion firedrake code
# Compute psi_A, psi_B, and k1 ...

from scipy.optimize import fsolve
import numpy as np

# Solve system of nonlinear reactions
def equations(p, psi_A, psi_B, k_1):
    c_A, c_B, c_C = p
    return (c_A + c_C - psi_A, c_B + c_C - psi_B, c_C - k_1*c_A*c_B)

# Define vectors for primary species
c_A_vec = np.zeros(len(psi_A.vector()))
c_B_vec = np.zeros(len(psi_A.vector()))

# Define vector for secondary species
c_C_vec = np.zeros(len(psi_A.vector()))

# Iterate through each node
for i in range(len(psi_A.vector())):
    c_A_i, c_B_i, c_C_i = fsolve(equations, (0.0, 0.0, 0.0),
                                 args=(psi_A.vector()[i], psi_B.vector()[i], k1))
    c_A_vec[i] = c_A_i
    c_B_vec[i] = c_B_i
    c_C_vec[i] = c_C_i

# Output
c_A = Function(Q)
c_B = Function(Q)
c_C = Function(Q)
c_A.vector()[:] = c_A_vec
c_B.vector()[:] = c_B_vec
c_C.vector()[:] = c_C_vec
File("Solution_c_A.pvd") << c_A
File("Solution_c_B.pvd") << c_B
File("Solution_c_C.pvd") << c_C
===================================
The PyOP2 equivalent of the above would be a good starting point I suppose.
Thanks all,
Justin