Lawrence,
So I believe I am trying to solve for this:
(J^(00) + J^(RR))*U = F(U) - J^(R0)*U^R
the following two lines:
A = assemble(a, bcs=bcs, tensor=A)
A.M._force_evaluation()
should return (J^(00)+J^(RR)) if I understand this correctly?
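(For concreteness, the snippets below assume something like the following setup; the names V, a, F, L, and bcs are stand-ins for my actual problem, which is more involved:)

from firedrake import *

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)        # current iterate U
v = TestFunction(V)
f = Constant(1.0)

F = (inner(grad(u), grad(v)) - f*v)*dx   # residual form F(U)
a = derivative(F, u)                     # Jacobian form, the a assembled above
L = f*v*dx                               # linear form used to build b further down
bcs = [DirichletBC(V, Constant(0.0), "on_boundary")]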
Then I need to express F(U) - J^(R0)*U^R in my gradient routine. So far I have tried the following options:
1) Applying the action of a:
tmp = Function(V)
for bc in bcs:
    bc.apply(tmp)
rhs_bc = assemble(action(a, tmp))
...
def formGrad(...):
    r = assemble(F, tensor=r)
    r -= rhs_bc
    for bc in bcs:
        bc.apply(r)
    with r.dat.vec as r_vec:
        r_vec.copy(petsc_g)
    ...
but the above starts to diverge after a certain point
2) Applying bc.zero():
def formGrad(...):
    r = assemble(F, tensor=r)
    for bc in bcs:
        bc.zero(r)
    with r.dat.vec as r_vec:
        r_vec.copy(petsc_g)
    ...
but the above has the exact same issue as before
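(My understanding is that bc.apply(r) sets the bc-node entries of r to the bc value, while bc.zero(r) just sets them to zero, so if the bc values happen to be homogeneous I would expect the two variants to end up doing the same thing to r, something like:

r = assemble(F)
for bc in bcs:
    bc.apply(r)   # bc-node entries of r <- the bc value (0 if homogeneous)

r = assemble(F)
for bc in bcs:
    bc.zero(r)    # bc-node entries of r <- 0

which would be consistent with both showing the same behaviour, since rhs_bc is also zero in that case.)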
...
So what *did* work in the past was the following:
def formGrad(...):
    A = assemble(a, bcs=bcs)
    with b.dat.vec as b_vec:
        A.M.handle.mult(petsc_x, petsc_g)
        petsc_g.axpy(-1.0, b_vec)
where b was prepared beforehand as follows:
b = assemble(L)
tmp = Function(V)
for bc in bcs:
    bc.apply(tmp)
rhs_bc = assemble(action(a, tmp))
b -= rhs_bc
for bc in bcs:
    bc.apply(b)
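(For reference, consolidated into one piece, the working version looks roughly like this; I am writing the gradient callback with the petsc4py-style (tao, x, g) signature, which is an assumption on my part since my actual wrapper is a bit different:)

b = assemble(L)
tmp = Function(V)
for bc in bcs:
    bc.apply(tmp)                      # tmp holds the bc values, zero elsewhere
rhs_bc = assemble(action(a, tmp))      # lifting term
b -= rhs_bc
for bc in bcs:
    bc.apply(b)                        # bc entries of b set to the bc values

def formGrad(tao, petsc_x, petsc_g):
    A = assemble(a, bcs=bcs)                # (J^(00) + J^(RR))
    with b.dat.vec_ro as b_vec:             # read-only access to b is enough here
        A.M.handle.mult(petsc_x, petsc_g)   # g <- A*x
        petsc_g.axpy(-1.0, b_vec)           # g <- g - b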
So I know the above works, but it takes several TAO iterations to converge. In my previous attempts with the nonlinear form equivalent, I could get the result to converge in exactly one iteration, so something tells me that either I am "cheating" here or there really is a cleaner/better way of implementing this in residual form.
Any thoughts? I appreciate all the help.
Thanks,
Justin