Call-back functions not working (I think)
Hi all,

So I am implementing PETSc/TAO's semismooth solver to ensure non-negative solutions for anisotropic diffusion. Attached is the code. Basically, the call-back functions require a Jacobian and a gradient/constraint evaluation routine; they begin around line 160. Run the code as:

    python SS_ex2.py 30 <guess>

where <guess> selects the initial guess: 0 uses zeros, and 1 uses the solution of the standard formulation (i.e., no bounds). The implementation somewhat works if <guess> is 1, that is, if a non-zero initial guess is provided. If no guess is provided, I get a funky solution that is zero everywhere except on the non-homogeneous Dirichlet boundaries.

Also, even with an initial guess (1), the boundary values aren't exactly accurate either: I should have a maximum value of 1 at the very center of the top face, but I get 0.983941.

Something tells me I am not implementing the BCs correctly. Can anyone tell me what I am possibly doing wrong here?

Thanks!
Justin
Hi Justin,

On 15/08/16 02:15, Justin Chang wrote:
This is entirely possible. I haven't looked at your formulation in detail, but here's one thing to remember about the way boundary conditions are implemented (summarised from http://firedrakeproject.org/boundary_conditions.html).

The particular thing to note is that an operator assembled with boundary conditions will just be the identity on the boundary node rows, because we forward-substitute to the right-hand side to make the operator symmetric.

If you're doing the linear problem by hand somehow, then you have to do this substitution. You can see this happening in the LinearSolver (in linear_solver.py), for example.

Plausibly this means that your gradient should have zero boundary conditions applied to it (do bc.zero(...))?

Lawrence
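The forward substitution Lawrence describes can be sketched in plain linear-algebra terms. The following is a minimal pure-Python illustration (dense lists, no Firedrake or PETSc; the helper name `apply_dirichlet` and the 3-node example are mine, not Firedrake API): boundary rows and columns of the operator become the identity, and the coupling to the known boundary values moves to the right-hand side.

```python
# Minimal dense sketch (pure Python, no Firedrake/PETSc) of Dirichlet
# handling by lifting: boundary rows/columns of A become the identity,
# and the coupling to the known boundary values is forward-substituted
# into the right-hand side, keeping A symmetric.  The helper name and
# the tiny example are illustrative only.

def apply_dirichlet(A, b, bc_nodes, bc_vals):
    """Return (A_bc, b_bc) with identity boundary rows/cols and a lifted RHS."""
    n = len(b)
    A_bc = [row[:] for row in A]
    b_bc = b[:]
    for node, val in zip(bc_nodes, bc_vals):
        # forward-substitute: move A[:, node] * val onto the RHS
        for i in range(n):
            if i not in bc_nodes:
                b_bc[i] -= A_bc[i][node] * val
        # zero the row and column, put 1 on the diagonal
        for j in range(n):
            A_bc[node][j] = 0.0
            A_bc[j][node] = 0.0
        A_bc[node][node] = 1.0
        b_bc[node] = val
    return A_bc, b_bc

# 1D Laplacian on 3 nodes with u = 0 on the left, u = 1 on the right
A = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]
b = [0.0, 0.0, 0.0]
A_bc, b_bc = apply_dirichlet(A, b, bc_nodes=[0, 2], bc_vals=[0.0, 1.0])
# A_bc is identity on rows/cols 0 and 2; b_bc = [0.0, 1.0, 1.0], so the
# interior unknown solves 2*u1 = 1, i.e. u1 = 0.5
```

Solving A_bc * u = b_bc then reproduces the boundary values exactly on the boundary rows, which is why an operator assembled with bcs looks like the identity there.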
Lawrence,

So I believe I am trying to solve for this:

    (J^(00) + J^(RR))*U = F(U) - J^(R0)*U^R

The following two lines:

    A = assemble(a, bcs=bcs, tensor=A)
    A.M._force_evaluation()

should return (J^(00) + J^(RR)), if I understand this correctly? Then I need to express F(U) - J^(R0)*U^R in my gradient routine. I have tried all of these options thus far:

1) Applying the action of a:

    tmp = Function(V)
    for bc in bcs:
        bc.apply(tmp)
    rhs_bc = assemble(action(a, tmp))
    ...
    def formGrad(...):
        r = assemble(F, tensor=r)
        r -= rhs_bc
        for bc in bcs:
            bc.apply(r)
        with r.dat.vec as r_vec:
            r_vec.copy(petsc_g)
    ...

but the above starts to diverge after a certain point.

2) Applying bc.zero():

    def formGrad(...):
        r = assemble(F, tensor=r)
        for bc in bcs:
            bc.zero(r)
        with r.dat.vec as r_vec:
            r_vec.copy(petsc_g)
    ...

but the above has the exact same issue as before.

So what *did* work in the past was the following:

    def formGrad(...):
        A = assemble(a, bcs=bcs)
        with b.dat.vec as b_vec:
            A.M.handle.mult(petsc_x, petsc_g)
            petsc_g.axpy(-1.0, b_vec)

where b had the following performed on it:

    b = assemble(L)
    tmp = Function(V)
    for bc in bcs:
        bc.apply(tmp)
    rhs_bc = assemble(action(a, tmp))
    b -= rhs_bc
    for bc in bcs:
        bc.apply(b)

So I know the above works, but it takes several TAO iterations to converge. In my previous attempts with the nonlinear form equivalent, I could get the result to converge in exactly one iteration, so something tells me I either am "cheating" here or there really is a cleaner/better way of implementing this in residual form.

Any thoughts? Appreciate all the help.

Thanks,
Justin

On Mon, Aug 15, 2016 at 4:14 AM, Lawrence Mitchell <lawrence.mitchell@imperial.ac.uk> wrote:
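In matrix terms, the lifting term Justin assembles above with rhs_bc = assemble(action(a, tmp)) is just A·tmp, where tmp is zero on interior nodes and holds the Dirichlet values on boundary nodes; its interior rows are exactly the J^(R0)·U^R coupling. A tiny pure-Python sketch (the 3-node Laplacian and all values are made up for illustration):

```python
# Pure-Python sketch of what rhs_bc = assemble(action(a, tmp)) computes
# in matrix terms: A @ tmp, with tmp zero on interior nodes and equal to
# the Dirichlet values on boundary nodes.  The interior rows of this
# product are the J^(R0) * U^R coupling term.  Example data is made up.

A = [[ 2.0, -1.0,  0.0],     # 3-node 1D Laplacian; nodes 0 and 2 are boundary
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]
tmp = [0.0, 0.0, 1.0]        # like bc.apply(tmp): u = 1 on the right boundary
n = len(tmp)
rhs_bc = [sum(A[i][j] * tmp[j] for j in range(n)) for i in range(n)]

# subtracting it lifts the boundary data into the interior RHS
b = [0.0, 0.0, 0.0]
b_lifted = [b[i] - rhs_bc[i] for i in range(n)]
# bc.apply(b) would then overwrite the boundary rows with the BC values
```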
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake
Okay, I think I halfway figured it out. I had to do this:

    #===================
    def formGrad(tao, petsc_x, petsc_g, r=None, bcs=None, F=None, sol=None):
        with sol.dat.vec as sol_vec:
            petsc_x.copy(sol_vec)
        r = assemble(F, tensor=r)
        for bc in bcs:
            bc.zero(r)
        with r.dat.vec_ro as r_vec:
            r_vec.copy(petsc_g)
    ...
    for bc in bcs:
        bc.apply(sol)
    with sol.dat.vec as sol_vec:
        taoSolver.solve(sol_vec)
    #===================

to match the result of:

    #===================
    b = assemble(L)
    tmp = Function(V)
    for bc in bcs:
        bc.apply(tmp)
    rhs_bc = assemble(action(a, tmp))
    b -= rhs_bc
    for bc in bcs:
        bc.apply(b)
    ...
    def formGrad(tao, petsc_x, petsc_g, A=None, a=None, b=None, bcs=None):
        A = assemble(a, bcs=bcs)
        with b.dat.vec as b_vec:
            A.M.handle.mult(petsc_x, petsc_g)
            petsc_g.axpy(-1.0, b_vec)
    ...
    with sol.dat.vec as sol_vec:
        taoSolver.solve(sol_vec)
    #===================

So I guess I was primarily missing the lines that update the solution Function sol inside the gradient evaluation call-back. Is this the correct approach here? As in, if I want to (re)assemble any UFL form inside a petsc4py call-back, I need to properly update the solution variable first?

Justin

On Mon, Aug 15, 2016 at 9:00 AM, Justin Chang <jychang48@gmail.com> wrote:
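The fix Justin arrives at, copying petsc_x into sol before reassembling, is a general callback pattern that can be mimicked outside Firedrake. Here is a small pure-Python sketch (no petsc4py/TAO; the class and function names are illustrative): the optimizer hands the callback a raw iterate vector, so the callback must first push it into the state object that the residual assembly reads.

```python
# Pure-Python sketch (no petsc4py/TAO/Firedrake; all names illustrative)
# of the callback pattern above: the optimizer passes a raw iterate x,
# so the callback first copies x into the state the residual reads,
# then reassembles, then zeroes the Dirichlet rows (the bc.zero step).

class State:
    """Stands in for the Firedrake Function that the UFL form reads."""
    def __init__(self, n):
        self.values = [0.0] * n

def make_grad_callback(state, assemble_residual, bc_nodes):
    def form_grad(x):
        state.values[:] = x              # 1) update the solution state
        r = assemble_residual(state)     # 2) reassemble the residual
        for node in bc_nodes:            # 3) zero the Dirichlet rows
            r[node] = 0.0
        return r
    return form_grad

# toy residual r(u) = A u - b for a 3-node 1D Laplacian; the boundary
# values travel inside u itself (cf. bc.apply(sol) before the solve)
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [0.0, 0.0, 0.0]
def assemble_residual(state):
    u = state.values
    n = len(u)
    return [sum(A[i][j] * u[j] for j in range(n)) - b[i] for i in range(n)]

state = State(3)
grad = make_grad_callback(state, assemble_residual, bc_nodes=[0, 2])
g = grad([0.0, 0.5, 1.0])
# at the solution the interior residual vanishes and the boundary rows
# are zeroed, so g is all zeros, and state now holds the iterate
```

Without step 1 the residual would be reassembled from a stale solution, which matches the symptom in the thread: the scheme only "works" when the state happens to start near the answer.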
participants (2)

- Justin Chang
- Lawrence Mitchell