Retrieving Matrix and Vector from LinearVariationalSolver
Hi,

I'm working on a problem that requires solving a linear problem with non-constant bilinear and linear forms. I would like to wrap it in a LinearVariationalSolver, but also be able to access the assembled matrix and right-hand-side vector. I have figured out how to retrieve the assembled matrix, but I have only been able to get the assembled right-hand side with a workaround. Here is a simple example of what I've been doing:

**************************************************************************
from firedrake import *
from firedrake.petsc import PETSc

mesh = UnitSquareMesh(5, 5)
x, y = SpatialCoordinate(mesh)

V = FunctionSpace(mesh, 'CG', 1)
f = 1 + x*y

u = TrialFunction(V)
v = TestFunction(V)

mu = Constant(0.0)
k = Constant(0.0)

a = inner(grad(u), grad(v))*dx + mu*u*v*dx
L = k*f*v*dx

U = Function(V)
problem = LinearVariationalProblem(a, L, U, constant_jacobian=False)
solver = LinearVariationalSolver(problem, solver_parameters={'ksp_type': 'cg'})

# assign values to mu and k and solve
mu.assign(1.0)
k.assign(3.0)
solver.solve()

# grab Jacobian and check that it agrees with assembly of the bilinear form
A = solver.snes.getJacobian()[0].copy()
A_ = assemble(a, mat_type='aij')
A_.force_evaluation()
A_ = A_.petscmat
print((A - A_).norm())  # this prints 0.0, so they agree

# grab rhs and check that it agrees with assembly of the linear form:
# since F(u; v) = a(u, v) - L(v), and the initial guess of the function is zero,
# then -F(0; v) = -a(0, v) + L(v) = L(v)
b = -solver.snes.getFunction()[0].copy()
b_ = assemble(L).dat.data
b_ = PETSc.Vec().createWithArray(b_)
print((b - b_).norm())  # this prints 6.5e-17, so they agree

# change constants to new values and solve
mu.assign(5.0)
k.assign(-1.0)
solver.solve()

# grab Jacobian
A = solver.snes.getJacobian()[0].copy()
A_ = assemble(a, mat_type='aij')
A_.force_evaluation()
A_ = A_.petscmat
print((A - A_).norm())  # prints 0.0, so they agree

# grab rhs
b = -solver.snes.getFunction()[0].copy()
b_ = assemble(L).dat.data
b_ = PETSc.Vec().createWithArray(b_)
print((b - b_).norm())  # this prints 3.37 - they no longer agree

# the problem is that the initial guess is no longer zero; instead it is
# the solution for the previous parameters, so reset the function U to zero
U.assign(0)
solver.solve()

# check rhs again
b = -solver.snes.getFunction()[0].copy()
b_ = assemble(L).dat.data
b_ = PETSc.Vec().createWithArray(b_)
print((b - b_).norm())  # this prints 1.6e-17, so they agree once more
*************************************************************************************

So to grab the assembled vector, I need to make sure the solution is initialized to zero, and then evaluate the negative of the residual. Is there a cleaner way to grab the assembled vector out of the solver?

Thanks,

Peter Sentz
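For reference, here is a possible way to avoid resetting U to zero; it is a minimal sketch, assuming (as the experiment above suggests) that the residual returned by solver.snes.getFunction() was evaluated at the initial guess u0, so that F(u0; v) = a(u0, v) - L(v) and hence b = A*u0 - F(u0). The copy of the initial guess, u0 below, is introduced only for illustration.

**************************************************************************
# keep a copy of the initial guess before solve() overwrites U with the solution
u0 = U.copy(deepcopy=True)
solver.solve()

A = solver.snes.getJacobian()[0]   # assembled operator
r = solver.snes.getFunction()[0]   # residual F(u0) = A*u0 - b

# recover the assembled right hand side: b = A*u0 - F(u0)
b = A.createVecLeft()
with u0.dat.vec_ro as u0_vec:
    A.mult(u0_vec, b)              # b = A*u0
b.axpy(-1.0, r)                    # b = A*u0 - F(u0) = assembled L
**************************************************************************

Under that assumption this should give the same vector as the zero-initial-guess workaround, without discarding the previous solution as the starting point for the Krylov solve.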
Hi Peter,

The case of a linear problem with changing LHS (and RHS) is handled by passing `constant_jacobian=False` to the `LinearVariationalProblem` constructor. There's no need to pull apart the solver.

The initial guess which is used is always the value of the solution Function when solve is called (modified to comply with any Dirichlet BCs). If you want a zero initial guess, you just need to zero that Function.

Regards,

David
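A minimal sketch of that suggestion, reusing the names from Peter's script (mu, k, U, solver): zero the solution Function before solving whenever a zero initial guess is wanted, after which -F(0; v) = L(v) holds again.

**************************************************************************
# update the coefficients; with constant_jacobian=False the operator
# is re-assembled on the next solve
mu.assign(5.0)
k.assign(-1.0)

# zero the solution Function to get a zero initial guess
U.assign(0)
solver.solve()

# with a zero initial guess, the negated residual is the assembled rhs
b = -solver.snes.getFunction()[0].copy()
**************************************************************************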
participants (2)
- Ham, David A
- Sentz, Peter