[Re-adding firedrake@imperial.ac.uk to CC, please do keep the mailing
list in the CC line so that others can see the response too. You'll
likely get faster and/or more accurate help that way!]
On 23/04/18 21:10, Eric M wrote:
> revision:
> I put gx in VGrad as well. My outputs are as expected.
> Though it seems slightly strange to have what should be an exact
> function (gx) interpolated to a discontinuous space.
See below.
> On Mon, Apr 23, 2018 at 2:51 PM, Eric M <eric.malitz@gmail.com
> <mailto:eric.malitz@gmail.com>> wrote:
>
> Aha, there is a confusion here between what is a coefficient
> in some finite element space, and what is a symbolic expression.
> You start with u, which is a Function in some finite element
> space.
> Now grad(u)[0], or u.dx(0) (these actually produce the same
> thing) are symbolic expressions.
> If you want to look at the values, you first need to
> interpolate (or project) the expression into a suitable finite
> element space:
> e.g., if u is in FunctionSpace(mesh, "CG", 2)
> Then perhaps:
> VGrad = VectorFunctionSpace(mesh, "DG", 1)
> gradu = project(grad(u), VGrad)
> gradu.dat.data[:] ...
>
>
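(Concretely, the pattern quoted above looks something like this; an
untested sketch, with the mesh and the function u invented purely for
illustration:)

from firedrake import *

mesh = UnitSquareMesh(8, 8)
x, y = SpatialCoordinate(mesh)

# u plays the role of your computed solution; here it is just an interpolant
V = FunctionSpace(mesh, "CG", 2)
u = Function(V)
u.interpolate(sin(x)*cos(y))

# grad(u)[0] is only a symbolic expression, so project it to get a Function
# whose values you can look at; derivatives of CG2 live naturally in DG1
VGrad = FunctionSpace(mesh, "DG", 1)
gradu_x = project(grad(u)[0], VGrad)
print(gradu_x.dat.data[:10])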
> For an exact solution g, I have gx, its derivative. I wish to
> compare gx against the x derivative of a solution uk. I have:
>
> V = FunctionSpace(mesh, "Lagrange", 2)
> Sigma = TensorFunctionSpace(mesh, "Lagrange", 2)
> gx=Function(V)
> gx.interpolate(x*exp((x**2+y**2)/2)) 
>
> VGrad=FunctionSpace(mesh,"DG",1) 
>
> (solve a problem, split uk from the mixed solution so uk is in V)
>
> gradu_x=project(grad(uk)[0],VGrad) 
>
> But now gradu_x and gx are different lengths. I see why we would
> use this VGrad. Does it make sense to do
> VGrad=FunctionSpace(mesh,"Lagrange",2) instead? 
I suggested a DG space because in general the grad of some CG space is
not continuous. But if you have some exact expression you can compare
to, then you can compute norms directly:
gx = x*exp((x**2 + y**2)/2)
L2diff = norm(gx - grad(uk)[0])
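(Put together, an untested sketch of that comparison; the mesh, the
expression, and uk are stand-ins for your actual problem:)

from firedrake import *

mesh = UnitSquareMesh(16, 16)
x, y = SpatialCoordinate(mesh)

# stand-in for the uk you split out of the mixed solution
V = FunctionSpace(mesh, "Lagrange", 2)
uk = Function(V)
uk.interpolate(exp((x**2 + y**2)/2))

# the exact derivative stays purely symbolic: no interpolation into any space
gx = x*exp((x**2 + y**2)/2)

# L2 norm of the symbolic difference, assembled over the mesh
L2diff = norm(gx - grad(uk)[0])
print(L2diff)

You only need to project grad(uk)[0] into a DG space if you actually want
to inspect (or output) the values; for computing errors against an exact
expression, the direct norm above is all you need.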
Make sense?
Cheers,
Lawrence