Dimension of the gradient of a vector
Dear all,

I am trying to understand how the gradient operator works in Firedrake, and there is something I misunderstand. As far as I understood, if I define (in the HORIZONTAL plane):

V = FunctionSpace(mesh, "CG", 1) and
W = VectorFunctionSpace(mesh, "CG", 1, dim=N)

such that:

psi_1 = Function(V)
psi_N = Function(W)  (vector of dim (1, N))

then:

grad(psi_1) is a rank 1 tensor with 2 entries (rows or columns, it does not matter),
grad(psi_N) is a rank 2 tensor of dim (N, 2).

1) Is that correct?

2) Then, I understand that we can compute grad(psi_N)*grad(psi_1) but not grad(psi_1)*grad(psi_N), because the dimensions agree in the first case but not in the second. However, when taking the transpose of grad(psi_N), I don't understand what happens. The product grad(psi_1)*(grad(psi_N)).T cannot be computed, while the product (grad(psi_N)).T * grad(psi_1) can, whereas I would expect the contrary (since I expect (grad(psi_N)).T to be of dimension (2, N)).

Moreover, if I print the expression of (grad(psi_N)).T*grad(psi_1), I get:

{ A | A_{i_8} = sum_{i_9} ((grad(w_6))^T)[i_8, i_9] * (grad(w_4))[i_9] }

which is actually A_i = sum_{j=1}^{2} ((grad(psi_N))^T)_{ij} * d(psi_1)/dx_j.

But if grad(psi_N) has dimension (N, 2), then (grad(psi_N))^T has dimension (2, N), and therefore the expression above would not work if N > 2.

Can someone explain to me how the gradient of psi_N and its transpose are evaluated, and what their rank and dimension are? Is there a command to print these dimensions?

Thanks and best regards,
Floriane
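For reference, a minimal sketch of how these shapes can be inspected (assuming a 2D mesh and, for illustration, N = 3; ufl_shape is the UFL attribute that reports an expression's dimensions):

    from firedrake import *

    mesh = UnitSquareMesh(4, 4)      # 2D mesh, so grad() adds an index of extent 2
    N = 3                            # illustrative value

    V = FunctionSpace(mesh, "CG", 1)
    W = VectorFunctionSpace(mesh, "CG", 1, dim=N)

    psi_1 = Function(V)
    psi_N = Function(W)

    print(grad(psi_1).ufl_shape)     # (2,)   rank-1 tensor
    print(grad(psi_N).ufl_shape)     # (3, 2) rank-2 tensor, i.e. (N, 2)
    print(grad(psi_N).T.ufl_shape)   # (2, 3) the transpose, i.e. (2, N)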
On 01/06/16 15:59, Floriane Gidel [RPG] wrote:
Dear all,
I am trying to understand how the gradient operator works on Firedrake and there is something I misunderstand.
As far as I understood, if I define (in the HORIZONTAL plane):
V = FunctionSpace(mesh,"CG", 1) and
W = VectorFunctionSpace(mesh,"CG", 1, dim=N)
such that :
psi_1 = Function(V)
psi_N = Function(W) (vector of dim (1,N))
then,
grad(psi_1) is a rank 1 tensor with 2 entries (rows or columns, it does not matter),
grad(psi_N) is a rank 2 tensor of dim (N, 2)
1) Is that correct?
Assuming you're in 2D: yes
2) Then, I understand that we can compute grad(psi_N)*grad(psi_1) but not grad(psi_1)*grad(psi_N), because the dimensions agree in the first case but not in the second one.
However, when taking the transpose of grad(psi_N), I don't understand what happens.
The trouble with rank >= 1 tensors is that there are many possible products: dot products, inner products, outer products, etc. So when you say you understand grad(psi_N)*grad(psi_1), you probably mean the dot product. Have a look at chapter 4 of the FEniCS manual: http://launchpad.net/fenics-book/trunk/final/+download/fenics-manual-2011-10... which describes the UFL language and explains the different tensor products.

Basically, you should only use * if one of the tensors is rank 0 (a scalar). UFL for some reason does allow rank 2 * rank 1 - presumably because people tend to be familiar with the matrix-vector product. Note that UFL does not check whether there is dimensional agreement (only rank), as this isn't always available at this level - it will only show up when you actually evaluate the expression.

Cheers,
Stephan
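For concreteness, a short sketch of the different products described here, continuing the illustrative setup above (dot, inner and outer are the standard UFL operators):

    A = grad(psi_N)                  # shape (N, 2)
    b = grad(psi_1)                  # shape (2,)

    print(dot(A, b).ufl_shape)       # (3,)   matrix-vector product: contracts the last index of A with b
    print(inner(A, A).ufl_shape)     # ()     full contraction over both indices -> a scalar
    print(outer(b, b).ufl_shape)     # (2, 2) no contraction: builds a rank-2 tensor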
On 02/06/16 11:06, Stephan Kramer wrote:
> Basically, you should only use * if one of the tensors is rank 0 (a scalar).
If you like, you can always use summation convention to indicate the product you wish to take:

    i, j = indices(2)
    grad(psi_1)[i] * grad(psi_N)[j, i]

This contracts the index of the (2, )-tensor grad(psi_1) against the second index of the (N, 2)-tensor grad(psi_N), forming an (N, )-tensor that is then indexed with the free index j.

Lawrence
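A small runnable sketch of this index-notation approach, again continuing the illustrative setup above (using as_vector, on the assumption that the result is wanted back as an ordinary vector-valued expression):

    from ufl import as_vector, indices   # also available via the firedrake namespace

    i, j = indices(2)

    # Scalar expression with one free index j:
    #   sum_i  d(psi_1)/dx_i * d((psi_N)_j)/dx_i
    contracted = grad(psi_1)[i] * grad(psi_N)[j, i]

    # Wrap the free index j into a vector-valued expression of shape (N,)
    v = as_vector(contracted, j)
    print(v.ufl_shape)               # (3,) for the illustrative N = 3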
Ok, thanks Stephan and Lawrence, that should help a lot!

Best wishes,
Floriane
participants (3)
- Floriane Gidel [RPG]
- Lawrence Mitchell
- Stephan Kramer