Hi Eike,

If you take a look at the test_hybridisation_inverse branch, in tests/regression/test_hybridisation_schur, you'll see a hacked-up attempt at doing this for simplices. It's a bit fiddly because you need to assemble the form multiple times, once as a mixed system and once as a single block, so I'm thinking of making a tool to automate some of this by doing automated substitutions in UFL. Lawrence and I said we might try to sketch out how to do this.

Another slight problem is that we don't have trace elements for quadrilaterals or tensor-product elements at the moment. Our approach to trace spaces is also rather hacked up: we extract the facet basis functions from an H(div) basis, and the tabulator returns DOFs by dotting the local basis functions with the local normal.

Andrew: presumably you didn't implement them because you anticipated some fiddliness for tensor products?

cheers
--cjc
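[To make the "assemble the form multiple times" point concrete, here is a minimal sketch, not the code on the test_hybridisation_inverse branch: the same lowest-order RT/DG mixed Poisson operator assembled once as a mixed system and once block by block. The mesh, spaces and form are illustrative choices, not taken from the branch.]

    # Minimal sketch: the duplication a UFL-substitution tool would remove.
    from firedrake import *

    mesh = UnitSquareMesh(8, 8)
    V = FunctionSpace(mesh, "RT", 1)   # velocity, H(div)
    Q = FunctionSpace(mesh, "DG", 0)   # pressure
    W = V * Q

    # Once as a mixed system: one big form on W.
    u, p = TrialFunctions(W)
    v, q = TestFunctions(W)
    a_mixed = (dot(u, v) + div(v)*p + div(u)*q) * dx
    A = assemble(a_mixed)

    # Once as single blocks: the same bilinear pieces written on V and Q
    # separately, as needed when building the hybridised (Schur) system.
    u = TrialFunction(V)
    v = TestFunction(V)
    p = TrialFunction(Q)
    q = TestFunction(Q)
    M_u = assemble(dot(u, v) * dx)   # velocity mass matrix
    D   = assemble(div(u) * q * dx)  # divergence operator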
On 16 March 2015 at 08:49, Eike Mueller <E.Mueller@bath.ac.uk> wrote:
Dear firedrakers,
I have two questions regarding the extension of a hybridised solver to a tensor-product approach:
(1) In firedrake, is there already a generic way of multiplying locally assembled matrices? I need this for the hybridised solver: I want to (locally) assemble the velocity mass matrix M_u and the divergence operator D and then multiply them to get, for example:
D^T M_u^{-1} D
I can create a hack by assembling them into vector-valued DG0 fields and then writing the necessary operations to multiply them, abstracting that into a class (as I did for the column-assembled matrices), but I wanted to check whether this is supported generically in firedrake (i.e. whether there is support for working with a locally assembled matrix representation). If I can do that, then I can see how to build all the operators that are needed in the hybridised equation and for mapping between the Lagrange multipliers and pressure/velocity. For the columnwise smoother, I then need to extract bits of those locally assembled matrices and assemble them columnwise, as for the DG0 case.
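[A hedged sketch of the element-local triple product in question: plain numpy acting on dense per-cell blocks. assemble_local() is a hypothetical helper returning one cell's dense element matrix; Firedrake does not expose this directly, which is exactly why the vector-valued DG0 hack above is needed.]

    import numpy as np

    def local_schur(M_u_cell, D_cell):
        """Return D^T M_u^{-1} D for one cell's dense blocks."""
        return D_cell.T @ np.linalg.solve(M_u_cell, D_cell)

    # Per cell, something like (assemble_local is hypothetical):
    # M_u_cell = assemble_local(dot(u, v)*dx, cell)
    # D_cell   = assemble_local(div(u)*q*dx, cell)
    # S_cell   = local_schur(M_u_cell, D_cell)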
(2) The other ingredient we need for the Gopalakrishnan and Tan approach is a tensor-product solver in the P1 space. So can I already prolongate/restrict in the horizontal direction only in this space? I recall that Lawrence wrote a P1 multigrid, but I presume this is for an isotropic grid which is refined in all coordinate directions. Again, I can probably do it 'by hand' by just L2-projecting between the spaces, but this will not be the most efficient way. Getting the columnwise smoother should work as for the DG0 case: I need to assemble the matrix locally, then pick out the vertical couplings and build them into a columnwise matrix, which I store in a vector-valued P1 space on the horizontal host grid.
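[A hedged numpy sketch of what horizontal-only transfer amounts to once the P1 dofs are stored columnwise, i.e. indexed by (horizontal node, vertical node). P_h is an assumed prolongation matrix for P1 on the horizontal host grid; every vertical level is transferred identically.]

    import numpy as np

    def prolong_horizontal(P_h, u_coarse):
        """u_coarse: (n_coarse_horiz, n_vert) array -> (n_fine_horiz, n_vert)."""
        return P_h @ u_coarse

    def restrict_horizontal(P_h, r_fine):
        """Restriction taken as the transpose of prolongation."""
        return P_h.T @ r_fine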
Thanks a lot,
Eike
--
Dr Eike Hermann Mueller
Lecturer in Scientific Computing
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 6241
e.mueller@bath.ac.uk
http://people.bath.ac.uk/em459/
_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake