Re: [firedrake] hybridisation and tensor-product multigrid
Hi Eike,

Lawrence will have to answer 2. As to number 1, we can't do it yet, but I suspect it's not actually too hard to do once we work out the syntax for it. In implementation it is similar in spirit to the inverse assembly that we just implemented.

Cheers,

David

On 16 March 2015 at 08:49, Eike Mueller <E.Mueller@bath.ac.uk> wrote:
Dear firedrakers,
I have two questions regarding the extension of a hybridised solver to a tensor-product approach:
(1) In firedrake, is there already a generic way of multiplying locally assembled matrices? I need this for the hybridised solver: I want to (locally) assemble the velocity mass matrix M_u and the divergence operator D and then multiply them to obtain, for example:
D^T M_u^{-1} D
I can create a hack by assembling them into vector-valued DG0 fields and then writing the necessary operations to multiply them and abstracting that into a class (as I did for the column-assembled matrices), but I wanted to check whether this is supported generically in firedrake (i.e. whether there is support for working with a locally assembled matrix representation). If I can do that, then I can see how to build all the operators that are needed in the hybridised equation and for mapping between the Lagrange multipliers and pressure/velocity. For the columnwise smoother, I then need to extract bits of those locally assembled matrices and assemble them columnwise as for the DG0 case.
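For concreteness, here is a minimal sketch of the kind of thing I mean. The RT1/DG0 pair, the flat 2D mesh and the placeholder element matrices are just assumptions for illustration (the real setup is the mixed space on the extruded mesh); only the form definitions are meant as actual firedrake code.

from firedrake import *
import numpy as np

# The two forms whose *local* assembly is at issue, written on a flat 2D mesh
# with an RT1/DG0 pair purely for illustration.
mesh = UnitSquareMesh(4, 4)
V = FunctionSpace(mesh, "RT", 1)   # velocity
Q = FunctionSpace(mesh, "DG", 0)   # pressure

u, w = TrialFunction(V), TestFunction(V)
p = TrialFunction(Q)

m_u = inner(u, w) * dx    # velocity mass matrix M_u
d = div(w) * p * dx       # divergence operator D (velocity rows, pressure columns)

# assemble(m_u) and assemble(d) give the globally assembled operators; what I
# am after is the per-cell element matrices, so that products like
# D^T M_u^{-1} D can be formed cell by cell.  The local algebra itself is
# trivial, e.g. with placeholder element matrices (random values stand in for
# the assembled entries; 3 RT1 dofs and 1 DG0 dof per triangle):
M_u_e = np.eye(3) + 0.1 * np.random.rand(3, 3)
D_e = np.random.rand(3, 1)
H_e = D_e.T @ np.linalg.inv(M_u_e) @ D_e     # local D^T M_u^{-1} D

If firedrake could hand me the element tensors of m_u and d directly, the rest is just this kind of small dense linear algebra per cell (or per column, after extracting the vertical couplings).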
(2) The other ingredient we need for the Gopalakrishnan and Tan approach is a tensor-product solver in the P1 space. So can I already prolongate/restrict in the horizontal direction only in this space? I recall that Lawrence wrote a P1 multigrid, but I presume this is for an isotropic grid which is refined in all coordinate directions. Again I can probably do it 'by hand' by just L2 projecting between the spaces, but this will not be the most efficient way. Getting the columnwise smoother should work as for the DG0 case: I need to assemble the matrix locally and then pick out the vertical couplings and build them into a columnwise matrix, which I store as a vector-valued P1 field on the horizontal host-grid.
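To make the columnwise idea concrete, here is a plain numpy sketch (not firedrake API; the structured layout u[i, k] with horizontal vertex index i and vertical level k, and all the names, are hypothetical) of horizontal-only P1 prolongation and of one sweep of a vertical-line smoother:

import numpy as np
from scipy.linalg import solve_banded

def prolong_horizontal(u_c):
    """Horizontal-only P1 prolongation: the vertical index k is untouched,
    the horizontal grid (1D here for brevity) is uniformly refined."""
    n_h, n_z = u_c.shape
    u_f = np.zeros((2 * n_h - 1, n_z))
    u_f[0::2, :] = u_c                                # coincident vertices: inject
    u_f[1::2, :] = 0.5 * (u_c[:-1, :] + u_c[1:, :])   # new vertices: average neighbours
    return u_f

def column_relax(rhs, diag, lower, upper):
    """One columnwise (vertical-line) relaxation sweep: for each horizontal
    vertex, solve the tridiagonal system of vertical couplings exactly.
    rhs is assumed to already contain the off-column (horizontal) contributions,
    Jacobi-style; diag/lower/upper hold the vertical couplings per column,
    each of shape (n_h, n_z)."""
    n_h, n_z = rhs.shape
    u = np.zeros_like(rhs)
    for i in range(n_h):
        ab = np.zeros((3, n_z))
        ab[0, 1:] = upper[i, :-1]   # superdiagonal
        ab[1, :] = diag[i, :]       # diagonal
        ab[2, :-1] = lower[i, 1:]   # subdiagonal
        u[i, :] = solve_banded((1, 1), ab, rhs[i, :])
    return u

Filling the diag/lower/upper arrays from the locally assembled P1 matrix is exactly the "pick out the vertical couplings" step above, so it comes back to question (1).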
Thanks a lot,
Eike
--
Dr Eike Hermann Mueller
Lecturer in Scientific Computing
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 6241
e.mueller@bath.ac.uk
http://people.bath.ac.uk/em459/
--
Dr David Ham
Departments of Mathematics and Computing
Imperial College London
http://www.imperial.ac.uk/people/david.ham