Sounds like we need the ability to send a Function from one communicator to another. We need to think about how to do this...
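A heavily-hedged sketch of what that could look like today with raw mpi4py, assuming identically-partitioned meshes on two sub-communicators (the split layout and rank pairing here are assumptions for illustration, not a proposed API):

    # Hypothetical sketch, not an existing Firedrake API: move a Function's
    # data between two halves of COMM_WORLD. Assumes both halves build an
    # identical mesh and function space, so dof layouts match rank-for-rank.
    from firedrake import *
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    half = world.size // 2
    colour = 0 if world.rank < half else 1
    comm = world.Split(colour, world.rank)   # two disjoint sub-communicators

    mesh = UnitSquareMesh(8, 8, comm=comm)   # same mesh on each half
    V = FunctionSpace(mesh, "CG", 1)
    f = Function(V)

    # Pair rank i of the first half with rank i of the second half and ship
    # the owned dof values across (halo values would still need refreshing).
    partner = world.rank + half if colour == 0 else world.rank - half
    if colour == 0:
        world.Send(f.dat.data_ro, dest=partner)
    else:
        world.Recv(f.dat.data, source=partner)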

On Tue, 13 Jun 2017 at 14:26 Cotter, Colin J <colin.cotter@imperial.ac.uk> wrote:
Hi David,
(1) Jemma can provide details, but each elliptic problem fits comfortably on one core (we are doing shallow-water). I don't think our problem can be addressed by keeping the elliptic problems serial and only adding spatial parallelism.
(2) We are using a fairly complicated setup: GMRES on a 4x4 block system, fieldsplit multiplicatively into 2x2 blocks, with each of the 2x2 diagonal blocks solved by hybridisation.
(3) Jemma can provide details, but I was able to fit ~40 solvers into memory, and then we needed to go up by a factor of 10-20. The individual solves are quite cheap, but we need to do a lot of them: the idea is that you do a large number of concurrent solves and then take a big timestep.
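For concreteness, roughly what a setup like (2) could look like as a flat PETSc options dict. The field groupings and the inner solver choices below are illustrative assumptions, not the actual configuration:

    # Illustrative only: outer GMRES on the 4x4 block system, a
    # multiplicative fieldsplit into two 2x2 blocks, each block handled
    # by Firedrake's hybridisation preconditioner.
    solver_parameters = {
        "ksp_type": "gmres",
        "pc_type": "fieldsplit",
        "pc_fieldsplit_type": "multiplicative",
        # Group the 4 fields into two 2x2 blocks (assumed ordering).
        "pc_fieldsplit_0_fields": "0,1",
        "pc_fieldsplit_1_fields": "2,3",
        # Each 2x2 diagonal block solved by hybridisation.
        "fieldsplit_0_ksp_type": "preonly",
        "fieldsplit_0_pc_type": "python",
        "fieldsplit_0_pc_python_type": "firedrake.HybridizationPC",
        "fieldsplit_0_hybridization_ksp_type": "cg",
        "fieldsplit_0_hybridization_pc_type": "gamg",
        "fieldsplit_1_ksp_type": "preonly",
        "fieldsplit_1_pc_type": "python",
        "fieldsplit_1_pc_python_type": "firedrake.HybridizationPC",
        "fieldsplit_1_hybridization_ksp_type": "cg",
        "fieldsplit_1_hybridization_pc_type": "gamg",
    }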

all the best
--cjc

On 13 June 2017 at 13:19, David Ham <David.Ham@imperial.ac.uk> wrote:
Hi All,

Can you provide the basic characteristics of this problem please:

* How many DoFs per realisation?
* What solvers are you using?
* When you say "hammered": how much memory are you using per solver, and how long does creating a solver take?

At the moment you haven't provided enough information for us to understand where you are in parameter space.

Regards,

David

On Tue, 13 Jun 2017 at 14:14 Cotter, Colin J <colin.cotter@imperial.ac.uk> wrote:
Dear Firedrakers,
  I hope that everyone is enjoying FEniCS '17. The weather is glorious back here in London!

Jemma and I are working some more on this REXI+averaging approach. Basically, the way it works is that you solve a lot of independent elliptic problems (with different coefficients but the same RHS), then compute a weighted sum of the solutions (with a different weight for each elliptic problem), which you then use to advance the solution. Eventually we want to make a proper parallel implementation of this, but right now we just need to see what works, so we are running the loop in serial and getting hammered either on memory, if we store all of the solvers, or on time, if we rebuild the solver for each elliptic problem. Is there a quick-and-dirty way to implement this parallelism so that we can make progress? Our starting point is that we don't have a clue about MPI programming!
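A minimal sketch of the serial loop being described, with placeholder coefficients and weights standing in for the real REXI data. Putting the coefficient in a Constant means one solver object can be reused across all the elliptic problems, which sidesteps both the memory and the rebuild costs mentioned above:

    # Sketch: many elliptic solves with different coefficients but the same
    # RHS, followed by a weighted sum. alphas/betas are placeholders, not
    # real REXI coefficients.
    from firedrake import *

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    x, y = SpatialCoordinate(mesh)
    f = Function(V).interpolate(sin(pi*x)*sin(pi*y))  # shared right-hand side

    alphas = [1.0, 2.0, 4.0]   # placeholder elliptic coefficients
    betas = [0.5, 0.3, 0.2]    # placeholder weights

    # Build ONE solver with a Constant coefficient, then update the Constant
    # in the loop: no per-problem solver storage, no per-problem rebuild.
    alpha = Constant(alphas[0])
    w = Function(V)
    a = alpha*inner(grad(u), grad(v))*dx + inner(u, v)*dx
    L = inner(f, v)*dx
    solver = LinearVariationalSolver(LinearVariationalProblem(a, L, w))

    u_sum = Function(V)  # accumulates the weighted sum of solutions
    for a_j, b_j in zip(alphas, betas):
        alpha.assign(a_j)
        solver.solve()
        u_sum.assign(u_sum + b_j*w)

Parallelising the loop would then amount to giving each rank (or sub-communicator) its own slice of the alphas/betas and combining the partial sums with an MPI reduction.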

all the best
--cjc

--
Dr David Ham
Department of Mathematics
Imperial College London