Dear all,

I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!

If you don't have access to the repository you can get a PDF from

https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf

Cheers, Florian
I'd still be interested in feedback on this!

On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
Hi Florian,

Thanks for the reminder, I missed this the first time. Comments follow in no particular order.

I'm really impressed (and amazed) that in the Cahn--Hilliard example, firedrake's assembly is two orders of magnitude faster! That's remarkable. Any idea why that's the case? I assume the DOLFIN runs used the same CFLAGS to the compiler etc? (I didn't see that mentioned anywhere, although I may have missed it) I'm looking forward to reading about why that's the case in the as-yet-unwritten section.

I'd prefer you didn't mention it as a "dolfin-adjoint application", because the O-K solver doesn't really have anything to do with d-a (it's just a repository I store random solvers in). Maybe an acknowledgement at the end for the preconditioner setup or implementation instead of footnote 5?

You should cite Jessica Bosch's and Andy Wathen's paper on the C-H preconditioner:

@article{bosch2014,
  author  = {Bosch, J. and Kay, D. and Stoll, M. and Wathen, A.},
  title   = {Fast solvers for {Cahn--Hilliard} inpainting},
  journal = {SIAM Journal on Imaging Sciences},
  volume  = {7},
  number  = {1},
  pages   = {67--97},
  year    = {2014},
  doi     = {10.1137/130921842},
}

I think it would be clearer to write something like

"""
The inverse Schur complement, S^-1, is approximated by
\begin{equation}
S^-1 \approx \hat{S}^-1 = H^-1 M H^-1,
\end{equation}
where H and M are ...
"""

rather than the paragraph after (26-27), which is unnecessarily verbose.

Are there actual solves with H^-1 done, or does it just use one AMG V-cycle? (My experience with the O-K solver is that you're much better off doing the latter, but you all know what you're doing).

I'm surprised that MATNEST doesn't make as much difference, I thought it would do more. It would be nice to see the memory usage too: I'm guessing that's where MATNEST would make a bigger difference. At scale (to billions of DOFs) I only run with 2/24 cores per node because of memory limitations, probably because of all the damn copies.

In the graphs, it would be nice to have a "total runtime" to compare dolfin and firedrake from a user's perspective, as well as the breakdown into assembly and solve etc.

Speaking of the graphs, is there a reason for the choice of cyan-magenta-brown? I'd imagine there are colour combinations that would be easier to read. Maybe the 538 style (http://matplotlib.org/examples/style_sheets/plot_fivethirtyeight.html)? Does that cause difficulties for the daltonists among us?

Do you guys run into problems with starting the Python interpreter on many cores? Chris Richardson's been doing some work on that, and has had some partial success with zipping the files the Python interpreter loads; if you've solved this problem, or ran into it, it would be good to mention it in the paper.

Cheerio,
Patrick
On 11 Oct 2014 15:16, "Patrick Farrell" <patrick.farrell@maths.ox.ac.uk> wrote:
I'd imagine there are colour combinations that would be easier to read. Maybe the 538 style (http://matplotlib.org/examples/style_sheets/plot_fivethirtyeight.html)? Does that cause difficulties for the daltonists among us?
I don't know. Is one of those supposed to be green? Cjc
On 11 Oct 2014, at 16:37, Colin Cotter <cjcimperial@gmail.com> wrote:
On 11 Oct 2014 15:16, "Patrick Farrell" <patrick.farrell@maths.ox.ac.uk> wrote:
I'd imagine there are colour combinations that would be easier to read. Maybe the 538 style (http://matplotlib.org/examples/style_sheets/plot_fivethirtyeight.html)? Does that cause difficulties for the daltonists among us?
I don't know. Is one of those supposed to be green? Cjc
Red, blue and browny orange, so no. Lawrence
On 11 Oct 2014, at 15:16, Patrick Farrell <patrick.farrell@maths.ox.ac.uk> wrote:
I'm surprised that MATNEST doesn't make as much difference, I thought it would do more. It would be nice to see the memory usage too: I'm guessing that's where MATNEST would make a bigger difference. At scale (to billions of DOFs) I only run with 2/24 cores per node because of memory limitations, probably because of all the damn copies.
I'd expect those copies to be about the cost of a few matvecs in time; as you say, the memory problems are probably bigger. Note as well we haven't run really big problems (nothing like billions of dofs), so arguably we're not at "extreme scale" yet. ...
Do you guys run into problems with starting the Python interpreter on many cores? Chris Richardson's been doing some work on that, and has had some partial success with zipping the files the Python interpreter loads; if you've solved this problem, or ran into it, it would be good to mention it in the paper.
I'm pretty sure we don't have a solution to the problem. We've had other problems running at scale which have possibly been masking the python load problem (now fixed), so we haven't really been looking at it. But maybe Florian can comment otherwise. Lawrence
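For readers not familiar with the trick Patrick refers to: the idea is to bundle the pure-Python parts of the stack into a single zip archive and put that archive on sys.path, so each MPI rank opens one file instead of stat-ing thousands of small .py files on the parallel filesystem. The sketch below only illustrates that general technique; it is not Chris Richardson's actual setup, and the package list and archive name are made up.

    import os
    import sys
    import zipfile

    def bundle_packages(packages, archive="pybundle.zip"):
        """Pack the pure-Python sources of the given packages into one zip.

        Python's built-in zipimport machinery can import straight from the
        archive, so at scale every MPI rank opens a single file rather than
        touching thousands of small .py files on the parallel filesystem.
        """
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
            for pkg in packages:
                root = os.path.dirname(__import__(pkg).__file__)
                for dirpath, _, filenames in os.walk(root):
                    for name in filenames:
                        if name.endswith(".py"):
                            full = os.path.join(dirpath, name)
                            # Store paths relative to the directory containing
                            # the package so 'import pkg.sub.mod' resolves
                            # inside the zip.
                            zf.write(full, os.path.relpath(full, os.path.dirname(root)))
        return archive

    if __name__ == "__main__":
        # Hypothetical usage: bundle a made-up list of packages and put the
        # archive on the import path, e.g. via PYTHONPATH on the compute nodes.
        sys.path.insert(0, bundle_packages(["json", "email"]))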
On 11/10/14 15:16, Patrick Farrell wrote:
On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
Hi Florian,
Thanks for the reminder, I missed this the first time. Comments follow in no particular order.
Many thanks for those very helpful comments! Some remarks inline.
I'm really impressed (and amazed) that in the Cahn--Hilliard example, firedrake's assembly is two orders of magnitude faster! That's remarkable. Any idea why that's the case? I assume the DOLFIN runs used the same CFLAGS to the compiler etc? (I didn't see that mentioned anywhere, although I may have missed it) I'm looking forward to reading about why that's the case in the as-yet-unwritten section.
Generated code for DOLFIN is compiled with -O3 -ffast-math -march=native (as recommended by Marie), for Firedrake with -O3 -fno-tree-vectorize. I have now mentioned those. I'm using the DOLFIN build maintained by Chris and yourself (fenics/dev), so I'm assuming this is highly optimised; I'd be interested to know those flags. The performance difference afaict is due to a combination of 1) splitting the mixed forms, 2) caching of ParLoop objects on the residual/Jacobian forms, and 3) lower execution overhead and inlining of PyOP2 kernels.
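For reference, the place where such flags would normally be set on the DOLFIN side is the global form compiler parameters; the snippet below is a sketch under that assumption. The parameter names are the standard DOLFIN ones, but whether the benchmark runs set them exactly like this is not stated above, and the Firedrake/PyOP2 flags are configured separately.

    from dolfin import parameters

    # Sketch: flags passed to the C++ compiler for the FFC-generated code,
    # mirroring the flags quoted above. Assumed to be set via DOLFIN's global
    # form compiler parameters; not necessarily the exact benchmark setup.
    parameters["form_compiler"]["cpp_optimize"] = True
    parameters["form_compiler"]["cpp_optimize_flags"] = "-O3 -ffast-math -march=native"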
I'd prefer you didn't mention it as a "dolfin-adjoint application", because the O-K solver doesn't really have anything to do with d-a (it's just a repository I store random solvers in). Maybe an acknowledgement at the end for the preconditioner setup or implementation instead of footnote 5?
You should cite Jessica Bosch's and Andy Wathen's paper on the C-H preconditioner:
@article{bosch2014,
  author  = {Bosch, J. and Kay, D. and Stoll, M. and Wathen, A.},
  title   = {Fast solvers for {Cahn--Hilliard} inpainting},
  journal = {SIAM Journal on Imaging Sciences},
  volume  = {7},
  number  = {1},
  pages   = {67--97},
  year    = {2014},
  doi     = {10.1137/130921842},
}
I think it would be clearer to write something like
""" The inverse Schur complement, S^-1, is approximated by \begin{equation} S^-1 \approx \hat{S}^-1 = H^-1 M H^-1, \end{equation} where H and M are ... """
rather than the paragraph after (26-27), which is unnecessarily verbose.
Thanks, I have incorporated your suggestions.
Are there actual solves with H^-1 done, or does it just use one AMG V-cycle? (My experience with the O-K solver is that you're much better off doing the latter, but you all know what you're doing).
It is just using one AMG V-cycle. My experience was the same as yours: anything more than one V-cycle slows things down considerably.
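To make the "one V-cycle" point concrete, here is a sketch of how that choice can be expressed through PETSc options from Firedrake: with ksp_type "preonly" each application of the H-solve inside \hat{S}^-1 reduces to a single application of the preconditioner, and hypre/BoomerAMG applies one V-cycle per application by default. The option names are standard PETSc ones, but the dict name is hypothetical and this is not the exact configuration used for the Cahn--Hilliard results.

    # Sketch: options for the sub-solves with H inside the Schur complement
    # approximation -- 'preonly' + 'hypre' means each H^-1 application is a
    # single BoomerAMG V-cycle rather than an actual solve.
    h_solve_parameters = {
        "ksp_type": "preonly",
        "pc_type": "hypre",
        "pc_hypre_type": "boomeramg",
    }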
I'm surprised that MATNEST doesn't make as much difference, I thought it would do more. It would be nice to see the memory usage too: I'm guessing that's where MATNEST would make a bigger difference. At scale (to billions of DOFs) I only run with 2/24 cores per node because of memory limitations, probably because of all the damn copies.
Lawrence has already commented on this.
In the graphs, it would be nice to have a "total runtime" to compare dolfin and firedrake from a user's perspective, as well as the breakdown into assembly and solve etc.
I'm not convinced the total runtime is a very useful comparison at the moment due to the very different implementations of the mesh generator and the well-known performance issues with DMPlex, which would distort those timings. A breakdown into assembly and solve could be useful.
Speaking of the graphs, is there a reason for the choice of cyan-magenta-brown? I'd imagine there are colour combinations that would be easier to read. Maybe the 538 style (http://matplotlib.org/examples/style_sheets/plot_fivethirtyeight.html)? Does that cause difficulties for the daltonists among us?
This is the default colour cycle of the "Set2" colour map from ColorBrewer. I'll try the style you suggest, thanks.
Do you guys run into problems with starting the Python interpreter on many cores? Chris Richardson's been doing some work on that, and has had some partial success with zipping the files the Python interpreter loads; if you've solved this problem, or ran into it, it would be good to mention it in the paper.
We have not, as Lawrence mentioned. Florian
Cheerio,
Patrick
On 12/10/14 21:04, Florian Rathgeber wrote:
On 11/10/14 15:16, Patrick Farrell wrote:
Speaking of the graphs, is there a reason for the choice of cyan-magenta-brown? I'd imagine there are colour combinations that would be easier to read. Maybe the 538 style (http://matplotlib.org/examples/style_sheets/plot_fivethirtyeight.html)? Does that cause difficulties for the daltonists among us?
This is the default colour cycle of the "Set2" colour map from ColorBrewer. I'll try the style you suggest, thanks.
Turns out that's no easy win: I customise the plots very heavily, so using one of the pre-defined style sheets hardly has an effect and backing out my customisations doesn't really work either. One thing I do is match the line colours for DOLFIN and Firedrake and distinguish by line style, which I can't really do with the style presets, because they have a fixed number of colours in the cycle. I might just change the colour map. Any further advice on this welcome. Florian
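For concreteness, a small self-contained matplotlib sketch of that convention (dummy data and labels, not the paper's plotting code): the same colour for the same quantity in both frameworks, with line style and markers carrying the DOLFIN/Firedrake distinction so the plots survive greyscale printing.

    import matplotlib.pyplot as plt

    # Dummy data purely for illustration: core counts vs. made-up timings.
    cores = [1, 2, 4, 8, 16]
    dolfin_assembly = [10.0, 5.3, 2.9, 1.6, 1.0]
    firedrake_assembly = [8.0, 4.1, 2.2, 1.2, 0.7]

    fig, ax = plt.subplots()
    # Same colour for the same quantity in both frameworks; the line style
    # (solid vs. dashed) and the marker distinguish DOLFIN from Firedrake.
    ax.loglog(cores, dolfin_assembly, color="#1f77b4", linestyle="-", marker="o",
              label="DOLFIN assembly")
    ax.loglog(cores, firedrake_assembly, color="#1f77b4", linestyle="--", marker="s",
              label="Firedrake assembly")
    ax.set_xlabel("Number of cores")
    ax.set_ylabel("Time [s]")
    ax.legend()
    fig.savefig("assembly_scaling.pdf")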
Hi Florian,

in section 8.2.2 on the Poisson solve, do you know how much of the non-perfect scaling of the solver can be attributed to an increase in the number of CG iterations? In theory the number of iterations should stay constant with a multigrid preconditioner, but I think this is only true if you use geometric MG with a FMG cycle and solve to a tolerance which depends on the grid resolution. I would suspect that the number of iterations grows, so would it be worth mentioning that part of the reason for the poorer scaling of the solver is algorithmic (and say how much the number of iterations increases)? Since the parallel AMG is algorithmically not identical to the sequential version, algorithmic and parallel scalability can only be partly separated, but it might still be worth mentioning it.

Thanks,
Eike

--
Dr Eike Hermann Mueller
Research Associate (PostDoc)
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 5633
e.mueller@bath.ac.uk
http://people.bath.ac.uk/em459/
On 11 Oct 2014, at 11:27, Florian Rathgeber <florian.rathgeber@imperial.ac.uk> wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 12/10/14 11:29, Eike Mueller wrote:
Hi Florian,
in section 8.2.2 on the Poisson solve, do you know how much of the non-perfect scaling of the solver can be attributed to an increase in the number of CG iterations? In theory the number of iterations should stay constant with a multigrid preconditioner, but I think this is only true if you use geometric MG with a FMG cycle and solve to a tolerance which depends on the grid resolution. I would suspect that the number of iterations grows, so would it be worth mentioning that part of the reason for the poorer scaling of the solver is algorithmic (and say how much the number of iterations increases)? Since the parallel AMG is algorithmically not identical to the sequential version, algorithmic and parallel scalability can only be partly separated, but it might still be worth mentioning it.
The number of iterations varies slightly, but not significantly (cores: iterations): 1: 21, 3: 21, 6: 22, 12: 22, 24: 22, 48: 22, 96: 22, 192: 22, 384: 21, 768: 22, 1536: 22

The number of levels in the AMG preconditioner is (cores: levels): 1: 20, 3: 19, 6: 19, 12: 20, 24: 20, 48: 20, 96: 20, 192: 19, 384: 20, 768: 20, 1536: 20

Thanks for that explanation, I'll add a sentence or 2.

Florian
Thanks,
Eike
On 11 Oct 2014, at 11:27, Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>> wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
Hi Florian,

thanks, that's interesting, is this for the weak scaling in Fig. 3? For the strong scaling (Fig. 2) the number of iterations and levels should stay constant (sorry, I should have been a bit more clear in my previous email). If yes, then I suspect that hypre uses a direct (or iterative) coarse grid solver, which will not scale (algorithmically) since, for a constant number of levels of ~20 the global problem size on the coarsest level will grow.

For the strong scaling, the bottleneck will be the parallel scalability of the coarse grid solver.

Eike

On 12/10/14 21:31, Florian Rathgeber wrote:
On 12/10/14 11:29, Eike Mueller wrote:
Hi Florian,
in section 8.2.2 on the Poisson solve, do you know how much of the non-perfect scaling of the solver can be attributed to an increase in the number of CG iterations? In theory the number of iterations should stay constant with a multigrid preconditioner, but I think this is only true if you use geometric MG with a FMG cycle and solve to a tolerance which depends on the grid resolution. I would suspect that the number of iterations grows, so would it be worth mentioning that part of the reason for the poorer scaling of the solver is algorithmic (and say how much the number of iterations increases)? Since the parallel AMG is algorithmically not identical to the sequential version, algorithmic and parallel scalability can only be partly separated, but it might still be worth mentioning it.

The number of iterations varies slightly, but not significantly: 1: 21, 3: 21, 6: 22, 12: 22, 24: 22, 48: 22, 96: 22, 192: 22, 384: 21, 768: 22, 1536: 22
The number of levels in the AMG preconditioner is: 1: 20, 3: 19, 6: 19, 12: 20, 24: 20, 48: 20, 96: 20, 192: 19, 384: 20, 768: 20, 1536: 20
Thanks for that explanation, I'll add a sentence or 2.
Florian
Thanks,
Eike
On 11 Oct 2014, at 11:27, Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>> wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 13/10/14 08:58, Eike Mueller wrote:
Hi Florian,
thanks, that's interesting, is this for the weak scaling in Fig. 3? For the strong scaling (Fig. 2) the number of iterations and levels should stay constant (sorry, I should have been a bit more clear in my previous email). If yes, then I suspect that hypre uses a direct (or iterative) coarse grid solver, which will not scale (algorithmically) since, for a constant number of levels of ~20 the global problem size on the coarsest level will grow.
These were for P3 strong scaling. For P1 weak scaling with 1k DOFs/core we get:

Levels (cores: levels): 1: 4, 3: 6, 6: 7, 12: 9, 24: 10, 48: 11, 96: 12, 192: 13, 384: 13, 768: 15, 1536: 16

KSP iterations (cores: iterations): 1: 7, 3: 8, 6: 9, 12: 10, 24: 11, 48: 12, 96: 13, 192: 13, 384: 15, 768: 15, 1536: 16
For the strong scaling, the bottleneck will be the parallel scalability of the coarse grid solver.
The problem at the coarsest grid is tiny (4 rows on 1536 cores). I had experimented with restricting the number of levels but found it hardly made a difference and was an awfully hard-to-tune parameter.

Florian
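For completeness, the knob in question: the BoomerAMG hierarchy depth can be capped through the standard PETSc option shown below. The value 10 is an arbitrary example and the dict is only a sketch of how it would be passed from Firedrake, not the configuration used for the reported runs.

    # Sketch: cap the BoomerAMG hierarchy depth for the CG/AMG Poisson solve.
    # 'pc_hypre_boomeramg_max_levels' is the standard PETSc/hypre option; the
    # value 10 is an arbitrary example, not a recommendation.
    solver_parameters = {
        "ksp_type": "cg",
        "pc_type": "hypre",
        "pc_hypre_type": "boomeramg",
        "pc_hypre_boomeramg_max_levels": 10,
    }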
Eike
On 12/10/14 21:31, Florian Rathgeber wrote:
On 12/10/14 11:29, Eike Mueller wrote:
Hi Florian,
in section 8.2.2 on the Poisson solve, do you know how much of the non-perfect scaling of the solver can be attributed to an increase in the number of CG iterations? In theory the number of iterations should stay constant with a multigrid preconditioner, but I think this is only true if you use geometric MG with a FMG cycle and solve to a tolerance which depends on the grid resolution. I would suspect that the number of iterations grows, so would it be worth mentioning that part of the reason for the poorer scaling of the solver is algorithmic (and say how much the number of iterations increases)? Since the parallel AMG is algorithmically not identical to the sequential version, algorithmic and parallel scalability can only be partly separated, but it might still be worth mentioning it.

The number of iterations varies slightly, but not significantly: 1: 21, 3: 21, 6: 22, 12: 22, 24: 22, 48: 22, 96: 22, 192: 22, 384: 21, 768: 22, 1536: 22
The number of levels in the AMG preconditioner is: 1: 20, 3: 19, 6: 19, 12: 20, 24: 20, 48: 20, 96: 20, 192: 19, 384: 20, 768: 20, 1536: 20
Thanks for that explanation, I'll add a sentence or 2.
Florian
Thanks,
Eike
On 11 Oct 2014, at 11:27, Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>> wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
Hi Florian,

this agrees with what I had expected: at least part of the increase in runtime is due to the increase in the number of iterations (which roughly doubles), i.e. algorithmic growth.

Eike

On 13/10/14 09:12, Florian Rathgeber wrote:
On 13/10/14 08:58, Eike Mueller wrote:
Hi Florian,
thanks, that's interesting, is this for the weak scaling in Fig. 3? For the strong scaling (Fig. 2) the number of iterations and levels should stay constant (sorry, I should have been a bit more clear in my previous email). If yes, then I suspect that hypre uses a direct (or iterative) coarse grid solver, which will not scale (algorithmically) since, for a constant number of levels of ~20 the global problem size on the coarsest level will grow. These were for P3 strong scaling. For P1 weak scaling with 1k DOFs/core we get:
Levels: 1: 4, 3: 6, 6: 7, 12: 9, 24: 10, 48: 11, 96: 12, 192: 13, 384: 13, 768: 15, 1536: 16
KSP iterations: 1: 7, 3: 8, 6: 9, 12: 10, 24: 11, 48: 12, 96: 13, 192: 13, 384: 15, 768: 15, 1536: 16
For the strong scaling, the bottleneck will be the parallel scalability of the coarse grid solver.

The problem at the coarsest grid is tiny (4 rows on 1536 cores). I had experimented with restricting the number of levels but found it hardly made a difference and was an awfully hard-to-tune parameter.
Florian
Eike
On 12/10/14 21:31, Florian Rathgeber wrote:
On 12/10/14 11:29, Eike Mueller wrote:
Hi Florian,
in section 8.2.2 on the Poisson solve, do you know how much of the non-perfect scaling of the solver can be attributed to an increase in the number of CG iterations? In theory the number of iterations should stay constant with a multigrid preconditioner, but I think this is only true if you use geometric MG with a FMG cycle and solve to a tolerance which depends on the grid resolution. I would suspect that the number of iterations grows, so would it be worth mentioning that part of the reason for the poorer scaling of the solver is algorithmic (and say how much the number of iterations increases)? Since the parallel AMG is algorithmically not identical to the sequential version, algorithmic and parallel scalability can only be partly separated, but it might still be worth mentioning it.

The number of iterations varies slightly, but not significantly: 1: 21, 3: 21, 6: 22, 12: 22, 24: 22, 48: 22, 96: 22, 192: 22, 384: 21, 768: 22, 1536: 22
The number of levels in the AMG preconditioner is: 1: 20, 3: 19, 6: 19, 12: 20, 24: 20, 48: 20, 96: 20, 192: 19, 384: 20, 768: 20, 1536: 20
Thanks for that explanation, I'll add a sentence or 2.
Florian
Thanks,
Eike
On 11 Oct 2014, at 11:27, Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>> wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling) and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf

Cheers, Florian

On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
I would encourage everyone to completely avoid colours in 2d graphs except for aesthetic reasons. You never know what device or printer (or retina) the reader is using.

Cjc

________________________________________
From: firedrake-bounces@imperial.ac.uk [firedrake-bounces@imperial.ac.uk] on behalf of Florian Rathgeber [florian.rathgeber@imperial.ac.uk]
Sent: 15 October 2014 09:02
To: firedrake
Subject: Re: [firedrake] Firedrake paper results

Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling) and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf

Cheers, Florian

On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 15/10/14 09:46, Cotter, Colin J wrote:
I would encourage everyone to completely avoid colours in 2d graphs except for aesthetic reasons. You never know what device or printer (or retina) the reader is using.
I'm doing that by also having different markers for different lines (paired with colours) and differentiating DOLFIN/Firedrake by the line style. From your perspective, is that sufficient?
Cjc

________________________________________
From: firedrake-bounces@imperial.ac.uk [firedrake-bounces@imperial.ac.uk] on behalf of Florian Rathgeber [florian.rathgeber@imperial.ac.uk]
Sent: 15 October 2014 09:02
To: firedrake
Subject: Re: [firedrake] Firedrake paper results
Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling) and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
2014-10-15 9:02 GMT+01:00 Florian Rathgeber < florian.rathgeber@imperial.ac.uk>:
Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling)
I haven't followed the thread in detail so I'm not sure this is relevant, but there is Olga Botvinnik's prettyplotlib [1] which might help to avoid fiddling with matplotlib manually too much. Actually, I just saw her blog post [2] that she's just stopped actively developing it but I'm sure it still works, and she also points to seaborn [3] as an alternative. Maybe either of these helps.

Cheers, Max

[1] http://blog.olgabotvinnik.com/prettyplotlib/
[2] http://blog.olgabotvinnik.com/blog/2014/10/06/no-longer-actively-developing-...
[3] https://github.com/mwaskom/seaborn
and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 15/10/14 10:22, Maximilian Albert wrote:
2014-10-15 9:02 GMT+01:00 Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>>:
Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling)
I haven't followed the thread in detail so I'm not sure this is relevant, but there is Olga Botvinnik's prettyplotlib [1] which might help to avoid fiddling with matplotlib manually too much. Actually, I just saw her blog post [2] that she's just stopped actively developing it but I'm sure it still works, and she also points to seaborn [3] as an alternative. Maybe either of these helps.
Thanks, I'm aware of that. The problem is the same as with the style sheets introduced in 1.4 which Patrick suggested: I *need* to heavily customise the plots because:

1) I want to have matching lines for DOLFIN/Firedrake in the same colour, which is not supported by any of the styles
2) I need custom subplots to show different plots side-by-side.
3) I need to be able to hide and/or override the tick labels

So alas I think a style preset is not an option, though I could maybe use it as a starting point and then selectively override.

Florian
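In case it is useful, a sketch of the "preset as a starting point, then selectively override" route with plain matplotlib (all data, labels and the output file name below are dummy placeholders, not the paper's plotting code): load a style sheet, lay out custom side-by-side panels, and hide the tick labels that only repeat information.

    import matplotlib.pyplot as plt

    # Start from a built-in style sheet (the style-sheet machinery added in
    # matplotlib 1.4) ...
    plt.style.use("fivethirtyeight")

    # ... then selectively override what the preset does not cover.
    cores = [1, 2, 4, 8]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))  # custom side-by-side panels

    ax1.loglog(cores, [4.0, 2.1, 1.2, 0.7], marker="o")
    ax1.set_title("assemble")
    ax1.set_ylabel("Time [s]")

    ax2.loglog(cores, [3.1, 1.7, 0.9, 0.5], marker="o")
    ax2.set_title("solve")
    ax2.tick_params(labelleft=False)  # hide the redundant y tick labels

    fig.savefig("styled_example.pdf")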
Cheers, Max
[1] http://blog.olgabotvinnik.com/prettyplotlib/ [2] http://blog.olgabotvinnik.com/blog/2014/10/06/no-longer-actively-developing-... [3] https://github.com/mwaskom/seaborn
and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/10/14 11:27, Florian Rathgeber wrote:
> I'd still be interested in feedback on this!
>
> On 04/10/14 09:12, Florian Rathgeber wrote:
>> Dear all,
>>
>> I finished a first draft of the results section for the Firedrake paper.
>> Any feedback gratefully received!
>>
>> If you don't have access to the repository you can get a PDF from
>>
>> https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
>>
>> Cheers,
>> Florian
2014-10-15 10:30 GMT+01:00 Florian Rathgeber < florian.rathgeber@imperial.ac.uk>:
Thanks, I'm aware of that. The problem is the same as with the style sheets introduced in 1.4 which Patrick suggested: I *need* to heavily customise the plots because:
1) I want to have matching lines for DOLFIN/Firedrake in the same colour, which is not supported by any of the styles
2) I need custom subplots to show different plots side-by-side.
3) I need to be able to hide and/or override the tick labels
So alas I think a style preset is not an option, though I could maybe use it as a starting point and then selectively override.
Ah, I see. Yes, that makes things more tricky. Sorry for the noise then! Best wishes, Max
p12 §8 - I thought the kernels were being compiled with the intel compiler?

p12 §8.2.1 know -> known

On 15 Oct 2014, at 10:30, Florian Rathgeber wrote:
On 15/10/14 10:22, Maximilian Albert wrote:
2014-10-15 9:02 GMT+01:00 Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>>:
Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling)
I haven't followed the thread in detail so I'm not sure this is relevant, but there is Olga Botvinnik's prettyplotlib [1] which might help to avoid fiddling with matplotlib manually too much. Actually, I just saw her blog post [2] that she's just stopped actively developing it but I'm sure it still works, and she also points to seaborn [3] as an alternative. Maybe either of these helps.
Thanks, I'm aware of that. The problem is the same as with the style sheets introduced in 1.4 which Patrick suggested: I *need* to heavily customise the plots because:
1) I want to have matching lines for DOLFIN/Firedrake in the same colour, which is not supported by any of the styles
2) I need custom subplots to show different plots side-by-side.
3) I need to be able to hide and/or override the tick labels
So alas I think a style preset is not an option, though I could maybe use it as a starting point and then selectively override.
Florian
Cheers, Max
[1] http://blog.olgabotvinnik.com/prettyplotlib/ [2] http://blog.olgabotvinnik.com/blog/2014/10/06/no-longer-actively-developing-... [3] https://github.com/mwaskom/seaborn
and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/11/14 15:56, Kelly, Paul H J wrote:
p12 §8 - I thought the kernels were being compiled with the intel compiler?
Not on ARCHER. The reason being that we can't use the Intel compiler on the backend nodes because it can't talk to the licensing server...
p12 §8.2.1 know -> known
Fixed, thanks!
On 15 Oct 2014, at 10:30, Florian Rathgeber wrote:
On 15/10/14 10:22, Maximilian Albert wrote:
2014-10-15 9:02 GMT+01:00 Florian Rathgeber <florian.rathgeber@imperial.ac.uk <mailto:florian.rathgeber@imperial.ac.uk>>:
Thanks again for all the feedback! This is now incorporated (apart from the plot colours, need to do some further matplotlib fiddling)
I haven't followed the thread in detail so I'm not sure this is relevant, but there is Olga Botvinnik's prettyplotlib [1] which might help to avoid fiddling with matplotlib manually too much. Actually, I just saw her blog post [2] that she's just stopped actively developing it but I'm sure it still works, and she also points to seaborn [3] as an alternative. Maybe either of these helps.
Thanks, I'm aware of that. The problem is the same as with the style sheets introduced in 1.4 which Patrick suggested: I *need* to heavily customise the plots because:
1) I want to have matching lines for DOLFIN/Firedrake in the same colour, which is not supported by any of the styles
2) I need custom subplots to show different plots side-by-side.
3) I need to be able to hide and/or override the tick labels
So alas I think a style preset is not an option, though I could maybe use it as a starting point and then selectively override.
Florian
Cheers, Max
[1] http://blog.olgabotvinnik.com/prettyplotlib/ [2] http://blog.olgabotvinnik.com/blog/2014/10/06/no-longer-actively-developing-... [3] https://github.com/mwaskom/seaborn
and the PDF is updated: https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
https://wwwhomes.doc.ic.ac.uk/~fr710/paper.pdf
Cheers, Florian
participants (9)

- Colin Cotter
- Cotter, Colin J
- Eike Mueller
- Florian Rathgeber
- Florian Rathgeber
- Kelly, Paul H J
- Lawrence Mitchell
- Maximilian Albert
- Patrick Farrell