On 11/10/14 11:27, Florian Rathgeber wrote:
I'd still be interested in feedback on this!
On 04/10/14 09:12, Florian Rathgeber wrote:
Dear all,
I finished a first draft of the results section for the Firedrake paper. Any feedback gratefully received!
If you don't have access to the repository you can get a PDF from
Hi Florian,

Thanks for the reminder, I missed this the first time. Comments follow in no particular order.

I'm really impressed (and amazed) that in the Cahn--Hilliard example, Firedrake's assembly is two orders of magnitude faster! That's remarkable. Any idea why that's the case? I assume the DOLFIN runs passed the same CFLAGS etc. to the compiler? (I didn't see that mentioned anywhere, although I may have missed it.) I'm looking forward to reading about why that's the case in the as-yet-unwritten section.

I'd prefer you didn't mention it as a "dolfin-adjoint application", because the O-K solver doesn't really have anything to do with d-a (it's just a repository I store random solvers in). Maybe an acknowledgement at the end for the preconditioner setup or implementation instead of footnote 5?

You should cite Jessica Bosch's and Andy Wathen's paper on the C-H preconditioner:

@article{bosch2014,
  author  = {Bosch, J. and Kay, D. and Stoll, M. and Wathen, A.},
  title   = {Fast solvers for {Cahn--Hilliard} inpainting},
  journal = {SIAM Journal on Imaging Sciences},
  volume  = {7},
  number  = {1},
  pages   = {67--97},
  year    = {2014},
  doi     = {10.1137/130921842},
}

I think it would be clearer to write something like

"""
The inverse Schur complement, $S^{-1}$, is approximated by
\begin{equation}
  S^{-1} \approx \hat{S}^{-1} = H^{-1} M H^{-1},
\end{equation}
where $H$ and $M$ are ...
"""

rather than the paragraph after (26-27), which is unnecessarily verbose.

Are there actual solves with H^-1 done, or does it just use one AMG V-cycle (rough sketch of the two variants I mean below)? (My experience with the O-K solver is that you're much better off doing the latter, but you all know what you're doing.)

I'm surprised that MATNEST doesn't make as much difference; I thought it would do more. It would be nice to see the memory usage too: I'm guessing that's where MATNEST would make a bigger difference. At scale (to billions of DOFs) I only run with 2/24 cores per node because of memory limitations, probably because of all the damn copies.

In the graphs, it would be nice to have a "total runtime" to compare DOLFIN and Firedrake from a user's perspective, as well as the breakdown into assembly and solve etc. Speaking of the graphs, is there a reason for the choice of cyan-magenta-brown? I'd imagine there are colour combinations that would be easier to read. Maybe the 538 style (http://matplotlib.org/examples/style_sheets/plot_fivethirtyeight.html)? Switching is a one-liner in recent matplotlib (see below). Does that cause difficulties for the daltonists among us?

Do you guys run into problems with starting the Python interpreter on many cores? Chris Richardson's been doing some work on that, and has had some partial success with zipping the files the Python interpreter loads (rough sketch of the idea below); if you've solved this problem, or ran into it, it would be good to mention it in the paper.

Cheerio,

Patrick
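On the H^-1 question: in case it helps to make it concrete, here is a minimal sketch of the two variants I mean, written as PETSc-style option dictionaries. These are purely illustrative -- I have no idea how your preconditioner is actually wired up, and the option values are my guesses, not your configuration.

```python
# Variant 1: an actual inner Krylov solve with H for every application
# of H^{-1} (iterates to a tolerance each time).
inner_solve = {
    "ksp_type": "cg",       # inner CG solve on each application
    "ksp_rtol": 1e-8,       # illustrative tolerance
    "pc_type": "hypre",     # AMG as the inner preconditioner
}

# Variant 2: a single AMG V-cycle per application, no inner iteration.
single_vcycle = {
    "ksp_type": "preonly",  # apply the preconditioner exactly once
    "pc_type": "hypre",     # one BoomerAMG V-cycle per application
}
```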
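On the plot style: for what it's worth, if your matplotlib is 1.4 or newer the 538 style sheet ships with it, so trying it out costs nothing (you'd still want to sanity-check the colours against a colour-blindness simulator):

```python
import matplotlib.pyplot as plt

print(plt.style.available)        # check the style sheet ships with your matplotlib
plt.style.use("fivethirtyeight")  # all subsequent figures then pick it up
```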
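On the interpreter start-up point: as I understand it, the trick is simply to bundle the pure-Python parts of the stack into one zip archive and put that on sys.path, so start-up on many cores does one big read instead of thousands of small stat/open calls on the parallel filesystem. A hypothetical sketch (paths are made up, and it only works for pure-Python modules -- compiled extensions still have to live on disk):

```python
import os
import sys
import zipfile

# Hypothetical paths -- adjust to wherever the pure-Python packages live.
SITE_PACKAGES = "/path/to/site-packages"
ARCHIVE = "/path/to/firedrake-modules.zip"

# Done once, before the job starts: bundle all .py files into one archive.
zf = zipfile.ZipFile(ARCHIVE, "w", zipfile.ZIP_DEFLATED)
for root, dirs, files in os.walk(SITE_PACKAGES):
    for name in files:
        if name.endswith(".py"):
            full = os.path.join(root, name)
            zf.write(full, os.path.relpath(full, SITE_PACKAGES))
zf.close()

# At run time: prepend the archive to sys.path; Python's zipimport then
# serves pure-Python imports from the single zip file.
sys.path.insert(0, ARCHIVE)
```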