Tuesday 30 June at 1610 in Huxley 311. No registration required.
Dear all,
FEniCS is a system for automatically creating numerical simulations. As a
part of the FEniCS '15 workshop at Imperial, FEniCS project founder Prof.
Anders Logg of Chalmers University of Technology will be presenting a
public lecture. This is an opportunity to hear about some of the latest
advances in automated simulation technology from one of the leaders in the
field.
Prof. Logg's abstract is below; I look forward to seeing you there.
Regards,
David
*Implementing mathematics: domain specific languages and automated
computing*
Computer simulation is today an indispensable tool for scientists and
engineers in modeling, understanding and predicting nature. Having emerged
as a complement to theory and experimentation, it is becoming increasingly
more important as a result of advancements in hardware, software and
algorithms.
However, in spite of its success and ever increasing importance, simulation
software is still largely written by hand, following a primitive, outdated
and unsustainable pipeline: first express a model in the language of
mathematics, then translate this model - using pen and paper - to a complex
system of data structures and algorithms, then express those data
structures and algorithms in a programming language. Even if those
algorithms can today be expressed in high level programming languages, the
pipeline still involves the translation (obfuscation) of the mathematical
model to computer code.
In this talk, I will argue that we should not strive to translate
mathematical models or methods to computer code. Instead, we should strive
to develop exact computer representations of mathematics that make the
original mathematical model or method native to the mathematical /
programming language.
I will highlight three examples of ongoing work in this direction. First,
the FEniCS Project, an ongoing effort to develop a domain specific language
for expression and solution of partial differential equations; second, an
application of the domain specific language of FEniCS for expressing the
Einstein-Vlasov equations and computing the mass distribution of galaxies;
third, a new effort to implement the abstractions of exterior calculus in a
functional programming language (Haskell) to express and thereby compute
all elements of the periodic table of finite elements.
Acknowledgments: This talk is based on joint work with many people, in
particular the developers of the FEniCS Project (http://fenicsproject.org);
Håkan Andreasson and Ellery Ames (Einstein-Vlasov); Mary Sheeran, Patrik
Jansson, Irene Lobo Valbuena, Simon Pfreundschuh and Andreas Rosén
(functional finite element exterior calculus); and Douglas Arnold (periodic
table of the finite elements).
Hi Tuomas,
Concerning the set of spaces this will work for: I think that the current
implementation probably works for any horizontal space, but only for
vertical CG. One could just try relaxing the test in that case.
The reason for the restriction to vertical CG is that fs.bt_masks uses the
topological association of nodes with mesh entities to work out which nodes
are on the top or bottom of the cell. To allow DG in the vertical, it would
be necessary to support the geometric definition instead (i.e. which basis
functions do not vanish on the top/bottom).
If you look at functionspace.py:78 you can see where the bottom and top
masks are generated. This uses entity_closure_dofs() from FIAT. In order to
use the geometric definition of dofs one would need to support using
facet_support_dofs() (which is how BC maps are set up at
functionspace.py:375). Currently FIAT TensorFiniteElement objects do
support entity_closure_dofs() but nobody has done the legwork to get them
to support facet_support_dofs().
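To illustrate the distinction (a toy sketch in plain NumPy, not the FIAT or
Firedrake API; the dicts below just mimic what an entity-to-dof table looks
like): for P1 on the interval [-1, 1], the topological route attaches dofs
to mesh entities, which yields nothing for DG, while the geometric route
tabulates the basis on the facet and keeps whatever does not vanish there:

```python
import numpy as np

# Lagrange P1 basis on [-1, 1]: phi_0 = (1 - x)/2, phi_1 = (1 + x)/2
def tabulate(x):
    return np.array([(1 - x) / 2, (1 + x) / 2])

# Topological: dofs are attached to mesh entities.  For CG the dof at
# vertex x = +1 is dof 1; for DG *all* dofs sit on the cell interior,
# so the closure of the "top" vertex contains no dofs at all.
cg_entity_dofs = {("vertex", 0): [0], ("vertex", 1): [1], ("cell", 0): []}
dg_entity_dofs = {("vertex", 0): [], ("vertex", 1): [], ("cell", 0): [0, 1]}

topological_top = dg_entity_dofs[("vertex", 1)]  # empty for DG

# Geometric: a dof is "on" the facet if its basis function does not
# vanish there.  This works for CG and DG alike.
vals_at_top = tabulate(1.0)
geometric_top = [i for i, v in enumerate(vals_at_top) if abs(v) > 1e-12]
```

Here the geometric test recovers dof 1 for DG even though no dof is
topologically associated with the vertex.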
Concerning documentation, COFFEE is deplorably underdocumented (although
the author is on this list so maybe this will change ;). The actual
available AST nodes are only "documented" by reading the source:
https://github.com/coneoproject/COFFEE/blob/master/coffee/base.py
However, you are fundamentally writing a PyOP2 kernel, and the C API for
that *is* documented at http://op2.github.io/PyOP2/kernels.html so that
hopefully helps somewhat.
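For what it's worth, a PyOP2 kernel body is just plain C. The sketch below
is purely illustrative (the kernel name and the data layout are made up,
and the NumPy part only emulates what a par_loop over columns would
compute), but it shows the shape of the thing:

```python
import numpy as np

# Hypothetical kernel: copy the bottom-level value of an extruded
# (column) field into a 2D surface field.  In PyOP2 this C string would
# be wrapped in an op2.Kernel and executed by a par_loop.
copy_kernel_c = """
void copy_bottom(double *surf, double *col) {
    surf[0] = col[0];  /* first entry in the column = bottom node */
}
"""

# NumPy emulation of the per-column copy, assuming 4 layers, 3 columns
# and one dof per node:
layers, ncols = 4, 3
col_data = np.arange(layers * ncols, dtype=float).reshape(ncols, layers)
surf = col_data[:, 0].copy()  # bottom value of each column
```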
Otherwise, keep asking here or in IRC (IRC tends to get faster responses,
at least between about 10am and 8pm UK time).
BTW, any chance of seeing you at FEniCS '15?
Cheers,
David
On 10 April 2015 at 02:44, Tuomas Karna <tuomas.karna(a)gmail.com> wrote:
> Hi all,
>
> A couple of questions regarding those extruded mesh -> parent mesh copy
> operations,
>
> A while back I got the reverse 2d->3d copy working with a hard-coded pyop2
> kernel (like in mesh extrusion). I'm not familiar with COFFEE syntax, is
> there documentation/examples somewhere?
>
> The 3d->2d copy routine in extrusion_extraction branch checks that the
> function space is CG. Does this method easily generalize to other spaces?
> DGxCG prisms seem to work OK. I'm also interested in using DGxDG and
> RTxCG/DG spaces.
>
>
> Cheers,
>
> Tuomas
>
>
> On 03/05/2015 10:24 AM, Tuomas Karna wrote:
>
> Thanks David,
>
> This is great, I'll try doing the reverse operation.
>
> - Tuomas
>
> On 03/05/2015 08:20 AM, David Ham wrote:
>
> Hi Tuomas,
>
> This wasn't there this morning but I've implemented one of the cases
> (pulling out the top and bottom maps). The result is in the
> extrusion_extraction branches of both PyOP2 and Firedrake. The 2d->3d
> operation would be rather similar except that you'd have to interrogate the
> fiat_element on the extruded Function in order to determine which extruded
> nodes are "on top of" the 2d nodes you have. Unfortunately I won't have
> time to do that one soon (huge amounts to do before SIAM CSE next week) but
> feel free to have a try and complain when it doesn't work!
>
> Cheers,
>
> David
>
> On 4 March 2015 at 22:23, Tuomas Karna <tuomas.karna(a)gmail.com> wrote:
>
>> Hi all,
>>
>> I'd need to copy nodal values between fields on parent and extruded
>> meshes. For example, copy 2d->3d (constant over vertical) or 3d->2d
>> (extract surface/bottom level). The horizontal function space is the
>> same. Is there an easy way to do this? Is there a map of extruded nodes
>> somewhere?
>>
>> Thanks,
>>
>> Tuomas
>>
>>
>>
>> _______________________________________________
>> firedrake mailing list
>> firedrake(a)imperial.ac.uk
>> https://mailman.ic.ac.uk/mailman/listinfo/firedrake
>>
>
>
>
> --
> Dr David Ham
> Departments of Mathematics and Computing
> Imperial College London
>
> http://www.imperial.ac.uk/people/david.ham
>
>
>
>
>
>
--
Dr David Ham
Departments of Mathematics and Computing
Imperial College London
http://www.imperial.ac.uk/people/david.ham
Dear firedrakers,
I now re-ran my code on up to 1536 cores on ARCHER, but I get a problem when I try to project an expression onto a DG0 function space on an extruded grid.
The full (very large) log is here https://gist.github.com/eikehmueller/83a5fc139e1fedb5306c but as far as I can tell
the following crashes:
r_p.project(expression, solver_parameters={'ksp_type': 'cg', 'pc_type': 'jacobi'})
and here is the relevant part of the trace that I attempted to reconstruct:
File "/work/n02/n02/eike/git_workspace/firedrake/firedrake/function.py", line 157, in project
    return projection.project(b, self, *args, **kwargs)
File "/work/n02/n02/eike/git_workspace/firedrake/firedrake/projection.py", line 94, in project
    […]
File "/work/n02/n02/eike/git_workspace/PyOP2/pyop2/profiling.py", line 199, in wrapper
    return f(*args, **kwargs)
File "/work/n02/n02/eike/git_workspace/firedrake/firedrake/variational_solver.py", line 163, in solve
    solving_utils.check_snes_convergence(self.snes)
File "/work/n02/n02/eike/git_workspace/firedrake/firedrake/solving_utils.py", line 62, in check_snes_convergence
    %s""" % (snes.getIterationNumber(), msg))
RuntimeError: Nonlinear solve failed to converge after 1 nonlinear iterations.
It does work fine at smaller processor counts. Maybe the PETSc integers overflow again; the number of cells is 5242880 x 64 = 335544320 ~ 2^{28}, which is not too far from 2^{31}, but I thought I'd check in case you've seen something similar before. I thought I had managed to run problems of this size in the past (i.e. earlier this year).
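A quick back-of-envelope check of the overflow theory (this assumes a PETSc
built with the default 32-bit indices, and the 8-nonzeros-per-row figure is
just an illustrative assumption, not taken from the actual matrix):

```python
# Cell count from the run: 5242880 horizontal cells x 64 layers.
ncells = 5242880 * 64     # 335544320, roughly 2**28.3
int32_max = 2**31 - 1     # largest default PetscInt

# DG0 has one dof per cell, so the dof count itself fits...
dofs_ok = ncells < int32_max

# ...but derived counts can overflow, e.g. a global nonzero count with
# ~8 nonzeros per row (illustrative figure only):
nnz_estimate = ncells * 8
overflow_risk = nnz_estimate > int32_max
```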
Thanks,
Eike