PETSc via petsc4py/PyOP2 really slow
Hi all,

If I have a purely Firedrake code, everything runs fast on my MacBook. But when I make direct calls to the TaoSolver via petsc4py, or PETSc calls via PyOP2, the program is extremely slow. This was not the case a few commits ago (probably about two months back).

I suspect it has something to do with no optimization flags being added to the PETSc configure options? The PETSc developers suggest adding more than simply --with-debugging=0; for instance, with GNU compilers one could use COPTFLAGS='-O3 -march=native'.

It seems to me that whatever compiler optimization flags are used for Firedrake/PyOP2 function calls are not used for direct PETSc function calls?

Thanks,
Justin
On 21 Dec 2015, at 23:41, Justin Chang <jychang48@gmail.com> wrote:
Hi all,
If I have a purely Firedrake code, everything runs fast on my MacBook. But when I make direct calls to the TaoSolver via petsc4py, or PETSc calls via PyOP2, the program is extremely slow. This was not the case a few commits ago (probably about two months back).
Humph, I /think/ we haven't changed the way we build PETSc in that time.
I suspect it has something to do with no optimization flags being added to the PETSc configure options? The PETSc developers suggest adding more than simply --with-debugging=0; for instance, with GNU compilers one could use COPTFLAGS='-O3 -march=native'.
Yes, we just build PETSc with whatever default flags come from "python setup.py", which I think is just --with-debugging=0. If you're using the installation script, you can set the PETSC_CONFIGURE_OPTIONS environment variable to pass more aggressive flags.
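For example (a sketch, assuming a bash-like shell and GNU compilers; adjust the flags to your compiler), set the variable before running the installation script so that PETSc's configure picks up the optimized flags when PETSc is (re)built:

    export PETSC_CONFIGURE_OPTIONS="--with-debugging=0 COPTFLAGS='-O3 -march=native' CXXOPTFLAGS='-O3 -march=native'"

COPTFLAGS and CXXOPTFLAGS are the C and C++ optimization-flag options that PETSc's configure understands; there is an analogous FOPTFLAGS for Fortran.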
It seems to me that whatever compiler optimization flags are used for Firedrake/PyOP2 function calls are not used for direct PETSc function calls?
Generated code is compiled with -march=native -O3, but yes, calls into the PETSc library just use whatever compilation flags PETSc was built with.

Lawrence
So the "slowness" actually came from setting a really tight optimization solver tolerance, and not from the compiler optimization flags after all. In the past I was solving really small problems and had used the same tolerances for the KSP and TAO solvers, so the performance was roughly the same. Recently I moved to larger problems and needed tighter TAO tolerances, but did not change the KSP ones, hence the difference in performance.

Thanks,
Justin
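A minimal illustration of the distinction, assuming the program calls setFromOptions() on its TAO and KSP objects so that command-line options are honoured (the script name here is a placeholder):

    python my_tao_program.py -tao_gatol 1e-10 -tao_grtol 1e-10 -ksp_rtol 1e-8 -tao_monitor -ksp_converged_reason

The -tao_* options control the outer optimization loop and the -ksp_* options the inner linear solves; they are set independently, so tightening one without adjusting the other can change the run time substantially. -tao_monitor and -ksp_converged_reason are handy for seeing where the iterations are actually being spent.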
participants (2)
- Justin Chang
- Lawrence Mitchell