Hi Roberto,

I am not sure what phifluid looks like in the following:

>> uT = u
>> for i in range(ndofs):
>>    uT = uT + s[i]*phifluid[i]
>>
>>#  ( phifluid[:] are functions belonging to W_f that are zero everywhere except on some boundary nodes )

but I think one possible approach is to somehow replace s[i] with a scalar Function that is zero outside of the beam.
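(As an aside, at the algebraic level that loop just computes uT = u + Phi*s, with the phifluid vectors as the columns of a matrix Phi, so the ndofs scaled additions collapse into one matrix-vector product. A rough numpy sketch of what I mean; the names Phi, s and all sizes here are made up for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)
nfluid, ndofs = 100, 20                      # made-up sizes

u = rng.standard_normal(nfluid)              # background velocity dofs
Phi = rng.standard_normal((nfluid, ndofs))   # column i stands in for phifluid[i]
s = rng.standard_normal(ndofs)               # the Real-space coefficients s[i]

# the quoted loop: add each scaled basis vector in turn
uT_loop = u.copy()
for i in range(ndofs):
    uT_loop = uT_loop + s[i] * Phi[:, i]

# the same update as a single matrix-vector product
uT_matvec = u + Phi @ s

assert np.allclose(uT_loop, uT_matvec)
```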

Would you be able to tell us more about phifluid? (Is it also defined using EU = VectorElement("Lagrange", 'triangle', orderU)?) Am I also correct in thinking that the 1D beam is on the boundary?


Thanks,
Koki

From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Roberto Federico Ausas <rfausas@icmc.usp.br>
Sent: Friday, May 1, 2020 1:56 AM
To: Ham, David A <david.ham@imperial.ac.uk>
Cc: firedrake <firedrake@imperial.ac.uk>
Subject: Re: [firedrake] very slow code when using a lot of functions belonging to Real spaces
 
Hi David,

Admittedly, this is probably not a very elegant way of solving the FSI problem. In a traditional implementation I would of course work at the algebraic level, altering the linear systems directly, which would be more efficient.
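To make the algebraic-level remark concrete: the Real dofs just add a small dense border to the assembled PDE matrix, and that border can be eliminated with a small Schur complement instead of treating every Real component as a separate field. A rough numpy sketch, with made-up sizes and random data standing in for the real operators (A plays the role of the assembled Stokes block, B, C, D the coupling to the Real dofs):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 21                                    # "PDE" dofs vs Real (global) dofs

A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned PDE block
B = rng.standard_normal((n, m))                  # one coupling column per Real dof
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)

# monolithic "bordered" matrix: what the big mixed system amounts to
K = np.block([[A, B], [C, D]])
rhs = rng.standard_normal(n + m)
x_mono = np.linalg.solve(K, rhs)

# eliminate the border via the small (m x m) Schur complement instead
f, g = rhs[:n], rhs[n:]
S = D - C @ np.linalg.solve(A, B)                # Schur complement w.r.t. A
y = np.linalg.solve(S, g - C @ np.linalg.solve(A, f))
x = np.linalg.solve(A, f - B @ y)

assert np.allclose(np.concatenate([x, y]), x_mono)
```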

I've written down some equations that may help understand the idea.

Thanks



On Thu, Apr 30, 2020 at 11:29 AM Ham, David A <david.ham@imperial.ac.uk> wrote:

Hi Roberto,

I'm not amazingly surprised that a mixed function space with 23 components takes a long time. The matrix system is a 23x23 grid of blocks, and there will be some assembly infrastructure for each of them, even the ones which are zero. This is really pushing the boundaries of what we can do. It seems odd to me that there are so many Real function space objects. Can you provide a brief maths write-up of what you are trying to do?

Regards,

David

From: <firedrake-bounces@imperial.ac.uk> on behalf of Roberto Federico Ausas <rfausas@icmc.usp.br>
Date: Wednesday, 29 April 2020 at 15:08
To: firedrake <firedrake@imperial.ac.uk>
Subject: [firedrake] very slow code when using a lot of functions belonging to Real spaces


Hello,

I have recently posted some questions related to the use of Real function spaces for some FSI problems I'm trying to solve.

The problem I'm facing now is that the solving phase is consuming an absurd amount of time. It does not seem to be the PETSc solve itself but the assembly phase, since once the KSP iterations begin they proceed reasonably fast.

The piece of code I believe is relevant is:


#------------------------------------------------------------------------

.
.
.
EU = VectorElement("Lagrange", 'triangle', orderU)
EP = FiniteElement("Lagrange", 'triangle', orderP)
EL = FiniteElement("Real", 'triangle', 0)
ES = []

ndofs = 20

for i in range(ndofs):
    ES.append(FiniteElement("Real", 'triangle', 0))
EM = MixedElement([EU, EP, EL, ES])


W_f = FunctionSpace(mesh, EM)  # mesh has a few hundred nodes

sol = Function(W_f)

auxs = split(sol)
u = auxs[0]
p = auxs[1]
l = auxs[2]
s = []
for i in range(ndofs):
    s.append(auxs[i+3])

auxT = TestFunctions(W_f)
v = auxT[0]
q = auxT[1]
r = auxT[2]
d = []
for i in range(ndofs):
    d.append(auxT[i+3])

uT = u
for i in range(ndofs):
    uT = uT + s[i]*phifluid[i]


#  ( phifluid[:] are functions belonging to W_f that are zero everywhere except on some boundary nodes )


print('   |>          - Fluid part')
Res  = momentum_residual(uT, p, l, s, v, mesh) # Stokes operator
Res += mass_residual(uT, p, l, s, q, r, mesh) # Incompressibility restriction

print('   |>          - Beam part')
aream1 = 1/assemble(Constant(1) * dx(mesh))
for i in range(ndofs):
    Res += momentum_residual(uT, p, l, s, d[i]*phifluid[i], mesh)
    force_b = Constant(force_beam(vphi, dphibeam[i]))
    Res -= d[i]*force_b*aream1*dx


# Until this point the code runs reasonably fast


# Here starts the bottleneck

idsplit = '2'
for i in range(ndofs):
    idsplit = idsplit + ',' + str(i+3)  # idsplit ends up being "2,3,4,...,22"

sp = {
      "mat_type": "matfree",
      "snes_type": "newtonls",
      "snes_monitor": None,
      "snes_converged_reason": None,
      "snes_linesearch_type": "basic",
      "snes_max_it": 8,
      "snes_rtol": 1e-10,
      "ksp_type": "fgmres",
      "ksp_monitor_true_residual": None,
      "ksp_max_it": 10,
      "pc_type": "fieldsplit",
      "pc_fieldsplit_type": "schur",
      "pc_fieldsplit_schur_fact_type": "full",
      "pc_fieldsplit_0_fields": "0,1",
      "pc_fieldsplit_1_fields": idsplit,
      "fieldsplit_0_ksp_type": "preonly",
      "fieldsplit_0_pc_type": "python",
      "fieldsplit_0_ksp_monitor_true_residual": None,
      "fieldsplit_0_pc_python_type": "firedrake.AssembledPC",
      "fieldsplit_0_assembled_pc_type": "lu",
      "fieldsplit_0_assembled_pc_factor_mat_solver_type": "mumps",
      "fieldsplit_0_assembled_mat_mumps_icntl_14": 200,
      "mat_mumps_icntl_14": 200,
      "fieldsplit_1_ksp_type": "gmres",
      "fieldsplit_1_ksp_monitor_true_residual": None,
      "fieldsplit_1_ksp_max_it": ndofs+1,
      "fieldsplit_1_ksp_convergence_test": "skip",
      "fieldsplit_1_pc_type": "none"
}

solve(Res == 0, sol, bcs=bcs, J=Jac, solver_parameters=sp)


#------------------------------------------------------------------------


Any tip on what could be going on would be welcome.

Regards


--

Roberto F. Ausas
Instituto de Ciências Matemáticas e de Computação - ICMC/USP
São Carlos, SP - Brasil



--
Roberto F. Ausas
Instituto de Ciências Matemáticas e de Computação - ICMC/USP
São Carlos, SP - Brasil
http://www.lmacc.icmc.usp.br/~ausas