Hi Asim,
How many parallel cores are you running on? Sometimes starting up these flows can be tricky, especially if you jump straight to a high Reynolds number. Have you tried starting the flow at a lower Reynolds number first?
Also, 100 x 200 is quite a few elements in the x-y plane. Remember that the polynomial order adds more points on top of the mesh discretisation.
I would perhaps recommend trying a smaller mesh first to see how that goes. Actually, I note there is a file called TurbChFl_3D1H.xml in the
~/Nektar/Solvers/IncNavierStokesSolver/Examples directory which might be worth looking at. I think this was a mesh used in Ale Bolis’ thesis, which you can find under:
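If it helps, the Reynolds number is controlled through the kinematic viscosity in the session file, so starting gently is just a matter of raising Kinvis and then restarting from the resulting field. Roughly along these lines (the value here is purely illustrative, not a recommendation for your case):

```xml
<!-- Illustrative session-file fragment; the value is a placeholder. -->
<PARAMETERS>
    <!-- A larger Kinvis means a lower Reynolds number for a fixed
         bulk velocity and channel half-height. -->
    <P> Kinvis = 0.01 </P>
</PARAMETERS>
```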
Cheers, Spencer.
On 1 Feb 2016, at 07:01, ceeao <ceeao@nus.edu.sg> wrote:
Hi Spencer,
Thank you for the quick reply and suggestion. I did indeed switch to the 3D homo 1D case, and this time I have problems with divergence of the linear solvers.
I refined the grid in the channel flow example to 100x200x64 in the x-y-z directions and left everything else the same. When I employ the default global system solver "IterativeStaticCond" with this setup, it diverges: "Exceeded maximum number of iterations (5000)". I checked the initial fields and mesh in ParaView, and everything seems normal. I also tried the "LowEnergyBlock" preconditioner, but apparently this one is valid only in purely 3D cases.
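For reference, I am selecting these options in the session file roughly as follows (written from memory, so the exact syntax may differ slightly from my actual setup):

```xml
<!-- Sketch of the solver settings I am using; from memory, not verbatim. -->
<SOLVERINFO>
    <I PROPERTY="GlobalSysSoln"  VALUE="IterativeStaticCond" />
    <I PROPERTY="Preconditioner" VALUE="LowEnergyBlock" />
</SOLVERINFO>
```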
My knowledge of iterative solvers for hp-FEM is minimal, so I was wondering if you could suggest a robust option that at least converges. My aim is to get some rough estimates of the speed of Nektar++ on my oscillating channel flow problem. If the speed is promising, I will switch to Nektar++ from OpenFOAM, as OpenFOAM is low-order and not really suitable for DNS.
Thanks again in advance.
Cheers,
Asim
On 01/31/2016 11:53 PM, Sherwin, Spencer J wrote:
Hi Asim,
I think your conclusion is correct. We did some early implementation work on the 2D homogeneous expansion but have not pulled it all the way through, since we did not have a full project on this topic. We have, however, kept the existing code running through our regression tests. For now I would suggest you try the 3D homo 1D approach for your runs, since you can use parallelisation in that code.
Cheers, Spencer.
On 29 Jan 2016, at 04:00, ceeao <ceeao@nus.edu.sg> wrote:
Dear all,
I just installed the library, and need to simulate a DNS of channel flow
with an oscillating pressure gradient.
As I have two homogeneous directions, I applied a standard Fourier
discretization in these directions.
It seems like this case is not parallelized yet, and I got the error in
the subject.
I was wondering if I'm overlooking something. If not, are there any
plans to include parallelization of 2D FFTs in the future?
Thank you in advance.
Best,
Asim Onder
Research Fellow
National University of Singapore
________________________________
Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you.
_______________________________________________
Nektar-users mailing list
Nektar-users@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/nektar-users
Spencer Sherwin, McLaren Racing/Royal Academy of Engineering Research Chair, Professor of Computational Fluid Mechanics, Department of Aeronautics, Imperial College London, South Kensington Campus, London SW7 2AZ
+44 (0) 20 759 45052