Parallel transposition not implemented yet for 3D-Homo-2D approach
Dear all,

I just installed the library and need to run a DNS of channel flow with an oscillating pressure gradient. As I have two homogeneous directions, I applied the standard Fourier discretization in these directions. It seems this case is not parallelized yet, and I got the error in the subject line. I was wondering if I'm overlooking something. If not, are there perhaps any plans to include parallelization of the 2D FFTs in the future?

Thank you in advance.

Best,
Asim Onder
Research Fellow
National University of Singapore
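For context, a flow with two homogeneous Fourier directions (the 3DH2D case that triggers this error) is selected through the Nektar++ session file roughly as below; the mode counts and periodic lengths are illustrative, not the values from this thread:

    <SOLVERINFO>
      <I PROPERTY="SolverType"  VALUE="VelocityCorrectionScheme" />
      <I PROPERTY="EQTYPE"      VALUE="UnsteadyNavierStokes" />
      <!-- Fourier expansions in the two homogeneous (y and z) directions -->
      <I PROPERTY="HOMOGENEOUS" VALUE="2D" />
    </SOLVERINFO>
    <PARAMETERS>
      <P> HomModesY = 64   </P>  <!-- Fourier modes in y -->
      <P> HomModesZ = 64   </P>  <!-- Fourier modes in z -->
      <P> LY        = 2*PI </P>  <!-- periodic length in y -->
      <P> LZ        = 2*PI </P>  <!-- periodic length in z -->
    </PARAMETERS>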
Hi Asim,

I think your conclusion is correct. We did some early implementation work on the 2D homogeneous expansion but have not pulled it all the way through, since we did not have a full project on this topic. We have, however, kept the existing code running through our regression tests. For now I would suggest you try the 3D homo 1D approach for your runs, since that code supports parallelisation.

Cheers,
Spencer.

Spencer Sherwin
McLaren Racing/Royal Academy of Engineering Research Chair,
Professor of Computational Fluid Mechanics,
Department of Aeronautics, Imperial College London
South Kensington Campus, London SW7 2AZ
s.sherwin@imperial.ac.uk
+44 (0) 20 759 45052
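Switching to the 3D homo 1D approach amounts to keeping a 2D spectral/hp mesh in the x-y plane and a single Fourier direction in z. A minimal sketch of the relevant session-file entries, with illustrative values:

    <SOLVERINFO>
      <!-- single homogeneous (Fourier) direction: this path is parallelised -->
      <I PROPERTY="HOMOGENEOUS" VALUE="1D" />
      <!-- use FFTW for the transforms rather than the matrix-based DFT -->
      <I PROPERTY="USEFFT"      VALUE="FFTW" />
    </SOLVERINFO>
    <PARAMETERS>
      <P> HomModesZ = 64   </P>  <!-- Fourier planes in z -->
      <P> LZ        = 2*PI </P>  <!-- periodic length in z -->
    </PARAMETERS>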
Hi Spencer,

Thank you for the quick reply and the suggestion. I switched to the 3D homo 1D case, and this time I have problems with divergence of the linear solvers. I refined the grid in the channel flow example to 100x200x64 elements in the x-y-z directions and left everything else the same. With the default global system solver "IterativeStaticCond" and this setup I get divergence: "Exceeded maximum number of iterations (5000)". I checked the initial fields and mesh in ParaView; everything seems normal. I also tried the "LowEnergyBlock" preconditioner, but apparently that one is valid only in fully 3D cases. My knowledge of iterative solvers for hp-FEM is minimal, so I was wondering if you could suggest a robust option that at least converges. My aim is to get some rough estimates of the speed of Nektar++ on my oscillating channel flow problem. If the speed is promising, I will switch to Nektar++ from OpenFOAM, as OpenFOAM is low-order and not really suitable for DNS.

Thanks again in advance.

Cheers,
Asim
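The solver and preconditioner named above are chosen through SOLVERINFO properties. A hedged sketch, with "Diagonal" shown as a generally available fallback (the Nektar++ user guide lists which preconditioners are valid for each problem type) and an illustrative tolerance:

    <SOLVERINFO>
      <!-- the default global linear solver -->
      <I PROPERTY="GlobalSysSoln"  VALUE="IterativeStaticCond" />
      <!-- LowEnergyBlock is restricted to fully 3D expansions;
           Diagonal (Jacobi) works more generally -->
      <I PROPERTY="Preconditioner" VALUE="Diagonal" />
    </SOLVERINFO>
    <PARAMETERS>
      <P> IterativeSolverTolerance = 1e-8 </P>  <!-- illustrative value -->
    </PARAMETERS>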
Hi Asim,

How many parallel cores are you running on? Starting up these flows can be tricky, especially if you jump immediately to a high Reynolds number. Have you tried first starting the flow at a lower Reynolds number?

Also, 100 x 200 is quite a few elements in the x-y plane. Remember that the polynomial order adds more points on top of the mesh discretisation. I would recommend trying a smaller mesh first to see how that goes.

Actually, I note there is a file called TurbChFl_3D1H.xml in the ~/Nektar/Solvers/IncNavierStokesSolver/Examples directory which might be worth looking at. I think this was the mesh used in Ale Bolis’ thesis, which you can find at: http://wwwf.imperial.ac.uk/ssherw/spectralhp/papers/PhDThesis/Bolis_Thesis.p...

Cheers,
Spencer.
Hi Spencer, Nektar-Users,

I followed the suggestion and coarsened the grid a bit. It then ran impressively fast, but the flow is stable and remains laminar, as I didn't add any perturbations. I need to kick off the transition to get turbulence. If I add white noise, even at very low magnitude, the conjugate gradient solver blows up again. I also tried adding some sinusoidal perturbations to the boundary conditions and again had trouble with CG. I don't really understand CG's extreme sensitivity to perturbations. Any suggestion is much appreciated.

Thanks in advance.

Cheers,
Asim
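For reference, white-noise perturbations of the kind described here can be added to the initial condition through the awgn() function of the Nektar++ expression library; the profile and amplitude below are illustrative only, not the values used in this thread:

    <FUNCTION NAME="InitialConditions">
      <!-- laminar parabolic profile plus small zero-mean Gaussian noise;
           the 0.001 amplitude is a placeholder -->
      <E VAR="u" VALUE="(1-y*y) + 0.001*awgn(1)" />
      <E VAR="v" VALUE="0.001*awgn(1)" />
      <E VAR="w" VALUE="0.001*awgn(1)" />
      <E VAR="p" VALUE="0" />
    </FUNCTION>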
Hi Asim,

Getting a flow through transition is very challenging, since there is a strong localisation of shear, and this can lead to aliasing issues which can then cause instabilities. Both Douglas and Dave have experienced this with recent simulations, so I am cc'ing them to make some suggestions. I would be inclined to use spectral/hp dealiasing and SVV. Hopefully Douglas can send you an example of how to switch this on.

Cheers,
Spencer.
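A sketch of how spectral/hp dealiasing and spectral vanishing viscosity (SVV) are typically switched on in the session file; the SVV parameter values are illustrative defaults, not a recommendation from this thread:

    <SOLVERINFO>
      <!-- dealias the nonlinear terms in the spectral/hp planes -->
      <I PROPERTY="SPECTRALHPDEALIASING"       VALUE="True" />
      <!-- dealias the Fourier (homogeneous) direction as well -->
      <I PROPERTY="DEALIASING"                 VALUE="True" />
      <!-- spectral vanishing viscosity damps the highest modes -->
      <I PROPERTY="SpectralVanishingViscosity" VALUE="True" />
    </SOLVERINFO>
    <PARAMETERS>
      <P> SVVCutoffRatio = 0.75 </P>  <!-- modes above this ratio are damped -->
      <P> SVVDiffCoeff   = 0.1  </P>  <!-- SVV diffusion strength -->
    </PARAMETERS>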
participants (2)
- ceeao
- Sherwin, Spencer J