Dear Nektar++,

I am using the FieldConvert filter in my session file to output vorticity. I then isolate regions of my domain using the range option (-r) of FieldConvert in post-processing. To enable the isolation of these regions, I have found that I should produce .fld files with my filter, as opposed to .plt, .dat, etc. Here is my filter:

<FILTER TYPE="FieldConvert">
    <PARAM NAME="OutputFile">Fields.vtu</PARAM>
    <PARAM NAME="OutputFrequency">1</PARAM>
    <PARAM NAME="Modules">vorticity</PARAM>
</FILTER>

When running in parallel, this filter outputs .fld directories containing one .fld file per processor used. This is where the issue arises: I am running simulations on a cluster using many processors, and I need to save many time steps. My cluster has a one-million-file limit, which I will quickly hit unless I convert and overwrite the .fld directories into single .fld files in situ. I know the command to do this in post-processing:

FieldConvert -f session.xml Fields_#_fc.fld/ Fields_#_fc.fld

but so far I have not been successful in implementing this in situ.

I would be very grateful if you could offer a solution, or at least steer me in the right direction with this issue. Thank you very much in advance; I look forward to hearing from you.

Sincerely,
Isaac
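[A worked example of the range extraction described above, with illustrative bounds and file names; for a 3D domain, -r takes six comma-separated values (xmin,xmax,ymin,ymax,zmin,zmax):

FieldConvert -f -r 0,2,0,1 session.xml Fields_100_fc.fld wake_100.fld
]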
Hi Isaac,

I do not use the FieldConvert filter directly in the session file, so I have no experience with that. My usual practice is to just output the fields and process them later, which I find much easier: I set IO_CheckSteps in the PARAMETERS section to output .chk files during the simulation.

To get around the file-count limit you can use the HDF5 format. It encapsulates all the partitions in one file, so you will have a single file per output. To use HDF5, HDF5 support must first be enabled when building the code; you can either use the HDF5 library installed on the HPC system or build it as a third-party dependency. When running the code you can either set

<I PROPERTY="IOFormat" VALUE="Hdf5" />

in SOLVERINFO in your session file, or more conveniently just pass "-i Hdf5" as an argument on the command line, e.g.

mpirun -np 10000 $dir/IncNavierStokesSolver mesh.xml session.xml -i Hdf5 &> runlog

or

mpirun -np 10000 $dir/IncNavierStokesSolver mesh.xml session.xml --io-format Hdf5 &> runlog

If you specify the I/O format in the session file, you do not need to pass the command-line argument.

Hope this helps.

Cheers,
Mohsen
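[Combining the two pieces of Mohsen's advice, the relevant fragment of a session file would look something like the following sketch; the checkpoint interval is an illustrative value, and both tags sit inside the CONDITIONS section:

<CONDITIONS>
    <SOLVERINFO>
        <!-- other solver properties omitted -->
        <I PROPERTY="IOFormat" VALUE="Hdf5" />
    </SOLVERINFO>
    <PARAMETERS>
        <!-- write a checkpoint (.chk) file every 100 steps; illustrative value -->
        <P> IO_CheckSteps = 100 </P>
    </PARAMETERS>
</CONDITIONS>
]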
Hi Mohsen,

I followed your advice and it appears to be working now. Thank you very much!

Best,
Isaac
Hi everybody,

As a follow-up question, what output formats do you prefer/recommend to minimize the disc space used? Although I haven't checked systematically, I feel like .fld files take more space than .plt and .csv files. Do you have a go-to output file format? What other considerations do you have when choosing an output format? (Mine, for example, are that the files should be readable by ParaView, or by Python using simple modules like pandas.)

Best,
Ilteber

--
İlteber R. Özdemir
Hi Ilteber,

Nektar++ outputs the simulation results in XML or HDF5 format irrespective of whether the file extension is .fld or .chk. "Fld" stands for field and "chk" for checkpoint, but in practice there is no difference between the two extensions as far as the user is concerned. Both the XML and HDF5 formats are binary and contain all the related data of the simulation together with the corresponding metadata. There is no escaping .fld or .chk if you want to keep your fields.

CSV files, and text files generally, are also produced during the simulation and, depending on the filter producing them, can be small or large. For example, with FilterAeroForces you get a text file with the .fce extension containing the forces, which is usually not too big. However, the history-points filter with a large number of points can lead to a large file (a recent simulation of mine produced a file of about 5 GB just for the history points, but I can still access it directly with vim and process it with Python).

To post-process the results you need to convert the .fld/.chk file to .vtu, .plt, or .dat format. With .plt the output is binary, which reduces the size, whereas .vtu is ASCII and hence a bit bulkier. Additionally, while processing the file you can use the "equispacedoutput" module of FieldConvert, which reduces the file size in most situations.

We also recently added support for the VTK library, which I recommend trying (build Nektar++ with VTK support), but I don't have much experience with it yet.

Finally, although .plt is a Tecplot format, you can open it with ParaView or VisIt too. My recent experience is that if the .plt file is not too big, say less than 6 GB, ParaView handles it very well, but beyond that I could not open the .plt file with either ParaView or VisIt, and I have 64 GB of RAM.

Hope this helps.

Cheers,
Mohsen
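[Illustrative commands for the conversions Mohsen describes, with made-up file names:

FieldConvert session.xml Fields_100.chk Fields_100.plt
FieldConvert -m equispacedoutput session.xml Fields_100.chk Fields_100.vtu
]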
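[On the Python/pandas side of Ilteber's question: the history-points output is plain whitespace-separated text, so a minimal sketch for loading it, assuming the usual layout of "#"-prefixed header lines followed by rows of time plus one column per field, is:

import pandas as pd

# Lines starting with "#" list the history-point coordinates;
# the remaining rows are: time, then one column per output field.
df = pd.read_csv("session.his", comment="#", sep=r"\s+", header=None)
]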
Hi Mohsen,

Maybe I worded the question a bit wrong; let me rephrase. Instead of running the simulation first and then post-processing the results, I find it easier to do this on the fly by using FieldConvert inside the simulation; then I have all the variables I need at the end of the run. One problem with this is that it limits the modules I can use, and also the output formats (not completely sure about this). For example, it would be nice (maybe in a future release?) if I could use all the modules of FieldConvert and write the output in any format I like (e.g., I use .plt). I haven't tried "equispacedoutput", but I guess it is worth trying, as it should take less space to store values on an even grid. I was wondering if you were doing something like this.

Thank you for your answer.

Best,
Ilteber

--
İlteber R. Özdemir
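[The in-simulation usage Ilteber describes is the FieldConvert filter from the start of this thread. A sketch of a filter chaining two modules and writing Tecplot output directly, assuming the Modules parameter accepts a space-separated list as in stand-alone FieldConvert:

<FILTER TYPE="FieldConvert">
    <PARAM NAME="OutputFile">Fields.plt</PARAM>
    <PARAM NAME="OutputFrequency">100</PARAM>
    <PARAM NAME="Modules">vorticity equispacedoutput</PARAM>
</FILTER>
]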
participants (3)
- Isaac Rosin
- İlteber Özdemir
- Mohsen Lahooti