Re: [firedrake] Saving txt files while running in parallel
On 8 June 2016 at 09:59, Andrew McRae <A.T.T.McRae@bath.ac.uk> wrote:
Yes, I guess that in Floriane's original code, each process is computing its own max_eta (since eta.dat.data contains the values for the degrees of freedom on that MPI process). Then all processes are trying to write (or append?) to the same file(?). You could get the separate max values by making the filename depend on op2.MPI.comm.rank, so that each process writes to a separate file. Otherwise, computing the max over all subdomains indeed requires a parallel operation (a sketch of both approaches follows the quoted messages below).
On 8 June 2016 at 09:53, Shipton, Jemma <j.shipton@imperial.ac.uk> wrote:
Hi Floriane,
We have defined a max method that works in parallel for our code... try something like:
def max(f):
    fmax = op2.Global(1, [-1000], dtype=float)
    op2.par_loop(op2.Kernel("""void maxify(double *a, double *b) {
                                   a[0] = a[0] < fabs(b[0]) ? fabs(b[0]) : a[0];
                               }""", "maxify"),
                 f.dof_dset.set, fmax(op2.MAX), f.dat(op2.READ))
    return fmax.data[0]
But I agree, it would be nice if that weren't necessary.
Hope that helps,
Jemma
From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Floriane Gidel [RPG] <mmfg@leeds.ac.uk>
Sent: 08 June 2016 09:33:46
To: firedrake
Subject: [firedrake] Saving txt files while running in parallel
Dear all,
I am running a Firedrake code in parallel with 4 cores, in which I save the maximal value of the amplitude of the wave at each time, with the command:
max_eta = max(eta.dat.data)
Eta_file.write('%-10s %-10s %-10s\n' % (t, '', max_eta))
But when opening the .txt file, I notice two issues:
- the data are not saved continuously: time goes from 0 to 100 (let's say) and then starts from 90 again;
- the values of max_eta are not the maximal values of eta, except at the beginning. So it looks like the function max takes the maximum value of eta only in one subdomain, so that after the wave has crossed the subdomain, the maximal value of eta goes back to the depth of rest value.
How can I force this command to be applied on the full domain, even if the code is run in parallel?
Thanks,
Floriane
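A minimal sketch of the two approaches suggested above (one file per rank, or a global maximum obtained with an MPI reduction). It assumes eta is the Firedrake Function from Floriane's script, mesh is the mesh it lives on, mesh.comm is an mpi4py-style communicator (mentioned later in this thread), and t is the current time; the helper name and file names are illustrative, not taken from the original code:

import numpy as np
from mpi4py import MPI

def global_max(f, comm):
    # maximum over the degrees of freedom owned by this rank only
    local_max = float(np.max(f.dat.data_ro))
    # combine the per-rank maxima so every process sees the same value
    return comm.allreduce(local_max, op=MPI.MAX)

max_eta = global_max(eta, mesh.comm)

# write from a single rank, so several processes do not interleave lines in one file
if mesh.comm.rank == 0:
    with open("max_eta.txt", "a") as eta_file:
        eta_file.write('%-10s %-10s %-10s\n' % (t, '', max_eta))

# alternatively, one file per rank (these hold the per-rank maxima, not the global one):
# with open("max_eta_rank%d.txt" % mesh.comm.rank, "a") as eta_file:
#     eta_file.write('%-10s %-10s %-10s\n' % (t, '', float(np.max(eta.dat.data_ro))))

Note that eta.dat.data_ro only holds the process-local values, which is why the allreduce is needed to recover the maximum over the whole domain.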
Ok, thank you both for the tips.
Cheers,
Floriane
Dear Andrew,
I would like to save the separate max values on different files using op2.MPI.comm.rank, but the attribute 'comm' seems to be unknown:
print op2.MPI.comm.rank
AttributeError: 'module' object has no attribute 'comm'
The command was working on my laptop, but not on the Linux machine where Firedrake has been updated very recently. Do you know where the error can come from?
Thanks,
Floriane
Hello Floriane,
Please take your communicator from the mesh or a similar object, e.g. mesh.comm instead of op2.MPI.comm, where mesh is your mesh object.
Regards,
Miklos
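For example, a short sketch of the rank-dependent file name with the mesh communicator (the file name is illustrative; mesh, t and max_eta are assumed to be defined as in the earlier snippets):

rank = mesh.comm.rank   # mesh.comm instead of op2.MPI.comm
with open("max_eta_rank%d.txt" % rank, "a") as eta_file:
    eta_file.write('%-10s %-10s %-10s\n' % (t, '', max_eta))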
Thanks Miklós, it seems to work now.
However, there is still an error when using the max function:
max_eta = max(eta.dat.data)
OSError: [Errno 12] Cannot allocate memory
Is that related to the parallel running?
Best,
Floriane
On 29 August 2016 at 11:14, Miklós Homolya <m.homolya14@imperial.ac.uk> wrote:
I don't know. Can you just try
max_eta = max(eta.dat.data_ro)
If that doesn't help, I refer this to Lawrence.
It gives the same error...
Hi Floriane,
max_eta = max(eta.dat.data) OSError: [Errno 12] Cannot allocate memory
AFAIK that should work. Is it possible that you're running out of memory?
Cheers,
Tuomas
Hi Tuomas,
Apparently I have enough memory. After some tests, it seems that it comes from my installation of Firedrake, because other codes are not running either, while they do on my laptop. For instance, in another program, the command
mesh = UnitIntervalMesh(N)
raises the error
ValueError: Number of cells must be a positive integer
while the file runs correctly on my laptop with another version of Firedrake.
I did not encounter any issue while running the installation script, but I know that Firedrake requires a version of OpenMPI different from the one installed on my machine, so I load the required version before sourcing Firedrake. Maybe this is not enough for Firedrake to access the required version? Below are the commands I used to load the module and install Firedrake:
module load mpi/compat-openmpi16-x86_64
python firedrake-install --no-package-manager --disable-ssh
and to source the Firedrake environment before running simulations:
module load mpi/compat-openmpi16-x86_64
source firedrake/bin/activate
Does Firedrake use the loaded version of OpenMPI with these commands, or is there something missing? I expect the problem to come from there...
Thanks a lot,
Floriane
On 30 August 2016 at 10:07, Miklós Homolya <m.homolya14@imperial.ac.uk> wrote:
Answers inlined below.
On 30/08/16 07:11, Floriane Gidel [RPG] wrote:
mesh = UnitIntervalMesh(N)
raises the error
ValueError: Number of cells must be a positive integer
while the file runs correctly on my laptop with another version of Firedrake.
Well, what is the value of N?
I did not encounter any issue while running the installation script, but I know that Firedrake requires a version of OpenMPI different from the one installed on my machine,
It's not correct to say that Firedrake requires a specific MPI implementation or that it requires a specific version of OpenMPI. Firedrake, in principle at least, should work with any MPI implementation. The install script happens to install OpenMPI on Ubuntu and Mac OS X, but that's just for convenience. If you have another MPI implementation installed, with the --no-package-manager option firedrake-install should just pick that up and use it.
so I load the required version before sourcing Firedrake. Maybe this is not enough for Firedrake to access the required version?
Below are the commands I used to load the module and install Firedrake:
module load mpi/compat-openmpi16-x86_64
python firedrake-install --no-package-manager --disable-ssh
and to source the Firedrake environment before running simulations:
module load mpi/compat-openmpi16-x86_64
source firedrake/bin/activate
Does Firedrake use the loaded version of openmpi with these commands, or is there something missing?
Firedrake uses whatever MPI implementation provides mpicc and similar commands. You can check e.g. what $ mpicc -v says. It is important to use the same MPI when installing and when using Firedrake, but you seem to have done this correctly. I wonder, though, why you loaded an OpenMPI implementation... If this is on a cluster, you should use whatever the "native" MPI of that cluster is.
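A related check can be run from inside the activated Firedrake environment via mpi4py, which should be available there since Firedrake depends on it. This is only a quick sanity check, not an official diagnostic:

from mpi4py import MPI

# which MPI implementation mpi4py (and hence the Firedrake stack) was built against
print(MPI.get_vendor())    # e.g. ('Open MPI', (1, 6, 4))
# version of the MPI standard provided by the library
print(MPI.Get_version())

If the vendor or version reported here differs from the mpicc used at install time, that mismatch could explain the kind of low-level failures seen above.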
Hi Miklós,
Basically I had installed Firedrake on my linux machine and the programs were running correctly, but later on it stopped working. I asked the IT and they said:
"It seems that Firedrake got broken after an update of one of its many dependencies. Firedrake requires an older version of OpenMPI to function correctly. You need to load the module mpi/compat-openmpi16-x86_64 to activate this older version."
By doing this, I have no error coming from mpi, but other errors such as those I sent earlier (OSError, ValueError...), and I don't have them on my laptop. It is not on a cluster.
On 30 August 2016 at 10:53, Miklós Homolya <m.homolya14@imperial.ac.uk> wrote:
OK, so in case of a system upgrade which brings new versions (not just bugfixes) of the C compiler, MPI library etc., I suggest doing a fresh installation of Firedrake. Currently, this isn't a scenario that firedrake-update can correctly handle.
This is what I have done: delete Firedrake and re-install it with the command:
python firedrake-install --no-package-manager --disable-ssh
On 30 August 2016 at 10:59, Miklós Homolya <m.homolya14@imperial.ac.uk> wrote:
OK, in this case you don't even need to load the old OpenMPI anymore. Does it work now?
If I do not load the old openmpi before the installation, the installation breaks. It only works if I run
module load mpi/compat-openmpi16-x86_64
python firedrake-install --no-package-manager --disable-ssh
With this installation, if I do not load the old openmpi before sourcing Firedrake, I have an mpi error. This is why I run
module load mpi/compat-openmpi16-x86_64
source firedrake/bin/activate
before running my programs. So it does not work without loading the old openmpi...
Do you have any MPI available if you don't load the old MPI? What does "mpicc -v" say in that case? What is the installation error you get if you don't load MPI?
Yes, if you installed with a specific MPI, you must have the same MPI when sourcing Firedrake.
More importantly, what about the other errors you had? (OSError: allocation failed, N isn't a positive integer, etc.)
If I do not load the old openmpi before the installation, the installation breaks. It only works if I run
module load/mpi/compat-openmpi16-x86_64 python firedrake-install --no-package-manager --disable-ssh
With this installation, if I do not load the old openmpi before sourcing Firedrake, I have an mpi error. This is why I run module load/mpi/compat-openmpi16-x86_64 source firedrake/bin/activate before running my programs.
So it does not work without loading the old openmpi... ------------------------------------------------------------------------ *De :* Miklós Homolya <m.homolya14@imperial.ac.uk> *Envoyé :* mardi 30 août 2016 10:59:45 *À :* Floriane Gidel [RPG]; firedrake *Objet :* Re: [firedrake] Saving txt files while running in parallel
OK, in this case you don't even need to load the old OpenMPI anymore.
Does it work now?
On 30/08/16 10:57, Floriane Gidel [RPG] wrote:
This is what I have done: delete Firedrake and re-install it with the command :
python firedrake-install --no-package-manager --disable-ssh
------------------------------------------------------------------------ *De :* Homolya, Miklós <m.homolya14@imperial.ac.uk> *Envoyé :* mardi 30 août 2016 10:53:52 *À :* Floriane Gidel [RPG]; firedrake *Objet :* Re: [firedrake] Saving txt files while running in parallel
OK, so in case of a system upgrade which brings new versions (not just bugfixes) of the C compiler, MPI library etc. I suggest doing a fresh installation of Firedrake. Currently, this isn't a scenario that firedrake-update can correctly handle.
------------------------------------------------------------------------ *From:* Floriane Gidel [RPG] <mmfg@leeds.ac.uk> *Sent:* 30 August 2016 10:47:36 *To:* Homolya, Miklós; firedrake *Subject:* RE: [firedrake] Saving txt files while running in parallel
Hi Miklós,
Basically I had installed Firedrake on my linux machine and the programs were running correctly, but later on it stopped working. I asked IT and they said:
"It seems that Firedrake got broken after an update of one of its many dependencies. Firedrake requires an older version of OpenMPI to function correctly. You need to load the module mpi/compat-openmpi16-x86_64 to activate this older version."
By doing this, I have no error coming from MPI, but other errors such as those I sent earlier (OSError, ValueError...), and I don't have them on my laptop. It is not on a cluster.
------------------------------------------------------------------------ *De :* Miklós Homolya <m.homolya14@imperial.ac.uk> *Envoyé :* mardi 30 août 2016 10:07:42 *À :* firedrake@imperial.ac.uk; Floriane Gidel [RPG] *Objet :* Re: [firedrake] Saving txt files while running in parallel
Answers inlined below.
On 30/08/16 07:11, Floriane Gidel [RPG] wrote:
mesh = UnitIntervalMesh(N)
raises the error
ValueError: Number of cells must be a positive integer
while the file runs correctly on my laptop with another version of Firedrake.
Well, what is the value of N?
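For illustration only, a minimal sketch of guarding the cell count so that UnitIntervalMesh never receives a non-positive or non-integral N; Lx and dx are made-up parameters, not from Floriane's script:
from firedrake import UnitIntervalMesh

Lx = 1.0                  # assumed domain length
dx = 0.01                 # assumed target resolution
N = int(round(Lx / dx))   # force the cell count to be an integer
if N <= 0:
    raise ValueError("computed N = %d; check Lx and dx" % N)
mesh = UnitIntervalMesh(N)
print(mesh.num_cells())   # on one process this should report N cells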
I did not encounter any issue while running the installation script, but I know that *Firedrake requires a version of OpenMPI different from the one installed on my machine*,
It's not correct to say that Firedrake requires a specific MPI implementation or that it requires a specific version of OpenMPI. Firedrake, in principle at least, should work with any MPI implementation.
The install script happens to install OpenMPI on Ubuntu and Mac OS X, but that's just for convenience. If you have another MPI implementation installed, with the --no-package-manager option firedrake-install should just pick that up and use it.
so I load the required version before sourcing Firedrake. Maybe this is not enough for Firedrake to access the required version?
Below are the commands I used to load the module and install Firedrake:
module load mpi/compat-openmpi16-x86_64
python firedrake-install --no-package-manager --disable-ssh
and to source the Firedrake environment before running simulations:
module load mpi/compat-openmpi16-x86_64
source firedrake/bin/activate
Does Firedrake use the loaded version of openmpi with these commands, or is there something missing?
Firedrake uses whatever MPI implementation provides mpicc and similar commands. You can check, e.g., what $ mpicc -v says.
It is important to use the same MPI when installing and when using Firedrake, but you seem to have done this correctly.
I wonder, though, why you loaded an OpenMPI implementation... If this is on a cluster, you should use whatever the "native" MPI of that cluster is.
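A small sketch of a possible check, assuming mpi4py is importable inside the Firedrake virtualenv (Firedrake normally installs it as a dependency): it shows which mpicc mpi4py was built with and which MPI library is active at run time.
import mpi4py
from mpi4py import MPI

# Build-time configuration: the mpicc/mpicxx that mpi4py was compiled against.
print(mpi4py.get_config())
# Run-time library, e.g. ('Open MPI', (1, 6, 5)) or ('MPICH', ...).
print(MPI.get_vendor())
If these disagree with what "mpicc -v" reports in the shell, the environment is mixing MPI installations.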
If I don't load the old MPI, it seems like I don't have any: "mpicc -v" gives "mpicc: command not found". Without loading MPI, the installation gives the following error:
*******************************************************************************
UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details):
-------------------------------------------------------------------------------
Did not find package MPI needed by hdf5.
Enable the package using --with-mpi
*******************************************************************************
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-2fa0Ik-build/setup.py", line 302, in <module>
    **metadata)
  File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/tmp/pip-2fa0Ik-build/setup.py", line 218, in run
    config(prefix, self.dry_run)
  File "/tmp/pip-2fa0Ik-build/setup.py", line 148, in config
    if status != 0: raise RuntimeError(status)
RuntimeError: 256
----------------------------------------
Command "/home/fgidel/firedrake/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-2fa0Ik-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-OxfBa5-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/fgidel/firedrake/include/site/python2.7/petsc" failed with error code 1 in /tmp/pip-2fa0Ik-build/
Traceback (most recent call last):
  File "firedrake-install", line 884, in <module>
    install("petsc/")
  File "firedrake-install", line 491, in install
    run_pip_install(["--ignore-installed", package])
  File "firedrake-install", line 354, in run_pip_install
    check_call(pipinstall + pipargs)
  File "firedrake-install", line 205, in check_call
    subprocess.check_call(arguments, env=env)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/home/fgidel/firedrake/bin/pip', 'install', '--no-deps', '--ignore-installed', 'petsc/']' returned non-zero exit status 1
Should I try with the option --with-mpi?
The OSError and AttributeError (where N is defined as a positive integer) occur when I load the old MPI. I cannot try without loading the old MPI, since Firedrake isn't installed correctly in that case.
The ValueError ("ValueError: Number of cells must be a positive integer") for N is fixed when I put an integer instead of N; then the file runs correctly. However, I don't know how to overcome the OSError in my other program ("OSError: [Errno 12] Cannot allocate memory"). I tried on another machine with the same installation of Firedrake and loading the same openmpi, and it gives the same error, so I don't think it comes from a lack of memory.
In another program, I also get an error that I don't get on my laptop (where an older version of Firedrake is installed):
wf1_solver = NonlinearVariationalSolver(wf1_problem)
  File "/home/fgidel/firedrake/lib/python2.7/site-packages/coffee/scheduler.py", line 74, in _merge_loops
    while isinstance(loop_b.children[0], (Block, For)):
IndexError: list index out of range
Exception AttributeError: "'NonlinearVariationalSolver' object has no attribute '_parameters'" in <bound method NonlinearVariationalSolver.__del__ of <firedrake.variational_solver.NonlinearVariationalSolver object at 0x4f4d490>> ignored
Maybe the program editor is causing the problem? I remember that last year I had errors due to an old version of Xcode on Mac. But I guess the installation would have raised an error if a dependency was missing or not recent enough...
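For context, a self-contained sketch of the kind of solver setup being referred to; the mesh, function space, and residual F are placeholders, not Floriane's actual problem. If even a trivial case like this fails inside coffee/scheduler.py with the same IndexError, that would point at the installed Firedrake/COFFEE version rather than at the script.
from firedrake import *

mesh = UnitIntervalMesh(10)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
v = TestFunction(V)
# Placeholder residual for -u'' + u = 1 with natural boundary conditions.
F = (inner(grad(u), grad(v)) + u*v - v) * dx

wf1_problem = NonlinearVariationalProblem(F, u)
wf1_solver = NonlinearVariationalSolver(wf1_problem)
wf1_solver.solve()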
------------------------------------------------------------------------ *De :* Miklós Homolya <m.homolya14@imperial.ac.uk> *Envoyé :* mardi 30 août 2016 13:09:27 *À :* Floriane Gidel [RPG]; firedrake *Objet :* Re: [firedrake] Saving txt files while running in parallel
Hello
On 30/08/16 12:59, Floriane Gidel [RPG] wrote:
If I don't load the old MPI, it seems like I don't have any: "mpicc -v" gives "mpicc: command not found".
OK, what if you install and use Firedrake loading the new MPI instead of the old one?
Should I try with the option --with-mpi?
Definitely not.
I tried the two available versions and both give the same thing: the installation works, but I get errors when running the files.
participants (5)
- Andrew McRae
- Floriane Gidel [RPG]
- Homolya, Miklós
- Miklós Homolya
- Tuomas Karna