Hi Floriane,

max_eta = max(eta.dat.data)
OSError: [Errno 12] Cannot allocate memory

AFAIK that should work. Is it possible that you're running out of memory?

Cheers,

Tuomas

On 08/29/2016 03:58 AM, Floriane Gidel [RPG] wrote:


It gives the same error...


From: Miklós Homolya <m.homolya14@imperial.ac.uk>
Sent: Monday, 29 August 2016 11:14:26
To: firedrake@imperial.ac.uk; Floriane Gidel [RPG]
Cc: Lawrence Mitchell
Subject: Re: [firedrake] Saving txt files while running in parallel
 

I don't know. Can you just try


max_eta = max(eta.dat.data_ro)


If that doesn't help, I'll refer this to Lawrence.


On 29/08/16 11:10, Floriane Gidel [RPG] wrote:


Thanks Miklós, it seems to work now. 

However, there is still an error when using the max function: 


max_eta = max(eta.dat.data)
OSError: [Errno 12] Cannot allocate memory

Is that related to the parallel running?

Best,
Floriane

From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Miklós Homolya <m.homolya14@imperial.ac.uk>
Sent: Monday, 29 August 2016 10:54:30
To: firedrake@imperial.ac.uk
Subject: Re: [firedrake] Saving txt files while running in parallel
 

Hello Floriane,


Please take your communicator from the mesh or a similar object, e.g. mesh.comm instead of op2.MPI.comm, where mesh is your mesh object.
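For instance, each process could write to its own file by putting the rank in the filename. A minimal sketch (the filename pattern is hypothetical; in Firedrake the rank would come from mesh.comm.rank, but a stand-in value is used here so the sketch runs without MPI):

```python
# Hypothetical sketch: derive a per-rank filename so each MPI process
# writes to its own output file. In a real Firedrake run, replace the
# stand-in below with mesh.comm.rank.
rank = 0  # stand-in for mesh.comm.rank

filename = "max_eta_rank{:d}.txt".format(rank)
```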


Regards,

Miklos


On 29/08/16 10:35, Floriane Gidel [RPG] wrote:

Dear Andrew,


I would like to save the separate max values on different files using op2.MPI.comm.rank, but the attribute 'comm' seems to be unknown: 


print op2.MPI.comm.rank

AttributeError: 'module' object has no attribute 'comm'


The command worked on my laptop, but not on the Linux machine where Firedrake was updated very recently. Do you know where the error might come from?


Thanks,

Floriane


From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Andrew McRae <A.T.T.McRae@bath.ac.uk>
Sent: Wednesday, 8 June 2016 09:59:36
To: firedrake@imperial.ac.uk
Subject: Re: [firedrake] Saving txt files while running in parallel
 
Yes, I guess that in Floriane's original code, each process is computing its own max_eta (since eta.dat.data contains the values for the degrees of freedom on that MPI process).  Then all processes are trying to write (or append?) to the same file(?).

You could get the separate max values by making the filename depend on op2.MPI.comm.rank, so that each process writes to a separate file.  Otherwise, computing the max over all subdomains indeed requires a parallel operation.
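To make the parallel operation concrete: each rank takes the max over its own subdomain, then an MPI reduction (e.g. allreduce with MAX) combines the per-rank maxima. The sketch below simulates that reduction serially, with made-up per-rank data, so it runs without MPI:

```python
# Hypothetical sketch of the cross-rank reduction described above.
# In a real run each rank holds only its own eta values; an MPI
# allreduce with the MAX operation combines the per-rank maxima.
# Here the ranks' data are simulated lists so the sketch runs serially.
per_rank_eta = [
    [0.1, 0.9, 0.3],   # values owned by rank 0
    [0.2, 1.4, 0.5],   # values owned by rank 1
    [0.0, 0.7, 1.1],   # values owned by rank 2
]

local_maxima = [max(vals) for vals in per_rank_eta]  # each rank's local max
global_max = max(local_maxima)                       # what allreduce(MAX) would return
```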

On 8 June 2016 at 09:53, Shipton, Jemma <j.shipton@imperial.ac.uk> wrote:

Hi Floriane,


We have defined a max method that works in parallel for our code... try something like:


def max(f):
    # Global reduction variable, initialised to a sentinel lower than any
    # expected value; op2.MAX combines the per-process results.
    fmax = op2.Global(1, [-1000], dtype=float)
    # The kernel folds fabs(b[0]) into the running maximum, so this
    # returns the largest absolute value of f across all processes.
    op2.par_loop(op2.Kernel("""void maxify(double *a, double *b)
    {
    a[0] = a[0] < fabs(b[0]) ? fabs(b[0]) : a[0];
    }""", "maxify"),
                 f.dof_dset.set, fmax(op2.MAX), f.dat(op2.READ))
    return fmax.data[0]
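One subtlety: the kernel reduces over fabs(b[0]), so it returns the maximum absolute value rather than the signed maximum. A serial mimic of that reduction logic (the function name is hypothetical, for illustration only):

```python
# Serial mimic of the maxify kernel's reduction: start from the -1000
# sentinel and fold in abs(b), as the C kernel does with fabs(b[0]).
# Note this yields the maximum *absolute* value, not the signed max.
def maxify_reduce(values, init=-1000.0):
    acc = init
    for b in values:
        acc = abs(b) if acc < abs(b) else acc
    return acc
```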

But I agree, it would be nice if that weren't necessary.

Hope that helps,

Jemma

From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Floriane Gidel [RPG] <mmfg@leeds.ac.uk>
Sent: 08 June 2016 09:33:46
To: firedrake
Subject: [firedrake] Saving txt files while running in parallel
 

Dear all,


I am running a Firedrake code in parallel with 4 cores, in which I save the maximal value of the amplitude of the wave at each time, with the command:


max_eta = max(eta.dat.data)

Eta_file.write('%-10s %-10s %-10s\n' % (t, '', max_eta))



But when opening the .txt file, I notice two issues:

- the data are not saved continuously: time goes from 0 to 100 (let's say) and then starts from 90 again;

- the values of max_eta are not the maximal values of eta, except at the beginning. It looks like the max function takes the maximum of eta over only one subdomain, so once the wave has crossed that subdomain, max_eta falls back to the rest-depth value.


How can I force this command to be applied over the full domain, even when the code is run in parallel?


Thanks,

Floriane






_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake


