Hi,


It appears to be running now in both serial and parallel after I updated the computer's Xcode software.


Regards,

Paul


From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Tuomas Karna <tuomas.karna@gmail.com>
Sent: 20 September 2017 16:45:38
To: firedrake@imperial.ac.uk
Subject: Re: [firedrake] netCDF4 error
 
Hi Paul,

Could you please send the output of `firedrake-status` command?
Also, I presume you have write access to the directory where gusto tries to write the outputs? NetCDF/HDF5 errors are often not very informative. Does gusto work in serial?
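
If it helps to narrow things down, a minimal standalone check is sketched below (not part of gusto; the file and variable names are just placeholders). It exercises the same netCDF4 write path outside of gusto and Firedrake, so if it also fails with "NetCDF: HDF error" the problem lies with the netCDF4/HDF5 install itself:

# Minimal netCDF4 write/read round trip, independent of gusto/Firedrake.
# Illustrative sketch only: "nc_write_test.nc" and "values" are arbitrary names.
import numpy as np
from netCDF4 import Dataset

with Dataset("nc_write_test.nc", "w") as ds:
    ds.createDimension("time", None)            # unlimited dimension
    var = ds.createVariable("values", "f8", ("time",))
    var[0:4] = np.arange(4.0)                   # write a few values

with Dataset("nc_write_test.nc", "r") as ds:
    print(ds.variables["values"][:])            # expect [0. 1. 2. 3.]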


Regards,

Tuomas

On 09/20/2017 06:41 AM, Burns, Paul wrote:

Hi Jemma,


Yes it is a Mac, and yes I was running in parallel.


Thanks,

Paul


From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Shipton, Jemma <j.shipton@imperial.ac.uk>
Sent: 20 September 2017 14:17:18
To: firedrake
Subject: Re: [firedrake] netCDF4 error
 

Hi Paul,


Could you please tell us what kind of machine (mac laptop?) you're running on and whether you are running in parallel?


Thanks!


Jemma


From: firedrake-bounces@imperial.ac.uk <firedrake-bounces@imperial.ac.uk> on behalf of Burns, Paul <P.Burns2@exeter.ac.uk>
Sent: 19 September 2017 17:26:56
To: firedrake
Subject: [firedrake] netCDF4 error
 

Dear Sir/Madam,


I am getting the following netCDF4 error when I try to run Gusto:


----------------------
UFL:WARNING Discontinuous Lagrange element requested on interval * interval, creating DQ element.
UFL:WARNING Discontinuous Lagrange element requested on interval * interval, creating DQ element.
Traceback (most recent call last):
  File "examples/sk_nonlinear.py", line 112, in <module>
    stepper.run(t=0, tmax=tmax)
  File "/Users/pb412/firedrake/src/gusto/gusto/timeloop.py", line 169, in run
    state.dump(t, pickup=False)
  File "/Users/pb412/firedrake/src/gusto/gusto/state.py", line 345, in dump
    self.pointdata_output.dump(self.fields, t)
  File "/Users/pb412/firedrake/src/gusto/gusto/state.py", line 112, in dump
    var[idx, :] = vals
  File "netCDF4/_netCDF4.pyx", line 1908, in netCDF4._netCDF4.Dataset.__exit__ (netCDF4/_netCDF4.c:14289)
  File "netCDF4/_netCDF4.pyx", line 2012, in netCDF4._netCDF4.Dataset.close (netCDF4/_netCDF4.c:16381)
  File "netCDF4/_netCDF4.pyx", line 1996, in netCDF4._netCDF4.Dataset._close (netCDF4/_netCDF4.c:16289)
  File "netCDF4/_netCDF4.pyx", line 1581, in netCDF4._netCDF4._ensure_nc_success (netCDF4/_netCDF4.c:12601)
RuntimeError: NetCDF: HDF error
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
----------------------

The error occurs independently of the model I run (e.g. Rayleigh_Taylor, sk_nonlinear, etc.) and the branch I am using (e.g. master, boussinesq_2d_lab, etc.), so it seems there is something wrong with the way I have set up the netCDF4 software on my machine...?
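
One check that might be relevant (a sketch only, not something gusto itself runs) is to print which netCDF C library the Python netCDF4 module is linked against:

# Illustrative check of the netCDF4 setup: report the Python module version
# and the version string of the underlying netCDF C library it was built with.
import netCDF4
print(netCDF4.__version__)       # netCDF4-python version
print(netCDF4.getlibversion())   # underlying netCDF C library version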


Please advise on the best way forward.


Regards,

Paul




_______________________________________________
firedrake mailing list
firedrake@imperial.ac.uk
https://mailman.ic.ac.uk/mailman/listinfo/firedrake