freepooma-devel

Re: [pooma-dev] Parallel File I/O


From: Richard Guenther
Subject: Re: [pooma-dev] Parallel File I/O
Date: Wed, 28 Aug 2002 16:19:50 +0200 (CEST)

On Wed, 28 Aug 2002, Arno Candel wrote:

> Hi,
>
> Is there a clever way to handle large distributed Array I/O to disk? I
> don't want all contexts to block each other while reading/writing.
>
> A straight-forward reader implementation like
>
> Array<3, double, MultiPatch<GridTag, Remote<Brick> > > A;
> A.initialize(Domain, Partition, DistributedTag());
>
> for (int i = A.domain()[0].first(); i <= A.domain()[0].last(); ++i)
>   for (int j = A.domain()[1].first(); j <= A.domain()[1].last(); ++j)
>     for (int k = A.domain()[2].first(); k <= A.domain()[2].last(); ++k)
>       {
>         double value;
>         my_ifstream >> value;
>         A(i,j,k) = value;
>       }

You are effectively doing all the work n times here, since every context
executes the whole loop ;)

I use something like the following (which does I/O on one node only - the
only way to work reliably with something like NFS):

  for (Layout_t::const_iterator domain = A.layout().beginGlobal();
       domain != A.layout().endGlobal(); ++domain) {
    Interval<Dim> d = intersect((*domain).domain(), totalDomain);
    // make a local copy of the remote data, owned by context 0
    Array<Dim, TypeofA::Element_t, Remote<Brick> > a;
    a.engine() = Engine<Dim, TypeofA::Element_t, Remote<Brick> >(0, d);
    a = A(d);
    Pooma::blockAndEvaluate();
    // do I/O - on context 0 only
    if (Pooma::context() != 0)
      continue;
    // from here on, use a.engine().localEngine() for all access to a!
  }
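To make the "do I/O" step concrete, the loop body on context 0 could dump the
copied patch along these lines. This is only a hedged sketch: it assumes
Dim == 3, that the Brick engine returned by localEngine() accepts the same
global indices as d, and that my_ofstream is a std::ofstream the caller has
already opened on context 0 - none of that is shown above:

```cpp
     // hedged sketch, runs on context 0 only: write out patch d
     // (assumes localEngine() is indexable with d's global coordinates)
     for (int i = d[0].first(); i <= d[0].last(); ++i)
       for (int j = d[1].first(); j <= d[1].last(); ++j)
         for (int k = d[2].first(); k <= d[2].last(); ++k)
           my_ofstream << a.engine().localEngine()(i, j, k) << '\n';
```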

An equivalent loop for distributed I/O would iterate over the layout's
local patch list and use the localEngine() of A directly.

Hope this helps, Richard.

--
Richard Guenther <address@hidden>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/
