freepooma-devel

Re: [pooma-dev] Parallel File I/O


From: Arno Candel
Subject: Re: [pooma-dev] Parallel File I/O
Date: Thu, 29 Aug 2002 14:29:47 -0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.0) Gecko/20020529

Many thanks!

I just implemented a serial writer. Can you give me a hint how to structure a reader which reads a file from NFS into a distributed array?

Thanks in advance,
Arno

Richard Guenther wrote:
On Wed, 28 Aug 2002, Arno Candel wrote:

Hi,

Is there a clever way to handle large distributed Array I/O to disk? I
don't want all contexts to block each other while reading/writing.

A straightforward reader implementation like

Array<3, double, MultiPatch<GridTag, Remote<Brick> > > A;
A.initialize(Domain, Partition, DistributedTag());

for (int i = A.domain()[0].first(); i <= A.domain()[0].last(); ++i)
  for (int j = A.domain()[1].first(); j <= A.domain()[1].last(); ++j)
    for (int k = A.domain()[2].first(); k <= A.domain()[2].last(); ++k)
      {
        my_ifstream >> value;
        A(i,j,k) = value;
      }

You are effectively doing all work n times here ;)

I use something like the following (which does I/O on one node only - the
only way to work reliably with something like NFS):

  for (Layout_t::const_iterator domain = A.layout().beginGlobal();
       domain != A.layout().endGlobal(); ++domain) {
     Interval<Dim> d = intersect((*domain).domain(), totalDomain);
     // make local copy of remote data, living on context 0
     Array<Dim, TypeofA::Element_t, Remote<Brick> > a;
     a.engine() = Engine<Dim, TypeofA::Element_t, Remote<Brick> >(0, d);
     a = A(d);
     Pooma::blockAndEvaluate();
     // do I/O - on node 0 only
     if (Pooma::context() != 0)
       continue;
     // from here on, use a.engine().localEngine() for all access to a!
  }
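For the reader you asked about, one could invert the copy direction of the loop above: context 0 fills a local staging copy from the NFS file, and the data-parallel assignment scatters each patch to its owning context. This is an untested sketch that assumes the same Layout_t/Engine API used above, and `my_ifstream` is the already-open stream from your serial writer setup:

```cpp
// Hypothetical serial reader: context 0 reads each global patch from
// the stream; assigning into A(d) distributes it to the patch owner.
for (Layout_t::const_iterator domain = A.layout().beginGlobal();
     domain != A.layout().endGlobal(); ++domain) {
   Interval<Dim> d = intersect((*domain).domain(), totalDomain);
   // staging copy living on context 0 only
   Array<Dim, TypeofA::Element_t, Remote<Brick> > a;
   a.engine() = Engine<Dim, TypeofA::Element_t, Remote<Brick> >(0, d);
   if (Pooma::context() == 0) {
     // fill the staging copy through a.engine().localEngine(),
     // e.g. element-wise: my_ifstream >> value; ...
   }
   // data-parallel assignment moves the patch to its owning context
   A(d) = a;
   Pooma::blockAndEvaluate();
}
```

Note the file is still read element-by-element on one node, so the patch order in the file must match the global patch iteration order.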

An equivalent loop for distributed I/O would iterate over the layout's
local patch list and use the localEngine() of A directly.
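In outline, that per-context variant could look like the following; the local patch iterator names (beginLocal()/endLocal()) are assumptions here, not verified against the POOMA headers:

```cpp
// Sketch of distributed I/O: each context touches only the patches it
// owns, so no staging copy and no cross-context communication is needed.
for (Layout_t::const_iterator p = A.layout().beginLocal();
     p != A.layout().endLocal(); ++p) {
   Interval<Dim> d = (*p).domain();
   // each context opens its own file (or seeks to a per-patch offset
   // in a shared file) and loops over d, accessing the elements via
   // the localEngine() of A directly
}
```

This only works reliably if each context writes to its own file or to disjoint offsets; concurrent writers to one NFS file are exactly the situation the node-0-only loop avoids.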

Hope this helps, Richard.

--
Richard Guenther <address@hidden>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/

