
Re: Review of Thomas's >2GB ext2fs proposal


From: Neal H. Walfield
Subject: Re: Review of Thomas's >2GB ext2fs proposal
Date: Tue, 17 Aug 2004 05:09:01 -0400
User-agent: Wanderlust/2.8.1 (Something) SEMI/1.14.3 (Ushinoya) FLIM/1.14.3 (Unebigoryƍmae) APEL/10.6 Emacs/21.2 (i386-debian-linux-gnu) MULE/5.0 (SAKAKI)

At 17 Aug 2004 01:52:36 -0700,
Thomas Bushnell BSG wrote:
> 
> "Neal H. Walfield" <neal@cs.uml.edu> writes:
> 
> > Interesting.  But can't we do this already?  Instead of using
> > store_read and store_write in pager code, we need only use store_map
> > to get a memory object to the disk.  Then when we get a page in
> > request, we do a vm_map on the supplied memory object and return that
> > to Mach.
> 
> If you try that, the kernel always copies the page.  Pages are not
> allowed to exist in multiple memory objects at once; this is a fairly
> fundamental consequence of the way Mach implements pagers and handles
> memory management.
> 
> You end up with the same data living in two pages, taking up double
> memory.  Every page of disk should live in only one pager at a time,
> for optimal performance.

This is not what I am doing.  I understand the problem of having two
pagers refer to the same block on disk.  But I am referring to
recursive mappings (which is exactly what Roland wants to take
advantage of in his proposal): the store returns a memory object,
and both user pagers use that object as their backing store, always
via vm_map and memory_object_data_supply, never via store_read or
store_write.  Hence we have:

  mapping -> user pager A \
                            => store pager -> disk
  mapping -> user pager B /

not:

  mapping -> user pager A -> disk
  mapping -> user pager B -> disk

The latter, I agree, does cause problems.
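
To make this concrete, here is a minimal sketch (not code from either
proposal) of what user pager A's read routine could look like under
this scheme, written against the libpager pager_read_page interface
that ext2fs already implements.  The global store_memobj is
hypothetical, standing for the memory object returned by store_map,
and I assume a one-to-one correspondence between pager offsets and
store offsets:

  #include <mach.h>
  #include <hurd/pager.h>
  #include <hurd/store.h>

  /* Hypothetical: the memory object for the whole disk, obtained once
     at startup with store_map (store, VM_PROT_READ | VM_PROT_WRITE,
     &store_memobj).  */
  static memory_object_t store_memobj;

  error_t
  pager_read_page (struct user_pager_info *upi, vm_offset_t page,
                   vm_address_t *buf, int *write_lock)
  {
    vm_address_t addr = 0;
    error_t err;

    /* Map the store pager's object at the faulting page, letting Mach
       pick the address (the anywhere flag).  The store pager, not this
       routine, does the actual disk I/O when this mapping faults.  */
    err = vm_map (mach_task_self (), &addr, vm_page_size,
                  0 /* mask */, 1 /* anywhere */,
                  store_memobj, page, 0 /* copy */,
                  VM_PROT_READ | VM_PROT_WRITE, VM_PROT_ALL,
                  VM_INHERIT_NONE);
    if (err)
      return err;

    /* libpager supplies *BUF to the kernel (that is where
       memory_object_data_supply happens) and deallocates it afterward;
       store_read is never called.  */
    *buf = addr;
    *write_lock = 0;
    return 0;
  }

Whether the kernel then copies the page, as you argue, or shares it is
exactly the point in dispute; the sketch only shows where the calls
would go.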

> > > By contrast, mine also requires a caching strategy, but it is only a
> > > caching strategy for memory mappings, which, unlike core, are not a
> > > limited globally shared resource.
> > 
> > But what is your caching strategy?  My caching strategy has only
> > previous pages as its overhead.  It is completely synchronized with
> > Mach's eviction scheme.  What is yours?
> 
> I only cache mappings, so it doesn't really matter.  Creating mappings
> is very, very cheap (especially if you allow Mach to pick the addresses
> by using the anywhere flag).  Creating a mapping is just a vm_map,
> which is fast: it's the same as vm_allocate.  But I would use
> LRU for lack of something else.
> 
> Mappings carry no overhead, and are quick and easy to set up and tear
> down. 

If that is the case, then I think we should drop the whole reference
counting system and have the accessor functions just do a vm_map and
vm_deallocate as required.  This means that we also don't need the
hashes (which are fairly delicate).  Are you really that confident
that this is so cheap?  If so, why do you still want to use a cache?
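
For concreteness, here is a hedged sketch of what I mean.  The
accessors disk_ref and disk_unref are hypothetical names, not taken
from either proposal, and store_memobj is again the object returned by
store_map.  Each access maps the needed page and unmaps it when done,
with no reference counts and no hash tables:

  #include <assert.h>
  #include <errno.h>
  #include <mach.h>

  extern memory_object_t store_memobj;  /* from store_map, as above */

  /* Map the page containing disk offset OFFSET; return a pointer to
     OFFSET within the fresh mapping.  */
  static void *
  disk_ref (vm_offset_t offset)
  {
    vm_address_t addr = 0;
    vm_offset_t page = offset & ~(vm_page_size - 1); /* page-align */
    error_t err = vm_map (mach_task_self (), &addr, vm_page_size,
                          0 /* mask */, 1 /* anywhere */,
                          store_memobj, page, 0 /* copy */,
                          VM_PROT_READ | VM_PROT_WRITE, VM_PROT_ALL,
                          VM_INHERIT_NONE);
    assert_perror (err);
    return (void *) (addr + (offset - page));
  }

  /* Tear the mapping down again.  */
  static void
  disk_unref (void *ptr)
  {
    vm_address_t addr = (vm_address_t) ptr & ~(vm_page_size - 1);
    vm_deallocate (mach_task_self (), addr, vm_page_size);
  }

If vm_map really is as cheap as vm_allocate, something this simple
should suffice.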

Neal
