bug-hurd

Re: 2nd attempt at reviving the filesystem limit discussion.


From: M. Gerards
Subject: Re: 2nd attempt at reviving the filesystem limit discussion.
Date: Sun, 22 Dec 2002 22:37:45 +0100
User-agent: Internet Messaging Program (IMP) 3.1

Quoting "Neal H. Walfield" <neal@cs.uml.edu>:

> My idea is to maintain a ~1GB area of "metadata control space."  This
> area, rather than being a one-to-one mapping of memory to backing
> store (as it currently is), would lazily map (via a hash) the
> metadata as it is requested.  (The hash would be used to save the
> active mappings.)  When a page of meta-data is requested, the hash
> table is consulted: if the page is already mapped, the virtual
> address is returned; if not, free space in the 1GB area is allocated.
> The pager_read_page and pager_write_page routines would consult these
> hashes and read the data from or write the data to the right area of
> the disk as appropriate.
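
If I understand the scheme correctly, its core is something like the
sketch below.  This is only how I picture it; all the names are made
up, not the real libpager interfaces:

  /* Sketch of the metadata control space, as I picture it.  All names
     are made up; this is not the real libpager interface.  */

  #include <assert.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  #define WINDOW_SIZE  (1UL << 30)   /* The ~1GB metadata control space.  */
  #define PAGE_SIZE_   4096
  #define NBUCKETS     1024

  /* One active mapping: which disk block a page of the window holds.  */
  struct mapping
  {
    uint64_t disk_block;            /* Key: block number on backing store.  */
    void *vaddr;                    /* Value: address inside the window.  */
    struct mapping *next;           /* Hash chain.  */
  };

  static struct mapping *buckets[NBUCKETS];
  static char *window;              /* Base of the window; mapped on first use.  */
  static size_t window_used;        /* Trivial bump allocator for free space.  */

  static struct mapping *
  metadata_lookup (uint64_t disk_block)
  {
    struct mapping *m;
    for (m = buckets[disk_block % NBUCKETS]; m != NULL; m = m->next)
      if (m->disk_block == disk_block)
        return m;
    return NULL;
  }

  static void *
  metadata_enter (uint64_t disk_block, void *vaddr)
  {
    struct mapping *m = malloc (sizeof *m);
    assert (m != NULL);
    m->disk_block = disk_block;
    m->vaddr = vaddr;
    m->next = buckets[disk_block % NBUCKETS];
    buckets[disk_block % NBUCKETS] = m;
    return vaddr;
  }

  /* Return the virtual address holding DISK_BLOCK, mapping it lazily.
     pager_read_page and pager_write_page would consult the same hash
     to find the disk area a given window page belongs to.  */
  void *
  metadata_map (uint64_t disk_block)
  {
    struct mapping *m = metadata_lookup (disk_block);
    if (m != NULL)
      return m->vaddr;              /* Already mapped: reuse it.  */

    if (window == NULL)             /* Reserve the window on first use.  */
      {
        window = mmap (NULL, WINDOW_SIZE, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        assert (window != MAP_FAILED);
      }

    assert (window_used + PAGE_SIZE_ <= WINDOW_SIZE);
    void *vaddr = window + window_used;
    window_used += PAGE_SIZE_;
    return metadata_enter (disk_block, vaddr);
  }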

Why does pager_read_page need to consult the hashes?  I understand that
pager_write_page has to, because it needs to deallocate the mapping, but
pager_read_page should not need to modify or use the hashes.  Maybe I
misunderstood the libpager interfaces or what you described.
 
> My method differs from Thomas' in that I only worry about meta-data.
> Additionally, the mappings are torn down when Mach thinks it is
> appropriate, not immediately after the region is used or by some
> other internal algorithm.
> 
> The advantage of this algorithm over Roland's is that there would be
> fewer system calls--no need to create and tear down memory objects.
> Also, startup would be cheaper, as the meta-data is allocated lazily
> rather than in a Grand Reorganization when the translator is
> started.

I think there is one problem.  I assume most filesystems make
assumptions about the order of the pages: when a filesystem wants to
work on a memory area bigger than one page, it wants those pages in the
same order in memory as they are on disk.  I assume the function that
will be used to map has a parameter to configure the size of the
mapping (or how many mappings will be created).  It is possible that
some of these pages are already mapped elsewhere, and adding pages to
the hash twice isn't a good thing to do, I assume :).  A sketch of the
conflict follows below.
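
To make the conflict concrete, continuing the made-up sketch above:
mapping a run of pages in disk order can only work if none of them is
already entered in the hash at an unrelated address.

  /* Continuing the sketch above (same made-up names): map NPAGES
     blocks starting at BASE_BLOCK into contiguous window space, so
     that the filesystem sees them in the same order as on disk.
     Assumes the window was already set up by metadata_map.  */
  void *
  metadata_map_run (uint64_t base_block, size_t npages)
  {
    size_t i;

    /* If any block of the run is already mapped somewhere else in the
       window, we can satisfy neither the contiguity requirement nor
       the rule that a block is entered into the hash only once.  */
    for (i = 0; i < npages; i++)
      if (metadata_lookup (base_block + i) != NULL)
        return NULL;              /* Conflict: the caller must cope.  */

    /* Otherwise, claim contiguous free space and enter every page.  */
    assert (window_used + npages * PAGE_SIZE_ <= WINDOW_SIZE);
    char *run = window + window_used;
    window_used += npages * PAGE_SIZE_;
    for (i = 0; i < npages; i++)
      metadata_enter (base_block + i, run + i * PAGE_SIZE_);
    return run;
  }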

Another thing that might happen is that the kernel tears down mappings
while they are still in use.  The chance of this is very low when LRU
is used, but IMHO we should not assume it will never happen just
because of that.  This problem can be solved by using a usage counter
or something similar; see the sketch below.
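
Roughly what I mean, building on the sketches above (the names are
again made up):

  /* The usage-counter idea: each mapping carries a count of active
     users, and the eviction path refuses to tear down a mapping while
     the count is non-zero.  */
  struct counted_mapping
  {
    uint64_t disk_block;
    void *vaddr;
    unsigned int use_count;       /* Incremented on map, decremented on unmap.  */
    struct counted_mapping *next;
  };

  /* Every user of a mapping releases it again when done with it.  */
  void
  metadata_release (struct counted_mapping *m)
  {
    assert (m->use_count > 0);
    m->use_count--;
  }

  /* Called when the kernel wants pages of the window back: only evict
     mappings that nobody is currently using.  */
  int
  metadata_try_evict (struct counted_mapping *m)
  {
    if (m->use_count > 0)
      return 0;                   /* Still in use: keep the mapping.  */
    /* Here we would remove M from the hash and free its window space.  */
    return 1;
  }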
 
Is someone working on an implementation of this method or any of the
other methods, or does someone plan to do so?  I get quite irritated
when I see "This filesystem is out of space, and will now crash.
Bye!" ;)

If no one else will do this, I am willing to _try_ to implement Neal's
method.

Thanks,
Marco Gerards


