
Re: A memory-based filesystem for the lazy [or impatient]


From: Roland McGrath
Subject: Re: A memory-based filesystem for the lazy [or impatient]
Date: Thu, 21 Dec 2000 15:51:34 -0500 (EST)

> On Thu, Dec 21, 2000 at 02:10:45AM -0500, Roland McGrath wrote:
> > If you are going to take that approach, i.e. a memory-based disk rather
> > than a memory-based filesystem, I would suggest adding a type to libstore.
> 
> But how do you format the store before it is used?

Just like any other, with mke2fs.

        settrans -c /dev/ramdisk /hurd/storeio -Tcopy zero:100M
        mke2fs /dev/ramdisk
        settrans -a /tmp /hurd/ext2fs /dev/ramdisk

It is true that there isn't a handy way to make:

        settrans -a /tmp /hurd/ext2fs -Tcopy zero:100M

do what you'd like.  This is unfortunate, since it is definitely better for
performance to have the store implementation in ext2fs's libstore rather
than in storeio's.  With the changes I just made to libstore/copy.c last
night (which I haven't even tried to compile--I'm relying on you to test
and fix them if need be), it shouldn't be too bad going through storeio.
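To make the copy-store semantics concrete, here is a rough Python model of what `-Tcopy zero:100M` behaves like. This is not Hurd code; `CopyStore` and its methods are invented for illustration. The idea is a copy-on-write layer over a zero-filled backing: reads fall through to zeros until a page is written, and only dirtied pages occupy memory.

```python
# Toy model of a "copy" store layered over a zero store.
# Not Hurd code: CopyStore and its method names are invented.

PAGE_SIZE = 4096

class CopyStore:
    """Copy-on-write over zeros: pages materialize only when written."""

    def __init__(self, size):
        self.size = size
        self.pages = {}  # page number -> bytearray, only for dirtied pages

    def read(self, offset, length):
        out = bytearray()
        while length > 0:
            pg, off = divmod(offset, PAGE_SIZE)
            n = min(length, PAGE_SIZE - off)
            page = self.pages.get(pg)
            out += page[off:off + n] if page else b"\0" * n
            offset += n
            length -= n
        return bytes(out)

    def write(self, offset, data):
        while data:
            pg, off = divmod(offset, PAGE_SIZE)
            n = min(len(data), PAGE_SIZE - off)
            page = self.pages.setdefault(pg, bytearray(PAGE_SIZE))
            page[off:off + n] = data[:n]
            offset += n
            data = data[n:]

store = CopyStore(100 * 1024 * 1024)    # like zero:100M
store.write(8192, b"superblock")
assert store.read(8192, 10) == b"superblock"
assert store.read(0, 4) == b"\0\0\0\0"  # untouched pages read as zeros
```

So mke2fs writes its metadata into dirtied pages, while the bulk of the "disk" costs nothing until it is actually used.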

The problem is that file_get_storage_info cannot meaningfully describe the
store, so the filesystem's libstore has to make normal RPCs to storeio to
access the store.  This is not so bad, since it will only do it when pages
are written back.  The changes I made last night to libstore should make it
use vm_read/vm_write instead of actual data copying, but there is still a
fair amount of overhead in passing the ool data in the RPC from the fs to
storeio and then doing the vm_read/vm_write inside storeio (which is more
or less like a second ool RPC).

We could make something similar perform better by implementing the
STORAGE_MEMORY type.  Then storeio would just allocate a default-pager
memory object, and that memory object port could be passed back through
file_get_storage_info to the filesystem's libstore.
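As a toy model of the difference between the two paths (the class names are invented; real code would use Mach memory objects and the actual libstore/storeio RPC interfaces): the proxied path moves the data twice per transfer, while a memory object handed back through file_get_storage_info lets the filesystem write the page in place.

```python
# Toy comparison of the two pageout paths.  ProxiedStore and
# SharedMemoryStore are invented names, not Hurd interfaces.

copies = {"proxied": 0, "shared": 0}

class ProxiedStore:
    """fs -> ool RPC -> storeio -> vm_write: two moves of the data."""
    def __init__(self):
        self.backing = bytearray(1 << 16)
    def write(self, offset, data):
        msg = bytes(data)                  # move 1: ool data into the RPC
        copies["proxied"] += 1
        self.backing[offset:offset + len(msg)] = msg  # move 2: vm_write
        copies["proxied"] += 1

class SharedMemoryStore:
    """fs maps the memory object directly: the page lands in place."""
    def __init__(self):
        self.backing = bytearray(1 << 16)
    def map(self):
        return memoryview(self.backing)    # shared mapping, no copy counted

ProxiedStore().write(0, b"page data")
view = SharedMemoryStore().map()
view[0:9] = b"page data"                   # written in place
assert copies["proxied"] == 2 and copies["shared"] == 0
```

The counts are the point of the sketch: STORAGE_MEMORY would collapse the two per-page data moves into none.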

But, the plan as it stands is reasonable enough.  The performance questions
I've raised only come into play when the filesystem does pagein/pageouts,
which won't happen much until you've used up RAM.  Hmm, except for
synchronous inode updates.  We really ought to add an --unsafe-asynchronous
option to diskfs that makes it never do any sync'ing except for explicit
fsync and syncfs calls.  This is useful in some cases for real disk
filesystems too; FreeBSD has such an option that they use when doing clean
installs, i.e. where if anything goes wrong it is acceptable to have to
wipe the disk and start over (since fsck might not be able to recover after
a crash).
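The sync-policy idea can be sketched like this (a Python toy, not diskfs; the option name is from the mail, everything else here is invented): with the option on, implicit and periodic syncs become no-ops, and nothing is written back until an explicit fsync or syncfs.

```python
# Toy sketch of an --unsafe-asynchronous sync policy.  Not diskfs code:
# ToyDiskfs and its methods are invented for illustration.

class ToyDiskfs:
    def __init__(self, unsafe_async=False):
        self.unsafe_async = unsafe_async
        self.dirty = []    # pending metadata updates
        self.on_disk = []  # what has actually been written back

    def update_inode(self, ino):
        self.dirty.append(ino)
        if not self.unsafe_async:
            self._writeback()  # normal mode: inode updates sync now

    def periodic_sync(self):
        if not self.unsafe_async:
            self._writeback()

    def fsync(self):
        self._writeback()      # explicit fsync always flushes

    def _writeback(self):
        self.on_disk += self.dirty
        self.dirty.clear()

fs = ToyDiskfs(unsafe_async=True)
fs.update_inode(42)
fs.periodic_sync()
assert fs.on_disk == []        # nothing written until asked
fs.fsync()
assert fs.on_disk == [42]
```

For a memory-backed store (or a FreeBSD-style throwaway install) the crash-recovery risk this takes on is moot, which is exactly why the option is acceptable there.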

This approach of having the pageout path turn into a vm_write of another
task's anonymous page probably has horrible VM performance.  Thomas can
give us a clearer idea what exactly this will mean.


