Re: memory_object_lock_request and memory_object_data_return fnord


From: Thomas Bushnell, BSG
Subject: Re: memory_object_lock_request and memory_object_data_return fnord
Date: 13 Mar 2002 18:39:54 -0800
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.1

Neal H Walfield <neal@cs.uml.edu> writes:

> Hmm.  True.  But then only if the dirty bit was set in the
> memory_object_data_return message.

Oh right, somehow I forgot that the kernel handed you the dirty bit.
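
For concreteness, that flag arrives in the data-return stub that
libpager implements.  A rough sketch, with argument names from memory
(check the generated stubs in your tree) and write_pages_to_backing_store
invented for illustration:

    #include <mach.h>

    kern_return_t
    my_memory_object_data_return (mach_port_t object,
                                  mach_port_t control,
                                  vm_offset_t offset,
                                  pointer_t data,
                                  vm_size_t length,
                                  boolean_t dirty,
                                  boolean_t kernel_copy)
    {
      if (dirty)
        /* Only now do we know the kernel actually modified the pages.  */
        write_pages_to_backing_store (offset, (void *) data, length);
      /* Either way the kernel has evicted its copy; release ours.  */
      vm_deallocate (mach_task_self (), data, length);
      return KERN_SUCCESS;
    }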

> > Ideally, if you need to write to the page, you do it the same way
> > libdiskfs does: map the pager yourself, and write to it that way.
> 
> Well, the Mach documentation explicitly says to avoid this as it often
> leads to deadlock.

Only because they are thinking about a limited number of threads.  We
always run the pagers in their own threads, so it's no trouble.
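
Concretely, what libdiskfs does amounts to something like the sketch
below (error handling elided; treat the details as from memory rather
than authoritative):

    #include <mach.h>
    #include <string.h>
    #include <hurd/pager.h>

    /* Map our own pager into our address space and write through the
       mapping; faults on WINDOW are served by PAGER's own threads.  */
    static void
    write_through_pager (struct pager *pager, vm_offset_t offset,
                         void *buf, vm_size_t len)
    {
      mach_port_t memobj = pager_get_port (pager);
      vm_address_t window = 0;

      /* Turn the name pager_get_port gives us into a send right.  */
      mach_port_insert_right (mach_task_self (), memobj, memobj,
                              MACH_MSG_TYPE_MAKE_SEND);
      vm_map (mach_task_self (), &window, len, 0, 1 /* anywhere */,
              memobj, offset, 0 /* not a copy */,
              VM_PROT_READ | VM_PROT_WRITE,
              VM_PROT_READ | VM_PROT_WRITE, VM_INHERIT_NONE);

      memcpy ((void *) window, buf, len);
      vm_deallocate (mach_task_self (), window, len);
    }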

> I am trying to understand the motivation for having a relatively
> complex interface to manage page ownership which, in the Hurd, we do
> not use.

Ah, the principal motivation is to allow, for example, a pager to
manage pages shared between many "kernels".  The reason to demand
pages back or require locking, in general, is so that you can hand
them out to other "kernels".
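
And memory_object_lock_request is the tool for that: before a page can
go to a second "kernel", the pager recalls it from the first.  Roughly
like this (a sketch; verify the argument list against mach.defs, and
memctl, page_offset, and reply are assumed names):

    /* MEMCTL is the memory_object_control port a given kernel handed
       us in memory_object_init; REPLY receives its lock_completed
       message.  */
    kern_return_t err =
      memory_object_lock_request (memctl, page_offset, vm_page_size,
                                  MEMORY_OBJECT_RETURN_DIRTY, /* return dirty pages */
                                  TRUE,                       /* flush them from the cache */
                                  VM_PROT_ALL,                /* and revoke all access */
                                  reply);

Once the lock_completed message comes back (and any dirty data has
arrived via memory_object_data_return), the page is safe to supply to
the next "kernel".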

In principle, we need to do this already!  The most glaring security
issue with the Hurd right now is the assumption that all users will
just take their pagers and hand them to the kernel with vm_map.  But
a user might instead act as a "kernel" itself.  To cope with this, the
pagers need to be able to handle multiple "kernels", and also have
strategies for dealing with recalcitrant "kernels" that aren't
behaving properly.

> From what I can see, the pager_memcpy function can be extremely slow.
> Just consider what happens when we want to replace a page on disk
> (one which has not yet been read into memory).  pager_memcpy causes a
> page fault.  The kernel sends a message to the manager, which reads
> the page from disk (completely unnecessarily); then we write to the
> page and eventually it is flushed back to disk.  This is even worse
> if we are writing to multiple pages -- our thread and the manager
> thread play ping-pong!  This could be avoided by acquiring as much of
> the range up front as possible.

Why do you think this is so horrible?  This is just demand paging.

What's supposed to be going on behind the scenes is that the kernel
should detect that you are faulting the pages in sequentially, and ask
for pages from the pager ahead of time, optimizing the sequential
access case.
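
Each of those faults just bottoms out in the pager's read callback,
pager_read_page in libpager.  A minimal sketch, with read_backing_store
invented for illustration:

    #include <mach.h>
    #include <hurd/pager.h>

    error_t
    pager_read_page (struct user_pager_info *upi, vm_offset_t page,
                     vm_address_t *buf, int *write_lock)
    {
      /* Get a fresh page and fill it from backing store; the kernel
         decides whether to ask for one page or a read-ahead batch.  */
      vm_allocate (mach_task_self (), buf, vm_page_size, 1);
      read_backing_store (upi, page, (void *) *buf);
      *write_lock = 0;		/* the page may be mapped writable */
      return 0;
    }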

Thomas



