
Re: Unmapping fpages


From: Espen Skoglund
Subject: Re: Unmapping fpages
Date: Wed, 29 Dec 2004 13:04:04 +0100

[Neal H Walfield]
> The problem isn't giving tasks continued read-only access to the C
> library.  The problem is providing a way to assure that access to
> memory is private.

I was only using the C library example because you used it in your
original post to try to express that there was some serious problem.

> Let's say that we have some code in the C library which some tasks
> use.  That is, they request the data from the file system and have
> all received a read-only map of it from physmem.  Eventually, these
> tasks may no longer need the data and deallocate it.  Alternatively,
> but equally important and functionally equivalent, the tasks may be
> forced to release the physical memory because of memory pressure.
> Let's assume that one of the tasks is malicious and doesn't unmap
> the memory.  Once physmem reallocates the memory, the malicious task
> gets a readonly window into the task that allocated it.

If all tasks need to release the memory, then the physmem task can
simply perform the unmap on all the tasks itself.
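
Roughly, on the physmem side that is a single unmap of the fpage
covering the region (a minimal sketch, assuming the L4Ka::Pistachio
<l4/space.h> C convenience bindings; the exact function names differ
between L4 versions and bindings):

  #include <l4/types.h>
  #include <l4/space.h>

  /* Revoke a region from every task it was mapped to.  Unmap is
   * recursive: it removes the mapping from all address spaces that
   * received it, directly or indirectly, from physmem.  */
  void revoke_region (L4_Word_t base, int log2size)
  {
    L4_Fpage_t fp = L4_FpageLog2 (base, log2size);
    L4_Set_Rights (&fp, L4_FullyAccessible);  /* revoke all rights */
    L4_UnmapFpage (fp);    /* physmem's own mapping stays intact */
    /* or L4_Flush (fp) if physmem should drop its own mapping too */
  }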

> So it is not a question of simply having continued read-only access
> to the C library after allegedly releasing physical memory: the
> problem is giving tasks access to other tasks' potentially sensitive
> information.

> In short: we have to multiplex memory because memory is a scarce
> resource.  Your solution seems to assume that memory is allocated
> once and never multiplexed.

No, my "solution" was not a solution at all.  It was merely a question
about what you are actually trying to solve, and whether there really
was a problem to be solved in the first place.

> Physical memory is only a cache of the underlying backing store.  It
> is true, as you say, that the contents of shared objects won't change;
> however, the physical memory which provides a temporary mapping to
> them will.

...in which case you'll probably want to unmap the memory from all
address spaces anyway.

> Where would you keep the alias mappings?  In a proxy task as I
> suggested?

That would be one option, yes.

>> Yes, I know, the situation is not ideal.  We do, however, have
>> ideas on how to remedy the situation should it prove to be a real
>> problem.

> Can you give us some idea of what your thoughts are so that we could
> integrate them into our plans?

You associate a tag with each mapping you perform.  Upon unmap you can
then specify the tag you want to perform the operation on; or, more
generally, you specify a mask and a compare value, and the unmap is
applied to all mappings whose tag matches under that mask.
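
Purely as an illustration of that selection rule (this is a
hypothetical extension, not an existing L4 call; the structure and
names below are made up):

  #include <stdint.h>
  #include <stdio.h>

  /* One record per mapping physmem has handed out, tagged at map time
   * (the tag could, for instance, encode the receiving task).  */
  struct mapping
  {
    uintptr_t addr;
    uint32_t  tag;
  };

  /* Hypothetical selective unmap: operate only on mappings whose tag
   * matches `compare' under `mask', i.e. (tag & mask) == compare.  */
  static void
  unmap_matching (struct mapping *maps, int n,
                  uint32_t mask, uint32_t compare)
  {
    int i;
    for (i = 0; i < n; i++)
      if ((maps[i].tag & mask) == compare)
        printf ("would unmap %#lx (tag %#x)\n",
                (unsigned long) maps[i].addr,
                (unsigned int) maps[i].tag);
  }

  int
  main (void)
  {
    struct mapping maps[] = {
      { 0x1000, 0x01 },  /* handed to task A */
      { 0x2000, 0x02 },  /* handed to task B */
      { 0x3000, 0x01 },  /* handed to task A */
    };

    /* Revoke only what was handed to task A.  */
    unmap_matching (maps, 3, 0xff, 0x01);
    return 0;
  }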

> Could you confirm, then, that if there is a 4MB mapping in physmem
> and it maps the first 4KB to a client task, physmem can call unmap
> on the 4KB fpage and expect the client task to no longer have a
> mapping?

Yes.
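
Concretely, something like this (again a minimal sketch against the
Pistachio <l4/space.h> C bindings; the region base is a placeholder):

  #include <l4/types.h>
  #include <l4/space.h>

  /* physmem mapped a 4MB region (log2 size 22) to a client, but now
   * revokes only the first 4KB page: unmap operates on the fpage it
   * is given, regardless of the size of the fpage originally mapped. */
  void revoke_first_page (L4_Word_t region_base)
  {
    L4_Fpage_t page = L4_FpageLog2 (region_base, 12);   /* 4KB */
    L4_Set_Rights (&page, L4_FullyAccessible);
    L4_UnmapFpage (page);  /* client loses the page; physmem keeps it */
  }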

>>> ...and moreover would mean that we could not flush mappings on a
>>> per-task basis.
>> 
>> How does this relate to unaligned mappings?  I don't understand.

> What I am trying to say here is that we would like to unmap on a
> per-task basis.  If physmem maps fpage X to clients A and B, and then
> A deallocates it but B does not, can we unmap (from physmem) only the
> mapping from A?  I think this would only be possible if we used a
> proxy task and had a mapping from physmem at address X to proxy at
> address Y to A and a second mapping from physmem at address X to
> proxy at address Z to B.  The if we want to only unmap the mapping
> in A we need to unmap the mapping from physmem to A (i.e. the
> mapping at Y in the proxy server).

Ok, I still don't understand how this relates to unaligned mappings,
though. ;-)
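
That said, if I read the aliasing scheme right, revoking only A's view
would amount to flushing the proxy's alias at Y, something like the
following (same caveats as above; Y and the page size are placeholders):

  #include <l4/types.h>
  #include <l4/space.h>

  /* The proxy maps the same physical page from physmem at two virtual
   * aliases: Y (passed on to task A) and Z (passed on to task B).
   * Flushing the alias at Y drops the proxy's mapping there and, with
   * it, A's mapping; B's mapping, derived from Z, is untouched.  */
  void revoke_client_a (L4_Word_t alias_y)
  {
    L4_Fpage_t fp = L4_FpageLog2 (alias_y, 12);   /* 4KB alias */
    L4_Set_Rights (&fp, L4_FullyAccessible);
    L4_Flush (fp);
  }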

Anyhow, I do appreciate your concern about not having selective unmap,
and as I said earlier, we do have ideas on how to deal with this.  I'm
just curious about what the implications of not having selective unmap
within your system are.  That is, does it have a real impact on system
performance or on code complexity?

        eSk




