Re: Denial of service attack via libpager

From: Richard Braun
Subject: Re: Denial of service attack via libpager
Date: Tue, 30 Aug 2016 12:12:27 +0200
User-agent: Mutt/1.5.23 (2014-03-12)

On Mon, Aug 29, 2016 at 03:58:29PM -1000, Brent W. Baccala wrote:
> On Sun, Aug 28, 2016 at 11:15 PM, Richard Braun <rbraun@sceen.net> wrote:
> > OK, this comes from the fact that io_map directly provides memory
> > objects indeed... Do we actually want to pass them around ? How
> > come calls like memory_object_init (specifically meant to be used
> > between the kernel and the pager) can be made from any client ?
> Good question!
> How could we authenticate the kernel to avoid unprivileged access?

The usual method is to add a new unprivileged abstraction. We would
have privileged memory object rights on which all calls are possible
(something like memory_object_priv_t) and unprivileged ones (the
regular memory_object_t). This is a kernel modification.
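
A toy model of that split, to make the idea concrete. This is plain Python, not Mach IPC: the name memory_object_priv_t comes from the mail, but the classes and methods below are purely illustrative stand-ins for what would really be distinct port types enforced by the kernel and MIG.

```python
# Toy model of splitting memory object rights into a privileged
# variant (handed to the kernel) and an unprivileged one (handed to
# arbitrary clients via io_map). In Mach this would be two port
# types; here a capability flag stands in for that.

class MemoryObject:
    def __init__(self):
        self.initialized = False

    def priv_right(self):
        """Right given to the kernel: pager protocol calls allowed."""
        return MemoryObjectRight(self, privileged=True)

    def unpriv_right(self):
        """Right given to clients via io_map: pager calls refused."""
        return MemoryObjectRight(self, privileged=False)

class MemoryObjectRight:
    def __init__(self, obj, privileged):
        self.obj = obj
        self.privileged = privileged

    def memory_object_init(self):
        # Only the privileged right (i.e. the kernel) may drive the
        # pager protocol; a client holding the regular right cannot.
        if not self.privileged:
            raise PermissionError("memory_object_init: unprivileged right")
        self.obj.initialized = True
```

With this split, a client that obtained a memory object through io_map simply cannot speak the pager protocol, which closes the denial-of-service hole.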

We could also temporarily refuse to hand out memory objects at all, and
instead make the client pass its own task so that the server performs
the mapping. That would at least take care of the DoS issue, and it
would be a pure Hurd change.
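
A toy model of this second proposal, again in Python rather than real Mach/Hurd code: Task, FileServer and io_map_via_task are illustrative names (the real mechanism would be the server calling vm_map on the client's task port), but the structure shows why the DoS goes away.

```python
# Toy model: the client passes its task to the server, and the server
# maps the region into the client's address space itself. The memory
# object never leaves the server, so the client cannot call
# memory_object_init & friends against the pager.

class Task:
    def __init__(self, name):
        self.name = name
        self.mappings = []   # regions mapped into this task's address space

class FileServer:
    def __init__(self):
        self._memobj = object()   # pager-side object; never handed out

    def io_map_via_task(self, client_task, offset, size):
        # Server-side mapping on behalf of the client: the client gets
        # the mapping, but never holds the memory object right.
        region = (self._memobj, offset, size)
        client_task.mappings.append(region)
        return len(client_task.mappings) - 1   # index of the new region
```

The key property is that io_map_via_task returns only a region index, never the memory object itself.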

> My goal is to build a single system image Hurd cluster.  We need to support
> multiple processes mmap'ing the same file, for basic POSIX compatibility.
> If those processes are on different nodes, then the file server needs to
> handle multiple kernels as paging clients.

OK, I had forgotten the context. And in this context, I agree.

> > In addition, I've just thought about something else : if we handle
> > multiple clients, how do we make sure that two kernels, caching the
> > same file, don't just completely corrupt its content ? We'd need
> > some kind of cooperation to prevent the file being mapped more than
> > once I suppose, right ?
> >
> They can already do that with ordinary io_write's.  It's not as if
> clients can trash the file if they don't have write access; they
> can't.  It's a denial of service issue.

No, what I mean is that the in-memory view of the same file on two
different systems would become different, and this is why cache
coherence would be required.

> > Or, well, a complete cache coherence protocol, with a very large
> > overhead per operation.
> >
> That's what I'm talking about!
> Let's think about the "very large overhead" for a minute.  If we've only
> got a single client, there's no extra overhead at all.  That's the case
> we've got now, so we're not losing anything.

I'm not sure about this: for one thing, the kernel would have to
(synchronously) notify the pager when a mapping is upgraded from read
to write access, so that the pager can invalidate all other mappings.
That alone adds a constant overhead to every shared mapping that
becomes writable. But agreed, even then, it shouldn't happen often in
the single-client case.
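
The overhead can be made visible with a toy model of that upgrade path (Python, purely illustrative; the real notifications would be Mach messages between the pager and each kernel). Counting invalidation messages shows the point made above: zero extra messages with a single client, one per other kernel otherwise.

```python
# Toy model of the read-to-write upgrade: before granting write
# access to one kernel, the pager must synchronously invalidate
# every other kernel's read mapping of the page. The message counter
# is the overhead being discussed.

class Pager:
    def __init__(self):
        self.readers = {}    # page -> set of kernels with read mappings
        self.messages = 0    # invalidation messages sent so far

    def map_read(self, kernel, page):
        self.readers.setdefault(page, set()).add(kernel)

    def upgrade_to_write(self, kernel, page):
        # Synchronously invalidate all *other* read mappings first.
        others = self.readers.get(page, set()) - {kernel}
        self.messages += len(others)     # one notification per other kernel
        self.readers[page] = {kernel}    # sole remaining mapping is the writer
```

In the single-client case upgrade_to_write sends no messages at all, which matches the observation that nothing is lost there.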

> If two processes on separate nodes have mmap'ed the same file with write
> permissions, you bet that could generate some very large overhead!  The
> programmer has to take that into account, and avoid using that technique in
> critical sections of code.

It's more complicated than that. We would want regular POSIX programs
to be able to mmap and still get correct behaviour. We really would
not want the programmer to have to take that into account, so we'd
need strong ordering constraints. Because of that, this would look
like cache line bouncing in multiprocessor machines, except across a
network. But if that's what's required, then so be it; users can
adjust their usage if they want better performance.

> Yet it needs to be supported for POSIX compatibility, and in non-critical
> code it might not be a problem at all.  Two tasks could probably bat a 4KB
> page back and forth a hundred times and you'd never notice, just so long as
> they settled down and stopped doing it once they got initialized.

I'm not saying we shouldn't do this because of the overhead, but it has
to be considered.

> Furthermore, my reading of the memory_object protocol suggests that Mach's
> designers already had this in mind.  We don't need to design a cache
> coherence protocol, since we've already got one.  We just need to implement
> it.
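
The "batting a page back and forth" scenario can be sketched as a toy single-writer/multiple-reader protocol in the spirit of the memory_object interface (the real hook would be something like memory_object_lock_request asking a kernel to flush and write-protect a page). The Python below is a model, not Hurd code; the class and method names are made up.

```python
# Toy single-writer coherence model: at most one kernel owns a page
# for writing. A write by a non-owner forces the pager to recall the
# page (flush + write-protect at the old owner) and hand it over.
# The transfer counter is the "page bouncing" overhead from the mail.

class CoherentPage:
    def __init__(self):
        self.owner = None     # kernel currently holding write access
        self.transfers = 0    # ownership bounces so far

    def write(self, kernel):
        if self.owner is not kernel:
            if self.owner is not None:
                # Recall from the previous owner, then grant.
                self.transfers += 1
            self.owner = kernel
```

Two tasks alternating writes a hundred times each produce a transfer per write after the first, which is exactly the overhead that stops mattering once the tasks settle down.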

I agree they certainly thought about it, but I really don't think it's
that simple in practice.

I understand what you want, and I'm all for it actually. Your results
with your network server are already impressive. Good luck.

Richard Braun
