l4-hurd

Re: design goals vs mechanisms


From: ness
Subject: Re: design goals vs mechanisms
Date: Tue, 01 Nov 2005 13:50:56 +0100
User-agent: Mozilla Thunderbird 1.0.7 (X11/20051031)

Jonathan S. Shapiro wrote:
On Tue, 2005-11-01 at 00:26 +0100, Michal Suchanek wrote:

On 10/28/05, Jonathan S. Shapiro <address@hidden> wrote:

On Fri, 2005-10-28 at 15:08 +0200, Michal Suchanek wrote:

On 10/26/05, Marcus Brinkmann <address@hidden> wrote:

At Wed, 26 Oct 2005 22:43:06 +0200,
Bas Wijnen <address@hidden> wrote:

If you want stability, you probably want to do some of the following:
* allocate a fixed amount of resources statically up front,
 instead of dynamically at run time.
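A minimal sketch of what static preallocation means in practice (the 4096-byte bound and the `set_path` helper are hypothetical, chosen only for illustration):

```c
#include <string.h>

#define PATH_BUF 4096          /* hypothetical fixed bound */

/* Static preallocation: the buffer exists up front, so storing a path
 * can never fail at run time for lack of memory -- the price is that
 * inputs longer than the bound must be rejected outright. */
static char path_buf[PATH_BUF];

int set_path(const char *s) {
    if (strlen(s) >= PATH_BUF)
        return -1;             /* reject instead of allocating more */
    strcpy(path_buf, s);
    return 0;
}
```

The dynamic alternative (`malloc` a buffer of whatever size arrives) never rejects, but can fail unpredictably under memory pressure, which is exactly the trade-off being discussed.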

I kind of dislike this. That is one of the things that is nice in
Hurd/Mach: you have no limit on the size of strings like filenames. I
do not think I would like a filename that has 1M characters, but I do
not know what the "reasonable limit" for filename length is.

I don't know anybody who *does* like this, but go read your sentence
again. What you are saying is: "I want the system to run robustly, but I
am unable to specify the conditions under which it must do so."

This just won't work. It isn't even a kernel vs. application issue.


hehe, I now understand why AMS likes to throw so much dirt in your direction :)

Some of your replies are so constructive...

Like you publish a paper explaining ways of transferring unbounded data
and then reply "it just won't work" when I complain about size limits
on strings.


I understand how the confusion came about, but this is not quite what I
said.

Robustness is a matter of degree. In most general purpose applications
we are prepared to tolerate rare application failure and recover. In
these applications, dynamically sized strings are perfectly okay **at
the application level**.

When extreme robustness is required, static preallocation really does
become necessary.

The trusted buffer object described in the synchronous IPC
vulnerabilities paper does not provide any *kernel* mechanism for
transferring an unbounded string. What it does is find a clever way to
avoid having the kernel do so -- this preserves kernel robustness, and
leaves application robustness for the application to decide.

Now: back to your file system example. One of the properties we would
like to have in a file system is knowledge that every operation will
complete (either successfully or by failure) in bounded time. If the
open() call must process an unbounded string, then this cannot be
achieved. Yes, EROS has a mechanism that would allow an unbounded string
to be transferred, but there is no way for the file system to *process*
that string in bounded time.

If the file system does not provide bounded time operations, then NO
application using that file system can be robust, because no application
using that file system has any contract that its open() operations will
*ever* complete in the face of unspecified but permitted usage by other
(unrelated) applications that happen to share that file system.

All you say is true. But in my eyes it is not the system's job to define the maximum size of a path. The system should allow passing strings of unlimited size, and let the filesystems decide what to do with them. Maybe a filesystem truncates the string after 4096 characters; or, if that is not required, it doesn't.

In fact, the issue is even more direct than this. In principle, strings
are bounded by the address space size. Therefore, this is not really a
debate about bounded vs. unbounded. It is a debate about what the bound
should be to ensure the degree of liveness and robustness that we want
in the system.

Speaking only for myself, I suspect that a PATH_MAX of 4096 is enough.
This is because file systems whose paths are longer than this cannot, in
practice, be managed successfully by real human beings, so the limit is
not hit in practice. The only case that I have ever seen where this
limit got hit was in the context of a backup operation on a file system
that was already near the limit. In Coyotos/EROS, this would not have
occurred, because the backup directory would not have been stuck "under"
some other directory.

Finally, let me emphasize a subtle distinction: I am not saying that
there needs to be a bounded number of *components* in the file name. I
am saying that the length of any *one* component (the part between two
adjacent '/' characters) needs to be bounded, and also the number of
components that will be processed by a directory lookup operation in a
single unit of operation must also be bounded. This would not preclude
iterative traversal in order to have a longer effective path name.
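A sketch of what such a component-wise bounded lookup might look like (the bounds and the `lookup_bounded`/`resolve` helpers are hypothetical, not actual Hurd or EROS interfaces). Each call to the server does a bounded amount of work, and the caller iterates to cover an arbitrarily long effective path:

```c
#include <stddef.h>

/* Hypothetical bounds -- illustrative only. */
#define NAME_MAX_LEN   255   /* max bytes in one path component */
#define COMPONENTS_MAX 8     /* max components resolved per lookup call */

/* Resolve up to COMPONENTS_MAX components of `path`, each at most
 * NAME_MAX_LEN bytes.  Returns the number of bytes consumed so the
 * caller can iterate; each individual call completes in bounded time.
 * Returns -1 if a single component exceeds NAME_MAX_LEN. */
int lookup_bounded(const char *path) {
    int consumed = 0, components = 0;
    while (path[consumed] && components < COMPONENTS_MAX) {
        int len = 0;
        while (path[consumed + len] && path[consumed + len] != '/')
            len++;
        if (len > NAME_MAX_LEN)
            return -1;              /* over-long component: reject */
        /* ... a real server would look up this one component here ... */
        consumed += len;
        if (path[consumed] == '/')
            consumed++;             /* skip the separator */
        components++;
    }
    return consumed;
}

/* Caller side: iterate until the whole path is consumed.  The total
 * number of components is unbounded, but no single server operation is. */
int resolve(const char *path) {
    int off = 0;
    while (path[off]) {
        int n = lookup_bounded(path + off);
        if (n <= 0)
            return -1;
        off += n;
    }
    return 0;
}
```

The point of the split is the one shap makes above: the server's contract is per-call ("at most COMPONENTS_MAX components, each at most NAME_MAX_LEN bytes"), so its operations complete in bounded time, while the client remains free to traverse paths of any depth.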

shap

--
-ness-



