Re: What shall the filter do to bottommost translators


From: olafBuddenhagen@gmx.net
Subject: Re: What shall the filter do to bottommost translators
Date: Thu, 29 Jan 2009 08:19:52 +0100
User-agent: Mutt/1.5.18 (2008-05-17)

Hi,

On Tue, Jan 13, 2009 at 10:55:05PM +0200, Sergiu Ivanov wrote:
> On Fri, Jan 9, 2009 at 9:01 AM, <olafBuddenhagen@gmx.net> wrote:
> >  On Wed, Dec 31, 2008 at 02:42:21PM +0200, Sergiu Ivanov wrote:
> > > On Mon, Dec 29, 2008 at 8:25 AM, <olafBuddenhagen@gmx.net> wrote:

> > > > The most radical approach would be to actually start a new nsmux
> > > > instance for each filesystem in the mirrored tree. This might in
> > > > fact be easiest to implement, though I'm not sure about other
> > > > consequences... What do you think? Do you see how it could work?
> > > > Do you consider it a good idea?
> > > >
> > > I'm not really sure about the details of such an implementation, but
> > > when I consider the recursive magic stuff, I'm rather inclined to
> > > come to the conclusion that this will be way too much...
> >
> > Too much what?...
> >
> 
> Too many instances of nsmux, too many context switches, too many
> resources consumed for things that could possibly be implemented to
> work faster with less effort -- that is what I meant.

I must admit that I don't see how recursive magic is relevant in this
context... But perhaps that's because I'm mentally blocking any serious
considerations concerning recursive magic for now :-)

> However, now that I've read your mail, only the relatively large
> number of processes troubles me.

Not so large actually. For each translator in the original tree, we get
one nsmux instance.

> > The root filesystem will look up "a/b", and see that "b" is
> > translated. It will obtain the root of the translator, yielding the
> > translated node "b'", and return that as the retry port, along with
> > "c" as the retry name. (And RETRY_REAUTH I think.)
> 
> [...] As for the retry type, I think we have agreed before in this
> thread that RETRY_REAUTH is returned only when a ``..'' is requested
> on the root of the translator. That is, in this case RETRY_NORMAL will
> occur.

Ah, right. And it confuses me mightily...
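
(To pin down the part that *is* clear -- the FS_RETRY_NORMAL case -- the
retry protocol as seen from the client side looks roughly like this; a
minimal sketch modeled on the lookup retry loop in glibc, with error
handling trimmed, FS_RETRY_REAUTH/FS_RETRY_MAGICAL left out, and
lookup_retry a made-up name:)

    #include <hurd.h>
    #include <hurd/fs.h>
    #include <mach.h>

    /* Resolve NAME relative to DIR, following FS_RETRY_NORMAL retries
       across translator boundaries.  */
    static error_t
    lookup_retry (file_t dir, const char *name, int flags,
                  mach_port_t *result)
    {
      retry_type do_retry;
      string_t retry_name;
      error_t err;

      err = dir_lookup (dir, (char *) name, flags, 0,
                        &do_retry, retry_name, result);
      while (!err && do_retry == FS_RETRY_NORMAL && retry_name[0] != '\0')
        {
          /* The server handed back the root of the next translator in
             *RESULT; continue the lookup there with the remaining path
             components in RETRY_NAME.  */
          file_t newdir = *result;
          err = dir_lookup (newdir, retry_name, flags, 0,
                            &do_retry, retry_name, result);
          mach_port_deallocate (mach_task_self (), newdir);
        }
      return err;
    }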

> > Now what about the control ports? The client could do a lookup for
> > "/n/a/b" with O_NOTRANS for example, and then invoke
> > file_get_translator_cntl() to get the control port of the translator
> > sitting on "b". nsmux in this case forwards the request to the
> > original "/a/b" node, but doesn't pass the result to the client
> > directly. Instead, it creates a new port: a proxy control port to
> > be precise. The real control port is stored in the port structure,
> > and the proxy port is passed to the client. Any invocations the
> > client does on this proxy control port are forwarded to the real one
> > as appropriate.
> 
> An implementation of this functionality that would require the least
> effort would mean that when the client does
> file_get_translator_cntl(), nsmux will create a new instance of
> libnetfs's struct node, create a port to it and give this port to the
> client. Of course, this instance of struct node need not contain all
> the fields required in other instances which are meant to mirror
> *filesystem* nodes. In this way, the existence of the special instance
> of struct node I'm talking about would by no means violate any
> concepts (it seems to me).
> 
> As I've already said, I'm strongly inclined to perceive a libnetfs
> node in a more general meaning than a filesystem node.

If there are fields in the node structure that only make sense for
actual filesystem nodes, isn't that really a pretty strong indication
that the netfs nodes are *not* a generic concept?...

I really wonder why you are so set on misusing them. Aside from the
confusion this would create -- is there actually anything it would buy
us over a plain port structure (with the required data in the hook)? Is
there any kind of handling we need that netfs does on the netfs nodes
and we would need to do manually otherwise? Are you sure it wouldn't
actually cause *more* trouble, having to take care that the pseudo-nodes
are *not* handled as real FS nodes?...
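
(To make the comparison concrete: with plain libports objects, a proxy
control port could be little more than a port structure carrying the
real control port in its hook data. A minimal sketch; proxy_cntl,
proxy_cntl_class and make_proxy_cntl are made-up names:)

    #include <hurd/ports.h>
    #include <hurd/hurd_types.h>

    struct proxy_cntl
    {
      struct port_info pi;   /* standard libports bookkeeping */
      fsys_t real_cntl;      /* control port of the proxied translator */
    };

    /* Initialized once, e.g. proxy_cntl_class = ports_create_class (0, 0); */
    static struct port_class *proxy_cntl_class;

    /* Wrap the real control port REAL in a fresh proxy control port,
       returning a send right suitable for handing to the client.  */
    static error_t
    make_proxy_cntl (struct port_bucket *bucket, fsys_t real,
                     mach_port_t *cntl)
    {
      struct proxy_cntl *p;
      error_t err = ports_create_port (proxy_cntl_class, bucket,
                                       sizeof *p, &p);
      if (err)
        return err;
      p->real_cntl = real;
      *cntl = ports_get_send_right (&p->pi);
      ports_port_deref (p);
      return 0;
    }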

> > With the "distributed" nsmux, things would work a bit differently.
> > Again there is a lookup for "/n/a/b/c", i.e. "a/b/c" on the proxy
> > filesystem provided by nsmux; again it is forwarded to the root
> > filesystem, and results in "a/b'" being returned along with a retry
> > notification. The distributed nsmux now creates a proxy node of
> > "a/b" (note: the untranslated b). It starts another nsmux instance,
> > mirroring "/a/b'", and attaches this to the "a/b" proxy node.
> >
> > Again, the client will finish the lookup. (By doing the retry on the
> > new nsmux instance.)
> 
> Aha, sounds great! I didn't even have an inkling of such a possibility :-)
> Very beautiful idea!

It's rather ironic that you take to the idea just when I discovered that
it's flawed and pretty much gave up on it...

> > Now unfortunately I realized, while thinking about this explanation,
> > that returning the real control port of nsmux, while beautiful in
> > its simplicity, isn't really useful... If someone invoked
> > fsys_goaway() on this port for example, it would make the nsmux
> > instance go away, instead of the actual translator on the mirrored
> > tree.
> >
> > And we can't simply forward requests on the nsmux control port to
> > the mirrored filesystem: The main nsmux instance must handle both
> > requests on its real control port itself (e.g. when someone does
> > "settrans -ga /n"), and forward requests from clients that did
> > fsys_getcontrol on one of the proxied nodes to the mirrored
> > filesystem. So we can't do without passing some kind of proxy
> > control port to the clients, rather than the nsmux control port.
> 
> Hm... I'm thinking of the following thing: can we make nsmux behave
> differently in different contexts? Namely, normally requests like
> fsys_goaway should go to the translator in the real filesystem,
> however, at some point, nsmux will treat them as directed to itself.
> 
> One possibility to implement this would be adding a special command
> line option which would tell nsmux to forward RPCs to the real
> translator. Note that there are also runtime options available for a
> translator (and these options are classically the same as simple
> command line options) and we can modify them via
> fsys_{get,set}_options. When a new instance of nsmux is started by an
> already-existing instance, it will be started with this special
> command line switch. All meaningful RPCs will be forwarded to the
> translator in the real tree. When the parent instance of nsmux would
> want to shut down a child, it would just reset the special option in
> the child (fsys_set_options) and do an fsys_goaway on the child's
> control port.
> 
> OTOH, when a *user* (not the parent instance of nsmux) would like to
> shut down a child instance (which, BTW, may not be desirable, what do
> you think?), they can use the fsysopts command to remove the
> corresponding option and then do settrans -ga, for instance, to
> achieve the shutdown.

You are right that the child nsmux instances probably never need to be
addressed by the user directly; they can be totally transparent. The
main instance is different, though.

Your suggestion actually wouldn't work: fsys_set_options after all is
also just an RPC on the control port, and would have to be forwarded to
the original translator...

Certainly we could come up with some other mechanism to do the switching
-- but please, for $Deity's sake, don't.

It would mean terrible usability; it would introduce some state that
would break tools not aware of it; and if the state is global (which it
would be in the variant you suggested), it would introduce race
conditions as well. Quite frankly, the whole idea is an immensely ugly
hack.

Moreover, I don't even see why we would want to do something like that.
After all, this effectively does introduce the concept of proxy control
ports and real control ports for the proxy server -- only they are
multiplexed over a single port...

Just expose these ports explicitly -- no ugliness, and actually simpler
to implement.
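
(Concretely, exposing them could look about like this -- a sketch only,
reusing the hypothetical proxy_cntl structure from above; the stub
signature is simplified, as the real MiG presentation of fsys_goaway
also carries reply-port arguments, and nsmux_bucket is assumed:)

    /* fsys_goaway arriving on some control port: if it is one of our
       proxy control ports, forward the request to the real translator
       in the mirrored tree; otherwise it addresses nsmux itself.  */
    kern_return_t
    S_fsys_goaway (mach_port_t port, int flags)
    {
      struct proxy_cntl *p =
        ports_lookup_port (nsmux_bucket, port, proxy_cntl_class);
      if (p != NULL)
        {
          error_t err = fsys_goaway (p->real_cntl, flags);
          ports_port_deref (p);
          return err;
        }
      /* Our own control port, e.g. from "settrans -ga /n".  */
      return netfs_shutdown (flags);
    }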

So again the same conclusion: We do need the proxy control ports after
all -- although an attempt to avoid them was precisely what made me come
up with this distributed design in the first place...

Having said that, the idea is not entirely without merit. Effectively it
means that when traversing a translator stack, instead of the single
nsmux instance creating several proxy nodes for the same filesystem
location at different translation levels, we get only one proxy node per
location in the main nsmux, and the other levels are handled by other
nsmux instances. I must admit to a certain elegance in this... But I'm
not quite sure whether it would indeed make things clearer and simpler,
rather than more complicated.

I suggest proceeding as with the shadow nodes: It could be beneficial
to think of them as distinct translators conceptually, but until we
fully understand the implications, better leave the actual
implementation as it is...

> It doesn't at all look as if something in authentication is unclear to
> you ;-)

Eh? Look a few paragraphs above, or below:

> > This process needs to be done for each filesystem server the client
> > contacts. This is why a reauthentication needs to be done after
> > crossing a translator boundary -- or at least that is my
> > understanding of it. The retry port returned to the client when a
> > translator is encountered during lookup, is obtained by the server
> > containing the node on which the translator sits, and obviously
> > can't have the client's authentication; the client has to
> > authenticate to the new server itself.
> 
> Hm... Again, the problem about RETRY_REAUTH, which seems to happen
> only when looking up ``..''...
[...]
> It seems to me, anyways, that in the standard implementation of
> netfs_S_dir_lookup RETRY_REAUTH happens when looking up ``..'' on the
> root node of the filesystem.

... which is totally beyond me.

> Do you think we have to abandon this tactic and make nsmux do a
> RETRY_REAUTH any time it encounters (or starts) a translator?

No, I don't think so. If it's not required in the normal case (again,
beyond me), I guess it's not required in the proxy case either.

Truth be told, I have no clue about the implications of the auth
mechanism on proxying...
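
(The handshake itself, for reference: a sketch of what a client does on
FS_RETRY_REAUTH, modeled on the handling in glibc's lookup code, with
error checks trimmed; PORT is the retry port just obtained:)

    #include <hurd.h>
    #include <hurd/io.h>
    #include <hurd/auth.h>
    #include <mach.h>

    mach_port_t ref, newport;
    auth_t auth = getauth ();

    /* Create a rendezvous port...  */
    mach_port_allocate (mach_task_self (), MACH_PORT_RIGHT_RECEIVE, &ref);

    /* ...ask the new filesystem server to meet our auth server there...  */
    io_reauthenticate (port, ref, MACH_MSG_TYPE_MAKE_SEND);

    /* ...and have the auth server prove our identity to it, yielding a
       port that carries our credentials on the new server.  */
    auth_user_authenticate (auth, ref, MACH_MSG_TYPE_MAKE_SEND, &newport);

    mach_port_destroy (mach_task_self (), ref);
    mach_port_deallocate (mach_task_self (), port);
    port = newport;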

> > BTW, I just realized there is one area we haven't considered at all
> > so far: Who should be able to start dynamic translators, and as
> > which user should they run?...
> 
> nsmux starts dynamic translators using fshelp_start_translator, which
> invokes fshelp_start_translator_long. This latter function creates a
> new task and makes it a *child* of the process in which it runs.
> Therefore, dynamic translators are children of nsmux and can do
> anything the user who starts nsmux can do.

Yeah, I'm aware of that.
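
(For reference, the mechanics in condensed form -- a sketch of driving
fshelp_start_translator; open_shadow and struct shadow_node are made-up
names, and the argz vector is assumed to be prepared already:)

    #include <hurd/fshelp.h>

    /* Callback: when the freshly created translator task asks for its
       underlying node, hand it a send right to the (shadow) node it is
       to sit on.  COOKIE identifies that node.  */
    static error_t
    open_shadow (int flags, mach_port_t *underlying,
                 mach_msg_type_name_t *underlying_type,
                 task_t task, void *cookie)
    {
      struct shadow_node *shadow = cookie;
      *underlying = ports_get_send_right (&shadow->pi);
      *underlying_type = MACH_MSG_TYPE_MOVE_SEND;
      return 0;
    }

      /* ARGZ holds the dynamic translator's command line, as given in
         the magic filename. The new task is a child of nsmux, so it
         runs with nsmux's user ids.  */
      fsys_t control;
      error_t err = fshelp_start_translator (open_shadow, shadow,
                                             argz, argz, argz_len,
                                             60000 /* ms */, &control);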

> Is this OK? Or are you thinking of the possibility that nsmux be
> started at startup with root privileges?..

I think it's better to look at it the other way round: First decide what
the *desired* behaviour is, and only then consider how to achieve it...

But that's a completely new discussion, and I'd rather clear up the
other points first...

> > And it is actually not entirely correct: Let's say we use a filter
> > to skip "b" (and the rest of the stack on top of it). The filter
> > would traverse the stack through all this shadow machinery, and
> > would end up with a port to the root node of "a" that is still
> > proxied by the various shadow nodes. The filter should return a
> > "clean" node, without any shadowing on top of it; but the shadow
> > nodes don't know when the filter is done and passes the result to
> > the client, so they can't put themselves out of the loop...
> 
> While this may be a bit off-topic, I would like to ask you what the
> issue with correctness is? I can see only the sophisticated shadow
> machinery in its complicated action, but everything seems all right
> about the way it handles information: nothing gets lost.

Think about what happens when the filter is done, and returns the
resulting port to the client. It skipped a couple of translators off the
top -- returning a port to some node in the middle of the stack. The
effect should be exactly as if someone started at the bottom of the
translator stack, followed the first couple of retries, but then
suddenly stopped.

If there are any static translators present on this node, they could be
traversed normally in further requests. Dynamic translators however
should disappear: They exist only to their clients. Once we looked up
something outside of the nodes they provide -- which we did with the
filter -- we can't get back from there. We are no longer a client.

But that's not what happens in the scenario I described. The filter
never gets a port to the real middle node. What it actually gets is a
node proxied by all the shadow nodes in the stack. It is proxied by
them, because the filter would have to see the dynamic translators if it
continues the traversal. But once the filter is done, the client should get
a real port to the middle node, with no shadow nodes in between. (Of
course the middle node is still mirrored by a "normal" nsmux proxy node
if it is a directory, so that further magical lookups are possible; but
the rest of the proxy stack should be gone.)

Most of the time the still active shadow nodes would do no harm, except
for killing performance. If another filter is applied through a magic
lookup, this would attach a new shadow node right at the middle node;
the traversal would be diverted right there, before ever reaching the
offending old shadow translators.

However, if someone does further translator lookups by some other means,
the whole stack would be visible; and as soon as the lookup arrives at
the nodes where the old shadows sit, they would cry "here", and happily
report the dynamic translators which should no longer be visible.

Admittedly that is a quite specific case. The performance issue is
probably more relevant in practice.

Unfortunately I have now realized that traversing top to bottom only
partially solves this problem; we will have to address it explicitly
after all... [sigh]

> > It seems that my initial intuition about traversing top to bottom
> > being simpler/more elegant, now proves to have been merited...
[...]
> > Note that unlike what I originally suggested, this probably doesn't
> > even strictly need a new RPC implemented by all the translators, to
> > ask for the underlying node directly: As nsmux inserts proxy nodes
> > at each level of the translator stack, nsmux should always be able
> > to provide the underlying node, without the help of the actual
> > translator...
[...]
> Yes, this is right, nsmux can do that. However, how do you suggest the
> top-to-bottom traversal should take place (in terms of RPCs)? Shall we
> reuse some existing RPCs?..

Not sure. Any suggestions? :-)

> Hm, and another (quite important) question: what do we do with static
> translator stacks? There are no shadow nodes between static
> translators, so nsmux has no chance of traversing the static stack in
> top-to-bottom fashion...

Seems my explanation wasn't totally clear after all...

It's not the shadow nodes that inform about the next-lower node in the
stack. The shadow nodes only divert queries regarding what translator
sits on the node they shadow; all other RPCs are forwarded to the actual
shadowed node, which is a normal nsmux proxy node. This one handles the
requests for the next-lower node.

For static translators, there is no shadow, but there is still the proxy
node that can handle this request.
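
(In code, the dispatch rule might look about like this -- an
illustrative sketch with a simplified stub signature; dyn_trans_cntl
and shadowed are made-up field names:)

    /* The one class of RPCs a shadow node diverts: queries about the
       translator sitting on the node it shadows.  */
    kern_return_t
    S_file_get_translator_cntl (struct shadow_node *shadow,
                                mach_port_t *cntl)
    {
      if (shadow->dyn_trans_cntl != MACH_PORT_NULL)
        {
          /* A dynamic translator sits here: report it.  */
          *cntl = shadow->dyn_trans_cntl;
          return 0;
        }
      /* No dynamic translator: the shadowed node (a normal nsmux proxy
         node) answers, so static translators -- and requests for the
         next-lower node -- are handled there as usual.  */
      return file_get_translator_cntl (shadow->shadowed, cntl);
    }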

> As for the questions about the bottom-to-top approach I'm very
> interested to know the following: did you manage to design all this
> shadow-proxy machinery just in your mind, or did you put something
> down on paper? :-) I'm asking because I had to redraw the same graph
> about five times before I got the idea :-)

Indeed I designed it all in my mind (I have a very good visual
imagination); but it really took me quite long, and some bits only
became clear while writing it up.

I'm totally aware though that this kind of stuff is very hard to
communicate without a drawing -- which is why I suggested starting to
make drawings quite a while back...

Unfortunately making drawings that make sense when studied
asynchronously is so damn hard :-( It's so much simpler to make a
drawing in real time, and explain while going along... The great
advantage of a face-to-face meeting.

> Ah, and another thing: as far as I could notice from your
> descriptions, the role of proxy nodes is rather humble, so my question
> is: what is the reason for keeping them at all? I'd rather not include
> proxy nodes in between dynamic translators. When a client invokes an
> RPC on its port to a proxy node, nsmux knows (obviously) to which
> shadow node the proxy node is connected, so nsmux can ignore the proxy
> node in the matter of setting translators. My idea is to keep proxy
> nodes as a kind of ``handle'' which is meant only to be seen by the
> client and shall not be employed in the internal work.

Not sure what you are trying to achieve here. You must be aware that we
need both the shadows and the normal proxies as explicit nodes, as ports
for both are handed out to external processes. (The normal proxies are
handed out to clients doing a lookup, and the shadows are handed to the
dynamic translators attached to them.)

Of course in the monolithic implementation we are free to organize the
data internally as we like -- but I don't see any point in anything
other than storing all data right with the respective nodes it belongs
to, shadow or otherwise...

> > PS. Your mails contain both plain text and HTML versions of the
> > content. This unnecessarily bloats the messages, and is generally
> > not seen favourably on mailing lists. Please change the
> > configuration of your mail client to send plain text only.
> 
> I'm sorry, it wasn't intended. I'm using a web interface for mail
> management, and when I recently tried to switch browsers, the change
> has influenced my messages. However, I've come back to my usual
> Firefox, so no more things like that should occur.
> 
> Actually, I was not even aware that the other browser was putting
> garbage in my mail :-(

It's not the browser. The browser just submits a text input field. It's
the mail client that generates the mail (with the pointless HTML part)
from that -- a web mail client isn't any different from a normal mail
client running on the user's PC in that regard.

There must be some configuration option to disable the HTML.

(Yes, this last mail has the HTML part, just like the others.)

-antrik-



