Re: Fixing grub-probe to work with user-space part stores

From: olafBuddenhagen
Subject: Re: Fixing grub-probe to work with user-space part stores
Date: Tue, 27 Jul 2010 20:22:14 +0200
User-agent: Mutt/1.5.20 (2009-06-14)


On Wed, Jul 14, 2010 at 07:51:07PM +0200, Carl Fredrik Hammar wrote:
> On Tue, Jul 13, 2010 at 08:00:24PM +0200, Jérémie Koenig wrote:

> > The first one would be to make part.c embed extra information in its
> > stores and their encoded form (possibly in the "misc" field of a
> > remap store).
> > It would also change the existing principle by which stores and
> > storage_info provide a way to access the underlying storage in the
> > most direct way possible, as opposed to encoding the history of how
> > the store was constructed.
> It could also be a good idea to have a proper part store instead of
> just remapping the device store.  Then, you could just look at the
> name of the part store to get the partition number.

This wouldn't remove the philosophy break though -- on the contrary, it
would make it even more pronounced...

> The only way this could work reliably is if we introduce a new RPC
> file_get_underlying_storage_info(), which returns the backing store
> used by the filesystem.  (I'm not sure if this should be in fs.defs or
> fsys.defs though.)  This would be a hurdish analogy to the st_dev that
> is returned by stat(), except that it returns a store instead of a device
> number.  IIRC grub-probe on other systems matches st_dev with the
> st_rdev of device files, but this doesn't work on Hurd because having
> stable device numbers in a distributed system is actually quite hard.

I also considered this option. It doesn't really solve the problem
though... It would remove the ambiguity of specific translator
parameters -- but at least for all the existing FS translators, these
are rather straightforward anyways. The real problem is the ambiguity of
store representations. Kernel partitions; part stores on kernel devices;
and userspace drivers -- they can all refer to the same partition in
different ways. And when using additional translation stores (remap
etc.), then even within the same device scheme it can happen that
different representations refer to the same partition. It would be quite
complex to cover even all the standard cases; and each new store type
would require extra handling.

> > The second one, which I favor and am working on so far, would be to
> > enumerate the _grub_ devices, and use get_storage_info() on them
> > too. We would compare the result with that of the file being looked
> > up, and use the grub-detected partition information to determine on
> > which _grub_ device the file resides. The result would be converted
> > to a system device node path by using the device.map information.
> This is a bit hacky but could be a good intermediate solution if you're
> not comfortable with adding a new RPC that isn't strongly related to
> your project.

I'm not convinced so far that this is indeed more hacky than the other
approach mentioned above...

GRUB effectively needs a three-way mapping between filesystems, device
node names, and GRUB device names. The mapping between GRUB devices and
the /dev/hd* device node names is almost trivial. (Assuming that the
device nodes follow a standard naming scheme.) The tricky part is
mapping filesystems either to GRUB devices *or* to device node names.

Mapping directly to device node names requires a full understanding of
the store information obtained from the filesystem (whether extracted
from command line parameters, or with a special RPC) -- for all possible
device schemes. Finding out the GRUB devices (using GRUB's own partition
handling code) OTOH requires only a basic understanding of the various
device schemes; while the blanks are filled in by a probing mechanism,
which can be made pretty robust I believe. My current impression is that
it's actually the more flexible solution...

