Re: patch for various things


From: Marcus Brinkmann
Subject: Re: patch for various things
Date: Wed, 02 Jun 2004 03:44:00 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Tue, 1 Jun 2004 01:36:56 +0200,
Bas Wijnen wrote:
> Here I'd expect a "+ 1" is needed, because it should contain the trailing
> '\0'.  However, I'm not sure since it can be compensated.  If this is the
> case, there should be a comment about this.

That might very well be the case; I have to check myself (too lazy to do it now :)
How about reading the code of add_unused_area and following how it is used?
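Roughly, the concern looks like this (just a sketch to illustrate the
off-by-one, not the actual wortel code -- add_unused_area here is a
hypothetical stand-in):

  #include <string.h>

  typedef unsigned long l4_word_t;

  /* Hypothetical stand-in for wortel's add_unused_area: records the
     byte range [start, start + size) as free for later reuse.  */
  static void
  add_unused_area (l4_word_t start, l4_word_t size)
  {
  }

  static void
  release_module_string (char *string)
  {
    /* "foo" occupies four bytes ('f', 'o', 'o', '\0'), but strlen
       returns 3.  Passing strlen alone would leave the '\0' byte
       outside the recorded area -- unless add_unused_area itself
       compensates somewhere, which is exactly what needs checking.  */
    add_unused_area ((l4_word_t) string, strlen (string) + 1);
  }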

> Index: wortel/ia32-cmain.c
> ===================================================================
> RCS file: /cvsroot/hurd/hurd-l4/wortel/ia32-cmain.c,v
> retrieving revision 1.11
> diff -u -p -r1.11 ia32-cmain.c
> --- wortel/ia32-cmain.c       26 Apr 2004 21:15:21 -0000      1.11
> +++ wortel/ia32-cmain.c       31 May 2004 23:21:22 -0000
> @@ -176,7 +176,7 @@ find_components (void)
>        /* Add the argument string of the first module to the list of
>        unused pages.  */
>        add_unused_area ((l4_word_t) mod[0].string,
> -                    strlen ((char *) mod[0].string));
> +                    strlen ((char *) mod[0].string) + 1);
>  
>        mods_count = mbi->mods_count - 1;
>        if (mods_count > MOD_NUMBER)
> 
> Here I'd think a non-numerical error message is better, just like many others
> in the same file.  There are also quite some which use "(l4_error_code >> 1) &
> 7", which makes no sense at all to me, because I didn't check the error codes.
> Is there a reason why those are also not printed with l4_strerror?

In fact, this is not very clean yet.  Sometimes the error code is just
a plain error code, but often it is actually an IPC error, and then
the error code number has a special meaning (see the specs).  In this
case, it is an IPC error.  If there is an IPC error, I often do the
bit manipulation to get at the right flags (again, see specs), but in
some cases I am lazy and just dump all flags and leave it at that.
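Concretely, the kind of decoding I mean looks roughly like this (bit
layout as described in the L4 X.2 reference manual; the helper is just
for illustration, it is not in wortel):

  #include <stdio.h>

  typedef unsigned long l4_word_t;

  /* Illustration only.  For a failed IPC, the ErrorCode TCR packs the
     details into its low bits: bit 0 says whether the send or the
     receive phase failed, bits 1-3 hold the actual error number, and
     the remaining bits carry extra, error-specific information (for
     example an offset on message overflow).  That is where the
     "(l4_error_code () >> 1) & 7" comes from.  */
  static void
  print_ipc_error (l4_word_t err)       /* err = l4_error_code ().  */
  {
    l4_word_t phase = err & 1;          /* 0: send phase, 1: receive phase.  */
    l4_word_t code = (err >> 1) & 7;    /* Error number 1-7 (see the specs).  */
    l4_word_t extra = err >> 4;         /* Additional error-specific info.  */

    printf ("IPC failed in %s phase, error %lu (extra info %lu)\n",
            phase ? "receive" : "send", code, extra);
  }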
 
> Index: wortel/wortel.c
> ===================================================================
> RCS file: /cvsroot/hurd/hurd-l4/wortel/wortel.c,v
> retrieving revision 1.29
> diff -u -p -r1.29 wortel.c
> --- wortel/wortel.c   26 Apr 2004 21:15:21 -0000      1.29
> +++ wortel/wortel.c   31 May 2004 23:21:32 -0000
> @@ -331,8 +331,8 @@ start_physmem (void)
>  
>    ret = l4_thread_start (physmem, 0, mods[MOD_PHYSMEM].ip);
>    if (!ret)
> -    panic ("Sending startup message to physmem thread failed: %u",
> -        l4_error_code ());
> +    panic ("Sending startup message to physmem thread failed: %s",
> +        l4_strerror (l4_error_code ()));
>  
>    /* Set up the extra threads for physmem.  FIXME: UTCB location has
>       the same issues as described above.  */
> 
> 
> This panic seems to be copy-pasted from serve_bootstrap_requests.  In
> serve_requests a MSG_SHUTDOWN would not be an error I'd think, and even if it
> is, the message shouldn't be that the bootstrap failed.

Actually, serve_requests is unused and unmaintained old crap that I
didn't bother to remove.  It hints at the idea that after bootstrap no
bootstrap RPCs will be parsed anymore.  However, this idea is probably
wrong ;)  (If you think of wortel as a sleazy manager OS).
 
> @@ -1096,7 +1096,7 @@ serve_requests (void)
>         continue;
>       }
>        else if (label == WORTEL_MSG_SHUTDOWN)
> -     panic ("Bootstrap failed");
> +     shutdown ();
>        else
>       panic ("Invalid message with tag 0x%x", msg_tag);
>      }
> 
> The complete patch (with apart from the above some textual fixes) is attached.

As for ChangeLog entries, only a very simple entry is needed for the
manual ("...Various grammar fixes..." etc).

> -The thread ID of $\sigma_0$ is (\verb/UserBase, 1)/.
> +The thread ID of $\sigma_0$ is (\verb/UserBase, 1/).

Yes, and all those like it.

> -\item The exception handler set to \verb/nilthread/.
> +\item The exception handler is set to \verb/nilthread/.

Yes.

> -application ask the mapper (another application which has mapped
> -this memory region the first application) to resolve the mapping.
> -This can be done recursively.  Normally, this resolving of mapping
> -can be speed up using a cache services, since a small number of
> +application asks the mapper (another application which has mapped
> +this memory region to the first application) to resolve the mapping.
> +This can be done recursively.  Normally, this resolving of a mapping
> +can be sped up using a cache service, since a small number of

Sounds about right (without looking at context).

>  zero copying if there is only one virtual driver.  When there is
> -more than one virtual driver pages have to copied for all other
> +more than one virtual driver pages have to be copied for all other

Still sounds awkward ("is more than one pages"?).

> -  a needed for bootstrapping configuration can be given as argument on
> +  needed for bootstrapping, the configuration can be given as arguments on

Yes.

> -  assotiatet plugin manager.
> +  associated plugin manager.

Yes.

> -  devices on the bus can be accessed by doing a memory accesses or by
> +  devices on the bus can be accessed by memory access or by

Yes.

> -  the platform specific details of accessing the bus.  
> +  the platform specific details of accessing the bus.

Uhm, ok.

>    It might be that not all drivers are found during bootstrapping and
> -  hence later on drivers could be loaded.  This is done by regenerate
> -  new attach notification sending to bus's plugin manager.  The plugin
> -  manager loads then if possible a new driver.  A probe funtion is not
> +  hence later on drivers could be loaded.  This is done by generating
> +  new attach notifications, which are sent to the bus's plugin manager.
> +  The plugin manager then loads a new driver, if possible.
> +  A probe funtion is not

Yes.

>    needed since all supported hardware can be identified by
> -  vendor/device identifactions (unlike ISA hardware).  For hardware
> -  busses which don't support such identifaction only static
> +  vendor/device identification (unlike ISA hardware).  For hardware
> +  busses which don't support such identification only static

Yes.

> -\item detach\_irq : detach an ISR thread form the IRQ
> +\item detach\_irq : detach an ISR thread from the IRQ

Of course.

> -controllers.  Another import fact is that on PC architecture the order
> +controllers.  Another import fact is that on the PC architecture the order

Yes.

>  The device driver framework will be started by deva, which is started
> -by wortel.  All drivers and server (e.g. the plugin manager) are
> +by wortel.  All drivers and the server (e.g. the plugin manager) are
>  stored in a archive which will be extracted by deva.

Yes.

> -A Plugin manager handles driver loading for devices.  It ask for drivers
> -deva.  
> +A Plugin manager handles driver loading for devices.  It asks deva for
> +drivers.  

Yes.

> -If a simple hardware device is found the ddf will load an driver for
> +If a simple hardware device is found the ddf will load a driver for

Yes.

> -\item The PCI Bus Driver detects a hardware device for which now driver
> -has been loaded yet.  It generates an insert event which is sends to
> +\item The PCI Bus Driver detects a hardware device for which no driver
> +has been loaded yet.  It generates an insert event which it sends to

Yes.

>  \item The Root Bus Driver receives the event signal.  Note it is not
> -necessary that the Root Bus Driver handles for all drivers the insert
> -signal.  It forwards the signal to the/a Plugin Manager (PLM).
> +necessary that the Root Bus Driver handles the insert signal for all drivers.
> +It forwards the signal to the/a Plugin Manager (PLM).

Yes.

> -data is return in a container and only the handle of the container is
> +data is returned in a container and only the handle of the container is

Yes.

> -address space.  The bootstrap thread starts to run an generated a
> +address space.  The bootstrap thread starts to run and generates a

Yes.

> -Deva as new plugin manager.  Deva will send all signal/messages from
> +Deva as new plugin manager.  Deva will send all signals/messages from

Yes.

> -hold by a client.  It was previously granted to the client by the
> +held by a client.  It was previously granted to the client by the

Yes.

> -size and constructor/desctructor callbacks), and a demuxer for
> +size and constructor/destructor callbacks), and a demuxer for

Yes.

> -     task, we bump up the task_id number up whenever a new task is
> +     task, we bump up the task_id number whenever a new task is

Yes.

Thanks!
Marcus



