bug-hurd

Diagram of Hurd I/O, read


From: Thomas Schwinge
Subject: Diagram of Hurd I/O, read
Date: Thu, 22 Sep 2011 12:51:55 +0200
User-agent: Notmuch/0.7-57-g64222ef (http://notmuchmail.org) Emacs/23.3.1 (i486-pc-linux-gnu)

Hi!

Last night (while other Europeans were sleeping) ;-), Sergio, Olaf, and
Marcus were discussing how Hurd I/O, read in particular, is spread over the
various components, and wishing there were a diagram of it.

And I have the answer, and it's the same one as so often: there already is
one, and it's even already in the web pages, hurd/io_path.mdwn.

When I was looking at it in August 2008 (uhm, three years ago already...), I
found it lacking a bit (if I remember correctly), and began working on
fixing that.  My preliminary patch is attached; it has been waiting for
continuation/completion ever since August 2008.  Please, if someone goes
through all the code anyway, take my patch as a basis and make any further
changes on top of it (if that makes sense).
(<http://www.gnu.org/software/hurd/contributing/web_pages.html> has
instructions on how to check out and locally build the web pages.  If you
don't want to bother with the Markdown markup syntax: no problem, I can
easily fix that up for you afterwards.)

The current version of the text is here (but does not yet include my
patch): <http://www.gnu.org/software/hurd/hurd/io_path.html>


Grüße,
 Thomas


From 5a7f0df43751e45c1219fa1213f9abc69eb918cf Mon Sep 17 00:00:00 2001
From: Thomas Schwinge <tschwinge@gnu.org>
Date: Wed, 20 Aug 2008 16:35:44 +0200
Subject: [PATCH] hurd/io_path: Rework completely.

---
 hurd/io_path.mdwn |   71 ++++++++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 62 insertions(+), 9 deletions(-)

diff --git a/hurd/io_path.mdwn b/hurd/io_path.mdwn
index 96e6aa5..b55dc20 100644
--- a/hurd/io_path.mdwn
+++ b/hurd/io_path.mdwn
@@ -10,18 +10,71 @@ is included in the section entitled
 
 # read
 
-  * [[glibc]]'s `read` is in `glibc/sysdeps/mach/hurd/read.c:__libc_read`.
+[[glibc]]'s `read` is in `glibc/sysdeps/mach/hurd/read.c:__libc_read`.
 
-  * That calls `glibc/hurd/fd-read.c:_hurd_fd_read()`.
+A buffer (and its size) to store the to-be-read data in is supplied by the
+caller of `read`.
 
-  * That calls `__io_read`, which is an [[RPC]], i.e., that actually results
-    into the [[translator/ext2fs]] server calling
-    `hurd/libdiskfs/io-read.c:diskfs_S_io_read`.
+> `__libc_read` calls `glibc/hurd/fd-read.c:_hurd_fd_read`.
 
-  * That calls `_diskfs_rdwr_internal`, which calls
-    `hurd/libpager/pager-memcpy.c:pager_memcpy`, which usually basically just
-    tell the kernel to virtually project the memory object corresponding to the
-    file in the caller process's memory.  No read is actually done.
+>> `_hurd_fd_read` calls `__io_read`, which is an [[RPC]]:
+>> `hurd/hurd/io.defs:io_read`.
+
+>>> Enter user-side RPC stub `glibc.obj/hurd/RPC_io_read.c:__io_read`.  Process
+>>> stuff, switch to kernel, etc.
+
+(For example) [[translator/hello]] server, [[libtrivfs]]-based.  Enter
+server-side RPC stub `hurd.obj/libtrivfs/ioServer.c:_Xio_read`.  Process stuff,
+call `hurd/trans/hello.c:trivfs_S_io_read`.
+
+A 2048 byte buffer is provided.
+
+> `trivfs_S_io_read`.  Depending on the internal state, either a new memory
+> region is set up (and returned as out-of-line data), or the desired amount of
+> data is returned in-line.
+
+Back in `_Xio_read`.
+
+If the server decides not to use the 2048 byte buffer (out-of-line case, or
+the data is bigger than 2048 bytes, so the server instead provides a new
+memory region), the [[`dealloc`|microkernel/mach/mig/dealloc]] flag is set,
+which causes Mach to unmap that memory region from the server's address space,
+i.e., it does a memory *move* from the server to the client.
+
+Leave server-side RPC stub `_Xio_read`.
+
+>>> Return from kernel, continue client-side RPC stub `io_read`.  Have to copy
+>>> data.  Three cases: out-of-line data (pass pointer to memory area);
+>>> returned more data than fits into the originally supplied buffer (allocate
+>>> new buffer, copy all data into it, pass pointer of new buffer); otherwise
+>>> copy as much data as is available into the originally supplied buffer.
+>>> I.e., in all cases *all* data which was provided by the server is made
+>>> available to the caller.
+
+>> Back in `_hurd_fd_read`.  If a new buffer has been allocated previously, or
+>> the out-of-line mechanism has been used, the returned data now has to be
+>> copied into the originally supplied buffer.  If the server returned more
+>> data than requested, this is a [[protocol_violation|EGRATUITOUS]].
+
+> Back in `__libc_read`.
+
+
+---
+
+Samuel:
+
+(For example) [[translator/ext2fs]] server, enter server-side RPC stub
+`hurd.obj/libdiskfs/ioServer.c:_Xio_read`.  Process stuff, call
+`hurd/libdiskfs/io-read.c:diskfs_S_io_read`.
+
+A 2048 byte buffer is provided.
+
+> `diskfs_S_io_read` calls `_diskfs_rdwr_internal`.
+
+>> That calls `hurd/libpager/pager-memcpy.c:pager_memcpy`, which usually
+>> basically just tells the kernel to virtually project the memory object
+>> corresponding to the file in the caller process's memory.  No read is
+>> actually done.
 
   * Then, when the process actually reads the data, the kernel gets the user
     page fault (`gnumach/i386/i386/trap.c:user_trap`), which calls `vm_fault`,
-- 
1.7.5.4
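
To make the libtrivfs/hello step in the patch a bit more concrete, here is a
rough sketch of what such a server-side io_read handler looks like.  It is
loosely modeled on the hello translator's trivfs_S_io_read, but it is not a
verbatim copy of hurd/trans/hello.c: the contents/contents_len variables are
placeholders, the per-open file pointer bookkeeping is omitted, and the exact
prototype (off_t vs. loff_t, etc.) differs between Hurd versions.

/* Hedged sketch of a libtrivfs io_read handler, loosely modeled on the
   hello translator.  `contents' and `contents_len' are placeholders for
   whatever the translator serves; not a verbatim copy of
   hurd/trans/hello.c.  */

#include <hurd/trivfs.h>
#include <sys/mman.h>
#include <string.h>
#include <fcntl.h>
#include <errno.h>

static const char contents[] = "Hello, world!\n";
static const size_t contents_len = sizeof contents - 1;

error_t
trivfs_S_io_read (struct trivfs_protid *cred,
                  mach_port_t reply, mach_msg_type_name_t reply_type,
                  char **data, mach_msg_type_number_t *data_len,
                  loff_t offs, mach_msg_type_number_t amount)
{
  if (!cred)
    return EOPNOTSUPP;
  if (!(cred->po->openmodes & O_READ))
    return EBADF;

  /* Clamp the request to what is actually available.  */
  if (offs > contents_len)
    offs = contents_len;
  if (offs + amount > contents_len)
    amount = contents_len - offs;

  if (amount > 0)
    {
      /* If the preallocated reply buffer (the 2048 byte buffer mentioned
         in the patch) is too small, provide a new memory region instead;
         MiG then transfers it out-of-line and deallocates it from the
         server's address space.  */
      if (*data_len < amount)
        {
          *data = mmap (0, amount, PROT_READ | PROT_WRITE, MAP_ANON, 0, 0);
          if (*data == MAP_FAILED)
            return ENOMEM;
        }
      memcpy (*data, contents + offs, amount);
    }

  *data_len = amount;
  return 0;
}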

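The client-side copy-back step (the stub possibly handing back a different
buffer than the one supplied, which then has to be copied into the caller's
buffer, deallocated, and checked against a too-large reply) roughly looks
like this.  This is a simplified sketch in the spirit of glibc's
_hurd_fd_read, not the actual glibc code, which additionally handles the
ctty, port references, and uses the internal double-underscore names;
read_from_io_port is just a made-up name for illustration.

/* Hedged sketch of the client-side copy-back logic described in the
   patch, in the spirit of glibc's _hurd_fd_read; simplified, and using
   the public io_read/vm_deallocate names rather than glibc's internal
   ones.  The exact offset/amount types of the io_read RPC vary between
   versions.  */

#include <hurd.h>
#include <hurd/io.h>
#include <mach.h>
#include <string.h>
#include <errno.h>

static error_t
read_from_io_port (io_t port, void *buf, size_t *nbytes, off_t offset)
{
  char *data = buf;                       /* Suggest the caller's buffer.  */
  mach_msg_type_number_t nread = *nbytes;
  error_t err;

  err = io_read (port, &data, &nread, offset, *nbytes);
  if (err)
    return err;

  if (data != buf)
    {
      /* The stub handed back a different (out-of-line or freshly
         allocated) buffer.  */
      if (nread > *nbytes)
        {
          /* Server returned more data than requested: protocol
             violation.  */
          vm_deallocate (mach_task_self (), (vm_address_t) data, nread);
          return EGRATUITOUS;
        }
      memcpy (buf, data, nread);
      vm_deallocate (mach_task_self (), (vm_address_t) data, nread);
    }

  *nbytes = nread;
  return 0;
}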


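And for the diskfs/pager part at the end of the patch (pager_memcpy mapping
the file's memory object into the caller's address space, with the actual
read only happening on the later page fault), here is a rough client-side
illustration of that mapping path.  The file name, mapping parameters, and
error handling are arbitrary example choices, not how glibc's mmap is
actually implemented.

/* Hedged illustration of the mapping-based path: no read happens at
   io_map/vm_map time; the data is paged in when the memory is first
   touched (page fault -> vm_fault -> pager).  "/etc/motd" and the
   mapping parameters are arbitrary example choices.  */

#include <hurd.h>
#include <hurd/io.h>
#include <mach.h>
#include <fcntl.h>
#include <error.h>
#include <errno.h>
#include <stdio.h>

int
main (void)
{
  file_t file = file_name_lookup ("/etc/motd", O_READ, 0);
  if (file == MACH_PORT_NULL)
    error (1, errno, "file_name_lookup");

  mach_port_t memobj_rd, memobj_wr;
  error_t err = io_map (file, &memobj_rd, &memobj_wr);
  if (err)
    error (1, err, "io_map");

  vm_address_t addr = 0;
  err = vm_map (mach_task_self (), &addr, vm_page_size, 0,
                1 /* anywhere */, memobj_rd, 0 /* offset */, 1 /* copy */,
                VM_PROT_READ, VM_PROT_READ, VM_INHERIT_NONE);
  if (err)
    error (1, err, "vm_map");

  /* Only this access finally drives the fault-driven read.  */
  putchar (*(char *) addr);
  return 0;
}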