
Multipage requests for GNU Mach 1.3

From: Sergio López
Subject: Multipage requests for GNU Mach 1.3
Date: Fri, 17 Dec 2004 00:15:39 +0100


I've been playing a little with GNU Mach, and I think there is something
that would be nice to implement in it. In "vm/vm_fault.c", when the
kernel requests some data from a translator for a memory_object,
we can read this code:

        if ((rc = memory_object_data_request(object->pager,
                m->offset + object->paging_offset,
                PAGE_SIZE, access_required)) != KERN_SUCCESS) {

And this is the syntax for m_o_d_request (from The GNU Mach Reference
Manual):
        kern_return_t seqnos_memory_object_data_request (
                memory_object_t memory_object,
                mach_port_seqno_t seqno,
                memory_object_control_t memory_control,
                vm_offset_t offset, 
                vm_offset_t length,
                vm_prot_t desired_access)

As you can see, the "length" parameter is always "PAGE_SIZE" (you
know, 4K on x86) in GNU Mach. This means that for a translator which
works by reading and writing from a disk (like ext2fs), every I/O
operation is split up into 4K fragments.

But in OSF Mach, things are a bit different. Memory objects have a
property named "cluster_size", and "length" in m_o_d_request is
determined by it. I don't know where OSF Mach sets the value of
cluster_size, but we could do it in m_o_ready/m_o_set_attributes, so every
translator can set it as it wants.

This means that, when a page fault is triggered from a memory_object
that needs data, vm_fault.c fills (cluster_size/PAGE_SIZE) pages,
starting with the one that generated the fault. Many times we read more
data than we ever use, but even with this issue, benchmarks [1] (I've
made a fast (ugly, buggy and dirty) implementation over GNU Mach to
test it [2]) show that the performance for I/O operations is slightly
better.
But with this strategy we have a problem that must be resolved. Many
times, GNU Mach requests more pages than the translator (ext2fs in my
tests) can fill (if you are dumping a 17K file with a 16K cluster_size
(4 pages), the first call will fill all the pages, and the second only 1),
and we must free the unused ones somehow. I think that m_o_d_unavailable
and m_o_d_error don't fit well for this purpose, so I've hacked the glue
code (linux/dev/glue/block.c) so that "device_read" writes the
pages directly to the memory_object, freeing the unused ones at the same
time (probably there is a much better way to do this ;-).

What do you think about this?

[1] http://es.gnu.org/pipermail/bee-devel/2004-November/000191.html

[2] You can see the code in the Bee's GNU Mach CVS,
http://bee.nopcode.org, "neomach" directory (the changes are uncommented
and dirty, I'll fix them in the near future :-P). Sorry, I can't make a
diff easily because the related changes are mixed with others, but when I
have enough time, I'll implement this over a "clean" GNU Mach to send
the patch, so you can review it easily.
