New behaviour for reading from a memory object.

From: Sergio Lopez
Subject: New behaviour for reading from a memory object.
Date: Mon, 01 Aug 2005 03:16:58 +0200


When a translator needs access to a portion of a memory object (e.g. when
answering read()/write() requests), it usually must map part of that
object into its own address space and then copy the data to the
destination buffer (vm_read/vm_write/vm_copy support neither memory
objects nor unaligned addresses). This behaviour could be improved.
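For illustration, the page-aligned mapping window a translator has to set up in the old scheme can be sketched in plain C. This is a simulation, not real Mach code: PAGE_SIZE, the static `object` array and the trunc_page/round_page macros stand in for the real kernel ones, and the "mapping" is just a pointer where the real code would call vm_map and later vm_deallocate.

```c
#include <string.h>
#include <stddef.h>

#define PAGE_SIZE 4096
#define trunc_page(x) ((x) & ~(size_t) (PAGE_SIZE - 1))
#define round_page(x) (((x) + PAGE_SIZE - 1) & ~(size_t) (PAGE_SIZE - 1))

/* Simulated backing store of the memory object.  */
static unsigned char object[8 * PAGE_SIZE];

/* Old-style read: map a page-aligned window covering
   [offset, offset + size), then memcpy the interesting bytes out of it.
   Here the "mapping" is a pointer into the simulated object; with real
   Mach it would be a vm_map call followed by vm_deallocate.  */
static void
old_style_read (size_t offset, size_t size, unsigned char *buf)
{
  size_t win_start = trunc_page (offset);
  size_t win_end = round_page (offset + size);
  unsigned char *window = object + win_start;  /* stand-in for vm_map */

  (void) win_end;  /* only needed to size the real mapping */
  memcpy (buf, window + (offset - win_start), size);
  /* stand-in for vm_deallocate (window) */
}
```

The point of the sketch is the extra bookkeeping: every read pays for setting up and tearing down a mapping, even when only a few unaligned bytes are wanted.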

I'm trying to implement an easy and fast way to read from/write to a
memory object without the need of mapping its contents. This change
consists of two new functions:

- kern_return_t vm_fault_copy_tmp() [vm_fault_copy_tmp.c]

This is a copy of vm_fault_copy with support for unaligned addresses. It
can copy pages from a memory object to a userspace buffer. Right now,
this buffer must contain something in each page, to ensure that its
pages are already allocated (this will be easily solved by manually
faulting in the page when vm_page_lookup() returns VM_PAGE_NULL).
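The per-page loop such a copy routine has to perform for an unaligned range can be sketched like this. Again a plain-C simulation: `lookup_page` is a hypothetical stand-in for vm_page_lookup plus mapping the page, and PAGE_SIZE is assumed to be 4096.

```c
#include <string.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Simulated source memory object.  */
static unsigned char src_object[4 * PAGE_SIZE];

/* Hypothetical stand-in for vm_page_lookup(): return the start of the
   page of the source object that contains `offset`.  */
static unsigned char *
lookup_page (size_t offset)
{
  return src_object + (offset & ~(size_t) (PAGE_SIZE - 1));
}

/* Copy `size` bytes starting at the (possibly unaligned) `offset`
   into `dst`, one source page at a time.  */
static void
copy_unaligned (size_t offset, size_t size, unsigned char *dst)
{
  while (size > 0)
    {
      size_t page_off = offset & (PAGE_SIZE - 1);
      size_t chunk = PAGE_SIZE - page_off;  /* bytes left in this page */

      if (chunk > size)
        chunk = size;
      memcpy (dst, lookup_page (offset) + page_off, chunk);
      offset += chunk;
      dst += chunk;
      size -= chunk;
    }
}
```

The first and last iterations handle the unaligned head and tail; everything in between copies whole pages.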

- kern_return_t vm_read_fast() [vm_read_fast.c]

This is the RPC exported to the user. It looks for the proper entry in
the target task's map, checks its size, checks the object, and calls
vm_fault_copy_tmp() to do the actual copy.
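The entry lookup and size check can be sketched with a hypothetical, heavily simplified model of a vm_map (the real entry structures live in the kernel and carry much more state):

```c
#include <stddef.h>

/* Hypothetical, simplified stand-in for a Mach vm_map entry.  */
struct map_entry
{
  size_t start;  /* vme_start */
  size_t end;    /* vme_end   */
};

/* Return the entry that fully contains [address, address + size),
   or NULL -- mirroring the checks vm_read_fast makes before handing
   the range to the copy routine.  A range spanning more than one
   entry is rejected here, as in the current code.  */
static const struct map_entry *
lookup_entry (const struct map_entry *entries, size_t n,
              size_t address, size_t size)
{
  for (size_t i = 0; i < n; i++)
    if (address >= entries[i].start && address + size <= entries[i].end)
      return &entries[i];
  return NULL;
}
```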

A "pager_memcpy" that makes use of these functions might look like this:

error_t
pager_memcpy_direct (struct pager *pager, memory_object_t memobj,
                     vm_offset_t offset, task_t target_task,
                     vm_address_t address, size_t *size,
                     vm_prot_t prot)
{
  error_t err = 0;
  size_t nbytes = *size;

  /* XXX nbytes is not properly updated right now */

  err = vm_write_fast (target_task, address, nbytes, memobj, offset);

  return err;
}

Please note that the arguments have changed so that it can receive
target_task. This means that the RPCs of the translator and libc must be
changed to support this. I made my tests by exporting io-read-direct()
(in libdiskfs) and read_direct() (in libc), which are copies of
io-read()/read() with minor changes to support this behaviour.

Test results (reading a file, data already cached)

Reading 1000 chunks of 8K each (milliseconds):
Old behaviour: 260-270
vm_read_fast: 110-120

Reading 1000 chunks of 4K each (milliseconds):
Old behaviour: 180-190
vm_read_fast: 90-100

Reading 1000 chunks of 1K each (milliseconds):
Old behaviour: 100-110
vm_read_fast: 70-80

NOTE: I've disabled vm_copy() in pager_memcpy because it's (much) slower
than simply memcpy'ing when reading.

This advantage will probably increase if we convert vm_read_fast (which
right now is a complex RPC, because it receives a memory_object) into a
syscall or a simple RPC.


Things left to do:

- Make sure that one application can't write to another task's memory by
guessing its task_t value. If it can, create an authentication mechanism
(probably with mach_task_self()).

- Add support for buffers of more than one map entry (vm_read_fast).

- When vm_page_lookup() returns VM_PAGE_NULL, call vm_fault_page() and
insert result_page into application's pmap (vm_fault_copy_tmp).

- Make sure that every translator can support this behaviour. If they
can't, look for a way to make both implementations compatible.

- Try to convert vm_read_fast to a simple RPC or a syscall.

- Implement vm_write_fast().

This feature is _very_ experimental, and it needs much work to be
finished. But before going any further, I would really appreciate
hearing the opinions of long-time Hurd/Mach hackers (Roland?).


Attachment: vm_read_fast.c
Description: Text Data

Attachment: vm_fault_copy_tmp.c
Description: Text Data
