bug-hurd

Re: [PATCH] new interface: memory_object_get_proxy


From: Sergey Bugaev
Subject: Re: [PATCH] new interface: memory_object_get_proxy
Date: Sun, 24 Oct 2021 20:50:26 +0300

On Sun, Oct 24, 2021 at 7:28 PM Joan Lledó <jlledom@mailfence.com> wrote:
> +routine memory_object_get_proxy(
> +               task            : task_t;

Naming: perhaps memory_object_create_vm_proxy ()? Or even
memory_object_create_task_vm_proxy ()?

I would expect the request port argument to be a vm_task_t (i.e. a
vm_map), not a full task. I do see that you need to pass
task->itk_space to memory_object_create_proxy (). But
memory_object_create_proxy () doesn't actually need the task either:
it just checks it against NULL (as usual) and never uses it again. So
maybe it'd be cleaner to make both calls accept a vm_task_t? Not that
this matters much.
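To illustrate, the .defs declaration could then look something like
this (a sketch only; the routine name, argument list, and formatting
here are illustrative, not a proposal for the final interface):

```
/* Hypothetical variant: the request port is the vm_map itself
   (vm_task_t), not the full task.  */
routine memory_object_create_vm_proxy(
		map		: vm_task_t;
		max_protection	: vm_prot_t;
		address		: vm_address_t;
		len		: vm_size_t;
	out	proxy		: mach_port_t);
```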

> +  vm_map_lock_read(task->map);
> +  if (!vm_map_lookup_entry(task->map, address, &tmp_entry)) {
> +    if ((entry = tmp_entry->vme_next) == vm_map_to_entry(task->map)) {
> +      vm_map_unlock_read(task->map);
> +      return(KERN_NO_SPACE);
> +    }
> +  } else {
> +    entry = tmp_entry;
> +  }
> +  vm_map_unlock_read(task->map);
> +
> +  /* Limit the allowed protection and range to the entry ones */
> +  max_protection &= entry->max_protection;
> +  if (len > entry->vme_end - entry->vme_start)
> +    len = entry->vme_end - entry->vme_start;
> +  offset = entry->offset;
> +  start = 0;

I don't think you can access the entry once you've unlocked the map.

(But note that I'm not very familiar with the kernel internals, so I
could be wrong about this.)

Should the implementation cap the length to that of the entry
silently, or should it return an error if called with an overly long
len argument?
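If the entry indeed must not be dereferenced after the unlock, one way
to restructure it (just a sketch, not tested) is to snapshot every
field you need while the read lock is still held, and only then drop
the lock:

```c
/* Sketch: copy the needed fields before dropping the map lock, so
   the entry is never touched after vm_map_unlock_read().  The
   'pager' local is assumed to be declared as ipc_port_t above.  */
vm_map_lock_read(task->map);
if (!vm_map_lookup_entry(task->map, address, &tmp_entry)) {
  if ((entry = tmp_entry->vme_next) == vm_map_to_entry(task->map)) {
    vm_map_unlock_read(task->map);
    return KERN_NO_SPACE;
  }
} else {
  entry = tmp_entry;
}

/* Limit the allowed protection and range to the entry ones,
   while the lock still protects the entry.  */
max_protection &= entry->max_protection;
if (len > entry->vme_end - entry->vme_start)
  len = entry->vme_end - entry->vme_start;
offset = entry->offset;
start = 0;
pager = entry->object.vm_object->pager;
vm_map_unlock_read(task->map);
```

(Whether the pager port itself can be safely used after the unlock
without taking a reference is a separate question I'm not sure about.)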

> +
> +  return memory_object_create_proxy(task->itk_space, max_protection,
> +                                   &entry->object.vm_object->pager, 1,
> +                                   &offset, 1,
> +                                   &start, 1,
> +                                   &len, 1, port);
> +}

I'm... not sure if this is wrong, but just to bring some attention to
it: isn't memory_object_create_proxy () supposed to *consume* the
'objects' ports array in the successful case? I see it uses
ipc_port_copy_send (object[0]), why does that not cause a leak? For a
reference point, mach_ports_register (), which is another kernel RPC
that accepts a port array, sure does consume it.

If it *is* the case that memory_object_create_proxy () should consume
the ports, then your new routine should make an extra ref to the pager
port before passing it to memory_object_create_proxy ().
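Something along these lines, perhaps (again just a sketch; whether
the extra reference is actually needed hinges on the consumption
question above):

```c
/* Sketch: take an extra send right on the pager port, in case
   memory_object_create_proxy() consumes the 'objects' array on
   success.  'pager' is assumed to be a local ipc_port_t.  */
pager = ipc_port_copy_send(entry->object.vm_object->pager);
return memory_object_create_proxy(task->itk_space, max_protection,
                                  &pager, 1,
                                  &offset, 1,
                                  &start, 1,
                                  &len, 1, port);
```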

Overall: yes, this is what I was suggesting! Thank you for working on it!

Sergey


