bug-hurd

[PATCH] Implement the sync libnetfs stubs.


From: Sergiu Ivanov
Subject: [PATCH] Implement the sync libnetfs stubs.
Date: Mon, 17 Aug 2009 23:44:59 +0300
User-agent: Mutt/1.5.16 (2007-06-09)

* netfs.c (netfs_attempt_sync): Sync every writable directory
associated with the supplied node.
(netfs_attempt_syncfs): Send file_syncfs to every writable
directory maintained by unionfs.
---

Hello,

On Sun, Aug 16, 2009 at 08:41:27PM +0200, olafBuddenhagen@gmx.net wrote:
> On Mon, Aug 03, 2009 at 09:19:17PM +0300, Sergiu Ivanov wrote:
> > On Sat, Jul 18, 2009 at 08:08:20AM +0200, olafBuddenhagen@gmx.net wrote:
> 
> > > So there is no other way to associate the two lists? This is ugly
> > > indeed. In this case, I think it would be better not to use the iterator
> > > at all -- what you did here looks really hackish, and it breaks the
> > > iterator paradigm anyways...
> > 
> > Yeah, true, it breaks the paradigm.  However, I actually borrowed this
> > piece of code from node_init_root (node.c), so this is the style used
> > in unionfs.  Should I forget about consistency in this case, what do
> > you think?
> 
> If the code is indeed copied, it's probably better to keep it as is.

Yeah, by ``borrowed'' I indeed meant ``copied'', so I'll keep the code
as it is.
 
> > Ah, so you mean forwarding syncfs to all unioned *directories*?
> > Sorry, I thought you were talking about doing fsys_syncfs on all
> > unioned *filesystems* :-)
> 
> Yes, I meant forwarding the syncfs to all the root nodes of the unioned
> file systems.
> 
> > In this case, I'd tell you that the current implementation does
> > exactly what you are talking about: sends file_sync to all writable
> > directories maintained by unionfs :-)
> 
> No, no, no! I said file_syncfs(), not file_sync()!
> 
> I don't know enough about this stuff to say with confidence how these
> RPCs differ exactly; but I have absolutely no doubt that there is a good
> *reason* for the existence of both. I simply don't believe that
> implementing one in terms of the other has the right effect.

Hm :-( I haven't paid attention to the existence of both file_sync and
file_syncfs so far -- I thought there existed only file_sync.  I think
that file_syncfs is equivalent to fsys_syncfs, the difference being in
the target of invocation (file_syncfs is invoked on a port to a file,
while fsys_syncfs is invoked on the control port).  At least libnetfs
and libdiskfs implement both in exactly the same way.  It looks as
though the existence of both RPCs was indeed motivated by the
necessity of solving the problem Fredrik pointed out (syncing
filesystems of which you are not an owner).

I changed netfs_attempt_syncfs to send file_syncfs to all filesystems
maintained by unionfs.

Sorry for being dumb for so long :-(

Regards,
scolobb

---
 netfs.c |   82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/netfs.c b/netfs.c
index 89d1bf6..0180b1a 100644
--- a/netfs.c
+++ b/netfs.c
@@ -1,5 +1,6 @@
 /* Hurd unionfs
-   Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc.
+   Copyright (C) 2001, 2002, 2003, 2005, 2009 Free Software Foundation, Inc.
+
    Written by Moritz Schulte <moritz@duesseldorf.ccc.de>.
 
    This program is free software; you can redistribute it and/or
@@ -282,7 +283,45 @@ error_t
 netfs_attempt_sync (struct iouser *cred, struct node *np,
                    int wait)
 {
-  return EOPNOTSUPP;
+  /* The error we are going to report back (last failure wins).  */
+  error_t final_err = 0;
+
+  /* The index of the currently analyzed filesystem.  */
+  int i = 0;
+
+  /* The information about the currently analyzed filesystem.  */
+  ulfs_t * ulfs;
+
+  mutex_lock (&ulfs_lock);
+
+  /* Sync every writable directory associated with `np`.
+
+     TODO: Rewrite this after having modified ulfs.c and node.c to
+     store the paths and ports to the underlying directories in one
+     place, because now iterating over both lists looks ugly.  */
+  node_ulfs_iterate_unlocked (np)
+  {
+    error_t err;
+
+    /* Get the information about the current filesystem.  */
+    err = ulfs_get_num (i, &ulfs);
+    assert (err == 0);
+
+    /* Since `np` may not necessarily be present in every underlying
+       directory, having a null port is perfectly valid.  */
+    if ((node_ulfs->port != MACH_PORT_NULL)
+       && (ulfs->flags & FLAG_ULFS_WRITABLE))
+      {
+       err = file_sync (node_ulfs->port, wait, 0);
+       if (err)
+         final_err = err;
+      }
+
+    ++i;
+  }
+
+  mutex_unlock (&ulfs_lock);
+  return final_err;
 }
 
 /* This should sync the entire remote filesystem.  If WAIT is set,
@@ -290,7 +329,44 @@ netfs_attempt_sync (struct iouser *cred, struct node *np,
 error_t
 netfs_attempt_syncfs (struct iouser *cred, int wait)
 {
-  return 0;
+  /* The error we are going to report back (last failure wins).  */
+  error_t final_err = 0;
+
+  /* The index of the currently analyzed filesystem.  */
+  int i = 0;
+
+  /* The information about the currently analyzed filesystem.  */
+  ulfs_t * ulfs;
+
+  mutex_lock (&ulfs_lock);
+
+  /* Sync every writable filesystem maintained by unionfs.
+
+     TODO: Rewrite this after having modified ulfs.c and node.c to
+     store the paths and ports to the underlying directories in one
+     place, because now iterating over both lists looks ugly.  */
+  node_ulfs_iterate_unlocked (netfs_root_node)
+  {
+    error_t err;
+
+    /* Get the information about the current filesystem.  */
+    err = ulfs_get_num (i, &ulfs);
+    assert (err == 0);
+
+    /* Note that, unlike the situation in netfs_attempt_sync, having a
+       null port here is abnormal.  */
+    if (ulfs->flags & FLAG_ULFS_WRITABLE)
+      {
+       err = file_syncfs (node_ulfs->port, wait, 0);
+       if (err)
+         final_err = err;
+      }
+
+    ++i;
+  }
+
+  mutex_unlock (&ulfs_lock);
+  return final_err;
 }
 
 /* lookup */
-- 
1.6.3.3