
Re: [Gluster-devel] RFC/Review: libgfapi object handle based extensions


From: Amar Tumballi
Subject: Re: [Gluster-devel] RFC/Review: libgfapi object handle based extensions
Date: Wed, 9 Oct 2013 01:11:53 +0530


After giving this some more thought, I feel the cleanest way is to make inode_t and the inode table graph aware. This way, for a given GFID there will be one and only one inode_t at a given time, no matter how many graphs are switched. It is also worth noting that the relationship between two GFIDs does not change with a graph switch, so having a separate inode table with duplicate inodes and dentries has always been redundant in a way. The initial decision to have a separate inode table per graph was made because the inode table was bound to an xlator_t (which in turn was bound to a graph).


An initial design / implementation of this is at http://review.gluster.org/6046 -- please review the way it's handled...
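
To make that concrete, here is a rough sketch (not the code under review) of what a GFID-keyed, graph-independent lookup could look like. inode_find() and inode_new() are the existing libglusterfs helpers; itable_shared() is only a hypothetical accessor for the single shared table:

/* Rough sketch, not the actual patch: one process-wide inode table
 * keyed purely by GFID and shared across graphs. */

#include "inode.h"            /* libglusterfs: inode_t, inode_table_t */

extern inode_table_t *itable_shared (void);   /* hypothetical singleton accessor */

inode_t *
resolve_gfid (uuid_t gfid)
{
        inode_table_t *itable = itable_shared ();
        inode_t       *inode  = inode_find (itable, gfid);

        if (!inode)
                inode = inode_new (itable);   /* discovered lazily on first use */

        /* Because the table is shared, the same inode_t comes back for
         * this GFID no matter which graph the caller is on. */
        return inode;
}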
 
If we make inode_t and the inode table multi-graph aware, the same inode_t would be valid on a new graph. We would need new code to keep track of the latest graph on which a given inode has been "initialized / discovered", in order to force a discover() on the new graph if necessary (dentry relations would just continue to be valid), and, after a graph switch, to force cleanup of xlators from the old graph.


This is not yet addressed with the above patch, and I would need some help there.
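
One possible shape for that bookkeeping, purely as a standalone sketch (the types, the last_graph_id field and discover_on_graph() are made-up names for illustration, not existing API):

/* Remember the graph on which an inode was last "initialized /
 * discovered" and force a discover() when it is resolved on a newer
 * graph.  The structs below are stand-ins, not libglusterfs types. */

struct graph { int id; };               /* stands in for a graph generation  */
struct inode { int last_graph_id; };    /* stands in for inode_t bookkeeping */

extern int discover_on_graph (struct graph *g, struct inode *i);

int
resolve_on_active_graph (struct graph *active, struct inode *inode)
{
        if (inode->last_graph_id == active->id)
                return 0;    /* already discovered here; dentries stay valid */

        if (discover_on_graph (active, inode) != 0)
                return -1;   /* keep the old graph id on failure */

        inode->last_graph_id = active->id;
        return 0;
}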
 
Another reason I prefer this new approach is that making inode_t graph independent puts old-graph destruction completely "in our control", without having to depend on (or force) FUSE to issue FORGET on inode_ts from the old graph. That entire problem is eliminated, as inode_ts would now be graph independent.
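
As a rough illustration of what "in our control" could mean in code (every name below is an illustrative stand-in, not existing libglusterfs API):

/* Once inode_ts are graph independent, destroying an old graph is just
 * walking its xlators and dropping their per-inode contexts; no FUSE
 * FORGET traffic is needed. */

struct graph;
struct xlator;

extern struct xlator *first_xlator (struct graph *g);
extern struct xlator *next_xlator (struct xlator *xl);
extern void drop_inode_ctxs (struct xlator *xl);   /* forget the ctxs this xlator set */
extern void free_graph (struct graph *g);

void
destroy_old_graph (struct graph *old)
{
        struct xlator *xl = NULL;

        for (xl = first_xlator (old); xl; xl = next_xlator (xl))
                drop_inode_ctxs (xl);   /* previously gated on FUSE FORGETs */

        free_graph (old);
}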

(copying Raghavendra Bhat, who is working on graph destruction, and Amar)

Thoughts?

Now, after implementing the suggested method, I feel it's much better for overall dynamic graph/volume management. It makes the code simpler.

Regards,
Amar
