
Re: [Gluster-devel] Glusterd: A New Hope


From: Vidar Hokstad
Subject: Re: [Gluster-devel] Glusterd: A New Hope
Date: Mon, 25 Mar 2013 14:53:38 +0000

On Mon, Mar 25, 2013 at 1:07 PM, Jeff Darcy <address@hidden> wrote:

> I'm a little surprised by the positive reactions to the "Gluster on
> Gluster" approach.  Even though Kaleb and I considered it for HekaFS,
> it's still a bit of a hack.  In particular, we'd still have to solve the
> problems of keeping that private instance available, restarting daemons
> and initiating repair etc. - exactly the problems it's supposed to be
> solving for the rest of the system.

For my part, the reason I like it is that it seems conceptually "pure": all the other solutions still need to be bootstrapped somehow anyway, and I already know how the Gluster setup works. Though I'm aware it might be more complicated than it's worth, even if it seems clean from the outside. You've undoubtedly thought a lot more about the problems with it than I have...

In any case, if the bootstrapping of this is all hidden behind just an extra option on "peer probe", for example, then it doesn't make much difference whether that triggers bootstrapping a Gluster configuration volume, Doozer, or something else entirely, as long as the chosen option doesn't make day-to-day troubleshooting and maintenance much harder...
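
Just to illustrate what I mean (the extra option below is purely hypothetical, and so is whatever it bootstraps; only the plain "peer probe", "peer status" and volume commands exist today), the rest of the workflow wouldn't have to change at all:

    # hypothetical extra option (whatever the final syntax ends up being)
    # that silently bootstraps the chosen configuration store behind the
    # scenes - a Gluster configuration volume, Doozer, or something else
    gluster peer probe server2 <config-option>
    gluster peer probe server3 <config-option>

    # everything else stays the normal, existing workflow
    gluster peer status
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start myvol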

> If we just do something like "first three servers to be configured become
> configuration servers" then we run a very high risk of choosing exactly
> those servers that are most likely to fail together.  :(  As long as the
> extra configuration is limited to one option on "peer probe", is it
> really a problem?

I think perhaps I'm looking at it from the specific point of view of a user who will generally have fairly small volumes. When most of your volumes have (say) no more than 5-6 nodes, it'd be painful to also have to worry about whether a failed node was one of the configuration nodes and then have to designate a new one. It's one more thing that can go wrong. For a large deployment I agree you _need_ to know those kinds of things, but for a small one I'd be inclined to just make every node hold the configuration data.

As long as there are no hard limits that prevent us from just adding "as master" on the "peer probe" of every node in a small cluster (other than, perhaps, performance tradeoffs if the number grows too large), that'd be fine, I think. I can avoid the complexity of paying attention to which nodes are "special", and people with larger clusters can still control it in detail...
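
Concretely, for a small cluster I'd picture just doing (again, the "as master" part is hypothetical syntax, only there to show the idea):

    # 4-node cluster: every node holds the configuration data,
    # so there is no "special" node to keep track of when one fails
    gluster peer probe server2 as master
    gluster peer probe server3 as master
    gluster peer probe server4 as master

A larger deployment could probe just a chosen handful that way and add the rest with a plain "gluster peer probe".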

Vidar
