Re: [Gluster-devel] Architecture advice


From: Joe Landman
Subject: Re: [Gluster-devel] Architecture advice
Date: Thu, 08 Jan 2009 08:23:15 -0500
User-agent: Thunderbird 2.0.0.19 (X11/20090105)

Dan Parsons wrote:
> Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time
> to rearchitect things.

> Hardware: Gluster servers: 4 blades connected via 4 Gbit FC to fast,
> dedicated storage. Each server has two bonded gig-e links to the rest
> of my network, for 8 Gbit/s theoretical aggregate throughput.

Just make sure the channel-bonded gigabit links are (a) not Broadcom-based and (b) using nothing other than mode 0 (long story, but ping me offline if you want to hear some horror stories of hard-to-fix crashes). If you have access to 10 GbE or IB, either would be a superior solution for the entire system, storage and clients alike.
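For reference, bonding mode 0 is balance-rr (round-robin) in the Linux bonding driver. A minimal sketch of how it is typically configured; the file location and interface names here are assumptions, not from this thread:

```
# /etc/modprobe.d/bonding.conf (location varies by distro) -- assumed example
alias bond0 bonding
# mode=0 is balance-rr; miimon=100 polls link state every 100 ms
options bond0 mode=0 miimon=100
```

You can then check the active mode on a running system with `cat /proc/net/bonding/bond0`, which reports "load balancing (round-robin)" for mode 0.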

> Gluster clients: 33 blades, each with one gig-e connection. They use
> local storage for the OS and gluster for input/output files.

> Specific questions: (1) There are many times in our workflow when
> more than a few nodes will want the same file at the same time. This
> made me want to use the stripe xlator: when a client node saturates
> its gig-e link reading a file, each gluster server is serving only
> 250 Mbit/s of it, leaving headroom for more clients. If I weren't
> using stripe, this hypothetical file would live on just one server
> node, which would get slammed if more than two client nodes talked
> to it. Is there a better way of doing this? Did I make the correct
> decision in using the stripe xlator for this purpose? Can I achieve
> the same thing using just afr?

Without spending money to fix the storage architecture, you will really need to look at AFR; stripe may help single requests more than multiple simultaneous ones (guessing). You should be able to benchmark/test this, but I would imagine that AFR would help you with multiple simultaneous read-only accesses to specific files. Read/write will be more complex.

If you can spend money to fix the storage architecture, go 10 GbE or IB everywhere (storage nodes, client nodes, ...). You won't regret it.
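To make the AFR suggestion concrete, a rough client-side volfile sketch for replicating across two servers follows. The hostnames, volume names, and paths are made up, and the exact translator syntax shifted between the 1.3-era and 2.0 releases (cluster/afr was renamed cluster/replicate), so treat this as illustrative only:

```
# client volfile sketch -- hostnames and subvolume names are assumptions
volume server1
  type protocol/client
  option transport-type tcp        # older 1.3-era volfiles used "tcp/client"
  option remote-host blade1        # assumed hostname
  option remote-subvolume brick
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host blade2        # assumed hostname
  option remote-subvolume brick
end-volume

volume afr0
  type cluster/afr                 # "cluster/replicate" in 2.0
  subvolumes server1 server2
end-volume
```

With replication, a whole copy of each file lives on every replica, so different clients reading different files (or the same file) can be served from different servers, which is what spreads read load in the multiple-readers case Dan describes.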

> (2) I would like to architect the system such that if one node goes
> down, the others can keep serving the data, even if overall
> throughput is lower. This means that all data would need to be
> accessible from all clients. Is this something I would use the afr
> xlator for? If so, do I even need stripe anymore, to handle my need
> to have

Server-side AFR. Stripe may not help with reliability here.

> multiple servers capable of sending different chunks of the same
> file? And how does the HA xlator play into this?

> We have a mix of a small number of gigantic files and an extremely
> large number of small files, so I'm sure there will need to be some
> parameter tuning.

> Thanks in advance. If this question would be better addressed under
> some sort of support agreement, please let me know.

> Dan Parsons


_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: address@hidden
web  : http://www.scalableinformatics.com
       http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615



