[Gluster-devel] Architecture advice


From: Dan Parsons
Subject: [Gluster-devel] Architecture advice
Date: Thu, 8 Jan 2009 08:09:35 +0000

Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time to 
rearchitect things.

Hardware:
Gluster servers:
4 blades connected via 4 Gbit FC to fast, dedicated storage. Each server has two 
bonded Gig-E links to the rest of my network, for 8 Gbit/s of theoretical 
aggregate throughput across the four servers.

Gluster clients:
33 blades, each with a single Gig-E connection. They use local storage for the OS 
and gluster for input/output files.

Specific questions:
(1) There are many times in our workflow when more than a few nodes want the 
same file at the same time. This made me want to use the stripe xlator: when a 
client node saturates its Gig-E link reading the file, each gluster server is 
only pushing about 250 Mbit/s, leaving room for more clients. If I weren't using 
stripe, that file would live on just one server node, and that server would get 
slammed if more than two client nodes read it at once. Is there a better way of 
doing this? Did I make the correct decision in using the stripe xlator for this 
purpose? Can I achieve the same thing using just AFR?
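
Here's a rough client-side sketch of what I have in mind for (1); hostnames, 
volume names, and option values are placeholders, and the option names are from 
memory, so treat it as illustrative rather than exact:

volume server1
  type protocol/client
  option transport-type tcp
  option remote-host blade-gfs1        # placeholder hostname
  option remote-subvolume brick
end-volume

# server2, server3 and server4 would be defined the same way

volume stripe0
  type cluster/stripe
  option block-size 1MB                # each 1MB chunk lands on a different server
  subvolumes server1 server2 server3 server4
end-volume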

(2) I would like to architect the system so that if one server node goes down, 
the others keep serving the data, even if overall throughput drops. That means 
all data would need to be accessible from every client. Is this something I 
would use the AFR xlator for? If so, do I even need stripe anymore to cover my 
need for multiple servers each sending different chunks of the same file? And 
how does the HA xlator fit into this?
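
For (2), this is the layering I'm picturing if AFR is the right tool: pair the 
four servers into two replicated sets and stripe across the pairs, so losing any 
one server still leaves a complete copy of every chunk. Again, the names and 
options below are placeholders from memory, not a tested config:

volume repl1
  type cluster/replicate               # AFR; called cluster/replicate in 2.0
  subvolumes server1 server2
end-volume

volume repl2
  type cluster/replicate
  subvolumes server3 server4
end-volume

volume stripe0
  type cluster/stripe
  option block-size 1MB
  subvolumes repl1 repl2
end-volume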

We have a mix of (a small quantity of gigantic files) and (an extremely large 
quantity of small files), so I'm sure there will need to be some parameter 
tuning.
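
For what it's worth, this is the kind of client-side tuning I expect to 
experiment with for the big-file/small-file mix; the values are starting guesses 
and the option names are from memory, so please correct me if they've changed in 
2.0:

volume ra
  type performance/read-ahead
  option page-count 4                  # mainly for the gigantic sequential files
  subvolumes stripe0
end-volume

volume ioc
  type performance/io-cache
  option cache-size 256MB              # helps when many nodes reread the same data
  option cache-timeout 1
  subvolumes ra
end-volume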

Thanks in advance. If this question would be better addressed under some sort 
of support agreement, please let me know. 

Dan Parsons
