[Novalug] GFS?

Russell Evans russell-evans@qwest.net
Thu May 3 11:13:00 EDT 2007


http://www.drbd.org/
Each device (DRBD provides more than one of these) has a state,
which can be 'primary' or 'secondary'. The application runs on the
node with the primary device and accesses it through /dev/drbdX.
Every write is sent both to the local 'lower level block device' and
to the node whose device is in 'secondary' state; the secondary
simply writes the data to its own lower level block device. Reads
are always carried out locally.
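
For reference, a resource definition in /etc/drbd.conf ties the two
nodes together. A minimal sketch (hostnames 'alpha'/'bravo', disks,
and addresses are all made up for illustration):

    # /etc/drbd.conf -- minimal sketch, not a tested configuration.
    # Hostnames, disks, and IP addresses here are hypothetical.
    resource r0 {
        protocol C;                  # synchronous replication
        on alpha {
            device    /dev/drbd0;    # the device the application uses
            disk      /dev/sda7;     # the local lower level block device
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on bravo {
            device    /dev/drbd0;
            disk      /dev/sda7;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

Whichever node is promoted (e.g. with 'drbdadm primary r0') mounts
/dev/drbd0 and runs the application; the peer just mirrors the writes.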

If the primary node fails, heartbeat switches the secondary device
into primary state and starts the application there. (If you are
using it with a non-journaling FS, this involves running fsck first.)
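
With heartbeat v1, that failover is typically wired up through a
haresources line; a hypothetical example (node name, mount point, and
service are invented):

    # /etc/ha.d/haresources -- sketch only; all names are hypothetical.
    # On failover heartbeat runs these left to right: promote the DRBD
    # resource to primary (via the drbddisk script shipped with DRBD),
    # mount it, then start the service.
    alpha drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server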

If the failed node comes up again, it becomes a new secondary node
and has to bring its content back in sync with the primary. This, of
course, happens in the background without interruption of service.
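
You can watch such a background resync from either node; /proc/drbd
shows the connection state and sync progress:

    # monitor the resync (refreshes every second)
    watch -n1 cat /proc/drbd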

And, of course, only those parts of the device that have actually
changed are resynchronized. DRBD has always done intelligent
resynchronization when possible. Starting with the DRBD 0.7 series,
you can define an "active set" of a certain size. This makes it
possible to have a total resync time of 1-3 minutes, regardless of
device size (currently up to 4TB), even after a hard crash of an
active node.
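
That "active set" is the activity log, sized with al-extents in the
syncer section of drbd.conf. An illustrative (not tuned) fragment:

    syncer {
        rate       10M;    # bandwidth cap for background resync
        al-extents 257;    # the "active set": 257 extents of 4MB each,
                           # so roughly 1GB at most is resynced after a
                           # hard crash of the active node
    }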

http://www.linux-ha.org/DataRedundancyByDrbd#head-fb1cc8be5518f2ead2ea0f9e9f485c0ca25bd89e
Disaster Recovery with "Tele-DRBD"

    * The typical use of DRBD and HA clustering is probably two
      machines connected with normal networks, and one or more
      crossover cables, a few meters apart, within one server room, or
      at least within the same building. Possibly even a few hundred
      meters apart, in the next building. But you could use DRBD over
      long-distance links, too. When you have the replica several
      hundred kilometers away in some other data center for Disaster
      Recovery, your data will survive even a major earthquake at your
      primary location. You want to use protocol A and a huge
      sndbuf-size here, and probably adjust the timeout, too (see the
      config sketch after this list).

      Think about privacy! Since with DRBD the complete disk content
      goes over the wire, if this wire is not a crossover cable but
      the (supposedly hostile) Internet, you should route DRBD traffic
      through some virtual private network (VPN).

      Make sure no one other than the partner node can access the DRBD
      ports, or someone might provoke a connection loss, then race for
      the first reconnect and get a full sync of your disk's content.
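
For the long-distance setup described in the first bullet above, the
knobs mentioned there live in the resource's protocol and net
settings. An illustrative, untested fragment (values made up):

    # "Tele-DRBD" fragment -- a sketch, not a tested configuration
    resource r0 {
        protocol A;              # asynchronous: a write counts as done
                                 # once it is on the local disk and in
                                 # the TCP send buffer
        net {
            sndbuf-size 512k;    # the "huge sndbuf-size" from the text
            timeout     60;      # units of 0.1 s; adjust for link latency
        }
        # 'on <host>' sections as in a normal setup; and per the notes
        # above, firewall the DRBD port to the partner node and/or run
        # the traffic over a VPN
    }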


Thank you
Russell




On Thu, 3 May 2007 01:25:26 +0000
"Paul M." <paul@gpmidi.net> wrote:

> Something like OpenAFS or NBD might work better.
> OpenAFS
> File level replication/failover
> http://www.openafs.org/
> NBD
> Device level replication
> http://nbd.sourceforge.net/
> -Paul
> 
> On 5/2/07, Nick Danger <nick@hackermonkey.com> wrote:
> > I think I misunderstood the purpose of GFS :-) What I'm looking for
> > is to have two geographically separated NAS units. NAS units are
> > cheap in single form: 3 terabytes for less than 10 grand. The
> > question is, how can I mirror the two file systems for failover? I
> > know how to do it at the application/network level, just not at the
> > data/FS level. I kept thinking GFS, but that seems more like making
> > lots of disks appear as one, not for mirroring. Unless I'm reading
> > it wrong.
> >
> > So, pointers? Links? Case studies? I'll summarize what I find and
> > send it back out to the list.
> >
> > Thanks All,
> >
> > Nick
> >


