Deploying Highly Available NFS Server with DRBD and Heartbeat on Debian

Source: http://www.linux-admins.net/2014/04/deploying-highly-available-nfs-server.html

Here's a quick and dirty way of making NFS highly available by using DRBD for block level replication and Heartbeat as the messaging layer. Just as a side note - I would recommend that for larger setups you use Pacemaker and Corosync instead of Heartbeat, but for a simple two node NFS cluster this is more than sufficient.

First let's install NFS, DRBD and heartbeat. For this example I'll be using Debian Squeeze and DRBD version 8.3.

On both servers run the following (nfs-kernel-server is removed from the boot sequence because heartbeat will start it on whichever node is active):

[root@servers ~] apt-get install nfs-kernel-server drbd8-utils heartbeat
[root@servers ~] update-rc.d -f nfs-kernel-server remove

I'll be using LVM with the following layout:

[root@servers ~] lvdisplay
  --- Logical volume ---
  LV Name                /dev/raid10/drbd
  VG Name                raid10
  LV UUID                AZewL8-SB3A-3flE-zu3K-QDeZ-vkgU-e5Vv0j
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                290.00 GiB
  Current LE             74240
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Name                /dev/raid10/drbd_meta
  VG Name                raid10
  LV UUID                zMqj7N-5AHI-EFsA-ksEA-godF-Gu94-tle7eQ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2
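
If you're building the volumes from scratch, something along these lines produces an equivalent layout (just a sketch assuming an existing volume group called raid10 with enough free extents - adjust names and sizes to your own setup):

[root@servers ~] lvcreate -L 290G -n drbd raid10
[root@servers ~] lvcreate -L 10G -n drbd_meta raid10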

Make sure that /etc/hosts on both servers contains the hostnames of both nodes:

[root@servers ~] cat /etc/hosts
127.0.0.1      localhost
10.13.238.41   server1.example.com   server1
10.13.238.42   server2.example.com   server2
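
Both DRBD (the on server1 / on server2 sections further down) and heartbeat (the node directive) match these names against the output of uname -n, so it's worth double checking that each node really reports the name you expect:

[root@server1 ~] uname -n   # must print server1
[root@server2 ~] uname -n   # must print server2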

I'll be exporting /export:

[root@servers ~] cat /etc/exports
/export 10.13.238.0/24(rw,fsid=0,insecure,no_subtree_check,sync)

The DRBD config file should look similar to the following (this config file works with drbd8-utils=2:8.3.7-2.1 that comes with Squeeze):

[root@servers ~] cat /etc/drbd.conf
resource r0 {

  protocol C;

  startup { degr-wfc-timeout 120; }

  disk { on-io-error detach; }

  net { }

  syncer { rate 10M; al-extents 257; }

  on server1 {                             # Replace with `uname -n` of server 1
    device    /dev/drbd0;                  # Replace with the device name drbd should use
    disk      /dev/raid10/drbd;            # Replace with the data partition on server 1
    address   10.13.238.41:7788;           # IP address of server 1
    meta-disk /dev/raid10/drbd_meta[0];    # Replace with the DRBD meta partition on server 1
  }

  on server2 {
    device    /dev/drbd0;
    disk      /dev/raid10/drbd;
    address   10.13.238.42:7788;
    meta-disk /dev/raid10/drbd_meta[0];
  }
}

All the config file does is define a resource named r0 that lists the two nodes participating in the cluster, server1 and server2, along with their IP addresses and the underlying block devices for the data and the metadata.
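
A quick sanity check before moving on is to let drbdadm parse the file and print back what it resolved; it will complain if the syntax is off and it's a convenient way to confirm the hostnames and devices are what you intended:

[root@servers ~] drbdadm dump r0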

Initially we need to load the drbd kernel module by hand (from then on the drbd init script will take care of it), then initialize the metadata storage, attach the local backing block device to the DRBD resource's device and set up the network configuration. Run the following on both servers:

[root@servers ~] modprobe drbd
[root@servers ~] drbdadm create-md r0
[root@servers ~] drbdadm up r0

One of the two servers is going to be the primary, in the sense that the drbd block device will be mounted and used by NFS; if it fails, the second server will take over.

To promote the resource's device to the primary role (you need to do this before any access to the device, such as creating or mounting a file system) and start the initial synchronization, run the following only on the primary server:

[root@server1 ~] drbdadm primary --force r0
[root@server1 ~] drbdadm -- --overwrite-data-of-peer primary all
[root@server1 ~] drbdadm -- connect all

To check the progress of the block level sync you can monitor /proc/drbd.
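
Something like the following works nicely for keeping an eye on it; the cs: field is the connection state, ro: the roles and ds: the disk states, and the sync'ed percentage shows how far along the initial sync is:

[root@servers ~] watch -n1 cat /proc/drbd
[root@servers ~] drbdadm cstate r0   # connection state, e.g. SyncSource/SyncTarget, later Connected
[root@servers ~] drbdadm dstate r0   # disk states, UpToDate/UpToDate once the sync has finished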

At this point you can create the file system on /dev/drbd0, mount it and copy your data.

To make sure that NFS shares all of its metadata (file locks, etc.) as well, do the following on the first server:

[root@server1 ~] mkfs.ext3 /dev/drbd0
[root@server1 ~] mkdir -p /export/nfs/rpc_pipefs/nfs
[root@server1 ~] mount -t ext3 /dev/drbd0 /export
[root@server1 ~] mv /var/lib/nfs/ /export/
[root@server1 ~] ln -s /export/nfs/ /var/lib/nfs
[root@server1 ~] umount /export

And on the second server:

[root@server2 ~] mkdir -p /export/nfs/rpc_pipefs/nfs
[root@server2 ~] rm -rf /var/lib/nfs/
[root@server2 ~] ln -s /export/nfs/ /var/lib/nfs
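
On both nodes /var/lib/nfs should now be a symlink into the replicated volume (on whichever node doesn't have /export mounted, the link simply dangles until a failover mounts the filesystem there, which is fine):

[root@servers ~] ls -l /var/lib/nfs   # should point to /export/nfs/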

Time to configure heartbeat on both servers (we already pulled it in with the first apt-get call, so the install below is harmless if it's already present):

[root@servers ~] apt-get install heartbeat

[root@servers ~] cat /etc/heartbeat/ha.cf
logfacility local0
keepalive 2
deadtime 10
bcast eth0
node server1 server2

[root@servers ~] cat /etc/heartbeat/haresources
server1 IPaddr::10.13.238.50/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 nfs-kernel-server

[root@servers ~] cat /etc/heartbeat/authkeys
auth 3
3 md5 somepasswordhere

[root@servers ~] chmod 600 /etc/heartbeat/authkeys

10.13.238.50 is the VIP that will be moved between the two servers in the event of a failure. The haresources line tells heartbeat what to bring up on the active node, working from left to right: the VIP, the DRBD resource (promoted to primary via the drbddisk script), the ext3 filesystem mounted on /export, and finally the NFS server; on release the resources are stopped in reverse order. The rest of the options are pretty much self-explanatory.

To start drbd and heartbeat run:

[root@servers ~] /etc/init.d/drbd start
[root@servers ~] /etc/init.d/heartbeat start
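
Once both daemons are up you can check the cluster membership with the cl_status tool that ships with heartbeat (or just follow /var/log/syslog while the nodes negotiate):

[root@servers ~] cl_status hbstatus            # is heartbeat running locally?
[root@servers ~] cl_status listnodes           # should list server1 and server2
[root@servers ~] cl_status nodestatus server2  # status of the peer node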

At this point you should be able to mount the NFS export on your client using the VIP.
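For example, from a client on the same subnet (the client hostname here is made up), an NFSv3 mount of the export would look like this; since the export carries fsid=0 you could also mount the pseudo-root with -t nfs4 and / as the path:

[root@client ~] mkdir -p /mnt/nfs
[root@client ~] mount -t nfs 10.13.238.50:/export /mnt/nfs
[root@client ~] df -h /mnt/nfs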

To simulate a failure, just stop heartbeat on the primary server and watch the VIP move to the second server, where the drbd device gets promoted and mounted and the NFS server is started.
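
A rough way to walk through that test, assuming server1 currently holds the resources:

[root@server1 ~] /etc/init.d/heartbeat stop   # or just power the node off
[root@server2 ~] ip addr show eth0            # the 10.13.238.50 VIP should appear here
[root@server2 ~] cat /proc/drbd               # r0 should now be Primary on this node
[root@server2 ~] df -h /export                # /dev/drbd0 should be mounted on /export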

