
Setting up HA for NFS-Ganesha using CTDB

Suitable for use with GlusterFS 3.12 and later.

install the storhaug package on all participating nodes

Install the storhaug package on all nodes using the appropriate command for your system:

  • dnf install -y storhaug-nfs (Fedora)
  • yum -y install storhaug-nfs (RHEL, CentOS)
  • apt (or apt-get) install storhaug (Debian, Ubuntu)

Note: this will also install the required dependencies, e.g. ctdb, nfs-ganesha-gluster, and glusterfs, along with their own dependencies.
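
To confirm everything was installed, you can query the package manager; the exact package names may vary slightly by distribution (Fedora, RHEL, CentOS):
node1% rpm -q storhaug-nfs ctdb nfs-ganesha-gluster
or (Debian, Ubuntu):
node1% dpkg -l storhaug ctdb nfs-ganesha-gluster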

Create a passwordless ssh key and copy it to all participating nodes

On one of the participating nodes (Fedora, RHEL, CentOS):
node1% ssh-keygen -f /etc/sysconfig/storhaug.d/secret.pem
or (Debian, Ubuntu):
node1% ssh-keygen -f /etc/default/storhaug.d/secret.pem
When prompted for a passphrase, press the Enter key to leave it empty.
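
Alternatively, to skip the prompt entirely, ssh-keygen accepts an empty passphrase on the command line (Fedora/RHEL/CentOS path shown; use /etc/default/storhaug.d/secret.pem on Debian/Ubuntu):
node1% ssh-keygen -f /etc/sysconfig/storhaug.d/secret.pem -N ""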

Copy the public key to all the participating nodes (Fedora, RHEL, CentOS):
node1% ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@node1
node1% ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@node2
node1% ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@node3
...
or (Debian, Ubuntu):
node1% ssh-copy-id -i /etc/default/storhaug.d/secret.pem.pub root@node1
node1% ssh-copy-id -i /etc/default/storhaug.d/secret.pem.pub root@node2
node1% ssh-copy-id -i /etc/default/storhaug.d/secret.pem.pub root@node3
...

You can confirm that it works with (Fedora, RHEL, CentOS):
node1% ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/sysconfig/storhaug.d/secret.pem root@node1
or (Debian, Ubuntu):
node1% ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/default/storhaug.d/secret.pem root@node1
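
To check every participating node in one pass, a small shell loop works too; node1 through node3 below are placeholders for your own hostnames, and the key path should be adjusted for Debian/Ubuntu:
node1% for n in node1 node2 node3; do ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/sysconfig/storhaug.d/secret.pem root@$n hostname; done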

populate /etc/ctdb/nodes and /etc/ctdb/public_addresses

Select one node as your lead node, e.g. node1. On the lead node, create/edit /etc/ctdb/nodes and populate it with the (fixed) IP addresses of the participating nodes. It should look like this:
192.168.122.81
192.168.122.82
192.168.122.83
192.168.122.84

On the lead node, create/edit /etc/ctdb/public_addresses and populate it with the floating IP addresses (a.k.a. VIPs) for the participating nodes. These must be different than the IP addresses in /etc/ctdb/nodes. It should look like this:
192.168.122.85/24 eth0
192.168.122.86/24 eth0
192.168.122.87/24 eth0
192.168.122.88/24 eth0
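
If you prefer to create both files non-interactively, a shell here-document works; the addresses and the eth0 interface below are the same illustrative values as above and must be replaced with your own:
node1% cat > /etc/ctdb/nodes <<EOF
192.168.122.81
192.168.122.82
192.168.122.83
192.168.122.84
EOF
node1% cat > /etc/ctdb/public_addresses <<EOF
192.168.122.85/24 eth0
192.168.122.86/24 eth0
192.168.122.87/24 eth0
192.168.122.88/24 eth0
EOF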

Samba 4.8.x and earlier: edit /etc/ctdb/ctdbd.conf

Ensure that the line CTDB_MANAGES_NFS=yes exists. If not, add it or change it from no to yes.
Uncomment (or add if necessary) the CTDB_NODES=/etc/ctdb/nodes and CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses lines.


Note: Managing Samba and Winbind is beyond the scope of this wiki at this point in time. If you want CTDB to manage those as well, this is where you would check that CTDB_MANAGES_SAMBA=yes and CTDB_MANAGES_WINBIND=yes are set accordingly.


Add or change the following lines:
CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
CTDB_NFS_STATE_FS_TYPE=glusterfs
CTDB_NFS_STATE_MNT=/run/gluster/shared_storage
CTDB_NFS_SKIP_SHARE_CHECK=yes
NFS_HOSTNAME=localhost
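
A quick grep can confirm that none of these settings were missed; this is only a sanity check, not a required step:
node1% grep -E '^(CTDB_|NFS_HOSTNAME)' /etc/ctdb/ctdbd.conf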

Samba 4.9.0 and later: enable event script

On each node:
nodex% ln -s /etc/ctdb/nfs-checks-ganesha.d/20.nfs-ganesha.check /etc/ctdb/events/legacy/20.nfs-ganesha.script
node1% ctdb event script enable legacy 20.nfs-ganesha
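
You should be able to confirm the script is enabled with ctdb's event tooling; the exact subcommand syntax varies between Samba releases, so treat this as an approximate check:
node1% ctdb event script list legacy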

create a bare minimum /etc/ganesha/ganesha.conf file

On the lead node:
node1% touch /etc/ganesha/ganesha.conf
or
node1% echo "### NFS-Ganesha.config" > /etc/ganesha/ganesha.conf

Note: you can edit this later to set global configuration options.
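
As an illustration of the kind of global option you might add later, the snippet below raises the log level; the LOG block follows standard ganesha.conf syntax, but nothing here is required for the HA setup:
LOG {
    # log at INFO instead of the default EVENT level
    Default_Log_Level = INFO;
}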

create a trusted storage pool and start the gluster shared-storage volume

On all the participating nodes:
node1% systemctl start glusterd
node2% systemctl start glusterd
node3% systemctl start glusterd
...

On the lead node, peer probe the other nodes:
node1% gluster peer probe node2
node1% gluster peer probe node3
...

Optional: on one of the other nodes, peer probe node1:
node2% gluster peer probe node1
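
Verify the pool membership before continuing:
node1% gluster peer status
node1% gluster pool list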

Enable the gluster shared-storage volume:
node1% gluster volume set all cluster.enable-shared-storage enable
This takes a few moments. When it completes, check that the gluster_shared_storage volume is mounted at /run/gluster/shared_storage on all the nodes.
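
One way to verify the mount, on each node:
nodex% df -h /run/gluster/shared_storage
or
nodex% mount | grep shared_storage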

start the ctdbd and ganesha.nfsd daemons

On the lead node:
node1% storhaug setup
You can watch the ctdb log (/var/log/ctdb.log) and the ganesha log (/var/log/ganesha/ganesha.log) to monitor their progress. From this point on you may enter storhaug commands from any of the participating nodes.
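
For example, to follow the ctdb log and wait for the cluster to settle (all nodes should eventually report OK):
node1% tail -f /var/log/ctdb.log
node1% ctdb status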

export a gluster volume

Create a gluster volume
node1% gluster volume create myvol replica 2 node1:/bricks/vol/myvol node2:/bricks/vol/myvol node3:/bricks/vol/myvol node4:/bricks/vol/myvol ...

Start the gluster volume you just created
node1% gluster volume start myvol
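
You can confirm the volume and its bricks are up before exporting it:
node1% gluster volume info myvol
node1% gluster volume status myvol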

Export the gluster volume from ganesha
node1% storhaug export myvol
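
To test the export from an NFS client, mount one of the floating addresses from /etc/ctdb/public_addresses (192.168.122.85 is the illustrative VIP used above); the volume is typically exported at /myvol under the pseudo root, but check the ganesha log if the mount fails:
client% showmount -e 192.168.122.85
client% mount -t nfs -o vers=4.1 192.168.122.85:/myvol /mnt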