GlusterFS configuration example
Now let's try a slightly more practical challenge. I will show a sample plugin that configures GlusterFS storage nodes. The general configuration steps for a GlusterFS storage cluster are:
1) Install RHEL on all storage nodes.
2) Install GlusterFS on all storage nodes.
3) Create brick directories used for GlusterFS volumes on all storage nodes.
4) Create cluster peers and define volumes from a management node.
This plugin takes care of steps (2) and (3). It accepts the following options:
gluster-hosts: 192.168.122.192,192.168.122.193  # IP addresses of storage nodes.
bricks-pv-devs: /dev/vdb,/dev/vdc               # PV devices for bricks.
bricks-vols: vol01:2,vol02:2                    # List of LV:size(GB) pairs.
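For reference, the accepted formats for the two brick options can be checked with the same regular expressions the plugin uses for validation (a quick sketch; the patterns are copied verbatim from the plugin's OPTION_LIST entries shown later):

```python
import re

# Patterns from the plugin's OPTION_LIST entries.
# An empty string is also accepted so brick configuration can be skipped.
PV_DEVS_RE = re.compile(r"(^[^,\s]+(,[^,\s]+)*$|^$)")
BRICKS_RE  = re.compile(r"(^[^,\s]+:\d+(,[^,\s]+:\d+)*$|^$)")

print(bool(PV_DEVS_RE.match("/dev/vdb,/dev/vdc")))  # True: valid device list
print(bool(BRICKS_RE.match("vol01:2,vol02:2")))     # True: valid vol:size list
print(bool(BRICKS_RE.match("vol01:2,")))            # False: trailing comma rejected
```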
It assumes that all storage nodes have the same storage device configuration, and it does the following:
1) Install GlusterFS and configure iptables on specified hosts.
2) Create logical volumes and filesystems, and mount them under /bricks/.
Now let's start the preparation.
PuppetLabs' LVM module
You need PuppetLabs' LVM module for LVM configuration. Install it with the following steps.
# wget https://forge.puppetlabs.com/puppetlabs/lvm/0.1.2.tar.gz
# tar -xvzf 0.1.2.tar.gz
# chown -R root.root puppetlabs-lvm-0.1.2
# cp -a puppetlabs-lvm-0.1.2 /usr/lib/python2.*/site-packages/packstack/puppet/modules/lvm
As explained before, the names of Puppet modules are hard-coded in the "puppet_950.py" plugin. Let's add this new module.
plugins/puppet_950.py
def copyPuppetModules():
    os_modules = ' '.join(('apache', 'cinder', 'concat', 'create_resources',
                           'firewall', 'glance', 'horizon', 'inifile',
                           'keystone', 'memcached', 'mysql', 'nova',
                           'openstack', 'packstack', 'qpid', 'rsync', 'ssh',
                           'stdlib', 'swift', 'sysctl', 'vlan', 'xinetd',
                           'lvm'))  ## Add 'lvm' here.
Manifest templates
The following two manifests are used for brick and glusterfs configuration.
puppet/templates/bricks.pp
$pv_devs = [ %(CONFIG_PV_DEVS_QUOTED)s ]    # sample: [ "/dev/vdb","/dev/vdc" ]
$bricks  = [ %(CONFIG_BRICK_LIST_QUOTED)s ] # sample: [ "vol01:2","vol02:3" ] / "Volume name:Size(GB)"
$vgname  = "vg_bricks"

define pvCreate {
  physical_volume { "$name":
    ensure => present,
    before => Volume_group[ $vgname ],
  }
}

define lvCreate {
  $val = split( $name, ":" )
  logical_volume { "lv_${val[0]}":
    ensure       => present,
    volume_group => $vgname,
    size         => "${val[1]}G",
    require      => Volume_group[ $vgname ],
    before       => Filesystem["/dev/${vgname}/lv_${val[0]}"],
  }
  filesystem { "/dev/${vgname}/lv_${val[0]}":
    ensure  => present,
    fs_type => "ext4",
    options => "-I 512",
    before  => Mount["/bricks/${val[0]}"],
  }
  file { "/bricks/${val[0]}":
    ensure  => "directory",
    before  => Mount["/bricks/${val[0]}"],
    require => File["/bricks"],
  }
  mount { "/bricks/${val[0]}":
    device  => "/dev/${vgname}/lv_${val[0]}",
    fstype  => "ext4",
    ensure  => "mounted",
    options => "defaults",
    atboot  => "true",
  }
}

###

file { "/bricks":
  ensure => "directory",
}

pvCreate { $pv_devs: }

volume_group { "$vgname":
  ensure           => present,
  physical_volumes => $pv_devs,
}

lvCreate { $bricks: }
puppet/templates/glusterfs.pp
$iptables_content = "# Configuration for GlusterFS
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24050 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38468 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
"

yumrepo { 'glusterfs_repo':
  name     => 'glusterfs',
  descr    => 'Repository for GlusterFS 3.3',
  baseurl  => 'http://download.gluster.org/pub/gluster/glusterfs/3.3/LATEST/EPEL.repo/epel-6Server/x86_64/',
  enabled  => '1',
  gpgcheck => '0',
}

package { 'glusterfs_server':
  name    => [ 'glusterfs-server', 'glusterfs-geo-replication', 'glusterfs-fuse' ],
  ensure  => installed,
  require => Yumrepo['glusterfs_repo'],
}

service { 'glusterd':
  name      => 'glusterd',
  ensure    => 'running',
  enable    => true,
  hasstatus => true,
  subscribe => Package['glusterfs_server'],
}

package { 'nfstools':
  name   => [ 'rpcbind', 'nfs-utils' ],
  ensure => installed,
}

service { 'rpcbind':
  name      => 'rpcbind',
  ensure    => 'running',
  enable    => true,
  hasstatus => true,
  subscribe => Package['nfstools'],
}

file { '/etc/sysconfig/iptables':
  owner   => 'root',
  group   => 'root',
  mode    => '0600',
  content => $iptables_content,
  notify  => Service['iptables'],
}

service { 'iptables':
  name   => 'iptables',
  ensure => 'running',
  enable => true,
  start  => '/etc/init.d/iptables restart',
}
GlusterFS plugin
Finally, here is the plugin for GlusterFS configuration.
plugins/glusterfs_040.py
"""
plugin for configuring glusterfs and bricks
"""

import os
import logging

from packstack.installer import validators
import packstack.installer.common_utils as utils
from packstack.installer.exceptions import ScriptRuntimeError

from packstack.modules.ospluginutils import getManifestTemplate, \
                                            appendManifestFile, \
                                            manifestfiles

# Controller object will be initialized from main flow
controller = None

PLUGIN_NAME = "APP-GLUSTER"

logging.debug("plugin %s loaded", __name__)

def initConfig(controllerObject):
    global controller
    controller = controllerObject

    paramsList = [
        {"CMD_OPTION"      : "gluster-hosts",
         "USAGE"           : "A comma separated list of IP addresses to configure glusterfs",
         "PROMPT"          : "Enter a comma separated list of IP addresses to configure glusterfs",
         "OPTION_LIST"     : [],
         "VALIDATORS"      : [validators.validate_multi_ssh],
         "DEFAULT_VALUE"   : utils.getLocalhostIP(),
         "MASK_INPUT"      : False,
         "LOOSE_VALIDATION": True,
         "CONF_NAME"       : "CONFIG_GLUSTER_HOSTS",
         "USE_DEFAULT"     : False,
         "NEED_CONFIRM"    : False,
         "CONDITION"       : False},

        {"CMD_OPTION"      : "bricks-pv-devs",
         "USAGE"           : "A comma separated list of devices used for PVs",
         "PROMPT"          : "Enter a comma separated list of devices used for PVs. Leave blank if you don't configure bricks",
         "OPTION_LIST"     : ["(^[^,\s]+(,[^,\s]+)*$|^$)"],
         "VALIDATORS"      : [validators.validate_regexp],
         "DEFAULT_VALUE"   : "",
         "MASK_INPUT"      : False,
         "LOOSE_VALIDATION": False,
         "CONF_NAME"       : "CONFIG_PV_DEVS",
         "USE_DEFAULT"     : False,
         "NEED_CONFIRM"    : False,
         "CONDITION"       : False},

        {"CMD_OPTION"      : "bricks-vols",
         "USAGE"           : "A comma separated list of bricks with vol_name:size(GB)",
         "PROMPT"          : "Enter a comma separated list of bricks in vol_name:size(GB) "
                             "eg) \"vol01:5,vol02:5\". Leave blank if you don't configure bricks",
         "OPTION_LIST"     : ["(^[^,\s]+:\d+(,[^,\s]+:\d+)*$|^$)"],
         "VALIDATORS"      : [validators.validate_regexp],
         "DEFAULT_VALUE"   : "",
         "MASK_INPUT"      : False,
         "LOOSE_VALIDATION": False,
         "CONF_NAME"       : "CONFIG_BRICK_LIST",
         "USE_DEFAULT"     : False,
         "NEED_CONFIRM"    : False,
         "CONDITION"       : False},
    ]

    groupDict = {"GROUP_NAME"           : "GLUSTER",
                 "DESCRIPTION"          : "glusterfs options",
                 "PRE_CONDITION"        : "CONFIG_GLUSTER",
                 "PRE_CONDITION_MATCH"  : "y",
                 "POST_CONDITION"       : False,
                 "POST_CONDITION_MATCH" : True}

    controller.addGroup(groupDict, paramsList)

def initSequences(controller):
    if controller.CONF['CONFIG_GLUSTER'] != 'y':
        return

    configsteps = [
        {'title': 'Configuring glusterfs', 'functions': [createglustermanifest]},
    ]
    controller.addSequence("Installing Gluster Component", [], [], configsteps)

    if (controller.CONF['CONFIG_PV_DEVS']
            and controller.CONF['CONFIG_BRICK_LIST']):
        configsteps = [
            {'title': 'Configuring bricks', 'functions': [createbricksmanifest]},
        ]
        controller.addSequence("Installing Bricks Component", [], [], configsteps)

def createglustermanifest():
    for host in controller.CONF['CONFIG_GLUSTER_HOSTS'].split(','):
        manifestfile = "%s_glusterfs.pp" % host
        manifestdata = getManifestTemplate("glusterfs.pp")
        appendManifestFile(manifestfile, manifestdata, 'glusterfs')

def createbricksmanifest():
    for host in controller.CONF['CONFIG_GLUSTER_HOSTS'].split(','):
        manifestfile = "%s_bricks.pp" % host
        controller.CONF['CONFIG_PV_DEVS_QUOTED'] = ','.join(
            map(_quoteword, controller.CONF['CONFIG_PV_DEVS'].split(',')))
        controller.CONF['CONFIG_BRICK_LIST_QUOTED'] = ','.join(
            map(_quoteword, controller.CONF['CONFIG_BRICK_LIST'].split(',')))
        manifestdata = getManifestTemplate("bricks.pp")
        appendManifestFile(manifestfile, manifestdata, 'glusterfs')

def _quoteword(word=""):
    # Strip whitespace, then wrap the word in double quotes (idempotent
    # if it is already quoted). Note: the result of strip() must be
    # assigned back, otherwise the whitespace is never removed.
    word = word.strip()
    if not word.startswith('"'):
        word = '"' + word
    if not word.endswith('"'):
        word = word + '"'
    return word
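The quoting helper exists to turn the raw comma-separated option values into the quoted array literals that bricks.pp expects. A quick standalone sketch (reimplemented here for illustration) of what the transformation produces:

```python
def quoteword(word=""):
    # Strip whitespace, then wrap in double quotes (no-op if already quoted).
    word = word.strip()
    if not word.startswith('"'):
        word = '"' + word
    if not word.endswith('"'):
        word = word + '"'
    return word

pv_devs = "/dev/vdb,/dev/vdc"
quoted = ','.join(map(quoteword, pv_devs.split(',')))
print(quoted)  # "/dev/vdb","/dev/vdc"  -- substituted into: $pv_devs = [ %(CONFIG_PV_DEVS_QUOTED)s ]
```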
You also need to add the global option to the "prescript_000.py" plugin.
plugins/prescript_000.py
         "NEED_CONFIRM"    : False,
         "CONDITION"       : False},
        ## Add from here
        {"CMD_OPTION"      : "config-gluster",
         "USAGE"           : "Set to 'y' if you would like Packstack to configure glusterfs",
         "PROMPT"          : "Should Packstack configure glusterfs",
         "OPTION_LIST"     : ["y", "n"],
         "VALIDATORS"      : [validators.validate_options],
         "DEFAULT_VALUE"   : "y",
         "MASK_INPUT"      : False,
         "LOOSE_VALIDATION": False,
         "CONF_NAME"       : "CONFIG_GLUSTER",
         "USE_DEFAULT"     : False,
         "NEED_CONFIRM"    : False,
         "CONDITION"       : False},
        ## Add until here.
    ]

    groupDict = {"GROUP_NAME"  : "GLOBAL",
                 "DESCRIPTION" : "Global Options",
Test run
Ok, let's try the new plugin. I prepared two VMs, each with two additional virtual disks (/dev/vdb and /dev/vdc), as the target storage nodes.
# packstack
Welcome to Installer setup utility
Enter the path to your ssh Public key to install on servers [/root/.ssh/id_rsa.pub] :
Enter a comma separated list of NTP server(s). Leave plain if Packstack should not install ntpd on instances.: 192.168.122.1
Should Packstack configure motd [y|n] [y] : n
Should Packstack configure glusterfs [y|n] [y] : y
Enter a comma separated list of IP addresses to configure glusterfs [192.168.122.191] : 192.168.122.192,192.168.122.193
Enter a comma separated list of devices used for PVs. Leave blank if you don't configure bricks : /dev/vdb,/dev/vdc
Enter a comma separated list of bricks in vol_name:size(GB) eg) "vol01:5,vol02:5". Leave blank if you don't configure bricks : vol01:2,vol02:2
To subscribe each server to EPEL enter "y" [y|n] [y] :
Enter a comma separated list of URLs to any additional yum repositories to install:
To subscribe each server to Red Hat enter a username here:
To subscribe each server to Red Hat enter your password here :
To subscribe each server to Red Hat Enterprise Linux 6 Server Beta channel (only needed for Preview versions of RHOS) enter "y" [y|n] [n] :
To subscribe each server with RHN Satellite enter RHN Satellite server URL:

Installer will be installed using the following configuration:
==============================================================
ssh-public-key:                /root/.ssh/id_rsa.pub
ntp-severs:                    192.168.122.1
config-motd:                   n
config-gluster:                y
gluster-hosts:                 192.168.122.192,192.168.122.193
bricks-pv-devs:                /dev/vdb,/dev/vdc
bricks-vols:                   vol01:2,vol02:2
use-epel:                      y
additional-repo:
rh-username:
rh-password:
rh-beta-repo:                  n
rhn-satellite-server:
Proceed with the configuration listed above? (yes|no): yes

Installing:
Clean Up...                                              [ DONE ]
Setting up ssh keys...                                   [ DONE ]
Adding pre install manifest entries...                   [ DONE ]
Installing time synchronization via NTP...               [ DONE ]
Configuring glusterfs...                                 [ DONE ]
Configuring bricks...                                    [ DONE ]
Preparing servers...                                     [ DONE ]
Adding post install manifest entries...                  [ DONE ]
Installing Dependencies...                               [ DONE ]
Copying Puppet modules and manifests...                  [ DONE ]
Applying Puppet manifests...
Applying 192.168.122.192_prescript.pp
Applying 192.168.122.193_prescript.pp
192.168.122.193_prescript.pp :                           [ DONE ]
192.168.122.192_prescript.pp :                           [ DONE ]
Applying 192.168.122.192_ntpd.pp
Applying 192.168.122.193_ntpd.pp
192.168.122.193_ntpd.pp :                                [ DONE ]
192.168.122.192_ntpd.pp :                                [ DONE ]
Applying 192.168.122.192_glusterfs.pp
Applying 192.168.122.193_glusterfs.pp
Applying 192.168.122.192_bricks.pp
Applying 192.168.122.193_bricks.pp
192.168.122.192_glusterfs.pp :                           [ DONE ]
192.168.122.193_glusterfs.pp :                           [ DONE ]
192.168.122.192_bricks.pp :                              [ DONE ]
192.168.122.193_bricks.pp :                              [ DONE ]
Applying 192.168.122.192_postscript.pp
Applying 192.168.122.193_postscript.pp
192.168.122.193_postscript.pp :                          [ DONE ]
192.168.122.192_postscript.pp :                          [ DONE ]
                                                         [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * A new answerfile was created in: /root/packstack-answers-20130522-115125.txt
 * The installation log file is available at: /var/tmp/packstack/20130522-115055-29dGci/openstack-setup.log
Well done. Let's see the result...
# ssh root@192.168.122.192 df
Filesystem                     1K-blocks    Used Available Use% Mounted on
/dev/vda3                       14976856 1105068  13111008   8% /
tmpfs                             510284       0    510284   0% /dev/shm
/dev/vda1                         495844   33423    436821   8% /boot
/dev/mapper/vg_bricks-lv_vol02   2031440   68608   1857976   4% /bricks/vol02
/dev/mapper/vg_bricks-lv_vol01   2031440   68608   1857976   4% /bricks/vol01

# ssh root@192.168.122.192 pvs
  PV         VG        Fmt  Attr PSize PFree
  /dev/vdb   vg_bricks lvm2 a--  8.00g 4.00g
  /dev/vdc   vg_bricks lvm2 a--  8.00g 8.00g

# ssh root@192.168.122.192 vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_bricks   2   2   0 wz--n- 15.99g 11.99g

# ssh root@192.168.122.192 lvs
  LV       VG        Attr      LSize Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_vol01 vg_bricks -wi-ao--- 2.00g
  lv_vol02 vg_bricks -wi-ao--- 2.00g

# ssh root@192.168.122.192 "iptables -nL"
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0    0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    state NEW tcp dpt:22
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    state NEW tcp dpt:111
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    state NEW tcp dpts:24007:24050
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    state NEW tcp dpts:38465:38468
REJECT     all  --  0.0.0.0/0    0.0.0.0/0    reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source       destination
REJECT     all  --  0.0.0.0/0    0.0.0.0/0    reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination

# ssh root@192.168.122.192 "service glusterd status"
glusterd (pid 9783) is running...
Yeah, it worked :)
Now you can log in to one of the storage nodes and create the cluster and volumes. It's not that hard to automate that step, too. But the real challenge is designing options flexible enough to cope with the variety of possible volume configurations....
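As a starting point, here is a hypothetical sketch of how step (4) could be driven from the same option values. The helper function is illustrative and not part of Packstack; it simply builds standard GlusterFS CLI commands ("peer probe", "volume create", "volume start"), assuming every brick lives at /bricks/<vol_name> as configured by this plugin:

```python
def gluster_setup_commands(gluster_hosts, bricks_vols):
    """Build the gluster CLI commands for step (4) from the plugin options.

    gluster_hosts: comma separated IPs (the gluster-hosts option)
    bricks_vols:   comma separated vol_name:size pairs (the bricks-vols option)
    """
    hosts = gluster_hosts.split(',')
    cmds = []
    # Probe every other node from the first node to form the trusted pool.
    for host in hosts[1:]:
        cmds.append("gluster peer probe %s" % host)
    # Create one replicated volume per brick, using /bricks/<vol> on each node.
    for brick in bricks_vols.split(','):
        vol = brick.split(':')[0]
        brick_list = ' '.join("%s:/bricks/%s" % (h, vol) for h in hosts)
        cmds.append("gluster volume create %s replica %d %s"
                    % (vol, len(hosts), brick_list))
        cmds.append("gluster volume start %s" % vol)
    return cmds

for cmd in gluster_setup_commands("192.168.122.192,192.168.122.193",
                                  "vol01:2,vol02:2"):
    print(cmd)
```

This only generates command strings; actually running them (e.g. over ssh from the first node) and deciding between replicated, distributed, or striped layouts is exactly the option-design problem mentioned above.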