Install Open vSwitch on Scientific Linux 6.7
In preparation for testing OVS (Open vSwitch) for ATLAS use, I am documenting what needs to be done to build the RPMS and install the most current release (available at http://openvswitch.org/releases/).
I have prepared RPMs (both src and x86_64), available below, and I am documenting the needed steps here. If you just want to install and run it, skip down to Download RPMS.
Preparing the RPM Build
First we need to have the right tools on the build host. There are lots of articles on Google about creating RPMs. To get started, make sure you have the rpm-build package installed:
yum -y install rpm-build
You should also have the ~/rpmbuild tree setup in your home area. If not just do:
mkdir -p ~/rpmbuild/SPECS; mkdir ~/rpmbuild/SOURCES
Get Python 2.7 Installed
The most recent OVS (v2.5.0) also requires Python2.7 which is problematic on RHEL/CentOS/SL6.x where the default is python v2.6.x. NOTE: On RHEL/CentOS/SL7.x this is not a problem since python 2.7 is the default there.
Fortunately we can setup Python 2.7 on SL6.7 without breaking the system using Software Collection Libraries. As 'root' you will need to install the right YUM repo and tools:
yum -y install http://ftp.scientificlinux.org/linux/scientific/6x/external_products/softwarecollections/yum-conf-softwarecollections-1.0-1.el6.noarch.rpm
Now we can install Python 2.7
yum -y install python27
This puts python2.7 in /opt/rh/python27. It won't be used unless you "source" the enable script.
You can see the version has changed to 2.7.x after setting up the environment via the enable script.
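The enable script just adjusts PATH and related variables for the current shell. A quick check, assuming the default SCL install location, looks like:

```shell
# Enable the SCL python27 collection for this shell session only
source /opt/rh/python27/enable

# Confirm the interpreter now comes from the SCL tree
python --version    # should report Python 2.7.x
which python        # should point under /opt/rh/python27/
```

Note the change only affects the current shell; new logins get the system python 2.6 again unless they also source the enable script.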
Building the RPMS
Next cd to the ~/rpmbuild/SOURCES directory and get the tarball:
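For example (the filename is assumed from the 2.5.0 release used in the rest of this page):

```shell
cd ~/rpmbuild/SOURCES
wget http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz
```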
We can then unpack the tar-ball so we can get the needed .spec files for use with rpmbuild.
tar -zxvf openvswitch-2.5.0.tar.gz
At this point I had to "fix" some issues. I updated the README.RHEL file in my version in the src RPMS below. The changes were to allow the use of the SCL (Software Collection Library) feature in RHEL/CentOS/SL 6.x, which supports installing non-standard package versions that would otherwise be a mismatch with the OS version.

As noted above we require python 2.7, and to get this on SL6.x we install python27. I needed to change the openvswitch.spec file to Requires: python27 instead of python >= 2.7. I also altered the init script to source the python 2.7 enable script if it exists. At this point we can do:
cp openvswitch-2.5.0/rhel/*.spec ~/rpmbuild/SPECS
Now we can try to build the RPMS
rpmbuild -ba openvswitch.spec
If this completes OK we can build the kmod RPM. If you hit an error about trying to delete a non-existent file, edit the openvswitch.spec file, change the 'rm \' to 'rm -f \', and re-try.
rpmbuild -ba openvswitch-kmod-rhel6.spec
Note this builds an openvswitch kernel module RPM that is specific to the kernel used on the build host. If this is not the same kernel as your target system's, it may be better to use the dkms version. Let's build that as well:
rpmbuild -ba openvswitch-dkms.spec
We should now have both src and x86_64 RPMS in ~/rpmbuild/SRPMS/ and ~/rpmbuild/RPMS/x86_64/
Download RPMS
If you want to skip the above steps you can just grab the RPMS from here
FYI, you can recreate new x86_64 versions via rpmbuild --rebuild openvswitch-2.5.0-1.src.rpm
Now that we have RPMS we can install them for use on a system. Because of the dependency on Python2.7 noted above, we will also need to install it on each SL6.7 host. Here are the commands:
- Install SCL: yum -y install http://ftp.scientificlinux.org/linux/scientific/6x/external_products/softwarecollections/yum-conf-softwarecollections-1.0-1.el6.noarch.rpm
- Install python27:
yum -y install python27
- Install openvswitch:
yum -y install openvswitch-2.5.0-2.x86_64.rpm openvswitch-dkms-2.5.0-1.el6.x86_64.rpm
- Verify it is installed and is chkconfig'ed on: chkconfig --list openvswitch
- Start it:
service openvswitch start
As long as it starts OK we have finished the install successfully. If you have problems, check for information from dmesg
and by tailing /var/log/messages
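A quick sanity check that the install is healthy (these are standard OVS and SL6 commands, run as 'root'):

```shell
service openvswitch status      # daemons should be reported as running
ovs-vsctl show                  # prints the database UUID and the (initially empty) bridge list
lsmod | grep openvswitch        # confirms the kernel module is loaded
```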
Configuring OVS for Testing and Use
Now that our target system has python27 and Open vSwitch
installed, we need to enable it for use. The plan is to create a new bridge, br0, associated with the primary public interface on this system. We will then reconfigure to move the host's IP address to br0.
In production systems this can be tricky. The steps above didn't make any changes to the network on the host. Putting OVS "in-line" with the production network, however, can cause service interruptions and needs to be done with care.
This section will describe three example cases of how to implement br0 on variously configured systems.
Setting up to Recover
First let's describe how to ensure we can quickly test changes to an SL6.x host network configuration and recover if there are problems.
Let's create two new directories, old_config and new_config, under /etc/sysconfig/network-scripts. Next copy the following into old_config (as 'root'):
cp ifcfg-* old_config/
cp route-* old_config/
Now we have a saved copy of the current network configuration. Let's create a script, clean-nc.sh, to clean up the configuration files first:
#!/bin/bash
# Cleans up network configuration files
rm -f ifcfg-* route-*
Set this to be executable: chmod a+x clean-nc.sh
Now let's create a script to restore the old (original) configuration:
#!/bin/bash
# Restore old network configuration
cp old_config/* ./
Make this executable: chmod a+x old_config.sh
If you execute old_config.sh it will restore your current (old) configuration. At this point we can edit, add, and remove ifcfg-* and route-* files to define a new network configuration. When you think it is right, copy it to new_config:
cp ifcfg-* new_config/
cp route-* new_config/
Now we can create a script to put the new configuration in place:
#!/bin/bash
# Put new network configuration in place
cp new_config/* ./
This puts the new network configuration in place when it is run. We can now do things like: service network stop; ./new_config.sh; service network start. However, you need to be VERY CAREFUL doing something like this: you can drop your network connection if there is a problem with the new network configuration.
The only safe way to run this is on the console of the host (or over a KVM to the console or over a Serial-Over-Lan (SOL) connection).
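If console access is awkward, one extra safety net (my suggestion, not part of the original procedure) is to schedule an automatic rollback with at(1) before applying the new configuration. If the change works you cancel the job; if you lose the connection, the host restores itself:

```shell
# Schedule an automatic rollback in 5 minutes
echo "cd /etc/sysconfig/network-scripts && ./clean-nc.sh && ./old_config.sh && service network restart" | at now + 5 minutes

# Apply the new configuration
service network stop; ./new_config.sh; service network start

# If you can still reach the host, cancel the pending rollback
atq            # list pending jobs to find the job number
atrm <jobnum>  # remove the rollback job
```

This requires the atd service to be running; <jobnum> is whatever number atq reports.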
Configuring OVS for a Single NIC Host
This is the easiest option.
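A sketch of what the two ifcfg files might look like, following the pattern documented in the OVS README.RHEL. The interface name eth0 and the addressing here are placeholders; substitute your host's values:

```
# /etc/sysconfig/network-scripts/ifcfg-br0
# The bridge takes over the host's IP address
DEVICE=br0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
HOTPLUG=no

# /etc/sysconfig/network-scripts/ifcfg-eth0
# The physical NIC becomes a port on br0 with no IP of its own
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br0
ONBOOT=yes
BOOTPROTO=none
HOTPLUG=no
```

Stage these in new_config, then apply them with new_config.sh as described above.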
Configuring OVS for a Host with a Bonded Connection
In this case we need to translate the existing bonding configuration defined in /etc/modprobe.d/bonding.conf (or similar) into an OVS bond. My first attempt at doing this failed: all the interfaces came up in OVS, but no traffic would flow.
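For reference, the OVS-native way to build the bond is ovs-vsctl add-bond. The interface names and the LACP/bonding mode below are assumptions; they must match what the upstream switch is configured for, which is likely where my failed first attempt went wrong:

```shell
# Create the bridge and a two-NIC bond (eth0/eth1 assumed) in one transaction
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 lacp=active
ovs-vsctl set port bond0 bond_mode=balance-tcp

# Inspect negotiation and per-member status
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0
```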
Configuring OVS for a Host Using VLANs
You can easily add "tagged" interfaces in OVS that can replace linux tagged vlan interfaces.
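For example (VLAN tag 10 and the addressing are assumptions), an OVS internal port with a tag plays the role of a Linux eth0.10-style vlan interface:

```shell
# Add a tagged internal port on br0 for VLAN 10
ovs-vsctl add-port br0 vlan10 tag=10 -- set interface vlan10 type=internal

# The host can then address the 'vlan10' interface like any other
ip addr add 192.168.10.5/24 dev vlan10
ip link set vlan10 up
```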
- 05 Mar 2016