OVX offers full support for OpenStack networking through its Neutron plugin. We are not yet part of the OpenStack mainline code, but you can download and install the full source code from our GitHub Neutron fork. For the time being, we only support OpenStack Havana (Icehouse support is in the works).

Our Neutron fork also contains a Neutron OpenCloud plugin, which adds features such as NAT and public IP addresses for virtual networks. Run this plugin instead when deploying an OpenCloud site; refer to the OpenCloud documentation for configuration details.

1. Installation

Start by cloning the repo and checking out the 0.0-MAINT branch (only use 0.1-DEV if you really know what you’re doing).

git clone -b 0.0-MAINT https://github.com/OPENNETWORKINGLAB/neutron.git
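
If you are not using DevStack, one way to install the fork over your existing Neutron is the standard setup.py route (a sketch, assuming a system-wide Python install):

cd neutron
sudo python setup.py install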

You will also need to patch Nova so that VMs are plugged into the correct bridge.

diff --git a/nova/network/neutronv2/api.py b/nova/network/neutronv2/api.py
index c423af3..0346b39 100644
--- a/nova/network/neutronv2/api.py
+++ b/nova/network/neutronv2/api.py
@@ -972,7 +972,8 @@ class API(base.Base):
         # TODO(berrange) Neutron should pass the bridge name
         # in another binding metadata field
         if vif_type == network_model.VIF_TYPE_OVS:
-            bridge = CONF.neutron.ovs_bridge
+            # bridge = CONF.neutron.ovs_bridge
+            bridge = port.get('binding:profile', {}).get('bridge', CONF.neutron.ovs_bridge)
             ovs_interfaceid = port['id']
         elif vif_type == network_model.VIF_TYPE_BRIDGE:
             bridge = "brq" + port['network_id']

2. Configuration

Running a full OpenStack deployment is out of scope for this document. Please refer to the official OpenStack documentation for more information.

2.1. Before you start

First, you will need to create and configure the OVS bridges on each physical node. At a minimum, you need a data bridge and a control bridge (default names are br-int and br-ctl). The data bridge should be configured in OpenFlow mode, point to OVX as its controller, and connect to your data plane interface. The control bridge should be in L2 learning switch mode, configured with an IP address, and connected to your control plane interface. The data plane interfaces should be connected to a physical switch running in OpenFlow mode, while the control plane interfaces should be connected to a traditional L2 learning physical switch.
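
For reference, bridges along these lines can be created with ovs-vsctl (a sketch: the interface names eth1 and eth2, the OVX host, and the IP address are placeholders for your setup; the controller port matches the of_host/of_port settings in the plugin configuration below):

# Data bridge: OpenFlow mode, controlled by OVX, attached to the data plane NIC
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int eth1
ovs-vsctl set-controller br-int tcp:<ovx-host>:6633
ovs-vsctl set-fail-mode br-int secure

# Control bridge: standalone (L2 learning) mode with an IP, attached to the control plane NIC
ovs-vsctl add-br br-ctl
ovs-vsctl add-port br-ctl eth2
ovs-vsctl set-fail-mode br-ctl standalone
ip addr add 192.168.1.10/24 dev br-ctl
ip link set br-ctl up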

You will also need to create a VM image that runs an SDN controller on boot; OVX will spawn this default SDN controller for each virtual network you create. We have included a script in neutron/neutron/plugins/ovx/build-floodlight-vm.sh to build an image that uses the Floodlight OpenFlow controller. Run the script to build the controller image, then import it into OpenStack's Glance (note that the name you give the image should correspond to the image_name in your plugin configuration).

glance image-create --name ovx-floodlight --disk-format=qcow2 --container-format=bare --file ubuntu-14.04-server-cloudimg-amd64-disk1.img
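
You can verify that the image was registered correctly with:

glance image-list | grep ovx-floodlight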

2.2. Neutron Server Config

Configure Neutron to use the correct core plugin in /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.ovx.plugin.OVXNeutronPlugin

2.3. Neutron Options

Example /etc/neutron/plugins/ovx/ovx.ini config file:

[ovx]
username = admin                            # OVX admin user
password =                                  # OVX admin password
api_host = localhost                        # OVX RPC API server address
api_port = 8080                             # OVX RPC API server port
of_host = localhost                         # OVX OpenFlow server address
of_port = 6633                              # OVX OpenFlow server port

[ovs]
data_bridge = br-int                        # Data network bridge
ctrl_bridge = br-ctl                        # Control network bridge

[nova]
username = admin                            # Nova username
password =                                  # Nova password
project_id = admin                          # Nova project name (not the tenant UUID)
auth_url = http://localhost:5000/v2.0/      # Nova authentication URL
image_name = ovx-floodlight                 # SDN controller image name
image_port = 6633                           # OpenFlow port of SDN controller image
flavor = m1.small                           # Machine flavor on which to run SDN controller
key_name =                                  # Name of keypair to inject into controller instance
timeout = 30                                # Seconds to wait for the controller instance to start
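
Make sure neutron-server is started with both configuration files, for example:

neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ovx/ovx.ini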

2.4. Nova config

In /etc/nova/nova.conf, set the following in the [DEFAULT] section:

compute_driver = nova.virt.libvirt.LibvirtDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
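
Restart the compute service afterwards so the new VIF driver takes effect (the service name below assumes an Ubuntu packaged install):

sudo service nova-compute restart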

3. Extensions

The OVX Neutron plugin comes with two additional Neutron extensions: one to configure the virtual network topology, and one to configure the virtual network OS.

3.1. Topology

We implement the topology extension so you can spawn virtual networks with a topology of your choosing. Right now, we only support two topologies: (1) a single big switch that maps to all physical switches, and (2) a clone of the physical topology. Custom topologies, where the tenant can specify their own virtual topology irrespective of the physical topology, are in the works.

To this end, the topology extension exposes a new field topology:type that can take on values bigswitch, physical, or custom.

For instance, to create a virtual network named test with a topology that is a clone of the physical topology, do the following:

neutron net-create test --topology:type physical
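
Likewise, to map the entire physical network onto a single virtual big switch:

neutron net-create test --topology:type bigswitch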

3.2. Network OS

The netos extension allows you to configure the network OS that controls your virtual network. It exposes the following fields:

netos:flavor      the machine flavor on which to run the network OS
netos:image       the image to boot as network OS
netos:port        the OpenFlow port of the network OS
netos:key         the key to inject in the network OS to log in
netos:url         the URL where the user-managed network OS is running (specify the URL in the form proto:hostname:port)

If netos:url is specified, all other netos parameters are ignored.
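
For example, to attach a virtual network to a controller you manage yourself (the address is a placeholder, and the flag pattern mirrors the topology example above):

neutron net-create test --netos:url tcp:192.168.1.100:6633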

4. DevStack

DevStack is the easiest way to experiment with the system or to develop new features. We strongly recommend using a hypervisor that supports nested virtualization, such as VMware Fusion or Workstation (VirtualBox does not). This is needed because the OVX Neutron plugin in DevStack will spawn VMs inside your DevStack VM.

Start by cloning our GitHub repo. If you intend to do any OVX-related development on DevStack, please fork our repo and clone from your fork instead.

git clone -b ovx https://github.com/leenheer/devstack

Download local.conf, customize it for your setup, and save it in the devstack project directory.
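
As a rough sketch, the localrc section might look like the following; the credential variables are standard DevStack settings, while Q_PLUGIN=ovx is an assumption about how our fork selects the plugin, so check the sample local.conf in the repo:

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=token
# Assumption: plugin selection variable; verify against the sample local.conf
Q_PLUGIN=ovx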

Make sure OpenVirteX is running before you start ./stack.sh. You should also apply the Nova patch, as detailed in the Installation section above.
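
A quick way to check that OVX is up is to verify that it is listening on its OpenFlow and API ports (6633 and 8080 in the default configuration above):

netstat -tln | grep -E ':(6633|8080)'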

From here on, refer to the DevStack website for further documentation.

5. Development

If you intend to do development on the OVX Neutron plugin, please use GitHub pull requests.