Welcome to the OpenVirteX tutorial!
For the Libera tutorial, refer to Libera/README.md at master in the os-libera/Libera repository on GitHub.

In this tutorial, you’ll complete a set of exercises designed to explain the main concepts of OVX, our network virtualization platform. Soon, you’ll be able to create, configure and start virtual networks, each of which behaves exactly like a physical network and is fully isolated from the others.

To get you started quickly, this tutorial is distributed as a preconfigured virtual machine with the needed software. Just run the VM in VirtualBox using the instructions in the next section.

1. Introduction

1.1. Pre-requisites

You will need a computer with at least 2GB of RAM and at least 5GB of free hard disk space. A faster processor or solid-state drive will speed up the virtual machine boot time, and a larger screen will help to manage multiple terminal windows. The computer will also need access to the public Internet.

The computer can run Windows, Mac OS X, or Linux – all work fine with VirtualBox, the only software requirement.

To install VirtualBox, you will need administrative access to the machine.

The tutorial instructions require prior knowledge of SDN in general, and of OpenFlow and Mininet in particular, so please first complete the OpenFlow tutorial and the Mininet walkthrough. Although not a requirement, completing the FlowVisor tutorial before starting this one is highly recommended; since OpenVirteX can be considered the next-generation FlowVisor, many of the concepts are applicable to both systems.

1.2. Stuck? Found a bug? Questions?

Email us if you’re stuck, think you’ve found a bug, or just want to send some feedback. Please have a look at the guidelines to learn how to efficiently submit a bug report.


2. Setup Your Environment

2.1. Install required software

You will need to acquire two files: a VirtualBox installer and the Tutorial VM (last updated Feb 18, 2021).

After you have downloaded VirtualBox, install it, then go to the next section to verify that the VM is working on your system.

2.2. Create Virtual Machine

Start up VirtualBox, then select Machine>New, give it a name, and select Linux as type and Ubuntu (64 bit) as version. Press Continue.

Next, configure the VM with 2 GB (2048 MB) of memory. Press Continue.

Select ‘Use an existing virtual hard drive file’, and point it to the vmdk file you downloaded. Select Create.

Now you can start the VM by double-clicking it; once it starts, you can log in with user ovx and password ovx.

After some time you should see the Ubuntu desktop. You can open a terminal by double-clicking Terminal.

Make sure to read the command prompt notes below; they’re important for knowing where to run each command.

2.3. Important Command Prompt Notes

In this tutorial, commands are shown along with a command prompt to indicate the subsystem for which they are intended.

For example,

$ ls

indicates that the ls command should be typed at a Terminal command prompt within the VM, which generally ends in $ if you are a regular user or # if you are root.

Other prompts used in this tutorial include

mininet>

for commands entered in the Mininet console.

Finally, text boxes that start with a date and time are typically OVX log messages, for instance

17:34:53.275 [pool-7-thread-14] INFO PhysicalSwitch - Switch connected with dpid 9, name 00:00:00:00:00:00:00:09 and type Open vSwitch

2.4. Start Mininet

We’ll be using the same physical topology for all exercises, so now is a good time to start Mininet. The network is loosely based on the Internet2 NDDI topology; it has 11 core switches in major cities, and each core switch has 4 hosts attached. The DPIDs are listed in the table below. Each host MAC address consists of the last 6 bytes of its core switch’s DPID, with the last byte set to the host number; for instance, the 3rd host on SFO has MAC 00:00:00:00:02:03. The hosts on each core switch have IP addresses 10.0.0.1 through 10.0.0.4, and connect to their core switch on ports 1 through 4. The other port numbers are shown in the topology image.

[Topology diagram: the Internet2 NDDI-based physical network, including the inter-switch port numbers]

City DPID
SEA 00:00:00:00:00:00:01:00
SFO 00:00:00:00:00:00:02:00
LAX 00:00:00:00:00:00:03:00
ATL 00:00:00:00:00:00:04:00
IAD 00:00:00:00:00:00:05:00
EWR 00:00:00:00:00:00:06:00
SLC 00:00:00:00:00:00:07:00
MCI 00:00:00:00:00:00:08:00
ORD 00:00:00:00:00:00:09:00
CLE 00:00:00:00:00:00:0A:00
IAH 00:00:00:00:00:00:0B:00
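
Once Mininet is running (see the next section), you can sanity-check this addressing scheme yourself; given the scheme above, h_SFO_3 should report HWaddr 00:00:00:00:02:03 in the output of:

mininet> h_SFO_3 ifconfig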

Start a terminal and run the following command from the home directory. This will start Mininet with the topology described above, and will point all switches to connect to OVX. You should see Mininet complain that it can’t connect to its remote controller; we’ll fix that as soon as we start OVX next. Keep this terminal window open, as you will need it throughout the tutorial. Note that the internet2.py script assumes OVX is listening on localhost; you should point it to the correct address if you are not using the tutorial VM.

$ sudo python internet2.py

2.5. Start OVX

Start another terminal, go into the OVX scripts directory, and start OVX. This will take some time the first time you do this, as OVX needs to be compiled.

$ cd OpenVirteX/scripts
$ sh ovx.sh

Now that OVX has started, you can observe its log messages. After some general startup statements, you should see OVX connect to all switches in the network, for instance:

17:34:53.275 [pool-7-thread-14] INFO PhysicalSwitch - Switch connected with dpid 9, name 00:00:00:00:00:00:00:09 and type Open vSwitch

You can also observe the link detection process. For example, below it has detected that dpid 00:00:00:00:00:00:06:00 port 6 is connected to dpid 00:00:00:00:00:00:05:00 port 7.

17:34:53.290 [pool-7-thread-12] INFO PhysicalNetwork - Adding physical link between 00:00:00:00:00:00:06:00/6 and 00:00:00:00:00:00:05:00/7

You can verify that this is correct on the Mininet console. The Mininet script internet2.py shows us that this is a link between EWR and IAD. Using the net command in Mininet shows us that there is indeed a link between port 6 of EWR and port 7 of IAD.

mininet> net
EWR lo: [OMITTED] EWR-eth6:IAD-eth7

3. Learning to Fly

In this first exercise, we will create a very simple but fully virtual network. We’ll go over all the steps necessary to configure OpenVirteX.

We will start by creating a network connecting two hosts, h_SEA_1 and h_LAX_2, located in Seattle and Los Angeles, respectively. The final virtual network will look like this:

[Diagram: virtual network 1, with virtual switches in SEA, SFO and LAX connecting hosts h_SEA_1 and h_LAX_2]

City Virtual DPID
SEA 00:a4:23:05:00:00:00:01
SFO 00:a4:23:05:00:00:00:02
LAX 00:a4:23:05:00:00:00:03

Since we haven’t configured OVX yet, you can verify that remote hosts cannot talk to each other. For instance:

mininet> h_SEA_1 ping -c3 h_LAX_2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
From 10.0.0.1 icmp_seq=1 Destination Host Unreachable
From 10.0.0.1 icmp_seq=2 Destination Host Unreachable
From 10.0.0.1 icmp_seq=3 Destination Host Unreachable

--- 10.0.0.2 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2017ms

Note that OpenVirteX does see the packet_in’s associated with the ping, but since OVX doesn’t know which virtual network the packets belong to, it drops them for now.

3.1. Virtual network configuration

Now we are ready to create our first virtual network. To do so, we will be using ovxctl, a command line tool that speaks to OVX through its JSON API. Start a new terminal, and acquaint yourself with this tool by reading the help output:

$ cd OpenVirteX/utils
$ python ovxctl.py --help

1) Create virtual network

The following command creates a virtual network that will have a controller speaking the tcp protocol and running on localhost port 10000. Note that we configured the VM image to run this controller, so you don’t have to worry about it. Also, the virtual network’s hosts will be using IPs in the 10.0.0.0/16 range. Executing this command returns a tenant ID or virtual network identifier. You can also verify this in the OVX log.

$ python ovxctl.py -n createNetwork tcp:localhost:10000 10.0.0.0 16

2) Create virtual switches

The next few commands create virtual switches for the virtual network identified by tenant ID 1. We pass in the physical switch DPIDs that form the virtual switch; in this case SEA, SFO, and LAX, in that order. For now, each virtual switch corresponds to exactly one physical switch; later in this tutorial we will show you other options. Each time we create a virtual switch, we get back a virtual switch DPID.

$ python ovxctl.py -n createSwitch 1 00:00:00:00:00:00:01:00
$ python ovxctl.py -n createSwitch 1 00:00:00:00:00:00:02:00
$ python ovxctl.py -n createSwitch 1 00:00:00:00:00:00:03:00

For the curious, the structure of the virtual DPID is as follows: the first byte is 0x00, the next three bytes are the 24-bit ON.Lab OUI (Organizationally Unique Identifier) a4:23:05, and the final 4 bytes are a counter (starting at 1) over the virtual switches created per tenant. For instance, the second virtual switch we create in this virtual network has DPID 00:a4:23:05:00:00:00:02. As before, you should also inspect the OVX log to ensure the virtual switch has been created.
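
The byte layout of that second virtual DPID, for example, is:

00 : a4:23:05 : 00:00:00:02
^    ^          ^
|    |          4-byte per-tenant counter (2nd virtual switch)
|    ON.Lab OUI
fixed 0x00 byte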

3) Create virtual ports

We are now ready to create ports on our virtual switches, which eventually will be used to connect either hosts or links. Virtual ports are created by specifying the tenant ID, physical DPID and physical port. The call returns the virtual switch DPID and virtual port number. Although we don’t show this in this exercise, it is possible to create multiple virtual ports that correspond to the same physical port.

We create two ports on the virtual switch that corresponds to SEA: one for host h_SEA_1, which is connected on physical port 1, and one to connect SEA to SFO, on physical port 5.

$ python ovxctl.py -n createPort 1 00:00:00:00:00:00:01:00 1
$ python ovxctl.py -n createPort 1 00:00:00:00:00:00:01:00 5

On the middle switch SFO, we need two ports to connect to SEA and LAX.

$ python ovxctl.py -n createPort 1 00:00:00:00:00:00:02:00 5
$ python ovxctl.py -n createPort 1 00:00:00:00:00:00:02:00 6

The setup for LAX is similar to SEA:

$ python ovxctl.py -n createPort 1 00:00:00:00:00:00:03:00 5
$ python ovxctl.py -n createPort 1 00:00:00:00:00:00:03:00 2

Refer back to the OVX log to verify that these virtual ports have indeed been created.

4) Create virtual links

Virtual links are used to connect ports on virtual switches, just as you would expect in a physical network. They are created by specifying the tenant ID, the DPID and port number of the virtual source switch, the DPID and port number of the virtual destination switch, the routing mode, and the number of backup routes you want. The call returns the virtual link ID. Note that OVX will automatically create the reverse link as well, which has the same link ID.

The routing mode parameter is either spf or manual. The former enables shortest-path routing internal to OVX; with manual, you have to specify the physical path yourself through the setLinkPath call. The number of backup routes is only honored when using spf; with manual routing you can specify as many or as few backup paths as you like.

Let’s go ahead and connect the virtual switches.

$ python ovxctl.py -n connectLink 1 00:a4:23:05:00:00:00:01 2 00:a4:23:05:00:00:00:02 1 spf 1
$ python ovxctl.py -n connectLink 1 00:a4:23:05:00:00:00:02 2 00:a4:23:05:00:00:00:03 1 spf 1

Referring to the OVX log, you will notice the virtual link being created in both directions.
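
Had we configured manual routing instead, we would set the physical path ourselves with setLinkPath. The following is only a hypothetical sketch: we assume the arguments are the tenant ID, the link ID, a physical path written as the dpid/port pairs you see in the OVX logs, and a priority, and we assume the SEA-SFO physical link runs between port 5 on both switches; run ovxctl.py --help to confirm the exact syntax before using it.

$ python ovxctl.py -n setLinkPath 1 1 00:00:00:00:00:00:01:00/5-00:00:00:00:00:00:02:00/5 100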

5) Connect hosts

Virtual switch ports can also be used to connect hosts. You create this connection by giving the tenant ID, virtual switch DPID, virtual switch port, and the host MAC address. The call returns the host ID.

Let’s connect h_SEA_1 to port 1 on the first virtual switch and h_LAX_2 to port 2 on the third virtual switch. You can verify the MAC addresses by running h_SEA_1 ifconfig in the Mininet console.

$ python ovxctl.py -n connectHost 1 00:a4:23:05:00:00:00:01 1 00:00:00:00:01:01
$ python ovxctl.py -n connectHost 1 00:a4:23:05:00:00:00:03 2 00:00:00:00:03:02

As always, have a look at the OVX log.

3.2. Starting a virtual network

Our virtual network is configured and ready to be booted. We just have to give OpenVirteX the tenant ID and we’re off.

$ python ovxctl.py -n startNetwork 1

You can see the virtual switches connecting to the controller in both the OVX log and the controller’s log. From the controller’s point of view, OVX will look and behave exactly like a physical network composed of switches. At the same time, OVX looks like a controller from the physical switches’ perspective. Hence OpenVirteX is a proxy-controller.

15:05:41.620 [pool-14-thread-12] INFO ControllerChannelHandler - Connected dpid 00:a4:23:05:00:00:00:02 to controller /127.0.0.1:10000
15:05:41.621 [pool-12-thread-12] INFO ControllerChannelHandler - Connected dpid 00:a4:23:05:00:00:00:03 to controller /127.0.0.1:10000
15:05:41.622 [pool-16-thread-12] INFO ControllerChannelHandler - Connected dpid 00:a4:23:05:00:00:00:01 to controller /127.0.0.1:10000

If we try to ping between h_SEA_1 and h_LAX_2, we see that everything works fine.

mininet> h_SEA_1 ping -c3 h_LAX_2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=34.1 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.415 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.053 ms

--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.053/11.538/34.147/15.987 ms

To truly convince yourself that the controller is dealing with a virtual network, point your browser in the VM to http://localhost:10001/ui/index.html. This is Floodlight’s GUI; press the Topology link at the top and you should see your virtual network appear.

1) Inspecting the flow tables

To gain more insight into how OVX does its magic, let’s inspect the flow tables on the physical switches. Be sure to run the ping between h_SEA_1 and h_LAX_2 right before this command; otherwise the flows might have expired and nothing will show up.

mininet> dpctl dump-flows

*** SEA ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x100000002, duration=6.298s, table=0, n_packets=4, n_bytes=336, idle_timeout=5, idle_age=1, priority=0,in_port=5,vlan_tci=0x0000,dl_src=a4:23:05:01:00:00,dl_dst=a4:23:05:10:00:04 actions=mod_nw_src:10.0.0.2,mod_nw_dst:10.0.0.1,mod_dl_src:00:00:00:00:03:02,mod_dl_dst:00:00:00:00:01:01,output:1
 cookie=0x100000003, duration=6.272s, table=0, n_packets=3, n_bytes=238, idle_timeout=5, idle_age=1, priority=0,in_port=1,vlan_tci=0x0000,dl_src=00:00:00:00:01:01,dl_dst=00:00:00:00:03:02 actions=mod_nw_dst:1.0.0.4,mod_nw_src:1.0.0.3,mod_dl_src:a4:23:05:01:00:00,mod_dl_dst:a4:23:05:10:00:02,output:5
*** SFO ------------------------------------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x100000002, duration=6.294s, table=0, n_packets=4, n_bytes=336, idle_timeout=5, idle_age=1, priority=0,in_port=6,vlan_tci=0x0000,dl_src=a4:23:05:01:00:00,dl_dst=a4:23:05:20:00:04 actions=mod_dl_src:a4:23:05:01:00:00,mod_dl_dst:a4:23:05:10:00:04,output:5
 cookie=0x100000003, duration=6.274s, table=0, n_packets=3, n_bytes=238, idle_timeout=5, idle_age=1, priority=0,in_port=5,vlan_tci=0x0000,dl_src=a4:23:05:01:00:00,dl_dst=a4:23:05:10:00:02 actions=mod_dl_src:a4:23:05:01:00:00,mod_dl_dst:a4:23:05:20:00:02,output:6

From this, we can tell that OVX rewrites packets at the edge of the network. In particular, the IP addresses of h_SEA_1 and h_LAX_2 get rewritten from 10.0.0.x to 1.0.0.x and back. This is to ensure traffic isolation among virtual networks; we will revisit this point in the next section. You may note that the MAC addresses of packets are also rewritten; this is explained in more detail in the architecture documentation. The critical point to understand is that the controller is unaware of all this rewriting. It only deals with packets that have their original IP (in the 10.0/16 range) and MAC addresses.

This completes the first exercise of this tutorial. From an existing physical network infrastructure, we created a virtual topology to our specification. We told OVX which switches, links and hosts should make up our virtual network. In the following exercise, we will create another virtual network and show that we can use the same IP addressing while traffic remains fully isolated among virtual networks.


4. Air Traffic Control

This exercise builds on the previous one, so be sure you have just completed it on your system. We’ll add another simple virtual network that uses the same IP address space as the first, and show that traffic is completely isolated among tenants by inspecting the physical flow tables.

4.1. Multiple Virtual Networks

Having just a single virtual network is not that useful, right? So let’s create the same virtual network, but with hosts h_SEA_3 and h_LAX_4.

$ python ovxctl.py -n createNetwork tcp:localhost:20000 10.0.0.0 16
$ python ovxctl.py -n createSwitch 2 00:00:00:00:00:00:01:00
$ python ovxctl.py -n createSwitch 2 00:00:00:00:00:00:02:00
$ python ovxctl.py -n createSwitch 2 00:00:00:00:00:00:03:00
$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:01:00 3
$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:01:00 5
$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:02:00 5
$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:02:00 6
$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:03:00 5
$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:03:00 4
$ python ovxctl.py -n connectLink 2 00:a4:23:05:00:00:00:01 2 00:a4:23:05:00:00:00:02 1 spf 1
$ python ovxctl.py -n connectLink 2 00:a4:23:05:00:00:00:02 2 00:a4:23:05:00:00:00:03 1 spf 1
$ python ovxctl.py -n connectHost 2 00:a4:23:05:00:00:00:01 1 00:00:00:00:01:03
$ python ovxctl.py -n connectHost 2 00:a4:23:05:00:00:00:03 2 00:00:00:00:03:04
$ python ovxctl.py -n startNetwork 2

Note that we are using tenant ID 2 this time, and pointing this network to a controller running on port 20000 (we’ve preinstalled Floodlight to listen on this port in the tutorial VM).

4.2. Traffic Isolation

Now you can ping between the host pair within each virtual network, but not between hosts of different virtual networks. For instance, verify that h_SEA_1 cannot ping h_LAX_4.
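
Since both hosts have addresses in 10.0.0.0/16 but live in different virtual networks, you should see 100% packet loss:

mininet> h_SEA_1 ping -c3 h_LAX_4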

Again, inspect the flow tables in Mininet to understand how OVX enforces traffic isolation. This time, packets in virtual network 2 are translated to 2.0.0.0/8 IP addresses, which ensures a clean separation from the 1.0.0.0/8 packets that belong to virtual network 1.

The next exercise will demonstrate some of the topology virtualization tricks OVX has up its sleeve.


5. Build Your Own Topology

As you know by now, OpenVirteX looks and behaves like a real network from the controller’s perspective. We’ve taken this concept a step further by allowing tenants to create any topology they wish. The two primitives that realize this capability are the virtual switch and the virtual link.

Let’s go over some examples to show you what it does. Be sure you have completed the two previous exercises.

5.1. Big switches

We are going to create a single virtual switch that is composed of multiple physical switches; in this case, the physical switches are IAD, EWR and CLE. Then we’ll attach hosts h_IAD_1, h_EWR_2, and h_CLE_3.

$ python ovxctl.py -n createNetwork tcp:localhost:30000 10.0.0.0 16
$ python ovxctl.py -n createSwitch 3 00:00:00:00:00:00:05:00,00:00:00:00:00:00:06:00,00:00:00:00:00:00:0A:00
$ python ovxctl.py -n createPort 3 00:00:00:00:00:00:05:00 1
$ python ovxctl.py -n createPort 3 00:00:00:00:00:00:06:00 2
$ python ovxctl.py -n createPort 3 00:00:00:00:00:00:0A:00 3
$ python ovxctl.py -n connectHost 3 00:a4:23:05:00:00:00:01 1 00:00:00:00:05:01
$ python ovxctl.py -n connectHost 3 00:a4:23:05:00:00:00:01 2 00:00:00:00:06:02
$ python ovxctl.py -n connectHost 3 00:a4:23:05:00:00:00:01 3 00:00:00:00:0A:03
$ python ovxctl.py -n startNetwork 3

Verify that your network is functioning as expected, i.e., that you can ping between all hosts. It’s interesting to see that, from the controller’s perspective, traffic is forwarded between ports on a single switch; OVX translates this port forwarding into operations that span the physical network. Point your browser to http://localhost:30001/ui/index.html to see what the controller thinks it is dealing with.

By default, OVX takes care of the routing between ports of the virtual switch using a basic shortest path algorithm. If you prefer, you can define this yourself by calling setInternalRouting to change the configuration of the virtual switch, and then using connectRoute calls to manually set the routes between virtual port pairs.
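
A hypothetical sketch of switching the big switch of tenant 3 to manual routing (the argument order for setInternalRouting is an assumption on our part; run ovxctl.py --help to confirm it):

$ python ovxctl.py -n setInternalRouting 3 00:a4:23:05:00:00:00:01 manual 0

You would then issue one connectRoute call per virtual port pair; section 7.4 describes its parameters.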

Observe that in this example we created a virtual switch that is composed of 3 physical switches, but nothing is stopping you from creating a virtual switch that encompasses the whole physical network. That makes your controller’s job much easier: all it has to do is forward traffic between ports on a single switch, and OVX will take care of the rest.

One implementation detail should be mentioned here: there is no limit on the number of physical switches that can be aggregated into a virtual switch. The only restriction we impose in OVX is that you cannot partition a single physical switch into multiple virtual switches.

5.2. Virtual links

A virtual link is simply a link between two virtual switches. They are very easy to create, so let’s try a simple example.

First, stop virtual network 3. Then create a virtual switch that maps to the physical switch in MCI. After that, we create two virtual ports: one on the big switch on the East Coast (mapping to physical port 5 on IAD), the other on the virtual MCI switch (mapping to physical port 6). Then we are ready to connect the link between our two virtual switches; the command takes as arguments a pair of virtual DPIDs and ports, the routing algorithm (in this case spf, or shortest path first) and the number of backup routes. Finally, we add another host and restart the network.

$ python ovxctl.py -n stopNetwork 3
$ python ovxctl.py -n createSwitch 3 00:00:00:00:00:00:08:00
$ python ovxctl.py -n createPort 3 00:00:00:00:00:00:05:00 5
$ python ovxctl.py -n createPort 3 00:00:00:00:00:00:08:00 6
$ python ovxctl.py -n connectLink 3 00:a4:23:05:00:00:00:01 4 00:a4:23:05:00:00:00:02 1 spf 1
$ python ovxctl.py -n createPort 3 00:00:00:00:00:00:08:00 4
$ python ovxctl.py -n connectHost 3 00:a4:23:05:00:00:00:02 2 00:00:00:00:08:04
$ python ovxctl.py -n startNetwork 3

Convince yourself that traffic is forwarded correctly across the virtual link.

Here, OVX calculated the routing automatically for the virtual link. If you prefer, you can specify a manual route by configuring the routing algorithm to manual and then using the setLinkPath call.

5.3. Putting it all together

You are now ready to start building any topology you see fit. As an exercise left to the reader, you should build a network that consists of two big switches, one in the Eastern US (IAD, EWR, and CLE) and the other on the West Coast (SEA, SFO, and LAX). Add a couple of hosts on each coast, and then connect the virtual switches with two virtual links. Verify that everything works as expected.
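
A possible skeleton to get you started (a sketch only; we assume a freshly restarted OVX, so the tenant ID is 1 and the controller on port 10000 is free; pick the host and link ports from your topology):

$ python ovxctl.py -n createNetwork tcp:localhost:10000 10.0.0.0 16
$ python ovxctl.py -n createSwitch 1 00:00:00:00:00:00:05:00,00:00:00:00:00:00:06:00,00:00:00:00:00:00:0A:00
$ python ovxctl.py -n createSwitch 1 00:00:00:00:00:00:01:00,00:00:00:00:00:00:02:00,00:00:00:00:00:00:03:00

Then create the virtual ports for your hosts and for the two inter-coast links, run connectLink twice between the two big switches, connectHost a couple of hosts on each coast, and finally startNetwork the tenant, mirroring the commands earlier in this section.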


6. Automagic Networks

While developing OVX, we realized that it takes a lot of effort to configure anything but the smallest virtual network. We needed a way to rapidly deploy virtual network topologies. To this end, we created a network embedder to automate the process of mapping the virtual topology onto the physical.

Now is a good time to restart OVX; this will flush its configuration and all virtual networks. Simply press Ctrl-C on the terminal where you started OVX and then restart it by pressing Up followed by Return.

Starting the embedder is simply a matter of typing the command below in the OpenVirteX directory; the embedder will then wait patiently on port 8000 for users to send it a JSON file that describes their virtual topology request.

$ python utils/embedder.py

6.1. Big switch

Say all you want is to connect hosts over a single big switch. The example below shows you the contents of the file bigswitch.json. You should be able to tell that this will connect hosts located in SFO, IAD, EWR, and ORD.

{
    "id": "1",
    "jsonrpc": "2.0",
    "method": "createNetwork",
    "params": {
        "network": {
            "controller": {
                "ctrls": [
                    "tcp:localhost:10000"
                ],
                "type": "custom"
            },
            "hosts": [
                {
                    "dpid": "00:00:00:00:00:00:02:00",
                    "mac": "00:00:00:00:02:01",
                    "port": 1
                },
                {
                    "dpid": "00:00:00:00:00:00:05:00",
                    "mac": "00:00:00:00:05:02",
                    "port": 2
                },
                {
                    "dpid": "00:00:00:00:00:00:06:00",
                    "mac": "00:00:00:00:06:03",
                    "port": 3
                },
                {
                    "dpid": "00:00:00:00:00:00:09:00",
                    "mac": "00:00:00:00:09:04",
                    "port": 4
                }
            ],
            "routing": {
                "algorithm": "spf",
                "backup_num": 1
            },
            "subnet": "192.168.0.0/24",
            "type": "bigswitch"
        }
    }
}

Note how this file describes the type of network we want, the details of the controller and subnet, and lists the hosts we want to connect. To send that request to the embedder, type the following:

$ curl localhost:8000 -X POST -d @bigswitch.json

You should now be able to ping between hosts h_SFO_1, h_IAD_2, h_EWR_3, and h_ORD_4. But wait. What just happened here? Once it received the request, the embedder contacted OVX to obtain the physical topology. It then automatically created the virtual switch, created the required ports, and connected the hosts to it. All this from a simple description of your virtual network. Inspect your virtual topology in Floodlight’s UI: http://localhost:10001/ui/index.html.

6.2. Physical clone

We can do exactly the same if we want a clone of the physical network. Have a look at the physical.json file below and see if you can make sense of it. Then type the command that follows it, and verify you can ping between hosts h_SEA_1, h_ATL_2, h_MCI_3, and h_IAH_4.

{
    "id": "1",
    "jsonrpc": "2.0",
    "method": "createNetwork",
    "params": {
        "network": {
            "controller": {
                "ctrls": [
                    "tcp:localhost:20000"
                ],
                "type": "custom"
            },
            "copy-dpid": true,
            "hosts": [
                {
                    "dpid": "00:00:00:00:00:00:01:00",
                    "mac": "00:00:00:00:01:01",
                    "port": 1
                },
                {
                    "dpid": "00:00:00:00:00:00:04:00",
                    "mac": "00:00:00:00:04:02",
                    "port": 2
                },
                {
                    "dpid": "00:00:00:00:00:00:08:00",
                    "mac": "00:00:00:00:08:03",
                    "port": 3
                },
                {
                    "dpid": "00:00:00:00:00:00:0B:00",
                    "mac": "00:00:00:00:0B:04",
                    "port": 4
                }
            ],
            "routing": {
                "algorithm": "spf",
                "backup_num": 1
            },
            "subnet": "192.168.0.0/24",
            "type": "physical"
        }
    }
}

$ curl localhost:8000 -X POST -d @physical.json

You should also point your browser to http://localhost:20001/ui/index.html to convince yourself that your virtual network really looks like a clone of the physical network.

You may have noticed that, for the physical clone, we added a copy-dpid field to the JSON file, set to true. This field means that the virtual switches take on the exact same DPIDs as the physical ones, in contrast to the OVX-generated, OUI-based virtual DPIDs. The option is especially useful because you already know the physical DPIDs, whereas you cannot always be sure of the order in which OVX will create the virtual switches.

6.3. Custom networks

We’re still finishing the code for custom topology embedding. So you’ll have to sit tight for this one.

6.4. Automatic controllers

At times you know exactly which controller to use. But sometimes you just don’t care at all and you’re only interested in connecting a few VMs. Our embedder has built-in support for this.

So far, we’ve defined our controller type to be custom. But if you use default, the embedder will spawn a controller for you and point your virtual network to that controller. This way, your hands are free to focus on the things that matter to you, instead of mundane things like the network…
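
For example, the controller section of your embedder request would become the snippet below (a minimal sketch; whether the ctrls list can simply be omitted for the default type is an assumption, so keep it if the embedder complains):

            "controller": {
                "type": "default"
            },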

Right now, we’ve only coded support for spawning a VM that runs Floodlight in VirtualBox. But, you are welcome to expand this feature to spawn controllers the way you need them to. And be nice and share your code with the OVX team, so everybody can use it.


7. Networking. Uninterrupted.

Customizing your virtual topology is a great way to hide the details of the underlying physical network. It also gives OpenVirteX the chance to automate network protection and restoration for you. This way, you get to focus on the applications, policies or services that matter to you. OVX ensures your network is there and operating as expected.

To do so, you can configure a virtual link to have multiple backup paths. The same holds for port pairs in your virtual switch. Then, if a physical link fails, OVX can automatically switch over to the backup path. What’s even better is that it can do this in a matter of milliseconds, without the controller noticing a thing!

7.1. First steps

We’re assuming you still have the big switch network from the previous section running as virtual network 1.

You can see in the OVX log that OVX calculated both a primary and a backup path between each port pair of the big switch. The example below shows the routes installed between ports 1 and 2, which correspond to the hosts in SFO and IAD. Your paths may differ, so be sure to figure out which route is being used between the two cities you will ping in the next section, SFO and EWR (ports 1 and 3).

16:59:39.154 [qtp1204835011-73] INFO  OVXBigSwitch - Add route for big-switch 00:a4:23:05:00:00:00:01 between ports (1,2) with priority: 64 and path: [00:00:00:00:00:00:02:00/6-00:00:00:00:00:00:03:00/5, 00:00:00:00:00:00:03:00/7-00:00:00:00:00:00:0b:00/5, 00:00:00:00:00:00:0b:00/7-00:00:00:00:00:00:04:00/5, 00:00:00:00:00:00:04:00/7-00:00:00:00:00:00:05:00/5]
16:59:39.154 [qtp1204835011-73] INFO  OVXBigSwitch - Add route for big-switch 00:a4:23:05:00:00:00:01 between ports (2,1) with priority: 64 and path: [00:00:00:00:00:00:05:00/5-00:00:00:00:00:00:04:00/7, 00:00:00:00:00:00:04:00/5-00:00:00:00:00:00:0b:00/7, 00:00:00:00:00:00:0b:00/5-00:00:00:00:00:00:03:00/7, 00:00:00:00:00:00:03:00/5-00:00:00:00:00:00:02:00/6]
16:59:39.156 [qtp1204835011-73] INFO  OVXBigSwitch - Add backup route for big-switch 00:a4:23:05:00:00:00:01 between ports (1,2) with priority: 63 and path: [00:00:00:00:00:00:02:00/5-00:00:00:00:00:00:01:00/5, 00:00:00:00:00:00:01:00/6-00:00:00:00:00:00:07:00/5, 00:00:00:00:00:00:07:00/7-00:00:00:00:00:00:08:00/5, 00:00:00:00:00:00:08:00/7-00:00:00:00:00:00:09:00/5, 00:00:00:00:00:00:09:00/7-00:00:00:00:00:00:0a:00/5, 00:00:00:00:00:00:0a:00/6-00:00:00:00:00:00:05:00/6]
16:59:39.156 [qtp1204835011-73] INFO  OVXBigSwitch - Add backup route for big-switch 00:a4:23:05:00:00:00:01 between ports (2,1) with priority: 63 and path: [00:00:00:00:00:00:05:00/6-00:00:00:00:00:00:0a:00/6, 00:00:00:00:00:00:0a:00/5-00:00:00:00:00:00:09:00/7, 00:00:00:00:00:00:09:00/5-00:00:00:00:00:00:08:00/7, 00:00:00:00:00:00:08:00/5-00:00:00:00:00:00:07:00/7, 00:00:00:00:00:00:07:00/5-00:00:00:00:00:00:01:00/6, 00:00:00:00:00:00:01:00/5-00:00:00:00:00:00:02:00/5]

7.2. Fast reroute

We know that OVX pre-calculated a backup path for each port pair in the big switch. As soon as OVX detects that a link has gone down, it finds out which flows are currently using that link, and shifts traffic away onto the backup path. Let’s try it out.

First, start a ping across the country between h_SFO_1 and h_EWR_3. You will have to type the ping command in the xterm window of h_SFO_1. Note that the first ping has a much higher latency (tens of milliseconds) because of the reactive flow installation. Let the ping run.

mininet> xterm h_SFO_1
# ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=34.9 ms
64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.179 ms
64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.043 ms
64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.045 ms
64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.042 ms
64 bytes from 10.0.0.3: icmp_req=6 ttl=64 time=0.047 ms
64 bytes from 10.0.0.3: icmp_req=7 ttl=64 time=0.338 ms
64 bytes from 10.0.0.3: icmp_req=8 ttl=64 time=0.048 ms
64 bytes from 10.0.0.3: icmp_req=9 ttl=64 time=0.047 ms

Remember the primary and backup paths you wrote down? The following command brings down a link on the primary path; the exact link may be different in your case. When you type the command in the Mininet terminal, observe the h_SFO_1 terminal window where the ping is still running.

mininet> link LAX IAH down

In the ping output above, notice how the time for ping sequence number 7 is much higher than the rest (except the very first one, of course). That’s when we brought the link down and OVX pushed down the flow_mods for the new path. Since it had pre-calculated the path, it could push down the rules over the full path very quickly; much quicker than the initial ping, where packet_in’s were generated at every hop of the path.

If we wanted, we could make it even faster by pro-actively pushing down backup path rules. The trade-off is that we would be using a lot more space in the switches’ flow tables.

Be sure to also look at the flow tables before and after bringing down a link. It’s the only way you can be really sure that we’re not playing tricks on you.

7.3. More backups

OVX tries to calculate a single backup path by default; if you want more, you can ask OVX to calculate additional backups. Open the bigswitch.json file, find the routing section, and change the backup_num value. If you feed this JSON to the embedder (after restarting OVX, of course, because OVX requires the hosts to be unique!), you can bring down links in the backup path and still have connectivity between your hosts. Be sure to try it out!
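
For instance, to have OVX calculate two backup paths, the routing section of bigswitch.json becomes:

            "routing": {
                "algorithm": "spf",
                "backup_num": 2
            },

Then bring down a link from the Mininet console as before, e.g. mininet> link SEA SFO down; pick a link that actually lies on one of your backup paths and verify the ping keeps flowing.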

7.4. Taking control of routing

By default, OVX will calculate shortest path routes for you. Most of the time this is perfectly fine, but sometimes you want to take control of the routing. You can explicitly configure the routes in a big switch yourself using the connectRoute call. It takes a tenant ID, the DPID of the virtual switch, the source and destination ports on the virtual switch, the physical path to connect these ports, and a priority. The path with the highest priority is the primary path; backups are used in decreasing order of priority.
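
A hypothetical invocation, following the parameter list above and reusing the primary path between ports 1 and 2 from the log in section 7.1 (the comma-separated dpid/port path syntax is an assumption based on how OVX prints routes; check ovxctl.py --help to confirm):

$ python ovxctl.py -n connectRoute 1 00:a4:23:05:00:00:00:01 1 2 00:00:00:00:00:00:02:00/6-00:00:00:00:00:00:03:00/5,00:00:00:00:00:00:03:00/7-00:00:00:00:00:00:0b:00/5,00:00:00:00:00:00:0b:00/7-00:00:00:00:00:00:04:00/5,00:00:00:00:00:00:04:00/7-00:00:00:00:00:00:05:00/5 64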

7.5. Not just big switches

So far, we’ve only talked about big switches. Everything we just reviewed is also available on virtual links. Automatic calculation of backup paths? Check. Explicit routing interface? Check. Pro-active push of backup routes? Check. Be sure to test it out yourself.

7.6. Keeping it cool

After a link failure, OVX automatically switches to a backup path; after the link comes back up, OVX reverts to the original situation. You may worry that this increases the potential for route flapping; be sure to get in touch with us if this is a problem for you.


8. What goes up must come down

Creating virtual ports, links, switches and networks is fun and all, but what if you make a mistake? Or you want to remove some part of your network to see how it affects your service? Let’s find out how to stop and/or remove some of these virtual elements.

8.1. Start and stop

We assume you still have the big switch and physical clone networks up and running from the Automagic Networks section.

Let’s start pings between h_SEA_1 & h_IAH_4 and between h_ATL_2 & h_MCI_3; you can do so by opening two xterm windows from Mininet, and then starting the pings from those terminals.

mininet> xterm h_SEA_1
mininet> xterm h_ATL_2

Then, type the first command in the h_SEA_1 window, and the second one in the h_ATL_2 window. Note that we have to use IP addresses instead of host names, since we are not in the Mininet terminal.

# ping 10.0.0.4
# ping 10.0.0.3

OVX exposes a virtual network to the controller, and as such we are free to start and stop any elements in that virtual network. For instance, if we stop the switch in ATL, then the second ping should stop, while the first one should continue. Verify that this is correct by running:

$ python ovxctl.py -n stopSwitch 2 00:00:00:00:00:00:04:00

We are free to bring the switch back up at run time whenever we like; the whole network is virtual, after all. You can do so using the startSwitch command. The ping between ATL and MCI should start up again.

$ python ovxctl.py -n startSwitch 2 00:00:00:00:00:00:04:00

8.2. Clean up after yourself

Say you are building a virtual topology but made a mistake. For instance, you did not want host h_IAH_4; instead you wanted h_EWR_4. While the previously started pings are still running, open a new xterm for host h_SEA_1 and start a ping to h_EWR_4. Obviously, this should not work.

You can now go ahead and remove the host in IAH; the disconnectHost call takes the tenant ID and the host ID as input. In our case the host ID is 4, which you can verify in the embedder log.

$ python ovxctl.py -n disconnectHost 2 4
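
To complete the fix, you would then attach the host you actually wanted. A sketch under two assumptions: thanks to copy-dpid the virtual DPID of EWR equals its physical DPID, and the virtual port number printed by createPort (shown here as a placeholder) is the one to pass to connectHost:

$ python ovxctl.py -n createPort 2 00:00:00:00:00:00:06:00 4
$ python ovxctl.py -n connectHost 2 00:00:00:00:00:00:06:00 <virtual_port> 00:00:00:00:06:04

The ping from h_SEA_1 to h_EWR_4 should then succeed.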

8.3. Exploring further

We’ve only scratched the surface of what OVX can do in terms of starting / stopping and adding / deleting virtual elements. Since everything is virtual, you have a lot of flexibility in controlling your network and its elements: you can disconnect virtual links, remove ports, switches, and even complete networks. Run ovxctl.py to learn more about the full API.