This post is the second in a series documenting my progress through installing and configuring an OpenStack lab, using Cumulus switches for the network.

For other posts in this series, see the Overview section in the introduction post.

This post will cover the creation of our Virtual Machines in VirtualBox using Vagrant.


Installing Vagrant on Ubuntu (my choice of desktop distribution) requires downloading the .deb package from the official website.
There is a Vagrant package available in the default repositories, but it isn’t kept up to date.

Once downloaded, we’ll install the package and ensure all of its dependencies are resolved. (I didn’t have any further dependencies to install, but it’s a good habit to run the command after manually installing a package anyway.)
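The download-and-install steps look something like the following (2.0.3 was the current version at the time of writing – substitute whatever is current on the downloads page):

```shell
# Download the .deb from the official release site
wget https://releases.hashicorp.com/vagrant/2.0.3/vagrant_2.0.3_x86_64.deb

# Install the package, then ask apt to resolve any missing dependencies
sudo dpkg -i vagrant_2.0.3_x86_64.deb
sudo apt-get install -f
```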

An additional step I take is copying the bash completion script for Vagrant over so that it can be used (I don’t know why this isn’t done by default, seeing as the file is included in the package anyway – see GitHub issue #8987).
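On my machine the bundled script lives under Vagrant’s embedded gem directory; a copy along these lines works (the exact path varies with the Vagrant version):

```shell
# Copy the bundled completion script into bash-completion's directory
sudo cp /opt/vagrant/embedded/gems/gems/vagrant-2.0.3/contrib/bash/completion.sh \
    /etc/bash_completion.d/vagrant
```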

Remember to replace the version directories with your version of Vagrant.
You’ll also need to start a new shell in order for this to take effect.

Usually with Vagrant the next step would be to install a plugin to support the hypervisor you are using (which Vagrant calls a ‘provider’), but VirtualBox is supported by default.



Now we need to add a ‘Box’, which is Vagrant’s term for an image.
The difference between Vagrant boxes and most other operating system images is that boxes are pre-made and pre-configured; new virtual machines are cloned from them rather than installed from scratch.

For example, there is a Cumulus VX Vagrant box available that we’ll use in this project – named “CumulusCommunity/cumulus-vx”. We’ll add this now along with the CentOS 7 and Ubuntu 16.04 boxes.
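Adding the three boxes is one command each:

```shell
vagrant box add CumulusCommunity/cumulus-vx
vagrant box add centos/7
vagrant box add ubuntu/xenial64
```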

During this step, if a box is available for more than one provider, Vagrant will prompt you to choose which provider’s version to download.

These boxes are stored globally for the user under “~/.vagrant.d/boxes”, and the currently installed boxes can be listed using the command below:
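On my machine, with the three boxes above installed, the listing looks like this:

```shell
$ vagrant box list
CumulusCommunity/cumulus-vx (virtualbox, 3.5.3)
centos/7                    (virtualbox, 1803.01)
ubuntu/xenial64             (virtualbox, 20180413.0.0)
```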



Now that we’ve got our host ready, we’ll create a ‘Vagrantfile’:

“The primary function of the Vagrantfile is to describe the type of machine required for a project, and how to configure and provision these machines” – https://www.vagrantup.com/docs/vagrantfile/

More simply put, the file marks the root directory of a project in Vagrant and some configuration associated with it. This is created using the “vagrant init” command in the directory that you want to use as your project root:
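For example (the directory name here is just my choice for this lab):

```shell
# Create a project root and generate a skeleton Vagrantfile in it
mkdir -p ~/vagrant/openstack-lab
cd ~/vagrant/openstack-lab
vagrant init
```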


By default this file is pretty empty, with only three lines uncommented.
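Stripped of its comments, those three lines are:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "base"
end
```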

The “do” and “end” lines form a ‘block’ in Ruby – everything between them is passed to the “Vagrant.configure” method, which hands us the “config” object to set options on.
With the Vagrantfile, all we’re doing is defining configuration. Nothing fancy here!
Let’s make some changes then.

According to the Getting Started documentation, the first thing we should change is the base box to whichever of the boxes we’ve downloaded we want to use.
However, because our project isn’t quite as simple as using only one VM, we need to define multiple machines. This is done by adding “config.vm.define” blocks inside the main configure block.
We’ll create one block for each of the VMs we need – 2 spine switches, 4 leaf switches, 2 OpenStack machines and a CentOS VM for ZTP:
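A sketch of that structure – the hostnames are my own choices, and each role uses one of the boxes we added earlier:

```ruby
Vagrant.configure("2") do |config|
  # Spine switches
  (1..2).each do |i|
    config.vm.define "cumulus-spine0#{i}" do |spine|
      spine.vm.box = "CumulusCommunity/cumulus-vx"
    end
  end

  # Leaf switches
  (1..4).each do |i|
    config.vm.define "cumulus-leaf0#{i}" do |leaf|
      leaf.vm.box = "CumulusCommunity/cumulus-vx"
    end
  end

  # OpenStack hosts
  (1..2).each do |i|
    config.vm.define "openstack-host0#{i}" do |host|
      host.vm.box = "ubuntu/xenial64"
    end
  end

  # ZTP server
  config.vm.define "ztp-server" do |ztp|
    ztp.vm.box = "centos/7"
  end
end
```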

With this configured, we could go ahead and provision then boot all of the machines we’ve specified using the “vagrant up” command, then use “vagrant destroy” to delete the VMs again.
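Both are run from the project root:

```shell
vagrant up          # provision and boot every machine in the Vagrantfile
vagrant destroy -f  # delete the VMs again (-f skips the confirmation prompt)
```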

However, there is still more configuration to do before we can be happy that our lab can be created correctly.

VirtualBox configuration

VirtualBox-specific configuration can be defined both globally and individually for each VM using another block – “.vm.provider”.

For example, we can set what each VM is named in VirtualBox:
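Inside a machine’s define block it looks something like this (the VirtualBox name is my own naming scheme):

```ruby
config.vm.define "cumulus-spine01" do |spine01|
  spine01.vm.box = "CumulusCommunity/cumulus-vx"
  spine01.vm.provider "virtualbox" do |vb|
    # Name shown in the VirtualBox GUI and VBoxManage output
    vb.name = "openstack-cumulus-spine01"
  end
end
```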

We can also specify how many CPUs and how much RAM we allocate to each VM:
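These go in the same provider block; the sizings below are example values of my own (Cumulus VX runs happily on modest resources):

```ruby
spine01.vm.provider "virtualbox" do |vb|
  vb.cpus   = 1    # virtual CPUs
  vb.memory = 512  # RAM in MB
end
```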


Most other VirtualBox specific configuration needs to be done through the use of VBoxManage “modifyvm” commands and a “.customize” entry.
In the below, we’re adding all VMs to a group in VirtualBox so that we can easily tell they are part of the OpenStack lab.
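A sketch of that, using VBoxManage’s “--groups” option via “.customize” (the group name is my choice):

```ruby
config.vm.provider "virtualbox" do |vb|
  # Place every VM in a VirtualBox group named "OpenStack Lab"
  vb.customize ["modifyvm", :id, "--groups", "/OpenStack Lab"]
end
```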

Note that this is done at the “config” level, outside of each host.
This is an example of the global configuration that I referenced earlier.

In VirtualBox, we now have a group folder for our lab machines:

This would usually look like the following if you were using the VBoxManage command-line utility:
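For example, using the VM name from earlier:

```shell
VBoxManage modifyvm "openstack-cumulus-spine01" --groups "/OpenStack Lab"
```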


We’ll apply the configuration completed above to the rest of our machines and we’re done with VirtualBox configuration:



Vagrant networking is as simple (or as hard) as the provider (hypervisor) you are using makes it.

There is one limitation in our scenario: VirtualBox can only support a maximum of 36 interfaces per machine.
This doesn’t bother us, because the largest number of interfaces on any of our machines is 7 (each leaf switch has 1 management interface, 2 uplinks to the spine switches, 2 links in a bond to the other leaf switch in the MLAG pair, and 2 downlinks to the attached OpenStack host).


One other thing to be aware of is that Vagrant sets the first network interface as a NAT interface by default.
A NAT interface in VirtualBox allows all communication outbound (through NAT of the host’s primary interface), but only allows inbound connections (including from other VMs) if they are port-forwarded.
While we could leave this as the default and configure port-forwards for each inbound connection, this becomes cumbersome when you start to think about DHCP traffic (to/from the ZTP server) and OpenStack control traffic.

Instead we can change the first network interface for all hosts to be part of a “public network” (a bridged network in VirtualBox).
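A sketch of the change, made at the “config” level so it applies to every host. I’m assuming the desktop’s bridge interface is named “enp3s0” – substitute your own – and disabling auto-configuration since, as noted below, Vagrant can’t configure this interface:

```ruby
config.vm.network "public_network", bridge: "enp3s0", auto_config: false
```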

This comes with the caveat that Vagrant can no longer configure the network interface of the VM.
This instead needs to be completed by one of the following:

  • Use a Vagrant box that comes with a preconfigured network interface
  • Configure the interface manually by logging in through the console
  • Use a DHCP server to assign network addressing

We’ll use DHCP in this lab as we’re going to be configuring one for ZTP anyway.

Because Vagrant no longer configures the interfaces itself, we also need to tell it what IP address to connect to in order to finish its provisioning.
This is done with the “.ssh.host” option under each host.
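For instance (the address here is an example – it should be whatever the DHCP/ZTP server will hand to that machine):

```ruby
config.vm.define "cumulus-spine01" do |spine01|
  spine01.vm.box = "CumulusCommunity/cumulus-vx"
  # The address Vagrant should SSH to for this machine (example value)
  spine01.ssh.host = "192.168.11.101"
end
```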


The last thing we need to do is configure the network interfaces for the links between the VMs.
This will be done using “private networks” (internal networks in VirtualBox).

For example, to configure the link between the first spine and leaf switches, we need to add the below to the config of each VM:
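A sketch of that link, using the VirtualBox provider’s “virtualbox__intnet” option (the internal network name is my own; the two sides sharing the same name is what forms the link):

```ruby
# Under the spine switch's define block
spine01.vm.network "private_network",
                   virtualbox__intnet: "spine01-to-leaf01",
                   auto_config: false

# Under the leaf switch's define block
leaf01.vm.network "private_network",
                  virtualbox__intnet: "spine01-to-leaf01",
                  auto_config: false
```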


After we add the above to all of our devices, our configuration should look like the below:
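Pulling the pieces together, a single host’s definition ends up looking roughly like this (interface names, addresses, network names and sizings are all my own example values):

```ruby
config.vm.define "cumulus-leaf01" do |leaf01|
  leaf01.vm.box = "CumulusCommunity/cumulus-vx"
  leaf01.ssh.host = "192.168.11.111"  # example management address

  # First interface: bridged management, configured by DHCP/ZTP
  leaf01.vm.network "public_network", bridge: "enp3s0", auto_config: false

  # Fabric links as VirtualBox internal networks:
  # 2 uplinks to the spines
  leaf01.vm.network "private_network", virtualbox__intnet: "spine01-to-leaf01", auto_config: false
  leaf01.vm.network "private_network", virtualbox__intnet: "spine02-to-leaf01", auto_config: false
  # 2 bond links to the MLAG peer
  leaf01.vm.network "private_network", virtualbox__intnet: "leaf01-to-leaf02-a", auto_config: false
  leaf01.vm.network "private_network", virtualbox__intnet: "leaf01-to-leaf02-b", auto_config: false
  # 2 downlinks to the attached OpenStack host
  leaf01.vm.network "private_network", virtualbox__intnet: "leaf01-to-host01-a", auto_config: false
  leaf01.vm.network "private_network", virtualbox__intnet: "leaf01-to-host01-b", auto_config: false

  leaf01.vm.provider "virtualbox" do |vb|
    vb.name   = "openstack-cumulus-leaf01"
    vb.cpus   = 1
    vb.memory = 512
  end
end
```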


With this configuration file we have configured Vagrant to provision all of our VMs with all of the correct interface-to-adapter mappings.

While we could run “vagrant up” to ask Vagrant to provision and boot the VMs, Vagrant won’t finish, as it will not be able to reach the hosts.
Before it can do so, we need to configure a DHCP server to hand out addresses, which I’ll cover in the next post.



Vagrant – Getting Started
Vagrant – Official Documentation
VirtualBox Manual – Chapter 6: Virtual Networking
VirtualBox Manual – Chapter 8: VBoxManage
Cumulus VX Documentation – Getting Started on VirtualBox
Cumulus VX Documentation – Vagrant and VirtualBox

Versions used:

Desktop Machine: kubuntu-16.04.3-desktop-amd64
VirtualBox: virtualbox-5.2
Vagrant: 2.0.3
Cumulus VX Vagrant Box: CumulusCommunity/cumulus-vx (virtualbox, 3.5.3)
CentOS Vagrant Box: centos/7 (virtualbox, 1803.01)
Ubuntu Server Vagrant Box: ubuntu/xenial64 (virtualbox, 20180413.0.0)