Using VirtualBox, I set up a Linux server to host multiple virtual machine instances, in order to get as much use out of the server as possible. Networking proved to be a big problem. The ISP allocates up to eight public IP addresses per server, so each machine can have its own address. There are several ways to make the virtual machines reachable from the public internet.

Bridged network

This basically works like a virtual Ethernet switch that connects your virtual instances and one designated physical network card. Each host has a different MAC address and behaves like a distinct device on the network. Herein lies the problem: my ISP registers the MAC address of the physical network adapter with its network switches, and as soon as a switch sees an Ethernet frame coming from my port with a different MAC address, it not only discards that frame, but also shuts down the port completely and permanently. This is done to prevent ARP spoofing, and it is a reasonable precaution. So whatever you do, DO NOT connect VirtualBox instances in bridged mode to the public network adapter. Bridged mode only works in ordinary Ethernet setups, not in data centers with managed switches that monitor for ARP spoofing. I got my networking back after a call to support, but because the port had been disabled completely, I could only reach the server through a serial terminal, for which the ISP provides SSH access. Without that way in to disable the bridged configuration, the ARP spoofing detection would have kicked in again the minute the port was re-enabled and shut it down once more.
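
For reference, this is roughly what the dangerous configuration looks like with VBoxManage, assuming the physical adapter is eth0 ([VM_NAME] is a placeholder); again, avoid this on a server behind managed switches:

VBoxManage modifyvm [VM_NAME] --nic1 bridged --bridgeadapter1 eth0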

Bridged network №2

The setup is basically the same: all instances are in bridged mode, connected to the public network adapter, but this time each machine gets the same MAC address as the physical adapter of the hypervisor (the command to do this is shown after the list below). This kind of works, because the ISP's switch never sees a foreign MAC address, and the internal VirtualBox architecture simply distributes all incoming frames to all connected virtual instances. However, it still creates some problems:

  • All the machines' virtual network adapters (virtio, by the way) are effectively in permanent promiscuous mode. Although each machine only picks up IP packets actually destined for it, the firewalls constantly log dropped packets with the wrong destination address. It also opens up a security hole and reduces performance, since every machine, including the hypervisor itself, inspects the traffic of all the others.
  • Connections between the machines were really slow. I never figured out exactly why, but because the whole setup was so ridiculous (there is a reason why every network card normally gets its own unique MAC address), I didn't pursue it. There was also a lot of spurious ARP traffic, probably because the machines couldn't agree on which MAC address belonged to which IP.
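
For reference, cloning the hypervisor's physical MAC address onto an instance can be done with VBoxManage; note that it expects the MAC as twelve hex digits without colons ([VM_NAME] and [PHYSICAL_MAC] are placeholders):

VBoxManage modifyvm [VM_NAME] --macaddress1 [PHYSICAL_MAC]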

NAT

This is the default mode when creating a new VirtualBox instance. I never bothered using it, because in my tests it was awfully slow, so slow in fact that it was unusable (this may have changed since). It also creates a lot of problems. The basic setup would be for the hypervisor to claim all the public IPs, which raises the first question: which IP should the hypervisor itself use, and which are reserved for the virtual machines? The virtual instances then get private IP addresses on a host-only adapter, and each additional public IP is forwarded to a private one. While this probably works, and the speed issue could be resolved by using iptables instead of the VirtualBox built-in NAT mode, an "identity problem" remains: none of the virtual instances knows its real public address. This seriously sabotages protocols like FTP that need to know their own public IP address.
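
For illustration, such a 1:1 NAT with iptables could look roughly like this, with [SECOND_PUBLIC_IP] being one of the additional public addresses and [VM_PRIVATE_IP] the corresponding address on the host-only network (just a sketch, not a complete firewall configuration):

iptables -t nat -A PREROUTING -d [SECOND_PUBLIC_IP] -j DNAT --to-destination [VM_PRIVATE_IP]
iptables -t nat -A POSTROUTING -s [VM_PRIVATE_IP] -j SNAT --to-source [SECOND_PUBLIC_IP]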

Proxy ARP

After a lot of research, I found a setup that actually works and behaves well. Each virtual instance is configured with its public IP address and correct DNS and gateway settings, as if it were directly connected to the external network. Each virtual network adapter also has its own unique MAC address. The only difference is that the instances are connected to a host-only, internal network adapter, which can be created with VirtualBox.
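
For reference, attaching an instance's first adapter to the host-only network can be done like this, assuming the adapter is the default vboxnet0 ([VM_NAME] is a placeholder):

VBoxManage modifyvm [VM_NAME] --nic1 hostonly --hostonlyadapter1 vboxnet0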

Now the machines can talk to each other, but not to the internet, and they cannot be reached from the outside, because nobody knows they are there. To fix this, the almost ancient proxy_arp feature is enabled. It basically works as an Ethernet bridge, allowing one host to impersonate the Ethernet interface of another. Enabling it is simple:

echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/vboxnet0/proxy_arp
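
These settings do not survive a reboot. To make them permanent, you could add the equivalent keys to /etc/sysctl.conf, although the vboxnet0 entry only takes effect if that adapter already exists when sysctl runs, so keeping both lines in the boot script mentioned below may be the safer option:

net.ipv4.conf.eth0.proxy_arp = 1
net.ipv4.conf.vboxnet0.proxy_arp = 1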

Be advised that the host-only networking adapter only comes up after at least one virtual instance has been started and connected to the network. You can, however, force it up with a simple command, which makes it easier to put everything together into a boot script:

ifconfig vboxnet0 [HYPERVISOR_IP] netmask [YOUR_NETMASK] up

Replace [HYPERVISOR_IP] with the primary public IP address of your hypervisor, and [YOUR_NETMASK] with the corresponding netmask, which will often be 255.255.255.255.
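
On systems where ifconfig is deprecated, the equivalent with the newer ip tool should be the following, with [PREFIX_LEN] being the prefix length matching your netmask (e.g. 32 for 255.255.255.255):

ip addr add [HYPERVISOR_IP]/[PREFIX_LEN] dev vboxnet0
ip link set vboxnet0 up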

After issuing this command, the hypervisor will not only answer ARP requests for its own IP address, but will also answer requests for the virtual machines' addresses on their behalf, using the MAC address of the adapter the request came in on. This way the external router learns that it can reach the additional public addresses through the physical network adapter of the hypervisor.
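
To verify that the proxying actually happens, you can watch the ARP traffic on the physical adapter; once the routes shown below are in place, the hypervisor should answer requests for the virtual machines' IPs with its own MAC address:

tcpdump -n -i eth0 arp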

The only things left to do are enabling routing and adding routes, so that IP packets actually get forwarded in both directions. You might also need to tweak your iptables configuration to let the traffic through, because the hypervisor now acts as a transparent Ethernet bridge and a stateful firewall at the same time.

route add [FIRST_VIRTUAL_IP] vboxnet0
route add [SECOND_VIRTUAL_IP] vboxnet0
route add [THIRD_VIRTUAL_IP] vboxnet0
echo 1 > /proc/sys/net/ipv4/ip_forward

As usual, replace the [...] placeholders with your actual IPs, and add as many routes as you have IP addresses allocated for virtual machines. As an optional step, you can configure each virtual instance with direct routes to its neighbors, to avoid the round trip through the ISP's gateway for internal traffic. But this only improves performance and makes no other difference.
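
Regarding the firewall note above: if your FORWARD chain policy is DROP, a deliberately permissive starting point would be the following two rules, which simply allow all forwarding between the two adapters (in practice you would tighten this considerably):

iptables -A FORWARD -i eth0 -o vboxnet0 -j ACCEPT
iptables -A FORWARD -i vboxnet0 -o eth0 -j ACCEPT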

The only remaining problem was the ISP's preferred IP setup, with netmask 255.255.255.255 and the default gateway at 10.255.255.1. Most Linux servers won't accept this configuration out of the box (I'll write an article on how to make it work), and some firewall products, like Microsoft Forefront Threat Management Gateway (TMG), won't run with it at all. The hypervisor setup could be changed to give the virtual hosts proper IP addresses and netmasks (i.e. a netmask where the host address and the default gateway share the same subnet).