Minnowboard Turbot as a Gigabit Router (part 1)

The Minnowboard Turbot is an x86-based single-board computer, making it a fairly rare breed. Most single-board computers, like the Raspberry Pi, are ARM-based and come with some gotchas for those who take the user-level simplicity of the x86 platform for granted. Among the features taken for granted are UEFI, ACPI, and PCIe (and, more generally, discoverable buses and devices).

A use case I have always had in mind for these single-board computers is a gigabit home router. However, most single-board computers can't do this, for a number of reasons:

  1. They only have a single ethernet port. You can get around this with a VLAN-capable switch, but most people don't have one.
  2. The ethernet port they do have is 10/100, not gigabit. I have gigabit internet, so this matters.
  3. The ethernet port is gigabit, but it is really a USB 2.0 device wired onto the board (480 Mbps max).
  4. The CPU is too weak to keep up with gigabit packet forwarding.

I got a Minnowboard Turbot from Netgate and, at least on paper, it seems like it could finally be possible! While the Turbot only has a single gigabit ethernet port, it also has a USB 3.0 port, so I bought a USB 3.0 gigabit ethernet adapter to serve as the second gigabit port. (There are quite a few of these on the market, but almost all of them use the same AX88179 chipset.)
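It is worth confirming the adapter actually enumerates as a SuperSpeed (5 Gbps) device rather than silently falling back to USB 2.0. A quick sanity check looks something like this (the interface name enp0s20u1 is a placeholder; use whatever name udev assigns on your system):

    # Show the USB topology with negotiated link speeds.
    # A proper USB 3.0 link reports 5000M; a USB 2.0 fallback reports 480M.
    lsusb -t

    # Once the cable is plugged in, confirm the adapter negotiated gigabit.
    ethtool enp0s20u1 | grep Speed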

Now for the real test: can the board actually bridge packets at gigabit speeds? I decided to start with the simplest setup first. I installed Fedora 24 and bridged the two network adapters using systemd-networkd, turning the Turbot into a simple two-port L2 bridge for testing throughput and CPU utilization.
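For reference, a minimal systemd-networkd bridge setup looks something like the sketch below. The interface names (enp2s0 for the onboard NIC, enp0s20u1 for the USB adapter) are placeholders; match them to whatever ip link shows on your board.

    # Define the bridge device itself.
    cat <<'EOF' | sudo tee /etc/systemd/network/br0.netdev
    [NetDev]
    Name=br0
    Kind=bridge
    EOF

    # Enslave both NICs to the bridge.
    cat <<'EOF' | sudo tee /etc/systemd/network/br0-slaves.network
    [Match]
    Name=enp2s0 enp0s20u1

    [Network]
    Bridge=br0
    EOF

    # Bring the bridge up; a pure L2 bridge needs no address of its own.
    cat <<'EOF' | sudo tee /etc/systemd/network/br0.network
    [Match]
    Name=br0

    [Network]
    LinkLocalAddressing=no
    EOF

    sudo systemctl enable --now systemd-networkd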

I am testing with iperf for simplicity. Mind you, iperf uses the largest packet size, so this is not a test of packet-processing speed, just of whether the hardware can forward at gigabit in the best case.

And it can.

I put two boxes on either side of the bridge and ran iperf between them:
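Concretely, the commands look something like this (192.168.1.10 is a placeholder for the box running the server):

    # On the box on one side of the bridge: start an iperf server.
    iperf -s

    # On the box on the other side: run a 60-second TCP test
    # through the bridge to the server.
    iperf -c 192.168.1.10 -t 60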

[Screenshot: iperf output for the run between the two boxes]

I did a 60-second run and monitored the CPU %soft utilization (time spent in soft-interrupt context processing packets) with mpstat, locked to the single core that was doing the packet processing:
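The monitoring itself is just mpstat pinned to one CPU. Assuming the network interrupts land on core 0, that is something like:

    # Report per-second stats for CPU 0 over the 60-second run;
    # the %soft column is the time spent in softirq context.
    mpstat -P 0 1 60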

[Chart: %soft utilization on the packet-processing core over the 60-second run]

The average utilization on the interrupt-processing core is about 42%, although there is a lot of variability. At the end of the day, though, there doesn't seem to be any drop in throughput due to CPU saturation.

Now that I know it is possible, the next test is an actual firewall/NAT speed test.
