Docker MTU issues on OpenStack

While testing in a docker container running on a VM in my OpenStack cluster, I encountered a weird issue when trying to connect to services over TLS. For example, I could curl https://google.com from within the container, but not https://github.com. DNS was working, routing was working. I could ping github.com. I just couldn’t establish a TLS connection.

After some packet tracing, I noticed that the TLS ClientHello was never leaving the VM in the github.com case.

Looking at the MTU for the docker bridge vs. the virtual ethernet adapter:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:81:9d:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.12.15/24 brd 192.168.12.255 scope global dynamic eth0
       valid_lft 83165sec preferred_lft 83165sec
    inet6 fe80::f816:3eff:fe81:9db8/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:a1:a2:06 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fea1:a206/64 scope link
       valid_lft forever preferred_lft forever
29: vethda551f3@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether fe:53:b9:85:37:b8 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fc53:b9ff:fe85:37b8/64 scope link
       valid_lft forever preferred_lft forever

OpenStack creates virtual ethernet devices with an MTU of 1450, I assume to accommodate encapsulation overhead. When the docker bridge is created, it does not detect the MTU of the underlying adapter; it just defaults to 1500. That MTU is propagated to the veth interfaces in the containers, which causes workloads inside the containers to occasionally generate packets that are too large for the host to forward, so they are dropped.
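The arithmetic behind the failure can be sketched like this (a minimal illustration; the 50-byte figure assumes VXLAN tenant networking, which is my guess at why OpenStack picks 1450):

```python
# Back-of-the-envelope numbers for the failure mode described above.
ETH_MTU = 1500          # docker0/veth default MTU
VXLAN_OVERHEAD = 50     # outer Ethernet(14) + IPv4(20) + UDP(8) + VXLAN(8)
vm_mtu = ETH_MTU - VXLAN_OVERHEAD
print(vm_mtu)           # 1450, matching eth0 in the VM

# A container's TCP stack derives its MSS from the 1500-byte veth MTU,
# so a full segment becomes a 1500-byte IP packet: too big for eth0.
container_packet = (ETH_MTU - 40) + 40  # MSS + 20B IP + 20B TCP headers
print(container_packet > vm_mtu)        # True: dropped on the host
```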

After going into my /etc/sysconfig/docker and adding the --mtu option:
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --mtu=1450'
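On newer docker versions (or distros without /etc/sysconfig/docker), the same MTU override can go in the daemon configuration file instead; a minimal sketch, assuming the default /etc/docker/daemon.json location:

```json
{
  "mtu": 1450
}
```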

One systemctl restart docker later, order was restored. So if TLS connections are failing in your docker containers, think MTU.

The Turbot Tower

I’ve had a number of people ask me about the Turbot Tower I built for doing Kubernetes development and demos.

Here is one such demo I made recently:

Here is the bill of materials:

  1. Minnowboard Turbot x3 ($140 each)
  2. Silverjaw Lure x3 ($50 each)
  3. Kingston mSATA 60GB SSD x3 ($35 each)
  4. Netgear GS105 5-port Gigabit Switch ($25)
  5. M3 Nylon Stand-offs ($10)
  6. 5V 10A Switching Power Supply ($25)
  7. 2.1mm DC Power Splitter ($4)

Total BOM cost is $739. Final dimensions are 4″(D)x3.75″(W)x5.5″(H). Not bad at all for a 3-machine cluster, each with GbE, SSD storage, 2GB RAM, and a dual-core processor!
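The total checks out (a quick sanity check using the prices listed above):

```python
# (price, quantity) for each BOM line item above
items = [(140, 3), (50, 3), (35, 3), (25, 1), (10, 1), (25, 1), (4, 1)]
total = sum(price * qty for price, qty in items)
print(total)  # 739
```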

Minnowboard Turbot as a Gigabit Router (part 1)

The Minnowboard Turbot is an x86-based single-board computer, making it a fairly rare breed. Most single-board computers, like the Raspberry Pi, are ARM-based and come with some gotchas for those who take the user-level simplicity of the x86 platform for granted. Among the features taken for granted are UEFI, ACPI, and PCIe (and, more generally, discoverable buses/devices).

A use case I always had in mind for these single-board computers is a gigabit home router. However, most single-board computers can’t do this for a number of reasons:

  1. They only have a single ethernet port. If you have a switch with VLAN support you can get around this, but most people don’t.
  2. The ethernet port they do have is 10/100, not gigabit. I have gigabit internet, so it matters.
  3. The ethernet port is gigabit but is really a USB 2.0 device wired onto the board (480Mbps max).
  4. The CPU is too weak to handle the packet forwarding rate.
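To put numbers on points 2–4 (my own back-of-the-envelope math, not from the original post), the frame rate a gigabit link demands depends heavily on frame size:

```python
LINE_RATE = 1_000_000_000  # bits per second

def frames_per_second(frame_bytes):
    # each frame also costs 20 bytes of preamble + inter-frame gap on the wire
    return LINE_RATE // ((frame_bytes + 20) * 8)

print(frames_per_second(1518))  # 81274: ~81k frames/s with full-size frames
print(frames_per_second(64))    # 1488095: ~1.49M frames/s at minimum size
```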

I got a Minnowboard Turbot from Netgate and, at least on paper, it seems like it could finally be possible! While the Turbot only has a single gigabit ethernet port, it does have a USB 3.0 port as well. So I bought a USB 3.0 gigabit ethernet adapter to be the second gigabit port. (There are quite a few of these on the market, but almost all of them use the same AX88179 chipset.)

Now for the real test: can the board actually bridge packets at gigabit speeds? I decided to start with the simplest setup first. I installed Fedora 24 and bridged the two network adapters using systemd-networkd. This turns the Turbot into a simple 2-port L2 bridge for testing throughput and CPU utilization.
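For reference, the systemd-networkd bridge config was along these lines (a sketch; the interface names under Name= are my placeholders, so check ip link for yours):

```ini
# /etc/systemd/network/br0.netdev
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/br0.network (give the bridge itself an address)
[Match]
Name=br0

[Network]
DHCP=yes

# /etc/systemd/network/bridge-ports.network (enslave both NICs)
[Match]
Name=enp2s0 enp0s20u1

[Network]
Bridge=br0
```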

I am testing with iperf for simplicity. Mind you, iperf uses the largest packet size, so this is not a test of packet-processing speed; it only shows whether the hardware can forward at gigabit rates in the best case.

And it can.

I put two boxes on either side of the bridge and ran iperf between them.


I did a 60-second run and monitored the CPU %soft utilization (time spent in soft-interrupt context processing packets) with mpstat, locked to the single core that was doing the packet processing.


The average utilization on the interrupt-processing core is about 42%, although there is a lot of variability. At the end of the day, though, there doesn’t seem to be any drop in throughput due to CPU saturation.

Now that I know it is possible, the next test is an actual firewall/NAT speed test.

Running Fedora on Windows 10 using WSL

Getting a Fedora root filesystem

Option 1: Trust me and download the rootfs tarball

Includes dnf hacks and exclude rules for hacked packages in dnf.conf (hopefully not needed for too long)
https://drive.google.com/file/d/0B2x_P2FaPipUUjRuSkF4c01WOG8/view

Option 2: Build the rootfs tar yourself from Koji

Download the docker image for Fedora 23
http://koji.fedoraproject.org/koji/tasks?owner=ausil&state=closed&view=flat&method=createImage&order=-id

In this example the latest version was 20160408.

Become root, to maintain permissions on the untarred files, and do the following:

tar xfp Fedora-Docker-Base-23-20160408.x86_64.tar.gz
cd dad7397f64776b5ac85b0bdbf5d511bc0a434b363309570bb2cf3082f382aaec
mkdir rootfs
cd rootfs
tar xfp ../layer.tar
tar chf rootfs.tar etc/ usr/ var/ bin lib lib64 sbin

Installing bash (Ubuntu) in Windows 10

  • Start -> Search for “developer settings” -> Select “Developer Mode”
  • Start -> Search for “windows features” -> Check “Windows Subsystem for Linux (Beta)”
  • Reboot
  • Open a command prompt (Start -> Search for “cmd”)
  • Run “bash”
  • Select “y” to install bash

Bootstrapping Fedora

  • Copy the rootfs.tar(.gz) to the Windows desktop for your user (method unspecified)
  • Within the bash shell (at /mnt/c/Users/Your User)

    cd Desktop
    cp rootfs.tar* ~
    cd
    tar xfp rootfs.tar*
    exit

Now the selected directories of the Fedora root filesystem are in root’s home directory.

The next step is to overwrite those same directories in the rootfs.

  • Open a file manager in Windows (Start -> “file”)
  • Go to C:\Users\Your User\AppData\Local\lxss\rootfs
  • Delete etc usr var bin lib lib64 sbin
  • Go to C:\Users\Your User\AppData\Local\lxss\root
  • Cut etc usr var bin lib lib64 sbin
  • Go back to C:\Users\Your User\AppData\Local\lxss\rootfs
  • Paste

Now we have effectively bootstrapped a Fedora userspace.

Open a command prompt, type bash, and now you are in a Fedora environment (run dnf if you don’t believe me).

Patching (if you chose option 2)

The lxcore.sys syscall translation driver supports many Linux syscalls, but not all. There are also some programs that access things in /sys, /proc, and /dev that aren’t available in this environment. We have to hack some things in python3 and dnf to work around this.

dnf metadata fetch fails with Error 22: Invalid argument

edit /usr/lib64/python3.4/shutil.py around line 134 (search for “listxattr”)
replace if hasattr(os, 'listxattr'): with if False:

Transaction check failed, not enough disk space

edit /usr/lib/python3.4/site-packages/dnf/rpm/transaction.py around line 108 (search for “DISKSPACE”)
remove if conf.get('diskspacecheck') == 0: and reduce indent for self.ts.setProbFilter(rpm.RPMPROB_FILTER_DISKSPACE)

Prevent hacks from being removed on dnf update

edit /etc/dnf/dnf.conf
add exclude=python3-libs python3-dnf to [main] section

Building a Thermostat with the Raspberry Pi

For a long time, I wanted to do something with my original RPi B (the one with 256MB) but never found the time. Now I have the RPi2 as well. The RPi2 is more useful in general since it is multi-core and ARMv7 (i.e. Linux distributions build officially supported binary packages for ARMv7), but my original RPi just sat around… sad… collecting dust.

A new year brought with it new inspiration and motivation to build a thermostat out of it.

Parts List:
RPi B+ ($25)
Any SD card 2G or more (~$10)
Edimax Wireless-N USB Adapter ($10)
Plugable USB Bluetooth 4.0 Low Energy Micro Adapter ($14)
F/F Jumper Wires ($4)
SainSmart 4-Channel Relay Module ($9)
TI SensorTag CC2650STK ($30)
50′ 5-wire Thermostat Wire ($17)

Total: ~$120

So roughly half the cost of a Nest, which is currently $250.
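Sanity-checking those numbers (prices as listed above):

```python
# one price per line of the parts list above
prices = [25, 10, 10, 14, 4, 9, 30, 17]
print(sum(prices))  # 119, i.e. ~$120, just under half of a $250 Nest
```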

Install Raspbian

The first thing to do is download the latest Raspbian Lite image from the Raspberry Pi website. The torrent is MUCH faster than the direct download, so I recommend going that route.

Once downloaded, decompress the image. The filename might be different.

unzip 2015-11-21-raspbian-jessie-lite.zip

Now insert your SD card into the card reader. Determine the device path of the SD card by looking at the kernel log after you plug in the card reader.

$ dmesg | tail -n 20
[11201.184676] usb 1-3.4: new high-speed USB device number 3 using xhci_hcd
[11201.273099] usb 1-3.4: New USB device found, idVendor=0951, idProduct=1624
[11201.273104] usb 1-3.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[11201.273106] usb 1-3.4: Product: DataTraveler G2
[11201.273109] usb 1-3.4: Manufacturer: Kingston
[11201.273111] usb 1-3.4: SerialNumber: 000AEBFEF59CA931C64C0033
[11201.297086] usb-storage 1-3.4:1.0: USB Mass Storage device detected
[11201.297206] scsi host6: usb-storage 1-3.4:1.0
[11201.297332] usbcore: registered new interface driver usb-storage
[11201.301072] usbcore: registered new interface driver uas
[11202.296377] scsi 6:0:0:0: Direct-Access     Kingston DataTraveler G2  1.00 PQ: 0 ANSI: 2
[11202.297220] sd 6:0:0:0: Attached scsi generic sg4 type 0
[11202.297866] sd 6:0:0:0: [sdd] 7835648 512-byte logical blocks: (4.01 GB/3.73 GiB)
[11202.298152] sd 6:0:0:0: [sdd] Write Protect is off
[11202.298156] sd 6:0:0:0: [sdd] Mode Sense: 23 00 00 00
[11202.298443] sd 6:0:0:0: [sdd] No Caching mode page found
[11202.298446] sd 6:0:0:0: [sdd] Assuming drive cache: write through

From the output, I can see that my SD card is /dev/sdd. MAKE SURE YOU GET THIS RIGHT, YOU CAN BLOW AWAY YOUR MACHINE.

Now copy the image to the SD card, replacing /dev/sdX with the device name of your SD card.

dd if=2015-11-21-raspbian-jessie-lite.img of=/dev/sdX bs=1M

Eject the card

sync; eject /dev/sdX

Insert the card, the wifi adapter, and the bluetooth adapter into the Raspberry Pi and power it on. The default user/password is pi/raspberry. Use sudo to gain root access.

Network Configuration

Configuring a static IP address is necessary for logging in remotely. Optionally, set up a DNS record so you can address the Pi by name.

Edit /etc/network/interfaces to look like this, adjusting the address and gateway to be appropriate for your network.

source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet static
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    address 192.168.0.10/24
    gateway 192.168.0.1

Edit /etc/wpa_supplicant/wpa_supplicant.conf to look like this, adjusting ssid, key_mgmt, and psk (password) to be appropriate for your network.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
	ssid="mynetwork"
	scan_ssid=1
	key_mgmt=WPA-PSK
	psk="secretnomore"
}

Enable the wireless interface

ifup wlan0

Make sure the wlan0 interface is configured properly

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:27:eb:f6:d2:7c brd ff:ff:ff:ff:ff:ff
105: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 74:da:38:5b:d5:9c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::76da:38ff:fe5b:d59c/64 scope link 
       valid_lft forever preferred_lft forever

$ ip r
default via 192.168.0.1 dev wlan0 
192.168.0.0/24 dev wlan0  proto kernel  scope link  src 192.168.0.10

$ ping -c 1 google.com
PING google.com (173.194.115.35) 56(84) bytes of data.
64 bytes from dfw06s40-in-f3.1e100.net (173.194.115.35): icmp_seq=1 ttl=54 time=13.7 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 13.785/13.785/13.785/0.000 ms

Software

I wrote the thermostat software myself in Go. Nice Go libraries for communicating with the SensorTag (GATT) and driving the RPi GPIO already exist, so development was quick and easy. It is by no means robust to certain failures (namely, loss of communication with the SensorTag), but it gets the job done and you are free to extend it. I wanted a simple feature set.

Option 1: Download the binary tarball

Download, decompress and run (as root)

wget http://www.variantweb.net/pub/rpi-thermostat.tar.gz
tar xf rpi-thermostat.tar.gz
./rpi-thermostat

Option 2: Compile from source

Cross compile the rpi-thermostat binary for the RPi on your desktop machine

mkdir rpi-thermostat
cd rpi-thermostat
export GOPATH=$PWD
go get github.com/sjenning/rpi-thermostat
cd src/github.com/sjenning/rpi-thermostat
GOOS=linux GOARCH=arm go build

Connecting the Relays

Thermostats are not that complex. The following is a crash course.

R – 24VAC
G – Fan/Blower
Y – Cool/Compressor
W – Heat
B – Common (not used here)

The following are connections made for the different modes. R-G, for example, denotes that the Red and Green wires are connected via a relay.

Fan = R-G
Cool = R-G, R-Y
Heat = R-G, R-W
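That wiring logic can be sketched in code (a hypothetical mapping of my own; the actual rpi-thermostat implementation is in Go and may differ):

```python
# Which wires get connected to R (24VAC) in each thermostat mode.
def closed_relays(mode):
    return {
        "off":  set(),
        "fan":  {"G"},
        "cool": {"G", "Y"},  # the blower always runs with the compressor
        "heat": {"G", "W"},
    }[mode]

print(sorted(closed_relays("cool")))  # ['G', 'Y']
```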

For this software, the mappings of GPIO pins to relay controls are

Relay for Fan (G) is controlled by pin 17
Relay for Cool (Y) is controlled by pin 21
Relay for Heat (W) is controlled by pin 22

You can see the pinout for all RPi models here

Connect the relays as follows

5V -> Vcc
17 -> IN1
21 -> IN2
22 -> IN3
Gnd -> Gnd
(the pin for IN4 on the relays will not be connected)

IN1 relays G (Fan)
IN2 relays Y (Cool)
IN3 relays W (Heat)

The R wire is the one that the relays connect to the various other wires. I connected the R wire coming in from the unit to the first relay, on the active side, and used small jumpers to connect R to the active side of all the other relays.


SensorTag

This setup uses the TI SensorTag, communicating over Bluetooth Low Energy (BTLE), to obtain the current temperature. Unfortunately the marketing material for the SensorTag is not completely honest.

It claims that the tag will run for a year or so on a single 240mAh-ish CR2032 coin battery. However, the device draws about 0.5mA continuously (unless you turn the motion sensor on, in which case it draws almost 5mA!). So a little math (240mAh / 0.5mA) gives 480 hours, or 20 days, of life.
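That math, spelled out (using the figures stated above):

```python
capacity_mah = 240   # nominal CR2032 capacity
draw_ma = 0.5        # measured continuous draw (≈5 mA with motion sensing on)
hours = capacity_mah / draw_ma
print(hours, hours / 24)  # 480.0 hours, 20.0 days: nowhere near a year
```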

I, for one, did not want to be changing out the battery on the tag every 3 weeks. The tag has solder points for a 2xAAA battery pack. I got this on Amazon. The solder points are here.


Starting on boot

Copy the systemd unit file to /etc/systemd/system/rpi-thermostat.service then start and enable on boot

systemctl start rpi-thermostat
systemctl enable rpi-thermostat

All Done

Just go to http://192.168.0.10/, adjusting the IP for your setup, and you should get the web UI for the thermostat controls!


Getting WiFi Working on the Raspberry Pi 2 from the Command Line

Install the WiFi support and TUI interface for NetworkManager

dnf install NetworkManager-wifi NetworkManager-tui

Determine the name of your wireless interface

ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether b8:27:eb:76:01:0e brd ff:ff:ff:ff:ff:ff
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether 74:da:38:5b:d5:9c brd ff:ff:ff:ff:ff:ff

Start the NetworkManager TUI and configure your wireless network

nmtui


Bring the wireless interface up

ifup wlan0
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

Make sure you have an IP address

ip a
...
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 74:da:38:5b:d5:9c brd ff:ff:ff:ff:ff:ff
    inet 10.42.5.120/24 brd 10.42.5.255 scope global dynamic wlan0
       valid_lft 7169sec preferred_lft 7169sec
    inet6 fe80::76da:38ff:fe5b:d59c/64 scope link 
       valid_lft forever preferred_lft forever

A default route

ip r
default via 10.42.5.1 dev wlan0  proto static  metric 600
...

Ping google.com

ping -c 1 google.com
PING google.com (216.58.218.110) 56(84) bytes of data.
64 bytes from (216.58.218.110): icmp_seq=1 ttl=54 time=8.50 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 8.505/8.505/8.505/0.000 ms

You’re ready to go!

WordPress behind an nginx SSL reverse proxy

/etc/nginx/conf.d/ssl.conf (inside the ssl server block)

location /blog/ {
  proxy_pass http://backend:8081/;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Proto $scheme;
}

Add this to wp-config.php

/**
 * Handle SSL reverse proxy
 */
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS']='on';

if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_X_FORWARDED_HOST'];
}

If the URI on the proxy is different from the URI on the backend, add this to wp-config.php too

$_SERVER['REQUEST_URI'] = "/blog".$_SERVER['REQUEST_URI'];

where “/blog” is the URI prefix on the proxy

Creating VLAN Bridges with systemd-networkd

This is an example of how to set up a machine with two bridges that trunk to two different VLANs (trusted:4 and untrusted:5) on the external network via eth0.

All of the files are in /etc/systemd/network.

br-trusted.netdev

[NetDev]
Name=br-trusted
Kind=bridge

br-trusted.network

[Match]
Name=br-trusted

[Network]
Address=192.168.0.2/24
Gateway=192.168.0.1
DNS=192.168.0.1
Domains=example.com

br-untrusted.netdev

[NetDev]
Name=br-untrusted
Kind=bridge

br-untrusted.network

[Match]
Name=br-untrusted

eth0.network

[Match]
Name=eth0

[Network]
VLAN=trusted
VLAN=untrusted

trusted.netdev

[NetDev]
Name=trusted
Kind=vlan

[VLAN]
Id=4

trusted.network

[Match]
Name=trusted

[Network]
Bridge=br-trusted

untrusted.netdev

[NetDev]
Name=untrusted
Kind=vlan

[VLAN]
Id=5

untrusted.network

[Match]
Name=untrusted

[Network]
Bridge=br-untrusted

Creating a GPG key on Fedora 22

Install the required tools

dnf install gnupg2 rng-tools -y

Start rngd. This provides entropy for the key generation process.

rngd -r /dev/urandom

Create a master GPG key. The key represents your GPG identity. Note that the command we are using is gpg2, not gpg.

# gpg2 --gen-key
gpg (GnuPG) 2.1.4; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: directory '/root/.gnupg' created
gpg: new configuration file '/root/.gnupg/gpg.conf' created
gpg: WARNING: options in '/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keybox '/root/.gnupg/pubring.kbx' created
Note: Use "gpg2 --full-gen-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Demo User
Email address: demo@example.com
You selected this USER-ID:
    "Demo User <demo@example.com>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? o
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 9CDBF8B3 marked as ultimately trusted
gpg: directory '/root/.gnupg/openpgp-revocs.d' created
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   rsa2048/9CDBF8B3 2015-06-17
      Key fingerprint = 2FDB EB02 1D3C 8698 FBAD  DF1C B5BF 8F3A 9CDB F8B3
uid       [ultimate] Demo User <demo@example.com>
sub   rsa2048/AE893683 2015-06-17

# gpg2 --list-keys
/root/.gnupg/pubring.kbx
------------------------
pub   rsa2048/9CDBF8B3 2015-06-17
uid       [ultimate] Demo User <demo@example.com>
sub   rsa2048/AE893683 2015-06-17

[root@demo ~]# gpg2 --list-secret-keys 
/root/.gnupg/pubring.kbx
------------------------
sec   rsa2048/9CDBF8B3 2015-06-17
uid       [ultimate] Demo User <demo@example.com>
ssb   rsa2048/AE893683 2015-06-17

This has created a master key, 9CDBF8B3, with an encryption subkey, AE893683.

In the following steps, use your master key ID in place of 9CDBF8B3.

The key can be published to a keyserver like this:

gpg2 --keyserver pgp.mit.edu --send-keys 9CDBF8B3

In the event that the master key is lost or compromised, a revocation certificate will be needed to indicate that the key, and all its subkeys, should no longer be used.

gpg2 --gen-revoke 9CDBF8B3 > 9CDBF8B3-revoke.asc

All the *.asc files mentioned should be stored offline for security.

Backup your keys

gpg2 --export 9CDBF8B3 > 9CDBF8B3-pub.asc
gpg2 --export-secret-keys 9CDBF8B3 > 9CDBF8B3-sec.asc

If the key is published to a keyserver, the backup of the public keys is not needed as they can be retrieved from the keyserver.

Test encryption and decryption

# echo "this is a secret" > test.txt
# gpg2 -e -r 9CDBF8B3 test.txt 
# gpg2 -d test.txt.gpg 
gpg: encrypted with 2048-bit RSA key, ID AE893683, created 2015-06-17
      "Demo User <demo@example.com>"
this is a secret

systemd machinectl vs docker

machinectl (and machined) are part of systemd and offer container control similar to Docker’s.

systemd attempts to be much narrower in scope than Docker. It considers image creation, distribution, and versioning to be out-of-band and best handled by existing technologies.

For example, images can simply be (compressed) tarballs, with sha256sums for integrity checking and gpg signing for trust. They can be distributed in any way that any other file is distributed (HTTP, FTP, USB drive, etc). They are versioned and snapshotted using Btrfs.

In other words, image creation, distribution, and versioning can be done with tools that are common and have existed for a very long time.

machined can also boot most docker containers (pull-dkr) and raw disk images (pull-raw), in addition to the tarball case above (pull-tar).

systemd also considers multi-node container orchestration to be out-of-band. systemd focuses on single-node container management and allows for much easier persistent container management, like a VM, where docker tends to assume containers are short-lived and ephemeral.

systemd contains a service template for systemd-nspawn, making it very simple to boot containers when the system boots and monitor container state, just like any other systemd service.

Here is a table of the analogous subcommands between machinectl and docker.

machinectl           docker        operation
-----------------    -----------   --------------------------------------------
list                 ps            show running containers
status               (none)        show detailed status of a single container
start                start         start a named container
login                attach        get a login prompt inside the container
enable               (none)        start the container on boot
disable              (none)        do not start the container on boot
poweroff             stop          shut down the container
reboot               restart       restart the container
terminate            kill          immediately stop the container
kill                 (none)        send signals to processes inside the container
copy-from            cp            copy a file from the container to the host
copy-to              (none)        copy a file from the host to the container
bind                 run with -v   bind mount from host to container (systemd can bind at start time with systemd-nspawn --bind or at runtime with machinectl bind)
list-images          ps -a         show existing containers
clone                (none)        create a new container as a snapshot of another (docker run does this implicitly)
rename               rename        rename a container
remove               rm/rmi        remove a container/image (systemd doesn’t make the distinction)
clone+start          run           create a container as a snapshot of a base image and start it
pull-[tar|raw|dkr]   pull          retrieve an image

Much of this code is still new in systemd. I’m hoping to have a tutorial up soon showing how machined is very good at single-node persistent container management.