It’s been a while since my last post. I’ve been producing a series of posts about how to set up virtual environments for different programming languages and frameworks: Python, Ruby, node.js. I still need to do an installment on Go and another on how to handle multi-language virtual environments. I haven’t forgotten, and I hope to finish that virtual environments series soon.

Work, of course, does tend to get in the way, and the past couple of months have held more than their share of long work days. I also attended QCon New York in June, and what I saw and heard there has me looking much harder at using virtual systems (VMs, containers, clouds) to automate software development.

A couple weeks ago I decided that I needed a server on which I could run a hypervisor and a set of lightweight VMs, mostly acting as Docker hosts. And I wanted a mature virtualization solution—one on which I could experiment with CloudStack, for example—and that meant Linux. I bought a modest system to support my investigations; this post is about my initial struggles trying to provision that machine.

Hardware first

I had been thinking about buying a barebones computer to do some experiments with VMs and cloud infrastructure. I’d been thinking specifically about the Intel NUCs because I wanted something quiet that I could tuck away under a monitor or mount under my desk. In fact, I’ve been keeping my eye on NUCs for over a year. But they were always just a little underpowered, especially with respect to memory capacity, and reviews kept mentioning loud fans and reliability problems. So I waited.

Recently I discovered the Gigabyte BRIX computers. The BRIX machines are very similar to NUCs, and Gigabyte just came out with the GB-BSI5A-6200, a dual-core i5 system that has USB 3.1 (both USB-C and USB-A ports) and accepts 32GB of memory. That’s what I’d been waiting for: a NUC-like box that supports 32GB of RAM. So I pulled the trigger, adding two 16GB memory sticks and a 500GB M.2 SSD.

I got my hands on the unit this past weekend. I installed Ubuntu Desktop (more on why, below) and that’s when I got my first disappointment: the machine has fans—I thought it did not—and they can be loud.

But I did some investigation and I think I can control the CPU fan so it doesn’t idle so fast and thus stays quiet most of the time. And, other than the fan noise, the unit is quite fast and has performed flawlessly.
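
I haven’t settled on an approach yet. The usual route on Linux seems to be lm-sensors plus fancontrol, assuming the BRIX exposes a PWM-controllable fan to the kernel (which I haven’t confirmed yet); roughly:

    sudo apt-get install lm-sensors fancontrol
    sudo sensors-detect    # probe for sensor chips; answer the prompts
    sudo pwmconfig         # map PWM outputs to fan inputs; writes /etc/fancontrol
    sudo systemctl enable fancontrol
    sudo systemctl start fancontrol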

Logitech Unifying receiver

I initially started working with a wired mouse and keyboard. But since they consumed both USB 3.0 ports, I switched to a wireless Logitech mouse and keyboard that share a single Unifying Receiver. I was pleased to find that the receiver worked just fine with the hardware, even when booted into the BIOS setup screens. I didn’t have to do any pairing (I don’t know why), and now I have an open USB 3.0 port.

I understand that there are some packages that can be installed on Ubuntu to manage Logitech wireless devices, and I’ll be trying that out soon.
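
The package I keep seeing mentioned is solaar, which appears to be in the Ubuntu universe repository; something like:

    sudo apt-get install solaar
    solaar show    # list the receiver and its paired devices (solaar-cli show on older versions)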

Next, the software—oh my!

It’s been a long time since I’ve had to provision a Linux system from scratch. I’ve been using prebuilt AMIs on AWS, and boot2docker images in VMs hosted on OS X and Windows (using Parallels and Hyper-V, respectively).

Back in the early 2000’s I became very frustrated dealing with the administration and maintenance of my personal Linux systems (desktop and laptop). When I found out that OS X had a Unix core (Darwin BSD, specifically) married to a sweet graphical UI, I jumped with both feet into Apple hardware and OS X and have never regretted it. OS X is stable, easily upgraded, and works flawlessly with a vast array of external devices that I would have been struggling to get working with a Linux desktop system.

Trying to get this little barebones i5 box provisioned with Ubuntu Linux has brought back echoes of the frustration and constant roadblocks I remember from provisioning and maintaining Linux systems over 15 years ago. I could do a little rant here about how much of the official documentation, community documentation, and organic Q&A forums and tutorials I found was outdated, incomplete, and silent about which software versions it applied to, or even when it was posted. In a way, it’s worse than 15 years ago: things are changing faster, supporting projects have shorter lifetimes (they spin up, get superseded, and go dark within a couple of years), and the amount of information cruft to sift through is growing relentlessly.

As I said, I could inject a satisfying rant—but, as is often the case, Scott Adams has captured the essence of it with Dilbert: http://dilbert.com/strip/2016-07-17.

In any case, it’s 2016, and if you want to be learning and experimenting with cloud and container virtualization and orchestration, it seems apparent to me that the best path forward is Linux, so that’s where I’m heading.

What hypervisor?

There are a lot of options for hypervisors. VirtualBox doesn’t count (it’s not a bare-metal hypervisor). There’s VMware, there’s Xen, there’s XenServer, and there’s KVM. Hyper-V is also a choice, since you can run Linux and Unix VMs on it. There are probably others.

I thought I’d try Xen first. Why? Because AWS (Amazon Web Services) uses Xen, and because more CloudStack features are available with a Xen hypervisor than with any other hypervisor.

First attempt

So I thought I’d install Ubuntu Desktop and add Xen to that. Ubuntu 16.04 LTS Desktop and Server have the same kernel; they differ in the set of packages that are initially installed, with Desktop adding a GUI and a bunch of client ‘office’ software and so on; Server installs no GUI and offers a simple command line shell. I chose Ubuntu Desktop so that I could get back up to speed on using Linux as a workstation, and so that I could browse Xen documentation while I configured and managed it.

The Ubuntu Desktop install went just fine. I took all the defaults except that I encrypted the whole disk. After the reboot I resized the root logical volume to be much smaller (just 20GB instead of all the available space); this lets me create additional logical volumes, at least one per VM.

Then I installed Xen using apt-get. I ran into trouble on the reboot: Xen was unable to load. This turned out to be related to UEFI, which I had enabled in the BIOS.[1]

So I reconfigured the BIOS, reinstalled Ubuntu, reconfigured LVM, reinstalled Xen, and rebooted. This time all was well, and I had Ubuntu Desktop running as my Xen dom0 VM.[2]
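
For reference, the install itself was nothing more than grabbing the hypervisor package (Ubuntu’s package adds a GRUB entry that boots Xen first, with Ubuntu as dom0); something close to:

    sudo apt-get install xen-hypervisor-amd64
    sudo reboot
    sudo xl info    # after the reboot, confirms we're running as dom0 under Xen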

Now I thought I’d try to create a VM running RancherOS, a nicely architected Docker host OS. That’s when I got my next disappointment.

Xen supports HVMs, PVMs, and PVHVMs: hardware-virtualized machines, para-virtualized machines, and para-virtualized + hardware-virtualized machines. I wanted to use an HVM. That’s OK, except that there was no way to connect to the console of the HVM and interact with the RancherOS installer. It turns out that in all cases Xen can only connect the console if the OS being installed includes support for Xen para-virtualization.[3] To install a pure HVM with a guest OS that doesn’t have Xen para-virtualization built in, you need to use a VNC client to connect to the guest.
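
To make that concrete, an HVM guest set up for VNC access would be defined with something roughly like the config below; the LVM volume, ISO path, and sizes are placeholders, and the exact keys vary a bit between Xen versions:

    # /etc/xen/rancheros.cfg (hypothetical paths and sizes)
    name      = "rancheros"
    builder   = "hvm"
    memory    = 1024
    vcpus     = 1
    disk      = [ 'phy:/dev/ubuntu-vg/rancheros,hda,w',
                  'file:/var/lib/xen/iso/rancheros.iso,hdc:cdrom,r' ]
    boot      = "d"            # boot the CD image to run the installer
    vnc       = 1              # expose the guest display via VNC on dom0
    vnclisten = "127.0.0.1"

Bringing it up would then be a matter of sudo xl create /etc/xen/rancheros.cfg and pointing a VNC viewer at dom0 on port 5900 plus the guest’s display number; it’s that last interactive step that I don’t want in an automated workflow.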

All of that was setting off alarm bells. Because I want to automate the creation of VMs using something like Vagrant or docker-machine or Chef, I did some more checking and found that there’s no Xen support in docker-machine or Vagrant for creating VMs.[4] That means each time I wanted to create a new HVM guest I would have to log on and perform the install interactively over VNC.

Second attempt

I decided to abandon Ubuntu Desktop and Xen in favor of Ubuntu Server and KVM. Installing Ubuntu Server was a different experience than installing the Desktop.[5] First, it’s a text-mode (VGA console) install rather than a full GUI install, and a lot more questions are asked and a lot more options are made available.[6]

I did the install at least four times. The first time I got an error on reboot, somehow related to UEFI, I think. I was installing off a bootable USB drive that I’d created with UNetbootin on my OS X system. There were two bootable partitions on the drive, one UEFI and one not, and I had probably selected the UEFI one. In any case, I did the whole install over again, this time booting from the non-UEFI partition on the USB drive. This time the machine booted without crashing, but dropped me into Grub with a message about not finding a mountable partition on a path starting with /dev/sdb.

During the install the USB drive shows up as sda and the internal SSD shows up as sdb, but when the system boots without the USB drive it sees the SSD as sda. After some DuckDuckGo searching and some soul searching, I decided that UNetbootin was not building the boot drive properly. I did have a Windows 8.1 VM on which I could run the Universal USB Installer (the utility recommended in the Ubuntu documentation) and recreate the boot USB, so I gave it a go. I made two bootable USB drives, one for Ubuntu Server 16.04.1 LTS and one for Ubuntu Desktop 16.04.1 LTS.
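
For what it’s worth, the other route would have been to skip the third-party tools and write the ISO to the stick directly from OS X with dd; roughly (the disk number comes from diskutil list, and hdiutil appends .dmg to the converted image):

    hdiutil convert -format UDRW -o ubuntu-server.img ubuntu-16.04.1-server-amd64.iso
    diskutil list                     # identify the USB stick, e.g. /dev/diskN
    diskutil unmountDisk /dev/diskN
    sudo dd if=ubuntu-server.img.dmg of=/dev/rdiskN bs=1m
    diskutil eject /dev/diskN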

Using the new bootable USB I installed Ubuntu Server yet again, and on the reboot I was encouraged not to see a Grub prompt; Linux came up![7] After determining the IP address of the server[8] I was able to SSH into it from my OS X system. Sweet.

I then moved my public SSH key onto the server and reconfigured the SSH daemon to prevent password logins (and require SSH keys for authentication).
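
Concretely, that was something like the following (the user and address here are placeholders):

    # from the Mac: install my public key on the server
    ssh-copy-id someuser@192.168.1.50
    # (or: cat ~/.ssh/id_rsa.pub | ssh someuser@192.168.1.50 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys')

    # on the server: disable password logins, keys only
    sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    sudo systemctl restart ssh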

Next steps

Next steps are to do some housekeeping and then learn how to use KVM. Here’s my to-do list:

  • disable the WiFi radio (not needed)
  • expand the root logical volume to give a bit more room for package installation
  • implement a backup procedure and test it
  • try creating a few VMs (a rough virt-install sketch follows this list)
    • manually creating via Ubuntu Server ISO
    • manually creating via boot2docker ISO
    • using Vagrant via Ubuntu Server ISO
    • using Vagrant via boot2docker ISO
    • using docker-machine via Ubuntu Server ISO
    • using docker-machine via boot2docker ISO
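
For the manual Ubuntu Server case, my starting point will probably be virt-install (from the virtinst package), assuming the Virtualization Server task left libvirt ready to go; a rough sketch with placeholder names, sizes, and paths:

    sudo apt-get install virtinst    # if virt-install isn't already present
    sudo virt-install \
      --name test-server \
      --memory 2048 --vcpus 2 \
      --disk path=/var/lib/libvirt/images/test-server.qcow2,size=20 \
      --cdrom ~/isos/ubuntu-16.04.1-server-amd64.iso \
      --graphics vnc
    sudo virsh list --all            # the new domain should show up here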

I’ll report on how the next steps unfold in my next post on this project.

  1. I could find no Xen install documentation that warned about UEFI compatibility. I stumbled across this issue in a few forum posts I found by searching.

  2. dom0 is the term for the privileged guest VM that is used to manage the hypervisor; Xen starts it and connects it to the physical console when the physical machine boots. domU is the term for any other guest VM.

  3. Here’s the list of guest OS support for PVM and PVHVM: http://wiki.xen.org/wiki/DomU_Support_for_Xen

  4. Actually, there is some support for docker-machine if you are running the Citrix XenServer, but that feels too heavyweight and also less flexible than a pure Ubuntu solution.

  5. One thing I did learn from the Desktop experience was that encrypting the physical disk is a pain because the system boot is blocked waiting for you to enter the password to permit decryption and allow the drive to be mounted. I decided not to encrypt the drive when I did the Server install. Maybe there’s a way around that boot-blocking password issue, but I didn’t want to spend a huge amount of time searching for a solution.

  6. I followed the standard install path, with these variations:

    I chose “Guided - use entire disk and set up LVM” and then allocated 40GB to the guided volume. (That turned out to use 32GB for swap and a bit over 5GB for root; the 40GB I specified was in decimal units (1K = 1000), while lvdisplay reports sizes in binary units (1K = 1024).)

    I’m going to expand the root volume to 10GB before I create any other logical volumes.
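
    Growing an ext4 root can be done while it’s mounted, so that should amount to a single command (the volume group name below is a guess; the installer names it after the host):

      sudo lvextend --resizefs -L 10G /dev/brix-vg/root    # 'brix-vg' is a placeholder VG name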

    In the software selection step, I chose only “Standard system utilities”, “OpenSSH Server”, “Virtualization Server”, and “Basic Ubuntu server”.
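
    Before going further I’ll double-check that the virtualization pieces actually landed; something like:

      sudo apt-get install cpu-checker
      kvm-ok                   # reports whether KVM hardware acceleration can be used
      sudo virsh list --all    # libvirt answers and lists defined guests (none yet)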

    After the install I reconfigured the SSH daemon (sshd) to prevent password authentication; I now rely only on SSH keys for authentication.

  7. Actually, I couldn’t tell that it came up. I thought it was hung because my screen was blank. After another 30 minutes of searching I stumbled onto an offhand comment in a forum post, somewhere, that mentioned you have to use the key combo Shift-Alt-F1 to bring up the console. That tidbit was nowhere in the Ubuntu install guides or Server documentation. I did that, saw the login prompt, and then I knew all was well and the install had succeeded.

  8. I don’t have a DNS server on my home network that will register hostnames, so I have to go by IP address.