How to Build an Evil Supercomputer in Your Living Room

Today I’d like to invite you into the living room to take a look at a project I’ve been working on: Tempest, my own personal evil supercomputer.

Why build an evil supercomputer? The better question is probably “Why wouldn’t you?”, but my real answer is that I wanted to make sure I had a solid understanding of “microservices”, and for that I wanted my own little cloud to play around with.

It’s not done yet, and I expect it to evolve, but I figured I would share my setup and findings thus far in case anyone wanted to try a similar experiment. I’ve got a picture of the build so far below:

Miniature supercomputer
Evil has never looked so… compact.

It Starts with a Hardware Problem

One of the things that I regularly hear about clouds¹ is that they are built using something called “commodity hardware.” This means that instead of using really big, expensive, specialized boxes, they use lots of smaller, cheaper ones.

I couldn’t actually find a great example of “commodity hardware”, so I decided to go with the cheapest hardware I could find that met my minimum specs: 4GB of RAM and an Intel/AMD chipset. More than a Raspberry Pi, less than a real server, but good enough for my purposes.

After looking at a bunch of options (including Raspberries and Oranges), I settled on the Cenovo MiniPC. It’s meant to be a set-top box for a TV, so it’s small and energy efficient. I was able to buy three nodes on eBay for about $125 each.

I decided to use my home router, and I had some Cat6 cables sitting around from a prior experiment, so those were “free”. If I wanted to expand, I’d probably get an 8-port switch for ~$30 and additional cables for ~$3 each, which brings my budget to the following:

|               | Upfront Cost | Cost Per Additional Node |
|---------------|--------------|--------------------------|
| Node          | $375         | $125                     |
| Router/Switch | $0           | $30/8 = $3.75            |
| Cords         | $0           | $3                       |
| **Total**     | **$375**     | **~$135**                |
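For anyone who wants to sanity-check that math (or price out a bigger cluster), the table boils down to a few lines of Python. This is just my budget restated as code; the assumption baked in is that the first three nodes ride on the existing router and spare cables for free:

```python
NODE_COST = 125.00    # Cenovo MiniPC, per eBay
SWITCH_COST = 30.00   # 8-port switch, amortized across its ports
SWITCH_PORTS = 8
CABLE_COST = 3.00     # Cat6 cable

def cluster_cost(nodes: int) -> float:
    """Total cost for a cluster of `nodes` machines.

    The first three nodes used the existing router and spare cables,
    so switch/cable costs only kick in for expansion beyond that.
    """
    base = 3 * NODE_COST                  # the initial $375
    extra = max(0, nodes - 3)
    per_extra = NODE_COST + SWITCH_COST / SWITCH_PORTS + CABLE_COST
    return base + extra * per_extra

print(cluster_cost(3))  # 375.0
print(cluster_cost(4))  # 506.75 -- i.e. +$131.75, roughly the ~$135 above
```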

The downside is that the NICs in these MiniPCs don’t support Wake-on-LAN, which means I can’t power them on remotely. It’s not the end of the world by any means, but it would’ve been nice to use that to remotely control whether Tempest was running or not.
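For hardware that does support it, Wake-on-LAN is a simple protocol: you broadcast a UDP “magic packet” consisting of six 0xFF bytes followed by the target’s MAC address repeated 16 times. Here’s a minimal Python sketch; the MAC address at the bottom is a made-up placeholder, not one of my nodes:

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times, broadcast over UDP."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Hypothetical MAC address -- alas, not supported by my motherboards:
wake_on_lan("00:11:22:33:44:55")
```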

There are a few things that I want to tackle in the future:

  1. Set up a subnet for Tempest so that it’s not on the same network as the rest of my personal devices. Seems like something could go wrong.
  2. Set up PXE to auto-install Ubuntu Server when a new node is added.

The cool thing is that I think I can set those servers up as containers on the cloud itself if I’m clever (or stupid; you can let me know in the comments 😀).

Then come the Penguins

For my operating system I chose Ubuntu Server. I’d like to say it was because of hours of analysis comparing different Linux distributions to find the optimal candidate, but quite frankly it was more because I’m used to working with Ubuntu on my personal machine and it’s very easy to install.

The setup was fairly straightforward: I loaded the image onto a flash drive and used it to overwrite Windows 10 with Ubuntu, one server at a time. I ended up naming them after comic book locations to throw any would-be heroes off the scent.

It was at this point I realized that I was setting myself up to do everything three times for the rest of the venture, and being a lazy programmer, I decided to look at alternatives. One interesting option is Ubuntu’s Metal as a Service (MAAS), which in theory would manage all my nodes and make sure the correct OS is installed on each one.

Unfortunately I ran into issues: MAAS seems to work best when you run your own DHCP server and can power cycle your equipment remotely (which, as I mentioned, I can’t do with these particular motherboards). After inadvertently bringing down my whole network and going several rounds with Comcast tech support, I decided to put MAAS on the back burner.

Setting up the OS wasn’t the only thing I had to do three times: I also had to configure each node with users and software (primarily SSH and Docker). This too was a pain. I played around with Chef but ran into complications (the sort that seem to get left out of the tutorials), so I added it to the TODO list as well.
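For reference, the lazy-programmer version of that repetition looks something like the sketch below, using the paramiko SSH library. It assumes key-based auth and passwordless sudo on each node; the hostnames, username, and package commands are all placeholders standing in for my actual setup:

```python
import paramiko

# Hypothetical hostnames -- comic book locations, per the naming scheme.
NODES = ["latveria", "madripoor", "genosha"]

# The same setup every node needed (placeholder commands).
SETUP_COMMANDS = [
    "sudo apt-get update",
    "sudo apt-get install -y docker.io",
    "sudo usermod -aG docker $USER",
]

def configure(host: str, user: str = "admin") -> None:
    """Run the shared setup commands on one node over SSH."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)  # assumes SSH keys are in place
    try:
        for cmd in SETUP_COMMANDS:
            stdin, stdout, stderr = client.exec_command(cmd)
            print(f"[{host}] {cmd}: exit {stdout.channel.recv_exit_status()}")
    finally:
        client.close()

for node in NODES:
    configure(node)
```

It’s no Chef, but running one script three times beats typing the same commands into three shells.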

One thing I didn’t do to make Tempest truly cloud-like was add a hypervisor or VM layer. I probably could, but in my mind it adds a lot of unnecessary overhead when I’m running containers (and I don’t have to worry about sharing the hardware with others).

At the end of the day I had my servers set up “by hand”, and a few more items for the TODO list:

  1. Set up Chef to manage the node configurations and software.
  2. Set up a node with OpenStack so I can run VMs if needed.
  3. Try Ubuntu’s MAAS again.

Can’t Contain the Containers

The next step was to set up a container manager on each node. I chose Docker and Swarm for the job: Docker because it’s all the craze these days and it’s helpful for work, and Swarm because it sort of came along for the ride. In the future I may experiment with Kubernetes, but for the moment Swarm does everything I need without the duplicated setup I’d face if I tried to mix the two.
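For the curious, here’s the gist of the Swarm setup, sketched with the Docker SDK for Python rather than the CLI commands I actually typed. The IP addresses are placeholders, and it assumes the worker’s Docker daemon is reachable over TCP (by default you’d just run the join on each worker locally):

```python
import docker

# On the manager node: initialize the swarm.
manager = docker.from_env()
manager.swarm.init(advertise_addr="192.168.1.10")  # placeholder manager IP

# The manager hands out join tokens for workers (and other managers).
worker_token = manager.swarm.attrs["JoinTokens"]["Worker"]

# On each worker node (here via a TCP-exposed daemon; placeholder address):
worker = docker.DockerClient(base_url="tcp://192.168.1.11:2375")
worker.swarm.join(
    remote_addrs=["192.168.1.10:2377"],  # the manager's swarm port
    join_token=worker_token,
)
```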

This step went fairly smoothly, and I’ve already got a bunch of things I want to try out in the near future:

  1. Storing data in containers: what happens when a node dies? What happens when the whole cluster dies (e.g. a power outage)? I’ve sketched a first experiment below.
  2. Network functions- DHCP, DNS, TFTP, etc.
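
For that first item, something like this should make the failure modes easy to poke at. It’s a rough sketch using the Docker SDK for Python; the image, service name, and volume name are arbitrary choices for the experiment, not anything Tempest currently runs:

```python
import docker
from docker.types import ServiceMode

client = docker.from_env()  # pointed at the swarm manager

# A throwaway replicated service that writes to a named volume, so I can
# kill a node and see what happens to the data. Note that named volumes
# are local to whichever node a replica lands on.
service = client.services.create(
    image="redis:alpine",
    name="data-canary",
    mounts=["canary-data:/data:rw"],
    mode=ServiceMode("replicated", replicas=2),
)
print(service.tasks())  # where did the replicas land?
```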

At the end of the day, Tempest looks something like this:

Network Diagram


While this was a relatively short post, I thought it would be cool to share my progress thus far. Looking at my TODO list, my next steps should probably be setting up Chef and then playing around with some actual microservices.

If you’re interested in following along, feel free to subscribe below!

  1. Technically Tempest isn’t a cloud, but by the time we’re done it’ll kinda look like one.