My my, that is a provocative title for this blog, isn't it?
I've just finished up work on my latest experiment with BOSH: the Containers BOSH Release. As more and more of my life slips into Docker containers (on its way to Kubernetes), I find myself wanting to spend less and less time re-packaging stuff for BOSH. Lots of software already exists in Docker form, since that's a prerequisite for deploying things on Helm (and K8s), which is all the rage.
The other day, I said to myself, “James, what if we just ran the Docker images on top of BOSH?” It seemed crazy at first, but the more I thought it through (and the more I played with the implementation) the more sane and normal it became.
The premise of the Containers BOSH Release is simple: start with a docker-compose recipe, and run it on one or more BOSH VMs.
Hello, Vault!
I like to start with easy examples, so we're going to spin up a single-node Vault instance to prove that this crazy Containers thing works.
We'll start with the simplest of manifest stubs:
---
name: vault

stemcells:
  - alias:   default
    os:      ubuntu-xenial
    version: latest

releases:
  - name:    containers
    version: latest

update:
  canaries: 1
  max_in_flight: 1
  serial: true
  canary_watch_time: 1000-120000
  update_watch_time: 1000-120000

instance_groups:
  - name: docker
    instances: 1
    azs: [z1]
    vm_type: default
    stemcell: default
    networks: [{name: default}]
    jobs:
      - name: docker
        release: containers
        properties:
          recipe:
            # .... START HERE ...
Under the recipe property, we'll just insert a bit of docker-compose:
          # ... continuing on ...
          recipe:
            version: '3'
            services:
              vault:
                image: vault
                ports: ['8200:8200']
                environment:
                  VAULT_API_ADDR: http://127.0.0.1:8200
                  VAULT_LOCAL_CONFIG: >-
                    {
                      "disable_mlock": 1,
                      "backend": {
                        "file": {
                          "path": "/vault/file"
                        }
                      },
                      "listener": {
                        "tcp": {
                          "address": "0.0.0.0:8200",
                          "tls_disable": 1
                        }
                      },
                      "default_lease_ttl": "168h",
                      "max_lease_ttl": "720h"
                    }
                cap_add: [IPC_LOCK]
                command: [vault, server, -config, /vault/config/local.json]
That's it. Toss that at your favorite BOSH director, and when it's all deployed, you should be able to access the Vault on port 8200.
(If you want the full manifest, download it from here)
$ bosh deploy -n vault.yml
$ bosh vms
Instance Process State AZ IPs VM CID VM Type Active
docker/b046a21f running z1 10.128.16.143 vm-a47c3bd5 default true
Let's target that with safe:
$ safe target http://10.128.16.143:8200 dockerized
Now targeting dockerized at http://10.128.16.143:8200
Since this is a new Vault, we're going to need to initialize it:
$ safe init
Unseal Key #1: df16cd701bde233c768cda6c20e214e640bc43cd1b81d977a983d5590dd2659a03
Unseal Key #2: fe8ea8ab8a22ef931d5338dd6f4f2f6932ffa22f6caa26fd30cd57e11ffe137260
Unseal Key #3: c6a872983488ae92e30bb0f74a1a2795978e247c502b4684e70189fe0ba2ad90c6
Unseal Key #4: 6c72063fcf8a9b82a7c72fa286d14f84c9e46c7e30d21d9040ebbfab7725740170
Unseal Key #5: 836f40ec1f7c34460fc86b7547caccc7ffa6b680b9a69a0205b1cddebcb33d2530
Initial Root Token: s.BoFveccftUE9y9j1p4WfyPpO
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.
safe has unsealed the Vault for you, and written a test value
at secret/handshake.
You have been automatically authenticated to the Vault with the
initial root token. Be safe out there!
There you go, a Vault! And we didn't have to write our own BOSH release.
Taking SHIELD For A Spin
Let's try a slightly more complicated example, shall we?
The SHIELD docker-compose recipe consists of five different containers that work together to provide all the moving parts necessary to evaluate SHIELD's effectiveness as a data protection solution:
- core - The SHIELD Core (i.e. most of SHIELD itself)
- vault - A private vault, for storing encryption parameters.
- agent - The SHIELD agent, for executing backup / restore operations.
- webdav - Cheap, local cloud storage.
- demo - You gotta have something to back up, right?
Despite all of this complexity, deploying to BOSH via Containers is just as straightforward: just drop the docker-compose.yml file contents (properly indented, of course) under the recipe property of the docker job.
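To give a sense of the shape, the recipe ends up looking something like the sketch below. The service names come from the list above, but the images, ports, and wiring here are placeholders of my own invention, not SHIELD's real compose file; see the full manifest for the real thing.

```yaml
properties:
  recipe:
    version: '3'
    services:                  # names from the list above; every
      core:                    # image below is a placeholder
        image: example/shield-core
        ports: ['9009:9009']
      vault:  {image: example/vault}
      agent:  {image: example/shield-agent}
      webdav: {image: example/webdav}
      demo:   {image: example/demo-app}
```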
If you want, you can read the entire BOSH manifest.
Once that was deployed, I used bosh vms to get the IP address of the deployed VM, and then targeted that IP (on port 9009) with the SHIELD CLI:
$ bosh vms
Instance Process State AZ IPs VM CID VM Type Active
docker/d9b2c11d running z1 10.128.16.144 vm-add4d02e default true
$ shield api http://10.128.16.144:9009 docker-shield
docker-shield (http://10.128.16.144:9009) OK
SHIELD DOCKER
$ shield -c docker-shield login
SHIELD Username: admin
SHIELD Password:
logged in successfully
$ shield -c docker-shield status
SHIELD DOCKER v8.2.1
API Version 2
If you want, you can head on over to the SHIELD web UI (also on port 9009).
Playing with Anchore (on BOSH)
Anchore is a security scanning solution for Docker images. You give it an image URL and it will pull that image down, unpack it, and scan it for known CVEs and other vulnerabilities. As more of my F/OSS projects move to delivering OCI images as release assets, I wanted a solution that could scan (and re-scan) those images quickly and painlessly.
(by the way, according to their Slack org, Anchore rhymes with encore).
It's a neat system, and its canonical deployment is via a docker-compose recipe, making it a perfect fit for this new mental model of deploying to BOSH.
To deploy, I started with the upstream docker-compose recipe, and then tweaked it slightly (mostly by renaming the Docker containers). The final manifest is here. Go ahead and deploy it; we're going to segue briefly into some theory, but we'll come back to Anchore soon enough.
How Does It Work?
The Containers BOSH release does the following for you:
- Package up Docker 18.x and docker-compose.
- Create a docker-compose.yml file on the BOSH VM, based on what you put in the recipe property.
- Call docker-compose up from a script that monit (BOSH's supervisor) babysits for you.
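In shell terms, those three steps boil down to something like this. This is a sketch under simplified assumptions (paths moved to /tmp, the compose run echoed as a dry run), not the release's actual scripts:

```shell
# Rough sketch of the moving parts; on a real VM the job renders
# everything under /var/vcap, not /tmp.
job=/tmp/docker-job
mkdir -p "$job"

# 1. BOSH renders the 'recipe' property out to a compose file on disk:
cat > "$job/docker-compose.yml" <<'EOF'
version: '3'
services:
  vault:
    image: vault
EOF

# 2. monit supervises a control script whose core is just docker-compose
#    pointed at that rendered file (echoed here as a dry run):
echo docker-compose -f "$job/docker-compose.yml" up
```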
That's (virtually) all there is to it. Don't let the simplicity deceive you! This straightforward formula unlocks some serious potential in using BOSH to run your (already) dockerized workloads. More importantly, it frees us from needing to figure out how to package and execute upstream software.
All the stability of BOSH, all of the flexibility of Docker!
What About Air Gapping?
The first thing our docker-compose up is going to do is contact some registry somewhere and pull down the images it needs to run the project. Depending on your environment, this may either be acceptable (I hope you are pinning your image tags), or it may not. In "air gapped" environments, you cannot directly download anything from the public Internet and run it, for security reasons.
That pretty much rules out this new thing, right?
Wrong. I saw this edge case a mile away — I work with lots of environments that are either forced to use semi-broken HTTP(S) proxies, or are simply not allowed out to the Internet.
After the Docker daemon boots, but before we run docker-compose up, we scan the disk looking for other BOSH jobs that might be able to provide us with the raw material of OCI-compliant images: layered tarballs.
The code in question looks a little something like this:
for image in $(cat /var/vcap/jobs/*/docker-bosh-release-import/*.lst)
do
  docker load <"$image"
done
Any job that gets co-located on the instance group has the option of defining a list of paths to exported OCI image tarballs that we will load into our local Docker daemon.
So yes, if you want to run in an air gapped environment, you do still have to write BOSH releases, but they are super simple and require almost no thought. I even wrote a proof-of-concept image release that packages up the Vault image we've been playing with.
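For illustration, producing the raw material for such a release looks roughly like this. The image name and paths are examples of my own (on a BOSH VM, the job would live under /var/vcap/jobs/<job>), and the docker commands are guarded so the sketch degrades gracefully on hosts without a Docker daemon:

```shell
# Example paths; the containers job scans for
# /var/vcap/jobs/*/docker-bosh-release-import/*.lst at startup.
job=/tmp/image-job
mkdir -p "$job/docker-bosh-release-import"

# Export an image to a layered tarball (skipped if no daemon/network):
if docker info >/dev/null 2>&1 && docker pull vault >/dev/null 2>&1; then
  docker save vault -o "$job/vault-image.tar"
fi

# The co-located job advertises its tarballs via an .lst file, matching
# the glob from the loop above:
echo "$job/vault-image.tar" > "$job/docker-bosh-release-import/images.lst"
cat "$job/docker-bosh-release-import/images.lst"
```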
What About docker exec?
One of my favorite features of Kubernetes is kubectl exec; being able to just bounce into a container and poke around to verify things is amazingly powerful.
You can imitate this powerful feature by binding the Docker daemon to a TCP port. Normally, Docker just binds a UNIX domain socket (like a port, but it's a file). This provides some level of protection for Docker, since you have to be on the Docker host to see the socket file, and you have to have the appropriate group memberships to be allowed to write to it.
If you set the bind: property of the docker job to a TCP port number, the Docker daemon will also listen on that port, across all interfaces, for inbound control messages. You can combine this with the -H ... flag to the docker CLI utility, or, better yet, the $DOCKER_HOST environment variable.
If you look closely at the Anchore BOSH manifest, you'll notice that it binds port 5001, letting us do this:
$ bosh -d anchore vms
Instance Process State AZ IPs VM CID VM Type Active
docker/2e016a81 running z1 10.128.16.142 vm-ffbfedd1 default true
$ docker -H 10.128.16.142:5001 ps
CONTAINER ID IMAGE STATUS PORTS NAMES
47fb54c4b539 (anchore) Up (healthy) 8228/tcp running_simpleq_1
16addb64383e (anchore) Up (healthy) 8228/tcp running_policy-engine_1
c56b06c1ce10 (anchore) Up (healthy) *:8228->8228/tcp running_api_1
f0ab474b62fc (anchore) Up (healthy) 8228/tcp running_analyzer_1
3c46e9c70e3c (anchore) Up (healthy) 8228/tcp running_catalog_1
c96f8a5b9596 (postgres) Up 5432/tcp running_db_1
Since we have complete access to Docker, let's try an exec:
$ export DOCKER_HOST=10.128.16.142:5001
$ docker exec -it running_api_1 /bin/bash
[anchore@c56b06c1ce10 anchore-engine]$
Success! From here, you can use the embedded anchore-cli to interact with the scanning solution:
[anchore@c56b06c1ce10 anchore-engine]$ anchore-cli system feeds list
Feed Group LastSync RecordCount
vulnerabilities alpine:3.3 2019-06-13T01:23:20.228251 457
vulnerabilities alpine:3.4 2019-06-13T01:23:32.916373 681
vulnerabilities alpine:3.5 2019-06-13T01:23:49.301987 875
vulnerabilities alpine:3.6 2019-06-13T01:24:10.832360 1051
vulnerabilities alpine:3.7 2019-06-13T01:24:38.684207 1125
vulnerabilities alpine:3.8 2019-06-13T01:25:08.305060 1220
vulnerabilities alpine:3.9 2019-06-13T01:25:39.714090 1284
vulnerabilities amzn:2 2019-06-13T01:26:02.169712 178
vulnerabilities centos:5 2019-06-13T01:27:20.146913 1323
vulnerabilities centos:6 2019-06-13T01:28:43.593786 1333
vulnerabilities centos:7 2019-06-13T01:29:58.219342 793
vulnerabilities debian:10 2019-06-13T01:37:15.136873 20352
vulnerabilities debian:7 2019-06-13T01:45:14.279082 20455
vulnerabilities debian:8 2019-06-13T01:53:57.173538 21775
vulnerabilities debian:9 2019-06-13T02:02:19.820286 20563
vulnerabilities debian:unstable 2019-06-13T02:10:43.437588 21245
vulnerabilities ol:5 2019-06-13T02:12:14.888089 1233
vulnerabilities ol:6 2019-06-13T02:14:05.563999 1417
vulnerabilities ol:7 2019-06-13T02:15:33.553661 915
vulnerabilities ubuntu:12.04 2019-06-13T02:20:54.345817 14948
vulnerabilities ubuntu:12.10 2019-06-13T02:22:54.519832 5652
vulnerabilities ubuntu:13.04 2019-06-13T02:24:15.630364 4127
vulnerabilities ubuntu:14.04 2019-06-13T02:30:58.521151 18693
vulnerabilities ubuntu:14.10 2019-06-13T02:32:41.325661 4456
vulnerabilities ubuntu:15.04 2019-06-13T02:34:46.806824 5789
vulnerabilities ubuntu:15.10 2019-06-13T02:37:12.849533 6513
vulnerabilities ubuntu:16.04 2019-06-13T02:43:25.005589 15795
vulnerabilities ubuntu:16.10 2019-06-13T02:46:17.313668 8647
vulnerabilities ubuntu:17.04 2019-06-13T02:49:16.106250 9157
vulnerabilities ubuntu:17.10 2019-06-13T02:51:51.897617 7935
vulnerabilities ubuntu:18.04 2019-06-13T02:55:06.965036 10047
vulnerabilities ubuntu:18.10 2019-06-13T02:57:35.083799 8134
vulnerabilities ubuntu:19.04 2019-06-13T02:59:23.443062 6586
etc.
What About Insecure Registries?
In a perfect world, we'd all use TLS everywhere, and everyone would have a verifiable chain of trust back to a CA root authority.
Ha!
Sometimes, you can't help it that your Docker Registry is protected by a self-signed certificate, or one whose CA isn't in the system roots. Occasionally, you have to go without transport security altogether and run on plain old HTTP.
For that, Docker supports the concept of an insecure registry, and the Containers BOSH release lets you supply a list of those ip:port endpoints that just won't pass muster on X.509 validation:
jobs:
  - name: docker
    release: containers
    properties:
      insecure-registries:
        - docker.corp.int:3000
That way, if you have to pull any images from those registries as part of your compose recipe, you're covered.
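Under the hood, this presumably maps onto Docker's own daemon configuration; the hand-rolled equivalent in the daemon's /etc/docker/daemon.json would be:

```json
{
  "insecure-registries": ["docker.corp.int:3000"]
}
```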
What About Private Registries?
That one we're still working on, but given that we can now trivially spin up our own Docker Registries, using this new BOSH release and upstream Docker images, I expect that will get fixed soon enough. You might even be the one to PR that!
Where To From Here?
I hope my experiments here have piqued your interest. Go out, snag a copy of the latest release, and deploy your favorite Dockerized workload, now with the power of BOSH!
Oh, and if you run into any problems along the way, or find a way to improve the BOSH release, we hope to hear from you.
Happy Hacking!