Over the years, I've collected a small bag of useful, if somewhat silly, tricks for using Docker (and what I'll call low effort containerization) to make my day-to-day computing life easier. Here are some of my favorites!
1. Liberal (Over-)Use of Shell Shortcuts
If you're a purist about using the tool names the project developers gave you, it's probably best for you to just skip this section outright.
I hate typing. So much so that I have tons of shell aliases, git aliases, and tiny utilities with small names littered about my systems — and I can't live without them.
Specifically, I can't live without dr:
$ cat ~/bin/dr
#!/bin/sh
docker run -it --rm "$@"
Seriously, I use this command at least once a day. Anywhere you see a code listing with a docker run in it, you can bet that I'm actually shortening that to just plain dr.
The --rm is important; it tells the Docker daemon to clean up the container filesystem and configuration when the contained process exits. This matters because if you bounce through Docker containers as much as I'm going to propose you start doing, you'll soon end up with tons of dead containers eating up space on your hard disk.
The -it is there because, primarily, when I'm typing in Docker-y commands, I'm at a terminal (-t), and I really want to be able to send data through standard input (-i).
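To make that concrete, here's a purely illustrative example: a throwaway Alpine shell is one dr away, and thanks to --rm, it evaporates the moment you exit.

$ dr alpine sh
/ #

Any image, any command; the flags are already taken care of.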
2. Extracting Text Files via cat
This one seems almost too basic to mention, but I see far too many professionals overlooking the sheer power and awesomeness of cat combined with docker run.
I spin up a lot of HTTP APIs, and I almost always reach for nginx when I need a general-purpose, lightweight reverse proxy to front them. I also firmly believe in explicitly stating configuration; I am not comfortable "accepting the defaults" of things like the standard nginx image.
No biggie! The containerization platforms I play with (Docker / Kubernetes) make it trivial to mount in your own files, shadowing the configuration that comes with any particular image. Usually, though, I want to start from the default configuration and either accept it explicitly, or tweak it.
Which brings me back to cat: ever wonder what the default nginx image's root configuration file looks like? Wonder no more!
$ docker run --rm nginx cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
A shell redirect later, and I'm munging that default configuration in vim and getting ready to pop out yet another proxied HTTP(S) REST API, containerized.
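Concretely, that redirect-and-edit loop looks something like this (the local nginx.conf filename is just my habit, nothing special):

$ docker run --rm nginx cat /etc/nginx/nginx.conf > nginx.conf
$ vim nginx.conf
$ docker run --rm -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro nginx

The last line is the mount-and-shadow trick from above: my tweaked copy sits on top of the image's default, read-only, and the image never knows the difference.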
3. Docker Composed and At The Ready
I do a lot of development on my local machine. All of the projects that I'm currently working on have at least one semi-complicated auxiliary system with which they interact: PostgreSQL, MariaDB, Mongo, Redis, etc.
In the olden days before containers, I used to run actual hardware to host these types of systems, and I would then connect my dev applications up to those spinning systems for testing, demoing, etc.
Now I use Docker Compose.
For each project I'm working on, I maintain at least one docker-compose.yml that spins up all of my data service dependencies, with lots and lots of port-forwarding. Here's an example, for https://vaultofcardboard.com:
version: '3'
services:
  pg:
    image: postgres:12
    ports: ["$VCB_PG_PORT:5432"]
    environment:
      POSTGRES_PASSWORD: foo
  redis:
    image: redis
    ports: ["$VCB_REDIS_PORT:6379"]
This sets up my two data systems, Redis and PostgreSQL. They forward ports, so I can access them on loopback. Precisely which ports are forwarded to the canonical, in-container ports is deferred until later, allowing me to run lots and lots of different projects at once, without (a) port collisions or (b) hard-to-remember automatically allocated ports.
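Under the hood this leans on Compose's environment variable substitution: the $VCB_* variables get resolved from my shell (or a .env file next to the YAML) at spin-up time. So, with some hypothetical port picks:

$ VCB_PG_PORT=15432 VCB_REDIS_PORT=16379 docker-compose up -d

and PostgreSQL answers on localhost:15432, Redis on localhost:16379, regardless of what every other project on the box has claimed.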
You'll also notice that this compose recipe lacks volumes of any sort. That's on purpose, and brings me to the next trick...
4. Ephemeral Data Systems
Most of my local work is cutting edge, experimental, push-the-envelope sort of work. Can we do X? What's the performance differential if we go with Y instead? How do we migrate data between these two versions of Z?
For most of that work, I don't care about persistence. In fact, if I've got schema setup scripts and data import / restore logic, I really don't want persistence. It just opens me up to a bunch of headaches related to leftover data.
Instead, I spin up containers with ZERO persistent volume mounts. When you run the postgres image, you're supposed to mount something at /var/lib/postgresql/data. Not me.
You see, if you don't bother mounting volumes, then a container recycle is enough to get you a brand new database, rip-roarin' and ready to go. No more mucking about with RDBMS transactions, cleanup scripts, or the like. With ephemeral containers, just bounce or recreate them with docker-compose.
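Day to day, that bounce looks something like this (the -v sweeps up the anonymous volumes that images like postgres declare behind your back, so they don't pile up on disk):

$ docker-compose down -v && docker-compose up -d

A few seconds later, there's a brand new, empty PostgreSQL and Redis, ready for the setup scripts to do their thing.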
These days, my data systems live for, on average, about an hour. Tops.
Conclusion
I've been very happy with Docker. It hasn't replaced how I run operational workloads (those still live primarily on Linux VMs / Kubernetes), but it has dramatically changed the way I test, develop, and evaluate software solutions.