Shared hosting with Docker
Running a single Docker container is easy: just execute a docker run
and the application is up.
Bind the port to the host system and it’s accessible to the world. But we need more:
- Access the applications using sub-domains
- HTTPS with Let’s Encrypt
- Optional user/password authentication with http-basic-auth
- Encapsulate container networks for security reasons
- Choose upstream backends based on context paths
- Set custom headers and proxy params
To get this done we’ll use nginx as a reverse proxy, acme.sh to issue the SSL certificates and docker-compose to bring up the containers and Docker networks.
Nginx reverse proxy
Only one container can bind the HTTP and HTTPS ports (80, 443), so we need a reverse proxy which forwards the traffic to the target container. I recommend nginx for this because it has many features and is an excellent piece of software. For simpler setups you could also take a look at Traefik, which has a Docker backend that automatically adjusts the config when containers start and stop, or craft your own like Crafted Docker Reverse Proxy.
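Before binding nginx to those ports, it’s worth checking that nothing on the host already occupies them (a distro-packaged Apache or nginx is a common culprit). A quick check, assuming ss is available on your host:

# Show processes already listening on ports 80/443
ss -tlnp | grep -E ':(80|443)\b'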
Preparing the config
Most of the nginx config is customized so we’ll mount it into the container.
Let’s pull the latest nginx alpine image: docker pull nginx:stable-alpine.
Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and busybox.
I put everything under /opt/nginx:
mkdir -p /opt/nginx/conf/sites
cd /opt/nginx/conf
# Fetch mime.types file
wget https://raw.githubusercontent.com/nginx/nginx/master/conf/mime.types
Now add the config files. Don’t let the directory paths confuse you: we’ll create volume mounts, so /opt/nginx/conf becomes /etc/nginx inside the container.
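For orientation, this is roughly how the host-side tree will look once all steps of this guide are done (certs, www, logs and acme are created in later sections):

/opt/nginx
├── acme/                      # acme.sh install (HTTPS section)
├── conf/                      # mounted to /etc/nginx in the container
│   ├── nginx.conf
│   ├── mime.types
│   ├── proxy_params.conf
│   ├── ssl_params.conf        # HTTPS section
│   ├── dhparams.pem           # HTTPS section
│   ├── certs/                 # HTTPS section
│   └── sites/
│       ├── 01-default.conf
│       └── 10-hello-world.conf
├── logs/                      # mounted to /var/log/nginx
├── www/                       # mounted to /var/www
│   └── default/
│       └── index.html
└── docker-compose.yml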
# nginx.conf
user nginx;
worker_processes 8;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

http {
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # For Docker container name lookups
    resolver 127.0.0.11 valid=30s;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

    client_body_timeout   12;
    client_header_timeout 12;
    send_timeout          10;
    keepalive_timeout     15;
    keepalive_requests    500;

    client_body_buffer_size     10K;
    client_header_buffer_size   1k;
    large_client_header_buffers 2 1k;
    client_max_body_size        0;

    server_tokens off;

    include /etc/nginx/sites/*.conf;
}
# proxy_params.conf
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Proxy ""; # prevent http-proxy attacks
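One thing to be aware of: proxy_params.conf forwards the Upgrade header but never sets the Connection header, so WebSocket connections won’t actually be upgraded. If one of your applications needs WebSockets, a common addition looks like this sketch (the map belongs in the http block of nginx.conf, the proxy_set_header next to the other proxy params):

# Map the Connection header for WebSocket upgrades (goes into the http block)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Goes next to the other proxy_set_header directives
proxy_set_header Connection $connection_upgrade;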
Default vhost
This page will be shown when no other virtual host matches.
Make sure that this is the first server block in your configuration.
# sites/01-default.conf
server {
    listen 80;
    root   /var/www/default;
}
Create the document root for this vhost block; we’ll later mount www from the host system.
mkdir -p /opt/nginx/www/default
echo "This is my example server" > /opt/nginx/www/default/index.html
First start
We’ll use docker-compose to bring up the containers. It is easy to use and documents settings like port mappings and volume mounts in one place.
# /opt/nginx/docker-compose.yml
version: '2'
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf:/etc/nginx
      - ./www:/var/www:ro
      - ./logs:/var/log/nginx
    networks:
      - public

networks:
  public:
    external: true
Most of this is self-explanatory, but take a look at the restart policy, which tells the Docker daemon to restart the container on failures and to auto-start it on system reboots.
The next important point is the external network called public. It is the shared network for all containers that have to be reached by nginx. First we have to create it with docker network create public.
Now we can start Nginx: docker-compose up -d. If you access your server with a browser you should now see the index.html from the default vhost. If you encounter problems, see docker-compose logs and /opt/nginx/logs.
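Two quick sanity checks can save a lot of debugging time: validate the configuration inside the container and fetch the default page without touching DNS:

# Validate the nginx configuration inside the container
docker exec nginx nginx -t
# Fetch the default vhost directly from the host
curl http://localhost/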
Logrotate
For now, all your nginx logs will grow indefinitely, which could exhaust the disk space. This can be solved
by deploying a logrotate config to /etc/logrotate.d/nginx. It will compress the files and remove old ones.
/opt/nginx/logs/*.log {
    daily
    missingok
    rotate 31
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        docker exec nginx nginx -s reload > /dev/null
    endscript
}
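You can verify the config with a dry run, or force one rotation to make sure the reload hook works:

# Show what logrotate would do without touching the files
logrotate -d /etc/logrotate.d/nginx
# Force a rotation once to test the postrotate hook
logrotate -f /etc/logrotate.d/nginx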
Hello world app
Now it’s time to start our first application and make it accessible through the reverse proxy. Let’s use psitrax/hello-world for this.
Create the file /opt/hello-world/docker-compose.yml:
version: '2'
services:
  hello-world:
    image: psitrax/hello-world
    restart: unless-stopped
    networks:
      - public
      # - default

networks:
  public:
    external: true
As you can see, the hello-world service gets attached to the public network so it can be reached by the nginx container. To create a virtual network for the internal services of this application, just add it to the list like the commented default. If you need a database, for example, add default to the database service’s networks list so the application can reach it but nginx cannot, as sketched below.
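As a sketch of this pattern (the postgres image and the db service name are only examples, not part of this guide), such a compose file could look like this:

# Sketch: app reachable by nginx, database only reachable by the app
version: '2'
services:
  hello-world:
    image: psitrax/hello-world
    restart: unless-stopped
    networks:
      - public
      - default
  db:
    image: postgres:alpine      # example only
    restart: unless-stopped
    networks:
      - default                 # internal network, not visible to nginx
networks:
  public:
    external: true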
Start the hello-world app by executing docker-compose up -d.
The container should now be in state Up, but you cannot access the HTTP service it provides because there is
no public port mapping. Let’s create the nginx config:
# /opt/nginx/conf/sites/10-hello-world.conf
server {
    listen      80;
    server_name hello.example.com;

    location / {
        # Using a variable forces nginx to resolve the container name at
        # request time, so nginx still starts if hello-world is down.
        set $upstr "http://hello-world:9000";
        proxy_pass $upstr;
        include proxy_params.conf;
    }
}
And reload the nginx config with docker exec nginx nginx -s reload.
Now open http://hello.example.com in your browser; you should see the hello-world page.
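If the DNS record for hello.example.com doesn’t point at your server yet, you can still test the vhost from the server itself by sending the Host header explicitly:

# Test the vhost without relying on DNS
curl -H "Host: hello.example.com" http://localhost/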
HTTPS
SSL is very important, so let’s encrypt.
Let’s Encrypt is a free, automated, and open Certificate Authority.
There are many ACME clients; I’ve chosen acme.sh because it’s written in bash and has only very common dependencies like curl.
# Install acme.sh
mkdir -p /opt/nginx/acme
export LE_WORKING_DIR=/opt/nginx/acme ; curl https://get.acme.sh | sh
# Place for the nginx certificates
mkdir -p /opt/nginx/conf/certs
# Generate the Diffie-Hellman params
openssl dhparam -out /opt/nginx/conf/dhparams.pem 2048
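A quick check that the acme.sh installation landed in the working directory:

# Should print the acme.sh version
/opt/nginx/acme/acme.sh --version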
It’s good practice to extract the SSL nginx config into its own file:
# ssl_params.conf
# enable session resumption to improve https performance
# http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 5m;
ssl_dhparam /etc/nginx/dhparams.pem;
# enables server-side protection from BEAST attacks
# http://blog.ivanristic.com/2013/09/is-beast-still-a-threat.html
ssl_prefer_server_ciphers on;
# disable SSLv3 (enabled by default since nginx 0.8.19) since it's less secure than TLS
# http://en.wikipedia.org/wiki/Secure_Sockets_Layer#SSL_3.0
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# ciphers chosen for forward secrecy and compatibility
# http://blog.ivanristic.com/2013/08/configuring-apache-nginx-and-openssl-for-forward-secrecy.html
ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
Newcert helper script
A little helper script for issuing the certificates makes it even easier and it catches some
common mistakes. It validates the .well-known path for the domain, issues the certificate and
deploys the files to nginx. Put it into /opt/nginx/acme/newcert.
#!/bin/bash
set -e

if [ -z "$1" ] ; then
  >&2 echo "Domain param missing"
  exit 1
fi

DOMAIN=$1
cd /opt/nginx/acme

mkdir -p /opt/nginx/www/$DOMAIN/.well-known
echo getcert-test > /opt/nginx/www/$DOMAIN/.well-known/getcert-test
if [ "$(curl -s http://$DOMAIN/.well-known/getcert-test)" != "getcert-test" ]; then
  >&2 echo "Access test to http://$DOMAIN/.well-known for webroot /opt/nginx/www/$DOMAIN failed!"
  exit 1
fi
rm /opt/nginx/www/$DOMAIN/.well-known/getcert-test

mkdir -p /opt/nginx/conf/certs/$DOMAIN

./acme.sh \
  --issue \
  --domain $DOMAIN \
  --webroot /opt/nginx/www/$DOMAIN

./acme.sh \
  --installcert \
  --domain $DOMAIN \
  --certpath      /opt/nginx/conf/certs/$DOMAIN/cert.pem \
  --keypath       /opt/nginx/conf/certs/$DOMAIN/key.pem \
  --fullchainpath /opt/nginx/conf/certs/$DOMAIN/fullchain.pem \
  --reloadcmd     "docker exec nginx nginx -s reload"
Make it executable: chmod +x /opt/nginx/acme/newcert.
Hello-world with HTTPS
Modify the 10-hello-world.conf:
server {
    listen      80;
    server_name hello.example.com;

    location ~ ^/\.well-known/ {
        root /var/www/hello.example.com;
        break;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# Uncomment this block once the certificate has been issued (see below):
#server {
#    listen      443 ssl;
#    server_name hello.example.com;
#
#    ssl_certificate     /etc/nginx/certs/hello.example.com/fullchain.pem;
#    ssl_certificate_key /etc/nginx/certs/hello.example.com/key.pem;
#    include /etc/nginx/ssl_params.conf;
#
#    location / {
#        set $upstr "http://hello-world:9000";
#        proxy_pass $upstr;
#        include proxy_params.conf;
#    }
#}
The whole 443 server block starts out commented because the certificate files don’t exist yet and nginx refuses to load an ssl listener without them. Reload the nginx configuration with docker exec nginx nginx -s reload. Now it’s time to issue the Let’s Encrypt certificate:
/opt/nginx/acme/newcert hello.example.com
Now the certificate files exist, so remove the comment chars from the 443 server block in 10-hello-world.conf and reload nginx. Open your browser; from now on you get redirected to HTTPS. Nice!
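One last note on renewals: the acme.sh installer registers a daily cron job which renews certificates shortly before they expire and then runs the command we passed via --reloadcmd, so no manual action should be needed. You can check the cron entry and trigger a renewal run by hand:

# The installer should have added a daily cron entry
crontab -l | grep acme.sh
# Run the renewal logic manually (only renews certificates close to expiry)
/opt/nginx/acme/acme.sh --cron --home /opt/nginx/acme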