One IP address, multiple SSL sites? Beating the great IPv4 squeeze

Forget IPv6. Names – I want NAMES! PACK. IT. IN

Cat in a small box photo via Shutterstock

We're fresh out of IPv4 addresses. Getting hold of a subnet from your average ISP for hosting purposes is increasingly difficult and expensive, and even the public cloud providers are getting stingy. While we wait for IPv6 to become usable, there are ways to stretch out the IPv4 space.

There are several big problems with IPv6 that I won't bother rehashing here, but the real barrier to adoption is that consumer-facing ISPs in many parts of the world still aren't handing out IPv6 addresses to subscribers. Canada in particular is bad for this.

Even putting aside the inability to get IPv6 addresses directly from the ISPs on consumer lines, getting an IPv6 subnet for use with business fibre connections can often be a nightmare of justification forms and bureaucratic nonsense.

The ISP of one of my clients, for example, wants me to detail the name of each computer that will be attached to a given IPv6 address and what it's used for. I just stare at the spreadsheet like a deer in headlights, unsure where to even begin with something like that. I'm not sure that drawing a penis in the spreadsheet cells and sending it back labelled "my answer makes as much sense as your question" is going to get me what I need.

For the foreseeable future, then, we'll need to make sure all our web-facing services are visible via IPv4. There are still people who just don't have IPv6 access, and from the looks of things this is a problem that's going to persist. So how do we go about taking the few IPv4 addresses we have and making the most out of them?

Reverse proxies

To understand reverse proxies, we should talk a little bit about Network Address Translation (NAT). NAT breaks the end-to-end model obsession that is responsible for most of the horrible things about IPv6. The end-to-end model is the idea that computer A should be able to address computer B with no translation layers in between. The IP address of each computer should be publicly routable and communication interfered with as little as possible.

NAT is the opposite of that. NAT is a fantastic means of plopping an entire network down behind a single IP address and making individual servers behind that IP available on different ports. Instead of offering up RDP access to a computer on the default port of 3389, for example, you might set it to 31826, causing the end user the overwhelming burden of typing out rdp.example.com:31826 instead of rdp.example.com when connecting.
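As a sketch of what that port remapping looks like in practice, here are hypothetical iptables rules for a Linux gateway; the interface name and all addresses are placeholder assumptions, not values from any real setup:

```shell
# Forward external port 31826 on the gateway to RDP (3389) on an internal host.
# eth0, 203.0.113.10 and 192.168.1.50 are placeholders - adjust to taste.
iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.10 -p tcp --dport 31826 \
    -j DNAT --to-destination 192.168.1.50:3389

# Rewrite the source address on the way back out so replies return via the gateway.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```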

Developers coding applications to live behind NAT have to think a little bit about what might live between the two computers and then use the same techniques and libraries everyone else has used for the past 20 years to make things behind NAT work. I'm told it's ruinously awful.

If IPv6 purists hate NAT I can only imagine what they think of reverse proxies, but reverse proxies are the future. A reverse proxy accepts all traffic for a given service on a given IP address, figures out which back-end server should serve the request and then routes the traffic accordingly. These are most common with HTTP and HTTPS traffic.

HTTP requests contain a Host header; for HTTPS, the client supplies the hostname via Server Name Indication (SNI) during the TLS handshake. Either way, the client states which site it wants: instead of simply asking a web server for the website at its IP address, it asks specifically for www.example.com. Web traffic reverse proxies look at that hostname, compare it to their configuration and pass the traffic back to the appropriate back-end server.
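In nginx terms, that name-based routing is just multiple server blocks sharing one listen address, distinguished by server_name. A minimal sketch (the hostnames and backend addresses here are placeholders):

```nginx
server {
    listen 80;
    server_name www.example.com;
    location / { proxy_pass http://172.16.0.10; }  # backend for example.com
}

server {
    listen 80;
    server_name www.example.org;
    location / { proxy_pass http://172.16.0.11; }  # backend for example.org
}
```

Both sites share the same public IP; nginx picks the block whose server_name matches the requested hostname.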

Web traffic reverse proxies typically also do caching of static content so as to speed up those back-end servers. Increasingly they're also used for denial of service detection, content inspection or – and this is my favourite – providing an SSL frontend to a whole mess of individual sites that don't do SSL natively. Those backend websites can be running any web server; if they deliver traffic over HTTP, we can reverse-proxy them with nginx.

Commercial reverse proxy software does, of course, also exist. Citrix NetScaler VPX can act as one, as can Barracuda NG Firewall, Smoothwall UTM and Untangle. There are many, many more, and many exist for services other than HTTP. But let's use nginx – free, open source and very, very widespread.

A practical example

Reverse proxies seem daunting, but they aren't quite so terrible to implement as you might imagine. It took me a couple of days of fussing to get a CentOS 7 reverse proxy set up; here are the basics. (Please note I'm presuming a base level of proficiency with RHEL-based Linuxes that includes installing, logging in and using both vim and yum.)

Do a minimal CentOS 7 install, disable SELinux, and follow the basic steps outlined here:

yum install epel-release
yum install net-tools wget python nginx git
yum update
mkdir /var/www/nginx_cache
mkdir /var/www/nginx_tmp
mkdir /var/www/letsencrypt-auto
chown nginx:nginx /var/www/nginx_cache
chown nginx:nginx /var/www/nginx_tmp
chown root:root /var/www/letsencrypt-auto
chmod 0755 /var/www/nginx_cache
chmod 0755 /var/www/nginx_tmp
chmod 0755 /var/www/letsencrypt-auto
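The steps above install nginx but never actually start it; assuming the stock systemd units that ship with the CentOS 7 nginx package, you'll also want:

```shell
systemctl enable nginx
systemctl start nginx
```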

Replace /etc/nginx/nginx.conf with this one:

user nginx;
worker_processes 2;
worker_rlimit_nofile 2048;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
    multi_accept       on;
    use                epoll;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    # Proxy cache and temp configuration.
    proxy_cache_path /var/www/nginx_cache levels=1:2
                     keys_zone=main:10m
                     max_size=1g inactive=30m;
    proxy_temp_path /var/www/nginx_tmp;

    # Gzip Configuration.
    gzip on;
    gzip_disable msie6;
    gzip_static on;
    gzip_comp_level 4;
    gzip_proxied any;
    gzip_types text/plain
               text/css
               application/x-javascript
               text/xml
               application/xml
               application/xml+rss
               text/javascript;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    client_max_body_size 100m;

    # Report real IPs from X-Forwarded-For header from Cloudflare IPs
    set_real_ip_from 103.21.244.0/22;
    set_real_ip_from 103.22.200.0/22;
    set_real_ip_from 103.31.4.0/22;
    set_real_ip_from 104.16.0.0/12;
    set_real_ip_from 108.162.192.0/18;
    set_real_ip_from 131.0.72.0/22;
    set_real_ip_from 141.101.64.0/18;
    set_real_ip_from 162.158.0.0/15;
    set_real_ip_from 172.64.0.0/13;
    set_real_ip_from 173.245.48.0/20;
    set_real_ip_from 188.114.96.0/20;
    set_real_ip_from 190.93.240.0/20;
    set_real_ip_from 197.234.240.0/22;
    set_real_ip_from 198.41.128.0/17;
    set_real_ip_from 199.27.128.0/21;
    set_real_ip_from 2400:cb00::/32;
    set_real_ip_from 2606:4700::/32;
    set_real_ip_from 2803:f800::/32;
    set_real_ip_from 2405:b500::/32;
    set_real_ip_from 2405:8100::/32;
    set_real_ip_from 2c0f:f248::/32;
    set_real_ip_from 2a06:98c0::/29;
    real_ip_header     CF-Connecting-IP;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    include /etc/nginx/conf.d/*.conf;
}

Create /etc/nginx/conf.d/servers.conf and add in either:

# Backend server definition
upstream examplebackend {
    server 172.16.0.199;
}

server {
    listen 80;
    server_name example.com www.example.com;
    access_log  /dev/null;

    # Allows letsencrypt-auto to verify domain ownership
    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        allow all;
        root /var/www/letsencrypt-auto;
    }

    proxy_ignore_headers "Cache-Control" "Expires";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 100m;
    proxy_pass_header Set-Cookie;

    # Catch the wordpress cookies.
    # Must be set to blank first for when they don't exist.
    set $wordpress_auth "";

    if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
        set $wordpress_auth wordpress_logged_in_$1;
    }

    # Set the proxy cache key
    set $cache_key $scheme$host$uri$is_args$args;

    # Don't cache these pages.
    location ~* ^/(wp-admin|wp-login\.php) {
        proxy_pass http://examplebackend;
    }

    location / {
        proxy_pass http://examplebackend;
        proxy_cache_bypass $cookie_nocache $arg_nocache;
        proxy_cache main;
        proxy_cache_key $cache_key;
        proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
        # Fallback to stale cache on certain errors.
        # 503 is deliberately missing, if we're down for maintenance
        # we want the page to display.
        proxy_cache_use_stale error
                              timeout
                              invalid_header
                              http_500
                              http_502
                              http_504
                              http_404;

        # 2 rules to dedicate the no caching rule for logged in users.
        proxy_cache_bypass $wordpress_auth; # Do not cache the response.
        proxy_no_cache $wordpress_auth; # Do not serve response from cache.
  }
}

(if you want to cache content) or

# Backend server definition
upstream examplebackend {
    server 172.16.0.199;
}

server {
    listen 80;
    server_name example.com www.example.com;
    # Allows letsencrypt-auto to verify domain ownership
    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        allow all;
        root /var/www/letsencrypt-auto;
    }

    proxy_ignore_headers "Cache-Control" "Expires";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 100m;
    proxy_pass_header Set-Cookie;

    # Catch the wordpress cookies.
    # Must be set to blank first for when they don't exist.
    set $wordpress_auth "";
    if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
        set $wordpress_auth wordpress_logged_in_$1;
    }

    # Set the proxy cache key
    set $cache_key $scheme$host$uri$is_args$args;

    location / {
        proxy_pass http://examplebackend;
        proxy_cache_valid any 0;
  }
}

(if you want to bypass caching for that backend server). These configs are designed to be WordPress compatible, which is what occupied the majority of my time.

Add one of these blocks for each backend server you have. If you have a backend server that hosts multiple websites you can simply add all the domains it hosts. If you want to mix and match cached and non-cached, that's cool too; you can have as many proxy blocks pointing to a backend server as you want. I'll leave decoding the proxy blocks as an exercise for the reader; I trust they're sufficiently well commented to grasp the basics.
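For instance, adding a second site is just another upstream/server pair. A sketch, with placeholder names and addresses (an upstream can also list several servers, which nginx will round-robin between):

```nginx
upstream blogbackend {
    server 172.16.0.200;
    server 172.16.0.201;  # optional second server; nginx load-balances between them
}

server {
    listen 80;
    server_name blog.example.com;
    location / {
        proxy_pass http://blogbackend;
    }
}
```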

Poke the appropriate holes in your firewall(s). Make sure that port 80 on the IPv4 address in question is forwarded back to the nginx server. At this point, if you restart nginx, you should have a fully functional HTTP reverse proxy.

Next up is enabling Let's Encrypt so that we can get SSL configured. My work here is largely based on this blog post by John Maguire. Let's start by installing letsencrypt. (The rest of this presumes you're doing this as root and cloning letsencrypt into /root. Please discuss the security implications of this with someone who knows Linux well, and adjust the scripts appropriately once you've decided where you actually want things to live.)

$ cd ~
$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto

It will complain that "no installers are available on your OS yet; try running 'letsencrypt-auto certonly' to get a cert you can install manually". That's fine; ignore it. Now let's make a nice big prime for OpenSSL to use as a Diffie-Hellman parameter. This will take forever, so go get a coffee. Drink it slowly.

$ cd /etc/nginx
$ openssl dhparam -out dhparam.pem 4096

Once that's done, create a script called letsencrypt_gen in /root/letsencrypt and copy:

#!/usr/bin/env bash

if [[ -z "$DOMAINS" ]]; then
    echo "Please set DOMAINS environment variable " \
         "(e.g. \"-d example.com -d www.example.com\")"
    exit 1
fi

if [[ -z "$DIR" ]]; then
    export DIR=/var/www/letsencrypt-auto
fi


# $DOMAINS is deliberately left unquoted so each "-d example.com"
# becomes its own argument.
mkdir -p "$DIR" && /root/letsencrypt/letsencrypt-auto certonly -v \
    --server https://acme-v01.api.letsencrypt.org/directory \
    --webroot \
    --webroot-path="$DIR" \
    $DOMAINS

service nginx reload

into it. chmod 0755 the script after you've created it. Next, run the following to generate a letsencrypt certificate:

$ cd ~/letsencrypt
$ DOMAINS="-d example.com -d www.example.com" \
  /root/letsencrypt/letsencrypt_gen

Assuming the DNS for example.com (the first domain in the list) points to the nginx server, this should create the appropriate files under /etc/letsencrypt. You can now add

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # certificates from letsencrypt

    ssl_certificate      /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key  /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites
    ssl_dhparam /etc/nginx/dhparam.pem;

    # intermediate configuration (TLSv1 and TLSv1.1 retained for older clients)
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';

    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers required) - 6 months
    # add_header Strict-Transport-Security max-age=15768000;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;

    # verify chain of trust of OCSP response
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

    proxy_ignore_headers "Cache-Control" "Expires";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 100m;
    proxy_pass_header Set-Cookie;

    # Catch the wordpress cookies.
    # Must be set to blank first for when they don't exist.
    set $wordpress_auth "";
    if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
        set $wordpress_auth wordpress_logged_in_$1;
    }

    # Set the proxy cache key
    set $cache_key $scheme$host$uri$is_args$args;

    location / {
        proxy_pass http://examplebackend;
        proxy_cache_valid any 0;
     }
}

to /etc/nginx/conf.d/servers.conf and restart nginx. Poke appropriate firewall holes (port 443 this time), etc.
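Once the SSL side works, you may prefer to stop serving the plain-HTTP version of a site entirely. One common approach – shown here as a sketch, not something from the config above – is to swap that site's port 80 proxy block for a redirect, keeping the acme-challenge location on port 80 so renewals still work:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    # Let's Encrypt renewals still arrive over plain HTTP.
    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        allow all;
        root /var/www/letsencrypt-auto;
    }

    # Everything else bounces to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```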

Let's Encrypt certs don't last long (90 days), so you'll want to set up a nightly cronjob that looks for any certificates about to expire and renews them. The cronjob command is simply /root/letsencrypt/letsencrypt-auto renew, which in the case of this guide would have to be run as root.
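A minimal root crontab entry for that might look like the following; the timing is arbitrary, the --quiet flag is borrowed from certbot's renew command, and the reload ensures nginx picks up any renewed certificates:

```shell
# Run the renewal check at 03:30 every night, then reload nginx.
30 3 * * * /root/letsencrypt/letsencrypt-auto renew --quiet && /usr/bin/systemctl reload nginx
```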

Your reverse proxy should now be responding to SSL requests for example.com and passing those back to the non-SSL backend server. No need to set up SSL on the backend server, no need to buy a certificate. You may now run virtually unlimited websites behind a single IP address, all with their own automatically renewing SSL certificates.

Coping with a scarce resource

It wasn't that long ago that setting up SSL websites required each site that wanted SSL to have its own IP address. SNI changed that, and suddenly I can do remarkable things with 30 IPv4 addresses. Ten years ago, 30 IPv4 addresses felt positively restrictive.

Reverse proxies are only one example of how the technology industry is coping with IPv4 address exhaustion. Far from accepting the purity of the end-to-end model as sacred and rushing out to greet IPv6 as our One True Salvation, we're becoming more efficient at using the IPv4 space, clinging to it by our fingernails until the bitter end.

Yes, we're out of IPv4 addresses, but that doesn't mean we can stop using IPv4. It means we have to get smarter about it. It will be many years yet before most of us can start deploying public-facing services without IPv4 connectivity.

In the meantime, best to brush up on reverse proxies and next generation firewalls to see what capabilities they have for extending the life of your existing IPv4 footprint. As demonstrated above, the technologies involved really aren't that complicated. ®


Biting the hand that feeds IT © 1998–2017