Set up: Synology hosting docker containers


How to configure your Synology to host public-facing docker containers, including DDNS, reverse proxy and SSL.

Serving public-facing sites from docker containers on a Synology is very much a matter of "good enough".  Sure, there are some very specific use cases for building things just for you and your family (media servers, photo sharing and so on), but as the barriers to entry continue to plummet, how hard can it be to build something in a docker container and make it accessible to the world?

Living in the UK, I have a reasonably high-speed broadband service (~70 Mbps) which has been pretty stable (I can't remember the last time it went down), and I don't pay for a static IP address.

I have a Synology DS1019+ connected to a UPS, just in case.

I love my Synology NAS and I also loathe it.  It is fantastically simple to set up and use (mostly), with its pretty and (quite) intuitive GUI, the DiskStation Manager software.  The trouble is that what it does is REALLY quite complicated.  Occasionally you encounter file permission errors, or strange messages about root users; if you want to use Terminal or SSH, you may need to start looking "under the hood" at how Synology implements the complicated things you thought you knew already.

And so it is with docker...

This guide is intended to help once you have your docker container successfully running on your Synology and you are looking to provide external access.  I am assuming that you have already set up your router to forward web traffic (ports 80 and 443) to your Synology, and will deal only with the specifics on the DiskStation.

So first up, you need a domain name.  I have used a range of different registrars over the years, but have settled on Google Domains as my preferred registrar, for reasons that will become clear later.  Suffice to say that it is neither the cheapest nor the most expensive, and it offers the widest possible control over your DNS.

There are basically three distinct parts to getting your docker container externally accessible:

  1. The domain
  2. The reverse proxy
  3. The SSL certificate

The Domain

You can define how users will access your service/site via your domain; for example, you might serve it off a subdomain.  You will need to automatically update your DNS settings with your IP address whenever it changes.  This is called Dynamic DNS (DDNS), and it is supported out of the box both by Synology (which needs to detect that your IP has changed) and by Google (which needs to update its DNS servers).  This step is NOT necessary if you have a static IP address.

On the Google Domains DNS page, add a new synthetic record of type Dynamic DNS and place @ in the subdomain box.  This will point all traffic for the bare domain at your Synology box.  Alternatively, you could create a subdomain for your service here.

Once you have this set up, Google will provide you with a username and password which will allow you to update your public IP address:

Head over to your Synology Control Panel and, under External Access, add a new DDNS record.

Service provider: Google

Hostname: your domain

Username: from the Google synthetic record

Password: from the Google synthetic record

Your Synology should now update Google whenever your ISP changes your WAN (public) IP address.
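The Synology DDNS client does all of this for you, but it can help to verify the credentials by hand first.  Here is a minimal sketch of the update call, assuming the Google Domains Dynamic DNS API as documented at the time of writing; the username, password and domain below are placeholders for your own values:

```shell
# Placeholders: substitute the credentials from your synthetic record
DDNS_USER="generated-username"
DDNS_PASS="generated-password"
DDNS_HOST="example.com"

# The dyndns2-style endpoint Google Domains exposes for synthetic records
UPDATE_URL="https://${DDNS_USER}:${DDNS_PASS}@domains.google.com/nic/update?hostname=${DDNS_HOST}"

# Uncomment to actually send the update; a successful response looks like "good <ip>"
# curl -s "$UPDATE_URL"
echo "$UPDATE_URL"
```

If the response is `badauth`, re-check the username/password pair from the synthetic record before blaming the Synology.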

The reverse proxy

When traffic arrives at your front door (the Synology), it needs to know what to do with it.  This part of the setup allows you to get https:// working properly, so traffic directed to your domain will be routed to the container.

Using the Application Portal, click on the Reverse Proxy tab and create a rule...

First, create a rule that redirects HTTP traffic to HTTPS.  This tells the Synology that ANY connection made over HTTP should be re-routed to HTTPS (important later).  The description is just for you, but you need to make sure that port 80 traffic is rerouted to port 443.
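DSM's reverse proxy is built on nginx under the hood, so this GUI rule amounts to roughly the following server block (illustrative only: `example.com` is a placeholder, and you should make the change through the GUI rather than editing nginx files, which DSM regenerates):

```nginx
# Roughly what the HTTP -> HTTPS redirect rule generates
server {
    listen 80;
    server_name example.com;
    # Send every plain-HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}
```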

Next, create another reverse proxy record that tells the Synology what to do with HTTPS traffic (which is now ALL traffic) for your domain.  This time your domain's traffic will arrive on port 443 and you need to reroute it to the docker container running your service.

Protocol: HTTP

Hostname: the IP address of your Synology

Port: the port that your container is running on

You should ensure that your Synology has a static IP address on your local network, AND your docker container should map to a fixed host port (so the service always starts at the same location).
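A fixed host port is set with docker's `-p` flag when the container is created.  A hypothetical example, with the image name and port numbers standing in for your own service:

```shell
# Publish container port 80 on host port 8080 so the reverse proxy
# always has a stable target; the restart policy survives reboots.
docker run -d \
  --name mysite \
  --restart unless-stopped \
  -p 8080:80 \
  nginx:alpine

# The reverse proxy record above would then point at port 8080
```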

Also, I have not found any detailed documentation on the Enable HSTS or Enable HTTP/2 options; they don't seem to make much difference either way (on or off).

The SSL cert

Your Synology needs an SSL certificate for your domain, to let visitors know that you are who you say you are.

In the Control Panel —> Security, select the Certificate tab and add a new one.  Choose the "Add a new certificate" option and then "Get a certificate from Let's Encrypt".

Complete the form, making sure that you include the subdomain IF you set one up with Google.

Once you have the certificate, you will need to link it to the reverse proxy settings you have created.  Do this using the "Configure" option on the Certificate tab.
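Once linked, you can sanity-check the certificate that is actually being served from any machine with openssl (`example.com` is a placeholder for your domain):

```shell
# Print the subject, issuer and validity dates of the served certificate;
# the issuer should show Let's Encrypt once everything is linked correctly.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```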

If you have stuck with it to the end, I hope you are up and running.  Mileage may vary; this was correct when written, and I would be interested in any thoughts you may have.  Did it work for you?  Leave me a comment.