Recently I wanted some analytics on my blog to see what my popular pages were, and where the traffic was being referred from. Google Analytics doesn’t float my boat as it’s not exactly privacy-preserving, and I don’t want to feed the data-hungry giant. I looked at a few options, and decided to go with umami.
We’ll go through how to set up all the related services first, and then install umami. You might be interested in standing up your analytics differently, or you might just use different services and so can’t follow this guide step by step, but at the very least you can see how to deploy umami with Docker.
As a high-level architecture, umami will be installed on a host along with a web server. We’ll configure DNS to point our domain to the host, and the web server will proxy traffic from the internet to umami. This host serves our umami administrator panel, as well as the tracking JavaScript. The tracking code is generated by umami, and you’ll need to inject it into the head of the website that you’d like analytics on. We’ll set up our analytics domain as analytics.mysite.com, so if you ever see that domain, replace it with your own domain (or IP address).
Requirements
Minimum
At the bare minimum, you’ll need an internet-accessible host that you can install umami on, with a public IP reachable either via DDNS (I wrote about setting this up for free with ddclient and Cloudflare) or a static IP. This is technically enough to get your analytics configured and working, but it’s a bit rough around the edges. I use a VPS with a static IP, and it’s got Ubuntu 20.04 installed.
Recommended
To make our analytics setup more complete, here are the other components you’ll want in addition to the minimum ones:
- A web server. We’ll use nginx.
- Certificates. We’ll use acme.sh to get ’em (which uses Let’s Encrypt).
- DNS management. We’ll use Cloudflare.
- A domain. Where you buy it from doesn’t matter.
nginx
On our Ubuntu instance, we’ll need to install nginx so we can enable TLS, have a nice-looking domain, and set ourselves up nicely in case we want to run other things on the host.
We’ll grab the latest version of nginx instead of whatever the Ubuntu repository has. To do that, we’ll add nginx’s signing key and the official nginx repository to our list of sources.
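Something like the below does the trick (the signing key URL comes from nginx’s own install documentation, and apt-key was still the usual approach on Ubuntu 20.04):

```bash
# Fetch nginx's signing key and add it to apt's trusted keys
wget https://nginx.org/keys/nginx_signing.key
sudo apt-key add nginx_signing.key
```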
Now we’ll want to update our sources. To get the most recent stable version for Focal Fossa (Ubuntu 20.04), we’ll edit our /etc/apt/sources.list file with:
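Use whichever editor you prefer, for example:

```bash
sudo nano /etc/apt/sources.list
```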
Then add the below snippet into the file.
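The entries should look something like this, based on nginx’s documented stable repository for focal:

```
deb [arch=amd64] https://nginx.org/packages/ubuntu/ focal nginx
deb-src https://nginx.org/packages/ubuntu/ focal nginx
```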
Note that if you don’t include [arch=amd64] you’ll get the below warning when doing apt update. It’s benign, but still annoying:
N: Skipping acquire of configured file 'nginx/binary-i386/Packages' as repository 'https://nginx.org/packages/ubuntu focal InRelease' doesn't support architecture 'i386'
Now we can actually install nginx.
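That’s just:

```bash
sudo apt update
sudo apt install nginx
```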
We don’t have certificates yet, but we will soon. We can put some placeholder files in so that the nginx configuration validator doesn’t complain. Don’t forget to replace analytics.mysite.com with your domain.
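One way to keep the validator happy is a throwaway self-signed certificate. The directory and file names below are just my choice for this sketch, so keep them consistent with the nginx configuration and the acme.sh install step later:

```bash
# Throwaway self-signed certificate so nginx -t doesn't fail before real certs exist
sudo mkdir -p /etc/nginx/ssl/analytics.mysite.com
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -subj "/CN=analytics.mysite.com" \
  -keyout /etc/nginx/ssl/analytics.mysite.com/key.pem \
  -out /etc/nginx/ssl/analytics.mysite.com/fullchain.pem
```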
Now we’re going to add in an nginx configuration file. Again, you’ll need to change analytics.mysite.com to your domain name. I have some safe-to-deploy security headers in there, with some commented out as they’ll require customisation. I recommend reading about them so you can configure them for your site!
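With the official nginx package, anything under /etc/nginx/conf.d/ ending in .conf gets loaded, so a file like the below works (the filename itself is just a convention):

```bash
sudo nano /etc/nginx/conf.d/analytics.mysite.com.conf
```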
The snippet below is the nginx configuration file that you’ll need to change up a bit.
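A minimal sketch of what that file can look like, assuming the certificate paths from the placeholder step and umami listening on 127.0.0.1:3001 (adjust the headers and proxy settings to suit your setup):

```nginx
server {
    listen 80;
    server_name analytics.mysite.com;

    # Redirect plain HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name analytics.mysite.com;

    # Must match wherever the (placeholder, then real) certificates live
    ssl_certificate     /etc/nginx/ssl/analytics.mysite.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/analytics.mysite.com/key.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    # Safe to deploy security headers
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "DENY" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=63072000" always;
    # Needs per-site customisation, so it's left commented out
    #add_header Content-Security-Policy "default-src 'self'" always;

    location / {
        # umami listens on localhost:3001 (see the Docker Compose configuration)
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```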
We should be able to start nginx now.
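For example, validate the configuration first, then start nginx and have it come up on boot:

```bash
sudo nginx -t
sudo systemctl enable --now nginx
```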
acme.sh
On our Ubuntu instance, we need to get signed certificates, and we can use acme.sh to do that for us. In this case, we’re going to install it as root, because we’ll need the tool to have root permissions to reload the nginx configuration for us when it issues a new certificate. Naturally, we’re going to continue avoiding all reasonable security practices, and download a random shell script from the internet and immediately run it. For real though, you can visit the URL and see what it’s doing.
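Roughly:

```bash
# Become root, then fetch and run the acme.sh installer
sudo su -
curl https://get.acme.sh | sh
# Pick up the acme.sh alias the installer adds
source ~/.bashrc
```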
We’re going to use acme.sh to issue domain-validated certificates using Cloudflare’s DNS API. acme.sh has documentation on DNS APIs for various services which you can find here.
Once again, make sure to update analytics.mysite.com to your domain name, and add in your own values to be exported. You’ll notice we’re exporting the credentials; we want these to hang around for issuing certificates in the future, so acme.sh saves the values in ~/.acme.sh/account.conf and reuses them when needed. Note that in this case, ~/ is root’s home directory, not your user account’s.
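A sketch using Cloudflare’s global API key (acme.sh also supports scoped API tokens via CF_Token); replace the placeholder values with your own:

```bash
# Cloudflare credentials - acme.sh saves these to ~/.acme.sh/account.conf for reuse
export CF_Key="your-cloudflare-global-api-key"
export CF_Email="you@example.com"

# Issue a certificate for the analytics domain via Cloudflare's DNS API
acme.sh --issue --dns dns_cf -d analytics.mysite.com
```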
Now we’ll “install” them to the correct location, and allow acme.sh to reload the nginx service as it needs to.
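Something along these lines, matching the certificate paths used in the nginx configuration above:

```bash
acme.sh --install-cert -d analytics.mysite.com \
  --key-file       /etc/nginx/ssl/analytics.mysite.com/key.pem \
  --fullchain-file /etc/nginx/ssl/analytics.mysite.com/fullchain.pem \
  --reloadcmd      "systemctl reload nginx"
```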
Make sure to drop out of root back into our standard user.
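That’s just:

```bash
exit
```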
Docker and Docker Compose
On our Ubuntu instance, we’ll need to install Docker and Docker Compose as that’s how we’re going to deploy umami.
The curl command below for Docker Compose will grab version 1.27.4, so check the releases in the GitHub repository to see if there’s a more up to date version.
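Something like the below, using the docker.io package from Ubuntu’s repositories for Docker itself (Docker’s own repository works too):

```bash
# Docker from the Ubuntu repositories
sudo apt install docker.io
sudo systemctl enable --now docker

# Docker Compose 1.27.4 - check GitHub for a newer release first
sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```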
umami
Getting umami installed with Docker is easy: all we need is the repository on our Ubuntu instance (along with Docker and Docker Compose).
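Assuming the repository still lives at mikecao/umami:

```bash
git clone https://github.com/mikecao/umami.git
cd umami
```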
The docker-compose.yml file is where we can configure umami. If you don’t update the ports to be "127.0.0.1:3001:3000" then nginx won’t be able to correctly proxy the connection. We made this change so that port 3001 is only exposed on localhost. We added "restart: always" so the containers will come back up if the host dies and resurrects for any reason.
You should change:
- The password - find and replace mylengthypassword.
- The hash salt - find and replace thisismyrandomhashsalt.
The password and the hash salt are for the umami database, so keep them strong, and keep them safely stored in a password manager. It’s also worth noting that these credentials will be available in plaintext on the host so, you know, that’s a thing.
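For reference, after those changes the compose file ends up looking roughly like the below. Treat it as a sketch and base your edits on the file that ships in the repository, since image tags, schema initialisation, and volume names vary between umami releases:

```yaml
version: '3'
services:
  umami:
    # The image/tag depends on the umami release you've cloned
    image: ghcr.io/mikecao/umami:postgresql-latest
    ports:
      - "127.0.0.1:3001:3000"   # only reachable from localhost; nginx proxies to this
    environment:
      DATABASE_URL: postgresql://umami:mylengthypassword@db:5432/umami
      DATABASE_TYPE: postgresql
      HASH_SALT: thisismyrandomhashsalt
    depends_on:
      - db
    restart: always
  db:
    image: postgres:12-alpine
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: mylengthypassword
    volumes:
      # Seed the database schema on first run
      - ./sql/schema.postgresql.sql:/docker-entrypoint-initdb.d/schema.postgresql.sql:ro
      - umami-db-data:/var/lib/postgresql/data
    restart: always
volumes:
  umami-db-data:
```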
Now we’ll start umami!
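From the umami directory:

```bash
sudo docker-compose up -d
```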
umami Backups
If you want to take a backup of your umami Postgres database, you can export it to /tmp/ with the below. It will place a file named something similar to umami_db_dump_26-12-2020_18_00_00.gz.
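Something like the below should do it, assuming Docker Compose’s default container name of umami_db_1 and the umami database user from the compose file:

```bash
# Dump the umami database from the Postgres container and compress it into /tmp/
sudo docker exec umami_db_1 pg_dump -U umami umami \
  | gzip > /tmp/umami_db_dump_$(date +%d-%m-%Y_%H_%M_%S).gz
```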
Then if you have a fresh install of umami, you can import your backup with the below snippet. You’ll need to have the .gz backup file in your current directory, and use your own filename in the command instead of the umami_db_dump_26-12-2020_18_00_00.gz that’s in the snippet.
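Again assuming the container is named umami_db_1:

```bash
# Restore the dump into the Postgres container of a fresh install
gunzip -c umami_db_dump_26-12-2020_18_00_00.gz \
  | sudo docker exec -i umami_db_1 psql -U umami -d umami
```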
Content Security Policy
On the websites that you’d like to use umami analytics with, if you’ve got a Content Security Policy (CSP), then you will need to add your domain or IP into it. If you don’t, your tracking JavaScript will not be loaded. If you don’t have a CSP, I’d recommend implementing one!
You’ll want something like Content-Security-Policy: default-src 'self'; script-src 'self' analytics.mysite.com; connect-src analytics.mysite.com; along with your other requirements.
Last Steps
Now that you have all of your infrastructure and software ready to go, there are two more things you need to do. If you need more info about umami, you can check out the official documentation.
Change admin Password
Visit your domain and log in to umami. The default username is admin with a default password of umami. Change the password immediately to something strong, and store it in a password manager.
Add a Website
To get analytics flowing into umami, first add a website in the admin panel. Then get the tracking code, and add it to the head of the website you’d like analytics on.