Mastodon

Mastodon is a Twitter replacement that runs on the Fediverse. Using it is a great way to escape Elon’s hijinks after he blew $44 billion on Twitter/X and subsequently turned it into a shithole.

Unfortunately, there’s not a lot of great documentation on how to run Mastodon with Docker Compose, which is my orchestration tool of choice. I’ll cover how to do exactly that in this blog post.

Shout-out to Ben Tasker’s Docker Compose Guide for Mastodon, which is how I got my Mastodon instance running. In my version of the guide I am trying to give instructions alongside full, finalized configuration files, as I found the piecemeal style of Ben’s guide to be difficult to follow at times.

How to Self Host Mastodon

With that said, let’s get into the guide:

  1. First, you’ll want to create a directory where we’ll keep the docker-compose.yml file and all related deployment data. Run mkdir mastodon && cd mastodon to make this directory and enter it.
  2. Copy over the below docker-compose.yml file into the directory you just created.
version: '3'
services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - ./postgres14:/var/lib/postgresql/data

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data

  web:
    image: ghcr.io/mastodon/mastodon:v4.2.7
    restart: always
    env_file: .env.production
    command: bundle exec puma -C config/puma.rb
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    ports:
      - '127.0.0.1:3000:3000'
    depends_on:
      - db
      - redis
    volumes:
      - ./public/system:/mastodon/public/system

  http:
    restart: always
    image: nginx:mainline-alpine
    container_name: mastodon-http
    networks:
      - external_network
      - internal_network
    ports:
      - 9880:80
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./public:/mastodon/public

  streaming:
    image: ghcr.io/mastodon/mastodon:v4.2.7
    restart: always
    env_file: .env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']
    ports:
      - '127.0.0.1:4000:4000'
    depends_on:
      - db
      - redis

  sidekiq:
    image: ghcr.io/mastodon/mastodon:v4.2.7
    restart: always
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - external_network
      - internal_network
    volumes:
      - ./public/system:/mastodon/public/system
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]

networks:
  external_network:
  internal_network:
    internal: true

Note that this docker-compose.yml file doesn’t contain an Elasticsearch component, which is required if you wish to enable full-text search. However, we’re going to leave that out of scope for this guide, as it is more involved.

  3. Unfortunately, we’ll need to run some manual steps here in order to prepare the Postgres database for use. Generate a random password for the Postgres user (a quick way to do this is shown after the command below). Then run the following command, replacing <PASSWORD> with your generated password:
docker run --rm --name postgres \
-v $PWD/postgres14:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD="<PASSWORD>" \
-d postgres:14-alpine
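If you need a quick way to generate that password, something like this works (assuming openssl is available on your host; any password generator will do):

openssl rand -hex 32    # prints a 64-character hex string to use as <PASSWORD>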
  4. Then we need to create the mastodon database user and give it the appropriate role (the full sequence is shown right after these bullets):

    • docker exec -it postgres psql -U postgres
    • CREATE USER mastodon WITH PASSWORD '<PASSWORD>' CREATEDB; exit (replacing <PASSWORD> with the one you generated in step 3).
    • docker stop postgres
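Put together, the whole sequence looks roughly like this (the SQL runs inside the psql prompt; <PASSWORD> is the password you generated in step 3):

# on the host: open a psql shell inside the temporary Postgres container
docker exec -it postgres psql -U postgres

-- inside psql: create the mastodon role and allow it to create its own database
CREATE USER mastodon WITH PASSWORD '<PASSWORD>' CREATEDB;
\q

# back on the host: stop the temporary container
docker stop postgres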
  5. Now we’ll need to configure Mastodon itself. Create a .env.production file: vim .env.production. Add the following contents:

DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=<PASSWORD>
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=

Remember to replace <PASSWORD> with the password from step 3. REDIS_PASSWORD is supposed to be empty.

  6. Now we’re going to need to run a task to help set up Mastodon: docker compose run --rm web bundle exec rake mastodon:setup. This will prompt you for some information:
Domain name: mastodon.vegan.dev

Single user mode disables registrations and redirects the landing page to your public profile.
Do you want to enable single user mode? No

Are you using Docker to run Mastodon? Yes

PostgreSQL host: db
PostgreSQL port: 5432
Name of PostgreSQL database: mastodon
Name of PostgreSQL user: mastodon
Password of PostgreSQL user: <PASSWORD>
Database configuration works! 

Redis host: redis
Redis port: 6379
Redis password: 
Redis configuration works!

Do you want to store uploaded files on the cloud? No

Do you want to send e-mails from localhost? No

Note that actually sending e-mails is out of scope for this guide as well, as this guide is aimed at the user who intends to run their Mastodon instance only for themselves and not allow other users to register.

Also note that if mastodon:setup fails, then you’ll need to add DISABLE_DATABASE_ENVIRONMENT_CHECK=1 to .env.production to avoid further errors, or start over with a fresh Postgres instance.
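If you’d rather start over with a fresh Postgres instance, the reset is roughly the following (destructive, so make sure nothing else is using the directory):

# wipe the local Postgres data directory created in step 3, then repeat steps 3 and 4
sudo rm -rf postgres14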

  7. Once you’ve responded to all the prompts, the terminal will print more environment variables that you’ll need to copy back into the .env.production file we created in step 5.
LOCAL_DOMAIN=mastodon.vegan.dev
SINGLE_USER_MODE=false
SECRET_KEY_BASE=<redacted> # this is generated for you during mastodon:setup
OTP_SECRET=<redacted> # this is generated for you during mastodon:setup
VAPID_PRIVATE_KEY=<redacted> # this is generated for you during mastodon:setup
VAPID_PUBLIC_KEY=<redacted> # this is generated for you during mastodon:setup
DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=<PASSWORD> # this is the password you generated in step 3
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=
# The below are only required if you *need* emails, i.e. for user registration
SMTP_SERVER=<redacted>
SMTP_PORT=587
SMTP_LOGIN=<redacted>
SMTP_PASSWORD=<redacted>
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=peer
SMTP_ENABLE_STARTTLS=always
SMTP_FROM_ADDRESS=<redacted>

You can simply replace the previous contents of .env.production wholesale with the output from mastodon:setup.
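A quick way to double-check that the generated secrets actually made it into the file:

grep -E '^(SECRET_KEY_BASE|OTP_SECRET|VAPID_PRIVATE_KEY|VAPID_PUBLIC_KEY)=' .env.production
# each of the four variables should appear exactly once, with a non-empty value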

  8. Set up the Nginx configuration: mkdir -p nginx/conf.d && vim nginx/conf.d/mastodon.conf, copying the example below into mastodon.conf:
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

upstream web {
    server web:3000 fail_timeout=0;
}

upstream streaming {
    # Instruct nginx to send connections to the server with the least number of connections
    # to ensure load is distributed evenly.
    least_conn;

    server streaming:4000 fail_timeout=0;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g;

server {
  listen 80;
  listen [::]:80;
  server_name mastodon.vegan.dev;
  #root /home/mastodon/live/public;
  root /mastodon/public;
  index index.html index.htm;
  location /.well-known/acme-challenge/ { allow all; }
  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 99m;
  error_log /dev/stdout;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml image/x-icon;

  location / {
    try_files $uri @proxy;
  }

  # If Docker is used for deployment and Rails serves static files,
  # then you must replace the line `try_files $uri =404;` with `try_files $uri @proxy;`.
  location = /sw.js {
    add_header Cache-Control "public, max-age=604800, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/assets/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/avatars/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/emoji/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/headers/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/packs/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/shortcuts/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }


  location ~ ^/sounds/ {
    add_header Cache-Control "public, max-age=2419200, must-revalidate";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    try_files $uri @proxy;
  }

  location ~ ^/system/ {
    add_header Cache-Control "public, max-age=2419200, immutable";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    add_header X-Content-Type-Options nosniff;
    add_header Content-Security-Policy "default-src 'none'; form-action 'none'";
    try_files $uri @proxy;
  }

  location ^~ /api/v1/streaming {
    proxy_set_header Host $host;
    #proxy_set_header X-Real-IP $remote_addr;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://streaming;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";

    tcp_nodelay on;
  }

  location @proxy {
    proxy_set_header Host $host;
    #proxy_set_header X-Real-IP $remote_addr;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://web;
    proxy_buffering on;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    proxy_cache CACHE;
    proxy_cache_valid 200 7d;
    proxy_cache_valid 410 24h;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    add_header X-Cached $upstream_cache_status;

    tcp_nodelay on;
  }

  error_page 404 500 501 502 503 504 /500.html;
}

Note that in my setup, TLS termination occurs on my reverse proxy, and traffic is then forwarded from the reverse proxy to my homelab over an encrypted VPN tunnel, which is why the Nginx configuration above uses plain HTTP and not HTTPS.
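Since TLS terminates upstream, a plain-HTTP request against the local Nginx container is enough to confirm the wiring once the stack is running in step 11 (this assumes the 9880 port mapping and server_name from the files above):

curl -sI -H 'Host: mastodon.vegan.dev' http://127.0.0.1:9880/ | head -n 1
# expect an HTTP status line such as HTTP/1.1 200 once the web container is healthy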

  9. Fix file system permissions. Run the following to have the containers create their data directories: docker compose up -d && docker compose down, and then the following to fix ownership: sudo chown -R 70:70 postgres14 && sudo chown -R 991:991 public/ (UID 70 is the postgres user in the Alpine Postgres image, and UID 991 is the mastodon user in the Mastodon image).
  10. Fix Nginx 404 errors. For some reason, the file system permissions fix in step 9 only fixes some issues; I still encountered 404 errors because Nginx couldn’t read the static files required for the front-end UI due to further permission errors. A really hacky way around this is to copy the public folder out of the web container onto the host, so that Nginx in the http container can read the static files: docker cp mastodon-web-1:/mastodon/public public. This is definitely not good practice, but between my day job and other life responsibilities, I haven’t had time to dig in and make a proper fix.
  11. Start up the Mastodon stack: docker compose up -d
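Once everything is up, a quick sanity check from the host might look like this (the ports are the ones published in the docker-compose.yml above):

docker compose ps        # every service should eventually report a running/healthy state
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:3000/health   # expect 200 from the web container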

Accessing Mastodon from the Public Internet

You might be wondering: how do I access this service when I’m not at home, on the same network as my homelab server? As with anything in software, there are multiple solutions, each valid for its own goals and use cases. It’s actually a surprisingly complex topic, so I’ve covered it in a separate post: Ingress

There is one Mastodon-specific thing you’ll want to do here: add a DNS A record that points mastodon.<your-domain>.<your-TLD> at the public IP address of the VPS running the reverse proxy.
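You can verify the record with a quick DNS lookup once it has propagated (dig ships with dnsutils/bind-utils on most distros):

dig +short mastodon.<your-domain>.<your-TLD>
# should print the public IP address of your VPS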

Follow-up Items

  1. Go into Preferences -> Administration -> Server Settings -> Registrations and set it so that nobody can sign up (unless you’re planning on running a public instance).
  2. Go into Preferences -> Administration -> Server Settings -> Discovery and check Allow trends without prior review, otherwise you’ll get emails asking you to approve newly observed hashtags.
  3. If you’re a privacy nut:
    • Disable public feeds: The federated tab displays public toots from instances that your instance federates with, including users you do not follow. If you’re hosting Mastodon on your own domain, this could cause search engines to associate your domain with unwanted content, particularly if you follow users on large servers. Go to Preferences -> Administration -> Server Settings -> Discovery and then uncheck Allow unauthenticated access to public timelines and click save changes.
    • Prevent search engines from indexing profile and posts: Go to Preferences -> Preferences (again) -> Other and then check Opt-out of search engine indexing and click save changes.