Setting up Nextcloud on a VM or Raspberry Pi, Backed up with Restic

Lately Dropbox has been putting the squeeze on me to start paying for its service in order to sync more than 3 devices and get more storage. I already have Google Drive since I have Google Fiber, but after hearing good things about Nextcloud from a friend, and growing disillusioned with relying on cloud services that don't necessarily have your best interests at heart, I decided to give self-hosting Nextcloud a try.

Docker and Ansible

Nextcloud provides official Docker images. You could use Docker Compose to set up all the containers, but I would suggest Ansible instead, so that containers and other configuration can be managed with the same tool (we'll need it for the Restic backups later). For performance we'll set up Postgres instead of the default SQLite; this is Nextcloud's own recommendation, and in my experience it holds true.

---

- name: nextcloud postgres volume
  docker_volume:
    name: nextcloud-postgres

- name: nextcloud data volume
  docker_volume:
    name: nextcloud-data

- name: nextcloud postgres container
  docker_container:
    name: nextcloud-postgres
    image: postgres:13
    restart_policy: always
    volumes:
      - "nextcloud-postgres:/var/lib/postgresql/data"
    comparisons:
      '*': strict
    env:
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: "{{ postgres_password }}"

- name: nextcloud container
  notify: restart traefik
  docker_container:
    name: nextcloud
    image: nextcloud
    restart_policy: always
    pull: yes
    comparisons:
      '*': strict
    volumes:
      - "nextcloud-data:/var/www/html"
    links:
      - "nextcloud-postgres:nextcloud-postgres"
    env:
      POSTGRES_HOST: nextcloud-postgres
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: "{{ postgres_password }}"
    labels:
      traefik.enable: 'true'
      traefik.http.routers.nextcloud.rule: Host(`nextcloud.myhost.com`)
      traefik.http.routers.nextcloud.entrypoints: https
      traefik.http.routers.nextcloud.tls: 'true'
      traefik.http.services.nextcloud.loadbalancer.server.port: "80"

If you're not familiar with Ansible but know your way around Docker Compose YAML files, it shouldn't be too hard to follow along here. We set up 2 data volumes, and then the meat of it is the 2 containers that mount those volumes.

You'll want to pass in a postgres_password variable. I use ansible-vault to encrypt the secrets file locally, along with keyring to store the encryption key in the system keychain. That said, it's not the most critical password in the world, so you don't have to go too nuts here.
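
As a sketch of that workflow (the playbook and inventory names here are placeholders for whatever your project uses), encrypting the variable and running the playbook might look like:

```shell
# Encrypt the password as an inline vaulted variable; paste the
# resulting block into your vars file. Prompts for a vault password.
ansible-vault encrypt_string 'changeme' --name 'postgres_password'

# Run the playbook, supplying the vault password so Ansible can
# decrypt the variable at runtime.
ansible-playbook -i inventory site.yml --ask-vault-pass
```

You can swap --ask-vault-pass for --vault-password-file if you'd rather script it, which pairs nicely with pulling the key out of a system keychain.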

Setting the restart policy to "always" ensures that the container automatically starts on system boot. The "comparisons strict" portion is a bit of an Ansible quirk that forces it to recreate the container if any part of the spec changes, such as adding or removing a label or environment variable.

Side note: Nextcloud also provides an FPM image via the :fpm tag for use behind Nginx, but I didn't find it any more performant than the normal Apache-based image.

Traefik

I cover Traefik in a different blog post, but in short, I use it as the HTTPS-terminating load balancer for my servers.

Setting up the Traefik container can look like:

- name: create traefik container
  tags: traefik
  docker_container:
    name: traefik
    image: traefik:v2.4
    pull: yes
    restart_policy: always
    comparisons:
      '*': strict
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.http.address=:80"
      - "--entrypoints.https.address=:443"
      - "--certificatesresolvers.myhttpchallenge.acme.httpchallenge=true"
      - "--certificatesresolvers.myhttpchallenge.acme.httpchallenge.entrypoint=http"
      - "--certificatesresolvers.myhttpchallenge.acme.email=youremail@example.com"
      - "--certificatesresolvers.myhttpchallenge.acme.storage=/acme/acme.json"
      - "--accesslog"
      - "--accesslog.filepath=/logs/access.json"
      - "--accesslog.format=json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "acme-data:/acme"
      - "/var/log/traefik:/logs"
      - "/var/run/docker.sock:/var/run/docker.sock"

What this will do is set up Traefik to watch for Docker containers that carry specific labels, and use Let's Encrypt to get a free TLS certificate and manage the renewal of said cert. It listens on ports 80 and 443 directly, proxying requests to the Nextcloud container.

Run this, assuming your network and DNS are all set up, and you should be able to visit the host and see Nextcloud pop up!
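
A quick sanity check from the command line (assuming the hostname from the labels above) is to hit the site with curl and confirm Traefik is serving a valid certificate:

```shell
# Should print an HTTP status line with no TLS errors once the
# Let's Encrypt challenge has completed.
curl -sSI https://nextcloud.myhost.com | head -n 1

# If something looks off, Traefik's logs usually say why an
# ACME challenge failed.
docker logs traefik --tail 50
```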

Backups

If you're deploying this as a big production setup, you'll probably want to use some kind of RAID to ensure that losing a disk doesn't cause data loss. For me, backups are more important than redundancy. This does mean that, however much data you plan on storing, you need to double it to account for backups.

Enter Restic! Restic is an open source (Go-based) tool for doing encrypted, incremental backups. It can back up to local files or S3-compatible backends. Restic is available in Ubuntu's repositories, so just apt install restic and you've got it.

The documentation does a great job explaining how to get set up, but essentially what you'll do once initially is set up repositories (or just the one), and then you'll back up to those repositories regularly.

# Set up a local repository
restic init --repo /backups/restic

# Set up a DigitalOcean Spaces repository
export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>

restic -r s3:sfo2.digitaloceanspaces.com/myspacesbucket/mysubfolder init

Since we're setting up Nextcloud and Postgres using Docker volumes, we just need to find where they live on the filesystem:

# Run docker volume inspect and pick out the mount point
$ docker volume inspect nextcloud-data
[
    {
        "CreatedAt": "2021-01-18T19:18:06Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/nextcloud-data/_data",
        "Name": "nextcloud-data",
        "Options": {},
        "Scope": "local"
    }
]
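
If you just want the path for scripting, docker volume inspect also takes a Go-template --format flag:

```shell
# Print only the mountpoint, suitable for use in a backup script
docker volume inspect --format '{{ .Mountpoint }}' nextcloud-data
```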

Now you can trigger a backup with:

# To simplify scripting, use environment variables to pass in the password
# and which repository to back up to.
export RESTIC_REPOSITORY=/backups/restic
export RESTIC_PASSWORD=mysupercoolpassword

# Trigger an actual backup (--verbose is optional)
restic --verbose backup /var/lib/docker/volumes/nextcloud-data/_data
restic --verbose backup /var/lib/docker/volumes/nextcloud-postgres/_data

# Clean up all but the last 10 backups
restic forget --keep-last 10 --prune
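
Backups are only as good as your ability to restore them, so it's worth spot-checking occasionally (using the same RESTIC_REPOSITORY and RESTIC_PASSWORD environment variables as above):

```shell
# List snapshots to confirm backups are actually landing
restic snapshots

# Restore the latest snapshot to a scratch directory and eyeball it
restic restore latest --target /tmp/restic-restore
```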

Run this from a cron job or, better yet, a systemd timer. Since backups are incremental, you can run this frequently without being punished too badly, so I run a backup twice a day.
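
A systemd timer version might look something like this (the unit names, paths, and wrapper script are placeholders; the script would just contain the restic commands above):

```ini
# /etc/systemd/system/restic-backup.service
[Unit]
Description=Restic backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/restic-backup.sh

# /etc/systemd/system/restic-backup.timer
[Unit]
Description=Run restic backup twice a day

[Timer]
OnCalendar=*-*-* 02,14:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now restic-backup.timer; Persistent=true means a missed run fires on the next boot, which is handy for machines that aren't always on.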

Actually Using Nextcloud

At this point Nextcloud is all ready to go. You create your admin account, and if you're a single user, that's all you need. The iOS and Windows clients work great. The WebDAV-based contacts integration is pretty good as well. Performance on a Raspberry Pi 4 is very serviceable, though when syncing a new device, CPU usage spikes and performance chugs; Apache spawns a new process for each request, kind of a side effect of how PHP works. But I'm very happy with it.

Using Nextcloud has felt like quite a win for me. It's not that I dislike Dropbox or Google Drive, though I'd be lying if I said I wasn't put off by Dropbox's new UI and monetization efforts. It's just that we depend a whole lot on services that can decide, for any reason, to stop doing business with us. We're far from being a hellish dystopia, but it feels nice knowing that my data is safe on my own server, free from any potentially prying eyes or conflicting interests.

As long as my backups work and my server doesn't get hacked, of course. Win some lose some.