• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 30th, 2023




  • Bash and a dedicated user should work with very little effort. Basically, create a user on your VM (maybe called git), set up passwordless (and keyless) SSH for this user, but force the command to be git-shell. Next, write a simple bash script that iterates over the directories in this user’s home directory and runs git fetch --all in each (there’s a sketch after this comment). Set cron to run this script periodically (every hour?). To add a new repository, just SSH in as your regular user and su to the git user, then clone the new repository into the home directory. To change the upstream, do the same, but simply update the remote.

    This could probably be packaged as a Dockerfile pretty easily, if you don’t mind either needing to specify a non-standard SSH port or losing the machine’s port 22.

    EDIT: I found this after posting; it might be the easiest way to serve the repositories, in combination with the update script. There’s a bunch more info in the Git Book too, and the next section covers setting up HTTP…
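
    For reference, a minimal sketch of that update script, assuming the repositories are cloned directly under /home/git and the script is saved as, say, /home/git/update-mirrors.sh; the paths and schedule are placeholders, not a definitive setup:

    #!/bin/bash -ue
    # Fetch all remotes for every repository in the git user's home directory.
    for repo in /home/git/*/; do
            echo "Updating $repo"
            git -C "$repo" fetch --all --prune
    done

    A crontab entry for the git user along the lines of 0 * * * * /home/git/update-mirrors.sh then keeps every mirror at most an hour behind its upstream.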



  • Yes, I have. I should probably test them again though, as it’s been a while, and Immich at least has had many potentially significant changes.

    LVM snapshots are virtually instant, and there is no merge operation, so deleting the snapshot is also virtually instant. The way it works is by creating a new space where the differences from the main volume are written, so each time the application writes to the main volume, the old block is copied to the snapshot first (there’s a short illustration below). This does mean that disk performance will be somewhat lower than without snapshots; however, I’ve not really noticed any practical implications. (I believe LVM typically creates my snapshots on a different physical disk from where the main volume lives, though.)

    You can find my backup script here.
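
    To make the copy-on-write behaviour concrete, here is a minimal sketch using the volume group and volume names that appear in the backup script further down this page (VG data, LV photos); the 50G is just an example of how much space to reserve for changed blocks:

    # Reserve copy-on-write space; only blocks that change afterwards consume it.
    lvcreate --size 50G --snapshot --name photos-snapshot /dev/data/photos
    # The Data% column of lvs shows how much of that space the snapshot has used so far.
    lvs data
    # Dropping the snapshot is immediate; there is nothing to merge back into the origin.
    lvremove --yes /dev/data/photos-snapshot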




  • For no. 1, that shouldn’t be DinD; the container would be controlling the host’s Docker daemon, wouldn’t it?

    If so, keep in mind that this is the same as giving root SSH access to the host machine.

    As far as security goes, anything that allows GitHub to cause your server to download (pull) and use a set of arbitrary Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine if someone compromises your GitHub account.

    I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to only allow key-based access, it’s not much of a security risk to open it up to the internet. I would then configure that key to only be able to run a single command, which I would make a very simple bash script that runs git fetch, then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch (there’s a sketch of this after this comment).

    You can sign commits fairly easily using SSH keys now, which, combined with the above, allows you to store your data on GitHub without effectively giving them RCE on your host.
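
    A rough sketch of the two pieces I mean; the paths, branch, allowed-signers location, and the truncated key are all placeholders:

    # ~/.ssh/authorized_keys on the server, for the dedicated deploy key:
    # the key can only ever run the deploy script, nothing else.
    command="/home/deploy/deploy.sh",restrict ssh-ed25519 AAAA... github-deploy

    #!/bin/bash -ue
    # /home/deploy/deploy.sh
    cd /srv/myapp
    git fetch origin
    # Refuse to move forward unless the tip of the deploy branch carries a valid
    # signature; for SSH-signed commits this needs gpg.ssh.allowedSignersFile set.
    git verify-commit origin/main
    git checkout --force origin/main

    Signing the commits themselves with an SSH key is then roughly a matter of git config gpg.format ssh, git config user.signingkey ~/.ssh/id_ed25519.pub and git config commit.gpgsign true on the machine you commit from.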



  • My recommendation would be to use LVM. Set up a PV on the new drive, create a volume group with it, and create an LV filling the drive (with an FS on it), then move all the data off one of the old drives onto this new drive, reformat that old drive as a second PV in the volume group, and expand the size of the LV. Repeat the process for the second old drive, but instead of extending the LV this time, set the parity option on the LV to 1. You can add further disks in the future, increasing the LV size or adding parity or mirroring as needed. This also gives you the advantage that you can (once you have some free space) create another LV that has different mirroring or parity requirements. A rough sketch of the commands follows below.
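
    Roughly the commands I have in mind; the device names, VG/LV names, and filesystem are placeholders, so double-check everything against your own disks before running any of it:

    # New drive: first PV, new VG, one LV filling it
    pvcreate /dev/sdc
    vgcreate tank /dev/sdc
    lvcreate --extents 100%FREE --name storage tank
    mkfs.ext4 /dev/tank/storage
    # ...copy the first old drive's data onto /dev/tank/storage...

    # First old drive joins the VG, and the LV (plus its FS) grows onto it
    pvcreate /dev/sda
    vgextend tank /dev/sda
    lvextend --extents +100%FREE --resizefs tank/storage

    # Second old drive joins too, but is used for redundancy instead of more space:
    # e.g. lvconvert --type raid5 tank/storage for parity, or
    # lvconvert --type raid1 --mirrors 1 tank/storage for a mirror. Depending on the
    # LVM version, the conversion may need to go through intermediate steps.
    pvcreate /dev/sdb
    vgextend tank /dev/sdb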







  • I followed the guide found here, though with a few modifications.

    Notably, I did not encrypt the Borg repository, and I heavily modified the backup script.

    #!/bin/bash -ue
    
    # The udev rule is not terribly accurate and may trigger our service before
    # the kernel has finished probing partitions. Sleep for a bit to ensure
    # the kernel is done.
    #
    # This can be avoided by using a more precise udev rule, e.g. matching
    # a specific hardware path and partition.
    sleep 5
    
    #
    # Script configuration
    #
    
    # The backup partition is mounted there
    MOUNTPOINT=/mnt/external
    
    # This is the location of the Borg repository
    TARGET=$MOUNTPOINT/backups/backups.borg
    
    # Archive name schema
    DATE=$(date '+%Y-%m-%d-%H-%M-%S')-$(hostname)
    
    # This is the file that will later contain UUIDs of registered backup drives
    DISKS=/etc/backups/backup.disk
    
    # Find whether the connected block device is a backup drive
    for uuid in $(lsblk --noheadings --list --output uuid)
    do
            if grep --quiet --fixed-strings $uuid $DISKS; then
                    break
            fi
            uuid=
    done
    
    if [ -z "${uuid:-}" ]; then
            echo "No backup disk found, exiting"
            exit 0
    fi
    
    echo "Disk $uuid is a backup disk"
    partition_path=/dev/disk/by-uuid/$uuid
    # Mount file system if not already done. This assumes that if something is already
    # mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
    # it was mounted somewhere else.
    (mount | grep $MOUNTPOINT) || mount $partition_path $MOUNTPOINT
    drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
    echo "Drive path: $drive"
    
    # Log Borg version
    borg --version
    
    echo "Starting backup for $DATE"
    
    # Make sure all data is written before creating the snapshot
    sync
    
    
    # Options for borg create
    BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
    
    # No one can answer if Borg asks these questions, it is better to just fail quickly
    # instead of hanging.
    export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
    export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no
    
    
    #
    # Create backups
    #
    
    function backup () {
      local DISK="$1"
      local LABEL="$2"
      shift 2
    
      local SNAPSHOT="$DISK-snapshot"
      local SNAPSHOT_DIR="/mnt/snapshot/$DISK"
    
      local DIRS=""
      while (( "$#" )); do
        DIRS="$DIRS $SNAPSHOT_DIR/$1"
        shift
      done
    
      # Make and mount the snapshot volume
      mkdir -p $SNAPSHOT_DIR
      lvcreate --size 50G --snapshot --name $SNAPSHOT /dev/data/$DISK
      mount /dev/data/$SNAPSHOT $SNAPSHOT_DIR
    
      # Create the backup
      borg create $BORG_OPTS $TARGET::$DATE-$DISK $DIRS
    
    
      # Check the snapshot usage before removing it
      lvs
      umount $SNAPSHOT_DIR
      lvremove --yes /dev/data/$SNAPSHOT
    }
    
    # usage: backup <lvm volume> <snapshot name> <list of folders to backup>
    backup photos immich immich
    # Other backups listed here
    
    echo "Completed backup for $DATE"
    
    # Just to be completely paranoid
    sync
    
    if [ -f /etc/backups/autoeject ]; then
            umount $MOUNTPOINT
            udisksctl power-off -b $drive
    fi
    
    # Send a notification
    curl -H 'Title: Backup Complete' -d "Server backup for $DATE finished" 'http://10.30.0.1:28080/backups'
    

    Most of my services are stored on individual LVM volumes, all mounted under /mnt, so Immich is completely self-contained under /mnt/photos/immich/. The last line of my script sends a notification to my phone using ntfy.
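
    For reference, the kind of udev rule and systemd service the script expects to be triggered by looks something like the following (the UUID, unit name, and script path here are placeholders rather than my actual values; matching a specific hardware path and partition instead of just the filesystem UUID is what would make the sleep 5 at the top of the script unnecessary):

    # /etc/udev/rules.d/99-backup-disk.rules
    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", TAG+="systemd", ENV{SYSTEMD_WANTS}+="automatic-backup.service"

    # /etc/systemd/system/automatic-backup.service
    [Service]
    Type=oneshot
    ExecStart=/etc/backups/run.sh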