* Local backup

[[https://restic.readthedocs.io/en/latest/index.html][Restic]] is used to create incremental backups. Host ~raspberrypi4~ is the node used for backing up the ~glusterfs~ files. All ~glusterfs~ files are mounted under =/mnt=, and an ~NFS~ drive mounted under =/data/backup= serves as the backup destination.

=/etc/fstab= entries:

#+begin_src
localhost:/dockervol /mnt glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0
192.168.1.1:/mnt/sda1/backup /data/backup nfs defaults 0 0
#+end_src

** One time backup

Apply =backup.job.yaml=.

** Scheduled backup

Apply =backup.cronjob.yaml=. A sketch of a comparable manifest is included under Manifest sketches at the end of this document.

** Restore

Apply =backup.pod.yaml=. It creates a =restic-cli= pod on ~raspberrypi4~. =kubectl exec -it restic-cli -- /bin/sh= gives you a shell in the pod's container.

List the snapshots and pick a snapshot id with =restic --repo /data/repo snapshots --insecure-no-password=.

Run =restic --repo /data/repo restore <snapshot-id> --target /data/glusterfs --insecure-no-password= to fully restore that snapshot.

* Remote backup - WIP

=kubectl apply -f backup.remote.pod.yaml= creates an =rclone-cli= pod on ~raspberrypi4~. =kubectl exec -it rclone-cli -- /bin/sh= gives you a shell in the pod's container.

Run the following to sync the restic repository at =/data/repo= to the =oos:backup= remote:

=rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync /data/repo oos:backup=
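
=backup.remote.pod.yaml= itself is not reproduced in this document. As a rough illustration only, a pod that provides an rclone shell on ~raspberrypi4~ could look like the sketch below. The =rclone/rclone= image, the hostPath mapping and the Secret holding =rclone.conf= (which would have to define the =oos:= remote used above) are assumptions, not the actual manifest.

#+begin_src yaml
# Sketch only: an assumed equivalent of backup.remote.pod.yaml.
# The pod name, node and /data/repo path come from this document;
# the image, mounts and the rclone-config Secret are guesses.
apiVersion: v1
kind: Pod
metadata:
  name: rclone-cli
spec:
  nodeName: raspberrypi4
  restartPolicy: Never
  containers:
    - name: rclone-cli
      image: rclone/rclone:latest                  # assumed image
      command: ["/bin/sh", "-c", "sleep 86400"]    # keep the pod alive for kubectl exec
      volumeMounts:
        - name: backup
          mountPath: /data                         # exposes the restic repo as /data/repo
        - name: rclone-config
          mountPath: /config/rclone                # config dir used by the official image
  volumes:
    - name: backup
      hostPath:
        path: /data/backup                         # assumed host-to-container mapping
    - name: rclone-config
      secret:
        secretName: rclone-config                  # assumed Secret containing rclone.conf
#+end_src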
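
* Manifest sketches

The =backup.job.yaml= and =backup.cronjob.yaml= manifests referenced above are not reproduced here. As a minimal sketch, a scheduled restic backup of the ~glusterfs~ files could look like the following, assuming the repository under =/data/repo= has already been initialised with =restic init=. The image, schedule and host-to-container mappings are assumptions derived from the commands in this document; the one-time =backup.job.yaml= would be the same pod template wrapped in a plain =Job= instead of a =CronJob=.

#+begin_src yaml
# Sketch only: an assumed equivalent of backup.cronjob.yaml.
# Node, source path (/mnt) and repo path (/data/repo) come from this document;
# the image, schedule and volume mappings are guesses.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restic-backup
spec:
  schedule: "0 2 * * *"                 # assumed: nightly at 02:00
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          nodeName: raspberrypi4        # run where /mnt and /data/backup are mounted
          restartPolicy: OnFailure
          containers:
            - name: restic
              image: restic/restic:latest          # assumed image; entrypoint is restic
              args:
                - --repo
                - /data/repo
                - backup
                - /mnt
                - --insecure-no-password
              volumeMounts:
                - name: glusterfs
                  mountPath: /mnt
                  readOnly: true
                - name: backup
                  mountPath: /data
          volumes:
            - name: glusterfs
              hostPath:
                path: /mnt              # glusterfs volumes mounted on the host
            - name: backup
              hostPath:
                path: /data/backup      # NFS backup drive (assumed mapping to /data)
#+end_src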
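
Similarly, the restore pod from =backup.pod.yaml= could be sketched as below. Again the image and volume mappings are assumptions, chosen so that the restic commands shown in the Restore section work unchanged inside the pod.

#+begin_src yaml
# Sketch only: an assumed equivalent of backup.pod.yaml.
# The pod name restic-cli and the node come from this document;
# the image and volume mappings are guesses.
apiVersion: v1
kind: Pod
metadata:
  name: restic-cli
spec:
  nodeName: raspberrypi4
  restartPolicy: Never
  containers:
    - name: restic-cli
      image: restic/restic:latest                  # assumed image
      command: ["/bin/sh", "-c", "sleep 86400"]    # keep the pod alive for kubectl exec
      volumeMounts:
        - name: glusterfs
          mountPath: /mnt
        - name: backup
          mountPath: /data                         # repo appears as /data/repo
  volumes:
    - name: glusterfs
      hostPath:
        path: /mnt
    - name: backup
      hostPath:
        path: /data/backup                         # assumed host-to-container mapping
#+end_src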