Does anyone have recommendations for centralized backup servers that use the server/client model?
My backups are relatively simple: I use rsync to pull everything from the remote machines to a single server, run restic on that server to back them up, and then copy that backup to cloud storage.
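For context, the whole thing is roughly the script below (hostnames, paths, and the cloud repo are placeholders, and the restic copy flags assume a fairly recent restic; an rclone sync of the repo would work just as well):

```bash
#!/bin/sh
# Rough sketch of my setup -- everything here is a placeholder, adjust to taste.
set -e

# 1. Pull the remote machines down to the backup server with rsync.
for host in web1 db1 laptop1; do
    rsync -aH --delete \
        --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp \
        "root@${host}:/" "/srv/backups/${host}/"
done

# 2. Back up the local copies into a restic repository on the same server.
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic -r /srv/restic-repo backup /srv/backups

# 3. Copy the snapshots off to cloud storage (restic >= 0.14 syntax;
#    credentials for the cloud backend come from the environment).
restic -r b2:my-bucket:restic copy \
    --from-repo /srv/restic-repo --from-password-file /root/.restic-pass
```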
I’ve been looking at some other software again like Bacula/Bareos/UrBackup and wondering if anyone’s currently using one of them or something like it that they like?
Ideally I’m looking for a more polished, user-friendly interface for managing backups across multiple servers and desktops/laptops. I’m testing Bareos now, but it probably won’t work out since the web UI doesn’t allow adding new jobs/volumes/etc.
Borg is great; I actually switched from Borg to restic because Borg would index everything before backing up the files, while restic starts backing up while indexing is still happening. Not a huge deal normally, but it mattered a lot when it’d take 8+ hours to index lots of tiny files on a couple of servers.
I never experienced it being slow. Normally the hard drives and network are slower than Borg. Have you reported the use case to the Borg backup developers?
There were related GitHub issues open, but not opened by me. It wasn’t that Borg itself was slow; it had more to do with the number of files being backed up. Even a simple
find . -type f
would take hours. My problem with it was that Borg wouldn’t start uploading data until that indexing finished. Restic immediately started uploading while it was still indexing, so it cut the overall backup time way down.
If it took 4 hours to index the files and then 4 hours to upload them, Borg would take 8 hours total while restic would finish in 4.
This was something like 10TB and hundreds of millions of files though. I never had an issue with borg on smaller datasets.