You might get sued.
That’s true, sr.ht is not a drop-in replacement, but rather a full-on alternative.
~/Documents/projects/<YYYY>-<MM>-<DD>_<name>
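For illustration, creating such a directory could look like this (the project name is made up; date +%F expands to YYYY-MM-DD):

```
mkdir -p ~/Documents/projects/"$(date +%F)"_myproject
```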
Fascinating idea!
I use sourcehut, specifically because I like their web GUI!
While you’re at it, would you mind implementing good Braille support for Wayland?
NixOS just sits on your face. All the stuff in front of you is awesome. Though you might suffocate at any moment given the options. Oh and sticking your nose too deep into things might get you a broken nose.
I’m using rustic, a lock-free, Rust-written drop-in replacement for restic, which (I’m referring to restic, and therefore by extension to rustic) supports always-encrypted, deduplicating, compressed and easy backups without you needing to worry about whether to do a full or incremental backup.
All my machines run hourly backups of all mounted partitions to an append-only repo at borgbase. I have a file with ignore pattern globs to skip unwanted files and dirs (e.g. **/.cache).
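For illustration, the invocation boils down to something like this (a minimal sketch in restic syntax, since rustic’s CLI is largely compatible; the repo URL and file paths are placeholders I made up):

```
# hourly backup job, simplified; repo URL and paths are placeholders
export RESTIC_REPOSITORY=rest:https://myrepo.repo.borgbase.com/
export RESTIC_PASSWORD_FILE=~/.config/backup/password
restic backup / --exclude-file ~/.config/backup/ignore
```

The ignore file is just one glob per line, e.g. **/.cache.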
While I think borgbase is ok, they’re just using Hetzner storage boxes in the background, which are cheaper if you use them directly. I’m thinking of migrating my backups to a handful of homelabs from trusted friends and family instead.
The backups have a randomized delay of 5m and typically take about 8-9s each (unless big new files need to be uploaded). They are triggered by persistent systemd timers.
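A minimal sketch of such a timer (the unit name is made up; the relevant directives are RandomizedDelaySec= and Persistent=):

```
# backup.timer
[Unit]
Description=Hourly backup

[Timer]
OnCalendar=hourly
RandomizedDelaySec=5m
Persistent=true

[Install]
WantedBy=timers.target
```

Persistent=true is what makes the timer catch up on runs it missed while the machine was off.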
The backups have been running across my laptop, pc and server for about 6 months now and I’m at ~380 GiB storage usage total.
I’ve mounted backup snapshots on multiple occasions already to either get an old version of a file, or restore it entirely.
There is a tool called redu which is like ncdu but works on restic/rustic repos. This makes it easy to identify which files blow up your backup size.
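I haven’t memorized its exact flags, but if I recall correctly it takes restic-style repository arguments, roughly like this (repo URL is a placeholder, and the flag is an assumption on my part, so check its docs):

```
redu -r rest:https://myrepo.repo.borgbase.com/
```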
That would’ve been: Minetest Immortal
Thanks for the writeup! So far I’ve been using ollama, but I’m always open to trying out alternatives. To be honest, it seems I was oblivious to the existence of alternatives.
Your post suggests that the same models with the same parameters generate different results when run on different backends?
I can see how the backend would have an influence on handling concurrent API calls, RAM/VRAM efficiency, supported hardware/drivers, and general speed.
But going as far as having different context windows and quality-degradation issues is news to me.
Is there an inherent benefit to using NVLink? Should I specifically try out Aphrodite over the other recommendations when I have 2x 3090 with NVLink available?
I use sourcehut.
# overwrite the first 1 MiB of image.png with zeros, without truncating the rest
dd if=/dev/zero of=image.png bs=1k count=1024 conv=notrunc
Good question: https://github.com/styluslabs/Write/commits/master/LICENSE
If you connect to the network and open Firefox, it will display a toast to open the corresponding captive portal’s page. You can then log in through that, given that your VPN isn’t blocking unencrypted connections, etc.
I assume the network advertises a captive portal URL and identifies you based on your MAC address.
The config is server-side (router).
Yes: sntx.space, check out the source button in the bottom right corner.
I’m building/running it the homebrewed, unconventional route. That is, I have just a bit of HTML/CSS and other files I want to serve; I use Nix to build that into a usable website and serve it on one of my homelab machines via nginx. That is made available through a VPS running HAProxy and its public IP. The Nebula overlay network (VPN) connects the two machines.
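For illustration, the VPS side boils down to something like this (a minimal sketch; the backend address is a made-up Nebula IP, and I’m assuming plain TCP passthrough so TLS terminates on the homelab machine):

```
# haproxy.cfg (sketch)
frontend www
    bind *:443
    mode tcp
    default_backend homelab

backend homelab
    mode tcp
    server web01 192.168.100.2:443 check
```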
What do you use it for? How’s the daily-driver experience?
What axis though?