Podman not because of security but because of quadlets (systemd integration). Makes setting up and managing container services a breeze.
I was wondering if your tool was displaying cache as usage, but I guess not. Not sure what you have running that’s consuming that much.
I mentioned this in another comment, but I’m currently running a simulation of a whole proxmox cluster with nodes, storage servers, switches and even a windows client machine active. I’m running that all on gnome with Firefox and discord open and this is my usage
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            46Gi        16Gi       9.1Gi       168Mi        22Gi        30Gi
Swap:          3.8Gi          0B       3.8Gi
Of course discord is inside Firefox, so that helps, but still…
What does free -h say?
About 6 months ago I upgraded my desktop from 16 to 48 gigs cause there were a few times I felt like I needed a bigger tmpfs.
Anyway, the other day I set up a simulation of this cluster I’m configuring and just kept piling on virtual machines without looking, because I knew I had all the RAM I could need for them. Eventually I got curious and checked my usage: I had only just reached 16 gigs.
I think basically the only time I use more than the 16 gigs I had is when I fire up my GPU-passthrough Windows VM that I use for games, which isn’t your typical usage.
You realise that if that were to be “fixed”, you wouldn’t end up paying the low price, Brazil would end up paying the high price? One they can’t afford because they make as much in a month as you do in a week, or worse.
You can hide chat and you’ll barely even notice it’s online. And I don’t see how it’s grindy - in fact they made the base game so easy your companion can kill everyone for you.
If you just play the base game content from 2011, it’s 8 completely voice acted stories that are interconnected into one big story. And it’s free.
Have you ever played swtor? It’s a lot like kotor 3 in many respects.
Some editors can embed neovim, for example: vscode-neovim. Not sure how well that works though as I never tried it.
Well personally if a package is not on aur I first check if there’s an appimage available, or if there’s a flatpak. If neither exist, I generally make a package for myself.
It sounds intimidating, but for most software the package description is just gonna be a single file of maybe 10-15 lines. It’s a useful skill to learn and there’s lots of tutorials explaining how to get into it, as well as the arch wiki serving as documentation. Not to mention, every aur or arch package can be looked at as an example, just click the “view PKGBUILD” link on the side on the package view. You can even simply download an existing package with git clone and just change some bits.
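To give a sense of scale, here’s what a minimal PKGBUILD for a typical make-based project might look like. Everything here is a placeholder (the “hellotool” name, URL, and version are invented), but the shape is what you’d see in most simple AUR packages:

```shell
# Hypothetical minimal PKGBUILD for an imaginary "hellotool" release tarball.
# pkgname, url and source are placeholders; 'SKIP' disables checksum verification.
pkgname=hellotool
pkgver=1.0.0
pkgrel=1
pkgdesc="Example tool packaged locally"
arch=('x86_64')
url="https://example.com/hellotool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  # DESTDIR makes 'make install' stage files into the package instead of /
  make DESTDIR="$pkgdir" PREFIX=/usr install
}
```

Run makepkg -si in the same directory and pacman installs (and can later cleanly remove) the result.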
Alternatively you can just make it locally and use it like that, i.e. just run make without install.
Aur and pacman are 90% of why I use arch.
Also fyi to OP: never install software system-wide without your package manager. No sudo make install, no curl .. | sudo bash, or whatever the readme calls for. Not because it’s unsafe, but because eventually you’re likely to end up with a broken system, and then you’ll blame your distro for it, or just Linux in general.
My desktop install is about a decade old now, and never broke because I only ever use the package manager.
Of course in your home folder anything goes.
I think they meant you don’t know what the binary is called because it doesn’t match the package name. I usually list the package files to see what it put in /usr/bin in such cases.
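Concretely, on Arch that’s pacman -Ql, which prints “pkgname path” for every file a package owns. A tiny helper (the function name is my own) filters that down to the binaries:

```shell
# Hypothetical helper: list the executables a pacman package installed.
# pacman -Ql prints "pkgname path" for every file owned by the package;
# filtering for /usr/bin/ leaves just the commands it put on your PATH.
list_binaries() {
  pacman -Ql "$1" | grep '/usr/bin/'
}
```

For example, list_binaries ripgrep would reveal that the ripgrep package installs its binary as rg.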
But check that it has all the features you need because it lags behind gitea in some aspects (like ci).
Podman quadlets have been a blessing. They basically let you manage containers as if they were simple services. You just plop a container unit file in /etc/containers/systemd/, daemon-reload and presto, you’ve got a service that other containers or services can depend on.
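A minimal quadlet unit might look like this (the whoami image and port mapping are just placeholder examples, not anything from the setup described above):

```ini
# /etc/containers/systemd/whoami.container  -- hypothetical example service
[Unit]
Description=Example whoami web service

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=multi-user.target
```

After systemctl daemon-reload, systemctl start whoami.service behaves like any other unit, and other units can declare After= / Requires= on it.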
I’ve been in love with the concept of ansible since I discovered it almost a decade ago, but I still hate how verbose it is, and how cumbersome the yaml based DSL is. You can have a role that basically does the job of 3 lines of bash and it’ll need 3 yaml files in 4 directories.
About 3 years ago I wrote a big ansible playbook that would fully configure my home server, desktop and laptop from a minimal arch install. Then I used said playbook for my laptop and server.
I just got a new laptop and went to look at the playbook but realised it probably needs to be updated in a few places. I got feelings of dread thinking about reading all that yaml and updating it.
So instead I’m just gonna rewrite everything in simple Python with a few helper functions. The few roles I’ve rewritten are already much cleaner and shorter, and the result should be faster, friendlier, and easier to maintain.
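The helper-function idea could look something like this sketch (all names here are my own invention, and the pacman flags are an assumption about an Arch target, not anything from a real playbook):

```python
# Minimal sketch of replacing small ansible roles with plain Python helpers.
import subprocess


def run(*cmd, check=True):
    """Run a command and return its stripped stdout -- roughly what an
    ansible 'command' task does, in one line of call-site code."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=check)
    return result.stdout.strip()


def install_packages(*pkgs):
    # Replaces a whole 'pacman' task file (flags assumed for an Arch host).
    run("sudo", "pacman", "-S", "--needed", "--noconfirm", *pkgs)


def enable_service(name):
    # Replaces an ansible 'systemd' task.
    run("sudo", "systemctl", "enable", "--now", name)
```

A “role” then becomes a few ordinary function calls in a script, with normal Python control flow instead of YAML when/loop constructs.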
I’ll keep ansible for actual deployments.
Someone found a way to weaponise bikeshedding.
Everyone just confirming aliteral’s point.
All public companies are; it’s just that Boeing makes things that fall out of the sky when it messes up, so it’s more obvious.
Just have NAS A send a rocket with the data to NAS B.
I remember the clusterfuck that existed before systemd, so I love systemd.
Haven’t seen this one mentioned yet.