and/or getting your games from places like gog.com
I’d go for HLS due to its simplicity: just files over HTTP(S). VPN or not depends on your network. If your machine is accessible from the internet, just putting the files into a webserver subdirectory with a long random path and using HTTPS will be secure enough for the use case. Can be done with an ffmpeg one-liner.
The downside of HLS is the lag (in practice 10 s or more, maybe 5 if you squeeze it hard). It is in no way realtime. WebRTC does it better (and other things too), but it is also a bigger pain to set up and forward.
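Roughly, the one-liner is something like this (here wrapped in Python for readability; the device path, segment length and output directory are placeholders, adjust to your setup):

```python
import subprocess

# Sketch of an HLS encoder fed by a webcam (assumed at /dev/video0).
# ffmpeg writes small .ts segments plus a stream.m3u8 playlist into a
# webserver directory, deleting old segments as new ones arrive.
cmd = [
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",          # webcam input (Linux/V4L2)
    "-c:v", "libx264", "-preset", "veryfast",   # cheap-ish H.264 encode
    "-g", "50",                                  # keyframe interval, so segments can be cut cleanly
    "-f", "hls",
    "-hls_time", "4",                            # ~4 s per segment -> several seconds of lag
    "-hls_list_size", "10",                      # keep only the last 10 segments in the playlist
    "-hls_flags", "delete_segments",             # clean up old segments on disk
    "/var/www/html/some-long-random-path/stream.m3u8",
]
subprocess.run(cmd, check=True)
```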
Also, just in case, test that the webcam works fine if left active 24/7. I had a (cheapo) one that required a power cycle after a week or so…
For me it’s GOG first, using lgogdownloader and Wine directly (in a custom AppArmor profile). No DRM, no forced updates, no annoying client that takes forever to start. Games are also dramatically easier to isolate and sandbox this way.
If the game is not there, then yes, Steam (as a separate unix user).
Damn, they don’t ship to NL :(
Whatever works for you. Just do it. It is convenient as f when you are just starting. You can always improve incrementally later on when (if) you encounter a problem.
Too much noise / power cost to run a small thing? Get a Pi and run it there. Too much impact on your desktop performance? Okay, buy a dedicated monster. Want to deep-dive into isolating things (and VMs are too much of a hassle)? Get multiple devices.
No need to spend money (and maybe sponsor more e-waste) and time until it’s justified for your use cases.
Better dependency control. I strongly prefer software that only depends on stuff I can get from the package manager. This lowers the chance of supply chain attacks. It doesn’t prevent them, but I expect repo maintainers to do a better job looking at packages than a developer who just puts another pip/gem/npm install in a Dockerfile.
Also if something is only available in a container, it sort of screams “this code is such a mess, we don’t even know a simple way to run it” to me.
For me it’s the lack of convenient hotkeys and keyboard-based navigation. Used Vimperator on FF until they killed it. Now using qutebrowser, which uses QtWebEngine, which uses an outdated Chromium. Sad story.
This might actually be it (or at least one of the “competitor” projects they mention in the docs), thanks! Just need to figure out how to do a nice grid layout of the graphs.
I know R a liiiitle bit, so that may help too =)
Did you ever notice that grafanalib is noticeably behind grafana itself?
That’s something that turned me off it, but I wonder if it was a one-time situation because of some major change in grafana…
“create graph on the UI” – that’s something I want to avoid.
“hard for me to imagine a situation where graphs need to be edited so often” – the whole system is under development (trying new views, changing how the data is represented, etc.), so I don’t need to imagine it, I have it right in front of me ;)
Something like that, where I just write a function that spits out a numpy array (or something like that) and it gets plotted, would be great. But there is one thing Grafana can do that vega-altair, plotly and even matplotlib (*) can’t: a UI that lets you select a time interval to view.
So I can freely pan/zoom in/out in time, and only the required part of the data will be loaded (with something like select ... where time between X and Y under the hood). So if I look at a single day, it will only load that day, and only if I dare to zoom out too far will it spend some time loading everything from the last year.
(*) yes, you can do interactive things with matplotlib, but you don’t really want to, unless you must…
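To make the “load only the visible window” idea concrete, here is a rough sketch (table name, column names and DB file are made up) that re-queries SQLite whenever the x-limits change:

```python
import sqlite3
import matplotlib.pyplot as plt

# Hypothetical SQLite table: measurements(time REAL, value REAL), time in unix seconds.
conn = sqlite3.connect("metrics.db")

fig, ax = plt.subplots()
(line,) = ax.plot([], [])

def load_visible(ax):
    """Fetch only the rows inside the currently visible time range."""
    t0, t1 = ax.get_xlim()
    rows = conn.execute(
        "SELECT time, value FROM measurements WHERE time BETWEEN ? AND ? ORDER BY time",
        (t0, t1),
    ).fetchall()
    if rows:
        xs, ys = zip(*rows)
        line.set_data(xs, ys)
        ax.relim()
        ax.autoscale_view(scalex=False)  # rescale y only, keep the chosen time window
    ax.figure.canvas.draw_idle()

# Re-query the database every time the user pans/zooms along the x axis.
ax.callbacks.connect("xlim_changed", load_visible)

# Start with, say, one day (unix timestamps); zooming out loads more.
ax.set_xlim(1_700_000_000, 1_700_086_400)
plt.show()
```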
To be precise, the page explains how to configure some things and how to upload the config. I also tried that.
The problem is the dashboard JSONs. They are not well documented (docs on specific plots are missing) and are a pain to edit (as is any JSON).
The grafanalib tool I mentioned tries to help with that by implementing a sort of DSL for dashboards, but it is not ideal.
(edit: lost a word)
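For the curious, the grafanalib flavour of dashboards-as-code looks roughly like this (data source name and query are placeholders); you skip hand-editing the JSON, but the gaps I mentioned are still there:

```python
import json
from grafanalib.core import Dashboard, Graph, Row, Target
from grafanalib._gen import DashboardEncoder

# A tiny dashboard with one graph; datasource name and query are placeholders.
dashboard = Dashboard(
    title="Example dashboard",
    rows=[
        Row(panels=[
            Graph(
                title="CPU usage",
                dataSource="my-prometheus",
                targets=[Target(expr="rate(node_cpu_seconds_total[5m])")],
            ),
        ]),
    ],
).auto_panel_ids()

# Serialize to the JSON Grafana expects (this is what you'd upload via the API).
print(json.dumps(dashboard.to_json_data(), cls=DashboardEncoder, indent=2))
```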
I have static IPs for the server-ish things and a few important devices too, but for the rest (swarm of Shellys, ESP32s, etc.) I’m too lazy to maintain the list =)
After much suffering with local zones (mainly due to stubborn devices ignoring the DNS servers handed out via DHCP, and a braindead corporate VPN messing with resolv.conf), I just use xxx.local.mydomain.tld with a small script that parses the lease files and updates the records via the Cloudflare API.
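The script itself is nothing fancy; a stripped-down sketch (dnsmasq-style lease file, zone ID, token and domain suffix are all placeholders):

```python
import requests

# Placeholders: adjust zone, token and domain to your setup.
CF_API = "https://api.cloudflare.com/client/v4"
ZONE_ID = "your-zone-id"
TOKEN = "your-api-token"
SUFFIX = ".local.mydomain.tld"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def parse_leases(path="/var/lib/misc/dnsmasq.leases"):
    """dnsmasq lease lines look like: '<expiry> <mac> <ip> <hostname> <client-id>'."""
    leases = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 4 and parts[3] != "*":
                leases[parts[3]] = parts[2]
    return leases

def upsert_record(name, ip):
    """Create or update an A record for <hostname> + SUFFIX."""
    fqdn = name + SUFFIX
    existing = requests.get(
        f"{CF_API}/zones/{ZONE_ID}/dns_records",
        headers=HEADERS, params={"type": "A", "name": fqdn},
    ).json()["result"]
    payload = {"type": "A", "name": fqdn, "content": ip, "ttl": 120, "proxied": False}
    if existing:
        requests.put(f"{CF_API}/zones/{ZONE_ID}/dns_records/{existing[0]['id']}",
                     headers=HEADERS, json=payload)
    else:
        requests.post(f"{CF_API}/zones/{ZONE_ID}/dns_records",
                      headers=HEADERS, json=payload)

for hostname, ip in parse_leases().items():
    upsert_record(hostname, ip)
```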
Yeah, but then you have to use Evolution.
Maybe, after a few months (or a year, as I may or may not have experienced) of “communication”, you’ll be allowed to use Thunderbird. Only for it to be suddenly blocked again later because some dude didn’t understand why everyone can’t just use Outlook.
And don’t even dream of having a script to, say, sort and preprocess your mail.