Just your normal everyday casual software dev. Nothing to see here.

  • 0 Posts
  • 94 Comments
Joined 11 months ago
Cake day: August 15th, 2023


  • I’m currently running Proxmox on a 32 GB server with a Ryzen 5600G, and it’s going fine. The containers don’t actually use all that much RAM, and I’m actually seeing better benchmarks than I did when I just ran it as a bare-bones Ubuntu server. My biggest issue has been IO strain more than anything, because everything being containerized makes the workload a lot more IO heavy. I think I could easily run it with less RAM; I would just have to turn off some of the more RAM-intensive services.

    As for whether I regret changing: no way, Jose. I absolutely love having everything containerized, because I can set things up how I want, when I want, and if I end up screwing up a configuration or decide I no longer need a service, I can just nuke the container without having to remember what I installed for that program, or whether other programs need its dependencies to work. Plus, while I haven’t tinkered as much in this area, you can hard-set what resources are allotted to each instance. So if you have a program like, say, Pi-hole, which you know never needs more than x amount of resources to work properly, you can restrict what it can use, so that if something does go wrong with it, it doesn’t eat all of your system resources.

    The biggest con is probably having to figure out the networking side, because every container is going to have a different IP address. I found a web dashboard is my friend here: I have Heimdall tell me where all my services are, and I just click an icon to get to the right IP address. It took a lot of work to figure out how it all operates and to get it working, but the benefits have been amazing. Just make sure you have a spare disk to temporarily clone partitions to, because it’s extremely difficult to reuse existing disks in the machine. I’ve been going one disk at a time: copying the data over to an external drive, nuking the disk, reinitializing it as part of the Proxmox LVM, and then copying the data back onto the appropriate image file.
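That disk shuffle can be sketched roughly like this. All device names, paths, and sizes here are hypothetical, and since the real commands are destructive, this sketch only prints what it would run; swap the `run` wrapper for direct execution once you’re sure of the targets:

```shell
#!/bin/sh
# Hypothetical devices/paths -- adjust before doing anything for real.
SRC_DISK=/dev/sdb            # existing data disk to absorb into Proxmox
TMP_MOUNT=/mnt/external      # spare external drive, already mounted
VG=pve                       # default Proxmox volume group name

# Dry-run wrapper: prints each command instead of executing it.
run() { echo "would run: $*"; }

# 1. Copy the data off to the spare drive first.
run rsync -aHAX /mnt/olddata/ "$TMP_MOUNT/olddata/"

# 2. Nuke the old disk and hand it to LVM.
run wipefs --all "$SRC_DISK"
run pvcreate "$SRC_DISK"
run vgextend "$VG" "$SRC_DISK"

# 3. Carve out a volume, format it, and copy the data back.
run lvcreate -L 500G -n data "$VG"
run mkfs.ext4 "/dev/$VG/data"
run rsync -aHAX "$TMP_MOUNT/olddata/" /mnt/newdata/
```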


  • I personally will never use Nextcloud. The interface is nice, but while I was researching the product I came across concerns about its security. Those concerns have since been fixed, but the way they resolved the issue made me lose all respect for them as a secure cloud solution.

    Basically, when they first introduced folder encryption, there was a bug in the encryption code: the only thing that would ever get encrypted was the parent directory, while any subfolder in that directory would remain unencrypted. The problem is that unless you had server-side access to view the files, you had no way of knowing your files weren’t actually being encrypted.

    All this is fine, it’s a beta feature, right? Except when I read the GitHub issue on the report, they gaslit the person who reported it, saying that despite the feature being advertised on their stable branch, it was actually in beta status and therefore should not be used in a production environment. On top of that, the feature was never removed from their feature list, and it took another three months before anyone even started working on the issue.

    This might not seem like a big deal to a lot of people, but as someone who is paranoid about security features, the project’s inaction over something that critical, while advertising themselves as a business-grade solution, made me flee hardcore.

    That being said, I fully agree with you: out of the different cloud platforms I’ve tried, Nextcloud does seem to be the most refined, and it even has the ability to emulate an office suite, which is really nice. I just can’t trust them, so I ended up using Syncthing and took the hit on the feature set.



  • I mean, it gets the point across regardless of whether it’s a service dog or a pet (which shouldn’t be in the TSA security line in the first place, since airports generally have a designated drop-off or require kennels). In this case it doesn’t matter how dense you are, it’s clear: do not pet the dogs. If the reader wants to take it as meaning no petting dogs for the entire trip, the airport doesn’t care, as long as you’re not petting the dogs at the airport, and therefore not getting in the way of procedure or causing a potential safety issue for the port.



  • Seconding this. I took the plunge a month or two back myself, using Proxmox for my home lab. Fair warning: if you have never operated anything virtualized outside of VirtualBox or Docker, like me, you are in for an ice plunge, so if you do go this route, prepare for a shock. It is so nice once everything is up and running properly, though, and it’s real nice being able to delegate which resources go where and how much. But getting used to the entire system is a very big jump, and it’s definitely going to be a “back up the existing drive, migrate data over to a new drive” style migration; it is not a fun project to attempt without a spare drive to use as a transfer drive.


  • Honestly, I don’t think there is a better way. Like others have said, you can use a trash program, or you can chmod the git directory before deleting, but I would recommend against the comments saying to alias the command; that can lead to even bigger problems if you typo the alias or mess up in the script. rm -rf can’t break anything unless you give it the wrong directory, which would be the same with aliases anyway.

    My recommendation out of them all would be a trash program that moves files to the trash, so if you do screw up the location you have a way to restore them. Otherwise, you could make a script that lists the affected files using ls and then asks a yes/no prompt using read before doing the rm, but that’s something you definitely want to test in a sandbox or user-restricted environment if you’re not used to scripting, in case something breaks.
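A minimal sketch of that ls + read + rm idea; the function name and exact behavior are my own, so treat it as a starting point and test it somewhere disposable first:

```shell
#!/bin/sh
# confirm_rm DIR -- show what would be deleted, ask y/n, then remove.
confirm_rm() {
    target=$1
    [ -e "$target" ] || { echo "no such path: $target"; return 1; }
    echo "About to remove:"
    ls -la "$target"
    printf 'Proceed? [y/N] '
    read -r answer
    case $answer in
        y|Y) rm -rf -- "$target" && echo "removed $target" ;;
        *)   echo "aborted" ;;
    esac
}
```

The `--` before the target guards against paths that start with a dash, and defaulting to “no” means a stray Enter keypress aborts instead of deleting.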




  • I’ve been the same way with my Switch. I haven’t touched it for probably a year and a half, outside of a short stretch when Tears of the Kingdom launched, when I replayed some of Breath of the Wild and a small portion of Animal Crossing. I agree.

    They had been falling for at least four years now. I had already strayed from doing anything on the Switch from lack of appeal, but their DMCA crackdown on the Yuzu community was the final straw for me. I didn’t even use the emulator myself, but I’ve always heavily embraced emulation and the ability to tinker with stuff that you purchase, and that just didn’t sit right with me.

    It was also around that time that I actually read up on what happened with Gary Bowser, and that made me sick to my stomach, because it was essentially the equivalent of prosecuting a cashier for the crimes of the CEO.

    Honestly, even if they were still prospering, I wouldn’t recommend the products. It’s hard enough to recommend their product to customers in the first place due to the platform restrictions and the fact that they just keep regurgitating the same three IPs over and over again, and that’s without the active hostility towards their fanbase.


  • Adding on to this: if they do decide not to go Windows, do not use Debian.

    Don’t get me wrong, it’s hella stable if you’re using stuff from five or six years ago, but if you’re trying to do anything remotely new or gaming related, I would pass and try one of the less stable distros. This is coming from someone who just made this mistake: Steam will install, but Proton will not, because the dependencies Proton relies on don’t exist in any of Debian’s default sources, and of course the launcher won’t actually tell you this unless you launch it from the command line. On top of that, if you’re planning on playing games that live on a Windows partition, Proton can’t use those partitions unless you force yourself as the owner by setting uid and gid in fstab for those partitions, but it won’t tell you that either; it will just fail to launch.

    I’m at the point where I think I’m just going to nuke my Debian install and go with another system, because man, has it fought me every step of the way in this process.
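For reference, the fstab ownership workaround mentioned above looks roughly like this. The UUID and mount point are placeholders, and 1000 is typically the uid/gid of the first user created on the system (check with `id`):

```
# /etc/fstab -- hypothetical NTFS game partition, owned by uid/gid 1000
UUID=XXXX-XXXX  /mnt/games  ntfs-3g  defaults,uid=1000,gid=1000  0  0
```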


  • Roblox in particular has been super hostile to the Linux community; two or three times now, they have intentionally changed their application so it won’t run under Wine. If Roblox is a hard requirement for him, I would highly recommend against any of the non-Windows options. The Roblox lead development team seems to have the ideology that anything that isn’t Windows is a hacker platform, so they try to remove access from those platforms wherever possible. I don’t personally agree with it, but it is what it is.

    I also wish people would stop blindly recommending Unix platforms as a drop-in replacement for gaming on Windows. I have yet to see anyone who has been able to just install any of the flavors and have it “just work”. I fully agree that compatibility is ages better than it was even five years ago, but you should 100% go into it expecting to have issues and prepared to troubleshoot. And if this was his first time using anything other than Windows, I would have hard recommended against nuking the Windows install; at the very least, shrink the C: partition and keep Windows, which can be done via GParted, which thankfully is already pre-installed on the Linux Mint installation media.

    It’s disappointing that he’s looking to go back, but I can fully understand his frustration. As someone who recently retook the plunge after six or seven years of being back on Windows, I find myself getting aggravated at times too, writing hack scripts to make things work.

    That being said, if he wants to go back, you shouldn’t force him to stay; that would only hurt the chances of him switching again in the future (like when MS makes W10 a subscription model, either at the end of this year or the year after, which will force W11).


  • I think the answer is that they don’t believe they need that market; they were obviously okay with losing that market share in the first place, given that they put the requirement in there at all. As their announcements have said, requiring a PSN account gives them more control over abuse (and of course data), and allowing more players in countries where those accounts aren’t currently available would be counterproductive to what they’re currently driving for.

    I’m not surprised they didn’t reverse the region locks; they believe it’s for the best, and that’s not something the consumer is going to be able to change, regardless of the reviews or protest. The worst-case scenario for them is that they go back to being fully console exclusive if the PR pressure gets too bad.




  • This type of review bombing is actually against Steam’s terms of service for reviews in the first place. They’ve stepped in a few times now to hide campaigns like that, and I expect they will do the same with this one. Basically, it keeps the recent-review metric but hides the reviews from the historical and overall metrics. So the worst case out of this is negative recent reviews for a while.

    Your last sentence is actually the exact reason they implemented that policy, and they more or less quote it in the forum post where they explain how the new system works.


  • I consider bloat to be unneeded files or programs: duplicated libraries, unused apps, stagnant non-personal data files, anything similar to that. It’s hard to put a metric on it; I just browse through my files every once in a while and delete the unused stuff, but with the push for container-based everything, I foresee that method becoming increasingly harder as time goes on.


  • TPM is a good way. Mine is set up so / is encrypted with LUKS and unlocked via the TPM, so it can boot with no issues; actual sensitive data, like my /home, is encrypted using my password, and the backup system + file server is standard LUKS with a password.

    This setup allows unassisted boot-up of the main systems (such as SSH), which lets you sign in and manually unlock the more sensitive drives.
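A layout like that can be wired up with systemd’s tooling, roughly as below. The device paths and UUIDs are made up, and this assumes LUKS2 volumes and a system with systemd-cryptenroll available:

```
# Enroll the TPM as an extra unlock key for the root LUKS2 volume
# (binding to PCR 7 ties the key to the Secure Boot state):
#   systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# /etc/crypttab -- root unlocks via TPM at boot; /home still asks
# for a passphrase when you unlock it manually.
root_crypt  UUID=aaaaaaaa-0000-0000-0000-000000000000  none  tpm2-device=auto
home_crypt  UUID=bbbbbbbb-0000-0000-0000-000000000000  none  -
```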