Some dingbat that occasionally builds neat stuff without breaking others. The person running this public-but-not-promoted instance because reasons.

  • 0 Posts
  • 276 Comments
Joined 5 months ago
Cake day: May 24th, 2024


  • WiFi clients continually scan for previously connected networks and will pick whichever AP they can reach with the best signal. Extenders can be tricky if you’re sitting in the ‘crossover’ space between the extender and the backhaul AP it connects to.

    What you might try instead is one of those distributed AP systems like Unifi or similar, where all the APs are managed by a central controller and work in unison. The one I have can at least drop a client whose signal falls below a certain level and migrate it to another AP without breaking session state (roughly the logic sketched below).

    The other option I can think of is just turning off auto-connect for the extender network and only joining it manually.
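
    Just to make that drop-and-steer behavior concrete, here’s a toy sketch of the logic in Go (the struct, the -75 dBm cutoff, and the AP names are all made up for illustration, not any vendor’s actual API):

    ```go
    package main

    import "fmt"

    // accessPoint is a toy model of an AP a client can see and its signal strength.
    type accessPoint struct {
        name string
        rssi int // in dBm; closer to 0 is stronger
    }

    // minRSSI is a hypothetical cutoff: below this, the controller nudges the client off its AP.
    const minRSSI = -75

    // bestAP returns the strongest AP currently visible to the client.
    func bestAP(visible []accessPoint) accessPoint {
        best := visible[0]
        for _, ap := range visible[1:] {
            if ap.rssi > best.rssi {
                best = ap
            }
        }
        return best
    }

    func main() {
        visible := []accessPoint{
            {"extender-upstairs", -82}, // the weak extender the client is clinging to
            {"ap-livingroom", -54},     // the much closer AP it should be on
        }
        current := visible[0]
        if current.rssi < minRSSI {
            // A controller-based system would disassociate the client here so it
            // re-scans and reassociates with the stronger AP instead of hanging on.
            fmt.Printf("dropping client from %s, steering toward %s\n",
                current.name, bestAP(visible).name)
        }
    }
    ```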


  • They’re part of the mix. Firewalls, proxies, WAFs (often built into a proxy), IPS, AV, and whatever threat-intelligence systems you like all work together to do their jobs. Visibility into the traffic matters, and so does keeping the management burden low enough. I used to have to manually log into several boxes on a regular basis to update software, certs, and configs; now most of that is automated and I just get an email to schedule a restart if one is needed.

    A reverse proxy can be a lot more than just host-based routing, though. Take something like a Blue Coat or an F5 and look at the options on it. You might say it’s not a proxy then because it does X/Y/Z, but at the heart of things, creating that bridged intercept for the traffic is still the core functionality.
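
    For reference, the bare host-based-routing piece is small enough to sketch with Go’s standard library reverse proxy (the hostnames and backend ports are placeholders; a real appliance layers far more on top of this):

    ```go
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // newProxy builds a reverse proxy that forwards everything to one backend.
    func newProxy(backend string) *httputil.ReverseProxy {
        target, err := url.Parse(backend)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(target)
    }

    func main() {
        // Placeholder backends; a real setup would read these from config.
        app := newProxy("http://127.0.0.1:8081")
        wiki := newProxy("http://127.0.0.1:8082")

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Host-based routing: the proxy sits in the middle of the connection
            // and picks a backend based on the host the client asked for.
            switch r.Host {
            case "app.example.com":
                app.ServeHTTP(w, r)
            case "wiki.example.com":
                wiki.ServeHTTP(w, r)
            default:
                http.NotFound(w, r)
            }
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }
    ```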


  • Fedi platforms have a key distinction that sets them apart from most other online platforms: you can literally create your own instance today, with all the rights of a platform admin, and have access to the very same content you would have with an account on someone else’s node. In that regard there’s much less room to complain about unilateral actions by an instance owner than there is on other systems. As an instance grows you run a greater risk any time you take such an action, but so long as it’s consistent with past behavior it shouldn’t be a major problem. Large instances like .world have made some cuts that ruffled a few feathers and then backed them off when people objected, but sometimes direct democracy isn’t particularly viable in what might be a time-sensitive situation.


  • It changes over extended time spans on the order of generations, and I might say it’s cyclical but it’s hard to see in a given lifetime.

    In the early-1900s USA, people were held at the absolute mercy of the wealthy, working long hours in wretched conditions for a pittance.

    During and shortly after WW1 and WW2 there was a massive push for unity and worker rights; the unions took shape and the working class took a large chunk of power away from the owners to better their standing.

    The 50s-70s were a time of keeping pace with the neighbors: competitive, but also concerned with the well-being of your fellow people.

    Then, from the 80s through the early 2000s, it switched to a hyper-individualistic ‘I got mine’ mindset.

    In the last couple of decades we’ve started to see a return to a push for the collective good, but it has been held back a lot by a heavily divided population, with half blaming the other half for the decay of society while those with means sit back and watch the sniping from afar.

    I’ve only been around for those last couple of stretches, so a lot of my perspective is just my impression from history books, but I guess the point I’d make is to look at the ebb and flow of things in historical context. People’s willingness to defer to power is both personal and couched in society’s willingness to support the individual.


  • It depends on the IO load on the disk. My main docker host pretty much has to live on the SSD to keep access times from becoming a problem, but there are a dozen other services on the same VM. There’s some advice out there that workloads with constant writes should avoid SSDs so they don’t burn through the write endurance too fast, but I haven’t seen anything specific on just how much is too much.

    Personally I split the difference and run the system on SSD and host the bulk data on a separate NAS with a pile of spinning disks.
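
    If you want a rough feel for whether a given mount can keep up, a crude write-and-sync probe is enough to compare them (just a sketch in Go, not a proper benchmark; the mount paths are placeholders):

    ```go
    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // probe writes and fsyncs 16 MiB in 1 MiB chunks under the given directory and
    // reports how long it took -- a crude stand-in for a busy container's IO.
    func probe(dir string) time.Duration {
        f, err := os.CreateTemp(dir, "ioprobe-*")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        defer f.Close()

        buf := make([]byte, 1<<20) // 1 MiB of zeroes
        start := time.Now()
        for i := 0; i < 16; i++ {
            if _, err := f.Write(buf); err != nil {
                log.Fatal(err)
            }
            // Sync so we measure the device, not the page cache.
            if err := f.Sync(); err != nil {
                log.Fatal(err)
            }
        }
        return time.Since(start)
    }

    func main() {
        // Placeholder mount points: local SSD for the system, NAS share for bulk data.
        for _, dir := range []string{"/var/lib/docker", "/mnt/nas/media"} {
            fmt.Printf("%s: wrote and synced 16 MiB in %v\n", dir, probe(dir))
        }
    }
    ```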