• 0 Posts
  • 84 Comments
Joined 1 year ago
Cake day: July 30, 2023


  • I mean… DX 9, 10, and 11 were all released prior to Nadella being CEO/chairman.

    But in software, it’s very commonplace for library versions not to be backwards compatible without recompiling the software. This isn’t the same thing as opening a Word doc last saved on a floppy disk in 1997 in the 2024 build of Word 365 - this is about loading executable code. Even core libraries in Linux (like OpenSSL and ncurses) follow this same scheme, and more strictly than MS does.

    Using OpenSSL as an example, RHEL 7 provides an interface to OpenSSL 1.0, but 1.1 is not available in the core OS; you’d have to install it separately. 1.1 was introduced to the core in RHEL 8, with a compatibility library in a separate package to support 1.0 packages that hadn’t been recompiled against 1.1 yet. In RHEL 9, the same was true of OpenSSL 3 - a compatibility library for 1.1, with 1.0 support fully dropped from core. So no matter which version you use, you still have to install the right library package. That library package will then also have to work with your version of libc - which is often reasonably wide, but it has its limits just the same.

    Edit because I forgot a sentence in the last paragraph - like DirectX, VC++, and OpenGL, you have to match the major (and often the minor) version of ncurses, OpenSSL, etc. exactly, or else the executable won’t load and will generate a linking error. Even if you did mangle the binary code to link it, you’d still end up with data corruption or crashes because the library versions are too different to interoperate.
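
    To make the linking point concrete, here’s a minimal Python sketch of trying to load different major versions of libssl - the exact sonames are just illustrative, since which ones exist depends on your distro and which compat packages are installed:

        import ctypes

        # Each major version of libssl is a distinct shared object on disk.
        # A binary linked against one soname cannot load a different one.
        for soname in ("libssl.so.1.0.0", "libssl.so.1.1", "libssl.so.3"):
            try:
                ctypes.CDLL(soname)
                print(f"{soname}: loads OK (that version is installed)")
            except OSError as err:
                # Same class of failure a dynamically linked program hits
                # at startup: "cannot open shared object file"
                print(f"{soname}: {err}")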



  • DirectX, OpenGL, Visual C++ Redist, and many other support libraries typically have to match the major version that the program shipped with.

    For DirectX, that major version is 9, 10, 11, or 12. Any major library change has to be recompiled into the game by the original developer (or a very, VERY dedicated modder with solid low-level knowledge).

    Same goes for OpenGL, except I think they draw the line at the second number as well - 2.0, 3.0, 4.0, 4.1, 4.2, 4.3, 4.4.

    For VC++, versions come in years - typically you’ll see 2008, 2010, and 2013, and the last version, 2015-2022, is special. Programs built against 2013 or earlier require the redist for that exact year (the latest update of it) to run. For the 2015-2022 library, the major version spec never changed, so any program requiring 2015+ can (usually) just use the latest version installed (a quick check is sketched below).
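
    A quick way to see this on a live Windows system - a sketch only; the registry path is the one Microsoft documents for the 2015+ runtime, but treat it as an assumption for your machine:

        import winreg

        # The 2015-2022 redist registers itself under a single key because
        # the major version never changed across those years.
        key_path = r"SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\x64"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            version, _ = winreg.QueryValueEx(key, "Version")
            print(f"Installed 2015+ x64 runtime: {version}")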

    The one place this rule gets weird is DXVK and Intel’s older DX9-on-12. These are translation shim libraries that let the application keep speaking DX9 etc. and translate it on the fly into the commands of a much more modern library - Vulkan in the case of DXVK, or DX12 in Intel’s case.

    Edited to remove a reference to 9-on-12 that I think I had backwards.
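
    The shim idea itself is simple enough to sketch in a few lines of Python - all names here are made up, and real translation layers like DXVK do this for thousands of entry points, in C++, with heavy state tracking:

        # Toy model of a translation shim: the game keeps calling the old
        # interface, and the shim re-expresses each call for the new API.
        class ModernAPI:                       # stands in for Vulkan / DX12
            def submit(self, cmd: str) -> None:
                print(f"modern backend executing: {cmd}")

        class LegacyD3D9Shim:                  # stands in for the swapped-in d3d9.dll
            def __init__(self, backend: ModernAPI) -> None:
                self.backend = backend

            def draw_primitive(self, kind: str, count: int) -> None:
                # translate the legacy call into the modern vocabulary
                self.backend.submit(f"draw {count} {kind} primitives")

        device = LegacyD3D9Shim(ModernAPI())
        device.draw_primitive("triangle", 128)  # game code is unchanged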






  • Others have some good information here - all I’d like to add at the root is that Windows and Mac have a built-in DNS cache, and it’s pretty straightforward to add one to systemd distros (if it’s not already installed or in use) using systemd-resolved, or dnsmasq if you really dislike systemd. Some distros enable this out of the box.

    Systems that utilize a DNS cache keep copies of DNS query results for a period of time, making the application-level name lookup essentially 0 ms for a cached result. Cold lookups obviously still incur the latency of the DNS server itself.
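
    As a rough sketch of what a cache buys you - the names and the fixed TTL here are made up; real resolvers honor the TTL carried on each DNS record:

        import socket
        import time

        _cache: dict[str, tuple[float, str]] = {}
        TTL_SECONDS = 60  # real caches use the TTL from the record itself

        def cached_lookup(hostname: str) -> str:
            hit = _cache.get(hostname)
            if hit and time.monotonic() - hit[0] < TTL_SECONDS:
                return hit[1]                      # warm: ~0 ms, no network
            addr = socket.gethostbyname(hostname)  # cold: full resolver latency
            _cache[hostname] = (time.monotonic(), addr)
            return addr

        cached_lookup("example.com")  # cold, tens of ms is typical
        cached_lookup("example.com")  # warm, effectively free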



  • If a big MMO closes, that’d be rough, but communities tend to form around those types of games anyway, just like with Minecraft. You don’t have to pay Microsoft a monthly rate to host a Java server for you and a few friends; you just need a little IT knowledge and maybe a helper package to get everyone going. It’s still a single binary (see the sketch at the end of this comment), even if it doesn’t run well on a laptop at larger settings.

    With a big MMO, support groups and turnkey scripts will form to get things working as well as they can, along with online forums for finding existing open community servers run by people who have the hardware and knowledge to host a few dozen to a few hundred of their closest friends.

    Life finds a way.

    If it’s a complicated multi-node package where things need to be split up as gateway/world/area/instance servers, the community servers that form may tend towards larger player groups, since the knowledge and resources to do that are more specialized.
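
    For the Minecraft case above, the launch step really is about this small - the java flags and jar name are the conventional ones, so adjust to your hardware; a real setup also needs the EULA accepted and a port opened:

        import subprocess

        # A vanilla Java server is one jar; this is the whole launch step.
        # -Xmx/-Xms pin the heap size; "nogui" skips the server GUI window.
        subprocess.run(
            ["java", "-Xmx2G", "-Xms2G", "-jar", "server.jar", "nogui"],
            check=True,
        )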





  • If it’s anything like when I used a Mac regularly 7 years ago, Homebrew doesn’t install to /bin; it installs to /usr/local/bin, which only helps scripts that use env in their shebang line (assuming you don’t call the interpreter directly with the shell). You’re just putting a newer bash earlier in the PATH, not actually replacing the one that comes with the system.
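
    A minimal sketch of the difference, assuming Homebrew’s prefix is /usr/local and its bin directory precedes /bin in PATH:

        import shutil

        # "#!/usr/bin/env bash" resolves bash by walking PATH, exactly like
        # this lookup does; "#!/bin/bash" skips the search and always runs
        # the copy Apple shipped.
        print(shutil.which("bash"))               # e.g. /usr/local/bin/bash
        print(shutil.which("bash", path="/bin"))  # the system copy: /bin/bash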


  • TL;DR: a lot of people probably keep using the thing they already know, as long as it works well enough not to be a bother.

    Many, many years ago when I learned, I think the only ones I found were Apache and IIS. I had a Mac at the time, which came preinstalled with Apache2, so I learned Apache2 and got okay at it. While by release dates Nginx and HAProxy most definitely existed, I don’t think I came across either in my research. I don’t have any notes from the time because I didn’t take any - I was in high school.

    When I started doing Linux things, I kept using Apache for a while because I knew it. Then I found Nginx and learned it in a snap, because the config is more natural-language and hierarchical than Apache’s XMLish monstrosity. For the next decade I kept reaching for Nginx whenever I needed a webserver fast, because I knew it would work with minimal tinkering.

    Now, as of a few years ago, I knew that HAProxy, Caddy, and Traefik all existed. I even tried out Caddy on my homelab reverse proxy server (which has about a dozen applications routed through it). The first few sites were easy - just let the auto-LetsEncrypt do its job - but once I got to the sites that needed manual TLS (I have both an internal CA and use Cloudflare’s origin HTTPS cert) and other special config, Caddy started becoming as cumbersome as my Nginx conf.d directory. At the time, I also didn’t have a way to get software updates easily on my then-CentOS 7 server, so Caddy was okay-enough, but it was back to Nginx for me because it was comparatively easier to manage.

    HAProxy is something I’ve added to my repertoire more recently. It took me quite a while and lots of trial and error to figure out the config syntax, which is quite different from anything I’d used before (except maybe kinda like Squid, which I had learned not a year prior…), but once it clicked, it clicked. Now I have an internal high-availability (+keepalived) load balancer that can handle so many backend servers, do wildcard TLS termination, and validate backend TLS certs. I even got LDAP and LDAPS load balancing to AD working on it for services like Gitea that don’t behave well when there’s more than one LDAPS backend server.

    So, at some point I’ll get around to converting that do-everything reverse proxy to HAProxy. But I’ll probably need to deploy another VM or two first, because the existing one also hosts a static web server and I’ve been meaning to break up that server’s roles anyway (long ago, it was my everything server, before I used VMs).