• 0 Posts
  • 66 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Do you use autocomplete? AI, in some of the various ways it's being posited, is just spicy autocomplete. You can run a pretty decent local AI on SSE2 instructions alone.

    Now you don’t have to accept spicy autocomplete, just like you don’t have to accept plain-jane autocomplete. The choice is yours; Mozilla isn’t planning on spinning extra cycles in your CPU or GPU if you don’t want them spun.

    But I distinctly remember the grumbles when Firefox brought local db ops into the browser to give it memory for forms. Lots of people didn’t like the notion of filling out a bank form or something and then that popping into an SQLite db.

    So, your opinion: I don’t blame you. I don’t agree with your opinion, but I don’t blame you. Completely normal reaction. Don’t let folks tell you different. Just like we need the gas pedal for new things, we need the brake as well. I would hate to see you go and leave Firefox, BUT I would hate it even more if you felt something was forced upon you and you just had to grin and bear it.




  • It absolutely could. Heck, RPMs and DEBs pulled from random sites can do the exact same thing as well. Even source code can hide something if it’s not checked. There’s even a very famous hack, presented by Ken Thompson in 1984, that really speaks to the underlying question: “what is trust?”

    And that’s really what this gets into. The means of delivery change as the years go by, but the underlying principle of trust is the thing that stays the same. In general, Canonical does somewhat review apps published to Snapcraft. However, that review does not mean you are protected, and this is very clearly indicated in the TOS.

    14.1 Your use of the Snap Store is at your sole risk

    So yeah, don’t load up software that you, yourself, cannot review. But at the same time, there’s a whole question of trust here that’s going to need to be reviewed. Not “Oh, you can never trust Canonical ever again!”, but a pretty straightforward, systematic review of that trust:

    • How did this happen?
    • Where was this missed in the review?
    • How can we prevent this particular thing that allowed this to happen in the future?
    • How do we indicate this to the users?
    • How do we empower them to verify that such has been done by Canonical?

    No one should take this as “this is why you shouldn’t trust Ubuntu!”, because as you and others have said, this could happen to anyone. It should be taken as a call for Canonical to review how they put things on Snapcraft and what they can do to ensure users have all the tools to verify that at least this specific issue doesn’t happen again. We cannot prevent every attack, but we can do our best to prevent repeating the same attack.

    It’s all about building trust. And yeah, Flathub and AppImageHub can, and should, take a lesson from this to preemptively prevent this kind of thing from happening there. I know there’s a propensity to wag the finger in the distro wars, tribalism runs deep, but anything like this should be looked at as an opportunity for everyone to review that very important aspect of “trust”. It’s one of the reasons open source is very important: so that we can all openly learn from each other.




  • I submit Nintendo’s online service as evidence that that is not true in the least. MK8, Smash, Splatoon 3, all of them have atrocious online. Pokémon Unite, using Tencent’s online services, runs circles around anything Nintendo has offered where online is a major factor, and that’s on the same hardware.

    Nintendo has their IP and they take extremely good care of it. No argument there. But holy shit is Nintendo’s online service absolute trash. I will always have something Nintendo because I must always have my Animal Crossing, but holy fuck, let’s not kid ourselves about Nintendo’s online stuff. Anything that uses Nintendo’s servers for matchmaking or their network stack for connectivity is just garbage.

    I will always love a good Mario, Pikmin, or Animal Crossing but Nintendo clearly isn’t investing a single cent into online anything. And that is just my 2¢.


  • I have a Brother HL-L3230CDW. It has been a workhorse and has quickly become my most prized possession of all the things I own. It takes anyone’s toner and produces quality without question. It works with my various Linux, Mac, Windows, and Android devices without hesitation and with minimal fuss to get set up.

    So that’s what I would recommend. It’s a good bit of coin up front, but in my opinion it has paid for itself in cheaper long-run TCO and in the sanity of it just fucking working.



  • Both are vendor-specific implementations of general-purpose computing on GPUs. This is in opposition to open standards like OpenCL, which a lot of the exascale big boys out there mostly use.

    nVidia spent a lot of cash on “outreach” to get CUDA into various packages in R, Python, and whatnot, and that displaced a lot of the OpenCL stuff. These libraries are what a lot of folks spin up on, since most of the legwork is done for them in the library. With the exascale rigs, you literally have a team that does nothing but code very specific things for the machine in front of them, so yeah, they go with the thing that is the most portable, but that doesn’t exactly yield libraries for us mere mortals to use.

    AMD has only recently had the cash to start paying folks to write libs for their stuff, so we’re starting to see it come to Python libs and whatnot. Likely, once it becomes a fight of CUDA vs. ROCm, people will start heading back over to OpenCL. The “worth it” of vendor lock-in for CUDA and ROCm will diminish more and more over time. But as it stands, with CUDA you do get a good bit of “squeezing that extra bit of steam out of your GPU” by selling your soul to nVidia.

    That last part also plays into the “why” of CUDA and ROCm. If you happen to NOT have a rig with 10,000 GPUs, then the difference between getting 98% out of your GPU and 99.999% out of your GPU means a lot to you. If you do have 10,000 GPUs, a 1% inefficiency is okay: across 10,000 GPUs the 1% loss is barely noticeable, and it’s not worth losing the portability of OpenCL over.
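    To make the portability trade-off concrete, here’s a minimal sketch of the vendor-neutral side (my own illustration, not anything from a specific library): plain C against the OpenCL host API, enumerating whatever platforms and devices the installed drivers expose, whether that’s nVidia, AMD, or Intel. That’s the portability you trade away when you write straight CUDA or ROCm code.

    /* Minimal OpenCL host-side enumeration: lists every platform and device
     * the installed drivers expose (nVidia, AMD, Intel, ...).
     * Build with something like: cc list_cl.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;

        if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
            fprintf(stderr, "No OpenCL platforms found\n");
            return 1;
        }

        for (cl_uint p = 0; p < nplat; p++) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
            printf("Platform: %s\n", pname);

            cl_device_id devices[16];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < ndev; d++) {
                char dname[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
                printf("  Device: %s\n", dname);
            }
        }
        return 0;
    }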




  • One of the specific issues raised by those who’ve worked with Wayland is echoed here in Nate’s other post that you mentioned:

    Wayland has not been without its problems, it’s true. Because it was invented by shell-shocked X developers, in my opinion it went too far in the other direction.

    I tend to disagree. Had, say, the XDG stuff been specified in the protocol, implementing handlers for some of that XDG stuff would have been required in things that honestly wouldn’t have needed them. I don’t think infotainment systems need a concept of copy/paste, but having to write:

    Some_Sort_Of_Return handle_copy(wl_surface *srf, wl_buffer *buf) {
        // Completely ignore this
        return 0;
    }

    Some_Sort_Of_Return handle_paste(wl_surface *srf, wl_buffer *buf) {
        // Completely ignore this
        return 0;
    }
    
    

    is really missing the point of starting fresh. It’s bytes in the binary that didn’t need to be there, and while my example is pretty minimal for shits and giggles, IRL it would have been a great way to introduce “randomness” and “breakage” for those just wanting to ignore this entire aspect.

    But it’s one of those agree-to-disagree things. I think the level of hands-off that Wayland went with was the correct amount. And now that we have things like wlroots, it’s even better: if you want to start there, you can, and add what you need. XDG is XDG, and if that’s what you want, you can have it. But if you want your own way (because eff working nicely with GNOME and KDE, if that’s your cup of tea), you’ve got all the rope in the world you will ever need.

    I get what Nate is saying, but things like XDG are just what happened with ICCCM. And because Wayland came in super lightweight, the inevitability of XDG had lots of room to specify things, whereas ICCCM had to contort itself to fit around X. I don’t know, the way I like to think about it is like unsalted butter. Yes, my potato is likely going to need salt and butter. But I like unsalted butter, because then if I want a pretty lightly salted potato, I’m not stuck starting from salted butter’s level of salt.

    I don’t know, maybe I’m just weird like that.


  • Over on Nate’s other blog entry he indicates this:

    The fundamental X11 development model was to have a heavyweight window server–called Xorg–which would handle everything, and everyone would use it. Well, in theory there could be others, and at various points in time there were, but in practice writing a new one that isn’t a fork of an old one is nearly impossible

    And I think this is something people tend to forget. X11 as a protocol is complex, and writing an implementation of it is difficult to say the least. Because of this, we’ve all kind of relied on Xorg’s implementation, and things like KDE and GNOME piggyback on top of that. However, nothing (outside of the pure complexity) prevented KWin (just as an example) from implementing its own X server. KWin having its own X server would give it specific things that better handle what KWin specifically needs.

    A good parallel is how crazy the HTML5 spec has become, and how now pretty much only Google can write a browser for that spec (with, thankfully, Firefox also keeping up), and everyone else just clones that browser and puts their specific spin on it. But if a deep enough core change happens, it’s likely to find its way into all of the spins. And that was some of the issue with X. A good example here: because of the specific way X works, an “OK” button is actually implemented by your toolkit as a child window. Menus? Those are windows too. In fact, pretty much no toolkit uses primitives anymore. It’s all windows with lots and lots of text attributes, and your toolkit, Qt, Gtk, WINGs, EFL, etc., handles all those attributes so that events like “clicking a mouse button” work as if you had clicked a button and not a window that’s drawn to look like a button.

    That’s all because these toolkits want to do things that X won’t explicitly allow them to do. Now, the various DEs could just write an X server that has their own concept of what a button should do, how it should look, etc. And that would work, except that, say, you fire up GIMP, which uses Gtk, and Gtk has its own idea of how that widget should look and work, and boom, things break on the KDE X server. That’s because of the way X11 is defined: there’s this middle man that always sits there dictating how things work. In X, clients draw to the server, not to the screen. And that’s fundamentally how X and Wayland are different.
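    Just to make the “a button is just a window” bit concrete, here’s a rough Xlib sketch (purely my own illustration, not how any real toolkit is written): the “OK button” is nothing more than a child window that we listen for ButtonPress on and draw a label into. Everything that makes it look and feel like a button is the toolkit’s job.

    /* A "button" in X is just a child window the toolkit draws on and listens to.
     * Build with something like: cc fake_button.c -lX11 */
    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        int scr = DefaultScreen(dpy);
        Window top = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 200, 100, 1,
                                         BlackPixel(dpy, scr), WhitePixel(dpy, scr));

        /* The "OK button": just another window, parented to the first one. */
        Window button = XCreateSimpleWindow(dpy, top, 60, 35, 80, 30, 1,
                                            BlackPixel(dpy, scr), WhitePixel(dpy, scr));

        XSelectInput(dpy, button, ButtonPressMask | ExposureMask);
        XMapWindow(dpy, button);
        XMapWindow(dpy, top);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose && ev.xexpose.window == button)
                XDrawString(dpy, button, DefaultGC(dpy, scr), 30, 20, "OK", 2);
            if (ev.type == ButtonPress && ev.xbutton.window == button) {
                printf("the \"button\" was clicked\n");
                break;
            }
        }

        XCloseDisplay(dpy);
        return 0;
    }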

    I think people think of Wayland in the same way as X11: that there’s this Xorg that exists, and we’ll all be using it and configuring it. And that’s not wholly true. In X we have the X server, and in that department we had Xorg/XFree86 (and some other minor bit players). The analog for that in Wayland (roughly, because Wayland ≠ X) is the Compositor, of which we have Mutter, Clayland, KWin, Weston, Enlightenment, and so on. That’s more than the single one we’re used to, and it’s because the Wayland protocol is simple enough to allow these multiple implementations.

    The skinny is that a Compositor needs to at the very least provide these:

    • wl_display - This is the protocol itself.
    • wl_registry - A place to register objects that come into the compositor.
    • wl_surface - A place for things to draw.
    • wl_buffer - When those things draw there should be one of these for them to pack the data into.
    • wl_output - Where rubber hits the road pretty much, wl_surface should display wl_buffer onto this thing.
    • wl_keyboard/wl_touch/etc - The things that will interact with the other things.
    • wl_seat - The bringing together of the above into something a human being is interacting with.

    And that’s about it. The specifics of how to interface with hardware and whatnot are mostly left to the kernel. In fact, compositors are pretty much just doing everything in EGL; that is, KWin’s wl_buffer (just a random example here) is an eglCreatePbufferSurface with other stuff specific to what KWin needs, and that’s it. I would assume Mutter is pretty much the same case. This gets a ton of the formality stuff that X11 required out of the way and allows compositors more direct access to the underlying hardware. Which was pretty much the case for all of the window managers since 2010-ish anyway: all of them basically window-manage in OpenGL, because OpenGL allowed them to skip a lot of X. Of course there is GLX (that one bit where X and OpenGL cross), but that’s so much better than dealing with Xlib and everything it requires, which would routinely call for “creative” workarounds.
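    If you want to see how lean the client side of that handshake is, here’s a minimal sketch using libwayland-client (purely illustrative, not taken from any particular compositor or toolkit): connect to the compositor, grab the wl_registry, and print every global the compositor chose to advertise, wl_compositor, wl_seat, wl_output, and so on.

    /* Minimal Wayland client: connect and list the globals the compositor advertises.
     * Build with something like: cc list_globals.c -lwayland-client */
    #include <stdio.h>
    #include <wayland-client.h>

    static void handle_global(void *data, struct wl_registry *registry,
                              uint32_t name, const char *interface, uint32_t version) {
        /* Each global is an interface name plus a version: wl_compositor, wl_seat, ... */
        printf("%u: %s (version %u)\n", name, interface, version);
    }

    static void handle_global_remove(void *data, struct wl_registry *registry, uint32_t name) {
        /* A global went away (say, an output got unplugged); nothing to do here. */
    }

    static const struct wl_registry_listener registry_listener = {
        .global = handle_global,
        .global_remove = handle_global_remove,
    };

    int main(void) {
        struct wl_display *display = wl_display_connect(NULL);
        if (!display) {
            fprintf(stderr, "no Wayland compositor found\n");
            return 1;
        }

        struct wl_registry *registry = wl_display_get_registry(display);
        wl_registry_add_listener(registry, &registry_listener, NULL);

        /* One round trip is enough for the compositor to announce its globals. */
        wl_display_roundtrip(display);

        wl_display_disconnect(display);
        return 0;
    }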

    This is what’s great about Wayland: it allows KWin to focus on what KWin needs and Mutter to focus on what Mutter needs, but provides enough of a generic interface that Qt applications will show up on Mutter just fine. Wayland goes out of its way to get out of the way. BUT that means things we’ve enjoyed previously aren’t there, like clipboards, screen recording, etc., because X dictated those things and for Wayland they’re outside of scope.






  • I am so sorry this got so long. I’m absolutely horrible at brevity.

    Applications use things called libraries to provide particular functions rather than implementing those functions themselves. Take “handle an HTTP request” as an example: you can just use an HTTP library to handle it for you, so you can focus on developing your application.
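    As a tiny, concrete sketch of that (using the real libcurl library in C, just as an illustration): fetching a page over HTTP becomes a handful of calls into the library instead of you writing sockets, headers, and TLS yourself.

    /* Let an HTTP library (libcurl) do the protocol work for the application.
     * Build with something like: cc fetch.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* All the HTTP details live inside the library, not the application. */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        CURLcode res = curl_easy_perform(curl);   /* response body goes to stdout by default */
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }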

    As time progresses, libraries change and release new versions. Most of the time one version is compatible with the other. Sometimes, especially when there is a major version change, the two versions are incompatible. If an application relies on that library and a major incompatible change is made, the application also needs to be changed for the new version of the library.

    A Linux distro usually selects the version of each library that they are going to ship with their release and maintains it via updates. However, your distro provider and the developer of some neat program you might use are usually two different people. So the neat program you use might have been changed to be compatible with a library version that won’t make it into your distro until the next release.

    At that point you have one of two options: wait until your distro provides the updated library, or go it alone and update the library yourself (and since libraries can depend on other libraries, you could be opening a whole Pandora’s box here). The go-it-alone route also means that you have to turn off your distro’s updates, because they’ll just overwrite everything you’ve done library-wise.

    This is where snaps, flatpaks, and AppImages come into play. In a very basic sense, they provide a means for a program to include all the libraries it’ll need to run, without those libraries conflicting with your current setup from the distro. You might hear them called “containerized programs”; they’re not exactly the Docker-style “container”, but from an isolation perspective that’s mostly correct. So that neat application that relies on the newest libraries can be put into a snap, flatpak, or AppImage, and you can run it with those new libraries, no need for your distro to provide them or for you to go it alone.

    I won’t bore you with the technical differences between the formats, but will mostly focus on what I usually hear as the objectionable issue with snaps. Snap is a format developed by Canonical. All of these formats need a means of distribution, that is, how you get the program installed and how it gets updated. Because, you know, getting regular updates of your program is still really important. With snaps, Canonical uses a cryptographic signature to indicate that the program was distributed through their Snap Store. And that’s the main issue folks have taken with snaps.

    So unlike the other formats, snaps are only really useful when they are acquired from Canonical’s Snap Store. You can bypass the cryptographic signature check via the command line (installing a local .snap with the --dangerous flag), but Ubuntu will not automatically check for updates on software installed that way; you must check for updates manually. In contrast, anyone can build and maintain their own flatpak “store” or central repository. Only Canonical can distribute snaps and provide all of the nice distribution features like automatic updates.

    So that’s the main gripe; there are technical issues between the formats as well, which I won’t get into. But the main high-level argument is the conflict between the “open and free to all” idea usually associated with Linux (and FOSS [Free and open-source software] in general) and the “only Canonical can distribute” that comes with snaps. So as @sederx indicated, if that’s not an argument that resonates with you, the debate is pretty moot.

    There are some user-level differences, like some snaps running a bit slower than a native program, though Canonical has updated snaps to address some of that. Flatpak sandboxing can make it difficult to access files on your system, but flatpak permissions can be edited with things like Flatseal. Etc. It’s what I would file in the “papercut” box of problems. But for some, those papercuts matter and ultimately turn people off from the whole Linux thing. So there are arguments that come from that as well, but those are so universal, just different in how the papercut happens, that I file them under the debate between containerized and native applications rather than a debate about formats.