  • AI has a lot of great uses, and a lot of stupid smoke-and-mirrors uses. For example, text-to-speech and live captioning or transcription are useful.

    “Hypothetical AI desktop”, “Siri”, “Copilot+”, and other assistants are smoke and mirrors, mainly because they don’t work. But even if they did, they would be unreliable (because AI is unreliable) and would have to be restricted to avoid causing issues, so they would not be useful.

    Plus, on Linux they would be especially useless, because there are a million ways to do the same thing and a million different setups. What if you asked the AI to “change the screen resolution” and it started editing GNOME files while you are on KDE, or started mangling your heavily customized xorg.conf?
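
    Just to illustrate how fragmented even the “simple” case is, here is roughly what changing the resolution looks like on different setups (the output name HDMI-1 is an assumption, and the kscreen-doctor syntax is from memory, so double-check it):

    ```sh
    xrandr --output HDMI-1 --mode 1920x1080          # X11, any desktop
    kscreen-doctor output.HDMI-1.mode.1920x1080@60   # KDE Plasma on Wayland
    # GNOME on Wayland has no official CLI for this at all:
    # you would have to go through D-Bus or edit ~/.config/monitors.xml
    ```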

    Plus, all the OpenAI stuff you are seeing these days doesn’t work because it’s clever, it works because it’s huge. ChatGPT needs to be trained for days or weeks on specialized hardware; who’s gonna pay for all that in the open source community?


  • Distributing software is not instantaneous. Assuming Mozilla has already submitted the update to Flathub, it will take some time before it’s validated and available for download.
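
    If you are curious whether Flathub has published it yet, you can check from the command line (using Firefox’s app ID as the example):

    ```sh
    flatpak remote-info flathub org.mozilla.firefox   # shows the version currently on Flathub
    flatpak update org.mozilla.firefox                # pulls it as soon as it is there
    ```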

    If you had used native packages instead of Flatpak, you would be in the same situation, as Fedora’s update system keeps updates in testing until enough people confirm they’re fine.

    If you wanted the update as soon as possible, you would have to download the prebuilt binary from Mozilla, but then you would have to handle updates manually.

    Just be patient for a few days.


  • IMHO I would avoid the uBlue distros and just go for the official Fedora spins. The maintainers have good intentions, but they don’t have the means to maintain that many distros “properly”. I often end up enabling COPR packages meant for Bazzite in my Fedora install, only to find out the program doesn’t work.

    That being said, as the other comments told you, you can still install native apps on immutable distros, it’s just a bit more work. I don’t expect Distrobox or Toolbox to be much faster than Flatpak, as they are all just containers with a nice CLI, except Flatpak is easier to update. But trying costs nothing.
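
    If you do want to try it, a minimal Distrobox session looks something like this (the image and package are just examples):

    ```sh
    distrobox create --name dev-box --image fedora:40   # create a container from a regular distro image
    distrobox enter dev-box                             # get a shell inside it, with your $HOME shared
    sudo dnf install htop                               # then install native packages as usual
    ```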



  • Systemd was actually a “clone” of Apple’s launchd. Similarities with Windows arise from the fact that it makes sense to manage services in certain ways on modern OSs. Also, services on Windows are completely different from Linux and macOS: a Windows service has to implement a special interface for the service control manager, it’s not just a normal exe you can run.


  • I think you are confusing “Windows-like” with “user-friendly”. A “bespoke archive that you find on some developer’s website, that you extract, that contains an executable and assets somewhere, that you move wherever you want to keep it, and that the user then remembers to manually update sometimes, somehow” is not how you usually do things on Linux, and it is not even user-friendly.

    Distributions come with programs like GNOME Software or KDE Discover that let the user graphically install programs from the distro’s package manager, or from Flatpak or Snap. They also help keep programs updated and manage dependencies. That is user-friendly.

    I suggest using Flatpak. It works on almost all distros out of the box and is easy to install and maintain for the user. If Flatpak is too “bloated” for you because it uses containers, then you need to package for every distro manually, but that’s a lot of work. If it’s something that just needs to be used once and never again, consider an AppImage or a script, because they don’t need to be installed.

    Distros are different operating systems; it’s not gonna be easy to package for all of them without compromises.

    Also, if you really, really, really need to use your bespoke archive, you can do what native Steam games do: ship every library you link against inside the archive, and link with relative paths instead of system-wide paths, or use a launch script that loads your bundled libraries. But that’s not a great user experience. Steam gets away with it because the launcher manages the whole thing.
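
    For what it’s worth, a minimal sketch of that launch-script approach, assuming the archive unpacks to a folder with bin/ and lib/ inside (all names here are placeholders):

    ```sh
    #!/bin/sh
    # Resolve the directory this script lives in, so the archive can be moved anywhere
    HERE="$(dirname "$(readlink -f "$0")")"
    # Prefer the bundled libraries in ./lib over the system-wide ones
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"
    ```

    The rpath variant of the same idea is linking with something like -Wl,-rpath,'$ORIGIN/../lib', which bakes the relative search path into the binary and skips the wrapper entirely.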







  • You are not supposed to power the GPU like that. You should use two separate cables from the power supply. The second connector on the same cable is intended to “daisy-chain” low-power cards.

    It will probably work anyway, but better safe than sorry.

    Edit: I think it’s needed because:

    1. The power supply might have separate circuits for separate cables, and might not be able to deliver all the power the GPU needs through just one.
    2. The cable might not be rated for that much power flowing through it, and might overheat and melt over time.
    3. If you could just fork one cable into two, why would they put two connectors on the GPU? It’s not like they have different voltages; they are literally daisy-chained.

  • In addition to what the others have said, Windows already had its big paradigm change (“similar” to the ongoing change from X11 to Wayland), around 2007 with Windows Vista. They also didn’t get it quite right on the first try, but because Microsoft can do whatever they want, while on Linux you must convince the community that something is better, it was easier for them to just change everything under everyone’s nose.


  • Hmmm, that’s suspicious. There are a number of things in the way of video acceleration with that setup.

    First of all, on Fedora (uBlue is a derivative of Fedora) you need to install openh264 from dnf, not from the Firefox extension manager, and then you still need to change some settings in about:config. Second, you are using a Flatpak, and I’m not sure whether openh264 needs to be installed “inside the Flatpak”. And last, it might just be the Nvidia card.
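
    For the first two points, something along these lines is what I would try (package and runtime names are from memory, so double-check them):

    ```sh
    # Native Fedora: the codec comes from the fedora-cisco-openh264 repo
    sudo dnf install openh264 mozilla-openh264
    # then set media.gmp-gmpopenh264.enabled to true in about:config

    # Flatpak Firefox: the codec ships as a runtime extension instead
    flatpak install flathub org.freedesktop.Platform.openh264
    ```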

    The first two would also affect AMD.






  • The USB protocol was simple by design, so it could be implemented in small dumb devices like pen drives. More specifically, it used two pairs of wires: one pair for power and the other for data (four wires in total). Having a single half-duplex data line means you need some way of arbitrating who can send data at any time. The easiest way to do that is to have a single machine decide who gets to send data (the master), and the easiest way to choose the master is to not choose at all and have the computer always be the master. This means you couldn’t connect two computers together, because they would both try to be the master.
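
    You can actually see this host-centric design on any Linux machine:

    ```sh
    lsusb -t   # prints the USB tree: every device hangs off a host controller at the root
    ```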

    I used the past tense because you may have noticed that micro USB connectors have 5 pins, not 4. That’s because phones are computers, and they use the 5th pin to decide how to behave: if it’s grounded, they act as a slave (the male micro to male A cable grounds it); if it has a resistor (the OTG cable has one), they act as master; and if the devices are connected with a wire on that pin (on some special micro-to-micro cables), they negotiate the connection.

    When they made USB 3.0, they realized that not having the 5th wire on USB-A was stupid, so they added it (alongside some extra data lines); that’s why USB 3 connectors have an odd number of wires. So with USB 3 you can connect computers together, but you need a special cable that uses the negotiation wire. Also, I don’t know what software you need for it to work.

    USB-C is basically two USB 3.0 links in the same cable, so you can probably connect computers with that. But often the port on the device only uses one link, so it might not be faster. Originally they put in the pins for two connections so you could flip the connector; later they realized they could also use them to double the speed.







  • What is available is an X11 server, no more, no less; it cannot be used for anything other than X11. If they made X12, it would not work on Nvidia unless they wrote a new server, which they wouldn’t.

    You need to understand that the Xorg server everyone uses literally does not work on Nvidia, because it uses implicit sync, which the Linux graphics infrastructure requires. The only thing that works on Nvidia is specifically their own proprietary server.

    Nvidia does a lot of impressive stuff, but they have neglected the Linux scene for a long time, because it wasn’t convenient, and it shows.

    Edit: …what was available… because Nvidia is gradually implementing things the correct way, and Wayland is becoming more and more usable with every driver update. Because, surprise surprise, it does depend on the drivers. Also, both Intel and AMD work perfectly with Wayland.