• 0 Posts
  • 134 Comments
Joined 2 years ago
Cake day: April 23rd, 2023

  • […] I’d like to be able to backup to my home server. The main thing would probably just be my photos […]

    For the photos, since you have a home server, have you heard of Immich? For anything else, there was a time when I could have recommended syncthing-android, but development on that has been discontinued, though you can still try using it. Some privacy-conscious cloud services may allow you to sync app folders, backing up WhatsApp that way, but I have no experience with that.

    is the 8a likely to drop much in price after that? I don’t know how quickly the prices drop but considering the 8a is currently £500 I can’t see it dropping to <£300

    Instead of buying straight from Google, consider a refurbished 8a off eBay or somewhere local - my last two Pixel purchases were made that way. It tends to be substantially cheaper than buying new, even as little as 6 months after a product launches, and the 8a launched 9 months ago. Just pay close attention to seller ratings, reputation, and consistency - prices are lower there because the buyer takes on more of the risk.


  • Onihikage@beehaw.org
    to Privacy@lemmy.ml - Proton Ditches Mastodon
    2 months ago

    https://medium.com/@ovenplayer/does-proton-really-support-trump-a-deeper-analysis-and-surprising-findings-aed4fee4305e

    Thanks for the link, that’s a lot more context than the usual reactionary “Andy Yen said one nice thing about a Republican therefore he’s fascist pro-Trump MAGA” takes I’ve been seeing. Not only does it more or less disprove that narrative, it makes me question how much of the hate against him lately is genuine and how much of it has been seeded and signal-boosted by nation-state actors who don’t want people to use encrypted communications.

    Yen is clearly trying to be nonpartisan and praise what he sees as good for privacy while pointing out abuses of power, regardless of who has the power at the moment. He sees this as his way of adding weight to the scale in favor of better privacy and tearing down big tech. I know many in my country and on the web are hyper-polarized and addicted to anger, to the point that if someone says anything even slightly positive about their perceived political enemy, it’s seen as legitimizing and aligning with that enemy, but I don’t believe that’s a healthy or productive mindset to have. I believe that kind of divisive attitude is preventing us from uniting with those who should be agreeable to our cause, and that’s exactly what the oligarchs want. It’s making us weak.

    I’ve been on the fence for a while since this whole thing started, because I do use a paid Proton email, and it sounded bad, but I kept getting this nagging feeling I wasn’t seeing the full picture. That’s gone now - Andy may be politically and/or socially inept, and he may have a different perspective on what it means to support privacy and democracy, but I think it’s clear his heart is in the right place, and the work he and Proton are continuing to do for tech privacy is helping to erode authoritarian power structures, including Trump’s.


  • I appreciate the links, but these are all about how to efficiently process an audio sample for a signal of choice.

    Your stumbling block seemed to be that you didn’t understand how it was possible, so I was trying to explain that, but I may have done a poor job of emphasizing why the technique I described matters. When you said this in a previous comment:

    I do think that they’re not just throwing away the other fish, but putting them into specific baskets.

    That was a misunderstanding of how the technology works. With a keyword spotter (KWS), which all smartphone assistants use to detect their activation phrases, the phone isn’t catching any “other fish” in the first place, so there’s nothing to put into “specific baskets”.

    To borrow your analogy of catching fish, a full speech detection model is like casting a large net and dragging it behind a ship, catching absolutely everything and identifying all the fish/words so you can do things with them. Relative to a KWS it’s very energy intensive, and no one is likely to spend that much energy just to throw back most of the fish. Smart TVs, cars, and Alexa devices can all potentially use this method continuously, because for hardware that’s always plugged in, the power cost of constantly listening with a full model isn’t an issue. For those devices, your concern that they might put everything other than the keyword into different baskets is perfectly valid.

    A smartphone, to save battery, will be using a KWS, which is like baiting a trap with pheromones only released by a specific species of fish. When those fish happen to swim nearby, they smell the pheromones and go into the trap. You check the trap periodically, and when you find the fish in there, you pull them out with a very small net. You’ve expended far less effort to catch only the fish you care about without catching anything else.

    To use yet another analogy, a KWS is like a tourist in a foreign country who doesn’t know the local language and has gotten separated from their guide. They try asking locals for help but can’t understand anything, until someone says the name of their tour group - that, the tourist recognizes, and they can follow that person back to the group. That’s exactly what a KWS system experiences: it hears complete nonsense and gibberish until the key phrase pops out of the noise, and that one phrase it understands clearly.

    This is what we mean when we say that yes, your phone is listening constantly for the keyword, but the part that’s listening cannot transcribe your conversations until you or someone says the keyword that wakes up the full assistant.

    My question is, how often is audio sampled from the vicinity to allow such processing to happen.

    Given the near-immediate response of “Hey Google”, I would guess once or twice a second.

    Yes, KWS systems generally keep a rolling buffer of audio a few seconds long, and scan it a few times a second to see if it contains the key phrase.
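
    Since the rolling buffer is the crux of how this works, here’s a minimal sketch of the pattern in Python. The microphone read and the detector are stub placeholders rather than a real KWS library, so treat it as an illustration of the structure, not working wake-word code:

```python
import collections
import time

SAMPLE_RATE = 16_000          # audio samples per second
CHUNK = SAMPLE_RATE // 4      # read ~250 ms of audio at a time
BUFFER_SECONDS = 2            # keep roughly the last 2 seconds around

def read_audio_chunk():
    """Placeholder: a real device would pull CHUNK samples from the mic here."""
    return [0] * CHUNK

def keyword_in(samples):
    """Placeholder for the tiny KWS model: True if the wake phrase
    appears anywhere in the buffered window."""
    return False

# The deque silently drops the oldest samples as new ones arrive.
buffer = collections.deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

for _ in range(8):                      # loop forever on a real device
    buffer.extend(read_audio_chunk())   # newest audio pushes out the oldest
    if keyword_in(list(buffer)):
        print("wake phrase detected -> hand off to the full assistant")
        break
    time.sleep(0.25)                    # i.e. the buffer is scanned ~4x per second
```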


  • Onihikage@beehaw.org
    to Privacy@lemmy.ml - I know Phones dont listen but....
    2 months ago

    How can you catch the right fish, unless you’re routinely casting your fishing net?

    It’s a technique called Keyword Spotting (KWS). https://en.wikipedia.org/wiki/Keyword_spotting

    This uses a tiny speech recognition model that’s trained on a few very specific words or phrases which are (usually) distinct from general conversation. Because the model is so small, it needs very little computation to scan the audio stream for the keyword, even before optimization steps like quantization. Here’s a 2021 paper where a team of researchers optimized a KWS to use just 251 µJ (about 0.00007 milliwatt-hours) per inference: https://arxiv.org/pdf/2111.04988

    The small size of the KWS model, required for that low power consumption, means it alone can’t be used to listen in on conversations: it outright doesn’t understand anything other than what it’s been trained to identify. This is also why you usually can’t set the keyword to just anything, only to one of a limited set of words or phrases.

    This all means that if you’re ever given the option of a completely custom wake phrase, you can be reasonably sure that device is running full speech detection on everything it hears. Devices that stay plugged in, like a smart TV or an Amazon Alexa, have a lot more freedom to listen as much as they want with as complex a model as they want. High-quality speech-to-text apps like FUTO Voice Input run locally on just about any modern smartphone, so something like a Roku TV can definitely do it.
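
    For a sense of scale, here’s a rough sketch (assuming PyTorch) of how tiny a keyword-spotting classifier can be. The architecture is made up for illustration and is not the one from the paper linked above:

```python
import torch
import torch.nn as nn

class TinyKWS(nn.Module):
    """Toy keyword spotter: classifies a ~1 second window of MFCC features
    as 'wake phrase' vs 'not wake phrase'. Architecture is illustrative only."""
    def __init__(self, n_keywords: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, n_keywords),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyKWS()
print("parameters:", sum(p.numel() for p in model.parameters()))  # ~2,500 weights

# One inference over a 10x49 MFCC "image" (roughly one second of audio).
window = torch.randn(1, 1, 10, 49)
print(model(window).softmax(dim=-1))  # probabilities for wake-phrase / not

# For contrast, full speech-to-text models run from tens of millions to over a
# billion parameters - far too heavy to run continuously on a phone's
# low-power always-on audio chip.
```

    A model this small has nowhere to store a general vocabulary; all it can represent is “did this window sound like my wake phrase or not”.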




  • I would recommend against pairing Battlemage with a low-spec CPU. As shown by Hardware Canucks, Hardware Unboxed, and others, Intel’s Arc graphics driver overhead is currently much higher than its competitors’, which means Arc cards are disproportionately affected by a weaker CPU. This causes the B580 to lose significantly more performance when paired with a low-end CPU than a roughly equivalent Nvidia or AMD card would. At the very low end, the difference is especially stark: in some games, the B580 goes from neck-and-neck with a 4060 on a high-end CPU to losing half its performance on a low-end older CPU, while the 4060 only loses about 25%.

    If you’re really stuck with a lower-end CPU, it would be far better to get a used midrange AMD or Nvidia GPU from an older product generation for the same price and use that.



  • Onihikage@beehaw.org
    to Linux@lemmy.ml - Linux suggestion
    4 months ago

    Have you ever seen Linux Journey? It’s a very informative set of tutorials on how Linux fundamentally works under the hood - all the separate systems that together make up an operating system. The concepts you learn there will apply to almost any distro in some way, even if some distros (like Atomic ones) don’t let you mess with all of it.

    For more top-level transition concerns, given that you’re coming from stock Debian running KDE… Bazzite can also run KDE, so provided you select KDE when you download it, your GUI experience should be pretty much identical. Some minor but important differences would include themes, but there are guides for that, too.

    When it comes to package management, the intent on Atomic systems is that you basically don’t install traditional packages (Flatpaks are the preferred option), but Bazzite has frameworks in place so that you can install pretty much any package from any distro, as laid out in the documentation I linked in my previous post and just now. Work is also ongoing to make traditional package-based software installs more seamless with an upcoming switch from rpm-ostree to bootc, but that’s getting into the weeds. If you have a deb file for a GUI program that’s not available as a Flatpak, you’ll be using a Distrobox to install it, as sketched below.
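
    To make that last step concrete, here’s a rough sketch of the Distrobox flow for installing a .deb, wrapped in a small Python script (the same commands you’d normally type in a terminal). The box name, image tag, username, and .deb filename are all placeholders, and the documentation mentioned above is the authoritative reference:

```python
# Illustrative only: create a Debian box, install a local .deb inside it, and
# export the app's launcher to the host menu. Names and paths are placeholders.
import subprocess

def run(args):
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# 1. Create a Debian container (may prompt to pull the image the first time).
run(["distrobox", "create", "--name", "debian-box",
     "--image", "docker.io/library/debian:stable"])

# 2. Install the .deb inside the box. Your home directory is shared with the
#    container, so a file sitting in ~/Downloads is visible from inside it.
run(["distrobox", "enter", "debian-box", "--",
     "sudo", "apt", "install", "-y", "/home/youruser/Downloads/some-app.deb"])

# 3. Export the application's launcher so it appears in the host's app menu.
run(["distrobox", "enter", "debian-box", "--",
     "distrobox-export", "--app", "some-app"])
```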

    If you have any specific concerns about the differences, let me know and I can hopefully give you more details.


  • Onihikage@beehaw.org
    to Linux@lemmy.ml - Linux suggestion
    4 months ago

    I can highly recommend Bazzite for your needs. It has a KDE version (clearly your favorite desktop environment); it’s extremely safe/stable thanks to being an Atomic distro (you can always boot into the previous image if a system update breaks something); it has incredible documentation; it supports almost any traditional app through Distrobox (VPNs require rpm-ostree for now); it offers an easy scripted install of Waydroid for running Android apps natively; and it ships a few tweaks preconfigured so the desktop gaming experience is a little more seamless out of the box than on a stock distro. It really seems to tick all the boxes for what you’re looking for.

    If you want more focus on development and less on gaming, the Universal Blue team also makes Aurora for more developer-focused workloads, but Steam not being included in the image does introduce some usability regressions - Steam running via Flatpak or Distrobox is just plain less capable than a native install, though work is ongoing to make native installs Just Work even on Atomic systems.



  • You’re entirely correct, but in theory they can give it a pretty good go, it just requires a lot more computation, developer time, and non-LLM data structures than these companies are willing to spend money on. For any single query, they’d have to get dozens if not hundreds of separate responses from additional LLM instances spun up on the side, many of which would be customized for specific subjects, as well as specialty engines such as Wolfram Alpha for anything directly requiring math.

    LLMs in such a system would be used only as modules in a handcrafted algorithm - modules that do exactly what they’re good at, applied in a way that’s actually useful. To give an example, if you pass a specific context to an LLM with the right format of instructions and then ask it a yes-or-no question, even very small, lightweight models often give the same answer a human would. In this way, human-readable text can be converted into binary switches for an algorithmic state machine with thousands of branches of pre-written logic.
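
    As a rough sketch of what that might look like in Python - the model call is stubbed out and the routing questions are invented for illustration, not any real product’s pipeline:

```python
# An LLM used only as a yes/no module inside hand-written routing logic.
# run_small_model() is a stand-in so the example runs; a real system would
# call an actual local or hosted model there.

def run_small_model(prompt: str) -> str:
    # Placeholder inference call, invented for illustration.
    return "yes" if "integral" in prompt.lower() else "no"

def llm_yes_no(context: str, question: str) -> bool:
    """Prompt the model to answer strictly 'yes' or 'no'; anything else is
    treated as 'no', so the switch always resolves to a clean boolean."""
    prompt = (
        "Answer with exactly one word, yes or no.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return run_small_model(prompt).strip().lower().startswith("yes")

def route_query(query: str) -> str:
    # Each yes/no answer is a binary switch into pre-written branches of logic.
    if llm_yes_no(query, "Does answering this require exact math?"):
        return "hand off to a math engine (e.g. Wolfram Alpha)"
    if llm_yes_no(query, "Does this concern a specialized subject?"):
        return "hand off to a subject-specific model"
    return "answer with the general conversational model"

print(route_query("What is the integral of x^2?"))
print(route_query("Tell me a joke."))
```

    Multiply that by hundreds of branches and side models per query and you get the kind of system described above, which is exactly why it would cost so much more to run.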

    Not only would this probably use an even more insane amount of electricity than the current approach of “build a huge LLM and let it handle everything directly”, it would take much longer to generate responses to novel queries.