One passage of note:

Where does all of this leave the Firefox browser? Surman argued that the organization is very judicious about rolling AI into the browser — but he also believes that AI will become part of everything Mozilla does. “We want to implement AI in a way that’s trustworthy and benefits people,” he said. Fakespot is one example of this, but the overall vision is larger. “I think that’s what you’ll see from us, over the course of the next year, is how do you use the browser as the thing that represents you and how do you build AI into the browser that’s basically on your side as you move through the internet?” He noted that an Edge-like chatbot in a sidebar could be one way of doing this, but he seems to be thinking more in terms of an assistant that helps you summarize articles and maybe notifies you proactively. “I think you’ll see the browser evolve. In our case, that’s to be more protective of you and more helpful to you. I think it’s more that you use the predictive and synthesizing capabilities of those tools to make it easier and safer to move through the internet.”

  • taanegl@beehaw.org · 46 points · 1 year ago (edited)

    Say it with me now: local AI, local AI… or fuck off.

    That being said, ARM laptops and probably even workstations are the future, and so is RISC-V. I suspect we’ll see more tensor cores or AI-related processing built into SoCs.

    If it’s then only a question of hardware enablement and a software companion to go along with it, I’m all for it.

    Go Mozilla…! But again: local AI, or fuck off.

    • InfiniWheel@lemmy.one · 10 points · 1 year ago

      I mean, so far their most recent attempt at AI is a local AI based on PrivateGPT called MemoryCache.

    • Tau@sopuli.xyz · 10 points · 1 year ago

      They use local models for Firefox Translations, so I would expect they would do something similar.

    • averyminya@beehaw.org · 7 points · 1 year ago

      I’ve been hopeful for an external hardware device, something akin to Mythic AI’s analog hardware. It essentially offloads the heavy-duty work done by the GPU, with far lower power consumption and about 98-99% accuracy, then sends the output data back to the computer to be digitized. Adding more tensor cores just increases power consumption, which is already an issue.

      That company in particular was using this method for real-time AI tracking in cameras, but I feel like it could be easily adapted to effectively eliminate the work NVIDIA is doing for AI on GPUs. Why brute-force AI with power and tensor cores when a couple of wires and some voltage can sift through the same or larger models at the same or faster speeds, with, well, okay, about 98-99% accuracy? It could be a simple hardware attachment via PCIe, or hell, even USB with a small bottleneck for conversion times. I just used an app to upscale a photo locally on my phone; it took about 14 minutes (Xperia 1 IV). I could easily have offloaded that work to an analog AI device. We are nearly at the point where we can just run “AI*” on a phone at nearly PC speeds.

      All this to say: local AI indeed. The only way AI works is when everyone has access to it. Give full, free access to everybody and the fear of corporate interference drops drastically. There are plenty of models available online not made by Google or Microsoft pushing whatever or harvesting data back (remember to firewall your programs if you run them locally). Ideally tagsets could be open-sourced, but in the capitalist world I could also see independent artists selling models of their work under a license.

      * Of course, “AI” here as a broad-spectrum term encompassing model-based projects and LLMs for assistants and generative imaging, not actual AI as a semi-autonomous intelligence.
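      The tradeoff described above (near-exact results from low-power analog hardware, with the digitized output sent back to the host) can be illustrated with a toy simulation. This is a minimal sketch in plain Python, all names hypothetical, with the accelerator's ~1-2% inaccuracy modeled as a random perturbation of an exact matrix multiply:

```python
import random

def digital_matmul(a, b):
    # exact reference multiply, used as the "digital" baseline
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def analog_matmul(a, b, tolerance=0.01):
    # hypothetical analog accelerator: each output value comes back with
    # up to ~1% error, modeling the "about 98-99% accuracy" figure
    exact = digital_matmul(a, b)
    return [[x * (1 + random.uniform(-tolerance, tolerance)) for x in row]
            for row in exact]
```

      In this toy model, every entry of `analog_matmul(a, b)` lands within the stated tolerance of the exact result, which is the sense in which such hardware trades a sliver of accuracy for power savings.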

      • jarfil@beehaw.org · 2 points · 1 year ago (edited)

        Neuromorphic hardware seems best suited as an extension of RAM storage. It doesn’t need to use the DAC/ADC approach of Mythic AI; some versions are compatible with a CMOS process and could be integrated either directly into the processor, maybe as an extension of the cache or as a dedicated neural processing module, or into RAM modules.

        It’s pretty clear that current NN processing solutions, by repurposing existing hardware, are bound to get replaced by dedicated hardware with a fraction of the power requirements and orders of magnitude larger processing capabilities.

        Once some popular use cases for large NNs have been successfully proven, we can expect future hardware to come with support for them, so it also makes sense to make plans for software that can use them. And yes, local AI… and possibly trainable locally.

    • etrotta@beehaw.org · 4 points · 1 year ago

      The vast majority of consumer devices, both mobile and laptops/desktops, are not yet powerful enough to run local AI with a good user experience, and even if they were, a lot of users would still prefer having it run in the cloud rather than using up their phone battery.

    • Bilb!@lem.monster · 3 points · 1 year ago

      Local by default, option to go remote. Even the privacy-first types might want to offload that to a more powerful local machine.

      They could even sell access to a Mozilla-provided AI server, like they do with the VPN service.
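      “Local by default, option to go remote” is essentially a fallback chain: try the on-device model first, then a more powerful machine or a hosted service. A minimal sketch, where every backend name is a hypothetical stand-in rather than any real Mozilla API:

```python
def complete(prompt, backends):
    """Try each AI backend in order (local first); fall back on failure."""
    for backend in backends:
        try:
            return backend(prompt)
        except OSError:
            continue
    raise RuntimeError("no AI backend reachable")

def local_model(prompt):
    # stand-in for an on-device model that happens to be unavailable
    raise OSError("local model not installed")

def hosted_service(prompt):
    # stand-in for a paid, provider-hosted endpoint (hypothetical)
    return "summary of: " + prompt
```

      A privacy-first user could put their own LAN machine ahead of any hosted endpoint in the `backends` list, which is exactly the “offload to a more powerful local machine” option.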

      • taanegl@beehaw.org · 1 point · 1 year ago

        Maybe some “Folding@Home” kind of thing to offload public AI projects, i.e., decentralised processing.