Article from The Atlantic, archive link: https://archive.ph/Vqjpr

Some important quotes:

The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.

The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.

Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.

Summary: Tech bros want money, tech bros want speed, tech bros want products.

Scientists want safety, researchers want to research…

  • Quasari@programming.dev
    1 year ago

The apps using GPT-4 without regard to safety can be, though. Example: replacing a human with a chatbot for suicide prevention.

    • tal@lemmy.today
      1 year ago

Being an existential threat is a much higher bar – that’s where humanity’s continued existence is at stake.

      There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity’s existence on the line.

      • brothershamus@kbin.social
        1 year ago

It’s the same situation, just writ large: dumb human decisions to put AI where it shouldn’t be. Heck, you could put it in charge of the nuclear missiles now if you wanted to. Don’t, though. That’d be really, really stupid.

Part of my knee-jerk dislike of the AI hype is that it’s glorified text completion. It doesn’t know shit; it only knows the % chance of the next word. AGI is not happening anytime soon, and all this is techbro theatre for the sake of money.
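The “% chance of the next word” idea can be sketched as a toy sampler. This is illustrative only – the vocabulary and probabilities below are made up, and a real LLM scores tens of thousands of tokens with a neural network conditioned on the whole context, not a one-word lookup table:

```python
import random

# Toy next-word model: for each current word, the probability of each next word.
# (Made-up numbers; a real model learns these from training data.)
MODEL = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"the": 1.0},
    "ran": {"the": 1.0},
}

def next_word(word, rng):
    """Sample the next word according to the model's probabilities."""
    choices = MODEL[word]
    return rng.choices(list(choices), weights=list(choices.values()))[0]

def complete(start, n=5, seed=0):
    """Generate n more words by repeatedly sampling the next one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(complete("the"))
```

The point of the sketch: nothing in `MODEL` “knows” anything about cats or dogs. It just encodes which word tends to follow which, and sampling from it produces fluent-looking output anyway.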

        Anyone who reads a wall of bland generated text and thinks we’re about to talk to god is seriously mistaken.