Article from The Atlantic, archive link: https://archive.ph/Vqjpr
Some important quotes:
The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.
The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.
Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.
Summary: Tech bros want money, tech bros want speed, tech bros want products.
Scientists want safety, researchers want to research…
GPT-4 and anything similar isn’t going to pose an existential threat to humanity.
Eventually, yeah, there is probably a possibility of existential risk from AI. I don’t know where that line ultimately is, and getting an idea of that might be something important for humanity to figure out, but I am pretty confident that whatever OpenAI is presently doing isn’t it.
Same reason that Musk and his six month moratorium on AI work doesn’t make much sense. We’re not six months away from an existential threat to humanity.
I think that funding efforts to have people in the field working on the Friendly AI problem is a good idea. But that’s another story.
I’m much more worried about the social implications. Namely, the displacement of workers and introduction of new efficiencies to workflows, continuing to benefit only those who are rich and in power, and driving more of us towards poverty.
It’s not an immediate existential threat, but it’s absolutely a serious issue that we aren’t paying enough attention to.
How did the industrial and information revolutions work out for us? Sure we live lives of convenience, but our entire existences have been manipulated into making the rich richer.
Looking at long and short term trends in the wealth gap, I have absolutely no faith that this will go well.
You do realize that a lot of people are already being displaced by AI, right? These are not “unskilled” jobs either. For example, the illustrators who used to get those jobs probably spent thousands of hours to reach that level.
AI is already taking video game illustrators’ jobs in China
https://restofworld.org/2023/ai-image-china-video-game-layoffs/
CNET used AI to write articles. It was a journalistic disaster. - The Washington Post
https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/
The apps using GPT-4 without regard to safety can be, though. Example: replacing a human with a chatbot for suicide prevention.
Being an existential threat is a much higher bar – that’s where humanity’s continued existence is at threat.
There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity’s existence on the line.
It’s the same situation, just writ large. Dumb human decisions to put AI where it shouldn’t be. Heck, you can put it in charge of the nuclear missiles now if you want to. Don’t, though. That’d be really, really stupid.
Part of my knee-jerk dislike of the AI hype is that it’s glorified text completion. It doesn’t know shit. It only knows the probability of the next word, given the words so far. AGI is not happening anytime soon, and all this is techbro theatre for the sake of money.
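To make the “it only knows next-word probabilities” point concrete, here’s a toy sketch (a trivial bigram model over a made-up corpus, nothing like GPT-4’s actual architecture): at its core, text completion is just estimating P(next word | previous words) from counts.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; in a real LLM this would be trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Return P(next word | word) as estimated from the corpus counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# After "the", the model predicts: cat 50%, mat 25%, fish 25%.
print(next_word_probs("the"))
```

A real model replaces the count table with a neural network conditioned on a long context, but the output is the same kind of thing: a probability distribution over the next token, which is then sampled from.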
Anyone who reads a wall of bland generated text and thinks we’re about to talk to god is seriously mistaken.