Profits Over Safety: Déjà Vu All Over Again At OpenAI!
This article was first published in the Wiser! newsletter on 26th Nov 2023. Wiser! is an AI-focused tech and business roundup of what’s going on and what’s coming next.
Image Credits: AI Generated Image Created On Canva
Capitalism Beats Safetyism At OpenAI
What’s that saying about a week being a long time in politics? Well, a week on from last week’s Wiser! about the unexpected sacking of Sam Altman, the boomerang CEO of OpenAI has returned.
Basically, here’s the deal. Late Tuesday, OpenAI announced that Sam Altman was being brought back after a five-day campaign to have him reinstated as the boss. Remember that in this five-day period between firing and then reinstating Altman as CEO, OpenAI had appointed two other CEOs! The campaign was a combination of efforts by Altman himself, his allies, investors, and the employees, who had threatened to quit en masse and go work at Microsoft.
It was also clear that there was no smoking gun to support the board’s decision to fire Altman. The Wall Street Journal reported, “the board said that Altman had been so deft (at not telling the truth), they couldn’t even give a specific example, according to the people familiar with the executives”.
In other words, the OpenAI board thought Altman was so good at being bad that they couldn’t even say what he had done wrong. This kind of paranoid and conspiratorial thinking in a boardroom will always lead to bad decisions.
Which explains why OpenAI is overhauling the company’s board of directors. Of those who voted to fire Altman, only Adam D’Angelo is staying on; new directors are joining to replace those departing. Bret Taylor, an ex-Facebook and ex-Salesforce executive, is coming on to the board as the new chairman. Larry Summers, the former US Treasury Secretary, is also joining the board.
So, what does it all mean and what have we learnt?
At the heart of the dispute was the conflict between the for-profit and not-for-profit sides of OpenAI, and in particular a disagreement between Altman and former board member Helen Toner over AI safety.
Toner represented the nonprofit side of OpenAI. She recently co-authored a research paper that was, in part, about AI safety. In it, she and her co-authors wrote that OpenAI’s rival, Anthropic, which was co-founded by a group of former OpenAI people, had built their product more safely than OpenAI had.
In Altman’s mind this was a betrayal and not helpful in the pursuit of commercial value. But for Toner, she was just doing her job, which is to make sure that AI gets built in the safest manner possible. Her job is not to protect the reputation and commercialisation of OpenAI.
By all accounts, this appears to be the schism that led to Altman’s firing a week ago. But in just five days, it was clear that the money had beaten safety: Altman was back and Toner was gone.
It remains to be seen if, or how, this will change the mission of OpenAI. It can be expected that Microsoft is going to have a bigger hand in the governance of OpenAI going forward. Up until now, they’ve been a kind of passive investor, but with so much at stake for Microsoft’s strategy in AI, it’s hard to see how they won’t take an active role in OpenAI now.
Here’s The Thing ➜ This is so reminiscent of when Frances Haugen, a former Facebook employee turned whistleblower, accused Facebook of “prioritising profits over public safety” in 2021. Big Tech just wasn’t doing “the right thing” then, and it feels like déjà vu all over again!
The bigger picture here is that in the war of words between the capitalists and the safetyists, those who say “push forward because the opportunities are too great”, versus the “slow down, we don’t know what we’ve unleashed here” brigade, the capitalists have won.
The people who are now in charge of the board of OpenAI are the kind of seasoned dealmakers and Silicon Valley insiders that you would expect to govern a for-profit technology company. They have replaced the academics and ideologues on the board who worried that AI could become too powerful and might need to be shut down.
Meanwhile, Anthropic have remained relatively silent, a smart move as the dust settles. Instead, they’ve released Claude 2.1 to retake top spot as the most powerful large language model. They’ve also been quick to market an “Anthropic is 50% safer than ChatGPT” tagline.
About The Author
Rick Huckstep has worked in technology his entire career as a corporate sales leader, an investor in tech startups, and a keynote speaker. From his home in Spain, Rick is a thought leader in artificial intelligence, emerging technologies and the future of work.