What was Meta thinking with the Galactica launch?
Every week, 15k subscribers get Wiser! — the tech economy newsletter for anyone looking to better understand what’s happening today for clues about what’s coming tomorrow. Join the community and be Wiser!
A couple of weeks ago, I posted a story in Wiser! about Meta’s new AI science chatbot. Built in conjunction with Papers with Code, it was called Galactica to signify the sheer scale of the massive language model it was built on.
The knowledge, wisdom, and intellect of Galactica were built on 48 million published scientific papers. Officially, it was described as “a large language model that can store, combine and reason about scientific knowledge.”
The Meta team behind Galactica said their language models would be better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” said the researchers.
It was meant to be a super-duper monster of a memory bank that could generate science papers, write wiki articles, create scientific code, and never need to stop for a cup of tea and a digestive.
This was clearly a bank of intellectual capability that was unachievable on a human scale.
Sounds great, right?
Trouble is, Galactica was as susceptible to human bias, prejudice and bigotry as, well, we humans are.
It was proudly launched on November 15th. By the 17th, Galactica had been shut down.
The trouble started within hours of the public launch. Users complained that the answers Galactica threw out were garbage, and some of them were offensive: homophobic, antisemitic, misogynistic, and worse.