(quote)
Meta Trained an AI on 48M Science Papers. It Was Shut Down After 2 Days
Galactica was supposed to help “organize science.” Instead, it spewed misinformation.
In the first year of the pandemic, science happened at light speed. More than 100,000 papers were published on COVID in those first 12 months — an unprecedented human effort that produced an unprecedented deluge of new information. It would have been impossible to read and comprehend every one of those studies. No human being could (and, perhaps, none would want to).
But, in theory, Galactica could.
Galactica is an artificial intelligence developed by Meta AI (formerly known as Facebook Artificial Intelligence Research) with the intention of using machine learning to “organize science.” It has caused a bit of a stir since a demo version was released online, with critics suggesting it produced pseudoscience, was overhyped, and was not ready for public use.
Meta AI released a demo version Nov. 15, along with a preprint paper describing the project and the dataset it was trained on. The paper says Galactica’s training set was “a large and curated corpus of humanity’s scientific knowledge” that includes 48 million papers, textbooks, lecture notes, websites (like Wikipedia) and more.
Galactica struggled to perform kindergarten math. It provided error-riddled answers, incorrectly suggesting that one plus two doesn’t equal three. It generated lecture notes on bone biology that would certainly have seen me fail my college science degree had I followed them, and many of the references and citations it used when generating content were seemingly fabricated.
Carl Bergstrom, a professor of biology at the University of Washington who studies how information flows, described Galactica as a “random bullshit generator.” It doesn’t have a motive and doesn’t actively try to produce bullshit, but because of the way it was trained to recognize words and string them together, it produces information that sounds authoritative and convincing — but is often incorrect.
That’s a concern, because authoritative-sounding but incorrect output could fool humans, even with a disclaimer attached.
Within 48 hours of release, the Meta AI team “paused” the demo. It’s worrying to see the demo released and described as a way to “explore the literature, ask scientific questions, write scientific code, and much more” when it failed to live up to that hype.
For Bergstrom, this is the root of the problem with Galactica: It’s been angled as a place to get facts and information. Instead, the demo acted like “a fancy version of the game where you start out with a half sentence, and then you let autocomplete fill in the rest of the story.”
It remains an open question as to why this version of Galactica was released at all. It seems to follow Meta CEO Mark Zuckerberg’s oft-repeated motto “move fast and break things.” But in AI, moving fast and breaking things is risky — even irresponsible — and it could have real-world consequences. Galactica provides a neat case study in how things might go awry.
Why Meta’s latest large language model survived only three days online
Galactica was supposed to help scientists. Instead, it mindlessly spat out nonsense.
On November 15, Meta unveiled a new large language model called Galactica, designed to assist scientists. But instead of landing with the big bang Meta hoped for, Galactica died with a whimper after three days of intense criticism. The company took down the public demo that it had encouraged everyone to try out.
Galactica is a large language model for science, trained on 48 million examples of scientific articles, websites, textbooks, lecture notes, and encyclopedias. Meta promoted its model as a shortcut for researchers and students. In the company’s words, Galactica “can summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” But the shiny veneer wore through fast. Like all language models, Galactica is a mindless bot that cannot tell fact from fiction. Within hours, scientists were sharing its biased and incorrect results on social media.
“I am both astounded and unsurprised by this new effort,” says Chirag Shah at the University of Washington, who studies search technologies. “When it comes to demoing these things, they look so fantastic, magical, and intelligent. But people still don’t seem to grasp that in principle such things can’t work the way we hype them up to.” Language models, he adds, “are not really knowledgeable beyond their ability to capture patterns of strings of words and spit them out in a probabilistic manner. It gives a false sense of intelligence.”
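Shah’s point about probabilistic pattern-matching can be illustrated with a deliberately tiny sketch. The Python below is a hypothetical toy bigram model (a stand-in for illustration only, nothing like Galactica’s actual 120-billion-parameter transformer): it strings words together purely from learned co-occurrence statistics, so a fabrication can come out exactly as fluently as a fact.

```python
import random

# Hypothetical next-word statistics, standing in for the word patterns a
# language model extracts from its training corpus.
bigram_probs = {
    "the":     {"speed": 0.5, "history": 0.5},
    "speed":   {"of": 1.0},
    "history": {"of": 1.0},
    "of":      {"light": 0.5, "bears": 0.5},
    "light":   {"in": 1.0},
    "bears":   {"in": 1.0},
    "in":      {"space": 1.0},
}

def generate(word, max_words=8):
    out = [word]
    while out[-1] in bigram_probs and len(out) < max_words:
        nxt = bigram_probs[out[-1]]
        # Pick the next word in proportion to its learned frequency; no step
        # here ever checks whether the resulting claim is true.
        out.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the history of bears in space"
```

The sampler is equally happy to produce “the speed of light in space” or “the history of bears in space”; fluency and truth are simply different properties.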
A fundamental problem with Galactica is that it is not able to distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. People found that it made up fake papers (sometimes attributing them to real authors), and generated wiki articles about the history of bears in space as readily as ones about protein complexes and the speed of light. It’s easy to spot fiction when it involves space bears, but harder with a subject users may not know much about.
Why Meta Took Down Its ‘Hallucinating’ AI Model Galactica
Meta AI and Papers with Code announced the release of Galactica, an open-source large language model with 120 billion parameters, trained on scientific knowledge. However, just days after its launch, Meta took Galactica down. Interestingly, every result generated by Galactica came with the warning: “Outputs may be unreliable. Language Models are prone to hallucinate text.”
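Because the weights were released openly, the behavior behind that warning is easy to probe firsthand. Below is a minimal sketch, assuming the published checkpoints remain hosted on the Hugging Face Hub under names like facebook/galactica-125m (the smallest of the released sizes):

```python
# Minimal sketch: querying a small Galactica checkpoint via Hugging Face
# transformers. Assumes the open-sourced weights are still available on the
# Hub as "facebook/galactica-125m"; requires `pip install transformers torch`.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

prompt = "The Transformer architecture was introduced in"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=30)

# The continuation reads fluently, but nothing guarantees the paper or fact
# it names is real -- the "prone to hallucinate" failure mode in miniature.
print(tokenizer.decode(outputs[0]))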
“I asked Galactica about some things I know about and I’m troubled. In all cases, it was wrong or biased but sounded right and authoritative. I think it’s dangerous,” said Michael Black, director at the Max Planck Institute for Intelligent Systems.
(unquote)
2022-11-28