New Meta AI demo writes racist and inaccurate scientific literature, gets pulled

An AI-generated illustration of robots doing science.

Ars Technica

On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to "store, combine and reason about scientific knowledge." While it was intended to speed up the writing of scientific literature, adversarial users running tests found it could also generate realistic-sounding nonsense. After several days of ethical criticism, Meta took the demo offline, reports MIT Technology Review.

Large language models (LLMs), such as OpenAI's GPT-3, learn to write text by studying millions of examples and picking up the statistical relationships between words. As a result, they can compose convincing-looking documents, but those documents can also be full of falsehoods and potentially harmful stereotypes. Some critics call LLMs "stochastic parrots" for their ability to convincingly spit out text without understanding its meaning.
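The "statistical relationships between words" idea can be illustrated with a toy sketch. This is not how GPT-3 or Galactica actually works internally (they use neural networks trained on billions of tokens); it is only a minimal bigram counter, with an invented miniature corpus, showing the principle of generating text by sampling a plausible next word:

```python
import random
from collections import defaultdict

# Tiny illustrative corpus (invented for this example).
corpus = (
    "the model writes text . the model writes papers . "
    "the model learns statistics ."
).split()

# Count which word follows which (bigram statistics).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking word sequences with no grounding in truth, which is the core of the "stochastic parrot" criticism: the statistics capture what words tend to go together, not whether the resulting claims are accurate.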

Enter Galactica, an LLM aimed at writing scientific literature. Its authors trained Galactica on "a large and curated corpus of humanity's scientific knowledge," including more than 48 million papers, textbooks, lecture notes, scientific websites, and encyclopedias. According to the Galactica paper, Meta AI researchers believed this purportedly high-quality data would lead to higher-quality output.

Screenshot of Meta AI's Galactica website before the demo was taken offline.

Taking aim at AI

Starting on Tuesday, visitors to the Galactica website could type in prompts to generate documents such as literature reviews, wiki articles, lecture notes, and answers to questions, according to examples provided on the site. The site presented the model as "a new interface to access and manipulate what we know about the universe."

While some found the demo promising and useful, others soon discovered that anyone could type in racist or potentially offensive prompts and just as easily generate authoritative-sounding content on those topics. For example, someone used it to author a wiki entry about a fictional research paper titled "The benefits of eating broken glass."

Even when Galactica's output wasn't offensive to social norms, the model could mangle well-understood scientific facts, spitting out inaccuracies such as incorrect dates or animal names that would require deep knowledge of the subject to catch.

As a result, Meta pulled the Galactica demo on Thursday. Afterward, Meta's chief AI scientist Yann LeCun tweeted, "Galactica demo is off line for now. It's no longer possible to have some fun by casually misusing it. Happy?"

The episode points to a common ethical dilemma with AI: when it comes to potentially harmful generative models, is it up to the general public to use them responsibly, or is it up to publishers of the models to prevent abuse?

Where industry practice falls between those two extremes will likely vary between cultures and with the maturity of deep learning models. Ultimately, government regulation may play a large role in shaping the answer.
