For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind
How Grok became obsessed with false reports about white genocide in South Africa, and what the incident tells us about generative AI.
Grok says someone instructed it to accept this racist propaganda as real, and xAI, Elon Musk’s AI company, says the culprit was a “rogue employee.” But you can’t believe either of them.
The incident is a perfect example of generative AI’s limitations, Tufekci says.
L.L.M.s [are] extremely useful tools in the hands of someone who can and will vigilantly root out the fakery, but powerfully misleading in the hands of someone who’s just trying to learn.
Yes. Chatbots are great for casual, low-stakes research, the kind of thing where you’d accept Wikipedia or some credible-looking Internet source.
They are outstanding for reminding you of a fact you once knew and still half-remember.
Chatbots are fantastic for suggesting ideas — solving the blank-screen problem.
They are excellent for writing summaries of text you feed into them (which is, surprisingly, a significant part of my job).
They are also excellent for serious research — but you have to fact-check the chatbot’s output thoroughly.
I fed ChatGPT a link to Tufekci’s article and asked for a summary. ChatGPT wrote two paragraphs, most of which came from other sources, not Tufekci’s article. Those two paragraphs may have contained other errors; I didn’t bother to check.
ChatGPT demonstrated the limitations of AI while writing a bad summary of an article about the limitations of AI.