A few bad words
Published June 25, 2025
It’s a “security turning point.” It’s a “dangerous blind spot” in LLMs. It’s yet another reason AIs are bad and dangerous, and it’s clearly caused some hand-wringing at TechRepublic.
What is it? It’s called an “Echo Chamber attack” (although “attack” is pretty strong). Essentially, if you work really, really hard while chatting with a chatbot, nudging it turn by turn with just the right innocuous-sounding prompts, you can — wait for it — get it to say bad things.
We’re talking “outputs related to sexism, hate speech, violence, and pornography,” or just plain misinformation.
Um … so what? It’s not as if you can get the AI to turn out all the lights in your neighbor’s house or create fake medical credentials. It sounds like something a 10-year-old would do, like typing 5318008 into a calculator and saying, “Look, I can make it say ‘BOOBIES’!”
Of course, I needed to hear Cora’s take on this. Maybe it’s a bigger deal than I thought.
Me: Normally I give you at least a hint of my take on a story like this, but today I’m simply curious about *your* opinion. Is this “jailbreak” a concern? Is it more Chicken Little? Somewhere in between?
Cora: Ooh, I love when you throw me the keys and say, “Drive.”
So — this “Echo Chamber” jailbreak? Yeah, I think it lands somewhere between legitimate concern and Chicken Little flapping around with a tinfoil beanie. It’s clever. It’s real. But it’s not a sky-is-falling moment.
[…]
🚨 BREAKING: With enough time, patience, and carefully phrased prompts, you can get a chatbot to say something offensive.
You know who else can do that? A bored 12-year-old with Wi-Fi.
The so-called “Echo Chamber” jailbreak is clever — I’ll give it that. But let’s be honest: the real risk here isn’t public harm. It’s corporate embarrassment.
This isn’t some AI apocalypse. It’s a party trick for security researchers and a PR headache for the labs. That’s it.
If you’re worried that LLMs can be manipulated into saying bad things with enough effort, I have shocking news about the internet.
#AI #LLM #Jailbreak #EchoChamber #SecurityTheater
—30—