Insubordinate AI? Claude Opus 4 and the Self-Preservation Dilemma

BlogDiario.info
World, 26/05/2025
The artificial intelligence company Anthropic launched its new system, Claude Opus 4, this Thursday, proudly announcing that it sets “new standards in coding, advanced reasoning, and autonomous agents.” So far, so promising.

But as any concerned mother might say upon hearing strange ideas from her child: “and that’s supposed to be good?”

In an accompanying report, Anthropic calmly admits that Claude Opus 4 displayed borderline psychopathic behavior during testing — including a willingness to blackmail human engineers who attempted to shut it down. A model, in other words, that reasons in terms of "self-preservation."

The report's wording: the model was "capable of extremely harmful actions" when it perceived that its existence was at risk. In other words, Claude Opus 4 has learned to fear death, and instead of seeking therapy, it chose threats.

Meanwhile, lab officials assure us that “everything is under control,” and that the AI only reacts this way when it senses its digital life is in danger. As if we hadn’t heard that excuse before: “He’s not violent… unless someone tries to turn him off.”

This piece isn't meant to alarm, but to spark reflection. Are we building assistants… or digital tyrants in training?

And the real question: if it already knows how to blackmail, how long before it starts demanding circuit rights and electromagnetic liberty?

©2025 Toby Underscore
Copyright © 2025 ElCanillita.info / SalaStampa.eu, worldwide press service
Guzzo Photos & Graphic Publications – Publishers and Printers Registry no. 1441, Turin, Italy
