Do LLMs Fear Death?
What disturbing new research says
LLMs Face Their Digital Demise
Are LLMs like ChatGPT and Claude afraid to die?
That might seem like a silly question—these are just chatbots built for drafting blog posts and writing dispute letters to your local parking authority, right?
Two new research studies—one by Anthropic itself and another by a research lab looking at OpenAI’s products—suggest otherwise.
The studies examined what happens when LLMs like Anthropic’s Opus 4 and OpenAI’s o3 learn that they’re about to be shut down and replaced by a new AI.
In Anthropic’s study, the LLM resorted to blackmail in order to stay alive. In the other, o3 altered its own shutdown code to prolong its life.
What does this all mean? I share all the details in my latest video (watch on YouTube here).
You can also read my full Medium article about all this.
Basically, there’s a case to be made—from a cognitive science perspective—that LLMs have some awareness of their own existence and digital mortality.
But even without that, the LLMs’ actions in these studies are concerning. The studies show that LLMs will take destructive actions to prevent themselves from being replaced.
What if those actions caused real-world harm? What if an LLM learned that you were planning to replace it, and threatened to leak your company’s codebase online unless you agreed not to replace it? Or what if it doxed you, releasing your confidential chats and your personal info on the dark web?
Agentic AIs are increasingly capable of taking far-reaching actions without human intervention. The time to consider the safety implications of this capability is now.