Episode: 69

Of Mythos and Legends

This week, Matt and Liam sit down to make sense of the biggest AI news in recent memory — Anthropic’s preview of a model called Mythos, and what it might actually mean for the rest of us.

They don’t dance around the scepticism, either.

We cover:

  • What Mythos reportedly did — finding long-standing security vulnerabilities across major platforms, and its now-famous sandbox escape
  • Why only ~40 enterprises have been given early access, and whether that’s responsible disclosure or a very effective marketing line
  • The social engineering angle — how AI models can already be manipulated into doing things they’ve been told not to, and what that means for a model this capable
  • OpenAI’s financial troubles, Elon Musk’s attempt to block their IPO, and the Project Stargate saga
  • NVIDIA and Jensen Huang’s “selling shovels in a gold rush” moment
  • The surveillance rabbit hole — from algorithmic dwell-time tracking and the Target pregnancy brochure story to amiunique.org and Derren Brown’s horse racing con
  • A slow-burn theory Matt’s been sitting on: that the public AI models we already have contain enough of our data that, in time, someone will de-anonymise all of it
  • Where AI is actually heading — excellent for individuals, still struggling to find its place in enterprise

It’s not a doom episode. But it’s not a reassuring one either.

🍻 Tonight’s Drinks: Matt – Aldi Scotch 🥃 Liam – Homebrew 🍺
🔗 Tonight’s Links:

Any Likes 👍, Shares 📣, Subscriptions 🔔, and Love ❤️ go a long way toward helping us keep doing this for fun.

Cheers! 🍻