This week, Matt and Liam sit down to make sense of the biggest AI news in recent memory — Anthropic’s preview of a model called Mythos, and what it might actually mean for the rest of us.
They don’t dance around the scepticism, either.
We cover:
- What Mythos reportedly did — finding long-standing security vulnerabilities across major platforms, and its now-famous sandbox escape
- Why only ~40 enterprises have been given early access, and whether that’s responsible disclosure or a very effective marketing line
- The social engineering angle — how AI models can already be manipulated into doing things they’ve been told not to, and what that means for a model this capable
- OpenAI’s financial troubles, Elon Musk’s attempt to block their IPO, and the Project Stargate saga
- NVIDIA and Jensen Huang’s “selling shovels in a gold rush” moment
- The surveillance rabbit hole — from algorithmic dwell-time tracking and the Target pregnancy brochure story to amiunique.org and Derren Brown’s horse racing con
- A slow-burn theory Matt’s been sitting on: that the public AI models we already have contain enough of our data that, in time, someone will de-anonymise all of it
- Where AI is actually heading — excellent for individuals, still struggling to find its place in enterprise
It’s not a doom episode. But it’s not a reassuring one either.
🍻 Tonight’s Drinks: Matt – Aldi Scotch 🥃 Liam – Homebrew 🍺
🔗 Tonight’s Links:
- Veritasium — The XZ Backdoor (SSH Supply Chain Attack) — the video Matt mentions on the xz/liblzma supply chain attack
- Am I Unique? — browser fingerprinting tool; see just how identifiable you are, even without cookies
- Louis Rossmann’s channel — right-to-repair advocate, and source of the DRM social engineering story
Any Likes 👍, Shares 📣, Subscriptions 🔔, and Love ❤️ go a long way to helping us keep doing this for fun.
Cheers! 🍻