Article
October 15, 2025

AI Security: What to Expect On Here

AI Security: What to Expect On Here

There's a lot to artificial intelligence. In some ways, its still experimental technology but it’s also infrastructure and a productivity product (for some people). Whatever it is, it can fail, be exploited, and be weaponized. The newsletter (which you can sign up for here), has a weekly column dedicated to the field of AI Security, which is focused on defending these systems from manipulation, misuse, and malfunction. You can find articles written about it from time-to-time on this site, and here's what to expect: 

The (Esoteric) Edge: Tools, Jailbreaks, and Vulnerabilities

There will be pieces that dig into the technical underbelly of AI systems: jailbreak techniques, tokenization exploits, prompt-injection chains, and the evolving set of AI security tools designed to detect and defend against them.

We’ll unpack how researchers (including myself) find ways to bypass model restrictions, distort embeddings, or poison datasets — and how defenders are responding with model interpretability, alignment hardening, and threat intelligence techniques borrowed from cybersecurity... and then how that's all bypassed again.

I'll try to often include the details that rarely reach mainstream coverage. but define the next frontier of digital defense.

The Human Impact: When AI Security Fails

Not all AI vulnerabilities stay confined to research papers or GitHub repos. When AI systems break, people feel it — whether it’s a deepfake convincing voters, a recommender system amplifying hate speech, or a compromised medical AI making false recommendations.

When an AI vulnerability has real-world consequences, it will be covered similarly to a news story. Expect some articles (and always to be included in the newsletter) that trace the path from technical flaw to human fallout, showing how AI failures ripple through society, policy, and trust.

Why This Matters Now

As AI scales, so do its attack surfaces. And now it's powering chatbots and creative tools, as well as underwriting critical infrastructure, financial systems, and national security. So, whilst AI is defined by how powerful the models are, the counterpart is how resilient they are.

Other blog posts

Start a conversation


What’s real in AI and AI security: what’s happening, what matters, and why does it matter?

I’ve spent my career exploring how technology, infrastructure, and human behavior intersect across cybersecurity, subsea systems, and more recently AI. I’ve worked in offensive security, engineering, and now lead Subsea Cloud, where we build sustainable, high-performance data centers beneath the sea.

I write and speak about the edges of technology: how we secure them, scale them, and sometimes subvert them. My work has been featured in conferences and publications across the U.S. and Europe, and I’ve presented to organizations including Amazon, NASA, Linkedin, U.S. federal agencies, the United Nations and the UK government and at conferences across the world including South by Southwest, Underwater Defense Technology, OODA Con, DEFCON, PTC, DataCloud Global Congress and BlackHat.

Say Hello

You can find more on LinkedIn or reach out directly.

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.