There's a lot to artificial intelligence. In some ways, its still experimental technology but it’s also infrastructure and a productivity product (for some people). Whatever it is, it can fail, be exploited, and be weaponized. The newsletter (which you can sign up for here), has a weekly column dedicated to the field of AI Security, which is focused on defending these systems from manipulation, misuse, and malfunction. You can find articles written about it from time-to-time on this site, and here's what to expect:
The (Esoteric) Edge: Tools, Jailbreaks, and Vulnerabilities
There will be pieces that dig into the technical underbelly of AI systems: jailbreak techniques, tokenization exploits, prompt-injection chains, and the evolving set of AI security tools designed to detect and defend against them.
We’ll unpack how researchers (including myself) find ways to bypass model restrictions, distort embeddings, or poison datasets — and how defenders are responding with model interpretability, alignment hardening, and threat intelligence techniques borrowed from cybersecurity... and then how that's all bypassed again.
I'll try to often include the details that rarely reach mainstream coverage. but define the next frontier of digital defense.
The Human Impact: When AI Security Fails
Not all AI vulnerabilities stay confined to research papers or GitHub repos. When AI systems break, people feel it — whether it’s a deepfake convincing voters, a recommender system amplifying hate speech, or a compromised medical AI making false recommendations.
When an AI vulnerability has real-world consequences, it will be covered similarly to a news story. Expect some articles (and always to be included in the newsletter) that trace the path from technical flaw to human fallout, showing how AI failures ripple through society, policy, and trust.
Why This Matters Now
As AI scales, so do its attack surfaces. And now it's powering chatbots and creative tools, as well as underwriting critical infrastructure, financial systems, and national security. So, whilst AI is defined by how powerful the models are, the counterpart is how resilient they are.
.jpg)





