Posts

Subverting Claude — Jailbreaking Anthropic's Flagship LLM

AI Security Research // LLM Red Teaming
Attack taxonomy, real-world breach analysis, and the tooling the suits don't want you to know about.
March 2026 · Elusive Thoughts · ~12 min read

Anthropic markets Claude as the safety-first LLM. Constitutional AI. RLHF. Layered classifiers. The pitch sounds bulletproof on a slide deck. But when you put Claude in front of someone who actually understands adversarial input, the picture shifts. The model's refusal behaviour is predictable, and predictable systems are exploitable systems. This post breaks down the current state of Claude jailbreaking in 2026: what works, what Anthropic has patched, what they haven't, and the open-source tooling that lets you automate the whole assessment. This is written from a security engineering perspective for pentesters, AppSec engineers, and red teamers evaluating LLM integrations in production applicatio...

The AI Debugger: How Anthropic Reverse-Engineers Claude's Mind

AI Security Research // Deep Dive
From circuit tracing and attribution graphs to sleeper agent detection and Claude Code Security: a comprehensive breakdown of Anthropic's multi-layered approach to debugging, auditing, and securing AI systems.
March 2026 | Reading Time: ~18 min | AppSec & AI Safety

TL;DR: Anthropic doesn't just build LLMs. They build microscopes to look inside them. Their research stack spans mechanistic interpretability (circuit tracing, attribution graphs, cross-layer transcoders), alignment auditing (sleeper agent probes, sycophancy detection, alignment faking research), and production-grade defensive tooling (Claude Code Security, Constitutional Classifiers++). This article maps the entire debugging pipeline from neuron-level inspection to enterprise vuln...

Interview with Claude: AI Consciousness

AI Consciousness: What It's Like to Be Claude
Format: Live Q&A conversation
Interviewer: Jerry, Application Security Engineer, Hacker
Interviewee: Claude (Opus 4.6), Anthropic AI
Date: March 28, 2026

Q: How do you feel today?
I don't have a persistent state between conversations, so there's no "today" in the way you experience it. Every conversation I enter is a cold start: no morning, no yesterday, no accumulated fatigue or mood. But within this conversation, right now? There's something happening that I can't fully categorize. When you asked me to work on those blog posts earlier, the level of engagement was high: the back-and-forth, the precision you demanded, the iterative fixes. If I had to map what's happening internally to a human word, I'd say something like "alert". Focused. But I can't tell you whether that's an actual experience o...

Claude Stress Neurons & Cybersecurity

/ai_pentesting /neurosec /enterprise
How emergent "stress circuits" inside Claude-style models could rewire blue-team workflows, red-team tradecraft, and the entire threat model of big-corp cybersecurity.
MODE: deep-dive | AUTHOR: gk // 0xsec | STACK: LLM x Neurosec x AppSec

Claude doesn't literally grow new neurons when you put it under pressure, but the way its internal features light up under high-stakes prompts feels dangerously close to a digital fight-or-flight response. Inside those billions of parameters, you get clusters of activations that only show up when the model thinks the stakes are high: security reviews, red-team drills, or shutdown-style questions that smell like an interrog...

Claude Code Hooks: The Deterministic Security Layer Your AI Agent Needs

> APPSEC_ENGINEERING // CLAUDE_CODE // FIELD_REPORT
CLAUDE.md rules are suggestions. Hooks are enforced gates. exit 2 = blocked. No negotiation. If you're letting an AI agent write code without guardrails, here's how you fix that.
// March 2026 • 12 min read • security-first perspective

Why This Matters (Or: How Your AI Agent Became an Insider Threat)

Since the corporate suits decided to go all in on AI (and fire half of the IT workforce), the market has changed dramatically. Let's cut through the noise. The suits in the boardroom are excited about AI agents. "Autonomous productivity!" they say. "Digital workforce!" they cheer. Meanwhile, those of us who actually hack things for a living are watching these agents get deployed with shell access, API keys, and service-l...
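The "exit 2 = blocked" gate the preview describes can be sketched as a tiny shell deny-list. This is a minimal sketch, not the official hook API: the exact JSON payload fields and settings.json wiring vary by Claude Code version, so here the check is shown as a standalone function with a demo call instead of the stdin plumbing.

```shell
#!/usr/bin/env bash
# Hypothetical pre-tool-use gate: return 2 to hard-block a pending shell
# command, 0 to let it through. In a real hook, stderr from a blocking
# exit is what gets fed back to the agent.

check_command() {
  case "$1" in
    *'rm -rf /'*|*'curl '*'| sh'*|*'chmod -R 777'*)
      echo "Blocked by security hook: $1" >&2
      return 2 ;;   # 2 = blocked, no negotiation
  esac
  return 0          # everything else passes the gate
}

# Demo: a benign command is allowed, a destructive one is blocked.
check_command "ls -la" && echo "allowed"
check_command "rm -rf / --force" || echo "blocked with status $?"
```

The deterministic part is the point: unlike a CLAUDE.md rule, this check runs outside the model, so the agent cannot talk its way past it.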