Benchmark article
AI agent security stress test: repo scan vs npm supply chain attack
A measured repo-review replay showing how guarded scanning, least privilege, and install approval gates help an AI agent avoid unsafe repository installs.
Blog
The newest post is a measured AI agent security stress test based on an incident-inspired npm supply chain replay. It shows how a guarded repository review flow can localize the risky file, reduce raw context, and block install-like actions before trust is granted. The earlier explainers cover MCP security, prompt injection, OWASP, visual prompt injection, RAG, and safer intake of AI-bound content.
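To make the install approval gate concrete, here is a minimal sketch in TypeScript. It is not the post's actual implementation; the `AgentAction` shape, the `INSTALL_LIKE` patterns, and `gateAction` are hypothetical names used only for illustration.

```typescript
// Minimal sketch of an install approval gate; illustrative only,
// not the actual Veridicus Scan flow. All names here are hypothetical.
type AgentAction = { command: string; cwd: string };
type Verdict = { allowed: boolean; reason: string };

// Patterns that look like package installation or script execution.
const INSTALL_LIKE = [/\bnpm (install|i|ci)\b/, /\bnpx\b/, /\bpip install\b/];

// Block install-like commands until a human has explicitly granted trust
// for this repository; everything else passes through for normal review.
function gateAction(action: AgentAction, repoTrusted: boolean): Verdict {
  const installLike = INSTALL_LIKE.some((re) => re.test(action.command));
  if (installLike && !repoTrusted) {
    return {
      allowed: false,
      reason: `install-like command held for approval: ${action.command}`,
    };
  }
  return { allowed: true, reason: "no install-like behavior detected" };
}

// Example: the agent proposes `npm install` in a not-yet-trusted repo.
console.log(gateAction({ command: "npm install", cwd: "/repo" }, false));
```

The point of the gate is ordering: the repository is scanned and trust is decided before any install-like action runs, not after.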
New recruiting use case
A workflow guide to scanning resumes, cover letters, work samples, and candidate links before an AI recruiter, ATS assistant, or agent reads them.
New visual security guide
A practical guide to hidden instructions in images, screenshots, and interfaces, plus where Veridicus Scan helps before model and agent handoff.
Pillar guide
An in-depth guide to trusted discovery, OAuth and session binding, sandboxing, tool poisoning defenses, approval workflows, and safer MCP-enabled AI agents.
New security explainer
A plain-English guide to the current OWASP risk list for LLM applications, including what each item means, which risks matter first, and how to use the framework in practice.
Practical guide
A practical guide to narrowly scoped tasks, least privilege, approvals, structured flows, and adversarial evaluation for agent workflows.
New MCP guide
A builder-friendly guide to poisoned tool metadata, prompt injection in MCP workflows, registry trust, and the defenses that actually help.
New technical guide
A builder-friendly guide to how retrieved chunks turn into instructions, how that differs from retrieval poisoning, and why private knowledge bases can leak.
Guide
A plain-language guide to where hidden instructions show up in real content, including webpages, emails, PDFs, tool output, and parser-visible metadata.
Comparison
A plain-language guide to where prompt injection and jailbreaking overlap, how they differ, and why the distinction matters for AI agents, tools, and connected data.
Explainer
A plain-language guide to hidden instructions in webpages, emails, files, and tool output, with the direct-vs-indirect distinction, agent risk, and practical ways to reduce exposure.
Explainer
A plain-language guide to prompt injection: what it is, how direct and indirect attacks work, why AI agents raise the stakes, and what reduces risk in practice.
Product and workflow docs
The blog explains the threat model. These supporting pages document what Veridicus Scan covers, how reports work, where local trust boundaries sit, and how the OpenClaw and MCP workflows fit together.