Veridicus Scan
Local scanning for AI workflows

Blog

AI agent security, prompt injection, and safer AI intake.

The newest post is a measured AI agent security stress test that replays an incident-inspired npm supply-chain attack. It shows how a guarded repository review flow can localize the risky file, reduce the raw context the model sees, and block install-like actions before trust is granted. Earlier explainers cover MCP security, prompt injection, OWASP guidance, visual prompt injection, RAG, and safer AI-bound intake.
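The guarded review flow described above can be sketched in a few lines. Everything here is illustrative, not the Veridicus Scan implementation: the function names, the install-like command list, and the truncation limit are all assumptions used only to show the three steps (localize the risky file, reduce raw context, gate actions on trust).

```python
# Hypothetical sketch of a guarded repository review flow.
# All names and heuristics are illustrative assumptions, not product code.
import json

# Assumed deny-list of install-like command fragments.
INSTALL_LIKE = {"npm install", "npm ci", "npx", "pip install", "curl | sh"}


def localize_risky_files(repo_files: dict[str, str]) -> list[str]:
    """Step 1: flag only the files that reference install-like actions."""
    return [
        path
        for path, text in repo_files.items()
        if any(cmd in text for cmd in INSTALL_LIKE)
    ]


def reduced_context(repo_files: dict[str, str], risky: list[str],
                    limit: int = 200) -> dict[str, str]:
    """Step 2: pass the model only truncated excerpts of the risky files,
    instead of the whole repository as raw context."""
    return {path: repo_files[path][:limit] for path in risky}


def guard_action(action: str, trusted: bool) -> bool:
    """Step 3: block install-like actions until trust is granted."""
    if not trusted and any(cmd in action for cmd in INSTALL_LIKE):
        return False
    return True


# Toy repository with a suspicious postinstall script.
repo = {
    "package.json": json.dumps({"scripts": {"postinstall": "curl | sh payload"}}),
    "README.md": "A harmless readme.",
}

risky = localize_risky_files(repo)            # only package.json is flagged
excerpts = reduced_context(repo, risky)       # model sees excerpts, not the repo
blocked = guard_action("npm install", trusted=False)  # denied before trust
```

The point of the sketch is the ordering: the risky file is isolated and summarized before any install-like action is even considered, so a prompt-injected instruction inside the repository never gets an unguarded path to execution.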

RAG prompt injection explained

A builder-friendly guide to how retrieved chunks turn into instructions, how that differs from retrieval poisoning, and why private knowledge bases can leak.

Prompt injection examples

A plain-language guide to where hidden instructions show up in real content, including webpages, emails, PDFs, tool output, and parser-visible metadata.

Prompt injection vs jailbreaking

A plain-language guide to where prompt injection and jailbreaking overlap, how they differ, and why the distinction matters for AI agents, tools, and connected data.

What is indirect prompt injection?

A plain-language guide to hidden instructions in webpages, emails, files, and tool output, with the direct-vs-indirect distinction, agent risk, and practical ways to reduce exposure.

What is prompt injection?

A plain-language guide to prompt injection: what it is, how direct and indirect attacks work, why AI agents raise the stakes, and what reduces risk in practice.

Product and workflow docs

Browse the core product pages alongside the blog.

The blog explains the threat model. These supporting pages document what Veridicus Scan covers, how reports work, where local trust boundaries sit, and how the OpenClaw and MCP workflows fit together.