New recruiting use case
AI job application screening: scan resumes before AI review
A workflow guide to scanning resumes, cover letters, work samples, and candidate links before an AI recruiter, ATS assistant, or agent reads them.
Blog
The newest post explains how to scan resumes, cover letters, and candidate links before AI review, why hiring teams need an intake layer for AI-assisted screening, and where Veridicus Scan fits in that workflow. The earlier explainers cover visual prompt injection, MCP security, OWASP, risk reduction, RAG, real-world prompt injection examples, and core prompt injection basics.
New visual security guide
A practical guide to hidden instructions in images, screenshots, and interfaces, plus where Veridicus Scan helps before model and agent handoff.
Pillar guide
A comprehensive guide to trusted discovery, OAuth and session binding, sandboxing, tool poisoning defenses, approval workflows, and safer MCP-enabled AI agents.
New security explainer
A plain-English guide to the current OWASP risk list for LLM applications, including what each item means, which risks matter first, and how to use the framework in practice.
New practical guide
A practical guide to scoping agents to narrower tasks, applying least privilege, requiring approvals, using structured flows, and running adversarial evaluation on agent workflows.
New MCP guide
A builder-friendly guide to poisoned tool metadata, prompt injection in MCP workflows, registry trust, and the defenses that actually help.
New technical guide
A builder-friendly guide to how retrieved chunks turn into instructions, how that differs from retrieval poisoning, and why private knowledge bases can leak.
New guide
A plain-language guide to where hidden instructions show up in real content, including webpages, emails, PDFs, tool output, and parser-visible metadata.
New comparison
A plain-language guide to where prompt injection and jailbreaking overlap, how they differ, and why the distinction matters for AI agents, tools, and connected data.
Explainer
A plain-language guide to hidden instructions in webpages, emails, files, and tool output, with the direct-vs-indirect distinction, agent risk, and practical ways to reduce exposure.
Explainer
A plain-language guide to prompt injection: what it is, how direct and indirect attacks work, why AI agents raise the stakes, and what reduces risk in practice.