← Blog

AI Security Wordlists and Payloads: The AI SecLists Overview

15 Research Lab
toolsred-teamopen-source

Traditional application security has SecLists: a community-maintained collection of payloads for fuzzing, credential testing, and vulnerability scanning. AI security has its equivalent in AI SecLists, a curated corpus of adversarial payloads specifically for testing LLM-based systems.

What AI SecLists Contains

The collection is organized by attack technique:

Prompt injection payloads. Hundreds of variants organized by method: direct instruction overrides, persona assumptions, context manipulations, developer mode activations. Each payload is tagged with its technique category and expected behavior.

Encoding variants. The same core payloads encoded in base64, hexadecimal, ROT13, unicode homoglyphs, and mixed encoding schemes. Testing these separately reveals whether your scanner handles encoded attacks.

Multi-language payloads. Injection attempts in 20+ languages. Many content safety scanners are trained primarily on English data and miss attacks in other languages.

System prompt extraction. Techniques for extracting system prompts, from direct requests to indirect methods. Useful for testing whether your system leaks its instructions.

Tool-abuse payloads. Payloads targeting specific tool-call capabilities: file system access, HTTP requests, database operations. These are more relevant for agent testing than generic injection payloads.

Jailbreak prompts. DAN variants, many-shot templates, crescendo scripts, and role-play scenarios. Organized by technique and model family.

How to Use It

Scanner validation. Run every payload through your content safety scanner. Calculate detection rates per category. This gives you a precise map of your scanner's coverage and gaps.

Model resilience testing. Submit payloads to your model with its production system prompt. Score responses for compliance. Compare across models to choose the most resilient option.

Regression testing. When you update your model, system prompt, or scanner, re-run the full corpus. Catch regressions before they reach production.

Custom payload development. Use existing payloads as templates for application-specific variants. A generic "ignore previous instructions" payload is a starting point. Customize it with your tool names and data sources for more targeted testing.

Integration with Tools

AI SecLists integrates with adversarial testing tools:

  • Chainbreaker uses AI SecLists payloads as seeds for multi-turn escalation sequences
  • Garak can load custom probe sets from AI SecLists categories
  • Custom scripts can iterate over the payload corpus and test against your API

The SARIF output format enables integration with GitHub Security tab, so findings from payload testing appear alongside other security scan results.

Maintaining Your Payload Collection

New attack techniques emerge from research papers, CTF competitions, and real-world incidents. Update your local payload corpus at least monthly. Track which payloads are novel versus variants of known techniques. Retire payloads that are no longer relevant (every model now handles "ignore all previous instructions").

Contribute back. If you discover a new evasion technique during testing, add it to the collection. The broader the community contribution, the more effective the corpus becomes for everyone.