This repository provides a benchmark for prompt Injection attacks and defenses
-
Updated
Jul 16, 2025 - Python
This repository provides a benchmark for prompt Injection attacks and defenses
Manual Prompt Injection / Red Teaming Tool
LLM Security Platform.
LLM Security Project with Llama Guard
Client SDK to send LLM interactions to Vibranium Dome
LLM Security Platform Docs
Prompt Engineering Tool for AI Models with cli prompt or api usage
FRACTURED-SORRY-Bench: This repository contains the code and data for the creating an Automated Multi-shot Jailbreak framework, as described in our paper.
PITT is an open‑source, OWASP‑aligned LLM security scanner that detects prompt injection, data leakage, plugin abuse, and other AI‑specific vulnerabilities. Supports 90+ attack techniques, multiple LLM providers, YAML‑based rules, and generates detailed HTML/JSON reports for developers and security teams.
Add a description, image, and links to the prompt-injection-tool topic page so that developers can more easily learn about it.
To associate your repository with the prompt-injection-tool topic, visit your repo's landing page and select "manage topics."