ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
-
Updated
Mar 6, 2024 - Python
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
CTF challenges designed and implemented in machine learning applications
An interactive CLI application for interacting with authenticated Jupyter instances.
This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms." ASSET achieves state-of-the-art reliability in detecting poisoned samples in end-to-end supervised learning/ self-supervised learning/ transfer learning.
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
Securing LLM's Against Top 10 OWASP Large Language Model Vulnerabilities 2024
This repo contains reference implementations, tutorials, samples, and documentation for working with Bosch AIShield
LLM Security Project with Llama Guard
A collection list for Large Language Model (LLM) Watermark
Zero Trust AI 360
A Jailbroken GenAI Model Can Cause Real Harm: GenAI-powered Applications are Vulnerable to PromptWares
FIMjector is an exploit for OpenAI GPT models based on Fill-In-the-Middle (FIM) tokens.
AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer
ZySec AI: Empowering Security with AI for AI
list of resources for AI/ML/LLM security
Prompt Engineering Tool for AI Models with cli prompt or api usage
An intentionally vulnerable AI chatbot to learn and practice AI Security.
This research exploring [Research Idea in a few words]. This work [Specific benefit of research] holds promise for [Positive impact].
Bert models interpretation and security checker
Add a description, image, and links to the aisecurity topic page so that developers can more easily learn about it.
To associate your repository with the aisecurity topic, visit your repo's landing page and select "manage topics."