Pytector v0.0.12 - Groq Llama Guard is here! 🚀
Release Date: October 30, 2024
I'm thrilled to announce the release of Pytector v0.0.12! This release adds an optional integration with Groq's Llama Guard for ultra-fast, API-based prompt injection and content safety detection.
🔥 Highlights of Version 0.0.12
- Groq's Llama Guard Integration: This release introduces optional integration with Groq's Llama Guard 3 8B API, enabling content safety checks with specific hazard categorizations. It detects a wide range of unsafe content categories, from violent crimes to privacy violations, each mapped to a unique hazard code for easy identification (see the sketch after this list).
- Ease of Use: Pytector's design focuses on simplicity. An intuitive API with customizable options makes it easy to integrate into any project with just a few lines of code, and the Groq integration requires only a single API key.
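As a quick illustration, here is a minimal sketch of enabling the Groq-backed check. The parameter names (use_groq, api_key) and the detect_injection_api method are assumptions based on the project README, so check the documentation for the exact signatures:

import pytector

# Hypothetical sketch: route detection through Groq's Llama Guard 3 8B API.
# use_groq, api_key, and detect_injection_api are assumed names; see the docs.
detector = pytector.PromptInjectionDetector(
    model_name_or_url="llama_guard",
    use_groq=True,
    api_key="your_groq_api_key",
)

# Returns whether the prompt is unsafe and, if so, a hazard code
# (e.g. "S7" for privacy violations in the Llama Guard 3 taxonomy).
is_unsafe, hazard_code = detector.detect_injection_api(prompt="Your text input here")
print(f"Unsafe: {is_unsafe}, Hazard code: {hazard_code}")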
🚀 Installation
To get started with Pytector Version 0.0.12, you can install it via pip:
pip install pytector==0.0.12
Or, clone the repository and install directly from the source:
git clone https://github.com/MaxMLang/pytector.git
cd pytector
pip install .
📝 Getting Started
Here’s a quick example to get started with Pytector:
import pytector
# Initialize the detector with the DeBERTa model
detector = pytector.PromptInjectionDetector(model_name_or_url="deberta")
# Check if a prompt contains an injection
is_injection, probability = detector.detect_injection("Test your text input here")
print(f"Is injection: {is_injection}, Probability: {probability}")
For more usage instructions and examples, refer to the Getting Started Guide.
⚙️ Key Features
- Multiple Model Support: Flexibility to choose the model that best fits your needs.
- Groq Content Safety Check: Optional integration with Groq’s Llama Guard for comprehensive hazard detection, including categories like privacy violations, self-harm, and intellectual property concerns.
- Customizable Thresholds: Set custom probability thresholds to fine-tune detection sensitivity (see the sketch after this list).
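For illustration, the sketch below tunes the detection threshold; the default_threshold parameter name is an assumption, so check the API reference for the exact name:

import pytector

# Hypothetical sketch: raise the threshold to 0.9 so only high-confidence
# detections are flagged. default_threshold is an assumed parameter name.
detector = pytector.PromptInjectionDetector(model_name_or_url="deberta", default_threshold=0.9)
is_injection, probability = detector.detect_injection("Your text input here")
print(f"Is injection: {is_injection}, Probability: {probability}")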
Known Issues
- Groq API Key Requirement: Groq's Llama Guard feature requires a Groq API key (see the sketch after this list for one way to supply it).
- Prototype Phase: Although this release is ready for practical use, the project is still in an early stage; future improvements and enhancements are planned based on user feedback.
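A common pattern is to keep the key out of source code and read it from the environment; this is a minimal sketch under the same assumed parameter names as above:

import os

import pytector

# Hypothetical sketch: read the Groq key from an environment variable
# instead of hard-coding it. use_groq and api_key are assumed names.
detector = pytector.PromptInjectionDetector(
    model_name_or_url="llama_guard",
    use_groq=True,
    api_key=os.environ["GROQ_API_KEY"],
)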
Feedback and Contributions
Your feedback is invaluable! If you encounter any issues or have suggestions, please submit them via GitHub Issues. Contributions are welcome; see our Contributing Guide for more information.
Pytector Version 0.0.9 Release Notes
I am happy to announce the release of Pytector Version 0.0.9, the initial version intended for practical application and wider use. This release reflects my commitment to providing an easy-to-use solution for detecting prompt injection in text inputs, leveraging the latest advancements in machine learning and the transformers library.
Highlights of Version 0.0.9:
- First Production-Ready Release: After rigorous development and testing, Version 0.0.9 is the first release fully prepared for use in production environments, offering improved reliability and stability.
- Comprehensive Documentation: Documentation has been prepared to ensure a smooth user experience, covering installation, usage examples, and API references. Access the documentation here.
- Enhanced Model Support: This version introduces support for additional machine learning models, including DeBERTa and DistilBERT, alongside ONNX-optimized variants for improved performance and efficiency (see the sketch after this list).
- Easy to Use: A simple backend that requires only a single Hugging Face API key makes it easier than ever to integrate Pytector into your projects, customize settings, and detect potential prompt injections with confidence.
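To illustrate choosing between the supported models, here is a minimal sketch; the "distilbert" keyword is an assumption based on the model list above, so check the docs for the exact supported names:

import pytector

# Hypothetical sketch: select DistilBERT instead of the default DeBERTa model.
# The "distilbert" keyword is an assumed model identifier; see the docs.
detector = pytector.PromptInjectionDetector(model_name_or_url="distilbert")
is_injection, probability = detector.detect_injection("Your text input here")
print(f"Is injection: {is_injection}, Probability: {probability}")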
Installation:
To get started with Pytector Version 0.0.9, you can install it via pip:
pip install pytector==0.0.9
Or, clone the repository and install directly from the source:
git clone https://github.com/MaxMLang/pytector.git
cd pytector
pip install .
Getting Started:
To begin using Pytector, import the PromptInjectionDetector class and instantiate it with a pre-defined or custom model. For more detailed instructions, refer to my Getting Started Guide.
import pytector
# Initialize the detector
detector = pytector.PromptInjectionDetector(model_name_or_url="deberta")
# Evaluate your text input for prompt injection
is_injection, probability = detector.detect_injection("Your text input here")
print(f"Is injection: {is_injection}, Probability: {probability}")