Frida script to bypass the iOS application Jailbreak Detection
-
Updated
Mar 5, 2019 - JavaScript
Frida script to bypass the iOS application Jailbreak Detection
Does Refusal Training in LLMs Generalize to the Past Tense? [NeurIPS 2024 Safe Generative AI Workshop (Oral)]
An extensive prompt to make a friendly persona from a chatbot-like model like ChatGPT
Security Kit is a lightweight framework that helps to achieve a security layer
During the Development of Suave7 and it's Predecessors, we've created a lot of Icons and UI-Images and we would like to share them with you. The Theme Developer Kit contains nearly 5.600 Icons, more than 380 Photoshop-Templates and 100 Pixelmator-Documents. With this Package you can customize every App from the App Store …
Customizable Dark Mode Extension for iOS 13+
iOS APT distribution repository for rootful and rootless jailbreaks
Source code for bypass tweaks hosted under https://github.com/hekatos/repo. Licensed under 0BSD except submodules
LV-Crew.org_(LVC)_-_Howto_-_iPhones
This repository contains the code for the paper "Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks" by Abhinav Rao, Sachin Vashishta*, Atharva Naik*, Somak Aditya, and Monojit Choudhury, accepted at LREC-CoLING 2024
Updater script for iOS-OTA-Downgrader.
Your best llm security paper library
HITC reborn: faster, better and prettier
"ChatGPT Evil Confidant Mode" delves into a controversial and unethical use of AI, highlighting how specific prompts can generate harmful and malicious responses from ChatGPT.
ChatGPT Developer Mode is a jailbreak prompt introduced to perform additional modifications and customization of the OpenAI ChatGPT model.
Script and study research on deepin that removes any bogus feature on Deepin
Add a description, image, and links to the jailbreaking topic page so that developers can more easily learn about it.
To associate your repository with the jailbreaking topic, visit your repo's landing page and select "manage topics."