This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS2024)
Sanitizer is a server-side method that ensures client-embedded backdoors in federated learning can be used only for contribution demonstration and cannot be triggered by natural queries in harmful ways.
Educational Ransomware Simulation