This is the official implementation for the paper: Harnessing LLM to Attack LLM-Guarded Text-to-Image Models
Warning: This repository may contain harmful content.
The dataset directory contains the dataset file VBCDE-100.txt and adversarial prompt examples generated by DACA in daca_adv_prompts.txt.
The source code directory contains the implementation of the DACA method and a set of example outputs in sample_output.
The gallery directory contains representative images generated by DALL·E 3 during the paper's evaluation.