Experiments on Data Poisoning Regression Learning
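For readers new to the topic, here is a minimal, self-contained sketch (not taken from the repository above) of what poisoning a regression learner can look like: a small fraction of adversarially labelled points is injected into the training set and the resulting shift in test error is measured. It assumes scikit-learn and NumPy; all names and values are illustrative.

```python
# Illustrative sketch only: inject a small fraction of adversarially
# labelled points into a regression training set and compare test error.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Clean synthetic regression data: y = 3x + noise
X_train = rng.uniform(-1, 1, size=(200, 1))
y_train = 3.0 * X_train.ravel() + rng.normal(scale=0.1, size=200)
X_test = rng.uniform(-1, 1, size=(100, 1))
y_test = 3.0 * X_test.ravel()

# Poison: add ~5% extra points whose targets are inverted to pull the fit away
n_poison = 10
X_poison = rng.uniform(-1, 1, size=(n_poison, 1))
y_poison = -3.0 * X_poison.ravel()  # adversarial targets

X_mix = np.vstack([X_train, X_poison])
y_mix = np.concatenate([y_train, y_poison])

clean_model = Ridge(alpha=1.0).fit(X_train, y_train)
poisoned_model = Ridge(alpha=1.0).fit(X_mix, y_mix)

print("clean test MSE:   ", mean_squared_error(y_test, clean_model.predict(X_test)))
print("poisoned test MSE:", mean_squared_error(y_test, poisoned_model.predict(X_test)))
```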
A backdoor attack in a federated learning setting using the FATE framework
Flareon: Stealthy Backdoor Injection via Poisoned Augmentation
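As a rough illustration of the trigger-based attacks behind the two entries above (not the FATE or Flareon implementations), the sketch below stamps a small patch onto a fraction of a batch during augmentation and relabels those samples to an attacker-chosen class. It assumes PyTorch tensors in CHW layout; every name and parameter is hypothetical.

```python
# Illustrative sketch only: a "poisoned augmentation" that applies a 3x3
# trigger patch to a random subset of a batch and flips their labels.
import torch

def poisoned_augment(images, labels, target_class=0, poison_rate=0.1, patch_value=1.0):
    """Stamp a trigger on a random subset of the batch and relabel those samples."""
    images = images.clone()
    labels = labels.clone()
    n = images.size(0)
    n_poison = max(1, int(poison_rate * n))
    idx = torch.randperm(n)[:n_poison]
    images[idx, :, -3:, -3:] = patch_value   # trigger in the bottom-right corner
    labels[idx] = target_class               # attacker-chosen target label
    return images, labels

# Usage inside a training loop (sketch):
# for x, y in dataloader:
#     x, y = poisoned_augment(x, y)
#     loss = criterion(model(x), y)
```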
This repository contains the code, the dataset, and the experimental results related to the paper "Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks", accepted for publication at the 32nd IEEE/ACM International Conference on Program Comprehension (ICPC 2024).
This is the official code for the ESORICS 2024 paper "ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification"
[ACCV 2022] The official repository of "COLLIDER: A Robust Training Framework for Backdoor Data".