[NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
[ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
APBench: A Unified Availability Poisoning Attack and Defense Benchmark (TMLR 08/2024)
The official implementation of the USENIX Security '23 paper "Meta-Sift": find a clean subset of 1,000 or more samples in a poisoned dataset in ten minutes or less.
How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
Experiments on data poisoning attacks in regression learning
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning (NeurIPS 2021)
CCS '22 paper: "Identifying a Training-Set Attack’s Target Using Renormalized Influence Estimation"
Analyzing Adversarial Bias and the Robustness of Fair Machine Learning
Code for the paper "Analysis and Detectability of Offline Data Poisoning Attacks on Linear Systems."
A backdoor attack in a federated learning setting using the FATE framework
[NeurIPS 2022] Can Adversarial Training Be Manipulated By Non-Robust Features?
Implementation of backdoor attacks and defenses in malware classification using machine learning models.
An experimental framework for monitoring in-stream data poisoning.