diff --git a/cifar10/Linf.html b/cifar10/Linf.html index 81dfcf6..0fbc410 100644 --- a/cifar10/Linf.html +++ b/cifar10/Linf.html @@ -25,6 +25,7 @@ + Extra
data Architecture Venue @@ -45,10 +46,10 @@ 93.27% 71.07% - 71.07%
×
+ × RaWideResNet-70-16 BMVC 2023 @@ -67,10 +68,10 @@ 93.25% 70.69% - 70.69%
×
+ × WideResNet-70-16 ICML 2023 @@ -78,6 +79,28 @@ 3 + + MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers + +
+ + It uses an ensemble of networks. The robust base classifier uses 50M synthetic images. The 69.71% robust accuracy is from the original evaluation (Adaptive AutoAttack). + + + + 95.19% + 70.08% + 69.71% +
×
+ + + ☑ + ResNet-152 + WideResNet-70-16 + arXiv, Feb 2024 + + + + 4 Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing @@ -89,17 +112,17 @@ 95.23% 68.06% - 68.06%
×
+ ☑ ResNet-152 + WideResNet-70-16 + mixing network - arXiv, Jan 2023 + SIMODS 2024 - 4 + 5 Decoupled Kullback-Leibler Divergence Loss @@ -111,17 +134,17 @@ 92.16% 67.73% - 67.73%
×
+ × WideResNet-28-10 arXiv, May 2023 - 5 + 6 Better Diffusion Models Further Improve Adversarial Training @@ -133,17 +156,17 @@ 92.44% 67.31% - 67.31%
×
+ × WideResNet-28-10 ICML 2023 - 6 + 7 Fixing Data Augmentation to Improve Adversarial Robustness @@ -155,17 +178,17 @@ 92.23% 66.58% - 66.56%
×
+ ☑ WideResNet-70-16 arXiv, Mar 2021 - 7 + 8 Improving Robustness using Generated Data @@ -177,17 +200,17 @@ 88.74% 66.11% - 66.10%
×
+ × WideResNet-70-16 NeurIPS 2021 - 8 + 9 Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples @@ -199,34 +222,34 @@ 91.10% 65.88% - 65.87%
×
+ ☑ WideResNet-70-16 arXiv, Oct 2020 - 9 + 10 Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective 91.58% 65.79% - 65.79%
×
+ ☑ WideResNet-A4 arXiv, Dec. 2022 - 10 + 11 Fixing Data Augmentation to Improve Adversarial Robustness @@ -238,17 +261,17 @@ 88.50% 64.64% - 64.58%
×
+ × WideResNet-106-16 arXiv, Mar 2021 - 11 + 12 Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks @@ -260,17 +283,17 @@ 93.73% 71.28% - 64.20%
+ ☑ WideResNet-70-16, Neural ODE block NeurIPS 2021 - 12 + 13 Fixing Data Augmentation to Improve Adversarial Robustness @@ -282,17 +305,17 @@ 88.54% 64.25% - 64.20%
×
+ × WideResNet-70-16 arXiv, Mar 2021 - 13 + 14 Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness @@ -304,17 +327,17 @@ 93.69% 63.89% - 63.89%
×
+ × WideResNet-28-10 ICLR 2023 - 14 + 15 Improving Robustness using Generated Data @@ -326,17 +349,17 @@ 87.50% 63.44% - 63.38%
×
+ × WideResNet-28-10 NeurIPS 2021 - 15 + 16 Robustness and Accuracy Could Be Reconcilable by (Proper) Definition @@ -348,34 +371,34 @@ 89.01% 63.35% - 63.35%
×
+ × WideResNet-70-16 ICML 2022 - 16 + 17 Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off 91.47% 62.83% - 62.83%
×
+ ☑ WideResNet-34-10 OpenReview, Jun 2021 - 17 + 18 Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? @@ -387,17 +410,17 @@ 87.30% 62.79% - 62.79%
×
+ × ResNest152 ICLR 2022 - 18 + 19 Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples @@ -409,17 +432,17 @@ 89.48% 62.80% - 62.76%
×
+ ☑ WideResNet-28-10 arXiv, Oct 2020 - 19 + 20 Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks @@ -431,34 +454,34 @@ 91.23% 62.54% - 62.54%
×
+ ☑ WideResNet-34-R NeurIPS 2021 - 20 + 21 Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks 90.56% 61.56% - 61.56%
×
+ ☑ WideResNet-34-R NeurIPS 2021 - 21 + 22 Parameterizing Activation Functions for Adversarial Robustness @@ -470,17 +493,17 @@ 87.02% 61.55% - 61.55%
×
+ × WideResNet-28-10-PSSiLU arXiv, Oct 2021 - 22 + 23 Robustness and Accuracy Could Be Reconcilable by (Proper) Definition @@ -492,17 +515,17 @@ 88.61% 61.04% - 61.04%
×
+ × WideResNet-28-10 ICML 2022 - 23 + 24 Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off @@ -514,17 +537,17 @@ 88.16% 60.97% - 60.97%
×
+ × WideResNet-28-10 OpenReview, Jun 2021 - 24 + 25 Fixing Data Augmentation to Improve Adversarial Robustness @@ -536,17 +559,17 @@ 87.33% 60.75% - 60.73%
×
+ × WideResNet-28-10 arXiv, Mar 2021 - 25 + 26 Do Wider Neural Networks Really Help Adversarial Robustness? @@ -558,34 +581,34 @@ 87.67% 60.65% - 60.65% Unknown + ☑ WideResNet-34-15 arXiv, Oct 2020 - 26 + 27 Improving Neural Network Robustness via Persistency of Excitation 86.53% 60.41% - 60.41%
×
+ ☑ WideResNet-34-15 ACC 2022 - 27 + 28 Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? @@ -597,51 +620,51 @@ 86.68% 60.27% - 60.27%
×
+ × WideResNet-34-10 ICLR 2022 - 28 + 29 Adversarial Weight Perturbation Helps Robust Generalization 88.25% 60.04% - 60.04%
×
+ ☑ WideResNet-28-10 NeurIPS 2020 - 29 + 30 Improving Neural Network Robustness via Persistency of Excitation 89.46% 59.66% - 59.66%
×
+ ☑ WideResNet-28-10 ACC 2022 - 30 + 31 Geometry-aware Instance-reweighted Adversarial Training @@ -653,34 +676,34 @@ 89.36% 59.64% - 59.64%
×
+ ☑ WideResNet-28-10 ICLR 2021 - 31 + 32 Unlabeled Data Improves Adversarial Robustness 89.69% 59.53% - 59.53%
×
+ ☑ WideResNet-28-10 NeurIPS 2019 - 32 + 33 Improving Robustness using Generated Data @@ -692,51 +715,73 @@ 87.35% 58.63% - 58.50%
×
+ × PreActResNet-18 NeurIPS 2021 - 33 + 34 + + Data filtering for efficient adversarial training + +
+ + + + + + 86.10% + 58.09% + 58.09% +
×
+ + + × + WideResNet-34-20 + Pattern Recognition 2024 + + + + 35 Scaling Adversarial Training to Large Perturbation Bounds 85.32% 58.04% - 58.04%
×
+ × WideResNet-34-10 ECCV 2022 - 34 + 36 Efficient and Effective Augmentation Strategy for Adversarial Training 88.71% 57.81% - 57.81%
×
+ × WideResNet-34-10 NeurIPS 2022 - 35 + 37 LTD: Low Temperature Distillation for Robust Adversarial Training @@ -748,34 +793,34 @@ 86.03% 57.71% - 57.71%
×
+ × WideResNet-34-20 arXiv, Nov 2021 - 36 + 38 Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off 89.02% 57.67% - 57.67%
×
+ ☑ PreActResNet-18 OpenReview, Jun 2021 - 37 + 39 LAS-AT: Adversarial Training with Learnable Attack Strategy @@ -787,51 +832,73 @@ 85.66% 57.61% - 57.61%
×
+ × WideResNet-70-16 arXiv, Mar 2022 - 38 + 40 A Light Recipe to Train Robust Vision Transformers 91.73% 57.58% - 57.58%
×
+ ☑ XCiT-L12 arXiv, Sep 2022 - 39 + 41 + + Data filtering for efficient adversarial training + +
+ + + + + + 86.54% + 57.30% + 57.30% +
×
+ + + × + WideResNet-34-10 + Pattern Recognition 2024 + + + + 42 A Light Recipe to Train Robust Vision Transformers 91.30% 57.27% - 57.27%
×
+ ☑ XCiT-M12 arXiv, Sep 2022 - 40 + 43 Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples @@ -843,17 +910,17 @@ 85.29% 57.20% - 57.14%
×
+ × WideResNet-70-16 arXiv, Oct 2020 - 41 + 44 HYDRA: Pruning Adversarially Robust Neural Networks @@ -865,56 +932,56 @@ 88.98% 57.14% - 57.14%
×
+ ☑ WideResNet-28-10 NeurIPS 2020 - 42 + 45 - Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off - -
- - It uses additional 1M synthetic images in training. - + Decoupled Kullback-Leibler Divergence Loss - 86.86% + 85.31% 57.09% - 57.09%
×
+ × - PreActResNet-18 - OpenReview, Jun 2021 + WideResNet-34-10 + arXiv, May 2023 - 43 + 46 - Decoupled Kullback-Leibler Divergence Loss + Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off + +
+ It uses an additional 1M synthetic images in training. + - 85.31% + 86.86% 57.09% - 57.09%
×
+ × - WideResNet-34-10 - arXiv, May 2023 + PreActResNet-18 + OpenReview, Jun 2021 - 44 + 47 LTD: Low Temperature Distillation for Robust Adversarial Training @@ -926,17 +993,17 @@ 85.21% 56.94% - 56.94%
×
+ × WideResNet-34-10 arXiv, Nov 2021 - 45 + 48 Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples @@ -948,17 +1015,17 @@ 85.64% 56.86% - 56.82%
×
+ × WideResNet-34-20 arXiv, Oct 2020 - 46 + 49 Fixing Data Augmentation to Improve Adversarial Robustness @@ -970,34 +1037,34 @@ 83.53% 56.66% - 56.66%
×
+ × PreActResNet-18 arXiv, Mar 2021 - 47 + 50 Improving Adversarial Robustness Requires Revisiting Misclassified Examples 87.50% 56.29% - 56.29%
×
+ ☑ WideResNet-28-10 ICLR 2020 - 48 + 51 LAS-AT: Adversarial Training with Learnable Attack Strategy @@ -1009,68 +1076,68 @@ 84.98% 56.26% - 56.26%
×
+ × WideResNet-34-10 arXiv, Mar 2022 - 49 + 52 Adversarial Weight Perturbation Helps Robust Generalization 85.36% 56.17% - 56.17%
×
+ × WideResNet-34-10 NeurIPS 2020 - 50 + 53 A Light Recipe to Train Robust Vision Transformers 90.06% 56.14% - 56.14%
×
+ ☑ XCiT-S12 arXiv, Sep 2022 - 51 + 54 Are Labels Required for Improving Adversarial Robustness? 86.46% 56.03% - 56.03% Unknown + ☑ WideResNet-28-10 NeurIPS 2019 - 52 + 55 Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? @@ -1082,34 +1149,34 @@ 84.59% 55.54% - 55.54%
×
+ × ResNet-18 ICLR 2022 - 53 + 56 Using Pre-Training Can Improve Model Robustness and Uncertainty 87.11% 54.92% - 54.92%
×
+ ☑ WideResNet-28-10 ICML 2019 - 54 + 57 Bag of Tricks for Adversarial Training @@ -1121,34 +1188,34 @@ 86.43% 54.39% - 54.39% Unknown + × WideResNet-34-20 ICLR 2021 - 55 + 58 Boosting Adversarial Training with Hypersphere Embedding 85.14% 53.74% - 53.74%
×
+ × WideResNet-34-20 NeurIPS 2020 - 56 + 59 Learnable Boundary Guided Adversarial Training @@ -1160,51 +1227,51 @@ 88.70% 53.57% - 53.57%
×
+ × WideResNet-34-20 ICCV 2021 - 57 + 60 Attacks Which Do Not Kill Training Make Adversarial Learning Stronger 84.52% 53.51% - 53.51%
×
+ × WideResNet-34-10 ICML 2020 - 58 + 61 Overfitting in adversarially robust deep learning 85.34% 53.42% - 53.42%
×
+ × WideResNet-34-20 ICML 2020 - 59 + 62 Self-Adaptive Training: beyond Empirical Risk Minimization @@ -1216,17 +1283,17 @@ 83.48% 53.34% - 53.34% Unknown + × WideResNet-34-10 NeurIPS 2020 - 60 + 63 Theoretically Principled Trade-off between Robustness and Accuracy @@ -1238,17 +1305,17 @@ 84.92% 53.08% - 53.08% Unknown + × WideResNet-34-10 ICML 2019 - 61 + 64 Learnable Boundary Guided Adversarial Training @@ -1260,51 +1327,51 @@ 88.22% 52.86% - 52.86%
×
+ × WideResNet-34-10 ICCV 2021 - 62 + 65 Adversarial Robustness through Local Linearization 86.28% 52.84% - 52.84% Unknown + × WideResNet-40-8 NeurIPS 2019 - 63 + 66 Efficient and Effective Augmentation Strategy for Adversarial Training 85.71% 52.48% - 52.48%
×
+ × ResNet-18 NeurIPS 2022 - 64 + 67 Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning @@ -1316,17 +1383,17 @@ 86.04% 51.56% - 51.56% Unknown + × ResNet-50 CVPR 2020 - 65 + 68 Efficient Robust Training via Backward Smoothing @@ -1338,34 +1405,34 @@ 85.32% 51.12% - 51.12% Unknown + × WideResNet-34-10 arXiv, Oct 2020 - 66 + 69 Scaling Adversarial Training to Large Perturbation Bounds 80.24% 51.06% - 51.06%
×
+ × ResNet-18 ECCV 2022 - 67 + 70 Improving Adversarial Robustness Through Progressive Hardening @@ -1377,68 +1444,68 @@ 86.84% 50.72% - 50.72% Unknown + × WideResNet-34-10 arXiv, Mar 2020 - 68 + 71 Robustness library 87.03% 49.25% - 49.25% Unknown + × ResNet-50 GitHub,
Oct 2019 - 69 + 72 Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models 87.80% 49.12% - 49.12% Unknown + × WideResNet-34-10 IJCAI 2019 - 70 + 73 Metric Learning for Adversarial Robustness 86.21% 47.41% - 47.41% Unknown + × WideResNet-34-10 NeurIPS 2019 - 71 + 74 You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle @@ -1450,34 +1517,34 @@ 87.20% 44.83% - 44.83% Unknown + × WideResNet-34-10 NeurIPS 2019 - 72 + 75 Towards Deep Learning Models Resistant to Adversarial Attacks 87.14% 44.04% - 44.04% Unknown + × WideResNet-34-10 ICLR 2018 - 73 + 76 Understanding and Improving Fast Adversarial Training @@ -1489,34 +1556,34 @@ 79.84% 43.93% - 43.93% Unknown + × PreActResNet-18 NeurIPS 2020 - 74 + 77 Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness 80.89% 43.48% - 43.48% Unknown + × ResNet-32 ICLR 2020 - 75 + 78 Fast is better than free: Revisiting adversarial training @@ -1528,51 +1595,51 @@ 83.34% 43.21% - 43.21% Unknown + × PreActResNet-18 ICLR 2020 - 76 + 79 Adversarial Training for Free! 86.11% 41.47% - 41.47% Unknown + × WideResNet-34-10 NeurIPS 2019 - 77 + 80 MMA Training: Direct Input Space Margin Maximization through Adversarial Training 84.36% 41.44% - 41.44% Unknown + × WideResNet-28-4 ICLR 2020 - 78 + 81 A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs @@ -1584,17 +1651,17 @@ 87.32% 40.41% - 40.41%
×
+ × ResNet-18 ASP-DAC 2021 - 79 + 82 Controlling Neural Level Sets @@ -1606,102 +1673,102 @@ 81.30% 40.22% - 40.22% Unknown + × ResNet-18 NeurIPS 2019 - 80 + 83 Robustness via Curvature Regularization, and Vice Versa 83.11% 38.50% - 38.50% Unknown + × ResNet-18 CVPR 2019 - 81 + 84 Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training 89.98% 36.64% - 36.64% Unknown + × WideResNet-28-10 NeurIPS 2019 - 82 + 85 Adversarial Interpolation Training: A Simple Approach for Improving Model Robustness 90.25% 36.45% - 36.45% Unknown + × WideResNet-28-10 OpenReview, Sep 2019 - 83 + 86 Adversarial Defense via Learning to Generate Diverse Attacks 78.91% 34.95% - 34.95% Unknown + × ResNet-20 ICCV 2019 - 84 + 87 Sensible adversarial learning 91.51% 34.22% - 34.22% Unknown + × WideResNet-34-10 OpenReview, Sep 2019 - 85 + 88 Towards Stable and Efficient Training of Verifiably Robust Neural Networks @@ -1713,34 +1780,34 @@ 44.73% 32.64% - 32.64% Unknown + × 5-layer-CNN ICLR 2020 - 86 + 89 Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks 92.80% 29.35% - 29.35% Unknown + × WideResNet-28-10 ICCV 2019 - 87 + 90 Enhancing Adversarial Defense by k-Winners-Take-All @@ -1752,85 +1819,68 @@ 79.28% 18.50% - 7.40%
+ × DenseNet-121 ICLR 2020 - 88 + 91 Manifold Regularization for Adversarial Robustness 90.84% 1.35% - 1.35% Unknown + × ResNet-18 arXiv, Mar 2020 - 89 - - None - - - 0.954% - 0.687% - - 0.687% - Unknown - - × - None - None - - - - 90 + 92 Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks 89.16% 0.28% - 0.28% Unknown + × ResNet-110 ICCV 2019 - 91 + 93 Jacobian Adversarially Regularized Networks for Robustness 93.79% 0.26% - 0.26% Unknown + × WideResNet-34-10 ICLR 2020 - 92 + 94 ClusTR: Clustering Training for Robustness @@ -1838,27 +1888,27 @@ 91.03% 0.00% - 0.00% Unknown + × WideResNet-28-10 arXiv, Jun 2020 - 93 + 95 Standardly trained model 94.78% 0.0% - 0.0% Unknown + × WideResNet-28-10 N/A diff --git a/cifar100/Linf.html b/cifar100/Linf.html index 15c42c5..22e27aa 100644 --- a/cifar100/Linf.html +++ b/cifar100/Linf.html @@ -25,6 +25,7 @@ + Extra
data Architecture Venue @@ -45,10 +46,10 @@ 75.22% 42.67% - 42.67%
×
+ × WideResNet-70-16 ICML 2023 @@ -56,6 +57,28 @@ 2 + + MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers + +
+ + It uses an ensemble of networks. The robust base classifier uses 50M synthetic images. The 41.80% robust accuracy is from the original evaluation (Adaptive AutoAttack). + + + + 83.08% + 41.91% + 41.80% +
×
+ + + ☑ + ResNet-152 + WideResNet-70-16 + arXiv, Feb 2024 + + + + 3 Decoupled Kullback-Leibler Divergence Loss @@ -67,17 +90,17 @@ 73.85% 39.18% - 39.18%
×
+ × WideResNet-28-10 arXiv, May 2023 - 3 + 4 Better Diffusion Models Further Improve Adversarial Training @@ -89,17 +112,17 @@ 72.58% 38.83% - 38.83%
×
+ × WideResNet-28-10 ICML 2023 - 4 + 5 Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing @@ -111,34 +134,34 @@ 85.21% 38.72% - 38.72%
×
+ ☑ ResNet-152 + WideResNet-70-16 + mixing network - arXiv, Jan 2023 + SIMODS 2024 - 5 + 6 Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples 69.15% 36.88% - 36.88%
×
+ ☑ WideResNet-70-16 arXiv, Oct 2020 - 6 + 7 Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing @@ -150,34 +173,34 @@ 80.18% 35.15% - 35.15%
×
+ ☑ ResNet-152 + WideResNet-70-16 + mixing network - arXiv, Jan 2023 + SIMODS 2024 - 7 + 8 A Light Recipe to Train Robust Vision Transformers 70.76% 35.08% - 35.08%
×
+ ☑ XCiT-L12 arXiv, Sep 2022 - 8 + 9 Fixing Data Augmentation to Improve Adversarial Robustness @@ -189,34 +212,34 @@ 63.56% 34.64% - 34.64%
×
+ × WideResNet-70-16 arXiv, Mar 2021 - 9 + 10 A Light Recipe to Train Robust Vision Transformers 69.21% 34.21% - 34.21%
×
+ ☑ XCiT-M12 arXiv, Sep 2022 - 10 + 11 Robustness and Accuracy Could Be Reconcilable by (Proper) Definition @@ -228,17 +251,17 @@ 65.56% 33.05% - 33.05%
×
+ × WideResNet-70-16 ICML 2022 - 11 + 12 Decoupled Kullback-Leibler Divergence Loss @@ -250,34 +273,34 @@ 65.93% 32.52% - 32.52%
×
+ × WideResNet-34-10 arXiv, May 2023 - 12 + 13 A Light Recipe to Train Robust Vision Transformers 67.34% 32.19% - 32.19%
×
+ ☑ XCiT-S12 arXiv, Sep 2022 - 13 + 14 Fixing Data Augmentation to Improve Adversarial Robustness @@ -289,17 +312,17 @@ 62.41% 32.06% - 32.06%
×
+ × WideResNet-28-10 arXiv, Mar 2021 - 14 + 15 LAS-AT: Adversarial Training with Learnable Attack Strategy @@ -311,51 +334,51 @@ 67.31% 31.91% - 31.91%
×
+ × WideResNet-34-20 arXiv, Mar 2022 - 15 + 16 Efficient and Effective Augmentation Strategy for Adversarial Training 68.75% 31.85% - 31.85%
×
+ × WideResNet-34-10 NeurIPS 2022 - 16 + 17 Decoupled Kullback-Leibler Divergence Loss 64.08% 31.65% - 31.65%
×
+ × WideResNet-34-10 arXiv, May 2023 - 17 + 18 Learnable Boundary Guided Adversarial Training @@ -367,17 +390,17 @@ 62.99% 31.20% - 31.20%
×
+ × WideResNet-34-10 ICCV 2021 - 18 + 19 Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? @@ -389,17 +412,39 @@ 65.93% 31.15% - 31.15%
×
+ × WideResNet-34-10 ICLR 2022 - 19 + 20 + + Data filtering for efficient adversarial training + +
+ + + + + + 64.32% + 31.13% + 31.13% +
×
+ + + × + WideResNet-34-10 + Pattern Recognition 2024 + + + + 21 Robustness and Accuracy Could Be Reconcilable by (Proper) Definition @@ -411,17 +456,17 @@ 63.66% 31.08% - 31.08%
×
+ × WideResNet-28-10 ICML 2022 - 20 + 22 LAS-AT: Adversarial Training with Learnable Attack Strategy @@ -433,17 +478,17 @@ 64.89% 30.77% - 30.77%
×
+ × WideResNet-34-10 arXiv, Mar 2022 - 21 + 23 LTD: Low Temperature Distillation for Robust Adversarial Training @@ -455,34 +500,34 @@ 64.07% 30.59% - 30.59%
×
+ × WideResNet-34-10 arXiv, Nov 2021 - 22 + 24 Scaling Adversarial Training to Large Perturbation Bounds 65.73% 30.35% - 30.35%
×
+ × WideResNet-34-10 ECCV 2022 - 23 + 25 Learnable Boundary Guided Adversarial Training @@ -494,34 +539,34 @@ 62.55% 30.20% - 30.20% Unknown + × WideResNet-34-20 ICCV 2021 - 24 + 26 Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples 60.86% 30.03% - 30.03%
×
+ × WideResNet-70-16 arXiv, Oct 2020 - 25 + 27 Learnable Boundary Guided Adversarial Training @@ -533,17 +578,17 @@ 60.64% 29.33% - 29.33% Unknown + × WideResNet-34-10 ICCV 2021 - 26 + 28 Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off @@ -555,34 +600,34 @@ 61.50% 28.88% - 28.88%
×
+ × PreActResNet-18 OpenReview, Jun 2021 - 27 + 29 Adversarial Weight Perturbation Helps Robust Generalization 60.38% 28.86% - 28.86%
×
+ × WideResNet-34-10 NeurIPS 2020 - 28 + 30 Fixing Data Augmentation to Improve Adversarial Robustness @@ -594,51 +639,51 @@ 56.87% 28.50% - 28.50%
×
+ × PreActResNet-18 arXiv, Mar 2021 - 29 + 31 Using Pre-Training Can Improve Model Robustness and Uncertainty 59.23% 28.42% - 28.42% Unknown + ☑ WideResNet-28-10 ICML 2019 - 30 + 32 Efficient and Effective Augmentation Strategy for Adversarial Training 65.45% 27.67% - 27.67%
×
+ × ResNet-18 NeurIPS 2022 - 31 + 33 Learnable Boundary Guided Adversarial Training @@ -650,78 +695,78 @@ 70.25% 27.16% - 27.16%
×
+ × WideResNet-34-10 ICCV 2021 - 32 + 34 Scaling Adversarial Training to Large Perturbation Bounds 62.02% 27.14% - 27.14%
×
+ × PreActResNet-18 ECCV 2022 - 33 + 35 Efficient Robust Training via Backward Smoothing 62.15% 26.94% - 26.94% Unknown + × WideResNet-34-10 arXiv, Oct 2020 - 34 + 36 Improving Adversarial Robustness Through Progressive Hardening 62.82% 24.57% - 24.57% Unknown + × WideResNet-34-10 arXiv, Mar 2020 - 35 + 37 Overfitting in adversarially robust deep learning 53.83% 18.95% - 18.95% Unknown + × PreActResNet-18 ICML 2020 diff --git a/imagenet/Linf.html b/imagenet/Linf.html index af36ed7..6e1999a 100644 --- a/imagenet/Linf.html +++ b/imagenet/Linf.html @@ -25,6 +25,7 @@ + Extra
data Architecture Venue @@ -40,10 +41,10 @@ 78.92% 59.56% - 59.56%
×
+ × Swin-L arXiv, Feb 2023 @@ -51,278 +52,322 @@ 2 + + MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers + +
+ + It uses an ensemble of networks. The accurate base classifier was pre-trained on ImageNet-21k. The 58.50% robust accuracy is from the original evaluation (Adaptive AutoAttack). + + + + 81.48% + 58.62% + 58.50% +
×
+ + + ☑ + ConvNeXtV2-L + Swin-L + arXiv, Feb 2024 + + + + 3 A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking 78.02% 58.48% - 58.48%
×
+ × ConvNeXt-L arXiv, Feb 2023 - 3 + 4 Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models 77.00% 57.70% - 57.70%
×
+ × ConvNeXt-L + ConvStem - arXiv, Mar 2023 + NeurIPS 2023 - 4 + 5 A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking 76.16% 56.16% - 56.16%
×
+ × Swin-B arXiv, Feb 2023 - 5 + 6 Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models 75.90% 56.14% - 56.14%
×
+ × ConvNeXt-B + ConvStem - arXiv, Mar 2023 + NeurIPS 2023 - 6 + 7 A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking 76.02% 55.82% - 55.82%
×
+ × ConvNeXt-B arXiv, Feb 2023 - 7 + 8 Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models 76.30% 54.66% - 54.66%
×
+ × ViT-B + ConvStem - arXiv, Mar 2023 + NeurIPS 2023 - 8 + 9 Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models 74.10% 52.42% - 52.42%
×
+ × ConvNeXt-S + ConvStem - arXiv, Mar 2023 + NeurIPS 2023 - 9 + 10 Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models 72.72% 49.46% - 49.46%
×
+ × ConvNeXt-T + ConvStem - arXiv, Mar 2023 + NeurIPS 2023 - 10 + 11 Robust Principles: Architectural Design Principles for Adversarially Robust CNNs 73.44% 48.94% - 48.94%
×
+ × RaWideResNet-101-2 BMVC 2023 - 11 + 12 Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models 72.56% 48.08% - 48.08%
×
+ × ViT-S + ConvStem - arXiv, Mar 2023 + NeurIPS 2023 - 12 + 13 A Light Recipe to Train Robust Vision Transformers 73.76% 47.60% - 47.60%
×
+ × XCiT-L12 arXiv, Sep 2022 - 13 + 14 A Light Recipe to Train Robust Vision Transformers 74.04% 45.24% - 45.24%
×
+ × XCiT-M12 arXiv, Sep 2022 - 14 + 15 A Light Recipe to Train Robust Vision Transformers 72.34% 41.78% - 41.78%
×
+ × XCiT-S12 arXiv, Sep 2022 - 15 + 16 + + Data filtering for efficient adversarial training + +
+ + + + + + 68.76% + 40.60% + 40.60% +
×
+ + + × + WideResNet-50-2 + Pattern Recognition 2024 + + + + 17 Do Adversarially Robust ImageNet Models Transfer Better? 68.46% 38.14% - 38.14%
×
+ × WideResNet-50-2 NeurIPS 2020 - 16 + 18 Do Adversarially Robust ImageNet Models Transfer Better? 64.02% 34.96% - 34.96%
×
+ × ResNet-50 NeurIPS 2020 - 17 + 19 Robustness library 62.56% 29.22% - 29.22%
×
+ × ResNet-50 GitHub,
Oct 2019 - 18 + 20 Fast is better than free: Revisiting adversarial training @@ -334,44 +379,44 @@ 55.62% 26.24% - 26.24%
×
+ × ResNet-50 ICLR 2020 - 19 + 21 Do Adversarially Robust ImageNet Models Transfer Better? 52.92% 25.32% - 25.32%
×
+ × ResNet-18 NeurIPS 2020 - 20 + 22 Standardly trained model 76.52% 0.0% - 0.0%
×
+ × ResNet-50 N/A