Adversarial Robustness Toolbox: trouble reproducing the attack and defense example (ART black-box attack using HopSkipJump) #2438
mostaf7583 asked this question in Q&A
I decided to study attacks on machine learning systems. To do this, I used the Adversarial Robustness Toolbox library and tried to reproduce the example demonstrating a black-box attack with HopSkipJump using BlackBoxClassifier:
https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/attack_hopskipjump.ipynb
I followed the instructions and tried several IDEs. In my code I am trying to apply the black-box attack to my own model, modeled on the notebook provided by the Adversarial Robustness Toolbox, but it fails.
My code:
https://github.com/mostaf7583/bacheloe/blob/master/blackbox.ipynb
I was expecting the attack to output adversarial images, but instead it produced the result shown in the screenshot below.
[screenshot of the attack output]
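For context, this is roughly the setup I am trying to follow, as a minimal sketch: `BlackBoxClassifier` wraps an opaque predict function and `HopSkipJump` queries it to generate adversarial examples. The toy threshold model, the input shape, the class count, and the small attack budgets below are placeholders standing in for my own model, not values taken from the notebook.

```python
import numpy as np
from art.attacks.evasion import HopSkipJump
from art.estimators.classification import BlackBoxClassifier

def predict_fn(x: np.ndarray) -> np.ndarray:
    """Return one-hot predictions of shape (n_samples, nb_classes).

    In my real notebook this calls my own model's predict(); here a toy
    mean-intensity threshold stands in so the sketch runs on its own.
    """
    labels = (x.reshape(len(x), -1).mean(axis=1) > 0.5).astype(int)
    return np.eye(2, dtype=np.float32)[labels]

# Wrap the opaque prediction function so ART can query it without gradients.
classifier = BlackBoxClassifier(
    predict_fn=predict_fn,
    input_shape=(28, 28, 1),   # placeholder shape for my images
    nb_classes=2,              # placeholder class count
    clip_values=(0.0, 1.0),
)

# Decision-based black-box attack; small budgets just to keep the sketch fast.
attack = HopSkipJump(
    classifier=classifier,
    targeted=False,
    norm=2,
    max_iter=10,
    max_eval=1000,
    init_eval=10,
)

# Two clean images that the toy model classifies as class 0 and class 1.
x_test = np.stack([
    np.full((28, 28, 1), 0.2, dtype=np.float32),
    np.full((28, 28, 1), 0.8, dtype=np.float32),
])

x_adv = attack.generate(x=x_test)
print("adversarial batch shape:", x_adv.shape)
print("original vs adversarial predictions:",
      predict_fn(x_test).argmax(axis=1), predict_fn(x_adv).argmax(axis=1))
```

The main thing I am assuming is that the wrapped predict function must return a NumPy array of shape (n_samples, nb_classes); my real model is plugged in at that point.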