Fixed the reference links in LLM10_ConsumoIrrestrito.md
Signed-off-by: Talesh Seeparsan <talesh@bit79.ca>
talesh authored Feb 19, 2025
1 parent 527c3b7 commit 01ebefe
Showing 1 changed file with 2 additions and 2 deletions.
2_0_vulns/translations/pt-BR/LLM10_ConsumoIrrestrito.md (4 changes: 2 additions & 2 deletions)
@@ -101,10 +101,10 @@ Ataques podem explorar essa vulnerabilidade para interromper serviços, esgotar
 1. [Proof Pudding (CVE-2019-20634)](https://avidml.org/database/avid-2023-v009/) **AVID** (`moohax` & `monoxgas`)
 2. [arXiv:2403.06634 Stealing Part of a Production Language Model](https://arxiv.org/abs/2403.06634) **arXiv**
 3. [Runaway LLaMA | How Meta's LLaMA NLP model leaked](https://www.deeplearning.ai/the-batch/how-metas-llama-nlp-model-leaked/): **Deep Learning Blog**
-4. [I Know What You See](https://arxiv.org/pdf/1803.05847.pdf): **arXiv White Paper**
+4. [You wouldn't download an AI, Extracting AI models from mobile apps](https://altayakkus.substack.com/p/you-wouldnt-download-an-ai): **Substack blog**
 5. [A Comprehensive Defense Framework Against Model Extraction Attacks](https://ieeexplore.ieee.org/document/10080996): **IEEE**
 6. [Alpaca: A Strong, Replicable Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html): **Stanford Center on Research for Foundation Models (CRFM)**
-7. [How Watermarking Can Help Mitigate The Potential Risks Of LLMs](https://www.kdnuggets.com/2023/03/watermarking-help-mitigate-potential-risks-llms.html): **KD Nuggets**
+7. [How Watermarking Can Help Mitigate The Potential Risks Of LLMs?](https://www.kdnuggets.com/2023/03/watermarking-help-mitigate-potential-risks-llms.html): **KD Nuggets**
 8. [Securing AI Model Weights Preventing Theft and Misuse of Frontier Models](https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2800/RRA2849-1/RAND_RRA2849-1.pdf)
 9. [Sponge Examples: Energy-Latency Attacks on Neural Networks: Arxiv White Paper](https://arxiv.org/abs/2006.03463) **arXiv**
 10. [Sourcegraph Security Incident on API Limits Manipulation and DoS Attack](https://about.sourcegraph.com/blog/security-update-august-2023) **Sourcegraph**
