[ACL 2025] Outlier-Safe Pre-Training for Robust 4-Bit Quantization of Large Language Models
[ACL 2025] Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms
[ACL 2025 main] SeedBench: A Multi-task Benchmark for Evaluating Large Language Models in Seed Science🌾
Official code for ACL'25 Main: "Neural Incompatibility: The Unbridgeable Gap of Cross-Scale Parametric Knowledge Transfer in Large Language Models"
Repository of the MyMy team, ranked Top 2 in both Subtask 1 and Subtask 2 of the SemEval 2025 Task 9: Food Hazard Detection Challenge.
🥷🏻 Code for our ACL 2025 Main paper: "Can LLMs Deceive CLIP? Benchmarking Adversarial Compositionality of Pre-trained Multimodal Representation via Text Updates"
Official implementation for the ACL 2025 Main paper "Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models"
Nunchi-Bench: Benchmarking Language Models on Cultural Reasoning with a Focus on Korean Superstition
Repository for the paper "UAlberta at SemEval-2025 Task 2: Prompting and Ensembling for Entity-Aware Translation", in Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), Association for Computational Linguistics.
Code and dataset for evaluating Multimodal LLMs on indexical, iconic, and symbolic gestures (Nishida et al., ACL 2025)