SLoPe is a method for efficient and accurate large language model pretraining. It combines double-pruned sparse weights, where both the forward-pass weight matrix and its transpose in the backward pass are pruned, with lazy low-rank adapters that are added only in the final stage of pretraining, yielding speedups and memory reductions in both training and inference.
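To give a concrete feel for the sparse-plus-low-rank structure, the PyTorch sketch below prunes a weight matrix to a 2:4 pattern and adds a low-rank correction on top. It is a minimal illustration, not the SLoPe implementation: the `prune_2_to_4` helper, the module name, the zero initialization, and the adapter rank are assumptions for exposition, a dense masked matmul stands in for real sparse kernels, and only the forward-pass half of the double pruning is shown.

```python
import torch
import torch.nn as nn


def prune_2_to_4(w: torch.Tensor) -> torch.Tensor:
    """Zero the two smallest-magnitude entries in every group of four (2:4 sparsity).

    Assumes the total number of elements is divisible by 4.
    """
    groups = w.reshape(-1, 4)
    _, drop = groups.abs().topk(2, dim=1, largest=False)  # indices to prune
    mask = torch.ones_like(groups)
    mask.scatter_(1, drop, 0.0)
    return (groups * mask).reshape(w.shape)


class SparsePlusLowRankLinear(nn.Module):
    """A linear layer computing x @ W_sparse^T + x @ A^T @ B^T.

    The low-rank factors A and B stand in for SLoPe's lazy adapters.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        # Zero-initialized adapter factors: the layer starts out purely sparse.
        self.A = nn.Parameter(torch.zeros(rank, in_features))
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_sparse = prune_2_to_4(self.weight)  # dense stand-in for a sparse matmul
        return x @ w_sparse.t() + (x @ self.A.t()) @ self.B.t()


# Example: a 2:4-sparse layer with a rank-4 adapter.
layer = SparsePlusLowRankLinear(in_features=128, out_features=64, rank=4)
y = layer(torch.randn(2, 128))  # shape: (2, 64)
```

Note that in SLoPe the adapters are attached only late in pretraining; in this sketch they are present from the start purely for simplicity.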
We are excited to share our code with the community and are currently preparing it for release. Please stay tuned for updates, and thank you for your patience!
SLoPe is a research project that aims to push the boundaries of efficient large language model pretraining. Our approach improves the accuracy of sparsely pretrained models while accelerating their pretraining and inference and reducing their memory footprint, and we believe it can benefit a wide range of natural language processing applications.
If you use SLoPe in your research, please cite our paper:
@article{slope:2024,
  title   = {{SLoPe: Double-Pruned Sparse Plus Lazy Low-rank Adapter Pretraining of LLMs}},
  author  = {Mozaffari, Mohammad and Yazdanbakhsh, Amir and Zhang, Zhao and Mehri Dehnavi, Maryam},
  year    = {2024},
  journal = {arXiv preprint}
}