SLoPe: Double-Pruned Sparse Plus Lazy Low-rank Adapter Pretraining

SLoPe is a method for efficient and accurate pretraining of large language models. It combines double-pruned sparse weights with lazy low-rank adapters to reduce both pretraining time and memory usage.
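Conceptually, this pairs a pruned (sparse) weight matrix with a small low-rank adapter in each linear layer. The sketch below is only a minimal illustration of that sparse-plus-low-rank structure under our own assumptions; the class name, the magnitude-pruning stand-in, and the rank/sparsity defaults are hypothetical and not taken from the SLoPe implementation.

import torch
import torch.nn as nn

class SparseLowRankLinear(nn.Module):
    # Illustrative sketch only: a magnitude-pruned sparse weight plus a
    # low-rank adapter B @ A. This is not the SLoPe code.
    def __init__(self, in_features, out_features, rank=16, sparsity=0.5):
        super().__init__()
        weight = torch.randn(out_features, in_features) / in_features ** 0.5
        # Simple magnitude pruning stands in for the double-pruning scheme (assumption).
        k = max(1, int(sparsity * weight.numel()))
        threshold = weight.abs().flatten().kthvalue(k).values
        # In real sparse pretraining the mask would be re-applied after each update;
        # here we only prune once for the sake of the sketch.
        self.sparse_weight = nn.Parameter(weight * (weight.abs() > threshold))
        # Low-rank adapter factors; in SLoPe these are added lazily, late in pretraining.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x):
        # y = x W_sparse^T + (x A^T) B^T : sparse path plus low-rank correction
        return x @ self.sparse_weight.t() + (x @ self.A.t()) @ self.B.t()

For example, layer = SparseLowRankLinear(1024, 1024) followed by layer(torch.randn(8, 1024)) returns a tensor of shape (8, 1024); the low-rank factors B are initialized to zero so the adapter contributes nothing until it is trained.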

Code Coming Soon!

We are excited to share our code with the community and are working on preparing it for release. Please stay tuned for updates, and thank you for your patience!

About SLoPe

SLoPe is a research project that aims to push the boundaries of efficient large language model pretraining. Our approach achieves state-of-the-art results on various benchmarks, and we believe it can benefit a wide range of natural language processing applications.

Citation

If you use SLoPe in your research, please cite our paper:

@article{slope:2024,
    title        = {{SLoPe: Double-Pruned Sparse Plus Lazy Low-rank Adapter Pretraining of LLMs}},
    author       = {Mozaffari, Mohammad and Yazdanbakhsh, Amir and Zhang, Zhao and Mehri Dehnavi, Maryam},
    year         = 2024,
    journal      = {arXiv preprint}
}
