Hi,
A recent paper from NVIDIA, https://arxiv.org/abs/2311.09578, uses NeMo to implement a novel LoRA method called weight-tying LoRA, which significantly reduces the number of trainable LoRA parameters without sacrificing performance.
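For context, here is my rough understanding of the idea as a minimal PyTorch sketch: a single low-rank pair A/B is created once and shared (tied) across all layers, and each layer only trains a tiny layer-specific scaling vector. This is just my reading of the paper, not NeMo's actual implementation; all class and parameter names below are made up.

```python
import torch
import torch.nn as nn

class TiedLoRALinear(nn.Module):
    """Frozen linear layer plus a LoRA update whose low-rank factors
    A and B are shared (tied) across all layers; only a small per-layer
    scaling vector is unique to each layer.
    Hypothetical sketch -- not NeMo's actual implementation."""

    def __init__(self, base: nn.Linear, shared_A: nn.Parameter,
                 shared_B: nn.Parameter, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad_(False)
        self.A = shared_A  # (r, in_features), tied across layers
        self.B = shared_B  # (out_features, r), tied across layers
        # The only layer-specific trainable LoRA parameters: a length-r
        # scaling vector, which is where the parameter savings come from.
        self.scale = nn.Parameter(torch.ones(shared_A.shape[0]))
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.A.shape[0]
        lora_out = ((x @ self.A.t()) * self.scale) @ self.B.t()
        return self.base(x) + (self.alpha / r) * lora_out

# The shared factors are created once and passed to every adapted layer:
d, r = 1024, 8
shared_A = nn.Parameter(torch.randn(r, d) * 0.01)
shared_B = nn.Parameter(torch.zeros(d, r))  # zero init => no update at start
layers = [TiedLoRALinear(nn.Linear(d, d), shared_A, shared_B)
          for _ in range(12)]
```

If I read it right, per-layer LoRA at r=8 and d=1024 would add 2*d*r ≈ 16K trainable parameters per layer, whereas the tied variant adds only r per layer on top of a single shared 2*d*r pair.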
The link mentioned in the paper is https://github.com/NVIDIA/NeMo/commits/adithyare/vera, but there are no guidelines on how to set up the training script, its arguments, etc.
On top of that, I have not found any resources on using NeMo for LoRA training in general, so I am quite confused.
Could anyone with LoRA experience point out how to use NeMo for LoRA, or even weight-tying LoRA?
Thanks!
Replies:
Ok, found one here