Dear Author,
I noticed that the new version of your paper on arXiv includes "Low-Rank Adaptation Matching". I'm interested in how it is implemented, but I can't find the corresponding code in distill.py or buffer.py. Could you please share the code for Low-Rank Adaptation Matching?
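While waiting for the authors' release, here is a minimal sketch of one plausible reading of "Low-Rank Adaptation Matching": wrap each linear layer with trainable low-rank factors (LoRA) on top of a frozen base weight, and compute the trajectory-matching loss over the low-rank factors only, instead of the full weight space. All names below (`LoRALinear`, `lora_params`, `matching_loss`) are hypothetical, not from the paper's codebase, and the loss follows the standard MTT-style normalized distance as an assumption.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A (assumed form)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def lora_params(model: nn.Module):
    """Collect only the low-rank factors, so matching happens in LoRA space."""
    return [p for n, p in model.named_parameters() if n.endswith(("A", "B"))]

def matching_loss(student_params, expert_end, expert_start):
    """MTT-style normalized distance, computed over LoRA factors only (assumption)."""
    num = sum((s - e).pow(2).sum() for s, e in zip(student_params, expert_end))
    den = sum((s0 - e).pow(2).sum() for s0, e in zip(expert_start, expert_end))
    return num / (den + 1e-12)
```

Under this reading, buffer.py would save expert trajectories of the LoRA factors only, which also makes the saved checkpoints much smaller than full-weight trajectories.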
I also have another question.
I noticed that Figure 2 in your paper shows a projection head for the vision encoder as well. But in distill.py, it seems that only the vision encoder is considered when matching the training trajectory. What if I want to keep the vision encoder frozen and match the training trajectory of the vision projection head instead?
Could you please give me some guidance on how to deal with it when you are available?
Thanks so much!
Actually, I tried using vit-base-patch16-224 as the visual encoder, but I ran into an OOM error on a single A6000 GPU. So I hope to freeze the encoder and train only the projection head.
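In case it helps anyone with the same OOM issue, here is a minimal sketch (not the authors' code) of freezing the encoder and exposing only the projection head to the optimizer. `VisionModel` and its layer sizes are hypothetical stand-ins for the model built in distill.py; wrapping the frozen encoder's forward pass in `torch.no_grad()` also avoids storing its activations, which is where most of the memory goes.

```python
import torch
import torch.nn as nn

class VisionModel(nn.Module):
    """Hypothetical stand-in: frozen backbone + trainable projection head."""
    def __init__(self, in_dim=16, feat_dim=8, proj_dim=4):
        super().__init__()
        self.encoder = nn.Linear(in_dim, feat_dim)  # stand-in for vit-base-patch16-224
        self.proj = nn.Linear(feat_dim, proj_dim)   # projection head

    def forward(self, x):
        with torch.no_grad():            # no activation storage for the frozen backbone
            feats = self.encoder(x)
        return self.proj(feats)

model = VisionModel()
for p in model.encoder.parameters():
    p.requires_grad_(False)              # freeze the encoder

# Only the head's parameters would enter the expert/student trajectories.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01)
```

With this setup, buffer.py would only need to save the projection head's state dict along the expert trajectory, and distill.py would match against those head weights alone.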