Analysing a fairseq transformer model #37
Hello @aprzez, currently NeuroX only supports models that can be loaded through HuggingFace's transformers library (via neurox.data.extraction.transformers_extractor), so a fairseq checkpoint cannot be used with it directly. The first option would be to implement an analogous fairseq_extractor for fairseq models; alternatively, the model could be converted into a format the existing extractor understands.
Hope this helps!
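For reference, a minimal sketch of the extraction path NeuroX already supports (the HuggingFace-based transformers_extractor); the file names here are placeholders, and a fairseq_extractor would need to produce activations in the same output format:

```python
# Sketch of the existing HuggingFace-based extraction in NeuroX.
# 'sentences.txt' and 'activations.json' are placeholder file names.
import neurox.data.extraction.transformers_extractor as transformers_extractor

transformers_extractor.extract_representations(
    "bert-base-uncased",     # any HuggingFace model name or path
    "sentences.txt",         # input corpus, one sentence per line
    "activations.json",      # where per-token activations are written
    aggregation="average",   # how sub-word pieces are combined into tokens
)
```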
Hello @fdalvi, thank you so much for responding! If you have some guidance for the first option - implementing fairseq_extractor - that would be great. Thank you!
Hi again, I'm assuming I need to set return_all_hiddens to True so that all hidden states are saved during training. However, the model checkpoints look the same whether return_all_hiddens is True or False. I would be very grateful if you had some pointers on implementing the fairseq_extractor module; it's not clear to me what else I need to do to get this to work. Thank you in advance!
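One note that may help: return_all_hiddens is a forward-pass argument in fairseq (it controls whether the encoder returns its intermediate states), not something stored in the checkpoint, which would explain why the saved checkpoints look identical. A rough sketch of what a fairseq_extractor could do instead is to load the checkpoint through fairseq's hub interface and attach forward hooks to the encoder layers; the paths, data directory, and example input below are placeholders for this inflection model:

```python
# Rough sketch (not part of NeuroX): load a trained fairseq transformer
# checkpoint and capture each encoder layer's output with forward hooks.
# "checkpoints/", "data-bin/" and the sample input are placeholders.
import torch
from fairseq.models.transformer import TransformerModel

hub = TransformerModel.from_pretrained(
    "checkpoints/",                       # directory containing the .pt files
    checkpoint_file="checkpoint_best.pt",
    data_name_or_path="data-bin/",        # preprocessed data dir with the dictionaries
)
model = hub.models[0]
model.eval()

# Every encoder layer's output gets appended here by the hook.
layer_outputs = []

def save_output(module, inputs, output):
    # fairseq encoder layers return a (seq_len, batch, hidden_dim) tensor
    layer_outputs.append(output.detach())

for layer in model.encoder.layers:
    layer.register_forward_hook(save_output)

# Run the encoder once on a tokenized input sequence.
tokens = hub.encode("w a l k <PAST>").unsqueeze(0)   # (1, seq_len), placeholder input
lengths = torch.LongTensor([tokens.size(1)])
with torch.no_grad():
    model.encoder(tokens, src_lengths=lengths)

print(len(layer_outputs), layer_outputs[0].shape)
```

Writing these per-token vectors out in the same activation format that transformers_extractor produces (e.g. JSON) should then let the rest of the NeuroX pipeline run unchanged.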
Apologies for the delayed response, but I was away and this slipped my mind after coming back. Best,
Hello,
I trained a fairseq transformer model on an inflection task, and I am now trying to use the NeuroX toolkit to extract the representations. However, I am not sure how to import the model into neurox.data.extraction.transformers_extractor - I have the .pt files from the training checkpoints. Could you guide me?
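As a small starting point, it can help to inspect what a fairseq training checkpoint actually contains; the path below is a placeholder. The .pt file stores raw parameter tensors plus the training configuration, not a HuggingFace-style model that transformers_extractor can load directly:

```python
# Sketch: peek inside a fairseq checkpoint to see what is stored.
# "checkpoints/checkpoint_best.pt" is a placeholder path.
import torch

ckpt = torch.load("checkpoints/checkpoint_best.pt", map_location="cpu")
print(ckpt.keys())                      # typically 'cfg' (or 'args'), 'model', 'extra_state', ...
print(list(ckpt["model"].keys())[:5])   # encoder/decoder parameter tensor names
```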