fix case of special tokens in encoder #19
Comments
…urrent stim representation. In partial fulfilment of #19 (excludes special tokens from the individual stim length computation, but includes them in the overall context anyway, so extraction is still affected).
What happens now is that the special tokens are chopped off from each stimulus when extracting stimulus-level representations evaluated within a context.
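A minimal sketch of what that extraction looks like, assuming a Hugging Face tokenizer and model (bert-base-uncased) and made-up stimuli; the names here (stimuli, lengths, offset) are illustrative, not the repository's actual code:

```python
# Hypothetical sketch: per-stimulus lengths exclude special tokens,
# but the encoded context still carries them, so an explicit offset
# is needed when slicing stimulus-level representations out of it.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

stimuli = ["the cat sat", "on the mat"]        # placeholder stimuli
context = " ".join(stimuli)

enc = tokenizer(context, return_tensors="pt")  # adds [CLS] ... [SEP]
hidden = model(**enc).last_hidden_state[0]     # (seq_len, hidden_dim)

# Stimulus lengths computed WITHOUT special tokens.
lengths = [len(tokenizer(s, add_special_tokens=False)["input_ids"])
           for s in stimuli]

# The context still starts with [CLS], so every slice is shifted by one;
# forgetting this offset reproduces the off-by-one described in the issue.
offset = 1
reps = []
for n in lengths:
    reps.append(hidden[offset:offset + n])     # (n, hidden_dim) per stimulus
    offset += n
```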
Whoops, that was an incorrect reference to this issue; it should have been #18 instead.
Special tokens inserted by the tokenizer cause off-by-one errors when using indices to extract sentence representations from the context.
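A minimal illustration of that off-by-one, again assuming a BERT-style tokenizer (the model name and stimuli are placeholders): counting special tokens in each stimulus's length inflates the cumulative offsets used to index into the context.

```python
# Hypothetical example of the length mismatch, not the project's code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

stimuli = ["the cat sat", "on the mat"]  # placeholder stimuli

# With special tokens, each per-stimulus count is inflated by the
# [CLS]/[SEP] pair, so cumulative indices drift further off per stimulus.
with_special = [len(tokenizer(s)["input_ids"]) for s in stimuli]

# Without them, the counts match each stimulus's actual span in the context.
without_special = [len(tokenizer(s, add_special_tokens=False)["input_ids"])
                   for s in stimuli]

print(with_special, without_special)  # [5, 5] vs [3, 3] for these stimuli
```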