In seq2seq.py the attention weights are computed like this:
Here `embedded` is the decoder's input and `hidden` is the encoder's hidden state, since in the training loop you set `decoder_hidden = encoder_hidden`. The problem is that, according to several sources I found online, the attention weights should be computed from the decoder's hidden state and the encoder's outputs.
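For reference, a minimal framework-agnostic sketch of the scheme those sources describe (dot-product attention over the encoder outputs, scored against the decoder's current hidden state). This is not the code from seq2seq.py; the function name and shapes are illustrative:

```python
import numpy as np

def attention_weights(decoder_hidden, encoder_outputs):
    """Dot-product attention weights.

    decoder_hidden:  (hidden_size,)          current decoder hidden state
    encoder_outputs: (seq_len, hidden_size)  encoder output at every step
    returns:         (seq_len,)              weights summing to 1
    """
    # Score each encoder output against the decoder's hidden state.
    scores = encoder_outputs @ decoder_hidden          # (seq_len,)
    # Numerically stable softmax over the source positions.
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Example: 5 source positions, hidden size 8.
rng = np.random.default_rng(0)
dec_h = rng.standard_normal(8)
enc_out = rng.standard_normal((5, 8))
weights = attention_weights(dec_h, enc_out)
print(weights.shape)   # (5,)
print(weights.sum())   # 1.0
```

The key point is the inputs: the scores are a function of the decoder state and the encoder *outputs*, not of the decoder input embedding.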