I see that you adapted the BPR-MF algorithm to the session-based case by treating each session as a new user during training. At prediction time, the paper says you take the average of the item embeddings of all items visited so far in the test session and use it as the user feature vector. But it also says: "In other words we average the similarities of the feature vectors between a recommendable item and the items of the session so far." So I'm confused: after you compute that average item-embedding vector for the test session, do you use it to find similar items in the set of training items, or do you use it in place of the user embedding of that session (like the P and Q matrices in the matrix factorization setting) and multiply it with the item embedding of each candidate item to get the session's score for each of those items? (A small sketch of the two readings I have in mind follows below.)
In particular, could you please explain what this line does:
https://github.com/hidasib/GRU4Rec/blob/master/baselines.py#L416
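To make the two readings concrete, here is a small numerical sketch (the variable names are my own, and this is not code from baselines.py). If "similarity" means the dot product, the two readings actually coincide because the dot product is linear, which may be the source of my confusion:

```python
# Illustrative only; names are my own and this is not the code from baselines.py.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_factors = 5, 4
item_embs = rng.normal(size=(n_items, n_factors))   # learned item feature vectors (Q)

session_items = [0, 2]            # items visited so far in the test session
candidates = np.arange(n_items)   # items to compute scores for

# Reading 1: average the session's item embeddings into a "user" vector,
# then take its dot product with every candidate item embedding.
user_vec = item_embs[session_items].mean(axis=0)
scores_avg_embedding = item_embs[candidates] @ user_vec

# Reading 2: compute the dot-product similarity between each candidate and
# each session item separately, then average those similarities.
scores_avg_similarity = (item_embs[candidates] @ item_embs[session_items].T).mean(axis=1)

# The dot product is linear, so the two readings produce identical scores.
print(np.allclose(scores_avg_embedding, scores_avg_similarity))  # True
```

If "similarity" here means something other than the inner product (e.g. cosine), the two readings would differ, which is part of what I'd like to confirm.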
I'm also confused about why you need input_item_id in addition to session_id in the get_predictions method. Isn't it enough to have just the session_id (from which we know all of the session's items) together with the list of items to get predictions for (predict_for_item_ids)? A hypothetical sketch of the kind of interface I have in mind is below.
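Here is a hypothetical sketch (the class, state handling, and embedding lookup are my own illustration, not code from this repository) of a predictor that receives one input_item_id per call and keeps its own running session state; under such a design the session_id alone would not be enough unless the predictor also stored every session's full event list:

```python
# Hypothetical illustration only; names and state handling are assumptions,
# not the actual implementation in baselines.py.
from collections import defaultdict
import numpy as np

class IncrementalAvgPredictor:
    def __init__(self, item_embs):
        self.item_embs = item_embs                               # learned item feature vectors
        dim = item_embs.shape[1]
        self.session_sums = defaultdict(lambda: np.zeros(dim))   # running embedding sum per session
        self.session_lens = defaultdict(int)                     # number of events seen per session

    def get_predictions(self, session_id, input_item_id, predict_for_item_ids):
        # Fold only the newest event into the running average for this session,
        # instead of looking up all of the session's items from the event log.
        self.session_sums[session_id] += self.item_embs[input_item_id]
        self.session_lens[session_id] += 1
        user_vec = self.session_sums[session_id] / self.session_lens[session_id]
        # Score the candidate items against the averaged "user" vector.
        return self.item_embs[predict_for_item_ids] @ user_vec
```

Is something like this the reason for passing input_item_id, or am I missing another purpose it serves?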
Thanks