
The accuracy of our reproduction is very different from that reported in the paper? #20

Open
ZHUXUHAN opened this issue Jul 12, 2022 · 1 comment

Comments

@ZHUXUHAN

ZHUXUHAN commented Jul 12, 2022

SGG eval: R @ 20: 0.5099; R @ 50: 0.5933; R @ 100: 0.6170; for mode=predcls, type=Recall(Main).
SGG eval: ngR @ 20: 0.5942; ngR @ 50: 0.7655; ngR @ 100: 0.8537; for mode=predcls, type=No Graph Constraint Recall(Main).
SGG eval: zR @ 20: 0.0302; zR @ 50: 0.0616; zR @ 100: 0.0769; for mode=predcls, type=Zero Shot Recall.
SGG eval: mR @ 20: 0.1591; mR @ 50: 0.1939; mR @ 100: 0.2080; for mode=predcls, type=Mean Recall.

I just followed your scripts/rel_train_BGNN_vg_predcls.sh and trained for 70,000 iterations.
In your paper the mR@50/100 is 30.4 / 32.9, but here it is only 19.39 / 20.80.
I hope you can explain this, or push your checkpoint and training log.
The accuracy reported in your paper is what we need to compare against, but this reproduction result makes that comparison impossible, so we hope to get your help.

@Scarecrow0
Collaborator

Please refer to issue #5. You may be loading the config incorrectly.
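(For anyone hitting the same problem: before launching another 70k-iteration run, it can help to dump the config file the training script is actually pointed at and confirm the settings match the intended BGNN predcls setup. The snippet below is only a generic sketch using PyYAML; the file path is a placeholder, and it does not use this repo's own config-loading code.)

```python
# Generic sanity check: print the YAML config that will be passed to the
# training script, so a stale or wrong --config-file is visible up front.
import sys
import yaml  # PyYAML


def dump_config(path):
    with open(path) as f:
        cfg = yaml.safe_load(f)
    # Print the parsed config back out so typos or unexpected values
    # (e.g. a wrong predictor or dataset split) are easy to spot.
    print(yaml.safe_dump(cfg, sort_keys=False))


if __name__ == "__main__":
    # Usage (placeholder path): python dump_config.py configs/your_predcls_config.yaml
    dump_config(sys.argv[1])
```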
