
Performance of the released code with respect to the results in the ECCV paper #1

Open
yuleichin opened this issue Apr 2, 2022 · 2 comments

Comments


yuleichin commented Apr 2, 2022

Hi, thanks for the great work. Regarding the results produced by this GitHub code versus those reported in the ECCV paper, I have a few questions:

  1. For the Google500 dataset, which pretrained weight (best_val_on_imgnet or best_val_on_web) corresponds to the SCC results reported in this GitHub repository?
  2. What is the difference between Google500 and WebVision500?
  3. Which table in the ECCV paper do the Google500 results in this repository correspond to? (The results in Table 2 do not seem consistent with the results here.)
  4. If we would like to reproduce the results on WebVision 1k (version 1), could you please share the configuration files (.yaml files for the dataset and pipelines)?

Thank you in advance for your help! I would greatly appreciate your answers and suggestions.

@Edgar-1205

Hello, in my reproduction I only reached a little over 50% rank-1 accuracy at 60 epochs. What results are you getting? Is your final reproduced result very different from the paper?

@Edgar-1205


Hello, author. I have the same questions. Would it be convenient for you to answer them?
