About the dataset #10
Comments
Yes, if you want to train on RGB channels and you use DIV2K, then no conversion is needed.
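For reference, here is a minimal sketch that checks this behaviour with the channel_convert quoted in the question below. The import path data.util is an assumption about the repo layout (it assumes util resolves to the repo's data/util.py, the same module LQGT_dataset.py uses), and the random array simply stands in for a DIV2K GT crop:

```python
import numpy as np
import data.util as util  # assumption: run from the directory where this import resolves

# A random HxWx3 float image standing in for a DIV2K GT crop.
img = np.random.rand(32, 32, 3).astype(np.float32)

# 3 channels with target 'RGB' matches no branch, so the input comes back untouched.
out = util.channel_convert(img.shape[2], 'RGB', [img])[0]
print(out is img)  # True

# For comparison, target 'y' does convert: BGR -> single Y channel.
out_y = util.channel_convert(img.shape[2], 'y', [img])[0]
print(out_y.shape)  # expected (32, 32, 1)
```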
Thank you very much for your answer.
When I read LQGT_dataset.py, I found the following:

```python
if self.opt['color']:  # change color space if necessary
    img_GT = util.channel_convert(img_GT.shape[2], self.opt['color'], [img_GT])[0]
```

Here img_GT.shape[2] is 3, and self.opt['color'] is RGB in train_SRResNet.yml.
However, channel_convert in util.py is defined as:

```python
def channel_convert(in_c, tar_type, img_list):
    if in_c == 3 and tar_type == 'gray':  # BGR to gray
        gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
        return [np.expand_dims(img, axis=2) for img in gray_list]
    elif in_c == 3 and tar_type == 'y':  # BGR to y
        y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
        return [np.expand_dims(img, axis=2) for img in y_list]
    elif in_c == 1 and tar_type == 'RGB':  # gray/y to BGR
        return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
    else:
        return img_list
```
There is no branch for in_c == 3 and tar_type == 'RGB', so it just returns img_list unchanged.
Does this mean that when I use DIV2K, I don't need to handle the conversion among BGR, gray, and Y?
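As a side note on the 'y' branch above: bgr2ycbcr(img, only_y=True) is commonly a MATLAB-style BT.601 luma conversion. Below is a minimal sketch of that conversion for a BGR float image in [0, 1]; it is a generic reference, not necessarily this repo's exact implementation:

```python
import numpy as np

def bt601_y_from_bgr(img):
    """Return the BT.601 Y channel (in [0, 1]) of a BGR float image in [0, 1].

    Generic sketch of what a MATLAB-style rgb2ycbcr / bgr2ycbcr(only_y=True)
    typically computes; the repo's util.bgr2ycbcr may differ in details.
    """
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    y = (24.966 * b + 128.553 * g + 65.481 * r + 16.0) / 255.0
    return y.astype(np.float32)

# Usage: expand to HxWx1, mirroring the np.expand_dims call in channel_convert.
img = np.random.rand(32, 32, 3).astype(np.float32)
y = bt601_y_from_bgr(img)[..., None]
print(y.shape)  # (32, 32, 1)
```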