
About the dataset #10

Closed
czyczyczy opened this issue Sep 10, 2019 · 2 comments

@czyczyczy

While reading `LQGT_dataset.py`, I found the following:

```python
if self.opt['color']:  # change color space if necessary
    img_GT = util.channel_convert(img_GT.shape[2], self.opt['color'], [img_GT])[0]
```

Here `img_GT.shape[2]` is 3 and `self.opt['color']` is `RGB` in `train_SRResNet.yml`. But `channel_convert` in `util.py` is:

```python
if in_c == 3 and tar_type == 'gray':  # BGR to gray
    gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
    return [np.expand_dims(img, axis=2) for img in gray_list]
elif in_c == 3 and tar_type == 'y':  # BGR to y
    y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
    return [np.expand_dims(img, axis=2) for img in y_list]
elif in_c == 1 and tar_type == 'RGB':  # gray/y to BGR
    return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
else:
    return img_list
```

There is no branch for `in_c == 3 and tar_type == 'RGB'`, so the function just returns `img_list` unchanged.
Does this mean that when I use DIV2K, I don't need to handle any conversion among BGR, gray, and Y?

@xinntao
Collaborator

xinntao commented Sep 16, 2019

Yes, if you want to train on RGB channels and you use DIV2K, then no conversion is needed.
The conversion is performed when you have RGB images but want to train on grayscale images.
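To make the fall-through concrete, here is a minimal NumPy-only sketch of the dispatch logic in question. `channel_convert_sketch` is a hypothetical stand-in for `util.channel_convert`: the real function uses `cv2.cvtColor` and `bgr2ycbcr` (the Y branch is omitted here), but the control flow is the same, so a 3-channel image with target `'RGB'` is returned untouched.

```python
import numpy as np

def channel_convert_sketch(in_c, tar_type, img_list):
    """Hypothetical stand-in for util.channel_convert's dispatch logic."""
    if in_c == 3 and tar_type == 'gray':
        # BGR -> gray using the standard luma weights
        # (what cv2.COLOR_BGR2GRAY computes)
        gray = [img @ np.array([0.114, 0.587, 0.299]) for img in img_list]
        return [np.expand_dims(g, axis=2) for g in gray]
    elif in_c == 1 and tar_type == 'RGB':
        # gray -> 3 channels by replication
        return [np.repeat(img, 3, axis=2) for img in img_list]
    else:
        # No branch matches in_c == 3 with tar_type 'RGB':
        # the images are returned unchanged
        return img_list

img = np.random.rand(4, 4, 3)
out = channel_convert_sketch(img.shape[2], 'RGB', [img])[0]
assert out is img  # same object back: no conversion happened
```

This matches the answer above: for DIV2K (3-channel images) trained on RGB, the `else` branch is a no-op.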

@czyczyczy
Author

> Yes, if you want to train on RGB channels and you use DIV2K, then no conversion is needed.
> The conversion is performed when you have RGB images but want to train on grayscale images.

Thank you very much for your answer.

@xinntao xinntao closed this as completed Jul 8, 2020
LeoXing1996 pushed a commit that referenced this issue Dec 20, 2022
* Add FID and KID metric

* Fix import error

* Fix bug that doesn't work on multi-GPU

* Fix

* Fix KID metric for logging to mlflow

* Lint

* Add test code for EvalHook and BasicRestorer

Co-authored-by: Hakjin Lee <nijkah@gmail.com>

* Refactor FID and KID

* Prototyping StyleGAN InceptionV3 module 

* Prototyping InceptionV3 module for StyleGAN

* Refactoring

* Add test code

* Edit docstrings

* Fix

Co-authored-by: Hakjin Lee <nijkah@gmail.com>

* Supplement Documentations for FID and KID (#4)

* Prototyping InceptionV3 module for StyleGAN

* Refactoring

* Lint

* Add test code

* Fix

* Update mmedit/core/evaluation/inceptions.py

* Update load logic

* Edit docstrings

* Edit docstrings for KID

* dump

* Add docs

Co-authored-by: Junhwa Song <ethan9867@si-analytics.ai>

* fix lint

* Reflect feedback

* Fix typo

* Fix

* Fix

* Fix

* Fix

* Fix

* Update docs

* Update docs

* Fix

* Update docs/en/tutorials/inception_eval.md

Co-authored-by: Hakjin Lee <nijkah@gmail.com>

* Fix

* Move features-based metrics evaluation to `Dataset.evaluate` (#8)

* delete redundant line

* Fix metrics

* fix docs

* Move build_metric location

* remove redundant line

* Delete redundant test code

* Fix bugs

* Lint

* delete redundant line

* Revert EvalIterHook logic (#10)

Co-authored-by: nijkah <nijkah@gmail.com>
Co-authored-by: KKIEEK <kkieek@KKIEEKui-MacBookPro.local>