
inconsistent result with human perception #1

Open
ceciliavision opened this issue Oct 4, 2018 · 2 comments
@ceciliavision

Hello,

I ran the code to compare the perceptual scores of the following two images. However, I got results that hardly make sense, and I hope to get some insights from you.

Here are the two images:

Image A:
[image]

Image B:
[image]

Image A gets a score of 7.2, while Image B gets a score of 8.9. This is not consistent with how they appear to human perception. I'm wondering if there is a certain bias in the learned metric that makes it fail on the images I presented here?

To be complete, the way I tested these was simply to run:
img = imread(path); score = quality_predict(img);

Is there any additional processing required to run the metric correctly?

Thanks,

@Vandermode

Hi, the issue described above arises because the IQA model in this repo can only assess quality degradations from the distortion types (i.e. super-resolution artifacts) it has been trained on.

Such a model can be characterized as 'distortion-aware' and is therefore necessarily limited.
For more general-purpose, 'distortion-unaware' NR IQA, please refer to NIQE [1] to meet your goal.

Hope this helps.

By the way, I will close my issue in your repo since I have figured out everything mentioned there.
Thanks for your previous insightful discussion.

[1] Mittal, Anish, Rajiv Soundararajan, and Alan C. Bovik. "Making a 'Completely Blind' Image Quality Analyzer." IEEE Signal Processing Letters 20.3 (2013): 209-212.

@ceciliavision (Author)


Thanks for letting me know.

Regarding the perceptual metric: what I showed above is actually a result from a super-resolution model. I agree there would be certain limitations, but I was trying to get an intuitive explanation of that limitation (e.g. whether high-frequency details are generally treated as a sign of a 'good image').
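To make the suspected bias concrete, here is an illustrative sketch (not the repo's metric, and not NIQE): a naive no-reference "sharpness" score based on the variance of a 4-neighbour Laplacian response. Any metric whose features reward high-frequency energy in this way can rate a noisy or artifact-heavy image above a smoother, perceptually cleaner one, which is the kind of failure asked about above.

```python
def laplacian_variance(img):
    """img: 2-D list of grayscale values; returns variance of the Laplacian response."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at (y, x)
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A flat image scores 0; a noise-like checkerboard texture scores very
# high, even though it carries no perceptually meaningful detail.
flat = [[128] * 8 for _ in range(8)]
checker = [[128 + (10 if (x + y) % 2 else -10) for x in range(8)]
           for y in range(8)]
print(laplacian_variance(flat))     # 0.0
print(laplacian_variance(checker))  # 6400.0 -- rewarded despite looking like noise
```

This is only a toy stand-in for a learned metric, but it shows how "more high-frequency content = better" can be baked into features, which is consistent with the distortion-aware limitation described in the reply above.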
