
[QNN] Optimize lowering for requantize and FixedPointMultiply. #4798

Merged
3 commits merged into apache:master on Feb 5, 2020

Conversation

anijain2305 (Contributor)

As title.

Changes are verified through existing tests.

@jackwish @FrozenGene @yzhliu @vinx13

@@ -157,12 +158,15 @@ Expr FixedPointMultiplyPerChannel(Expr tensor, std::vector<double> multipliers,
fixed_pt_multipliers.push_back(fixed_pt_multiplier);
lshifts.push_back(lshift);
rshifts.push_back(rshift);
is_lshift_required |= (lshift != 0);
Contributor

Maybe we should save the |= style operators for arithmetic, and write the boolean out as it originally is.
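For illustration, a minimal sketch of the two styles being contrasted (a hypothetical rewrite, not the committed code):

// Style used in the PR: bitwise or-assignment on a bool.
is_lshift_required |= (lshift != 0);

// Style the comment appears to suggest: write the boolean update explicitly.
is_lshift_required = is_lshift_required || (lshift != 0);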

Comment on lines 108 to 110
if (out_dtype == DataType::Int(32)) {
return Cast(shifted_int64_t, out_dtype);
}
Contributor

Would you please share the insight here? I looked around, but got a bit lost in the arithmetic. :)

anijain2305 (Contributor, Author), Feb 4, 2020

Definitely, happy to explain :)

We approximate the floating point computation here with fixed point computation. This is done by representing the requantize_scale (input_scale/output_scale) as an int32 whose binary point sits between the 1st and 2nd bits, i.e., it encodes a number between 0.5 and 1. We then multiply this fixed point number with the quantized tensor (another int32 tensor). To keep the precision high, the multiplication is performed in int64. The result is still a fixed point number, now in int64, and the integral part of the value it represents stays within int32 range. We then perform a rounding right shift etc. to strip the fractional bits.

So, if the requantize scale is less than 1, we can safely assume that the range will be within int32. (I forgot to add that check, but let me add that as a second commit).
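For concreteness, here is a minimal standalone sketch of that scheme in plain scalar C++ (illustrative only, not the actual TVM lowering; the helper names GetFixedPointMultiplierShift and RequantizeScalar are made up, and the round-half-up style is just one plausible rounding choice):

#include <cassert>
#include <cmath>
#include <cstdint>

// Decompose a scale in (0, 1) into a Q31 fixed point multiplier
// (a value in [0.5, 1) scaled by 2^31) and a power-of-two exponent.
void GetFixedPointMultiplierShift(double scale, int32_t* multiplier, int* shift) {
  assert(scale > 0.0 && scale < 1.0);  // the check added in the second commit
  double significand = std::frexp(scale, shift);  // scale = significand * 2^shift
  int64_t q = std::llround(significand * (1LL << 31));
  if (q == (1LL << 31)) {  // rounding pushed the significand up to exactly 1.0
    q >>= 1;
    ++(*shift);
  }
  *multiplier = static_cast<int32_t>(q);
}

// Requantize one int32 value: widen to int64, multiply by the Q31
// multiplier, then round away the fractional bits with a right shift.
int32_t RequantizeScalar(int32_t x, double scale) {
  int32_t multiplier;
  int shift;
  GetFixedPointMultiplierShift(scale, &multiplier, &shift);
  int64_t prod = static_cast<int64_t>(x) * multiplier;  // Q31 fixed point in int64
  int frac_bits = 31 - shift;  // shift <= 0 for scale < 1, so frac_bits >= 31
  int64_t rounded = (prod + (1LL << (frac_bits - 1))) >> frac_bits;
  // Because scale < 1, the represented value fits in int32, so the cast is safe.
  return static_cast<int32_t>(rounded);
}

For example, RequantizeScalar(1000, 0.1) evaluates to 100; the only precision loss is in quantizing the scale to 31 bits and in the final rounding shift.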

Contributor

I see, thank you for the detailed explanation!

vinx13 (Member) commented Feb 4, 2020

LGTM, possible to have a test?

anijain2305 (Contributor, Author)

Done. Thanks!

anijain2305 (Contributor, Author)

Ping

vinx13 merged commit 23f3988 into apache:master on Feb 5, 2020
vinx13 (Member) commented Feb 5, 2020

Thanks @anijain2305 @jackwish, this is merged.

alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 26, 2020
[QNN] Optimize lowering for requantize and FixedPointMultiply. (apache#4798)

* [QNN] Optimize lowering for requantize and FixedPointMultiply.

* Add check for requantize scale gt 1.

* Added test case.
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 28, 2020
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2020