Why does Python allow 0.1 + 0.2 != 0.3? #152958

This happens because of how floating-point numbers are represented in binary. Computers use the IEEE 754 standard, which stores floating-point numbers in base-2. However, 0.1 and 0.2 cannot be represented exactly in base-2, so each is rounded to the nearest representable value, and those small rounding errors carry through to the result.

Example in Python:
print(0.1 + 0.2 == 0.3) # False
print(0.1 + 0.2) # 0.30000000000000004
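
One way to see that the two results really are different doubles is to inspect their bit patterns with float.hex() (a small illustration; the outputs shown assume the usual IEEE 754 64-bit floats):
print((0.1 + 0.2).hex())  # 0x1.3333333333334p-2
print((0.3).hex())        # 0x1.3333333333333p-2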

0.1 in binary is an infinitely repeating fraction:
0.00011001100110011001100110011... (base 2)
0.2 also repeats infinitely in binary (0.0011001100110011...), so both numbers are rounded when stored as 64-bit floats.
When the computer adds the two rounded values and rounds the sum again, the tiny errors combine to give 0.30000000000000004 instead of exactly 0.3.
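
You can see the exact values that actually get stored by converting the floats themselves to Decimal (a small illustration; the long outputs are the exact 64-bit double values nearest to 0.1 and 0.2):
from decimal import Decimal

# Passing a float (not a string) to Decimal reveals the exact stored binary value
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125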

Use the decimal module or round() when you need exact decimal results:
from decimal import Decimal
print(Decimal('0.1') + Decimal('0.2'))  # 0.3
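
For comparisons, you can also round both sides or use math.isclose() rather than testing floats for exact equality (a minimal sketch of the round() approach mentioned above):
import math

# Round to a fixed number of decimal places before comparing
print(round(0.1 + 0.2, 10) == round(0.3, 10))  # True

# Or compare with a tolerance instead of exact equality
print(math.isclose(0.1 + 0.2, 0.3))  # True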

Answer selected by cherzyy