Why does Python allow 0.1 + 0.2 != 0.3? #152958
In Python, why does 0.1 + 0.2 sometimes give 0.30000000000000004 instead of exactly 0.3? Shouldn't floating-point math be precise?
Replies: 1 comment
This happens because of how floating-point numbers are represented in binary. Computers use the IEEE 754 standard, which stores numbers in base 2. However, 0.1 and 0.2 cannot be represented exactly as finite binary fractions, so each is stored as the nearest representable double, which introduces small rounding errors.
Example in Python:
print(0.1 + 0.2 == 0.3) # False
print(0.1 + 0.2) # 0.30000000000000004
0.1 in binary is an infinite repeating fraction:
0.00011001100110011001100110011... (base 2)
0.2 also has an infinite repeating representation in binary.
When the computer adds the two rounded values, the sum is itself rounded to the nearest double, which turns out to be slightly larger than the double nearest to 0.3; Python displays it as 0.30000000000000004.
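You can see the rounded values that actually get stored by asking Python to print more digits. This is a quick check using only standard string formatting; the digits shown are what 64-bit IEEE 754 doubles give:
print(f"{0.1:.20f}")        # 0.10000000000000000555
print(f"{0.2:.20f}")        # 0.20000000000000001110
print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441
print(f"{0.3:.20f}")        # 0.29999999999999998890
The stored 0.1 and 0.2 are both slightly too large, so their sum lands on a different double than the one 0.3 rounds to.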
Use the decimal module for exact decimal arithmetic, or round() when displaying or comparing results:
from decimal import Decimal
print(Decimal('0.1') + Decimal('0.2'))  # 0.3
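If you only need a tolerant comparison rather than exact decimal arithmetic, rounding both sides or math.isclose() from the standard library also works; a minimal sketch:
import math

print(round(0.1 + 0.2, 10) == round(0.3, 10))  # True: round both sides before comparing
print(math.isclose(0.1 + 0.2, 0.3))            # True: compares within a relative tolerance (default 1e-09)
This is not a Python bug; it's just how floating-point numbers behave in every programming language that uses IEEE 754.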