
Why 0.3 – 0.2 is not equal to 0.1 in Python?

Last Updated : 26 Nov, 2020

In this article, we will see why 0.3 – 0.2 is not equal to 0.1 in Python. The reason is floating-point precision: computers do not compute in decimal (base 10) but in binary (base 2), and most decimal fractions have no exact binary representation.

Below is the implementation.

Python3
# Subtracting 0.2 from 0.3 does not give exactly 0.1
print(0.3 - 0.2)
print(0.3 - 0.2 == 0.1)


Output

0.09999999999999998
False

As you can see in the output, 0.3 – 0.2 gives not 0.1 but 0.09999999999999998. We do calculations in decimal (base 10), while computers do calculations in binary (base 2).
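We can inspect the binary approximations directly by printing the stored values with more digits than Python's default repr shows:

```python
# Print the binary approximations Python actually stores for these literals.
print(f"{0.1:.20f}")  # 0.10000000000000000555
print(f"{0.2:.20f}")  # 0.20000000000000001110
print(f"{0.3:.20f}")  # 0.29999999999999998890
```

None of the three literals is stored exactly, so the tiny errors surface when we subtract and compare.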

Let us consider 1 / 3 in decimal, which we might write as 0.3333333, and 2 / 3, which we might write as 0.6666666; if we add both we only get 0.9999999, which is not equal to 1, because 1 / 3 and 2 / 3 have no finite decimal representation. Similarly, 0.3 and 0.2 cannot be represented exactly in binary, no matter how many significant digits you use. A fraction has a finite decimal expansion only if its denominator (in lowest terms) has no prime factors other than 2 and 5; it has a finite binary expansion only if its denominator is a power of 2. Floating-point numbers are stored internally using the IEEE 754 standard, which for double precision is accurate to only about 15–17 significant decimal digits.
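When exact equality is not required, the usual idiom is to compare floats for closeness rather than equality. The standard library's math.isclose (available since Python 3.5) does this with a configurable tolerance:

```python
import math

# Compare floats by closeness instead of exact equality.
print(0.3 - 0.2 == 0.1)              # False
print(math.isclose(0.3 - 0.2, 0.1))  # True
```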

We can use the built-in decimal module to do exact decimal arithmetic and get accurate results. Constructing the operands from strings, such as Decimal("0.3"), keeps them exact, and getcontext().prec sets the number of significant digits used in calculations. The default precision is 28 digits.

Python3




from decimal import Decimal, getcontext

# Limit results to 6 significant digits (the default is 28).
getcontext().prec = 6
print(Decimal("0.3") - Decimal("0.2"))
print(Decimal("0.3") - Decimal("0.2") == Decimal("0.1"))


Output

0.1
True
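One caveat: the Decimal operands above are built from strings. Constructing a Decimal from a float instead copies the binary approximation, digits and all, so the rounding error would survive:

```python
from decimal import Decimal

# Decimal from a float inherits the binary rounding error...
print(Decimal(0.3))    # a long expansion beginning 0.2999999999999999888...
# ...while Decimal from a string is exact.
print(Decimal("0.3"))  # 0.3
```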
