Python precision issues with float formatting

Riptyde4

Please look at the Python code below, which I've entered into a Python 3.6 interpreter:

>>> 0.00225 * 100.0
0.22499999999999998
>>> '{:.2f}'.format(0.00225 * 100.0)
'0.22'
>>> '{:.2f}'.format(0.225)
'0.23'
>>> '{:.2f}'.format(round(0.00225 * 100.0, 10))
'0.23'

Hopefully you can immediately see why I'm frustrated. I am attempting to display value * 100.0 in my GUI, storing the full precision behind a cell but displaying only 2 decimal places (or whatever the user's precision setting is). The GUI is similar to an Excel spreadsheet.

I'd prefer not to lose the precision of something like 0.22222444937645 by rounding it to 10 decimal places, but I also don't want a value such as 0.00225 * 100.0 displaying as 0.22.

I'm interested in hearing about a standard way of approaching a situation like this or a remedy for my specific situation. Thanks ahead of time for any help.



answered 1 week ago ndmeiri #1

Consider using the decimal module, which "provides support for fast correctly-rounded decimal floating point arithmetic." The primary advantages of Decimal relevant to your use case are:

  • Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.

  • The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants. (A short interpreter session demonstrating both points follows this list.)
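
Both points are easy to verify at the prompt; the session below uses only the standard decimal module:

>>> from decimal import Decimal
>>> 1.1 + 2.2
3.3000000000000003
>>> Decimal('1.1') + Decimal('2.2')
Decimal('3.3')
>>> 0.1 + 0.1 + 0.1 - 0.3
5.551115123125783e-17
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3')
Decimal('0.0')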

Based on the information you've provided in the question, I cannot say how much of an overhaul migrating to Decimal would require. However, if you're creating a spreadsheet-like application and always want to preserve maximal precision, then you will probably want to refactor to use Decimal sooner or later to avoid unexpected numbers in your user-facing GUI.
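
For the display side specifically, here is a minimal sketch of that approach: keep the full-precision Decimal in the cell's model and round only when rendering. The display helper and the user_precision value are illustrative names, not anything from your code:

from decimal import Decimal, ROUND_HALF_UP

def display(value, places):
    # Round only for presentation; the stored Decimal keeps full precision.
    quantum = Decimal(1).scaleb(-places)   # places=2 -> Decimal('0.01')
    return str(value.quantize(quantum, rounding=ROUND_HALF_UP))

user_precision = 2
cell = Decimal('0.00225') * 100            # stored exactly as Decimal('0.22500')
print(display(cell, user_precision))                         # 0.23
print(display(Decimal('0.22222444937645'), user_precision))  # 0.22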

To get the behavior you desire, you may need to change the rounding mode (which defaults to ROUND_HALF_EVEN) for Decimal instances.

from decimal import Decimal, getcontext, ROUND_HALF_UP

getcontext().rounding = ROUND_HALF_UP

# round() on a Decimal quantizes using the current context's rounding mode.
n = round(Decimal('0.00225') * Decimal('100'), 2)
print(n)  # prints 0.23

m = round(Decimal('0.00225') * 100, 2)
print(m)  # prints 0.23

answered 1 week ago Arkadiusz Tymieniecki #2

Perhaps use the decimal module?

from decimal import *

getcontext().prec = 2                  # arithmetic results are kept to 2 significant digits
n = Decimal.from_float(0.00225)        # the exact value of the binary float 0.00225
m = n * 100
print(n, m)
print(m.quantize(Decimal('.01'), rounding=ROUND_DOWN))
print(m.quantize(Decimal('.01'), rounding=ROUND_UP))
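
A caveat on this approach (my note, not part of the original answer): getcontext().prec controls the number of significant digits kept by arithmetic, not the number of decimal places, and Decimal.from_float carries over the binary float's tiny representation error, whereas Decimal('0.00225') is exact. A variant that keeps the default precision and rounds only at display time might look like this:

from decimal import Decimal, ROUND_HALF_UP

m = Decimal('0.00225') * 100                                # exact: Decimal('0.22500')
print(m.quantize(Decimal('.01'), rounding=ROUND_HALF_UP))   # 0.23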
