.. testsetup::

   import math

.. _tut-fp-issues:

**************************************************
Floating-Point Arithmetic: Issues and Limitations
**************************************************

.. sectionauthor:: Tim Peters <tim_one@users.sourceforge.net>


Floating-point numbers are represented in computer hardware as base 2 (binary)
fractions. For example, the decimal fraction ::

   0.125

has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction ::

   0.001

has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only
real difference being that the first is written in base 10 fractional notation,
and the second in base 2.

Unfortunately, most decimal fractions cannot be represented exactly as binary
fractions. A consequence is that, in general, the decimal floating-point
numbers you enter are only approximated by the binary floating-point numbers
actually stored in the machine.

The problem is easier to understand at first in base 10. Consider the fraction
1/3. You can approximate that as a base 10 fraction::

   0.3

or, better, ::

   0.33

or, better, ::

   0.333

and so on. No matter how many digits you're willing to write down, the result
will never be exactly 1/3, but will be an increasingly better approximation of
1/3.

In the same way, no matter how many base 2 digits you're willing to use, the
decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base
2, 1/10 is the infinitely repeating fraction ::

   0.0001100110011001100110011001100110011001100110011...

Stop at any finite number of bits, and you get an approximation. On most
machines today, floats are approximated using a binary fraction with the
numerator using the first 53 bits starting with the most significant bit and
with the denominator as a power of two. In the case of 1/10, the binary
fraction is ``3602879701896397 / 2 ** 55``, which is close to but not exactly
equal to the true value of 1/10.

Many users are not aware of the approximation because of the way values are
displayed. Python only prints a decimal approximation to the true decimal
value of the binary approximation stored by the machine. On most machines, if
Python were to print the true decimal value of the binary approximation stored
for 0.1, it would have to display ::

   >>> 0.1
   0.1000000000000000055511151231257827021181583404541015625

That is more digits than most people find useful, so Python keeps the number
of digits manageable by displaying a rounded value instead ::

   >>> 1 / 10
   0.1

Just remember, even though the printed result looks like the exact value
of 1/10, the actual stored value is the nearest representable binary fraction.

Interestingly, there are many different decimal numbers that share the same
nearest approximate binary fraction. For example, the numbers ``0.1`` and
``0.10000000000000001`` and
``0.1000000000000000055511151231257827021181583404541015625`` are all
approximated by ``3602879701896397 / 2 ** 55``. Since all of these decimal
values share the same approximation, any one of them could be displayed
while still preserving the invariant ``eval(repr(x)) == x``.

Historically, the Python prompt and built-in :func:`repr` function would choose
the one with 17 significant digits, ``0.10000000000000001``. Starting with
Python 3.1, Python (on most systems) is now able to choose the shortest of
these and simply display ``0.1``.
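You can verify at the prompt that these decimal literals really are the same
float, and that the shortest form round-trips; the exact digits shown assume
the usual IEEE-754 "double precision" format discussed below::

   >>> 0.1 == 0.1000000000000000055511151231257827021181583404541015625
   True
   >>> repr(0.1)
   '0.1'
   >>> eval(repr(0.1)) == 0.1
   True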
Note that this is in the very nature of binary floating-point: this is not a bug
in Python, and it is not a bug in your code either. You'll see the same kind of
thing in all languages that support your hardware's floating-point arithmetic
(although some languages may not *display* the difference by default, or in all
output modes).

For more pleasant output, you may wish to use string formatting to produce a
limited number of significant digits::

   >>> format(math.pi, '.12g')  # give 12 significant digits
   '3.14159265359'

   >>> format(math.pi, '.2f')   # give 2 digits after the point
   '3.14'

   >>> repr(math.pi)
   '3.141592653589793'


It's important to realize that this is, in a real sense, an illusion: you're
simply rounding the *display* of the true machine value.

One illusion may beget another. For example, since 0.1 is not exactly 1/10,
summing three values of 0.1 may not yield exactly 0.3, either::

   >>> .1 + .1 + .1 == .3
   False

Also, since 0.1 cannot get any closer to the exact value of 1/10 and 0.3
cannot get any closer to the exact value of 3/10, pre-rounding with the
:func:`round` function cannot help::

   >>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1)
   False

Though the numbers cannot be made closer to their intended exact values,
the :func:`round` function can be useful for post-rounding so that results
with inexact values become comparable to one another::

   >>> round(.1 + .1 + .1, 10) == round(.3, 10)
   True

Binary floating-point arithmetic holds many surprises like this. The problem
with "0.1" is explained in precise detail below, in the "Representation Error"
section. See `The Perils of Floating Point <http://www.lahey.com/float.htm>`_
for a more complete account of other common surprises.

As that says near the end, "there are no easy answers." Still, don't be unduly
wary of floating-point! The errors in Python float operations are inherited
from the floating-point hardware, and on most machines are on the order of no
more than 1 part in 2\*\*53 per operation. That's more than adequate for most
tasks, but you do need to keep in mind that it's not decimal arithmetic and
that every float operation can suffer a new rounding error.

While pathological cases do exist, for most casual use of floating-point
arithmetic you'll see the result you expect in the end if you simply round the
display of your final results to the number of decimal digits you expect.
:func:`str` usually suffices, and for finer control see the :meth:`str.format`
method's format specifiers in :ref:`formatstrings`.

For use cases which require exact decimal representation, try using the
:mod:`decimal` module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.

Another form of exact arithmetic is supported by the :mod:`fractions` module
which implements arithmetic based on rational numbers (so numbers like 1/3
can be represented exactly).

If you are a heavy user of floating-point operations you should take a look
at the NumPy package and many other packages for mathematical and statistical
operations supplied by the SciPy project. See <https://scipy.org>.
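As a brief illustration of those exact alternatives, both modules make the
``.1 + .1 + .1 == .3`` comparison from above behave as expected, provided you
construct the values from exact inputs (decimal strings or integer ratios)
rather than from binary floats::

   >>> from decimal import Decimal
   >>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
   True
   >>> from fractions import Fraction
   >>> Fraction(1, 10) + Fraction(1, 10) + Fraction(1, 10) == Fraction(3, 10)
   True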
Python provides tools that may help on those rare occasions when you really
*do* want to know the exact value of a float. The
:meth:`float.as_integer_ratio` method expresses the value of a float as a
fraction::

   >>> x = 3.14159
   >>> x.as_integer_ratio()
   (3537115888337719, 1125899906842624)

Since the ratio is exact, it can be used to losslessly recreate the
original value::

   >>> x == 3537115888337719 / 1125899906842624
   True

The :meth:`float.hex` method expresses a float in hexadecimal (base
16), again giving the exact value stored by your computer::

   >>> x.hex()
   '0x1.921f9f01b866ep+1'

This precise hexadecimal representation can be used to reconstruct
the float value exactly::

   >>> x == float.fromhex('0x1.921f9f01b866ep+1')
   True

Since the representation is exact, it is useful for reliably porting values
across different versions of Python (platform independence) and exchanging
data with other languages that support the same format (such as Java and C99).

Another helpful tool is the :func:`math.fsum` function which helps mitigate
loss-of-precision during summation. It tracks "lost digits" as values are
added onto a running total. That can make a difference in overall accuracy
so that the errors do not accumulate to the point where they affect the
final total::

   >>> sum([0.1] * 10) == 1.0
   False
   >>> math.fsum([0.1] * 10) == 1.0
   True

.. _tut-fp-error:

Representation Error
====================

This section explains the "0.1" example in detail, and shows how you can
perform an exact analysis of cases like this yourself. Basic familiarity with
binary floating-point representation is assumed.

:dfn:`Representation error` refers to the fact that some (most, actually)
decimal fractions cannot be represented exactly as binary (base 2) fractions.
This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many
others) often won't display the exact decimal number you expect.

Why is that? 1/10 is not exactly representable as a binary fraction. Almost
all machines today use IEEE-754 floating-point arithmetic, and almost all
platforms map Python floats to IEEE-754 "double precision". 754 doubles
contain 53 bits of precision, so on input the computer strives to convert 0.1
to the closest fraction it can of the form *J*/2**\ *N* where *J* is an
integer containing exactly 53 bits. Rewriting ::

   1 / 10 ~= J / (2**N)

as ::

   J ~= 2**N / 10

and recalling that *J* has exactly 53 bits (is ``>= 2**52`` but ``< 2**53``),
the best value for *N* is 56::

   >>> 2**52 <= 2**56 // 10 < 2**53
   True

That is, 56 is the only value for *N* that leaves *J* with exactly 53 bits.
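To see that 56 is indeed the only such value, you can check that the
neighboring exponents fail on either side::

   >>> 2**55 // 10 < 2**52   # N = 55 leaves J with only 52 bits
   True
   >>> 2**57 // 10 >= 2**53  # N = 57 leaves J with 54 bits
   True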
The best possible value for *J* is then that quotient rounded::

   >>> q, r = divmod(2**56, 10)
   >>> r
   6

Since the remainder is more than half of 10, the best approximation is obtained
by rounding up::

   >>> q+1
   7205759403792794

Therefore the best possible approximation to 1/10 in 754 double precision is::

   7205759403792794 / 2 ** 56

Dividing both the numerator and denominator by two reduces the fraction to::

   3602879701896397 / 2 ** 55

Note that since we rounded up, this is actually a little bit larger than 1/10;
if we had not rounded up, the quotient would have been a little bit smaller
than 1/10. But in no case can it be *exactly* 1/10!

So the computer never "sees" 1/10: what it sees is the exact fraction given
above, the best 754 double approximation it can get::

   >>> 0.1 * 2 ** 55
   3602879701896397.0

If we multiply that fraction by 10\*\*55, we can see the value out to
55 decimal digits::

   >>> 3602879701896397 * 10 ** 55 // 2 ** 55
   1000000000000000055511151231257827021181583404541015625

meaning that the exact number stored in the computer is equal to
the decimal value 0.1000000000000000055511151231257827021181583404541015625.
Instead of displaying the full decimal value, many languages (including
older versions of Python) round the result to 17 significant digits::

   >>> format(0.1, '.17f')
   '0.10000000000000001'

The :mod:`fractions` and :mod:`decimal` modules make these calculations
easy::

   >>> from decimal import Decimal
   >>> from fractions import Fraction

   >>> Fraction.from_float(0.1)
   Fraction(3602879701896397, 36028797018963968)

   >>> (0.1).as_integer_ratio()
   (3602879701896397, 36028797018963968)

   >>> Decimal.from_float(0.1)
   Decimal('0.1000000000000000055511151231257827021181583404541015625')

   >>> format(Decimal.from_float(0.1), '.17')
   '0.10000000000000001'
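As a closing check, the :mod:`fractions` module can also compute the size of
the representation error itself exactly, as the difference between the stored
fraction derived above and true 1/10::

   >>> from fractions import Fraction
   >>> Fraction.from_float(0.1) == Fraction(3602879701896397, 2 ** 55)
   True
   >>> Fraction.from_float(0.1) - Fraction(1, 10)
   Fraction(1, 180143985094819840)

Since this error is positive, it confirms that the stored value is a little
larger than 1/10, exactly as the rounding-up step above predicted.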