Calculation Bug Fixed

Dave Gainer writes:

Two weeks ago, we posted about an issue involving the calculation of numbers around 65,535 and 65,536. As of today, fixes for this issue in Excel 2007 and Excel Services 2007 are available for download from the following locations:


40 thoughts on “Calculation Bug Fixed”

  1. Hugh – click on “Dave Gainer writes” and it will take you to the links. The fix hasn’t made its way into auto updates yet, so if you are using XL2007 for purposes where an error of (100,000 – 65,535) would be embarrassing, I recommend the download.

    While you’re there, if you read through the comments you’ll find this link:

    http://www.lomont.org/Math/Papers/2007/Excel2007/Excel2007Bug.pdf

    which is a detailed description of exactly where the bug came from and how it was fixed.

    It is an excellent article, but it repeats (several times) that the bug is only in the rendering, and only affects 12 numbers out of the more than 9*10^18 possible floating point numbers that can be generated in Excel. This is true, but in my opinion misleading. Because the critical numbers can be generated by multiplying an integer less than 1000 by a number under 100 with a single decimal place, and because commercial computing is full of just such multiplications, the chances of hitting the bug by accident are enormously higher than 12/(9*10^18). About a trillion times higher by my estimation.

    But if you install the fix, that’s academic :)
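
    If you want to see it on an unpatched copy, here is a minimal VBA check (the =850*77.1 product is the best-known of the twelve trigger values):

    Sub RenderingBugCheck()
        ' Mathematically 850 * 77.1 is 65,535, but the binary product lands on one of the
        ' twelve bit patterns the unpatched 2007 renderer displayed as 100,000.
        ' The stored value and any further calculations were always fine.
        Range("A1").Formula = "=850*77.1"
        Debug.Print Range("A1").Value   ' prints 65535 either way; only the cell display was wrong
    End Sub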

  2. Rembo – old ‘news’. If you don’t need variable precision for, among other things, compound interest calculations, then Excel ALREADY provides precision as displayed. The article you cite just shows that some people or web sites that claim deep knowledge of Excel or Office actually possess or reflect rather shallow knowledge.

  3. The Excel team even went to the trouble of writing a pretty clear, simple-to-follow blog post on this very subject here:
    http://blogs.msdn.com/excel/archive/2008/04/10/understanding-floating-point-precision-aka-why-does-excel-give-me-seemingly-wrong-answers.aspx

    Bottom line – digital computers have limits on their accuracy; lucky, then, that the IEEE specified what they should be, and handy too that Excel follows those specs.

    I’m not sure of the purpose of that series of articles on Office Watch; I agree with fzz on the actual effect.

  4. Office Watch strikes me as one of those groups that finds anything they perceive as bad and picks at it until it bleeds. Besides Rembo, who didn’t know about this? Who hasn’t known since long ago that decimal to binary to decimal introduces errors on the order of 10^-16? Of course, the way Excel messes this up in its 2007 regression calculations is inexcusable, but for addition and subtraction it’s a nonstarter.

  5. I agree that this isn’t a bug, but it’s interesting seeing what you get when you enter that sum in different spreadsheets:

    Excel 8.64E-12
    Lotus 123 8.64019966684282E-12
    Open Office 0.00E+000
    Gnumeric 7.73070496506989E-12

    Excel and 123 return exactly the same number (formatted to 2 decimal places in Excel and 14 in 123).
    Gnumeric gives a slightly different number. Can anyone explain that?
    Open Office Calc returns exactly zero. Presumably they are doing some extra processing to do that. Is that a good thing or a bad thing?

  6. “Of course, the way Excel messes this up in its 2007 regression calculations is inexcusable, but for addition and subtraction it’s a nonstarter.”

    I hadn’t heard about that, Jon.

    Do you have more details or a link?

  7. Quattro Pro gives the same answer as Excel & Lotus 123. Quattro Pro gives an exact zero answer if the items are first sorted just like in Excel. Why should sorting the items make a difference?

  8. Doug –

    I checked my files and learned it was over a year ago (11 May 07, in fact) when I documented this in Excel 2007. The problem appears to be corrected in SP1. Sorry to raise unnecessary alarms for those who have updated.

    Here is the problem, first presented by a reader on the newsgroups. Using simple data sets (the same X values, Y values differing only by power of ten):

    X A B C D E F G H
    0.1111 2.872E-19 2.872E-18 2.872E-17 2.872E-16 2.872E-15 2.872E-14 2.872E-13 2.872E-12
    0.0625 4.043E-19 4.043E-18 4.043E-17 4.043E-16 4.043E-15 4.043E-14 4.043E-13 4.043E-12
    0.0400 4.576E-19 4.576E-18 4.576E-17 4.576E-16 4.576E-15 4.576E-14 4.576E-13 4.576E-12
    0.0277 4.814E-19 4.814E-18 4.814E-17 4.814E-16 4.814E-15 4.814E-14 4.814E-13 4.814E-12

    Let’s see if I can use HTML image tags…

    In this image (http://peltiertech.com/Sample/regress2007sp0.png) I have plotted this data as separate lines on a semilog plot, along with the calculated regression formulas. The regression formulas are reproduced here:

    y = 5.49429E-19 [R² = 9.99213E-01]
    y = 5.49429E-18 [R² = 9.99213E-01]
    y = 5.49429E-17 [R² = 9.99213E-01]
    y = 5.49429E-16 [R² = 9.99213E-01]
    y = -2.35066E-14x [R² = 9.99213E-01]
    y = -2.35066E-13x + 5.49429E-14 [R² = 9.99213E-01]
    y = -2.35066E-12x + 5.49429E-13 [R² = 9.99213E-01]
    y = -2.35066E-11x + 5.49429E-12 [R² = 9.99213E-01]

    Given that R² is the same for all of the regressions, it seems the missing slopes for the 10E-19 to 10E-16 data, and the missing intercept for the 10E-15 data, were removed in a faulty attempt to hide binary to decimal conversion errors.
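
    A quick way to cross-check the trendline labels, if anyone wants to try it, is to run the worksheet SLOPE and INTERCEPT functions on the same data from VBA (they go through the statistics code rather than the charting code); a sketch using the X values and the first Y column above:

    Sub CrossCheckTrendline()
        ' Least-squares fit of the first Y column against the X values.
        ' A copy of the Y values scaled by a constant should simply scale
        ' the slope and intercept by the same constant.
        Dim x As Variant, y As Variant
        x = Array(0.1111, 0.0625, 0.04, 0.0277)
        y = Array(2.872E-19, 4.043E-19, 4.576E-19, 4.814E-19)
        Debug.Print "slope:"; Application.WorksheetFunction.Slope(y, x)
        Debug.Print "intercept:"; Application.WorksheetFunction.Intercept(y, x)
    End Sub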

  9. Arnold – sorting matters because, unlike actual paper-and-pencil arithmetic, floating point arithmetic isn’t associative. The order of operands matters because each possible pair of adds/subtracts has a different truncation error. Some orderings lead to perfectly offsetting truncation errors, so zero final truncation error, but most orderings don’t, so nonzero final truncation error. Exactly representable decimal mixed or fractional results from underlying binary arithmetic are pure happenstance.
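
    A two-line illustration in VBA (same IEEE doubles the grid uses), grouping the textbook trio 0.1, 0.2, 0.3 two ways:

    Sub NotAssociativeDemo()
        Dim a As Double, b As Double
        a = 0.1 + 0.2: a = a + 0.3   ' rounding errors accumulate; stored result is a shade above 0.6
        b = 0.2 + 0.3: b = b + 0.1   ' the errors in 0.2 and 0.3 offset; stored result is the double nearest 0.6
        Debug.Print a = b            ' prints False: same terms, different grouping, different bits
    End Sub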

  10. @fzz and Jon:
    At the risk of being called a nitpicker, I have to say I disagree with you. From a mathematical point of view there is a huge difference between the number 1794.71 and 1794.71000000000000000000000000: the number of ‘significant digits’ of the latter is obviously bigger. Simply put, the first number could (from a mathematical point of view) be anywhere between 1794.705 and 1794.7149999… etc., while the second number could be anywhere between 1794.709999999999999999999999995 and 1794.710000000000000000000000005.
    For those interested see http://en.wikipedia.org/wiki/Significant_figures

    The example on the WoW site uses 26 decimals and the sum goes wrong at the 12th decimal. That is a HUGE mistake and therefore, in my opinion, a bug. If Excel is unaware of significant figures, they have some catching up to do at Microsoft, all the more because Excel is in fact used in scientific environments.
    Looking at the table Doug posted, I’d say Open Office scores a point.

    Remco

  11. Rembo – I’m a mathematician. Well, I have a graduate degree in the subject.

    Depends on definitions. There’s either no difference between 1794.71 and 1794.71000000000000000000000000, or they represent different equivalence classes for operations that are similar to but not the same as arithmetic on real numbers. That is, these two numbers represent the same real number, but they could represent different neighborhoods around that real number. Too bad there’s no GENERALLY ACCEPTED convention for representing significant digits.

    Anyway, this doesn’t affect anything I’ve mentioned previously.

    1. Excel doesn’t provide greater precision simply by appending more zeros to the right end of the decimal fractional part of numbers. 1.23 and 1.230000 represent the SAME value in Excel using default settings, just as they represent the same value in every other piece of mostly or completely IEEE-754 compliant software. Maybe there are some specialized scientific or statistical packages that would distinguish between them, but Excel is NOT such a package. That said, Excel does provide [decimal] precision as displayed as an option, which eliminates truncation errors due to decimal-to-binary-to-decimal conversions.

    2. Finite precision values, be they floating point or fixed point, are equivalence classes that effectively span the range between what an extra digit (binary for floating point, decimal for fixed point) of precision would have provided. Order of operation becomes important when adding 3 or more values together because some pairs of numbers may have offsetting truncation errors while others could have exacerbating truncation errors. Example: =1.23-1000.1+998.87 returns 0 but =-1000.1+998.87+1.23 returns -1.82077E-14. Why the difference? It’s due to the handling of the final binary bit of precision. The result of the expression -1000.1+998.87 is in a different equivalence class than 1.23.

    This is just how floating point works. It has NOTHING to do with significant digits. You’d find Excel easier to work with if you avoid conflating the two.
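
    Anyone who wants to check those two without touching a worksheet can hand the formula text to the calculation engine from the Immediate window; a minimal sketch:

    Sub OrderOfOperationCheck()
        ' Application.Evaluate passes the formula string to the worksheet engine,
        ' so it should reproduce the two results quoted above.
        Debug.Print Application.Evaluate("=1.23-1000.1+998.87")    ' 0
        Debug.Print Application.Evaluate("=-1000.1+998.87+1.23")   ' about -1.82E-14
    End Sub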

  12. “Excel does provide [decimal] precision as displayed as an option”

    And what a sorry option it is, because it messes up the whole sheet. It’s better to use ROUND() if you fear a problem.
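
    For example, assuming the eight Office Watch values discussed below are sitting in A1:A8, rounding at the formula level leaves the rest of the sheet at full precision:

    Range("B1").Formula = "=ROUND(SUM(A1:A8),2)"   ' displays 0 rather than the 8.64E-12 residual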

  13. fzz and Jon: First of all I appreciate the feedback. Doesn’t mean I agree with you though ;-)

    I’m not a mathematician per se. I am a mechanical engineer, though, and I’m crunching numbers all the time. I realize all too well that Excel is not the same as MATLAB, and you are right that in desktop-spreadsheet-land 1.23 is the same as 1.23000000. In fact, I’d be very disappointed if Excel returned 2.3598 as an answer if I added up 1.23 + 1.12. Theoretically that answer is not wrong, because 1.23 could be 1.2349 and 1.12 could be 1.1249; the answer is within the error margin.

    However, I fully expect to get the answer 2.35(00000000000000) because like you said 1.23 is the same as 1.2300000000000000.
    So knowing that, if you add up the numbers as displayed in the WoW example, I expect to get an answer of exactly 0. That is a zero, a dot and then another 26 zeros. Excel flunked at the 12th digit, so one wonders what the point is of displaying 26 digits if the last 14 digits are meaningless.
    If Excel is only precise up to 11 digits, I think it should only display 11 digits. The result given in the WoW example is therefore IMHO a bug and it should be fixed. No matter how well the cause can be explained, the result is unexpected to say the least.

  14. “If Excel is only precise up to 11 digits I think it should only display 11 digits.”

    Q: Is there a way to determine how precise a number is after you’ve converted to binary and back? For instance, if you subtract 57 from 57.8, is there an algorithm to determine how precise the answer is?
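
    One rough rule of thumb (an assumption on my part, not an exact algorithm): each stored double is good to about half a unit in its last binary place, so the absolute error of a sum or difference is bounded by roughly 2^-52 times the largest operand:

    Sub PrecisionBoundSketch()
        ' 57.8 cannot be stored exactly in binary; the subtraction itself is exact,
        ' so the error in the answer is just the representation error of 57.8.
        Dim bound As Double
        bound = 2 ^ -52 * 57.8      ' crude upper bound on the absolute error, about 1.3E-14
        Debug.Print 57.8 - 57       ' prints 0.799999999999997
        Debug.Print bound
    End Sub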

  15. Rembo – you just don’t understand software.

    First, software like Excel does its arithmetic in binary, not in decimal. However, you and every other Excel user enter values in decimal. That means there’s an unavoidable decimal-to-binary conversion from entry (using decimal numerals) to RAM storage (they’re stored in binary). When you perform arithmetic operations on values, Excel uses the binary values in RAM and returns binary values. Those returned binary values need to be displayed in decimal, so there’s an unavoidable binary-to-decimal conversion. These conversions lead to some of the problems. Fractions representable by a finite number of decimal digits may not have finite binary representations. Indeed, fractions would need to be exact sums of negative powers of 2 (so 1/2, 1/4, 1/8, 1/16, etc.) to have finite binary representations. E.g., 1.23 doesn’t have a finite BINARY representation, but Excel must store it using just 52 binary mantissa bits.

    Second, even if you had a decimal-based computer, how would you store 1/3 if you only had finite digits to store any number?

    Third, AND MOST RELEVANT, significant digits are a statistical concept that’s found its way into the sciences and engineering. It’s NOT a computer science/programming concept, at least not generally. None of the major programming languages have any concept of significant digits. In BASIC, C, FORTRAN, APL, Pascal, Python, Perl, Java, Ruby (yada, yada, yada), 1.23 and 1.230000 are ALWAYS stored as the same binary value. Excel is like one of these programming languages in this regard.

    So what’s this WoW site example you mention but for which you don’t provide a URL? To me, with a teenage son, WoW means World of Warcraft. To Google too. Unlike you, if I saw that the numbers in this example had many different BINARY orders of magnitude in their whole number parts, I’d expect truncation error.

    It seems you have limited experience using calculation software. Truncation error has been with us since the beginning of binary computing. Knowing how to handle it has been a necessary part of applied numerical programming since FORTRAN days, so for about 5 decades. Where’ve you been?
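
    The negative-powers-of-2 point is easy to check in the Immediate window:

    Sub BinaryFractionCheck()
        ' Sums of negative powers of 2 are exact in binary; ordinary decimal fractions are not.
        Debug.Print (0.5 + 0.25 + 0.125) = 0.875   ' True: every term is exactly representable
        Debug.Print (0.1 + 0.2) = 0.3              ' False: none of these three is
    End Sub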

  16. Sorry, Rembo, found the link. But ‘WoW’ == Office Watch?

    Anyway, the original example was to sum

    -127551.73
    103130.41
    1807.75
    7390.11
    9028.59
    2831.26
    1568.9
    1794.71

    which, if this list were named L, means the formula =SUM(L) would return 8.64019966684282E-12.

    Order matters. Rearrange these values as

    9028.59
    1794.71
    2831.26
    1568.9
    103130.41
    1807.75
    -127551.73
    7390.11

    Then =SUM(L) returns 0.

    To repeat, this is old news. This is CLASSIC floating point truncation error. It’s not going away, and Microsoft is unlikely to ‘fix’ it (they are aware of it), so the only rational course is to stop whining about it and learn some of the decades-old techniques for dealing with it.
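
    For anyone who wants to reproduce it, a sketch that writes the two orderings into a throwaway sheet and lets the worksheet SUM do the work:

    Sub SumOrderDemo()
        Dim v1 As Variant, v2 As Variant
        v1 = Array(-127551.73, 103130.41, 1807.75, 7390.11, 9028.59, 2831.26, 1568.9, 1794.71)
        v2 = Array(9028.59, 1794.71, 2831.26, 1568.9, 103130.41, 1807.75, -127551.73, 7390.11)
        Range("A1:A8").Value = Application.Transpose(v1)
        Range("B1:B8").Value = Application.Transpose(v2)
        Range("A10").Formula = "=SUM(A1:A8)"     ' 8.64019966684282E-12 per the figures above
        Range("B10").Formula = "=SUM(B1:B8)"     ' 0 per the figures above
        Debug.Print Range("A10").Value, Range("B10").Value
    End Sub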

  17. Strangely, changing the number format to Currency (and therefore changing the underlying data type to the 64-bit scaled-integer Currency type) does not change the result. I had thought that it would. (At least for addition and subtraction.) Using repeated addition (‘+’) operations instead of the SUM() worksheet function did not help.

    Does anyone know why not? I had thought that using a Currency data type changed what the underlying type was (that is, how it was stored) and was not simply a formatting issue.

    Was I wrong about this?

  18. What makes you think changing the number format to currency changes the underlying values?

    IOW, yes, you were wrong in believing that changing formats changes values.

  19. Hi fzz,

    Well, believe it or not, a currency number formatting (using “$#,##0.00” on a U.S. system, for example) does change the underlying data type to the Currency data type, or at least it would seem:

    Sub MySub1()
        [a1].Value = 1.23456789
        [b1].Value = 1.23456789

        [a1].NumberFormat = "General"
        [b1].NumberFormat = "$#,##0.00"

        MsgBox TypeName([a1].Value) ' Returns "Double"
        MsgBox TypeName([b1].Value) ' Returns "Currency"

        MsgBox [a1].Value ' Returns "1.23456789"
        MsgBox [b1].Value ' Returns "1.2346"
    End Sub

    The above returns “Currency” for cell B1 and also rounds the [b1].Value result to 4 decimal places.

    But something I didn’t realize until testing just now is that the Currency data type behavior seems to be occurring only within VBA calls, not within the cell itself. Inspecting cell B1 on the worksheet shows the full precision within the cell, if you expand the number of decimals enough to see it.

    So I guess that the cell holds a double, no matter what, even if VBA calls are affected by the number formatting and receive a Currency result. It’s as if CCur() is being called when you call Range.Value on a range that has currency number formatting applied.

    I’m not a C++ / XLL guy, so I had to look this up: and indeed, the XLOPER data type doesn’t have any capability to hold a Currency type. So I guess the case is closed: any implicit conversion to Currency via the number formatting would be reflected in calls to VBA only, via a conversion when Range.Value is called, but the underlying values in the worksheet cells themselves would still be Double data types.

    So I guess you were right that the number formatting of the worksheet cells cannot affect the rounding precision issues that we are talking about here. It was worth a try though!

    Mike

  20. Interesting. What’s more:

    If you declare a variable as double, and read [b1].value into that you get 1.2346, but if you write [b1].value back to the spreadsheet you get $1.23, i.e. you lose another two decimal places, and you get the currency format included in the “value”.

    Sub MySub1()
        Dim Cella1Val As Double, Cellb1Val As Double
        [a1].Value = 1.23456789
        [b1].Value = 1.23456789

        [a1].NumberFormat = "General"
        [b1].NumberFormat = "$#,##0.00"

        Cella1Val = [a1].Value
        Cellb1Val = [b1].Value

        MsgBox TypeName([a1].Value) ' Returns "Double"
        MsgBox TypeName([b1].Value) ' Returns "Currency"

        [a2] = [a1].Value  ' Returns 1.23456789
        [b2] = [b1].Value  ' Returns $1.23

        [a3] = Cella1Val  ' Returns 1.23456789
        [b3] = Cellb1Val  ' Returns 1.2346

        MsgBox [a1].Value ' Returns "1.23456789"
        MsgBox [b1].Value ' Returns "1.2346"
    End Sub

  21. This topic came up recently at Smurf on Spreadsheets:
    http://smurfonspreadsheets.wordpress.com/2008/04/02/data-types/

    If you use .value2, rather than .value, you don’t get the automatic data type conversion.

    I’ll be doing that in future.

    Sub MySub1()
        Dim Cella1Val As Double, Cellb1Val As Double, Cellb1val2 As Double
        [a1].Value = 1.23456789
        [b1].Value = 1.23456789

        [a1].NumberFormat = "General"
        [b1].NumberFormat = "$#,##0.00"

        Cella1Val = [a1].Value
        Cellb1Val = [b1].Value
        Cellb1val2 = [b1].Value2

        MsgBox TypeName([a1].Value) ' Returns "Double"
        MsgBox TypeName([b1].Value) ' Returns "Currency"

        [a2] = [a1].Value  ' Returns 1.23456789
        [b2] = [b1].Value  ' Returns $1.23

        [a3] = Cella1Val  ' Returns 1.23456789
        [b3] = Cellb1Val  ' Returns 1.2346
        [b4] = Cellb1val2  ' Returns 1.23456789

        MsgBox [a1].Value ' Returns "1.23456789"
        MsgBox [b1].Value ' Returns "1.2346"
    End Sub

  22. I’m no expert on this, but here is what I think is going on.

    VBA has distinct Currency and Double variable types; Excel the worksheet has only Double. If a currency format is applied to a cell, this might induce VBA to assign a type of Currency to the cell’s value, instead of the Double it would otherwise have assigned.

  23. Hey Jon, yes, and it works in both directions. If the number formatting of the cell is “currency” then the value returned from Range.Value is a Currency data type. Until now I had always assumed that the value within the cell was *actually* a Currency data type; but now it is clear that it is simply being converted when called via the Excel object model.

    When assigning a Currency data type to the cell, a “currency” number formatting is applied along with the value assignment. Strangely, as Doug noticed, the write assignment further truncates the result to two decimal places:

    Sub MySub1()
        [a1].NumberFormat = "General"
        [b1].NumberFormat = "$#,##0.00"

        [a1].Value = 1.23456789
        [b1].Value = [a1].Value
        [c1].Value = [b1].Value

        MsgBox TypeName([a1].Value) ' Returns "Double"
        MsgBox TypeName([b1].Value) ' Returns "Currency"
        MsgBox TypeName([c1].Value) ' Returns "Currency"

        MsgBox [a1].Value ' Returns "1.23456789"
        MsgBox [b1].Value ' Returns "1.2346"
        MsgBox [c1].Value ' Returns "1.23"
    End Sub

    Of course .Value2 gets around all this, but then you’d have to test the data type separately, which is not very important for “currency” data, I would argue, but it is pretty important for Dates. So I personally don’t wind up using .Value2 very often.

  24. We’re getting off topic. VBA does different things than Excel itself, and ranges have both .Value (which is specific to VBA and Excel’s object model) and .Value2, which is what’s actually stored in Excel cells.

    Even if Excel did use the Variant/Currency data type for cells formatted as currency, that’d only solve truncation error in addition and subtraction. Multiplication and division would remain problematic. Also, it’s less flexible than precision as displayed. If you have a cell (A1) formatted as 0.000 and enter 1.001 in it, its squared value should be 1.002001, but currency format will truncate this at 4 decimal places, whereas using precision as displayed you could format the cell as 0.000000 and the full value would be preserved.

    As long as you have only FINITE precision, you’d still have to handle values that lack finite representations in any base, e.g., SQRT(2). If you want arbitrary precision and symbolic processing, there are software packages available, e.g., Mathematica, MATLAB, Maple, Maxima, GAUSS, MuPAD and maybe others, but AFAIK there are no spreadsheets providing such functionality, in no small part because such functionality means SLOW calculations.

    Finite precision binary floating point has this advantage: it’s currently the fastest way of performing fairly accurate computer arithmetic. If you believe there would be better (either faster or as fast but more accurate) ways for computers to perform arithmetic calculations, where are they? Wouldn’t you suppose there’d be a huge market for chips implementing such alternatives? Perhaps the absence of such chips is evidence that no one has yet been able to come up with anything better.
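
    The precision-as-displayed half of the 1.001 example above, as a throwaway-workbook sketch (the cell addresses are mine, and note the option permanently rounds every stored constant workbook-wide):

    Sub PrecisionAsDisplayedSketch()
        ActiveWorkbook.PrecisionAsDisplayed = True
        Range("A1").NumberFormat = "0.000000"   ' six decimals displayed, so nothing is lost
        Range("A1").Value = 1.001
        Range("A2").Formula = "=A1^2"
        Debug.Print Range("A2").Value           ' 1.002001 survives
        ActiveWorkbook.PrecisionAsDisplayed = False
    End Sub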

  25. I don’t know that we’re off topic really… In as much as we are, it’s due to my mistaken belief that “currency” number formatting affected the actual underlying type. It sure behaves that way from the VBA side of the fence.

    I had thought that by changing the formatting to “currency”, this not-quite-zero-sum situation that was being discussed here might have *actually* returned zero. But, alas, it doesn’t because it is merely the object model that is doing a conversion to Currency behind the scenes and not actually within the cells themselves.

    As for what *could be*, I think your assessment is exactly right: improving this situation for precision would definitely mean a rather large penalty in calculation time. Probably the easiest way to implement better precision would be to utilize the Decimal data type. It still would not have the range of a Double, but it would have an excellent range and greater (and more flexible) precision than Currency; however, its processing is not native to the CPU, so its calculation speed would probably be about 10x slower than calculating doubles.

    Still, it might be neat to be able to set the data type (perhaps via the number format) and be willing to make that trade-off on the fly. I’m not saying that it’s convenient for Excel or the calculation engine to be able to do this, but it’s an interesting idea which could have a lot of value in certain situations. And this is essentially what I mistakenly believed was going on with the “currency” number formatting.

    I was basically fooled by the following code, which returns “Currency”:

    [b1].Value = 1.23456789
    [b1].NumberFormat = "$#,##0.00"
    MsgBox TypeName([b1].Value) ' Returns "Currency"

    I’m not sure what else I should have assumed, but I thought that the underlying type actually *was* Currency. But I was wrong. If I was a C++/XLL programmer, one who is used to using an XLOPER on a daily basis, then I would have known better.

    Still, I think we learned something here. Perhaps it is “off topic”, I don’t know, but the way Excel VBA is handling Currency data types for cells with “currency” number formatting is definitely interesting. And the fact that the following code truncates to two decimal places is also surprising and worth being aware of:

    [a1].Value = 1.23456789
    [a1].NumberFormat = "$#,##0.00"
    [b1].Value = [a1].Value
    MsgBox [b1].Value  ' Returns 1.23

    It certainly makes me want to use .Value2 a little more often. :-)

  26. Dick, there is an entire science for the calculation of errors; it has its own set of formulas. It’s fairly easy to understand what an accumulated error is when you add or subtract numbers. For example, the maximum error in 4.0 – 3.0 is 0.099999, so the answer is 1.0 +/- 0.0999999, etc.
    It becomes slightly more complicated with multiplications and division and even more complicated with complex formulas. But calculating a (maximum) error is an exact science and generally it can be done.
    I don’t know if you could implement these formulas into Excel though given the rounding errors when using floating point calculations.

    Fzz: you look at this from a computer science point of view. That is a great way to explain the problem and why it occurs. Indeed, all software that uses floating point calculations following the IEEE 754 standard (as Excel does) will make calculation errors due to rounding. I can’t argue with that.
    But I’m an Excel user and I look at Excel as such. It doesn’t make any sense to me to display 26 digits when only about 15 digits are accurate (I read this on Chip Pearson’s page).
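
    For what it’s worth, the textbook rules are that absolute errors add under addition and subtraction, and relative errors add under multiplication and division. A minimal sketch of the 4.0 – 3.0 case (assuming each displayed value is only known to half its last shown digit):

    Sub WorstCaseErrorSketch()
        ' Worst-case absolute error of a difference = sum of the operands' absolute errors.
        Dim absErr As Double
        absErr = 0.05 + 0.05                 ' 4.0 and 3.0 each taken as +/- 0.05
        Debug.Print 4# - 3#; "+/-"; absErr   ' 1 +/- 0.1
    End Sub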

  27. The number 0.000000000008640199666843 has 13 significant digits, not 26 (or even 25). If you paste it into Excel it displays as 8.640199666843E-12, 13 figures. If you subtract two large numbers, resulting in a number very close to zero, you are going to lose precision. It’s just something you need to be aware of if you have an application that really demands that level of precision.

  28. Doug; you are right, because the number starts with 0.00… etc. If it started with 1.00… etc., you’d have 25 significant digits, of which 24 are decimals. The numbers used for the sum in the WoW example contain more than 13 significant digits.

    Jon; I counted the number of decimals in the WoW example. Not sure if I counted them right though; I was more concerned with the result of the sum.

  29. Dick,

    Q: Is there a way to determine how precise a number is after you’ve converted to binary and back? For instance, if you subtract 57 from 57.8, is there an algorithm to determine how precise the answer is?

    The degree of error will depend on the method used to convert from decimal to binary and how many binary bits are used.
    For Excel I guess the simplest way would be to compare the results of its slightly modified IEEE scheme with the results using an extended precision addin like
    http://digilander.libero.it/foxes/MultiPrecision.htm
    which claims to be able to handle up to 250 significant digits.

    Rembo,

    “the maximum error in 4.0 – 3.0 is 0.099999” – this is only true if you make the assumption that digits that are not shown could be anything, and if you ignore any errors caused by the calculation method used (in this case converting to binary and back).
    I do not think this assumption is widely used, for instance certainly not in finance/banking. Even in the physical sciences I would have thought it was more usual to state a measurement as a value +/- an error.

    .Value2 is faster than .Value for numbers because it does not do the Currency and Date implicit type conversions, as well as avoiding the truncation problems.

  30. Rembo – you’re assuming Excel is something other than just another program that performs numeric calculations using binary floating point. That assumption is wrong.

    You obviously want Excel to use arbitrary or specified precision decimal arithmetic. That’d make Excel VERY, VERY SLOW. Others prefer Excel to be relatively fast, even if it’s a bit of a pain to work with in terms of truncation error.

    IOW, Excel is what it is. It’s NOT going to change (at least not in this way). You’re going to be very disappointed expecting your arguments will change Excel.

  31. > IOW, Excel is what it is. It’s NOT going to change (at least not in this way). You’re going to be
    > very disappointed expecting your arguments will change Excel.

    Nah… I don’t think that is going to change, fzz. AFAIK MS has always supported the IEEE 754 standard, and my arguments are not that important. The chances of anyone running into problems because of this rounding behaviour are very small. Rewriting Excel is most likely not an option. But despite all that I still think it’s silly to display that many decimals if they are not accurate.

    As for the calculation speed, with today’s computers that probably isn’t that much of a problem. But you are correct of course that there is a performance penalty for using exact calculations, if that were possible.

    Charles: that assumption isn’t used at all except in scientific calculations. Error calculations are commonly used to cover for measurement/calibration errors, for example.

  32. Rembo – you haven’t worked with arbitrary precision decimal math software. It’s slow even on current hardware. Not slow in the same way it would have been 10 years ago, but slow in the sense of annoyingly noticeable.

    As for accuracy, you’re still confusing the statistical/scientific/engineering meaning of significant DECIMAL digits with binary floating point mantissa bits. Except for exact sums of negative powers of 2, e.g., #.5, #.75, #.203125 (# + 13/64), binary floating point can’t store DECIMAL fractional values exactly. The best binary floating point can do is use all available binary bits to store the closest value possible. Such software then uses binary floating point math for calculations using those numbers and the results are in turn stored using all available binary bits to store the closest value possible. These values are simply the result of unavoidable imprecision in the decimal-to-binary-to-decimal conversions. THEY ARE ACCURATE (or as accurate as possible), just not in the same sense as spurious decimal places beyond the number of significant (DECIMAL) digits in statistical/… context.

    You persist in believing Excel can (or should) perform decimal arithmetic. It can if you use precision as displayed. Otherwise, it can’t.

