Talk:Loss of significance

Untitled


The page reads:

'Examples of ill-conditioned calculations are:

  • Subtracting two almost equal numbers
  • Division by almost zero

In these ill-conditioned calculations you get errors which tend to blow up dramatically. '

Division by a small non-zero number does not result in loss of significance. It may (depending on the numerator) cause numerical overflow, as may multiplication, addition, or subtraction. This is a very different and separate problem from loss of significance.

Subtraction of almost equal numbers is the sole source of loss of significance.
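
A quick Python illustration of this distinction (my own sketch, not from the article):

    x = 1.234567890123
    y = 1.234567890122   # agrees with x in the first 12 digits
    print(x / 1e-10)     # division: the full relative accuracy of x survives
    print(x - y)         # ~1e-12, but only the first few digits are meaningful;
                         # the rest is rounding noise from storing x and y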


The formula given for loss of significance gives undefined results for x < y. Has "assuming x > y" been missed out, or should it be abs(1 - y / x)?


I think this page is a good place to mention catastrophic cancellation. This was already mentioned once by someone else (18:44, 13 March 2007 Ehdr (Talk | contribs) m (Mentioned "cancellation")) but might have been unclear and incomplete. Citations may be usable from "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg.

84.41.135.110 17:25, 4 July 2007 (UTC)[reply]

Kahan algorithm


Hi, what about the Kahan algorithm? —Preceding unsigned comment added by 149.156.67.102 (talk) 18:59, 14 October 2009 (UTC)[reply]
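
Presumably the Kahan summation algorithm is meant. A minimal Python sketch of the idea (my own illustration, not proposed article text):

    def kahan_sum(values):
        total = 0.0
        comp = 0.0                  # running compensation for lost low-order bits
        for v in values:
            y = v - comp            # re-apply the bits lost on the previous step
            t = total + y           # the low-order bits of y are lost here...
            comp = (t - total) - y  # ...and recovered algebraically here
            total = t
        return total

    print(sum([1.0] + [1e-16] * 10))        # 1.0: the small terms vanish entirely
    print(kahan_sum([1.0] + [1e-16] * 10))  # ~1.000000000000001: they are recovered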

Sujit Kadam, Ohio University, USA


Hello Everyone,

This is Sujit Kadam, pursuing a Master's in Mathematics (computational track) at Ohio University, USA. I would like to propose a few changes to this page. I think the section "A better algorithm" is a bit confusing and needs some changes to the examples and formulas used. The formulas used for 'x1' and 'x2' in this section are correct if the coefficient 'b' is negative in the quadratic equation. In our example, the quadratic equation used has a positive coefficient 'b'. Hence, in the formula it gives

-b + sqrt(b*b - 4ac) = -200 + sqrt(200*200 - 4*1*(-0.000015))

This actually results in a subtraction operation and a loss of significant digits.

If 'b' were negative (say -200) in the example, then the formula would result in an addition operation. Hence, we would be able to avoid the loss of significant digits.

-b + sqrt(b*b - 4ac) = -(-200) + sqrt((-200)*(-200) - 4*1*(-0.000015))
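
A short Python check of the cancellation (my own sketch; the coefficients are the ones from the example):

    import math

    a, b, c = 1.0, 200.0, -0.000015
    d = math.sqrt(b*b - 4*a*c)      # about 200.00000015: nearly equal to b
    print((-b + d) / (2*a))         # cancellation: roughly half the digits are noise
    print((2*c) / (-b - d))         # algebraically equal form with no cancellation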

My proposed changes for this section:


A better algorithm

A better algorithm for solving quadratic equations is based on two observations: that one solution is always accurate when the other is not, and that given one solution of the quadratic, the other is easy to find.

If

    x_1 = (-b + sqrt(b*b - 4ac)) / (2a)    (1)

and

    x_2 = (-b - sqrt(b*b - 4ac)) / (2a)    (2)

then we have the identity (one of Viète's formulas for a second degree polynomial)

    x_1 * x_2 = c / a.

The above formulas (1) and (2) work perfectly for a quadratic equation whose coefficient 'b' is negative (b < 0). If 'b' is negative, then '-b' in the formulas becomes a positive value, since -(-b) equals b. Hence, we can avoid the subtraction and the loss of significant digits it causes. But if the coefficient 'b' is positive, then we need to use a different set of formulas. The second set of formulas, valid for finding the roots when the coefficient 'b' is positive, is given below.


    x_1 = (-b - sqrt(b*b - 4ac)) / (2a)    (3)

and

    x_2 = 2c / (-b - sqrt(b*b - 4ac))    (4)


In the above formulas (3) and (4), when 'b' is positive the leading term '-b' becomes negative, since -(+b) equals -b. Then, as per the formulas, sqrt(b*b - 4ac) is subtracted from '-b', so the two negative terms accumulate: effectively it is an addition operation. In our example, the coefficient 'b' of the quadratic equation is positive. Hence, we need to use the second set of formulas, i.e. formulas (3) and (4).


The algorithm is as follows. Use the quadratic formula to find the solution of greater magnitude, which does not suffer from loss of precision. Then use the identity above (Viète's formula) to calculate the other root. Since no subtraction of nearly equal quantities is involved, no loss of precision occurs.

Applying this algorithm to our problem, and using 10-digit floating-point arithmetic, the solution of greater magnitude, as before, is

    x_1 ≈ -200.0000001.

The other solution is then

    x_2 = c / (a * x_1) ≈ 7.5e-8,

which is accurate.
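
For concreteness, here is a small Python sketch of the proposed algorithm (my own illustration; copysign picks the sign that avoids cancellation for either sign of 'b'):

    import math

    def stable_quadratic_roots(a, b, c):
        d = math.sqrt(b*b - 4*a*c)               # assumes real roots, i.e. b*b >= 4*a*c
        x1 = (-b - math.copysign(d, b)) / (2*a)  # -b and -copysign(d, b) share a sign: no cancellation
        x2 = c / (a * x1)                        # Viète's identity: x1 * x2 = c/a
        return x1, x2

    print(stable_quadratic_roots(1.0, 200.0, -0.000015))
    # roughly (-200.000000075, 7.5e-08), both accurate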

I like the overall idea here...but it's somewhat convoluted. You can get the Viete formula x_1*x_2 = c/a pretty easily (just multiply out the expression (x- x_1)(x-x_2) and equate it to the monic polynomial x^2 + b/a x + c/a). So you don't need to write out all those formulae above if your intent was simply to use the Viete formula thing. --DudeOnTheStreet (talk) 07:33, 4 May 2011 (UTC)[reply]

More detail in the instability section


I think it would be beneficial to include more detail in the example discussing instability of the quadratic formula. First, the quadratic formula certainly does not always provide an inaccurate root, so it would be useful to include why this formula may not always produce an accurate result, instead of citing just one example. This could be a simple change, for instance, just a statement about how when c is very small, loss of significance can occur in either of the root calculations, depending on the sign of b. In addition, I think the example would be more effective with more explanation. I think it is important to show what is so special about this particular polynomial that causes loss of significance to occur. Simply stating "subtraction" as a reason for loss of significance might be misleading for some readers, as the accurate root was calculated using -b minus a similar number. —Preceding unsigned comment added by Vv148408 (talkcontribs) 07:19, 24 September 2010 (UTC)[reply]
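
To illustrate the sign-of-b point, a quick Python sketch (mine, not proposed article text):

    import math

    a, c = 1.0, -0.000015
    for b in (200.0, -200.0):
        d = math.sqrt(b*b - 4*a*c)
        print(b, (-b + d) / (2*a), (-b - d) / (2*a))
    # for b > 0 the '+' root suffers cancellation; for b < 0 it is the '-' root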

Done. But is it really any better? I don't buy the 'explanation', myself, but perhaps others will find it more illuminating than I do!
I personally think that the effect is caused by any calculation needing more significant figures than we have available for the computational task; but I have no reference to back me up. My impressions were formed by many years of practical computation, and the need to engineer effective algorithms for arbitrary-precision floating point on a fixed-point computer architecture. But that is original research, and thus not encyclopaedic.
yoyo (talk) 06:05, 5 July 2012 (UTC)[reply]

Change in the Example


My name is Kyle D, and I am an undergraduate, also at Ohio University, studying Electrical Engineering. I am proposing a change to the example on the page.

Problems with the example:

1. A machine does not store floating-point numbers as decimal digits, except for binary-coded decimal (BCD) calculations, which are rare.

Decimal works for the purpose of the example, but is not reflective of the actual computation.

If we used a binary number:

   It would be easier on everyone's eyes. (0.12345678901234567890 is hard to look at.)
   A reader familiar with binary would get a better understanding of the actual computation.
   A reader unfamiliar with binary would still understand the example as a decimal example.

2. 10 digits is a lot to think about. It's only an example.

I think a 4-bit floating-point machine should work for the purposes of this example.

The current decimal example should also be included later in the page, to demonstrate the phenomenon in Decimal numbers, and with more digits.

I think these changes would make the example a lot easier to understand, for both binary and decimal readers. Here is the proposed example:


For the purposes of this example, if the reader is not familiar with binary numbers, bits can be considered as decimal digits.

Consider the real binary number

   1.001111111

A floating-point representation of this number on a machine that keeps 4 floating-point bits would be

   1.001,

which is fairly close — the difference is very small in comparison with either of the two numbers.

Now perform the calculation

   1.001111111 − 1

The real answer, accurate to 7 significant bits, is

   0.001111111

However, on the 4-bit floating-point machine, the calculation yields

   1.001 − 1.000 = 0.001

Whereas the original numbers are accurate in all of their first 4 bits, their floating-point difference is accurate only in its first nonzero bit. This amounts to a loss of information.

This phenomenon can be extended to floating-point numbers of any size. Instead of 4 significant bits, an IEEE 754 single-precision floating-point number has 24 significant bits (a 23-bit stored fraction plus an implicit leading bit) and 1 sign bit.
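
The 4-bit machine above can be simulated in a few lines of Python (my sketch; it truncates to 4 significant bits, as the example does, and assumes x > 0):

    import math
    from fractions import Fraction

    def truncate_bits(x, bits=4):
        f, e = Fraction(x), 0
        while f >= 2:                # normalize into [1, 2)
            f /= 2; e += 1
        while f < 1:
            f *= 2; e -= 1
        kept = math.floor(f * 2 ** (bits - 1))   # keep only `bits` significant bits
        return float(Fraction(kept, 2 ** (bits - 1)) * Fraction(2) ** e)

    x = truncate_bits(1.248046875)   # 1.001111111 in binary; stored as 1.001 = 1.125
    print(x - 1.0)                   # 0.125, i.e. 0.001 in binary: one significant bit left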

Additionally, the phenomenon can also be demonstrated with decimal numbers. The following example demonstrates loss of significance for a decimal floating-point data type with 10 significant digits:

Consider the real decimal number

   0.1234567891234567890.

A floating-point representation of this number on a machine that keeps 10 floating-point digits would be

   0.1234567891,

which is fairly close — the difference is very small in comparison with either of the two numbers.

Now perform the calculation

   0.1234567891234567890 − 0.1234567890.

The real answer, accurate to 10 digits, is

   0.0000000001234567890.

However, on the 10-digit floating-point machine, the calculation yields

   0.1234567891 − 0.1234567890 = 0.0000000001.

Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is accurate only in its first nonzero digit. This amounts to a loss of information.
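
This 10-digit decimal machine can be reproduced directly with Python's decimal module (the unary plus rounds each stored value to the context precision):

    from decimal import Decimal, getcontext

    getcontext().prec = 10                    # a 10-digit decimal floating-point machine
    x = +Decimal("0.1234567891234567890")     # stored as 0.1234567891
    y = +Decimal("0.1234567890")              # stored as 0.1234567890
    print(x - y)                              # 1E-10: one significant digit survives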

Workarounds

It is possible to do computations using an exact fractional representation of rational numbers and keep all significant digits, but this is often prohibitively slow compared with floating-point arithmetic. Furthermore, it usually only postpones the problem: what if the data are accurate to only 10 digits? The same effect will occur.
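
For example, the subtraction above is exact with Python's Fraction type:

    from fractions import Fraction

    x = Fraction("0.1234567891234567890")
    y = Fraction("0.1234567890")
    print(x - y)   # 123456789/1000000000000000000, exactly 0.000000000123456789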

One of the most important parts of numerical analysis is to avoid or minimize loss of significance in calculations. If the underlying problem is well-posed, there should be a stable algorithm for solving it. The art is in finding a stable algorithm. —Preceding unsigned comment added by Kyle.drerup (talkcontribs) 14:56, 24 September 2010 (UTC)[reply]

Possible error


"If the underlying problem is well-posed, there should be a stable algorithm for solving it. The art is in finding it." Why is that? It seems like wishful thinking rather than a mathematical fact. — Preceding unsigned comment added by 181.28.64.87 (talk) 08:19, 12 December 2012 (UTC)[reply]

Rump polynomial


... should be added, I suppose? --Cognoscent (talk) 12:34, 1 October 2013 (UTC)[reply]
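
For reference, Rump's example can be checked in Python (my sketch, assuming the usual form of Rump's polynomial): the exact rational value is about -0.827, while naive double-precision evaluation is wrong by many orders of magnitude.

    from fractions import Fraction

    # f(a, b) = 333.75 b^6 + a^2 (11 a^2 b^2 - b^6 - 121 b^4 - 2) + 5.5 b^8 + a / (2b)
    def rump_exact(a, b):
        return (Fraction(1335, 4) * b**6
                + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
                + Fraction(11, 2) * b**8
                + Fraction(a, 2 * b))

    def rump_double(a, b):
        return 333.75*b**6 + a**2*(11*a**2*b**2 - b**6 - 121*b**4 - 2) + 5.5*b**8 + a/(2*b)

    print(float(rump_exact(77617, 33096)))    # about -0.8273960599
    print(rump_double(77617.0, 33096.0))      # garbage on the order of 1e21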