Title: | Mathematics at DEC |
Moderator: | RUSURE::EDP |
Created: | Mon Feb 03 1986 |
Last Modified: | Fri Jun 06 1997 |
Last Successful Update: | Fri Jun 06 1997 |
Number of topics: | 2083 |
Total number of notes: | 14613 |
this is an interesting way to approximate sqrt(2): look at points on the
hyperbola y^2 = 2x^2 + 1, which is asymptotic to the line y = x*sqrt(2).
it can be shown that the solutions satisfy, for all n,

                                                    n
        X(n) * SQRT(2) + Y(n) = ( 2*SQRT(2) + 3 )

where X(n),Y(n) is one solution. so for each n we pick an X and find the
corresponding Y, and come up with this table:

     n        X             Y        Y/N= approx to SQRT(2)
    ---   ----------    ----------   ----------------------
     1            2              3   1.5
     2           12             17   1.416
     3           70             99   1.41428
     .
     .
    10   15,994,428     22,619,537   1.414213562373096

I think geometry-based methods such as the above are easier to understand
than non-geometric ones (although they might not always be better), as in
approximating pi by drawing many-sided polygons inside a circle versus
using a series approximation.

the above method is from "Introduction to Number Theory" by Daniel E.
Flath (Wiley).

does anyone know of a faster/simpler method to approach SQRT(2)?

/naser
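The table above can be reproduced with a short sketch. Multiplying X*SQRT(2) + Y by 2*SQRT(2) + 3 gives (3X + 2Y)*SQRT(2) + (4X + 3Y), so successive solutions follow the integer recurrence X' = 3X + 2Y, Y' = 4X + 3Y (this recurrence is spelled out in the book excerpt quoted later in the topic; the code below is only an illustration of it):

```python
# Integer points on y^2 = 2x^2 + 1, starting from (2, 3).
# Multiplying out (X*sqrt(2) + Y)(2*sqrt(2) + 3) gives the
# recurrence X' = 3X + 2Y, Y' = 4X + 3Y, so sqrt(2) never
# has to be evaluated to build the table.
rows = []
x, y = 2, 3
for n in range(1, 11):
    rows.append((n, x, y, y / x))
    x, y = 3 * x + 2 * y, 4 * x + 3 * y

for n, xn, yn, r in rows:
    print(f"{n:2d}  {xn:>12,}  {yn:>12,}  {r:.15f}")
```

Each ratio Y/X approaches sqrt(2) from above and below alternately, matching the table in the note.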
T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
1334.1 | GUESS::DERAMO | Dan D'Eramo | Sun Nov 11 1990 00:36 | 4 | |
Those look like the continued fraction approximants to sqrt(2) to me. Dan | |||||
1334.2 | Alternative | SHIRE::ALAIND | Alain Debecker @GEO DTN 821-4912 | Mon Nov 12 1990 08:12 | 28 |
From Lebesgue, "Leçons sur les constructions géométriques", 1940

Take a square ABCD. Take the point E on the diagonal AC, such that
AE = AB. We have to compare AC to AE. Draw from E the perpendicular to
AC, which cuts BC in F. We have CE = EF = FB, and we have to compare CF
to CE. But these are the side and diagonal of a new square, and the
process doesn't stop.

Define AB = 1 and x = AC/AB = sqrt(2). The above process translates into

        AC          CE          1             1               1
    x = ---- = 1 + ---- = 1 + ------ = 1 + --------- = 1 + -------
        AB          AB          CB              CF          1 + x
                               ----        1 + ----
                                CE              CE

From where we get, by substituting the x on the right-hand side by the
whole right-hand side, and repeating:

    x = 1 + 1/(2 + 1/(1+x)) = 1 + 1/(2 + 1/(2 + 1/(2 + 1/ ..... )))

This provides sqrt(2) as a continued fraction. The first terms are
3/2, 7/5, 17/12, ...

A continued fraction is the fastest way to converge to a real number,
although it takes ages to define the "fastest way". Now, is a square
simpler than a hyperbola?
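The convergents of that continued fraction can be sketched with the standard recurrence h(k) = a(k)*h(k-1) + h(k-2) (and the same for denominators), with a(0) = 1 and a(k) = 2 for k >= 1 (a sketch only, not from the quoted book):

```python
# Convergents of sqrt(2) = 1 + 1/(2 + 1/(2 + ...)).
from fractions import Fraction

def sqrt2_convergents(count):
    h_prev, h = 1, 1        # numerators:   h(-1) = 1, h(0) = a(0) = 1
    k_prev, k = 0, 1        # denominators: k(-1) = 0, k(0) = 1
    out = [Fraction(h, k)]
    for _ in range(count - 1):
        h_prev, h = h, 2 * h + h_prev
        k_prev, k = k, 2 * k + k_prev
        out.append(Fraction(h, k))
    return out

print(sqrt2_convergents(6))   # 1, 3/2, 7/5, 17/12, 41/29, 99/70
```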
1334.3 | Confused | CHOSRV::YOUNG | The OOL's are not what they seem. | Mon Nov 12 1990 10:25 | 45 |
I am a little confused by the hand waving in .0 :

>look at points at hyperbola y^2=2x^2+1 , which is asymptotic to line
>y= x*sqrt(2)

OK, I understand this.

>it can be shown that this formula has the relationship for all the roots
>
>                                                    n
>        X(n) * SQRT(2) + Y(n) = ( 2*SQRT(2) + 3 )
>
>where X(n),Y(n) are one solution.

How was this formula derived? I cannot see the connection at all.

> so for n's we pick an X and find corresponding y and come up with this table

Huh? How can you use this formula to approximate SQRT(2) when it has
SQRT(2) in it?!? Are you saying to use iterative substitution?

>n    X       Y       Y/N= approx to SQRT(2)
>---  ----    -----   --------------------
>1    2       3       1.5
>2    12      17      1.416

I think that "Y/N=" is a typo and should be "Y/X=".

But, even using iterative substitution, I do not get the same results:

    Y(n) = (2*(Y(n-1))/(X(n-1)) + 3)^n - X(n)*(Y(n-1))/(X(n-1))

Assume X0,Y0 = 1

For X1=2 :   Y1 = (2*1 + 3)^1 - 2*1 = 3

So this works out the same.

For X2=12 :  Y2 = (2*1.5 + 3)^2 - 12*1.5 = (6)^2 - 18 = 18

Now I get different answers!? So what's going on?

--  Barry
1334.4 | I wish I could hand wave that good :-) | SMAUG::ABBASI | Mon Nov 12 1990 10:55 | 16 | |
ref .-1

>How was this Formula derived? I cannot see the connection at all.

that was not hand waving, I just didn't show how the author came up with
that formula. I'll write down his method tonight when I get home.

yep, that was a typo, "Y/X"

>Huh? How can you use this formula to approximate SQRT(2) when it has
>SQRT(2) in it?!? Are you saying to use iterative substitution?

the SQRT(2) cancels out, so we find Y for X, and Y/X => SQRT(2) approx

/naser
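The "SQRT(2) cancels out" remark can be made concrete with exact integer arithmetic: represent a + b*sqrt(2) as the integer pair (a, b) and expand (3 + 2*sqrt(2))^n symbolically, so no irrational number is ever evaluated (a sketch of the idea, not the author's derivation):

```python
# Numbers of the form a + b*sqrt(2) as integer pairs (a, b):
# (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2.
def mul_ring(p, q):
    (a, b), (c, d) = p, q
    return (a * c + 2 * b * d, a * d + b * c)

p = (1, 0)                     # 1 + 0*sqrt(2)
for n in range(1, 4):
    p = mul_ring(p, (3, 2))    # multiply by 3 + 2*sqrt(2)
    y, x = p                   # rational part = Y(n), sqrt(2) part = X(n)
    print(n, x, y, y / x)
```

The printed pairs are exactly the (X, Y) columns of the table in .0.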
1334.5 | exact book proof of .0 | SMAUG::ABBASI | Mon Nov 12 1990 22:36 | 49 | |
this is the handwaving from the text book itself regarding .0

"failing to find a rational number that equals sqrt(2), we might search
for rationals that approximate sqrt(2). it seems natural to look for
integral points on the hyperbola y^2 = 2x^2 + 1, which is asymptotic to
the line y = sqrt(2) x. the situation is better immediately because
there are integer points on the hyperbola, for instance (x,y) = (2,3)
or (13860,19601).

the ancients knew how to multiply solutions of the equation
y^2 - 2x^2 = 1 : if (x,y) = (u,v), (U,V) are two such solutions, then
their product

    (u,v).(U,V) = (uV + vU, 2uU + vV)

is a third. this follows from the key identity

    (2uU + vV)^2 - 2(uV + vU)^2 = (v^2 - 2u^2)(V^2 - 2U^2)

starting with one integer point (x,y) on the hyperbola in the first
quadrant we can find infinitely many others by taking powers. for
example let (X1,Y1) = (2,3) and let (Xn,Yn) = (2,3).(Xn-1,Yn-1) for
n > 1. we will write (Xn,Yn) = (2,3)^n .

we have the equality Yn^2 - 2Xn^2 = 1 for all n >= 1 and, as is easily
seen, the inequalities X1 < X2 < X3 < ... and Y1 < Y2 < ... .

two interesting formulas for (Xn,Yn) are easily proved by finite
induction:

    / Xn \     / 3  2 \ n  / 0 \
    |    |  =  |      |    |   |        for n >= 1
    \ Yn /     \ 4  3 /    \ 1 /

    Xn sqrt(2) + Yn = (2 sqrt(2) + 3)^n

table of solutions of Yn^2 - 2Xn^2 = 1:

    n    Xn    Yn    Yn/Xn
    1     2     3    1.5
    2    12    17    1.416
    .
    .
"
end of quote. I should get myself a scanner!

/naser
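The quoted product rule is easy to check numerically; this sketch takes powers of (2, 3) and verifies that each power is again a solution of y^2 - 2x^2 = 1:

```python
# The book's product of solutions of y^2 - 2x^2 = 1:
# (u,v).(U,V) = (uV + vU, 2uU + vV).
def mul(p, q):
    (u, v), (U, V) = p, q
    return (u * V + v * U, 2 * u * U + v * V)

powers = [(2, 3)]
for _ in range(5):
    powers.append(mul(powers[-1], (2, 3)))

for x, y in powers:
    assert y * y - 2 * x * x == 1   # every power stays on the hyperbola
print(powers)
```

The sixth power is (13860, 19601), the second example point mentioned in the quote.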
1334.6 | ComPelling Proof | IOSG::CARLIN | Dick Carlin IOSG, Reading, England | Tue Nov 13 1990 05:10 | 9 |
>          <<< Note 1334.1 by GUESS::DERAMO "Dan D'Eramo" >>>
> Those look like the continued fraction approximants to
> sqrt(2) to me...

... and of course they are. That is the traditional way to solve Pell's
equation. I was very miffed years back when I thought I had discovered
it, only to find it was standard textbook stuff.

dick
1334.7 | Continuous fraction? | SHIRE::ALAIND | Alain Debecker @GEO DTN 821-4912 | Tue Nov 13 1990 07:03 | 18 |
>>         <<< Note 1334.1 by GUESS::DERAMO "Dan D'Eramo" >>>
>> Those look like the continued fraction approximants to
>> sqrt(2) to me...
>
>   <<< Note 1334.6 by IOSG::CARLIN "Dick Carlin IOSG, Reading, England" >>>
> ... and of course they are. That is the traditional way to solve
> Pell's equation. I was very miffed years back when I thought I had
> discovered it, only to find it was standard textbook stuff.

Not in the strict sense. In fact, THE continued fraction approximants
would alternate around the limit. In the case of sqrt(2) they are

    1, 3/2, 7/5, 17/12, 41/29, 99/70, ...

Any good approximation by a fraction will be a subsequence, because the
best approximation of sqrt(2) by a rational p/q with q < 5 is 3/2, with
q < 12 is 7/5, with q < 29 is 17/12, with q < 70 is 41/29, ...
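The "best approximation" claim can be checked by brute force, measured here in the classical sense of minimizing |q*sqrt(2) - p| (a best approximation "of the second kind"; this metric choice is an assumption, since the note does not name one):

```python
# Brute-force best rational approximation to sqrt(2) with denominator
# below qmax, minimizing |q*sqrt(2) - p|.
from math import sqrt

def best_second_kind(qmax):
    best = None
    for q in range(1, qmax):
        p = round(q * sqrt(2))
        err = abs(q * sqrt(2) - p)
        if best is None or err < best[2]:
            best = (p, q, err)
    return best[:2]

print(best_second_kind(5), best_second_kind(12), best_second_kind(29))
# (3, 2) (7, 5) (17, 12)
```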
1334.8 | Newton shows the way | CIVAGE::LYNN | Lynn Yarbrough @WNP DTN 427-5663 | Tue Nov 13 1990 12:53 | 6 |
It looks to me as though the standard iteration formula

    x[n+1] = (x[n] + 2/x[n]) / 2

is even faster, since it skips over several of the approximants along
the way: starting with x[0] = 1, it produces 3/2, 17/12, 577/408, ...
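Running that iteration in exact rational arithmetic makes the skipping visible (a sketch using Python's `fractions` module):

```python
# Newton's iteration x[n+1] = (x[n] + 2/x[n]) / 2 for sqrt(2),
# kept exact so the skipped convergents are easy to see.
from fractions import Fraction

x = Fraction(1)
iterates = []
for _ in range(3):
    x = (x + 2 / x) / 2
    iterates.append(x)
print(iterates)   # 3/2, 17/12, 577/408
```

Each step lands on every other term of the Pell sequence: 3/2, then 17/12, then 577/408, skipping 7/5, 41/29, 99/70, 239/169 in between.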
1334.9 | GUESS::DERAMO | Dan D'Eramo | Tue Nov 13 1990 14:53 | 21 | |
>> x[n+1] = (x[n]+2/x[n])/2

Letting n be the numerator and d the denominator, this is

    n/d ==> (n/d + 2d/n) / 2
        ==> ((n^2 + 2d^2)/dn) / 2
        ==> (n^2 + 2d^2) / 2dn

which is the formula from .5 in a slightly different form:

    (u,v).(U,V) = (uV + vU, 2uU + vV)

    u,U <--> d
    v,V <--> n

    (d,n).(d,n) -> (2dn, 2d^2 + n^2)

Which is why it skips terms ... it "multiplies" the latest term with
itself, not with the first term.

Dan
1334.10 | Clarification | SHIRE::ALAIND | Alain Debecker @GEO DTN 821-4912 | Tue Nov 20 1990 08:58 | 150 |
Thanks to off line communication with Eric Osman, I was able to rewrite note .2 In order to read it: EXTRACT the note, EDIT the file to remove this cover instructions (up to the Line-Feed), and $ TYPE/PAGE the file. Note that the file contains REGIS instructions and was tested on VT340 only. Alain --- Last line to be removed ---- [2J [1;1HTake a square ABCD. [09;40f AC [10;40f Let x = ---- = sqrt(2). [11;40f AB PpS(A[0,479][799,0])R(I0)S(C(I[+0,+0]" ")) p[ 50, 50]v[300,300] p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50] p[ 50, 50]t"A" p[300, 50]t"B" p[290,320]t"C" p[ 50,320]t"D" \ [2J [1;1HTake a square ABCD. Take the point E on the diagonal AC, such that AE = AB. We have AC/AE = sqrt(2). [09;40f AC [10;40f Let x = ---- = sqrt(2). [11;40f AB [13;45f AC CE [14;45fx = ---- = 1 + ---- [15;45f AB AB Pp p[ 50, 50]v[300,300] p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50] p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D" p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E" \ [2J [1;1HTake a square ABCD. Take the point E on the diagonal AC, such that AE = AB. We have AC/AE = sqrt(2). Draw from E the perpendicular to AC which cuts BC in F. We have CE = EF = FB, thus FC/FE = sqrt(2). [09;40f AC [10;40f Let x = ---- = sqrt(2). [11;40f AB [13;45f AC CE [14;45fx = ---- = 1 + ---- [15;45f AB AB [17;45f 1 1 [18;45fx = 1 + ------ = 1 + ---------- [19;45f CB CF [20;45f ---- 1 + ---- [21;45f CE CE [23;45f From where, ... Pp p[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"p[300, 50]C(A45C)[ 50, 50]v[300,154]p[310,164]t"F" \ [2J [1;1HTake a square ABCD. Take the point E on the diagonal AC, such that AE = AB. We have AC/AE = sqrt(2). Draw from E the perpendicular to AC which cuts BC in F. We have CE = EF = FB, thus FC/FE = sqrt(2). 
[07;40f 1 [08;40fWe have (1) x = 1 + ------- [09;40f 1 + x Pp p[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"p[300, 50]C(A45C)[ 50, 50]v[300,154]p[310,164]t"F" \ [2J [1;1HTake a square ABCD. Take the point E on the diagonal AC, such that AE = AB. We have AC/AE = sqrt(2). Draw from E the perpendicular to AC which cuts BC in F. We have CE = EF = FB, thus FC/FE = sqrt(2). But FE and FC are the side and diagonal of a new square Ppp[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[300, 50]C(A45C)[ 50, 50]p[217,247]T"E"p[300, 50]C(A45C)[ 50, 50]v[300,154]p[310,164]t"F" p[373,227]v[300,300]v[227,227]v[300,154]v[373,227]C(A45C)[300,154]v[343,257]\ [07;40f 1 [08;48f(1) x = 1 + ------- [09;40f 1 + x [11;40fReplacing x by (1) in (1), we get [13;40f 1 [14;40f (2) x = 1 + ------------- [15;40f 1 [16;40f 2 + ------- [17;40f 1 + x [2J [1;1HTake a square ABCD. Take the point E on the diagonal AC, such that AE = AB. We have AC/AE = sqrt(2). Draw from E the perpendicular to AC which cuts BC in F. We have CE = EF = FB, thus FC/FE = sqrt(2). But FE and FC are the side and diagonal of a new square and the process can be repeated. Ppp[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[300, 50] C(A45C)[ 50, 50]v[300,154]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[217,247]T"E"p[310,164]t"H" p[373,227]v[300,300]v[227,227]v[300,154]v[373,227]C(A45C)[300,154]v[343,257] p[343,300]v[300,300]v[300,257]v[343,257]v[343,300]C(A45C)[343,257]v[325,300]\ [07;63H1[08;48H(1) x = 1 + -------[09;61H1 + x[11;40HReplacing x by (1) in (2), we get[13;56H1[14;41H(3) x = 1 + -----------------[15;61H1[16;54H2 + ------------[17;66H1[18;59H2 + -------[19;64H1 + x[1;1H [2J [1;1HTake a square ABCD. Take the point E on the diagonal AC, such that AE = AB. We have AC/AE = sqrt(2). 
Draw from E the perpendicular to AC which cuts BC in F. We have CE = EF = FB, thus FC/FE = sqrt(2). But FE and FC are the side and diagonal of a new square and the process can be iterated. [07;63H1[08;48H(1) x = 1 + -------[09;61H1 + x[11;40HReplacing x by (1) in (n), we get[13;56H1[14;39H(n+1) x = 1 + -------------------[15;61H1[16;54H2 + --------------[17;66H1[18;59H2 + ---------[19;70H1[20;64H2 + -----[21;69H...[1;1H Ppp[ 50, 50]v[300,300]p[300, 50]v[300,300]v[ 50,300]v[ 50, 50]v[300, 50]p[300, 50]C(A45C)[ 50, 50]v[300,154]p[ 50, 50]t"A"p[300, 50]t"B"p[290,320]t"C"p[ 50,320]t"D"p[217,247]T"E"p[310,164]t"H" p[373,227]v[300,300]v[227,227]v[300,154]v[373,227]C(A45C)[300,154]v[343,257] p[343,300]v[300,300]v[300,257]v[343,257]v[343,300]C(A45C)[343,257]v[325,300] p[313,313]v[300,300]v[313,287]v[325,300]v[313,313]C(A45C)[325,300]v[307,307] p[300,307]v[300,300]v[307,300]v[307,307]v[300,307]C(A45C)[307,307]v[300,304] p[298,302]v[300,300]v[302,302]v[300,304]v[298,302]C(A45C)[300,304]v[299,301] p[299,300]v[300,300]v[300,301]v[299,301]v[299,300]C(A45C)[299,301]v[299,300] p[300,300]v[300,300]v[300,300]v[299,300]v[300,300]C(A45C)[299,300]v[300,300]\ | |||||
1334.11 | The Newton iteration formula | DECWET::BISHOP | Avery: father, hacker, brewer, Japanese speaker | Tue Nov 20 1990 19:14 | 41 |
Re .8:

>It looks to me as though the standard iteration formula
>
>    x[n+1] = (x[n]+2/x[n])/2
>
>is even faster, since it skips over several of the approximants along the
>way: starting with x[0]=1, it produces 3/2, 17/12, 577/408, ....

This is the Newton formula for iterating to an approximate solution of
f(x) = 0, for differentiable f():

    (#)  x[n+1] = x[n] - f(x[n])/f'(x[n])

With f(x) = x^2 - A for a constant A, (#) becomes
x[n+1] = (x[n] + A/x[n])/2.

Given a sufficiently accurate first approximation x[0], Newton's formula
converges quadratically, i.e., the error term e[n] = x[n] - x[n-1]
satisfies |e[n+1]| < K e[n]^2, for some constant K. This means that the
number of digits of accuracy in the approximation essentially doubles
with each iteration. Faster converging algorithms are hard to come by.

There are a couple of ways to derive (#). The most intuitive is to draw
the tangent to the function f() at the point (x[n], f(x[n])). Then
x[n+1] is the intersection of this tangent line with the x axis.

However, another derivation of (#), as a solution to a certain fixed
point problem g(x) = x, is also instructive in that it explains why (#)
converges so fast. I can't remember what you use for g(x) to get (#),
but it turns out that g(x) is flat (g' ~ 0) near the fixed point a
(i.e., a satisfies g(a) = a). It is easy to convince yourself
graphically that the traditional approach of iterating x[n+1] = g(x[n])
converges very fast to a for a function with the property g'(a) = 0. As
I recall, this approach also yields some conditions under which (#) is
guaranteed to converge.

Avery
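A minimal sketch of the general iteration (#), specialized back to the square-root case to confirm it reduces to the formula in .8:

```python
# General Newton iteration: x[n+1] = x[n] - f(x[n]) / f'(x[n]).
def newton(f, fprime, x0, steps):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# With f(x) = x^2 - 2 this is exactly x -> (x + 2/x)/2:
r = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 6)
print(r)
```

Six iterations from x[0] = 1 already reach full double precision, consistent with the digit-doubling behavior described above.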
1334.12 | GUESS::DERAMO | Dan D'Eramo | Tue Nov 20 1990 19:36 | 13 | |
>>This is the Newton formula for iterating to an approximate solution of f(x)=0,
>>for differentiable f():
>>   .
>>   .
>>   .
>>Given a sufficiently accurate first approximation x[0], Newton's formula
>>converges quadratically, i.e., the error term e[n] = x[n] - x[n-1] satisfies

Clarification ... The statement about quadratic convergence is for this
particular f.

Dan
1334.13 | Not so! | DECWET::BISHOP | Yakudoshi! | Tue Nov 20 1990 21:21 | 31 |
No, unless my memory has been zapped by jet lag, the theorem is of the
form:

    Given that f() satisfies blah blah blah, then there exists an open
    set S containing a such that (#) converges quadratically for every
    initial guess x[0] in S.

It is only coincidence that this function contains x^2. Quadratic
convergence works for any suitable f. Of course, the conditions "blah,
blah, blah", which I can't recall right now, are the crux of the
problem. In the worst case, if the conditions were "blah, blah, blah" =
"f(x) = x^2 - A", then you would be right! In this case it turns out
that S is the positive x axis.

Seriously, I believe one of the conditions is that f'(a) <> 0. Even if
this condition is not met, (#) will usually still converge, albeit
slowly, if f'(x) <> 0 in some region about a. If f'(a) = 0, then there
is a higher order iteration of the form (#), but with a correction term
containing f''(x[n]), such that it converges quadratically. This can be
generalized for any f() with f^m(a) = 0 for m = 1, 2, ..., p-1 and
f^p(a) <> 0, where f^k() means the kth derivative of f(). In fact,
these higher order methods work even if the lower order derivatives do
not disappear, but the gain in convergence speed is not worth the extra
effort per iteration.

-fab
1334.14 | ??? | CHOVAX::YOUNG | Operation Imminent Blunder. | Tue Nov 20 1990 23:50 | 5 |
Well, I for one would like to hear what the "blah, blah, blah"
conditions are, since there are some surprisingly simple (and normal)
functions for which Newtonian Iteration will not converge *at all*.

--  Barry
1334.15 | Cayley problem for complex z | ALLVAX::JROTH | from Saturday alley up to Sunday Street | Wed Nov 21 1990 00:52 | 5 |
This is known as the "Cayley problem", and the basins of attraction to the two roots of a quadratic in the complex plane have boundaries that are a self-similar fractal set. - Jim | |||||
1334.16 | CHOSRV::YOUNG | Operation Imminent Blunder. | Wed Nov 21 1990 10:35 | 14 | |
No, that's not what I was talking about, Jim.

Quadratics with 2 roots have *some* starting places (ie. "sufficiently
close") that will converge on one or the other root. But there are
other, quite normal functions, that have *no* convergent starting
points, other than the root itself. Take for instance:

    Y = X^(1/3)

Other than the root itself (X[0] = 0), all starting points are
*divergent*.

--  Barry
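The divergence is easy to see in closed form: for f(x) = x^(1/3), a Newton step is x - f(x)/f'(x) = x - x^(1/3) / ((1/3) x^(-2/3)) = x - 3x = -2x, so every nonzero start moves away from the root (a sketch of that calculation):

```python
# Newton's step for f(x) = x^(1/3) works out symbolically to -2x,
# so the iterates double in magnitude and alternate in sign.
def newton_step_cuberoot(x):
    return -2 * x   # x - x^(1/3) / ((1/3) * x^(-2/3)) = x - 3x

x = 1.0
for _ in range(5):
    x = newton_step_cuberoot(x)
print(x)   # -32.0: five steps from 1.0, diverging from the root at 0
```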
1334.17 | GUESS::DERAMO | Dan D'Eramo | Wed Nov 21 1990 12:04 | 8 | |
>> Other than the root itself (X[0] = 0) all starting points are
>> *divergent*.

... and I vaguely recall one example where the iterations cycled
(x[0] = x[2] = x[4] = ... and x[1] = x[3] = x[5] = ...).

Dan
1334.18 | slightly more info on the blah, blah, blah | CSSE::NEILSEN | I used to be PULSAR::WALLY | Wed Nov 21 1990 12:34 | 22 |
I once knew these conditions, although I don't think I ever fully
understood them. You can find them in any good book on numerical
analysis. Unfortunately I don't have one in my office.

Basically, the condition is that higher derivatives have to be bounded.
I don't remember whether the bound applies to the next derivative or to
all higher derivatives. The precise statement is something like this:

    If the second derivative of f is bounded by M, then there exists an
    interval of size e(M) containing the root in which Newton's method
    is bound to converge.

There is a way to compute e(M) given M.

Note that in the .16 example, y = x^(1/3), the higher derivatives are
unbounded.

The example in .17, if I am thinking of the same case, is one where
starting points outside the interval lead to cycling, but starting
points within the interval still converge.

There are other pathological cases which do not contradict the theorem
above. In one common nasty variety, starting points near the interval
lead to rapid divergence. I have even seen cases where exact
calculations lead to convergence but double precision approximations
diverge.
1334.19 | CHOSRV::YOUNG | Operation Imminent Blunder. | Wed Nov 21 1990 13:35 | 14 | |
Re .18:

Hmmm, that may be a sufficient condition, but it certainly is not a
*necessary* one. For instance, all functions of the form:

    Y = X^n

Where 'n' is an integer > 3, fail this condition, but nonetheless will
converge *very* rapidly with Newtonian Iteration.

--  Barry
1334.20 | another detail or two | CSSE::NEILSEN | I used to be PULSAR::WALLY | Mon Nov 26 1990 12:51 | 30 |
Since I wrote .18, I remembered that the derivative(s) must be bounded
within a circle on the complex plane centered on the root. The
'complexity' can cause some surprises for the unwary. A function which
is well-behaved on the real line can fail to converge because it has a
singularity off the real line but near the root. Numerical analysis
texts usually give an example of this.

.19 is correct that this is a sufficient condition and not a necessary
one. Sufficient conditions are much more interesting in numerical
analysis, since they tell us when a procedure is reasonably safe. To
show that Newton's method is not safe when applied to a general cubic,
apply it to f(x) = x^3 - 3 and start with the guess x = 1.

.19> For instance, all functions of the form:
   >
   >     Y = X^n
   >
   > Where 'n' is an integer > 3, fail this condition, but nonetheless
   > will converge *very* rapidly with Newtonian Iteration.

When I apply Newton's method to this, I get

    x[i+1] = x[i] * (n-1)/n

which is very slow for large n.
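That slowdown is easy to confirm numerically: for f(x) = x^n the Newton step x - x^n/(n*x^(n-1)) collapses to x*(n-1)/n, so the error shrinks only geometrically with ratio (n-1)/n, not quadratically (a sketch with n = 10):

```python
# Newton on f(x) = x^n: each step multiplies x by (n-1)/n.
def newton_step_power(x, n):
    return x - x ** n / (n * x ** (n - 1))

x, n = 1.0, 10
for _ in range(3):
    x = newton_step_power(x, n)
print(x)   # approximately 0.729 = 0.9^3
```

After three steps the iterate has only moved from 1.0 to about 0.729, still far from the root at 0.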
1334.21 | DECWET::BISHOP | Yakudoshi! | Mon Nov 26 1990 14:07 | 13 | |
Re .20:

>When I apply Newton's method to this, I get
>
>    x[i+1] = x[i] * (n-1)/n
>
>which is very slow for large n.

This is exactly the situation I was talking about in .13, where
f'(a) = 0. It will still sometimes converge, but to get quadratic
convergence you have to go to a higher order method with terms in
f''(x), f'''(x), etc.

Avery
1334.22 | don't count and don't test | TOOK::CBRADLEY | Chuck Bradley | Mon Nov 26 1990 15:47 | 10 |
Back about 1964-66, a clever person at CDC found a good way to take
advantage of the rapid convergence of Newton's method for the math
library. He chose the starting point to be a floating point number with
a constant fraction and an exponent that is a one-bit shift from the
exponent of the argument. Then two(?) single precision iterations and
three(?) double precision iterations gave the answer. No loop counts,
and no convergence tests.

The trick is common now. The VMS RTL separates the cases for even and
odd exponents and does 2, 3, or 5 iterations for single, double, and H
formats. It's good math and good programming.
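A minimal Python sketch of the same idea for IEEE doubles (not the actual CDC or VMS RTL code): halve the exponent to get the starting guess, then run a fixed number of Newton steps with no convergence test. Six steps is an assumption chosen to cover the worst-case initial error for 53-bit doubles:

```python
# Fixed-iteration square root: exponent-shift starting guess plus a
# constant number of Newton steps, no loop counts and no convergence test.
import math

def fixed_sqrt(a):
    m, e = math.frexp(a)          # a = m * 2**e, with 0.5 <= m < 1
    x = math.ldexp(1.0, e // 2)   # initial guess: exponent shifted right
    for _ in range(6):            # fixed count, chosen for double precision
        x = 0.5 * (x + a / x)
    return x

print(fixed_sqrt(2.0))
```

The starting guess is within a factor of about 2 of the true root, and six quadratically-converging steps from there are enough to reach full double precision.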
1334.23 | No need to test convergence in special cases | DECWET::BISHOP | Yakudoshi! | Tue Nov 27 1990 08:14 | 8 |
If you know things like the precision of your data type and maybe characteristics of the function, you can figure pretty accurately how many iterations are needed. It's an obvious simplification. The question is, what function(s) did/do they use this on? SQRT? others? Avery | |||||
1334.24 | VMSDEV::HALLYB | The Smart Money was on Goliath | Tue Nov 27 1990 16:16 | 5 | |
I would also think you could speed things up via a table lookup based
on the most significant bits of the mantissa.

While we're in the area, I recall reading of a trip to Lucasfilms where
one of their concerns was rapid calculation of 1/SQRT(x^2 + y^2)
1334.25 | only square root | TOOK::CBRADLEY | Chuck Bradley | Wed Nov 28 1990 12:27 | 8 |
re .23: square root was all I recall hearing about.

re .24: at Lucasfilms, 1/sqrt(x^2+y^2) would probably be called often
with x and y close to the previous value. For that case, a calculation
based on a series expansion around the previous point could be used n
times before going back to the stated formula. n is not a constant, but
if enough is known about the step size and the domain of interest, it
could be treated as a constant.