mersenneforum.org > Great Internet Mersenne Prime Search > Math
Old 2004-10-11, 17:26   #34
mfgoode
Bronze Medalist
Jan 2004
Mumbai, India
2²·3³·19 Posts

General Solution to Polynomial Equations


Quote:
Originally Posted by jinydu
Now I think I get it. We say that linear convergence is a bad rate of convergence because naive guessing also leads to a linear rate of convergence.

Suppose I know the root to n decimal places, and want to increase the accuracy to n+1 decimal places. I can simply guess that the next decimal place is 5, thus cutting the highest possible error value in half. This process of halving can be repeated indefinitely, so that it takes log (base 2, 10), or about 3.32193 steps to add a single decimal place. Therefore, it's a linear rate of convergence.

If Newton's Method does no better than this simplistic method, we can say that it performs poorly.
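The halving argument quoted above can be checked numerically. A minimal sketch (the bracket [1, 2] and the target function x^2 - 2 are illustrative choices, not from the thread):

```python
import math

# Bisection for f(x) = x^2 - 2 on the bracket [1, 2].  The bracket halves
# each step, so gaining one more decimal digit of accuracy costs about
# log2(10) ~ 3.32 steps -- exactly the "naive guessing" rate described above.
def bisect_steps_for_digits(digits):
    lo, hi = 1.0, 2.0
    steps = 0
    while hi - lo > 10.0 ** (-digits):
        mid = (lo + hi) / 2
        if mid * mid - 2 < 0:
            lo = mid
        else:
            hi = mid
        steps += 1
    return steps

# Average extra steps needed per additional decimal digit:
per_digit = (bisect_steps_for_digits(12) - bisect_steps_for_digits(2)) / 10
print(per_digit, math.log2(10))  # both close to 3.32
```

The constant step count per digit is what makes this a linear rate of convergence.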



You are merely using the bisection method, which is based on common sense!
The false position method is old, but it is superior to the bisection method.
The Newton-Raphson method is better still; it is based on calculus and the Taylor series. It requires only one starting value, and the slope of the tangent must be known.
Try to get someone to give you a graphical interpretation. You will then have more confidence in the numerical analysis.

Incidentally, it has been mathematically proved that the Newton-Raphson method is superior to the others.
There are other methods, such as Fourier's, Sturm's, and Horner's, but each has its advantages and disadvantages.
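For comparison, a minimal sketch of the Newton-Raphson iteration described above (the test function f(x) = x^2 - 2 is an illustrative choice):

```python
import math

# Newton-Raphson for f(x) = x^2 - 2: x_{k+1} = x_k - f(x_k)/f'(x_k).
# It needs only one starting value plus the tangent slope f'(x) = 2x.
def newton_sqrt2(x0, iters):
    x = x0
    for _ in range(iters):
        x = x - (x * x - 2) / (2 * x)
    return x

# Quadratic convergence: the number of correct digits roughly doubles each
# iteration, versus ~3.32 bisection steps per single digit.
for k in range(1, 6):
    print(k, abs(newton_sqrt2(2.0, k) - math.sqrt(2)))
```

The shrinking errors make the graphical picture concrete: each tangent line lands dramatically closer to the root than the last.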
Mally

Old 2005-03-30, 23:03   #35
jinydu
Dec 2003
Hopefully Near M48
3336₈ Posts

Well, I've now taken a course called "Integration and Infinite Series" at university, so I am hopefully somewhat better prepared to look at this.

I tried my hand at using a quadratic approximation for the function, then using that as a starting point for deriving new methods that converge faster than Newton's Method:

http://www.sosmath.com/CBB/viewtopic.php?t=14686
Old 2005-04-01, 05:59   #36
ewmayer
2ω=0
Sep 2002
República de California
11647₁₀ Posts

Quote:
Originally Posted by jinydu
Well, I've now taken a course called "Integration and Infinite Series" at university; so I am hopefully somewhat better prepared to look at this.

I tried my hand at using a quadratic approximation for the function, then using that as a starting point for deriving new methods that converge faster than Newton's Method:

http://www.sosmath.com/CBB/viewtopic.php?t=14686
As you state in that thread, that method has the drawback of requiring the taking of square roots. It also doesn't generalize well to higher convergence orders. But it sounds like you're having fun!
Old 2005-04-01, 20:33   #37
jinydu
Dec 2003
Hopefully Near M48
2×3×293 Posts

Quote:
Originally Posted by ewmayer
As you state in that thread, that method has the drawback of requiring the taking of square roots. It also doesn't generalize well to higher convergence orders. But it sounds like you're having fun!
Thanks for the feedback.

I realize that although the method involving a square root converges fastest, it is difficult to apply precisely because of the square roots. Nevertheless, I think it's interesting, because I had never before heard of a method that could approximate solutions using nested square roots. I guess the other two methods (Halley's rational method, and the one I derived myself) address that problem by using only rational operations, although at the expense of accuracy.

However, I still don't understand how to generalize the procedure so as to derive a method with convergence of order n. The method I used could be generalized further using Taylor polynomials of degree 3 and 4, then using the Binomial Theorem to get rid of the radicals. However, it would have to stop there due to Abel's Impossibility Theorem. How then can I get convergence faster than fifth order (corresponding to a degree 4 polynomial)?

Also, there is another thing I don't quite understand. The method I derived (the one which looks similar to Newton's Method, but with an extra "correction" term) has a 2(f'(x))^2 in the denominator. Intuitively, I would expect this method to fare horribly (far, far worse than Newton's Method) when searching for a double root, where f(x) and f'(x) are near zero but f''(x) is not. In the case of the original Newton's Method, I can see why it converges to the correct value of the root, since, in the case of a double root, f(x) approaches 0 faster than f'(x). But although f(x) may approach 0 faster than f'(x), surely it should approach zero slower than (f'(x))^2, causing my method to diverge. However, this doesn't seem to be the case when I test specific examples in Mathematica. Instead, the method I derived seems not only to converge, but to converge faster than Newton's Method, albeit still quite slowly. Why is that?
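As a numerical check of this observation, here is a sketch (assuming f(x) = x^2 as the double-root test case, and the corrected-Newton update form discussed in this thread):

```python
# Double-root check with f(x) = x^2 (double root at x = 0, f'' nonzero).
# Newton:    x - f/f' = x/2                     -> error ratio 1/2 per step
# Corrected: x - (f/f')*(1 + f*f''/(2*f'^2))
#   Here f*f''/(2*f'^2) = 1/4, a bounded constant rather than a blow-up,
#   so the step is x - (x/2)*(5/4) = (3/8)*x: still linear, but faster.
def step_newton(x):
    return x - (x * x) / (2 * x)

def step_corrected(x):
    f, fp, fpp = x * x, 2 * x, 2.0
    return x - (f / fp) * (1 + f * fpp / (2 * fp * fp))

xn = xc = 1.0
for _ in range(20):
    xn, xc = step_newton(xn), step_corrected(xc)
print(xn, xc)  # the corrected iterate is much closer to the root 0
```

The key is that f/(f')^2 stays bounded near a double root, so the correction term does not diverge; it merely over-relaxes the Newton step.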

Last fiddled with by jinydu on 2005-04-01 at 20:39
Old 2005-04-02, 20:24   #38
ewmayer
2ω=0
Sep 2002
República de California
19×613 Posts

Quote:
Originally Posted by jinydu
I realize that although the method involving a square root converges fastest, it is difficult to apply precisely because of the square roots.
What do you mean by "fastest"? If it's 3rd-order it will generally converge faster than a 2nd-order method like Newton's, yes, but it wouldn't be faster than any other 3rd-order method (e.g. Halley's), at least in an asymptotic sense.

Quote:
Nevertheless, I think its interesting because I had never heard of a method before that could approximate solutions using nested square roots. I guess the other two methods (Halley's Rational Method, and the one I derived myself) address that problem by using only rational operations, although at the expense of accuracy.
Again, why do methods like Halley's supposedly sacrifice accuracy? How many iterations did you do using your test function(s)? Did you do enough (and use high-enough-precision arithmetic) to really be able to compare the asymptotic behavior of the various methods?

Quote:
However, I still don't understand how to generalize the procedure so as to derive a method with convergence of order n. The method I used could be generalized further using Taylor polynomials of degree 3 and 4, then using the Binomial Theorem to get rid of the radicals. However, it would have to stop there due to Abel's Impossibility Theorem. How then can I get convergence faster than fifth order (corresponding to a degree 4 polynomial)?
As I said already, the methods based on Taylor series and solution of ever-higher-order polynomial equations simply don't generalize to arbitrarily high order. More on alternatives below...

Quote:
Also, there is another thing I don't quite understand. The method I derived (the one which looks similar to Newton's Method, but with an extra "correction" term) has a 2(f'(x))^2 in the denominator. Intuitively, I would expect that this method should fare horribly (far, far worse than Newton's Method) when searching for a double root, where f(x) and f'(x) are near zero, but f''(x) is not. In the case of the original Newton's Method, I can see why it converges to the correct value of the root; since, in the case of a double root, f(x) approaches 0 faster than f'(x). But although f(x) may approach 0 faster than f'(x); surely, it should approach zero slower than (f'(x))^2, causing my method to diverge. However, this doesn't seem to be the case when I test out specific examples using Mathematica. Instead, the method I derived seems to not only converge, but converge faster than Newton's Method, albeit still quite slowly. Why is that?
The 3rd-order method you derived starting with the Taylor series is
Code:
                           f    /     f * f''  \
             x_k+1 = x_k - -- * | 1 + -------- |
                           f'   \     2*(f')^2 /
If the function f gets closer and closer to being linear, f'' vanishes and this reduces to Newton's method, as expected. Note that for a linear function with nonzero slope, Newton's method converges exactly in a single iteration, i.e. it cannot be improved upon; thus any higher-order method must necessarily reduce to Newton's method in the linear case. (I mention this because it leads to a general approach for deriving higher-order schemes which *does* generalize to arbitrary order, unlike the methods beginning with the Taylor series.)
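A direct implementation of this 3rd-order scheme, as a sketch (the test function f(x) = x^3 - 2, with root 2^(1/3), is an illustrative choice):

```python
# The 3rd-order scheme above, implemented literally.
def f(x):   return x**3 - 2
def fp(x):  return 3 * x**2
def fpp(x): return 6 * x

def third_order_step(x):
    return x - (f(x) / fp(x)) * (1 + f(x) * fpp(x) / (2 * fp(x)**2))

root = 2 ** (1.0 / 3.0)
x = 1.5
for k in range(1, 5):
    x = third_order_step(x)
    print(k, abs(x - root))  # error shrinks cubically, reaching machine
                             # precision within a few iterations
```

For a simple root like this one, the error exponent roughly triples per step, versus doubling for plain Newton.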

Now consider your method for the simple parabola, f = x^2. This is a case where Newton's method converges only linearly, since the root at x = 0 is a point of zero slope. For this f(x), the term inside the large parentheses in your method reduces to 1 + f*f''/[2*(f')^2] = 1 + x^2/[4*x^2] = 5/4, and we see that the method reduces to

Code:
                           f    5
             x_k+1 = x_k - -- * - .
                           f'   4
This is basically Newton's method with the delta-x term (f/f') multiplied by a relaxation factor of 5/4, what one would call an over-relaxed Newton's method (a multiplier < 1 is under-relaxed, > 1 is over-relaxed). Since for this kind of root Newton's method consistently under-predicts the needed delta-x term, your over-relaxed scheme reduces that underprediction, hence your method converges faster, albeit still just linearly. In fact, it's easy to show that for this particular example we really want a relaxation factor of 2, since that would allow the resulting method to converge exactly in a single iteration (delta x = -2*f/f' = -x). By way of comparison, note that Halley's method gives a relaxation factor of 4/3 here, so it also over-relaxes in the right direction, albeit also not far enough. (I expect the original scheme you got from solving a quadratic, the one with the square root term, should converge in just one step here, since f(x) = x^2 is itself a quadratic.)
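The relaxation-factor comparison can be sketched in a few lines (the factors tried are the ones discussed above):

```python
# Relaxed Newton on f(x) = x^2: x_{k+1} = x_k - w*(f/f') = (1 - w/2)*x_k,
# so the per-step error ratio toward the root 0 is exactly 1 - w/2.
#   w = 1   -> ratio 1/2 (plain Newton)
#   w = 5/4 -> ratio 3/8 (the derived method's effective factor here)
#   w = 4/3 -> ratio 1/3 (Halley's effective factor here)
#   w = 2   -> ratio 0, i.e. exact in a single step
def relaxed_step(x, w):
    return x - w * (x * x) / (2 * x)  # f/f' = x/2 for f = x^2

for w in (1.0, 5.0 / 4.0, 4.0 / 3.0, 2.0):
    print(w, relaxed_step(1.0, w))
```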

Now, getting back to methods that generalize to arbitrarily high order: the Gerlach-style methods (which include Halley's as the special case at 3rd order) start with the aforementioned observation that Newton's method converges in a single iteration if the function in question is linear. Most functions whose roots we want numerically are of course not linear, but one can contemplate multiplying f(x) by an auxiliary function g(x) that makes the resulting product *look* linear in a neighborhood of interest. Assuming one can find a suitable multiplier function g(x), apply Newton's method to the product f(x)*g(x), and away you go.

You should try to find a copy of Gerlach's short paper in the "Classroom Notes" section of SIAM Review 36, 272-276, 1994 at your local university library for details and numerical examples. The method (although sans much of the motivation) is also discussed in Richard Crandall's small book "Topics in Advanced Scientific Computation" (Telos/Springer, 1997); it's really more of a monograph in style, but touches on many topics (a multigraph, perhaps?). The list price is outrageous for this book, but you can find cheap used copies, e.g. via the Amazon.com reseller links; I see several used for less than $10.
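The idea at 3rd order can be sketched numerically: applying Newton's method to F(x) = f(x)*g(x) with the multiplier g(x) = 1/sqrt(f'(x)) reproduces Halley's update (f(x) = x^3 - 2 is an illustrative test function; the choice of g is the standard 3rd-order case):

```python
import math

# Newton applied to F(x) = f(x)/sqrt(f'(x)) is algebraically identical to
# Halley's method, x_{k+1} = x_k - 2*f*f' / (2*f'^2 - f*f'').
def f(x):   return x**3 - 2
def fp(x):  return 3 * x**2
def fpp(x): return 6 * x

def halley_step(x):
    return x - 2 * f(x) * fp(x) / (2 * fp(x)**2 - f(x) * fpp(x))

def newton_on_F_step(x):
    F  = f(x) / math.sqrt(fp(x))
    # F' = (f' - f*f''/(2*f')) / sqrt(f'), by the product and chain rules.
    Fp = (fp(x) - f(x) * fpp(x) / (2 * fp(x))) / math.sqrt(fp(x))
    return x - F / Fp

print(halley_step(1.5), newton_on_F_step(1.5))  # identical updates
```

Higher-order multipliers g(x) extend the same recipe without ever solving a higher-degree polynomial, which is how the approach sidesteps the Abel obstruction mentioned earlier in the thread.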

Last fiddled with by ewmayer on 2007-02-15 at 21:11 Reason: changed [pre] to [code] to restore alignment of eqns