#34
mfgoode
Bronze Medalist
Jan 2004
Mumbai, India
2²·3³·19 Posts
Quote:
Originally Posted by jinydu
Now I think I get it. We say that linear convergence is a bad rate of convergence because naive guessing also leads to a linear rate of convergence. Suppose I know the root to n decimal places, and want to increase the accuracy to n+1 decimal places. I can simply guess that the next decimal place is 5, thus cutting the highest possible error value in half. This process of halving can be repeated indefinitely, so it takes log base 2 of 10, or about 3.32193, steps to add a single decimal place. Therefore, it's a linear rate of convergence. If Newton's Method does no better than this simplistic method, we can say that it performs poorly.
Unquote.

You are merely using the bisection method, which is based on common sense! The false position method is old but superior to the bisection method. The Newton-Raphson method is better still, being based on calculus and the Taylor series. It requires only one starting value, and the slope of the tangent must be known. Try to get someone to give you a graphical interpretation; you will then have more confidence in the numerical analysis.

Incidentally, it has been mathematically proved that the Newton-Raphson method is superior to the others. There are other methods, such as Fourier's, Sturm's, and Horner's, but each has its advantages and disadvantages.

Mally
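The two rates Mally contrasts are easy to see side by side in a small experiment; the test function f(x) = x² - 2 and the iteration counts below are my choices for illustration, not from the thread:

```python
def f(x):
    return x * x - 2.0        # root at sqrt(2); test function chosen for illustration

def df(x):
    return 2.0 * x

# Bisection: the bracketing interval halves each step, so the error
# shrinks linearly -- one new bit of accuracy per iteration.
lo, hi = 1.0, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
bisection_root = 0.5 * (lo + hi)

# Newton-Raphson: follow the tangent line down to the x-axis; near a
# simple root the number of correct digits roughly doubles per step,
# so 6 iterations here do what 50 bisections do.
newton_root = 2.0
for _ in range(6):
    newton_root -= f(newton_root) / df(newton_root)

print(bisection_root, newton_root)   # both approach sqrt(2) ~= 1.41421356...
```

Note the trade-off Mally alludes to: bisection needs only a sign change, while Newton-Raphson needs the derivative (the tangent slope) but converges far faster.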
#35
jinydu
Dec 2003
Hopefully Near M48
3336₈ Posts
Well, I've now taken a course called "Integration and Infinite Series" at university, so I am hopefully somewhat better prepared to look at this.

I tried my hand at using a quadratic approximation for the function, then using that as a starting point for deriving new methods that converge faster than Newton's Method: http://www.sosmath.com/CBB/viewtopic.php?t=14686
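For readers who don't follow the link, the quadratic-approximation idea can be sketched as follows (my reconstruction, not necessarily the exact form in the linked post, and the test function is mine): truncate the Taylor series at degree 2 and solve for the step with the quadratic formula, which gives third-order convergence near a simple root.

```python
import math

def f(x):   return x**3 - 2.0     # root at 2**(1/3); test function chosen for illustration
def df(x):  return 3.0 * x**2
def d2f(x): return 6.0 * x

x = 1.5
for _ in range(5):
    fx, d1, d2 = f(x), df(x), d2f(x)
    # Solve f + f'*h + (f''/2)*h^2 = 0 for the step h, keeping the
    # root of smaller magnitude (the numerically stable form of the
    # quadratic formula, with the sign matched to f').
    disc = d1 * d1 - 2.0 * fx * d2
    x -= 2.0 * fx / (d1 + math.copysign(math.sqrt(disc), d1))

print(x)    # approaches 2**(1/3) ~= 1.25992105...
```

This is the "method involving a square root" flavor discussed later in the thread: very fast per step, but each step costs a square root, and the discriminant must stay non-negative.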
#36
ewmayer (∂²ω=0)
Sep 2002
República de California
11647₁₀ Posts
#37
jinydu
Dec 2003
Hopefully Near M48
2×3×293 Posts
I realize that although the method involving a square root converges fastest, it is difficult to apply precisely because of the square roots. Nevertheless, I think it's interesting, because I had never heard of a method before that could approximate solutions using nested square roots. I guess the other two methods (Halley's Rational Method, and the one I derived myself) address that problem by using only rational operations, although at the expense of accuracy.

However, I still don't understand how to generalize the procedure so as to derive a method with convergence of order n. The method I used could be generalized further using Taylor polynomials of degree 3 and 4, then using the Binomial Theorem to get rid of the radicals. However, it would have to stop there due to Abel's Impossibility Theorem. How then can I get convergence faster than fifth order (corresponding to a degree 4 polynomial)?

Also, there is another thing I don't quite understand. The method I derived (the one which looks similar to Newton's Method, but with an extra "correction" term) has a 2(f'(x))^2 in the denominator. Intuitively, I would expect this method to fare horribly (far, far worse than Newton's Method) when searching for a double root, where f(x) and f'(x) are near zero but f''(x) is not. In the case of the original Newton's Method, I can see why it converges to the correct value of the root, since in the case of a double root, f(x) approaches 0 faster than f'(x). But although f(x) may approach 0 faster than f'(x), surely it should approach zero slower than (f'(x))^2, causing my method to diverge. However, this doesn't seem to be the case when I test specific examples in Mathematica. Instead, the method I derived seems not only to converge, but to converge faster than Newton's Method, albeit still quite slowly. Why is that?

Last fiddled with by jinydu on 2005-04-01 at 20:39
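jinydu's puzzle can be checked with a small experiment (the double-root test function f(x) = (x-1)² is my choice): at a double root the correction term f·f''/(2(f')²) does not blow up at all; it tends to the constant 1/4, so the corrected step contracts the error by 3/8 per iteration versus Newton's 1/2. Both rates are linear, but the corrected method is indeed the faster of the two, matching the Mathematica observation.

```python
def f(x):   return (x - 1.0) ** 2      # double root at x = 1 (illustrative choice)
def df(x):  return 2.0 * (x - 1.0)
def d2f(x): return 2.0

def newton_step(x):
    return x - f(x) / df(x)

def corrected_step(x):
    # Newton's step scaled by the (1 + f*f''/(2*(f')^2)) correction;
    # for f = (x-1)^2 the bracketed factor is exactly 1 + 1/4 = 5/4,
    # so the error is multiplied by 1 - 5/8 = 3/8 each iteration.
    return x - f(x) / df(x) * (1.0 + f(x) * d2f(x) / (2.0 * df(x) ** 2))

xn = xc = 2.0
for _ in range(20):
    xn = newton_step(xn)
    xc = corrected_step(xc)

# Newton halves the error each step; the corrected method multiplies
# it by 3/8, so after 20 steps it is well ahead -- though both are
# only linear here, far short of Newton's usual quadratic rate.
print(abs(xn - 1.0), abs(xc - 1.0))
```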
#38
ewmayer (∂²ω=0)
Sep 2002
República de California
19×613 Posts
Code:
                  f     /      f * f''   \
x_{k+1} = x_k - ---- * | 1 + ----------- |
                  f'    \     2*(f')^2   /
Now consider your method for the simple parabola, f = x^2. This is a case where Newton's method converges only linearly, since the root at x = 0 is a point of zero slope. For this f(x), the term inside the large parentheses in your method reduces to 1 + f*f''/[2*(f')^2] = 1 + x^2/[4*x^2] = 5/4, and we see that the method reduces to
Code:
                  f     5
x_{k+1} = x_k - ---- * --- .
                  f'    4
Now, getting back to methods that generalize to arbitrarily high order: the Gerlach-style methods (which include Halley's as the special case at 3rd order) start with the aforementioned observation that Newton's method converges in a single iteration if the function in question is linear. Most functions whose roots we want to find numerically are by definition not linear, but one can contemplate multiplying f(x) by an auxiliary function g(x) that makes the resulting product *look* linear in a neighborhood of interest. Assuming one can find a suitable multiplier function g(x), apply Newton's method to the product f(x)*g(x), and away you go.

You should try to find a copy of Gerlach's short paper in the "classroom notes" section of SIAM Review 36, 272-276, 1994 at your local university library for details and numerical examples. The method (although sans much of the motivation) is also discussed in Richard Crandall's small book "Topics in Advanced Scientific Computation" (Telos/Springer, 1997) - it's really more of a monograph in style, but touches on many topics - a multigraph, perhaps? The list price is outrageous for this book, but you can find cheap used copies, e.g. via the Amazon.com reseller links - I see several used for less than $10.
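A concrete instance of the multiplier idea (a sketch only; the test function is my choice): taking g(x) = 1/sqrt(f'(x)) and applying Newton's method to f·g collapses, after simplification, to the classical Halley iteration, which converges with third order near a simple root.

```python
def f(x):   return x**3 - 2.0       # root at 2**(1/3); test function chosen for illustration
def df(x):  return 3.0 * x**2
def d2f(x): return 6.0 * x

def halley_step(x):
    # Newton's method applied to F = f / sqrt(f').  Differentiating F
    # and simplifying the Newton step F/F' yields the classical Halley
    # update below; the 1/sqrt(f') multiplier is what makes the product
    # "look linear" near a simple root.
    fx, d1, d2 = f(x), df(x), d2f(x)
    return x - 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2)

x = 1.5
for _ in range(4):
    x = halley_step(x)

print(x)    # approaches 2**(1/3) ~= 1.25992105...
```

Higher-order members of the family come from iterating the same construction, which is what Gerlach's paper works out; the sketch above is only the 3rd-order case.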
Last fiddled with by ewmayer on 2007-02-15 at 21:11 Reason: changed [pre] to [code] to restore alignment of eqns
Similar Threads

Thread | Thread Starter | Forum | Replies | Last Post
My solution for the problem of microtransactions and media in general | jasong | jasong | 21 | 2019-08-19 14:59
Polynomial Discriminant is n^k for an n-1 degree polynomial | carpetpool | Miscellaneous Math | 14 | 2017-02-18 19:46
solving 2nd order differential equations | Joshua2 | Homework Help | 9 | 2009-10-30 07:37
Navier-Stocks equations | mfgoode | Math | 1 | 2006-10-09 16:02
Equality of quadratic equations | dsouza123 | Math | 2 | 2004-07-30 09:03