
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Puzzles (https://www.mersenneforum.org/forumdisplay.php?f=18)
-   -   Algebra problem that I've been trying to solve (https://www.mersenneforum.org/showthread.php?t=19397)

Stargate38 2014-05-30 22:05

Algebra problem that I've been trying to solve
 
Does anyone have an equation for the value of x in this equation?

x^5+16*x^6+243*x^7=1

I have an approximation for the real root, but I want an exact formula for it, even if it includes an infinite series:

[code]0.446223522318958869779779205880610165343221619229686782382108709788[/code]
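As a quick sanity check (a sketch in Python, my choice of language; ordinary double precision, so only about 16 of those digits are testable), the quoted value does satisfy the equation:

```python
# Check (double precision) that the quoted decimal is a root of
# x^5 + 16x^6 + 243x^7 = 1, i.e. a zero of f(x) = 243x^7 + 16x^6 + x^5 - 1.
def f(x):
    return (243 * x**2 + 16 * x + 1) * x**5 - 1

root = 0.446223522318958869779779205880610165343221619229686782382108709788
print(abs(f(root)))  # tiny: on the order of machine epsilon
```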

Uncwilly 2014-05-30 23:22

[QUOTE=Stargate38;374632]Does anyone have an equation for the value of x in this equation?

x^5+16*x^6+243*x^7=1[/QUOTE]
Wolfram Alpha?
[url]http://www.wolframalpha.com/input/?i=x%5E5%2B16*x%5E6%2B243*x%5E7%3D1[/url]

ewmayer 2014-05-31 00:38

Offhand I would guess !∃ a closed-form expression for the single real root, but perhaps it will be of use to note that the expression can be rewritten as

x^4*(x + 2^4*x^2 + 3^5*x^3) = 1 , or alternatively

x^2*(1^3*x^3 + 2^4*x^4 + 3^5*x^5) = 1 .

(This is unhelpful in terms of directly leading to a closed-form for the real root, but perhaps this kind of equation has been previously studied in some particular mathematical context.)

science_man_88 2014-05-31 02:02

[QUOTE=Stargate38;374632]Does anyone have an equation for the value of x in this equation?

x^5+16*x^6+243*x^7=1

I have an approximation for the real root, but I want an exact formula for it, even if it includes an infinite series:

[code]0.446223522318958869779779205880610165343221619229686782382108709788[/code][/QUOTE]

best I can think of is:

x^5+16*x^6+243*x^7 = 1;
x^5+16*(x^5)^1.2+243*(x^5)^1.4 = 1;
1+16*(x^5)^0.2 + 243*(x^5)^0.4 = 1/(x^5);
16*(x^5)^0.2+243*(x^5)^0.4 = (1-x^5)/(x^5);

R.D. Silverman 2014-05-31 18:45

[QUOTE=ewmayer;374649]Offhand I would guess !∃ a closed-form expression for the single real root, but perhaps it will be of use to note that the expression can be rewritten as

x^4*(x + 2^4*x^2 + 3^5*x^3) = 1 , or alternatively

x^2*(1^3*x^3 + 2^4*x^4 + 3^5*x^5) = 1 .

(This is unhelpful in terms of directly leading to a closed-form for the real root, but perhaps this kind of equation has been previously studied in some particular mathematical context.)[/QUOTE]

Sigh. People herein still refuse to learn math or use Google.
It is nauseating. How hard can it be to look up "septic equation"?????

Hint: It is an irreducible Septic.

Yes, there is a closed form solution. But it is not algebraic and
it is not elementary. It can be solved in terms of Theta functions.

R.D. Silverman 2014-05-31 18:47

[QUOTE=R.D. Silverman;374714]Sigh. People herein still refuse to learn math or use Google.
It is nauseating.

Hint: It is an irreducible Septic.

Yes, there is a closed form solution. But it is not algebraic and
it is not elementary. It can be solved in terms of Theta functions.[/QUOTE]

Oh. It is very bad form to present a polynomial this way.

ewmayer 2014-05-31 21:40

[QUOTE=R.D. Silverman;374714]Sigh. People herein still refuse to learn math or use Google.
It is nauseating. How hard can it be to look up "septic equation"?????[/QUOTE]

I did, but got a raft of links to antibiotic creams and iodine compounds. My google search history must differ from yours, for the results we see to be so wildly different.

CRGreathouse 2014-05-31 21:59

[QUOTE=R.D. Silverman;374714]Hint: It is an irreducible Septic.

Yes, there is a closed form solution. But it is not algebraic and
it is not elementary. It can be solved in terms of Theta functions.[/QUOTE]

Not all irreducible septics are unsolvable, of course, but you're right that this one is. Its Galois group is [TEX]S_7.[/TEX]

TheMawn 2014-05-31 23:15

[QUOTE=ewmayer;374725]I did, but got a raft of links to antibiotic creams and iodine compounds. My google search history must differ from yours, for the results we see to be so wildly different.[/QUOTE]

:snicker:

R.D. Silverman 2014-05-31 23:31

[QUOTE=CRGreathouse;374727]Not all irreducible septics are unsolvable, of course, but you're right that this one is. Its Galois group is [TEX]S_7.[/TEX][/QUOTE]

C7 would have been nicer........

NBtarheel_33 2014-06-01 00:57

[QUOTE=TheMawn;374733]:snicker:[/QUOTE]

Uh, huh huh huh, uh [URL=http://www.wikipedia.org/wiki/Beavis_and_Butthead]Beavis[/URL], you said "septic", uh huh huh huh...

Whoa! They're like talking about poop in math class! Yeah! Yeah! Galois theory *kicks ass*!

ewmayer 2014-06-01 01:44

[QUOTE=NBtarheel_33;374741]Whoa! They're like talking about poop in math class![/QUOTE]

Uh, huh, huh, You do know there's a whole book about :poop: in the Bible, don't you?

===============

Putting my mod hat back on, a general comment:

You know, Bob, were I - godforbid - remotely as ill-tempered as you, I would have been sorely tempted to similarly rake you over the coals for "laziness" and "ignorance" w.r.to your own little [url=http://www.mersenneforum.org/showthread.php?goto=newpost&t=19392]whoopsie[/url] yesterday. (I suspect said "gah!" is the real reason for your Mr. Grumpty-Dumpty act and vituperation here). And, whereas given my educational background, work experience and research interests kinda-sorta-"exact" solution of 7th-order polys via theta functions is way, way far afield, given your educational background, work experience and research interests, work estimation for bignum modmul is something that should be intimately familiar. But did I flame away? No, I simply corrected the work estimate and provided some information which will hopefully be of use to the interested reader.

Now, we know you're hurting and regularly jump through double-standard hoops to justify letting you get away with yet another in a LONG chain of immature tantrums and ad hominems, but please realize: our patience is not limitless. (NB: if you knew who your allies and detractors were amongst the mods here, you might be surprised.) While your current medical issues may serve as a convenient excuse for lenience, you were an ill-tempered ass long before, and however crappy any given day may be going for you, nothing requires you to take it out on the world. You think life is hard just for you? (As we have met in person, I can only say that for you to act in such a "no one else had as hard a day/week/life as mine" manner would be indicative of extreme self-centeredness.)

Now, we appreciate the mathematical guidance - that is the reason folks post threads here - but the unneeded editorializing must stop. It's called "impulse control", and despite our current culture of rampant online narcissism and flamage, I believe most adults - I use the term here based on behavior, not mere years-of-age - consider it a valuable thing to possess and exercise. I suggest you join their ranks, before you piss off the wrong party around here and get yourself gobsmacked permanently.

And now, having spent far too much time dealing with other people's anger management issues, I really must get back to the work I should have spent the last hour doing.

BudgieJane 2014-06-01 09:10

[QUOTE=NBtarheel_33;374741]Uh, huh huh huh, uh [URL=http://www.wikipedia.org/wiki/Beavis_and_Butthead]Beavis[/URL], you said "septic", uh huh huh huh...

Whoa! They're like talking about poop in math class! Yeah! Yeah! Galois theory *kicks ass*![/QUOTE]

Where I come from, septic has a totally different meaning. (Hint: in London, cockneys used to use rhyming slang; some people still use it occasionally for its original use of hiding the meaning of what is being said.)

Equally, over here, we view kicking donkeys as abuse of the said quadruped. It's not their fault that they're donkeys, and stubborn bu**ers.

Galois theory *kicks arse*!

Have a nice day.

xilman 2014-06-01 09:25

[QUOTE=BudgieJane;374762]Where I come from, septic has a totally different meaning. (Hint: in London, cockneys used to use rhyming slang; some people still use it occasionally for its original use of hiding the meaning of what is being said.)[/QUOTE]As in the collective noun in finance?

Brian-E 2014-06-01 11:55

[QUOTE=xilman;374765]As in the collective noun in finance?[/QUOTE]
Or just an expression of gratitude?

BudgieJane 2014-06-01 17:55

Neither. As in a citizen of the USA.

NBtarheel_33 2014-06-01 18:04

[QUOTE=BudgieJane;374788]Neither. As in citizen of the USA[/QUOTE]

"Yank" -> "Tank" because they rhyme, and then "Tank" -> "Septic Tank" because ???, right? Australians then shorten that to "Seppo".

Why they don't use "Gas Tank" or "Shark Tank" is beyond me, though. Perhaps because the idea is to be derisive?

Never understood the rules of the whole rhyming slang thing, myself...

ewmayer 2014-06-01 20:37

BTW, in the bible according to Beavis and Butt-Head, the book about :poop: is "Doodooronomy".

chalsall 2014-06-01 21:02

[QUOTE=ewmayer;374798]BTW, in the bible according to Beavis and Butt-Head, the book about :poop: is "Doodooronomy".[/QUOTE]

In which version of which holy book is that defined?

Please forgive me. It's still Sunday where I manifest....

BudgieJane 2014-06-01 22:04

[QUOTE=NBtarheel_33;374791]"Yank" -> "Tank" because they rhyme, and then "Tank" -> "Septic Tank" because ???, right? Australians then shorten that to "Seppo".

Why they don't use "Gas Tank" or "Shark Tank" is beyond me, though. Perhaps because the idea is to be derisive?
[/QUOTE]

Because to the average cockney, there isn't such a thing as a "gas tank" -- if anything, gas is stored in gasometers or gas holders; but if you are referring to gasoline, that is known as petrol over here. Never heard of a "shark tank". The idea is to use something common to hide the meaning of what's being said from any stranger who may overhear it.

[QUOTE]
Never understood the rules of the whole rhyming slang thing, myself...[/QUOTE]
The thing that rhymes must be common (apples and pears = stairs), or the name of a film star (Gregory Peck = cheque), etc.

I remember seeing a sign behind the bar of a pub I once visited: "We do not sausage gooses". The translation of this is left as an exercise for the reader.

Brian-E 2014-06-02 00:04

[QUOTE=BudgieJane;374810][...]Never heard of a "shark tank".[...][/QUOTE]
I believe that retina may have one (he'll correct me if I'm wrong). Its occupants might have lasers on their heads.

retina 2014-06-02 00:20

[QUOTE=Brian-E;374816]Its occupants have [b]frickin'[/b] lasers on their heads.[/QUOTE]Fixed that for you.

Stargate38 2014-06-02 18:45

Can you please get back on topic?

:direction:

NBtarheel_33 2014-06-02 20:58

[QUOTE=Stargate38;374879]Can you please get back on topic?

:direction:[/QUOTE]

OK, what can *you* tell us about the solvability of this septic equation, or of high-degree polynomial equations in general?

ewmayer 2014-06-02 21:31

[QUOTE=NBtarheel_33;374887]OK, what can *you* tell us about the solvability of this septic equation, or of high-degree polynomial equations in general?[/QUOTE]

Here is a more-focused question for our on-topic fetishist:
[i]
Say you desire an approximation to the real root of the OP's poly to (say) 50 decimal digits of precision. Compare the cost of obtaining such in terms of basic-hardware arithmetic ops (+,-,*, assumed to require 1 pipelined cycle each, with any more-expensive ops such /, sqrt,... given in terms of cycle-equivalents) for

[A] The '"exact" solution RDS describes in terms of theta functions (I put exact in quotes because one still needs a sufficiently precise evaluation of the theta, and those don't grow on trees);

[B] A simple Newton-Raphson iteration starting from some reasonable initial guess (based on simply plotting the poly, x0 = 0.5 or 1 will serve).[/i]

R.D. Silverman 2014-06-02 22:51

[QUOTE=ewmayer;374892]

[B] A simple Newton-Raphson iteration starting from some reasonable initial guess (based on simply plotting the poly, x0 = 0.5 or 1 will serve).[/i][/QUOTE]

One must be careful with NR on high-degree polys, especially if (at least
some of) the roots are close together.

ewmayer 2014-06-03 00:01

[QUOTE=R.D. Silverman;374894]One must be careful with NR on high-degree polys, especially if (at least
some of) the roots are close together.[/QUOTE]

Not an issue in the present instance, which has just 1 real root and no points of near-zero slope - and were the poly one having degenerate (or nearly so) roots one could easily switch to an alternate higher-order iterative scheme such as [url=http://hogranch.com/mayer/nr.html#high2]Halley's[/url].
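For the curious, both schemes are easy to try on the OP's poly; a minimal Python sketch (my transcription, not code from the thread), with Newton and Halley each started from x0 = 0.5 inside the bracketing interval (0, 1):

```python
# Newton vs. Halley on f(x) = 243x^7 + 16x^6 + x^5 - 1, both from x0 = 0.5.
def f(x):  return 243 * x**7 + 16 * x**6 + x**5 - 1
def f1(x): return 1701 * x**6 + 96 * x**5 + 5 * x**4    # f'
def f2(x): return 10206 * x**5 + 480 * x**4 + 20 * x**3  # f''

def newton(x, steps):
    for _ in range(steps):
        x -= f(x) / f1(x)            # 2nd-order: digits roughly double per step
    return x

def halley(x, steps):
    for _ in range(steps):
        x -= 2 * f(x) * f1(x) / (2 * f1(x)**2 - f(x) * f2(x))  # 3rd-order
    return x

print(newton(0.5, 8), halley(0.5, 5))  # both ~0.4462235223189589
```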

[BTW, I am using the present thread as a spur to fill in some gaps in my larnin' during my spare time (basically whenever I need a break from coding or get stumped on a debug issue); currently refreshing my acquaintance with the classical methods for cubics and quartics, after which the real fun begins, the Galois-theory approach to the quintic and beyond.]

R.D. Silverman 2014-06-03 00:06

[QUOTE=ewmayer;374897]Not an issue in the present instance, which has just 1 real root and no points of near-zero slope - and were the poly one having degenerate (or nearly so) roots one could easily switch to an alternate higher-order iterative scheme such as [url=http://hogranch.com/mayer/nr.html#high2]Halley's[/url].

[BTW, I am using the present thread as a spur to fill in some gaps in my larnin' during my spare time (basically whenever I need a break from coding or get stumped on a debug issue); currently refreshing my acquaintance with the classical methods for cubics and quartics, after which the real fun begins, the Galois-theory approach to the quintic and beyond.][/QUOTE]

Think transitively.

fivemack 2014-06-03 08:34

[QUOTE=ewmayer;374892]Say you desire an approximation to the real root of the OP's poly to (say) 50 decimal digits of precision. Compare the cost of obtaining such in terms of basic-hardware arithmetic ops (+,-,*, assumed to require 1 pipelined cycle each, with any more-expensive ops such /, sqrt,... given in terms of cycle-equivalents) for

[A] The '"exact" solution RDS describes in terms of theta functions (I put exact in quotes because one still needs a sufficiently precise evaluation of the theta, and those don't grow on trees);

[B] A simple Newton-Raphson iteration starting from some reasonable initial guess (based on simply plotting the poly, x0 = 0.5 or 1 will serve).[/QUOTE]

I expect Newton-Raphson to win there, if only because the core of a theta-function evaluation is likely to be an AGM loop which itself costs about as much as the N-R; with your cheap multiplies and costly divides, I'd be quite tempted by a pure-polynomial iteration, but my knowledge of Magma syntax isn't enough to work out what bit of homomorphism judo I need to extract the linear equations in V.1 .. V.10 from

[code]
P<x>:=PolynomialRing(Rationals());
f:=x^2*(3^5*x^5+2^4*x^4+x^3)-1;
N<theta>:=NumberField(f);
NN<xx>:=PolynomialRing(N);
V:=PolynomialRing(NN,10);
t:=V!(theta+xx);
c:=V.10 + V.1*t + V.2*t^2 + V.3*t^3 + V.4*t^4 + V.5*t^5 + V.6*t^6 + V.7*t^7 + V.8*t^8 + V.9*t^9;
Coefficients(c);
[/code]

ewmayer 2014-06-03 21:00

1 Attachment(s)
BTW, for the OS X users out there, one useful tool I stumbled across while examining the OP's poly was this handy-dandy utility called [url=http://www.fusionmath.com/apple]Grapher[/url]. Don't know how I missed that all these years. Choose 2D mode, enter x^5+16*x^6+243*x^7-1 to the right of the y= prompt at upper left, hit <enter>, et voilà:

davar55 2014-06-04 14:03

In that graph, for x in (-.25,+.25) the y-value looks flat at -1.0.

Could this be wrong?

NBtarheel_33 2014-06-04 16:15

[QUOTE=davar55;375033]In that graph, for x in (-.25,+.25) the y-value looks flat at -1.0.

Could this be wrong?[/QUOTE]

Could it? How about the first derivative in that interval? The second derivative? Even better - what happens to a number in that interval when you take a linear combination of its fifth, sixth, seventh powers and *subtract 1*?

I swear, cmd and Kathegetes will be the next ones to show up in this thread...:loco:

davar55 2014-06-04 18:13

[QUOTE=NBtarheel_33;375045]Could it? How about the first derivative in that interval? The second derivative? Even better - what happens to a number in that interval when you take a linear combination of its fifth, sixth, seventh powers and *subtract 1*?

I swear, cmd and Kathegetes will be the next ones to show up in this thread...:loco:[/QUOTE]

Don't know about the insult, but my post was just a comment.
How can a seventh degree polynomial be flat over an interval?
So perhaps a larger view would show up the actual non-flat curvature?

NBtarheel_33 2014-06-04 18:42

[QUOTE=davar55;375056]Don't know about the insult, but my post was just a comment.
How can a seventh degree polynomial be flat over an interval?
So perhaps a larger view would show up the actual non-flat curvature?[/QUOTE]

It's not that it is "flat" (i.e. has zero first derivative) over that *entire* interval, but that it is *nearly* "flat" (i.e. has a very small magnitude derivative) over that interval. You would readily see this behavior more clearly on a zoomed-in view (try graphing the function on a calculator and playing with the window size).

All higher odd powers exhibit this behavior, in fact. Take [TEX]f(x)=x^3[/TEX]. Near [TEX]x = 0[/TEX], the graph of this function appears "flat". Why? Well, [TEX]f'(x)=3x^2[/TEX]. So, you see that at [TEX]x=0[/TEX], indeed, the function has zero derivative, and hence is "flat". But even as far out as, say, [TEX]x=0.25[/TEX], the derivative is still near enough to 0 for the function to *appear* flat in a larger-range window.

No insult intended, by the way. I suppose I was just taken somewhat aback at your elementary question, given that (IIRC) you have some degree of mathematical credentials. Call it a "Bob moment"...

davar55 2014-06-04 18:51

Also, what I thought might be inaccurate was not the flatness but the program that generated the graph. But I see now it was just a matter of the size of the graph output. In this situation, size matters.

ewmayer 2014-06-04 22:34

[QUOTE=davar55;375033]In that graph, for x in (-.25,+.25) the y-value looks flat at -1.0.

Could this be wrong?[/QUOTE]

True, I spoke too hastily - clearly f`(x) = 1701*x^6 + 96*x^5 + 5*x^4 = 0 at x = 0. This leads to our next pair of fun homework problems:
[i]
True or False: Under the assumption of infinite numerical precision, Newton-Raphson on f(x) always converges to the root as long as the initial iterate x0 != 0. [Hint: consider whether an iterative-update sequence can start at x != 0 and land at x = 0 at some later iterate.]

True or False: Under the assumption of infinite numerical precision, Halley's method applied to f(x) always converges to the root irrespective of the choice of the initial iterate.[/i]

ewmayer 2014-06-05 00:21

C'mon, you guys - at least the 2nd part of the above problem is trivially easy....

[spoiler]just examine the denominator which appears in Halley's iterative-update formula and see if it can = 0 or not for the function in question.[/spoiler]

LaurV 2014-06-05 03:42

On that graph, 4 squares is 1, so a square is 0.25, and 0.25^7 is 0.000061, or 1/4096 of a square. So, it is "totally flat" :razz: for plus and minus a square around x=0.

Well, if you have a monitor with 4096x8192 resolution, or 6 hi-res monitors, or about 24 "normal monitors" connected in such a way that the image "spreads" on all of them (each shows a different part of the image, like on stadiums), and enlarge (magnify) that graphic SO MUCH until ONE SINGLE square of it is shown on your (assembly of) monitor(s), and look to the interval from (-0.25, 0.25) on horizontal and (-1.5, -0.5) on vertical, then you may be able to see the graph of the line raising ONE SINGLE PIXEL, in the rightmost part of your (assembly of) screen (s). :rofl:. And of course, the line will drop one pixel, on the leftmost part of the screen.
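LaurV's back-of-the-envelope figure checks out numerically; a small Python sketch (mine, not from the thread) shows the curve stays within about 0.02 of y = -1 across that interval:

```python
# How far does f(x) = 243x^7 + 16x^6 + x^5 - 1 stray from -1 on [-0.25, 0.25]?
def f(x):
    return 243 * x**7 + 16 * x**6 + x**5 - 1

xs = [-0.25 + k * 0.001 for k in range(501)]       # grid over [-0.25, 0.25]
deviation = max(abs(f(x) + 1) for x in xs)
print(deviation)  # about 0.0197 -- under 2% of one unit, hence the "flat" look
```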

ewmayer 2014-06-05 03:55

1 Attachment(s)
First derivative near x=0:

LaurV 2014-06-05 04:05

I am not arguing with you, my post was for davar&co. About your derivative graph, it still has exactly the same "aspect", no matter how much you zoom in. If you change the Ox labels from +/-0.25 to +/-0.000001, and modify accordingly on Oy, it will still look exactly the same.

Nick 2014-06-05 16:59

[QUOTE=ewmayer;375082][I]
True or False: Under the assumption of infinite numerical precision, Newton-Raphson on f(x) always converges to the root as long as the initial iterate x0 != 0. [Hint: consider whether an iterative-update sequence can start at x != 0 and land at x = 0 at some later iterate.]
[/I][/QUOTE]
For anyone interested in the theory behind this, the key is the Banach fixed-point theorem
[URL]http://en.wikipedia.org/wiki/Contraction_mapping_theorem[/URL]
In practice, a plain fixed-point iteration can take a while to converge; Newton-Raphson can be viewed as an accelerated refinement of it.

Applied to spaces of continuous functions instead, the Banach fixed-point theorem underlies many proofs of the existence (and uniqueness) of solutions to differential equations.
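To make the contraction-mapping connection concrete, the OP's equation can be rearranged (my own rewriting, offered purely as an illustration) into the fixed-point form x = g(x) = (1 + 16x + 243x^2)^(-1/5); near the root |g'| is roughly 0.37 < 1, so g is a contraction there and the plain Banach-style iteration converges, but only linearly:

```python
# Fixed-point form derived from x^5 * (243x^2 + 16x + 1) = 1:
#   x = g(x) = (1 + 16x + 243x^2) ** (-1/5)
# |g'| ~ 0.37 at the root, so the iteration x -> g(x) converges, gaining a
# fixed ~0.43 decimal digits per step (linear convergence).
def g(x):
    return (1 + 16 * x + 243 * x**2) ** -0.2

x = 0.5
for _ in range(60):
    x = g(x)
print(x)  # ~0.4462235223189589
```

Sixty steps at ~0.43 digits each exhausts double precision; Newton-Raphson reaches the same accuracy in about five.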

ewmayer 2014-06-06 00:51

Thanks for the link, but note that the kinds of situations which are problematic for Newton-style iterative schemes are just those which break the assumption of a contraction mapping that underpins the BFPT. Depending on the details of the scheme, the function to which it is applied, and the initial iterate, one can have iterates which shoot off arbitrarily far toward +-oo, or which bounce around either deterministically (limit cycle) or chaotically.

The BFPT is - roughly speaking - essentially a proof of convergence once one is within the "contiguous basin of attraction" of a fixed point.

davar55 2014-06-06 10:26

Re the OP:

[QUOTE]
...
I have an approximation for the real root, but I want an exact formula for it, even if it includes an infinite series:
...
[/QUOTE]Since this particular seventh-degree polynomial equation is irreducible with Galois group S_7 (as noted above), by Galois theory it has no closed-form solution by radicals. So I wonder what is meant by an exact formula that includes an infinite series: either the series converges to something expressible in radicals (reducing to the solvable case) or it does not (in which case, in what sense is it exact)? Or is it me who's a bit confused?

Stargate38 2014-06-06 22:32

By exact, I mean something like this:

[TEX]\sum\limits_{i=a}^\infty f(i)[/TEX]

where f(i) is a function of i such that the above sum converges to the root. If you need to, use nested sums.

ewmayer 2014-06-07 01:55

[QUOTE=Stargate38;375241]By exact, I mean something like this:

[TEX]\sum\limits_{i=a}^\infty f(i)[/TEX]

where f(i) is a function of i such that the above sum converges to the root. If you need to, use nested sums.[/QUOTE]

In what manner is that "more exact" (or even "more concise") than giving an iterative formula which encodes a sequence of iterates that converges to the root? The only difference I see is that you appear to desire an explicit expression for your summand, whereas the iterative-formula approach defines its argument inductively.

BudgieJane 2014-06-07 08:29

Should you not be asking this question on Stack Exchange?

R.D. Silverman 2014-06-07 13:29

[QUOTE=ewmayer;375255]In what manner is that "more exact" (or even "more concise") than giving an iterative formula which encodes a sequence of iterates that converges to the root? The only difference I see is that you appear to desire an explicit expression for your summand, whereas the iterative-formula approach defines its argument inductively.[/QUOTE]

It is not "exact" in any meaningful sense, especially since ALL
real-analytic functions can be put in this form. It is also meaningless
without saying anything about f(x). One could define f() to be
"the smallest real root of the following septic polynomial.........."

ewmayer 2014-06-07 21:34

[QUOTE=R.D. Silverman;375287]It is not "exact" in any meaningful sense, especially since ALL
real-analytic functions can be put in this form. It is also meaningless
without saying anything about f(x). One could define f() to be
"the smallest real root of the following septic polynomial.........."[/QUOTE]

Exactly - pardon the pun. It is an interesting epistemological question to try to delineate the (fuzzy) boundary between exact and approximate here. For instance, one could require that any candidate for exactness should encode a constructive method for producing the answer to any desired precision, thus the self-referential quote you give above would be ruled out on those grounds.

================

I spent a bit of time considering higher-order analogs of the N-R scheme which will not only converge quickly from a good initial guess at the root, but can also cope with the deliberately-problematic initial iterate x0 = 0, at which f = -1 and f` through f```` all = 0.

Define a set of (k)th-order-convergent iterative schemes

x[sub]n+1[/sub] = x[sub]n[/sub] - f*G[sub]k-1[/sub](x)/G[sub]k[/sub](x) , [1]

where the G[sub]k[/sub](x) are functions-of-the-function-f(x) defined via the recurrence

G[sub]k[/sub](x) = f`*G[sub]k-1[/sub] - f*G`[sub]k-1[/sub]/(k-1) , starting with G[sub]1[/sub] = 1 . [2]

We can see that G[sub]2[/sub] = f`, which is just the 2nd-order N-R scheme. k = 3 gives G[sub]3[/sub](x) = f`[sup]2[/sup] - f*f``/2, yielding the 3rd-order Halley scheme, &c. The lowest-order iteration safe to start at x0 = 0 needs an f*f````` (5th-derivative) or higher term in the denominator, which means the 6th-order scheme of the family, but that gives dx = 0 since the numerator = 0 at x = 0, thus we need the 7th-order scheme. I'll spare you the gory algebraic details, but we end up with the following 2 G-terms in the 7th-order version of [1]:

G[sub]6[/sub] = ((f`)^2 - 2.f.f``).(f`)^3 + f^2.f`.(2.f`.f``` + 3.(f``)^2)/4 - f^3.((f`.f```` + 2.f``.f```) - f.f`````/10)/12

G[sub]7[/sub] = (f`)^4.((f`)^2 - (5/2).f.f``) + f^2.(f`)^2.(4.f`.f``` + 9.(f``)^2)/6 - f^3.(f``.((f``)^2 + 4.f`.f```) + (f`)^2.f````)/8 + f^4.(12.f`.f````` + 30.f``.f```` + 20.(f```)^2 - f.f``````)/720

Now try the resulting iterative scheme starting from x0 = 0; here is simple Pari code:

default(realprecision, 200)
x = 0;
[i][Repeat the following sequence][/i]
f = 243*x^7 + 16*x^6 + x^5 - 1;
f1 = 1701*x^6 + 96*x^5 + 5*x^4;
f2 = 1701*6*x^5 + 480*x^4 + 20*x^3;
f3 = 1701*30*x^4 + 1920*x^3 + 60*x^2;
f4 = 1701*120*x^3 + 5760*x^2 + 120*x;
f5 = 1701*360*x^2 + 11520*x + 120;
f6 = 1701*720*x + 11520; print(f)
G_6 = (f1^2 - 2*f*f2)*f1^3 + f^2*f1*(2*f1*f3 + 3*f2^2)/4 - f^3*((f1*f4 + 2*f2*f3) - f*f5/10)/12;
G_7 = f1^4*(f1^2 - 5*f*f2/2) + f^2*f1^2*(4*f1*f3 + 9*f2^2)/6 - f^3*(f2*(f2^2 + 4*f1*f3) + f1^2*f4)/8 + f^4*(12*f1*f5 + 30*f2*f4 + 20*f3^2 - f*f6)/720;
dx = -f*G_6/G_7; x += dx; x

Note that because our function is a polynomial one could further substitute the derivatives f-f6 into the expressions for G[sub]6[/sub] and G[sub]7[/sub] and collect common powers of x in each, but I'm only interested in checking the convergence properties, so didn't bother doing that. Starting from x0 = 0 we obtain the following sequence of iterates (and f(x) value at each) - Notice that the approximant barely budges away from x = 0 on the first iteration, but that is enough for the above functions to start getting a nonzero signal for the intermediate derivatives, so on step 2 we get more than halfway to the root, and it's all downhill from there:
[code]j f(x) x_j
-- ------------ --------------------------------
0 -1
1 -0.999997... 0.0625
2 -0.955180... 0.28249...
3 -1.75..E-002 0.44507...
4 4.73..E-018 0.4462235223189588700... [good to ~19 digits]
5 -5.10..E-127 0.4462235223189588697797792058806101653432216192296867823821087097882260386219976960158801016445364065226392916613728558306678486... [good to 127 digits][/code]
Now from a computational-efficiency perspective this is of course the wrong way to go - The problem at x = 0 here illustrates the first thing one should do whenever deploying such a scheme, namely to produce a bracketing interval a < x < b containing one or more roots (1 is the preferred count). For the OP's 7th-order poly it is easy to verify that 0 < x < 1 is a bracketing interval since f(0) < 0 and f(1) > 0. The initial guess for the root is drawn from this interval - typically x = (a+b)/2 is as good a starting value as any, and indeed x0 = 1/2 works perfectly well as a starting value for the basic 2nd-order N-R scheme here.

During the ensuing iteration loop if the updated iterate x lands outside (a,b) one should try to narrow the bracketing interval and retry, and/or retry with a higher-order iterative scheme.
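For readers without Pari to hand, the same loop transcribes directly to Python (my transcription; plain double precision, so it bottoms out near 16 digits rather than 127):

```python
# Transcription of the Pari loop above (double precision). From the
# deliberately bad starting point x0 = 0, the first step lands on
# 1/16 = 0.0625, matching the table of iterates in the post.
x = 0.0
for _ in range(6):
    f  = 243 * x**7 + 16 * x**6 + x**5 - 1
    f1 = 1701 * x**6 + 96 * x**5 + 5 * x**4
    f2 = 10206 * x**5 + 480 * x**4 + 20 * x**3
    f3 = 51030 * x**4 + 1920 * x**3 + 60 * x**2
    f4 = 204120 * x**3 + 5760 * x**2 + 120 * x
    f5 = 612360 * x**2 + 11520 * x + 120
    f6 = 1224720 * x + 11520
    G6 = (f1**2 - 2*f*f2) * f1**3 + f**2 * f1 * (2*f1*f3 + 3*f2**2) / 4 \
         - f**3 * ((f1*f4 + 2*f2*f3) - f*f5/10) / 12
    G7 = f1**4 * (f1**2 - 5*f*f2/2) + f**2 * f1**2 * (4*f1*f3 + 9*f2**2) / 6 \
         - f**3 * (f2 * (f2**2 + 4*f1*f3) + f1**2 * f4) / 8 \
         + f**4 * (12*f1*f5 + 30*f2*f4 + 20*f3**2 - f*f6) / 720
    x -= f * G6 / G7   # 7th-order update
print(x)  # ~0.4462235223189589
```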

jasonp 2014-06-09 00:33

In the special case of polynomial roots, one could also directly compute approximations to the numerical values via a variety of methods: computing the eigenvalues of a matrix derived from the polynomial coefficients, or using sophisticated algorithms like Jenkins-Traub that are nearly bulletproof. Then use your favorite iterative method to 'polish' the computed roots. You get more robustness than when using iterative methods alone, and only need high-precision arithmetic after the bulletproof methods finish.
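Neither of those methods appears in the thread; as a dependency-free stand-in, here is a Durand-Kerner (Weierstrass) sketch in Python, which likewise computes approximations to all seven roots simultaneously before any polishing step:

```python
# Durand-Kerner (Weierstrass) iteration on the monic form of
# 243x^7 + 16x^6 + x^5 - 1 = 0; all seven roots refined at once.
coeffs = [1, 16 / 243, 1 / 243, 0, 0, 0, 0, -1 / 243]  # monic coefficients

def p(z):
    acc = 0j
    for c in coeffs:          # Horner evaluation
        acc = acc * z + c
    return acc

roots = [(0.4 + 0.9j) ** k for k in range(1, 8)]  # standard distinct guesses
for _ in range(200):
    new = []
    for i, zi in enumerate(roots):
        denom = 1 + 0j
        for j, zj in enumerate(roots):
            if j != i:
                denom *= zi - zj
        new.append(zi - p(zi) / denom)   # Weierstrass correction
    roots = new

real_root = next(z.real for z in roots if abs(z.imag) < 1e-6)
print(real_root)  # ~0.446223522318959
```

The complex starting values (0.4+0.9j)^k are a conventional choice; as jasonp suggests, an iterative polisher can then sharpen each root in high precision.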

xilman 2014-06-09 12:24

[QUOTE=jasonp;375377]In the special case of polynomial roots, one could also directly compute approximations to the numerical values via a variety of methods: computing the eigenvalues of a matrix derived from the polynomial coefficients, or using sophisticated algorithms like Jenkins-Traub that are nearly bulletproof. Then use your favorite iterative method to 'polish' the computed roots. You get more robustness than when using iterative methods alone, and only need high-precision arithmetic after the bulletproof methods finish.[/QUOTE]Another approach, hinted at in Knuth, is to use exact rational arithmetic: that is, treat a floating-point number as p/(2^n), where p and n are integers. This approach greatly reduces issues with rounding and cancellation errors, and multiprecision integer arithmetic is more easily available and simpler to use than its floating-point counterpart.

R.D. Silverman 2014-06-09 16:23

[QUOTE=xilman;375409]Another approach, hinted at in Knuth, is to use exact rational arithmetic. That is treat a floating point number as p/(2^n) where p and n are integers. This approach can give greatly reduced issues with rounding and cancellation errors together with the easily availability and simplicity of use of multiprecision arithmetic compared with the typical floating point counterpart.[/QUOTE]

Yep! Why not??

Let's solve it over the 2-adics, then lift the result to R. :smile:


:smile::smile::smile: for the humor impaired.//

xilman 2014-06-09 16:33

[QUOTE=R.D. Silverman;375426]Yep! Why not??

Let's solve it over the 2-adics, then lift the result to R. :smile:


:smile::smile::smile: for the humor impaired.//[/QUOTE]Nice to see someone is paying attention and spots a gentle troll.

davar55 2014-06-09 16:37

Did that mean "An approximation is an approximation, in any base"?

(Otherwise, wouldn't computing with multiprecision integers plus a "large" binary part (say > 10^5 bits) produce as valid a "solution"?)

ewmayer 2014-06-09 21:45

Actually, it is an interesting exercise to perform an iterative approximation of one's choice over Q in a package like Pari, which auto-reduces the resulting quotients to lowest terms, and compare how fast the "bitness" of the approximants grows, relative to their error versus the exact root, with that of a multiprecision-float approach.
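A sketch of that experiment using Python's exact rationals (my substitution for Pari's): Newton-Raphson over Q, printing the bit-length of each approximant's denominator alongside its error. The bit count grows geometrically per step, while the number of correct digits merely doubles:

```python
from fractions import Fraction

# Newton-Raphson over Q: every iterate is an exact rational, so the only
# "error" is distance from the true root; the price is coefficient blow-up.
def f(x):  return 243 * x**7 + 16 * x**6 + x**5 - 1
def f1(x): return 1701 * x**6 + 96 * x**5 + 5 * x**4

x = Fraction(1, 2)
for step in range(1, 6):
    x -= f(x) / f1(x)
    # denominator bit-length vs. distance from the root (double-precision ref)
    print(step, x.denominator.bit_length(), abs(float(x) - 0.4462235223189589))
```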

Stargate38 2014-06-10 17:27

Post #48 helps, but how do I see the whole number in Pari? I keep seeing "[+++]" and the number is cut off.

R.D. Silverman 2014-06-10 19:56

[QUOTE=Stargate38;375520]Post #48 helps, but how do I see the whole number in Pari? I keep seeing "[+++]" and the number is cut off.[/QUOTE]

See also:

[url]http://en.wikipedia.org/wiki/Bairstow's_method[/url]

science_man_88 2014-06-10 21:18

[QUOTE=Stargate38;375520]Post #48 helps, but how do I see the whole number in Pari? I keep seeing "[+++]" and the number is cut off.[/QUOTE]

Do you have a log file? I'm not sure how much increasing realprecision would help.



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.