mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Puzzles (https://www.mersenneforum.org/forumdisplay.php?f=18)
-   -   Can Euler's Basel Solution be Saved? (https://www.mersenneforum.org/showthread.php?t=17994)

 jinydu 2013-03-20 01:05

Can Euler's Basel Solution be Saved?

I'm thinking of giving a talk to an undergrad math club, and I had the idea of presenting Euler's solution to the Bessel Problem (1 + 1/4 + 1/9 + ... = pi^2/6) as the topic. While there are some very grave gaps, it is a deliciously elegant solution, and it would be great if there were a way to repair those gaps. My (very rough draft of a) plan is to present Euler's solution as I read it, explain what the gaps are, and then try to fix them. While I know how to repair some of the problems, there are others I don't, which is why I'm posting this.

Anyway, here's Euler's solution:

1) Lemma 1: For all x, sin(x) = x - x^3/3! + x^5/5! - ...
Proof: Taken for granted.

2) Lemma 2: Let f be a complex polynomial of degree n with f(0) = 1 and distinct roots r_1, ..., r_n. Then f(x) = (1 - x/r_1)(1 - x/r_2)...(1 - x/r_n)
Proof: Since f(0) = 1, none of the r_i is 0, so we have specified the values of f at n+1 distinct inputs. A polynomial of degree at most n is determined by its values at n+1 distinct points, and the product on the right-hand side has degree n, vanishes at each r_i, and equals 1 at 0, so it must equal f.

3) Define sinc(x) = sin(x)/x when x =/= 0 and 1 when x = 0. Then sinc(x)= 1- x^2/3! + x^4/5! - ...

4) Clearly, sinc(0) = 1, and the roots of sinc(x) are just the roots of sin(x) with 0 removed (distinct because the roots of sin(x) are distinct), namely +/-npi for n = 1, 2, 3, etc.

5) Regard sinc(x) as an (infinite) polynomial. By Lemma 2, sinc(x) = (1-x/pi)(1+x/pi)(1-x/2pi)(1+x/2pi)...

6) So (1-x/pi)(1+x/pi)(1-x/2pi)(1+x/2pi)... = 1 - x^2/3! + x^4/5! - ...

7) (1-x^2/pi^2)(1-x^2/4pi^2)(1 - x^2/9pi^2)... = 1 - x^2/3! + x^4/5! - ...

8) Multiplying out the left-hand side and collecting like terms, we get
1 + (-1/pi^2 - 1/4pi^2 - 1/9pi^2 - ...)x^2 + O(x^4) = 1 - x^2/3! + x^4/5! - ...

9) Equate x^2 coefficients to get -1/pi^2 - 1/4pi^2 - 1/9pi^2 - ... = -1/3!

10) Multiply both sides by -pi^2 to get the desired result.
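(Before the talk I'd probably also run a quick numerical sanity check. Here's a rough Python sketch of mine -- nothing rigorous, just evidence that both the sum and the product behave as claimed:)

```python
import math

# Partial sums of 1 + 1/4 + 1/9 + ... should creep up to pi^2/6.
N = 100000
s = sum(1.0 / n**2 for n in range(1, N + 1))
print(s, math.pi**2 / 6)

# Partial products of (1 - x^2/(n^2 pi^2)) should approach sin(x)/x.
x = 1.3  # arbitrary test point, not a multiple of pi
prod = 1.0
for n in range(1, N + 1):
    prod *= 1.0 - x**2 / (n * math.pi) ** 2
print(prod, math.sin(x) / x)
```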

--- Problems with the solution and my ideas for fixing them ---

4) We need to look for all complex zeroes of the function, not just the well-known ones on the real line. But this is easily fixed by using the formula sin(z) = (e^(iz) - e^(-iz))/(2i), setting the numerator equal to zero, splitting z into real and imaginary parts, and solving. It is quickly seen that any zero has Im(z) = 0, which brings us back to the well-known real case.

5) This is the biggest problem. sinc(x) is of course an entire function, not a polynomial, so we can't legally just apply the lemma. To make matters much worse, the conditions in Lemma 2 aren't even enough to uniquely determine an entire function: replacing sinc(x) with e^(h(x))*sinc(x), for any entire function h with h(0) = 0, yields another function satisfying exactly the same conditions, yet it is a very different function and of course has a very different power series.

What I need is an analogue of Lemma 2 for entire functions that allows me to uniquely pin down the function using its roots, its value at the origin, and some extra data that needs to be easy to get for sinc(x).
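One candidate, if I'm reading the complex analysis references correctly, is the Hadamard factorization theorem specialized to functions of order at most 1: if f is entire of order at most 1 with f(0) = 1 and nonzero roots $$a_1, a_2, \ldots$$, then

```latex
f(z) = e^{cz}\prod_{n=1}^{\infty}\left(1 - \frac{z}{a_n}\right)e^{z/a_n}
```

for some constant c -- so the "extra data" is precisely a growth bound on f. Since $$|\sin(z)| \le e^{|z|}$$, sinc has order at most 1; its roots pair off as $$\pm n\pi$$, so the convergence factors $$e^{z/n\pi}$$ and $$e^{-z/n\pi}$$ cancel, and since sinc is even, c = 0, leaving exactly the product in step 5.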

7) Warning: Rearranging an infinite product in such a nontrivial way is in general not ok. But in this case it is ok, assuming we have patched up the proof through step 6, because we know the infinite product is convergent: pairing off adjacent factors is legal in a convergent infinite product, since the new sequence of partial products is just a subsequence of the original sequence of partial products.

8) Multiplying out an infinite product like that seems kind of fishy to me; I can't think of a theorem to justify it off the top of my head. But I would think it's something that can be justified using standard undergrad analysis techniques. Am I right?
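For what it's worth, the brute-force version of step 8 can at least be tested numerically: keep each partial product as a power series truncated at degree 4 and watch the x^2 and x^4 coefficients settle toward -1/3! and +1/5!. A rough Python sketch (mul_trunc is just my own name for truncated polynomial multiplication):

```python
import math

def mul_trunc(p, q, deg=4):
    # multiply two polynomials given as coefficient lists (index = power),
    # discarding every term above degree deg
    r = [0.0] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                r[i + j] += a * b
    return r

# partial products of (1 - x^2/(n^2 pi^2)), kept as truncated series in x
poly = [1.0, 0.0, 0.0, 0.0, 0.0]
for n in range(1, 10001):
    factor = [1.0, 0.0, -1.0 / (n * math.pi) ** 2, 0.0, 0.0]
    poly = mul_trunc(poly, factor)

# compare with the Taylor coefficients of sinc: -1/3! and +1/5!
print(poly[2], -1.0 / math.factorial(3))
print(poly[4], 1.0 / math.factorial(5))
```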

-------

So in summary, the main thing I'm missing is a way to justify step 5. A justification for step 8 would also be nice.

Can it be done?

Thanks

 NBtarheel_33 2013-03-20 09:06

Nitpick-of-the-day: It's actually the [I]Basel[/I] problem (after Euler's hometown of Basel, Switzerland). There are, of course, [I]Bessel[/I] functions, and knowing Euler, he may well have been involved with them too, but they play no part in this problem. Anyway...

Take a look at [URL=math.cmu.edu/~bwsulliv/MathGradTalkZeta2.pdf]this[/URL] paper/talk on the same subject.

It discusses a lot of the complex analysis behind this problem, and specifically, it states that the issue in your step (5) is resolved via the Weierstrass Factorization Theorem, the analogue for entire functions of the polynomial factorization guaranteed by the Fundamental Theorem of Algebra.

I also looked at Wikipedia's [URL=http://en.wikipedia.org/wiki/Basel_problem]article[/URL] on the Basel problem, and it explains that your step (8) is legal because of [URL=http://en.wikipedia.org/wiki/Newton%27s_identities]Newton's identities[/URL]. The latter article does not resemble anything that I have seen in an undergraduate curriculum, so you might want to just make mention of the buzzword "Newton's identities" in explaining why step (8) is possible.

It's worth mentioning to the "kids" that the upshot is that, as with many problems in the earliest days of calculus and the study of the infinite and infinitesimal, mathematicians were often able to discover these beautiful and unexpected results, but were unable to put them on any sort of rigorous foundation for at least another 100+ years when [I]analysis[/I] would be formally established. Nonetheless, having results such as this one would also prove useful as a check on the new rigorous methods that were being implemented.

Another interesting point to mention: Why doesn't this work for the odd powers? What happens if we try to use the cosine series instead of the sine series? Zeta(3) is irrational [cf. Apery, 1978]...but that's about all we know...

Obviously, this result also gives us an approximation for pi (which was part of a talk that I gave at a student session at MathFest in 2004): the quantity sqrt(6 * sum_{n<=N} 1/n^2) should approximate pi for large enough N. How large? How fast is the convergence? (Hint: it's not too bad!)
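Here's a little Python sketch of my own (not from the talk) that makes the convergence-rate question concrete:

```python
import math

# How fast does sqrt(6 * sum_{n<=N} 1/n^2) approach pi?
errs = []
for N in (10, 100, 1000, 10000):
    s = sum(1.0 / n**2 for n in range(1, N + 1))
    err = abs(math.pi - math.sqrt(6.0 * s))
    errs.append(err)
    print(N, err)
# the error shrinks roughly like 3/(pi*N): slow, but steady
```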

You might also recommend that they check out William Dunham's books [I]Journey through Genius[/I] and [I]Euler: Master of Us All[/I]. Both of these books provide nice sketches of Euler's work on this problem (he didn't stop with Zeta(2); rather, he calculated Zeta(n) for even n as high as 18, long before pocket calculators), as well as discussing how he played fast and loose with infinite series.

Good luck! This is a really neat topic and a fine introduction to Euler's amazing and prodigious mathematics.

 jinydu 2013-03-20 12:57

Thanks for the links. I already read [I]Journey through Genius[/I] in high school; that's where I learned of this solution!

Going to have to think about the Weierstrass Factorization Theorem and its application to this case; but as for the other concern, I still don't quite see how to prove the validity of (8) from Newton's identities. I mean, I'm comfortable with applying Newton's identities to get the coefficients of x^2, x^4, x^6, etc. if we had a finite product. The problem here is the product being infinite.

Seems to me it boils down to three steps:

8.1) The sequence of partial products (1-x^2/pi^2), (1-x^2/pi^2)(1-x^2/4pi^2), (1-x^2/pi^2)(1-x^2/4pi^2)(1-x^2/9pi^2), etc. converges to sinc(x). [Pointwise this is ok, but is that going to be enough?]

8.2) Use Newton's identities to find the x^2 (and x^4, x^6, etc. if I feel like it) coefficients of each of these partial products.

8.3) Argue that if {f_n} is a sequence of entire functions converging (is pointwise enough?) to an entire function f, then the x^2 (and higher) coefficients of the Taylor expansions of f_n converge to the x^2 coefficient of the Taylor expansion of f. [Come to think of it, I'm pretty sure I heard that in the real setting, even f_n -> f uniformly does not imply f_n' -> f'. In the complex setting, err... Not sure...]
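As a sanity check on 8.3, here's a quick numerical experiment (Python; taylor_coeff and partial_product are names I made up): estimate the x^2 Taylor coefficient of the N-th partial product via the Cauchy integral formula, approximated by the trapezoid rule on the unit circle, and watch it head toward -1/3!.

```python
import cmath
import math

def partial_product(z, N):
    # N-th partial product of Euler's factorization of sinc
    p = complex(1.0)
    for n in range(1, N + 1):
        p *= 1.0 - z * z / (n * math.pi) ** 2
    return p

def taylor_coeff(f, k, M=256):
    # k-th Taylor coefficient of f at 0 via the Cauchy integral formula,
    # discretized by the trapezoid rule on the unit circle |z| = 1
    total = complex(0.0)
    for j in range(M):
        z = cmath.exp(2j * math.pi * j / M)
        total += f(z) / z**k
    return (total / M).real

for N in (1, 10, 100, 1000):
    print(N, taylor_coeff(lambda z, N=N: partial_product(z, N), 2))
# the printed coefficients approach -1/3! = -0.1666...
```

(And I think 8.3 is actually fine in the complex setting: if f_n -> f locally uniformly, the Cauchy estimates give convergence of all the derivatives as well -- one of the ways C behaves better than R.)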

Thanks

 davieddy 2013-03-20 16:03

Bessel

[QUOTE=NBtarheel_33;334163]Nitpick-of-the-day: It's actually the [I]Basel[/I] problem (after Euler's hometown of Basel, Switzerland).[/QUOTE]
Easy mistake to make.
Like confusing bridges with lager vis-a-vis Kronenbourg.

D

 Dubslow 2013-03-20 21:15

I can't say much on this particular solution (I haven't had analysis yet), but another method I've seen to calculate this series (and which generalizes well to larger even n, I believe) is a combination of Fourier analysis, linear algebra, and Parseval's identity (the latter is essentially the Pythagorean theorem for an arbitrary inner product space). It is also quite accessible to undergrads with little analysis.

The following were questions on a lin. alg. quiz I took in high school:
[quote=Dr. Fogel]
2) Compute the Fourier coefficients (for the interval [-$$\pi$$, $$\pi$$]) of sines and cosines for the function f(x) = x.

....

4) Use Parseval's identity on the result from problem 2 to obtain an interesting result.[/quote]
The answer to 2) is easily calculated to be 0 for the cosine coefficients, and the sine coefficients are $$b_k = \frac{-2(-1)^k}{k}$$.

For problem 4), we apply Parseval's identity with our orthonormal basis $$\{e_k\}$$ given by $$e_k = \frac{\sin(kx)}{\sqrt{\pi}}$$. We also note that $$b_k = \frac{\langle f, \sin(kx)\rangle}{\langle \sin(kx), \sin(kx)\rangle}$$ (where $$\sin(kx) = \sqrt{\pi}\, e_k$$), or, rewriting to put it in the form of Parseval's identity, $$\langle f, e_k\rangle = \frac{\langle f, \sin(kx)\rangle}{\sqrt{\pi}} = \frac{\langle f, \sin(kx)\rangle}{\sqrt{\langle \sin(kx), \sin(kx)\rangle}} = b_k \sqrt{\langle \sin(kx), \sin(kx)\rangle} = \sqrt{\pi}\, b_k$$. Thus $$\langle f, f\rangle = \sum_{k=1}^{\infty}|\langle f, e_k\rangle|^2 = \sum_{k=1}^{\infty}|\sqrt{\pi}\, b_k|^2 = \sum_{k=1}^{\infty}\frac{4\pi}{k^2}$$. Meanwhile, $$\langle f, f\rangle = \int_{-\pi}^{\pi} x^2\,dx = \frac{2\pi^3}{3}$$, so we have $$\frac{2\pi^3}{3} = 4\pi\sum_{k=1}^{\infty}\frac{1}{k^2}$$, or $$\sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{\pi^2}{6}$$.
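A quick numerical double-check of the Parseval computation above (plain Python, truncating the sums at K terms; K is my own choice):

```python
import math

# Parseval for f(x) = x on [-pi, pi]:
# <f, f> = integral of x^2  vs.  sum over k of |sqrt(pi) * b_k|^2,
# with b_k = 2*(-1)^(k+1)/k
K = 200000
lhs = 2.0 * math.pi**3 / 3.0
rhs = sum(math.pi * (2.0 * (-1) ** (k + 1) / k) ** 2 for k in range(1, K + 1))
print(lhs, rhs)

# and, rearranged, the Basel sum itself:
basel = sum(1.0 / k**2 for k in range(1, K + 1))
print(basel, math.pi**2 / 6)
```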

The only squishy part (AFAICT) is proving Parseval's identity (esp. in the infinite-dimensional case). We can also see why we get only even powers -- because of the squaring of the Fourier coefficients. It ought to be easy (though I haven't really thought about it) to use f(x) = x^2, x^3, ... to get $$\zeta(4), \quad \zeta(6)...$$. (The only part that I haven't confirmed is that the Fourier coefficients take the form I think they take for the higher-order f's.)
