#23
Dec 2003
Hopefully Near M48
1758₁₀ Posts
Quote:
I think mfgoode is saying that:

A(1) + A(2) + A(3) + ... + A(n) = 0

implies

A(1)^n + A(2)^n + A(3)^n + ... + A(n)^n = k*A(1)*A(2)*A(3)*...*A(n)

where k is some nonzero constant.

If that is true, it would complete the proof by induction, since the left-hand side of that equation equals 0 (by assumption), so the right-hand side must also equal zero, implying that at least one of the variables must equal 0.

My problem is, I don't know how to prove that the crucial equation is true.
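The conjectured pattern is easy to test numerically for small n. A quick Python sketch (the helper names are ad hoc, not from the thread) checks the n = 2 and n = 3 cases, where the constant k works out to -2 and 3 respectively:

```python
# Check the conjectured identity for small n: if the variables sum to 0,
# the sum of nth powers is a fixed multiple k of their product
# (k = -2 for n = 2, k = 3 for n = 3).

def power_sum(vals, p):
    return sum(v ** p for v in vals)

def product(vals):
    result = 1
    for v in vals:
        result *= v
    return result

# n = 2: pick A freely, force B = -A so the sum is zero.
for a in [1, 2, -3]:
    vals = [a, -a]
    assert power_sum(vals, 2) == -2 * product(vals)

# n = 3: pick A, B freely, force C = -(A + B).
for a, b in [(1, 2), (-1, 4), (3, -7)]:
    vals = [a, b, -(a + b)]
    assert power_sum(vals, 3) == 3 * product(vals)

print("k = -2 for n = 2 and k = 3 for n = 3 on all samples")
```

Of course, sample checks are not a proof; whether a single constant k exists for every n is exactly what is in question.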
#24
Bronze Medalist
Jan 2004
Mumbai, India
2²·3³·19 Posts
Compliments of the season to all subscribers of this thread. I took a few days off my computer for Christmas, hence the delay in replying. Anyway, please bear with me.

Paul: I didn't claim to prove jinydu's conjecture, so the proof I am supposed to have given is purely ethereal. It was just a helpful hint on how to proceed with the problem. The pattern was there for n = 1, 2, 3, and so I hinted at employing the method of induction, which was a possibility. Your theory on M o I was absolutely correct and I know it well. My message for jinydu was in d'Alembert's words: "Go on, and faith will come to you."

Jinydu: Your conjecture is difficult to prove by induction, as large algebraic expressions are involved. Still, an astute mathematician should be able to do it with a shortcut. My intuition is that it can be proved and that the conjecture holds right up to n. I have now proved the case for n = 4, and from there on the rest should follow suit up to the nth case.

In such cases we must 'study the masters and not the pupils'. Fermat, in his letters to Carcavi in Aug 1659, speaks of his 'method of descent', a technique he often used. E.T. Bell, in his 'Men of Mathematics 1', gives a free translation of the letter to Carcavi and then summarises it in his own words: 'The difficulty in applying descent to a new problem is that of proving that if the assumed or conjectured proposition is true of any number of the kind concerned, chosen at random, then it will be true of a smaller number of the same kind.'

Wish you the best of luck,
Mally
#25
Dec 2003
Hopefully Near M48
1758₁₀ Posts
Has there been any luck on constructing that smaller nontrivial solution? I haven't been able to make any real progress...
#26
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
Quote:
Criticizing MathWorld for being hard to read unless you've already studied the topic is like criticizing a dictionary because it is not a primer for beginning readers. If what you want is a primer, get "Dick and Jane" or its equivalent, not "Webster's Third New International".

See my December 27 response to T.Rex at http://www.mersenneforum.org/showpos...1&postcount=15

Last fiddled with by cheesehead on 2004-12-31 at 07:04
#27
"Richard B. Woods"
Aug 2002
Wisconsin USA
111100000110₂ Posts
Having said that, let me note that I was thinking in terms of printed text, not hypertext.
One might suggest that the authors of on-line reference works should be expected to incorporate links to, for example, historical background or tutorials. However, the ease of linking does not eliminate the work of actually producing that historical or tutorial material. What does the reference work's author link to? If that information is not already available and easily found online (say, with Google), how much effort do you expect the reference work's author to expend to find it?
#28
May 2003
7×13×17 Posts |
If you are looking for an easier solution, there just won't be one, because the equations given are exactly what one needs to show that A(1), ..., A(n) are the roots of x^n = 0. And to see this, one needs to look at symmetric polynomials.

However, if one assumes that A(1), ..., A(n) are all distinct and non-zero, then one can get a contradiction much sooner, by showing that the coefficient matrix for these equations is a Vandermonde matrix, which has non-zero determinant by simple methods.
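The Vandermonde determinant fact is easy to verify for small cases. A minimal Python sketch (cofactor expansion with exact fractions; the helper names are ad hoc) checks that the determinant equals the product of the differences of the points, hence is non-zero when the points are distinct:

```python
from fractions import Fraction

def vandermonde(points):
    # Row i is (1, a_i, a_i^2, ..., a_i^(n-1)).
    n = len(points)
    return [[Fraction(a) ** j for j in range(n)] for a in points]

def det(m):
    # Cofactor expansion along the first row; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

points = [2, 3, 5, 7]  # distinct (and non-zero)
expected = Fraction(1)
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        expected *= points[j] - points[i]  # product of (a_j - a_i), i < j

assert det(vandermonde(points)) == expected
assert expected != 0
print("det =", det(vandermonde(points)))  # prints det = 240
```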
#29
Dec 2003
Hopefully Near M48
2·3·293 Posts
Last fiddled with by jinydu on 2005-01-05 at 19:03
#30
May 2003
7×13×17 Posts |
A Vandermonde matrix, M, is of the following form: Let A(1), ..., A(n) be any numbers. Then M is the n×n matrix whose ith column is (A(1)^(i-1), A(2)^(i-1), ..., A(n)^(i-1))^t. In other words, the first column is all 1's, the second column is A(1), ..., A(n), the third column is A(1)^2, ..., A(n)^2, and so forth. See http://mathworld.wolfram.com/VandermondeMatrix.html for an actual matrix.
-------------------------------------------------------------------

To understand the symmetric polynomials, do the following exercises.

(1) Take the polynomial (x-a_1)(x-a_2)(x-a_3)...(x-a_n) and multiply it out for a few small values of n, and look at the coefficients on the powers of x. For example, for n=1 we just have x - a_1. For n=2 we have x^2 - (a_1 + a_2)x + (a_1 a_2). In general, the coefficient of x^(n-1) will always be -(a_1 + a_2 + ... + a_n) and the constant coefficient will always be (-1)^n a_1 a_2 ... a_n. The other, intermediate coefficients are the other elementary symmetric polynomials, up to a sign. (Now would be a good time to look at that webpage about symmetric polynomials, and just look at the elementary symmetric polynomials. These are the polynomials that are defined with the capital Pi.)

(2) If we think of a_1, ..., a_n as variables, the polynomials (a_1)^i + (a_2)^i + ... + (a_n)^i, for each i, are symmetric polynomials. (If we switch any of the a_k around, it doesn't change the polynomial.) These are called the power sum polynomials.

(3) All that the Newton-Girard formulas do is show that the power sum polynomials can be expressed in terms of the elementary symmetric polynomials, and vice versa. So, if all of the power sum polynomials are zero, then all of the elementary symmetric polynomials are zero, and thus (x-a_1)(x-a_2)...(x-a_n) = x^n. (In other words, all of the roots a_k are zero.)
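Exercise (1) can be checked mechanically. The sketch below (ad hoc Python, ascending coefficient lists) multiplies out prod (x - a_i) and confirms that the x^(n-1) coefficient is minus the sum of the roots and the constant term is (-1)^n times their product:

```python
import math

def poly_from_roots(roots):
    # Coefficients of prod (x - a_i), ascending powers of x.
    coeffs = [1]
    for a in roots:
        shifted = [0] + coeffs                    # x * p(x)
        scaled = [-a * c for c in coeffs] + [0]   # -a * p(x)
        coeffs = [s + t for s, t in zip(shifted, scaled)]
    return coeffs

roots = [2, 3, 5, 7]
n = len(roots)
c = poly_from_roots(roots)

# Coefficient of x^(n-1) is -(a_1 + ... + a_n); the constant term
# is (-1)^n * a_1 * ... * a_n.
assert c[n - 1] == -sum(roots)
assert c[0] == (-1) ** n * math.prod(roots)
print("coefficients (ascending):", c)
```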
#31
Dec 2003
Hopefully Near M48
2·3·293 Posts
Thanks for the reply Zeta-Flux. Things are starting to make some sense. So if I'm not mistaken, the proof will look something like this.
Let S(k) = A(1)^k + A(2)^k + A(3)^k + ... + A(n)^k

and Pi(k) = the sum of all combinations of A(1), A(2), A(3), ..., A(n) taken k at a time.

At http://mathworld.wolfram.com/Newton-GirardFormulas.html, Eq. 3 represents the 1st identity, Eq. 4 is the 2nd identity, Eq. 5 is the 3rd identity and Eq. 6 is the 4th identity. In general, the nth identity will look like:

S(n) - S(n-1)*Pi(1) + S(n-2)*Pi(2) - S(n-3)*Pi(3) + ... + ((-1)^(n-1))*S(1)*Pi(n-1) + ((-1)^n)*n*Pi(n) = 0

By assumption, S(1) = S(2) = S(3) = ... = S(n) = 0. Therefore, since 0*Pi(k) = 0, all of the terms except the last one must equal zero. But since all of the terms added together equal 0, and 0 + x = 0 implies that x = 0, the last term must then also equal 0. Thus, one of its three factors equals 0. But (-1)^n can never be 0 (it can only be 1 or -1), and n can't be zero either, since n is a natural number. Therefore, Pi(n) must be 0.

By definition, Pi(n) = A(1)*A(2)*A(3)*...*A(n). Therefore, at least one of the variables must equal 0. This reduces the problem down to the case n-1. By applying the same argument repeatedly, we can reduce the problem down to the trivial case of n = 1, at which point it is proven that all the variables equal 0. Q.E.D.

Now the proof seems quite complete, except for one thing: I would have to prove the Newton-Girard formulas! :surprised :surprised :surprised That doesn't seem easy...
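The nth identity written above can at least be spot-checked numerically before attempting a proof. A small Python sketch (brute-force Pi(k) via combinations; the function names are ad hoc):

```python
from itertools import combinations

def S(vals, k):
    # Power sum: A(1)^k + ... + A(n)^k.
    return sum(v ** k for v in vals)

def Pi(vals, k):
    # Elementary symmetric polynomial: sum of all k-fold products.
    total = 0
    for combo in combinations(vals, k):
        term = 1
        for v in combo:
            term *= v
        total += term
    return total

def nth_identity(vals):
    # S(n) - S(n-1)*Pi(1) + ... + ((-1)^(n-1))*S(1)*Pi(n-1) + ((-1)^n)*n*Pi(n)
    n = len(vals)
    total = S(vals, n)
    for k in range(1, n):
        total += (-1) ** k * S(vals, n - k) * Pi(vals, k)
    total += (-1) ** n * n * Pi(vals, n)
    return total

for vals in ([2, 3], [1, -4, 6], [2, 3, 5, 7], [1, 2, 3, 4, 5]):
    assert nth_identity(vals) == 0
print("the nth identity vanishes on all samples")
```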
#32
May 2003
7×13×17 Posts |
That looks exactly right. The formula is not hard to prove, actually. All you have to do is look at S(m-k)Pi(k) and figure out exactly what the monomials look like. They break up into two kinds: monomials where the highest power on any variable is m-k and all the other variables have order 1, and monomials where the highest power on any variable is m-k+1 and the other variables have order 1. Then, grouping in this way, the sum just telescopes to zero.

For example, take n=3, so we have A(1), A(2), A(3). Let m=3, k=2, so we are looking at S(m-k)Pi(k) = S(1)Pi(2) = (A_1 A_2 + A_1 A_3 + A_2 A_3)(A_1 + A_2 + A_3) = [A_1^2 A_2 + A_1^2 A_3 + A_2^2 A_1 + A_2^2 A_3 + A_3^2 A_1 + A_3^2 A_2] + 3[A_1 A_2 A_3]. Notice in the first group that the highest power is 2, and in the second group the highest power is 1. Do the same type of grouping on S(2)Pi(1), Pi(3), and S(3), and the different groups just cancel each other out, and the proof follows.

--------

By the way, an alternate question to ask is: will this same method work if we use a field that is different from the complex numbers? (Say, a finite field.)
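On the closing question: over a finite field the last step of the argument can break, because the coefficient n in the term ((-1)^n)*n*Pi(n) may itself be zero in the field. A minimal sketch of a counterexample mod 2 (my own example, not from the thread):

```python
# Work mod p = 2 with A(1) = A(2) = 1: both power sums S(1) and S(2)
# vanish mod 2, yet neither variable is zero. The complex-number proof
# breaks because the factor n = 2 in ((-1)^n)*n*Pi(n) is 0 in F_2,
# so Newton's identity no longer forces Pi(n) = 0.
p = 2
vals = [1, 1]
n = len(vals)

for k in range(1, n + 1):
    assert sum(v ** k for v in vals) % p == 0  # S(k) = 0 in F_2

assert all(v % p != 0 for v in vals)           # but no variable is 0
print("S(1) = S(2) = 0 mod 2, yet A(1) = A(2) = 1")
```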
#33
Dec 2003
Hopefully Near M48
3336₈ Posts
Quote:
It does work for n = 2:

A + B = 0
(A + B)^2 = 0
A^2 + 2AB + B^2 = 0
A^2 + B^2 = -2AB

It also works for n = 3:

A + B + C = 0
(A + B + C)^3 = 0
A^3 + 3(A^2)B + 3A(B^2) + B^3 + 3(A^2)C + 6ABC + 3(B^2)C + 3A(C^2) + 3B(C^2) + C^3 = 0 (ouch!)
A^3 + B^3 + C^3 = -3(A^2)B - 3A(B^2) - 3(A^2)C - 6ABC - 3(B^2)C - 3A(C^2) - 3B(C^2)

According to Mathematica, the right side factorizes:

A^3 + B^3 + C^3 = -3(A+B)(A+C)(B+C)

Applying the first equation again:

A^3 + B^3 + C^3 = -3(-C)(-B)(-A)
A^3 + B^3 + C^3 = 3ABC

Thus, the formula still works. But it fails for n = 4. The expansion is even more terrifying, but we can disprove it using a counterexample.

Let A = 1, B = -1, C = 1, D = -1. Obviously, A + B + C + D = 0. A^4 = B^4 = C^4 = D^4 = 1, therefore A^4 + B^4 + C^4 + D^4 = 4. Also, ABCD = 1, so we expect that k = 4.

But if we choose different values for A, B, C and D, we get a different value of k. For example: A = 1, B = i, C = -1, D = -i. Again, A + B + C + D = 0. Also, A^4 = B^4 = C^4 = D^4 = 1, so A^4 + B^4 + C^4 + D^4 = 4. Since ABCD (allegedly) equals (A^4 + B^4 + C^4 + D^4)/k, and k = 4, we expect that ABCD = 1. But instead, we find that ABCD = (1*i*-1*-i) = (1*i)*(-1*-i) = i*i = -1.

Thus, the method would need to be modified.
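Both n = 4 examples can be confirmed directly with Python's built-in complex numbers (the helper name is ad hoc):

```python
# Check the two n = 4 examples with Python's built-in complex numbers.
def check(vals):
    total = sum(vals)
    fourth_powers = sum(v ** 4 for v in vals)
    prod = 1
    for v in vals:
        prod *= v
    return total, fourth_powers, prod

# First example: A, B, C, D = 1, -1, 1, -1.
assert check([1, -1, 1, -1]) == (0, 4, 1)

# Second example: A, B, C, D = 1, i, -1, -i.
# Same zero sum, same power sum of 4, but the product is now -1,
# so no single constant k makes A^4 + B^4 + C^4 + D^4 = k*ABCD.
assert check([1, 1j, -1, -1j]) == (0, 4, -1)
print("both n = 4 examples confirmed")
```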