mersenneforum.org > Fun Stuff > Puzzles
2004-12-24, 13:04   #23
jinydu
Quote:
Originally Posted by xilman
Not quite.

Here is a "proof" that all primes are less than 100, using your statement. Let p(n) be the n-th prime. p(2) = 3, which is < 100. p(3) = 5, and 5 < 100. Therefore the nth prime is less than 100.

The resolution, of course, is that for induction to work, you not only have to prove a base case, you also have to show that if the proposition is true for any value of n at all, it must be true for the value n+1.

In the case in question, the proposition was shown by jinydu to be true for n = 1, 2 and 3. He hasn't (as far as I'm aware) shown that the assumption it is true for arbitrary n leads inevitably to the conclusion that it must be true for n+1.

Paul
True, but if I could just show that at least one of the variables must equal zero for any natural n, the problem would reduce to the n - 1 case. And if I assume that the conjecture holds for the n - 1 case, this would show that it is also true for n, thus completing the proof by induction. This I knew all along.

I think mfgoode is saying that:

A(1) + A(2) + A(3) + ... + A(n) = 0

implies

A(1)^n + A(2)^n + A(3)^n + ... + A(n)^n = k*A(1)*A(2)*A(3)*...*A(n)
where k is some nonzero constant


If that is true, it would complete the proof by induction, since the left-hand side of that equation equals 0 (by assumption), so the right-hand side must also equal zero, implying that at least one of the variables must equal 0.

My problem is, I don't know how to prove that the crucial equation is true.
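The claimed identity can at least be spot-checked numerically before attempting a proof. The sketch below is an editorial illustration (the function name is mine, not from the post); it draws random rational values constrained to sum to zero and tests, with exact arithmetic, whether the ratio of the power sum to the product is the same constant k on every trial:

```python
from fractions import Fraction
import random

random.seed(1)

def same_k_every_trial(n, trials=5):
    """Test the claimed identity exactly, with rational arithmetic:
    given A(1)+...+A(n) = 0, is
    A(1)^n + ... + A(n)^n = k * A(1)*...*A(n) for one fixed k?"""
    ratios = set()
    for _ in range(trials):
        a = [Fraction(random.randint(1, 50), random.randint(1, 50))
             for _ in range(n - 1)]
        a.append(-sum(a))             # force the values to sum to zero
        power_sum = sum(x**n for x in a)
        product = Fraction(1)
        for x in a:
            product *= x
        ratios.add(power_sum / product)
    # the identity holds only if every trial yields the same constant k
    return len(ratios) == 1

print(same_k_every_trial(2))  # True: k = -2
print(same_k_every_trial(3))  # True: k = 3
```

Since the first n - 1 values are positive and the last is their negated sum, the product is never zero, so the ratio is always defined.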
2004-12-27, 16:50   #24
mfgoode
Challenge problem


Compliments of the season to all subscribers of this thread. I took a few days off from my computer for Christmas, hence the delay in replying. Anyway, please bear with me.

Paul: I didn’t claim to prove jinydu’s conjecture, so the proof I am supposed to have given is purely ethereal. It was just a helpful hint on how to proceed with proving it. The pattern was there for n = 1, 2, 3, so I hinted at employing the method of induction, which was a possibility. Your point about the method of induction was absolutely correct, and I know it well.
My message for jinydu was in d’Alembert’s words: “Go on, and faith will come to you.”

Jinydu: Your conjecture is difficult to prove by induction as large algebraic expressions are involved. Still, an astute mathematician should be able to do it with a shortcut.
My intuition is that it can be proved and that the conjecture holds for every n.
I have now proved the case n = 4, and from there the rest should follow suit up to the nth case.
In such cases we must ‘study the masters and not the pupils’.

Fermat, in his letters to Carcavi in August 1659, speaks of his ‘method of descent’, a technique he often used.
E.T. Bell, in his ‘Men of Mathematics’, gives a free translation of the letter to Carcavi and then summarises it in his own words:
‘The difficulties in applying descent to a new problem, that of proving that if the assumed or conjectured proposition is true, of any number of the kind concerned, chosen at random, then it will be true of a number smaller of the same kind’.

Wish you the best of luck
Mally
2004-12-31, 02:51   #25
jinydu

Has there been any luck on constructing that smaller nontrivial solution? I haven't been able to make any real progress...
2004-12-31, 07:01   #26
cheesehead ("Richard B. Woods")

Quote:
Originally Posted by jinydu
I suppose this is the T.Rex quote you had in mind:

"You're right. But, the first time I tried to understand the LLT, I read a proof provided by http://www.utm.edu/research/primes, I think. It was really badly explained. Very short. Boring. But it is a general complaint I have against Maths: most of the books provide the shortest possible proof, rather than explaining the long and difficult way used by the first discoverers. (I put group theory inside quotes in order to "group" the 2 words, so that it is more readable.)"

I would have to say that in spite of the vast amount of information available, most of the articles on Mathworld, unfortunately, meet this description. To Weisstein's credit, the proofs do score top points in terms of logical rigor, which is very important. But unfortunately, much of the material is very hard to read unless I've already studied the topic elsewhere.
Like T.Rex, though, you and the others expressing similar opinions are being unfair in your criticism because you're presuming that Mathworld has to be some sort of textbook that includes tutorials for everything. But it's not, and was never intended, nor ever pretended, to be so general. Instead, Mathworld is intended to be a reference work, and that's what it is.

Criticizing Mathworld for being hard to read unless you've already studied the topic is like criticizing a dictionary because it is not a primer for beginning readers. If what you want is a primer, get "Dick and Jane" or its equivalent, not "Webster's Third New International".

See my December 27 response to T.Rex at http://www.mersenneforum.org/showpos...1&postcount=15

2005-01-05, 10:36   #27
cheesehead

Having said that, let me note that I was thinking in terms of printed text, not hypertext.

One might suggest that the authors of on-line reference works should be expected to incorporate links to, for example, historical background or tutorials.

However, the ease of linking does not eliminate the work of actually producing the historical or tutorial material itself. What does the reference work's author link to? If that information is not readily available and easily found online (say, with a quick Google search), how much effort do you expect the reference work's author to expend to find it?
2005-01-05, 18:50   #28
Zeta-Flux

If you are looking for an easier solution, there just won't be one, because the equations given are exactly what one needs to show that A(1),..., A(n) are the roots of x^n = 0. And to see this, one needs to look at symmetric polynomials.

However, if one assumes that A(1),..., A(n) are all distinct and non-zero, then one can get a contradiction much sooner, by showing that the coefficient matrix for these equations is a Vandermonde matrix, which will have non-zero determinant by simple methods.
2005-01-05, 19:02   #29
jinydu

Quote:
Originally Posted by Zeta-Flux
If you are looking for an easier solution, there just won't be one, because the equations given are exactly what one needs to show that A(1),..., A(n) are the roots of x^n = 0. And to see this, one needs to look at symmetric polynomials.
My problem is that the Mathworld articles are even more impenetrable than the original problem, so it doesn't help to use that, unless I can find a simpler explanation.

Quote:
Originally Posted by Zeta-Flux
However, if one assumes that A(1),..., A(n) are all distinct and non-zero, then one can get a contradiction much sooner, by showing that the coefficient matrix for these equations is a Vandermonde matrix, which will have non-zero determinant by simple methods.
Maybe this could work. Although it doesn't fully solve the original problem, it certainly would be progress. What is a Vandermonde matrix?

2005-01-05, 20:53   #30
Zeta-Flux

A Vandermonde matrix, M, is of the following form: let A(1),..., A(n) be any numbers. Then M is the n×n matrix whose ith column is (A(1)^(i-1), A(2)^(i-1), ..., A(n)^(i-1))^t. In other words, the first column is all 1's, the second column is A(1),..., A(n), the third column is A(1)^2, ..., A(n)^2, and so forth. See http://mathworld.wolfram.com/VandermondeMatrix.html for an actual matrix.
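As an editorial aside (not part of the original post), this definition is easy to play with in NumPy: `np.vander` with `increasing=True` builds exactly the matrix described above, and the well-known determinant formula, the product of (a_j - a_i) over all i < j, shows why pairwise-distinct values give a non-zero determinant:

```python
import numpy as np

# Vandermonde matrix for distinct values A(1),...,A(4):
# column i holds A(k)^(i-1), exactly as described above.
a = np.array([1.0, 2.0, 3.0, 4.0])
M = np.vander(a, increasing=True)

# its determinant equals the product of (a_j - a_i) over all i < j,
# which is non-zero precisely when the a_k are pairwise distinct
det = np.linalg.det(M)
expected = 1.0
for i in range(len(a)):
    for j in range(i + 1, len(a)):
        expected *= a[j] - a[i]

print(M)
print(det, expected)  # both are 12, up to floating-point rounding
```

For a = (1, 2, 3, 4) the pairwise differences are 1, 2, 3, 1, 2, 1, whose product is 12, matching the numerical determinant.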
-------------------------------------------------------------------
To understand the symmetric polynomials do the following exercises.

(1) Take the polynomial (x-a_1) (x-a_2) (x-a_3) ... (x-a_n) and multiply it out for a few small values of n, and look at the coefficients on the powers of x. For example, for n=1 we just have x-a_1. For n=2 we have x^2 - (a_1 + a_2) x +(a_1 a_2).

In general, the coefficient of x^{n-1} will always be -(a_1 + a_2 + ... + a_n) and the constant coefficient will always be (-1)^n a_1 a_2 ... a_n. The other intermediate coefficients are the other elementary symmetric polynomials, up to sign. (Now would be a good time to look at that webpage about symmetric polynomials, and just look at the elementary symmetric polynomials. These are the polynomials defined with the capital Pi.)

(2) If we think of a_1, ..., a_n as variables, the polynomials, (a_1)^i + (a_2)^i + ... + (a_n)^i, for each i, are symmetric polynomials. (If we switch any of the a_k around it doesn't change the polynomial.) These are called the power sum polynomials.

(3) All that the Newton-Girard formulas do is show that the power sum polynomials can be written as linear combinations of the elementary symmetric polynomials, and vice versa. So, if all of the power sum polynomials are zero, then all of the elementary symmetric polynomials are zero, and thus (x-a_1) (x-a_2)...(x-a_n)=x^n. (In other words, all of the roots a_k are zero.)
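Step (3) can be turned into a small program. The sketch below is my own illustration (the function name is made up): it uses Newton's identities in the form k*e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i to recover the elementary symmetric polynomials e_k from the power sums p_i, so all-zero power sums visibly force every e_k to zero:

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Recover e_1..e_n from power sums p_1..p_n via Newton's identities:
    k*e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i, with e_0 = 1."""
    n = len(p)
    e = [Fraction(1)]                      # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * Fraction(p[i - 1])
                for i in range(1, k + 1))
        e.append(s / k)
    # report whole numbers as plain ints for readability
    return [int(x) if x.denominator == 1 else x for x in e[1:]]

# roots 1, 2, 3 have power sums p = (6, 14, 36); we recover e = (6, 11, 6),
# the coefficients (up to sign) of (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(elementary_from_power_sums([6, 14, 36]))   # [6, 11, 6]

# all power sums zero forces every e_k to zero, i.e. the polynomial is x^n
print(elementary_from_power_sums([0, 0, 0, 0]))  # [0, 0, 0, 0]
```

The second call is exactly the situation in the puzzle: every power sum vanishes, so every elementary symmetric polynomial vanishes, and (x-a_1)...(x-a_n) collapses to x^n.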
2005-01-07, 03:55   #31
jinydu

Thanks for the reply Zeta-Flux. Things are starting to make some sense. So if I'm not mistaken, the proof will look something like this.

Let S(k) = A(1)^k + A(2)^k + A(3)^k + ... + A(n)^k
and Pi(k) = the sum of all products of A(1), A(2), A(3), ..., A(n) taken k at a time

At http://mathworld.wolfram.com/Newton-GirardFormulas.html, Eq.3 represents the 1st identity, Eq. 4 is the 2nd identity, Eq. 5 is the 3rd identity and Eq. 6 is the 4th identity. In general, the nth identity will look like:

S(n) - S(n-1)*Pi(1) + S(n-2)*Pi(2) - S(n-3)*Pi(3) + ... + ((-1)^(n-1))*S(1)*Pi(n-1) + ((-1)^n)*n*Pi(n) = 0

By assumption, S(1) = S(2) = S(3) = ... = S(n) = 0

Therefore, since 0*Pi(k) = 0, all of the terms except the last one must equal zero.

But since all of the terms added together equal 0, and 0 + x = 0 implies that x = 0, the last term must then also equal 0.

Thus, one of the three factors equals 0. But (-1)^n can never be 0 (it can only be 1 or -1). n can't be zero either, since n is a natural number. Therefore, Pi(n) must be 0.

By definition, Pi(n) = A(1)*A(2)*A(3)*...*A(n)

Therefore, at least one of the variables must equal 0.

This reduces the problem down to the case n-1. By applying the same argument repeatedly, we can reduce the problem down to the trivial case of n = 1, at which point it is proven that all the variables equal 0.

Q.E.D.

Now the proof seems quite complete, except for one thing: I would have to prove the Newton-Girard Formulas! :surprised :surprised :surprised

That doesn't seem easy...
2005-01-07, 06:36   #32
Zeta-Flux

That looks exactly right. The formula is not hard to prove, actually. All you have to do is look at S(m-k)Pi(k) and figure out exactly what the monomials look like. They break up into two kinds: monomials where the highest power on any variable is m-k and all the other variables have order 1, and monomials where the highest power on any variable is m-k+1 and the other variables have order 1. Then, grouping in this way, the sum just telescopes to zero.


For example, take n=3, so we have A(1), A(2), A(3). Let m=3, k=2. So we are looking at S(m-k)Pi(k) = S(1)Pi(2) = (A_1 + A_2 + A_3)(A_1 A_2 + A_1 A_3 + A_2 A_3) = [ A_1^2 A_2 + A_1^2 A_3 + A_2^2 A_1 + A_2^2 A_3 + A_3^2 A_1 + A_3^2 A_2 ] + 3[A_1 A_2 A_3]. Notice in the first group that the highest power is just 2, and in the second group the highest power is 1. Do the same type of grouping on S(2)Pi(1), Pi(3), and S(3), and the different groups just cancel each other out, and the proof follows.
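To make the cancellation concrete, the n = 3 identity S(3) - S(2)*Pi(1) + S(1)*Pi(2) - 3*Pi(3) = 0 can be checked on random inputs; it holds for arbitrary A(1), A(2), A(3), with no zero-sum assumption needed. (An editorial sketch, not part of the original post; using integers keeps the check exact.)

```python
import random

random.seed(0)

def S(a, k):
    """Power sum: A(1)^k + A(2)^k + A(3)^k."""
    return sum(x ** k for x in a)

def Pi1(a): return a[0] + a[1] + a[2]
def Pi2(a): return a[0]*a[1] + a[0]*a[2] + a[1]*a[2]
def Pi3(a): return a[0]*a[1]*a[2]

# Newton's identity for n = 3, valid for ANY values A(1), A(2), A(3):
#   S(3) - S(2)*Pi(1) + S(1)*Pi(2) - 3*Pi(3) = 0
for _ in range(1000):
    a = [random.randint(-50, 50) for _ in range(3)]
    lhs = S(a, 3) - S(a, 2) * Pi1(a) + S(a, 1) * Pi2(a) - 3 * Pi3(a)
    assert lhs == 0          # exact, since we use integers

print("identity holds on 1000 random integer triples")
```

A random check is of course not a proof; it only confirms that the grouping argument above has the signs and terms in the right places.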
--------

By the way, an alternate question to ask is whether this same method works over a field different from the complex numbers (say, a finite field).
2005-01-07, 22:25   #33
jinydu

Quote:
Originally Posted by jinydu

A(1) + A(2) + A(3) + ... + A(n) = 0

implies

A(1)^n + A(2)^n + A(3)^n + ... + A(n)^n = k*A(1)*A(2)*A(3)*...*A(n)
where k is some nonzero constant

Actually, it seems that formula isn't true in general after all.

It does work for n = 2:

A + B = 0
(A + B)^2 = 0
A^2 + 2AB + B^2 = 0
A^2 + B^2 = -2AB

It also works for n = 3

A + B + C = 0
(A + B + C)^3 = 0
A^3 + 3(A^2)B + 3A(B^2) + B^3 + 3(A^2)C + 6ABC + 3(B^2)C + 3A(C^2) + 3B(C^2) + C^3 = 0 (ouch!)
A^3 + B^3 + C^3 = -3(A^2)B - 3A(B^2) - 3(A^2)C - 6ABC - 3(B^2)C - 3A(C^2) - 3B(C^2)

According to Mathematica, the right side factorizes

A^3 + B^3 + C^3 = -3(A+B)(A+C)(B+C)

Applying the first equation again:

A^3 + B^3 + C^3 = -3(-C)(-B)(-A)
A^3 + B^3 + C^3 = 3ABC

Thus, the formula still works.

But it fails for n = 4. The expansion is even more terrifying, but we can disprove the formula with a counterexample.

Let A = 1, B = -1, C = 1, D = -1

Obviously, A + B + C + D = 0

A^4 = B^4 = C^4 = D^4 = 1

Therefore, A^4 + B^4 + C^4 + D^4 = 4

Also, ABCD = 1

So we expect that k = 4.

But if we choose different values for A, B, C and D, we get a different value of k. For example:

A = 1, B = i, C = -1, D = -i

Again, A + B + C + D = 0

Also, A^4 = B^4 = C^4 = D^4 = 1.

So A^4 + B^4 + C^4 + D^4 = 4

Since ABCD (allegedly) equals (A^4 + B^4 + C^4 + D^4)/k, and k = 4, we expect that ABCD = 1

But instead, we find that ABCD = (1*i*-1*-i) = (1*i)*(-1*-i) = i*i = -1

Thus, the method would need to be modified.
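Both quadruples are easy to verify with Python's built-in complex numbers (an editorial check; the variable names are mine):

```python
# verify the two counterexample quadruples from the post above
for quad in [(1, -1, 1, -1), (1, 1j, -1, -1j)]:
    a, b, c, d = quad
    assert a + b + c + d == 0                      # zero-sum hypothesis
    power_sum = a**4 + b**4 + c**4 + d**4          # equals 4 in both cases
    product = a * b * c * d                        # 1 vs -1
    print(quad, power_sum, product)
```

The first quadruple forces k = 4 while the second forces k = -4: the power sums agree but the products differ, so no single non-zero constant k can work once n = 4.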