mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Miscellaneous Math (https://www.mersenneforum.org/forumdisplay.php?f=56)
-   -   Standard crank division by zero thread (https://www.mersenneforum.org/showthread.php?t=15278)

Don Blazys 2011-02-19 19:07

Standard crank division by zero thread
 
Here:

[U][COLOR="Navy"]http://donblazys.com/on_polygonal_numbers_3.pdf[/COLOR][/U]

you will find my "Special Polygonal Number Counting Function" that approximates (to a very high degree of accuracy)
how many "polygonal numbers of order greater than [TEX]2[/TEX]" there are under some given number [TEX]x[/TEX]
in much the same way that the function: [TEX]Li(x)[/TEX] approximates how many primes there are under some given number [TEX]x[/TEX].

The reason that I am posting it here is because this forum seems to have many experienced "coders"
who have access to some very powerful computers.

Here is my question...

Would it be possible to calculate [TEX]\varpi(x)[/TEX] to say, [TEX]x=10^{18}[/TEX] or so?
Given that [TEX]\pi(x)[/TEX] (the number of primes under [TEX]x[/TEX]) has been calculated to [TEX]x=10^{24}[/TEX],
I should think that this would be "easy", or at least, "possible in a reasonable amount of time",
but as it turns out, the coders who determined the present "world record" [TEX]\varpi(1,100,000,000,000)=704,398,597,754[/TEX]
informed me that determining [TEX]\varpi(10^{13})[/TEX] would probably take [B]about a year[/B].

There is an intrepid young coder who is letting his computer run constantly
and will verify that "world record" in a few days from now. He estimates that he will be able to determine [TEX]\varpi(10^{13})[/TEX]
in about 6 or 7 months, so I doubt that his machine is powerful enough to determine [TEX]\varpi(10^{18})[/TEX] within our lifetimes!

I don't own a computer (I use my granddaughter's laptop to post), nor do I know anything about "coding" or "programming".
Thus, I would greatly appreciate any help or advice whatsoever as to how a determination of [TEX]\varpi(10^{18})[/TEX] might be possible.

Thanks in advance,

Don.

science_man_88 2011-02-19 23:18

[QUOTE=Don Blazys;253053]Would it be possible to calculate [TEX]\varpi(x)[/TEX] to say, [TEX]x=10^{18}[/TEX] or so?

...

Thus, I would greatly appreciate any help or advice whatsoever as to how a determination of [TEX]\varpi(10^{18})[/TEX] might be possible.[/QUOTE]


I know my values may be off, so first I'll check them (no, I'm not a mathematician). The mass of the proton, according to my calculator (which, if I'm not mistaken, is needed for the mass ratio you talk of), is about 1.67*10^-27 kg, and the mass of the electron is about 9.109*10^-31 kg. The ratio my calculator gives for the two constants is 1836.15, but I'll shorten that to 1.83*10^3.
Assuming the e you use but don't define is 2.71828...? You also give the fine structure constant. I find that [TEX]B(x)\times(1-\frac{\alpha}{\mu-2\times {e}})[/TEX] is about 9.99*10^-1, which, if my math is correct, means the figures in the table are wrong. Never mind, I don't see the point of the first equation then, but like I said, I'm not mathematical.

Don Blazys 2011-02-21 00:44

The counting function [B]works. [/B]

In fact, it approximates the number of [B]polygonal numbers of order greater than 2[/B]
to a much higher degree of accuracy than [TEX]Li(x)[/TEX] approximates the prime numbers.

The challenge now is to determine [TEX]\varpi(x)[/TEX] to about [TEX]x=10^{18}[/TEX].

That will tell us a lot.

Don.

CRGreathouse 2011-02-21 01:30

[QUOTE=Don Blazys;253209]In fact, it approximates the number of [B]polygonal numbers of order greater than 2[/B]
to a much higher degree of accuracy than [TEX]Li(x)[/TEX] approximates the prime numbers.[/QUOTE]

Of course -- these are easy to count by virtue of being well-behaved polynomials. There's no need to account for the irregularities of the zeta function.

Your approximation is constant * n + lower order terms and your paper shows you modifying the coefficient of n based on counts done so far. That you use constants from physics is of no particular significance here; higher counts will show that the constant term is still off and needs to be modified.

Now I'm curious about this "young coder" you have working on the task. What algorithm is he using? Based on the speed it seems like direct enumeration, which would seem to be a very slow method for the task. I would think that the theory of Diophantine equations could be used to construct an inclusion-exclusion method that would be orders of magnitude faster.
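CRGreathouse doesn't spell out his method, but for a fixed index r the polygonal values form an arithmetic progression, so one naive reading of the inclusion-exclusion idea (my own illustrative sketch, not his algorithm) is to count the union of those progressions, intersecting them pairwise by the Chinese remainder theorem:

```python
from itertools import combinations
from math import gcd

def intersect(ap1, ap2):
    """Intersect {a1 + k*d1 : k >= 0} with {a2 + k*d2 : k >= 0}.

    Returns the intersection as (start, step), or None if it is empty."""
    a1, d1 = ap1
    a2, d2 = ap2
    g = gcd(d1, d2)
    if (a2 - a1) % g:
        return None                       # incompatible congruences
    m1, m2 = d1 // g, d2 // g             # coprime cofactors
    k = ((a2 - a1) // g * pow(m1, -1, m2)) % m2
    a = a1 + d1 * k                       # smallest common residue >= a1
    d = d1 * m2                           # lcm(d1, d2)
    if a < a2:                            # advance until inside both progressions
        a += -(-(a2 - a) // d) * d
    return a, d

def ap_count(ap, x):
    """How many terms of the progression are <= x."""
    a, d = ap
    return 0 if a > x else (x - a) // d + 1

def union_count(aps, x):
    """Inclusion-exclusion count of the union of the progressions up to x."""
    total = 0
    for size in range(1, len(aps) + 1):
        for subset in combinations(aps, size):
            cur = subset[0]
            for ap in subset[1:]:
                cur = intersect(cur, ap)
                if cur is None:
                    break
            if cur is not None:
                total += (-1) ** (size + 1) * ap_count(cur, x)
    return total

# For index r >= 3 the polygonal values of order > 2 are r + k*T with
# T = r*(r-1)/2 and k >= 1, so each index contributes one progression.
aps = [(r + r * (r - 1) // 2, r * (r - 1) // 2) for r in range(3, 11)]
print(union_count(aps, 60))  # indices r = 3..10 cover every value <= 60
```

With ~sqrt(x) progressions the subset enumeration has exponentially many terms, so this only illustrates the mechanics; a competitive method would have to exploit far more structure, which is presumably what the hint above is pointing at.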

Don Blazys 2011-02-21 15:09

Quoting CRGreathouse:
[QUOTE]
Of course --
these are easy to count by virtue of being well-behaved polynomials.
There's no need to account for the irregularities of the zeta function.
[/QUOTE]

The sequence of [B]"polygonal numbers of order greater than 2",[/B]
which can be found here:

[URL]http://oeis.org/A090466[/URL]

is [B][I][U]not[/U][/I][/B] a "well behaved" sequence.

As with the sequence of primes, it can only be described as
erratic, irregular, patternless, random and unpredictable.

The random fluctuations in [TEX]\varpi(x)[/TEX] may not be as pronounced as
the random fluctuations in [TEX]\pi(x)[/TEX] (the number of primes under x)
but they are nevertheless quite random, and the sequence is
[B][I]extraordinarily[/I][/B] [B][I][U]difficult[/U][/I][/B] to count.

At least a dozen coders crashed their computers trying to break the
present "world record" [TEX]\varpi(1,100,000,000,000)=704,398,597,754[/TEX].

Quoting CRGreathouse
[QUOTE]
Your approximation is constant * n + lower order terms and your paper
shows you modifying the coefficient of n based on counts done so far.
[/QUOTE]

Well, my counting function can be "simplified" as follows:

[TEX]B(x)*\left(1-\frac{\alpha}{\mu-2*e}\right)=[/TEX] [TEX]\left(x-\frac{x}{\alpha*\pi*e+e}-\frac{1}{2}*\sqrt{x-\frac{x}{\alpha*\pi*e+e}}\right)*\left(1-\frac{\alpha}{\mu-2*e}\right)[/TEX]

[TEX]=.64036274309582*x-.40011254372008*\sqrt{x}[/TEX],

and since I am the [B][I][U]first[/U][/I][/B] mathematician to develop a "counting function" for
"polygonal numbers of order greater than 2", I suppose that I could name
the coefficients .6403627... and .4001125... after myself and call them
"Blazys constants", but I am much too humble and modest to do that!
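For what it's worth, the claimed simplification is easy to check numerically; this sketch (using the constant values quoted elsewhere in the thread, 1/α = 137.035999084 and μ = 1836.15267247) reproduces both coefficients:

```python
from math import e, pi, sqrt

alpha = 1 / 137.035999084   # fine-structure constant, value used in the thread
mu = 1836.15267247          # proton-electron mass ratio, value used in the thread

# B(x) = x - x/(alpha*pi*e + e) - (1/2)*sqrt(x - x/(alpha*pi*e + e)),
# multiplied by the correction factor (1 - alpha/(mu - 2e)).
factor = 1 - alpha / (mu - 2 * e)
c_linear = (1 - 1 / (alpha * pi * e + e)) * factor          # coefficient of x
c_sqrt = 0.5 * sqrt(1 - 1 / (alpha * pi * e + e)) * factor  # coefficient of sqrt(x)

print(c_linear)  # ~0.64036274309582
print(c_sqrt)    # ~0.40011254372008
```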

Quoting CRGreathouse:
[QUOTE]
That you use constants from physics is of no particular significance here;
higher counts will show that the constant term is still off and needs to be modified.
[/QUOTE]

When I first began work on this counting function,
there was very little data to work [I][B]with[/B][/I]. However,
as the coders provided me with higher counts of [TEX]\varpi(x)[/TEX],
the "physical" constants [TEX]\alpha[/TEX] and [TEX]\mu[/TEX] emerged [B][I]naturally.[/I][/B]

That's why I am quite certain that the function is correct,
and will hold up regardless of how high the counts are.

Quoting CRGreathouse:
[QUOTE]
Now I'm curious about this "young coder" you have working on the task.
What algorithm is he using? Based on the speed it seems like direct enumeration,
which would seem to be a very slow method for the task.

I would think that the theory of Diophantine equations could be used to
construct an inclusion-exclusion method that would be orders of magnitude faster. [/QUOTE]

Well, the young coder is working [B][I]with[/I][/B] me... not [B][I]for[/I][/B] me.

Since I don't know anything about computers or coding,
I really can't comment on his methods, but he recently
informed me that [TEX]\varpi(1,200,000,000,000)[/TEX] will be determined
by this coming Friday, and [U]that[/U] will be the new world record.

If you are interested, I will post the result here as it comes in.

Don.

R.D. Silverman 2011-02-21 15:26

[QUOTE=Don Blazys;253281]Quoting CRGreathouse:


The sequence of [B]"polygonal numbers of order greater than 2",[/B]
which can be found here:

[URL]http://oeis.org/A090466[/URL]

is [B][I][U]not[/U][/I][/B] a "well behaved" sequence.

As with the sequence of primes, it can only be described as
erratic, irregular, patternless, random and unpredictable.
[/QUOTE]

He did not say that it was. You need to learn to read.
He said that the [b]polynomials[/b] were well behaved.


[QUOTE]
The random fluctuations in [TEX]\varpi(x)[/TEX] may not be as pronounced as
the random fluctuations in [TEX]\pi(x)[/TEX] (the number of primes under x)
but they are nevertheless quite random,
[/QUOTE]

Oh? What is the underlying density function? Please specify.


[QUOTE]
and since I am the [B][I][U]first[/U][/I][/B] mathematician to develop a "counting function" for
"polygonal numbers of order greater than 2",

[/QUOTE]

May we ask: Where did you get your math degree?

[QUOTE]
I suppose that I could name
the coefficients .6403627... and .4001125... after myself and call them
"Blazys constants", but I am much too humble and modest to do that!
[/QUOTE]

You have an approximation that works for the range(s) for which you
have computed values. Please show us the derivation of your
counting function. Or is it merely an empirical result from fitting curves?

CRGreathouse 2011-02-21 20:07

[QUOTE=Don Blazys;253281]I am the [B][I][U]first[/U][/I][/B] mathematician to develop a "counting function" for
"polygonal numbers of order greater than 2"[/QUOTE]

How do you know that you're the first?

Also, you seem to have only an empirical fit. What have you actually proved? I can prove bounds on the distribution function; can you?

[QUOTE=Don Blazys;253281]That's why I am quite certain that the function is correct,
and will hold up regardless of how high the counts are.[/QUOTE]

I don't suppose you'd care to back that up with a friendly bet?


I'm not prepared at the moment to commit the time or processor power, but at some point I may want to compete with the team of you and your coder to reach 10^14.

science_man_88 2011-02-21 21:10

[QUOTE=Don Blazys;253281]Well, my counting function can be "simplified" as follows:

[TEX]B(x)*\left(1-\frac{\alpha}{\mu-2*e}\right)=[/TEX] [TEX]\left(x-\frac{x}{\alpha*\pi*e+e}-\frac{1}{2}*\sqrt{x-\frac{x}{\alpha*\pi*e+e}}\right)*\left(1-\frac{\alpha}{\mu-2*e}\right)[/TEX]

[TEX]=.64036274309582*x-.40011254372008*\sqrt{x}[/TEX]

...

he recently informed me that [TEX]\varpi(1,200,000,000,000)[/TEX] will be determined by this coming Friday, and [U]that[/U] will be the new world record.[/QUOTE]

Assigning all the values you give to the final equation, I can give you that result for x = 1.2*10^12, but you'll need PARI for me to give you the script I wrote for the equation.

According to my math:

[CODE]alpha = 137.035999084^-1[/CODE]

[CODE]micro = 1836.15267247[/CODE]

and:

[CODE]blazy(x)=(x-(x/(alpha*micro*Pi*exp(1)+exp(1)))-(.5*(sqrt(x-(x/(alpha*Pi*exp(1)+exp(1)))))))*(1-(alpha/(micro-(2*exp(1)))))[/CODE]

try it out for 1.2*10^12 and I get:

[QUOTE]1189750897554.266557436437709[/QUOTE]

science_man_88 2011-02-21 21:42

Never mind; coding error. I had added a term that didn't belong.

science_man_88 2011-02-21 22:02

result after correction
 
The only error I found was a micro where it didn't belong. With this corrected, I redid the test (took 0 ms according to my timer) and got the new result of:

[QUOTE]768434853413.6797063854667429[/QUOTE]
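For anyone without PARI, the corrected function is easy to reproduce; this Python port of the same formula (my sketch, using the constants given above) lands on the same figure to within floating-point error:

```python
from math import e, pi, sqrt

alpha = 1 / 137.035999084   # fine-structure constant, as in the PARI script
mu = 1836.15267247          # proton-electron mass ratio, as in the PARI script

def blazys(x):
    """Don's estimate: B(x) * (1 - alpha/(mu - 2e))."""
    y = x - x / (alpha * pi * e + e)
    return (y - 0.5 * sqrt(y)) * (1 - alpha / (mu - 2 * e))

print(blazys(1.2e12))  # ~768434853413.68, matching the corrected PARI result
```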

science_man_88 2011-02-21 22:39

Also, with the corrected code and the maximum memory allocation my machine can handle in the PARI console, I calculated this one:

[CODE](18:24)>blazy(1.2*10^100000000)
%34 = 7.684352917150111789071860349 E99999999
(18:30)>##
*** last result computed in 41,469 ms.[/CODE]

Don Blazys 2011-02-22 06:26

To: R.D. Silverman,

Quoting R.D. Silverman:
[QUOTE]
He did not say that it was. You need to learn to read.
He said that the [COLOR=black]polynomial[/COLOR][COLOR=red][B]s[/B][/COLOR] were [COLOR=black]well behaved[/COLOR].[/QUOTE]

Here is [B][U][I]exactly[/I][/U][/B] what CRGreathouse said...

Quoting CRGreathouse's remark [COLOR=black]about[/COLOR] [B]polygonal [COLOR=black]numbers[/COLOR] of order greater than 2:[/B]
[QUOTE]
[COLOR=black]These[/COLOR] are [COLOR=black]easy to count[/COLOR] by virtue of being well-behaved [COLOR=red][COLOR=black]polynomial[/COLOR][B]s[/B][/COLOR].
[/QUOTE]
In my paper, there is [B][U]only[/U] [U]one[/U][/B] polynomial.
There are [B][I]not[/I][/B] a multitude of polynomial[COLOR=red][B]s.[/B][/COLOR]
Now, a [B]polynomial[/B] is an [B]"[I]expression".[/I][/B]
[I][B]Nobody "counts" one expression ![/B][/I]
Therefore...
[COLOR=black][B][I]Nobody[/I][/B] [/COLOR][B][I][COLOR=black]"counts"[/COLOR][/I][/B] [B][I]one[/I][/B] [B][I][COLOR=black]polynomial ![/COLOR][/I][/B]
My paper is about counting [I][B]numbers[/B][/I];
it's [B][I]not[/I][/B] about counting one [B]polynomial[/B].

I like CRG, but clearly, he's a busy person,
didn't have time to fully digest the idea,
and "misspoke"... as did you.

Quoting R.D. Silverman:
[QUOTE]
You need to learn to read.
[/QUOTE]
Now that's just plain rude (and absolutely uncalled for).
I never did anything to you! [B]I came here wanting to be friends![/B]

You know, there's a "phrase that fits" overly educated people
who are rude [B][I]while[/I][/B] they are wrong. That phrase is "pompous buffoon".

Quoting R.D. Silverman:
[QUOTE]
You have an approximation that works for the range(s)
for which you have computed values.
[/QUOTE]
I can't take any credit for computing those values. They were computed
(and now verified) by a couple of excellent coders. I owe them a lot.

Quoting R.D. Silverman:
[QUOTE]
Oh? What is the underlying density function? Please specify.

Please show us the derivation of your counting function.
Or is it merely an empirical result from fitting curves?
[/QUOTE]
[B]Would you answer questions posed in that manner?[/B]

I suppose that I could ask the coders to provide me with more data...
pages and pages of counts in much smaller increments that I could
then test and analyze in a thousand different ways, but what would be
the point of all that if all I would get for it is the kind of treatment that
you and others such as yourself have been giving me? It's not worth it!

The data that you see in the paper, is all the data that I have.
When I began working on this function, there was a lot less.
In fact, there was almost no data! The count was less than 1000.

The derivation involves methods that I developed over many years
and would probably fill up a book. Why should I bother explaining any of it?
So that I can be called a "crank" and a "crackpot". No thanks!

If you have no interest or curiosity when my presentation is simple,
then you will certainly have no interest or curiosity in the details.

Quoting R.D. Silverman:
[QUOTE]May we ask: Where did you get your math degree?[/QUOTE]
I am a very humble and modest person, and as such,
I prefer not to focus attention on myself, but rather,
on the problem at hand, which is, can we determine [TEX]\varpi(10^{18})[/TEX]?

NBtarheel_33 2011-02-22 07:32

C-to-the-Rank Alert...
 
Misc. Math anyone???

NBtarheel_33 2011-02-22 07:40

[quote]
The derivation involves methods that I developed over many years
and would probably fill up a book. Why should I bother explaining any of it?
So that I can be called a "crank" and a "crackpot". No thanks!

If you have no interest or curiosity when my presentation is simple,
then you will certainly have no interest or curiosity in the details.[/quote]

Let's see. We've got:

* Years of developing and implementing the "derivation", completely out of view of (and without any collaboration or review on the part of) the mathematical community. Swing and a miss, strike one.

* A theory that "fills up a book". Swing and a miss, strike two.

* Refusal to explain or expound on said theory due to supposed lack of interest or knowledge on the part of the unwashed Philistine masses. Swing and a miss, strike three.

Who wants to calculate the crank points on this one? :loco:

R.D. Silverman 2011-02-22 10:37

[QUOTE=Don Blazys;253344]In my paper, there is [B][U]only[/U] [U]one[/U][/B] polynomial.
There are [B][I]not[/I][/B] a multitude of polynomial[COLOR=red][B]s.[/B][/COLOR]

...

You know, there's a "phrase that fits" overly educated people
who are rude [B][I]while[/I][/B] they are wrong. That phrase is "pompous buffoon".

...

I am a very humble and modest person, and as such,
I prefer not to focus attention on myself, but rather,
on the problem at hand, which is, can we determine [TEX]\varpi(10^{18})[/TEX]?[/QUOTE]

You are a classic crank.
Ignorant, unaware of your ignorance, and argumentative with
experts who know far more than you about the subject you
are trying to discuss.

You make handwaving claims that you can not substantiate.

Are you even aware of the relationship between the Bernoulli
numbers and the (coefficients) of the polynomials (yes, plural!)
that generate the polygonal numbers you are trying to count?

Congratulations. You have made my ignore list faster than anyone
else ever has.

akruppa 2011-02-22 10:39

A ten-minute hack computed [TEX]\varpi(10^{10})[/TEX] in 90 seconds. Its result, 6403587409, agrees with the value in your manuscript. Its run-time is roughly linear, so 10^13 should take about 24 hours. I made no effort to make the code efficient. Allowing larger values would require partitioning the sieve, which would take several more minutes to implement, and I don't think that's worth the time.
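For the curious, a bitmap sieve along these lines takes only a few lines; this sketch (mine, not akruppa's actual code) marks every value P(n,r) = r + (n-2)*r*(r-1)/2 with n >= 3 and index r >= 3, i.e. the terms of OEIS A090466, and counts the marks. It reproduces the small counts, but without the partitioning mentioned above it is memory-bound long before 10^13:

```python
def varpi(x):
    """Count polygonal numbers of order > 2 (OEIS A090466) up to x."""
    marked = bytearray(x + 1)             # one byte per candidate value
    r = 3
    while r + r * (r - 1) // 2 <= x:      # smallest value for index r is the n = 3 case
        step = r * (r - 1) // 2           # P(n, r) = r + (n - 2) * step
        for v in range(r + step, x + 1, step):
            marked[v] = 1
        r += 1
    return sum(marked)

print(varpi(30))  # 14: counts 6, 9, 10, 12, 15, 16, 18, 21, 22, 24, 25, 27, 28, 30
```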

Don Blazys 2011-02-22 11:50

Quoting CRGreathouse:
[QUOTE]How do you know that you're the first?[/QUOTE]

I searched the internet and the L.A. Public Library, and found nothing.
Also, sufficiently large tables of [TEX]\varpi(x)[/TEX] did not exist
until the coders working with me calculated them.

Quoting CRGreathouse:
[QUOTE]
Also, you seem to have only an empirical fit.
What have you actually proved? [/QUOTE]
I'm presenting a hypothesis that I think is intriguing.
Like the R.H., it may not be provable in the mathematical sense,
but only by "a preponderance of the evidence".

A theory is only as good as its ability to [B]predict[/B] results, so if a determination of [TEX]\varpi(10^{23})[/TEX],
done on a supercomputer, gives us a 20 digit value of the fine structure constant
which is later corroborated by "physical experiment", then further physical experiments
would no longer be necessary and that would save a lot of time, effort and money.

Quoting CRGreathouse:
[QUOTE]
I don't suppose you'd care to back that up with a friendly bet?
[/QUOTE]

Sure!

If [TEX]B(x)[/TEX] and [TEX]\varpi(x)[/TEX] cross before and after [TEX]\varpi(10^{13})[/TEX],
then you owe me $100.00.
If not, then I owe you $100.00.

Okay?

Don.

Don Blazys 2011-02-22 12:07

Quoting R.D. Silverman.
[QUOTE]You are a classic crank.
Ignorant, unaware of your ignorance, and argumentative with
experts who know far more than you about the subject you
are trying to discuss.[/QUOTE]

Don't forget to stick your thumbs in your ears and sing
nyah nyah nyah nyah nyah after you write such things.

Quoting R.D. Silverman.
[QUOTE]
Are you even aware of the relationship between the Bernoulli
numbers and the (coefficients) of the polynomials (yes, plural!)
that generate the polygonal numbers you are trying to count?

[/QUOTE]

[B][U]My[/U][/B] paper has only one polynomial. [B]My[/B] paper counts [B][I]numbers[/I][/B].

You are wrong and rude. A [B][I][U]classic[/U][/I][/B] buffoon!

Don.

CRGreathouse 2011-02-22 12:24

[QUOTE=Don Blazys;253344]Quoting CRGreathouse's remark [COLOR=black]about[/COLOR] [B]polygonal [COLOR=black]numbers[/COLOR] of order greater than 2:[/B]

In my paper, there is [B][U]only[/U] [U]one[/U][/B] polynomial.
There are [B][I]not[/I][/B] a multitude of polynomial[COLOR=red][B]s.[/B][/COLOR]
Now, a [B]polynomial[/B] is an [B]"[I]expression".[/I][/B]
[I][B]Nobody "counts" one expression ![/B][/I]
Therefore...
[COLOR=black][B][I]Nobody[/I][/B] [/COLOR][B][I][COLOR=black]"counts"[/COLOR][/I][/B] [B][I]one[/I][/B] [B][I][COLOR=black]polynomial ![/COLOR][/I][/B]
My paper is about counting [I][B]numbers[/B][/I],
It's [B][I]not[/I][/B] about counting one [B]polynomial[/B].

I like CRG, but clearly, he's a busy person,
didn't have time to fully digest the idea,
and "misspoke"............... as did you.[/QUOTE]

It's not uncommon that I'm imprecise when talking here, informally and amongst friends. But as it happens I said just what I meant there.

You have a single multivariate polynomial, but I look at it as a collection of single-variable polynomials because that unlocks the ability to use inclusion-exclusion to count them. Further, I think they're susceptible to a particular form of accelerated counting in that form -- but I'd rather not say more on that subject until I do some preliminary testing of my own.

CRGreathouse 2011-02-22 12:49

[QUOTE=Don Blazys;253366]If [TEX]B(x)[/TEX] and [TEX]\varpi(x)[/TEX] cross before and after [TEX]\varpi(10^{13})[/TEX],
then you owe me $100.00.
If not, then I owe you $100.00.[/QUOTE]

They're quite close (by construction) at the end of your calculated range, so that would be unwise. (They're likely to cross many times even if the asymptotics are wrong.) How about this: you have relative errors calculated for counts to {1, 2, ..., 11} * 10^11, and note the "very small and rapidly decreasing percentage of error."

If the geometric average of the absolute relative error is lower at {1, 2, 3, 4, 5} * 10^14 than at {1, 2, ..., 11} * 10^11 I owe you $100, otherwise you owe me $100. (Heck, we could push that number up if you'd like.)

I'll let you choose -- now, though, not when the calculations are complete -- what value to use for the constants, whether the values you used in the paper (137.035999084 and 1836.15267247), the CODATA values as of the time the calculations finish, or some other accepted standard.

CRGreathouse 2011-02-22 12:51

Oh, and I'm talking about your more refined estimate, B(x) * (1 - alpha/(mu - 2e)). By your admission the constant in B(x) seems to be off.

R.D. Silverman 2011-02-22 12:51

[QUOTE=Don Blazys;253367][B][U]My[/U][/B] paper has only one polynomial. [B]My[/B] paper counts [B][I]numbers[/I][/B].

You are wrong and rude. A [B][I][U]classic[/U][/I][/B] buffoon![/QUOTE]

You are an idiot studying to become an imbecile.

Your "counting function" is a function (as you gave it) of the form

C1 x + C2 sqrt(x)

You are so totally clueless that you don't even realize that this is NOT
a polynomial.

CRGreathouse 2011-02-22 12:58

[QUOTE=akruppa;253362]Allowing larger values would require partitioning the sieve which would take several more minutes, which I don't think worth the time.[/QUOTE]

I wonder if what you have in mind was what I had in mind.

akruppa 2011-02-22 13:13

I don't claim to have had anything in mind at all when I wrote the code. It simply looped through n,r combinations and lit up bits in an array. I can't imagine what the OP's haxor friends are doing if their code is two orders of magnitude slower than that.

CRGreathouse 2011-02-22 14:59

[QUOTE=akruppa;253376]I don't claim having had anything in mind at all when I wrote the code. It simply looped through n,r combinations and lit up bits in an array. I can't imagine what OP's haxor friends are doing if their code is two orders of magnitude slower than that.[/QUOTE]

I mean what you mentioned when you suggested partitioning the sieve.

Don Blazys 2011-02-23 12:19

To: R.D. Silverman,

Quoting R.D. Silverman:
[QUOTE]

Your "counting function" is a function (as you gave it) of the form

C1 x + C2 sqrt(x)

You are so totally clueless that you don't even realize that this is NOT
a polynomial.
[/QUOTE]
Are you hallucinating?

Where oh where did I ever say that my [B]counting function[/B] is a polynomial?

It was CRGreathouse who casually and informally introduced
the idea of "counting polynomials" in this discussion.

But you see, unlike you, I [B][I]know[/I][/B] which expression he was referring to.

[B]Unlike you[/B], I know that he was referring to the [B][I]polynomial[/I][/B]:

[TEX]\left(\frac{n}{2}-1\right)*r^2-\left(\frac{n}{2}-2\right)*r[/TEX]

and [I][B]not[/B][/I] to the "counting function"!

Quoting R.D. Silverman:
[QUOTE]
You are a classic crank.
You are an idiot studying to become an imbecile.
You have made my ignore list faster than anyone
else ever has.
[/QUOTE]

When you keep making remarks like that, I can only [I][B]wish[/B][/I] that you would
[B][U]hurry up and start ignoring me instead of continuing to be obsessed with me![/U][/B]

Don.
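The polynomial quoted above is the standard formula for the r-th n-gonal number, which is easy to sanity-check against the familiar triangular, square, and pentagonal sequences:

```python
def P(n, r):
    """The r-th polygonal number with n sides: (n/2 - 1)*r^2 - (n/2 - 2)*r.

    Written over a common denominator; the numerator is always even."""
    return ((n - 2) * r * r - (n - 4) * r) // 2

assert [P(3, r) for r in range(1, 6)] == [1, 3, 6, 10, 15]   # triangular
assert [P(4, r) for r in range(1, 6)] == [1, 4, 9, 16, 25]   # squares
assert [P(5, r) for r in range(1, 6)] == [1, 5, 12, 22, 35]  # pentagonal
```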

Don Blazys 2011-02-24 04:47

Quoting CRGreathouse
[QUOTE]
They're quite close (by construction) at the end of your calculated range,
so that would be unwise.
[/QUOTE]
The word "close" is somewhat arbitrary.

[TEX]\varpi(10^{12})[/TEX] and [TEX]\varpi(10^{13})[/TEX] are apart by a factor of 10.

That's roughly the difference between traveling across the U.S.A.
and traveling all the way around the Earth!

[TEX]\varpi(10^{12})[/TEX] and [TEX]\varpi(10^{14})[/TEX] are apart by a factor of 100.

That's roughly the difference between traveling across the U.S.A.
and traveling all the way around the Earth ten times!

Now, I'm confident enough in the accuracy of my function to take that bet,
but I'm not all that confident that anyone here can determine [TEX]\varpi(10^{14})[/TEX]
in a reasonable amount of time.

Don.

CRGreathouse 2011-02-24 04:56

[QUOTE=Don Blazys;253564]The word "close" is somewhat arbitrary.

[TEX]\varpi(10^{12})[/TEX] and [TEX]\varpi(10^{13})[/TEX] are apart by a factor of 10.[/QUOTE]

I'm not talking about how close those are, but how close the true value and your estimate are at 10^13. Simply because they're close, move at approximately the same rate, and jump around somewhat unpredictably (in the case of the true value) we'd expect many more crossings on that basis alone, even if they were not asymptotically equivalent.

I could do some Brownian modelling to see how many crossings we'd expect between 10^13 and 10^14 if they're not of the same order, though I'd have to estimate how far apart they are and also how much 'noise' there is in the true function. Right now I'm not inclined to do that, since it would take a few hours to do it right, but you can imagine that the number would be dozens if not hundreds or more.

CRGreathouse 2011-02-24 04:59

[QUOTE=Don Blazys;253564]Now, I'm confident enough in the accuracy of my function to take that bet,
but I'm not all that confident that anyone here can determine [TEX]\varpi(10^{14})[/TEX]
in a reasonable amount of time.[/QUOTE]

I'll leave the offer open until the end of this month. Of course, if the counts to 10^14, ..., 5 * 10^14 aren't completed, then neither of us will have to pay out, so that's not a concern. I'm more likely to invest time and effort into the calculation if I have some 'skin' in the game -- of course that's much more about my ego being on the line than the money.

[P.S. If you'd like we could also do it for charities of the winner's choice.]

Don Blazys 2011-02-24 09:46

To: CRGreathouse,
Quoting CRGreathouse:
[QUOTE]I'm more likely to invest time and effort into the calculation if I have some 'skin' in the game --
of course that's much more about my ego being on the line than the money.

[/QUOTE]It's not about the money for me either, (although it [I][B]should[/B][/I] be,
since my son lost his job in this screwed up economy and I am now
helping support him, my granddaughter, and my wife on one salary!)

How about this idea... regardless of whether or not the function:

[TEX]B(x)*\left(1-\frac{\alpha}{\mu-2*e}\right)=[/TEX] [TEX]\left(x-\frac{x}{\alpha*\pi*e+e}-\frac{1}{2}*\sqrt{x-\frac{x}{\alpha*\pi*e+e}}\right)*\left(1-\frac{\alpha}{\mu-2*e}\right)[/TEX]

is "absolutely correct", it is [B][I]still[/I][/B] good enough to give [B][U]excellent[/U][/B] estimates of [TEX]\varpi(x)[/TEX]
to well past [TEX]x=1,000,000,000,000[/TEX], which means that the coefficients of:

[TEX]=.64036274309582*x-.40011254372008*\sqrt{x}[/TEX]

are correct to about 14 decimal places.
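
A quick sanity check on this fit is possible with a few lines of code. The sketch below assumes that "polygonal numbers of order greater than 2" means the distinct integers of the form [TEX]P(s,n)=\frac{(s-2)n^2-(s-4)n}{2}[/TEX] with [TEX]s,n\ge3[/TEX] (the usual "nontrivial polygonal" convention, which is consistent with the counts quoted in this thread), and compares a brute-force count at a small height with the fitted form:

```python
# Sketch: brute-force count of nontrivial polygonal numbers <= x, i.e. distinct
# m = P(s, n) = ((s-2)*n*n - (s-4)*n) // 2 with s >= 3 and n >= 3, compared
# against the fitted form a*x - b*sqrt(x). (The definition is an assumption
# about the convention used in this thread.)
def varpi(x):
    hit = bytearray(x + 1)           # hit[m] = 1 once m has been seen
    n = 3
    while n * (n + 1) // 2 <= x:     # P(3, n) = n(n+1)/2, the n-th triangular number
        first = n * (n + 1) // 2
        step = n * (n - 1) // 2      # P(s+1, n) - P(s, n) = n(n-1)/2 for fixed n
        for m in range(first, x + 1, step):
            hit[m] = 1
        n += 1
    return sum(hit)

a, b = 0.64036274309582, 0.40011254372008
x = 10**6
exact = varpi(x)
approx = a * x - b * x**0.5
print(exact, approx, (approx - exact) / exact)
```

If the fit is as good as reported, the printed relative error should already be tiny at this small height.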

There is [B][I]still[/I][/B] a lot of research to be done here,
and this [B][I]is[/I][/B] to polygonal numbers as [TEX]Li(x)[/TEX] is to [TEX]\pi(x)[/TEX],
so why not consider co-authoring a paper on this topic with me?

You can have "top billing" and we can call .6403627... and .4001125...
the Greathouse-Blazys constants regardless of whether or not we will
ultimately have to jettison the function involving [TEX]\pi[/TEX],[TEX]e[/TEX],[TEX]\alpha[/TEX] and [TEX]\mu[/TEX].:smile:

I don't know enough about computers to use them for doing math,
(but I am learning how to use a pocket calculator) nor do I know how to
write "publication ready" papers in LaTeX, so I [B][U]need[/U][/B] to find a professional
who can do all that, (and probably a lot more that I am unaware of).

You see, I don't hold it against you that you called me a "crank".

Here's why...

I have a [B][I]friend[/I][/B] who is a well known professor at a prestigious university
who told me point blank that although he knows that my proof of BC is good,
he can't say it publicly without the risk of being ostracised by his colleagues
who don't understand it and who would probably damage his career for
[B][I]associating[/I][/B] with a "crank". (His initials are B.B., in case he ever reads this.):wink:

I am well aware of the possibility that your situation may be similar to his,
and that it is therefore all but obligatory that you "distance" yourself from me.

I, on the other hand, have no "math career" to protect,
and couldn't care less what "real" mathematicians call me!

I'm [B][I]not[/I][/B] a "real" mathematician, and the only math I know is what my grandfather
(who [B][I]was[/I][/B] a mathematician and a bridge engineer) stuffed down my throat when I was a kid,
and from the books that I read to alleviate the boredom of having to wait between fares
while driving a taxi to support my then young family.

Thus, I am [B][I]free[/I][/B] to splatter and scatter my findings all over cyberspace,
and that is [B][I]exactly[/I][/B] what I will continue to do until one of you professionals
musters up the courage and risks your reputation to give me a helping hand!

I am now over 60 years old, and I want to [B][I]forget[/I][/B] about math and finish up my life
playing music (which is what I used to do to supplement my income as a taxi driver).

But I will [B][I][U]not[/U][/I][/B] let polygonal numbers flounder about without their very own "counting function".

Heck, even the relatively obscure "practical numbers" have a "counting function"
and considering how extraordinarily difficult polygonals of order > 2 are to count,
a counting function for them has been long, long overdue!

Moreover, and perhaps, just perhaps most importantly, it has recently been discovered that
quantum chaotic Hamiltonians may be responsible for the non-trivial zeroes of the Riemann Zeta Function
which would mean that the prime counting function is intimately connected to quantum mechanics...
so it may not be so "far fetched" to suppose that the counting function for that [B][U][I]other[/I][/U][/B] erratic sequence,
the [B]polygonal numbers of order greater than 2[/B], might give rise to the fine structure constant.

Anyway, I find you to be intellectually honest, and that's what's most important to me,
so please consider the above ramblings and let me know what you think.

Don.

science_man_88 2011-02-24 12:36

if you download PARI and walk through some of the PARI commands thread, that might help with the math-on-a-computer part. As for LaTeX, you can possibly use:

[url]http://www.ctan.org/tex-archive/info/symbols/comprehensive/symbols-a4.pdf[/url] It's a symbol list; not all of them work everywhere at last check, but it's a good help.

for example, I've learned enough from quotes from others and from playing around with symbols that I can now do:

[TEX]\pi = 4\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}[/TEX]

or

[TEX]\pi = \sqrt{12}\sum_{k=0}^{\infty}\frac{(-3)^{-k}}{2k+1}[/TEX]
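
For the record, both series are easy to check numerically with truncated partial sums; a quick sketch:

```python
# Numerical check of the two pi series above (truncated partial sums).
from math import pi, sqrt

# Leibniz series: converges slowly (error ~ 1/N after N terms)
leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(100_000))

# Madhava-type series: converges geometrically (error ~ 3**-N)
madhava = sqrt(12) * sum((-3.0) ** (-k) / (2 * k + 1) for k in range(40))

print(abs(leibniz - pi))
print(abs(madhava - pi))
```

The contrast in convergence speed is the interesting part: 40 terms of the second series already reach machine precision, while 100,000 terms of the first are still off in the fifth decimal place or so.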

CRGreathouse 2011-02-24 15:14

[QUOTE=Don Blazys;253579]is "absolutely correct", it is [B][I]still[/I][/B] good enough to give [B][U]excellent[/U][/B] estimates of [TEX]\varpi(x)[/TEX]
to well past [TEX]x=1,000,000,000,000[/TEX], which means that the coefficients of:

[TEX]=.64036274309582*x-.40011254372008*\sqrt{x}[/TEX]

are correct to about 14 decimal places.[/QUOTE]

I agree that it's a great approximation. I reserve judgment on how many decimal places are right -- I'd have to do some analysis on my own to distinguish the primary from secondary terms and I wouldn't do the problem justice by guessing right now.

[QUOTE=Don Blazys;253579]I have a [B][I]friend[/I][/B] who is a well known professor at a prestigious university
who told me point blank that although he knows that my proof of BC is good,
he can't say it publicly without the risk of being ostracised by his colleagues
who don't understand it and who would probably damage his career for
[B][I]associating[/I][/B] with a "crank". (His initials are B.B., in case he ever reads this.)[/QUOTE]

Interesting. I've looked at your T-Z (Beal) conjecture proofs and they were utter rubbish. That your friend wouldn't see that straightaway makes me question his ability, if not credentials.

I was pleased to see that for this problem you took a more reasonable position. The count is clearly Theta(n) and you have that; the constant is almost surely close to the value you give for it; you provide evidence for your claim. That we disagree on the 'numerology' of the constant is little in comparison.

[QUOTE=Don Blazys;253579]Anyway, I find you to be intellectually honest, and that's what's most important to me,
so please consider the above ramblings and let me know what you think.[/QUOTE]

I think that the current choice of constants is overfit, so that the closeness of the match is not as great as it appears by looking at the count up to 1.1 * 10^12. In particular, I suspect the prediction for the interval
[TEX]\left(1-\frac{\alpha}{\mu-2e}\right)\left(\mathcal{B}(1.2\cdot10^{12})-\mathcal{B}(1.1\cdot10^{12})\right)[/TEX]
will have a relative error substantially greater than the 10[SUP]-11[/SUP] observed up to 1.1 * 10^12.

I think that I'd need to look at the residuals more closely before commenting on the Ansatz you've chosen, a*x + b*sqrt(x).

I think that the leading term is quite close but, as I wrote above, I can't say how close without looking at it more closely.

--

I have an idea for counting solutions more quickly. Depending on the details, it may also be a good way to estimate the number of solutions to high precision. To avoid embarrassment if I'm wrong I'm not going to give out detail until I have a chance to work with the idea and see if it pans out. There are very specific criteria the problem will need to have to work for efficient counting and for precise estimation (and actually they're different, so conceivably this could give a very good approximation but be inefficient if used to count exactly).

Don Blazys 2011-02-25 09:40

Quoting CRGreathouse:
[QUOTE]
Interesting. I've looked at your T-Z (Beal) conjecture proofs and they were utter rubbish.
That your friend wouldn't see that straightaway makes me question his ability, if not credentials.
[/QUOTE]
Well, my friend has impressive and impeccable credentials. (PhDs in both mathematics [I][B]and[/B][/I] physics.)

Moreover, he is not the only one who thinks that my proof of BC is both true and correct.
If you look at the "articles and letters" on my website, then you will find a lot more evidence that I'm right!
Even the [B]Journal of the London Mathematical Society gave my proof some support[/B] while declining to
publish it due only to a lack of available journal space, and back then my proof was a handwritten manuscript!

And that's not all! My proof [B][I][U]is[/U][/I][/B] [B][I][U]published[/U][/I][/B] in the online journal "Unsolved Problems", where
it can be refereed not just by one or two referees, but by the [B][I]entire math community[/I][/B],
and no one has ever found a "fatal flaw" in it. Google searching "Beal's Conjecture Proof"
shows that it is consistently in the top 5, so clearly, it is both well known and popular.

By contrast, there is absolutely [B]no evidence whatsoever[/B] that my proof of BC is wrong.
You calling it "utter rubbish", and kids with names like "Punky Munky" calling me a "crank"
in no way constitutes evidence, and I can assure you that if a fatal flaw was ever found,
then I would drop my proof like a hot potato! I would [B][U]never[/U][/B] waste my [B][I]precious[/I][/B] time on a lie.

Thus, the controversy continues, and I think that a [B][I]formal[/I][/B] online debate between
a [B]recognized panel of experts[/B] and [B]myself[/B] is in order. Maybe you can help arrange that!

Quoting CRGreathouse:

[QUOTE]
I think that the current choice of constants is overfit.
[/QUOTE]
The function [TEX]B(x)*\left(1-\frac{\alpha}{\mu-2*e}\right)[/TEX] starts out as an overestimate,

but then becomes mostly an underestimate as the "random excursions" of [TEX]\varpi(x)[/TEX]

periodically overtake [TEX]B(x)*\left(1-\frac{\alpha}{\mu-2*e}\right)[/TEX] and then drop back down below it.

Interestingly, [TEX]B(x)*\left(1-\frac{\alpha}{\mu-2*e}\right)[/TEX] never seems to overestimate [TEX]\varpi(x)[/TEX] by much.

Here's an interesting article on the possible connection between prime numbers and quantum physics:

[URL]http://www.americanscientist.org/issues/id.3349,y.0,no.,content.true,page.1,css.print/issue.aspx[/URL]

Don Blazys 2011-02-25 09:45

Thanks for the info science_man_88.

Don.

Don Blazys 2011-03-01 11:04

Last Friday, the coder working on this theory with me determined that

[TEX]\varpi(1,200,000,000,000)=768,434,854,386[/TEX]. My function gives:

[TEX]B(1,200,000,000,000)*\left(1-\frac{\alpha}{\mu-2*e}\right)=768,434,853,414[/TEX] for a difference of [TEX]-972[/TEX]

and a relative error of [TEX]-.00000000126[/TEX], which is outstanding!

An incredibly accurate prediction!

The determination of [TEX]\varpi(1,300,000,000,000)[/TEX] should be complete later today.
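
These error figures are easy to re-derive from the two raw counts quoted above; a minimal sketch:

```python
# Re-deriving the quoted error figures from the two raw 1.2 * 10^12 counts.
count = 768_434_854_386      # reported varpi(1,200,000,000,000)
pred  = 768_434_853_414      # value given by the fitted function
diff = pred - count
rel = diff / count
print(diff)                  # -972
print(rel)                   # about -1.26e-9
```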

Don.

CRGreathouse 2011-03-01 12:17

[QUOTE=Don Blazys;253669]Moreover, he is not the only one who thinks that my proof of BC is both true and correct.
If you look at the "articles and letters" on my website, then you will find a lot more evidence that I'm right!
Even the [B]Journal of the London Mathematical Society gave my proof some support[/B] while declining to
publish it due only to a lack of available journal space, and back then my proof was a handwritten manuscript![/QUOTE]

If the London Mathematical Society thought the proof was correct they would have published it. Major outstanding conjectures in exponential Diophantine theory aren't ignored. But of course they saw that the claimed proof was wrong and declined it.

[QUOTE=Don Blazys;253669]By contrast, there is absolutely [B]no evidence whatsoever[/B] that my proof of BC is wrong.
You calling it "utter rubbish", and kids with names like "Punky Munky" calling me a "crank"
in no way constitutes evidence, and I can be assure you that if a fatal flaw was ever found,
then I would drop my proof like a hot potato! I would [B][U]never[/U][/B] waste my [B][I]precious[/I][/B] time on a lie.[/QUOTE]

I've given specific criticisms on a number of different forums. For example:
[url]http://www.physicsforums.com/showthread.php?t=301139[/url]

After the first half-dozen mistakes I lost interest in pointing them out. Had your proof been sound except for those things I might have kept up interest longer, but the whole proof is fatally flawed because of your bizarre assumption that the integers are closed under root extraction.

[QUOTE=Don Blazys;253669]Thus, the controversy continues, and I think that a [B][I]formal[/I][/B] online debate between
a [B]recognized panel of experts[/B] and [B]myself[/B] is in order. Maybe you can help arrange that![/QUOTE]

I really couldn't -- the experts wouldn't spend their time on a 'proof' like that. If the proof looked correct to me I *might* be able to get certain professors I know to review it on the strength of my recommendation (as a favor), but since it doesn't I wouldn't even be able to convince them on those grounds.

Don Blazys 2011-03-02 12:10

Quoting CRGreathouse:

[QUOTE]
If the London Mathematical Society thought the proof was correct they would have published it.
Major outstanding conjectures in exponential Diophantine theory aren't ignored.
But of course they saw that the claimed proof was wrong and declined it.
[/QUOTE]That's not true.

They declined for the exact same reason they decline to publish 99% of
the papers that are submitted to them, which is lack of journal space.

The Journal of the London Mathematical Society found [B][U]no[/U][/B] fatal flaw in my paper.
Quite the contrary, the referee gave my work [B][I]some support![/I][/B]
If there was a fatal flaw, then they would simply have pointed it out.
Instead, they recommended that I send it to another [B]good[/B] journal!

That's because it's correct! :smile:

Quoting CRGreathouse:
[QUOTE]I've given specific criticisms on a number of different forums. For example:
[URL]http://www.physicsforums.com/showthread.php?t=301139[/URL]
[/QUOTE]In that rude and childish forum, your single "criticism" about "integrality"
needed only a little clarification and in no way constitutes a "fatal flaw".

I encourage everyone to read that entire thread and see for themselves that
no one who participated in that discussion ever found a "fatal flaw" in my proof.
In fact, everyone who tried made complete and utter fools of themselves.
(And I was being as gentle as I could be!)

Finally, their fragile egos couldn't take anymore and in their frustration,
they locked that thread, thereby capitulating and admitting defeat.

(By the way, if you Google search "Beal's Conjecture Proof", then you will find that
the above thread that you linked to is #3. Clearly, it is very popular because
I cleaned their clocks! In fact, I [B][I]still[/I][/B] get e-mails about it!)

Quoting CRGreathouse:
[QUOTE]
After the first half-dozen mistakes I lost interest in pointing them out.
[/QUOTE]There were no "half dozen mistakes".

The [B][U]truth[/U][/B] is... I made [B][I]no[/I][/B] errors. In fact, in post #28 of that thread,
there is a list of all the errors that were made, and my only "faux pas"
was to not fully explain something that I thought was obvious.

Quoting CRGreathouse:
[QUOTE]
Had your proof been sound except for those things
I might have kept up interest longer,
but the whole proof is fatally flawed because of your
bizarre assumption that the integers are closed under root extraction.
[/QUOTE]I would [B][I]never[/I][/B] claim or assume such a silly notion!

Here's my proof:

[U][COLOR="Navy"]http://donblazys.com/02.pdf[/COLOR][/U]

please [B][I][U]show[/U][/I][/B] me where I claim that integers are closed under root extraction.

Quoting CRGreathouse:
[QUOTE]
...the experts wouldn't spend their time on a 'proof' like that.
[/QUOTE]That's wrong too.

A lot of experts [B][I]did[/I][/B] spend a lot of their time on it, (years in some cases!)
and found my Proof of Beal's Conjecture to be both true and correct. :smile:

Anyway, this thread is supposed to be about my polygonal number counting function
which is something that number theory desperately needs because
polygonal numbers of order greater than 2 are so incredibly hard to count.
([TEX]\varpi(1,300,000,000,000)[/TEX] should be determined by tomorrow... )

However, if you would like to continue discussing my Proof of Beal's Conjecture,
then please let me know, and I will start a new thread on it.


Don.

rajula 2011-03-02 13:53

[QUOTE=Don Blazys;254162]
Here's my proof:

[U][COLOR=Navy]http://donblazys.com/02.pdf[/COLOR][/U]
[/QUOTE]

I know I perhaps should not do this, but... I did have a look at the files on you web pages.

For example, when you write equation (1), you assume that ([TEX]z = 1[/TEX] or) [TEX]c \ne T[/TEX]. (Otherwise the last equality does not hold, assuming you are using real numbers here.) The conclusion which follows "and division by zero prevents - -" is therefore [B]false[/B].

[QUOTE=Don Blazys;254162]
They declined for the exact same reason they decline to publish 99% of
the papers that are submitted to them, which is lack of journal space.

The Journal of the London Mathematical Society found no fatal flaw in my paper.
Quite the contrary, the referee gave my work some support!
If there was a fatal flaw, then they would simply have pointed it out.
Instead, they recommended that I send it to another good journal!
[/QUOTE]

Perhaps you are talking about some other letter than the one in [U][COLOR="Navy"]http://donblazys.com/letters_and_articles.pdf[/COLOR][/U]?

Assuming the odd case that it is the same letter, then.. The letter reads "- - we felt obliged to reject your paper in favour of more highly recommended contributions." which [B]does not[/B] imply that they did not publish it because of lack of journal space. The letter also [B]does not[/B] say that the LMS found no fatal flaw; you can, however, say they did not report any. Also, they "hoped that you have no difficulty in finding another good journal for the paper" which is [B]not[/B] the same as recommending sending it to another good journal.

(It could well be that you made these conclusions after some extra communications with the LMS; or that you are talking about some other submission.)

About [I]A Special Polygonal Number Counting Function Involving the Fine Structure Constant and the Proton to Electron Mass Ratio[/I]: Do you have any (heuristic) arguments why the approximation should be correct, or is it a result of numerical experiments? Have you tried any probabilistic (or other trivial) approximations to obtain bounds?

Don Blazys 2011-03-04 12:25

Quoting rajula:
[QUOTE]I know I perhaps should not do this, but... [/QUOTE]I think that you will be okay, just as long as you are [B]very careful[/B]
and proceed with [B]great caution[/B]. Don't forget, I am not just "[B][I]any[/I][/B] crank",
I am a "[B][I]dangerous crank"[/I][/B] who will "get you" with his "counting function"!:lol:

Quoting rajula:
[QUOTE]
...when you write equation (1) you assume that [TEX]z=1[/TEX] [/QUOTE]Here is my proof:

[U][COLOR="Navy"]http://donblazys.com/02.pdf[/COLOR][/U]

As anyone can see, equations (1) and (2) assume [TEX]Z,z>2[/TEX].
It is then (and only then) that [TEX]T\not=c[/TEX], which means that substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] is impossible,
which is a contradiction, because it [B][I][U]must[/U][/I][/B] be possible to substitute [TEX]\frac{c}{c}=1[/TEX] for [TEX]\frac{T}{T}=1[/TEX].

Equation (3) assumes [TEX]Z=1[/TEX] and equation (4) assumes [TEX]z=2[/TEX] which completes the proof because
both equations (3) and (4) allow [TEX]T=c[/TEX] which in turn allows us to substitute [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] which means that
there is no contradiction if and only if [TEX]Z=1[/TEX] and [TEX]z=2[/TEX].

Quoting rajula:
[QUOTE]
(assuming you are using real numbers here)
[/QUOTE]"Real numbers" are [B][I][U]not[/U][/I][/B] assumed because they include "irrationals"
for which the concept of co-primality is meaningless!
As anyone can see, in my proof, each and every variable is
carefully defined as an element in some set of natural numbers.

Quoting rajula:
[QUOTE]
The letter also [B]does not[/B] say that the LMS found no fatal flaw;
you can, however, say they did not report any.
Also, they "hoped that you have no difficulty in finding another good journal for the paper"
which is [B]not[/B] the same as recommending sending it to another good journal.[/QUOTE]You are not being very logical. Now, think about this carefully.

Would the Journal of the London Mathematical Society (which is one of the world's most prestigious journals)
"hope" that a "fatally flawed" proof would make its way into another [B][U]good[/U][/B] journal?

Would the Journal of the London Mathematical Society (which is one of the world's most prestigious journals)
give [B][I][U]some support[/U][/I][/B] to a proof that was "fatally flawed"?

Of course not!

But they [B][I]would[/I][/B] give [B][I][U]some support[/U][/I][/B] to a proof that is both true and correct!

That's why they gave [B][I]my[/I][/B] proof [B][I][U]some support![/U][/I][/B]

Doesn't that make you happy? :smile:

Quoting rajula:
[QUOTE] It could well be that you made these conclusion after some extra communications with the LMS.
[/QUOTE]I phoned them "long distance". They told me that my proof was correct,
but that it was not a "priority" because Wiles already proved "Fermat's Last Theorem".
I then reminded them that my paper proves the "general case",
and they said they would "consider it".

That was a dozen years ago.

I then decided that it would be a [B][I]lot[/I][/B] easier to publish it in an online journal for amateurs,
where it can [B][I]easily[/I][/B] be viewed and "refereed" by the [B][I]entire math community[/I][/B]
rather than just a few subscribers to the Journal of the London Mathematical Society!

As it turns out, that was the right decision because my proof is now (and has been for quite a while)
[B][I]consistently[/I][/B] in the [B]top five[/B] when you Google search "Beal's Conjecture Proof".

Quoting rajula:
[QUOTE]
About the [I]A Special Polygonal Number Counting Function Involving [/I]
[I]the Fine Structure Constant and the Proton to Electron Mass Ratio[/I]:
Do you have any (heuristic) arguments why the approximation should be correct
or is it a result of numerical experiments? Have tried any probabilistic (or other trivial)
approximations to obtain bounds?
[/QUOTE]It's logically derived. The constants [TEX]\alpha[/TEX] and [TEX]\mu[/TEX] emerged [B][I]naturally[/I][/B] and came as a complete surprise.
I will present those details at a later date. Right now it's more important to get higher counts of [TEX]\varpi(x)[/TEX].
By the way, [TEX]\varpi(1,300,000,000,000)[/TEX] was just determined to be [TEX]832,471,110,338[/TEX].
My "approximation function" predicted [TEX]832,471,109,826[/TEX] which is off by only [TEX]-512[/TEX].
Pretty amazing, don't you think?

Don

science_man_88 2011-03-04 13:46

[QUOTE=Don Blazys;254283]
As it turns out, that was the right decision because my proof is now (and has been for quite a while)
[B][I]consistently[/I][/B] in the [B]top five[/B] when you Google search "Beal's Conjecture Proof".
[/QUOTE]

okay, and for the search "nephrotic syndrome + site:webs.com" my site nephroticsyndrome.webs.com is first (this doesn't make it accurate, though I did learn some from my doctor); it just means Google thinks it fits the search best. It could be that yours is the most read (which still doesn't prove accuracy).

rajula 2011-03-04 15:06

[QUOTE=Don Blazys;254283]
I think that you will be okay, just as long as you are [B]very careful[/B]
and proceed with [B]great caution[/B]. Don't forget, I am not just "[B][I]any[/I][/B] crank",
I am a "[B][I]dangerous crank"[/I][/B] who will "get you" with his "counting function"!:lol:
[/QUOTE]
I meant that I perhaps should not continue off-topic. The topic of the thread was [I]A Special Polygonal Number Counting Function[/I].

[QUOTE=Don Blazys;254283]
- - which means that substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] is impossible,
which is a contradiction, because it [B][I][U]must[/U][/I][/B] be possible to substitute [TEX]\frac{c}{c}=1[/TEX] for [TEX]\frac{T}{T}=1[/TEX].
[/QUOTE]
Substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] is possible, but substituting [TEX]c[/TEX] for [TEX]T[/TEX] is not. These two substitutions should not be mixed up.

[QUOTE=Don Blazys;254283]
"Real numbers" are [B][I][U]not[/U][/I][/B] assumed because they include "irrationals"
for which the concept of co-primality is meaningless!
As anyone can see, in my proof, each and every variable is
carefully defined as an element in some set of natural numbers.
[/QUOTE]
Natural numbers are also real numbers.

[QUOTE=Don Blazys;254283]
Would the Journal of the London Mathematical Society (which is one of the worlds most prestigious journals)
"hope" that a "fatally flawed" proof would make it's way into another [B][U]good[/U][/B] journal?
[/QUOTE]
They probably would not. But.. rejection letters (I have received plenty of those :smile:) are usually written in a careful and positive tone. They usually praise the manuscript or encourage you to work on it or submit it elsewhere, regardless of the content.

[QUOTE=Don Blazys;254283]
Would the Journal of the London Mathematical Society (which is one of the worlds most prestigious journals)
give [B][I][U]some support[/U][/I][/B] to a proof that was "fatally flawed"?
[/QUOTE]
I thought it was the referee who did that? I know that the Journal of the LMS is one of the better journals. I also have some work which is published in it. To my knowledge they follow the usual procedure of refereeing. First the referee or editors decide if the work might be worth considering for publication. If it is, then a referee will read it, comment on it and recommend it to be rejected/accepted/revised. If your manuscript gets this far you will most likely get a referee's report listing corrections and comments.

[QUOTE=Don Blazys;254283]
Doesn't that make you happy? :smile:
[/QUOTE]
It makes me happy that the journals give nice and encouraging responses. I just wanted to point out that it is better to cite the responses rather than the uncertain conclusions. (After one has received huge numbers of letters accepting and rejecting manuscripts, there is a possibility that one can draw accurate conclusions for oneself. I, personally, am not able to draw such conclusions, and even if I were, I would never consider making those conclusions publicly!)

[QUOTE=Don Blazys;254283]
- - As it turns out, that was the right decision because my proof is now (and has been for quite a while)
[B][I]consistently[/I][/B] in the [B]top five[/B] when you Google search "Beal's Conjecture Proof".
[/QUOTE]
It is true that your writings are easy to find with Google. And if that was your goal, then you have done well in that regard.

[QUOTE=Don Blazys;254283]
It's logically derived. The constants [TEX]\alpha[/TEX] and [TEX]\mu[/TEX] emerged [B][I]naturally[/I][/B] and came as a complete surprise.
I will present those details at a later date.
[/QUOTE]
I look forward to seeing the details.

[QUOTE=Don Blazys;254283]
Right now it's more important to get higher counts of [TEX]\varpi(x)[/TEX].
By the way, [TEX]\varpi(1,300,000,000,000)[/TEX] was just determined to be [TEX]832,471,110,338[/TEX].
My "approximation function" predicted [TEX]832,471,109,826[/TEX] which is off by only [TEX]-512[/TEX].
[/QUOTE]
I do not understand why the higher counts are more important. For me they have only little importance.

[QUOTE=Don Blazys;254283]
Pretty amazing, don't you think?
[/QUOTE]
I would have to analyze the behavior of [TEX]\varpi(x)[/TEX] before I could say if the approximation is amazing or not.

CRGreathouse 2011-03-04 20:53

[QUOTE=Don Blazys;254283]substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] is impossible[/QUOTE]

What does this even mean? The equations cited, (1) and (2), do not even have [TEX]\frac cc[/TEX] or [TEX]\frac TT[/TEX] in them.

Don Blazys 2011-03-05 13:33

To rajula,

Quoting rajula:
[QUOTE]
Substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] is possible, but substituting [TEX]c[/TEX] for [TEX]T[/TEX] is not.
These two substitutions should not be mixed up.[/QUOTE]

You are wrong. Here's why...

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]c[/TEX] for [TEX]T[/TEX] results in [TEX]\frac{c}{c}[/TEX].

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] [B][I][U]also[/U][/I][/B] results in [TEX]\frac{c}{c}[/TEX].

So clearly, the two substitutions are absolutely equivalent!

Thus, if we can't substitute [TEX]c[/TEX] for [TEX]T[/TEX], then neither can we substitute [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX].

If we substitute [TEX]c[/TEX] for [TEX]T[/TEX] (or [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX]) on the left side only,
then we can no longer [B][I]derive[/I][/B] the terms involving logarithms!

Quoting rajula:
[QUOTE]
It is true that your writings are easy to find with Google.
And if that was your goal, then you have done well in that regard.
[/QUOTE]
Thank you!

Journals are dead.

They are quickly going the way of the horse and buggy and,
except for a few elitist snobs, nobody reads them anymore.

Most people [B][I]now[/I][/B] get their information by searching the internet,
and when they search for "Beal's Conjecture Proof", it is [B][I]my[/I][/B] proof,
which is [B][I]obviously[/I][/B] both true and correct, that they will find!

Quoting rajula:
[QUOTE]
I do not understand why the higher counts are more important.
For me they have only little importance. [/QUOTE]

Given sufficiently high counts of [TEX]\varpi(x)[/TEX], we can solve for [TEX]\alpha[/TEX] and find out if
it matches the most accurately determined value of the Fine Structure Constant.

You see, [B]polygonal numbers[/B] are every bit as fundamental and important as [B]prime numbers[/B].
That fact alone makes this the most important counting function this side of [TEX]Li(x)[/TEX].
However, if it turns out that [TEX]\alpha[/TEX] in this function precisely matches
the most accurately measured value of the fine structure constant,
then my counting function will be of that much greater importance to mankind.
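
Mechanically, "solving for [TEX]\alpha[/TEX]" from a count is just one-dimensional root-finding on the formula quoted earlier in the thread. A sketch follows; the mass-ratio value and the bracketing interval are my assumptions, and this is purely illustrative -- it recovers whatever alpha generated the input count, and by itself cannot validate any connection to physics:

```python
# Sketch: recover alpha from a count via bisection on Don's formula
# B(x)*(1 - alpha/(mu - 2e)), with B(x) = y - sqrt(y)/2 and
# y = x - x/(alpha*pi*e + e). MU is an assumed, fixed mass-ratio value.
from math import pi, e, sqrt

MU = 1836.15267    # proton-to-electron mass ratio (assumption)

def predicted(x, alpha):
    y = x - x / (alpha * pi * e + e)
    return (y - 0.5 * sqrt(y)) * (1 - alpha / (MU - 2 * e))

def solve_alpha(x, count, lo=1e-3, hi=1e-1, iters=100):
    # predicted() is increasing in alpha on this bracket, so bisection works
    for _ in range(iters):
        mid = (lo + hi) / 2
        if predicted(x, mid) < count:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha_true = 0.0072973525693          # CODATA-era fine structure constant
x = 1_100_000_000_000
print(solve_alpha(x, predicted(x, alpha_true)))   # recovers alpha_true
```

Note that feeding in a real count of [TEX]\varpi(x)[/TEX] would return whichever alpha makes the formula match that count; agreement or disagreement with the measured fine structure constant is then an empirical question, not something the code can settle.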

Doesn't that make you happy? :smile:

Don.

Don Blazys 2011-03-05 13:50

To CRGreathouse:

Quoting CRGreathouse:
[QUOTE]
The equations cited, (1) and (2), do not even have [TEX]\frac{c}{c}[/TEX] or [TEX]\frac{T}{T}[/TEX] in them.
[/QUOTE]

The equations cited, (1) and (2), [B][I]must[/I][/B] have [TEX]\frac{T}{T}[/TEX] in them.

Otherwise, the terms involving logarithms could not have been logically derived.

So please keep looking as hard as you can!

I'm sure that you [B][I]will[/I][/B] be able to find those cancelled [TEX]T[/TEX]'s. :smile:

Don

rajula 2011-03-05 14:25

[QUOTE=Don Blazys;254365]
You are wrong. Here's why...

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]c[/TEX] for [TEX]T[/TEX] results in [TEX]\frac{c}{c}[/TEX].

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] [B][I][U]also[/U][/I][/B] results in [TEX]\frac{c}{c}[/TEX].

So clearly, the two substitutions are absolutely equivalent!
[/QUOTE]

Now you are confusing implication with equivalence! I do not see any point in arguing anymore as it is clear that we are not following the same logic.

[QUOTE=Don Blazys;254365]
Journals are dead.

They are quickly going the way of the horse and buggy and
exept for a few elitist snobs, nobody reads them anymore.

Most people [B][I]now[/I][/B] get their information by searching the internet,
and when they search for "Beal's Conjecture Proof", it is [B][I]my[/I][/B] proof,
which is [B][I]obviously[/I][/B] both true and correct, that they will find!
[/QUOTE]

Most people might search for information on the internet. But for those who do not want to check all the details, refereed journals are still the way to go (and most of them are online). It is true that papers in refereed journals contain lots of errors, but (for example) the arXiv surely contains more. Random web pages on the matters at hand are almost certain to contain errors.

[QUOTE=Don Blazys;254365]
However, if it turns out that [TEX]\alpha[/TEX] in this function precisely matches
the most accurately measured value of the fine structure constant,
then my counting function will be of that much greater importance to mankind.

Doesn't that make you happy? :smile:
[/QUOTE]

You did say that the approximation is logically derived. So, assuming you are telling the truth, one can indeed approximate the fine structure constant by the counting function, and there is no reason to verify that they match.

However, as I am not asleep, dreaming does not make me happy. To be honest, your previous illogical responses make me seriously doubt that there is any such logical derivation. If there is no derivation, any computation beyond the known measurements for the fine structure constant gives no new information about the constant.

science_man_88 2011-03-05 14:42

[QUOTE=Don Blazys;254365]
You are wrong. Here's why...

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]c[/TEX] for [TEX]T[/TEX] results in [TEX]\frac{c}{c}[/TEX].

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] [B][I][U]also[/U][/I][/B] results in [TEX]\frac{c}{c}[/TEX].
[/QUOTE]

Here's an example proving you wrong: let's say T=1 and c=2.

Then [TEX]T \ne c[/TEX], but 1/1 = 2/2 = 1, so T/T = c/c = 1 regardless of whether T = c or not.

Don Blazys 2011-03-05 16:16

Quoting science man 88:[QUOTE]
Here's an example proving you wrong: let's say T=1 and c=2.
[/QUOTE]

Sorry science man, but in my proof, [TEX]T[/TEX] is [B][I]defined[/I][/B] as being an element of [TEX]\left\{2,3,4...\right\}[/TEX].

Thus, we can not allow [TEX]T=1[/TEX].

My proof is correct.

Don.

xilman 2011-03-05 16:25

[QUOTE=Don Blazys;254371]Quoting science man 88:

Sorry science man, but in my proof, [TEX]T[/TEX] is [B][I]defined[/I][/B] as being an element of [TEX]\left\{2,3,4...\right\}[/TEX].

Thus, we can not allow [TEX]T=1[/TEX].

My proof is correct.

Don.[/QUOTE]Let T = 2 and c = 3

science_man_88 2011-03-05 17:12

[QUOTE=xilman;254372]Let T = 2 and c = 3[/QUOTE]

continuing with this under these new values:

T [TEX]\ne[/TEX] c, but 2/2 = 3/3 = 1, so T/T = c/c = 1, which you say can't happen; so your claim is false.

science_man_88 2011-03-05 17:58

Check out some things at https://oeis.org/A0904
 
On that page we have the formula for the n-th k-gonal number (if accurate) and the comment about using k and n values such that [TEX]k\ge 3[/TEX] and [TEX]n\ge 3[/TEX]. By making a PARI script that lays it out like a table, I think I have a way, using a finite sum of linear equations based on the 3-gonal sequence, to calculate the number of polygonal numbers of order greater than 2 below a value x, and I have an idea to sort out the one final thing. Anyone care for details? Note that this post is technically talking about a special equation to calculate a value (function?) and therefore fits this thread.

formula given:

[QUOTE]The n-th k-gonal number is 1 +k*n(n-1)/2 - (n-1)^2.[/QUOTE]

PARI:

[CODE]for(k=3,100,for(n=3,10,print1(1+k*n*(n-1)/2-(n-1)^2","));print(":"k))[/CODE]

science_man_88 2011-03-05 19:18

[QUOTE=science_man_88;254378]On that page we have the formula for the n-th k-gonal number (if accurate) and the comment about using k and n values such that [TEX]k\ge 3[/TEX] and [TEX]n\ge 3[/TEX]. By making a PARI script that lays it out like a table, I think I have a way, using a finite sum of linear equations based on the 3-gonal sequence, to calculate the number of polygonal numbers of order greater than 2 below a value x, and I have an idea to sort out the one final thing. Anyone care for details? Note that this post is technically talking about a special equation to calculate a value (function?) and therefore fits this thread.

formula given:



PARI:

[CODE]for(k=3,100,for(n=3,10,print1(1+k*n*(n-1)/2-(n-1)^2","));print(":"k))[/CODE][/QUOTE]

Okay, anyway, I'm moving on with this idea. It stems from the fact that the n-th k-gonal numbers lie on the line ((n-1)-th 3-gonal)*z + (n-th 3-gonal), so the maximum (assuming it repeats) number of k-gonal numbers with k and n greater than or equal to 3 is a finite sum of the number of results in these sequences less than x, up to the y-th 3-gonal number, where y is the highest n such that the (y-th 3-gonal) < x.
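
The claimed line through the 3-gonal numbers checks out numerically. A short Python sketch (my own illustration, not from the thread; `tri` and `polygonal` are names I invented) verifies that the n-th k-gonal number equals tri(n-1)*z + tri(n) with z = k-3:

```python
def tri(m):
    # m-th triangular (3-gonal) number
    return m * (m + 1) // 2

def polygonal(k, n):
    # n-th k-gonal number, using the OEIS formula quoted above
    return 1 + k * n * (n - 1) // 2 - (n - 1) ** 2

# Every n-th k-gonal number lies on the line tri(n-1)*z + tri(n), with z = k-3.
for k in range(3, 50):
    for n in range(3, 50):
        assert polygonal(k, n) == tri(n - 1) * (k - 3) + tri(n)
```

In particular z = 0 (that is, k = 3) recovers the triangular numbers themselves.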

CRGreathouse 2011-03-05 22:38

[QUOTE=Don Blazys;254365]Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]c[/TEX] for [TEX]T[/TEX] results in [TEX]\frac{c}{c}[/TEX].

Given [TEX]\frac{T}{T}[/TEX] and substituting [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] [B][I][U]also[/U][/I][/B] results in [TEX]\frac{c}{c}[/TEX].

So clearly, the two substitutions are absolutely equivalent![/QUOTE]

Are you off your rocker?

If c = 2 and T = 3, then (c/c) * c^2 = 4. Substituting c = T makes this equation false. Substituting T/T for c/c leaves it true. The two are not only inequivalent but obviously inequivalent.
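
For concreteness, the difference between the two substitutions can be checked mechanically. A minimal Python sketch (my own illustration, not from either paper):

```python
c, T = 2, 3

# CRGreathouse's example: (c/c) * c^2 = 4 is a true statement.
assert (c / c) * c**2 == 4

# Substituting T/T for c/c preserves truth, since both ratios equal 1.
assert (T / T) * c**2 == 4

# Substituting c = T throughout changes the value, making the statement false.
assert (T / T) * T**2 == 9  # now 9, not 4: the substitutions are inequivalent
```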

science_man_88 2011-03-06 00:01

[QUOTE=CRGreathouse;254401]Are you off your rocker?

If c = 2 and T = 3, then (c/c) * c^2 = 4. Substituting c = T makes this equation false. Substituting T/T for c/c leaves it true. The two are not only inequivalent but obviously inequivalent.[/QUOTE]

What do you think of my idea? (So far in my search I've found too many repeats, though all of them 3-gonal numbers so far.)

science_man_88 2011-03-06 01:47

[QUOTE=science_man_88;254410]What do you think of my idea? (So far in my search I've found too many repeats, though all of them 3-gonal numbers so far.)[/QUOTE]

I've solved the repeat part: I realized I could use the vecsort features.

The problem now is speeding it up.

[CODE]blazy2(x)=my(y=[],v);for(k=3,x+1,for(n=3,x+1,v=1+k*n*(n-1)/2-(n-1)^2;if(v<=x,y=concat(y,[v]),break)));y=vecsort(y,,8);return(#y)[/CODE]

As far as I'm willing to calculate with this slow function, it matches the first table's second-column values at the link at the start of the thread.
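
For readers without PARI, here is a rough Python translation of blazy2 (my own sketch; it brute-forces the same count of distinct k-gonal numbers with k, n >= 3 not exceeding x, using a set instead of concat-plus-vecsort to remove repeats, so it is just as slow asymptotically):

```python
def blazy2(x):
    # Count distinct n-th k-gonal numbers (k >= 3, n >= 3) that are <= x.
    seen = set()
    k = 3
    while 3 * k - 3 <= x:  # smallest value for this k occurs at n = 3
        n = 3
        while True:
            v = 1 + k * n * (n - 1) // 2 - (n - 1) ** 2
            if v > x:      # values increase with n, so stop (PARI's break(1))
                break
            seen.add(v)
            n += 1
        k += 1
    return len(seen)
```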

Don Blazys 2011-03-06 02:30

To: rajula,

Quoting rajula:
[QUOTE]Now you are confusing implication with equivalence![/QUOTE]

No rajula, it is [B][I]you[/I][/B] who is confused.

The terms containing [TEX]T[/TEX] are [B][I]identities.[/I][/B]
Thus, if [TEX]Z,z>2[/TEX], then we [B][U]can't[/U][/B] substitute
[TEX]c[/TEX] for [TEX]T[/TEX], or [B][SIZE=3][COLOR=red]equivalently[/COLOR][/SIZE][/B], [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX].

The substitutions being equivalent [B][I]implies[/I][/B] that
we cannot substitute [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] if and only if [TEX]Z,z>2[/TEX].

Without that [B][I]equivalence[/I][/B], there would be no [B]implication ![/B]

Again, you seem to be insisting that we can substitute
[TEX]c[/TEX] for [TEX]T[/TEX], or [B][SIZE=3][COLOR=red]equivalently[/COLOR][/SIZE][/B], [TEX]\frac{c}{c}[/TEX] for [TEX]\frac{T}{T}[/TEX] on the left side only,
without taking into account that such a substitution would
automatically [B][I]contradict[/I][/B] the [B][U]fact[/U][/B] that the terms involving
logarithms were [B][U]derived[/U][/B] from the terms showing [TEX]\frac{T}{T}[/TEX].

Quoting rajula:
[QUOTE]
So assuming you telling the truth one can indeed approximate fine structure constant by
the counting function and there is no reason to verify that they match.
[/QUOTE]
I disagree. There are plenty of reasons to verify that they match!
[B][I]Any[/I][/B] theory, regardless of whether it is the result of luck, logic or both,
is only as good as its ability to [B][I]predict[/I][/B] experimental results, and [B][I][U]must[/U][/I][/B] therefore be
tested and [B][I]verified[/I][/B] in that regard! That's how science progresses.

If a determination of say, [TEX]\varpi(10^{18})[/TEX] allows us to [B][I]predict [/I][/B]several more digits of
the fine structure constant and those predictions are then corroborated by physical experiments,
then that would constitute [B][I][U]evidence[/U][/I][/B] that polygonal numbers are somehow related to quantum mechanics.

Quoting rajula:
[QUOTE]
However, as I am not asleep, dreaming does not make me happy.
[/QUOTE]
Well, I think you [B][I]are[/I][/B] asleep. You sure have been posting like it.

Quoting rajula:
[QUOTE]
I do not see any point in arguing anymore as it is clear that we are not following the same logic.
[/QUOTE]

On that we can agree. :smile:

Don.

Don Blazys 2011-03-06 06:25

Quoting CRGreathouse:
[QUOTE]
Are you off your rocker?[/QUOTE]
Of course not! Are you?

Quoting CRGreathouse:
[QUOTE]If c = 2 and T = 3, then (c/c) * c^2 = 4. Substituting c = T makes this equation false.
Substituting T/T for c/c leaves it true. The two are not only inequivalent but obviously inequivalent.
[/QUOTE]This is your most convoluted post ever!
It's almost as if you are looking at a different paper!
For one thing...

We are [B][I]not[/I][/B] talking about substituting T/T for c/c,
We [B][I][U]are[/U][/I][/B] talking about substituting c/c for T/T . (You have it "backwards"!)

Also, the term (c/c) * c^2 = 4 [B][I][U]doesn't even exist[/U][/I][/B] in my proof
because in my paper, the term (T/T)*c^Z assumes [B][I][U]odd[/U][/I][/B] values of Z.

Still, I think I know what you are trying to say, so here are a few questions...

Let's consider the more simple identity (T/T)*c^3=(T/T)^c^3,
where T can be viewed as some "[COLOR=red]cancelled common factor[/COLOR]".

Now, if we had to substitute c for T (or equivalently, c/c for T/T),
then would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote (c/c)*c^3=(T/T)*c^3,
or would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote (c/c)*c^3=(c/c)*c^3 ?

By the same token, would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote
(c/c)*c^3=T(c/T)^((3*ln(c)/(ln(T))-1)/(ln(c)/(ln(T))-1))
or would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we simply admitted that we
cannot allow the above and wrote
(T/T)*c^3=T(c/T)^((3*ln(c)/(ln(T))-1)/(ln(c)/(ln(T))-1)) ?

If we don't have [B][I]consistency[/I][/B] in our [B][I]logic[/I][/B], then we don't have mathematics.

Don
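
Whatever one makes of the substitution argument, the identity itself, and the division by zero at T = c, are easy to check numerically. A Python sketch (my own; the function name is invented):

```python
import math

def blazys_rhs(T, c, z=3):
    # Right-hand side of (T/T)*c^z = T*(c/T)^((z*L - 1)/(L - 1)),
    # where L = ln(c)/ln(T).  Undefined when T = c, since then L - 1 = 0.
    L = math.log(c) / math.log(T)
    return T * (c / T) ** ((z * L - 1) / (L - 1))

assert abs(blazys_rhs(2, 5) - 5**3) < 1e-6  # identity holds when T != c

try:
    blazys_rhs(5, 5)  # T = c makes the exponent's denominator zero
except ZeroDivisionError:
    print("undefined at T = c")
```

So the identity is algebraically valid for T > 1 with T != c; the dispute in the thread is over what, if anything, the T = c singularity implies.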

Don Blazys 2011-03-06 11:48

Note typo... (T/T)*c^3=(T/T)^c^3 should read (T/T)*c^3=(T/T)*c^3.

science_man_88 2011-03-06 12:51

[QUOTE=Don Blazys;254443]Note typo... (T/T)*c^3=(T/T)^c^3 should read (T/T)*c^3=(T/T)*c^3.[/QUOTE]

All we were saying is that because c/c = T/T = 1 we can substitute one for the other, but since c != T we can't substitute c for T.

Don Blazys 2011-03-06 13:01

Quoting science man 88
[QUOTE]All we were saying is that because c/c = T/T = 1 we can substitute one for the other, but since c != T we can't substitute c for T.[/QUOTE]
I don't think you understand. This is a [B][I]simple[/I][/B] question:

Let's consider the simple identity (T/T)*c^3=(T/T)*c^3,
where T can be viewed as some "[COLOR=red]cancelled common factor[/COLOR]".

Now, if we had to substitute c for T (or equivalently, c/c for T/T),
then would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote (c/c)*c^3=(T/T)*c^3,
or would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote (c/c)*c^3=(c/c)*c^3 ?


I say (c/c)*c^3=(c/c)*c^3 .

What do you say?

science_man_88 2011-03-06 13:06

[QUOTE=Don Blazys;254446]Quoting science man 88

I don't think you understand. This is a [B][I]simple[/I][/B] question:

Let's consider the simple identity (T/T)*c^3=(T/T)*c^3,
where T can be viewed as some "[COLOR=red]cancelled common factor[/COLOR]".

Now, if we had to substitute c for T (or equivalently, c/c for T/T),
then would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote (c/c)*c^3=(T/T)*c^3,
or would we be [B][I]consistent[/I][/B] in our [B][I]logic[/I][/B] if we wrote (c/c)*c^3=(c/c)*c^3 ?


I say (c/c)*c^3=(c/c)*c^3 .

What do you say?[/QUOTE]

(c/c)*c^3=(T/T)*c^3 works because c/c = T/T = 1 regardless of the values used; it's not an either-or situation as far as I can see.

Don Blazys 2011-03-06 13:59

Quoting science man 88:[QUOTE]
(c/c)*c^3=(T/T)*c^3 works because c/c = T/T = 1 regardless of the values used; it's not an either-or situation as far as I can see.
[/QUOTE]

No. It doesn't "work". Here's why...

(T/T)*c^3=(T/T)*c^3 is the result of having divided both sides by T in T*c^3=T*c^3.

(c/c)*c^3=(T/T)*c^3 is the result of what? Gibberish perhaps?

You see, it is a "situation"! There is no logical justification for (c/c)*c^3=(T/T)*c^3.

It's complete and utter garbage! Can't you see that?

Don.

axn 2011-03-06 14:25

@Don: If your proof structure for Beal's conjecture is correct, either:
a) you've proved that the exponent of the largest term must be <= 2, OR
b) you've proved that the exponents of _all_ the terms must be <= 2.

[Explanation: either there is something preventing us from applying the proof structure to the a^x and b^y terms OR there isn't. First case leads to a) and second case leads to b)]

Both are stronger statements than BC (and wrong). Therefore your proof structure must be wrong.

Don Blazys 2011-03-06 15:26

Quoting axn:
[QUOTE]
@Don: If your proof structure for Beal's conjecture is correct, either:
a) you've proved that the exponent of the largest term must be <= 2, OR
b) you've proved that the exponents of _all_ the terms must be <= 2.

[Explanation: either there is something preventing us from applying the proof structure to the a^x and b^y terms OR there isn't. First case leads to a) and second case leads to b)]

Both are stronger statements than BC (and wrong). Therefore your proof structure must be wrong. [/QUOTE]

The proof structure is not wrong.

In my proof, we assume that x,y > 2 and prove that it must then be the case that z <= 2.

However, there is nothing "special" about the "c" term and it should be [B][I]understood[/I][/B] that we can also write
a similar proof whereby we assume that y,z > 2 and prove that it must then be the case that x <= 2.

We need not be redundant.

Don.

axn 2011-03-06 15:41

[QUOTE=Don Blazys;254459]In my proof, we assume that x,y > 2 and prove that it must then be the case that z <= 2[/QUOTE]

Where do you _use_ the fact that x,y > 2? Your "proof by contradiction" works just as well with one or both of x,y <= 2, doesn't it?

EDIT:- It would also work if a,b,c were not coprime.

Don Blazys 2011-03-06 17:51

Quoting axn:
[QUOTE]
Where do you _use_ the fact that x,y > 2?
[/QUOTE]

In my proof, x,y > 2 are indeed assumed and therefore "used".

In fact, my proof [B][I]explicitly[/I][/B] defines x and y as x,y [TEX]\in[/TEX] {3,4,5...}.

Quoting axn:
[QUOTE]
Your "proof by contradiction" works just as well with one or both of x,y <=2, doesn't it?
[/QUOTE]
No, it doesn't.

In order to prove Beal's conjecture "by contradiction", we must first assume that it is [B][U]false[/U][/B] ,
and that [B][I]requires[/I][/B] that we assume x,y,z > 2, which is exactly what we do in equations (1) and (2).

However, to assume that "one or both of x,y <=2" is to assume that Beal's conjecture is [B][U]true[/U][/B] ,
and that assumption includes all equations such as 2^1+3^2=11^1 and 3^2+4^2=5^2
where [B][I]more[/I][/B] than one exponent is <= 2.

Thus, making the assumption that "one or both of x,y <=2" is rather mundane and gets us nowhere.

The "trick" here is to keep in mind that the "c" term was [B][I]arbitrarily[/I][/B] chosen to show its "logarithmic identity".
Just imagine two similar papers where the "a" and "b" terms show [B][I]their[/I][/B] logarithmic identities while
the remaining terms have exponents > 2 and you'll get the picture!

Quoting axn:
[QUOTE]It would also work if a,b,c were not coprime.[/QUOTE]

That's not true. If all three terms had a non-trivial common factor, then we could call it T > 1 and write

T*a^x + T*b^y = T*c^z = (T*c)^((z*ln(c)/(ln(T))+1)/(ln(c)/(ln(T))+1))

where (unlike the proof) we can now [I][B]let[/B][/I] T= c in the logarithmic identity [B][U]regardless[/U][/B] of the value of z.

Besides, the "assumption of co-primality" is "standard procedure" because
[B][I]any[/I][/B] common factor can easily be cancelled "initially".

Don.
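
The common-factor identity in this post can be checked the same way; a Python sketch (my own naming) confirms both that it holds and that, unlike the coprime case, letting T = c causes no division by zero:

```python
import math

def common_factor_rhs(T, c, z):
    # RHS of T*c^z = (T*c)^((z*L + 1)/(L + 1)), with L = ln(c)/ln(T).
    # The denominator L + 1 is nonzero even at T = c (where L = 1).
    L = math.log(c) / math.log(T)
    return (T * c) ** ((z * L + 1) / (L + 1))

assert abs(common_factor_rhs(2, 5, 3) - 2 * 5**3) < 1e-6
assert abs(common_factor_rhs(5, 5, 3) - 5 * 5**3) < 1e-6  # T = c is allowed here
```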

Don Blazys 2011-03-06 18:18

To the Moderator,

Can you re-name this topic

"A Polygonal Number Counting Function and Beal's Conjecture"

or "Don Blazys Hot Topics".

The name of this thread no longer reflects its contents.

Thanks,

Don, the "Dangerous Crank".

science_man_88 2011-03-06 19:07

[QUOTE=Don Blazys;254467]In order to prove Beal's conjecture "by contradiction", we must first assume that it is [B][U]false[/U][/B] ,
and that [B][I]requires[/I][/B] that we assume x,y,z > 2, which is exactly what we do in equations (1) and (2).
[/QUOTE]

If we assume x,y,z > 2, we are assuming part of the conjecture is true, because according to Wikipedia the conjecture assumes this.

science_man_88 2011-03-06 19:14

If I understand the text correctly Beal's conjecture says A^x+B^y=C^z can be expressed as p^x*a^x+p^y*b^y = p^z*c^z.

science_man_88 2011-03-06 19:45

[QUOTE=science_man_88;254475]If I understand the text correctly Beal's conjecture says A^x+B^y=C^z can be expressed as p^x*a^x+p^y*b^y = p^z*c^z.[/QUOTE]

which is just a fancy way of saying p^t+p^r = p^s for some values of t,r, and s (not necessarily integer last I checked).

CRGreathouse 2011-03-06 20:29

[QUOTE=science_man_88;254478]which is just a fancy way of saying p^t+p^r = p^s for some values of t,r, and s (not necessarily integer last I checked).[/QUOTE]

That's really not what it's saying.

science_man_88 2011-03-06 20:36

[QUOTE=CRGreathouse;254481]That's really not what it's saying.[/QUOTE]

you can use what wikipedia says to bring it to that.

CRGreathouse 2011-03-06 20:40

[QUOTE=science_man_88;254482]you can use what wikipedia says to bring it to that.[/QUOTE]

Your first claim, that Beal's conjecture is

A^x+B^y=C^z can be expressed as p^x*a^x+p^y*b^y = p^z*c^z

is true. Your second, that the above is the same as

p^t+p^r = p^s for some values of t,r, and s

is false regardless of whether we limit t, r, s to integers (in which case it's simply false for p > 2) or not (in which case it's true and thus not obviously equivalent to Beal's).

science_man_88 2011-03-06 20:49

[QUOTE=CRGreathouse;254483]Your first claim, that Beal's conjecture is

A^x+B^y=C^z can be expressed as p^x*a^x+p^y*b^y = p^z*c^z

is true. Your second, that the above is the same as

p^t+p^r = p^s for some values of t,r, and s

is false regardless of whether we limit t, r, s to integers (in which case it's simply false for p > 2) or not (in which case it's true and thus not obviously equivalent to Beal's).[/QUOTE]

here's how I got to the r,t,s part :

1) the first part is true
2) x*p^y = p^(y+log[SUB]p[/SUB](x))

from that we get:

1)t = x+log[SUB]p[/SUB](a^x)
2)r = y+log[SUB]p[/SUB](b^y)
3)s = z+log[SUB]p[/SUB](c^z)
4) p is a prime according to the conjecture.

Yes I know I crossed variables sorry for that.
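
Step 2 is the standard identity x*p^y = p^(y + log_p(x)), which is what collapses each product into a single power of p; a quick numerical check in Python (values chosen arbitrarily):

```python
import math

def log_base(p, x):
    # log of x in base p
    return math.log(x) / math.log(p)

p, a, x = 2, 3, 4
lhs = p**x * a**x              # p^x * a^x
t = x + log_base(p, a**x)      # the exponent t from the rewrite above
assert abs(p**t - lhs) < 1e-6  # a single power of p gives the same value
```

Of course t is irrational in general, which is why the rewrite alone says nothing about integer solutions.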

Don Blazys 2011-03-06 20:56

Quoting science man 88
[QUOTE]
if we assume x,y,z>2 we are assuming part of the conjecture is true
according to wikipedia because if you look the conjecture assumes this.
[/QUOTE]

According to Wikipedia,

Beal's Conjecture states that x,y,z > 2 [COLOR=blue][B]can[/B][/COLOR] only exist if a^x + b^y = c^z [B][I][U][COLOR=blue]has[/COLOR][/U][/I][/B] a common factor.

That means that x,y,z > 2 [COLOR=red][B]cannot[/B][/COLOR] exist if a^x + b^y = c^z [COLOR=red]has [B][I][U]no[/U][/I][/B] [/COLOR][COLOR=black]common[/COLOR] factor.

Thus, to assume x,y,z > 2 where a^x + b^y = c^z [B][I][U][COLOR=blue]has[/COLOR][/U][/I][/B] a common factor (which is what is done in Wikipedia)
is to assume that Beal's Conjecture is [COLOR=blue][B][U]true[/U][/B][/COLOR][COLOR=black] ,[/COLOR]

and to assume x,y,z > 2 where a^x + b^y = c^z [COLOR=red]has [/COLOR][COLOR=red][B][I][U]no[/U][/I][/B] [/COLOR]common factor (which is what we do in my proof)
is to assume that Beal's Conjecture is [B][U][COLOR=red]false[/COLOR][/U][/B] .

Look science man 88, if you really want to help, then try to goad
some [B][I][U]famous[/U][/I][/B] "big shot" mathematician into debating this proof with me.

e-mail every well known "genius" that you can think of, and tell them that
I said that [B][U]none[/U][/B] of them would make a good pimple on my behind!

Get them angry!!! Get them "hopping mad" so that they will come to me seeking revenge!!!

That's the [B][I][U]only[/U][/I][/B] way that we will ever put this issue to rest!

Otherwise, they will just keep on avoiding me like the chickens that they are.

(Don't tell them that I'm really a nice guy and that this is all in good fun.):smile:

Don.

science_man_88 2011-03-06 20:59

[QUOTE=Don Blazys;254487]Quoting science man 88


According to Wikipedia,

Beal's Conjecture states that x,y,z > 2 [COLOR=blue][B]can[/B][/COLOR] only exist if a^x + b^y = c^z [B][I][U][COLOR=blue]has[/COLOR][/U][/I][/B] a common factor.

That means that x,y,z > 2 [COLOR=red][B]cannot[/B][/COLOR] exist if a^x + b^y = c^z [COLOR=red]has [B][I][U]no[/U][/I][/B] [/COLOR][COLOR=black]common[/COLOR] factor.

Thus, to assume x,y,z > 2 where a^x + b^y = c^z [B][I][U][COLOR=blue]has[/COLOR][/U][/I][/B] a common factor (which is what is done in Wikipedia)
is to assume that Beal's Conjecture is [COLOR=blue][B][U]true[/U][/B][/COLOR][COLOR=black] ,[/COLOR]

and to assume x,y,z > 2 where a^x + b^y = c^z [COLOR=red]has [/COLOR][COLOR=red][B][I][U]no[/U][/I][/B] [/COLOR]common factor (which is what we do in my proof)
is to assume that Beal's Conjecture is [B][U][COLOR=red]false[/COLOR][/U][/B] .

Look science man 88, if you really want to help, then try to goad
some [B][I][U]famous[/U][/I][/B] "big shot" mathematician into debating this proof with me.

e-mail every well known "genius" that you can think of, and tell them that
I said that [B][U]none[/U][/B] of them would make a good pimple on my behind!

Get them angry!!! Get them "hopping mad" so that they will come to me seeking revenge!!!

That's the [B][I][U]only[/U][/I][/B] way that we will ever put this issue to rest!

Otherwise, they will just keep on avoiding me like the chickens that they are.

(Don't tell them that I'm really a nice guy and that this is all in good fun.):smile:

Don.[/QUOTE]

I would tell you a way that you could do that yourself but I'm not a complete idiot. I don't give up my secrets that easily.

Don Blazys 2011-03-06 22:04

Quoting science man 88:
[QUOTE]
I would tell you a way that you could do that yourself but I'm not a complete idiot.
I don't give up my secrets that easily.
[/QUOTE]

Then do it in secret and don't tell me how you did it!

Don't you want this to be over, one way or another?

You know, one of these days the math community will have to contend with its own history,
and it's a [B][I][U]matter of historical record[/U][/I][/B] that "Don the Dangerous Crank" was...

the [B][I][U]first[/U][/I][/B] to [COLOR=red][B]observe[/B][/COLOR] that the Polygonal Numbers (which are almost as important as
the prime numbers) had no counting function,

the [I][B][U]first[/U][/B][/I] to [COLOR=red][B]develop[/B][/COLOR] a very accurate counting function for them,

and the [B][I][U]first[/U][/I][/B] to [B][COLOR=red]present[/COLOR][/B] it in an alternate manner,
which involves the two most important mathematical constants,
along with the two most important physical constants.

You have seen, first hand, how exceedingly difficult this erratic sequence is to count.
You know that this is potentially the most important counting function this side of [TEX]Li(x)[/TEX].

When it makes its way into textbooks and math history books, will those who use it, study it,
and read about it be told that it was discovered, developed and presented by a "crackpot"!?:lol:

Thus, we need to resolve the controversy surrounding my proof of Beal's Conjecture once and for all!

Don.

CRGreathouse 2011-03-06 22:49

[QUOTE=Don Blazys;254487]Look science man 88, if you really want to help, then try to goad
some [B][I][U]famous[/U][/I][/B] "big shot" mathematician into debating this proof with me.[/QUOTE]

If you won't listen to any of us when we tell you what's wrong with your supposed proof, why would you listen to someone else?

CRGreathouse 2011-03-06 22:57

[QUOTE=Don Blazys;254492]the [B][I][U]first[/U][/I][/B] to [COLOR=red][B]observe[/B][/COLOR] that the Polygonal Numbers (which are almost as important as
the prime numbers) had no counting function,[/QUOTE]

:huh:

Of course they have one.

[QUOTE=Don Blazys;254492]the [I][B][U]first[/U][/B][/I] to [COLOR=red][B]develop[/B][/COLOR] a very accurate counting function for them,[/QUOTE]

I hadn't seen a better estimate for the count of these numbers before yours. I'm not sure that you were the first, but it wouldn't surprise me -- I don't know of anyone other than you who thinks that they're important, and that raises the chance that no one would have bothered.

[QUOTE=Don Blazys;254492]and the [B][I][U]first[/U][/I][/B] to [B][COLOR=red]present[/COLOR][/B] it in an alternate manner,
which involves the two most important mathematical constants,
along with the two most important physical constants.[/QUOTE]

You have done this, but you haven't presented even a heuristic, let alone a proof, that it's correct. A very simple heuristic suggests that this is not correct: the density of rational combinations of integer powers of those constants in the real numbers is 0. Until you have an argument stronger than this 'default', you'll have trouble convincing mathematicians that you're correct.

[QUOTE=Don Blazys;254492]You have seen, first hand, how exceedingly difficult this erratic sequence is to count.[/QUOTE]

It doesn't seem difficult to me (or to akruppa).

[QUOTE=Don Blazys;254492]You know that this is potentially the most important counting function this side of [TEX]Li(x)[/TEX].[/QUOTE]

Why do you think this is so?

Don Blazys 2011-03-07 03:11

Quoting CRGreathouse:
[QUOTE]
If you won't listen to any of us when we tell you what's wrong
with your supposed proof, why would you listen to someone else?
[/QUOTE]
I [B][I][U]did[/U][/I][/B] listen, and no one in this forum
(or anywhere else for that matter)
was able to point out a "fatal flaw".

Had that occurred, then I would have
dropped my proof like a hot potato.

You yourself made several blunders trying to refute it!

This is all now a matter of public record,
and anyone who reads this thread with
sufficient care will [B][I]know[/I][/B] that I am right.

My proof is simple enough that any sufficiently talented
[B][U]student[/U][/B] can see that it is both true and correct.
Thus, if [B][I]you[/I][/B] can't see that it's absolutely irrefutable,
then I feel sorry for you!

Quoting CRGreathouse:
[QUOTE]
You have done this, but you haven't presented even a heuristic, let alone a proof, that it's correct.
A very simple heuristic suggests that that this is not correct: the density of rational combinations of
integer powers of those constants in the real number is 0. Until you have an argument
stronger than this 'default', you'll have trouble convincing mathematicians that you're correct.[/QUOTE]

As of now, the Fine Structure Constant is known to lie
somewhere in between 1/137.035999135 and 1/137.035999033.
I estimate that a determination of [TEX]\varpi(10^{18})[/TEX] would, (if my theory is correct),
improve that approximation [B][I]significantly[/I][/B]. Then, if my approximation is
corroborated by physical experiments, I will [B][I][U]sell[/U][/I][/B] all the details as to
how I derived my counting function to the highest bidder.

Or maybe I will be like "Dr. Evil" and keep it a secret unless the world pays me "von-hondred beeleeon dollurs!" :lol:

People are naturally curious, and as my counting function
continues to make accurate predictions, that curiosity should continue to grow.
By the way, my last prediction of [TEX]\varpi (1,300,000,000,000)[/TEX] was off by only -512.
[TEX]\varpi(1,400,000,000,000)[/TEX] will be determined by about this Tuesday,
and [TEX]\varpi(2,000,000,000,000)[/TEX] by sometime early next month!
I will keep you posted!

Quoting CRGreathouse:
[QUOTE]
It doesn't seem difficult to me (or to akruppa).
[/QUOTE]

Then why not have a little fun and determine [TEX]\varpi(1,600,000,000,000)[/TEX],
which my function will very accurately estimate, and my coder friend will verify.

Quoting CRGreathouse:
[QUOTE]Why do you think this is so?[/QUOTE]
Well, just as no book (or article) on [B]Prime Numbers[/B] is complete without mentioning "prime counting functions",
no book (or article) on [B]Polygonal Numbers [/B]will be complete without mentioning "polygonal counting functions",
and since polygonal numbers are second in importance only to the primes, ...well, it just seems to me like
a [B][I]lot[/I][/B] of books and articles, and consequently... a [B][I][U]lot[/U][/I][/B] of importance!

Doesn't that make you happy? :smile:

Don.

axn 2011-03-07 05:07

[QUOTE=Don Blazys;254467]In order to prove Beal's conjecture "by contradiction", we must first assume that it is [B][U]false[/U][/B] ,
and that [B][I]requires[/I][/B] that we assume x,y,z > 2, which is exactly what we do in equations (1) and (2).[/QUOTE]

Let's see if I can make this clearer. Your proof structure is incorrect. To show that it is incorrect, we will use the proof structure to "prove" an obviously incorrect conjecture. [We're no longer trying to prove Beal's conjecture -- we're trying to [B]dis[/B]prove your proof structure.]

The Obviously Wrong Conjecture - if a^x + b^y = c^z (a,b,c coprime) then x,y,z <= 2.

Can you tell me why your "logarithmic identity" method doesn't work as a "proof" for the OWC?

CRGreathouse 2011-03-07 05:42

[QUOTE=Don Blazys;254508]I [B][I][U]did[/U][/I][/B] listen, and no one in this forum
(or anywhere else for that matter)
was able to point out a "fatal flaw".[/QUOTE]

In fact we've pointed out many fatal flaws. axn shows that your proof can be transformed into a proof of absurdity: that if your proof is correct then 1 + 1 = 3 or whatever you'd like to conclude. I showed some time ago that (an earlier version of) your proof rests on the assumption that the integers are closed under root extraction. rajula found the most glaring flaw yet -- so glaring that I'm quite embarrassed that I didn't point it out earlier: that you (bizarrely) confuse substituting c/c for T/T with substituting c for T. Many posters (e.g., imag94) have pointed this out to you over the years, but you seem incapable of understanding their point.

Being foolhardy to the extreme, I'll attempt this which so many others have failed at.

Right now your proof is as follows:

1. Let (a bunch of stuff required by Beal's conjecture, in particular a^x + b^y = c^z).
2. Assume Beal's conjecture is false.
3. Then (awful formula equivalent to a^x + b^y = c^z, assuming T > 1).*
4. Division by zero prevents the substitution of c/c for T/T, a contradiction.
5. Thus Beal's conjecture is true.
[list]
[*]#4 is not a valid proof step: "prevents the substitution" is not a mathematical relation.
[*]#2 is not used, so your proof, if correct, can be used to prove any proposition substituted for #2.
[*]What you explain when you try to justify #4 is that substituting c = T is the same as substituting c/c = T/T, which is clearly false.
[/list]
Now if you want to understand everyone else (I'd say, "if you want to convince people that you're right", but I have to drop the pretense at this point) you're going to have to actually explain what rule of logic you're following that lets you conclude a contradiction, using real mathematics rather than made-up terms like "prevents the substitution".

* You also give a second formula equivalent to a^x + b^y = c^z, but since one contradiction is enough I'll skip that one.

CRGreathouse 2011-03-07 06:03

[QUOTE=Don Blazys;254508]People are naturally curious, and as my counting function
continues to make accurate predictions, that curiosity should continue to grow.
By the way, my last prediction of [TEX]\varpi (1,300,000,000,000)[/TEX] was off by only -512.[/QUOTE]

A relative error roughly 50 times that of the count up to 1.1e12, though. Certainly not evidence that the function is homing in on its predicted value.

Still, I think that your first term is close to the true value, even though I have not had time to do a careful analysis myself. I do not think the first term is exactly correct, though: in other words, with k as the constant you choose (the expression with B and alpha), I think that there is a constant [TEX]\varepsilon>0[/TEX] such that there are arbitrarily large n with

[TEX]\left|\frac{\varpi(n)}{n}-k\right|>\varepsilon[/TEX]

[QUOTE=Don Blazys;254508]Then why not have a little fun and determine [TEX]\varpi(1,600,000,000,000)[/TEX],
which my function will very accurately estimate, and my coder friend will verify.[/QUOTE]

I may. I was working on a different programming project today, extending PARI/GP with a new function over the primes. But if I have a chance and can devote the resources needed I might very well do that.

Alternatively, I have some thoughts about a different way of counting that might be much more efficient. This would take a great deal more work but (if workable) should allow calculations to go much faster.

I would say the first one is about 50% likely and the second one about 10% likely, given my schedule at the moment.

[QUOTE=Don Blazys;254508]Well, just as no book (or article) on [B]Prime Numbers[/B] is complete without mentioning "prime counting functions",
no book (or article) on [B]Polygonal Numbers [/B]will be complete without mentioning "polygonal counting functions",[/QUOTE]

I'm not convinced. First, prime numbers are vastly more important than polygonal numbers, so even if true this doesn't automatically make the formula important. But even if polygonal numbers are important, that doesn't make counting the ones of every form together an important problem -- it might well be that most applications require counting only the types with a fixed number of sides (and indeed this is the case!).

I'm not saying that it isn't important, just that I don't know of any importance and I don't know of anyone other than you who ascribes any importance at all to the problem.

[QUOTE=Don Blazys;254508]well, it just seems to me like
a [B][I]lot[/I][/B] of books and articles, and consequently... a [B][I][U]lot[/U][/I][/B] of importance![/QUOTE]

How many books do you know devoted to the topic of polygonal numbers? There are (minimally) tens of thousands devoted to prime numbers. I don't know of [i]any[/i] devoted to polygonal numbers; Google Books finds just one, a small-run out-of-print book by Laugwitz. (I'm actually not sure if it's a book or just a chapter, judging by its listing.)

Don Blazys 2011-03-07 06:03

Quoting axn:
[QUOTE]
Your proof structure is incorrect...
We're trying to [B]dis[/B]prove your proof structure.
[/QUOTE]
My proof structure is correct, and yup... you sure are trying alright ! :lol:

[QUOTE]
Let's see if I can make this clearer....
Can you tell me why your "logarithmic identity" method
doesn't work as a "proof" for the OWC?
[/QUOTE]

Well, since you are trying to make this "clearer", perhaps you should first explain,
in sufficient detail, why you think it [B][I]does[/I][/B] work as a "proof" of the OWC.

I will be back in a day or so. :grin:

Don.

xilman 2011-03-08 20:26

[QUOTE=akruppa;253362]A ten minute hack computed omega(10^10) in 90 seconds. Its result 6403587409 agrees with the value in your manuscript. Its run-time is roughly linear, so 10^13 should take about 24 hours. I made no effort to make the code efficient. Allowing larger values would require partitioning the sieve which would take several more minutes, which I don't think worth the time.[/QUOTE]My code took 88 seconds on a single core of a 3.7GHz Phenom II to yield the same result in essentially the same time. Perhaps we could compare implementations. Mail me if you would like a copy of the code for comparison.

Mine is perhaps over-simple. The sieve uses a char per hit, so it isn't very memory-efficient, but it does have segmentation implemented.
[code]
[pcl@anubis nums]$ time ./varpi 5000000001 10000000000
3201801977 polygonal numbers in the range 5000000001 to 10000000000 inclusive

real 0m40.836s
user 0m38.928s
sys 0m1.901s
[pcl@anubis nums]$ time ./varpi 1 5000000000
3201785432 polygonal numbers in the range 1 to 5000000000 inclusive

real 0m47.527s
user 0m43.510s
sys 0m4.005s
[pcl@anubis nums]$ bc -l
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
3201801977+3201785432
6403587409
38.928+1.901+43.510+4.005
88.344
[/code]BTW, it's not \omega, it's \varpi. I also misread it at first.

Paul
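[Editor's note: for readers following along, here is a minimal sketch of the kind of mark-and-count sieve being discussed. This is not Paul's or akruppa's actual code; all names are illustrative. It counts "polygonal numbers of order greater than 2", i.e. numbers P(s, n) = ((s-2)n^2 - (s-4)n)/2 with s >= 3 sides and index n >= 3, since n <= 2 makes every integer trivially polygonal.]

```python
def varpi(limit):
    """Count distinct polygonal numbers of order > 2 up to `limit`.

    For a fixed index n, consecutive orders s differ by n(n-1)/2,
    so each index contributes an arithmetic progression starting at
    the triangular number P(3, n) = n(n+1)/2.
    """
    hit = bytearray(limit + 1)          # one byte per candidate, as in the post
    n = 3
    while n * (n + 1) // 2 <= limit:    # smallest value for index n is P(3, n)
        step = n * (n - 1) // 2         # gap between successive orders s
        v = n * (n + 1) // 2
        while v <= limit:
            hit[v] = 1                  # duplicates (e.g. 15) are counted once
            v += step
        n += 1
    return sum(hit)

print(varpi(20))   # the members up to 20 are 6, 9, 10, 12, 15, 16, 18
```

Like the hack described above, this does linear work per candidate (each index n contributes about limit/step marks), which matches the roughly linear run times reported in the thread.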

xilman 2011-03-08 21:05

[QUOTE=Don Blazys;253564]Now, I'm confident enough in the accuracy of my function to take that bet, but I'm not all that confident that anyone here can determine [TEX]\varpi(10^{14})[/TEX] in a reasonable amount of time.[/QUOTE]I've just computed [TEX]\varpi(10^{11})[/TEX] in 14 minutes on a single core of my home machine. The run time is linear, so computing [TEX]\varpi(10^{14})[/TEX] will take 14000 core-minutes, or a little under 10 days.

I don't think 10 core-days is at all unreasonable, especially for a program which is trivially parallelizable. If I could be bothered, I could run it here at home in under a day.

Paul
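[Editor's note: the reason the job is trivially parallelizable, as Paul says, is that membership is a per-number property, so the count over [1, N] is just the sum of counts over disjoint subranges. A hypothetical sketch of one segment's work (illustrative names only, not the code from the thread):]

```python
def varpi_segment(lo, hi):
    """Count polygonal numbers of order > 2 in [lo, hi], inclusive.

    Same arithmetic-progression walk as the full sieve, but each
    progression is fast-forwarded to its first term >= lo, so memory
    is proportional to the segment width, not to hi.
    """
    hit = bytearray(hi - lo + 1)
    n = 3
    while n * (n + 1) // 2 <= hi:
        step = n * (n - 1) // 2
        v = n * (n + 1) // 2                      # P(3, n), first value for index n
        if v < lo:                                # jump to first term >= lo
            v += ((lo - v + step - 1) // step) * step
        while v <= hi:
            hit[v - lo] = 1
            v += step
        n += 1
    return sum(hit)

# Disjoint segments sum to the full count, so each segment can run on a
# separate core (or machine) and the partial counts are added at the end.
print(varpi_segment(1, 5000) + varpi_segment(5001, 10000))
```

This mirrors the two-range run shown earlier in the thread (1..5e9 plus 5e9+1..1e10 summing to the 1e10 count).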

xilman 2011-03-08 23:33

[QUOTE=xilman;254647]My code took 88 seconds on a single core of 3.7GHz Phenom II to yield the same result in essentially the same time.[/QUOTE]Current code does it in something over 34 seconds:[code]
[pcl@anubis nums]$ time ./varpi 1 10000000000
6403587409 polygonal numbers in the range 1 to 10000000000 inclusive

real 0m34.462s
user 0m33.977s
sys 0m0.453s[/code]Time to compute to 1e14 is now down to 4 days on a single core. I have 14 cores available, so could reach 1e16 in a month. The first is eminently "reasonable" in Blazys' words; the second is only somewhat silly. Do I really want to put my current computations in abeyance for a month?

Paul

CRGreathouse 2011-03-09 03:32

Caching improvements to my version doubled performance: though my computer is slower than yours, the time dropped from 47.45 seconds for the above version to 23.8 seconds for the new one. I'm not sure how much improvement you'll see, since we have different architectures, but probably most of it. The exact figures will probably depend on your L3 cache size.

(Unfortunately memory access changes did not give any benefit, actually increasing execution time.)

This brings expected time to 10^14 down to 16 hours on my slow 4-core machine, or 3.5 hours on Paul's machines.

I still haven't been able to try my nifty idea; maybe tomorrow. Of course it's far from certain that this would be efficient, but it could be really fast if it works.

axn 2011-03-09 03:55

[QUOTE=Don Blazys;254508]
[TEX]\varpi(1,400,000,000,000)[/TEX] will be determined by about this Tuesday,
and [TEX]\varpi(2,000,000,000,000)[/TEX] by sometime early next month!
I will keep you posted!

Then why not have a little fun and determine [TEX]\varpi(1,600,000,000,000)[/TEX],
which my function will very accurately estimate, and my coder friend will verify.
[/QUOTE]

[QUOTE=xilman;254664]Do I really want to put my current computations in abeyance for a month?[/QUOTE]

Perhaps you can count up to 1e13 (~0.4 core-days?) in multiples of 1e11 and give it to him? It would certainly save some poor chap months of computation.

CRGreathouse 2011-03-09 05:40

[QUOTE=Don Blazys;254508]Then why not have a little fun and determine [TEX]\varpi(1,600,000,000,000)[/TEX],
which my function will very accurately estimate, and my coder friend will verify.[/QUOTE]

[TEX]\varpi(1.0\cdot10^{12})=\quad[/TEX]640 362 343 980 (diff -997)
[TEX]\varpi(1.1\cdot10^{12})=\quad[/TEX]704 398 597 754 (diff 10)
[TEX]\varpi(1.2\cdot10^{12})=\quad[/TEX]768 434 854 386 (diff -972)
[TEX]\varpi(1.3\cdot10^{12})=\quad[/TEX]832 471 110 338 (diff -512)
[TEX]\varpi(1.4\cdot10^{12})=\quad[/TEX]896 507 366 959 (diff -44)
[TEX]\varpi(1.5\cdot10^{12})=\quad[/TEX]960 543 623 833 (diff 775)
[TEX]\varpi(1.6\cdot10^{12})=\quad[/TEX]1 024 579 881 387 (diff 1460)

Credit where due: xilman and I (but mostly xilman) wrote the program to calculate these figures. Further, xilman's computer was probably the first one to find these, even though I'm posting first.

So now that we're able to calculate modestly large values of this counting function, I'd like to hear your predictions for how accurate your approximation will be at 10^13, 10^14, and 10^15. My guess is that, just like the error for B(x) started to settle down at a fixed fraction of x, the error for what I call [TEX]B_2(x)[/TEX] -- B(x) * (1 - alpha/(mu - 2e)) -- will do likewise.
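[Editor's note: a quick back-of-the-envelope check of the relative errors implied by the diffs quoted just above (the diff values are taken directly from that post):]

```python
# Relative error |diff| / x for the seven predictions quoted above.
diffs = {1.0e12: -997, 1.1e12: 10, 1.2e12: -972, 1.3e12: -512,
         1.4e12: -44, 1.5e12: 775, 1.6e12: 1460}

for x, d in sorted(diffs.items()):
    print(f"x = {x:.1e}: relative error {abs(d) / x:.2e}")

# All relative errors are below 1e-9, but they fluctuate from point to
# point rather than shrinking monotonically, which is the point at issue.
```

In particular the error at 1.3e12 is tens of times larger than the error at 1.1e12, which is the comparison made earlier in the thread.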

xilman 2011-03-09 08:59

[QUOTE=CRGreathouse;254682]Caching improvements on my version doubled performance, though since my computer's slower than yours that's 47.45 seconds for the above version to 23.8 seconds for the new version. I'm not sure how much improvement you'll see because we have different architectures, but probably most of it. The exact figures will probably depend on your L3 cache size.

(Unfortunately memory access changes did not give any benefit, actually increasing execution time.)

This brings expected time to 10^14 down to 16 hours on my slow 4-core machine, or 3.5 hours on Paul's machines.[/QUOTE]Received your code, thanks, and I'll try it out later today. My program ran overnight and is presently just short of 9e12. I'll let it complete to 1e13 and then post the results.

Paul

xilman 2011-03-09 10:45

[QUOTE=xilman;254698]Received your code, thanks, and I'll try it out later today. My program ran overnight and is presently just short of 9e12. I'll let it complete to 1e13 and then post the results.

Paul[/QUOTE]Here they are. Raw counts only, at 1e11 intervals. Someone else can see how well they fit Blazys' predictions.
[code]
0.1e12 64 036 148 166
0.2e12 128 072 369 864
0.3e12 192 108 604 710
0.4e12 256 144 844 029
0.5e12 320 181 088 566
0.6e12 384 217 336 898
0.7e12 448 253 585 852
0.8e12 512 289 836 587
0.9e12 576 326 089 252
1.0e12 640 362 343 980
1.1e12 704 398 597 754
1.2e12 768 434 854 386
1.3e12 832 471 110 338
1.4e12 896 507 366 959
1.5e12 960 543 623 833
1.6e12 1 024 579 881 387
1.7e12 1 088 616 139 555
1.8e12 1 152 652 398 424
1.9e12 1 216 688 657 577
2.0e12 1 280 724 918 033
2.1e12 1 344 761 178 412
2.2e12 1 408 797 437 873
2.3e12 1 472 833 698 988
2.4e12 1 536 869 961 110
2.5e12 1 600 906 223 058
2.6e12 1 664 942 483 571
2.7e12 1 728 978 745 343
2.8e12 1 793 015 006 938
2.9e12 1 857 051 268 308
3.0e12 1 921 087 531 608
3.1e12 1 985 123 794 438
3.2e12 2 049 160 058 193
3.3e12 2 113 196 321 309
3.4e12 2 177 232 585 528
3.5e12 2 241 268 847 396
3.6e12 2 305 305 110 862
3.7e12 2 369 341 375 028
3.8e12 2 433 377 638 955
3.9e12 2 497 413 902 037
4.0e12 2 561 450 166 830
4.1e12 2 625 486 431 251
4.2e12 2 689 522 695 281
4.3e12 2 753 558 960 045
4.4e12 2 817 595 223 961
4.5e12 2 881 631 487 971
4.6e12 2 945 667 753 419
4.7e12 3 009 704 018 928
4.8e12 3 073 740 283 775
4.9e12 3 137 776 547 341
5.0e12 3 201 812 813 962
5.1e12 3 265 849 079 099
5.2e12 3 329 885 343 899
5.3e12 3 393 921 609 073
5.4e12 3 457 957 873 737
5.5e12 3 521 994 140 845
5.6e12 3 586 030 407 344
5.7e12 3 650 066 673 083
5.8e12 3 714 102 938 294
5.9e12 3 778 139 203 060
6.0e12 3 842 175 469 067
6.1e12 3 906 211 734 019
6.2e12 3 970 248 001 116
6.3e12 4 034 284 268 019
6.4e12 4 098 320 533 203
6.5e12 4 162 356 798 629
6.6e12 4 226 393 064 673
6.7e12 4 290 429 332 595
6.8e12 4 354 465 599 045
6.9e12 4 418 501 864 799
7.0e12 4 482 538 131 097
7.1e12 4 546 574 397 541
7.2e12 4 610 610 663 450
7.3e12 4 674 646 930 018
7.4e12 4 738 683 196 899
7.5e12 4 802 719 463 108
7.6e12 4 866 755 730 333
7.7e12 4 930 791 996 741
7.8e12 4 994 828 264 411
7.9e12 5 058 864 531 797
8.0e12 5 122 900 798 823
8.1e12 5 186 937 065 291
8.2e12 5 250 973 334 355
8.3e12 5 315 009 599 582
8.4e12 5 379 045 866 256
8.5e12 5 443 082 133 967
8.6e12 5 507 118 401 736
8.7e12 5 571 154 667 628
8.8e12 5 635 190 933 871
8.9e12 5 699 227 201 884
9.0e12 5 763 263 470 126
9.1e12 5 827 299 738 263
9.2e12 5 891 336 005 750
9.3e12 5 955 372 273 994
9.4e12 6 019 408 542 654
9.5e12 6 083 444 808 917
9.6e12 6 147 481 076 897
9.7e12 6 211 517 343 604
9.8e12 6 275 553 611 319
9.9e12 6 339 589 880 043
10.0e12 6 403 626 146 905
[/code]Paul

xilman 2011-03-09 11:48

[QUOTE=Don Blazys;254508]
People are naturally curious, and as my counting function
continues to make accurate predictions, that curiosity should continue to grow.
By the way, my last prediction of [TEX]\varpi (1,300,000,000,000)[/TEX] was off by only -512.
[TEX]\varpi(1,400,000,000,000)[/TEX] will be determined by about this Tuesday,
and [TEX]\varpi(2,000,000,000,000)[/TEX] by sometime early next month!
[/QUOTE]Waiting for confirmation by your hot-shot programmer that [TEX]\varpi(2,000,000,000,000) =1,280,724,918,033[/TEX].

Any predictions when he/she/it can verify that [TEX]\varpi(10,000,000,000,000) = 6,403,626,146,905[/TEX]? I'm naturally curious, to coin a phrase.

Paul

xilman 2011-03-09 11:59

[QUOTE=Don Blazys;253053]
but as it turns out, the coders who determined the present "world record" [TEX]\varpi(1,100,000,000,000)=704,398,597,754[/TEX]
informed me that determining [TEX]\varpi(10^{13})[/TEX] would probably take [B]about a year[/B].[/QUOTE]Took me 10 hours and 20 minutes: very nearly a thousandth of your estimate.

Paul

science_man_88 2011-03-09 13:03

[QUOTE=xilman;254702]Waiting for confirmation by your hot-shot programmer that [TEX]\varpi(2,000,000,000,000) =1,280,724,918,033[/TEX].

Any predictions when he/she/it can verify that[TEX]\varpi(10,000,000,000,000) = 6,403,626,146,905[/TEX] ? I'm naturally curious, to coin a phrase.

Paul[/QUOTE]

According to my blazy function (not blazy2 -- that would take too long; besides, my computer restarted last night, so unless I look through my log file I'll never find the result), I get [CODE](08:59)>blazy(2000000000000)
%1 = 1280724920347.099493640852596[/CODE] in 0 ms.

and

[CODE](09:01)>blazy(10000000000000)
%2 = 6403626165691.467931734187920[/CODE]

also in 0 ms.

science_man_88 2011-03-09 13:28

[QUOTE=science_man_88;254707](not blazy2 -- that would take too long; besides, my computer restarted last night, so unless I look through my log file I'll never find the result)[/QUOTE]

Apparently my log file doesn't work. Good thing I post a lot of my scripts.

Don Blazys 2011-03-09 13:31

Quoting xilman
[QUOTE]
Here they are. Raw counts only, at 1e11 intervals. Someone else can see how well they fit Blazys' predictions.

[/QUOTE]
What is a "raw count"?

Anyway, I will check how close the approximation function comes to
your determinations. If verified... then excellent work, most impressive!

Will be back in a day or two.

Thanks, :smile:

Don

science_man_88 2011-03-09 13:32

[QUOTE=Don Blazys;254709]Quoting xilman

What is a "raw count"?

Anyway, I will check how close the approximation function comes to
your determinations. If verified... then excellent work, most impressive!

Will be back in a day or two.

Thanks, :smile:

Don[/QUOTE]

You know, my blazy script is based on your counting function, so it should give the same results.


All times are UTC.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.