
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Lounge (https://www.mersenneforum.org/forumdisplay.php?f=7)
-   -   our first "dry" year in four years (https://www.mersenneforum.org/showthread.php?t=9813)

lycorn 2008-01-05 10:56

[QUOTE=jinydu;122187]Does the increase in factoring difficulty really make that much of a difference? I thought that the bulk of GIMPS' computing power is being devoted to LL testing.
[/QUOTE]

And in fact it is. But in terms of "years remaining", the main decrease has been due to LMH, as the numbers eliminated by their work have been far larger than the ones eliminated by Primenet. Back in 2003, I was doing LMH work with a Pentium III 700 MHz, and it was clearing exponents much faster than a Pentium M, or even a 2.2 GHz Athlon 64, can today. The bit levels back then were at 57 and 58! Trial factoring a number from 58 to 59 bits takes roughly 32 times less work than from 63 to 64.
Even today, I test on average 140 exponents a day from 62 to 63 bits, in the 68M range, on my Pentium 4. I find on average 2 factors a day, which eliminates roughly 60 to 65 P-90 CPU years. That is far more than what we can expect from LL tests!...
As the gap between the leading edge of Primenet first-time LL testing and the ranges worked by LMH narrows, this effect will be less apparent. But undoubtedly the increase in the bit levels will hold back the throughput of LMH, and will be reflected in the "years remaining" figures, unless we get a significant increase in the resources devoted to the LMH project.
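The 32× figure follows from trial-factoring work roughly doubling with each bit level: the candidate factors between 2^b and 2^(b+1) number about 2^b, ignoring per-exponent constants. A small sketch (the unit normalisation is arbitrary):

```python
# Relative trial-factoring work: each extra bit roughly doubles the
# number of candidate factors to test, so a one-bit step starting at
# bit b costs on the order of 2**b (constants ignored).
def relative_tf_work(from_bits, to_bits):
    """Work to extend trial factoring from from_bits to to_bits,
    in units where the 58 -> 59 step costs 1."""
    base = 2 ** 58
    return sum(2 ** b for b in range(from_bits, to_bits)) / base

# 63 -> 64 vs 58 -> 59: 2**(63 - 58) = 32 times more work
print(relative_tf_work(63, 64) / relative_tf_work(58, 59))  # -> 32.0
```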

jinydu 2008-01-05 11:32

I forgot about that. :blush:

Of course, trying to measure the actual amount of computing work done by GIMPS using the P90 CPU Years remaining figures from the status pages gives an overestimate.

davieddy 2008-01-05 11:37

[quote=jasong;122204]I believe it's already been found, by someone in the prime community, someone who's proven themselves to be knowledgeable about this stuff. According to them the doublecheck has been running since the middle of December.

When I say soon, I mean before Valentine's Day, which is February 14th.[/quote]
Although you profess to recognize bullshit when you see it, I (for one) do not intend to proliferate this rumour on the strength of your say-so.

Mini-Geek 2008-01-05 14:27

[quote=jasong;122204]I believe it's already been found, by someone in the prime community, someone who's proven themselves to be knowledgeable about this stuff. According to them the doublecheck has been running since the middle of December.

When I say soon, I mean before Valentine's Day, which is February 14th.[/quote]
Three months for a doublecheck, which would surely be run on the fastest computer available? Just how long does each candidate normally take, and how were they able to find one before GIMPS, when we have so many resources and can routinely run a number in a month? Do they have a Blue Gene for everything except the double check? Assuming this isn't a complete fabrication, perhaps the double check will finish on my b-day, which is just before Valentine's Day.
If it's real, why wasn't it announced anywhere that one was discovered and that a doublecheck was to be run, so that faster hardware could finish it in under three months?

davieddy 2008-01-05 18:11

[quote=jinydu;122223]I forgot about that. :blush:

Of course, trying to measure the actual amount of computing work done by GIMPS using the P90 CPU Years remaining figures from the status pages gives an overestimate.[/quote]
The current throughput of 2000 P90 years per day works out at
730,000 P90 years per year.
The average computer participating in GIMPS (there are ~73,000
of them) is only doing 10 times as much as a P90 working 24/7 :sad:
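The arithmetic above checks out, using only the figures quoted in the post:

```python
# Sanity check of the throughput figures quoted above.
throughput_per_day = 2000            # P90 CPU years cleared per day
per_year = throughput_per_day * 365  # -> 730,000 P90 years per year
print(per_year)

computers = 73_000                   # approximate GIMPS host count
print(per_year / computers)          # -> 10.0 P90-equivalents per machine
```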

davieddy 2008-01-08 00:29

I've just calculated this:
Up to 40M: 63% of prime-exponent Mersennes factored
40M-50M: 59.5% factored (GIMPS has reached ~45M)
50M-80M: 57% factored

This tells us how much trial factoring will help
with the P90 years remaining.

David

petrw1 2008-01-08 03:15

[QUOTE=lycorn;122176]I also keep some status pages, here is my contribution:


[U]Date[/U]: [U]Yrs remaining[/U]:

03FEB2002 - 27,338,916 (the oldest I have)
01JAN2003 - 26,531,251
31DEC2003 - 25,701,037
10JAN2005 - 24,601,120
01JAN2006 - 23,848,134

It looks like we are essentially maintaining our rate over the years, but I think that is due to the fact that we are searching for factors at ever-increasing bit levels, which means that much more work is needed to find them. This is particularly evident in LMH work, which accounts for the majority of the decrease in the "number of years remaining". So in fact we are investing a lot more power in the search. As the bit levels increase, though, I am not sure whether we will be able to "keep the pace". As an example, going from 64 to 65 bits takes a long time, much more than double the time needed to go from 63 to 64 bits. Hence, when the LMHers start hitting that wall, the increase in computer power will probably not be sufficient to allow us to reduce the "years remaining" at the rate we are currently doing.[/QUOTE]

Very linear indeed!!! .... At this linear rate, it will be done in 2032.
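A rough fit through the first and last data points of the table lands in the same ballpark (dates approximated as year plus fraction):

```python
# Linear extrapolation of the "years remaining" table quoted above,
# using only its first and last data points.
start_t, start_rem = 2002.09, 27_338_916   # 03 Feb 2002
end_t,   end_rem   = 2006.0,  23_848_134   # 01 Jan 2006

rate = (start_rem - end_rem) / (end_t - start_t)  # P90 years cleared per year
finish = end_t + end_rem / rate
print(round(rate))    # ~890,000 per year
print(round(finish))  # ~2033, within a year of the 2032 estimate
```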

davieddy 2008-01-08 08:59

[quote=petrw1;122441]Very linear indeed!!! .... It will be done in 2032 at this linear rate.[/quote]
But as lycorn pointed out, the reason the decrease in 2002 was as
big as in 2007 was due to much easier factoring by LMH in the ranges
beyond GIMPS.

As my figures show, we can expect GIMPS factoring to reduce
the "status unknown" in the range 50M-80M from its current 43%
down to 37% at which point further factoring becomes more
expensive than LLtesting and checking.
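The 43% and 37% figures tie back to the factored percentages quoted earlier in the thread, since "status unknown" is just the unfactored remainder:

```python
# Linking the 43% -> 37% claim to the factored percentages quoted earlier.
factored_now = 0.57        # 50M-80M, fraction factored so far
unknown_now = 1 - factored_now
print(round(unknown_now, 2))            # -> 0.43

factored_eventual = 0.63   # ~5/8, the fraction seen in completed ranges
print(round(1 - factored_eventual, 2))  # -> 0.37
```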

See
[URL]http://mersenne.org/ips/stats.html[/URL]

petrw1 2008-01-08 22:17

[QUOTE=davieddy;122454]But as lycorn pointed out, the reason the decrease in 2002 was as
big as in 2007 was due to much easier factoring by LMH in the ranges
beyond GIMPS.

As my figures show, we can expect GIMPS factoring to reduce
the "status unknown" in the range 50M-80M from its current 43%
down to 37% at which point further factoring becomes more
expensive than LLtesting and checking.

See
[URL]http://mersenne.org/ips/stats.html[/URL][/QUOTE]

However, each higher exponent takes slightly less time to factor to a given bit level. This suggests that ranges factored one bit deeper than the lower ranges should yield a few more factors.

BUT.... are you considering this and still noting that the percentage is dropping anyway?

davieddy 2008-01-09 08:12

[quote=petrw1;122485]However, each higher exponent takes slightly less time to factor to a given bit level. This suggests that ranges factored one bit deeper than the lower ranges should yield a few more factors.

BUT.... are you considering this and still noting that the percentage is dropping anyway?[/quote]
Here are the percentages of Mersennes factored by range:

0-15M: 63.54%
15M-17.5M: 62.59%
17.5M-20M: 62.87%
20M-25M: 63.04%
25M-30M: 62.67%
30M-35M: 62.64%
35M-40M: 62.41%

It is hardly over-extrapolating to guess that from 40M-80M
the percentage factored before LL testing takes place will be
close to 62.5% (5/8).
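The table really is flat; the per-range percentages vary by barely a point around the 5/8 mark:

```python
# Spread of the per-range factored percentages tabulated above.
pct = [63.54, 62.59, 62.87, 63.04, 62.67, 62.64, 62.41]

mean = sum(pct) / len(pct)
print(round(mean, 2))                   # -> 62.82, close to 62.5 (= 5/8)
print(round(max(pct) - min(pct), 2))    # -> 1.13 percentage points of spread
```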
As your observations show, it is remarkable that GIMPS factoring
limits happen to result in this constant fraction factored.

David

davieddy 2008-01-09 09:07

[quote=davieddy;122494]As your observations show, it is remarkable that GIMPS factoring
limits happen to result in this constant fraction factored.

David[/quote]

OTOH these figures represent a "narrow" range of exponents
in a logarithmic sense: ~2^23 to 2^25.
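The "narrow in a logarithmic sense" point is easy to verify; the tabulated ranges span only about 1.4 doublings:

```python
import math

# The 15M-40M exponent ranges span only ~1.4 doublings.
print(round(math.log2(15e6), 2))  # -> 23.84
print(round(math.log2(40e6), 2))  # -> 25.25
```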

