mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   Newer milestone thread (https://www.mersenneforum.org/showthread.php?t=13871)

cuBerBruce 2014-10-27 04:55

[QUOTE=Primeinator;386203]It looks like several are definitely on track to finish in the next couple of days:

31938679 D LL, 98.50% 269 1 2014-01-31 2014-10-26 2014-10-27 2014-10-28 ANONYMOUS
31989091 D LL, 98.60% 269 1 2014-01-31 2014-10-26 2014-10-27 2014-10-28 ANONYMOUS
32273279 D LL, 96.60% 324 1 2013-12-07 2014-10-26 2014-10-27 2014-10-28 Kankabar

Or... at least they have checked in recently and aren't overdue despite their age.[/QUOTE]

M32273279 has only progressed 0.3 percentage points in the last month. For that reason, it's one of the 5 I consider questionable for finishing. At that pace, it will take months, but obviously it has made faster progress in the past, so maybe it will be much sooner. I'm very doubtful about it finishing in the next couple days, though.

EDIT: I didn't see Chris's post before posting...

TheMawn 2014-10-27 04:59

I feel like the way "progress" and estimated times are calculated needs to be reworked. The client has that option that says how many hours per day the program runs, but that's probably left at the default 24 hours 90% of the time because people just don't know it's there. I hope Primenet uses something a bit more sophisticated than that.

Would the simple solution not be to use a rolling average based on a lengthy amount of time like the last six months? In the end it really doesn't matter what the ETA is because they're either done or not when the one-year time frame is up. On the other hand, a column of data that says "ETA" should not exist unless it's actually telling us something useful.
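To be concrete about what I mean, something like this toy estimator would do (invented names, nothing to do with PrimeNet's actual code; just a sketch of "rolling average over the last six months"):

```python
from collections import deque

class EtaEstimator:
    """Toy rolling-average ETA estimator (names invented, not PrimeNet code)."""

    def __init__(self, window_days=180):
        # Keep up to ~six months of daily progress samples (percentage points/day).
        self.samples = deque(maxlen=window_days)

    def record_day(self, pct_done_today):
        self.samples.append(pct_done_today)

    def days_remaining(self, pct_done):
        if not self.samples:
            return None          # no history yet, so show no ETA at all
        avg = sum(self.samples) / len(self.samples)
        if avg <= 0:
            return float("inf")  # stalled machine: better than a bogus date
        return (100.0 - pct_done) / avg

est = EtaEstimator(window_days=180)
for _ in range(30):
    est.record_day(0.5)          # machine gaining 0.5 percentage points per day
print(est.days_remaining(85.0))  # 15 points left at 0.5/day -> 30.0 days
```

The point being: if the average over a long window is near zero, the honest answer is "no useful ETA", not a date.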

Madpoo 2014-10-27 16:24

[QUOTE=TheMawn;386206]I feel like the way "progress" and estimated times are calculated needs to be reworked. The client has that option that says how many hours per day the program runs, but that's probably left at the default 24 hours 90% of the time because people just don't know it's there. I hope Primenet uses something a bit more sophisticated than that.

Would the simple solution not be to use a rolling average based on a lengthy amount of time like the last six months? In the end it really doesn't matter what the ETA is because they're either done or not when the one-year time frame is up. On the other hand, a column of data that says "ETA" should not exist unless it's actually telling us something useful.[/QUOTE]

The "eta" or estimated completion data being presented comes from the estimate provided by the client machine itself.

How the client comes up with that time, I'm not sure. Prime95 does have the concept of the rolling average, and despite what the user sets the "time on per day" to, the rolling average is an up-to-date reflection of how the machine is doing. I would guess it's using that in its estimate... George would know, or if someone went through the code they could figure it out, or even suggest changes.

Primeinator 2014-10-27 16:51

Interesting. I did not know you could look up an exponent's history like that, Chalsall. That paints things in a slightly different light.

chalsall 2014-10-27 17:43

[QUOTE=Primeinator;386238]Interesting. I did not know you could look up an exponent's history like that, Chalsall. That paints things in a slightly different light.[/QUOTE]

I have spiders. It's a bit like having crabs, but different....

ET_ 2014-10-27 19:47

[QUOTE=chalsall;386243]I have spiders. It's a bit like having crabs, but different....[/QUOTE]

Yeah, the crabs walk backwards...

Primeinator 2014-10-27 20:10

[QUOTE=chalsall;386243]I have spiders. It's a bit like having crabs, but different....[/QUOTE]

I'll take your word for it :smile:

Conspiracy plot: Chalsall IS Big Brother!

retina 2014-10-28 15:17

[QUOTE=retina;385325][QUOTE=retina;381855]Using this query on the temporary new server:
[url]http://www.mersenne.org/report_LL/default.php?exp_lo=30000000&exp_hi=33219278&exp_date=&user_only=0&user_id=&exdchk=1&exbad=1&exfactor=1&txt=1&dispdate=1[/url]
And removing duplicates I show 2204 remaining exponents to finish all numbers below 10M decimal digits.[/QUOTE]As of now there appears to be 2[sup]8[/sup] remaining till there's no more.[/QUOTE]And currently 2[sup]7[/sup]+1 remaining.

And they are all currently [url=http://www.mersenne.org/assignments/default.php?exp_lo=1&exp_hi=33219280&execm=1&exfirst=1&exp1=1&extf=1]assigned[/url].

petrw1 2014-10-28 17:16

[QUOTE=retina;386308]And currently 2[sup]7[/sup]+1 remaining.

And they are all currently [url=http://www.mersenne.org/assignments/default.php?exp_lo=1&exp_hi=33219280&execm=1&exfirst=1&exp1=1&extf=1]assigned[/url].[/QUOTE]
And many overdue.
grunwalderGIMP is one of our top veteran producers, and I have nothing but respect for him and his contributions.
That said, he has 19 of these, all overdue, and his recent progress suggests he's either quit or taken a break.

If anyone knows for sure that he has quit, these could all be released. If it's just a break ..... you can ignore this post

Prime95 2014-10-28 19:22

[QUOTE=Madpoo;386232]Prime95 does have the concept of the rolling average, and despite what the user sets the "time on per day" to, the rolling average is an up-to-date reflection of how the machine is doing.[/QUOTE]

Prime95 won't let the rolling average go below 50 (I think). If a machine is on a few minutes a day, then prime95 will be wildly optimistic as to the completion date.
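The effect of that floor is easy to see in a few lines (a sketch of the described behavior only, not the actual prime95 source; I'm assuming the usual convention where a rolling average of 1000 means the machine runs at full nominal speed):

```python
def clamped_rolling_average(observed, nominal, floor=50):
    """Sketch of the clamping George describes (not the real prime95 code).
    Assumes a rolling average of 1000 == full-time, full-speed operation."""
    ra = 1000 * observed / nominal
    return max(ra, floor)

# A machine on ~1% of the day would honestly rate 10, but gets clamped
# to 50, so the client thinks the machine is 5x faster than it really is
# and the completion date comes out wildly optimistic.
print(clamped_rolling_average(0.01, 1.0))  # -> 50
print(clamped_rolling_average(0.5, 1.0))   # -> 500.0
```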

Brian-E 2014-10-29 11:20

[QUOTE=Prime95;386323]Prime95 won't let the rolling average go below 50 (I think). If a machine is on a few minutes a day, then prime95 will be wildly optimistic as to the completion date.[/QUOTE]
This might be one of various possibilities. My machine is on about 8 hours a day and the estimates are always on the pessimistic side, jumping around from day to day between slightly too pessimistic and wildly pessimistic. (I haven't bothered to update the version of mprime I run in the last few years - sorry for my laxness there - and maybe you changed the way rolling average is handled since then. But mine won't be the only machine running an old client, I'm sure.)

legendarymudkip 2014-10-29 14:39

Countdown to proving M(32582657) is the 44th Mersenne Prime: 12 (Estimated completion : 2014-11-16)

Luis 2014-10-29 20:06

[quote]Countdown to testing all exponents below M(57885161) once: [COLOR=blue]6,156[/COLOR][/quote][quote]First-time LL assignments in ]M(51646879), M(57885161)[ = [COLOR=blue]781[/COLOR][/quote]Should they be the same thing? Why am I wrong?

sdbardwick 2014-10-29 20:29

[QUOTE=Luis;386420]Should they be the same thing? Why am I wrong?[/QUOTE]

[CODE]Countdown to testing [B][I]all exponents[/I][/B] below M(57885161) once: 6,156[/CODE]
[CODE]First-time[B][I] LL assignments[/I][/B] in ]M(51646879), M(57885161)[ = 781[/CODE]I'd guess not all untested exponents are currently assigned for LL testing (maybe some are out for TF, ECM or P-1).

VictordeHolland 2014-10-29 21:28

[QUOTE=sdbardwick;386422][CODE]Countdown to testing [B][I]all exponents[/I][/B] below M(57885161) once: 6,156[/CODE][CODE]First-time[B][I] LL assignments[/I][/B] in ]M(51646879), M(57885161)[ = 781[/CODE]I'd guess not all untested exponents are currently assigned for LL testing (maybe some are out for TF, [STRIKE]ECM[/STRIKE] or P-1).[/QUOTE]
Or are not assigned at all.

Madpoo 2014-10-29 23:01

[QUOTE=sdbardwick;386422][CODE]Countdown to testing [B][I]all exponents[/I][/B] below M(57885161) once: 6,156[/CODE]
[CODE]First-time[B][I] LL assignments[/I][/B] in ]M(51646879), M(57885161)[ = 781[/CODE]I'd guess not all untested exponents are currently assigned for LL testing (maybe some are out for TF, ECM or P-1).[/QUOTE]

Correct... there are still quite a few unassigned exponents below M(48) that need a first time check.

There are 350039 total in that range
- 224586 have a known factor
- 119298 have some kind of LL result (single/double) (excludes any unverified bad/suspect results)
- 5980 are assigned as first LL checks and haven't expired

There's some overlap involved there... 91 LL results where a factor was found later on for example.

I get 264 unassigned when I combine the different things together and filter out the ones that I can account for, removing overlaps.

But that 264 unassigned in that range is pretty different from what was mentioned. Specifically, I see 5980 assigned in that range, not a mere 781.

I might also have to take a closer look at the countdown of 6156 remaining. I'm probably just doing something wrong because my numbers aren't adding up. I didn't count suspect/bad results. If I include those, there are just 249 left, but I don't think I'd want to count a suspect result towards a milestone. :) That's *probably* where my method differed from the countdown, but I don't know.
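The overlap bookkeeping above is easier to get right with set arithmetic than with raw counts. A toy version with made-up mini-sets of exponents (not real GIMPS data; the real query works the same way in principle):

```python
# Invented mini-sets of "exponents" standing in for the real range.
total     = set(range(1, 21))    # all exponents in the range
factored  = {1, 2, 3, 4, 5, 6}   # known factor
ll_result = {5, 6, 7, 8, 9}      # some LL result (5 and 6 overlap factored)
assigned  = {10, 11, 12}         # current first-time LL assignments

# Taking a union before subtracting removes the overlaps automatically,
# e.g. 5 and 6 are both factored and LL-tested but only counted once.
accounted  = factored | ll_result | assigned
unassigned = total - accounted
print(len(unassigned))  # 8 exponents still need a first-time check
```

Adding the raw counts (6 + 5 + 3 = 14 accounted for) would double-count the overlap and give the wrong answer; the union gives 12.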

chalsall 2014-10-29 23:23

[QUOTE=Madpoo;386441]I might also have to take a closer look at the countdown of 6156 remaining. I'm probably just doing something wrong because my numbers aren't adding up. I didn't count suspect/bad results. If I include those, there are just 249 left, but I don't think I'd want to count a suspect result towards a milestone. :) That's *probably* where my method differed from the countdown, but I don't know.[/QUOTE]

There is an old expression: "No good deed goes unpunished".

cuBerBruce 2014-10-30 01:28

[QUOTE=Madpoo;386441]5980 are assigned as first LL checks and haven't expired[/QUOTE]

[QUOTE=Madpoo;386441]I get 264 unassigned when I combined the different things together and filter out the ones that I can account for, removing overlaps.[/QUOTE]

This doesn't seem to jibe with the figures shown in the [url=http://www.mersenne.org/primenet/]Work Distribution Map[/url], which is showing fewer than 800 exponents under 58,000,000 as assigned for first LL testing. This is consistent with Luis's value of 781.

Also, according to that page, there are over 2000 exponents between 56,000,000 and 57,000,000 alone that are simply waiting to be assigned for LL testing. These are Cat 1 exponents, so they are only given to qualifying machines of users who have promised to complete low-valued exponent assignments quickly. There currently are not enough machines taking these assignments to deplete the supply of Cat 1 exponents. (The boundary between Cat 1 and Cat 2 exponents is currently pretty close to the provisional M48, or exponent 57,885,161. So practically all new first LL assignments for exponents under 57,885,161 will be Cat 1 assignments.)

GPU72 also has many reserved for further trial factoring. The excess of exponents already available for LL gives GPU72 some time to perform this trial factoring on the remaining ones before the machines testing Cat 1 LL exponents need them.

cuBerBruce 2014-10-30 03:03

[code]
1111 00000
11 11 00 00
11 00 00
11 00 00
11 00 00
11 00 00
11 00 00
11 00 00
11111111 00000
[/code]
Countdown to proving M(32582657) is the 44th Mersenne Prime: [b]10[/b] (Estimated completion : 2014-11-16)

The two ANONYMOUS assignments mentioned earlier by Primeinator as being near done have completed.

Madpoo 2014-10-30 03:53

[QUOTE=cuBerBruce;386452][code]
1111 00000
11 11 00 00
11 00 00
11 00 00
11 00 00
11 00 00
11 00 00
11 00 00
11111111 00000
[/code]
Countdown to proving M(32582657) is the 44th Mersenne Prime: [b]10[/b] (Estimated completion : 2014-11-16)

The two ANONYMOUS assignments mentioned earlier by Primeinator as being near done have completed.[/QUOTE]

The end is nigh. :smile:

For the next major milestone we can do a countdown on, I was thinking of the countdown for double-checking all exponents below 10M digits.

It's basically this, since they're all assigned:
[URL="http://www.mersenne.org/assignments/?exp_lo=31148503&exp_hi=33219280&execm=1&extf=1&B1=Get+Assignments"]http://www.mersenne.org/assignments/?exp_lo=31148503&exp_hi=33219280&execm=1&extf=1&B1=Get+Assignments[/URL]

In fact, once we prove M44 is really M44, I'll make those options the default when going to the assignment page, just like I did for the M44 stuff currently. :)

Madpoo 2014-10-30 03:59

[QUOTE=cuBerBruce;386448]This doesn't seem to jibe with the figures shown in the [url=http://www.mersenne.org/primenet/]Work Distribution Map[/url], which is showing fewer than 800 exponents under 58,000,000 as assigned for first LL testing. This is consistent with Luis's value of 781.

Also, according to that page, there are over 2000 exponents between 56,000,000 and 57,000,000 alone that are simply waiting to be assigned for LL testing. These are Cat 1 exponents, so they are only given to qualifying machines of users who have promised to complete low-valued exponent assignments quickly. There currently are not enough machines taking these assignments to deplete the supply of Cat 1 exponents. (The boundary between Cat 1 and Cat 2 exponents is currently pretty close to the provisional M48, or exponent 57,885,161. So practically all new first LL assignments for exponents under 57,885,161 will be Cat 1 assignments.)

GPU72 also has many reserved for further trial factoring. The excess of exponents already available for LL gives GPU72 some time to perform this trial factoring on the remaining ones before the machines testing Cat 1 LL exponents need them.[/QUOTE]

Yeah, I just have this feeling I'm doing something wrong. At first my counts were off by more because a certain table with all the assignments in it also includes expired assignments, and I was counting those as well by mistake. So yeah, those were assigned at one point, just not currently.

But even excluding those I'm still getting 5975 assigned 1st time LL checks between 51646879 and 57885161.

I'm chalking it up to me just missing something in there, or some way assignments are actually entered. I'll have to look at some other queries to try and figure it out.

And yeah, if you go here it shows 777 assignments:
[URL="http://www.mersenne.org/assignments/?exp_lo=51646879&exp_hi=57885161&execm=1&exdchk=1&exp1=1&extf=1&B1=Get+Assignments"]http://www.mersenne.org/assignments/?exp_lo=51646879&exp_hi=57885161&execm=1&exdchk=1&exp1=1&extf=1&B1=Get+Assignments[/URL]

But anyway, back to the original point, yes, there are unassigned exponents in that range. :)

EDIT: Oh... my SQL query had a stupid human error (mine). I was doing an "is not null" when I meant to do an "is null". :) So that count of assignments for 1st time LL checks was actually all of the *expired* assignments. Doh! But I did explore the process a bit and saw a few things I might be able to improve.

Luis 2014-10-31 22:34

Ahhh, so that query (~800 exponents) doesn't simply show all remaining exponents to test. Now it's clear to me where all missing ones are. :smile:

Madpoo 2014-11-01 01:27

[QUOTE=Luis;386591]Ahhh, so that query (~800 exponents) doesn't simply show all remaining exponents to test. Now it's clear to me where all missing ones are. :smile:[/QUOTE]

Yeah... maybe there would be some value in showing *unassigned* exponents in a range when looking at the "assignments" report. Maybe that would be useful... something for "down the road" I suppose.

kladner 2014-11-01 13:48

1 Attachment(s)
I happened to look at the GPU72 ranking for LLTF, and the phrase, "Battle of the Titans" came to mind-


EDIT: The next nearest positions are only about half of these two.

Qubit 2014-11-02 16:08

The number of registered CPUs passed 1 million.
(Although the number of CPUs active in the last week/month is "only" 11.9k/22.5k, resp.)

NBtarheel_33 2014-11-02 16:48

[QUOTE=Qubit;386720]The number of registered CPUs passed 1 million.
(Although the number of CPUs active in the last week/month is "only" 11.9k/22.5k, resp.)[/QUOTE]

I would say that the number of CPUs active in the last year would be a safe metric for the number of actually participating CPUs.

In that 1,000,000 figure, I am sure there are many obsolete CPUs, CPUs that are no longer contributing to GIMPS, and CPUs that got inadvertently registered in the course of stress testing.

cuBerBruce 2014-11-03 05:27

[size=+4]Nine[/size]

Countdown to proving M(32582657) is the 44th Mersenne Prime: 9 (Estimated completion : 2014-11-17)

A 299.8-day old assignment was completed!

Primeinator 2014-11-03 05:45

[QUOTE=cuBerBruce;386752][size=+4]Nine[/size]

Countdown to proving M(32582657) is the 44th Mersenne Prime: 9 (Estimated completion : 2014-11-17)

A 299.8-day old assignment was completed![/QUOTE]

By hand? I guess Dr. Cooper is branching out from computers. :smile:

[SIZE="1"]Edit: That comment is more appropriate in the Chuck Norris style Curtis Cooper joke thread.[/SIZE]

cuBerBruce 2014-11-03 15:13

[font=Times New Roman][size=+4]VIII[/size][/font]

Countdown to proving M(32582657) is the 44th Mersenne Prime: 8 (Estimated completion : 2014-11-17)

Double check of M(32207701) was finished.

And less than 100 to go to prove no more Mersenne Primes with less than 10 million digits.

petrw1 2014-11-03 17:55

[QUOTE=cuBerBruce;386765][font=Times New Roman][size=+4]VIII[/size][/font]

Countdown to proving M(32582657) is the 44th Mersenne Prime: 8 (Estimated completion : 2014-11-17.[/QUOTE]

Looks like 4-5 can/will be reassigned before they finish

legendarymudkip 2014-11-03 18:30

[url]http://www.mersenne.org/M32155297[/url] has currently had 3 LLs reported. 1 is suspect, the other 2 are unverified (unmatching). The assignment is 21 days overdue at the moment, and is almost a year old. The latest result is an attempted poach but the residues didn't match. If any of them should be poached, this is the one (in my opinion).

chalsall 2014-11-03 18:53

[QUOTE=legendarymudkip;386782]If any of them should be poached, this is the one (in my opinion).[/QUOTE]

If no one objects, I could "cook" this candidate in about 18 hours.

Edit: Actually, if no one objects, I could [URL="http://www.mersenne.org/assignments/?exp_lo=32155297&exp_hi=32582657&execm=1&extf=1&B1=Get+Assignments"]cook[/URL] all four of these "bad boys" in about 18 hours (concurrently, on different CPUs).

Any objections? If I don't hear a sane objection by my EOD, they'll be put in the queue.

legendarymudkip 2014-11-03 19:33

I'm currently running [url]http://www.mersenne.org/M32155297[/url] anyway, so I object to that one :razz:

Besides that, I'd stick to ones that are overdue by a long way, just in case they do actually finish it - at the moment that's just the ones that are overdue in general: 32155301, 31148503 and 31517393.

ETA on 32155297 ~5h30.

chalsall 2014-11-03 20:11

[QUOTE=legendarymudkip;386789]I'm currently running [url]http://www.mersenne.org/M32155297[/url] anyway, so I object to that one :razz:[/QUOTE]

OK. I certainly don't want to "step on toes". But then, equally, I don't like those who "take the piss" either.

Does anyone object to me DC'ing 32155301, 32273279 and 32544607 in the next 24 hours?

petrw1 2014-11-03 20:48

[QUOTE=chalsall;386793]OK. I certainly don't want to "step on toes". But then, equally, I don't like those who "take the piss" either.

Does anyone object to me DC'ing 32155301, 32273279 and 32544607 in the next 24 hours?[/QUOTE]

Imho.. aren't the last 2 showing recent progress and likely to finish soon?

chalsall 2014-11-03 21:16

[QUOTE=petrw1;386796]Imho.. aren't the last 2 showing recent progress and likely to finish soon?[/QUOTE]

OK, I don't disagree.

Let's not "poach" a candidate, at least until George's promised year of "grandfathering" for legacy assignments has run out, even for "Cat-1".

(Although I have to put on the record that Kankabar's 32,273,279 is going to be "poached" by me personally once it is 365.01 days old.)

Madpoo 2014-11-03 21:21

[QUOTE=legendarymudkip;386789]I'm currently running [url]http://www.mersenne.org/M32155297[/url] anyway, so I object to that one :razz:

Besides that, I'd stick to ones that are overdue by a long way, just in case they do actually finish it - at the moment that's just the ones that are overdue in general: 32155301, 31148503 and 31517393.

ETA on 32155297 ~5h30.[/QUOTE]

This got me wondering how long it would take on one of my large multi-core boxes... It has 2 x 10-core E5-2690 V2 processors (3 GHz), so 20 cores (plus 20 hyperthreads, but I won't count those).

With 1 worker thread using 20 cores, it tells me it'll take 12.5 hours or so.

Well heck, I'll just let it run as a manual test, but I won't check it in; I'll just use it to verify residues. It's unusual to see a quadruple check. Now I have it in my brain to do a database query and see how often that happens... what's the most # of times a check had to be done before 2 matched?

Hmm... I'll puzzle over that query. There are certainly a good # of exponents where people ran the LL test many times, but with the same residue. 21934219 has been done 656 times with the same residue, for instance. Umm... good job! 6522911 has been done 305 times with the same result, 8893783 got checked 244 times.

Maybe those 3 were used as tests for different code bases or something to verify the process. There are another 80 exponents that were done 10-79 times each with the same residue, then thousands of cases where the same exponent got the same result a single-digit number of times.

Now I need to break it down into multiple attempts where we got 3 or more different residues... I'll get back on that. :)

Madpoo 2014-11-03 22:22

[QUOTE=Madpoo;386801]...
Now I need to break it down into multiple attempts where we got 3 or more different residues... I'll get back on that. :)[/QUOTE]

Interesting. Now, the exponent report on the website isn't always going to show all of these. For example:
2397103 has been LL checked many times, resulting in no less than *13* different residues.

The exponent report doesn't list the full details though, because that number got factored by ECM. The historical data on the exponent report is also pulling from a different source and it's not going to include suspect/bad results if an exponent is later verified by a matching double-check, or if it's factored.

To see an example of an unverified triple-mismatch, try this:
[URL="http://www.mersenne.org/M62059441"]http://www.mersenne.org/M62059441[/URL]

So that one will be needing a quadruple check at some point.

Here's the weird breakdown for the number crunching crowd. I didn't bother narrowing it down to how many checks have been run altogether, so some of those "only one unique residue" are probably because it's only been checked once so far. Don't read too much into that one.
[CODE]
# of residues frequency
13 1
9 1
8 1
7 1
6 4
5 6
4 148
3 4840
2 86557
1 1326570
[/CODE]
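For the curious, that table is just a two-level count: distinct residues per exponent, then how many exponents land at each count. A toy version with invented (exponent, residue) pairs rather than the real database:

```python
from collections import Counter

# Invented sample data: (exponent, residue) pairs standing in for LL results.
results = [
    (2397103, "res_a"), (2397103, "res_b"), (2397103, "res_c"),
    (62059441, "res_x"), (62059441, "res_y"),
    (21934219, "res_k"), (21934219, "res_k"),  # same residue reported twice
    (1000003, "res_q"),
]

# Level 1: number of distinct residues per exponent.
distinct = {exp: len({r for e, r in results if e == exp})
            for exp, _ in results}

# Level 2: how many exponents have each distinct-residue count.
frequency = Counter(distinct.values())
print(sorted(frequency.items(), reverse=True))  # -> [(3, 1), (2, 1), (1, 2)]
```

Note the exponent with two identical reports counts as one residue, which is exactly why the "1" row in the real table mixes single checks with matched double-checks.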

Madpoo 2014-11-03 22:55

[QUOTE=chalsall;386800]OK, I don't disagree.

Let's not "poach" a candidate at least until George's promise of legacy assignments being given a year's "grandfathering", even if "Cat-1".

(Although I have to put on the record that Kankabar's 32,273,279 is going to be "poached" by me personally once it is 365.01 days old.)[/QUOTE]

On the topic of poaching...

I was just remembering a LONG time back when I wasn't so popular with some GIMPS participants because I was poaching exponents that I reckoned had been abandoned. I was just digging back through some old Mersenne mail list archives about those. It's always fascinating to read things you wrote 15 years ago, FYI. For good humor, go back and read some of those archived threads from June 1999. I was snarky back then, apparently. :smile:

I suppose all of my history with poaching controversies should tell me to stay out of it, although to be fair to myself, much of the criteria I used way back when are actually part of the recycling policy now, even if not specifically... Maybe I shouldn't feel so guilty about stirring the pot back then?

Xyzzy 2014-11-03 23:21

[QUOTE=Madpoo;386801]Maybe those 3 were used as tests for different code bases or something to verify the process.[/QUOTE]Maybe they were part of a disk image that got pushed out? Or the media is read-only?

Uncwilly 2014-11-04 02:09

[FONT="Century Gothic"][SIZE="6"]七[/SIZE][/FONT]

Primeinator 2014-11-04 02:41

[QUOTE]Countdown to proving M(32582657) is the 44th Mersenne Prime: 7 [/QUOTE]

Estimated completion is 11-17 :rolleyes:

Madpoo 2014-11-04 03:46

[QUOTE=Xyzzy;386813]Maybe they were part of a disk image that got pushed out? Or the media is read-only?[/QUOTE]

What's weird is that out of those 13 mismatched residues, 12 were from one user, "chatmate".

I think that user must have been checking a machine that had a lot of problems and kept checking in bad results. The DB shows that poor user had error codes on just about every result. Even on one of the check-ins, the "error code" was zero but the residue was incorrect anyway.

He/she did finally get a good result in, which was verified, only to have the whole thing get factored about a year ago by ECM.

Every exponent has its own story I suppose. :)

cuBerBruce 2014-11-04 03:55

M(32155297) now has three matching LL results.

legendarymudkip 2014-11-04 08:20

[QUOTE=cuBerBruce;386830]M(32155297) now has three matching LL results.[/QUOTE]How ironic.

Madpoo 2014-11-04 18:18

[QUOTE=Madpoo;386829]What's weird is that out of those 13 mismatched residues, 12 were from one user, "chatmate".

I think that user must have been checking a machine that had a lot of problems and kept checking in bad results. The DB shows that poor user had error codes on just about every result. Even on one of the check-ins, the "error code" was zero but the residue was incorrect anyway.

He/she did finally get a good result in, which was verified, only to have the whole thing get factored about a year ago by ECM.

Every exponent has its own story I suppose. :)[/QUOTE]

By the way, I've been working on and off updating the exponent report page... mostly some style changes, but a few other things. For instance, the LL section of each exponent will currently only show verified/unverified results (single/double check). If there are some suspect entries in there (mismatched residues) they only appear while that exponent is in the unverified status. Once a matching residue comes in, the exponent is categorized as verified, and any of those previously suspect results get marked as bad, because they were.

There's also a status for an exponent where it might be verified composite by double-checking, but then if someone does some deeper TF or ECM testing on it and finds a factor, those previous "verified" results get classed as "factored" meaning the LL was verified but it's moot since a factor was found. And I think some people might, for whatever reason, do an LL test on exponents that were already known composite because of a known factor... maybe for testing.

I think it's kind of interesting to see those previous bad results, or show the LL results even if the number was factored. So my test page is including that information right now.

What do you all think though? Is that kind of "inside baseball" stats useful? I mean, I can peek in the database and see those previous bad results, but for everyone else, if you were looking at [URL="http://www.mersenne.org/M32155297"]http://www.mersenne.org/M32155297[/URL] right now you wouldn't see those other 2 bad results anymore, now that it's verified.

You can see one of those bad results in the history section of the details, but the other bad result pre-dated this version of the database (or the raw client messages only go back so far... same difference), so only the LL entry is available for that particular bad result from user "EspElement" pre-2008 or so.

Or if you were to look at the details for M2397103, since it was eventually factored, you'd never know the sordid tale of user "chatmate" and his numerous failed efforts at that one.

So... do you think that kind of warts-and-all history (including bad results in the LL details) is useful, or just a mere curiosity, since it doesn't really affect the outcome of anything?

ATH 2014-11-04 20:00

[QUOTE=Madpoo;386863]So... do you think that kind of warts-and-all history (including bad results in the LL details) is useful, or just a mere curiosity, since it doesn't really affect the outcome of anything?[/QUOTE]

I would love to be able to see all the history but not as default.

Maybe change the "Show full details (current assignment, history, LL residues)" to "Show more details" or "Show history" and then add another check mark called "Show complete details/history (including bad results)" or something like that which would show you all the things you can currently see in your version.

Madpoo 2014-11-04 21:36

[QUOTE=ATH;386866]I would love to be able to see all the history but not as default.

Maybe change the "Show full details (current assignment, history, LL residues)" to "Show more details" or "Show history" and then add another check mark called "Show complete details/history (including bad results)" or something like that which would show you all the things you can currently see in your version.[/QUOTE]

Well, on that note we're actually trying out some other tweaks. Right now if you show the full details, it will include every time someone checked in even just 1 curve of an ECM run. Those add up...

Right now on my test page I've got it set up to "roll up" all of the curve info from the history and present it as "14,376 curves run" or whatever. That's actually not a real indication of the ECM work, since there have been plenty of curves run prior; they're just not part of what's in the history log going back to 2008'ish.

There's a metric regarding ECM work effort that I'm still wrapping my head around, which kind of relates to the # of curves run for each upper bound, and I think I'm on the verge of beginning to start to comprehend. :smile: I may show that info instead, perhaps as a percentage like "ECM work xx% done" or "ECM complete" if applicable.

George had me add in a tick box so that you *could* get the full "user xyz ran 15 curves with bounds a and b" but honestly, for some exponents like 1277 it's a LOT of stuff.

Trust me, including the occasional bad or suspect LL result won't be a lot of extra info... one or two here and there really. The vast majority of LL results come in clean as a whistle and they verify just fine.

ATH 2014-11-04 23:50

[QUOTE=Madpoo;386869]There's a metric regarding ECM work effort that I'm still wrapping my head around, which kind of relates to the # of curves run for each upper bound, and I think I'm on the verge of beginning to start to comprehend. :smile: I may show that info instead, perhaps as a percentage like "ECM work xx% done" or "ECM complete" if applicable.[/QUOTE]

The number of curves to run at each B1/B2 is the number required so that there is a 1/e ~ 37% chance (risk?) that a factor of that size was missed. I guess that is the most efficient point to move up to the next size level (where you can still find the smaller factors).
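That 1/e falls out of treating each curve as an independent trial: if the recommended count at a given size is N curves, each curve is assumed to find a factor of that size with probability about 1/N, so after k curves the miss chance is (1-1/N)^k. A quick sketch of that heuristic (nothing more, not the actual curve-count tables):

```python
def miss_probability(curves_run, curves_expected):
    """Chance a factor of the target size was missed, treating each curve
    as an independent trial with success probability 1/curves_expected
    (the heuristic behind published curve counts)."""
    return (1 - 1 / curves_expected) ** curves_run

N = 360_000  # recommended curve count at B1 = 800e6, from the ECM report
print(miss_probability(N, N))       # ~0.368, i.e. about 1/e
print(miss_probability(53_321, N))  # ~0.86 with only the curves run so far
```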

Madpoo 2014-11-05 03:59

[QUOTE=ATH;386878]The number of curves to run at each B1/B2 is the number required so there is 1/e ~ 37% chance (risk?) that a factor was missed at that size. I guess that is the most efficient time to move up to the next size level (where you can still find the smaller factors).[/QUOTE]

Okay... so maybe y'all can help me piece something together.

For an extreme example I've been looking at M1277 since there has been a lot of ECM activity on it.

There's a measurement in the database "total ECM effort" and right now for 1277 it's "42 656 688 085 105.5"

For a cheat sheet, I'm using the code behind the ECM report on the site, e.g.
[URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=1277&ecm_hi=1277"]http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=1277&ecm_hi=1277[/URL]

It tells me there are 53,321 curves tested for that 800e6 bound #1.

The calculations behind that use a table of total curves for each of the bounds (the same ones you see in that report), and for 800,000,000 it's 360,000 curves. The simple math it's doing is just 800e6 * 360e3 = 288e12. The total ECM work done, 42.66e12, is 14.81% of that... and 14.81% of 360,000 curves = 53,320.86 curves.
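Spelling out that arithmetic (the same numbers as above, just replayed in a few lines):

```python
effort_total  = 42_656_688_085_105.5  # "total ECM effort" from the database
b1            = 800e6                 # bound #1
curves_full   = 360_000               # full curve count for that bound
full_effort   = b1 * curves_full      # 288e12, the 100% mark
pct_complete  = effort_total / full_effort   # ~0.1481
curves_tested = pct_complete * curves_full   # ~53,320.86
print(round(pct_complete * 100, 2))  # -> 14.81
print(round(curves_tested))          # -> 53321
```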

So... that matches what the report is doing, and I'm okay with that; I understand how it's getting that 53,321 curves tested for that bound.
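The report's arithmetic described above can be sketched in a few lines (the constants come straight from the post; this is an illustration of the math, not the actual server code):

```python
B1 = 800e6              # bound #1 for the 65-digit level
FULL_CURVES = 360_000   # curve count treated as "complete" at this B1

# "total ECM effort" value from the database, per the post above
total_ecm_effort = 42_656_688_085_105.5   # running sum of B1 * curves

full_effort = B1 * FULL_CURVES            # 288e12 for this bound
pct_complete = total_ecm_effort / full_effort
curves_tested = pct_complete * FULL_CURVES

print(f"{pct_complete:.2%} complete, ~{curves_tested:,.0f} curves tested")
# -> 14.81% complete, ~53,321 curves tested
```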

What I wonder though... is it more helpful to show that in the exponent report like "53,321 curves tested for Bound #1=800e6" or would it be more helpful to show something like "ECM progress = 14.81% complete" or some such? Or both?

The deal is I'm not familiar with ECM in its particulars, so I apologize if the best way to show that info *should* be apparent, or if these seem like dumb questions. :smile:

axn 2014-11-05 04:59

[QUOTE=Madpoo;386891]What I wonder though... is it more helpful to show that in the exponent report like "53,321 curves tested for Bound #1=800e6" or would it be more helpful to show something like "ECM progress = 14.81% complete" or some such? Or both?[/QUOTE]
You can say that ECM is [B]14.81% complete to 65-digit level[/B] or you could say that ECM is complete to 60.7 digits (60 + (65-60)*14.81%). I am not sure that the second one is the right way to calculate the precise digit level, but it has the advantage of having only one metric (completed digit level), instead of two (target digit level and % complete).
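The interpolation suggested above can be sketched as a one-liner (the linear step between the 60- and 65-digit levels is, as noted, an approximation rather than an exact ECM statistic):

```python
prev_level, target_level = 60, 65   # digit levels bracketing the work
pct_complete = 0.1481               # fraction of the t65 curves already run

# single-metric suggestion: interpolate between digit levels
digit_level = prev_level + (target_level - prev_level) * pct_complete
print(f"ECM complete to ~{digit_level:.1f} digits")  # -> ~60.7 digits
```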

VictordeHolland 2014-11-05 10:51

[QUOTE=Madpoo;386891]
What I wonder though... is it more helpful to show that in the exponent report like "53,321 curves tested for Bound #1=800e6" or would it be more helpful to show something like "ECM progress = 14.81% complete" or some such? Or both?
[/QUOTE]
"ECM progress = xx% complete" is vague without bounds or digitlevel.
I prefer the " xx curves tested B1=y" , just like the ECM progess report. Or "53,321 of 360,000 curves tested (B1=800,000)".

lycorn 2014-11-05 16:28

[QUOTE=axn;386894]You can say that ECM is [B]14.81% complete to 65-digit level[/B].[/QUOTE]

ECM is an asymptotic method that gives a certain probability of finding a factor of a given size after running the prescribed number of curves (i.e., even after running 360000 curves with B1=800,000,000, we can't be sure that no factor smaller than 65 digits exists). So I think the quoted wording is misleading.

Madpoo 2014-11-05 17:54

[QUOTE=lycorn;386930]ECM is an asymptotic method that gives a certain probability of finding a factor of a given size after running the prescribed number of curves (i.e., even after running 360000 curves with B1=800,000,000, we can't be sure that no factor smaller than 65 digits exists). So I think the quoted wording is misleading.[/QUOTE]

Hmm... these are all good points. I'm just hoping to find some way to communicate to even [STRIKE]an idiot like[/STRIKE] myself exactly what the progress of ECM work is.

Given that ECM curves in some particular bound aren't really a guarantee of finding any factors in that range, if we stipulate that fact then we could still say "ECM work is xx% complete at B1=xxx (or @ xx digits)", because we're not saying anything about ECM itself, just that we've done x% of the stipulated maximum work we're going to bother with.

If I had the inclination, and perhaps incurable insomnia, I might actually look at why the # of curves for each bound are determined the way they are. I got the bit about the 1/e (~37%) chance that a factor was missed; I just don't know how or why that's important. It sounds like you're saying it approaches some asymptotic value, so I'm assuming anything beyond the identified curves/bound has a REALLY low chance of success.

Anyway, if we can find some way to represent that it's x% done in some way or another while also conveying that, hey, ECM work isn't really *ever* complete...

Maybe "ECM work is X% of the way done to a goal of Y", so we're merely stating that wherever we end up, it's a goal, not the end all, be all. :)

VBCurtis 2014-11-05 19:18

While you're correct that doing more curves after the prescribed number has a low chance of success, that alone isn't reason not to do them; it's that once that number of curves is complete, it is more efficient use of computrons to step up to the B1 that's optimal for finding a factor 5 digits larger.

If you want to play with the number of curves for various digit levels and B1 values, I suggest downloading GMP-ECM and using the -v flag; it will spit out the expected number of curves to find a factor of various sizes for any B1 you choose to start with. If you want curve counts as used by GIMPS, you'll have to also specify B2 as 100*B1 (since GMP-ECM uses a different algorithm, its default B2 values are much higher than Prime95's). Also, you should use a small input number to play with so you don't run into not-enough-memory faults.

Roughly, it takes 6 times more effort to complete a level 5 digits larger. There is a best B1 value for each digit level, but it is not necessary to use precisely that B1; using one that's too small will take more curves to complete the level, while using one that's too big will take more time per curve without a corresponding reduction in the number of curves needed. Time per curve in stage 1 scales linearly with B1; I'm not sure about stage 2 in Prime95.

For each candidate number, there is a limit to B1/B2 bounds that fit in the allowed memory for ECM. So, one might choose suboptimal B1/B2 combinations due to memory constraints, but still do meaningful work toward a digit level. In the case of your B1=800M example, curves run at 400M would still help with the t65 level, but not as well as half an 800M curve. For simplicity, GIMPS has chosen to record ECM work as a sum of B1 bound* curves run as a proxy for digit level; this is not-quite-accurate, but isn't far enough off to really matter as long as people aren't doing silly things like reporting thousands of curves at B1=3M when a t60 has already been done.
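The bookkeeping described above (summing B1 times curves run as a proxy for digit-level effort) can be sketched as follows; the result rows are invented examples, not real database entries:

```python
# Hypothetical ECM results for one exponent: (B1 used, curves run).
# Under this proxy, a curve at B1=400M counts as half an 800M curve.
results = [
    (400e6, 1_000),
    (800e6, 52_821),
]

TARGET_B1 = 800e6  # bound for the digit level being credited

total_effort = sum(b1 * curves for b1, curves in results)
equivalent_curves = total_effort / TARGET_B1
print(f"~{equivalent_curves:,.0f} equivalent curves at B1=800e6")
# -> ~53,321 equivalent curves at B1=800e6
```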

The readme file that comes with GMP-ECM may also help solidify these concepts. I don't know how GMP-ECM calculates the expected curve counts, but I'm pretty sure RDS wrote a paper on it that you could code given time and interest. Googling something like "silverman optimal ECM bounds" might do it (sorry, I'm lazy, didn't try myself).

Madpoo 2014-11-05 22:14

[QUOTE=VBCurtis;386941]While you're correct that doing more curves after the prescribed number has a low chance of success, that alone isn't reason not to do them; it's that once that number of curves is complete, it is more efficient use of computrons to step up to the B1 that's optimal for finding a factor 5 digits larger....[/QUOTE]

Hmm... lots to ponder there. I wonder now if I shouldn't just stick with George's original suggestion to me, which was to use the same format that the ECM results report shows. I was hoping to find something a little more terse for inclusion in the XML or table report or something a layman could understand, but I think the concept behind it is just complex enough that it probably can't be dumbed down too much without losing relevant info.

I did wonder whether extra curves run at the lower bounds technically add to that "total ECM effort" number, so the simple arithmetic there doesn't necessarily tell you how many curves have been run at the higher bounds. I guess there's always that chance.

I guess when I get my quantum CPU built, I'll go back through and find all the missing factors of the first few million exponents during my lunch break. :smile:

Dubslow 2014-11-06 02:13

From the GMP-ECM README:
[code]
The ECM method is a probabilistic method, and can be viewed in some sense
as a generalization of the P-1 and P+1 method, where we only require that
P+t+1 is smooth, where t depends on the curve we use and satisfies
|t| <= 2*P^(1/2) (Hasse's theorem). The optimal B1 and B2 bounds have
to be chosen according to the (usually unknown) size of P. The following
table gives a set of nearly optimal B1 and B2 pairs, with the corresponding
expected number of curves to find a factor of given size (column "-power 1"
does not take into account the extra factors found by Brent-Suyama's
extension, whereas column "default poly" takes them into account, with the
polynomial used by default: D(n) means Dickson's polynomial of degree n):

digits D   optimal B1   default B2    expected curves N(B1,B2,D)
                                       -power 1   default poly
   20         11e3        1.9e6            74          74  [x^1]
   25         5e4         1.3e7           221         214  [x^2]
   30         25e4        1.3e8           453         430  [D(3)]
   35         1e6         1.0e9           984         904  [D(6)]
   40         3e6         5.7e9          2541        2350  [D(6)]
   45         11e6        3.5e10         4949        4480  [D(12)]
   50         43e6        2.4e11         8266        7553  [D(12)]
   55         11e7        7.8e11        20158       17769  [D(30)]
   60         26e7        3.2e12        47173       42017  [D(30)]
   65         85e7        1.6e13        77666       69408  [D(30)]

Table 1: optimal B1 and expected number of curves to find a
factor of D digits with GMP-ECM.

After performing the expected number of curves from Table 1, the
probability that a factor of D digits was missed is exp(-1), i.e.,
about 37%. After twice the expected number of curves, it is exp(-2),
i.e., about 14%, and so on.

Example: after performing 8266 curves with B1=43e6 and B2=2.4e11
(or 7553 curves with -dickson 12), the probability to miss a 50-digit
factor is about 37%.[/code]

This measurement has colloquially taken the form of "t-levels". When we say a number has been completed to "t65", what we mean is that 70K curves (give or take) at B1=850e6 have been run. Complete to "2t65" would mean roughly 140K curves, for exp(-2) chance of missing a factor; "4t65" (nearly 300K curves) would mean exp(-4) chance to miss a 65 digit factor.

Running a bunch of curves at a given level (e.g. B1=260e6 for t60) does put work in to increasing the next highest t level, but running (say) 10x the recommended t60 B1 curves is a lot less efficient at finding a 65 digit factor than running the recommended t65 B1 curves. (Note that the 10x is just a random number -- I don't know "how many t60s is equivalent to a t70".)

By convention rather than by any substantive reasoning, as far as Mersenne numbers go (where ECM is the only plausible method), the current practice is to complete 1t60, then 1t65, then 1t70, etc., until a factor is found. The work per level of course increases exponentially (or rather, I believe, sub-exponentially; but still quite sharply).
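The t-level arithmetic above can be sketched numerically: treating each curve as an independent trial with success probability 1/N gives the exp(-n/N) miss probability quoted from the README (the 70K curve count is the rough t65 figure from the post):

```python
import math

def miss_probability(curves_run: float, expected_curves: float) -> float:
    """P(a factor at this digit level was missed) after curves_run curves,
    assuming each curve succeeds independently with prob 1/expected_curves."""
    return math.exp(-curves_run / expected_curves)

N_T65 = 70_000  # rough t65 curve count, give or take
for mult in (1, 2, 4):  # t65, 2t65, 4t65
    p = miss_probability(mult * N_T65, N_T65)
    print(f"{mult}t65: ~{p:.1%} chance a 65-digit factor was missed")
# -> ~36.8%, ~13.5%, ~1.8%
```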

ATH 2014-11-06 07:24

[QUOTE=Madpoo;386937]If I had the inclination, and perhaps incurable insomnia, I might actually look at why the # of curves for each bound are determined the way they are.[/QUOTE]

Akruppa explains it well in post #2 here:

[URL="http://www.mersenneforum.org/showthread.php?t=5871"]http://www.mersenneforum.org/showthread.php?t=5871[/URL]

and it all comes back to the often quoted paper by Silverman and Wagstaff, "A Practical Analysis of the Elliptic Curve Factoring Algorithm":

[URL="http://www.ams.org/journals/mcom/1993-61-203/S0025-5718-1993-1122078-7/"]http://www.ams.org/journals/mcom/1993-61-203/S0025-5718-1993-1122078-7/[/URL]

TheMawn 2014-11-07 01:00

[QUOTE=ATH;386998][URL="http://www.mersenneforum.org/showthread.php?t=5871"]http://www.mersenneforum.org/showthread.php?t=5871[/URL][/QUOTE]

Short thread. I read it all.

Is the B2 = 100 x B1 convention something that we have since determined to be optimal or is that also strictly for book-keeping?

I have always had a degree of interest in ECM, either for fully factoring some Mersennes or for finding at least one factor for all Mersennes. I would like to some day understand the math (getting there slowly) and run my own B1 and B2 but in the meantime I would stick to the Primenet "Optimums", assuming that they're designed to maximize the chance of finding a factor given previous effort (previous Bounds and / or currently known factors).

VBCurtis 2014-11-07 04:36

[QUOTE=TheMawn;387059]Short thread. I read it all.
Is the B2 = 100 x B1 convention something that we have since determined to be optimal or is that also strictly for book-keeping?
[/QUOTE]

GMP-ECM uses a different method for stage 2 than Prime95. GMP-ECM stage 2 is faster (and thus a larger B2 is optimal compared to Prime95), but uses vastly more memory for large composites. So, for smallish numbers (say, 5000 or fewer digits), GMP-ECM's default B2 would be more efficient; once memory requirements get too big, prime95 is the tool, and its stage 2 method is most efficient around B2=100*B1.

The Wagstaff/RDS paper states that B2 is optimal when time spent in stage 2 roughly equals time spent in stage 1; that is the source of the 100x standard with Prime95. Current GMP-ECM implementations on modern Core hardware spend a bit under half as much time in stage 2 as in stage 1, but I've found that setting B2 larger to equalize the stage 1 and stage 2 times is not more efficient.

Users with tons of memory (say, 16GB or more) can run GMP-ECM on fairly large Mersenne candidates on one core; also, note that a GPU can be used for stage 1 of ECM while a single core does the large-memory stage 2 work. One can choose B2 such that a single core (or two, I suppose, if memory is sufficient) can keep up with the GPU's stage 1 output. See the GPU-ECM thread in the ECM forum. I don't know how large the inputs can be for GPU-ECM work.
Edit: GPU-ECM has been compiled for 512-bit inputs, 1024-bit, and 4096-bit. I am not sure how large that limit can be compiled for, but at present the max size is hard-coded at compile time.

Madpoo 2014-11-07 04:51

[QUOTE=VBCurtis;387075]...
Users with tons of memory (say, 16GB or more) can run GMP-ECM on fairly large mersenne candidates on one core; also, note GPU can be used on stage 1 of ECM, while a single core does large-memory stage 2 work. One can choose B2 such that a single core (or two, I suppose, if memory is sufficient) to keep up with the GPU's stage 1 output. See GPU-ECM thread in the ECM forum. I don't know how large the inputs can be for GPU-ECM work...[/QUOTE]

Hmm... this might tie in to a feature request that I thought would be cool for Prime95: acting as a memory thrasher as well as a CPU one.

I had a troublesome server a while back with a flaky memory module. The server had a nasty habit of only crashing when some memory hungry app was running, and only after a while. This was a SQL server (in a cluster, thankfully) and when it was running, eventually SQL would reach enough memory that it hit the bad block and it would crash the entire server (the Proliant detected an uncorrectable memory error and helpfully rebooted).

I tried memory burn-in tests, both in Windows and standalone bootable programs, but oddly enough the only thing that ever crashed it was SQL.

I eventually found the bad module... the Proliant logged where the defective one was installed, but I really hoped to do a good test again once I replaced it to make sure it was truly solved.

I mean, let's say you have a server with 192GB of RAM and 72 cores... having something like Prime95 doing huge memory tests while also fully thrashing the CPU would really be an awesome burn-in. It'd also exercise the cooling system and dual power supplies. :)

I know the Prime95 stress test will do CPU stuff, but if I wanted to get it doing some memory thrashing, I'm not quite sure how best to tell it how much memory to use during ECM work, or how many worker threads should run the curves, to make sure it hits as much memory as possible. If that were baked into the stress test somehow, it'd be neat.

wombatman 2014-11-07 04:56

[QUOTE=VBCurtis;387075]*snip*
Edit: GPU-ECM has been compiled for 512-bit inputs, 1024-bit, and 4096-bit. I am not sure how large that limit can be compiled for, but at present the max size is hard-coded at compile time.[/QUOTE]

Although it was compiled for these sizes, it doesn't actually work on those numbers. Still trying to figure out how (if I can) to get it to work, but with a new job and lack of actual programming skills, that probably won't happen. In the meantime, the highest working limit is 2^1018-1.

Uncwilly 2014-11-07 06:17

[QUOTE=Madpoo;386869]Well, on that note we're actually trying out some other tweaks. Right now if you show the full details, it will include every time someone checked in even just 1 curve of an ECM run.[/QUOTE]
:direction:
Can all of the ECM and non-milestone discussion get moved to a new thread? Thank you.

legendarymudkip 2014-11-07 08:09

[QUOTE=wombatman;387080]Although it was compiled for these sizes, it doesn't actually work on those numbers. Still trying to figure out how (if I can) to get it to work, but with a new job and lack of actual programming skills, that probably won't happen. In the meantime, the highest working limit is 2^1018-1.[/QUOTE]

The reason the limit is 2^1018-1 is the size limit for modular operations on the GPU. I don't know if this can be worked around, but it might be possible.

Uncwilly 2014-11-07 13:27

[SIZE="6"][FONT="Lucida Sans Unicode"]Cinco[/FONT][/SIZE]

Primeinator 2014-11-07 17:47

[QUOTE=Uncwilly;387100][SIZE="6"][FONT="Lucida Sans Unicode"]Cinco[/FONT][/SIZE][/QUOTE]

...de mayo? You're a bit late, I'm afraid.

Madpoo 2014-11-07 21:47

[QUOTE=Primeinator;387110]...de mayo? You're a bit late, I'm afraid.[/QUOTE]

That's his way of bringing this thread back on-topic. :)

5 exponents left to prove M44 is really M44.

So, for our next minor milestone, does it seem like the one to double-check all exponents below 10M? Basically baking this in:

[URL="http://www.mersenne.org/assignments/?exp_lo=31148503&exp_hi=33219280&execm=1&extf=1&B1=Get+Assignments"]http://www.mersenne.org/assignments/?exp_lo=31148503&exp_hi=33219280&execm=1&extf=1&B1=Get+Assignments[/URL]

If that seems good, I'll see about adding that to the milestone page maybe today sometime with a link to that report.

Primeinator 2014-11-07 22:00

[QUOTE=Madpoo;387121]That's his way of bringing this thread back on-topic. :)

5 exponents left to prove M44 is really M44.

[/QUOTE]

My very poor sense of humor. Edit: Or my constant desire for Mexican food.

I also noticed that there are now fewer than 6k exponents to test until all numbers below M48 have had one LL.

retina 2014-11-07 23:47

[QUOTE=Madpoo;387121][URL="http://www.mersenne.org/assignments/?exp_lo=31148503&exp_hi=33219280&execm=1&extf=1&B1=Get+Assignments"]http://www.mersenne.org/assignments/?exp_lo=31148503&exp_hi=33219280&execm=1&extf=1&B1=Get+Assignments[/URL]

If that seems good, I'll see about adding that to the milestone page maybe today sometime with a link to that report.[/QUOTE]That URL only works if all exponents are assigned. There are times when some exponents are unassigned and are missing from the list. A different report would be needed to show all exponents, like the one I mention [url=http://www.mersenneforum.org/showthread.php?p=381855#post381855]here[/url].

ATH 2014-11-08 01:10

[QUOTE=TheMawn;387059]Is the B2 = 100 x B1 convention something that we have since determined to be optimal or is that also strictly for book-keeping?[/QUOTE]

[QUOTE=R.D. Silverman;80379](1) Spending an equal amount of time in step 1 and step 2 maximizes the per-unit time probability of success. This is independent of the step 2 METHOD. If a particular step 2 run K times as fast as step 1, and we take step 1 to B1, then we should take Step 2 to K*B1

In theory, an ordinary FFT implementation [by say a perfect programmer] can take an ordinary step 2 FFT (i.e. without the Brent-Suyama or similar extensions) to B1^2 in the same time it took step 1 to run to B1.[/QUOTE]

B2 = B1^2 would be the optimal with a "perfect" program and "infinite" memory.
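The rule quoted above reduces to a one-line formula; a sketch using the figures from this thread (the speedup factor K is whatever a given stage 2 implementation achieves):

```python
def balanced_b2(b1: float, stage2_speedup: float) -> float:
    """RDS's rule: if stage 2 covers bounds K times as fast as stage 1,
    equalizing time spent in the two stages gives B2 = K * B1."""
    return stage2_speedup * b1

# Prime95's stage 2 runs roughly 100x as fast, hence the B2 = 100*B1 norm;
# a "perfect" implementation with unlimited memory would reach K = B1.
print(f"Prime95-style: B2 = {balanced_b2(800e6, 100):.1e}")    # 8.0e+10
print(f"Idealized:     B2 = {balanced_b2(800e6, 800e6):.1e}")  # 6.4e+17
```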


[QUOTE=Madpoo;387078]I mean, let's say you have a server with 192GB of RAM and 72 cores... having something like Prime95 doing huge memory tests while also fully thrashing the CPU would really be an awesome burn-in. It'd also exercise the cooling system and dual power supplies. :)[/QUOTE]

I don't think Prime95 can run on 72 cores, but I think 32 cores last I heard. Then you could run GMP-ECM at the same time with a huge B2 to use as much memory as possible:
First you could run stage 1 and save the result at the end:
ecm.exe -save stage1.txt 3e6 1 < somenumber.txt
Then you can resume from the save file to test memory usage by extending B2 and using verbose mode; "-k 1" ensures all of stage 2 is run in one step, using the most memory:
ecm.exe -v -k 1 -resume stage1.txt 3e6 1e12 (modify 1e12 and check memory usage from the verbose output)

I am not sure if GMP-ECM can use as much as 192 GB, but I guess you could run several copies if not.

Madpoo 2014-11-08 01:59

[QUOTE=retina;387136]That URL only works if all exponents are assigned. There are times when some exponents are unassigned and are missing from the list. A different report would be needed to show all exponents, like the one I mention [url=http://www.mersenneforum.org/showthread.php?p=381855#post381855]here[/url].[/QUOTE]

In this case, they are all assigned. :smile:

That's one reason I figured this would be the next good milestone to actually have some kind of countdown and a link to a report.

The other existing milestone #'s on that page are all nice, but not all exponents in a range are assigned, so without some really wild guessing or some emphasis on assignments, we couldn't do any kind of "expected completion" thing.

Oh well... it's probably nice to have milestones that are more easily digestible, with dates in the not-too-distant future.

Xyzzy 2014-11-08 03:31

[QUOTE=ATH;387142]I don't think Prime95 can run on 72 cores, but I think 32 cores last I heard.[/QUOTE]
[QUOTE]New features in Version 28.5 of mprime
--------------------------------------

1) Changed the output to the worker windows during LL and PRP tests. The new output includes the estimated time to complete the test. There are two new options described in undoc.txt: ClassicOutput and OutputRoundoff.

2) Added some new options described in undoc.txt: ScaleOutputFrequency, TitleOutputFrequency, and SilentVictoryPRP.

3) Benchmarking on hyperthreaded machines now times only the most common cases. Specifically, hyperthreading is used only in the one cpu and all cpu cases.

4) Benchmarking trial factoring is now off by default. Prime95 should not be used for trial factoring. GPUs are about 100 times more efficient at that task.

5) On multi-core machines, benchmarks are now run on multiple workers. This measures the effect of memory bandwidth during testing and helps you select the setup that gives you the most throughput.

6) There are many new options described in undoc.txt to customize the benchmarking process.

[B]7) Maximum number of threads supported raised from 64 to 512.[/B][/QUOTE]:mike:

Luis 2014-11-08 16:44

Only [SIZE=7][B]2[/B][/SIZE]. :smile:

cuBerBruce 2014-11-08 18:08

[QUOTE=Luis;387191]Only [SIZE=7][B]2[/B][/SIZE]. :smile:[/QUOTE]

And given the recent history of poachings and double poachings, I really don't expect these last two will last much longer.

tha 2014-11-08 18:24

[URL="https://m.youtube.com/watch?v=9jK-NcRmVcw"]Final Countdown[/URL] - it is down to one now.

Madpoo 2014-11-08 18:37

[QUOTE=cuBerBruce;387195]And given the recent history of poachings and double poachings, I really don't expect these last two will last much longer.[/QUOTE]

Yeah, I think some of y'all got antsy and they've all been done or poached. :)

ZERO!

Okay... I guess I better remove the countdown from the milestone section and add it to the official list of milestones by date.

I'm working on adding that "all under 10M digits double-checked" countdown now...

TheMawn 2014-11-08 19:55

And someone already updated the Wikipedia page removing the note that we're not completely sure about M44. Didn't take them long.

:bow:

Xyzzy 2014-11-08 20:27

[QUOTE=TheMawn;387205]And someone already updated the Wikipedia page removing the note that we're not completely sure about M44.[/QUOTE][URL]http://mersennewiki.org/index.php/List_of_known_Mersenne_primes[/URL]

:sad:

kladner 2014-11-09 05:09

[QUOTE=Xyzzy;387209][URL]http://mersennewiki.org/index.php/List_of_known_Mersenne_primes[/URL]

:sad:[/QUOTE]

Why so sad? Do you grieve the death of an asterisk?

Madpoo 2014-11-09 06:45

[QUOTE=kladner;387234]Why so sad? Do you grieve the death of an asterisk?[/QUOTE]

I think it's because the asterisk was still there earlier when he posted. :) Gone now though... phew, that was a close one </sarcasm>

cuBerBruce 2014-11-09 13:36

[QUOTE=Madpoo;387240]I think it's because the asterisk was still there earlier when he posted. :) Gone now though... phew, that was a close one </sarcasm>[/QUOTE]

And I believe it was [b]two[/b] asterisks, not just one.

manfred4 2014-11-09 18:06

[QUOTE]*It is not known whether any undiscovered Mersenne primes exist between the 45th (M37,156,667) and the 48th (M57,885,161) on this chart; the ranking is therefore provisional.[/QUOTE]

That correction is wrong: we don't know whether there are Mersenne primes between M44 and M45 either.

Xyzzy 2014-11-09 19:18

[QUOTE=manfred4;387259]That got corrected wrong. We Don't know if there are mersenne primes between M44 and M45 either.[/QUOTE]:redface:

Uncwilly 2014-11-10 01:13

[QUOTE=Uncwilly;382927]The count down number for proving M44 went through 200 in the last week and has been dropping quickly.[/QUOTE]Time for a full check-point update:

All exponents below [B]32,593,019[/B] have been tested and double-checked.
All exponents below [B][COLOR="Red"]51,646,879[/COLOR][/B] have been tested at least once.

Countdown to testing all exponents below M([B][COLOR="Blue"]57885161[/COLOR][/B]) once: 5,924
Countdown to double-checking all 2[SUP]P[/SUP]-1 smaller than 10M digits: [B][COLOR="Red"]72[/COLOR][/B] (Estimated completion : [COLOR="Blue"]2015-03-06[/COLOR])
Countdown to proving M([COLOR="Green"]37156667[/COLOR]) is the [COLOR="green"]45[/COLOR]th Mersenne Prime: 62,119

retina 2014-11-10 04:49

I question the utility of the estimated completion. It is almost certainly wrong. And even if all involved clients are perfect in their estimates (which itself is very unlikely) there are still those impatient poachers that will blast through the last few exponents thus making the time estimate meaningless.

Brian-E 2014-11-10 08:59

[QUOTE=retina;387287]I question the utility of the estimated completion. It is almost certainly wrong. And even if all involved clients are perfect in their estimates (which itself is very unlikely) there are still those impatient poachers that will blast through the last few exponents thus making the time estimate meaningless.[/QUOTE]
Yes, the estimated date of completion should not be trusted for all clients. Here are the relevant lines of output from my current DC. Progress has been steady over the past two months, the test is currently 62% complete, and it should therefore finish around 20 December. The chaotic estimated completion dates may be explained either by the fact that I run a fairly old version of mprime (v26.5), and the handling of the rolling average has since been improved, or by the fact that my machine only runs a few hours per day; but I don't think either circumstance is particularly unusual.

[CODE][Wed Sep 10 21:04:50 2014 - ver 26.5]
Sending expected completion date for M35614801: Jul 3 2015
[Fri Sep 12 00:37:14 2014 - ver 26.5]
Sending expected completion date for M35614801: Jul 2 2015
[Sat Sep 13 09:32:30 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 27 2015
[Sun Sep 14 09:32:31 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 26 2015
[Mon Sep 15 09:32:32 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 21 2015
[Tue Sep 16 09:32:33 2014 - ver 26.5]
Sending expected completion date for M35614801: Jun 25 2015
[Wed Sep 17 09:32:34 2014 - ver 26.5]
Sending expected completion date for M35614801: Jun 25 2015
[Thu Sep 18 09:32:35 2014 - ver 26.5]
Sending expected completion date for M35614801: Jun 22 2015
[Fri Sep 19 09:46:53 2014 - ver 26.5]
Sending expected completion date for M35614801: Jun 21 2015
[Sat Sep 20 18:23:21 2014 - ver 26.5]
Sending expected completion date for M35614801: May 31 2015
[Mon Sep 22 01:49:36 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 22 2015
[Tue Sep 23 01:49:37 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 21 2015
[Wed Sep 24 01:49:38 2014 - ver 26.5]
Sending expected completion date for M35614801: Jun 12 2015
[Thu Sep 25 10:33:41 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 21 2015
[Fri Sep 26 11:14:42 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 21 2015
[Sat Sep 27 11:14:43 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 20 2015
[Sun Sep 28 11:14:44 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 15 2015
[Mon Sep 29 11:14:45 2014 - ver 26.5]
Sending expected completion date for M35614801: May 29 2015
[Tue Sep 30 13:27:07 2014 - ver 26.5]
Sending expected completion date for M35614801: May 29 2015
[Wed Oct 1 13:27:08 2014 - ver 26.5]
Sending expected completion date for M35614801: May 25 2015
[Thu Oct 2 22:51:03 2014 - ver 26.5]
Sending expected completion date for M35614801: May 15 2015
[Fri Oct 3 22:51:04 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 28 2015
[Sun Oct 5 14:12:27 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 15 2015
[Mon Oct 6 18:16:21 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 14 2015
[Tue Oct 7 20:27:24 2014 - ver 26.5]
Sending expected completion date for M35614801: May 12 2015
[Wed Oct 8 20:27:25 2014 - ver 26.5]
Sending expected completion date for M35614801: May 9 2015
[Thu Oct 9 23:54:26 2014 - ver 26.5]
Sending expected completion date for M35614801: May 7 2015
[Fri Oct 10 23:54:27 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 20 2015
[Sat Oct 11 23:54:28 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 10 2015
[Sun Oct 12 23:54:29 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 1 2015
[Tue Oct 14 00:40:34 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 10 2015
[Tue Oct 14 10:31:16 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 9 2015
[Wed Oct 15 10:31:17 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 9 2015
[Thu Oct 16 10:31:18 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 20 2015
[Fri Oct 17 10:31:19 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 9 2015
[Sat Oct 18 18:42:22 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 15 2015
[Mon Oct 20 01:56:57 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 7 2015
[Tue Oct 21 01:56:58 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 8 2015
[Wed Oct 22 01:56:59 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 8 2015
[Thu Oct 23 02:02:22 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 7 2015
[Fri Oct 24 02:02:23 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 6 2015
[Sat Oct 25 11:53:07 2014 - ver 26.5]
Sending expected completion date for M35614801: Apr 2 2015
[Sun Oct 26 10:53:07 2014 - ver 26.5]
Sending expected completion date for M35614801: Mar 24 2015
[Mon Oct 27 10:53:08 2014 - ver 26.5]
Sending expected completion date for M35614801: Mar 16 2015
[Mon Oct 27 17:35:45 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 4 2015
[Tue Oct 28 17:35:46 2014 - ver 26.5]
Sending expected completion date for M35614801: Jan 3 2015
[Wed Oct 29 17:35:47 2014 - ver 26.5]
Sending expected completion date for M35614801: Dec 31 2014
[Thu Oct 30 21:08:50 2014 - ver 26.5]
Sending expected completion date for M35614801: Dec 30 2014
[Fri Oct 31 21:08:52 2014 - ver 26.5]
Sending expected completion date for M35614801: Dec 25 2014
[Sat Nov 1 21:08:52 2014 - ver 26.5]
Sending expected completion date for M35614801: Dec 23 2014
[Sun Nov 2 21:08:53 2014 - ver 26.5]
Sending expected completion date for M35614801: Mar 6 2015
[Mon Nov 3 21:08:54 2014 - ver 26.5]
Sending expected completion date for M35614801: Mar 3 2015
[Sun Nov 9 15:29:41 2014 - ver 26.5]
Sending expected completion date for M35614801: Mar 6 2015
[/CODE]
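[Editor's note: the log above shows the client's reported ETA for M35614801 swinging between late December 2014 and late April 2015 over just a few weeks. A minimal sketch of how one could quantify that fluctuation by parsing lines in the format shown; the helper name `eta_spread` and the `LOG` sample are illustrative, not part of any GIMPS tool.]

```python
import re
from datetime import datetime

# A few lines in the same format as the prime.log excerpt above.
LOG = """\
Sending expected completion date for M35614801: Jan 9 2015
Sending expected completion date for M35614801: Apr 20 2015
Sending expected completion date for M35614801: Dec 23 2014
"""

# Capture the exponent name and the "Mon D YYYY" date.
PATTERN = re.compile(r"completion date for (M\d+): (\w{3} \d{1,2} \d{4})")

def eta_spread(log):
    """Return (earliest, latest, spread_in_days) over all reported ETAs."""
    dates = [datetime.strptime(m.group(2), "%b %d %Y")
             for m in PATTERN.finditer(log)]
    earliest, latest = min(dates), max(dates)
    return earliest, latest, (latest - earliest).days

if __name__ == "__main__":
    lo, hi, days = eta_spread(LOG)
    print(f"ETAs range over {days} days: "
          f"{lo:%b %d %Y} to {hi:%b %d %Y}")
```

Run against the full log above, the spread is close to four months, which is the instability the posts below are complaining about.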

Madpoo 2014-11-11 03:52

[QUOTE=retina;387287]I question the utility of the estimated completion. It is almost certainly wrong. And even if all involved clients are perfect in their estimates (which itself is very unlikely) there are still those impatient poachers that will blast through the last few exponents thus making the time estimate meaningless.[/QUOTE]

As the saying goes, it's close enough for government work. :smile:

retina 2014-11-11 04:03

[QUOTE=Madpoo;387368]As the saying goes, it's close enough for government work. :smile:[/QUOTE]How about we make a deal. If you get rid of the internal table horiz scrollbar on the recent cleared/results reports then I won't complain about the estimated completion date. :deadhorse::spinner::whistle:

Madpoo 2014-11-11 05:15

[QUOTE=retina;387369]How about we make a deal. If you get rid of the internal table horiz scrollbar on the recent cleared/results reports then I won't complain about the estimated completion date.[/QUOTE]

I think it only looks that way since you have Javascript disabled. The tablesorter plugin auto resizes columns once it loads, so that could make all the difference.

So... you could enable Javascript :deadhorse: :smile:

retina 2014-11-11 05:35

[QUOTE=Madpoo;387373]I think it only looks that way since you have Javascript disabled. The tablesorter plugin auto resizes columns once it loads, so that could make all the difference.

So... you could enable Javascript :deadhorse: :smile:[/QUOTE]Or you could remove the overflow value in the pre tag: [strike]"style="overflow: auto;"[/strike] It's like the table is in :jail:

:tantrum:

[size=1][color=grey]Note: It only overflows when someone reports a very large p-1 factor, but it is very irritating when it does happen.[/color][/size]

kladner 2014-11-11 14:57

[QUOTE=retina;387376]Or you could remove the overflow value in the pre tag: [strike]"style="overflow: auto;"[/strike] It's like the table is in :jail:

:tantrum:

[SIZE=1][COLOR=grey]Note:[COLOR=Black] [SIZE=2][B]It only overflows when someone reports a very large p-1 factor,[/B][/SIZE][/COLOR] but it is very irritating when it does happen.[/COLOR][/SIZE][/QUOTE]


Are you sure that you are not insisting on this point just to maintain your evil cred?

retina 2014-11-11 15:04

[QUOTE=kladner;387400]Are you sure that you are not insisting on this point ...[/QUOTE]I'm not insisting on anything. Merely suggesting. :whistle:[QUOTE=kladner;387400]... just to maintain your evil cred?[/QUOTE]That is a given. Even my minions know not to question that! :razz:

kracker 2014-11-11 15:35

I really don't know/get why javascript is "bad". Can someone enlighten me?

Madpoo 2014-11-11 19:48

[QUOTE=kracker;387405]I really don't know/get why javascript is "bad". Can someone enlighten me?[/QUOTE]

Short answer: It's not. :smile:

Mark Rose 2014-11-11 20:49

[QUOTE=kracker;387405]I really don't know/get why javascript is "bad". Can someone enlighten me?[/QUOTE]

It's often used to side-load all kinds of things like ads and trackers, which are both annoying and considered by some to be an invasion of privacy. mersenne.org has Google Analytics for instance.

The web got much nicer to use once I started selectively enabling JavaScript.

garo 2014-11-11 21:54

If you do want to enable javascript selectively, NoScript is the tool to use. It allows you to selectively load javascript from different sites - but alas you cannot choose which individual scripts to load from a given site. It also provides some protection against XSS attacks.

Someone here mentioned Ghostery recently; I have installed it and so far am pleased by what I see.

retina 2014-11-12 00:27

[QUOTE=kracker;387405]I really don't know/get why javascript is "bad". Can someone enlighten me?[/QUOTE]Allowing everyone else to run arbitrary unsanctioned code on your computer. How can that be considered a good thing? Even though we trust madpoo and George - servers can be compromised, connections can be hijacked, ISPs can (and do) insert their own things, etc.


All times are UTC.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.