mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   Newer milestone thread (https://www.mersenneforum.org/showthread.php?t=13871)

Uncwilly 2015-12-06 00:58

Figured that this would be a fine time to capture this:

All exponents below [COLOR="Blue"][B]34,770,721[/B][/COLOR] have been tested and double-checked.
All exponents below [COLOR="Blue"][B]58,970,231[/B][/COLOR] have been tested at least once.

Countdown to first time checking all exponents below 59M: [COLOR="Red"][B]1 [/B][/COLOR](Estimated completion : [COLOR="Green"]2015-11-28[/COLOR])
Countdown to first time checking all exponents below 60M: [B][COLOR="red"]12[/COLOR][/B] (Estimated completion : [COLOR="Green"]2015-12-24[/COLOR])
Countdown to first time checking all exponents below 61M: [B][COLOR="red"]97[/COLOR][/B] (Estimated completion : [COLOR="Green"]2016-02-05[/COLOR])
Countdown to first time checking all exponents below 62M: [COLOR="red"][B]220[/B][/COLOR] (Estimated completion : [COLOR="green"]2016-02-05[/COLOR])
Countdown to first time checking all exponents below 63M: [COLOR="red"][B]427[/B][/COLOR] (Estimated completion : [COLOR="green"]2016-02-06[/COLOR])

Countdown to double-checking all exponents below 35M: [COLOR="Red"][B]302[/B][/COLOR] (Estimated completion : [COLOR="green"]2016-01-22[/COLOR])


The next 3 months should be quite exciting!

rudy235 2015-12-06 13:56

[QUOTE=Uncwilly;418414]Figured that this would be a fine time to capture this:

Countdown to first time checking all exponents below 59M: [COLOR="Red"][B]1 [/B][/COLOR](Estimated completion : [COLOR="Green"]2015-11-28[/COLOR])

...

The next 3 months should be quite exciting![/QUOTE]

Evidently, the past month (November) should have been equally exciting! Countdown to first time checking all exponents below 59M: [COLOR="Red"][B]1 [/B][/COLOR](Estimated completion : [COLOR="Green"]2015-[SIZE="4"]11-28[/SIZE][/COLOR])

Madpoo 2015-12-07 17:31

[QUOTE=rudy235;418443]Evidently, the past month (November) should have been equally exciting! Countdown to first time checking all exponents below 59M: [COLOR="Red"][B]1 [/B][/COLOR](Estimated completion : [COLOR="Green"]2015-[SIZE="4"]11-28[/SIZE][/COLOR])[/QUOTE]

The user probably gave up on it; the machine hasn't checked in for about 4 weeks. I don't know exactly what the expiration rule is for it, but it'll be recycled pretty soon I think. It looks like that happens 60 days after it was last heard from, or 90 days after it was assigned (since it's a "class 1" exponent), which means recycling around January 2 or so, going by the 90-days-since-assignment criterion.
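A rough sketch of how the two recycling criteria described above might combine (the earlier date wins here, which matches the January 2 figure; the actual server logic may differ, and the example dates are back-figured assumptions):

```python
from datetime import date, timedelta

def expiry_date(assigned: date, last_heard: date) -> date:
    # Sketch of the recycling rule as described in the post (actual
    # server rules may differ): an assignment is recycled at the
    # earlier of 90 days after assignment or 60 days after the
    # machine was last heard from.
    by_assignment = assigned + timedelta(days=90)
    by_contact = last_heard + timedelta(days=60)
    return min(by_assignment, by_contact)

# Hypothetical dates: assigned in early October, last heard ~4 weeks
# before the December 7 post:
print(expiry_date(date(2015, 10, 4), date(2015, 11, 9)))  # 2016-01-02
```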

cuBerBruce 2015-12-18 17:47

[size=3][b]Four[/b][/size]
[QUOTE]
Countdown to first time checking all exponents below 60M: 4 (Estimated completion : 2015-12-25)
[/QUOTE]

blip 2015-12-19 12:41

[QUOTE=cuBerBruce;419603][SIZE=3][B]Four[/B][/SIZE][/QUOTE]
mprime reports an ETA of 23d for mine.

Madpoo 2015-12-19 17:39

[QUOTE=blip;419640]mprime reports an ETA of 23d for mine.[/QUOTE]

Oh well... it must report the estimated time funny?

That remaining 58M exponent will have to expire, get reassigned, and then completed anyway, and I'm not sure how much longer it'll be before it expires. I think I said early January previously? Something like that.

blip 2015-12-19 18:19

[QUOTE=Madpoo;419654]Oh well... it must report the estimated time funny?

[/QUOTE]
Yes. That's what I pointed to [URL="http://mersenneforum.org/showthread.php?t=20679"]here[/URL].

chalsall 2015-12-19 18:35

[QUOTE=Madpoo;419654]I think I said early January previously? Something like that.[/QUOTE]

Aaron, I assume you're going to "poach" this such that you don't submit your results until just before or just after it's expired but before it's reassigned?

It would be a shame for this mini-milestone to stay "on the books" for another 60+ days because of a single hold-up, while at the same time it would be a shame if multiple people "did the job".

Mark Rose 2015-12-19 18:57

He probably already knows it's prime :grin:

Madpoo 2015-12-20 18:36

[QUOTE=Mark Rose;419668]He probably already knows it's prime :grin:[/QUOTE]

LOL... I wish. :smile:

I could do what chalsall suggested. Before I do it I should probably look again at the recycling rules so I know when it was going to be recycled for sure and then turn in my result a day or two beforehand. With the grandfathered assignments I went through and figured out just when they'd expire based on their progress, and that was *more* complicated, so these should be easier to figure out.

I think for the cat 1 LL tests it was 90 days after assignment which would peg that one as expiring on Jan 2. If they don't magically finish it before then maybe I can check mine in on New Years Eve as a special treat for the milestone. LOL

petrw1 2015-12-21 19:42

My case for occasional saving of interim results on the server
 
[url]http://www.mersenne.org/assignments/?exp_lo=66587000&exp_hi=66600000&execm=1&exdchk=1&exp1=1&extf=1[/url]

There are 5 66M LL assignments here, each within hours of completing, that seem to be abandoned and will have to START ALL OVER...

kladner 2015-12-21 20:11

[QUOTE=petrw1;419871][URL]http://www.mersenne.org/assignments/?exp_lo=66587000&exp_hi=66600000&execm=1&exdchk=1&exp1=1&extf=1[/URL]

There are 5 66M LL results here within hours of completing that seem to be abandoned and will have to START ALL OVER...[/QUOTE]

I understand the frustration, but do you really want to bet your cycles on work of unknown provenance?

chalsall 2015-12-21 20:18

[QUOTE=kladner;419873]I understand the frustration, but do you really want to bet your cycles on work of unknown provenance?[/QUOTE]

Patience.

Madpoo 2015-12-22 04:58

[QUOTE=blip;419661]Yes. That's what I pointed to [URL="http://mersenneforum.org/showthread.php?t=20679"]here[/URL].[/QUOTE]

It's weird because your reported ETA is so much *sooner* than the actual ETA (which I'm estimating as Jan. 11th at 23:31 UTC) :smile:

Usually when my rolling averages are wrong, it's in the other direction, where it's reporting ETAs that are a lot farther out than reality, not sooner.

My guess is maybe your rolling average is really super high but mprime isn't getting as many cycles, and for whatever reason the rolling average isn't updating itself very well, so it still thinks you ought to be running way faster.

Odd that it's off by so much... mprime is saying 5 more days but in reality it's more like 21. That's not the normal amount of wrongness?

Are you running something very CPU intensive, to the point where the rolling average is already as low as it can go? There's a lower limit on it, but if the machine is running even slower than that, you'll have wildly inaccurate *underestimates* like you've been seeing.

In other words, something could be wrong with your configuration, or you simply have something using a lot of CPU nearly continuously?

petrw1 2015-12-22 20:55

[QUOTE=kladner;419873]I understand the frustration, but do you really want to bet your cycles on work of unknown provenance?[/QUOTE]

IMHO what might be a reasonable compromise:
- From Unknown/Unreliable/Inconsistent contributors only
(For many of us who NEVER(?) abandon an assignment, saving would be a waste)
- Once "close" to complete, save the intermediate file. Pick your favorite number: 75%; 80%; 90%.
- If it does complete the save file is simply erased.
- If not let a designate (or the server itself) complete it.
- BUT to address the "...unknown provenance..." mark it as SUSPECT
- If the next test matches it becomes the Double Check. YAY!!!
- If NOT it becomes a first time test BUT hopefully this will be the rare case.
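A minimal sketch of the SUSPECT-result rule proposed above (the residue values are made up for illustration):

```python
def classify_completed_run(saved_residue: str, new_residue: str) -> str:
    # Sketch of the proposed rule: a run finished from a SUSPECT save
    # file counts as the double-check if its residue matches the next
    # independent test; otherwise that next test stands as a fresh
    # first-time check.
    return "double-check" if saved_residue == new_residue else "first-time test"

print(classify_completed_run("6A0B1C2D3E4F5061", "6A0B1C2D3E4F5061"))  # double-check
print(classify_completed_run("6A0B1C2D3E4F5061", "FFFF000011112222"))  # first-time test
```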

kladner 2015-12-22 21:07

[QUOTE=petrw1;419924]IMHO what might be a reasonable compromise:
- From Unknown/Unreliable/Inconsistent contributors only
(For many of us who NEVER(?) abandon an assignment saving would be a waste)
- Once "close" to complete save the intermediate file. Pick your favorite number: 75%; 80%; 90%.
- If it does complete the save file is simply erased.
- If not let a designate (or the server itself) complete it.
- BUT to address the "...unknown provenance..." mark it as SUSPECT
- If the next test matches it becomes the Double Check. YAY!!!
- If NOT it becomes a first time test BUT hopefully this will be the rare case.[/QUOTE]

I am wondering if interim residues might help save time spent on a corrupted LL. Say you start from a save file. If the next interim failed to match, would that give a hint of the likely outcome?

Madpoo 2015-12-22 22:24

Sigh... M58970231 checked in
 
I wasn't paying too much attention and when I checked in a batch of results, that final 58M exponent was in there.

Well, I still don't think the original person was going to finish, and I was planning to check it in around Dec 31, but honestly, I'm on vacation for the next couple weeks and I'd rather not worry about holding that back. :smile:

Anyway, it's done... wasn't prime (I *would* have noticed that).

Milestone page updated, and I tossed in extra countdowns up to 67M:
[URL="http://www.mersenne.org/report_milestones/"]http://www.mersenne.org/report_milestones/[/URL]

Madpoo 2015-12-22 22:41

[QUOTE=kladner;419927]I am wondering if interim residues might help save time spent on a corrupted LL. Say you start from a save file. If the next interim failed to match, would that give a hint of the likely outcome?[/QUOTE]

One problem presented by these ideas is that the interim files aren't small. Sure, individually the file for something in the 72M range is "only" 9.2 MB, but cumulatively, for all of the assignments out there... well, it's a lot of data.

By way of estimate, take the exponent size and divide by 8 to get the approx file size in bytes.

For all active LL and DC assignments, that would add up to:
815,845,182,349 bytes (816 GB / 760 GiB).

Granted, many active assignments have zero progress. If I only include assignments with a percent done > 0:
259,718,199,046 bytes (260 GB / 242 GiB).
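The divide-by-8 rule of thumb above can be checked quickly (real save files carry some header overhead, which is why a 72M-range file comes out nearer 9.2 MB than 9):

```python
def interim_file_size_bytes(exponent: int) -> int:
    # The post's rule of thumb: the residue is a p-bit number, so the
    # save file is roughly exponent / 8 bytes.
    return exponent // 8

print(interim_file_size_bytes(72_000_000))  # 9000000 bytes, i.e. ~9 MB
```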

Sure, okay, disk storage [I]could[/I] handle that, but what about bandwidth concerns? How often would each assignment be expected to check in their latest interim file? Probably not time based, but every XX %.

Let's say it was every 25%, so realistically it would upload an interim file 3 times (at 25, 50 and 75%), i.e. 3x the file size per assignment. It would be spread out over however many days... I haven't the foggiest idea what the average time is for a worker to reach 25% complete. :smile: Depends on exponent size and CPU speed, so any average will be wildly misleading. LOL

Anyway, you get the idea... storage and bandwidth are the problems with any kind of interim file repository. Back in dialup days it was even more so than today, but even now with nice fast internet connections, on the server side at least, there's a cost to bandwidth. I forget how much monthly data the server's current home allows (it's higher than what we use, I'll leave it at that), but suffice to say, it's less than it would need to be.

On other server colocations, you're not paying for total data per month; rather, it's billed at the 95th percentile of bandwidth, so it's a little more possible on a budget, but only if the uploads (and occasional downloads?) are spread out evenly to avoid big spikes at peak times of day. I have a feeling that, were the Prime95 client to automate uploading interims at whatever %, the traffic would average out fairly smoothly over the course of a day across all clients.

Madpoo 2015-12-22 22:50

[QUOTE=Madpoo;419937]One problem presented by these ideas is that the interim files aren't small...[/QUOTE]

An alternate method could involve saving the 64-bit residues along the way. Right now the server only saves the final residue. If it were to save residues every 10% along the way, a double-checker who comes along later would be able to compare results every 10% rather than once at the end.

Question then is, what should happen when a mismatch occurs at some step along the way? Maybe it's immediately made available for a simultaneous triple-check since we know early on that a mismatched final residue is coming.

That wouldn't really speed anything up necessarily, but then again, we'd have residues at the different percentages from people who didn't bother completing the whole thing.

Imagine some newbie starts a double-check and only gets up to 10%, but it mismatches the first-time check at that same point.

It might not be a "sure thing", but for my "strategic double-checking", where we try to find the bad apples ahead of the DC wavefront, that would be a good clue to check those out first. The machine of the newbie that gave up might have been the bad one, but it could also be an indicator that the first test was bad, even though no full DC was done.
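The early-mismatch idea amounts to comparing checkpoint residues pairwise; a minimal version (the residues here are made up):

```python
def first_mismatch(run_a, run_b):
    """Compare 64-bit interim residues recorded at the same checkpoints
    (say every 10%); return the first checkpoint index where they
    disagree, or None if the overlapping prefix matches."""
    for i, (a, b) in enumerate(zip(run_a, run_b)):
        if a != b:
            return i
    return None

# A partial double-check that diverges at the second checkpoint would
# flag the pair for early triple-checking:
first_test = ["1a2b", "3c4d", "5e6f"]
partial_dc = ["1a2b", "ffff"]
print(first_mismatch(first_test, partial_dc))  # 1
```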

Another potential idea is that Prime95 could be fitted with a newfangled "quit GIMPS" option. Let's say the newbie tries it out and then says "meh, don't need this". Some big "QUIT GIMPS" button would upload their current interim file, at whatever iteration it's at, for the next assignee to pick up.

It depends on user participation and not just "stop and delete the program", but you get the idea...

blip 2015-12-23 01:57

[QUOTE=Madpoo;419901]It's weird because your reported ETA is so much *sooner* than the actual ETA (which I'm estimating as Jan. 11th at 23:31 UTC) :smile:...[/QUOTE]

Well, that system is a bit short on memory, which could also be faster. So, as soon as I can afford it, I will add more and faster RAM. It sports an i7-4930K and runs mprime with 5 exponents plus [URL="http://www.mersenne.org/report_exponent/?exp_lo=450000017&full=1"]this one[/URL]. So, it will be busy for some time.

But anyways, it moves forward all right. I just wanted to give a heads up here and manage expectations.

Madpoo 2015-12-23 02:06

[QUOTE=blip;419945]Well, that system is a bit short on memory, which also could be faster. So, as soon as I can afford it, I will add more and faster RAM. It sports an i7-4930K and runs mprime with 5 exponents plus [URL="http://www.mersenne.org/report_exponent/?exp_lo=450000017&full=1"]this one[/URL]. So, it will be busy for some time.

But anyways, it moves forward all right. I just wanted to give a heads up here and manage expectations.[/QUOTE]

Aha! There's your problem.

In my own testing I discovered that if you run multiple workers on the same system, the performance will degrade a LOT if any of the exponents are over a certain size (the exact size varies, depending mostly on the speed of the RAM).

With DDR3 memory I could run multiple workers okay if they were all under 37M or 38M, give or take (multiple workers on one CPU, I'm not counting what happens across multiple CPUs).

If you were to stop all but one of the workers, you will almost certainly notice that the worker still running will speed up a lot. Give it a try and you'll see what I mean.

In the case of very large ( > 100 million digit) exponents, my recommendation is to run one worker with all of the physical cores assigned to it. Trying to run one large FFT worker alongside other smaller FFT workers is just going to thrash the memory within an inch of its life. :smile:

Great for stress testing, lousy for LL throughput.

blip 2015-12-23 10:06

Ok, I put all 6 cores now on 59425643. It should be finished in about 70 h.

I have to figure out how to optimise workers/threads for optimal LL throughput.

Are there some details on how workers with "big" exponents impact other workers?

Madpoo 2015-12-23 18:09

[QUOTE=blip;419965]Ok, I put all 6 cores now on 59425643. It should be finished in about 70 h.

I have to figure out how to optimise workers/threads for optimal LL throughput.

Are there some details on how workers with "big" exponents impact other workers?[/QUOTE]

The details are anecdotal on my part... I've shared in other threads (where exactly on here, I couldn't say) my experimentation.

All I know is that my testing on systems with DDR3 (dual CPU Xeon systems in all cases) showed that I could run two workers, each using all of the physical cores on its CPU, and:[LIST][*]if both exponents were below 58M things were fine[*]if one exponent was above 58M, the other one needed to be below 38M[*]otherwise there must be some kind of extra memory thrashing that happens[/LIST]Regarding performance on a single CPU, I did a little bit of testing on systems with dual 4-core (8 HT) CPUs. Even though it's dual CPU, I think I could extrapolate this to a single CPU:[LIST][*]4 cores of the CPU in a single worker, no problem doing whatever size (but then with dual CPU, the findings above would apply)[*]2 workers with 2 cores each: exponents below 38-40M worked best. Larger than that and the performance of the 2 workers starts to degrade[*]4 workers with 1 core each: it could still, sort of, manage exponents below 38M but, for me anyway, it wasn't any more efficient than running 2 workers with 2 cores each. Usually when you add extra cores to a worker it doesn't scale linearly. Going from 1 to 2 cores does NOT cut the total time in half. But in this case I found that running 2 workers with two cores actually was about twice as fast as 4 workers with one core. So I went with that just to churn out the results of a single test faster.[/LIST]
The big fun was when I got a system with DDR4 (a dual Xeon v3 system). The DDR4 must play very well with the FFT code because I could now run 2 workers using all of the cores (14 cores per CPU on this lovely box), and it didn't matter how large the exponents are.

Generally I use that system to test the exponents 60M and higher since I can run two at once without any balancing act. On other systems I have to put 60M+ exponents in one worker and fill the other worker with exponents < 38M.

Not only that but on the DDR4 system it scales much better when adding additional cores to a worker. There's not as much "penalty" for doing so. In fact, on the older systems I could add one core from the other CPU socket and get a small boost in speed, but adding 2+ cores would start to degrade performance. On this system though, I could use all 14 on one CPU and 7-8 cores from the other CPU and still see the performance increase. Adding 8+ cores from the other CPU doesn't start to decrease performance (doesn't increase it either...it tends to be about the same even up to having all 28 cores on one worker).

From that I tend to conclude that memory bandwidth is the real limit on running multiple workers or multi-core per worker. But even with good, reliable, fast DDR3 I'm not sure if it can beat what I see with solid DDR4 memory. And of course some of that may be the architectural differences between the Xeon v2 and Xeon v3 CPUs as well (faster QPI between sockets, etc).

Summary is, you may have to experiment to see what works best in your situation, but those were my findings if it's helpful to you.
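As a rough encoding of the DDR3 pairing observations above (the 58M and 38M thresholds are the poster's approximate figures from his own testing, not hard limits):

```python
def ddr3_pairing_ok(exp_a: int, exp_b: int) -> bool:
    """Rule of thumb distilled from the DDR3 findings above (two
    workers, one per CPU socket); thresholds are approximate."""
    small, big = sorted((exp_a, exp_b))
    if big <= 58_000_000:
        return True                # both under ~58M: fine
    return small <= 38_000_000     # a big exponent needs a <38M partner

print(ddr3_pairing_ok(35_000_000, 61_000_000))  # True
print(ddr3_pairing_ok(45_000_000, 61_000_000))  # False
```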

Madpoo 2015-12-23 18:15

[QUOTE=blip;419965]Ok, I put all 6 cores now on 59425643. It should be finished in about 70 h.[/QUOTE]

Oh, and by the way, yeah, the fact that 6 cores on one worker can do it in 70 hours... from that we could say that, *ideally* if it were just 1 core chugging away at it, it might finish in 70 * 6 hours, or 17.5 days.

So maybe the 20-21 days it was going to take wasn't so far off the mark. On the other hand, adding extra cores to a worker doesn't scale linearly, so a single cored worker would be a bit more efficient and might look more like 70 * 5.5 or so and finish in around 16 days.
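That back-of-envelope conversion can be written out; the 5.5 "effective cores" figure is the rough sub-linear-scaling estimate from the post:

```python
def single_core_days(hours_with_n_cores: float, effective_speedup: float) -> float:
    # Undo the multi-core speedup to estimate a single-core run time.
    # Scaling is sub-linear, so the effective speedup of 6 cores is
    # closer to ~5.5 than to a full 6.
    return hours_with_n_cores * effective_speedup / 24

print(single_core_days(70, 6))    # ideal linear scaling: 17.5 days
print(single_core_days(70, 5.5))  # ~16 days, closer to reality
```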

I applaud you for tackling a big 100M digit exponent. They're fun when they finally finish... some weird sense of accomplishment. In your shoes I might consider something like having 3 cores work on that one, and have the other 3 cores do double-checking work, preferably exponents below 38M since that's what worked best for me. In my case I had dual CPUs, so I'm not sure how splitting the work like that would play out on a single CPU.

blip 2015-12-23 20:56

[QUOTE=Madpoo;420003]
I applaud you for tackling a big 100M digit exponent. They're fun when they finally finish... some weird sense of accomplishment. In your shoes I might consider something like having 3 cores work on that one, and have the other 3 cores do double-checking work, preferably exponents below 38M since that's what worked best for me. In my case I had dual CPUs so I'm not sure what it would be to split the work like that on a single CPU.[/QUOTE]
Thanks. I will finish all other exponents on that machine first and then figure out how to go forward. When I first got that big exponent, mprime reported an ETA > 2000d. Let's see how we can improve that.

Madpoo 2015-12-24 06:35

[QUOTE=blip;420022]Thanks. I will finish all other exponents on that machine first and then figure out how to go forward. When I first got that big exponent, mprime reported an ETA > 2000d. Let's see how we can improve that.[/QUOTE]

Well, with 6 cores all working on that one exponent, you could hope for a six-fold improvement, maybe 340 days. In reality it might be more like a year, give or take a month? :smile: Could be faster because your estimate of 2000 days could have been slowed down since it was running 5 other smaller ones at the same time. Hard to say.

Not for the faint of heart.

manfred4 2015-12-24 11:29

One other way to improve that would be factoring it to 82 bits first, which has quite a decent chance of ruling that candidate out once and for all.

blip 2015-12-24 18:43

[QUOTE=manfred4;420073]One other way to improve that would be factoring it to 82 bits first, which has quite a decent chance of ruling that candidate out once and for all.[/QUOTE]
well, it could be prime...

But yes, I know. I pushed it to 78 bits, and then decided to give it a try with LL, just to see if and how it is working with an exponent of that size. (you have to start somewhere...). P-1 took a while, and now I have a process on that machine running probably until EOL of that specific system :-)

I need more power!
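For scale, the usual GIMPS heuristic (not stated in the thread) puts the chance of finding a factor between bit levels a and b at roughly ln(b/a), so taking the exponent from its current 78 bits to the suggested 82 is about a 5% shot:

```python
import math

def factor_chance(bits_from: float, bits_to: float) -> float:
    # Standard GIMPS rule of thumb: the probability that a Mersenne
    # number has a factor between 2^a and 2^b is roughly ln(b/a).
    return math.log(bits_to / bits_from)

print(round(factor_chance(78, 82), 3))  # 0.05, i.e. about a 5% chance
```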

cuBerBruce 2015-12-26 15:01

[QUOTE]Countdown to first time checking all exponents below 60M: 1[/QUOTE]

[size=5][b]1[/b][/size] to go. (And is that last one possibly done already?)

henryzz 2015-12-26 15:22

[QUOTE=Madpoo;419937]One problem presented by these ideas is that the interim files aren't small... storage and bandwidth are the problems with any kind of interim file repository.[/QUOTE]

How fast is the needed bandwidth increasing compared with how fast the cost per MB is decreasing?
I would guess that this sort of setup would be getting cheaper over time. When would it start to look viable?

cuBerBruce 2015-12-26 23:22

[QUOTE]All exponents below 60,343,331 have been tested at least once.[/QUOTE]

The 60M milestone has been reached!
:party: :toot:

Madpoo 2015-12-26 23:40

[QUOTE=henryzz;420221]How fast is the needed bandwidth increasing compared with how fast the cost per MB is decreasing?
I would guess that this sort of setup would be getting cheaper over time. When would it start to look viable?[/QUOTE]

Good question.

Depends on many factors... at what point would an interim file be sent to the server, and how long would it be kept for?

I imagine that in whatever case, once a result is turned in, any interim files stored for that exponent would be removed... no use for them anymore. Perhaps the partial residue is still saved at whatever percent for comparison by double-checks...

In the case of abandoned work where it managed to upload an interim file at some point, would we expect the server to hand that out to the new assignee so it can continue at the same point?

Would the server keep several interims for the same exponent, or let's say it uploaded at 30% and 60% (just for example), should it delete the 30% file when the 60% comes in?

Or maybe not even bother with multiple interims of the same exponent... just have the client upload one when it reaches 33% or something and then nothing else... just delete that when it checks in a result, or hand it to someone else if the assignment expires.

It would be useful to know how many iterations, on average, an assignment reports before it expires because the user wandered off. It's probably a big range... and that's if they even report back at all. It may not surprise anyone that a lot of anonymous assignments are never heard from again after the day the number was checked out.

So... yeah, a lot of variables on when to collect, how many to collect, how many and when to save them/hand them out to new folks, etc.

Madpoo 2015-12-27 00:39

[QUOTE=cuBerBruce;420253]The 60M milestone has been reached!
:party: :toot:[/QUOTE]

Milestone page updated.

henryzz 2015-12-27 17:07

Assuming weekly uploads, 260 GB / 7 ≈ 37 GB/day. This needs an average of around a 3.5 megabit connection. My home connection could handle that without any additional cost. It all depends on how the cost is calculated.

This is probably an overestimate as well, since many people won't upload each week.
Storage on the server shouldn't be an issue, I would have thought. Special disks shouldn't be needed to provide the bandwidth; a normal disk could handle that traffic.
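That back-of-envelope bandwidth figure checks out (decimal GB assumed; exact division gives a shade under 3.5 Mb/s):

```python
def avg_mbit_per_sec(gb_per_week: float) -> float:
    # Spread the weekly upload volume evenly over the week and
    # convert bytes/s to megabits/s.
    bytes_per_second = gb_per_week * 1e9 / (7 * 86400)
    return bytes_per_second * 8 / 1e6

print(round(avg_mbit_per_sec(260), 2))  # 3.44 Mb/s sustained
```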

chalsall 2015-12-27 18:02

[QUOTE=henryzz;420296]It all depends on how the cost is calculated.[/QUOTE]

There are, in my mind, two other issues...

1. Who gets the credit for the cycles (and, perhaps more importantly, when the next Mersenne Prime is found)? The one who completes the last cycle, the one who contributed the most cycles, or all who contributed cycles?

1.1. If either (or both) of the latter two, trust that some will "game the system" by contributing only enough cycles to be in the game for others to then finish.

2. Those who don't complete an assignment probably aren't that serious, and thus (heuristically) might have unreliable machines.

2.1. Is it worth investing in new software development (humans are expensive), bandwidth and storage (cheap) when the number of bad tests will almost certainly go up (possibly without detection for years)?

2.1.1. Where, exactly, do the economic curves cross?

henryzz 2015-12-27 19:21

[QUOTE=chalsall;420297]There are, in my mind, two other issues...

1. Who gets the credit for the cycles (and, perhaps more importantly, when the next Mersenne Prime is found)? The one who completes the last cycle, the one who contributed the most cycles, or all who contributed cycles?
[/quote]
I assume credit will have to be shared. Any recognition will need to be proportional to the number of iterations done.
[QUOTE=chalsall;420297]
1.1. If either (or both) of the latter two, trust that some will "game the system" by contributing only enough cycles to be in the game for others to then finish.
[/quote]
A sensible minimum number of iterations would help with this. 1M maybe? Credit would be divided according to the number of iterations done anyway.
[QUOTE=chalsall;420297]
2. Those who don't complete an assignment probably aren't that serious, and thus (heuristically) might have unreliable machines.
[/quote]
Maybe an upload should be double-checked before it is used.
[QUOTE=chalsall;420297]
2.1. Is it worth investing in new software development (humans are expensive), bandwidth and storage (cheap) when the number of bad tests will almost certainly go up (possibly without detection for years)?
[/quote]
With double-checking, will they go up significantly?
The aim here is to not waste work when someone leaves GIMPS with a large percentage done.
Human hours are an issue. Madpoo, Prime95 etc. will need to decide whether it is worth their time.
[QUOTE=chalsall;420297]
2.1.1. Where, exactly, do the economic curves cross?[/QUOTE]
We could do with an estimate of how much work this would save.

edit: Maybe this discussion should be split off into its own thread.
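A sketch of the proportional-credit idea with the 1M-iteration floor suggested above (the scheme and the user names are hypothetical):

```python
def split_credit(total_credit: float, iterations_by_user: dict,
                 min_iterations: int = 1_000_000) -> dict:
    # Credit proportional to iterations contributed, ignoring
    # contributions below a minimum floor to discourage gaming.
    counted = {u: n for u, n in iterations_by_user.items()
               if n >= min_iterations}
    total = sum(counted.values())
    return {u: total_credit * n / total for u, n in counted.items()}

shares = split_credit(100.0, {"alice": 30_000_000,
                              "bob": 10_000_000,
                              "drive_by": 500_000})
print(shares)  # {'alice': 75.0, 'bob': 25.0} -- drive_by gets nothing
```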

Madpoo 2015-12-27 19:24

[QUOTE=chalsall;420297]There are, in my mind, two other issues...

1. Who gets the credit for the cycles (and, perhaps more importantly, when the next Mersenne Prime is found)? The one who completes the last cycle, the one who contributed the most cycles, or all who contributed cycles?[/QUOTE]

No credit for quitters. :smile:

[QUOTE=chalsall;420297]2. Those who don't complete an assignment probably aren't that serious, and thus (heuristically) might have unreliable machines.[/QUOTE]

Possibly. It's hard to say without digging into details. I suppose a basic analysis of machines that have only checked in one result, and what percent of those were bad, might offer a clue. I don't know if that could be extrapolated to machines that never checked in a result, but maybe it'll get you in the ballpark.

S485122 2015-12-27 19:35

I wouldn't want to start on the basis of work done by an unknown machine.

Another thing is that one would get a mixture of hardware, software versions and all.

Is the total work done on almost-finished, but abandoned, work units really so big? Of course they stand out, because they remain on the active assignments list for a long time (until they expire). But compared to the total throughput, I am sure the work done on those "almost finished assignments" is negligible.

In my opinion, it's an idea that sounds good but isn't.

Jacob

Madpoo 2015-12-27 19:43

[QUOTE=henryzz;420296]Assuming weekly uploads, 260 GB/7 ~ 35 GB/day. This needs an average of around a 3.25 megabit connection. My home connection could handle that without any additional cost. It all depends on how the cost is calculated.

This is probably an overestimate as well. Many people won't upload each week.
Storage on the server shouldn't be an issue I would have thought. Special disks shouldn't be needed to provide bandwidth. A normal disk could handle that traffic.[/QUOTE]

Disk storage and performance isn't really an issue. Right now the server does NOT have enough space for something like this, but that can always be upgraded.

The main thing would be handling the network bandwidth of all that upload/download.

The current colocation provider offers a certain amount of monthly total data transferred, or I think alternatively they can do a bandwidth cap at 100 Mb/s.

Let's say we assumed 500 GB of weekly data, that's a couple TB per month on top of the normal server functions.
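As a sanity check on the figures quoted in this exchange, here is a quick editorial back-of-the-envelope sketch (GB-vs-GiB rounding is ignored, as it is in the thread):

```python
# Sanity-check the bandwidth figures discussed above.

def mbit_per_s(bytes_per_week: float) -> float:
    """Average line rate needed to move this many bytes in one week."""
    seconds_per_week = 7 * 24 * 3600
    return bytes_per_week * 8 / seconds_per_week / 1e6

# henryzz's case: 260 GB per week (~35 GB/day after his rounding)
print(round(mbit_per_s(260e9), 2))   # ~3.4 Mb/s average

# Madpoo's case: 500 GB per week, expressed as a monthly total
weekly_gb = 500
monthly_tb = weekly_gb * (365.25 / 12 / 7) / 1000
print(round(monthly_tb, 1))          # ~2.2 TB/month: "a couple TB"
```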

In theory it could be possible... ultimately though this gets into an area that might be better suited for hosting the files and transfers on AWS in an S3 storage blob just for convenience and to keep the core server functions isolated. I guess if it turned out the server could handle the bandwidth and storage, it could be moved in-house.

This is all speculative at this point anyway... but suffice it to say that technology now would make a feature like this feasible, unlike years ago when Prime95 started out and dial-up was common.

It would involve changes to the client, the server, new options to enable/disable that feature (clients might not want a bunch of data going "into the cloud" regularly), consideration of the "upload points" (at what % does the interim file get sent up), who gets credit (only the one who finishes, if you ask me), should the server start saving temporary partial residues along the way to provide more "point in time" comparisons besides just the final, etc.

And what's the end goal? To me the end goals are essentially:
1) abandoned work can be picked up by someone else without starting over
2) partial residues along the way let us know when one or the other tests goes screwy, before the final iteration, so a 3rd (or more) test can be assigned immediately if desired.

Madpoo 2015-12-27 20:09

[QUOTE=S485122;420306]I wouldn't want to start on the basis of work done by an unknown machine.[/QUOTE]

I wouldn't mind, but probably only on double-checks so that a good/bad answer will be known at the end. I'm less sure if I'd be willing to do that when the answer as to the veracity of the result is a decade out.

[QUOTE=S485122;420306]Another thing is that one would get a mixture of hardware, software versions an all.[/QUOTE]

True, but everything still gets double-checked so it may not matter too much that something was tested at first by mprime 27.9 and finished by Prime95 28.7 or by mlucas or by a GPU. The final residue should still match unless the hardware bombed.

There are already folks out there (myself included) who may start on one machine and finish on another, or who upgrade their Prime95 mid-run so the start/end versions are different. And I think others have probably moved interim files from Win<->Linux or to a GPU. In other words, all those situations probably already happen.

[QUOTE=S485122;420306]Is the total work done on on almost finished, but abandoned, work units so big ? Of course they stand out because they will remain on the active assignments for a long time (until they expire.) But compared to the total of the throughput I am sure the work done on those "almost finished assignments" is negligible.
[/QUOTE]

Good question... I wasn't sure just how many assignments are abandoned and at what stage/% done.

There have been 720,670 assignments that expired with zero work done. I'd guess almost all of those (95%+) never checked in again after it was assigned.

Another 174,815 assignments expired after checking in "some" progress. Of those, 159,206 were in the LL stage (the others reported some progress in the TF or P-1 stages but never started LL).

The average % done of those that started LL is 12.15%, but here's a breakdown in 10% bins. And yes, oddly there are 3 results that expired even though the LL % done is 100%. Might they have been rounded up from 99.95%+?

[CODE]% Done Count
0 116600
10 11806
20 7164
30 5474
40 4103
50 3551
60 3084
70 2749
80 2565
90 2107
100 3[/CODE]

So if we hypothesize a system where the interim file is uploaded at 10%, there are 42,606 abandoned LL tests that could pick up again at 10%. Is a 10% leg up enough of a time-saver to make it worthwhile? If it were 20% that would mean 30,800 tests that could be picked up a fifth of the way in... is that a good enough time save?

An alternative could be simply uploading the interim file every 10% rather than at a single fixed point, so the time saved would depend entirely on how far they got, to the nearest 10%. But then the monthly bandwidth requirement goes up quite a bit.

Maybe it's an option that only new accounts would use by default since they're the most likely to abandon work before finishing. That would save significant bandwidth and storage from people like Curtis and other heavy users who go through a lot of work and actually finish it.

Might even be the kind of thing that's enabled for the first couple LL tests a machine does and then shuts itself off? Just thinking out loud here...

chalsall 2015-12-27 20:17

[QUOTE=Madpoo;420311]Might even be the kind of thing that's enabled for the first couple LL tests a machine does and then shuts itself off? Just thinking out loud here...[/QUOTE]

But... That would involve a lot of work on the Server(s) and Client side software (with the risk of new bugs), for very little real upside.

An interesting idea, but not worth the risk (IMEO).

NBtarheel_33 2015-12-27 21:58

[QUOTE=Madpoo;420311]The average % done of those that started LL is 12.15%, but here's a breakdown by 10th percentiles.

[CODE]% Done Count
0 116600
10 11806
20 7164
30 5474
40 4103
50 3551
60 3084
70 2749
80 2565
90 2107
100 3[/CODE]

So if we hypothesize a system where the interim file is uploaded at 10%, there are 42,606 abandoned LL tests that could pick up again at 10%. Is a 10% leg up enough of a time-saver to make it worthwhile? If it were 20% that would mean 30,800 tests that could be picked up a fifth of the way in... is that a good enough time save?[/QUOTE]

42,606 LL tests at 10% completion is an amount of work equivalent to ~4,261 completed LL tests.

30,800 LL tests at 20% completion is an amount of work equivalent to ~6,160 completed LL tests.

At 200 GHz-days per LL test, we are looking at 852,200 GHz-days and 1,232,000 GHz-days of salvaged work, respectively. This represents roughly the combined total throughput of GIMPS as a whole over approximately 5-7 days. I suppose the next question would be over what time frame would we be "gaining" these extra days of throughput (and, of course, not all of it would be a gain, as some of the interim files may have errors).

Question: what do percentiles 1-9 look like in terms of the number of tests abandoned?

NBtarheel_33 2015-12-27 22:11

[QUOTE=Madpoo;420308]who gets credit (only the one who finishes, if you ask me), should the server start saving temporary partial residues along the way to provide more "point in time" comparisons besides just the final, etc.[/QUOTE]

Re: credit - we could do one of two things: (1) gussy up the server enough to award a proportional number of GHz-days to the original and subsequent tester(s) (e.g. 20% of a 200-GHz-day LL test would earn 40 GHz-days, 35% of the same test would earn 70 GHz-days, etc.), or (2) hold the interim result until the original AID has expired (which would end any claim that the original assignee might have on the exponent or its test result). The GIMPS legal disclaimers could be amended (if they do not already state this) to equate expiration of an assignment with expiration of any claim on the test result.
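Option (1)'s pro-rated credit is simple arithmetic; a minimal editorial sketch (the function name is hypothetical, not anything in PrimeNet):

```python
def partial_credit(full_ghz_days: float, pct_done: float) -> float:
    """GHz-days awarded for a partial LL run, pro-rated by % complete."""
    return full_ghz_days * pct_done / 100

# The examples from the post: a 200-GHz-day LL test
print(partial_credit(200, 20))  # 40.0
print(partial_credit(200, 35))  # 70.0
```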

Re: partial residues - definitely worth considering. Saving a Res64 even at 1% intervals would only require a total of 800 bytes or so. Trivial to transfer and store.

chalsall 2015-12-27 22:16

[QUOTE=NBtarheel_33;420323]Trivial to transfer and store.[/QUOTE]

I understand what you are saying, but _*not*_ trivial to code.

NBtarheel_33 2015-12-28 14:52

[QUOTE=chalsall;420325]I understand what you are saying, but _*not*_ trivial to code.[/QUOTE]

Would it be any easier to just send a Res64 every time Prime95 "phones home" to PrimeNet? That's only an extra eight bytes of information added to the ETA, etc. that otherwise gets sent every day.

retina 2015-12-28 14:59

[QUOTE=NBtarheel_33;420364]Would it be any easier to just send a Res64 every time Prime95 "phones home" to PrimeNet? That's only an extra eight bytes of information added to the ETA, etc. that otherwise gets sent every day.[/QUOTE]Even assuming you could and assuming it is just a five minute job to change everything (perhaps quite aggressive assumptions!), what is the purpose of the captured data? How is it used? What happens if systems don't agree on the staged res64?

ATH 2015-12-28 15:42

The server should keep the 64 bit residue for every 5 or 10 million iterations.

Madpoo 2015-12-28 19:26

[QUOTE=retina;420367]Even assuming you could and assuming it is just a five minute job to change everything (perhaps quite aggressive assumptions!), what is the purpose of the captured data? How is it used? What happens if systems don't agree on the staged res64?[/QUOTE]

I would imagine a system where:[LIST][*]First time LL test has a saved partial residue (64-bit, just like now) every 5% or something[*]Second (double) check would compare its residue along the way.[*]If everything is matching, good; hopefully the final one matches as well[*]If there's a mismatch at any point along the way, the exponent can be made available immediately for a triple-check since we know it'll need one.[/LIST]
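That flow could be sketched as follows (an editorial illustration; the data layout and names are assumptions, not PrimeNet's):

```python
# Hypothetical server-side check: compare interim Res64 values from the
# first-time test and the double-check at each saved iteration.

def compare_interim(first: dict[int, str], double: dict[int, str]):
    """first/double map iteration -> 64-bit residue (hex string).
    Returns the earliest iteration where both runs reported and disagreed,
    or None if every shared checkpoint matches so far."""
    for it in sorted(set(first) & set(double)):
        if first[it] != double[it]:
            return it  # mismatch: release the exponent for a triple-check now
    return None

ll = {5_000_000: "ab12cd34ef56ab78", 10_000_000: "0011223344556677"}
dc = {5_000_000: "ab12cd34ef56ab78", 10_000_000: "deadbeefdeadbeef"}
print(compare_interim(ll, dc))  # 10000000 -> trigger the triple-check early
```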
There's another benefit, and it's the ability to spot bad machines faster. Right now we only know a result is bad when 3 checks (or more) have finished and we finally get a match that tells us which one was bad.

However, if we're able to compare residues along the way, we could potentially know which one is bad as early as 5% (or whatever interval the periodic residues are saved) into the 3rd run. If two of the three match at some interval, we can be almost certain the odd one out is the bad result.

That's based on my assumption that a match at any given iteration is going to be as certain as a match at the final iteration... i.e. if they match at the 10 millionth iteration, they're both doing great up to that point.

Knowing your own triple-check is on the right path and that one of the others definitely went astray will increase that user's confidence in their triple-check, and lets someone like me who searches for the bad systems identify them faster.

It would also help identify bad systems that never completed their check. I imagine such a system would save those partial residues even if it never finished. Let's say a system did a bunch of first time checks and also had a few that went 10-20% and were abandoned.

It may be years and years before anyone starts double-checking their first-time work, but if they abandoned some exponents before completion, those would be re-assigned as first-time work much sooner. If those new assignments show mismatches along the way, we have a good idea that machine was bad, and we can check its other first-time work well before double-checking would normally give us any notion that the system was wonky.

Madpoo 2015-12-28 19:31

[QUOTE=ATH;420369]The server should keep the 64 bit residue for every 5 or 10 million iterations.[/QUOTE]

FYI, I realized that I like that notion more than every xx% along the way... doing it every 5e6 or 10e6 iterations or whatever gives better fixed reference points.

As far as the coding, yes, it would mean client changes to send those partial residues as part of its normal communication, and some stuff on the server side.

Probably just a new table to hold the info... similar to the table that holds the final residue, with user/CPU info, exponent, maybe the assignment ID for tracking purposes, and shift count. Then new columns for "nth iteration" and "64-bit residue" for the actual meat of it.

There would be some back-end magic pixie dust to actually do something with that data... look for mismatches and make things available for a triple-check right away, or use it for analyzing possibly bad systems.

Not terribly complicated, but then I'm not a coder. :smile:
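Such a table might look like the sketch below, using SQLite purely for illustration (column names are assumptions, not the actual Primenet schema):

```python
import sqlite3

# Hypothetical schema for the interim-residue table described above.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE interim_residue (
        exponent      INTEGER NOT NULL,
        assignment_id TEXT,
        user_id       TEXT,
        cpu_id        TEXT,
        shift_count   INTEGER,
        iteration     INTEGER NOT NULL,   -- the "nth iteration" column
        res64         TEXT NOT NULL,      -- the 64-bit residue, as hex
        PRIMARY KEY (exponent, assignment_id, iteration)
    )
""")
con.execute(
    "INSERT INTO interim_residue VALUES (?,?,?,?,?,?,?)",
    (60371299, "AID-example", "user1", "cpu1", 12345, 5_000_000,
     "ab12cd34ef56ab78"),
)
# A back-end job could then look for iterations where two assignments
# of the same exponent disagree and flag them for a triple-check.
row = con.execute(
    "SELECT res64 FROM interim_residue WHERE iteration = 5000000"
).fetchone()
print(row[0])
```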

NBtarheel_33 2015-12-29 09:39

[QUOTE=Madpoo;420311]
[CODE]% Done Count Equivalent Full LL Tests GHz-days saved
0 116600 0 0
10 11806 4261 852,200
20 7164 6160 1,232,000
[B]30 5474 7091 1,418,200
40 4103 7265 1,453,000
50 3551 7030 1,406,000
[/B] 60 3084 6305 1,261,000
70 2749 5197 1,039,400
80 2565 3740 748,000
90 2107 1899 379,800
100 3 3 600[/CODE][/QUOTE]

The above assumes 200 GHz-days per LL test; current mainstream assignments are actually a little more expensive. The moral of the analysis looks to be that if this is worth implementing, the most "bang for the buck" (as measured by salvaged throughput to GIMPS) is achieved by collecting a single results file around 40% completion, or two or three results files between 30% and 50% completion.
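The "equivalent full LL tests" column follows directly from Madpoo's bin counts; a quick editorial sketch reproducing it (the 200 GHz-day figure is the thread's round number):

```python
# If interim files are uploaded at threshold p%, every abandoned test that
# reached at least p% can resume with p% of a full test already banked.

counts = {0: 116600, 10: 11806, 20: 7164, 30: 5474, 40: 4103,
          50: 3551, 60: 3084, 70: 2749, 80: 2565, 90: 2107, 100: 3}
GHZ_DAYS_PER_LL = 200  # the thread's round figure per LL test

for p in range(10, 101, 10):
    resumable = sum(n for pct, n in counts.items() if pct >= p)
    full_equiv = round(resumable * p / 100)
    print(p, resumable, full_equiv, full_equiv * GHZ_DAYS_PER_LL)
# first line printed: 10 42606 4261 852200
```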

airsquirrels 2015-12-29 13:11

[QUOTE=Madpoo;420378]FYI, I realized that I like that notion more than every xx% along the way... doing it every 5e6 or 10e6 or whatever gives better fixed reference points.[/QUOTE]

Would you also stop the machine doing the original DC or assign it a new item?

There are some other interesting ways this could be used. If the client were modified to keep a local full copy of its residue every Xe6 iterations (let's say we size this to roughly a day's worth of work), then the moment the server detected a mismatch, the original client could roll back and try again from a known-good point, while also alerting the user to a hardware problem. The client could remove old full checkpoints once they are verified as matched.

Alternatively, the offending machine could then upload its last good full iteration to PrimeNet so a triple-check wouldn't have to start from zero.

This would let even lightly misbehaving machines continue to contribute, and is especially useful for those that are unattended for a long time.

In the long run it also opens the possibility of simultaneous tests by different users. We are talking about ~210 GHz-day tests now, but to effectively work on, say, 100M-range exponents it would be really nice to run at half speed but catch (and correct) errors months early.

Mark Rose 2015-12-29 15:46

[QUOTE=airsquirrels;420420]Would you also stop the machine doing the original DC or assign it a new item?

There are some other interesting ways this could be used. If the client was modified to keep a local full copy of its residue every Xe6 (let's say we size this to roughly a day worth of work) as well the moment the server detected a mismatch the original client could roll back and try again from a known good point, while also alerting the user of a hardware problem. The client could remove old full checkpoints once verified as matched.
[/QUOTE]

If this is implemented, it would also be good if Prime95 stopped LL tests if a factor is found. I'm not sure if it does currently?

Madpoo 2015-12-30 01:02

[QUOTE=airsquirrels;420420]Would you also stop the machine doing the original DC or assign it a new item?[/QUOTE]

Hmm... well, there are two (or three) things that could happen. By way of refresher, I'm referring to a situation where a double-check in our theoretical system has a residue mismatch at, oh, let's say 20% just for example. Since there's a mismatch at that point with the first check, it's made available for a triple-check since we know it'll need one.

Things that could happen:[LIST=1][*]The machine doing the double-check finishes first, checks in its result and then just waits on the triple-check to figure out which one is correct.[*]The machine assigned the triple-check gets to that 20% where there was a mismatch noted, and it matches the residue of the first check. Should the double-checker go ahead and give up, knowing that it must have screwed up since the residue it had at 20% failed to match two other independent runs?[*]The machine assigned the triple-check gets to 20% and matches the DC at the same point. Looking good for the 2nd and 3rd tests, and we can go ahead and assume the 1st check is the bad one.[/LIST]
I personally like that notion of spotting the bad one much earlier in the process by comparing residues along the way.

And of course there's always the chance that the triple-checker won't match *either* of the first two and we'll need a quad+ check. It's rare but it definitely happens.

[QUOTE]There are some other interesting ways this could be used. If the client was modified to keep a local full copy of its residue every Xe6 (let's say we size this to roughly a day worth of work) as well the moment the server detected a mismatch the original client could roll back and try again from a known good point, while also alerting the user of a hardware problem. The client could remove old full checkpoints once verified as matched.[/QUOTE]

That is true... going to my example (and I'm using % done instead of 1M iterations just for whatever reason), let's say the mismatch occurred at 20%. The client could roll back to its 10% value and try again... if it arrived at the same residue it had before, then it'll just continue on with confidence it's on the right track. Or it may match the first check, so it knows it had some problem and might need to switch to a larger FFT or do something else for the rest of its run. Or it could come up with something else entirely, which again points to a problem with this current run since it's inconsistent.

As you suggested, if the client didn't have to roll back more than a day, that would be easiest on the resources. Rolling back a whopping 10% could be days or even weeks for some exponents/clients, so I just use that by way of example.

Since storing partial residues on the server side is (relatively) cheap, being only 64 bits (plus other junk) for each entry, it could be every 1M iterations, or 500K iterations, etc. The client would only need to save maybe the last 2 or 3 full interim files, just enough so that if it mismatched a previous run, it could go back to its last known match point and start again.

Since the client *already* saves 2 backup files at 30 minute intervals (by default) perhaps it's not too much to ask the clients to save an additional backup file or two, going back to the previous 1M point?

[QUOTE]Alternatively the offending machine could then upload it's last good full iteration to primenet so a triple check wouldn't have to start from zero.[/QUOTE]

True... we'd have to tell the double-checker to quit working on it at that point (if we've come to the conclusion that it's flaky because it keeps coming up with different residues), because the saved file will have a fixed shift-count. We can't have the original person and someone else who picks up from there *both* completing it because the shift-counts will be the same and can't be used as verification of each other.

[QUOTE]This would let even lightly misbehaving machines continue to contribute, and is especially useful for those that are unattended for a long time.

In the long run it also opens the possibilities of simultaneous tests by different users. We are talking about 210 +\- GhzDay tests now, but to effectively work on say 100M tests it would be really nice to run half-speed but catch (and correct) errors months early.[/QUOTE]

Right now, some users (looking at you LaurV) do this on their own... running two tests of the same exponent alongside each other on different machines, comparing residues at fixed interims along the way. It does help identify a potential problem midway through the run rather than only finding the problem at the end like we do now.

On large (like 100M digit) exponents, this makes even more sense when an LL test can take months or even years. I know I'd like to know much sooner whether my machine crapped out at some point and I could just roll back to the last time the residues matched (on both machines, since I wouldn't know which was bad) and start over from there. Much better than doing a full triple-check starting at zero.

For that to be effective, I think I hinted at the problem where LL and DC running alongside each other would mean *both* systems rolling back to the last time they matched, since it would be unknown which is bad. That could result in more lost time for the faster of the two machines... it might have to roll back over the last several iterations to get to where the slower machine is.

So to be really effective I guess we'd have to match machines with the same effective throughput.

But those are all "technical" problems... in theory it seems like a good idea? :smile: Now I'm just waiting for someone to come along and poke holes in it, but most of the objections will probably be towards implementation issues... I understand it wouldn't be trivial to code and implement, but if it's a good idea and would save time, it'd be worth it, I think.

airsquirrels 2015-12-30 02:54

Ok so here is what we have so far from my read, and some thoughts on how we could phase this in with some relatively small changes.

I. Prime95 client modified to support the following features:
1. Output interim result lines (partial residues) to results.txt that are uploaded to PrimeNet. I would say every million iterations would be a sane interval. This would be a simple line containing the exponent, the iteration it is at, and the 64-bit residue. 5 KB or so in total DB space per exponent. We should have that disk space/bandwidth.
2. Save a full local backup of the residue corresponding to each million iteration checkpoint.
- Erase these under the following circumstances
a. The corresponding exponent is completed
b. The exponent is aborted (see #4)
c. The number of saved checkpoints exceeds the newly added max checkpoints to save number. Remove the oldest.
d. The server "accepts" a partial residue and indicates it is safe to clear older checkpoints.
e. The server requests a rollback and all checkpoints newer than the rollback point are removed.
3. Accept a response from the server indicating a rollback point (via worktodo?)
a. Find the latest checkpoint <= that rollback point and restart from there, or from the beginning if necessary.
b. Cleanup older iterations.
c. Optionally log the reason for the restart/rollback
4. If not already supported, accept a response from the server indicating work on a given exponent should be aborted.
a. remove the exponent from the work queue.
b. Restart worker with next work unit
c. Cleanup checkpoints.
d. Optionally log the reason for abort.
5. Accept a response from the server requesting a given full checkpoint be uploaded.
a. Queue another thread or process to upload the checkpoint. This could simply be a worktodo entry?
b. Handle failure modes gracefully. (Retry? Just quit and let server request the next checkpoint?)
c. Compression? (There was a thread discussing that the checkpoints aren't very compressible - I'd love to empirically show that in practice it's worth the effort but I haven't tried yet and it may well not be. Alternatively I have a great compression method, where the data can simply be encoded as n^iteration mod exponent. Pretty CPU intensive to decompress on the server though :) )
6. Accept worktodo from the server that includes information indicating that a full checkpoint at a given iteration is available for download
a. Support attempting to download the full checkpoint before starting work.
b. Handle a reasonable number of retries before just starting from the beginning. Graceful failure modes..
7. Accept a response from the server that indicates checkpoints older than a given iteration are no longer needed.
a. Cleanup unless other settings call for the checkpoints to be preserved

That's it for the Prime95 client; those features could be added and lie dormant / be tested non-destructively. Only the incremental results would need to be ignored by the server. Someone can poke holes in opportunities for abuse, etc., but this would give us all the capabilities in the client while reserving all the "How should this behave" logic for the server, where it is easier to tune.
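The checkpoint bookkeeping in client items 2-3 might look like this editorial sketch (class and method names are hypothetical, not anything in Prime95):

```python
# Sketch of client-side checkpoint retention and rollback (items 2-3 above).

class CheckpointStore:
    def __init__(self, max_checkpoints: int = 3):
        self.max = max_checkpoints
        self.saved: list[int] = []  # iterations that have a full local backup

    def save(self, iteration: int) -> None:
        """Record a new full checkpoint; item 2c: prune the oldest."""
        self.saved.append(iteration)
        self.saved.sort()
        while len(self.saved) > self.max:
            self.saved.pop(0)

    def rollback_point(self, requested: int) -> int:
        """Item 3a: newest saved checkpoint at or before the requested
        iteration; 0 means restart from the beginning."""
        candidates = [it for it in self.saved if it <= requested]
        return max(candidates, default=0)

store = CheckpointStore(max_checkpoints=3)
for it in (1_000_000, 2_000_000, 3_000_000, 4_000_000):
    store.save(it)
print(store.saved)                      # [2000000, 3000000, 4000000]
print(store.rollback_point(3_500_000))  # 3000000
print(store.rollback_point(1_500_000))  # 0 (older checkpoints were pruned)
```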

II. The server would need the following modifications, which could be added in stages
1. Accept the incremental result residue lines.
a. Store these in a DB record 1-* with the assignment.
b. Award credit incrementally for the work since the last incremental result? it may be better not to do this at this point. See #3
c. If the class of work and server configuration warrants it, add assignment/send a response that requests the full iteration be uploaded.
d. If the work class is DC, or there are two tests running in parallel, and we have one or more residues for this iteration to compare: (Discussion needed here!)
i. If it matches another residue from another user, return an Accept response (Client #7) - this is the "keep going, all is good" state
ii. If it matches another residue from this user/assignment, return an Accept response (Client #7) for this user (rollback was good on DC). Queue a worktodo/response/message for any non-matching users to abort (if still active) and indicate a hardware/software error (Client #4), or possibly to rollback (if running simultaneously - Client #3). State chart needed to make sure we handle all corner cases/event combinations here. It's likely the second part of this is handled by iii below.
iii. If there are no matching residues and it mismatches at least one residue, return a client rollback response (#3) (we haven't matched our own; go back and see if we do), and queue a rollback response for the other assignment if it is still active as well?

2. Accept the upload of a full checkpoint as requested from Server #1c/Client #5
a. Store this in a DB record 1-* with the assignment.
b. Award partial credit at this time?

3. Support assigning work for which existing checkpoints are available from the server in a form handled by Client #6. Both LL and DC work should accept this.


I'm sure I am missing some pieces, but the framework of this doesn't seem overly arduous. Thoughts? This needs a review to ensure the credit system would remain intact, the chain of proof remains intact, and that there aren't new avenues for abuse.

Who here actually owns the responsibility for these components? I know George writes prime95, is he open to external help/patches for review? I have no idea who actually handles the server side of primenet.

Edit: Argggg, formatting / indentation lost. I will clean this up when I'm not on a mobile device.

Dubslow 2015-12-30 05:34

:direction:

Madpoo 2015-12-30 07:21

[QUOTE=airsquirrels;420476]Who here actually owns the responsibility for these components? I know George writes prime95, is he open to external help/patches for review? I have no idea who actually handles the server side of primenet.[/QUOTE]

George would be the go-to for Prime95 and on the Primenet side of things, there's George, James, and Scott. My own role seems to be to suggest wild ideas that other people would be responsible for implementing. :smile:

Of course I help out when/where I can... helping with any DB schema changes, sproc updates and what not. I've learned enough PHP to "skin" the website design and update things here and there like the milestone page, etc. The client/server communication parts of the site are something I try to avoid. Too scary for me.

For everything it does, the Primenet server is pretty straightforward, which is a good thing... made it easy for me to peek in and see the flow of things. As you'd imagine, a system that's evolved over the years has a lot of nooks and crannies though. With that said, grafting on new features doesn't need to be too difficult as long as backwards compatibility with older clients is maintained. Heck, it still accepts chatter from old v4 (Prime95 v24.x and earlier) clients, which have their own little code path just to handle them.

NBtarheel_33 2016-01-02 12:23

Taking a look at the old [URL=http://www.mersenne.org/report_classic/]"colorful stats report"[/URL]:[LIST][*]Less than 6,000,000 P90-years = 30,450,000 GHz-days remaining to having every Mersenne number under M79300000 tested at least once.[*]Only 146 double-checks (or factors found!) remain between exponents 30.15M and 35.1M before the row for those exponents can be collapsed into the row for 0-30.15M.[*]Only 8,600+ Mersenne numbers need to be factored to reach a total of 3,000,000 factored Mersenne numbers in the "classical" exponent range of 0-79.3M.[/LIST]

cuBerBruce 2016-01-14 03:29

What was the #1 first-LL exponent has been completed by an ANONYMOUS user with an expired assignment. User markr now has [url=http://www.mersenne.org/report_exponent/?exp_lo=60356927&full=1]M60356927[/url] as a double-check. The lowest first LL is now M60371299.

I note another user checked in results for two 60M-range and two 61M-range exponents all at the same time less than two days ago. All four of these were for expired assignments, leaving Patrik Johansson, Zr40, and vats09 with double-checks.

EDIT: Also...
[QUOTE] Countdown to double-checking all exponents below 35M: [size=3][b]10[/b][/size] (Estimated completion : 2016-02-01) [/QUOTE]

cuBerBruce 2016-01-14 16:19

[QUOTE]Countdown to first time checking all exponents below 61M: [color=red]10[/color] [color=green](Estimated completion : 2016-02-11)[/color][/QUOTE]

It will be down to 9 within 6 hours.

petrw1 2016-01-14 16:50

Countdown to double-checking all exponents below 35M: 10 (Estimated completion : 2016-02-01)
 
I have 2 on a machine with a failed hard drive.
I will make sure they get done somehow soon enough to not hold up that range.

Uncwilly 2016-01-19 19:22

Regarding the Milestones page: I was thinking about it as I fell asleep last night. It seems to be a bit hard-coded. What about a rules-based page? Snapshot for reference:
[CODE]All exponents below 34,969,871 have been tested and double-checked.
All exponents below 60,371,299 have been tested at least once.

Countdown to first time checking all exponents below 61M: 6 (Estimated completion : 2016-02-11)
Countdown to first time checking all exponents below 62M: 13 (Estimated completion : 2016-02-11)
Countdown to first time checking all exponents below 63M: 25 (Estimated completion : 2016-02-11)
Countdown to first time checking all exponents below 64M: 84 (Estimated completion : 2016-05-12)
Countdown to first time checking all exponents below 65M: 134 (Estimated completion : 2016-05-12)
Countdown to first time checking all exponents below 66M: 200 (Estimated completion : 2016-05-12)
Countdown to first time checking all exponents below 67M: 764 (2 still unassigned)
Countdown to first time checking all exponents below M(74207281): 81,863

Countdown to double-checking all exponents below 35M: 7 (Estimated completion : 2016-02-02)
Countdown to double-checking all exponents below 36M: 5,932 (5,201 still unassigned)

Countdown to proving M(37156667) is the 45th Mersenne Prime: 16,191
Countdown to proving M(42643801) is the 46th Mersenne Prime: 99,202
Countdown to proving M(43112609) is the 47th Mersenne Prime: 108,391
Countdown to proving M(57885161) is the 48th Mersenne Prime: 400,202
Countdown to proving M(74207281) is the 49th Mersenne Prime: 636,148[/CODE]
The "All exponents below" lines are good.

The "Countdown to first time checking all exponents below xXXM" lines should follow the following rules in order:[LIST=1][*]Display next 2 first time LL millions milestones.[*]Display any additional first time LL milestones with counts below 100.[*]Display any additional first time LL milestones where all exponents below it have been assigned.[*]Display any open first time milestones for at Mprimes up to largest known.[/LIST]The same rules for the double checks, except:
Rule 3 would need to check that there are no outstanding first LL assignments.
Rule 4 would apply to the "Countdown to proving" section.
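A rough sketch of how rules 1-3 above might drive the selection (hypothetical Python, nothing like the actual server code; rule 4 is omitted since it keys off the known Mersenne primes rather than the million boundaries):

```python
def pick_milestones(milestones):
    """Select which first-time-LL milestones to display.

    milestones: list of (threshold, remaining, unassigned) tuples,
    sorted ascending by threshold, for milestones not yet reached.
    Returns the thresholds to show, per the proposed rules:
      1. always show the next 2 milestones;
      2. also show any with fewer than 100 exponents remaining;
      3. also show any where every remaining exponent is assigned.
    """
    shown = []
    for i, (threshold, remaining, unassigned) in enumerate(milestones):
        if i < 2:                 # rule 1: next two milestones
            shown.append(threshold)
        elif remaining < 100:     # rule 2: nearly done
            shown.append(threshold)
        elif unassigned == 0:     # rule 3: fully assigned below it
            shown.append(threshold)
    return shown
```

For example, on the snapshot above this would show 61M-64M (the first two plus the two under 100 remaining) but hide 67M, which still has 2 unassigned.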

This would change the current page a little and could change over the course of a day, but it would save having to monkey with it all of the time. But if you make the query run once an hour at a set time, that would be fine.

:two cents:

cuBerBruce 2016-01-20 01:43

Countdown to first time checking all exponents below 61M: [size=5][b]5[/b][/size]

Madpoo 2016-01-20 03:26

[QUOTE=Uncwilly;423091]Regarding the Milestones page. I was thinking about the page as I fell asleep last night. It seems to be a bit hard coded. What about a rules based page?
...
The "All exponents below" lines are good.

The "Countdown to first time checking all exponents below xXXM" lines should follow the following rules in order:[LIST=1][*]Display next 2 first time LL millions milestones.[*]Display any additional first time LL milestones with counts below 100.[*]Display any additional first time LL milestones where all exponents below it have been assigned.[*]Display any open first time milestones for at Mprimes up to largest known.[/LIST]The same rules for the double checks, except:
Rule 3 would need to check that there are no outstanding first LL assignments.
Rule 4 would apply to the "Countdown to proving" section.

This would change the current page a little and could change over the course of a day, but it would save having to monkey with it all of the time. But if you make the query run once an hour at a set time, that would be fine.

:two cents:[/QUOTE]

Yeah, the way it's set up now, there's a SQL stored procedure that pulls the data and pops it onto the page. That same sproc will either use cached info or, if more than the default time has elapsed, re-run the queries to refresh the values. Kind of a convenient version of NoSQL, in a sense, which happens to use *actual* SQL to hold the data.
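The cache-or-refresh pattern described here can be sketched in a few lines (hypothetical Python; the real server does this inside a SQL stored procedure, and the names here are made up):

```python
import time

_cache = {}  # name -> (timestamp, value)

def get_milestones(name, compute, max_age=3600, now=time.time):
    """Serve cached milestone data, recomputing only when stale.

    Mirrors the described sproc behaviour: return the cached value if
    it is younger than max_age seconds, otherwise re-run the expensive
    query (`compute`) and cache the fresh result.
    """
    entry = _cache.get(name)
    if entry is not None and now() - entry[0] < max_age:
        return entry[1]
    value = compute()
    _cache[name] = (now(), value)
    return value
```

The design choice is the usual one for expensive reporting queries: readers always get an answer quickly, and the heavy query runs at most once per refresh window.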

It's actually set up to be kind of flexible... the text itself and the data it shows can be customized for various things; it's just limited by the type of query I can write. I actually have it create countdown data up to 70M, but going past 67M right now is kind of useless... it's there for later, though.

I did just add the countdown to first-time checking below M49 since it's kind of relevant and fresh, and it shows just how many possibilities there are for an unknown prime below that one.

Once we start clearing out this 61M-64M stuff it won't be quite so cluttered. As usual there are some foot-draggers in there, but they'll go away eventually.

At least this system lets me add other interesting milestones if/when the mood strikes me, and it'll be cached along with the rest.

cuBerBruce 2016-01-22 13:23

[QUOTE]Countdown to double-checking all exponents below 35M: [color=red]5[/color] [color=green](Estimated completion : 2016-02-04)[/color][/QUOTE]

A couple of poachings (apparently by 2 different users) of stuck assignments, and the countdown is down to 5.

The smallest exponent was recently recycled, and there is a race between the prior assignee and the new assignee to see who finishes it first. The next 3 will almost certainly be recycled as well, even though the current assignees will likely finish them very soon after they expire.

The last one is also about to be recycled. That one is a little farther from being finished (in terms of ETA) than the others.

petrw1 2016-01-22 18:54

My 2 will complete later tomorrow (Jan 23)

.. no poacheeee pleeeeease

Madpoo 2016-01-22 19:23

[QUOTE=petrw1;423562]My 2 will complete later tomorrow (Jan 23)

.. no poacheeee pleeeeease[/QUOTE]

Better get the lead out. These assignments expire after 60 days. They were assigned 2015-11-23, and today is the 60th day since then. That means they'll probably expire when the server does its nightly maintenance at midnight UTC.

If you don't think they'll finish in time, PM me and you can email me the save files... I'll finish them. At ~75% done I'd have them done in a couple hours.
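The expiry arithmetic above is easy to verify (a quick sketch; the 60-day limit is as stated in the post, and this is plain date math rather than anything the server actually runs):

```python
from datetime import date, timedelta

def expiry_date(assigned, days=60):
    """An assignment handed out on `assigned` expires `days` days later."""
    return assigned + timedelta(days=days)

# Assigned 2015-11-23, so the 60-day limit lands on 2016-01-22,
# i.e. the very night this post was written.
deadline = expiry_date(date(2015, 11, 23))
```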

petrw1 2016-01-22 21:32

[QUOTE=Madpoo;423575]Better get the lead out. These assignments will expire after 60 days. They were assigned 2015-11-23 and today is the 60th day since then. That means they'll probably expire when the server does it's nightly maintenance at midnight, UTC.

If you don't think they'll finish in time, PM me and you can email me the save files... I'll finish them. At ~75% done I'd have them done in a couple hours.[/QUOTE]

Darn... I only get home about 23:30 UTC; by then I will be at 85%. If you have a way of "HOLDING" them for me... or not letting someone else grab them. Mine will finish tomorrow afternoon, and whoever grabs them tonight will get a less valuable triple check.

Mark Rose 2016-01-22 21:45

Isn't there a small percentage of leeway in the expiry? Or was that removed?

Madpoo 2016-01-22 22:00

[QUOTE=Mark Rose;423616]Isn't there a small percent leeway in the expiry? Or was the removed?[/QUOTE]

I don't think so. I'm pretty sure it's based just on the time since assigned (and what category it's in). The only thing that took the % done into consideration was for grandfathered assignments.

I could be wrong, but that was my understanding.

I'm trying to think if there's anything I could do to keep those 2 assignments from expiring. I mean, obviously there are things I could do... as simple as manually rolling forward the assignment date by a day to give you one more day to finish.

I'd hate to tinker with things like that though, if there were any other option. It seems like a kind of hacky/kludgy workaround.

Hmm... another option could be to create a new assignment for it manually. Obviously you can't normally get an assignment on an exponent that's still assigned, but I'm just talking about creating the assignment directly in the DB. I could assign those two to myself manually, so that when your assignments do expire, there would be newer assignments (to me) for them, and they won't get reassigned to some random person.

Then when yours check in later (even though expired), the assignments I have for them would automatically expire, and bam, done, problem solved.

Again, that's a little hacky, but it doesn't involve me tinkering with your assignment in any way, it's more of a workaround to make sure they don't get assigned to someone else.
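For what it's worth, the "shadow assignment" workaround described above can be modelled as a toy sketch (hypothetical Python; integer day numbers instead of real dates, and nothing like the actual Primenet schema):

```python
EXPIRY_DAYS = 60  # per the thread: assignments expire after 60 days

def holder(assignments, today):
    """Toy model: who currently holds an exponent?

    assignments: list of (user, assigned_on_day) tuples for one
    exponent. An assignment is live until EXPIRY_DAYS after it was
    handed out, and the newest live assignment is the one that counts.
    Returns the holding user, or None if the exponent is free to be
    recycled to someone else.
    """
    live = [(assigned_on, user) for user, assigned_on in assignments
            if today - assigned_on < EXPIRY_DAYS]
    return max(live)[1] if live else None
```

The point of the trick is visible in the model: once the shadow assignment exists, the original assignment expiring no longer frees the exponent for random reassignment.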

I've never done something like that... I'm not sure what will happen if there are two active assignments for the same exponent. There could be queries out there that only expect one result and could throw errors if more than a single entry is returned.

That's my only idea though... and with only 2 hours until midnight UTC it's probably the best I'll come up with.

I'll give that a shot and if it breaks any website reporting as a result, at least it'll only be for a couple hours. And it'll be interesting to find out anyway.

Dubslow 2016-01-22 22:03

[QUOTE=Madpoo;423618]
I've never done something like that... I'm not sure what will happen if there are two active assignments for the same exponent. There could be queries out there that only expect one result and could throw errors if more than a single entry is returned.
[/QUOTE]

That's one reason why, in my coding, I typically try to avoid relying on even the most reasonable-sounding assumption in the world -- I never know when someone in the future (such as myself) is going to come along and break things I never thought could be broken :smile:

[url]http://thecodelesscode.com/case/116[/url]

Madpoo 2016-01-22 22:04

[QUOTE=Madpoo;423618],..
I'll give that a shot and if it breaks any website reporting as a result, at least it'll only be for a couple hours. And it'll be interesting to find out anyway.[/QUOTE]

Well, it doesn't break this report, but it sure looks funny.

[URL="http://www.mersenne.org/assignments/?exp_lo=34969871&exp_hi=35000000&execm=1&exp1=1&extf=1&exfirst=1"]http://www.mersenne.org/assignments/?exp_lo=34969871&exp_hi=35000000&execm=1&exp1=1&extf=1&exfirst=1[/URL]

So, that should work. Your assignments should still expire in a couple more hours, but I have "newer" assignments for them and I'll just sit on those. Turn yours in when done and my new assignments will expire at that time.

Still seems wrong and weird to do it this way, but I understand your situation, and you only have one more day to go; it'd be a shame to have someone else pick those up and start work, only for your results to come in a few hours later. :smile:

chalsall 2016-01-22 22:26

[QUOTE=petrw1;423610]Darn....i only get home about 23:30 UTC; by then I will be at 85%. If you have a way of "HOLDING" them for me ... or not let someone else grab them. Mine will finish tomorrow afternoon and whoever grabs them tonight will get a less valuable Triple Check.[/QUOTE]

"You've got to know when to hold 'em
Know when to fold 'em
Know when to walk away
Know when to run
You never count your money
When you're sittin' at the table
There'll be time enough for countin'
When the dealin's done"

:smile:

Prime95 2016-01-22 22:32

[QUOTE=Madpoo;423618]
I'm trying to think if there's anything I could do to keep those 2 assignments from expiring. I mean, obviously there are things I could do... as simple as manually rolling forward the assignment date by a day to give you one more day to finish.
.[/QUOTE]

Try deleting the 2 rows from the assignments table

Madpoo 2016-01-22 23:55

[QUOTE=Prime95;423632]Try deleting the 2 rows from the assignments table[/QUOTE]

Hmm... I guess that would have worked. Then when his machine turned them in, they would have come in as "unassigned" because no matching assignment ID would have been found.

Then again if he ever forgot to turn in the results after all, they would never get reassigned until the next maintenance to catch situations like that (exponents in "limbo" where they're not assigned but also not available to be assigned).

Well, hopefully my solution of creating new assignments ahead of time isn't a terrible one. :smile:

Madpoo 2016-01-23 06:39

[QUOTE=Madpoo;423648]Hmm... I guess that would have worked. Then when his machine turned them in, they would have come in as "unassigned" because no matching assignment ID would have been found.

Then again if he ever forgot to turn in the results after all, they would never get reassigned until the next maintenance to catch situations like that (exponents in "limbo" where they're not assigned but also not available to be assigned).

Well, hopefully my solution of creating new assignments ahead of time isn't a terrible one. :smile:[/QUOTE]

As expected, those assignments expired and now I have the only ones. As soon as petrw1 turns his in, mine will expire. :smile:

Dubslow 2016-01-23 12:04

Madpoo, could you track and save the milestone reports from each hour? They would make very pretty graphs. (Yes, it wouldn't be terribly hard for someone else to do it, but it should still be substantially easier for you...)

cuBerBruce 2016-01-23 14:50

Countdown to double-checking all exponents below 35M: [color=red][size=5][b]3[/b][/size][/color]

Wayne's DC's finished. Both successful double-checks.

ForResearch's will finish soon as well.

chalsall 2016-01-23 15:21

[QUOTE=cuBerBruce;423727]ForResearch's will finish soon as well.[/QUOTE]

LOL... I didn't even realize I had one... It just completed successfully.

Aaron, it looks like 34980299 is going to expire tonight. Any chance you could grab it, and either do it yourself or let me? I could have it done in 24 hours.

Uncwilly 2016-01-23 16:00

[QUOTE=Dubslow;423719]Madpoo, could you track and save the milestone reports from each hour? They would make very pretty graphs. (Yes, it wouldn't be terribly hard for someone else to do it, but it should still be substantially easier for you...)[/QUOTE]
Which parts would you want graphed? For many things weekly is enough.

petrw1 2016-01-23 16:15

AND done.....
 
...

ATH 2016-01-23 16:56

[QUOTE=chalsall;423731]Aaron, it looks like 34980299 is going to expire tonight. Any chance you could grab it, and either do it yourself or let me? I could have it done in 24 hours.[/QUOTE]

I could probably do it in like 13 hours, but Aaron could probably do it in 7-9 hours himself.

cuBerBruce 2016-01-23 16:57

[QUOTE=chalsall;423731]Aaron, it looks like 34980299 is going to expire tonight. Any chance you could grab it, and either do it yourself or let me? I could have it done in 24 hours.[/QUOTE]

I have also been looking at trying to grab it, but my ETA would be around 35 hours (assuming no loss of power from the storm). I could still attempt to get it, or let someone with a faster machine do it.

I note that while M34973683 appears to have an ETA of 8 days, I believe the previous assignee will finish it sooner.

chalsall 2016-01-23 17:04

[QUOTE=ATH;423755]I could probably do it in like 13 hours, but Aaron could probably do it in 7-9 hours himself.[/QUOTE]

I don't really care who does it. I just don't want it being assigned to someone who won't complete it in a timely manner (or, gods forbid, doesn't complete it in the required 60 days and it gets recycled again).

ATH 2016-01-23 17:11

[QUOTE=chalsall;423760]I don't really care who does it. I just don't want it being assigned to someone who won't complete it in a timely manner (or, gods forbid, doesn't complete it in the required 60 days and it gets recycled again).[/QUOTE]

Too late, it got recycled here at the top of the hour.

petrw1 2016-01-23 17:16

[QUOTE=ATH;423762]Too late, it got recycled here at the top of the hour.[/QUOTE]

not ....299

chalsall 2016-01-23 17:16

[QUOTE=ATH;423762]Too late, it got recycled here at the top of the hour.[/QUOTE]

That was 34973683. I'm talking about 34980299.

cuBerBruce 2016-01-23 17:24

[QUOTE=chalsall;423765]That was 34973683. I'm talking about 34980299.[/QUOTE]

And 34973683 was recycled over half a day ago.

chalsall 2016-01-23 17:30

[QUOTE=cuBerBruce;423767]And 34973683 was recycled over half a day ago.[/QUOTE]

Yes. At ~00:00 UTC.

What part of [URL="http://www.mersenne.org/assignments/?exp_lo=34000000&exp_hi=35000000"]this report[/URL] isn't clear? I'm talking about 34980299, which is due to expire tonight.

cuBerBruce 2016-01-23 17:48

[QUOTE=chalsall;423768]What part of [URL="http://www.mersenne.org/assignments/?exp_lo=34000000&exp_hi=35000000"]this report[/URL] isn't clear? I'm talking about 34980299, which is due to expire tonight.[/QUOTE]

I certainly was not confused about what you were talking about. I only mentioned the other exponent in my post #2119 to point out that we will likely reach the milestone in less than 8 days.

Dubslow 2016-01-23 21:25

[QUOTE=Uncwilly;423738]Which parts would you want graphed? For many things weekly is enough.[/QUOTE]

I think most or all of the numbers would make fine graphs. And yes, for most of them weekly would be enough, but why throw away the extra data if we already have it?

Madpoo 2016-01-24 00:07

[QUOTE=Dubslow;423812]I think most or all of the numbers would make fine graphs. And yes, for most of them weekly would be enough, but why throw away the extra data if we already have it?[/QUOTE]

I don't know exactly what you'd want graphed... Like a graph showing a downward trending line representing the countdown to when everything < 35M will be done? I think it'd be boring. LOL It's probably the kind of data that could easily be charted after the fact... just a count of how many remaining DCs there are for any given point in time below some threshold... but again I fear it'd be a boring mostly straight line ending at zero on some date.

I have been working on a fun little set of data today. Took me a while to get some of the data cobbled together.

The count of how many first-time versus double-check results were checked in each day... it's a really... interesting... query to figure that out. I have to join the LL results for a particular exponent back onto themselves and then see whether any previous results had been turned in or not. Took a while to figure out how to make that fast enough that I could go back to January 1, 2010, which I chose as my starting point.
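The classification logic behind that query can be sketched outside of SQL (hypothetical Python analogue of the self-join described above, with made-up data shapes):

```python
from collections import Counter

def classify_daily(results):
    """Count first-time vs double-check LL results per day.

    results: list of (exponent, day) pairs, day as a sortable string.
    A result is "first-time" if it is the earliest result on record
    for its exponent; every later result for the same exponent counts
    as a double-check. Returns a Counter keyed by (day, kind).
    """
    # Find the earliest result date for each exponent
    first_seen = {}
    for exponent, day in sorted(results, key=lambda r: r[1]):
        first_seen.setdefault(exponent, day)

    counts = Counter()
    for exponent, day in results:
        kind = "first" if first_seen[exponent] == day else "dc"
        counts[(day, kind)] += 1
    return counts
```

In SQL the same idea becomes a self-join (or window function) that asks, per result row, "does an earlier result exist for this exponent?", which is why making it fast over six years of data took some effort.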

Anyway, the end result is that I should have a good starting point: how many results of each kind are turned in each day, the # of assignments handed out per category each day (no historical data for that one... just from today onwards), and the # of new users & computers.

I have it mostly ready to go, just need to schedule it as a daily job and then figure out what to do with this data.

Madpoo 2016-01-24 00:11

[QUOTE=chalsall;423768]Yes. At ~00:00 UTC.

What part of [URL="http://www.mersenne.org/assignments/?exp_lo=34000000&exp_hi=35000000"]this report[/URL] isn't clear? I'm talking about 34980299, which is due to expire tonight.[/QUOTE]

I didn't see any of these discussions about the other exponents until just now, after the daily expirations have run.

At any rate, there are only 2 left, both assigned on Jan 23 with ETAs in about a week for both of them. Fingers crossed they actually mean it.

It does remind me that I should get with George and discuss looking at the track record of CPUs that have the option ticked to get priority assignments. If they have that box checked but have done a lousy job of turning in assignments in a timely fashion, I vote to uncheck that box for them.

In other words, it's kind of lame when a cat 1 assignment goes out to someone who regularly takes 3+ months to finish, or has a lot of expired stuff in their past.

The system already takes some of that into account but I'm not sure how in depth it is and I'd rather just take that option off the table entirely for those unreliable machines.

Chuck 2016-01-24 00:34

[QUOTE=Madpoo;423839]It does remind me that I should get with George and discuss looking at the track record of CPUs that have the option ticked to get priority assignments. If they have that box checked but have done a lousy job of turning in assignments in a timely fashion, I vote to uncheck that box for them.[/QUOTE]

I definitely vote for that plan. Frustrating to see all those red expiration warnings for work that should have been completed quickly.

LaurV 2016-01-24 04:25

[QUOTE=Madpoo;423839]It does remind me that I should get with George and discuss looking at the track record of CPUs that have the option ticked to get priority assignments. If they have that box checked but have done a lousy job of turning in assignments in a timely fashion, I vote to uncheck that box for them.
[/QUOTE]
+1. We would also vote for it. In the end, it's all about commitment. If I can't do it, I say I can't do it. Or I don't take it.

Uncwilly 2016-01-24 04:36

Maybe when we get within, say, 20 or 30 of a milestone, a new rule kicks in: whenever any of those last few get recycled, they automagically get assigned to Aaron or Chris. Even better, when this happens, have an e-mail go out to them letting them know that the exponent is theirs.

LaurV 2016-01-24 05:06

Haha, well... I won't go so far... but that [U]is[/U] an idea, too. :smile:
Everyone should have a chance to find that missed prime... You want these two guys to find them all? :razz:

Dubslow 2016-01-24 07:09

[QUOTE=Uncwilly;423851]Maybe when we get within say 20 or 30 of a milestone, a new rule kicks in. Whenever any of those last few get recycled, it automagiaclly will get assigned to Aaron or Chris. Even better, when this happens have an e-mail go out to them letting them know that the exponent is theirs.[/QUOTE]

[QUOTE=LaurV;423852]Haha, well... I won't go so far... but that [U]is[/U] an idea, too. :smile:
Everyone should have a chance to find that missed prime... You want these to guys two find them all? :razz:[/QUOTE]

I agree that the first idea here is quite overkill, but how about Category 0 with a 5-10 day return time instead of 60? Or maybe a 15-day return time, since on my box the most efficient use results in roughly 10-11 day tests at the current Cat 1 DC wavefront.


All times are UTC.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.