mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Five or Bust - The Dual Sierpinski Problem (https://www.mersenneforum.org/forumdisplay.php?f=86)

 philmoore 2008-10-10 18:34

Use this thread for issues concerning probable prime testing. The other thread is for information and reservations only.

 em99010pepe 2008-10-15 00:45

How do I run PRP with LLR?

 Mini-Geek 2008-10-15 01:21

[quote=em99010pepe;145422]How do I run PRP with LLR?[/quote]
The exact same way as an LLR test, but with a base that's not 2 (or a power of 2).

 em99010pepe 2008-10-15 01:26

[quote=Mini-Geek;145425]The exact same way as an LLR test, but with a base that's not 2 (or a power of 2).[/quote]

What changes do I have to make on the input file (just curious)?
(Meanwhile I already managed to test Prime95)

 Mini-Geek 2008-10-15 01:42

[quote=em99010pepe;145426]What changes do I have to make on the input file (just curious)?
(Meanwhile I already managed to test Prime95)[/quote]
No specific changes, just that a base has to be specified besides 2.
e.g. (base in bold):[quote]175000000000:P:1:[B]3[/B]:257
31001156 86001
39809884 86001
44249222 86001
8603464 86002
31881438 86003
33666398 86003
40499588 86003
47214478 86003
20305126 86004
21497746 86004
7111766 86005
30440162 86005
25521872 86006
[/quote]

 mdettweiler 2008-10-15 16:56

[quote=em99010pepe]How do I run PRP with LLR?[/quote]Since these candidates aren't in a traditional k*b^n±1 format, you'd need to use ABC format in an LLR input file instead of NewPGen format. Like this:

[FONT=Courier New]ABC 1*2^$a+$b
1400008 40291
1400087 28433
1400104 40291[/FONT]

(generated using the Prime95 worktodo.txt lines from the PRP Testing thread as a basis--somebody correct me if my above example is in error)

Probably an easier way, though, would be to just run them with Prime95/mprime v25 as Phil suggests. As far as I know, Prime95/mprime v25 uses the same underlying PRP code as LLR, so the speed should be the same. Unfortunately, there's probably no way to run these through LLRnet at this time, since LLRnet only deals in the traditional pairs of k/n values used in NewPGen format files.

Max :smile:

 philmoore 2008-10-15 18:53

Prime95 uses base 3 for PRP tests, so if you use LLR, be sure to use the same base. You could try testing 2^1399919+75353 just to check that LLR and Prime95 are reporting the same residues.
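What the base-3 PRP test actually computes can be sketched in a few lines (a toy illustration with small stand-in numbers; the real programs perform the same Fermat test with FFT-based multiplication on million-bit moduli, and the residues traded in this thread are, roughly speaking, the low 64 bits of the final value):

```python
# A base-3 Fermat PRP test in miniature: n is a probable prime to base 3
# if 3^(n-1) = 1 (mod n). The candidates below are small stand-ins; real
# tests on 2^n+k use FFT multiplication for million-bit moduli.
def is_prp(n, base=3):
    return pow(base, n - 1, n) == 1

def res64(n, base=3):
    # Roughly what the programs report for a composite: the low 64 bits
    # of the final value, as 16 hex digits.
    return format(pow(base, n - 1, n) & (2**64 - 1), '016X')

print(is_prp(10007))   # 10007 is prime -> True
print(is_prp(10001))   # 10001 = 73*137 -> False
```

Since Prime95 and LLR both fix the PRP base at 3, residues from the two programs are directly comparable.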

 mdettweiler 2008-10-16 03:07

[quote=philmoore;145472]Prime95 uses base 3 for PRP tests, so if you use LLR, be sure to use the same base. You could try testing 2^1399919+75353 just to check that LLR and Prime95 are reporting the same residues.[/quote]
I think what Mini-Geek meant by "base" was the value of [i]b[/i] in k*b^n±c, not to be confused with the PRP base (and as for that, I know for a fact that LLR uses PRP base 3; in fact, that can't even be changed in the settings). As far as I know, there is no difference between the core PRP code of LLR and that of Prime95 v25, so they should produce compatible residues (albeit in slightly differently formatted output files; a simple Perl or bash script could probably convert one to the other quite easily). :smile:

 mdettweiler 2008-10-17 02:13

[quote=philmoore;144995]... and at that value of n, I am finding that each PRP test takes about 2 hours on one processor of my 3000 MHz Pentium D machine. I will split the range from 1.4 million to 2.1 million up into ranges of 10,000. Each range has about 160 candidates, so it would take about 2 weeks on a single processor of my Pentium D, or 1 week if I split it between two processors. I assume that a Core 2 Duo would of course be a little faster.[/quote]
Hi all,

I've noticed that in my 1.41M-1.42M range, the tests are taking almost exactly 2 hours apiece on one core of a Core 2 Duo E4500 (2.2 GHz). Does anyone have any idea why they're taking this long, given that numbers of about that size took exactly the same amount of time on Phil's Pentium D (definitely a slower CPU than mine)? Was there a massive FFT length change right before my range, or something like that?

Thanks,
Max :smile:

P.S.: Would this go better in the PRP Discussion thread? If so, please feel free to move it. :smile:

 philmoore 2008-10-17 02:21

I think that my last tests were at a 160K FFT size; is that what your tests are at?

 mdettweiler 2008-10-17 02:35

[quote=philmoore;145625]I think that my last tests were at 160k FFT size, is that what your tests are at?[/quote]
Yep, 160K. That rules out the possibility of an FFT jump--any other ideas as to what could be causing this?

Maybe my CPU's just getting clogged with dust and running slowly because of that? (Then again, I have a little gadget on my taskbar that reads out the current CPU frequency for each core, and both are manually set to 2.20 GHz, which is confirmed by the gadget's readout.)

 mdettweiler 2008-10-17 03:28

[quote=mdettweiler;145628]Yep, 160K. That rules out the possibility of an FFT jump--any other ideas as to what could be causing this?

Maybe my CPU's just getting clogged with dust and it's running slowly because of that? (Then again, I have a little gadget on my taskbar that reads out the current CPU frequency for each core, and they're both already manually set to 2.20Ghz, which is confirmed by the gadget's readout.)[/quote]
Update: I just looked at my iteration times with mprime v25.6 on a 17M Mersenne number (in the 896K FFT part of that range) that I'm currently doublechecking for GIMPS, and compared them with the ratings for similar CPUs on the GIMPS Benchmarks reference page. I was getting very close to 0.030 seconds per iteration, whereas the best times listed for CPUs similar to mine (they didn't have my model, an Intel E4500, listed) ranged from 0.0214 to 0.0253 seconds per iteration at the same FFT level.

Long story short: it seems to be quite a sure thing now that my CPU isn't running quite as fast as it should be, which would (at least partially) explain its odd slowness on my Five or Bust range. I'll look into the problem more within the next day or so; the only thing I can think of is dust contamination, since I've seen that slow down my previous CPU (a P4 Prescott 3.2GHz) a lot (though admittedly I wasn't watching the frequency scaling on that one).

Maybe the Ubuntu CPU Frequency Scaling Monitor doesn't detect heat-related scaling, only idle-related scaling? (I always leave the CPU manually set to "Performance" mode to correct for a "feature" in some versions of Ubuntu that causes it to still consider the CPU as idle when lowest-priority apps like prime search apps are running.)

 paleseptember 2008-10-24 08:09

For comparison (if this helps), 1420007 is taking 0.005s/iteration on a P4 3.0GHz running 32-bit Windows XP. That works out to about 2 hours per test.
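Those figures check out, assuming (as elsewhere in the thread) roughly one squaring iteration per bit of the exponent:

```python
# Sanity check of the quoted timing: a PRP test of 2^1420007+k takes about
# one modular squaring per bit of the exponent, i.e. ~1,420,007 iterations.
iterations = 1420007
sec_per_iter = 0.005
hours = iterations * sec_per_iter / 3600
print(f"{hours:.2f} hours per test")  # prints "1.97 hours per test"
```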

 jrk 2008-11-12 21:24

Would it be worthwhile to do P-1 on these candidates before doing PRP?

 philmoore 2008-11-12 21:52

I did a P-1 run on several hundred candidates and found far too few factors to be worth the cost. On the other hand, as candidates grow in size, I expect that a point will come where P-1 factoring will pay off. Much also depends upon the sieving depth, as well.

 paleseptember 2008-12-22 05:00

I've hit the change from a 192K FFT length to 224K somewhere in the 1.97M file, and have slowed from 0.004s to 0.005s per iteration (Q6700, Windows XP 32-bit, 3.5GB RAM (apparently)). Of course, at one significant figure those numbers are pretty rough :)

It would be nice if we could find a probable prime this year! *lets the machines keep crunching away*

 paleseptember 2009-02-09 05:52

Taking 2.70-2.74M (four files, work computer)

Ladida (Up to the 320K FFT length now. The slowdown from 224K (I seem to have missed 256K altogether) on my Q6700 is from 0.005 to 0.007s/iteration. Still better than my Q6600 at home, which is clocking 0.010s/iteration at a 256K FFT. There's something wrong with it, and I can't work out what!)

 mdettweiler 2009-02-09 16:57

[quote=paleseptember;162162]Still better than my Q6600 at home which is clocking 0.010s/iteration at 256K FFT. There's something wrong with it, and I can't work out what!)[/quote]
Have you tried cleaning out the CPU fan and heatsink? If dust accumulates there it can slow a computer down a *lot*. I used a typical household canister vacuum to clean mine on a number of occasions and each time it's gotten its second wind back. :smile:

 Batalov 2009-02-09 18:54

Are you using [URL="http://www.alcpu.com/CoreTemp/"]Core Temp[/URL]? It's a good indicator when the cleaning is needed.
My home Q6600 runs at 0.004 sec/iter @ 256K.

I've found that at least twice a year you may even want to take the tower (or similar device) apart, because it fills up with dust invisibly from the outside; you may even lose the fan motor, and then all hell will break loose.

External dusting of the heatsink is good at least every month, surely.

___________
[SIZE=1]When a runner finds his second breath, he emits his [/SIZE][URL="http://www.secondwindairpurifier.com/"][SIZE=1]second wind[/SIZE][/URL][SIZE=1]?[/SIZE]

 paleseptember 2009-02-09 21:53

Ta Batalov, mdettweiler, for the advice. I'll pull my box apart this weekend and delicately attempt some cleaning. I'll also try that program you recommended Batalov, it looks helpful.

What OS are you using for PRP? I'm on WinXPPro 32-bit with 2GB of RAM. I've left all the options for prime95 as standard. Should I be bumping the memory allocation?

My computer knowledge is pretty woeful I'm afraid. (As is my number theory knowledge if previous conversations with Phil are to go on :smile:)

 engracio 2009-02-09 22:26

Ben,

Definitely download the Core Temp proggie and have it monitor the cpu temp. Most dc'ers are also overclockers. All of them know that the number one enemy of the cpu is overheating. Cleaning out the inside of the box is a big step in lowering the cpu temp. Knowing the temp is a must. Run Core Temp as soon as you download it so that you know your baseline.

On this box I have a q6600 running all 4 cores at 100%, plus two gpu2 video cards folding. Those 2 video cards produce lots of heat. The highest cpu temp I feel comfortable with is 70C, which is where it is at this time. To some that is too high; to others it's acceptable.

One possible reason your prp time is .010 instead of .007 or .006 is that the mobo thinks it is overheating and is stepping down the cpu. Let us know your cpu temp.

 Batalov 2009-02-10 07:09

1 Attachment(s)
[quote=paleseptember;162162]...my Q6600 at home which is clocking 0.010s/iteration at 256K FFT. There's something wrong with it, and I can't work out what!)[/quote]

Aaah. Now I get what is most wrong. It's the height of summer down there! It is hot [sarcasm] ...even in our coastal little town in August. [/sarcasm] (In truth, not really. Probably only Hawaii is milder than here. But still, a systematic 10°C higher summer temp brings the hardworking comp to a point of confusion.)

Same (more, really) for you in February. You do have to be gentle to the comp in summer. Maybe lower the FSB by 5%; let it live a little. Better than throttling! Or get a Tuniq Tower (or something like it), and you will shave 10-15°C off that core temperature.

P.S. Ah yes, btw, to your question: My home system is a WinXP Pro, too. The CoreTemp is very nicely designed and doesn't take space (sits in the corner), and it logs, too. (E.g. you can set it to log once a minute... Later, you may plot the temps with something like Excel and see the trend). (below)

 paleseptember 2009-02-10 09:31

Ahhhh, I'm getting crazy temperatures under full load. Like >95C on all four cores. At rest they drop back down to 70C.

I'm going to have to clean the case asap, and maybe consider a better CPU heatsink/fan thing.

But, as I currently have a cold/flu/death/thing, and the five minutes spent downloading, installing, and watching the temps on the programme is the longest I've been out of bed in the past eight hours, that is a thought for tomorrow, when I'm lucid and actually able to string two thoughts together in a row. And type properly.

 Jeff Gilchrist 2009-02-10 16:18

[QUOTE=paleseptember;162323]Ahhhh, I'm getting crazy temperatures under full load. Like >95C on all four cores. At rest they drop back down to 70C.
[/QUOTE]

That is definitely bad and will probably shorten the life of your CPUs. You can also try downloading [URL="http://www.techpowerup.com/realtemp/"]Real Temp[/URL], which measures a little differently than Core Temp; it's a good way to compare and make sure the readings are at least similar. Real Temp usually measures a little cooler but should be close.

PS: I hope you get over your death cold soon.

 philmoore 2009-02-23 17:30

Good news - I double-checked 21 of Ben's results from February 9th and 10th when he reported throttling due to overheating, and all 21 of my residues agree with his.

 paleseptember 2009-03-04 00:57

Somewhere during the 3.26M file we've hit the transition from FFT length 320K to 384K. I've taken a hit from 0.007s to 0.009s per iteration. Ouch.

Ah well, onwards we march :)

 paleseptember 2009-03-04 09:38

Go engracio go! \o\ \o| |o| |o/ /o/

(I know full well that this will be deleted within a day or so, but every bit of encouragement helps, right?)

 engracio 2009-03-04 14:32

[quote=paleseptember;164558]Go engracio go! \o\ \o| |o| |o/ /o/

(I know full well that this will be deleted within a day or so, but every bit of encouragement helps, right?)[/quote]

Thanks Ben:smile:

Can't wait to be doing this again:groupwave::bow wave::bounce wave:

 philmoore 2009-03-04 17:45

Ben, you probably took about a 20% hit, but rounding makes it look a little worse (0.007 to 0.009). Engracio's last reservation takes us into the million-digit range, and we are close to completing everything up to 900,000 digits! I'm hoping we can find another probable prime soon and speed up the progress. I just uploaded more work files.
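The 20% figure follows directly from the FFT lengths; a rough check, assuming per-iteration cost scales as n·log n in the transform length n (a common FFT cost model, not an exact account of Prime95's internals):

```python
import math

# Expected per-iteration slowdown when the FFT length grows, under the
# rough model: time ~ n * log2(n) in the transform length n.
def slowdown(old_fft_k, new_fft_k):
    old_n, new_n = old_fft_k * 1024, new_fft_k * 1024
    return (new_n * math.log2(new_n)) / (old_n * math.log2(old_n))

print(f"{slowdown(320, 384):.2f}x")  # 320K -> 384K: prints "1.22x"
```

So a little over 20% is expected, while the rounded timings (0.007 to 0.009) suggest nearly 29%.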

 philmoore 2009-04-28 16:42

Ouch, it looks like FFT size has increased again, from 384K to 448K. I am guessing that it may have happened somewhere in the middle of Engracio's current range, does anyone else have any data on that? On the other hand, my old Athlon XP system at home is still using 384K for the exponents in the 3.93-3.94M range, so I may shift a few exponents over to that, but it is slow, around 40 hours per test.

 Jeff Gilchrist 2009-04-28 17:31

[QUOTE=philmoore;171344]Ouch, it looks like FFT size has increased again, from 384K to 448K. I am guessing that it may have happened somewhere in the middle of Engracio's current range, does anyone else have any data on that? On the other hand, my old Athlon XP system at home is still using 384K for the exponents in the 3.93-3.94M range, so I may shift a few exponents over to that, but it is slow, around 40 hours per test.[/QUOTE]

I have 2^3911320+40291 and 2^3921064+40291 using [B]448K[/B] FFTs, and 2^3925684+2131 and 2^3916024+2131 still using [B]384K[/B].

The 448K FFTs are taking 0.008 sec per iteration while the 384K ones are taking 0.006/0.007 secs.

Jeff.

 engracio 2009-04-28 19:44

Yeah Phil, I've just glanced at the wu's and they are still in the [B]384K[/B] range at 3.83M.

 philmoore 2009-04-29 22:14

I see now that the +2131 numbers are using the 384K all-complex FFTs on the Pentium D, but the Athlon is using the 384K size for all three sequences. Looks like it would make sense to do as many +40291 and +41693 numbers as possible on the Athlon and do all the +2131 numbers on the Pentium.

2131 has always had higher crossover sizes for FFT lengths.

 philmoore 2009-05-04 11:21

I see that our currently active reservations span a range of 240,000 and have all been reserved within the past 3 weeks! It may be some time before this happens again. Welcome, new participant "unconnected"!

 Jeff Gilchrist 2009-07-16 19:29

Everyone using Prime95/mprime may want to upgrade to the new 25.11 version announced here: [url]http://www.mersenneforum.org/showthread.php?t=12155[/url]

There is a speedup for zero-padded FFTs. I saw an increase in speed from 0.011 sec iteration time to 0.009 sec.

Jeff.

 paleseptember 2009-07-22 01:05

I'm not seeing that scale of improvement, previously at 0.011s, still at 0.011s. The expected improvement was only 3-4%, so I could just be within that significant figure. Hopefully other people will see more impressive improvement!

 engracio 2009-07-22 01:30

I hate to say it, but I actually went back and put the old .exe back. Does this update have any other improvements or was it just some minor bug fixes?

Paleseptember, I saw your post and thought dang, he found another prime, lucky dingo.

 Jeff Gilchrist 2009-07-22 19:10

[QUOTE=engracio;182166]I hate to say it, I actually went back so I put back the old .exe. Does this update have any other upgrades or was it just some minor bug fixes??[/QUOTE]

It was mostly improvements for other non-GIMPS projects (such as this one) but not major bug fixes. If your old binary is faster, just continue using that. What kind of processor do you have?

Mine (Core 2) is now at 0.010s almost a week later so still faster but not as much as before. Maybe I'm now just hitting a slightly higher range so it rolled from 0.009 to 0.010.

 engracio 2009-07-23 03:00

Core 2 also, might have been the wu too. Oh well.:smile:

 paleseptember 2009-07-23 05:37

Any improvement is good! A 3% speed increase on my current workunits helps by 23mins. Every little bit helps :]

 engracio 2009-07-26 18:23

Yep, I am sticking with 25.9.4: when I tried the new 25.11 client, it slowed from .011 sec to .012 sec per iteration on my Q6600 boxes. I let them run on the same wu but with different versions. .001 sec might not be much, but everything helps.

As stated it was a minor upgrade, so I am sticking with the older version.:smile:

 philmoore 2009-09-15 22:48

double checking residues

Just a bit of news, and an invitation to weigh in on the issue of double checking. Ben and I are currently double checking all the residues for exponents between zero and 1.24 million (but only for the 40291 and 41693 sequences.) The principal reason for this is that these candidates were all originally checked with pfgw version 20050213, which had some problems with FFT boundaries and sometimes returned "ERROR IN PROCESSING" messages when errors were detected. The problems were particularly severe for the 2131 sequence, and many residues had to be retested with the older 20041129 version of pfgw, slower by a factor of 2 but apparently accurate and stable. Pfgw only did error checking on the first 25 and the last 25 iterations, so sometimes errors went undetected and the residues were still wrong. These undetected errors often occurred near other detected errors, so I generally ran long sequences of retests in the vicinity of any "ERROR IN PROCESSING" messages. But we still cannot rule out the possibility of other errors in this data, hence the retest. I did have some errors in the 40291 sequence but not in the 41693 sequence, and I am wondering if pfgw just did not happen to detect some errors there. Our retesting should require about 1.5% of our total prp effort to date.

I began using Prime95, version 25 for exponents starting at 1.24 million, slightly smaller for 2131. Therefore, our double checking should provide us with a complete set of Prime95 residues for all exponents up through our current ranges. I hope to be able to verify most of the residues below 1.24 million using the older pfgw data.

I have had the general impression that our data since the project initiation last October at n > 1.4 million has been quite accurate. I have checked a few of Ben's and Engracio's residues and found no discrepancies. Nevertheless, an undetected error at a low exponent could cause us to search unnecessarily to an extremely high n value. Seventeen or Bust has found two of their eleven primes so far by double checking. Reports are that their error rates were considerably higher than GIMPS's, so apparently there were some unstable machines returning results for a while. Serge has suggested doing some selective double-checking to get an idea of our accuracy rate. We could also do some systematic double checking of the smaller exponent values, but knowing our accuracy rate would help us choose an optimal goal if we also do systematic double checking. We could start by checking, say, a random 1% of our tests over the entire range (for just the 40291 and 41693 sequences, of course.)
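The proposed 1% spot-check is easy to automate; a sketch (the work-file layout of one exponent/k pair per line is assumed from the examples earlier in the thread):

```python
import random

# Draw a random 1% doublecheck sample from a work file's candidate lines.
# The "exponent k" line layout is an assumption based on examples earlier
# in the thread.
def sample_for_doublecheck(candidates, fraction=0.01, seed=None):
    rng = random.Random(seed)
    k = max(1, round(len(candidates) * fraction))
    return sorted(rng.sample(candidates, k))

work_file = [f"{n} 40291" for n in range(1400001, 1400301, 2)]  # 150 lines
picks = sample_for_doublecheck(work_file, 0.01, seed=1)
print(len(picks))  # 2 candidates out of 150
```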

Any comments or suggestions?

 mdettweiler 2009-09-16 01:03

One comment: based on what you were saying, it sounds like you're doublechecking all five original sequences. Since PRPs have already been found for three of those, as far as the conjecture is concerned there's no need to bother with them at all now. Of course, there is always the chance of a lower prime having been missed, which could possibly be worth checking for.

BTW: for the tests that you've had to re-do with an older version of PFGW, would it be possible to simply use the latest version of PFGW (3.2.2)? Ever since 3.2.0, PFGW has given true 64-bit residues rather than the 62-bit ones from before, so the first character of most residues will differ from those produced by earlier versions, but the remaining part of the residue is compatible and just as useful for doublechecking purposes. As long as you ignore the first digit (which can be easily screened out with a simple script), you should be able to take advantage of the full speed of the latest versions. You could even use Prime95 v25.11 for this as long as you do the same thing with the residues--both Prime95 v25.11 and PFGW v3.2.2 should be exactly the same speed.
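The screening described above amounts to one comparison; for illustration (the residue strings here are invented):

```python
# Compare a true 64-bit residue (PFGW 3.2.0+ / Prime95) against an older
# 62-bit-style one by ignoring the first hex digit, which older PFGW
# versions reported incorrectly. Residue strings are made-up examples.
def residues_match(new_res64, old_res64):
    return new_res64[1:].upper() == old_res64[1:].upper()

print(residues_match("7C0FFEE123456789", "3C0FFEE123456789"))  # True
print(residues_match("7C0FFEE123456789", "7C0FFEE123456788"))  # False
```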

 paleseptember 2009-09-16 01:25

Double-checking is only being done for the two open sequences. I think if the error rate is minimal, we should continue to focus on first-pass testing. If the error rate is significant, well, further discussion is required.

 philmoore 2009-09-16 03:11

[QUOTE=philmoore;189900]Ben and I are currently double checking all the residues for exponents between zero and 1.24 million (but only for the 40291 and 41693 sequences.)
......
We could start by checking, say, a random 1% of our tests over the entire range (for just the 40291 and 41693 sequences, of course.)
[/QUOTE]

I tried to make it clear that we are only double-checking the 40291 and 41693 sequences. As for pfgw, it certainly should work just as well, but since we are using Prime95 (mprime) for everything else, it made sense to me to stick with it. One advantage of Prime95 is that its server protocol can be hacked to eventually coordinate assignments through a server. (Compare the recent adaptation of Seventeen or Bust.)

Ben, I tend to agree with you if the error rate is low. I would appreciate hearing from any prp testers who are overclocking, and we could rerun a sample of your tests first. It would not be too costly to retest one number from each work file to get a general idea of the error rate.

 mdettweiler 2009-09-16 03:28

[quote=philmoore;189920]I tried to make it clear that we are only double-checking the 40291 and 41693 sequences. As for pfgw, it certainly should work just as well, but since we are using Prime95 (mprime) for everything else, it made sense to me to stick with it. One advantage of Prime95 is that its server protocol can be hacked to eventually coordinate assignments through a server. (Compare the recent adaptation of Seventeen or Bust.)

Ben, I tend to agree with you if the error rate is low. I would appreciate hearing from any prp testers who are overclocking, and we could rerun a sample of your tests first. It would not be too costly to retest one number from each work file to get a general idea of the error rate.[/quote]
Ah, I missed the note about checking only 40291 and 41693 the first time around--I see it now. I think I was also a little mixed up about the PFGW thing; the main reason why I suggested that was for the doublechecking of the stuff done with older versions of PFGW. Upon rereading your original post it seems that you're actually all done with that part--duh, my bad. :rolleyes:

 Jeff Gilchrist 2009-09-16 16:28

[QUOTE=philmoore;189920]Ben, I tend to agree with you if the error rate is low. I would appreciate hearing from any prp testers who are overclocking, and we could rerun a sample of your tests first. It would not be too costly to retest one number from each work file to get a general idea of the error rate.[/QUOTE]

If the error rate is really low, I nominate Ben to start doing double checks since he is hoarding all the PRP finds... :toot:

Only one of my machines is overclocked, and I think I did only a few PRP checks with it; almost all of them were done on normally clocked systems. But doing a random sample to get an idea of the error rate would be a good idea.

Jeff.

 engracio 2009-09-16 17:50

My boxes are overclocked only 14 fsb, for stability. That is less than a 5% oc; for an overclocker that is very minimal. Since my goal is stability, anything more defeats the purpose. That is years of experience.:smile:

If you guys want to double check my wu's, please do so. Just let me know if you find any discrepancies. I feel they will match, but who knows. Non-oc'd boxes also produce random errors, due to peripheral issues like memory and such.

 philmoore 2009-09-16 19:23

Here's my suggestion: we randomly pick one test to check out of each work file, going as far back as 1.25 million. Split these tests into 3 work files and ask for volunteers. (I can even arrange it so that no one has to double check their own work.) Each work file should require about 85-90% of the computation time as an average first-time work file. We get the results back and look at the error rate, and go from there.

 engracio 2009-09-16 19:42

[quote=philmoore;190002] (I can even arrange it so that no one has to double check their own work.) [/quote]

Good idea about the not checking my own work.:smile:

 paleseptember 2009-09-16 21:46

*waves a hand to do some more DC work*

 engracio 2009-09-16 23:07

[quote=paleseptember;190015]*waves a hand to do some more DC work*[/quote]
You're a good man Ben:smile::bow:

 philmoore 2009-09-17 03:54

Engracio says he is game to do a double-check file, and I can do the third. It may be a couple of weeks before I get them sorted out, so go ahead and grab some more first time checks if you run out of work in the meantime.

 Jeff Gilchrist 2009-09-17 13:48

[QUOTE=philmoore;190028]Engracio says he is game to do a double-check file, and I can do the third. It may be a couple of weeks before I get them sorted out, so go ahead and grab some more first time checks if you run out of work in the meantime.[/QUOTE]

If you want/need to split it a bit more so people don't double check their own work, I can also take a DC work file as well.

 philmoore 2009-09-17 17:31

Thanks, Jeff, that will be helpful. I'll set up 4 double check files then, each of which will be about 60-65% of the work in our current work files.

 paleseptember 2009-09-20 22:37

Alrighty, a full double-check of 1.00-1.10M is running now. Should be done in less than a week.

 paleseptember 2009-09-24 23:26

Double-check on 1.00-1.10M is complete, results and residues have been forwarded to Phil. Hopefully he has a nice little script that strips out the important data from the first-run and double-check files and compares the res64 and Wd1 details.
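A comparison script of the kind described might look like this sketch (the results-file line layout is an assumption rather than Prime95's exact format, and only the res64 field is compared; the Wd1 check would be analogous):

```python
import re

# Extract candidate -> RES64 pairs from results-file lines and report any
# mismatches between a first run and a double check. Assumed line shape:
#   "2^1000003+40291 is not prime. RES64: 1A2B3C4D5E6F7081"
def read_residues(lines):
    pat = re.compile(r'(2\^\d+\+\d+)\D*RES64: ([0-9A-Fa-f]{16})')
    return {m.group(1): m.group(2).upper()
            for m in map(pat.search, lines) if m}

def mismatches(first_run, double_check):
    a, b = read_residues(first_run), read_residues(double_check)
    return sorted(c for c in a.keys() & b.keys() if a[c] != b[c])

first = ["2^1000003+40291 is not prime. RES64: 1A2B3C4D5E6F7081"]
dcheck = ["2^1000003+40291 is not prime. RES64: FA2B3C4D5E6F7081"]
print(mismatches(first, dcheck))  # ['2^1000003+40291']
```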

Double-check on 1.10-1.25M is now running. Estimated time to completion approximately 6 days, making it 1st October barring any significant issues.

~ps~

 philmoore 2009-09-25 22:44

I have completed a double check of the range 0-500,000 also, and I am pleased to report that we now have complete residue matches of each double check in this range, as well as Ben's range 1M-1.1M, with earlier residues from pfgw, either 64-bit residues from version 20050213 or 62-bit residues from version 20041129. I am continuing with 500,000-1M, and Ben is continuing from 1.1M-1.24M, but the only range where 20050213 had the hiccups was between 500,000 and 600,000, and I should have that done fairly soon. If it checks out, I do not anticipate any further problems, but I still intend to complete the double checking up to 1M, because the only residues we have currently in this range are from the 20050213 version of pfgw which had known bugs. After 1.24M, all residues are from version 25 of Prime95 (or mprime) which I presume was probably working correctly.

I also did a scan of all our results files returned so far, and found three tests that were returned with non-zero error codes: two with ROUND-OFF errors and one with a SUMOUT error. Not bad, out of 38,000 or so tests completed so far! I have contacted the people who sent me the errors by email so that they can investigate whether they have any stability issues with those particular machines.

I will pick some random double checks to rerun and send a file out to each volunteer before too long, and hopefully we can get some sort of estimate of our current error rate.

 mdettweiler 2009-09-25 23:46

[quote=philmoore;191106]I also did a scan of all our results files returned so far, and found three tests that were returned with non-zero error codes, two with ROUND-OFF errors and one with a SUMOUTerror. Not bad, out of 38,000 or so tests completed so far! I have contacted the people who sent me the errors by email so that they can investigate whether they have any stability issues with those particular machines.[/quote]
There is a known issue in PFGW that causes repeatable roundoff errors on certain candidates near FFT boundaries, not necessarily due to an unstable machine. Try rerunning those candidates with the -a1 switch and you should be able to finish the test and get a (hopefully) matching residual set.

As for the sumout errors, I'm not sure if those are due to a bug or what.

BTW, why did you use the 20050213 and 20041129 versions of PFGW for the earlier results? Everything up through version 3.1.0 (I don't know the exact release date for 3.1.0, but it was somewhere in 2009) produced 62-bit residues even though they were outputted as "64-bit" residuals (essentially the first character was repeatable but nonetheless inaccurate and could be thrown out for comparison with other 62-bit residues). The 20050213 version which you used for 64-bit residuals was actually outputting 62-bit ones in this manner.

In fact, you could just as well use the latest, true 64-bit residue version of PFGW (3.2.2) for comparison with originally 62-bit residuals, as long as you ignore the first character of the 64-bit ones when comparing them. You could even use Prime95 for this if you wanted since it produces the same true 64-bit residues as the latest version of PFGW.

 paleseptember 2009-09-26 00:10

The SUMOUT error was from me, and I'm pretty sure it was a result of a power black-out. Phil is rerunning the test, hopefully it'll check out.

 philmoore 2009-09-26 03:37

[QUOTE=mdettweiler;191108]There is a known issue in PFGW that causes repeatable roundoff errors on certain candidates near FFT boundaries, not necessarily due to an unstable machine. Try rerunning those candidates with the -a1 switch and you should be able to finish the test and get a (hopefully) matching residual set.[/QUOTE]

If this is true, it will also be true with the latest version of Prime95, as both programs use the same FFT boundaries. However, the errors I referred to were occurring with version 20050213. On that version, the -a1 switch was broken.

[QUOTE=mdettweiler;191108]BTW, why did you use the 20050213 and 20041129 versions of PFGW for the earlier results? Everything up through version 3.1.0 (I don't know the exact release date for 3.1.0, but it was somewhere in 2009) produced 62-bit residues even though they were outputted as "64-bit" residuals (essentially the first character was repeatable but nonetheless inaccurate and could be thrown out for comparison with other 62-bit residues). The 20050213 version which you used for 64-bit residuals was actually outputting 62-bit ones in this manner.[/QUOTE]

All of the earlier results were run prior to August 2008, and the 3.x.x versions of pfgw were not available then. The 20050213 actually output full 64-bit residuals, not 62-bit as was the case with 20041129.

[QUOTE=mdettweiler;191108]In fact, you could just as well use the latest, true 64-bit residue version of PFGW (3.2.2) for comparison with originally 62-bit residuals, as long as you ignore the first character of the 64-bit ones when comparing them. You could even use Prime95 for this if you wanted since it produces the same true 64-bit residues as the latest version of PFGW.[/QUOTE]

We are using Prime95 (or mprime) for this. See my response to your earlier post in post #45.

 mdettweiler 2009-09-26 10:44

[quote=philmoore;191120]If this is true, it will also be true with the latest version of Prime95, as both programs use the same FFT boundaries. However, the errors I referred to were occurring with version 20050213. On that version, the -a1 switch was broken.[/quote]
Yes, correct, it is broken in Prime95 as well. In that case, I guess it's probably a different problem (whether a repeatable program bug or an unstable machine).
[quote]All of the earlier results were run prior to August 2008, and the 3.x.x versions of pfgw were not available then. The 20050213 actually output full 64-bit residuals, not 62-bit as was the case with 20041129.[/quote]
Are you sure about that? Because when I asked Mark (rogue), the current developer of PFGW about that, he said that even when pre-3.2.0 versions seemingly outputted 64-bit residuals, they were really 62-bit ones, and thus the first bit was incorrect (though reproducible). I.e., if you ran the same test with Prime95, you'd get a residual that's the same except for the first character.

I think the 20041129 version may have not even tried to print 64-bit residuals, but rather only printed the 62-bit residual (i.e. leaving off the first character entirely). Don't quote me on that though. :smile:
[quote]We are using Prime95 (or mprime) for this. See my response to your earlier post in post #45.[/quote]
Oh, I see. I think I was a little confused by this remark you made in post #42:
[quote=philmoore;189900]The problems were particularly severe for the 2131 sequence, and many residues had to be retested with the older 20041129 version of pfgw, slower by a factor of 2, but apparently accurate and stable.[/quote]
I'm assuming I read that wrong, then? :smile:

 philmoore 2009-09-26 16:49

[QUOTE=mdettweiler;191153]Are you sure about that? Because when I asked Mark (rogue), the current developer of PFGW about that, he said that even when pre-3.2.0 versions seemingly outputted 64-bit residuals, they were really 62-bit ones, and thus the first bit was incorrect (though reproducible). I.e., if you ran the same test with Prime95, you'd get a residual that's the same except for the first character.[/QUOTE]

Yes, I am sure. We have already checked thousands of residues, and the 64-bit residues from 20050213 match the 64-bit residues of Prime95, version 25, in most cases. The 62-bit residues always had the first character as 0, 1, 2, or 3, but if you masked the leading two bits of a 64-bit residue, the rest of it matched, at least in all cases checked so far.
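The masking rule Phil describes can be sketched in a few lines of Python (the hex strings below are made-up examples, not real residues):

```python
# A minimal sketch of the comparison described above: two residues agree
# if they match after the top two bits are cleared, so a true 64-bit
# residue can be checked against an old 62-bit one.

MASK62 = (1 << 62) - 1  # keeps the low 62 bits of a 64-bit residue

def residues_match(res_a: str, res_b: str) -> bool:
    """Compare two 16-hex-digit residue strings modulo the top two bits."""
    return int(res_a, 16) & MASK62 == int(res_b, 16) & MASK62

# "C..." and "0..." differ only in the top two bits, so they match;
# a change anywhere in the low 62 bits does not:
print(residues_match("C123456789ABCDEF", "0123456789ABCDEF"))  # True
print(residues_match("C123456789ABCDEF", "0123456789ABCDEE"))  # False
```

This also explains why the first character of an old "62-bit" residue was always 0-3: the top two bits were simply never set.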

 mdettweiler 2009-09-26 17:26

[quote=philmoore;191174]Yes, I am sure. We have already checked thousands of residues, and the 64-bit residues from 20050213 match the 64-bit residues of Prime95, version 25, in most cases. The 62-bit residues always had the first character as 0, 1, 2, or 3, but if you masked the leading two bits of a 64-bit residue, the rest of it matched, at least in all cases checked so far.[/quote]
Ah, I see. I'm surprised that the 20050213 version actually produced true 64-bit residues; was it a specially "hacked" version by chance? If so, that would explain why the 64-bit residue code hadn't stuck around, and was "newly" added in version 3.2.0 not long ago.

 philmoore 2009-09-26 21:22

I wouldn't say that 20050213 was hacked, but it had a number of enhancements that had been added by Jim Fougeron, the same guy who had put out many of the previous versions. It was officially a "beta" version, while the earlier 20041129 was an "alpha" version, presumably stable but with much slower FFT routines for numbers of the form a*2^n+b. Unfortunately, when Mark decided to update pfgw, he was unable to obtain Jim's source code which contained the newer enhancements.

 mdettweiler 2009-09-26 22:23

[quote=philmoore;191188]I wouldn't say that 20050213 was hacked, but it had a number of enhancements that had been added by Jim Fougeron, the same guy who had put out many of the previous versions. It was officially a "beta" version, while the earlier 20041129 was an "alpha" version, presumably stable but with much slower FFT routines for numbers of the form a*2^n+b. Unfortunately, when Mark decided to update pfgw, he was unable to obtain Jim's source code which contained the newer enhancements.[/quote]
Ah, I see...that would explain it. :smile:

 philmoore 2009-09-28 18:02

[QUOTE=paleseptember;191111]The SUMOUT error was from me, and I'm pretty sure it was a result of a power black-out. Phil is rerunning the test, hopefully it'll check out.[/QUOTE]

I confirmed Ben's residue, so his computer seemed to completely recover from the SUMOUT error, very reassuring. I also retested the two round-off errors from another user, and confirmed the residue with one error, but got something different for the other which had two round-off errors. He says that the machine has tested as quite stable several times, but on the theory that perhaps airflow could have been inhibited temporarily, I will retest a few more residues from around the same time as the first, just to get a clearer picture.

 Batalov 2009-10-02 00:53

It's ok. We know where they live... :squash:
[ATTACH]4178[/ATTACH]

 philmoore 2009-10-10 00:25

This project was launched one year ago, and has accomplished so much: three prp discoveries, and we are closing in on testing all exponents below 5M on the two remaining sequences, plus completing sieving up to 500T. Ben and I have been doing a systematic double-check on all exponents below 1.25M because of the bugs in pfgw, but we are also launching a random double-check of the range from 1.25M to 5.01M. I have randomly picked one test to do in each 10k range, with the constraint that it has not already been double-checked. This accounts for 376 tests. I also added another 16 tests that occurred at times when hardware errors were occurring. Although these 16 tests were all reported with clean error codes, because they occurred at the same time as other errors, or during times of high heat on the cores, retesting them may give us a chance to see if there were other undetected problems. The 376 + 16 = 392 tests were divided into 4 work files of 98 tests each and sent to Engracio, Ben, Jeff, and myself. These work files should be a little shorter than the current ones because some of the tests are on numbers with small exponents. I am hoping that by December, we can get an estimate of our error rate and decide whether further error testing is needed. This double-check of the 392 tests represents about 1% of our total prp effort to date, and should give us a good idea of how likely it might be that we could have missed a prime.

Thanks everyone for your contributions to a very successful project so far!

 philmoore 2009-12-05 00:10

Double checking

The double checking data is in, so I would like to provide a summary of what we know about the accuracy of our data so far. Before getting into the data, I want to first mention that 11 first-time checks (out of 38,000 or so) have been returned so far with bits set in the error-reporting word. Some errors (SUMOUT) tend to be less disruptive than others (ROUND-OFF, or SUMINP!=SUMOUT), and the program can often recover by restarting from the last save file. Of these 11 residues reported with errors, we have confirmed that 8 of them were correct residues and 3 were incorrect, with a triple-check needed to confirm the incorrect residues. There were also two file-reading errors caused by simultaneous testing of two numbers with the same exponent but different k values. I believe this is because Prime95 uses the same naming convention for both save files, but a double check has confirmed that both residues were correct.
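Pulling the results with a set error word out of a log can be automated; here is a hypothetical sketch that assumes result lines contain a field like "error code: NNNNNNNN" (the exact results.txt layout varies by program and version, and the sample lines are invented):

```python
import re

# Hypothetical sketch: scan result lines for tests that completed with
# bits set in the error-reporting word, so those residues can be queued
# for a double-check. The "error code:" field name and the line format
# are assumptions, not the exact Prime95/mprime output.

ERROR_RE = re.compile(r"error code: ([0-9A-Fa-f]{8})")

def flag_suspect_lines(lines):
    """Return the lines whose error word is present and nonzero."""
    suspect = []
    for line in lines:
        m = ERROR_RE.search(line)
        if m and int(m.group(1), 16) != 0:
            suspect.append(line)
    return suspect

log = [
    "2^4007127+41693 is not prime. RES64: 0123456789ABCDEF, error code: 00000000",
    "2^3050000+40291 is not prime. RES64: FEDCBA9876543210, error code: 00000200",
]
print(flag_suspect_lines(log))  # only the second line is flagged
```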

All other residues were reported without detected errors. I randomly chose one prp test in each 10k range to retest. The k values were all either 40291 or 41693. (If I had known we would find another prp so soon, I would have postponed this project!) From 1.25M to 5.01M, this gave a total of 376 prp tests to redo. To these, I added an additional 16 tests which were in the vicinity of reported errors on the theory that there may have been unreported errors (particularly ROUND-OFF errors) near the same time and from the same machines as the reported errors, for a total of 392 prp tests.

From the first 294 tests, there was only one discrepancy between the first time test and the double-check. The discrepancy has been confirmed via a triple check as an error in the first time test. The machine was the same quad of Jeff's that had returned one of the three bad residues with a reported error. Considering that this machine had only returned about 350 tests in all, very few of those residues were double-checked, and I was concerned that it might have a high error rate. Engracio volunteered to double check another 32 residues from that machine, and he confirmed that 31 of the residues were ok, and one was wrong (confirmed by my triple-check.) So we have about a 6% error rate (2 out of 33, not including the ROUND-OFF error result), although I would not be surprised if the true error rate on this machine is anywhere between 2% and 12%. My suggestion is, let's double-check the 105 remaining residues from this machine in the 40291 sequence, and just forget about 41693 and 2131. The chances are small that one of them is a prp, but I would feel rather silly if they finally got checked a couple of years from now and a prp turned up! Anyone want to volunteer? All 105 tests are between 3.05M and 3.92M.
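To put a number on how uncertain 2 errors out of 33 really is, one standard tool is the Wilson score interval; the 2%-12% range above is an informal estimate, and this sketch just shows one conventional way to bracket the same observation:

```python
import math

# Wilson score interval around an observed error rate (here, 2 bad
# residues out of 33 double-checked from the suspect machine), at the
# conventional 95% confidence level (z = 1.96).

def wilson_interval(errors, n, z=1.96):
    p = errors / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_interval(2, 33)
print(f"observed {2/33:.1%}, 95% interval roughly {lo:.1%} to {hi:.1%}")
```

With so few double-checks the interval is wide, which is why double-checking more residues from that machine is worthwhile.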

So I was hoping for a low error rate, but I got paleseptember's double check file last night and found two more discrepancies with the first time checks which were done by engracio. I am triple checking the first one (on a very slow machine) to find out which results were in error. (Anyone with a faster machine want to test 2^4007127+41693 for me? Otherwise, I will be done late next week.) Still an overall error rate of < 1% (3 out of 394 or less). But we can expect the error rate to grow as the exponents get larger. Maybe we should periodically do more of this sort of sample double checking, and perhaps we will even be lucky enough to identify machines that have higher than usual error rates. Ben and I have done a lot of work double checking exponents < 1.25M, but we have not found even a single error yet, so I'm not sure that more systematic checking of the low exponents has a very good payoff.

So I think the error rate is low enough that we can concentrate on first-time tests for awhile. What about sieving? We are currently sieving from 2M to 50M, so any factors found < 5.2M are only benefitting future double-checking work. Should we drop 2M-5M from our sieving range? For the record, sieving speed is proportional to the square-root of the range size, so dropping 2-5M would only speed up our sieving by a bit over 3%. Maybe we should sieve a bit farther before raising the lower limit on sieving. Opinions?

 engracio 2009-12-05 00:21

Hey Phil I will take those 105 dc. I am finishing up my last range and I will complete this set of dc before reserving again.

 philmoore 2009-12-05 01:38

Thanks, Engracio, I emailed them to you.

 engracio 2009-12-05 05:21

They are on the queue to be worked on.

 philmoore 2009-12-05 23:51

Ignore my request on 2^4007127+41693, as it is now started and will finish early Tuesday. I have confirmed the first residue mismatch from Ben as being wrong on Engracio's end, so I'll be in touch with him to see if we can identify the computer that ran that test.

 engracio 2009-12-06 00:47

Phil, if that was the same 3 or 4 wu we discussed before, I believe I do not have that machine anymore due to the mobo dying. So I do not think we can reproduce the results other than redoing the test.

 Zuzu 2009-12-06 22:31

Double checking

Phil,

Given that the mean number of primes per doubling is now 0.19, the probability of finding a yet-undiscovered PRP by double-checking the last sequence (40291) in the relevant range (1.25M to 5.2M) is quite low, around 6%*(1-exp(-0.19*LN(5.2/1.25)/LN(2))) = 2%. Thus for this last sequence, double-checking would be better deferred to the future, managing the expected increase in error rate along with increasing exponent.
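The back-of-envelope estimate above can be reproduced directly; this sketch assumes the quoted figures of 0.19 expected primes per doubling and a 6% per-test error rate:

```python
import math

# Chance that the 1.25M-5.2M range of the 40291 sequence contains a PRP
# at all (Poisson with 0.19 expected primes per doubling of the
# exponent), times a 6% per-test error rate, gives the chance that such
# a PRP exists but was missed by the first-time test.

primes_per_doubling = 0.19
error_rate = 0.06

doublings = math.log(5.2 / 1.25, 2)
p_missed = error_rate * (1 - math.exp(-primes_per_doubling * doublings))
print(f"{p_missed:.1%}")  # about 2%
```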

OTOH for the other sequences, I would suggest double-checking up to the discovered PRP exponent in order to ensure that it is indeed the lowest exponent ("Keller prime"). I think it would not be an unbearable task, it could make the search for a primality proof easier, and, if clearly stated, would spare dispersed efforts to find such hypothetical lower PRPs. Don't forget that for k=67607 the exponent 46549 had been discovered before 16389.:smile:

 philmoore 2009-12-06 23:35

6% was only the estimated error rate for that one computer that has returned three bad results so far; our overall error rate is probably more like 1% or less. As you say, the odds are small that a prime was missed for 40291.

On the other hand, double-checking all the sequences in which a prp has been discovered would be an effort comparable to a good fraction of our total effort to date, especially for 2131 and 41693. My current goal is to finish a complete list of residues for each of the four sequences so that they can eventually be double-checked, but I don't think we are ready to divert large amounts of resources to that task just yet.

 paleseptember 2010-02-08 01:01

Just checking in with some ongoing benchmarks.
In the 5.7M ranges, prime95 is using a zero-padded FFT length 640K, with iteration times of approximately 0.015s/iteration on a stock Q6700 @ 2.66GHz running on three (of the four) cores. That translates to pretty much exactly 24 hours per test per core.
(This update was prompted by [URL="http://www.mersenneforum.org/showthread.php?t=13063"]this thread concerning the efficacy of ECM and P-1 testing.[/URL])
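The arithmetic behind that benchmark is simple: a PRP test of 2^n + k takes roughly n squaring iterations, so per-test time is about n times the per-iteration time. A quick sanity check, using the figures quoted above:

```python
# Sanity-checking the benchmark above: roughly one squaring iteration
# per bit of the exponent, so per-test time is about
# exponent * seconds_per_iteration.

n = 5_700_000          # exponent in the 5.7M range
sec_per_iter = 0.015   # measured on the stock Q6700 on three cores

hours = n * sec_per_iter / 3600
print(f"about {hours:.1f} hours per test")  # roughly 24 hours
```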

 Cybertronic 2010-02-08 10:00

Also a benchmark from me.

2^5720440+40291 is done in 19h on a 3.4 GHz Phenom.

(Only estimated.)

 Jeff Gilchrist 2010-09-16 15:25

Anyone try the new Prime95 v26 to see how much of a speed difference, if any, it makes with 5orB PRP tests?

Jeff.

 geoff 2010-09-19 00:47

For an 896K FFT, the per-iteration time went from 0.020s with version 25.11 to 0.018s with version 26.2 on my Core 2 Duo 2.66GHz, so it should save about 2 hours per test.

 geoff 2010-09-24 03:06

More accurately, for the current PRP tests at ~8M (896K FFT) the new mprime version 26.2 takes 41h20m on my C2D 2.66GHz, which is a savings of almost 4 hours per test compared to version 25.11.

 enderak 2010-09-25 00:04

v26 seems to be quite a bit faster on my i7 as well. Still too early to tell exactly how much faster, but it seems to be about the same speedup geoff saw.

 Jeff Gilchrist 2010-09-25 10:20

Even my older Pentium D machine went from 0.032 sec/iter down to 0.027 sec/iter.

 All times are UTC.