mersenneforum.org > Prime Search Projects > No Prime Left Behind
Old 2008-03-23, 23:48   #166
Mini-Geek
Account Deleted
 
 
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

10253₈ Posts

Quote:
Originally Posted by Anonymous View Post
Okay, great! We won't be needing any more help with the sieving, since Mini-Geek shouldn't have a problem finishing his ranges (he's finished all his other ranges for this effort pretty quickly). However, the LLR effort will need a lot of help, so you are more than welcome to help there.

One quick question though: is your quad-core overclocked? If so, then you'll need to run a Prime95/mprime stress test on it for a few hours to verify that you can produce good results--for a doublecheck effort like this with no first-pass residuals to compare with, it is crucial that our machines produce perfect results. In fact, everyone who wants to participate in this effort's LLR portion should do a stress test on their machine first (overclocked or no) just to play it safe. I myself will be doing a stress test on my machine before doing any doublecheck LLR, even though I am very confident in my machine's results.
My computer isn't overclocked, and hasn't actually run the torture test, but it has run the self-test, and it returned a correct result for a number whose FFT size was borderline too low, with multiple roundings over 0.4, as detailed in this thread:
http://www.mersenneforum.org/showthread.php?t=9906
Do you think I still need to run a stress test?

Yeah, sorry I wasn't able to report on the status through the weekend, but I was on vacation and, though I could check that my computer was online, I couldn't check the status or send factors. As stated earlier, I'll finish it pretty soon.
I'm interested in LLRing this range and was wondering how long it will be between when I return my last factors and when the range is posted for manual LLRing. Just a rough estimate; I want to know if I should get another manual LLR range, or just run LLRnet or something.
Old 2008-03-24, 04:09   #167
mdettweiler
A Sunny Moo
 
 
Aug 2007
USA (GMT-5)

14151₈ Posts

Quote:
Originally Posted by Mini-Geek View Post
My computer isn't overclocked, and hasn't actually run the torture test, but it has run the self-test, and it returned a correct result for a number whose FFT size was borderline too low, with multiple roundings over 0.4, as detailed in this thread:
http://www.mersenneforum.org/showthread.php?t=9906
Do you think I still need to run a stress test?
Well, since it's not overclocked, and you already ran the self-test, you'll probably be fine without a stress test. Of course, you may want to run it anyway, maybe for just an hour or so, but it's your call.

Quote:
Yeah, sorry I wasn't able to report on the status through the weekend, but I was on vacation and, though I could check that my computer was online, I couldn't check the status or send factors. As stated earlier, I'll finish it pretty soon.
I'm interested in LLRing this range and was wondering how long it will be between when I return my last factors and when the range is posted for manual LLRing. Just a rough estimate; I want to know if I should get another manual LLR range, or just run LLRnet or something.
As soon as I receive the factors for the last sieving range, I'll remove them from the sieve file and get to work on splitting it up. So, it probably won't be long before I post the files for LLRing (look for a thread called "Doublecheck Drive #1")--thus, you'll probably just want to run LLRnet as filler work, rather than grabbing a whole manual LLR file from one of the drives.
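(Editor's note: the factor-removal step described above can be sketched in a few lines of Python. This is only an illustration: the file paths and the "p | k*2^n-1" factor-line format are assumptions, not the project's actual files or tools.)

```python
# Hypothetical sketch of stripping newly factored k/n pairs from a
# NewPGen-style sieve file (one header line, then "k n" pairs).
# The "p | k*2^n-1" factor-file format is an assumption for illustration.

def load_factored_pairs(factor_path):
    """Collect the (k, n) pairs eliminated by newly reported factors."""
    pairs = set()
    with open(factor_path) as f:
        for line in f:
            parts = line.split("|")
            if len(parts) != 2:
                continue
            expr = parts[1].strip()        # e.g. "45*2^123456-1"
            k, rest = expr.split("*2^")
            n = rest.split("-")[0]
            pairs.add((int(k), int(n)))
    return pairs

def remove_factored(sieve_path, factor_path, out_path):
    """Copy the sieve file, dropping any pair with a known factor."""
    factored = load_factored_pairs(factor_path)
    with open(sieve_path) as src, open(out_path, "w") as dst:
        dst.write(src.readline())          # keep the sieve header line
        for line in src:
            fields = line.split()
            if len(fields) >= 2 and (int(fields[0]), int(fields[1])) not in factored:
                dst.write(line)
```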
Old 2008-03-24, 05:21   #168
gd_barnes
 
 
May 2007
Kansas; USA

3³·5·7·11 Posts

Quote:
Originally Posted by Anonymous View Post

As soon as I receive the factors for the last sieving range, I'll remove them from the sieve file and get to work on splitting it up. So, it probably won't be long before I post the files for LLRing (look for a thread called "Doublecheck Drive #1")--thus, you'll probably just want to run LLRnet as filler work, rather than grabbing a whole manual LLR file from one of the drives.
After removing all the factors, be sure to run the file by me first for removal of algebraic factors for k=9, 27, 81, 225, 243, 441, and 729, unless you want to mess with that. I can take the entire file, split out just those k's, remove the algebraic factors, and recreate the entire file...wouldn't be a problem.
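(Editor's note: the algebraic-factor removal mentioned here rests on the difference-of-powers identity: if k = m^e and e divides n, then k·2^n−1 = (m·2^(n/e))^e − 1, which is divisible by m·2^(n/e)−1 and hence composite. A minimal sketch of the test, my own illustration rather than the actual tooling used:)

```python
def has_algebraic_factor(k, n):
    """True if k*2^n-1 has an algebraic factor: when k = m^e with e > 1
    and e | n, then k*2^n-1 = (m*2^(n/e))^e - 1, which is divisible by
    m*2^(n/e) - 1.  Exponents 2, 3, 5 cover the k's listed above
    (9, 81, 225, 441, 729 are squares; 27, 729 are cubes; 243 = 3^5)."""
    for e in (2, 3, 5):
        if n % e:
            continue
        m = round(k ** (1.0 / e))
        # check m and its neighbours to guard against float rounding
        for cand in (m - 1, m, m + 1):
            if cand > 1 and cand ** e == k:
                return True
    return False
```

For example, 9·2^10−1 = 9215 = 95·97, where 95 = 3·2^5−1 is the algebraic factor, so the pair (k=9, n=10) can be dropped without an LLR test.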

One more thing...I think we should probably test this by k-value instead of n-value so we'll need to get the file sorted by k-value primary and n-value secondary. That is what I did for my double-checking on n=50K-100K. It will be much easier to cross check for missing and incorrect primes that way. That is unless for some reason we decide to use a heavy hitter on an LLRnet server, in which case we'd probably only want to feed him n>200K to avoid creaming the server.
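(Editor's note: the k-primary / n-secondary sort described above is a one-liner once the pairs are parsed. A sketch; the "header line plus k n pairs" file layout is an assumption:)

```python
def sort_by_k_then_n(lines):
    """Sort sieve-file body lines by k-value primary, n-value secondary.
    Assumes lines[0] is the sieve header and the rest are "k n" pairs."""
    header, body = lines[0], lines[1:]
    pairs = sorted(tuple(map(int, ln.split()[:2])) for ln in body if ln.strip())
    return [header] + ["%d %d" % p for p in pairs]
```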

I'm excited about this. After finding ~2% missing primes for n=16K-100K for k=300-1001, I'm expecting an even higher rate of missing primes in this range. (n<16K was highly accurate for this k-range, with only 1 error found, and I believe it had been previously double-checked.) k<300 has been double-checked quite a bit already and will likely yield a lower rate of missing or incorrect primes. Kosmaj told me that they were attempting to get any k's that were tested by people outside of RPS double-checked, but they weren't done yet. I did some double-checking myself, but it was very fragmented in both n-values and k-values. What this DOES mean is that k's that have been tested by RPS folks have largely NOT been double-checked, so who knows what we might find. We do know that k<300 was independently double-checked for n<100K, but for anything above that, it's not clear.


Gary

Last fiddled with by gd_barnes on 2008-03-24 at 05:25
Old 2008-03-24, 11:16   #169
Mini-Geek
Account Deleted
 
 
"Tim Sorbera"
Aug 2006
San Antonio, TX USA

4267₁₀ Posts

Quote:
Originally Posted by Anonymous View Post
Well, since it's not overclocked, and you already ran the self-test, you'll probably be fine without a stress test. Of course, you may want to run it anyway, maybe for just an hour or so, but it's your call.


As soon as I receive the factors for the last sieving range, I'll remove them from the sieve file and get to work on splitting it up. So, it probably won't be long before I post the files for LLRing (look for a thread called "Doublecheck Drive #1")--thus, you'll probably just want to run LLRnet as filler work, rather than grabbing a whole manual LLR file from one of the drives.
473-495 is done. Factors sent. 523-546 should finish both its halves this evening (I split it up to run on both cores; I'm sure it's not too much more work for you).
Old 2008-03-24, 14:25   #170
mdettweiler
A Sunny Moo
 
 
Aug 2007
USA (GMT-5)

6249₁₀ Posts

Quote:
Originally Posted by gd_barnes View Post
After removing all the factors, be sure to run the file by me first for removal of algebraic factors for k=9, 27, 81, 225, 243, 441, and 729, unless you want to mess with that. I can take the entire file, split out just those k's, remove the algebraic factors, and recreate the entire file...wouldn't be a problem.
Okay, yeah, I'll send it to you then.

Quote:
One more thing...I think we should probably test this by k-value instead of n-value so we'll need to get the file sorted by k-value primary and n-value secondary. That is what I did for my double-checking on n=50K-100K. It will be much easier to cross check for missing and incorrect primes that way. That is unless for some reason we decide to use a heavy hitter on an LLRnet server, in which case we'd probably only want to feed him n>200K to avoid creaming the server.
Oh, okay. That might make things a little more complicated, though, since I was thinking of doing this team-drive style... do you think we should simply post individual files for k's, like we normally do for ranges of n, and have users reserve them?

Quote:
I'm excited about this. After finding ~2% missing primes for n=16K-100K for k=300-1001, I'm expecting an even higher rate of missing primes in this range. (n<16K was highly accurate for this k-range, with only 1 error found, and I believe it had been previously double-checked.) k<300 has been double-checked quite a bit already and will likely yield a lower rate of missing or incorrect primes. Kosmaj told me that they were attempting to get any k's that were tested by people outside of RPS double-checked, but they weren't done yet. I did some double-checking myself, but it was very fragmented in both n-values and k-values. What this DOES mean is that k's that have been tested by RPS folks have largely NOT been double-checked, so who knows what we might find. We do know that k<300 was independently double-checked for n<100K, but for anything above that, it's not clear.
RPS should have just done it right in the first place--by doublechecking all of k<300, not just k's tested outside of their project.
Old 2008-03-24, 14:25   #171
mdettweiler
A Sunny Moo
 
 
Aug 2007
USA (GMT-5)

3·2,083 Posts

Quote:
Originally Posted by Mini-Geek View Post
473-495 is done. Factors sent. 523-546 should finish both its halves this evening (I split it up to run on both cores; I'm sure it's not too much more work for you).
Okay, great! And as for those factors, no problem, it's not a hassle at all.
Old 2008-03-24, 17:15   #172
Flatlander
I quite division it
 
 
"Chris"
Feb 2005
England

100000011101₂ Posts

Quote:
Originally Posted by gd_barnes View Post
...
One more thing...I think we should probably test this by k-value instead of n-value so we'll need to get the file sorted by k-value primary and n-value secondary. That is what I did for my double-checking on n=50K-100K. It will be much easier to cross check for missing and incorrect primes that way. That is unless for some reason we decide to use a heavy hitter on an LLRnet server, in which case we'd probably only want to feed him n>200K to avoid creaming the server.
...
Gary
My preference is by n-value: like first-time LLRs, but months/years behind. As the k's get higher, the CPUs get faster.

The link to P95 v25.3 (Windows) is dead; is there somewhere else I can get a copy? (I assume this is the one I need for stress testing?)

Last fiddled with by Flatlander on 2008-03-24 at 17:21 Reason: Blah, blah, blah.
Old 2008-03-24, 19:02   #173
gd_barnes
 
 
May 2007
Kansas; USA

3³×5×7×11 Posts

It will be a big BIG hassle to check results sorted by n-value vs. k-value. I suppose we could do the search by n-value and then sort the final primes by k-value before checking them but then we'd have to wait until the very end of LLRing to do any checking.

This would not be a problem team-drive style. Micha and I are doing Sierp base 3 at CRUS by k-value because there are > 10^15 k's!! I reserved the first 100 million k's and he reserved the next 10 million k's. (It's a very prime base, with only 3 k's remaining at k=3M. My testing is currently past k=8M.) As for splitting the file up, do it in reasonable chunks, perhaps 3 or 6 k's at a time. Keep the # of k's per file at a multiple of 3, since any k that is divisible by 3 is heavier weight. I realize the files will have more variability in size, but it will all even out in the long run.
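(Editor's note: the chunking suggested here, one file per 3 consecutive k-values after the k-primary sort, could look like the sketch below; the function name and the (k, n) tuple representation are illustrative assumptions.)

```python
from itertools import groupby

def split_by_k(pairs, ks_per_file=3):
    """Group (k, n) pairs into chunks covering ks_per_file distinct
    k-values each, sorted k-primary / n-secondary, one chunk per file."""
    pairs = sorted(pairs)
    groups = [list(g) for _, g in groupby(pairs, key=lambda p: p[0])]
    return [sum(groups[i:i + ks_per_file], [])
            for i in range(0, len(groups), ks_per_file)]
```

As noted above, the chunks will vary in size (k's divisible by 3 carry more candidates), but each covers the same number of k-values.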

Chris's point about doing things upwards by n-value is a good one for first-pass processing and is the way we run our drives here. IMHO, the way RPS is testing up past 1.5M-2M on some k's while leaving other k's near them at n=600K-700K is a waste of current resources. Things should be kept somewhat more level, while also allowing a certain degree of individuality to make things fun.

But for a second pass, which is always done with faster machines than a first pass, his point only becomes a factor if there is a large amount of time between the beginning and ending of the effort. So for instance if we started now at n=100K and did not anticipate finishing to n=260K for 2 years when computers are certainly going to be faster, then yes, we should search by n-value. But here, I anticipate us finishing in < 6 months if not faster so there is little benefit to searching by n-value.

Searching by k-value will allow us to compare entire k's as we go along. We will quickly see the results of our efforts. Also, if it turns out that there are few problems for k<300, we can shift to making k>300 a priority, with k<300 done later.

Anon, you're running the effort here so if you and others feel strongly about searching by n-value, I'm fine with that. It won't affect how much I help the effort. I just wanted to bring up some points about searching by k-value instead like I did for n=50K-100K.


Gary
Old 2008-03-24, 19:12   #174
mdettweiler
A Sunny Moo
 
 
Aug 2007
USA (GMT-5)

3·2,083 Posts

Quote:
Originally Posted by gd_barnes View Post
It will be a big BIG hassle to check results sorted by n-value vs. k-value. I suppose we could do the search by n-value and then sort the final primes by k-value before checking them but then we'd have to wait until the very end of LLRing to do any checking.

This would not be a problem team-drive style. Micha and I are doing Sierp base 3 at CRUS by k-value because there are > 10^15 k's!! I reserved the first 100 million k's and he reserved the next 10 million k's. (It's a very prime base, with only 3 k's remaining at k=3M. My testing is currently past k=8M.) As for splitting the file up, do it in reasonable chunks, perhaps 3 or 6 k's at a time. Keep the # of k's per file at a multiple of 3, since any k that is divisible by 3 is heavier weight. I realize the files will have more variability in size, but it will all even out in the long run.

Chris's point about doing things upwards by n-value is a good one for first-pass processing and is the way we run our drives here. IMHO, the way RPS is testing up past 1.5M-2M on some k's while leaving other k's near them at n=600K-700K is a waste of current resources. Things should be kept somewhat more level, while also allowing a certain degree of individuality to make things fun.

But for a second pass, which is always done with faster machines than a first pass, his point only becomes a factor if there is a large amount of time between the beginning and ending of the effort. So for instance if we started now at n=100K and did not anticipate finishing to n=260K for 2 years when computers are certainly going to be faster, then yes, we should search by n-value. But here, I anticipate us finishing in < 6 months if not faster so there is little benefit to searching by n-value.

Searching by k-value will allow us to compare entire k's as we go along. We will quickly see the results of our efforts. Also, if it turns out that there are few problems for k<300, we can shift to making k>300 a priority, with k<300 done later.

Anon, you're running the effort here so if you and others feel strongly about searching by n-value, I'm fine with that. It won't affect how much I help the effort. I just wanted to bring up some points about searching by k-value instead like I did for n=50K-100K.


Gary
Okay, I think you just sold me on doing it by k-value.

One thing, though: you suggested dishing them out in chunks of 3 k's at a time. Do you have a guess of how long one of those files would take? That would be helpful in determining whether 3 k's at a time is too much for lower-powered users to handle.
Old 2008-03-24, 20:22   #175
gd_barnes
 
 
May 2007
Kansas; USA

3³×5×7×11 Posts

Quote:
Originally Posted by Anonymous View Post
Okay, I think you just sold me on doing it by k-value.

One thing, though: you suggested dishing them out in chunks of 3 k's at a time. Do you have a guess of how long one of those files would take? That would be helpful in determining whether 3 k's at a time is too much for lower-powered users to handle.
Now, don't let me influence you. lmao

Not a clue on the amount of time. I will guess that on average, it would be ~10-14 days for 3 k's but maybe 7-10 days. I'll speculate about as long as a drive 3 range at about n=340K. Somewhat large but not too bad.

Here's a suggestion that should allow people with all different resources to easily reserve files of the size that they want: Kosmaj does it, and it's a very good idea (I bet you never thought you'd hear that, lol). Put the number of k/n pairs by the file link. Since there will be quite a bit of variability in file size, if you have about 10-20 3-k files posted, people with more resources can take the bigger ones and people with fewer resources can take the smaller ones. So that we aren't testing all over the place at once, posting no more than 20 files at a time keeps it manageable.
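(Editor's note: counting the k/n pairs per file, so the count can be posted next to each link as suggested above, is trivial; a sketch assuming the usual header-plus-pairs layout:)

```python
def pair_count(path):
    """Number of k/n candidate lines in a sieve file (skips the header)."""
    with open(path) as f:
        next(f)                      # skip the sieve header line
        return sum(1 for line in f if line.strip())
```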

You could do this in any number of ways. One suggestion might be to post a group of 10 files for k=2-32 and another group of 10 files for k=300-330. The very low k-ranges will probably be pretty accurate, so to get some 'bang for our buck', doing both at the same time would make it more 'fun', so to speak.

Not that I need to beat a dead horse any more, but another reason I now remember that it was so helpful to search by k-value is that some single k-values had several errors while other entire 100-k ranges had none. I found one k that had 3 missing primes. Obviously the same person(s) searched (or didn't search) the completed ranges at PrimeSearch. When we find a missing prime, we'll want to pay close attention to that k-value, and to any k-values around it that might have been originally tested by the same person(s) at PrimeSearch.

There is one main case against searching by k-value that I forgot about. Do we anticipate that an LLRnet server will be used for this AND will there potentially be a large number of participants using that server? All of my logic flies out the window in that case. If we can utilize a server and we have large #'s of resources, we'll finish so quickly that it doesn't really matter how we test. We'll just confirm everything at the end after sorting by k. We could set one up right away but limit the # of people using it at n=100K (i.e. encouraging people to do manual reservations if we near a perceived problem processing point on the server) and then open it up to more people as we progress upwards. THAT is a very good case for searching by n-value!!


Gary

Last fiddled with by gd_barnes on 2008-03-24 at 20:25
Old 2008-03-24, 20:34   #176
Flatlander
I quite division it
 
 
"Chris"
Feb 2005
England

100000011101₂ Posts

Quote:
Originally Posted by Flatlander View Post
...
The link to P95 v25.3 (Windows) is dead; is there somewhere else I can get a copy? (I assume this is the one I need for stress testing?)
...
Anyone? I'd like to get the stress test out of the way a.s.a.p.
