mersenneforum.org > Prime Search Projects > Conjectures 'R Us
Old 2012-09-05, 02:47   #12
CGKIII

Thanks everyone for the clarifications. After a lot of investigating, it seems like the next/first PC I build will be "pretty badass" and won't happen until maybe Christmastime. The i7-3770k seems really quite neat.

Is there any way to do CRUS work on a GPU (and if not, are there any plans in the next 1-2 years to enable that functionality)? This is quickly turning into my favorite project, and I would like to be able to invest in gear that will be helpful.

Newish question: What's the probability of finding a prime on a given test? I assume it's some function of b, k, n, form (S or R), and the sieve depth.

For the past few days I've been spending a lot of time trying to estimate the amount of work remaining. I'm currently taking S391 from n=2,500 to n=25,000 starting with 938 k's. I'm at n ~ 9,900 and have eliminated 393 k's so far.

I understand that time to test scales with n^2 (driven by # of digits, right? So a higher base would have higher times). So I can easily get a sense of the time left to compute all remaining tests. However, finding primes cuts things down quite a bit, and I haven't been able to quite figure out how to get the probability of finding a prime. A rough guess has been that "for the same b, for the same form, for the same sieve depth, for the same n range, roughly the same percent of k's will be eliminated." I'm not entirely sure why that would be the case, but it seemed to hold somewhat reasonably for my results thus far. Does anyone have anything better? This is using pfgw (and sometimes pfgw64, if that matters).
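For what it's worth, the standard heuristic comes from Mertens' theorem: a random number near N is prime with probability about 1/ln N, and surviving a sieve to depth P multiplies that by roughly e^gamma * ln P (e^gamma ~ 1.781). A rough sketch in Python -- the function name and the example inputs are mine, not from any CRUS tool:

```python
import math

def prime_probability(k, b, n, sieve_depth):
    """Heuristic chance that a sieved candidate k*b^n +/- 1 is prime.

    A random integer near N is prime with probability ~1/ln(N); by
    Mertens' theorem, surviving trial factoring to depth P multiplies
    that by ~e^gamma * ln(P). The +/-1 form and the weight of k are
    ignored, so treat this as a ballpark figure only.
    """
    gamma = 0.5772156649015329            # Euler-Mascheroni constant
    ln_n = math.log(k) + n * math.log(b)  # ln(k * b^n)
    return math.exp(gamma) * math.log(sieve_depth) / ln_n

# e.g. a hypothetical S391 candidate near n=10,000 sieved to P=1e9:
p = prime_probability(1000, 391, 10_000, 1e9)   # roughly 6e-4
```

Summing this over all remaining candidates gives an expected number of primes, which is the missing piece of the work estimate.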
Old 2012-09-05, 04:07   #13
mdettweiler

Quote:
Originally Posted by CGKIII View Post
Is there any way to do CRUS work on a GPU (and if not, are there any plans in the next 1-2 years to enable that functionality)? This is quickly turning into my favorite project, and I would like to be able to invest in gear that will be helpful.
There is--sort of. An llrCUDA program (for nVidia GPUs only--ATI is not as easy to program this kind of application on) has been developed, but it's not entirely ready for "prime time" yet; it produces good results and is fairly fast on numbers of the right size (more on this later), but can be a bit tricky to set up and use. At present, it's distributed as source code only--you need to install the CUDA development libraries and compile it yourself. It can be done, but it's definitely a bit of an adventure.

You can check it out more here--the thread meanders through the process of the program's development from the beginning, so it's a bit long and uninteresting to follow, but if you skip to the last page (around page 250 or so, in case your # of posts per page setting is different than mine) you should be able to get to the latest code pretty quickly. One thing to look out for: some versions only support k*2^n+-1 (base 2), while others support the full gamut of bases that CPU LLR does. The main developer doesn't speak native English, so it's not entirely clear to me at first glance which versions support other bases and which don't (I haven't been following the project as closely of late); I'd try the last version posted in that thread that doesn't say "only k*2^n+-1" and see if it works, and if it gives an error about the tests not being base 2, try working your way backward until you get one that does work.

You might also try asking around at the PrimeGrid (http://www.primegrid.com) forums to see if they've been doing any work on llrCUDA--they have a more sizable community of GPU-endowed testers to work with, so it's possible that development has continued at a more robust pace there even though things are a bit scattered on the mersenneforum side of things. They might, for instance, have precompiled binaries available so you don't need to build the program from source.

The thing that makes llrCUDA rather tricky to fit into a CRUS effort is that the nature of GPU computing makes for algorithms that are much more effective on large numbers than small ones. As an example, a 100,000 digit number (~n=300K base 2) will take less than 5 minutes to LLR on a fast CPU, but significantly longer on a GPU; but a 10,000,000 digit number, which takes 3-4 weeks to test on a fast CPU, can be done in a few days on a GPU. This is because the GPU's advantage is that it has, basically, a lot of little processors that aren't much by themselves but can compute quite a bit working together. The LLR algorithm is iterative, and thus isn't very easily parallelizable; it can still be done to an extent, but it bottlenecks on latency of communication between computation threads, so the GPU is only using a fraction of its potential, as it were. (Sieving, on the other hand, is very parallelizable, and can be sped up hundreds-fold on a GPU.) Thus, the GPU can be a great help if you want to test really big numbers (like at GIMPS, where they have a well-developed CUDALucas program for LL tests which are very similar to LLRs, and at PrimeGrid's Generalized Fermat subproject, where they are testing similarly large numbers); but its usefulness is limited at CRUS, where our focus is to tackle broader ranges of smaller numbers instead of taking a specific, narrower swath of numbers to high search depths.

Long story short: GPUs can have some limited use at CRUS, but often the trouble is more than the reward is worth. A GPU is, generally, much better put to use at something like PrimeGrid's sieving projects or their Generalized Fermat search; for CRUS, a good fast CPU is going to do far more for you.
Old 2012-09-05, 08:44   #14
henryzz

Quote:
Originally Posted by CGKIII View Post
Newish question: What's the probability for finding a prime on a given test? I assume it's some function of b, k, n, form(S or R), and the sieve depth.
http://mersenneforum.org/showpost.ph...9&postcount=50

Quote:
For the past few days I've been spending a lot of time trying to estimate the remaining amount of work left. I'm currently taking S391 from n=2,500 to n=25,000 starting with 938 k's. I'm at n ~ 9,900 and have eliminated 393 k's so far.

I understand that time to test scales with n^2 (driven by # of digits, right? So a higher base would have higher times). So I can easily get a sense of the time left to compute all remaining tests. However, finding primes cuts things down quite a bit, and I haven't been able to quite figure out how to get the probability of finding a prime. A rough guess has been that "for the same b, for the same form, for the same sieve depth, for the same n range, roughly the same percent of k's will be eliminated." I'm not entirely sure why that would be the case, but it seemed to hold somewhat reasonably for my results thus far. Does anyone have anything better? This is using pfgw (and sometimes pfgw64, if that matters).
When optimising sieving we test a candidate 60% through the range of n (and of k, if there is a wide range of k; the FFT length depends on k as well). We take that as the average test time and multiply it by the number of candidates remaining to get the total testing time. This doesn't include k's being removed, but it can give a reasonable estimate sometimes.
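That rule of thumb is easy to put into code (the names and the example figures below are hypothetical, just to show the arithmetic):

```python
def estimate_total_time(test_time_at_60pct, candidates_remaining):
    # Time a single candidate 60% of the way through the n-range,
    # treat that as the average per-test time, and multiply by the
    # number of candidates the sieve left. No credit is given for
    # k's removed by primes along the way, so this is a worst case.
    return test_time_at_60pct * candidates_remaining

# e.g. 90 s per test at the 60% point and 120,000 candidates left:
total_hours = estimate_total_time(90, 120_000) / 3600   # -> 3000.0
```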
Old 2012-09-05, 18:17   #15
Puzzle-Peter

Quote:
Originally Posted by henryzz View Post
When optimising sieving we test a candidate 60% through the range of n (and of k, if there is a wide range of k; the FFT length depends on k as well). We take that as the average test time and multiply it by the number of candidates remaining to get the total testing time. This doesn't include k's being removed, but it can give a reasonable estimate sometimes.
That works quite well if you have a large number of k values left. With a single k left, each test might be the last one. With very few k's left, a prime found will reduce the work left by a significant percentage. Any estimate becomes rather useless in these cases. You will still get a worst-case guess, which can be nice for deciding whether to tackle a range or not.
Old 2012-09-05, 20:09   #16
CGKIII

Thanks. In C10, you have a "magic number" of 1.781. Can you explain that to me?

Additionally, B13 says the form is k*2^n +/- 1, but C4 is an input for a different base. Should B13 read: k*b^n...? Or are there other adjustments necessary for different bases?
Old 2012-09-05, 20:13   #17
Dubslow

Quote:
Originally Posted by mdettweiler View Post
The thing that makes llrCUDA rather tricky to fit into a CRUS effort is that the nature of GPU computing makes for algorithms that are much more effective on large numbers than small ones. As an example, a 100,000 digit number (~n=300K base 2) will take less than 5 minutes to LLR on a fast CPU, but significantly longer on a GPU; but a 10,000,000 digit number, which takes 3-4 weeks to test on a fast CPU, can be done in a few days on a GPU. This is because the GPU's advantage is that it has, basically, a lot of little processors that aren't much by themselves but can compute quite a bit working together. The LLR algorithm is iterative, and thus isn't very easily parallelizable; it can still be done to an extent, but it bottlenecks on latency of communication between computation threads, so the GPU is only using a fraction of its potential, as it were. (Sieving, on the other hand, is very parallelizable, and can be sped up hundreds-fold on a GPU.) Thus, the GPU can be a great help if you want to test really big numbers (like at GIMPS, where they have a well-developed CUDALucas program for LL tests which are very similar to LLRs, and at PrimeGrid's Generalized Fermat subproject, where they are testing similarly large numbers); but its usefulness is limited at CRUS, where our focus is to tackle broader ranges of smaller numbers instead of taking a specific, narrower swath of numbers to high search depths.
I don't know anything about the LLR test or llrCUDA, but I do know that CUDALucas scales almost as well as Prime95 with exponent (which, for Mersenne numbers without a multiplier, scales directly with the size of the number).
Old 2012-09-05, 22:16   #18
mdettweiler

Quote:
Originally Posted by Dubslow View Post
I don't know anything about the LLR test or llrCUDA, but I do know that CUDALucas scales almost as well as Prime95 with exponent (which, for Mersenne numbers without a multiplier, scales directly to the size of the number).
Hmm...does that hold for really small (n<2M or even <1M) numbers though? That's typically the range we're dealing with here at CRUS. When testing llrCUDA (on a GTX 460) I found that it was slower than a CPU up to about n=1.2M or so, and increasingly eclipsed the CPU as n (exponent) got bigger. I didn't test much higher than n=9M or so, so I don't know if it goes linear at some point in there. (The only k*b^n+-c project that's anywhere near GIMPS levels is Seventeen or Bust, which last I recall had a leading edge around 18-19M; nothing else is higher than 7M-8M or so, and most are far below that).

I'm not super knowledgeable about exactly how the GPU algorithm differs from the CPU, but AFAIU the trick with GPUs is to break a problem up into tons of relatively-small chunks for processing on the individual GPU "cores". I remember being told that for tpsieve/ppsieve (k*2^n+-1 sieving on GPUs), it needs a range of at least k=10,000 or so to reach full efficiency--indeed, you can expand to a range of that size nearly "for free" compared to a smaller (say k=1,000 or less) range. So if LL/LLR applications on the GPU are remotely like that (as much so as can be achieved for an iterative method), I would imagine there's a similar "tipping point" below which the GPU-testing-time graph becomes rather discontinuous.

Old 2012-09-05, 22:37   #19
Dubslow

Quote:
Originally Posted by mdettweiler View Post
Hmm...does that hold for really small (n<2M or even <1M) numbers though? That's typically the range we're dealing with here at CRUS. When testing llrCUDA (on a GTX 460) I found that it was slower than a CPU up to about n=1.2M or so, and increasingly eclipsed the CPU as n (exponent) got bigger. I didn't test much higher than n=9M or so, so I don't know if it goes linear at some point in there. (The only k*b^n+-c project that's anywhere near GIMPS levels is Seventeen or Bust, which last I recall had a leading edge around 18-19M; nothing else is higher than 7M-8M or so, and most are far below that).
Code:
~/CUDALucas$ CUDALucas -r

Starting M86243 fft length = 6K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.00002, max error = 0.00002
Iteration  200, average error = 0.00002, max error = 0.00002
Iteration  300, average error = 0.00002, max error = 0.00002
Iteration  400, average error = 0.00002, max error = 0.00002
Iteration  500, average error = 0.00002, max error = 0.00002
Iteration  600, average error = 0.00002, max error = 0.00002
Iteration  700, average error = 0.00002, max error = 0.00002
Iteration  800, average error = 0.00002, max error = 0.00002
Iteration  900, average error = 0.00002, max error = 0.00002
Iteration 1000, average error = 0.00002 < 0.25 (max error = 0.00002), continuing test.
Iteration 10000 M( 86243 )C, 0x23992ccd735a03d9, n = 6K, CUDALucas v2.04 Beta err = 0.0000 (0:01 real, 0.0720 ms/iter, ETA 0:05)
This residue is correct.

Starting M132049 fft length = 8K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.00029, max error = 0.00040
Iteration  200, average error = 0.00031, max error = 0.00037
Iteration  300, average error = 0.00033, max error = 0.00043
Iteration  400, average error = 0.00033, max error = 0.00040
Iteration  500, average error = 0.00034, max error = 0.00041
Iteration  600, average error = 0.00034, max error = 0.00040
Iteration  700, average error = 0.00034, max error = 0.00040
Iteration  800, average error = 0.00034, max error = 0.00043
Iteration  900, average error = 0.00034, max error = 0.00041
Iteration 1000, average error = 0.00034 < 0.25 (max error = 0.00037), continuing test.
Iteration 10000 M( 132049 )C, 0x4c52a92b54635f9e, n = 8K, CUDALucas v2.04 Beta err = 0.0005 (0:01 real, 0.0900 ms/iter, ETA 0:10)
This residue is correct.

Starting M216091 fft length = 12K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.00310, max error = 0.00415
Iteration  200, average error = 0.00341, max error = 0.00427
Iteration  300, average error = 0.00349, max error = 0.00415
Iteration  400, average error = 0.00351, max error = 0.00415
Iteration  500, average error = 0.00355, max error = 0.00430
Iteration  600, average error = 0.00355, max error = 0.00415
Iteration  700, average error = 0.00356, max error = 0.00421
Iteration  800, average error = 0.00358, max error = 0.00439
Iteration  900, average error = 0.00361, max error = 0.00452
Iteration 1000, average error = 0.00361 < 0.25 (max error = 0.00427), continuing test.
Iteration 10000 M( 216091 )C, 0x30247786758b8792, n = 12K, CUDALucas v2.04 Beta err = 0.0049 (0:01 real, 0.1077 ms/iter, ETA 0:21)
This residue is correct.

Starting M756839 fft length = 40K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.02011, max error = 0.02734
Iteration  200, average error = 0.02237, max error = 0.02637
Iteration  300, average error = 0.02292, max error = 0.02539
Iteration  400, average error = 0.02360, max error = 0.02734
Iteration  500, average error = 0.02393, max error = 0.03125
Iteration  600, average error = 0.02415, max error = 0.03125
Iteration  700, average error = 0.02428, max error = 0.02734
Iteration  800, average error = 0.02434, max error = 0.02591
Iteration  900, average error = 0.02448, max error = 0.03125
Iteration 1000, average error = 0.02455 < 0.25 (max error = 0.02832), continuing test.
Iteration 10000 M( 756839 )C, 0x5d2cbe7cb24a109a, n = 40K, CUDALucas v2.04 Beta err = 0.0352 (0:02 real, 0.2004 ms/iter, ETA 2:28)
This residue is correct.

Starting M859433 fft length = 48K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.00617, max error = 0.00830
Iteration  200, average error = 0.00695, max error = 0.00830
Iteration  300, average error = 0.00714, max error = 0.00830
Iteration  400, average error = 0.00722, max error = 0.00806
Iteration  500, average error = 0.00726, max error = 0.00806
Iteration  600, average error = 0.00732, max error = 0.00879
Iteration  700, average error = 0.00737, max error = 0.00928
Iteration  800, average error = 0.00742, max error = 0.00940
Iteration  900, average error = 0.00742, max error = 0.00842
Iteration 1000, average error = 0.00743 < 0.25 (max error = 0.00928), continuing test.
Iteration 10000 M( 859433 )C, 0x3c4ad525c2d0aed0, n = 48K, CUDALucas v2.04 Beta err = 0.0103 (0:02 real, 0.2172 ms/iter, ETA 3:02)
This residue is correct.

Starting M1257787 fft length = 64K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.07321, max error = 0.10156
Iteration  200, average error = 0.08017, max error = 0.09375
Iteration  300, average error = 0.08297, max error = 0.09863
Iteration  400, average error = 0.08498, max error = 0.10156
Iteration  500, average error = 0.08607, max error = 0.10156
Iteration  600, average error = 0.08709, max error = 0.10156
Iteration  700, average error = 0.08713, max error = 0.09570
Iteration  800, average error = 0.08771, max error = 0.10864
Iteration  900, average error = 0.08777, max error = 0.09961
Iteration 1000, average error = 0.08766 < 0.25 (max error = 0.09766), continuing test.
Iteration 10000 M( 1257787 )C, 0x3f45bf9bea7213ea, n = 64K, CUDALucas v2.04 Beta err = 0.1172 (0:03 real, 0.2629 ms/iter, ETA 5:25)
This residue is correct.

Starting M1398269 fft length = 72K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.05718, max error = 0.07617
Iteration  200, average error = 0.06297, max error = 0.07812
Iteration  300, average error = 0.06571, max error = 0.07812
Iteration  400, average error = 0.06654, max error = 0.08594
Iteration  500, average error = 0.06703, max error = 0.07812
Iteration  600, average error = 0.06736, max error = 0.07812
Iteration  700, average error = 0.06753, max error = 0.07422
Iteration  800, average error = 0.06797, max error = 0.08203
Iteration  900, average error = 0.06859, max error = 0.08203
Iteration 1000, average error = 0.06862 < 0.25 (max error = 0.07812), continuing test.
Iteration 10000 M( 1398269 )C, 0xa4a6d2f0e34629db, n = 72K, CUDALucas v2.04 Beta err = 0.0957 (0:03 real, 0.2993 ms/iter, ETA 6:53)
This residue is correct.

Starting M2976221 fft length = 160K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.03141, max error = 0.04297
Iteration  200, average error = 0.03608, max error = 0.04883
Iteration  300, average error = 0.03706, max error = 0.04211
Iteration  400, average error = 0.03796, max error = 0.04590
Iteration  500, average error = 0.03818, max error = 0.04297
Iteration  600, average error = 0.03842, max error = 0.04395
Iteration  700, average error = 0.03865, max error = 0.04883
Iteration  800, average error = 0.03869, max error = 0.04297
Iteration  900, average error = 0.03885, max error = 0.04492
Iteration 1000, average error = 0.03883 < 0.25 (max error = 0.04297), continuing test.
Iteration 10000 M( 2976221 )C, 0x2a7111b7f70fea2f, n = 160K, CUDALucas v2.04 Beta err = 0.0508 (0:06 real, 0.6127 ms/iter, ETA 30:13)
This residue is correct.

Starting M3021377 fft length = 160K
Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length.
Iteration  100, average error = 0.04585, max error = 0.06250
Iteration  200, average error = 0.05122, max error = 0.06152
Iteration  300, average error = 0.05345, max error = 0.06055
Iteration  400, average error = 0.05395, max error = 0.06445
Iteration  500, average error = 0.05475, max error = 0.06543
Iteration  600, average error = 0.05517, max error = 0.06348
Iteration  700, average error = 0.05539, max error = 0.06445
Iteration  800, average error = 0.05569, max error = 0.06323
Iteration  900, average error = 0.05577, max error = 0.06250
Iteration 1000, average error = 0.05579 < 0.25 (max error = 0.06250), continuing test.
Iteration 10000 M( 3021377 )C, 0x6387a70a85d46baf, n = 160K, CUDALucas v2.04 Beta err = 0.0725 (0:06 real, 0.6098 ms/iter, ETA 30:35)
This residue is correct.
The test runs the first 10,000 iterations of all the Mersenne primes, though I cut it short because of the character limit.

This is also a GTX 460, though I didn't bother testing the same exponents with Prime95. Even so, you can see from the ETAs how well it scales. (The ETAs might be a few percent longer than they should be, especially for these smaller exponents--the first 1,000 "careful" iterations take longer than average, depending on the user's settings.) The FFT takes the most time, and that's abstracted out to the cufft library from nVidia. 86243 is the (somewhat arbitrary) lower limit--you'd have to ask msft what makes that the limit, and how much lower the code can actually go.
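As a quick sanity check on those ETAs: an LL test of M_p = 2^p - 1 runs p - 2 squaring iterations, so the reported ms/iter times the exponent should roughly reproduce the ETA (a simple model that ignores the slower "careful" iterations at the start):

```python
def ll_test_minutes(exponent, ms_per_iter):
    # An LL test of M_p = 2^p - 1 performs p - 2 squarings.
    return (exponent - 2) * ms_per_iter / 1000 / 60

# From the log above: M3021377 at 0.6098 ms/iter
t = ll_test_minutes(3021377, 0.6098)   # ~30.7 min, matching the 30:35 ETA
```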

Old 2012-09-05, 23:20   #20
mdettweiler

Now that's interesting. It seems that the latest CUDALucas scales quite a bit better to lower n's than does llrCUDA. I haven't had the opportunity to try some of the more recent llrCUDAs (the GTX 460 I was testing on bit the dust), but I do remember it scaled much less favorably at the low end. I just tried testing M216091 using LLR on a 2.2 GHz Phenom II and it finished in 45 seconds, versus the ~20 second estimate you got on your 460. Clearly, the GPU does quite well even at that small size; but when I tried testing a number of similar size before with llrCUDA, it took about 20-30 minutes.

Part of this may be due to the fact that CUDALucas is a much more mature program and has had more time for polishing and tweaking--llrCUDA's development stagnated rather early in the process due to lack of significant interest (since the speed boost wouldn't have been as pronounced on typical LLR-sized numbers even if it matched CUDALucas's speed). That, and--as far as I understand it--it seemed llrCUDA scaled particularly badly as k increased, which is not so much the case on the CPU, where n's effect is far more pronounced.

That said, if someone with the appropriate knowledge picks up the development of llrCUDA at some point in the future, they may yet be able to get it up to more typical performance levels like those of CUDALucas or GeneferCUDA. I suspect its ultimate potential hasn't been tapped nearly as fully as that of either of the latter programs.
Old 2012-09-06, 10:36   #21
henryzz

Quote:
Originally Posted by CGKIII View Post
Thanks. In C10, you have a "magic number" of 1.781. Can you explain that to me?

Additionally, B13 says the form is k*2^n +/- 1, but C4 is an input for a different base. Should B13 read: k*b^n...? Or are there other adjustments necessary for different bases?
It certainly should say k*b^n... I don't have a clue how that has gone so long unnoticed.

1.781 is an approximation of e^gamma, where gamma is Euler's constant: http://www.wolframalpha.com/input/?i...7s+constant%29
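For the record, the constant checks out numerically (gamma here is the Euler-Mascheroni constant; this is just a one-liner to confirm the rounding):

```python
import math

# e^gamma (times ln P) is the factor by which surviving a sieve to
# depth P boosts a candidate's chance of being prime, per Mertens'
# theorem -- hence its appearance in the spreadsheet.
gamma = 0.5772156649015329
print(round(math.exp(gamma), 3))   # -> 1.781
```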
Old 2012-09-06, 13:31   #22
henryzz

I made a new version of the odds-of-prime spreadsheet. I have included CRUS support, so now you can get estimates of how many primes you'll find, taking k removal into account. The only problem with this is that each k is treated as having the same weight as the others.

Please post any suggestions for improvements. If anyone fancies making it colourful I would be grateful. It is beginning to look a little cluttered.
Attached Files
File Type: zip odds of prime4.zip (12.2 KB, 128 views)