#45
Nov 2004
California
2³×3×71 Posts
The 100Ks starting at 702943047930463 are all at n=1M. All primes so far:
38639133371071424*4096^33-1
1103881474359425024*4096^90-1
25991772747414464*4096^203-1
39170360036452544*4096^605-1
725624444232314*4096^4118-1
#46
Jun 2003
3×5×107 Posts

Thanks!
#47
Feb 2003
2⁷·3·5 Posts
Citrix, right now I'm on Christmas vacation with only limited resources. I'll be back after New Year's Day and then I'll have a look at this. Is a Linux binary (64 or 32 bit?) okay for you, or do you prefer a Windows version? In the latter case I would need to set up a proper compiler first...
#48
Jun 2003
11001000101₂ Posts
#49
Feb 2003
2⁷×3×5 Posts
I changed only a single line of code (in file "sr1sieve.h"):

Code:
#define LIMIT_BASE 51840 // was 720

It must also be a multiple of BASE_MULTIPLE, which is currently set to 30. I tried even higher values for LIMIT_BASE (e.g. up to 453600) and also smaller values for BASE_MULTIPLE (e.g. 12). However, this seems to work only on Linux. On Windows, larger values for LIMIT_BASE (e.g. > 100000) cause the program to crash...

For your candidates in base 2 the optimal Q value seems to be either 4320 (3*1440) or 5760 (4*1440). I suggest that you use the "-vv" switch to get some additional output, e.g. the number of subsequences and the total run time.

We could further optimize the sieve by adjusting (lowering) BASE_MULTIPLE. Perhaps this might be useful for larger bases, e.g. b=4096 (2^12). But this would also depend on your preselection procedure for generating the candidates.

Last fiddled with by Thomas11 on 2014-01-03 at 10:50
#50
Jun 2003
11001000101₂ Posts
Thank you for the binary... it does work well. I got a 25-30% speed-up.

Could you also compile a Windows version with #define LIMIT_BASE 80640 (5760*7*2)? I have tested several different k values and it appears that the larger the Q value, the faster the sieve. (I think this might have to do with the weight of the k -- lower weights will need larger Q values.)

What compiler/IDE are you using to compile on Windows?

Also, over the holidays I generated some k which are among the lowest weight k under 2^64. Unfortunately they all require generic FFT. Some of them have fewer than 10 candidates per million n left. If anyone wants them I can post them.
#51
Feb 2003
2⁷·3·5 Posts
Here are two new binaries: one with LIMIT_BASE = 80640 (version c) and one with 86400 (version d). The latter would also allow for Q=7200 (= 5*1440).

Regarding the problem for LIMIT_BASE > 100000: It does some startup (loading the sieve file and setting up the sieve), but then a window pops up saying "This program has stopped working" (or similar, since I have a German version of Windows). It's likely a memory allocation problem. There are quite a few arrays of size LIMIT_BASE...

Regarding the compiler: I'm using the latest MinGW64 cross-compiler (gcc 4.9.0) on a 64 bit Linux system. I also tried an older native MinGW64 on Windows but ran into trouble with missing or wrong components of the toolchain. Thus the cross-compiler... For the Linux binaries (where LIMIT_BASE can be up to about 500000) I'm using gcc 4.6.3 and 4.7.2. Maybe the newer 4.9.0 of MinGW64 is somehow faulty...

And a few words on the sieving speed: For the fastest sieve one should try to get Q as high as possible, while keeping the number of subsequences as low as possible. As a test case I took k=355262321784119, nmin=1, nmax=10M.

For Q=4320 (=3*1440) there are 3 subsequences, 3.838 secs.
For Q=5760 (=4*1440) there are 4 subsequences, 3.806 secs.
For Q=7200 (=5*1440) there are 3 subsequences, 3.884 secs.

Due to the lower number of subsequences for Q=7200 one would expect some higher speed, but the opposite is the case. Obviously the situation is a bit more complicated. One should keep in mind that due to the very low density (and thus low number of remaining n) there is also some imbalance in the BSGS procedure (overhead vs. reduced number of steps).

Last fiddled with by Thomas11 on 2014-01-03 at 22:04
#52
Jun 2003
3×5×107 Posts
Currently the power residues are calculated for prime powers up to 16, 9, 5 (i.e. for the primes 2, 3, 5). If there were a way to extend this to the factors of 5760 (or higher) then the program would be much faster. I don't know if this can be easily changed in sr1sieve.
#53
Jun 2003
3·5·107 Posts
Try compiling with

Code:
#define POWER_RESIDUE_LCM 40320
#define POWER_RESIDUE_DIVISORS 96

in sr1sieve.h to see if this will be faster.
#54
Feb 2003
780₁₆ Posts
Using the values you suggested leads to a segmentation fault (also on Linux; not right at the beginning, but somewhere during the sieve). The attached version (e) of sr1sieve is compiled with:

Code:
#define POWER_RESIDUE_LCM 11520
#define POWER_RESIDUE_DIVISORS 54

which seem to be the largest possible values without causing a segfault. I tried it on Linux and Windows but, as before, I don't see any significant change in speed. If anything, it seems to be a bit slower.

In my opinion, changing the parameters for the power residue test might be useful only for the general case (ordinary k). But our low weight ks are already "designed" to fail that test. Maybe when switching to higher bases (2^m, like 4096) it could be of some use...

Last fiddled with by Thomas11 on 2014-01-04 at 10:21
#55
Jun 2003
3·5·107 Posts
a) Q=7200 should be faster, looking into bsgs.c, from what I understand...

b) Another possible solution would be to set

Code:
#define BASE_MULTIPLE 1440
#define LIMIT_BASE 1440*5

and see if this lowers the overhead.

c) Also, the run time will be sqrt(N*#subsequences/Q), with N=10M:

For Q=7200: time=64.54
For Q=5760: time=83.33

Would this be within the error of measurement? Maybe we need a larger N range to notice any significant difference.

II) For POWER_RESIDUE_LCM it would be ideal to choose a value such that p%POWER_RESIDUE_LCM == 1.

a) One way of doing this would be to store the factorization of each p-1 and then keep changing the value of POWER_RESIDUE_LCM for each p. This might be too cumbersome to do (possibly rewriting the whole program).

b) A simpler solution would be to define POWER_RESIDUE_LCM and then only check p such that p%POWER_RESIDUE_LCM == 1. This can be easily accomplished by adding an IF condition in the primes.c code. For some smooth POWER_RESIDUE_LCM values we can sieve much, much deeper and find a few extra factors.

III) The third possibility is to generate a large number of subsequences by using a large Q and then only test the subsequences that have a high weight. This might help find a few extra factors. In the best case scenario (if for some k this exists, i.e. a k with a lot of heavy weight subsequences) you can reduce the run time from sqrt(N) to sqrt(# of candidates left). The question I am currently thinking about is how to choose such a Q for any given k.

Last fiddled with by Citrix on 2014-01-08 at 23:58