2013-12-18, 00:16   #45
Nov 2004
California
2^{3}·3·71 Posts 
The 100Ks starting at 702943047930463 are all at n=1M. All primes so far:
38639133371071424*4096^331-1
1103881474359425024*4096^901-1
25991772747414464*4096^2031-1
39170360036452544*4096^6051-1
725624444232314*4096^41181-1
2013-12-19, 07:57   #46
Jun 2003
1579_{10} Posts 
Quote:
Thanks! 

2013-12-22, 16:39   #47
Feb 2003
773_{16} Posts 
Citrix, right now I'm on Christmas vacation with only limited resources. I'll be back after New Year's Day and then I'll have a look at this. Is a Linux binary (64 or 32 bit?) okay for you, or do you prefer a Windows version? In the latter case I would need to set up a proper compiler first...

2013-12-22, 20:08   #48
Jun 2003
3053_{8} Posts 
Quote:


2014-01-03, 10:48   #49
Feb 2003
1,907 Posts 
Quote:
I changed only one single line of code (in file "sr1sieve.h"): Code:
#define LIMIT_BASE 51840 // was 720
It must also be a multiple of BASE_MULTIPLE, which is currently set to 30.

I tried even higher values for LIMIT_BASE (e.g. up to 453600) and also smaller values for BASE_MULTIPLE (e.g. 12). However, this seems to work only on Linux. On Windows, larger values for LIMIT_BASE (e.g. > 100000) cause the program to crash...

For your candidates in base 2 the optimal Q value seems to be either 4320 (3*1440) or 5760 (4*1440). I suggest that you use the "-vv" switch to get some additional output, e.g. the number of subsequences and the total run time.

We could further optimize the sieve by adjusting (lowering) BASE_MULTIPLE. Perhaps this might be useful for larger bases, e.g. b=4096 (2^12). But this would also depend on your preselection procedure for generating the candidates.

Last fiddled with by Thomas11 on 2014-01-03 at 10:50

2014-01-03, 20:28   #50
Jun 2003
1,579 Posts 
Thank you for the binary... it does work well. I got a 25-30% speed-up.
Could you also compile a Windows version with #define LIMIT_BASE 80640 (5760*7*2)? I have tested several different k values and it appears that the larger the Q value, the faster the sieve. (I think this might have to do with the weight of the k: lower weights will need larger Q values.) Quote:
What compiler/IDE are you using to compile on Windows? Also, over the holidays I generated some k which are among the lowest-weight k under 2^64. Unfortunately they all require generic FFT. Some of them have fewer than 10 candidates per million n left. If anyone wants them I can post them.

2014-01-03, 21:59   #51
Feb 2003
1,907 Posts 
Here are two new binaries: one with LIMIT_BASE = 80640 (version c) and one with 86400 (version d). The latter would also allow for Q=7200 (= 5*1440).
Regarding the problem for LIMIT_BASE > 100000: It does some startup (loading the sieve file and setting up the sieve), but then a window pops up telling me that "This program has stopped working" (or similar, since I have a German version of Windows). It's likely a memory allocation problem; there are quite a few arrays of size LIMIT_BASE...

Regarding the compiler: I'm using the latest MinGW64 cross-compiler (gcc 4.9.0) on a 64-bit Linux system. I also tried an older native MinGW64 on Windows but ran into trouble with missing or wrong components of the toolchain, thus the cross-compiler... For the Linux binaries (where LIMIT_BASE can be up to about 500000) I'm using gcc 4.6.3 and 4.7.2. Maybe the newer 4.9.0 of MinGW64 is somehow faulty...

And a few words on the sieving speed: For the fastest sieve one should try to get Q as high as possible, while keeping the number of subsequences as low as possible. As a test case I took k=355262321784119, nmin=1, nmax=10M:

For Q=4320 (=3*1440) there are 3 subsequences, 3.838 secs.
For Q=5760 (=4*1440) there are 4 subsequences, 3.806 secs.
For Q=7200 (=5*1440) there are 3 subsequences, 3.884 secs.

Due to the lower number of subsequences for Q=7200 one would expect some higher speed, but the opposite is the case. Obviously the situation is a bit more complicated. One should keep in mind that due to the very low density (and thus low number of remaining n) there is also some imbalance in the BSGS procedure (overhead vs. reduced number of steps).

Last fiddled with by Thomas11 on 2014-01-03 at 22:04
2014-01-04, 00:22   #52
Jun 2003
11000101011_{2} Posts 
Quote:
Currently the power residues are calculated for bases up to 16, 9, 5 for 2, 3, 5 respectively. If there were a way to extend this to factors of 5760 (or higher), the program would be much faster. I don't know if this can be easily changed in sr1sieve.

2014-01-04, 02:13   #53
Jun 2003
62B_{16} Posts 
Try compiling with Code:
#define POWER_RESIDUE_LCM 40320
#define POWER_RESIDUE_DIVISORS 96
in sr1sieve.h to see if this will be faster.
2014-01-04, 10:18   #54
Feb 2003
773_{16} Posts 
Quote:
Using the values you suggested leads to a segmentation fault (also on Linux; not right at the beginning, but somewhere during the sieve). The attached version (e) of sr1sieve is compiled with:
#define POWER_RESIDUE_LCM 11520
#define POWER_RESIDUE_DIVISORS 54
which seem to be the largest possible without causing a segfault.

I tried it on Linux and Windows but, as before, I don't see any significant change in speed. If anything, it seems to be a bit slower. In my opinion, changing the parameters for the power residue test might be useful only for the general case (ordinary k). But our low-weight k's are already "designed" to fail that test. Maybe when switching to higher bases (2^m, like 4096) it could be of some use...

Last fiddled with by Thomas11 on 2014-01-04 at 10:21

2014-01-08, 23:55   #55
Jun 2003
11000101011_{2} Posts 
Quote:
a) Q=7200 should be faster. Looking into bsgs.c, from what I understand, sr1sieve calculates the power residue for each subsequence. It only needs to calculate it for one of the sequences. This might be the cause of the increased overhead. Could you take a look at the code... if this is a bug, we can try to get it fixed.

b) Another possible solution would be to set #define BASE_MULTIPLE 1440 and #define LIMIT_BASE 1440*5 and see if this lowers the overhead.

c) Also, the run time will be sqrt(N*#subsequences/Q), with N=10M:
For Q=7200: time = 64.54
For Q=5760: time = 83.33
Would this be within the error of measurement? Maybe we need a larger N range to notice any significant difference.

II) For #define POWER_RESIDUE_LCM it would be ideal to choose a value such that p % POWER_RESIDUE_LCM == 1.
a) One way of doing this would be to store the factorization of each p-1 and then keep changing the value of POWER_RESIDUE_LCM for each p. This might be too cumbersome to do (possibly rewriting the whole program).
b) Another, simpler solution would be to define POWER_RESIDUE_LCM and then only check p such that p % POWER_RESIDUE_LCM == 1. This can be easily accomplished by adding an if check in the primes.c code. For some smooth POWER_RESIDUE_LCM values we can sieve much, much deeper and find a few extra factors.

III) The third possibility is to generate a large number of subsequences by using a large Q and then only test the subsequences that have a high weight. This might help find a few extra factors. In the best case scenario (if for some k this exists, i.e. a k with a lot of heavy-weight subsequences) you can reduce the run time from sqrt(N) to sqrt(# of candidates left). The question that I am currently thinking about is how to choose such a Q for any given k.

Last fiddled with by Citrix on 2014-01-08 at 23:58
