#199
Feb 2007
211 Posts
Quote:
Code:
03:42:04 11481491 k's remaining. p=16023844223723527 divides k=87072309039
03:45:46 11481490 k's remaining. p=16023862458768221 divides k=64669832985
03:46:02 11481489 k's remaining. p=16023863669411093 divides k=93525250179
03:55:14 11481488 k's remaining. p=16023907257388957 divides k=81711418425
04:14:22 11481487 k's remaining. p=16024009456250219 divides k=64782567981
04:17:10 11481486 k's remaining. p=16024025948259773 divides k=80677472505
Example: for my range of 8.644T it found 329 sieve candidates, and my one core does approx. 105 Mp/sec, so it will take 8644/0.105 ≈ 82,300 seconds. 82,300 seconds / 329 candidates ≈ 250 seconds, or about 4:10 per candidate.
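The throughput arithmetic in the quote can be checked with a quick back-of-the-envelope script (the range size, throughput, and candidate count are the post's numbers; the variable names are mine):

```python
# Back-of-the-envelope sieve timing, using the numbers quoted above.
range_size = 8.644e12   # sieve range: 8.644T p-values
throughput = 105e6      # one core: ~105 Mp/sec
candidates = 329        # sieve candidates found in this range

total_seconds = range_size / throughput
per_candidate = total_seconds / candidates

print(f"total: {total_seconds:.0f} s")          # ~82324 s
print(f"per candidate: {per_candidate:.0f} s")  # ~250 s, i.e. about 4:10
```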
#200
"Lennart"
Jun 2007
2⁵×5×7 Posts
Quote:
I know all that. Now I have done 4 hours of sieving, and I have 214 sec/f (real time is 107 sec/f, but I sieve on 2 cores). /Lennart
Last fiddled with by Lennart on 2009-06-09 at 09:49
#201
Nov 2006
Earth
64₁₀ Posts
Quote:
We're using 64-bit tpsieve as the standard for the timings, since it's a little faster than NewPGen. Of course, different hardware is going to produce different results. It looks like your computer is better off LLRing than sieving.
#202
Jun 2009
2²·5²·7 Posts
Quote:
Testing on an X5482 @ 3200 MHz, I got 151 seconds to LLR the highest candidate 99899996781*2^333333-1. The same machine found one factor every 157 seconds in the range 19990T-20000T.
Regards, Peter
-------------------
MooooMoo: The next post and several ones below it refer to TPS's next project: sieving 1<k<10M, 480000<n<500000.
Last fiddled with by MooMoo2 on 2009-08-11 at 02:00
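The comparison above is the usual break-even rule: keep sieving while one factor is found faster than one LLR test runs. A minimal sketch with the post's two timings (the rule itself is the standard sieve-depth heuristic, not something stated in the post):

```python
# Standard sieve-depth heuristic: sieving remains worthwhile while the
# time to remove a candidate by finding a factor is below the LLR test time.
llr_seconds = 151     # LLR time for the highest candidate (from the post)
factor_seconds = 157  # observed time per factor at 19990T-20000T

keep_sieving = factor_seconds < llr_seconds
print(keep_sieving)  # False: this range is just past the break-even depth
```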
#203
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
10B7₁₆ Posts
482k-483k complete.
http://www.sendspace.com/file/wakik2
#204
Feb 2007
211 Posts
I use something called JS TEXT file merger.
http://www.tucows.com/preview/373437
It works great. You do have to remove the header, though; Find & Replace also works.
#205
Jan 2005
Caught in a sieve
5·79 Posts
Quote:
With multiple N's, the time required to sieve increases linearly with the number of N's; a combined sieve is just a constant factor faster than doing the N's independently. So I suggest breaking the sieve into at least 4 sets of N's, to be done in sequence. In the worst case this is equivalent to doing the whole sieve plus (sets) one-N sieves. In the best case, we might get a twin before we get to the later sieve(s).
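The worst-case claim above can be illustrated with a toy linear-cost model (the `fixed` and `per_n` values are made up for illustration; only the linear shape comes from the post):

```python
# Toy model: one sieve pass over m values of n costs fixed + per_n * m
# (arbitrary units). "fixed" is roughly the cost of a one-n sieve.
fixed = 100.0  # illustrative fixed cost per sieve pass (assumed)
per_n = 10.0   # illustrative incremental cost per extra n (assumed)

def sieve_cost(n_count, sets=1):
    """Cost of sieving n_count n's, split evenly into `sets` sequential passes."""
    per_set = n_count / sets
    return sets * (fixed + per_n * per_set)

whole = sieve_cost(20, sets=1)  # one combined sieve
split = sieve_cost(20, sets=4)  # four sequential sets
# The splitting overhead is (sets - 1) * fixed -- on the order of a few
# one-n sieves, matching the worst case described above.
print(whole, split, split - whole)  # 300.0 600.0 300.0
```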
#206
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
11×389 Posts
Well, the only real difference is the header, so the only thing really standing in the way is that the header tells it something it doesn't understand. If the headers were replaced, it would work, but that would probably be enough trouble that it wouldn't be worth using srfile instead of something like what I described. By the way, I should note that I used a hex editor to get rid of all the headers so that I could also remove each header's line break: I replaced the hex for the header and its line break with nothing. You could also, I suppose, put the header back at the beginning with a hex editor instead of a separate app, but XVI32 isn't very good at editing text.
Last fiddled with by TimSorbet on 2009-08-09 at 18:40
#207
"Michael Kwok"
Mar 2006
49D₁₆ Posts
#208
Mar 2003
New Zealand
13·89 Posts
You might be biting off more than you can chew with this range.
tpsieve will need a bitmap with (nmax-nmin)*(kmax-kmin)/6 bits to hold the combined sieve file, so I think that is about 4 GB for 1 <= k <= 10M, 480K <= n <= 500K. Also remember that tpsieve is still a lot slower than NewPGen in 32-bit mode.

Edit: The sieve efficiency will decrease as the range of n increases. I know there is a trade-off, because the PRP time also increases as k increases, but you should test carefully whether the trade-off is worth it. One way to test is to run without a sieve file (specify the full range with -k -K -n -N and no output file) and sieve for very large factors, so that not many are reported.

When I added the range-of-n feature I really had in mind a very small range, like fewer than 10 n's; I hadn't thought about what would happen with a very large range.
Last fiddled with by geoff on 2009-08-10 at 03:57
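Geoff's roughly-4-GB figure follows directly from the bitmap formula he gives; a quick check (the formula and both ranges are from the post, the variable names are mine):

```python
# Bitmap size for tpsieve's combined sieve file: (nmax-nmin)*(kmax-kmin)/6 bits.
kmin, kmax = 1, 10_000_000     # 1 <= k <= 10M
nmin, nmax = 480_000, 500_000  # 480K <= n <= 500K

bits = (nmax - nmin) * (kmax - kmin) / 6
gib = bits / 8 / 2**30
print(f"{gib:.2f} GiB")  # ~3.88 GiB, i.e. roughly the 4 GB quoted
```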
#209
Jan 2005
Caught in a sieve
613₈ Posts
Quote:
But Geoff's right about the overall speed. I'll hazard an educated guess that each extra K takes 1/10 the time of the first K. That still means that testing each P will take about 500 times as long as normal for the same size range of candidates. How much longer does LLR take with those bigger K's? Twice as long? In that case, we're better off with a range of 10 or maybe 20 K's.

Although, if somebody on this math forum can tell me how to quickly solve for all m where:
0 <= k*2^m (mod p) <= r, where r < p, and m is in a given range from 0 to some n
(hint: here, r=10000000 and n=5000)
I might be willing to give that coding challenge a shot.
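For reference, the problem above has an obvious O(n) baseline by repeated doubling mod p; the challenge is to beat that. A sketch of the naive version (the function name and the small example parameters are mine, chosen for illustration):

```python
def small_residue_exponents(k, p, r, n):
    """Return every m in [0, n] with (k * 2^m) mod p <= r.

    Naive O(n) scan by incremental doubling -- a correctness baseline,
    not the fast method the post is asking for.
    """
    out = []
    residue = k % p
    for m in range(n + 1):
        if residue <= r:
            out.append(m)
        residue = (residue * 2) % p
    return out

# Tiny illustrative parameters (not the post's r=10000000, n=5000):
print(small_residue_exponents(k=3, p=101, r=10, n=20))  # [0, 1]
```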
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| S9 and general sieving discussion | Lennart | Conjectures 'R Us | 31 | 2014-09-14 15:14 |
| Sieving discussion thread | philmoore | Five or Bust - The Dual Sierpinski Problem | 66 | 2010-02-10 14:34 |
| Combined sieving discussion | ltd | Prime Sierpinski Project | 76 | 2008-07-25 11:44 |
| Sieving Discussion | ltd | Prime Sierpinski Project | 26 | 2005-11-01 07:45 |
| Sieving Discussion | R.D. Silverman | Factoring | 7 | 2005-09-30 12:57 |