#122
"Michael Kwok"
Mar 2006
1,181 Posts
Quote:
Anyway, here's the thread that states some of LLR's limitations: http://www.mersenneforum.org/showthread.php?t=5579
#123
Oct 2005
Italy
3·113 Posts
Quote:
Thanks
#124
Nov 2006
2³·11 Posts
Use head to generate the first file. Then use tail to cut off the part you just exported, use head again for the next file, cut off that part too, and so on.

head and tail also accept line offsets, so you can take the first N lines with head and then drop those same N lines with tail; that way you get all lines except the first N without needing to know how many lines the file has. [I just read what I wrote and it probably isn't clear... but I hope you understand.]
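A sketch of the whole loop (GNU head/tail syntax; the file name, chunk size, and the one-line NewPGen-style header are my own assumptions):

Code:
#!/bin/bash
# Split sieve.txt into chunks of N candidate lines each, copying the
# one-line header into every chunk (GNU coreutils head/tail assumed).
N=1000000
head -n 1 sieve.txt > header.txt
tail -n +2 sieve.txt > body.txt             # everything after the header
i=1
while [ -s body.txt ]; do
    { cat header.txt; head -n "$N" body.txt; } > "chunk_$i.txt"
    tail -n +"$((N + 1))" body.txt > body.tmp   # drop the exported lines
    mv body.tmp body.txt
    i=$((i + 1))
done
rm -f header.txt body.txt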
#125
Mar 2004
Belgium
7×11² Posts
Thank you!
I will certainly look into this!

Another method: sieve a range to ?T, test it, sieve the next range, and so on.

Regards,
Cedric
#126
Mar 2005
Internet; Ukraine, Kiev
11·37 Posts
I use a little Perl script... Probably not as efficient as head/tail, but it lets you split by k value (in the millions), not just by candidate count.
Code:
#!/usr/bin/perl
use warnings;
use strict;

# Filter a sieve file on stdin, keeping the header line plus every
# "k n" line whose k falls in [$min, $max].
my $min = $ARGV[0];
my $max = $ARGV[1];

# The first line is the sieve-file header; copy it through unchanged.
$_ = <STDIN>;
print $_;

while ($_ = <STDIN>) {
    if (/^(\d+) \d+\r?$/) {
        if (($1 >= $min) && ($1 <= $max)) {
            print $_;
        }
        if ($1 > $max) {
            # The file is sorted by k, so no later line can match.
            exit;
        }
    }
}

# Example invocation:
cat latest_sieve_file | ./get_llr_file.pl 1 300000000 > 00001e6-00300e6_333333.txt
#127
Oct 2005
Italy
3·113 Posts
Thanks Gribozavr!

For Windows users: you can download some Unix utilities (among them cat) here: http://gnuwin.epfl.ch/apps/unxutils/...l/unxutils.exe just run and install. Of course, to run the Perl script you need the Perl interpreter: http://www.activestate.com/Products/ActivePerl/

Last fiddled with by pacionet on 2007-02-05 at 20:04
#128
Oct 2005
Italy
3×113 Posts
n=500,000
range: 0-50G
sieve depth: 35T
remaining candidates: 21,401,304
rate: 1 k every 0.3 seconds
#129
Mar 2004
3×127 Posts
Quote:
For every T-range you sieve separately, the efficiency is only half as good as it could be.
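My reading of why (a back-of-envelope argument of my own, not something stated in the thread): a fixed-n sieve spends almost all of its time stepping through candidate factors p, and that per-p work hardly depends on how many k-values remain in the file. Rough numbers: if one pass over the primes up to some depth takes 30 CPU-days on a merged file, then two half-files each sieved to the same depth take about 2 × 30 = 60 CPU-days to remove essentially the same set of candidates, i.e. half the efficiency.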
#130
Oct 2005
Italy
339₁₀ Posts
Quote:
We have not planned to merge our files.
#131
"Michael Kwok"
Mar 2006
1181₁₀ Posts
Quote:
(50G - 207,999,999,999)

I've sieved a total of 430T so far, but it's not in order (since the ranges are done on different machines). It's more like:

Machine 1: 1-150T
Machine 2: 400T-550T
Machine 3: 800T-865T
Machine 4: 1000T-1065T

This is because I won't have access to machines 2, 3, and 4 until a month from now. By that time, the status will be:

Machine 1: 1-400T
Machine 2: 400T-800T
Machine 3: 800T-1000T
Machine 4: 1000T-1200T

and I'll merge the files then.

I had to wait until ~30T before my .dat files got small enough to sieve the whole 50G-208G at once, and even then, it almost exceeded my limit of 512 MB RAM.
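For what it's worth, a sketch of what the merge step could look like (my own illustration, not necessarily MooMoo2's actual procedure: it assumes every machine started from the same "k n" candidate file, so merging means keeping only the lines that survived all ranges; the file names are made up):

Code:
#!/bin/bash
# Keep only candidates that survived both machines' sieving.
# comm needs sorted input, so sort first; restore numeric k order after.
head -n 1 machine1.txt > merged.txt
comm -12 <(tail -n +2 machine1.txt | sort) \
         <(tail -n +2 machine2.txt | sort) |
    sort -n >> merged.txt

Last fiddled with by MooMoo2 on 2007-02-12 at 21:14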
#132
Oct 2005
Italy
3·113 Posts
n=500,000
sieving depth = 47.5T
candidates = 20,986,579
rate = 1 k every 0.5 seconds

Last fiddled with by pacionet on 2007-02-18 at 14:36