#100
May 2007
Kansas; USA
Thanks for finishing your range early, Karsten! Now, when I get factors from Ian for 7000G-7500G and 12000G-12400G later today, we'll be complete to P=19T. I'll then send a new file to Max for uploading to the server.

In the meantime, I'm removing all factors that I have up to P=19T. I'm just about done and will shortly compute an optimum sieve depth. I'll report back my calculations here.

Gary
#101
May 2007
Kansas; USA
Optimal sieve depth calculations:
General up-front stuff:

1. Use only the n=400K-1M portion of the file, because that will be 90%+ of the total LLR time. Continued sieving removal of candidates below that range saves little total primality-testing time overall.
2. Assume that our drive will be for n=420K-650K; hence the point 70% of the way through the n-range is n=581K.
3. For an LLR test, use the average k-value of k=2000-3400, i.e. k=2700 (actually 2701).
4. For a sieving test, all factors in the n=400K-1M file have been removed up to P=19T, minus the couple of small ranges still outstanding. 15,492,766 pairs remain in that file.
5. The machine is a 2.6 GHz Intel quad running 64-bit Linux, not overclocked. Although not as fast as many of you guys' machines, it is equally good at both sieving and LLRing.

Calculations:

1. The LLR iteration time for 2701*2^581007-1 is 0.735 ms, so the total LLR time would be 581,007 iterations × 0.735 ms = 427 secs. This is with the other 3 cores sieving, and is an average of 3 tests taken AFTER the first 20,000 iterations were done, so no "up front cost" of starting the program is factored in.
2. The sieving test is for P=20000G-20010G. The sieving rate is 144,000 P/sec. The expected # of factors for the range is 252.86.
3. The expected total time for that sieving range is 10G / 144,000 = 69,444 secs.
4. The factor removal rate is therefore 69,444 / 252.86 = 275 secs per factor.
5. The optimal sieve depth is slightly less than the LLR test time divided by the removal rate, times the sieve depth used; hence: 427 / 275 * 20T = 31T.

Note: It is slightly less because candidates are removed as the sieve gets deeper, but at this depth the impact is very minimal. That is also why we have to wait until a fairly high sieve depth to get a reasonably accurate calculation.

So there you have it...about 31T. OK, let's take it on to P=30T, since it's clear now that we can do that by the 20th. Lennart, can you do a few more trillion?
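Gary's arithmetic above can be reproduced in a short Python sketch. The constants are his measurements from the post; the variable names are mine:

```python
# Optimal sieve depth: sieve until the time to remove one candidate by
# finding a factor exceeds the time of one LLR primality test.
# All constants below are Gary's measurements from the post above.

llr_iter_ms = 0.735            # ms per LLR iteration for 2701*2^581007-1
n_test = 581_007               # exponent of the representative test
llr_test_secs = n_test * llr_iter_ms / 1000   # ~427 s per LLR test

sieve_range_p = 10e9           # test range P=20000G-20010G, i.e. 10G of P
sieve_rate = 144_000           # P per second
expected_factors = 252.86      # expected factors in that 10G range

range_secs = sieve_range_p / sieve_rate        # time to sieve the range
removal_secs = range_secs / expected_factors   # seconds per factor removed

current_depth = 20e12          # depth at which the rate was measured
optimal_depth = llr_test_secs / removal_secs * current_depth

print(f"LLR test:      {llr_test_secs:.0f} s")
print(f"Removal rate:  {removal_secs:.0f} s/factor")
print(f"Optimal depth: {optimal_depth / 1e12:.1f}T")
```

This reproduces the ~31T figure; as noted above, the true optimum is slightly lower because the removal rate slowly worsens as the sieve deepens.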
I should be able to do another 400G-500G in the final 4-5 days after my current reservation is done.

One other thing: Optimal for the n=420K-700K range is likely to be somewhere around P=36T-40T. Personally, I don't care if we take the drive right on up to that without sieving further, if we have the resources in the intermediate term. Due to continued increases in computer speed and capacity, I always feel it makes sense to err on the side of under-sieving slightly, especially if there are plenty of resources immediately available for primality testing.

Another thing: If people particularly like sieving, or have machines better suited to it, then we have some huge efforts under way for (1) k=300-400 and n=1M-2M (currently at P=4T) and (2) k=300-400/n=2M-3M combined with k=400-1001/n=1M-3M (currently almost at P=250G) that could use some help. I think people will be surprised at how quickly we will likely finish k=300-400 up to n=1M. We really need to look at putting a few resources on k=300-400 for n=1M-2M in the near future. Otherwise, we'll end up with another "high priority" drive, something I'd like to avoid in the future if possible. The optimal sieve depth for that range is probably P>100T, even if we break off the lower ranges like we're doing here.

And finally... Since this calculation is for testing up to n=650K only, we will sieve more for n=650K-1M later, when machines are faster or have more capacity.

Gary

Last fiddled with by MyDogBuster on 2009-06-11 at 14:10
#102
"Lennart"
Jun 2007
OK, you can wait 4 more hours and you'll get 22T-23T.

I'll change the ETA on 23200G-24T from June 12 to June 14, and reserve 25T-26T with an ETA of June 16.

Lennart

Last fiddled with by Lennart on 2009-06-11 at 13:04
#103
A Sunny Moo
Aug 2007
USA (GMT-5)
Quote:
The algorithm, on the other hand, is a different animal entirely. Within the format, the software must use a specific compression algorithm to store the data in compacted form. A wide range of algorithms exists, such as Deflate, Bzip2, LZMA, RAR (the algorithm, not to be confused with the RAR format), and PPMd.

Deflate is one of the oldest--and least effective--of the group. It is the one used in the original ZIP and gzip (.gz) format specifications, and can be opened by anyone who can read .zip or .gz files. Bzip2 has also been around for a while, but for many file types it is rather more efficient than Deflate; it is used by its eponymous format, and can also be used by the Zip and 7z formats. LZMA is a bit newer, but is EXTREMELY effective (and fast) for text-heavy files; it is the default algorithm used by 7-Zip for its 7z format. RAR is quite effective across the board (roughly on a par with Bzip2--usually somewhat better than Deflate), but is a proprietary algorithm and thus is only used by its eponymous format and software. (Other software can read .rar files, but only WinRAR and its non-Windows counterpart, RAR, can write them.) Lastly, PPMd is a newer algorithm, used primarily with the 7z format. It is the most effective of all of these for text-heavy files, but demands a premium of RAM for compressing and decompressing (the others all require rather negligible amounts).

The 7-Zip compression software, which I personally use, offers a huge amount of control over both the format and the algorithm used when compressing. (Consequently, it can also be a bit more confusing than some other archivers.) All of the pairings I listed above are supported for both reading and writing. However, when using it (or other software that similarly supports choosing a specific algorithm) with one of the less commonly supported format/algorithm combinations, one must be careful that the recipient can also read that particular combination.

The only ZIP files WinZip can read, if memory serves, are Deflate ones. This could have changed in a more recent release, though I doubt too many new algorithms were added. WinRAR, similarly, doesn't have much flexibility in which ZIP algorithms it can read; it also only seems to support Deflate. So does Windows's built-in "compressed/zipped folder" mechanism. Usually, you need fancier software (which is ironically often cheaper or free compared to the "simpler" ones--7-Zip is open-source and free, while WinZip is shareware with a rather pesky trial-period warning) to read ZIP files with non-Deflate algorithms.

So, long story short: if in doubt, use Deflate. Anyone can read that, although it is rather less efficient than many of the other algorithms. (If your compression program doesn't offer a choice here, or you don't know where to find it, don't worry: it's almost definitely using Deflate by default.) Alternatively, use RAR if you can, since it supports only one pairing of format with algorithm, is quite efficient, and can be read (though not written) by just about anything today except WinZip and native Windows.

Moral of the story: WinZip stinks.

Max
Last fiddled with by mdettweiler on 2009-06-11 at 13:04 |
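The format-versus-algorithm split Max describes can be seen directly in Python's standard `zipfile` module, which can write the same ZIP container with Deflate, Bzip2, or LZMA inside it. This is a sketch of mine, not from the post; the sample payload is arbitrary:

```python
import io
import zipfile

# A text-heavy, repetitive payload, the kind LZMA and PPMd excel at.
payload = b"a text-heavy, highly repetitive sample line\n" * 1000

# One container format (ZIP), three different compression algorithms.
methods = {
    "Deflate": zipfile.ZIP_DEFLATED,  # universally readable
    "Bzip2":   zipfile.ZIP_BZIP2,     # often tighter on text
    "LZMA":    zipfile.ZIP_LZMA,      # tighter still, less widely supported
}

for name, method in methods.items():
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", compression=method) as zf:
        zf.writestr("payload.txt", payload)
    print(f"{name:8s} {buf.getbuffer().nbytes:6d} bytes "
          f"(original {len(payload)})")
```

All three archives are valid .zip files, but as the post warns, only the Deflate one is safe to hand to someone stuck with WinZip or Windows's built-in extractor.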
#104
May 2008
Wilmington, DE
Reserving 26000G - 26400G
Last fiddled with by MyDogBuster on 2009-06-11 at 14:40
#105
I ♥ BOINC!
Oct 2002
Glendale, AZ. (USA)
Upgrading to the latest WinZip will allow you to extract the file, fixing the problem on your end.

Using WinZip's Legacy compression to create the zip file will prevent it from happening in the first place.
#106
"Lennart"
Jun 2007
#107
A Sunny Moo
Aug 2007
USA (GMT-5)
Quote:
#108
Aug 2008
Good old Germany
All this sounds a little bit complicated. In the future I will use only Windows's own tools for compressing my files. ;)

About the sieving help: this is not a big problem. As stated earlier, I will sieve one week per month in the future. That will give 2 CPU-weeks on my C2D. I know it isn't very much, but I think every tiny bit helps.
#109
May 2008
Wilmington, DE
7000G-7500G & 12000G-12400G complete
Files emailed to Gary
#110
May 2007
Kansas; USA
Quote:
I use WinRAR. I thought that was one of the better ones for both compressing and reading compressed files. I guess not. As for my Linux machines, I use whatever is already in Ubuntu 8.04; I haven't specifically installed anything on them. The fact is that neither the latest WinRAR nor whatever is on my Linux machines could read his file. Gawd, I long for simpler days on this stuff.

Gary
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Team drive #10 k=1400-2000 n=500K-1M | gd_barnes | No Prime Left Behind | 61 | 2013-01-30 16:08 |
| Team drive #12 k=2000-3000 n=50K-425K | gd_barnes | No Prime Left Behind | 96 | 2012-02-19 03:53 |
| k=2000-3400 k's to be pulled from upcoming drives | gd_barnes | No Prime Left Behind | 11 | 2009-06-12 21:28 |
| Sieving drive for k=1003-2000 n=500K-1M | gd_barnes | No Prime Left Behind | 160 | 2009-05-10 00:50 |
| Sieving drive for k=1005-2000 n=200K-500K | gd_barnes | No Prime Left Behind | 118 | 2009-01-17 16:05 |