#1
R.D. Silverman
Nov 2003
1110100100100₂ Posts
It appears that I will need help to finish sieving 2,1870L. I just don't have the resources. I sent the relations that I had to Serge, but a lot more is needed. I have already sieved special-q all the way to 452 million, and the yield rate is dropping. I doubt whether my siever can gather enough relations. Any volunteers?

__________________________

EDIT (S.B.): Here are the instructions:

* Save this file as <<t.poly>>:
Code:
# sieve with 16e -r from 90 to 120M, in ranges
# Command line: gnfs-lasieve4I16e -v -r t.poly -f $start -c 1000000
n: 16995692987522455651754339410455320150093771210144273643775083936188200124843949967119977515852759358871709763714726542633958784170913772900370407491298241066753915069723640845561
Y0: -196159429230833773869868419475239575503198607639501078529
Y1: 9903520314283042199192993792
skew: 2.0
c4: 1
c3: -2
c2: -6
c1: 12
c0: -4
type: snfs
lpbr: 31
lpba: 30
mfbr: 62
mfba: 60
rlambda: 2.55
alambda: 2.55
rlim: 120000000
alim: 16777215

* Download gnfs-lasieve4I16e.zip (but if it won't work on your system, search for other binaries on the forum or build from source)
* Reserve a range here, in chunks of 1M (this will serve as $start in the command line). Each 1M range will take ~1.5M CPU-seconds on a 3GHz 64-bit CPU, and will produce ~200MB of data after compression (400MB plain)
* Run gnfs-lasieve4I16e -v -r t.poly -f $start -c 1000000 (or split into smaller ranges: -f controls the start, -c the length of the range; both are plain numbers, no 'M's or 'e's). See the sketch after the reservations table.
* The memory requirement will be modest: 300-400MB per process
* Concatenate the result files (they will have names t.poly.lasieve-1.<number>-<number>), gzip (or bzip, 7zip, tar cvz, etc.) them, and post at sendspace or dropbox; for very large files, PM Batalov for a direct sftp login. Postprocessing will be done by Batalov.

Reservations:
Code:
up to 450M   R.D. Silverman (own siever)   DONE   75M unique relations (119M raw)
------       free relations                       3.657M
90-91M       Batalov     DONE   3.8M relns
91-92M       jrk         DONE   3.98M relns
92-94M       jyb         DONE   7.47M relns
94-100M      bsquared    DONE   22.7M relns
100-101M     xilman      DONE   3.78M relns
101-102M     fivemack    DONE   3.84M relns
102-103M     xilman      DONE   3.85M relns
103-104M     xilman      DONE   3.85M relns
104-110M     bsquared    DONE   22229942 unique, 175378 dup.
110-112.4M   fivemack    DONE   9.38M relns
112.4-113M   fivemack    DONE   2.35M relns
113-114M     bsquared    DONE   3.87M relns   # this lot should suffice

Last fiddled with by Batalov on 2011-02-22 at 19:22 Reason: this lot should suffice
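For anyone who hasn't run the lattice siever before, here is a minimal sketch of working through one reserved 1M range in smaller chunks and bundling the output. The binary, poly file, and flags are exactly those from the instructions above; the chosen range (94M-95M), the 100K chunk size, and the archive name are illustrative assumptions:
Code:
#!/bin/bash
# Hypothetical walk through a reserved range (here 94M-95M) in ten 100K chunks.
# -f is the starting special-q, -c the chunk length (plain numbers, no 'M'/'e').
for start in $(seq 94000000 100000 94900000); do
    ./gnfs-lasieve4I16e -v -r t.poly -f "$start" -c 100000
done
# Bundle the result files (named t.poly.lasieve-1.<number>-<number>) for upload:
cat t.poly.lasieve-1.* | gzip -9 > rels_94-95M.gz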
#2
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴·593 Posts
Here is a very brief digest of the existing relation set, which we've discussed and which I will repost here:

* the FB lims are 14.5M / 86M, with approximately 30/31-bit LP lims;
* usually that parameter set would yield a matrix with around 150M unique relations (possibly fewer, but then a larger matrix);
* currently, there are 78,573,143 unique rels (with free rels included):
Code:
=====remdups_out.txt=====
Found 78132102 unique, 44165546 duplicate, and 0 bad relations.
(~122M raw relations)
* filtering is at this point:
Code:
Fri Feb 11 05:20:05 2011  reading all ideals from disk
Fri Feb 11 05:20:34 2011  memory use: 3042.4 MB
Fri Feb 11 05:21:02 2011  keeping 103773081 ideals with weight <= 200, target excess is 418036
Fri Feb 11 05:21:30 2011  commencing in-memory singleton removal
Fri Feb 11 05:21:53 2011  begin with 78573143 relations and 103773081 unique ideals
Fri Feb 11 05:22:57 2011  reduce to 33899 relations and 2 ideals in 11 passes
Fri Feb 11 05:22:57 2011  max relations containing the same ideal: 2
* sieving on the other side will not help (this is a quartic); probably a 15e re-sieving (or even 16e?) will be needed. I can simulate (remdups with the existing set) to estimate quasi-unique additional yields, and will post later.

--Serge
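The log above appears to be msieve's filtering output. As a reference sketch (an assumed workflow, not the exact commands used for this job), reproducing that filtering stage on a combined relation set might look like the following; the file names are placeholders, and msieve.fb must hold the polynomial converted to msieve's format:
Code:
# Assumed workflow sketch: combine relations and run only msieve's filtering.
gunzip -c rels_*.gz > msieve.dat            # combined relations, one per line
N=$(grep '^n:' t.poly | awk '{print $2}')   # pull n out of the poly file
./msieve -v -s msieve.dat -l msieve.log -nf msieve.fb -nc1 "$N"   # -nc1 = filtering only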
#3
R.D. Silverman
Nov 2003
2²×5×373 Posts
Quote:
I was using a sieve area of 10K x 20K per special-q. Results show that this was too small: the yield per q was too low. Currently, for q near 450M, I am getting just under 4 relations/q. Rather than proceed with sieving q > 450 million, it will probably be better to resieve some of the smaller q. I will finish sieving all q up to 450M this weekend.
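To put that yield in perspective, here is a rough back-of-envelope estimate, added for context and assuming the special-q range over primes at prime-number-theorem density:
Code:
# Primes (special-q) in (450M, 451M): about 10^6 / ln(4.5e8) ~ 50,000.
# At just under 4 relations/q, a further 1M of q gives only ~0.2M relations,
# versus ~3.8M per 1M range observed near q = 90M with the 16e siever.
awk 'BEGIN { q = 450e6; width = 1e6; rels_per_q = 4
             nq = width / log(q)
             printf "~%d special-q, ~%.2fM relations\n", nq, nq * rels_per_q / 1e6 }'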
#4
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴·593 Posts
For comparison, I found the sibling 2,1870M's logs (courtesy of B. Dodson's significant oversieving; long story short, it was easier to fire-and-forget than to stop at an intermediate point):

Pre-simmed recipe (with experimental use of 16e):
Quote:
Quote:
Possibly 16e could be used again here, for finishing (these are virtually identical projects). I will sim over the weekend (I cannot significantly sieve; 4 Intel + 6 Phenom cores is all I've got, but I can sim).

EDIT: not 3LP. Here's what it was:
Code:
# sieve with 16e -r from 60 to 110-120M, expect 165M+ unique rels
n: 1387312376442199554837407296900851895433665230080527991970122352522509034451214731923682531140863318446032709537489490131868927679840546823810213417373743367475664367890147487119660449174892741
Y0: -196159429230833773869868419475239575503198607639501078529
Y1: 9903520314283042199192993792
skew: 2.0
c4: 1
c3: 2
c2: -6
c1: -12
c0: -4
type: snfs
lpbr: 31
lpba: 30
mfbr: 62
mfba: 60
rlambda: 2.55
alambda: 2.55
rlim: 134000000
alim: 33554431

Last fiddled with by Batalov on 2011-02-12 at 01:04 Reason: remembered wrong; 3LP was not helping and was not used
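Side note: the two sibling polys share the rational side (Y1 = 2^93, Y0 = -(2^187+1)) and differ only in the signs of c3 and c1. Anyone wanting to sanity-check a poly file before committing CPU time can verify that the algebraic and rational polynomials share a root mod n; a quick check in PARI/GP (a sketch, assuming gp is installed) for the 2,1870L poly from post #1:
Code:
gp -q <<'EOF'
\\ Check that f(x) and Y1*x + Y0 share a root modulo n (2,1870L poly, post #1).
n  = 16995692987522455651754339410455320150093771210144273643775083936188200124843949967119977515852759358871709763714726542633958784170913772900370407491298241066753915069723640845561;
Y0 = -196159429230833773869868419475239575503198607639501078529;
Y1 = 9903520314283042199192993792;
m  = Mod(-Y0, n) / Mod(Y1, n);        \\ common root of both sides
f(x) = x^4 - 2*x^3 - 6*x^2 + 12*x - 4;
print(f(m) == 0);                     \\ should print 1
EOF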
#5
"Ben"
Feb 2007
2²·3·293 Posts
I can help. Batalov, will you be coordinating things?
#6
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴×593 Posts
Can do. If you would be willing to do it all, then you won't need sendspace: I'll open an sftp entry for you to put the results directly onto the compute node.

Let me prepare one large workunit and post it here. You would need to be prepared for a few hundred core-days.
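For what such an upload might look like, a small sketch; the host and account names are placeholders, since the real login details go out by PM:
Code:
# Hypothetical upload of a finished, bundled range via sftp.
sftp sieve@compute-node.example.org <<'EOF'
put rels_100-101M.gz
EOF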
#7
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
2A0B₁₆ Posts
I should be able to help. Please give me fairly clear instructions on what I need to do.
Paul
#8
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴×593 Posts
I will run tests, prepare a desired target range (and tentatively time it), and then post here. The setup will be very similar to distributed-project templates from the past, e.g. like this. In short: one command line, run many times on as many nodes as you have access to (or qsub'bed), then the results gzipped-or-bzipped-or-7zipped (your choice) and sendspace'd or (let's insert a plug for trolls here) dropbox'd.
Instructions posted in Post #1. Please reserve. Each 1M chunk will take 1.5M CPU-seconds (~420 hours) on a 64-bit Linux system with a 3GHz CPU, ~630 hours on a 2GHz CPU, or twice as much on a 32-bit system.

Last fiddled with by Batalov on 2011-02-12 at 04:39
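Since the command is embarrassingly parallel, a qsub'bed run of one reserved 1M range could look like the sketch below (a Torque/PBS array job; the paths, range, and chunking are assumptions about your cluster, not part of the instructions):
Code:
#!/bin/bash
#PBS -N lasieve-1870L
#PBS -t 0-9
# Hypothetical PBS array job: task $PBS_ARRAYID sieves one 100K chunk of a
# reserved 1M range. Set RANGE_START to the low end of your reservation.
RANGE_START=100000000
CHUNK=100000
cd "$PBS_O_WORKDIR"
START=$(( RANGE_START + PBS_ARRAYID * CHUNK ))
./gnfs-lasieve4I16e -v -r t.poly -f "$START" -c "$CHUNK"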
#9
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴·593 Posts
Tested (they work well with the existing set; the FB lims are slightly increased, so that we will get new relations even in the worst case). Posted.

I will delete reservation messages and record them in post #1. The estimate is ~600 core-days (+/- 50%, depending on which CPUs come into play). With an estimated 50 cores participating, let's try to wrap it up in two weeks (so please don't reserve a month's worth of work).
#10
R.D. Silverman
Nov 2003
16444₈ Posts
Quote:
Would you like me to keep sieving? (This is why it was taking so long!)
#11
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
9488₁₀ Posts