[QUOTE=sleigher;208834]I did in fact change $NUM_CPU to 7 for each host. Each host is a dual quad-core. So it isn't doing it properly then. Darn....
I will do what was suggested above and start each job in its own window. If I am going to track the ranges manually, what is a good range for each job and how high do I go? 20 mil? 30 mil? It seems currently that ranges are 100000. Stay with that?[/QUOTE]You don't have to track the ranges by hand; that's what we have computers for.... Just pick a range above the highest range achieved so far. There is code in the script to ask for a starting q0 if it can't be determined from a previous job file. I've hit it when starting higher-numbered clients by themselves on a separate PC. You can just use the same value for all the clients and it should adjust the start value accordingly. Or edit the .job files to set the start value, then let 'em rip. As far as how high to sieve, you can estimate that by looking at the yield at the end of a range. For example, on a c135 I'm running right now, the yield at the end of a 100k block says[code]total yield: 236826, q1=13000027 (0.09617 sec/rel)[/code]That means my yield is ~2.3 relations per q. It's looking like it wants ~13M relations, so I have to sieve in the neighborhood of 5.6M q to get enough relations (plus enough more to account for the duplication rate...). Andi47 says 30-40M should do it for you, so if your yield is ~2 rels/q, you need to sieve a range of 9.5-14.5M more (counting the 11M you already have). Good luck!
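The back-of-the-envelope estimate above can be sketched in a few lines of Python (a hedged sketch; the function name is mine and not part of any GGNFS tooling, and the numbers are the c135 figures from this post):

```python
# Estimate how large a special-q range is still needed, given the yield
# reported at the end of a sieving block (a sketch, not GGNFS code).

def q_range_needed(block_yield, block_size, rels_wanted, rels_have=0):
    """Relations-per-q from one block -> size of the q-range left to sieve."""
    rels_per_q = block_yield / block_size      # c135 example: 236826 / 100000
    return (rels_wanted - rels_have) / rels_per_q

# The c135 above: ~2.37 rels/q and ~13M relations wanted, so roughly
# 5.5M q of sieving (before allowing extra for the duplication rate).
print(f"{q_range_needed(236826, 100_000, 13_000_000) / 1e6:.1f}M q")
```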
[QUOTE=schickel;208838]Andi47 says 30-40M should do it for you, so if your yield is ~2 rels/q, you need to sieve a range of 9.5-14.5M more (counting the 11M you already have). Good Luck![/QUOTE]
That was off the top of my head. Being back home and checking my Excel file, I think it might rather be 40M relations, or just above that.
[QUOTE=Andi47;208869]That was from the back of my head. Being back home and checking back with my excel file I think it might be rather 40M relations or just above that.[/QUOTE]So it would be good to [B]not[/B] start over, since it's 20-25% of the way into the job already....
So I am kind of confused now. My ggnfs.log is telling me this.
[code]
Sun Mar 21 10:38:51 2010  found 47939341 duplicates and 530346052 unique relations
[/code]47 million duplicates out of 530 million relations. Is that right? Is something corrupt? That happened just 2 hours after I saw this in the log:[code]
Sun Mar 21 08:36:48 2010  error -15 reading relation 577778153
Sun Mar 21 08:36:48 2010  error -1 reading relation 577778154
Sun Mar 21 08:37:02 2010  error -15 reading relation 578284395
Sun Mar 21 08:37:05 2010  error -15 reading relation 578284425
Sun Mar 21 08:37:05 2010  error -1 reading relation 578284667
Sun Mar 21 08:37:06 2010  error -15 reading relation 578284908
Sun Mar 21 08:37:08 2010  error -5 reading relation 578285151
Sun Mar 21 08:37:09 2010  error -9 reading relation 578285152
Sun Mar 21 08:37:09 2010  found 282537224 hash collisions in 578285384 relations
Sun Mar 21 08:39:24 2010  added 160 free relations
[/code]Any docs about error codes for ggnfs?
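For what it's worth, the duplicate fraction in that log can be checked directly (a quick sketch; the variable names are mine):

```python
# Duplicate fraction from the filtering log quoted above.
dups, uniques = 47_939_341, 530_346_052
total = dups + uniques
print(f"{dups / total:.1%} duplicates out of {total:,} relations read")
# ~8% duplicates: the relation set itself looks plausible;
# the sheer total count is the unusual part.
```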
The errors don't matter; they're only a few relations lost out of millions. But 500 million is a bit too many for a c154. In your poly/job file, what are the values of lpbr and lpba?
Here is my bignum.poly file that was created after msieve did the poly stage.
[code]
[root@localhost fact]# cat bignum.poly
n: 6813377766757638164918650305665391545877815056634620577957683139030334314048355246578767633356280078928552022932140281258043983076447823479268400293856367
Y0: -1156038696359091884229749817581
Y1: 193522735996815187
c0: -77511246416842652782947827243153982720
c1: 52845910196937376909035045081456
c2: 64530037137566250198775404
c3: -6266879667492455540
c4: -604994336743
c5: 3300
skew: 10062842.74
type: gnfs
[/code]
[QUOTE=sleigher;209125]Here is my bignum.poly file that was created after msieve did poly stage.
[/QUOTE]What are the contents of the .job file?
I answered that wrong. Looking at a job file I see this.
[code]
n: 6813377766757638164918650305665391545877815056634620577957683139030334314048355246578767633356280078928552022932140281258043983076447823479268400293856367
m:
Y0: -1156038696359091884229749817581
Y1: 193522735996815187
c0: -77511246416842652782947827243153982720
c1: 52845910196937376909035045081456
c2: 64530037137566250198775404
c3: -6266879667492455540
c4: -604994336743
c5: 3300
skew: 10062842.74
rlim: 25700000
alim: 25700000
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.6
alambda: 2.6
q0: 33978547
qintsize: 10207
#q1:3398875
[/code]So the answer is that it looks like 29 for both lpbr and lpba.
You should need about 50M relations. Chop off the other 480M. But I do notice[code]
q0: 33978547
...
#q1:3398875
[/code]Is this a typo, or is it actually in the job file?
I missed a 4 in the cut and paste.
It should be q0: 33978547, qintsize: 10207, #q1:3398875[B]4[/B]. In my .dat file, is that where all the relations are kept? How big should that file be for a 154-digit number?
While you may be able to perform filtering successfully with about 55M raw relations (translating to about 42-45M unique relations), I suggest keeping between 60M and 70M raw relations to decrease the size of the matrix.
That's what we did in RSALS.
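As a rough sanity check on those figures, the raw-to-unique numbers above imply a duplication rate around 20% (a sketch; the rate is inferred from the "55M raw ~ 42-45M unique" figures in this post, not measured):

```python
# Raw relations -> expected unique relations, assuming the ~20%
# duplication rate implied by "55M raw ~ 42-45M unique" above.
def unique_from_raw(raw_millions, dup_rate=0.20):
    return raw_millions * (1 - dup_rate)

for raw in (55, 60, 70):
    print(f"{raw}M raw -> ~{unique_from_raw(raw):.0f}M unique")
```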