Started NFS even though it could use a bit more ECM. Let's see who finishes first.

[QUOTE=Batalov;295998]Neat. As luck has it. (I had 3800 11e6s on it already ATM.)
Now a little one? [strike]C116?[/strike] C113? :)[/QUOTE] I set it on factor(C142), ECM found the P29, and YAFU's already started GNFS on the C113, so I'll let it crunch. Edit: I'd estimate a few hours tops, maybe even less than 1 hr, but it's late; we'll see. [code]<poly select>
aprogs: 2511 entries, 3391 roots
hashtable: 4096 entries, 0.06 MB
coeff 10212 specialq 1 - 3943 other 6499 - 15597
aprogs: 1030 entries, 1358 roots
hashtable: 1024 entries, 0.02 MB
coeff 10212 specialq 1 - 684 other 15597 - 37432
aprogs: 2427 entries, 3235 roots
hashtable: 4096 entries, 0.06 MB
coeff 10224 specialq 1 - 3945 other 6498 - 15595
aprogs: 1064 entries, 1404 roots
hashtable: 1024 entries, 0.02 MB
coeff 10224 specialq 1 - 684 other 15595 - 37428
aprogs: 2509 entries, 3381 roots
hashtable: 4096 entries, 0.06 MB
coeff 10248 specialq 1 - 3949 other 6496 - 15590
aprogs: 1086 entries, 1570 roots
hashtable: 2048 entries, 0.03 MB
coeff 10248 specialq 1 - 685 other 15590 - 37416
aprogs: 2609 entries, 4025 roots
<snip>
nfs: commencing polynomial search over range: 11001 - 11251[/code] It keeps spitting ~1 line/sec. The snip represents about a minute of output. Edit: Aw crap, cross post/cross NFS. You'll have to tell me which one of us is further along. I'll post when it starts sieving. (One core of a ~3.9 GHz 2600K.) 
:popcorn:
[code]# free gift (a.k.a. poison):
# norm 9.458633e+11 alpha -5.536770 e 7.531e-10 rroots 5
skew: 29634.48
c0: 65184430402097431471159344
c1: 24104275333618597871334
c2: 1628340288089389017
c3: 55394166360497
c4: 1696239259
c5: 24924
Y0: 4986425138116492873961
Y1: 227709451789[/code] 
[QUOTE=Batalov;296007]My bet is on you! ...'cause you know... nobody else is running it. ;)[/QUOTE]
We cross posted ... :razz: edit: me and akruppa cross posted. Man, this is full of cross posting and editing. Unfortunately, I don't actually know how to pass that polynomial to yafu/ggnfs, nor whether it's better than anything I've found :razz: I was originally going to edit this to say that C105s take roughly 1100 seconds for poly select; so this'll take 1500-1800? 20-30 mins. [code]nfs: commencing polynomial search over range: 21001 - 21251[/code] Edit-who-knows-which: [strike]The only two numbers close are the quintic coeff and the skew (I have no idea what that is; I'm only guessing on the coeffs.) Which does my line of code refer to?[/strike] Edit5004: It says limit 50 CPU-seconds per coeff. Edit5005: Whoops, I just actually read the output I copy-pasted and answered my own question :blush: EditILoveEdits!: [code]hashtable: 4096 entries, 0.06 MB
coeff [U]24924[/U] specialq 1 - 5979 other 5770 - 13848
aprogs: 916 entries, 1152 roots[/code] Edit5007: Regarding 5005, I gotta ask, what do the others besides the coefficients mean? Which do you use to "measure how good" a poly is? Edit0800: Jeez, how far does poly select go? I'm at 56.5K and climbing! Edit0811: Okay, I'm going to cheat a bit. At some time not long after sieving starts, I'm putting all four cores on it. Edit0818: 80K and climbing... 
Have 1M relations so far. I estimate about 23 more hours of sieving.

And in the meantime, my edit time is up, and 93K coeff and climbing... and my roommate wants bed... you win :smile:

Done

Did 1600@3M on c121, starting NFS. Feel free to do more ECM to get factors sooner.
Edit: sieving about half done, factors some time tomorrow. 
Done
Edit: starting NFS on c110. 
Done

i3000

Doing NFS on c99. @bsquared: Oh, good!

600 (GMP-ECM stages 1 and 2) + 64 (GPU-ECM stage 1, GMP-ECM stage 2) @ B1=3e6 on the C99, NF.
I wanted to launch only 300 CPU curves, but I screwed up on the argument to the script... 
Sorry, I did the C99.
I'm on the C118 now (actually - C98). 
Done. I'll disappear back into the woodwork for a while now. There's an easy c89 waiting for someone.

Using B1=3000000, B2=3000000-5706890290, polynomial Dickson(6), sigma=2657438988
Step 1 took 9313ms
Step 2 took 4816ms
********** Factor found in step 2: 19361838294942830863612292556544597
Found probable prime factor of 35 digits: 19361838294942830863612292556544597
Probable prime cofactor 4474174784158986141273146393493891378884972832030329546192051603 has 64 digits

C89 now. edit: running siqs 
i3010

i3010:200@3e6

i3010:
+ 2500 @ 3e6 + 1500 @ 11e6 
+ 1500 @ 11e6

+1300@11M, P-1 1e10,1e16. Switching ECM to 44M.

Isn't that enough ECM?
For the c140, the 13e siever is faster than 14e. Here's a poly: (If RSALS wants to sieve it with 14e, then stop at Q=15M instead) [code][color=blue]# sieve with ggnfs lasieve4 I13e on alg side from Q=5M to 26M[/color]
# est ~29M raw relations (avg. 0.029 sec/rel C2D @ 3.4GHz)
# aq4788:3010
n: 16625281901730370455071681055917159130395787481401452610520456476271085497806820213722749165335977787336736588553081897741511525160614380551
# norm 1.910642e+13 alpha -6.928618 e 2.341e-11 rroots 5
skew: 599508.87
c0: 427648226665491786635521697657856
c1: 33913973063579742417026172928
c2: 220112435082076467817398
c3: 9753763813104063
c4: 525233124596
c5: 229080
Y0: 591774861633948464998505215
Y1: 427418671945939
rlim: 7500000
alim: 15000000
lpbr: 28
lpba: 28
mfbr: 56
mfba: 56
rlambda: 2.5
alambda: 2.5[/code] 
Looks like enough ECM. Maybe
[CODE][COLOR=#0000ff]# sieve with ggnfs lasieve4 I13e on alg side from Q=5M to 26M[/COLOR]
# est ~29M raw relations (avg. 0.029 sec/rel C2D @ 3.4GHz)
# aq4788:3010
n: 16625281901730370455071681055917159130395787481401452610520456476271085497806820213722749165335977787336736588553081897741511525160614380551
type: gnfs
# norm 2.179122e+13 alpha -7.516537 e 2.538e-11 rroots 3
skew: 1228691.55
c0: 4776061894413561017372169558866655
c1: 68299881421366359321693869553
c2: 321819763373019765946931
c3: 131220470449050007
c4: 213277792786
c5: 26520
Y0: 910833582159552006771443128
Y1: 84903198921839
rlim: 7500000
alim: 15000000
lpbr: 28
lpba: 28
mfbr: 56
mfba: 56
rlambda: 2.5
alambda: 2.5[/CODE] 
How does the sec/rel rate compare across different-size jobs? (On a C125 I'm doing, I'm getting about 0.0077 sec/rel.)
And while I'm making the post, how do you compare polys? That 'e' value, which I think I've seen referred to as 'Murphy <something-or-other>'? There isn't exactly a beginner's guide to running (not completely understanding) NFS that I've found yet :razz: (Or, how do you estimate rels required for a given size number?) 
The speed will be what it will be (and on different nodes it will be different but proportional to their 'strength'). It is gauged by a small sieve with a command line like
[CODE]gnfs-lasieve4I13e -a t1.poly -f 9000000 -c 2000
gnfs-lasieve4I13e -a t2.poly -f 9000000 -c 2000
# and then repeat with different staggered -f ("from")[/CODE] Murphy E is a rough estimation function, but a head-to-head run like the above will help to decide (between a few top E-rated contenders; it makes sense to bother with this only for gnfs>150, snfs>220, roughly; otherwise take the top poly by E). For the third question, you may want to search the forum and read. There are volumes written about this. Roughly, you want 46M non-redundant rels for a job with lpbr/a 29; 22M for lpbr/a 28; 92M for lpbr/a 30 - minimum; more is initially better, then becomes a waste (lost time for more sieving is not compensated by less time for the algebra).
_____________
EDIT: musing about Jayson's correct message below (a primitive explanation) Norm makes this one faster, but less productive per the work range (due to alpha), so the range does need correction. Oh, wait, you shrunk it /I would have expected it to need to be extended/. Anyway, Ben will adjust as needed. Good call B[sup]2[/sup]! Too small for RSALS, and even if they did it, the job would be first in queue for a day. 
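For quick reference, the relations rule of thumb in the post above (roughly doubling per extra large-prime bit) can be sketched as a simple lookup. The figures are just the ones quoted in the post, not exact requirements; `min_relations` is a hypothetical helper name:

```python
# Rough minimum unique ("non-redundant") relations for a job, keyed by the
# large-prime bound in bits (lpbr/lpba). Figures as quoted in the post above.
MIN_RELS = {28: 22_000_000, 29: 46_000_000, 30: 92_000_000}

def min_relations(lpb):
    """Rule-of-thumb minimum relations for a job with lpbr/a = lpb bits."""
    return MIN_RELS[lpb]
```

Note the pattern: each extra bit on the large-prime bound roughly doubles the relation count needed.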
Batalov's poly looks good to start sieving, but the special-Q range needs to be changed from the previous poly it was copied from.
I calculated a special-Q range of 5M to 24M for Batalov's poly, using siever 13e. 
[QUOTE=jrk;296196]Batalov's poly looks good to start sieving, but the special-Q range needs to be changed from the previous poly it was copied from.
I calculated a special-Q range of 5M to 24M for Batalov's poly, using siever 13e.[/QUOTE] I'll do it. 
[QUOTE=Batalov;296194]The speed will be what it will be (and on different nodes it will be different but proportional to their 'strength'). It is gauged by a small sieve with a command line like
[CODE]gnfs-lasieve4I13e -a t1.poly -f 9000000 -c 2000
gnfs-lasieve4I13e -a t2.poly -f 9000000 -c 2000
# and then repeat with different staggered -f ("from")[/CODE] Murphy E is a rough estimation function, but a head-to-head run like the above will help to decide (between a few top E-rated contenders; it makes sense to bother with this only for gnfs>150, snfs>220, roughly; otherwise take the top poly by E). For the third question, you may want to search the forum and read. There are volumes written about this. Roughly, you want 46M non-redundant rels for a job with lpbr/a 29; 22M for lpbr/a 28; 92M for lpbr/a 30 - minimum; more is initially better, then becomes a waste (lost time for more sieving is not compensated by less time for the algebra).[/QUOTE]Ok, thanks, it's a good start. Now, the (really) stupid question: What's lpbr/lpba (and the other similar lines in the poly file)? Edit: Is mfb* something to do with 'factor base'? I've heard that term, but have only a small idea of how it applies to NFS... is mfb* always 2*lpb*? (And is there a reference for how to use lasieve? There's no -h... I can guess that -a is the job file, -f is the starting q, and -c is how far to go?) I just tried your poly (and your command), and got this: total yield: 2905, q=9002003 (0.01983 sec/rel) That would mean I could do the whole job (with four cores) in just over 1.5 days... but I don't really want to put all four cores on it. Maybe a C130 or 135? Oh well. [strike]RSALS![/strike] Edit: Whoops, late to the party. That's what I get for going on a drink-run with suitemates halfway through writing a post :P 
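The "just over 1.5 days" figure follows directly from the measured test-sieve rate; a back-of-envelope sketch, assuming the ~29M raw relations estimated for this C140 job:

```python
# Measured by the test sieve above: total yield 2905 rels at 0.01983 sec/rel.
sec_per_rel = 0.01983
rels_needed = 29e6     # raw-relation estimate quoted earlier in the thread
cores = 4              # assuming perfect scaling across cores

total_days = rels_needed * sec_per_rel / cores / 86400
print(round(total_days, 2))   # -> 1.66
```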
[QUOTE=Batalov;296194]EDIT: musing about Jayson's correct message below (a primitive explanation) Norm makes this one faster, but less productive per the work range (due to alpha), so the range does need correction. Oh, wait, you shrunk it /I would have expected it to need be extended/. Anyway, Ben will adjust as needed.[/QUOTE]
What I did is sample a small number of special-Q roots in each 1M range of Q, and then multiply the number of relations found in each sample by the ratio of the number of special-Q roots in the range to the number of roots in its sample. If the number of samples is large enough, this leads to a good estimate of the number of raw relations that will be found. This still doesn't tell us how many raw relations will be needed, since the duplication rate will depend on the number of special-Q sieved, but I took a guess that 29M raw relations would be more than adequate. 
[QUOTE=jrk;296202]What I did is sample a small number of special-Q roots in each 1M range of Q, and then multiply the number of relations found in each sample by the ratio of the number of special-Q roots in the range to the number of roots in its sample.
If the number of samples is large enough, this leads to a good estimate of the number of raw relations that will be found. This still doesn't tell us how many raw relations will be needed, since the duplication rate will depend on the number of special-Q sieved, but I took a guess that 29M raw relations would be more than adequate.[/QUOTE] Do you have a script that does that? If so I'd love to see it. 
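For illustration, jrk's sampling estimate can be sketched as below; the per-range numbers are invented for the example, not taken from the actual C140 job:

```python
def estimate_raw_relations(samples):
    """Estimate total raw relations from per-range test sieves.

    samples: list of (roots_in_range, roots_sampled, rels_in_sample) tuples,
    one per 1M range of special-Q. Each sample's relation count is scaled up
    by the ratio of total special-Q roots in the range to roots sampled.
    """
    return sum(rels * roots / sampled for roots, sampled, rels in samples)

# Hypothetical data: two 1M ranges, 100 roots sampled out of ~60k in each.
samples = [(60_000, 100, 2_500), (58_000, 100, 2_300)]
print(int(estimate_raw_relations(samples)))   # -> 2834000
```

As jrk notes, this estimates raw relations found, not unique relations needed, since duplicates are not accounted for.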
[QUOTE=Dubslow;296201]Ok, thanks, it's a good start. Now, the (really) stupid question: What's lpbr/lpba (and the other similar lines in the poly file)?
Edit: Is mfb* something to do with 'factor base'? I've heard that term, but have only a small idea of how it applies to NFS... is mfb* always 2*lpb*?[/QUOTE] lpbr/a are the limits (in bits) on the size of large primes which will be allowed in the relation values. mfbr/a are the limits (in bits) on the size of (trial-factored) cofactors which will be split by MPQS after sieving. [QUOTE=Dubslow;296201](And is there a reference for how to use lasieve? There's no -h... I can guess that -a is the job file, -f is the starting q, and -c is how far to go?)[/QUOTE] -a is the flag to tell the siever to choose special-Q on the algebraic side of the relation. In some jobs you would use -r instead to sieve special-Q on the rational side. 
[QUOTE=Dubslow;296201]Ok, thanks, it's a good start. Now, the (really) stupid question: What's lpbr/lpba (and the other similar lines in the poly file)?
Edit: Is mfb* something to do with 'factor base'? I've heard that term, but have only a small idea of how it applies to NFS... is mfb* always 2*lpb*? (And is there a reference for how to use lasieve? There's no -h... I can guess that -a is the job file, -f is the starting q, and -c is how far to go?) I just tried your poly (and your command), and got this: total yield: 2905, q=9002003 (0.01983 sec/rel) That would mean I could do the whole job (with four cores) in just over 1.5 days... but I don't really want to put all four cores on it. Maybe a C130 or 135? Oh well. [strike]RSALS![/strike] Edit: Whoops, late to the party. That's what I get for going on a drink-run with suitemates halfway through writing a post :P[/QUOTE] If you've fetched the ggnfs SVN repository, then the file /src/lasieve4/INSTALL.and.USE may answer some of your questions. Then again, it may only create more ;) In brief, lpbr/a specifies how big so-called "large primes" are allowed to be for each relation. NFS factorizations get almost all of their relations from combinations of relations with large primes. The bigger this value is, the more relations are needed to get the required number of combinations. This works the same as in QS, so google "QS large prime variation" or "NFS large prime variation" and you can learn more about it. mfbr/a specifies the maximum size of composite that will be attempted to be split into large primes for each potential relation after sieving has removed all primes in the factor base. But you should be doing more drink runs instead of reading this :) 
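The two limits interact roughly as follows after sieving. This is my own simplified illustration of the rule described above, not the siever's actual code; `cofactor_disposition` is a hypothetical helper:

```python
def cofactor_disposition(cofactor, lpb, mfb):
    """Classify a post-sieving cofactor by the lpb/mfb limits (in bits).

    <= lpb bits: small enough to count as a single large prime.
    <= mfb bits: too big for one large prime, but worth trying to split
                 (e.g. by MPQS) into multiple large primes.
    otherwise:   the candidate relation is discarded.
    """
    bits = cofactor.bit_length()
    if bits <= lpb:
        return "single large prime"
    if bits <= mfb:
        return "try to split"
    return "discard"

# With lpbr/a = 28 and mfbr/a = 56, as in the job files above:
print(cofactor_disposition(2**27, 28, 56))   # single large prime
print(cofactor_disposition(2**50, 28, 56))   # try to split
print(cofactor_disposition(2**60, 28, 56))   # discard
```

This also shows why mfb* is so often 2*lpb*: a cofactor of up to twice the large-prime bit bound can still split into two admissible large primes.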
[QUOTE=bsquared;296210]If you've fetched the ggnfs SVN repository, then the file /src/lasieve4/INSTALL.and.USE may answer some of your questions. Then again, it may only create more ;)
[/quote]Nope, just asked somebody for links to lasieve.x so I could let yafu do all the hard work ;) [QUOTE=bsquared;296210] But you should be doing more drink runs instead of reading this :)[/QUOTE] But I still got plenty of water! [QUOTE=jrk;296207]lpbr/a are the limits (in bits) on the size of large primes which will be allowed in the relation values. mfbr/a are the limits (in bits) on the size of (trial-factored) cofactors which will be split by MPQS after sieving. [/QUOTE] [QUOTE=bsquared;296210] In brief, lpbr/a specifies how big so-called "large primes" are allowed to be for each relation. NFS factorizations get almost all of their relations from combinations of relations with large primes. The bigger this value is, the more relations are needed to get the required number of combinations. This works the same as in QS, so google "QS large prime variation" or "NFS large prime variation" and you can learn more about it. mfbr/a specifies the maximum size of composite that will be attempted to be split into large primes for each potential relation after sieving has removed all primes in the factor base. [/quote]Okay, what about *lim and *lambda? (Or Y0 and Y1? Are those some sort of intercepts?) 
[QUOTE=Dubslow;296211]
Okay, what about *lim and *lambda? (Or Y0 and Y1? Are those some sort of intercepts?)[/QUOTE] *lim are the factor base bounds on the rational and algebraic sides of the number field. *lambda is: [QUOTE=INSTALL.and.USE]# All sieve reports for which the sieve value is at least
# log(abs(polynomial value)) - lambda*log(factor base bound)
# will be investigated by the trial division sieve.[/QUOTE] Y0/1 are the rational polynomial coefficients. If none of that makes sense to you, I'd recommend you go read a bit about the basics of NFS and sieving (start with QS if you need to). 
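The quoted INSTALL.and.USE rule translates directly into a threshold test; a minimal sketch, where `worth_trial_division` is a hypothetical helper name and the numbers in the comments are only examples:

```python
import math

def worth_trial_division(sieve_value, poly_value, fb_bound, lam):
    """A sieve report is investigated by the trial division sieve when its
    accumulated sieve value is at least
        log(abs(polynomial value)) - lambda * log(factor base bound).
    """
    threshold = math.log(abs(poly_value)) - lam * math.log(fb_bound)
    return sieve_value >= threshold

# E.g. with alim = 15000000 and alambda = 2.5 as in the job files above,
# a report on a ~10^60-sized polynomial value needs a sieve value of
# roughly log(1e60) - 2.5*log(1.5e7), about 96.8.
```

Larger lambda lowers the threshold, so more reports survive to trial division (more candidate relations, but more time spent on duds).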
[QUOTE=Batalov;296194] EDIT: musing about Jayson's correct message below (a primitive explanation) Norm makes this one faster, but less productive per the work range (due to alpha), so the range does need correction. Oh, wait, you shrunk it /I would have expected it to need be extended/. Anyway, Ben will adjust as needed.
Good call B[sup]2[/sup]! Too small for RSALS and even if they did it, the job would be first in queue for a day.[/QUOTE] Yeah, I didn't want to wait for or bother RSALS with this middling number. Sieving 5M-25M yielded 32M+ relations, which was plenty (24-25M was done with 14e on my workstation, since that's what yafu decided to use; the rest with 13e on the mini-cluster). Msieve will be done with it in another hour and a half. 
Here we go again.

i3010 =
[CODE]prp65 factor: 82059812711048446740392574427372605970429181518371714692723266939
prp75 factor: 202599559424682430613701324885437467831662447055045786149241488327937447909[/CODE] Yep! Now on i3014. ECMing the C117... 
I've started NFS, but anyone is welcome to beat me to a factor.

How many cores do I need to beat you ;)
yoyo 
[QUOTE=yoyo;296246]How many cores do I need to beat you ;)
yoyo[/QUOTE] That's the spirit! I'm running on 6, but even with the head start you'd probably only need 8-12 cores or so :smile:. 
Done, now on i3015. I'm stepping out for now.

Doing NFS on c111. Stopped, c111 got factored somehow.

I guess somebody ECM'd it. The C139 is gonna be a tough cookie.
[code]factoring 1264990137606389692444133450629384476462397574581510043754443286574402574630665344815194412858328992167908623915512930414751943767906189429
using pretesting plan: normal
using tune info for qs/gnfs crossover
div: primes less than 10000
fmt: 1000000 iterations
rho: x^2 + 3, starting 1000 iterations on C139
rho: x^2 + 2, starting 1000 iterations on C139
rho: x^2 + 1, starting 1000 iterations on C139
pm1: starting B1 = 150K, B2 = gmp-ecm default on C139
ecm: 30/30 curves on C139 input, at B1 = 2K, B2 = gmp-ecm default
ecm: 74/74 curves on C139 input, at B1 = 11K, B2 = gmp-ecm default
ecm: 214/214 curves on C139 input, at B1 = 50K, B2 = gmp-ecm default
pm1: starting B1 = 3750K, B2 = gmp-ecm default on C139
ecm: 430/430 curves on C139 input, at B1 = 250K, B2 = gmp-ecm default
pm1: starting B1 = 15M, B2 = gmp-ecm default on C139
ecm: 904/904 curves on C139 input, at B1 = 1M, B2 = gmp-ecm default
ecm: 54/2350 curves on C139 input, at B1 = 3M, B2 = gmp-ecm default[/code] When it's done with the 3M I'll kill it, then we'll run standard ECM... and restart the cycle. 
Better a c139 than a bad p139 :)

I called it quits at 1600@3M; started 1000@12M, and somebody should probably run at least a few hundred north of 40M. (And of course, a volunteer to start nfs.)

Here's a poly:
[code][color=blue]# sieve with ggnfs lasieve4 I13e on alg side from Q=4M to 21.5M[/color]
# est ~29M raw relations (avg. 0.023 sec/rel C2D @ 3.4GHz)
# aq4788:3022
n: 1264990137606389692444133450629384476462397574581510043754443286574402574630665344815194412858328992167908623915512930414751943767906189429
# norm 2.791110e+13 alpha -6.993238 e 2.986e-11 rroots 3
skew: 250381.44
c0: 175454780246432276315295083860159
c1: 849411921289885239402429525
c2: 34199330229990184058525
c3: 87072389672945661
c4: 73902851374
c5: 667368
Y0: 285459427682843805320605998
Y1: 1143851721416731
rlim: 6000000
alim: 12000000
lpbr: 28
lpba: 28
mfbr: 56
mfba: 56
rlambda: 2.5
alambda: 2.5[/code] 
I can start the NFS tonight (12 hrs from now or so) if no one else has volunteered by then, and if ECM is good.

In the process of running 2k@44M (Edit: done). I can't do NFS by myself, would take too long on my single home machine, but I can throw in a few relations if you like.

I'm doing 1k @ 44M, so by tonight we should be done with ECM.
I can take it solo - but thanks for the offer. 
I'm ~65 curves short of 2K@12M, so we're just a bit past t45.

i3027 @ c129

Starting 1K@4m
Actually, better make that 2K. Anybody feel like going for an ECM factorization only? 
[QUOTE=Dubslow;296387]Starting 1K@4m
Actually, better make that 2K. Anybody feel like going for an ECM factorization only?[/QUOTE] I'd rather try a QS factorization than that :smile: 
One of my two ECM runs on line 3027 produced this gem:
[code]Run 153 out of 2000:
Using B1=10000000, B2=46842680440, polynomial Dickson(12), sigma=3536871957140198850
Step 1 took 27866ms
Step 2 took 15738ms
********** Factor found in step 2: 5551206901172054583062351210595874081775416652238749
Found probable prime factor of 52 digits: 5551206901172054583062351210595874081775416652238749
Probable prime cofactor 129906928311090953880459217864523818071838252366190167685116209056556687270679 has 78 digits[/code] edit: Group order is: [code][2 5]
[3 1]
[17 1]
[269 1]
[9601 1]
[165479 1]
[669089 1]
[5137309 1]
[8592343 1]
[9612763 1]
[28033427731 1][/code] Second largest factor is 96.13% of B1 and largest factor is 59.85% of B2. 
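Why this sigma succeeded is visible in the group order: ECM finds the factor when every prime power in the group order but the largest is at most B1, and the largest prime is at most B2. A quick check of the posted factorization:

```python
# Group order factorization from the post above, as (prime, exponent) pairs.
factors = [(2, 5), (3, 1), (17, 1), (269, 1), (9601, 1), (165479, 1),
           (669089, 1), (5137309, 1), (8592343, 1), (9612763, 1),
           (28033427731, 1)]
B1, B2 = 10_000_000, 46_842_680_440

prime_powers = sorted(p**k for p, k in factors)
# Stage 1 needs every prime power except the largest to be <= B1 ...
assert all(q <= B1 for q in prime_powers[:-1])
# ... and stage 2 catches the single remaining prime, up to B2.
assert prime_powers[-1] <= B2
print(prime_powers[-2] / B1)   # 0.9612763, i.e. 96.13% of B1
print(prime_powers[-1] / B2)   # ~0.5985, i.e. 59.85% of B2
```

So the curve only just made it: one more percent on the second-largest factor and 2000 curves at this B1 would have missed it.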
Nice! I was halfway through poly select - will happily kill it.

Starting NFS on c121.

[QUOTE=jrk;296397]One of my two ECM runs on line 3027 produced this gem:
[code]Run 153 out of 2000: Using B1=10000000, B2=46842680440, polynomial Dickson(12), sigma=3536871957140198850 Step 1 took 27866ms Step 2 took 15738ms ********** Factor found in step 2: 5551206901172054583062351210595874081775416652238749 Found probable prime factor of 52 digits: 5551206901172054583062351210595874081775416652238749 Probable prime cofactor 129906928311090953880459217864523818071838252366190167685116209056556687270679 has 78 digits[/code][/QUOTE]Wow, I never get that lucky on [I]my[/I] c130s! [EDIT: It's also good to see that the downdriver is still alive (now, as long as the 5 doesn't add any digits); we're still golden....] 
starting ECM @3e6 on alq4788.3033 c131

600 @ B1=3e6 on the C131, NF.

330 @ B1=3e6 on the C131, NF.

Found probable prime factor of 48 digits: 159450259206967402524996975215244611239608534557
Edit: starting NFS on c108 
It got factored.

i3041 has a C126 remaining .. and the 5 is gone!
(total size down to 137) 
Did 2400@3M, now doing 2k@11M on c126.

I threw 1500@3M on the c121@i3044. [SIZE="1"](I could do NFS, but I would not be able to read the factors before tomorrow morning, so I'll leave it to someone else.)[/SIZE]

[QUOTE=rajula;296500]I threw 1500@3M on the c121@i3044. [SIZE=1](I could do NFS, but I would not be able to read the factors before tomorrow morning, so I'll leave it to someone else.)[/SIZE][/QUOTE]
NFS:
PRP61 = 4261063551706376757731519359860876519277978869182669884472831
PRP60 = 541450378688037124823705235698592946874107589893049486011051
Now sieving c101@3045 ... done 
At this size, isn't it quicker to do NFS directly?

[QUOTE=firejuggler;296532]At this size, isn't it quicker to do NFS directly?[/QUOTE]
SIQS at this size is faster than NFS on my i7 (Win7, 64-bit).
Blergh... we picked up the 5 again. (C101 factored, now C122 i3046.)

Found probable prime factor of 43 digits: 3902175648266726783023329347656704128904277
Edit: oh crap 
[QUOTE=akruppa;296542]
Edit: oh crap[/QUOTE] What an understatement... at least it's all by its lonesome self? 
The end... is near, friends, unless it picks up a 3.

I did 2000 curves @ B1=4e6, B2=8561602150 on line 3048. NF

I can do NFS, but it'd take about a day, maybe a bit less.
Edit: Starting NFS. If anyone intervenes in the next few hours, I can pass on the poly and job. (I could probably also pass on sieving, but at that point it's kinda pointless (damn you English words with 5000 meanings!).) 
The C120 at index 3049 is P36 (885347206161517046704356643418907199) * C84, found by GMP-ECM.
EDIT: and SIQS (yafu) produced another P36, 493951436092600008179009832136385333. 
C116 i3062 1K@2M, running 1K@11M then will start NFS.

I'm running NFS.

We appear to have a 2^5*7 guide/driver/thingy, i3072; I'll start running 3-4M on C133 shortly.

[QUOTE=Dubslow;296677]We appear to have a 2^5*7 guide/driver/thingy, i3072; I'll start running 3-4M on C133 shortly.[/QUOTE]
1000@3M; starting 1000@11M. 
"Thus conscience does make cowards of us all;
And thus the native hue of resolution
Is sicklied o'er with the pale cast of thought,
And enterprises of great pith and moment
With this regard their currents turn awry,
And lose the name of action." 
[QUOTE=Batalov;296707]"Thus conscience does make cowards of us all;
And thus the native hue of resolution
Is sicklied o'er with the pale cast of thought,
And enterprises of great pith and moment
With this regard their currents turn awry,
And lose the name of action."[/QUOTE] ...:huh:? :unsure: 
I did ~300@3e6, no factor. Switching to 11e6.

[QUOTE=Dubslow;296677]We appear to have a 2^5*7 guide/driver/thingy, i3072; I'll start running 3-4M on C133 shortly.[/QUOTE]Shoot....and it was going so well with just 2^3; in fact it even lost a couple more digits. Well, as long as we don't pick up a 3, the 2^5 * 7 should be pretty gentle, though it is an "up" guide.....

1000@11M on the c133@i3072. Not knowing how much others have actually run, I'll throw in another 400@43M. Presumably the number is about ready for NFS in any case.

4k@11M. Edit: can't do NFS on my home machine, would take a week or two.

[QUOTE=Dubslow;296677]We appear to have a 2^5*7 guide/driver/thingy, i3072; I'll start running 3-4M on C133 shortly.[/QUOTE]
Once again, I think this 2[sup]5[/sup]*7 thing is rather easy to escape from. 2[SUP]5[/SUP] always gives away the 7, as follows: 1+2+4+8+16+32 = 63 = 7*9, thereby retaining the 7; and 7 gives away 1+7 = 8 = 2[sup]3[/sup]. But if the remaining cofactor factors into a product of two primes of the form 1 (mod 4), or a single prime of the form 3 (mod 8), then it will mutate, i.e. the power of 2 will increase beyond 5. There is a good chance to lose the 7 within the subsequent iterations, if and only if: 1. there is no prime factor of the form 6 (mod 7); 2. the power of 2 is not 2 (mod 3); 3. if the power of a prime p is k, then (p[sup]k+1[/sup]-1)/(p-1) is not congruent to 0 (mod 7). If an iteration factors as 2[sup]5[/sup]*7*(prime of the form 1 mod 4), then in the subsequent iteration the power of 2 will automatically come down to 4 (i.e. 2[sup]4[/sup]*7*...). Since 1+2+4+8+16+32 = 63 = 0 (mod 3), as long as the 2[sup]5[/sup]*7 thing holds, it is not possible to develop a 3 over the subsequent iterations either. This is unlike the 2[SUP]2[/SUP]*7 driver, where 1+2+4 = 7 preserves the 7 always, and 1+7 = 8 = 2[sup]3[/sup] > 2[sup]2[/sup] always retains the 2[sup]2[/sup] factor. Edit: I have developed all the aliquot sequences starting with the powers of 2, through 2[sup]300[/sup], and stored them in the factoring database. 
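The divisor-sum bookkeeping above is easy to check mechanically; a minimal sketch, where `sigma` is my own naive sum-of-divisors helper, not anything from the thread:

```python
def sigma(n):
    """Sum of all divisors of n, including n itself (naive; fine for tiny n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# sigma(2^5) = 1+2+4+8+16+32 = 63 = 7*9: divisible by 7, so the 7 survives.
assert sigma(2**5) == 63
# sigma(7) = 1+7 = 8 = 2^3, feeding powers of 2 back into the next term.
assert sigma(7) == 8
# sigma is multiplicative on coprime parts: sigma(2^5 * 7) = 63 * 8 = 504.
assert sigma(2**5 * 7) == sigma(2**5) * sigma(7) == 504
# Contrast with the 2^2*7 driver: sigma(2^2) = 1+2+4 = 7 preserves the 7.
assert sigma(2**2) == 7
```

The aliquot step itself is then s(n) = sigma(n) - n, which is where these per-factor contributions get combined.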
1000@11e6, no factor.

2250 curves @ B1=1e7, B2=46842680440 on line 3072. NF

Holy cow, finished my 1K@11M; will move it to higher bounds, but this number is more than ready for NFS.

[QUOTE=Dubslow;296709]...:huh:?
:unsure:[/QUOTE] Sorry, I was lamenting the great pith and moment of our beloved sequence that turned awry. Surely not the preceding post. I turned to [URL="http://factordb.com/sequences.php?se=1&aq=516072&action=range&fr=1070&to=1080"]two[/URL] [URL="http://factordb.com/sequences.php?se=1&aq=345324&action=range&fr=1966&to=1976"]other[/URL] hungry mouths to feed. Most likely they, too, will turn awry; most of them do. Just a handful gets lucky. 
So wait, is anyone actually doing NFS, or are we just sitting around waiting for someone else to do it? (I would be willing to do it, but it'd take a few days.)

I for one am sitting around waiting for someone else to do it. I can sieve a bit, though, if that someone else gives me a poly and a special-q range.

I'll do it.

Now on i3075 with a C109... I'm doing NFS.

i3081 @ C104. I'm stepping aside for now - no more time to babysit this right now.
Oh, and now at 2^4 * 7 
Don't you dare drop the other two 2s.....

I'll take the current C104, i3081
