Ran another 760 curves @B1=110M with nothing found. Terminated ECM.
It belongs to NFS now. Happy factoring! 
[QUOTE=VBCurtis;413665]That's terrific, thanks for the extra effort. I'll testsieve today, and will be ready to fire NFS when I wake up tomorrow.
I haven't previously tried running my full desktop on a single NFS task. Going to try 12-threaded sieving on a 6-core i7. Hopefully, I'll have factors in under a week![/QUOTE] How is the speed looking? Just curious about that "12-headed monster". 
79 hrs wallclock time, 78.3M relations of an estimated 143M raw rels needed. If a matrix builds on the first try, that'll make nearly exactly 6 days of sieve time.
There is a single thread of srsieve running, so NFS has 12/13ths of the CPU. I doubt the matrix will solve in under 24 hr, so it will take me more than a week to factor. :( 
8-9 days for a 157-digit GNFS? Not too shabby for a single machine! Let's hope this sequence starts moving downward.
I'm willing to throw some more ECM into the mix if needed later. Just call for help here. Good luck! 
168 hr later, I have factors! It took a second dependency, else it would have completed in under 7 days; as it is, I needed the help from Daylight Saving Time to make it under 7 days. :)
Some stats:
143.4M raw relations w/ 31-bit LP, 123.4M unique
147 hr sieving on 6 HT cores (900 CPU-hr, or 1800 thread-hr)
5.9M matrix at target density 90, 20 hr to solve on 6 threads (Linux doesn't manage HT well; some cores were idle during this time. I will experiment with more threads to see if lower idle-core time makes up for the thread contention.)
The sequence rises through i1595; a c164 is in the middle of a t40 presently. I'll update factordb later today. 
i1591 through 1594 are added to factordb. 1595 is listed as c175, but aliqueit/ecm is currently working on a c164 (I think that since I invoked aliqueit to update the sequence to factorDB while another copy was still running, it didn't report the trivial prime found).
The c164 has had over half a t45, with the second half finishing overnight. Any public efforts (Sean?) should begin at t50 level. The sequence has grown to 189 digits. 
Ok I'll start ECM at t50 on the C164 later today. How much ECM do you want before starting GNFS?

[QUOTE=VBCurtis;414507]5.9M matrix at target density 90, 20 hr to solve on 6 threads (Linux doesn't manage HT well; some cores were idle during this time. I will experiment with more threads to see if lower idle-core time makes up for the thread contention.)[/QUOTE]
I get round the Linux scheduling problem with 'taskset -c 0-5 msieve ... ... -t 6' to lock it to running on one thread per core 
[QUOTE=fivemack;414637]I get round the Linux scheduling problem with 'taskset -c 0-5 msieve ... ... -t 6' to lock it to running on one thread per core[/QUOTE]We just recently started doing this at fivemack's suggestion and it works wonderfully!
:tu: We actually modify the process after it starts, but that accomplishes the same thing and uses the same 'taskset' program. [FONT=Courier New]taskset -p -c [I]x y[/I][/FONT] where [I]x[/I] is the core number and [I]y[/I] is the process id. 
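For the curious: the pinning that taskset does is just the Linux sched_setaffinity() syscall, which Python's stdlib exposes directly, so a driver script could pin itself instead of shelling out. A sketch of the mechanism (Linux-only, not what the posters actually ran):

```python
import os

# Pin the current process (pid 0) to a single allowed core -- the same
# effect as `taskset -p -c <core> <pid>` from the shell.
cpu = min(os.sched_getaffinity(0))   # lowest core we're permitted to use
os.sched_setaffinity(0, {cpu})
print(sorted(os.sched_getaffinity(0)))  # a single-element list
```

For one-thread-per-core on a 6-core HT box, the set would be the six physical-core ids instead of one.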
Thanks for the protip!! I still use the python script; can I call taskset from the script by adding the command ahead of factmsieve's invocation of msieve {...} -nc2?

[QUOTE=swellman;414630]Ok I'll start ECM at t50 on the C164 later today. How much ECM do you want before starting GNFS?[/QUOTE]
I use 0.31 * digits for GNFS, which gives me about 1.5 t50 here. A C164 is a non-trivial factorization, but one I'd be likely to try in December if it is deemed too small for NFS@Home and nobody else picks it up. The t45 finished overnight. 
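The pretest rule of thumb above is simple enough to write down (a sketch; the 0.31 factor is the thread's own heuristic, not a universal constant):

```python
def ecm_pretest_level(digits):
    """Rule of thumb from the thread: pretest to 0.31 * (digits of input)."""
    return 0.31 * digits

print(round(ecm_pretest_level(164), 2))  # C164 -> 50.84, a bit past t50
print(round(ecm_pretest_level(167), 2))  # C167 -> 51.77
```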
I should finish a single t50 in a few days, and I'm willing to run it up to 5000 curves at the t55 level as well. Beyond that it belongs to GNFS.

[QUOTE=VBCurtis;414657]Thanks for the protip!! I still use the python script; can I call taskset from the script by adding the command ahead of factmsieve's invocation of msieve {...} -nc2?[/QUOTE]Yep!
[url]https://www.google.com/search?q=site%3Amersenneforum.org+taskset+msieve[/url] 
The C164 completed t50 with no hits. Running another 5k curves @B1=110M. Should be done by Sunday.

5000 curves completed @B1=110M, with no hits. Releasing the C164.

C164 @ i1595
A pretty good poly.
[CODE]N: 45914073122903563395708089273665869580758755638098495988910893466568870950163550288098036669289159785897284341832116623956306052957001815380880771284896517406431299
# expecting poly E from 7.52e-13 to > 8.65e-13
R0: 33681412008419392063052441159712
R1: 1830929568967997
A0: 11260786718766335764073823281057638268015
A1: 7839832388178674512800429612311471
A2: 1815768171620985620757090807
A3: 105694792666173790535
A4: 29128040589686
A5: 1059240
# skew 9368452.93, size 5.742e-16, alpha -8.043, combined = 8.110e-13 rroots = 3[/CODE] 
Any takers, before we sieve it on NFS@Home's 14e?

Please, take it away! We're grateful to have the assistance.

[QUOTE=debrouxl;415588]Any takers, before we sieve it on NFS@Home's 14e ?[/QUOTE]
Yes, that would be nice. 
Oops, it looks like I missed Curtis' reply, or forgot about it. Sorry. I've just entered "C164_3408_1595" into NFS@Home's 14e work generation system.

C164_3408_1595 has been factored by NFS@Home.
I'm now working on the C141. 
I'm releasing 3408. Pushed it through i1607.
Ran ECM through t40 +1100 curves @B1=11M on the current C167. 
C167 @ i1608
As I was posting this poly, I noticed the ECM work was only through 11e6, not 11e7 as I first read it.
Anyway, if needed later.
[CODE]N: 19279124408593705964551938007182661475817993340606611931974614884160154214521467864149140038078184005437115174440833901644941046994615286984783459226466824702280498043 (167 digits)
# expecting poly E from 5.21e-13 to > 6.00e-13
R0: 327720353491730676473078742013533
R1: 174549382181399939
A0: 3023638545710156835094264630064226096320
A1: 383976428325348753245280153981228
A2: 827767982714206914636699332
A3: 159564790585744901760
A4: 1247620302709
A5: 5100
skew 28893708.47
# skew 28893708.47, size 3.231e-16, alpha -6.912, combined = 5.656e-13 rroots = 3[/CODE] 
C167
Passing through 4800 @ 43e6 - no factor.

[QUOTE=RichD;424761]Passing through 4800 @ 43e6 - no factor.[/QUOTE]
How far are you taking ECM? Do you want help with t55? I know that can get tedious. 
I plan to do a full t50. Any help at 11e7 would be appreciated.
It may be a day or two before I can finish 43e6. 
Ok. Will watch this thread before starting t55. I can't start on it until tomorrow night anyway, so the timing seems perfect.

Looks like I will finish t50 in a few hours. Then I'll start on 110e6.
Does anyone have a good feel for the number of curves at 110e6 for this C167? 
Using the heuristic of 0.31 * size, I get 51.8 digits, or a t50 plus about 5/6ths of a t50 worth of curves at 110M. GMP-ECM tells me 3583 curves at 110M is a t50, so the heuristic suggests 3000 curves.
A c167 should take about 8500 thread-hours to sieve + LA. From t50 to t51.8 is 1.8 digits of factoring at a 1/50 chance per digit = 3.6% chance of a factor. Our guess of 3000 curves would take 280 hrs (at 330 sec per curve, tested on a stock i5 Haswell), which is 3.3% of expected NFS time. The chance of a factor > fraction of NFS time, so 3000 curves looks about right. A bit of test-sieving on the poly you posted might refine the 8500-hr guess, but it doesn't really matter so long as we're in the neighborhood of optimal pretesting. 
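The break-even check in that post can be written out explicitly. A sketch using only the numbers the post quotes (the 1/50-per-digit odds are the thread's own heuristic):

```python
# Is another 3000 curves @B1=110M worth it before a ~8500 thread-hour GNFS?
nfs_hours = 8500             # estimated sieve + LA for the C167
curves = 3000                # proposed extra curves
sec_per_curve = 330          # measured on a stock Haswell i5
extra_digits = 51.8 - 50.0   # pretest depth beyond the completed t50

ecm_hours = curves * sec_per_curve / 3600   # ~275 h of ECM
p_factor = extra_digits / 50                # ~1/50 chance per digit of depth
frac_of_nfs = ecm_hours / nfs_hours

# Keep running ECM while P(factor) exceeds the fraction of NFS time spent.
print(round(ecm_hours), round(p_factor, 3), round(frac_of_nfs, 3))
```

Here P(factor) ~ 3.6% against ~3.2% of NFS time, so the extra curves just barely pay for themselves, matching the post's conclusion.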
Is 3000 curves @11e7 the current target? Do you still want the help for that number of curves? Glad to pitch in if still needed.

[QUOTE=VBCurtis;424887]A c167 should take about 8500 thread-hours to sieve + LA.[/QUOTE]
How were the 8500 hours calculated - from prior work? I don't have the resources to complete a C167 by myself. I do some NFS@Home post-processing, but that is pretty much fire-and-forget for several days. I am passing through 250 curves @11e7. Any help would be nice, since it appears this number might be a candidate for NFS@Home and the 14e queue may have a short list. 
Yes; I estimated based on my experience with smaller numbers and the rough doubling of effort for every 5-digit increase in input size. If I'd thought about it, I would have used previous runs with NFS@Home to get a better estimate.
A more accurate estimate would be to test-sieve a few special-q, find the sec/rel, and multiply by the estimated number of relations needed. This usually isn't possible for GNFS, since poly select would usually happen after ECM is complete, but in this case we have your good poly so we could do so. If I were running the job myself, I'd choose among 15e/31, 15e/32, and 14e/32, but it's 14e for sure for NFS@Home. I would guess 275M relations for 14e/32, perhaps 5% fewer for 15e due to fewer duplicate relations. Previous NFS@Home runs would give you a good guess for relations needed for 14e/31, the likely parameter choice for NFS@Home for a C167. Previous NFS@Home data should give an idea of the matrix effort required as well. I might get to trying such a test-sieve in a day or two if nobody else does it first. 
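The test-sieve arithmetic above is just (relations needed) x (sec/rel). A sketch; the 0.11 sec/rel figure below is a made-up placeholder, not a measured value from the thread:

```python
def sieve_hours(relations_needed, sec_per_rel, threads=1):
    """Wall-clock sieve estimate from a short test-sieve measurement."""
    return relations_needed * sec_per_rel / threads / 3600

# Hypothetical 0.11 s/rel for the suggested 275M-relation 14e/32 job,
# spread over 12 threads:
print(round(sieve_hours(275e6, 0.11, 12)))  # -> 700 (hours)
```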
RichD  yes a GNFS 167 does seem a reasonable candidate for 14e, though there are still some proposed composites waiting for approval/queuing.
Just started ECM at 11e7. I will have three more machines become available later this week but we should hit 3000 curves by the weekend. 
[QUOTE=swellman;424915]
Just started ECM at 11e7. I will have three more machines become available later this week but we should hit 3000 curves by the weekend.[/QUOTE] I'll toss in 500 curves toward the 3000. I'll start them tonight on an old core2, will be done by Friday. 
[QUOTE=RichD;424910]How is the 8500 hours calculated, from prior work? [/QUOTE]
I found logs from a C165 I ran last fall, poly score 6.67e-13. Sieve time was ~1000 hours on a 3.2 GHz Core2, with the 10.6M matrix taking 120 hrs to solve (on a newer 4-threaded machine). That's 4500 thread-hours on a non-HT but older machine. Your C167 poly score is 5.6e-13, 15% lower than my C165. So, it looks like my 8500-hr guess was way high (at least for Core2 thread-hours); if time scales with the inverse of poly score (it does for nearby scores, plus or minus 10%), we're looking at a 5000 to 5500 thread-hour project, which also means 3000 curves at 110M is a bit high. Let's end ECM at 2000 curves at 110M.
Edit: Aha! A thread-hour on an HT machine is roughly a 2 GHz equivalent, while this post's data is from a 3.2 GHz non-HT machine. That explains my estimate dropping to 5/8ths of the previous one: the data is from a machine 8/5ths the speed of a fully loaded HT Haswell. Never mind, 3000 curves is fine, nothing to see here... 
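The revised estimate in that post is just inverse-score scaling applied to the C165 data point. As a sketch with the post's numbers:

```python
def scale_by_score(base_hours, base_score, new_score):
    # For nearby poly scores, sieve effort scales roughly with the
    # inverse of the score (plus or minus 10%).
    return base_hours * base_score / new_score

# C165: 4500 thread-hours at score 6.67e-13; C167 poly scores 5.6e-13.
print(round(scale_by_score(4500, 6.67e-13, 5.6e-13)))  # -> 5360
```

5360 falls inside the "5000 to 5500 thread-hour" range quoted above; as the Edit notes, the remaining gap to the 8500-hr guess is the clock-speed difference between the reference machines.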
So when do we crack a beer? :smile:
My first machine is reporting an ETA of 164 hours for 3000 curves. Next machine is of similar performance and will become available on Thursday evening, but depending on RichD's throughput we may not need it. We can do more ECM if NFS@Home requires it, but the above calculations by VBCurtis seem solid. The work continues... 
C167
Passing through 510 @ 11e7.

Passing through 375 @11e7.

Passing through 600 curves @11e7.

C167
Passing through 1500 @ 11e7.

400 curves at 15e7 complete, roughly equivalent to 500 @ 11e7. Looks like it's about ready for the 14e queue.

Passing through 840 @11e7. Will be over 1000 curves in the morning, and that makes a nice stopping point. Agreed?

Agreed, that would be more than 3000 total, plenty.

My final count was 1064 curves @11e7. Good luck with the GNFS and this series.

Completed 2200 @ 11e7 - no factor.

What is the status of this sequence?

I believe the C167 is ready for GNFS. ECM is complete and the poly is [URL=http://www.mersenneforum.org/showpost.php?p=424602&postcount=217]here[/URL].
We can either perform a team sieve or wait to see if NFS@Home will pick it up. I think there are several numbers ahead of this one waiting for review by NFS@Home. 
I suggest you nominate it for inclusion in the NFS@Home queue. Nothing wrong with a team sieve effort, and I'm willing to participate, but my resources are tasked until early March.
Maybe others will help with the sieving in the near term. I'll watch this thread and follow your lead. 
Queued at [url]http://escatter11.fullerton.edu/nfs/crunching.php[/url]

C167_3408_1608
[url=http://www.mersenneforum.org/showpost.php?p=428324&postcount=694]Results posted[/url]

C147 @ i1609
t40 & P-1 @ 9e9 - no factor
Continuing. 
C141 @ i1614
2 * 3^2 * 5 * 7 * ... * C141
I advanced a few steps but I need to get back to other projects. Any help is welcome. 
Hey, someone cracked the C182 from i1615, there is now an easy C120 waiting at i1616
Congrats! 
It's me. C120 was also done :smile:

Sequence now at i1620 with a C175. I took it up to a trivial t35 but hit nothing. Sorry if I was poaching - didn't see any notices. But I've abandoned the sequence now. Here's hoping further ECM can crack it.

[QUOTE=RichD;433181]Aliquot sequence 3408 is also assigned to the forum though not [URL="http://www.mersenneforum.org/showthread.php?t=18421&page=23"]much activity[/URL] has been done lately. It currently stands with a [URL="http://www.factordb.com/sequences.php?se=1&aq=3408&action=range&fr=1620&to=1620"]C175[/URL].
Hint, hint.[/QUOTE] Hehe, one three is gone (actually... two threes are gone!). With a bit of luck we'll get rid of the last three too, and then it would be a dead sequence... 
I did t45 on the C175 and ~300 curves @43e6. I've stopped now.

Now C157 @i1628 and it is ready for GNFS.

C157 @ i1628
Two to choose from.
[CODE]N: 1107539996194348607758987938149998439013200145998852929251249938679777840767363942489403268905868353799049234977734483837810312019588231329948926384387678623
# expecting poly E from 2.18e-12 to > 2.51e-12
R0: 1609319999339834703990591259529
R1: 2549538936276181
A0: 779820356075790076493881438513948920
A1: 10539848872370286710492549619340
A2: 21098243756839030672787026
A3: 15960416359995756765
A4: 2388677893794
A5: 102600
# skew 2804357.24, size 3.460e-15, alpha -6.747, combined = 2.333e-12 rroots = 5

N: 1107539996194348607758987938149998439013200145998852929251249938679777840767363942489403268905868353799049234977734483837810312019588231329948926384387678623
# expecting poly E from 2.18e-12 to > 2.51e-12
R0: 831609265226649019894593467224
R1: 697467876707489
A0: 11214674758834255568502622868971842975
A1: 10156257300307417199888877689380
A2: 45856196009118127755771581
A3: 5512568915297278574
A4: 25923868378092
A5: 2784600
# skew 1448164.02, size 3.257e-15, alpha -7.208, combined = 2.251e-12 rroots = 5[/CODE] 
Taking this one for NFS@Home's 14e.

Next blocker is C153@i1629 which resists 3000@43e6 from me.

I'll factor it.

It's factored now.
[CODE]Using B1=43000000, B2=240490660426, polynomial Dickson(12), sigma=3169275245
Step 1 took 204480ms
Step 2 took 63339ms
********** Factor found in step 2: 113620712594484464788316394064963357489161663400363
Found probable prime factor of 51 digits: 113620712594484464788316394064963357489161663400363[/CODE] 
What's the status of this C149 or the C180 of 3366?

C149 is ready for nfs, C180 survived 3000@43e6 so far.

I'll NFS the C149 here.

I'm slowly working towards t45 on i1634 C167.

Near completion of the t50, a p50 popped out; NFS on the remainder revealed a p48 and p70. It was a bit unlucky, though the two digits of NFS time saved are possibly worth more than the longer-than-average amount of ECM work it took to find the p50.
In either case, i1636 has a C180 at t45, though it'll likely be the better part of a week for me to complete a t50. 
Someone else has also found the P48, though I'm past that and currently NFSing a C136 (should be less than 10 hours before that completes).
I will work every number through, either fully factoring it myself and moving on, or seeing ECM and polyselect through to NFS@Home, until such time as I post that I have given up on the number. 
Or not. Someone with more resources is pushing the sequence past me.

Anyone run ECM curves on c158@i1649 ? No updates since June 29.

[QUOTE=unconnected;438713]Anyone run ECM curves on c158@i1649 ? No updates since June 29.[/QUOTE]
Looks like someone was poaching on Dubslow but then got bored. Maybe it was unintentional but poaching nevertheless. Anyone want to reserve this sequence? I have no resources to spare currently. Dubslow? Yours was the last reservation. 
[QUOTE=unconnected;438713]Anyone run ECM curves on c158@i1649 ? [/QUOTE]
I did 4500@11e6 and 3000@43e6 on it - no factors. Another 3000@43e6 in progress. 
Can someone find a poly for this? If ECM doesn't hit, I can run SNFS next week.

Sure, I can get you a poly by the end of the weekend.
I'll run msieve-GPU to do so; if anyone else chooses to run, please use first coefficients above 1M. 
C158
An average one from the 4M range to beat.
[CODE]N: 18055997083148694993898830047587909445293704904272941567447470181180256267040716650865275261560648696565277214392915125302555784136951112052232838721396553041
# expecting poly E from 1.84e-12 to > 2.12e-12
R0: 1349815755454126961826984196884
R1: 2712308297194163
A0: 32201371229291638933708920597791829295
A1: 72730117299910544086082879376273
A2: 113310109273797435099875345
A3: 25463421811136220309
A4: 28315318968406
A5: 4029480
# skew 2038863.68, size 2.631e-15, alpha -7.691, combined = 1.998e-12 rroots = 5[/CODE] 
Sorry I'm late. I tied RichD with my first run, so made a second run (which didn't help). Here are my two best:
[code]N 18055997083148694993898830047587909445293704904272941567447470181180256267040716650865275261560648696565277214392915125302555784136951112052232838721396553041
SKEW 3146043.65
R0 2217693253976731598275048870758
R1 7607723652311251
A0 11265081658508851032029782044722064595
A1 3277538334460862545072418018177
A2 4886710259841315171137387
A3 6991624360475671937
A4 2868703556058
A5 336600
#skew 3146043.65, size 2.539e-15, alpha -6.945, combined = 1.986e-12 rroots = 1

N 18055997083148694993898830047587909445293704904272941567447470181180256267040716650865275261560648696565277214392915125302555784136951112052232838721396553041
SKEW 3909600.72
R0 1845007741249083860157722463580
R1 3796167614537057
A0 158992908651771612480801508359453589533
A1 103379758640521209245040219270953
A2 39079075741511272435283203
A3 11225289270707446563
A4 3635942760954
A5 844560
#skew 3909600.72, size 2.539e-15, alpha -7.618, combined = 1.966e-12 rroots = 3[/code] I ran A1 from 15k to 1.27M. 
Thanks, guys, for the polys. I ended up choosing VBCurtis' first poly, though they were all similar in performance. Will start it in a day or two once resources free up. Will take a few weeks to fully factor.

Any progress here?

[QUOTE=unconnected;442297]Any progress here?[/QUOTE]
Yes, it's in LA and should be factored in two more days. I've become spoiled by RyanP and his ability to chew through hundreds of iterations per day! 
I'm done. Series is at i1651 with a C149. Ran a bit of ECM to t35.
Somebody take it. 
C149 was cracked by ECM (a p50 after ~400 curves @11e6 - I'm lucky here).
Now again C158, I can run polyselect for this but not all GNFS. 
Poly for C158.
[CODE]# norm 2.829565e-15 alpha -9.674228 e 1.832e-12 rroots 1
n: 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481
skew: 45476477.92
c0: 2104403158320717301966750387083391826024832
c1: 167898735083323752751067777774833704
c2: 1989821115747940481877005858
c3: 151701621080234560409
c4: 351786144456
c5: 36900
Y0: 4646655632492287543013586279001
Y1: 119797584633535873
rlim: 36800000
alim: 36800000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
[/CODE] 
If no one jumps up to take this on soon, I might want to experiment with it. It's been a while since I did this type of single-number work, and I can't find (or remember) anything on choosing the proper siever. Would this be lasieve4I14e?
Or, do I need to test? Maybe that's why I can't find it... 
158 is squarely in 14e territory, though I suggest 31LP rather than 30. I think NFS@Home often runs one large-prime bit small because data sets 40% smaller are worth 4-8% extra computation effort; for an individual effort, the trade-off for less effort is valuable.
I believe 31 is faster than 30 at 155 digits, and 32 is faster than 31 at 166 digits. The transition to 15e is somewhere around 170, well above the typical single-machine project. Basically, I run almost all my projects one LP bit higher than NFS@Home chooses, with very good results. Something near 150M raw relations should allow you to build a matrix with target density 96 or 100. If the architecture you run LA on is older than Haswell, I'd set target density at 100-110, while LA on Haswell is fast enough that 90 or 96 will save you more sieve time than it costs you in LA (compared to, say, 104 or 110). 
For excessive detail on which siever:
[url]http://mersenneforum.org/showpost.php?p=426120&postcount=30[/url] 
[QUOTE=VBCurtis;443039]158 is squarely in 14e territory, though I suggest 31LP rather than 30. I think NFS@Home often runs one large-prime bit small because data sets 40% smaller are worth 4-8% extra computation effort; for an individual effort, the trade-off for less effort is valuable.
I believe 31 is faster than 30 at 155 digits, and 32 is faster than 31 at 166 digits. The transition to 15e is somewhere around 170, well above the typical single-machine project. Basically, I run almost all my projects one LP bit higher than NFS@Home chooses, with very good results. Something near 150M raw relations should allow you to build a matrix with target density 96 or 100. If the architecture you run LA on is older than Haswell, I'd set target density at 100-110, while LA on Haswell is fast enough that 90 or 96 will save you more sieve time than it costs you in LA (compared to, say, 104 or 110).[/QUOTE] I have several machines, all 64-bit multi-core, but not very new, and I found my Team Sieving scripts on several. I started some of them last night to see what I would have this morning. I used the original poly, and the scripts had been set to use siever 15 from the last time I ran it, so I left it. I currently have about 7M unique relations. Would it be worth restarting from scratch, or should I just let it go? I guess I'll work this number and see if I can actually complete it. 
[QUOTE=EdH;443053]I have several machines, all 64bit multicore, but not very new and I found my Team Sieving scripts on several. I started some of them last night to see what I would have this morning. I used the original poly and the scripts had been set to use siever 15 from the last time I ran it, so I left it. I currently have about 7M unique relations. Would it be worth restarting from scratch, or should I just let it go?
I guess I'll work this number and see if I can actually complete it.[/QUOTE] You can switch from 15e to 14e mid-job. 15e searches a larger region and will have found more relations than 14e. These should be useful relations in the post-processing. It isn't like you are reducing the large prime bound. 
[QUOTE=henryzz;443054]You can switch from 15e to 14e mid-job. 15e searches a larger region and will have found more relations than 14e. These should be useful relations in the post-processing. It isn't like you are reducing the large prime bound.[/QUOTE]
I have swapped to 14e and some of the machines are already running with it. The rest should swap over when they finish their current assignments. How much RAM will I need for the LA step? (I'm actually thinking about resurrecting my openmpi setup for that - nah, probably not... at least for now...) Thanks to both VBCurtis and henryzz! 
Well, that depends on how far you oversieve, what target density you choose, and (of course) some luck. If you have to use a 4GB system, you might need some extra relations to get the matrix small enough.

[QUOTE=VBCurtis;443093]Well, that depends on how far you oversieve, what target density you choose, and (of course) some luck. If you have to use a 4GB system, you might need some extra relations to get the matrix small enough.[/QUOTE]
Hmmm... I have some 4GBs... And, one 6GB. But, I'm not sure about the 6GB right now. The CPU temps are not right - they are about ten degrees C different and are hovering around 65-75. I've repasted the CPU twice with no change. Maybe I will have to set up a miniature cluster again... Thanks... 
Edit: I think I have it figured out.
Sorry for the following, but my memory is failing me badly (or, is it actually failing me very well...)? I can't find any notes and I wasn't clear from the readmes. [strike]How do I manually invoke msieve when I finally have some relations to try? It seems like I need to make some other files and then run -nc1, etc.[/strike] I did set up a two-machine cluster to help with the LA. I think I can get that running. 
OK, posted:
[code]p59 factor: 30592812650704160232149399890819229125108884689066106302671
p100 factor: 2612820076920762067419883609800717505891645306969102554711927161587415720256631048330542515491118111
[/code]I haven't done anything with the c191... 
How big was the matrix, and how much memory did it take to perform LA? Did you do it on one machine, or two?

[QUOTE=VBCurtis;443481]How big was the matrix, and how much memory did it take to perform LA? Did you do it on one machine, or two?[/QUOTE]
Sorry for the delay, but I was away from home and the Internet most of the day yesterday and now, back home, my main computer has crashed. (I must be overdue on backing it up.) Anyway, to the points: [code]matrix is 5327133 x 5327307 (1620.8 MB) with weight 513026738 (96.30/col)
memory use: 2259.1 MB
one machine (4 cores 4G memory)
[/code]After getting the relations from several machines, I ran remdups4 and then did the rest on an Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz with 4G memory, using all four cores. Here's the log file (the time is off by a couple hours): [code]Fri Sep 23 09:48:02 2016
Fri Sep 23 09:48:02 2016
Fri Sep 23 09:48:02 2016  Msieve v. 1.53 (SVN 993)
Fri Sep 23 09:48:02 2016  random seeds: 7ef1dc61 cedd1592
Fri Sep 23 09:48:02 2016  factoring 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481 (158 digits)
Fri Sep 23 09:48:03 2016  searching for 15-digit factors
Fri Sep 23 09:48:04 2016  commencing number field sieve (158-digit input)
Fri Sep 23 09:48:04 2016  R0: 4646655632492287543013586279001
Fri Sep 23 09:48:04 2016  R1: 119797584633535873
Fri Sep 23 09:48:04 2016  A0: 2104403158320717301966750387083391826024832
Fri Sep 23 09:48:04 2016  A1: 167898735083323752751067777774833704
Fri Sep 23 09:48:04 2016  A2: 1989821115747940481877005858
Fri Sep 23 09:48:04 2016  A3: 151701621080234560409
Fri Sep 23 09:48:04 2016  A4: 351786144456
Fri Sep 23 09:48:04 2016  A5: 36900
Fri Sep 23 09:48:04 2016  skew 45476477.92, size 2.340e-15, alpha -9.674, combined = 1.832e-12 rroots = 1
Fri Sep 23 09:48:04 2016
Fri Sep 23 09:48:04 2016  commencing relation filtering
Fri Sep 23 09:48:04 2016  estimated available RAM is 3673.8 MB
Fri Sep 23 09:48:04 2016  commencing duplicate removal, pass 1
Fri Sep 23 10:07:09 2016  found 2786288 hash collisions in 79276144 relations
Fri Sep 23 10:07:55 2016  added 122115 free relations
Fri Sep 23 10:07:55 2016  commencing duplicate removal, pass 2
Fri Sep 23 10:09:25 2016  found 4 duplicates and 79398255 unique relations
Fri Sep 23 10:09:25 2016  memory use: 253.2 MB
Fri Sep 23 10:09:25 2016  reading ideals above 49872896
Fri Sep 23 10:09:25 2016  commencing singleton removal, initial pass
Fri Sep 23 10:24:31 2016  memory use: 1506.0 MB
Fri Sep 23 10:24:31 2016  reading all ideals from disk
Fri Sep 23 10:24:47 2016  memory use: 1380.0 MB
Fri Sep 23 10:24:55 2016  commencing in-memory singleton removal
Fri Sep 23 10:25:03 2016  begin with 79398255 relations and 76382521 unique ideals
Fri Sep 23 10:26:29 2016  reduce to 36978097 relations and 27624336 ideals in 17 passes
Fri Sep 23 10:26:29 2016  max relations containing the same ideal: 40
Fri Sep 23 10:26:33 2016  reading ideals above 720000
Fri Sep 23 10:26:34 2016  commencing singleton removal, initial pass
Fri Sep 23 10:35:21 2016  memory use: 753.0 MB
Fri Sep 23 10:35:21 2016  reading all ideals from disk
Fri Sep 23 10:35:37 2016  memory use: 1332.8 MB
Fri Sep 23 10:35:46 2016  keeping 33415419 ideals with weight <= 200, target excess is 196109
Fri Sep 23 10:35:54 2016  commencing in-memory singleton removal
Fri Sep 23 10:36:03 2016  begin with 36978100 relations and 33415419 unique ideals
Fri Sep 23 10:37:54 2016  reduce to 36944449 relations and 33381759 ideals in 13 passes
Fri Sep 23 10:37:54 2016  max relations containing the same ideal: 200
Fri Sep 23 10:38:32 2016  removing 3507732 relations and 3107732 ideals in 400000 cliques
Fri Sep 23 10:38:34 2016  commencing in-memory singleton removal
Fri Sep 23 10:38:42 2016  begin with 33436717 relations and 33381759 unique ideals
Fri Sep 23 10:39:59 2016  reduce to 33184377 relations and 30017954 ideals in 10 passes
Fri Sep 23 10:39:59 2016  max relations containing the same ideal: 195
Fri Sep 23 10:40:34 2016  removing 2584818 relations and 2184818 ideals in 400000 cliques
Fri Sep 23 10:40:35 2016  commencing in-memory singleton removal
Fri Sep 23 10:40:42 2016  begin with 30599559 relations and 30017954 unique ideals
Fri Sep 23 10:41:38 2016  reduce to 30442978 relations and 27674527 ideals in 8 passes
Fri Sep 23 10:41:38 2016  max relations containing the same ideal: 185
Fri Sep 23 10:42:10 2016  removing 2295595 relations and 1895595 ideals in 400000 cliques
Fri Sep 23 10:42:11 2016  commencing in-memory singleton removal
Fri Sep 23 10:42:18 2016  begin with 28147383 relations and 27674527 unique ideals
Fri Sep 23 10:43:15 2016  reduce to 28011320 relations and 25641164 ideals in 9 passes
Fri Sep 23 10:43:15 2016  max relations containing the same ideal: 177
Fri Sep 23 10:43:44 2016  removing 2139525 relations and 1739526 ideals in 400000 cliques
Fri Sep 23 10:43:46 2016  commencing in-memory singleton removal
Fri Sep 23 10:43:52 2016  begin with 25871795 relations and 25641164 unique ideals
Fri Sep 23 10:44:38 2016  reduce to 25743939 relations and 23772178 ideals in 8 passes
Fri Sep 23 10:44:38 2016  max relations containing the same ideal: 166
Fri Sep 23 10:45:05 2016  removing 2043197 relations and 1643197 ideals in 400000 cliques
Fri Sep 23 10:45:06 2016  commencing in-memory singleton removal
Fri Sep 23 10:45:12 2016  begin with 23700742 relations and 23772178 unique ideals
Fri Sep 23 10:46:05 2016  reduce to 23575597 relations and 22002134 ideals in 10 passes
Fri Sep 23 10:46:05 2016  max relations containing the same ideal: 157
Fri Sep 23 10:46:29 2016  removing 1974189 relations and 1574189 ideals in 400000 cliques
Fri Sep 23 10:46:30 2016  commencing in-memory singleton removal
Fri Sep 23 10:46:35 2016  begin with 21601408 relations and 22002134 unique ideals
Fri Sep 23 10:47:14 2016  reduce to 21475044 relations and 20299725 ideals in 8 passes
Fri Sep 23 10:47:14 2016  max relations containing the same ideal: 144
Fri Sep 23 10:47:37 2016  removing 1930392 relations and 1530392 ideals in 400000 cliques
Fri Sep 23 10:47:37 2016  commencing in-memory singleton removal
Fri Sep 23 10:47:42 2016  begin with 19544652 relations and 20299725 unique ideals
Fri Sep 23 10:48:17 2016  reduce to 19413060 relations and 18635646 ideals in 8 passes
Fri Sep 23 10:48:17 2016  max relations containing the same ideal: 136
Fri Sep 23 10:48:37 2016  removing 1897152 relations and 1497152 ideals in 400000 cliques
Fri Sep 23 10:48:38 2016  commencing in-memory singleton removal
Fri Sep 23 10:48:42 2016  begin with 17515908 relations and 18635646 unique ideals
Fri Sep 23 10:49:13 2016  reduce to 17375964 relations and 16996080 ideals in 8 passes
Fri Sep 23 10:49:13 2016  max relations containing the same ideal: 124
Fri Sep 23 10:49:31 2016  removing 870487 relations and 718090 ideals in 152397 cliques
Fri Sep 23 10:49:32 2016  commencing in-memory singleton removal
Fri Sep 23 10:49:36 2016  begin with 16505477 relations and 16996080 unique ideals
Fri Sep 23 10:50:01 2016  reduce to 16475956 relations and 16248256 ideals in 7 passes
Fri Sep 23 10:50:01 2016  max relations containing the same ideal: 121
Fri Sep 23 10:50:24 2016  relations with 0 large ideals: 698
Fri Sep 23 10:50:24 2016  relations with 1 large ideals: 974
Fri Sep 23 10:50:24 2016  relations with 2 large ideals: 17017
Fri Sep 23 10:50:24 2016  relations with 3 large ideals: 156849
Fri Sep 23 10:50:24 2016  relations with 4 large ideals: 781984
Fri Sep 23 10:50:24 2016  relations with 5 large ideals: 2276838
Fri Sep 23 10:50:24 2016  relations with 6 large ideals: 4091338
Fri Sep 23 10:50:24 2016  relations with 7+ large ideals: 9150258
Fri Sep 23 10:50:24 2016  commencing 2-way merge
Fri Sep 23 10:50:47 2016  reduce to 9902807 relation sets and 9675107 unique ideals
Fri Sep 23 10:50:47 2016  commencing full merge
Fri Sep 23 10:54:27 2016  memory use: 1172.9 MB
Fri Sep 23 10:54:28 2016  found 5358534 cycles, need 5327307
Fri Sep 23 10:54:31 2016  weight of 5327307 cycles is about 373003744 (70.02/cycle)
Fri Sep 23 10:54:31 2016  distribution of cycle lengths:
Fri Sep 23 10:54:31 2016  1 relations: 732212
Fri Sep 23 10:54:31 2016  2 relations: 652634
Fri Sep 23 10:54:31 2016  3 relations: 646332
Fri Sep 23 10:54:31 2016  4 relations: 594075
Fri Sep 23 10:54:31 2016  5 relations: 535051
Fri Sep 23 10:54:31 2016  6 relations: 472112
Fri Sep 23 10:54:31 2016  7 relations: 410449
Fri Sep 23 10:54:31 2016  8 relations: 341690
Fri Sep 23 10:54:31 2016  9 relations: 274406
Fri Sep 23 10:54:31 2016  10+ relations: 668346
Fri Sep 23 10:54:31 2016  heaviest cycle: 19 relations
Fri Sep 23 10:54:33 2016  commencing cycle optimization
Fri Sep 23 10:54:45 2016  start with 27751237 relations
Fri Sep 23 10:55:33 2016  pruned 564060 relations
Fri Sep 23 10:55:33 2016  memory use: 958.9 MB
Fri Sep 23 10:55:33 2016  distribution of cycle lengths:
Fri Sep 23 10:55:33 2016  1 relations: 732212
Fri Sep 23 10:55:33 2016  2 relations: 666012
Fri Sep 23 10:55:33 2016  3 relations: 665955
Fri Sep 23 10:55:33 2016  4 relations: 605940
Fri Sep 23 10:55:33 2016  5 relations: 545494
Fri Sep 23 10:55:33 2016  6 relations: 478075
Fri Sep 23 10:55:33 2016  7 relations: 412649
Fri Sep 23 10:55:33 2016  8 relations: 340124
Fri Sep 23 10:55:33 2016  9 relations: 270059
Fri Sep 23 10:55:33 2016  10+ relations: 610787
Fri Sep 23 10:55:33 2016  heaviest cycle: 19 relations
Fri Sep 23 10:55:44 2016  RelProcTime: 4060
Fri Sep 23 10:55:44 2016  elapsed time 01:07:42
Fri Sep 23 10:58:46 2016
Fri Sep 23 10:58:46 2016
Fri Sep 23 10:58:46 2016  Msieve v. 1.53 (SVN 993)
Fri Sep 23 10:58:46 2016  random seeds: 34464f6c 3f8c0ce2
Fri Sep 23 10:58:46 2016  factoring 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481 (158 digits)
Fri Sep 23 10:58:48 2016  searching for 15-digit factors
Fri Sep 23 10:58:48 2016  commencing number field sieve (158-digit input)
Fri Sep 23 10:58:48 2016  R0: 4646655632492287543013586279001
Fri Sep 23 10:58:48 2016  R1: 119797584633535873
Fri Sep 23 10:58:48 2016  A0: 2104403158320717301966750387083391826024832
Fri Sep 23 10:58:48 2016  A1: 167898735083323752751067777774833704
Fri Sep 23 10:58:48 2016  A2: 1989821115747940481877005858
Fri Sep 23 10:58:48 2016  A3: 151701621080234560409
Fri Sep 23 10:58:48 2016  A4: 351786144456
Fri Sep 23 10:58:48 2016  A5: 36900
Fri Sep 23 10:58:48 2016  skew 45476477.92, size 2.340e-15, alpha -9.674, combined = 1.832e-12 rroots = 1
Fri Sep 23 10:58:48 2016
Fri Sep 23 10:58:48 2016  commencing linear algebra
Fri Sep 23 10:58:49 2016  read 5327307 cycles
Fri Sep 23 10:59:00 2016  cycles contain 16310977 unique relations
Fri Sep 23 11:01:54 2016  read 16310977 relations
Fri Sep 23 11:02:24 2016  using 20 quadratic characters above 4294917295
Fri Sep 23 11:03:57 2016  building initial matrix
Fri Sep 23 11:08:07 2016  memory use: 2259.1 MB
Fri Sep 23 11:08:23 2016  read 5327307 cycles
Fri Sep 23 11:08:24 2016  matrix is 5327133 x 5327307 (1620.8 MB) with weight 513026738 (96.30/col)
Fri Sep 23 11:08:24 2016  sparse part has weight 360967488 (67.76/col)
Fri Sep 23 11:09:41 2016  filtering completed in 2 passes
Fri Sep 23 11:09:43 2016  matrix is 5325533 x 5325711 (1620.7 MB) with weight 512973592 (96.32/col)
Fri Sep 23 11:09:43 2016  sparse part has weight 360957497 (67.78/col)
Fri Sep 23 11:10:21 2016  matrix starts at (0, 0)
Fri Sep 23 11:10:22 2016  matrix is 5325533 x 5325711 (1620.7 MB) with weight 512973592 (96.32/col)
Fri Sep 23 11:10:22 2016  sparse part has weight 360957497
(67.78/col) Fri Sep 23 11:10:22 2016 saving the first 48 matrix rows for later Fri Sep 23 11:10:23 2016 matrix includes 128 packed rows Fri Sep 23 11:10:24 2016 matrix is 5325485 x 5325711 (1539.9 MB) with weight 413030725 (77.55/col) Fri Sep 23 11:10:24 2016 sparse part has weight 339768377 (63.80/col) Fri Sep 23 11:10:24 2016 using block size 8192 and superblock size 294912 for processor cache size 3072 kB Fri Sep 23 11:10:51 2016 commencing Lanczos iteration (4 threads) Fri Sep 23 11:10:51 2016 memory use: 1266.4 MB Fri Sep 23 11:11:40 2016 linear algebra at 0.0%, ETA 45h54m Fri Sep 23 11:11:56 2016 checkpointing every 120000 dimensions Sun Sep 25 08:52:46 2016 lanczos halted after 84222 iterations (dim = 5325485) Sun Sep 25 08:52:53 2016 recovered 29 nontrivial dependencies Sun Sep 25 08:52:53 2016 BLanczosTime: 165245 Sun Sep 25 08:52:53 2016 elapsed time 45:54:07 Sun Sep 25 19:54:00 2016 Sun Sep 25 19:54:00 2016 Sun Sep 25 19:54:00 2016 Msieve v. 1.53 (SVN 993) Sun Sep 25 19:54:00 2016 random seeds: d761917f 2106574e Sun Sep 25 19:54:00 2016 factoring 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481 (158 digits) Sun Sep 25 19:54:01 2016 searching for 15digit factors Sun Sep 25 19:54:02 2016 commencing number field sieve (158digit input) Sun Sep 25 19:54:02 2016 R0: 4646655632492287543013586279001 Sun Sep 25 19:54:02 2016 R1: 119797584633535873 Sun Sep 25 19:54:02 2016 A0: 2104403158320717301966750387083391826024832 Sun Sep 25 19:54:02 2016 A1: 167898735083323752751067777774833704 Sun Sep 25 19:54:02 2016 A2: 1989821115747940481877005858 Sun Sep 25 19:54:02 2016 A3: 151701621080234560409 Sun Sep 25 19:54:02 2016 A4: 351786144456 Sun Sep 25 19:54:02 2016 A5: 36900 Sun Sep 25 19:54:02 2016 skew 45476477.92, size 2.340e15, alpha 9.674, combined = 1.832e12 rroots = 1 Sun Sep 25 19:54:02 2016 Sun Sep 25 19:54:02 2016 commencing square root phase Sun Sep 25 
19:54:02 2016 reading relations for dependency 1 Sun Sep 25 19:54:03 2016 read 2663865 cycles Sun Sep 25 19:54:09 2016 cycles contain 8155114 unique relations Sun Sep 25 19:55:49 2016 read 8155114 relations Sun Sep 25 19:56:46 2016 multiplying 8155114 relations Sun Sep 25 20:11:05 2016 multiply complete, coefficients have about 422.94 million bits Sun Sep 25 20:11:09 2016 initial square root is modulo 38806457 Sun Sep 25 20:28:39 2016 sqrtTime: 2077 Sun Sep 25 20:28:39 2016 p59 factor: 30592812650704160232149399890819229125108884689066106302671 Sun Sep 25 20:28:39 2016 p100 factor: 2612820076920762067419883609800717505891645306969102554711927161587415720256631048330542515491118111 Sun Sep 25 20:28:39 2016 elapsed time 00:34:39 [/code]This won't be a surprise to some, but my cluster did not work. Even after I got the cluster details correct, it wouldn't build a matrix: [code] matrix needs more columns than rows; try adding 23% more relations [/code]I need to make sure I wasn't using an earlier iteration of my relations file, since the relation count doesn't match the other machine. But, for now, I'm tied up with other things... Thanks for all the help. 
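For anyone who wants to double-check the result, a quick sanity check (plain Python, just multiplying the reported factors back together against the composite from the log):

```python
# Sanity check: the p59 and p100 reported by msieve's square root phase
# should multiply back to the 158-digit composite that was factored.
n = 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481
p59 = 30592812650704160232149399890819229125108884689066106302671
p100 = 2612820076920762067419883609800717505891645306969102554711927161587415720256631048330542515491118111

assert p59 * p100 == n
print(len(str(p59)), len(str(p100)), len(str(n)))  # 59 100 158
```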
[QUOTE=EdH;443522]...
This won't be a surprise to some, but my cluster did not work. Even after I got the cluster details correct, it wouldn't build a matrix: [code] matrix needs more columns than rows; try adding 2-3% more relations [/code]I need to make sure I wasn't using an earlier iteration of my relations file, since the relation count doesn't match the other machine. But, for now, I'm tied up with other things... Thanks for all the help.[/QUOTE] I finally got the cluster to work, but there is still one "bug" I'll bring up elsewhere, having to do with threads (-t). I was able to shave 7 hours off the time by using three machines in one of the setups: 38h52m vs. 45h54m with the quad-core machine. What is the current state of this sequence? Is it somewhat of a free-for-all, or is there some protocol I need to be aware of?
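Two quick numbers behind that post, computed explicitly (a back-of-envelope sketch only, not msieve's internal logic; the 143.4M raw-relation figure is from the stats posted earlier in the thread):

```python
# 1) "try adding 2-3% more relations": what that means in absolute
#    terms against the ~143.4M raw relations this job collected.
raw_rels = 143_400_000
extras = {pct: raw_rels * pct // 100 for pct in (2, 3)}
print(extras)  # {2: 2868000, 3: 4302000}

# 2) Cluster vs. single quad-core for the linear algebra step:
cluster = 38 * 60 + 52   # 38h52m, in minutes
single = 45 * 60 + 54    # 45h54m, in minutes
saved_h = (single - cluster) / 60
print(f"saved {saved_h:.1f} h ({100 * (1 - cluster / single):.1f}% faster)")
```

So the three-machine setup bought roughly a 15% speedup on the matrix, consistent with the "shaved 7 hours" figure.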
[QUOTE=EdH;443655]What is the current state of this sequence? Is it somewhat of a free-for-all, or is there some protocol I need to be aware of?[/QUOTE]We have an open-ended reservation from Christophe Clavier. Looking back, it looks like RichD yielded back in March '16:[QUOTE=RichD;428989]2 * 3^2 * 5 * 7 * ... * C141
I advanced a few steps but I need to get back to other projects. Any help is welcome.[/QUOTE]It's been worked on since then as time and whim dictates. (To the best of my knowledge!) It's probably best to announce any large jobs before you run them, so that you or someone else doesn't waste significant time/resources.