[QUOTE=R.D. Silverman;526793]Sam Wagstaff just got 2,2386L via ECM.
[Just in case anyone was wondering whether the remaining composites are worth attacking with ECM][/QUOTE] 2,2126M should finish sieving in another few days; I estimate ~5 days. The status page does not indicate what will be done next. It's been two months since it was last updated. Allow me to ask: Has the polynomial/input file(s) for 2,2330M been sent to Greg? It's also been a long time since the last LA result. Based on past efforts I would have expected 10,313+ to have been finished by now. I assume Greg got hung up for some reason... [if indeed 10,313+ was the next target]. I am just ~two weeks away from finishing my ECM B1 = 3G pass through the 2,4k+ table [1000 curves on each]. 2,1180+ is in progress now. Coincidentally, I will also be retiring from work at about the same time that 2,1180+ finishes. I will lose quite a few cores when I do so. 
[QUOTE=R.D. Silverman;527357]Has the polynomial/input file(s) for 2,2330M been sent to Greg? [/QUOTE]
No, I haven't had much time to testsieve. I expect I'll need this weekend and next to get it done, but no promises. 
Folks probably know this but 2,1084+ has completed 0.5t65, and 2,2150M is currently working on the same level and should complete 0.5t65 in 7-10 days. I’ve queued up both numbers for another 12,000 curves @ B1=850M. Once that is done, I’ll ask Yoyo if I can kick the job up to 24,000 curves each just to finish it off.
That said, it seems as though Greg is getting ready to queue one or both of these numbers. When he does, I will cease adding more work to the queue and advise Yoyo. Next up for ECM is 2,1165+ (the only remaining GNFS candidate, with 43% of a t65 so far) and 2,1157+. A full t65 will take us into spring at the current rate of progress. 
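The t-level bookkeeping in these posts can be sketched in a few lines. This assumes the commonly quoted GMP-ECM figure of roughly 69,408 curves at B1=850M (with the default B2) for one full t65; the exact count depends on the B2 used, so treat it as approximate.

```python
# Rough t-level bookkeeping for ECM at B1 = 850M.
# ASSUMPTION: ~69,408 curves at B1 = 850e6 with GMP-ECM's default B2 is one
# full t65 (a commonly quoted figure; the exact count depends on B2).
CURVES_PER_T65 = 69408

def t65_fraction(curves_done: int) -> float:
    """Fraction of a t65 represented by curves_done curves at B1 = 850M."""
    return curves_done / CURVES_PER_T65

# One Yoyo batch of 12,000 curves is a modest slice of a t65, while the
# ~48,000 curves later reported for 2,1084+1 is most of one:
print(f"{t65_fraction(12_000):.2f}")   # ~0.17
print(f"{t65_fraction(48_000):.2f}")   # ~0.69
```

On these assumptions, a single 12,000-curve round advances a number by only about a sixth of a t65, which is why six rounds per number keeps coming up later in the thread.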
[QUOTE=swellman;527464]Folks probably know this but 2,1084+ has completed 0.5t65, and 2,2150M is currently working on the same level and should complete 0.5t65 in 7-10 days.
<snip> [/QUOTE] Minor addendum: This is only with respect to work that YoYo has done. If one adds prior contributions from Bruce Dodson, EPFL, Peter Montgomery, me, et al., the total work adds up to quite a bit more than t65. 
[QUOTE=R.D. Silverman;527468]Minor addendum: This is only with respect to work that YoYo has done.
If one adds prior contributions from Bruce Dodson, EPFL, Peter Montgomery, me, et al., the total work adds up to quite a bit more than t65.[/QUOTE] Of course! I meant no slight to the previous work, just talking about the work associated with the recent initiative. Sorry if it seemed as if I was ignoring all the contributions to the Cunningham project, certainly not my intent. 
[QUOTE=swellman;527469]Of course! I meant no slight to the previous work, just talking about the work associated with the recent initiative. Sorry if it seemed as if I was ignoring all the contributions to the Cunningham project, certainly not my intent.[/QUOTE]
No, it did not seem as if you were ignoring previous work. No slight taken. I just wanted to make it clear to those who may not be following the work that you were referring to current efforts only. People may not know that a lot of work has been done previously. 
I'll queue 2,2150M tomorrow. As Bob suspected, 10,313+ will finish soon. I've had little time to devote to the project since the semester started, so things have largely been on autopilot. I'll devote time to bring everything up to date shortly.

[QUOTE=swellman;527464]
<snip> Next up for ECM is 2.1165+ (only remaining GNFS and has 43% t65 so far) and 2,1157+. A full t65 will take us into spring at current rate of progress.[/QUOTE] Let's not forget 2,1144+. 2,2150M can be removed from the queue. Greg says he will start it shortly. 
[QUOTE=R.D. Silverman;527509]Let's not forget 2,1144+.
2,2150M can be removed from the queue. Greg says he will start it shortly.[/QUOTE] Tracking. Request made to yoyo to kill any unstarted work on 2,2150M. 2,1144+ is on my radar. ETA: Yoyo has now killed any future ECM work for 2,2150M. The current WUs can’t be recalled and results will trickle in over the next week or so. 
Hmmm. From trial sieving, 2,2150M will take less than a week to sieve.

2,1084+1 has even more ECM completed than 2,2150M if you are looking for another NFS candidate for factoring.

[QUOTE=frmky;527559]Hmmm. From trial sieving, 2,2150M will take less than a week to sieve.[/QUOTE]
I am very surprised. As a sextic 2,2150M is 1075 bits. 2,1076+ took a month to sieve. As a quartic, it is C259 which I thought would be harder than the sextic. If we can do quartics in this range, 2,2230M becomes of interest. 
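For readers following the notation: the "1075 bits" comes from the Aurifeuillian split of 2^2150+1. A quick sketch verifying the identity (standard for 2^(2m)+1 with m odd); this only shows the size of the M part, not the quartic construction:

```python
# Aurifeuillian split of 2^2150 + 1 with m = 1075 (odd):
#   2^(2m) + 1 = L * M,  L = 2^m - 2^h + 1,  M = 2^m + 2^h + 1,  h = (m+1)/2
# 2,2150M denotes (the primitive part of) the M half below.
m = 1075
h = (m + 1) // 2                      # 538
L = 2**m - 2**h + 1
M = 2**m + 2**h + 1
# (x+1-y)(x+1+y) = (x+1)^2 - y^2; with x = 2^m, y = 2^h the cross terms
# 2^(m+1) cancel against y^2 = 2^(m+1), leaving x^2 + 1:
assert L * M == 2**(2 * m) + 1
print(M.bit_length(), len(str(M)))    # M is just above 2^1075: 1076 bits, 324 digits
```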
[QUOTE=R.D. Silverman;527611]I am very surprised. As a sextic 2,2150M is 1075 bits. 2,1076+ took a month to
sieve. As a quartic, it is C259 which I thought would be harder than the sextic. If we can do quartics in this range, 2,2230M becomes of interest.[/QUOTE] Note also that 2,2210M might be easier as a quartic SNFS than via GNFS..... 
2,1084+1?
Should I enqueue 2,1084+1 for another round of ECM with Yoyo? If Greg is likely to enqueue it within NFS@Home soon (say by end of Oct) then I’m inclined to cease all further ECM work on 2,1084+1.
To my knowledge, 2,1084+1 has undergone almost 48,000 curves @B1=850M, plus other previous efforts by several contributors. 
[QUOTE=swellman;527808]Should I enqueue 2,1084+1 for another round of ECM with Yoyo? If Greg is likely to enqueue it within NFS@Home soon (say by end of Oct) then I’m inclined to cease all further ECM work on 2,1084+1.
To my knowledge, 2,1084+1 has undergone almost 48,000 curves @B1=850M, plus other previous efforts by several contributors.[/QUOTE] We don't know what Greg will queue next. The status page needs an update. I believe that 2,2330M is ready. There are also the easier 2,1144+, 2,1157+, and 2,2158L (but these could use additional ECM). If Greg is going to queue 2,1084+ next, then I would say to remove it from YoYo's queue. 
I'll add 2,1084+ now.

[QUOTE=frmky;527955]I'll add 2,1084+ now.[/QUOTE]
Noted. No more ECM for 2,1084+. Results will trickle in for the last few hundred curves. I’ll enqueue the following in Yoyo@Home:
[CODE]2,1165+
2,1157+
2,1144+
2,2158L[/CODE]
Note 2,1165+ has already completed 50% of a t65. It’s the last GNFS job left in the 1987 list AFAIK. Hoping to plow through all of these in a few months. 
[QUOTE=swellman;527998]
<snip> Note 2,1165+ has already completed 50% t65. It’s the last GNFS job left in the 1987 list AFAIK. [/QUOTE] It is certainly the last one less than C220. Whether it is truly the "last" depends on how high NFS@Home can reach. There are several more less than C225. 
I updated the status page on NFS@Home. I'm happy to change the order in which the numbers are sieved if it's more convenient.
I also ran a quick test sieve for 2,2210M. It looks like a relatively easy SNFS, but I'm going to first start the LA on 2,2150M to make sure it's really as smooth as it appears. 
[QUOTE=frmky;528637]I updated the status page on NFS@Home. I'm happy to change the order in which the numbers are sieved if it's more convenient.
I also ran a quick test sieve for 2,2210M. It looks like a relatively easy SNFS, but I'm going to first start the LA on 2,2150M to make sure it's really as smooth as it appears.[/QUOTE] Ok, though I note four numbers now enqueued in NFS@Home are currently scheduled to be run to t65 by Yoyo.
[CODE]2,1165+
2,1157+
2,1144+
2,2158L[/CODE]
I presume any further ECM of these is counterproductive, yes? 
[QUOTE=swellman;528643]Ok, though I note four numbers now enqueued in NFS@Home are currently scheduled to be run to t65 by Yoyo.
[CODE]2,1165+
2,1157+
2,1144+
2,2158L[/CODE]
I presume any further ECM of these is counterproductive, yes?[/QUOTE] Yes. Note that two have not been queued. I would have thought that 2,2210M would be faster with GNFS.... Note that we can stop the poly selection. 
[QUOTE=R.D. Silverman;528648]Yes. Note that two have not been queued.
I would have thought that 2,2210M would be faster with GNFS.... Note that we can stop the polyselection.[/QUOTE] Greg has queued 2,1165+ by GNFS, but I don't recall seeing any discussion about polynomial selection. Was one selected? Did we send a polynomial for 2,2330M? 
[QUOTE=R.D. Silverman;528762]Greg has queued 2,1165+ by GNFS, but I don't recall seeing any discussion about
polynomial selection. Was one selected? Did we send a polynomial for 2,2330M?[/QUOTE] Not to my knowledge. Doesn’t mean Greg didn’t find his own poly, I suppose. I am also confused about 2,2210M being run as a SNFS job. But that decision is pending LA on 2,2150M to verify smoothness(?) Moving forward with ECM, I am planning to enqueue the following with Yoyo:
[CODE]2,1115+
2,1135+
2,1180+
2,1139+
3,748+[/CODE]
Any comments or objections? The last composite is a GNFS job we can run locally if there’s interest. 
The best 2330M poly I found, in very limited testing:
[code]Y0: 28961469478570719959140906105066840582630
Y1: 92566325806153545443
c0: 9445533148071673379778086273726321999348087566848
c1: 92405597357112380495238265590709071313716
c2: 5399213363634740995545029971716617
c3: 43667955927695773325644219
c4: 150754501738917390
c5: 39639600
skew: 204525474.24619
# size 1.383e-20, alpha -8.073, combined = 1.181e-15 rroots = 5[/code] This was found by Gimarel. I regret that I haven't had time to fully testsieve, and there were two or three polys that are very close in my initial testing (Q=100M, 300M, 500M, 1kq ranges). If someone else wishes to take on the testsieving, I'll be happy to PM them my work to build from. I believe there is only a small chance we find a substantially better poly from testsieving, though 2-4% better is fairly likely. 
[QUOTE=swellman;528787]Not to my knowledge. Doesn’t mean Greg didn’t find his own poly I suppose.
I am also confused about 2,2210M being run as a SNFS job. But that decision is pending LA on 2,2150M to verify smoothness(?) Moving forward with ECM, I am planning to enqueue the following with Yoyo: [CODE] 2,1115+ 2,1135+ 2,1180+ 2,1139+ 3,748+ [/CODE] Any comments or objections? The last composite is a GNFS job we can run locally if there’s interest.[/QUOTE] I too am confused about 2,2210M. However, I believe running it as SNFS is based on Greg's experience with 2,2150M. Sieving it was easy. With respect to YoYo: I thought the objective was to run the remaining base 2 numbers, so I wonder why 3,748+ is included. With respect to the base 2 numbers: queue them in any order that you find convenient. I don't think the order matters. 
[QUOTE=R.D. Silverman;528810]
With respect to YoYo: I thought the objective was to run the remaining base 2 numbers, so I wonder why 3,748+ is included. With respect to the base 2 numbers: Queue them in any order that you find convenient. I don't think the order matters.[/QUOTE] I threw 3,748+ into the mix because it was a Cunningham the group could tackle, as discussed [url=https://www.mersenneforum.org/showthread.php?t=24548]here[/url] and on [url=https://www.mersenneforum.org/showthread.php?t=24211&page=6]this page[/url]. Otherwise there’s very little to do in this subproject but watch Yoyo’s progress. But we can drop it if no one is interested or we want to avoid mission creep. Understood that the order doesn’t really matter so long as a feasible candidate for NFS@Home is produced occasionally from Yoyo’s ECM preprocessing. I can just throw the remaining 46 composites in Yoyo’s queue, 12,000 curves at a time, and tweak it every year or so. Deep or wide? 
[QUOTE=swellman;528824]I threw 3,748 into the mix because it was a Cunningham the group could tackle as discussed [url= https://www.mersenneforum.org/showthread.php?t=24548]here[/url] and [url=https://www.mersenneforum.org/showthread.php?t=24211&page=6]this page[/url]. Otherwise there’s very little to do in this subproject but watch Yoyo’s progress. But we can drop it if no one is interested or we want to avoid mission creep.
Understood that the order doesn’t really matter so long as a feasible candidate for NFS@Home is produced occasionally from Yoyo’s ECM preprocessing. I can just throw the remaining 46 composites in Yoyo’s queue, 12,000 curves at a time and tweak it every year or so. Deep or wide?[/QUOTE] I do see 3,748+ as mission creep. I recommend breadth first. 
My interest remains in 3,748+. Please continue with ECM on it; I think your reasons for making that an exception to the mission are valid.

[QUOTE=swellman;528824]
<snip> Understood that the order doesn’t really matter so long as a feasible candidate for NFS@Home is produced occasionally from Yoyo’s ECM preprocessing. [/QUOTE] Why? There is no shortage of candidates for NFS@Home. 
[QUOTE=VBCurtis;528835]My interest remains in 3,748+. Please continue with ECM on it; I think your reasons for making that an exception to the mission are valid.[/QUOTE]
I disagree. It is still mission creep regardless of how you spin it. 
Notice he said "we can drop it if no one is interested or we want to avoid mission creep."
So, you state that you think it's mission creep. I, independently, state that I am interested; I also use the word "exception" to indicate that I'm not arguing that this is not mission creep, rather that I don't think that should keep us from progressing on this candidate. You repeat that it is mission creep. Thanks? My comments addressed both halves of his "or". I'm not trying to disagree with you, as I don't care what label you give your mission. Sean also didn't claim it wasn't creep. He asked whether that mattered, or should stop us from ECM. 
No worries. I have reached out to Ryan, and he has agreed to ECM 3,748+ to t65.
I’ll place the next four base 2 numbers in Yoyo’s CN queue tonight. 
[QUOTE=swellman;528926]No worries  I have reached out to Ryan and he has agreed to ECM 3,748+ to t65.
I’ll place the next four base 2 numbers in Yoyo’s CN queue tonight.[/QUOTE] Greg is now sieving 2,1157+. Yet according to [url]https://escatter11.fullerton.edu/nfs/numbers.html[/url] 2,1165+ is next after 2,1084+. Doing numbers out of order does not matter at all. However, I mention it here because it raises the possibility that Greg skipped 2,1165+ because he does not have a polynomial for it. Is this the case? Does the forum need to give him one? 
Status Update
According to my notes, there are 68 Cunningham numbers in the current effort.
- 46 are queued with Yoyo@Home to receive 12,000 curves @B1=850M. Averaging 11 days to complete ECM of each number, suggesting wave 1 of 6 should finish in early 2021.
- 1 is currently in poly search (2,1165+)
- 9 are awaiting sieving in the NFS@Home queue
- 3 are sieved but awaiting postprocessing
- 8 have been factored
- 1 is stuck between the various phases of factoring (2,2398M) due to its high NFS difficulty. 
[QUOTE=swellman;534029]According to my notes, there are 68 Cunningham numbers in the current effort.
[/QUOTE] If you mean the base 2 effort, my count differs from yours. Perhaps we should reconcile this count? I count 60 unfinished numbers from the 1987 hardcover edition of the Cunningham book.

Eight of them have been sieved and are waiting for or running LA: 2,2102L, 2,2098L, 2,1063+, 2,1072+, 2,1076+, 2,2126M, 2,1084+, 2,1157+. One of them is sieving: 2,1144+. Several are queued to start sieving (2,1165+, 2,2158L, 2,2330M, [2,2210M is q'd but not listed on the web page]); 2,1165+ is running polysearch. This leaves 47 waiting to be done. Several more are "within reach" (IMO) but not currently queued: 2,2162L?, 2,2162M?, 2,2230M?. This would leave 44 left if they get done.

The following are queued to run 72K curves with ECM; some of this work has been done:
2,1115+ 2,1135+ 2,1180+ 2,1139+ 2,2162M 2,2162L 2_2174L 2_2174M 2_1091+1 2_1097+1 2_2194M 2_2194L 2_2206L 2_2206M 2_1109+1 2_2222L 2_1108+1 2_2222M 2_2230M 2_2246M 2_2246L 2_1124+1 2_1123+1 2_1129+1 2_2266L 2_1136+1 2_2278M 2_2278L 2_1147+1 2_1151+1 2_2306L 2_2302L 2_1153+1 2_2318M 2_1159+1 2_1163+1 2_1168+1 2_2342M 2_2350M 2_2354M 2_2354L 2_2378L 2_2374L 2_1187+1 2_2390M 2_2390L

Have I missed anything?

[QUOTE]46 are queued with Yoyo@Home to receive 12,000 curves @B1=850M. Averaging 11 days to complete ECM of each number, suggesting wave 1 of 6 should finish in early 2021. 1 is currently in poly search (2,1165+). 9 are awaiting sieving in the NFS@Home queue.[/QUOTE] ?? Here we disagree. Greg's web page shows 3 queued (plus 2,2210M, which is not shown). What are the others?

[QUOTE]3 are sieved but awaiting postprocessing[/QUOTE] We differ. See above.

[QUOTE]1 is stuck between the various phases of factoring (2,2398M) due to its high NFS difficulty.[/QUOTE] I think this is out of reach for NFS@Home. 
[QUOTE=R.D. Silverman;534056]If you mean the base 2 effort my count differs from yours. Perhaps we should
reconcile this count? I count 60 unfinished numbers from the 1987 hardcover edition of the Cunningham book. [/QUOTE] Agreed. I had 68 original composites with 8 factored, leaving 60 remaining numbers yet to be factored. So we match there. And we both have 46 numbers in Yoyo’s queue. Your list appears to match mine. The difficulty of 2,2398M (SNFS 361) is beyond NFS, and it has been ECM’d to t65 AFAIK. Ryan agreed to look at it last April but his progress is unconfirmed. So we seem to be in agreement here as well. The remaining 13 are all works in progress. Suggest we wait for Greg to update the 16e/f queue display; it is wildly out of date. Your listed status for many of these is more accurate than mine, but there is still uncertainty with a few of them. 
[QUOTE=swellman;534073]
<snip> The difficulty of 2,2398M (SNFS 361) is beyond NFS, and it has been ECM’d to t65 AFAIK. [/QUOTE] I wrote that it is beyond the capability of NFS@Home. It is clearly not past the capability of NFS in general; i.e., Kleinjung et al. finished all 2^n-1 for n < 1200. I don't fully agree with your assessment of Greg's queue. There is only one number queued but not listed. It does fail to show that two numbers were finished. 
What is the status of 2,2158L and 2,1144+? Sieving or awaiting LA?
Presumably 2,2330M is currently sieving. Just seeking confirmation. TYIA. 
[QUOTE=swellman;537729]What is the status of 2,2158L and 2,1144+? Sieving or awaiting LA?
Presumably 2,2330M is currently sieving. Just seeking confirmation. TYIA.[/QUOTE] 2,2158L is sieving. 2,1144+ is waiting for LA. I expect 2,2158L to take ~3 more weeks. The forum provided a polynomial for 2,1165+. I'm not sure what Greg will do next. You can always check current efforts at: [url]https://escatter11.fullerton.edu/nfs/top_hosts.php[/url] 
Right now my queue has WUs for 2,2158L from 1513M to 1778M, but there are still more than 1.13M WUs available on the server. Remember each WU sieves a 2k range. I'm not sure how far we are going to sieve 2,2158L, but there might already be another number queued.

That’s great news. Presumably once 2,2158L finishes sieving, 2,2330M and 2,1165+ will (eventually) follow.
Looking ahead, should 16e need to be fed, there’s the quartic 2,2210M (SNFS 266) which was ECM’d to a full t65. Note 2,2398M is fully ECM’d, awaiting action. With a SNFS difficulty of 361, it might be waiting for some time. There remain 46 numbers in Yoyo’s queue, with 10 currently being worked/completed and 36 awaiting ECM. I estimate all 46 should be finished with the first round of ECM t65 ~March 2021. Five more rounds of ECM will take until ~Dec 2027 if nothing changes. 
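The schedule quoted above is straightforward arithmetic; a sketch, assuming one 12,000-curve batch per number at roughly 11 days each, run strictly one after another:

```python
# Back-of-envelope Yoyo ECM schedule for the remaining base-2 numbers.
numbers = 46
days_per_batch = 11          # observed average per 12,000-curve batch
round_days = numbers * days_per_batch
rounds = 6                   # ~6 x 12K curves roughly approximates a t65 each
print(round_days)            # 506 days per pass (~March 2021 starting late 2019)
print(round(rounds * round_days / 365.25, 1))  # ~8.3 years for all six rounds
```

Roughly 8.3 years from late 2019 lands near the ~Dec 2027 estimate above; any parallelism at Yoyo, or numbers dropping out via found factors, shortens it.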
[QUOTE=swellman;537741]That’s great news. Presumably once 2,2158L finishes sieving, 2,2330M and 2,1165+ will (eventually) follow.
Looking ahead, should 16e need to be fed, there’s the quartic 2,2210M (SNFS 266) which was ECM’d to a full t65. Note 2,2398M is fully ECM’d, awaiting action. With a SNFS difficulty of 361, it might be waiting for some time. There remain 46 numbers in Yoyo’s queue, with 10 currently being worked/completed and 36 awaiting ECM. I estimate all 46 should be finished with the first round of ECM t65 ~March 2021. Five more rounds of ECM will take until ~Dec 2027 if nothing changes.[/QUOTE] The ECM effort takes 3 days to run through 90% of a trial, then another 8 days to finish the remaining 10%. Perhaps this might get fixed, eventually. 
[QUOTE=swellman;537741]That’s great news. Presumably once 2,2158L finishes sieving, 2,2330M and 2,1165+ will (eventually) follow.
Looking ahead, should 16e need to be fed, there’s the quartic 2,2210M (SNFS 266) which was ECM’d to a full t65. [/QUOTE] Greg said that he would do 2,2210M, but he did not place it on the status page. There is also a slightly bigger quartic, 2,2230M, if Greg chooses to do it. And 2,2162L,M are slightly smaller than the largest number done so far. After that, they are probably beyond NFS@Home. I doubt that Greg wants to spend 2-3+ months sieving a number. To finish the rest, a factory approach would be best, but there would be 44 of them.... Probably too large an effort for anyone at the current time. I am hoping that the ECM effort will pick off 3 to 4 of them. 
Greg added more 2,1072+ wus.

Just a few to bring the matrix down a bit. And I'm queueing all of 2,2210M and 2,2230M before 2,1165+.

[QUOTE=frmky;537894]Just a few to bring the matrix down a bit. And I'm queueing all of 2,2210M and 2,2230M before 2,1165+.[/QUOTE]
Great. In terms of postprocessing backlog, how are we? 
As behind as the status page suggests. In the weeds. :max:
This is why I've been prioritizing the base-2 numbers. But 2,1165+ will give us some time to catch up a bit. And I don't think anyone is in a hurry to know these factors. They'll get done eventually. If anyone wants to solve a 70M+ matrix, send them my way! :smile: 
I'll do any matrix around 60M for your queue; if you stumble into one in the low 60s, give me a holla.

A New Target (easy!)
[QUOTE=VBCurtis;537964]I'll do any matrix around 60M for your queue; if you stumble into one in the low 60s, give me a holla.[/QUOTE]
Here is a new GNFS target for everyone: a C202. 6523 10,337- c268 906533749251005245151122204670351312590267105760052002862150546121.c202 
[QUOTE=R.D. Silverman;538091]Here is a new GNFS target for everyone: A C202
6523 10,337- c268 906533749251005245151122204670351312590267105760052002862150546121.c202[/QUOTE] Noted. We can add it to the list. For reference, the decimal form of this C202 is [CODE] 2076486865904164187880498803002833020624706055858258295123907760787910463183237701437319913688727165276132151609318284002818920807675158414601157967453931895433506042829474274993772412901816590191592923 [/CODE] The record e-score poly (deg 5) for a C202 was 3.665e-15. 
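The reported split can be sanity-checked in a few lines: if the P66 and the C202 were transcribed correctly, both divide 10^337 - 1, so 10^337 ≡ 1 modulo each. A sketch (this checks divisibility only; it does not prove the P66 prime):

```python
# Sanity checks on the reported data for 10,337- (the repunit (10^337-1)/9).
p66 = 906533749251005245151122204670351312590267105760052002862150546121
c202 = 2076486865904164187880498803002833020624706055858258295123907760787910463183237701437319913688727165276132151609318284002818920807675158414601157967453931895433506042829474274993772412901816590191592923

assert len(str(p66)) == 66 and len(str(c202)) == 202
assert pow(10, 337, p66) == 1       # p66 divides 10^337 - 1
assert pow(10, 337, c202) == 1      # so does the C202 cofactor
assert len(str(p66 * c202)) == 268  # together they should form the C268 listed
print("checks pass")
```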
Did it have a t60 worth of t65-sized curves? I mean, is any more ECM necessary?
Sean and I can poly select this within a couple weeks. We could imitate the 2,1165+ sieve approach, using CADO for Q under, say, 100M and the 15e queue for 100M up. Or just a spring team CADO-sieve with A=30 (equivalent to I=15.5), which would need about 5GB RAM per process. 
[QUOTE=VBCurtis;538118]Did it have a t60 worth of t65-sized curves? I mean, is any more ECM necessary?
[/QUOTE] The P66 factor was found with ECM by Sam. Between his work, Bruce Dodson's work, my work, plus the work of others [extent unknown], it has had more than sufficient ECM. The total extent is unknown: too many different participants, each with an unknown amount of work. I do believe that Bruce did a t65 by himself. It was among the first 5 holes when he did his work. 
[QUOTE=VBCurtis;538118]Did it have a t60 worth of t65-sized curves? I mean, is any more ECM necessary?
Sean and I can poly select this within a couple weeks. We could imitate the 2,1165+ sieve approach, using CADO for Q under, say, 100M and the 15e queue for 100M up. Or just a spring team CADO-sieve with A=30 (equivalent to I=15.5), which would need about 5GB RAM per process.[/QUOTE] I like the idea of a team sieve for low Q combined with a 15e queue for higher Q. But is this C202 GNFS too difficult for 15e? It seems to be stretching the bounds a bit. But 16e is fully tasked for the foreseeable future, so I would be happy to help in a team sieve if 15e proves “too suboptimal”. 
[QUOTE=swellman;538127]I like the idea of a team sieve for low Q combined with a 15e queue for higher Q. But is this C202 GNFS too difficult for 15e? It seems to be stretching the bounds a bit. But 16e is fully tasked for the foreseeable future, so I would be happy to help in a team sieve if 15e proves “too suboptimal”.[/QUOTE]
I understand that the relative efficiencies of ggnfs sievers vs CADO sievers are quite different, but recall that we sieved half the C207 job with I=15. I don't think it's too bad an idea to use a large siever area on small Q with CADO, while doing higher Q with ggnfs/15e. We'd use I=15 for the higher ranges on CADO anyway, and on higher Q yield is more similar between the software packages than it is at low Q. So, A=30 on CADO combined with 15e on NFS@Home should nicely utilize both large-memory Linux resources and lower-memory mass contributions. I think I'd pick 33/34LP if it were a pure CADO job, so going down half a large-prime to be compatible with the 15e queue is no big deal. Something like Q=5-150M on CADO and 150-600M on 15e ought to do the trick. 
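For concreteness, the CADO side of such a hybrid could be steered by a params fragment along these lines. Every value below is an illustrative guess keyed to the numbers discussed here (A=30, 33-bit large primes, Q=5-150M), not a tested job file; parameter names follow CADO-NFS params-file conventions.

```
# Hypothetical CADO-NFS params fragment (all values are guesses, untested)
tasks.lpb0 = 33            # 33-bit large primes, compatible with the 15e queue
tasks.lpb1 = 33
tasks.sieve.mfb0 = 66      # two large primes per side
tasks.sieve.mfb1 = 66
tasks.A = 30               # sieve area ~ I=15.5; roughly 5 GB per process
tasks.qmin = 5000000       # CADO takes Q = 5M-150M; 15e takes 150M-600M
tasks.sieve.qrange = 10000
```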
[QUOTE=VBCurtis;538130]I don't think it's too bad an idea to use a large siever area on small Q with CADO, while doing higher Q with ggnfs/15e.[/QUOTE]
I recall Bob saying something to the effect that sieving a larger area at smaller q is theoretically optimal. 
On 2,1165+ I had wonderful feedback from teams. They advise setting up a new app, with details on memory requirements on the project preference page, and increasing the reward. I believe this is feasible; maybe also change or add more intermediate badge levels. Right now individuals cannot reach the highest badge level.

[QUOTE=axn;538131]I recall Bob saying something to the effect that sieving a larger area at smaller q is theoretically optimal.[/QUOTE]
The following is a theorem. The total number of lattice points that are sieved is minimized when the sieve area for each q is proportional to the yield for that q. The constant of proportionality falls out of the analysis as an eigenvalue in the calculus of variations problem. Its value depends on the total number of relations needed. Since smaller q have higher yields, this means that the sieve area for small q should be larger.

One would think that smaller q would have smaller yield, but the following happens: there is a "seesaw" effect that takes place between the two norms that need to be smooth. As one makes one norm smaller (let's say the rational one), the other norm gets bigger [and vice versa]. The effect is nonlinear; the rising norm increases faster than the decreasing norm decreases. To see this, look at what happens for a fixed factorization when one changes the algebraic degree. Also, look at what happens as q changes size. For example, we need (rational norm)/q to be smooth as q changes. As q gets bigger this gets smaller. But the [i]algebraic[/i] norm [i]increases[/i] as ~q^(d/2), where d is the degree, when we use q on the rational side. 
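The q^(d/2) scaling described above can be illustrated with a toy model. Everything below drops constants and is only meant to show the direction of each trend (rational cofactor shrinking, algebraic norm growing), not realistic norm sizes:

```python
# Toy "seesaw" model: special-q on the rational side, degree-d SNFS poly.
# With a, b ~ sqrt(q * A) for sieve area A, the rational norm ~ m * sqrt(q*A)
# (so the cofactor after dividing out q shrinks as q grows), while the
# algebraic norm ~ max(|a|,|b|)^d ~ (q*A)^(d/2) grows as ~q^(d/2).

def norm_bits(log2_m: float, d: int, log2_area: float, log2_q: float):
    rational_cofactor = log2_m + 0.5 * (log2_q + log2_area) - log2_q
    algebraic_norm = (d / 2.0) * (log2_q + log2_area)
    return rational_cofactor, algebraic_norm

log2_m = 1075 / 6                      # e.g. a sextic for a 1075-bit number
sizes = [norm_bits(log2_m, 6, 30.0, q) for q in (25.0, 28.0, 31.0)]
for rat, alg in sizes:
    print(round(rat), round(alg))      # rational falls, algebraic rises with q
```

The asymmetry is visible even in this crude model: each extra bit of q buys back only half a bit on the rational side but costs d/2 bits on the algebraic side.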
New GNFS repunit 10,337- C202 target
[QUOTE=R.D. Silverman;538091]Here is a new GNFS target for everyone: A C202
6523 10,337- c268 906533749251005245151122204670351312590267105760052002862150546121.c202[/QUOTE] Just to note, the recent GNFS repunit targets ~C200 were factored by Kurt Beschorner's team ([url]http://www.kurtbeschorner.de/[/url]) with polys selected with input from Mersenne Forum participants. It would be very nice for all of us to include Kurt in this discussion. 
Max
I'm not sure what you mean. It's an open forum, after all! 
Repunit number
There is a special effort on repunit numbers (10^n - 1).
Kurt's team (Bo Chen, Wenjie Fang, Alfred Eichhorn, Danilo Nitsche, Kurt Beschorner, et al.) aims at factoring these numbers using SNFS and GNFS; as of last year we had polished off most numbers with SNFS difficulty less than 300 and GNFS less than 200. Now we are factoring SNFS less than 310 and GNFS less than 210. 10,337- (GNFS 202) is within our reach; I have suggested that the team factor this number after the relation collection for 10^459 - 1, an SNFS 306 number. In principle, we could factor this GNFS 202 within 12 months. If Kurt and the other members accept my suggestion, we will send Sam an email to reserve this number. Yousuke Koide concentrates on 10^n+1, where n is between 400 and 800, using ECM and NFS. It would be better if duplicate effort could be avoided. If you still want to factor this number, we will not select it as our next target. I found that 3,748+ (C204) is also less than 210; perhaps you could factor that number, as Kurt and I have no interest in it. Best regards, Bo Chen 
We can get the new C202 done by summer, and I'd prefer to not see it wait 12 months for a factorization.
If the C202 goes well, we are likely to try 3,748+ the same way: with a combination of CADO work from forumites and NFS@Home. We lack candidates of general interest between 200 and 205, so I would like to do the C202 to test this CADO-ggnfs hybrid rather than the C204 first. 
2,2206M Factored by Yoyo
Yoyo found a p64. Nice work!

[QUOTE=swellman;541171]Yoyo found a p64. Nice work![/QUOTE]
May I suggest that 2,2230M be deleted from the YoYo queue? It is already sieving. 
[QUOTE=R.D. Silverman;541841]May I suggest that 2,2230M be deleted from the YoYo queue? It is already sieving.[/QUOTE]
That’s unfortunate; I had private discussions with Greg about this issue and I thought 2,2230M would not start sieving until after Yoyo had run a round of ECM (likely by June). If WUs for 2,2230M are now being distributed, then I’ll ask Yoyo to remove the job from his queue. 
[QUOTE=swellman;541850]That’s unfortunate; I had private discussions with Greg about this issue and I thought 2,2230M would not start sieving until after Yoyo had run a round of ECM (likely by June).
If WUs for 2,2230M are now being distributed, then I’ll ask Yoyo to remove the job from his queue.[/QUOTE] WUs are being distributed. Based on how long [elapsed time] it took for 2,2210M, 2,2230M will take about the same time to sieve as YoYo takes to do 12K curves. 
[QUOTE=R.D. Silverman;541907]WU's are being distributed. Based on how long [elapsed time] it took for 2,2210M,
2,2230M will take about the same time to sieve as YoYo takes to do 12K curves.[/QUOTE] BTW, 2,1115+ is the same difficulty as 2,2230M. It should be added to Greg's queue. 
I think there are fewer clients connected to NFS@Home now, so 2,2230M will take longer.

And 2,2330M is underway at NFS@Home.

[QUOTE=R.D. Silverman;541909]BTW, 2,1115+ is the same difficulty as 2,2230M. It should be added to Greg's queue.[/QUOTE]
Will 2,1115+ be added to the queue? 2,1165+ finished sieving and 3,667- has started. It was next in line in terms of SNFS difficulty. (Greg has skipped 12,319 [quintic] for the time being.) The status page has not been updated recently. All of the numbers on it have been sieved, so we do not know what is planned. 
I've been very busy teaching a 5-week summer course from home, so I just threw 3,667- in to keep the clients busy. I'll update everything in a couple of weeks or so.

Just for the convenience of new visitors, here is the S.S.W. wanted page with done numbers removed (I will periodically update)
[CODE]Here are the wanted lists issued with Page 136, with what is done removed.

Ten Most Wanted:
1. 2,1207- C337
4. 10,323- C271

Twenty-Four More Wanted:
2,1213- C297
2,1091+ C307
2,1097+ C288
2,2126M C219 - res'd
2,1076+ C238 - res'd
3,667- C275 <<< in progress
3,668+ C277
5,452+ C246 - res'd
5,454+ C285 - res'd
6,409+ C311
7,379- C320
7,376+ C311 - res'd
7,379+ C320
10,323+ C242
10,332+ C295
11,307- C289
11,307+ C276
12,293- C238 - res'd

Smaller-but-Needed:
3,748+ C204[/CODE] 
[QUOTE=Batalov;548475]Just for the convenience of new visitors, here is the S.S.W. wanted page with done numbers removed (I will periodically update)
[CODE]Here are the wanted lists issued with Page 136, with what is done removed.

Ten Most Wanted:
1. 2,1207- C337
4. 10,323- C271

Twenty-Four More Wanted:
2,1213- C297
2,1091+ C307
2,1097+ C288
2,2126M C219 - res'd
2,1076+ C238 - res'd
3,667- C275 <<< in progress
3,668+ C277
5,452+ C246 - res'd
5,454+ C285 - res'd
6,409+ C311
7,379- C320
7,376+ C311 - res'd
7,379+ C320
10,323+ C242
10,332+ C295
11,307- C289
11,307+ C276
12,293- C238 - res'd
12,293+ C303 - res'd

Smaller-but-Needed:
3,748+ C204[/CODE][/QUOTE] Also, here is a list of 15 potential candidates in order of increasing SNFS size. I ignore quartics. [2,1115+ is a C269 quartic.]

12,319 C313 (quintic)
3,668+ C319
6,409+ C319
5,457+ C320
11,307+ C320
11,307- C320
5,458+ C321
7,379+ C321
7,379- C321
3,784+ C321
12,298+ C322
3,674+ C322
10,323+ C323
10,323- C323
5,461+ C323

Let me know if I missed anything. Based upon how long 2,1084+ took to sieve (6 weeks), I guess that the NFS@Home limit is about 330 digits. 
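The SNFS sizes in such lists can be reproduced with one line of arithmetic: the difficulty is just the decimal length of b^n ± 1. A sketch, using floor(n·log10(b)) + 1 (exact when b is not a power of 10):

```python
import math

def snfs_digits(b: int, n: int) -> int:
    """Decimal digits of b^n (+/- 1), i.e. the SNFS difficulty quoted above."""
    return math.floor(n * math.log10(b)) + 1

# A few entries from the candidate list check out:
for b, n in [(3, 668), (6, 409), (7, 379), (5, 457)]:
    print(f"{b},{n}: C{snfs_digits(b, n)}")   # C319, C319, C321, C320
```

On this scale 2,1084+ comes out at C327, consistent with the ~330-digit ceiling guessed above.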
[QUOTE=frmky;548472]I've been very busy teaching a 5-week summer course from home, so I just threw 3,667- in to keep the clients busy. I'll update everything in a couple of weeks or so.[/QUOTE]
It is nice to hear that Universities are still going! Greg, do you know of anyone offering a first course (online) in string theory? I'd love to take one. Or a course in quantum field theory? I did get to take a course in relativistic quantum mech from Ed Purcell when I was an undergrad. [45 years ago, however!] It was a small fun class; he was a superb teacher. I'm not sure if I have the needed prereqs. My differential geometry background is minimal, so I may need to take that first. Basic tensor analysis is not a problem. 
[QUOTE=R.D. Silverman;548036](Greg has skipped 12,319- [quintic] for the time being)
[/QUOTE] He also skipped 10,323-. 
[QUOTE=sweety439;548502]He also skipped 10,323-.[/QUOTE]
If you would bother to read, or to engage some brain cells, you would see that 10,323- is about 5 digits bigger than the smallest current SNFS candidates. Since Greg generally does them in order of size [look at the status page], you should realize that 10,323- [b]HAS NOT BEEN SKIPPED[/b]. Idiot. 12,319- [b]WAS SKIPPED[/b] because quintics yield larger matrices, making this number "LA constrained". BTW, what is your obsession with this particular number? Is it OCD?? 
[QUOTE=R.D. Silverman;548496]It is nice to hear that Universities are still going!
Greg, do you know of anyone offering a first course (online) in string theory? I'd love to take one. Or a course in quantum field theory? I did get to take a course in relativistic quantum mech from Ed Purcell when I was an undergrad. [45 years ago, however!] It was a small fun class; he was a superb teacher. I'm not sure if I have the needed prereqs. My differential geometry background is minimal, so I may need to take that first. Basic tensor analysis is not a problem.[/QUOTE]I would like to learn more about string theory. An excellent introduction to QFT can be found in [URL="https://www.abebooks.com/9780071543828/QuantumFieldTheoryDemystifiedMcMahon0071543821/plp"]quantum field theory DeMYSTiFieD[/URL] (the StuDlY CapS is how it appears on the front cover) by David McMahon. Not knowing your present level I couldn't say whether you would find it useful or too basic. 
Record
[QUOTE=xilman;548594]
<snip>[/QUOTE] 12,293+ just set a new record for the largest penultimate factor. 
Very nice! Persistence pays off!

[QUOTE=R.D. Silverman;548527]If you would bother to read or if you would bother to engage some brain cells
you would see that 10,323- is about 5 digits bigger than the smallest current SNFS candidates. <snip>[/QUOTE] But 12^319 > 10^323, so 10^323-1 should be factored first. Also Phi_319(12) > Phi_323(10). 
[QUOTE=sweety439;549081]but 12^319 > 10^323, so 10^323-1 should be factored first
Also Phi_319(12) > Phi_323(10)[/QUOTE] 12^319-1 has an algebraic factor. We are not factoring 12^319-1; we are factoring (12^319-1)/(12^29-1) ~ 12^290 ~ C313. The resulting polynomial is reciprocal, so we can do this number with a quintic. However, quintic polynomials for numbers this size result in matrices that are significantly larger than those for numbers of similar size done with sextics. Greg is LA constrained right now, so he [b]skipped[/b] 12^319-1 for the time being. He did C314, C315, C316, C317 and is now working on C318's via 3^667-1 etc. Greg may indeed do R323 before he does 12^319-1. I think he will.

R323 might well be done by a reciprocal octic to take advantage of the algebraic factor 10^19-1. Whether the octic would be easier than the obvious sextic might be an interesting experiment. It might also be interesting to see if a septic would be any better. I think a septic will be slightly better in general for numbers of this size. Let's do a "back of the envelope" look at the norms. Take (10^6, 10^6) == (a,b) as a 'typical lattice point'. For a sextic, an algebraic norm is ~ a^6 ~ 10^36 and a linear norm is ~ b * 10^(324/6) ~ 10^60. For a septic, an anorm is ~ a^7 ~ 10^42 and a linear norm is ~ b * 10^(322/7) ~ 10^52. The norms are closer for the septic and their product is slightly smaller. A septic seems slightly superior. For the reciprocal octic, an anorm is ~ a^8 ~ 10^48 and a linear norm is ~ b * 10^38 ~ 10^44, which seems better still.

Note that one also needs to adjust these estimates by the special-q. The estimates also ignore the effect of variance on the norms: since we want smooth numbers, we are more concerned with the tails of the distributions of the norms than with the means. However, it does give a quick comparison. NFS works best when the norms are as nearly equal as possible, other things being equal. This [b]very rough[/b] estimate is based on the assumption that (10^6, 10^6) is a typical lattice point. 
Adjust the analysis if this assumption is not a good enough estimate. I do not know what sieve areas the lasievef siever uses. No one has been calling for him to do 12^319-1. It is possible that Greg missed the reciprocal octic for R323. He will get to it. Doing R323 seems to be a compulsion with you. 
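The back-of-the-envelope comparison above is easy to reproduce in a few lines, under the same assumptions (typical lattice point (a,b) = (10^6, 10^6); special-q and variance ignored; log_norms is my own name):

```python
# Reproduces the rough norm estimates from the post above, working in
# log10 throughout. log10_m is log10 of the root m of the SNFS
# polynomial pair; the values per degree follow the post's choices.
import math

A = B = 10**6  # assumed 'typical' lattice point (a, b), per the post

def log_norms(degree: float, log10_m: float) -> tuple:
    """(log10 of algebraic norm ~ a^d, log10 of linear norm ~ b*m)."""
    return degree * math.log10(A), math.log10(B) + log10_m

print("sextic :", log_norms(6, 324 / 6))  # ~ (36, 60)
print("septic :", log_norms(7, 322 / 7))  # ~ (42, 52)
print("octic  :", log_norms(8, 38))       # ~ (48, 44)
```

The septic's norms are closer together than the sextic's, and the reciprocal octic's closer still, which is the whole argument: NFS prefers the two norms as nearly equal as possible.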
[QUOTE=R.D. Silverman;549036]12,293+ just set a new record for largest penultimate factor.[/QUOTE]
Wow. I've been so busy I didn't even notice. Yay! 
[QUOTE=R.D. Silverman;548496]It is nice to hear that Universities are still going!
Greg, do you know of anyone offering a first course (online) in string theory? I'd love to take one. Or a course in quantum field theory? I did get to take a course in relativistic quantum mech from Ed Purcell when I was an undergrad. [45 years ago, however!] It was a small fun class; he was a superb teacher. I'm not sure if I have the needed prereqs. My differential geometry background is minimal, so I may need to take that first. Basic tensor analysis is not a problem.[/QUOTE] Sorry, but I don't. I took a QFT course 20 years ago from Alfred Shapere at U. Kentucky, who was a student of Frank Wilczek. Haven't looked at it since I'm afraid. The closest I've come to string theory is attending a couple of talks by Ed Witten, of which I understood very little. 
[QUOTE=R.D. Silverman;549088]
<snip> Greg may indeed do R323 before he does 12^319-1. I think he will. R323 might well be done by a reciprocal octic to take advantage of the algebraic factor 10^19-1. Whether the octic would be easier than the obvious sextic might be an interesting experiment. It might also be interesting to see if a septic would be any better. <snip>[/QUOTE] I'd like to hear ideas from others about what I wrote just above. It seems that a degree 7 polynomial would be better (than degree 6) for Greg to use moving forward for the numbers that NFS@Home is about to undertake. 
[QUOTE=R.D. Silverman;549313]I'd like to hear ideas from others about what I wrote just above. It seems that
a degree 7 polynomial would be better (than degree 6) for Greg to use moving forward for numbers that NFS@Home is about to undertake.[/QUOTE] I tested the speed 4 years ago; the results show deg 7 is much slower than deg 6. Deg 6 needed 102 CPU-years to collect 1200M raw relations, while deg 7 needed 182 CPU-years, on an i3 CPU. I attach the poly and test files. 
[QUOTE=wreck;549340]I test the speed 4 years ago, the result shows deg 7 is much slower than deg 6.
deg 6 need 102 CPU years to collect 1200M raw relations, while deg 7 need 182 CPU years on an i3 CPU. I attach the poly and test files.[/QUOTE] One needs to change the factor base sizes moving from d = 6 to 7. Increase the algebraic and decrease the linear. 
[QUOTE=R.D. Silverman;549352]One needs to change the factor base sizes moving from d = 6 to 7. Increase the
algebraic and decrease the linear.[/QUOTE] Note also that one should (likely) apply the special-q on the algebraic side instead of the rational side. 
I guess you mean increase alim and decrease rlim, and use option -a.
But I tested again with these changes, and the situation is the same. When I use alim=800M, rlim=200M, and -a (algebraic-side special-q) with the compiled lasieve5_f binary, it needs 100 CPU-years to collect 1200M raw relations on an i7 CPU; while with alim=rlim=400M and -r, the same binary and processor need 40 CPU-years to collect 1200M raw relations. Though I don't know why, it is a little strange. 
Repunit cofactor R337
[QUOTE=wreck;538230]There is a special effort on repunit numbers (10^n - 1).
Kurt's team (Bo Chen, Wenjie Fang, Alfred Eichhorn, Danilo Nitsche, Kurt Beschorner, et al.) aims at factoring these numbers using SNFS and GNFS; as of last year we had polished off most numbers with SNFS difficulty less than 300 and GNFS less than 200. Now we are factoring SNFS less than 310 and GNFS less than 210. 10,337- (GNFS 202) is within our reach; I have suggested the team factor this number after the relation collection for 10^459 - 1, an SNFS 306 number. In principle, we could factor this GNFS 202 within 12 months. If Kurt and the other members accept my suggestion, we will send Sam an email to reserve this number. Yousuke Koide concentrates on 10^n+1, where n is between 400 and 800, using ECM and NFS. It would be better if duplicate effort could be avoided. If you still want to factor this number, we will not select it as our next target. I found that 3,748+ C204 is also less than 210; perhaps you could factor this number, as Kurt and I have no interest in it. Best regards, Bo Chen[/QUOTE] Dear forumites, Kurt Beschorner's team ([url]http://kurtbeschorner.de/[/url]) is days away from cracking R459/C221 by SNFS and is 40% through the GNFS sieving for R1740M/C204. Slowly but surely we are selecting our next repunit cofactor to work on. One of the candidates is R337/C202. If nobody is actively working on this number, could we please reserve it for Kurt's team?

Since the discussion in late February 2020 (see wreck's message above), did anybody try to polyselect for this C202? I ran CADO with standard parameters for a day and found a mediocre baseline poly (2.39e-15); the 2018 record belongs to fivemack (3.665e-15). If nobody minds our reservation, we would really appreciate your help in polyselecting, especially on the msieve side. I will run CADO with improved parameters and spin up all good candidates as always. Please let us know, here or via PM. Stay safe, Max 
Fine with me; we're not in a big hurry to grab a 201-202 digit candidate for a 15e/home-CADO hybrid.
I wager there's less than a 10% chance anymore of msieve finding a winning poly for a composite at 200+ digits. I can do a little CADO poly select, but not a ton. If Kurt's group wants firepower, please reserve a range of c5 values (admin/admax) for them, and some of us will add our efforts in non-overlapping ranges. Might not be worth their time to coordinate, though. 
[QUOTE=VBCurtis;551306]Fine with me we're not in a big hurry to grab a 201202 for a 15e/homeCADO hybrid.
I wager there's less than 10% chance anymore of msieve finding a winning poly for a composite at 200+ digits. <snip>[/QUOTE] Thank you, VBCurtis! Kurt cracked R459 this morning: [url]http://kurtbeschorner.de/[/url] Gimarel found an exceptional poly for R337/C202: [url]https://mersenneforum.org/showpost.php?p=551414&postcount=1930[/url] And I started the spin-up process: [url]https://mersenneforum.org/showpost.php?p=551432&postcount=1931[/url] 
Why was 10^371+1 factored earlier than 10^365+1? eulerphi(371) = 312 > 288 = eulerphi(365)

[QUOTE=sweety439;560656]Why was 10^371+1 factored earlier than 10^365+1? eulerphi(371) = 312 > 288 = eulerphi(365)[/QUOTE]
What relevance does this information have to deciding which to factor first? 
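For what it's worth, the quoted eulerphi values do check out; a quick dependency-free check (eulerphi is my own helper, adequate for small n like these):

```python
# Verifies the eulerphi values quoted above via trial-division factoring.
# Pure Python, no dependencies; fine for small n such as 365 and 371.
def eulerphi(n: int) -> int:
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p  # multiply by (1 - 1/p)
        p += 1
    if n > 1:               # leftover prime factor
        result -= result // n
    return result

print(eulerphi(371), eulerphi(365))  # 312 288
```

eulerphi(n) is the degree of the cyclotomic polynomial Phi_n, i.e. the size of the primitive part being factored, which is presumably why sweety439 cites it; the reply's point stands that this alone does not set the queue order.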
Sam Wagstaff just knocked 2,2390M down to c191 with ECM.
No one has reserved it yet. Could this be a target for a short forum team sieve? 
Sure! I can start poly select this evening. Can you contact Sam to reserve it for MersenneForum?

[QUOTE=VBCurtis;569014]Sure! I can start poly select this evening. Can you contact Sam to reserve it for MersenneForum?[/QUOTE]
Done. I suppose it's possible he's already started working on it himself and just hasn't got round to updating the reservation page, so I'll keep everyone updated. What parameters are you using for polyselect, so I know not to overlap with you? 
I figured I'd start after you, so I'd work around whatever region you select.
I'm fine with using an admin equal to your admax, and an incr of 2310 or 4620, to cover a large region up there. 
Got the go-ahead from Sam. :smile:
I'll do P=5M, admax=10M, incr=420. 
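For anyone else joining the split: CADO-NFS's polyselect tries leading coefficients (ad) that are multiples of incr inside [admin, admax), so non-overlapping [admin, admax) ranges cannot duplicate work. A minimal sketch (ad_values is my own hypothetical helper; the second range below is illustrative, not anyone's actual reservation):

```python
# Sketch of how admin/admax/incr partition a CADO-NFS poly select:
# polyselect tries leading coefficients (ad) that are multiples of
# `incr` in [admin, admax). ad_values() is a hypothetical helper.
def ad_values(admin: int, admax: int, incr: int) -> range:
    """Leading coefficients that would be tried in [admin, admax)."""
    start = admin + (-admin) % incr  # round admin up to a multiple of incr
    return range(start, admax, incr)

mine = ad_values(0, 10_000_000, 420)              # per the post above
theirs = ad_values(10_000_000, 20_000_000, 2310)  # illustrative follow-on
print(len(mine), len(theirs))
assert set(mine).isdisjoint(theirs)  # disjoint ranges, no duplicated work
```

Larger incr values such as 2310 = 2*3*5*7*11 restrict the search to leading coefficients with many small divisors, which tends to help root properties, so the coarser grid is not purely a speed trade-off.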