[QUOTE=Gordon;414839]
Per the formula:

1. Throw away Z - the actual number of curves - we don't use it !!
2. Force B2 = 100*B1 = 300M
3. Calculate Z1*x = 3M*300M = 900*M*M = 9000B
4. ???[/QUOTE]

In step 3, where did you get Z1 = 3M and x = 300M? And what does "B" stand for here? 900 million million isn't 9000 anythings that I can think of an abbreviation for.

Z1 is the adjusted number of curves, NOT one of the bounds of the curve. If 3M/300M is a standard t40 curve, and you run 3M/500M instead, Alex's formula converts that to an equivalent number of 3M/300M curves. Perhaps one curve at 3M/500M is "worth" 1.3 curves at 3M/300M; if so, submitting Z = 100 curves at 3M/500M will result in Z1 = 130 curves of work at 3M/300M.

It sounds from George's reply that his summary tracks a sort of summed-by-B1 effort, which is not very accurate. That would also explain the inflated curve report on the site. However, even inflation by a factor of 2 leads users to run useful curves, whereas any underestimate would lead to wasted duplication of work; so leaving it as-is is preferable to a report that causes users to do too many curves at too low a level.
[QUOTE=VBCurtis;414847]In step 3, where did you get Z1 = 3M and x = 300M? And what does "B" stand for here? 900 million million isn't 9000 anythings that I can think of an abbreviation for.

Z1 is the adjusted number of curves, NOT one of the bounds of the curve. If 3M/300M is a standard t40 curve, and you run 3M/500M instead, Alex's formula converts that to an equivalent number of 3M/300M curves. Perhaps one curve at 3M/500M is "worth" 1.3 curves at 3M/300M; if so, submitting Z = 100 curves at 3M/500M will result in Z1 = 130 curves of work at 3M/300M.

It sounds from George's reply that his summary tracks a sort of summed-by-B1 effort, which is not very accurate. That would also explain the inflated curve report on the site. However, even inflation by a factor of 2 leads users to run useful curves, whereas any underestimate would lead to wasted duplication of work; so leaving it as-is is preferable to a report that causes users to do too many curves at too low a level.[/QUOTE]

It may be that he's using the old British system of million, milliard, billion, billiard, etc. There's still a typo, though.
[QUOTE=VBCurtis;414847]In step 3, where did you get Z1 = 3M and x = 300M? And what does "B" stand for here? 900 million million isn't 9000 anythings that I can think of an abbreviation for.

Z1 is the adjusted number of curves, NOT one of the bounds of the curve. If 3M/300M is a standard t40 curve, and you run 3M/500M instead, Alex's formula converts that to an equivalent number of 3M/300M curves. Perhaps one curve at 3M/500M is "worth" 1.3 curves at 3M/300M; if so, submitting Z = 100 curves at 3M/500M will result in Z1 = 130 curves of work at 3M/300M.

It sounds from George's reply that his summary tracks a sort of summed-by-B1 effort, which is not very accurate. That would also explain the inflated curve report on the site. However, even inflation by a factor of 2 leads users to run useful curves, whereas any underestimate would lead to wasted duplication of work; so leaving it as-is is preferable to a report that causes users to do too many curves at too low a level.[/QUOTE]

Direct lift from George's post:

"Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort."

I read this as Z1 = 100x = 100*B1. If it isn't, then it needs to be explained more fully how this mystical "conversion" is done.

For my maths, x = B1, so in my case 3M(illion), and Z1 = 100*B1 = 300M(illion). 3 million x 300 million = 900 million million, which is 9000 BILLION.

The reason for the ???? at point 4 is - what does it actually represent, and how is it converted to curves, equivalent or otherwise?

Without knowing how the calcs are done, can you look at the report for say M22787 and be certain that the T25,30,35,40 actually have been FULLY done and that the T45 is 1/6 of the way there??
[QUOTE=Gordon;414852]
"Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort."[/QUOTE]

Read this as:

Using a formula from Alex Kruppa, your B1=x, B2=y, curves=z values are converted into an "equivalent number of curves (call this z1)" where B2=100x. Then z1 * x is added to the running total of ECM effort.
[QUOTE=Prime95;414854]Read this as:
Using a formula from Alex Kruppa, your B1=x, B2=y, curves=z values are converted into an "equivalent number of curves (call this z1)" where B2=100x. Then z1 * x is added to the running total of ECM effort.[/QUOTE]

Punctuation always helps :smile:

What is the conversion formula?
[CODE]// Our total_ECM_effort tracks curves assuming a B2 value of 100 * B1.
// If B2 is not 100 * B1, then adjust the reported B1 value up or down
// to reflect the increased or decreased chance of finding a factor.
//
// From Alex Kruppa, master of all things ECM, the following formula
// compensates for using B2 values that are not 100 * B1.
//      0.11 + 0.89 * (log_10(B2 / B1) / 2) ^ 1.5

function normalized_B1( $B1, $B2 ) {
        if ($B2 == 100 * $B1) return $B1;
        return $B1 * (0.11 + 0.89 * pow (log10 ($B2 / $B1) * 0.5, 1.5));
}
[/CODE]
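To connect this formula back to the 3M/500M example discussed earlier in the thread, here is a Python transcription of the PHP above (a sketch, not the server's actual code; the function name mirrors the PHP):

```python
import math

def normalized_b1(b1, b2):
    """Python transcription of the PHP normalized_B1 above:
    scale B1 by Kruppa's 0.11 + 0.89 * (log10(B2/B1) / 2) ** 1.5,
    so effort is tracked as if B2 were the standard 100 * B1."""
    if b2 == 100 * b1:
        return b1
    return b1 * (0.11 + 0.89 * (math.log10(b2 / b1) / 2) ** 1.5)

# Standard-B2 curves are unchanged: 3M/300M normalizes to B1 = 3M.
print(normalized_b1(3_000_000, 300_000_000))  # 3000000

# The 3M/500M curves from the discussion get a modest bonus: each one
# counts as roughly 1.15 curves at 3M/300M (not the hypothetical 1.3).
print(normalized_b1(3_000_000, 500_000_000) / 3_000_000)
```

Note that the formula adjusts B1 rather than the curve count directly, but since the ledger sums (curves × normalized B1), the two readings are interchangeable as a multiplier on effort.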
Meanwhile...
M191689 has a factor: 1319541091656106614381619344521 (ECM curve 53, B1=250000, B2=25000000)

100.058 bits

k = 2[SUP]2[/SUP] × 5 × 71 × 1439 × 3881 × 434013212304053
[QUOTE=Prime95;414836]Using a formula from Alex Kruppa your B1=x B2=y curves=z values are converted into an equivalent number of curves where B2=100x (call this z1). Then z1 * x is added to the running total of ECM effort. I'm no expert in the field, but I've been told this is good enough for a rough approximation of effort.[/QUOTE]
This works well enough when the current "open" range is a t50 and people are reporting curves in the t45-t55 range. But it will give ridiculous results when the open range is a t65 and people are reporting t25 curves.
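A back-of-the-envelope sketch of axn's point. The B1 values below (50K for a t25 curve, 850M for a t65 curve) are my own illustrative assumptions in line with common GMP-ECM choices, not figures from the thread; the point is only the size of the ratio once everything is reduced to a B1-weighted sum:

```python
# The site's running total is (normalized curves) * B1, so one curve's
# contribution is proportional to its (normalized) B1. Assume, for
# illustration only, conventional B1 choices of 50K for a t25 curve
# and 850M for a t65 curve (both with B2 = 100 * B1, so no adjustment).
B1_T25 = 50_000
B1_T65 = 850_000_000

# Under the B1-weighted ledger, this many t25 curves "equal" one t65 curve:
curves_equivalent = B1_T65 / B1_T25
print(curves_equivalent)  # 17000.0

# But 17,000 curves at B1 = 50K have essentially no chance of turning up
# a 65-digit factor, so crediting them toward a t65 total is misleading -
# the "ridiculous results" axn describes.
```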
[QUOTE=axn;414897]This works well enough when the current "open" range is a t50 and people are reporting curves in the t45-t55 range. But it will give ridiculous results when the open range is a t65 and people are reporting t25 curves.[/QUOTE]
[thinking]Woh! Let's try that! Time to get some free ECM credit, we began to fall behind... :w00t:[/thinking]
[QUOTE=axn;414897]This works well enough when the current "open" range is a t50 and people are reporting curves in the t45-t55 range. But it will give ridiculous results when the open range is a t65 and people are reporting t25 curves.[/QUOTE]
Well, part of the problem seems to be that all t25 and t30 curves are counted this way for the t50 and up levels, resulting in a permanent overestimation of the actual work done on every candidate marked complete past the t45 level. It's not a large error, less than a factor of two, but it's definitely an error. It just so happens to be the sort of error that leads people to run curves slightly too big, which is possibly more efficient (e.g. once half a t35 is done, running curves at 3M is nearly as fast a way of finishing the t35, whilst finding some factors larger than curves at 1M would). I like accuracy, but this error is wasting very, very little user-CPU time.
P-1 found a factor in stage #2, B1=635000, B2=11906250.
UID: Jwb52z/Clay, M74230187 has a factor: 141895279886608660072079 (P-1, B1=635000, B2=11906250), 76.909 bits.