2019-11-22, 03:15  #12  
Nov 2003
1110001000000_{2} Posts 
Quote:
For exponents > 11K, B1 only goes up to ~3M. The ECM testing to 11M for p > 5K isn't even complete, nor is the testing to 3M for p > 11K. For B1 near 3M, putting B2 = 100*B1 is not unreasonable; the default B2 would be 5 x 10^9. For the numbers you mention (millions of digits), what B1 values have been used? Indeed, have any curves been run at all? For the numbers being run at B1 = 800M, introducing million-digit numbers into the discussion is a complete red herring. 
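For concreteness, the gap between the two stage-2 bounds mentioned here works out as follows (the B1 and B2 figures are taken from the post itself; the script is just arithmetic):

```python
# Compare the two stage-2 bounds discussed above for B1 near 3M:
# the B2 = 100*B1 convention vs. the ~5e9 default B2 quoted in the post.
B1 = 3_000_000
B2_server = 100 * B1          # 3e8, the B2 = 100*B1 convention
B2_default = 5 * 10**9        # ~5e9, the default cited in the post

print(f"B2 = 100*B1 : {B2_server:.1e}")
print(f"default B2  : {B2_default:.1e}")
print(f"ratio       : {B2_default / B2_server:.1f}x")
```

So the default stage-2 bound at this B1 is roughly 17 times larger than the 100*B1 convention, which is why the two accounting schemes diverge.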

2019-11-22, 03:16  #13  
P90 years forever!
Aug 2002
Yeehaw, FL
15766_{8} Posts 
Quote:
For simplicity's sake, the ECM tables at the server display the curves and bounds needed for B2 = 100*B1 for both small and large numbers. 

2019-11-22, 03:28  #14  
Aug 2002
Buenos Aires, Argentina
2465_{8} Posts 
Quote:
For example, for 2^33554432+1 (F25, about 10 million digits), the 35-digit level was finished, and Prime95 has run the equivalent of 684 curves with B1 = 3M, B2 = 300M.

Last fiddled with by alpertron on 2019-11-22 at 03:30 
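As a rough illustration of what "the equivalent of 684 curves" means probabilistically, here is a minimal sketch. It assumes, purely hypothetically, that 684 such curves make up one full t-level for 35-digit factors, so the implied per-curve find probability is about 1/684; these are not real ECM statistics.

```python
# Miss probability after running n independent ECM curves, each with
# per-curve probability p of finding a factor of the target size.
def miss_probability(n: int, p: float) -> float:
    return (1.0 - p) ** n

# Hypothetical: if 684 curves at B1 = 3M, B2 = 300M were exactly one
# t-level of work for 35-digit factors, the per-curve probability would
# be about 1/684, and the chance of having missed such a factor after
# running them all is close to 1/e.
n = 684
print(miss_probability(n, 1.0 / n))   # ~0.368, i.e. roughly 1/e
```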

2019-11-22, 03:32  #15  
Nov 2003
1C40_{16} Posts 
Quote:
The table does say that 360K curves at 800M have been run for several numbers, e.g. M1277. It says that over 100K curves have been run for some other numbers, over 85K for some others, and 70K to 80K for others. For these numbers it is time to increase B1.

The range 1500 < p < 1700 has over 150K curves run. This too is overkill for B1 = 800M; it is roughly 2*t65. The range 1400 < p < 1500 hasn't even started a t60, while many larger exponents have finished t60. Many numbers report t60 with B1 = 260M and 112K curves as "DONE". This too is overkill on the number of curves for t60. Or are these entries wrong?

This is for the table: https://www.mersenne.org/report_ecm/

It seems that a lot of CPU effort has been WASTED by running too many curves for a given B1.

Last fiddled with by R.D. Silverman on 2019-11-22 at 03:33 Reason: one additional sentence 

2019-11-22, 04:43  #16 
Jun 2003
4777_{10} Posts 
I think you missed the fact that these are 360,000 "curve equivalents". No one has run 360k curves with GMP-ECM defaults. They have run the proper amount, and the server is recording it in B2 = 100*B1 equivalent counts.
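The bookkeeping described here can be sketched as follows. The per-curve success probabilities are placeholder values chosen for illustration, not actual ECM statistics, and `equivalent_curves` is a hypothetical helper, not the server's actual code.

```python
import math

# Sketch of "curve equivalent" accounting: curves run at some (B1, B2)
# are credited as however many B2 = 100*B1 curves would give the same
# overall probability of finding a factor of the chosen target size.
def equivalent_curves(n_run: int, p_run: float, p_std: float) -> float:
    """Re-express n_run curves with per-curve success probability p_run
    as curves with per-curve probability p_std (the B2 = 100*B1 case),
    by matching miss probabilities:
        (1 - p_run)**n_run == (1 - p_std)**n_eq
    """
    return n_run * math.log1p(-p_run) / math.log1p(-p_std)

# e.g. 10,000 curves whose (hypothetical) larger B2 triples the
# per-curve probability are credited as ~30,000 standard curves:
print(round(equivalent_curves(10_000, 3e-5, 1e-5)))   # -> 30000
```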

2019-11-22, 06:59  #17  
Nov 2003
1110001000000_{2} Posts 
Quote:
Running a set of curves with B1=800M and B2=100*B1 may give a 1 - 1/e probability of success for t65, but its probability for, say, 70-digit factors will be different from running an "equivalent" set of fewer curves with higher B2. Similarly for other sizes of target factor. The sets are equivalent only for one particular factor size.

Keep in mind that running a set of curves at a given limit gives a pdf: it has a probability of success for a range of different factor sizes. Running a different set may give an equal probability AT ONE SIZE, but it will differ at other sizes. The "equivalence" is not really an equivalence: when one converts a set of curves to an "equivalent" set for one size, information is lost. And the term "proper amount" begs the question; note that the GMP-ECM defaults are not truly optimal. What is the "proper amount"?

I would prefer to list actual counts along with B1 and B2. However, much of that info has been lost owing to "conversions to equivalent counts with B2 = 100*B1". BTW, the headings on the web page should document the "conversion". 
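A small numerical sketch of this point, using made-up per-curve probabilities (not real ECM values): two curve sets arranged to give the same overall success probability for 65-digit factors can still disagree for 70-digit factors, because each (B1, B2, count) choice carries its own distribution over factor sizes.

```python
# Overall success probability after n independent curves, each with
# per-curve probability p of finding a factor of the given size.
def success(n: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** n

# Set A: many curves, lower B2.  Set B: fewer curves, higher B2.
# The hypothetical per-curve probabilities are chosen so the two sets
# match exactly at 65 digits:
A = {"n": 20_000, "p65": 5.0e-5, "p70": 4.0e-6}
B = {"n": 10_000, "p65": 1.0e-4, "p70": 1.2e-5}

for s, name in ((A, "A"), (B, "B")):
    print(name,
          round(success(s["n"], s["p65"]), 3),   # ~0.632 for both
          round(success(s["n"], s["p70"]), 3))   # differs between sets
```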

2019-11-22, 07:05  #18  
Nov 2003
2^{6}·113 Posts 
Quote:
This is another red herring, since the discussion is about Mersenne numbers. Why do you keep deflecting the discussion? The tables show zippo for Mersenne numbers above exponent 15K. 

2019-11-22, 07:19  #19  
Nov 2003
2^{6}×113 Posts 
Quote:
You could also list the effort as a fraction, i.e. what fraction of the effort required to achieve a search to a particular size has been run. Example: rather than list 111K curves with B1 = 800M for M1213, as it does now, one could list 0.308 t65. This drops the misleading/false claims/impressions about curve counts. And giving 6 significant digits for the curve counts is also somewhat misleading, because it conveys an impression of accuracy that really isn't there, especially after a "conversion". 
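A sketch of the suggested fractional display. The 360,000-curve figure for a full t65 at B1 = 800M, B2 = 100*B1 is inferred from counts quoted earlier in the thread, not an official constant, and `t65_fraction` is a hypothetical helper.

```python
# Assumed curves for one full t65 at B1 = 800M, B2 = 100*B1
# (taken from the 360K figure quoted in the thread, not official).
CURVES_PER_T65 = 360_000

def t65_fraction(curves_run: int) -> float:
    """Effort expressed as a fraction of a complete t65."""
    return curves_run / CURVES_PER_T65

# M1213's reported ~111K curves would then display as ~0.308 t65:
print(round(t65_fraction(111_000), 3))   # -> 0.308
```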

2019-11-22, 07:47  #20  
Jun 2003
17·281 Posts 
Quote:
Plus, does it really affect the optimality of ECM if these numbers are off by a few percent? What's the big deal if you do a few percent more or fewer curves than optimal at each level? 
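One way to quantify "a few percent": under the standard approximation that the miss probability after w t-levels' worth of curves is about exp(-w), running 5% more or fewer curves moves the miss probability by only about two percentage points either side of 1/e.

```python
import math

# Sensitivity of the miss probability to small over- or under-shoots in
# curve count, using the exponential approximation miss ~ exp(-work),
# where work is measured in units of one full t-level.
for work in (0.95, 1.00, 1.05):
    print(f"{work:.2f} t  ->  miss ~ {math.exp(-work):.3f}")
```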

2019-11-22, 08:43  #21 
Romulan Interpreter
Jun 2011
Thailand
3×13×229 Posts 
I think you are all right, as George said, and about "wasting" the effort, a lot of factors are still coming from the ECM trenches, which means for me (not the LaurV with any minimal math knowledge, but the pragmatic LaurV, with no math, but with logic and common sense) that there is no effort wasted. As long as the factors come, people are producing. How are they equivalating (is this a word?) the hammers and screwdrivers with pneumatic hammers and dynamite, I couldn't care less... If there would be no ore coming, and yet people continue digging, I would say, well, stop the factory, we are wasting time...
Last fiddled with by LaurV on 2019-11-22 at 08:44 
2019-11-22, 12:48  #22  
Nov 2003
2^{6}×113 Posts 
Quote:
necessary. We don't know the extent, since the "conversion" destroys the information needed [exact curve counts, plus both B1 and B2 values].
Quote:
Didn't you study the use of sigfigs in scientific tables while you were in secondary school?
Quote:
What does matter is that the tables convey misinformation about the pdfs. It would be nice to know, for example, when one runs a t65, what the probability was of finding or missing factors of OTHER sizes. The conversion makes that impossible. 
