20191113, 05:42  #1 
P90 years forever!
Aug 2002
Yeehaw, FL
2×3×1,193 Posts 
ECM change
I'm reworking the server's algorithm for handing out ECM assignments. The goal is to hand out assignments with the best chance of finding a factor for a fixed CPU effort.
The new algorithm (which may undergo further adjustments) is handing out smaller exponents than before. This isn't of much concern for users unless you've specified multithreaded workers. Smaller FFT sizes cannot take advantage of multithreading. 
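To illustrate the idea of "best chance of finding a factor per unit of CPU effort", here is a minimal sketch of such a ranking. The probability and cost models are toy stand-ins, not PrimeNet's actual formulas; they only show why a cost that grows with the exponent pushes the ranking toward smaller exponents.

```python
import math

# Hypothetical sketch of ranking ECM candidates by expected success per
# unit effort. Both models below are illustrative assumptions, not the
# real server formulas.

def curve_cost(exponent, b1):
    # Toy cost model: stage-1 work scales with B1 times an assumed
    # ~p*log(p) cost per multiplication mod 2^p - 1.
    return b1 * exponent * math.log(exponent)

def success_chance(exponent, prior_t_level):
    # Toy model: the more ECM already done (higher t-level), the less
    # chance a findable factor remains.
    return 1.0 / (1.0 + prior_t_level)

def rank_assignments(candidates, b1):
    # candidates: list of (exponent, prior_t_level) pairs.
    scored = [(success_chance(p, t) / curve_cost(p, b1), p)
              for p, t in candidates]
    scored.sort(reverse=True)
    return [p for _, p in scored]

# Smaller exponents come out first, consistent with the post.
print(rank_assignments([(1277, 65), (4099, 50), (9601, 40)], 50_000))
```

Under these toy assumptions the cheapest (smallest) exponents dominate the ranking, which matches the observation that the new algorithm hands out smaller exponents.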
20191121, 22:03  #2 
Sep 2002
Oeiras, Portugal
1424_{10} Posts 
Speaking of ECM assignments, would it be possible to add a new column to the right of the ECM progress table? Thanks to Ryan Propper's work, several exponents < 10K have now had all B1 bounds tested up to 800,000,000.

20191121, 23:47  #3 
"Dylan"
Mar 2017
23^{2} Posts 
Another suggestion for ECM work: right now for ECM on Mersennes there are two choices: one for first factors and one for additional factors, and both seem to give curves with B1 = 50k, B2 = 100*B1. Perhaps we should have an additional ECM work type where the server will hand out curves with higher bounds, without having to manually request curves or edit worktodo.txt.
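Until such a work type exists, higher bounds can be requested by hand. If I recall the worktodo.txt format correctly (check the undoc.txt shipped with your Prime95 version), an ECM entry for a Mersenne number looks something like this; the exponent, bounds, and curve count below are purely illustrative:

```
ECM2=1,2,9601,-1,3000000,300000000,100
```

Here k=1, b=2, n=9601, c=-1 encode the number 1*2^9601-1, followed by B1, B2, and the number of curves to run.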

20191121, 23:52  #4  
Nov 2003
16100_{8} Posts 
Quote:
for t65. In fact, only ~83,000 curves are needed at B1 = 800M to achieve t65, so 360K curves is roughly 4.3 × t65. This is massive overkill. I leave it to others to decide how many curves to run, BUT: after ~85K curves at 800M, the project should switch to about B1 = 3G or 4G. 
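A quick arithmetic check of the overkill ratio quoted above, using the figures from the post (360K curves run versus ~83K needed for a full t65 at B1 = 800M):

```python
# Figures taken from the post; this just verifies the stated ratio.
curves_run = 360_000
curves_needed_t65 = 83_000  # at B1 = 800M
ratio = curves_run / curves_needed_t65
print(f"{ratio:.1f}")  # prints 4.3, matching "roughly 4.3 x t65"
```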

20191122, 00:13  #5 
"Curtis"
Feb 2005
Riverside, CA
3·5·13·23 Posts 

20191122, 00:33  #6  
Aug 2002
Buenos Aires, Argentina
2^{3}·167 Posts 
Quote:


20191122, 00:51  #7  
P90 years forever!
Aug 2002
Yeehaw, FL
2·3·1,193 Posts 
Quote:
When results are reported with B2 ≠ 100*B1, they are converted into the equivalent number of B2 = 100*B1 curves using a formula given to me by Alex Kruppa. The table is far from perfect, but it should give us a rough idea of how much effort has been completed. 

20191122, 01:26  #8  
Nov 2003
2^{6}·113 Posts 
Quote:
xxx < 5000. YMMV depending on your definition of "huge". Running ECM at B2 = 100 B1 is horribly inefficient. 

20191122, 01:34  #9  
Aug 2002
Buenos Aires, Argentina
2^{3}×167 Posts 
Quote:
Last fiddled with by alpertron on 20191122 at 01:34 

20191122, 02:31  #10  
Nov 2003
16100_{8} Posts 
Quote:
do not go nearly that high. They max out at exponents ~15K (i.e. numbers of ~4K digits max). For numbers in the millions of digits, convolution methods are clearly out of reach. For 2^p-1 with p < 5K, I see no reason not to use a convolution-based step 2: save the step 1 result, then use GMP-ECM for step 2. Furthermore, no one is discussing ECM at B1 = 800M for numbers of the size you suggest. Last fiddled with by R.D. Silverman on 20191122 at 02:35 
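As a sanity check on the sizes mentioned above, the digit count of 2^p - 1 is floor(p*log10(2)) + 1 (the same as for 2^p, since a power of 2 is never a power of 10):

```python
import math

# Digit count of the Mersenne number 2^p - 1.
def mersenne_digits(p):
    return int(p * math.log10(2)) + 1

print(mersenne_digits(15_000))  # 4516: exponents near 15K give ~4.5K digits
print(mersenne_digits(5_000))   # 1506: p < 5K stays well within that range
```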

20191122, 03:04  #11 
Aug 2002
Buenos Aires, Argentina
1336_{10} Posts 
The program Prime95 is optimized for huge numbers. Ryan Propper uses GMP-ECM or a similar application with an optimized step 2, and then sends the results manually to the PrimeNet server.
It is clear that there should be no need to use Prime95 except to discover prime factors of numbers of more than 5560 digits. Notice that David Bessell found the prime factor 7751061099802522589358967058392886922693580423169 of the 17th Fermat number in 2011 using Prime95; F17 has more than 39400 digits. He also found the prime factor 37590055514133754286524446080499713 of F19 in 2009; it has more than 157800 digits. Last fiddled with by alpertron on 20191122 at 03:05 
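The digit counts cited for the Fermat numbers above are easy to verify: F_m = 2^(2^m) + 1 has floor(2^m * log10(2)) + 1 digits.

```python
import math

# Digit count of the Fermat number F_m = 2^(2^m) + 1.
def fermat_digits(m):
    return int((2 ** m) * math.log10(2)) + 1

print(fermat_digits(17))  # 39457 digits, i.e. "more than 39400"
print(fermat_digits(19))  # 157827 digits, i.e. "more than 157800"
```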