2019-04-03, 09:17  #12 
"Tilman Neumann"
Jan 2016
Germany
2·3·5·13 Posts 
Hi Nesio,
Thilo Harich and me have been working recently on an implementation that is very similar to your SM algorithm. See this post: https://www.mersenneforum.org/showpo...5&postcount=28 Or this Java class: https://github.com/TilmanNeumann/jav...Hart_Fast.java Both your and our implementations improve upon the orginial Hart algorithm mostly because of the use of multipliers for k. You use k*m*n with some flexible choice of m, and we use 4*k*n, stepping only over kvalues that are multiples of 315. Additionally, we adjust a=ceil(sqrt(4kn)) by some congruences (mod 2^s) as it is done in Lehman's algorithm. From your paper I derived that you propose multipliers m like 24, 120, 480, 5040, where smaller m are better suited for small n, and bigger m better for bigger n. I implemented your SM algorithm in Java in https://github.com/TilmanNeumann/jav...art_Nesio.java and tested it for n from 20 to 50 bit with the following multipliers m: 1, 4, 24, 48, 120, 240, 720, 1260, 1440, 5040, 10080, 40320, 80640, 110880, 362880, 725760. Surprisingly I found that 10080 is the best m in the whole nrange. The reason is that I compute the sqrt(k) values already at the construction time of the algorithm. This is a big speedup because we can replace a sqrt by a double multiplication. But the double multiplication hardly cares about the size of its arguments; thus the size of m only matters in the k*m*n multiplication now, if at all. Considerung that your SM algorithm does not use adjustments by congruences it is quite fast. Nonetheless the congruence adjustments mean a further speedup. According to my tests with semiprimes having smaller factor between cbrt(n) and sqrt(n), Hart_Fast seems to be constantly about factor 1.46 faster than Hart_Nesio. 
2019-04-03, 09:37  #13 
"Tilman Neumann"
Jan 2016
Germany
2·3·5·13 Posts 
Here is a performance comparison on 100k test numbers of the mentioned kind (semiprimes with smaller factor between cbrt(n) and sqrt(n)) from 30 to 50 bits, including a couple of other algorithms, too.

2019-04-03, 11:13  #14  
Apr 2019
2^{5} Posts 
In our paper we stated directly that SM (Simple Multiplication) is à la Hart. There are several references in our article to Hart's work and his thoughts regarding SM as well. At the beginning of this forum thread we also referenced Hart in connection with SM. SM and the improvement of choosing its multiplier m (5040, 720, etc.) were not the object of our research. As you can see, we researched the RM (Recursive Multiplication) algorithm and touched on its relations to SM. We think that the tricks for speeding up SM, RM and Lehman's algorithms in practice (at the programming level) are a separate topic. 

2019-04-03, 15:28  #15 
"Dana Jacobsen"
Feb 2011
Bangkok, TH
2×11×41 Posts 
I also got from the paper that SM was not the end goal, and I wasn't commenting on that. I think RM is certainly interesting and non-obvious.
Mainly, since we've had discussions on this forum of Hart's OLF (with multiplier) vs. Lehman vs. SQUFOF vs. Pollard/Brent Rho, I wanted to note that SM and Hart's OLF are basically the same, though just like Hart's paper it leaves the use and choice of the multiplier to the user (he suggests 480). Thilo and Tilman have been doing lots of great work with both Lehman's and Hart's methods in Java. I wish I had more free time to work on this in C! I look forward to what they come up with when looking at RM. 
2019-04-03, 16:09  #16 
"Tilman Neumann"
Jan 2016
Germany
2×3×5×13 Posts 
Actually, RM does not look that promising to me. Compared to SM, its advantage in terms of iterations is only 0.6% at 16-digit inputs, but it will (quite certainly) need more instructions per iteration.
Of course it is fair and interesting to investigate such issues. But the final goal is better performance, no? We found another way to arrange k-values in a good way in class Lehman_CustomKOrder. For "bigger" n like 50 bits, it is around a factor of 1.4 faster than Lehman_Fast, which was reproduced in C by bsquared with some success. Lehman_CustomKOrder's performance is pretty similar to that of our Hart_Fast implementation. The performance of Hart and Lehman seems to be converging. Maybe there is not much room for improvement left (considering current architectures). Last fiddled with by Till on 2019-04-03 at 16:18 Reason: fixed bsquared's nick ;) 
2019-04-03, 17:35  #17  
Apr 2019
2^{5} Posts 
Thanks for the interesting links to your and Thilo's work.
Hart's paper contains the k*m*n multiplication: k is the value being searched over, m is a tuning parameter. He advises setting m = 480. We took Hart almost as is. One small result of our work, as we see it, is that RM explains some limits of SM. 


2019-04-03, 17:52  #18 
"Tilman Neumann"
Jan 2016
Germany
390_{10} Posts 

2019-04-03, 21:15  #19  
Apr 2019
40_{8} Posts 
We are not ready to comment on your test results yet, because that requires analyzing your program's implementation, and that will take time. However, we have a question: why does your listing contain factorization errors? Please look at the table in the attachment. The first column holds the values of the multiplier m. Our SM and RM code found the factors of those of your numbers that contain errors. This table also brings us back to the topics we touched on in our paper: some hard-to-factor numbers for the MMFFNN method, the choice of the parameter m, some advantages of RM vs. SM, and so on. 

2019-04-03, 21:55  #20 
"Tilman Neumann"
Jan 2016
Germany
2·3·5·13 Posts 
Those errors occur because the sqrt arrays I used are not big enough for the n's in question. In my implementation of your SM algorithm I only stored sqrt(km) for k<=2^21.
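To illustrate the failure mode with a hypothetical guard (this is not the actual Hart_Nesio code): if the precomputed sqrt table only covers k <= 2^21, any lookup beyond that bound has to fall back to a live sqrt instead of reading garbage, otherwise large n silently produce wrong factors:

```java
public class SqrtTableSketch {
    // Precompute sqrt(k) once at construction time, as described above,
    // so the inner loop can replace a sqrt call by a double multiplication.
    static final int K_MAX = 1 << 21;
    static final double[] SQRT = new double[K_MAX + 1];
    static {
        for (int k = 1; k <= K_MAX; k++) SQRT[k] = Math.sqrt(k);
    }

    // Guarded lookup: beyond the table, compute the sqrt live rather
    // than indexing out of range or clamping to a wrong value.
    static double sqrtK(int k) {
        return k <= K_MAX ? SQRT[k] : Math.sqrt(k);
    }
}
```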

2019-04-04, 13:11  #21  
Apr 2019
2^{5} Posts 
If you want to accelerate the SM (à la Hart) algorithm, please take our advice, which is based on our work on the MMFFNN method:

1. Use the constant multiplier 4 in all sqrt equations (4*k*n*m, you understand me).
2. Use an optimal strategy for choosing the multiplier m (we wrote about it):
2.1. The smaller n, the smaller m, and vice versa.
2.2. Balance the cost of the multiplication by m (negative factor) against the presence of more prime divisors in m (positive factor).
3. Remember the hard-to-factor numbers n in MMFFNN methods (both SM and RM): Fermat has cycling addition, Lehman has cycling multiplication and addition, MMFFNN has multiplication only.
3.1. Hard-to-factor numbers have a special ratio a/b where a*b = n; try to take such "ratios" into account.
4. Use fast methods for the multiplication by k, or otherwise speed up the multiplication by k in 4*k*n*m.
5. Use the consequences of the RM method:
5.1. Use the SM technique while k < (m*n)^(1/3).
5.2. Use the RM technique when k > (m*n)^(1/3) (you can emulate the recursion without an actual recursive call).
5.3. In RM, use your own version of the isSieve function. An example for your task: let k = k1*k2*k3 in RM; try to sieve out redundant iterations when seeking k, e.g. k = 24 = 1*2*12 = 1*3*8 = 2*2*6 = 2*3*4.
6. Finally, use the !_additional_! recommendations for improving the SM algorithm from Lehman's and Hart's papers (we did not consider them in our work). 
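Point 1 (the constant multiplier 4, matching Lehman's a^2 - b^2 = 4kn identity) can be sketched like this. This is a hypothetical minimal loop, assuming n is small enough that 4*k*m*n fits in a long, and it deliberately omits the refinements from points 2-6:

```java
public class Hart4kmnSketch {
    // Sketch of the 4*k*m*n variant: test whether a^2 - 4*k*m*n is a
    // perfect square b^2; then (a-b)(a+b) = 4*k*m*n, and gcd(a-b, n)
    // may be a non-trivial factor of n.
    static long factor(long n, long m, long kMax) {
        for (long k = 1; k <= kMax; k++) {
            long fourKmn = 4 * k * m * n;               // assumes no overflow (small n)
            long a = (long) Math.ceil(Math.sqrt((double) fourKmn));
            long b2 = a * a - fourKmn;
            long b = (long) Math.sqrt((double) b2);
            if (b * b == b2) {                          // b2 is a perfect square
                long g = gcd(a - b, n);
                if (g > 1 && g < n) return g;           // non-trivial factor
            }
        }
        return 0;                                       // not found within kMax
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
}
```

For instance, with n = 101*103 = 10403 and m = 1 this succeeds at k = 1: a = ceil(sqrt(41612)) = 204 = 101+103, b^2 = 204^2 - 41612 = 4, and gcd(202, n) = 101.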

2019-04-04, 13:48  #22 
Mar 2018
3×43 Posts 
