[QUOTE=LOBES;578355]Doh, nevermind.[/QUOTE]
OK.
[QUOTE=chalsall;564260]There's some excellent GPU P-1 software available now. But Prime95/mprime (CPU) is by far the easiest to work with.[/QUOTE]
I agree with "easiest". I am running GPUOwl under Colab. When I can get a P100 (which is virtually every day) I can do an 8GHz P-1 in 30 minutes. Running a similar P-1 on all 8 cores of my i7-7820X takes about 75 minutes.
(Somehow missed this when PaulU OPed it:)
[QUOTE=paulunderwood;563452]Congrats! However: [CODE]? factor(499400852887245323683941126088449355702834653807158087)
[        290582744822559701357207 1]
[1718618403140893608084889221841 1]
[/CODE][/QUOTE]
Ha! Real men (a.k.a. execrable masochists) factor numbers like this using the world's slowest bignum code - I mean of course *nix 'bc', which IIRC uses base-10 emulation of your CPU's base-2 instructions or some such ludicrously inefficient bignum implementation - and crappy bc-based functions of their own writing:

[i]n = 499400852887245323683941126088449355702834653807158087;
p = 105032111;
pm1(n,p,10^4,5*10^6,5)
Stage 1 prime-powers seed = 105032111
Stage 1 residue A = 275242671610725931867172664303887659718570581548948384, gcd(A-1,n) = 1
Stage 2 interval = [10000,5000000]: Using base = 3; Initializing M*24 = 120 [base^(A^(b^2)) % n] buffers for Stage 2...
Stage 2 q0 = 10080, k0 = 48
At q = 209790
At q = 419790
At q = 629790
At q = 839790
At q = 1049790
At q = 1259790
At q = 1469790
At q = 1679790
At q = 1889790
At q = 2099790
At q = 2309790
At q = 2519790
At q = 2729790
At q = 2939790
At q = 3149790
At q = 3359790
At q = 3569790
At q = 3779790
At q = 3989790
At q = 4199790
At q = 4409790
At q = 4619790
At q = 4829790
Stage 2: did 23762 loop passes.
Residue B = -409059575611368065569985294315721920104412510081096652, gcd(B,n) = 290582744822559701357207
This factor is a probable prime.
Processed 581936 stage 2 primes, including 234294 prime-pairs and 113348 prime-singles [80.52 % paired].[/i]

Now back to work on my cutting-edge bc-based NFS implementation, with which I hope to someday factor numbers as large as the quantum-computer folks do: "the quantum factorization of the largest number to date, 56,153, smashing the previous record of 143 that was set in 2012."
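For anyone who'd rather not wade through bc: stage 1 of p-1 is just one big modular exponentiation followed by a GCD. Here's a minimal Python sketch of the idea (the names [i]pm1_stage1[/i] and [i]primes_up_to[/i] are mine, not from any real package); it seeds the exponent the same way the "Stage 1 prime-powers seed" line above does, since for a Mersenne M(p) every factor has the form 2*k*p+1.

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def pm1_stage1(n, B1, seed=1, a=3):
    """P-1 stage 1: raise a small base to seed * prod(q^e), over all
    prime powers q^e <= B1, working mod n, then take a GCD.  Any prime
    factor f of n with f-1 dividing that exponent shows up in the GCD.
    For a Mersenne M(p) every factor is 2*k*p + 1, so pass seed = 2*p."""
    x = pow(a, seed, n)
    for q in primes_up_to(B1):
        e = 1
        while q ** (e + 1) <= B1:  # largest power of q not exceeding B1
            e += 1
        x = pow(x, q ** e, n)
    return gcd(x - 1, n)
```

For example, on M(29) = 233*1103*2089 (a known factorization), `pm1_stage1(2**29 - 1, 10, seed=2*29)` returns the composite GCD 486737 = 233*2089: both of those factors are smooth enough at B1 = 10 once the seed supplies the factor of 29, while 1103 is not (1102 = 2*19*29, and 19 > B1).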
[We open our next scene with a hand slapping the owner's forehead, accompanied by the utterance "doh!"]
Re above: In fact it seems silly to use powerful general-modulus factoring machinery like ECM or QS on such (p-1)-found factor-product composites. Here's why: say we have some product of prime factors F = f1*f2*...*fn discovered by running p-1 to stage bounds b1 and b2 on an input Mersenne M(p) (or other bignum modulus with factors of a known form, allowing p-1 to be 'seeded' with a component of same). BY DEFINITION, each prime factor f1-fn will be b1/b2-smooth, in the sense that fj = 2*p*C + 1, where C is a composite all of whose prime factors are <= b1, save possibly one outlier prime factor > b1 and <= b2.

Thus if we again run p-1 to bounds b1/b2, but now with arithmetic modulo the relatively tiny factor product F, we are guaranteed to resolve all the prime factors f1-fn. The only trick is that we need to take multiple GCDs along the way in order to capture the individual prime factors f1,...,fn, rather than have this secondary p-1 run modulo F again produce the same composite GCD = F which the original p-1 run mod M(p) did. Again, though, since in the followup p-1 run we are working mod F, all the arithmetic is trivially cheap, including the needed GCDs. And since the cost of a p-1 run is effectively akin to that of a single super-cheap ECM curve, we've reduced the work of resolving the composite F to just that equivalent.
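A minimal Python sketch of that followup run (my own illustration, not anyone's production code - the name [i]pm1_split[/i] is hypothetical): rerun stage 1 mod F, but take a GCD after [i]every[/i] prime-power step and divide out each factor the moment it appears, so the factors separate instead of all becoming 1 simultaneously.

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def pm1_split(F, B1, seed=1, a=3):
    """Rerun p-1 stage 1 mod the composite factor product F, taking a
    GCD after every prime-power step so the prime factors come out one
    at a time instead of reappearing as the single composite GCD = F."""
    rem, found = F, []
    x = pow(a, seed, rem)          # fold in the known-form seed (e.g. 2*p) first
    for q in primes_up_to(B1):
        if rem == 1:
            break
        e = 1
        while q ** (e + 1) <= B1:  # largest power of q not exceeding B1
            e += 1
        x = pow(x, q ** e, rem)
        g = gcd(x - 1, rem)
        if 1 < g < rem:            # a proper piece of F just became visible
            found.append(g)
            rem //= g
            x %= rem               # keep working mod what remains
    if rem > 1:                    # whatever survives the loop; in general it
        found.append(rem)          # could still be composite and need a retry
    return sorted(found)
```

On the M(29) example, `pm1_split(233 * 2089, 10, seed=2*29)` yields [233, 2089]: 233 drops out at the q = 2 step (232 = 2^3*29), and 2089 (2088 = 2^3*3^2*29) is all that remains once the 3^2 step runs. Because 232 and 2088 are different, the two factors reach 1 at different points in the exponentiation - that is exactly why the intermediate GCDs resolve them. A production version would also reorder or randomize the base on a retry for the unlucky case where several factors emerge in the same step.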