As it turns out, including the odds of finding a factor that the previous level missed does substantially affect the ECM estimate. The third argument is how much of the missed-factor odds (exp(-1) times the odds of a factor existing at the previous level) to add to the odds of a factor existing at this level. It's an admittedly arbitrary way of accounting for factors near the previous level that the previous level's curves might have missed.
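In sketch form, the combination looks like this (a paraphrase in Python of the formula just described, not a verbatim excerpt of the script):

[code]from math import exp

def total_factor_odds(p_this, p_prev, carryover):
    # A full t-level search misses a factor at its own level exp(-1)
    # of the time, so exp(-1)*p_prev is the chance that a previous-level
    # factor exists and survived the previous level's curves.
    # carryover is the script's third argument, 0 to 1: how much of
    # those leftover odds to fold into this level's odds.
    return p_this + carryover * exp(-1) * p_prev[/code]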
[code]In [1]: %run nfsecm.py 60 325000 0
With roughly 8.625820% odds of a factor existing, you should do 16312 curves at 60 digits before switching to NFS

In [2]: %run nfsecm.py 60 325000 0.5
With roughly 10.362337% odds of a factor existing, you should do 31373 curves at 60 digits before switching to NFS

In [3]: %run nfsecm.py 60 325000 1
With roughly 12.098855% odds of a factor existing, you should do 46382 curves at 60 digits before switching to NFS

In [4]: %run nfsecm.py 65 325000 0
Not even one curve should be done, forget about ECM at 65 digits.

In [5]: %run nfsecm.py 65 325000 0.5
Not even one curve should be done, forget about ECM at 65 digits.

In [6]: %run nfsecm.py 65 325000 1
Not even one curve should be done, forget about ECM at 65 digits.[/code]

[url]https://gist.github.com/dubslow/916cb29e30277380c9f2[/url]

Edit: Here's how the numbers react to variance in the estimated NFS effort:

[code]In [10]: %run nfsecm.py 60 325000 0
With roughly 8.625820% odds of a factor existing, you should do 16312 curves at 60 digits before switching to NFS

In [11]: %run nfsecm.py 60 330000 0
With roughly 8.625820% odds of a factor existing, you should do 17463 curves at 60 digits before switching to NFS

In [12]: %run nfsecm.py 60 335000 0
With roughly 8.625820% odds of a factor existing, you should do 18614 curves at 60 digits before switching to NFS

In [13]: %run nfsecm.py 60 340000 0
With roughly 8.625820% odds of a factor existing, you should do 19765 curves at 60 digits before switching to NFS

In [14]: %run nfsecm.py 60 345000 0
With roughly 8.625820% odds of a factor existing, you should do 20916 curves at 60 digits before switching to NFS

In [15]: %run nfsecm.py 60 350000 0
With roughly 8.625820% odds of a factor existing, you should do 22067 curves at 60 digits before switching to NFS

In [16]: %run nfsecm.py 60 320000 0
With roughly 8.625820% odds of a factor existing, you should do 15161 curves at 60 digits before switching to NFS

In [17]: %run nfsecm.py 60 315000 0
With roughly 8.625820% odds of a factor existing, you should do 14010 curves at 60 digits before switching to NFS

In [18]: %run nfsecm.py 60 310000 0
With roughly 8.625820% odds of a factor existing, you should do 12860 curves at 60 digits before switching to NFS

In [19]: %run nfsecm.py 60 305000 0
With roughly 8.625820% odds of a factor existing, you should do 11709 curves at 60 digits before switching to NFS

In [20]: %run nfsecm.py 60 300000 0
With roughly 8.625820% odds of a factor existing, you should do 10558 curves at 60 digits before switching to NFS[/code]

The good news is that the expected-effort curve is relatively flat around the optimum, so getting within ~5K curves of it is pretty damn close to optimal. The missed-factor odds are rather more volatile, and I guess I'd like more feedback about that.
I view the case for 1k-3k curves at 850M after a t60 as a sort of "cleanup" of those factors under 60 digits that the t60 might've missed. But I usually stop before a complete t60 is done and substitute the 850M curves for the last few thousand 260M curves, on the premise that the increased chance of larger factors makes up for the slight inefficiency of using 850M curves to complete a t60.
Great analysis! Here's how I look at those missed factors vs "extra" factors for a t60: a t55 has a 1/e chance to miss a 55-digit factor. A t60 is about 6 t55, so that same missed factor has roughly a 1/e^5 chance to still be missed; almost zero. So a t60 is going to find 1/e of the 55-digit factors (the ones missed by the t55). A t55 is 2 t53, so a 53-digit factor is missed 1/e^2 of the time; the t60 will find those for "sure". However, the t60 will miss 1/e of the 60-digit factors, so the extra 55-digit factors found are balanced by the missed 60-digit factors. Likewise for 53 vs 58 digit factors: a t60 misses 1/e^2 of the 58-digit factors, but finds (roughly) the same number of 53-digit factors that were missed by the t55.

So your edit 2 had the right idea - the extra factors picked up at a smaller level come very close to canceling the missed factors in the 5-digit range you're searching. This disregards the "bonus" higher factors, but the t55 would have picked up some bonus factors in your 55-to-60 digit range too - the cancellation argument works the same as before. So the method we usually use of pretending all 55 to 60 digit factors will be found by a t60 is pretty accurate for estimating the actual number of factors that will be found - it's just that a fair number of the factors will be 53-55 digits rather than 58-60 digits.

Edit - on projects this big, I think the parallelization of ECM itself has value: solving the matrix requires high-value hardware, while ECM does not. I think that makes it worth doing extra ECM work on the cheap hardware on the chance it saves having to do the matrix on said high-value hardware. Say, 10-20% more curves than strict efficiency arguments call for.
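A quick numeric check of that cancellation, using the t60 ≈ 6 t55 ratio from above (a rough sketch; the bookkeeping, not a simulation):

[code]from math import exp

t60_in_t55_units = 6.0   # a t60 is roughly 6 t55s of work on 55-digit factors

# of all 55-digit factors: missed by the t55, then found by the t60
extra_55s = exp(-1) * (1 - exp(-t60_in_t55_units))
# of all 60-digit factors: missed by the t60 itself
missed_60s = exp(-1)

print(extra_55s, missed_60s)   # ~0.367 vs ~0.368 -- they nearly cancel[/code]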
[QUOTE=VBCurtis;427951]A t60 is about 6t55,...
A t55 is 2t53, ...[/QUOTE]

This is one thing I still don't have a good feel for. How do you calculate this?

As for the rest, I'm not entirely sure I made myself clear -- the code accounts for the odds that the t60 misses a 56-60 digit factor. It makes substantial use of the CDF = 1-exp(-curves/median). That is to say, in your sentence here:

[quote=VBCurtis;427951]However, the t60 will miss 1/e of the 60-digit factors, so the extra 55 digit factors found are balanced by the missed 60-digit factors.[/quote]

My code already accounts for the missed 60-digit factors, but not for the missed 55-digit factors (when the third argument is 0), so there is not any such balance.

I guess what I'm asking is: when you've run a t55 but nothing at t60, how much of a t60 is "already" accounted for by a t55? By your first quoted sentence above, 1 t55 = 1/6 t60...?

Here's the relevant code: [PASTEBIN]CDdeJsqa[/PASTEBIN] And... [PASTEBIN]5pR4AUHc[/PASTEBIN]
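Concretely, that CDF is just the following (a minimal rendering, with `median` being the level's nominal curve count):

[code]from math import exp

def find_prob(curves, median):
    # chance that `curves` curves find a factor at this level, given one exists
    return 1 - exp(-curves / median)

# running exactly the nominal t60 curve count still misses 1/e of the time:
print(find_prob(47888, 47888))   # ~0.632[/code]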
[QUOTE=Dubslow;427971]This is one thing I still don't have a good feel for. How do you calculate this?
I guess what I'm asking is: when you've run a t55 but nothing at t60, how much of a t60 is "already" accounted for by a t55? By your first quoted sentence above, 1 t55 = 1/6 t60...?[/QUOTE]

I ask GMP-ECM, by comparing the curve counts at a specific B1 value required for a t55 and a t60.

At B1 = 260M, a t55 is 8656 curves while a t60 is 47888 curves. So, when one completes the 47888 curves for a t60, one has done 47888/8656 = 5.5ish t55. At B1 = 110M, a t55 is 20479 curves, while a t60 is 128305 curves. So, when one completes 20479 curves for a t55, one has completed 20479/128305 of a t60, or just under 1/6th.

To estimate t53 and such, I use a geometric interpolation: if ECM effort doubles every two digits, 5 digits higher would be 2.5 doublings. 2^2.5 is around 5.7, right in between the two ratios mentioned above. So, a rough guide for estimating t53 is half a t55, t57 is 2*t55, t59 is 2*t57 is 4*t55, and t60 is sqrt2*t59 is 4sqrt2 * t55 is 5.7*t55. Again, the 5.7 is a rounded version of the first calculation I did.

The ratio changes for various B1 values, but not by much. For instance, I use B1 = 150M to run my t55s; a t55 is 14356 curves, while a t60 is 86058. On my machine, 14356 curves at 150M run faster than 20479 curves at 110M, while also completing a teensy bit more of a t60 than the set of curves at 110M (i.e. there's a bit more chance of a "bonus" factor above t55).
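In code form, the interpolation is a one-liner (a sketch; the function name is mine):

[code]def t_ratio(d_from, d_to):
    # ECM effort roughly doubles every two digits
    return 2 ** ((d_to - d_from) / 2)

print(t_ratio(55, 60))   # ~5.66, close to the 47888/8656 = 5.5ish above
print(t_ratio(53, 55))   # 2.0: a t55 is ~2 t53[/code]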
So bottom line...
t55, followed by t60, followed by ?t65. How many curves?

For comparison's sake, see [url=http://www.mersenneforum.org/showpost.php?p=404530&postcount=1]this post[/url] for a list of GNFS composites with ECM preprocessing requirements. Ultimately, the required level of ECM will be whatever the NFS@Home gatekeeper says it is.:smile:
[QUOTE=swellman;428044]So bottom line...

t55, followed by t60, followed by ?t65. How many curves?

For comparison's sake, see [url=http://www.mersenneforum.org/showpost.php?p=404530&postcount=1]this post[/url] for a list of GNFS composites with ECM preprocessing requirements. Ultimately, the required level of ECM will be whatever the NFS@Home gatekeeper says it is.:smile:[/QUOTE]

By my analysis, we should do no ECM at t65, and in fact the utility of a full t60 is in question, though I guess we'll likely wind up doing a full t60, or perhaps change the last few K of the t60 into higher-level curves. I would argue that at the higher end at least, that XYYXF table is skewed pretty heavily into too-much-ECM territory.

Yes, the gatekeepers are who we ultimately need to please.
[QUOTE=VBCurtis;427989]I ask GMP-ECM, by comparing the curve counts at a specific B1 value required for a t55 and a t60.
At B1 = 260M, a t55 is 8656 curves while a t60 is 47888 curves. So, when one completes the 47888 curves for a t60, one has done 47888/8656 = 5.5ish t55. At B1 = 110M, a t55 is 20479 curves, while a t60 is 128305 curves. So, when one completes 20479 curves for a t55, one has completed 20479/128305 of a t60, or just under 1/6th.

To estimate t53 and such, I use a geometric interpolation: if ECM effort doubles every two digits, 5 digits higher would be 2.5 doublings. 2^2.5 is around 5.7, right in between the two ratios mentioned above. So, a rough guide for estimating t53 is half a t55, t57 is 2*t55, t59 is 2*t57 is 4*t55, and t60 is sqrt2*t59 is 4sqrt2 * t55 is 5.7*t55. Again, the 5.7 is a rounded version of the first calculation I did.

The ratio changes for various B1 values, but not by much. For instance, I use B1 = 150M to run my t55s; a t55 is 14356 curves, while a t60 is 86058. On my machine, 14356 curves at 150M run faster than 20479 curves at 110M, while also completing a teensy bit more of a t60 than the set of curves at 110M (i.e. there's a bit more chance of a "bonus" factor above t55).[/QUOTE]

Okay, nice. I've settled on 2^(5/2) as the ratio, and adjusted my code accordingly: [pastebin]rbyh9SKT[/pastebin]

Which has the following results:

[code]In [1]: run ecmnfs.py 60 325000
With roughly 10.70% odds of a factor existing, you should do 34311 curves at 60 digits before switching to NFS

In [2]: run ecmnfs.py 65 325000
Not even one curve should be done, forget about ECM at 65 digits.

In [3]: run ecmnfs.py 60 330000
With roughly 10.70% odds of a factor existing, you should do 35739 curves at 60 digits before switching to NFS

In [4]: run ecmnfs.py 60 335000
With roughly 10.70% odds of a factor existing, you should do 37167 curves at 60 digits before switching to NFS

In [5]: run ecmnfs.py 60 340000
With roughly 10.70% odds of a factor existing, you should do 38595 curves at 60 digits before switching to NFS

In [6]: run ecmnfs.py 60 345000
With roughly 10.70% odds of a factor existing, you should do 40023 curves at 60 digits before switching to NFS

In [7]: run ecmnfs.py 60 350000
With roughly 10.70% odds of a factor existing, you should do 41450 curves at 60 digits before switching to NFS

In [8]: run ecmnfs.py 60 320000
With roughly 10.70% odds of a factor existing, you should do 32883 curves at 60 digits before switching to NFS

In [9]: run ecmnfs.py 60 315000
With roughly 10.70% odds of a factor existing, you should do 31456 curves at 60 digits before switching to NFS

In [10]: run ecmnfs.py 60 310000
With roughly 10.70% odds of a factor existing, you should do 30028 curves at 60 digits before switching to NFS

In [11]: run ecmnfs.py 60 305000
With roughly 10.70% odds of a factor existing, you should do 28600 curves at 60 digits before switching to NFS

In [12]: run ecmnfs.py 60 300000
With roughly 10.70% odds of a factor existing, you should do 27172 curves at 60 digits before switching to NFS[/code]

Full code here: [url]https://gist.github.com/dubslow/916cb29e30277380c9f2[/url]
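For anyone who doesn't want to dig through the gist, the tradeoff it solves reduces to a breakeven condition roughly like this (a simplified closed-form sketch, not the script's actual numeric solver; the 0.35 thread-hours per curve is just an illustrative figure):

[code]from math import log

def breakeven_curves(p_factor, nfs_hours, median, hours_per_curve):
    # Expected total time: E(n) = n*c + (1 - p*(1 - exp(-n/median))) * N.
    # Setting dE/dn = 0 gives n = median * ln(p*N / (c*median)):
    # run curves until the marginal chance of skipping NFS stops
    # paying for the marginal curve.
    n = median * log(p_factor * nfs_hours / (hours_per_curve * median))
    return max(0, round(n))

print(breakeven_curves(0.107, 325000, 47888, 0.35))   # ~35K, the same ballpark[/code]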
This code is so far advanced from the decimal ratio rules of thumb- thanks! Great work.
The experts (RDS et al) used to tell us plebes that the optimal ECM ratio decreased as the composite size went up; your applet demonstrates that concretely. :tu:
[QUOTE=Dubslow;428045]By my analysis, we should do no ECM at t65, and in fact the utility of a full t60 is in question, though I guess we'll likely wind up doing a full t60, or perhaps change the last few K of the t60 into higher level curves. I would argue that at the higher end at least, that XYYXF table is skewed pretty heavily into the too-much-ECM territory.
Yes, the gatekeepers are who we ultimately need to please.[/QUOTE]

Ok, so ECM stops after a full t60. Unless the gatekeeper says otherwise. I agree the table in my previous post is skewed towards too much ECM. I just linked it for comparison purposes.
I'll run some ECM with higher B1.
I've found a bug, namely that [c]while x1 - x0 > 0.5[/c] really should have been [c]while abs(x1 - x0) > 0.5[/c]. It doesn't really affect the current result, but for situations where the solution was greater than the median curve count, the code didn't work.
[url]https://gist.github.com/dubslow/916cb29e30277380c9f2[/url]

As a bonus, see the attached figures. Note that these should not be used for anything other than C195s, because the hours_per_curve estimate varies with composite size.

[code]In [85]: run ecmnfs.py 60 325000
With ~10.70% odds of a factor existing and ~5.95% net odds of success, you should do 34141 curves at 60 digits before switching to NFS[/code]

Edit: See the second set of figures to digest the futility of curves at 65 digits.

Edit2: And the 55 digit graphs, for good measure. I'm pretty sure I'm on to something here... :smile:
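For the curious, here's a self-contained toy of the failure mode (a toy equation and a secant-style update, not the gist's actual code):

[code]from math import exp

m = 47888.0                        # nominal t60 curve count

def f(n):                          # toy equation: solve exp(-n/m) = 0.5
    return exp(-n / m) - 0.5

x0, x1 = 60000.0, 20000.0          # approaching the root from above
while abs(x1 - x0) > 0.5:          # with the old test `x1 - x0 > 0.5`,
    # the condition is False on the first pass and the loop never runs
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
print(round(x1))                   # ~33193 = m*ln(2)[/code]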
I finished 17715 curves @11e7 ([url]http://www.rechenkraft.net/yoyo/y_status_ecm.php[/url]). It will continue until I reach 18000 curves.
What's the conclusion now: how many curves are required @ 26e7?
40K or 42K or whatever is more than enough, take your pick.
Let's say 21K curves @ 260e6 = 1/2 t60, pending approval by NFS@Home. That's still a bit more than optimal, though of course "optimal" has a wide margin of error.
[QUOTE=Dubslow;428598]Let's say 21K curves @ 260e6 = 1/2 t60, pending approval by NFS@Home. That's still a bit more than optimal, though of course "optimal" has a wide margin of error.[/QUOTE]
[URL="http://www.rechenkraft.net/yoyo/y_status_ecm.php"]Nearly done[/URL]. |
I finished 2000 curves @ B1=1e9. To count thread-hours: each curve took ~8000s (so roughly 4,450 thread-hours in total) and 2.5GB of memory on a Xeon E5-2620.
What is the status of this C195? Has enough ECM been run? I know a [url=http://www.mersenneforum.org/showpost.php?p=430247&postcount=544]poly was found[/url]. Anyone still searching for a better poly? Or are we ready for the 15e queue?
Tomorrow will conclude 14 GPU-days of poly search for me, with nothing better than 9.7e-15. I'll give up tomorrow.
My ~15 day trial of CADO was a disaster because I didn't fully understand the way the results are output until after they were already unrecoverable.
I'm considering redoing the work, the second time by involving my other computer which currently runs GIMPS (I would pause that for the ~week it would take).
Thanks for the update(s). I had totally lost the thread.
VBCurtis - nice effort on your continuing work to find a better poly than RichD's baseline. I have no idea what the duration of GPU poly searching for a C195 GNFS should be before due diligence is considered met.

And Dubslow - that really stinks. Would it help to bring others into the CADO search if you decide to rerun? In other words, could I run your code over half the search range in parallel to you in order to complete the search in half the time? Or does CADO's poly search not work that way? That approach may require so much handholding (of me) as to make it not worth the effort but if you have a Win64 executable I'm willing to help.

And ECM is considered complete, yes? All up to the 15e gatekeepers it would seem.
[QUOTE=swellman;431707]Thanks for the update(s). I had totally lost the thread.
VBCurtis - nice effort on your continuing work to find a better poly than RichD's baseline. I have no idea what the duration of GPU poly searching for a C195 GNFS should be before due diligence is considered met.

And Dubslow - that really stinks. Would it help to bring others into the CADO search if you decide to rerun? In other words, could I run your code over half the search range in parallel to you in order to complete the search in half the time? Or does CADO's poly search not work that way? That approach may require so much handholding (of me) as to make it not worth the effort but if you have a Win64 executable I'm willing to help.

And ECM is considered complete, yes? All up to the 15e gatekeepers it would seem.[/QUOTE]

That would be nice, yes. See here for details: [url]http://mersenneforum.org/showthread.php?p=431611#post431611[/url]

I'm trying to forgo the wrapper Python scripts and just directly call the underlying binaries -- some work is still needed obviously. CADO does search the same way over the leading coefficient as Msieve does, doing size opt then root opt on the n best polys.

For now, download and compile CADO, while I try and figure out a better way to run the binary that actually keeps the results... I'll post calls for assistance in that thread.
[QUOTE=Dubslow;431709]That would be nice, yes.
See here for details: [url]http://mersenneforum.org/showthread.php?p=431611#post431611[/url] I'm trying to forgo the wrapper Python scripts and just directly call the underlying binaries -- some work is still needed obviously. CADO does search the same way over the leading coefficient as Msieve does, doing size opt then root opt on the n best polys. For now, download and compile CADO, while I try and figure out a better way to run the binary that actually keeps the results... I'll post calls for assistance in that thread.[/QUOTE]

Ok, but I'm going to deal with CADO in a Linux environment. I have no tools/talent/desire to compile in Windows, much less debug it. I'll get CADO up and running on my machine and then await direction from you.

Please be patient - I am not a software guy, nor do I play one on TV. But it might be fun to play with CADO.
[QUOTE=swellman;431760]Ok, but I'm going to deal with CADO in a Linux environment. I have no tools/talent/desire to compile in Windows, much less debug it.[/QUOTE]
That makes two of us :smile: (I'm a pretty hardcore libre-tard)
Okay, per the notes in the CADO thread I've started, you must redirect the [c]polyselect[/c] output to file for it to be of any use.
I've started another run of 0-10M on leading coefficients, subject to the caveat that only LCs divisible by 60 are searched (see the documentation in the CADO thread). I used the following command:

[c]nice -n 19 ./polyselect -degree 5 -P 5000000 -admax 10000000 -nq 1000 -keep 1000 -t 8 -N $(cat num) > a4788.0-5M[/c]

Anyone who wants to search the next highest range would add the following: [c]-admin 10000020[/c] (as well as change -admax and -t to suit). I'm not yet sure exactly how to pass the results to rootsieve, but I'll work on it for my run here and update.
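In the meantime, here's a sketch of how I'd wrap the call so the output is always kept (Python; the flags are the ones quoted above, while the -admin/-admax range and output filename are purely illustrative):

[code]import subprocess

num = open("num").read().strip()
cmd = ["./polyselect", "-degree", "5", "-P", "5000000",
       "-admin", "10000020", "-admax", "20000000",
       "-nq", "1000", "-keep", "1000", "-t", "8", "-N", num]

# capture stdout to a file as the run goes, so nothing is lost on a crash
with open("a4788.10M-20M", "w") as out:
    subprocess.run(cmd, stdout=out, check=True)[/code]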
The 15e queue is looking low. Shall we nominate the C195 for the 15e queue? I believe the best poly to date is [url=http://www.mersenneforum.org/showpost.php?p=430247&postcount=544]here[/url].
[code]N: 105720747827131650775565137946594727648048676926428801477497713261333062158444658783837181718187127016255169032147325982158719006483898971407998273736975091062494070213867530300194317862973608499
# expecting poly E from 1.08e-14 to > 1.24e-14
R0: -20235742029690041687577373761152054390
R1: 9267958148582139083
A0: -11267322525486743517923188323192978629784204417
A1: 240114201561699843948709214315614352077
A2: 33878720444898812247193073824610
A3: -414616433728607362030181
A4: 1570407958802871
A5: 31157640
# skew 70621938.15, size 4.414e-19, alpha -7.667, combined = 1.044e-14 rroots = 3[/code]

Dubslow - do you want to continue your investigation of CADO before we declare this poly search complete? While the poly is likely not the optimal solution, is it good enough to start sieving?
I am trial-sieving this polynomial (basically to figure out whether it needs three large primes on at least one side) and will put it on the queue when done.
Do you have the resources available to do the linear algebra? It will probably take between three and six real-time weeks on a fast machine (i7-5960X or Xeon-v2/v3) and will need 32G of memory.
[QUOTE=fivemack;432571]I am trial-sieving this polynomial (basically to figure out whether it needs three large primes on at least one side) and will put it on the queue when done.
Do you have the resources available to do the linear algebra? It will probably take between three and six real-time weeks on a fast machine (i7-5960X or Xeon-v2/v3) and will need 32G of memory.[/QUOTE]

Not sure if this question is directed at me, but I've only got 16GB. I can run 32-bit jobs but 33 bits is likely past my capacity. Is there anyone else here with 32GB of memory willing to postprocess this number?
Please do wait, I'm still investigating, and I believe I have at least one poly with a better reported ME score than the one from RichD. Give me another day and I'll post my results for comparison and trial sieve.
Thought I'd try giving this some ECM, and found:
[CODE]********** Factor found in step 2: 58504187312033426783937089348158383891551824971250202530619
Found probable prime factor of 59 digits: 58504187312033426783937089348158383891551824971250202530619
Composite cofactor 1807062924629097298443690521212788718205381130882626544320148370821147507418603191432294253211449561997233820401588086608411757973724521 has 136 digits[/CODE]
I am now prepared to handle the matrix for this!!!! :brian-e: :flex:
Thanks, Ryan!
WOW! :bow:
BTW, what B1 was used for this?
[QUOTE=VBCurtis;432648]I am now prepared to handle the matrix for this!!!! :brian-e: :flex:
Thanks, Ryan![/QUOTE]

I'll spare you the trouble...

[CODE]Wed Apr 27 00:08:30 2016 prp63 factor: 584775484621355384587447640679136598622765183913655518547655269
Wed Apr 27 00:08:30 2016 prp73 factor: 3090182424113039270065503929095662680218231005612731404992737669195465909[/CODE]
Man, that's two large ones in a row that *nearly* made it to NFS... if I'd spent all my CADO time on ECM instead...
Nice hit RyanP! :shock:
Good to see you again, hope you can stick around. Check out the bottom of [url]http://www.mersenneforum.org/showthread.php?t=20024&page=46[/url]. Did anything ever come from those big jobs?
I've finished a T40 (300 curves at B1 = 21M) on the C180 from the next line.
I'll add some curves at 34M overnight, then move to 60M tomorrow since others are likely doing some ECM on this too.
I've done 888 curves at 43M and intend to continue through the ~7k for a t50.
[QUOTE=Dubslow;432701]I've done 888 curves at 43M and intend to continue through the ~7k for a t50.[/QUOTE]
Save your cycles -- I expect to have this one done by GNFS later today. Stay tuned...
Even allowing for 500+ cores for sieving, it boggles the mind to imagine doing the sieve and matrix for a GNFS180 in 72 hrs.
It's nice to see you posting directly once again, sir. Welcome back!
Thanks. :)
Long square root phase, but here we have it:

[CODE]Fri Apr 29 19:10:35 2016 prp78 factor: 144627802025961763976419354552926309281761047936149821764238780170955998664361
Fri Apr 29 19:10:35 2016 prp103 factor: 1556857813225187409870609737072643125460409111006918686481624578670231867626844762827246448671560826769[/CODE]
[QUOTE=ryanp;432770]Save your cycles -- I expect to have this one done by GNFS later today. Stay tuned...[/QUOTE]
[QUOTE=VBCurtis;432787]Even allowing for 500+ cores for sieving, it boggles the mind to imagine doing the sieve and matrix for a GNFS180 in 72 hrs. It's nice to see you posting directly once again, sir. Welcome back![/QUOTE]

:shock: Holy blam. Just how many cores (or core equivalents) do you have at your disposal?! :w00t:

Edit: Are you going to continue with the next C180, or would that be a waste of my cycles too :smile:
I'll handle the next C180 as well. :)
OK. Next C180 is done:
[CODE]Mon May 2 13:11:50 2016 prp63 factor: 852828185224582024294795864244283338007551456801212478326125773
Mon May 2 13:11:50 2016 prp117 factor: 704131409692002901688336495667355706435373199424596628222345220771103914385162349468158740451400667823527752200435477[/CODE]
The line after the second C180 mutated from 2^3*5 to 2^5. :smile:
Edit: I'm now convinced that ryanp has more CPU power than the entirety of NFS@Home combined. That is some absolutely insane number crunching. :showoff:
Who's factoring the C170? Dubslow?
I'll run some curves on it, but for now I'm mostly assuming that ryanp will carry us until he says otherwise.
Obviously c170 (even if it needs gnfs) will take only half a day. :rolleyes:
Just relax. And enjoy [URL="https://en.wikipedia.org/wiki/Blinkenlights"]the blinkenlights[/URL]. [QUOTE][B]ACHTUNG![/B] ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS! DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKEN. IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS. ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN.[/QUOTE]
Working on the C170 now.
[CODE]linear algebra completed 733239 of 5927756 dimensions (12.4%, ETA 10h 9m)[/CODE]
Awww. Could've had this one by ECM if I had tried harder...
[CODE]Tue May 3 08:55:24 2016 prp50 factor: 10343669734301041373937055383512171230895724608491
Tue May 3 08:55:24 2016 prp120 factor: 993739759889626805545135903141953923233113832717632966003694092677384994673520180146397186251215841015082033396156750101[/CODE]
Thank you for making 4788 look attractive again.

If it gets a downdriver in a dozen iterations to come, it will be well-deserved. :tu:
After cracking the C196 by ECM, ryanp has now moved it to a sole 2^3 guide :smile:
FYI, I've got the next C181 in linear algebra now; ETA from this point is about 30 hours.
So what are you using? A quantum computer wouldn't need to do linear algebra (I was starting to wonder if you had one). Is it a lot of conventional CPUs or special purpose hardware?

Chris (still slightly amazed at your throughput)
Just a lot of conventional CPUs...
(I am curious for my own edification as well: Why the interest in this particular Aliquot sequence? As opposed to, say, 276 -- the smallest of the open-ended sequences?)
The very small-seeded sequences (in particular 276) are and always were long-term reserved by people who have been in this field for years (decades? Paul Zimmermann). We can observe via the [URL="http://www.rechenkraft.net/aliquot/AllSeq.html"]AllSeq-database scraping tool[/URL] that work is indeed being done there. Perhaps slowly, but we must respect other people's reservations, right?
So (then unreserved) 4788 was chosen years ago by the forum and stayed dear to most participants' hearts. On top of that, there was once an exciting downride on it (and some of us had great fun riding that wave).
[QUOTE=ryanp;433169](I am curious for my own edification as well: Why the interest in this particular Aliquot sequence? As opposed to, say, 276 -- the smallest of the open-ended sequences?)[/QUOTE]
[QUOTE=Batalov;433171]The very small-seeded sequences (in particular 276) are and always were long-term reserved by people who have been in this field for years (decades? Paul Zimmermann). We can observe via the [URL="http://www.rechenkraft.net/aliquot/AllSeq.html"]AllSeq-database scraping tool[/URL] that work is indeed being done there. Perhaps slowly, but we must respect other people's reservations, right? So (then unreserved) 4788 was chosen years ago by the forum and stayed dear to most participants' hearts. On top of that, there was once an exciting downride on it (and some of us had great fun riding that wave).[/QUOTE]

In addition, 4788 is the "main" sequence start for 314718, which was, at one time, the longest sequence known. (Since eclipsed by [URL="http://factordb.com/sequences.php?se=1&aq=933436&action=last20"]933436[/URL].)
Aliquot sequence 3408 is also assigned to the forum though not [URL=http://www.mersenneforum.org/showthread.php?t=18421&page=23]much activity[/URL] has been done lately. It currently stands with a [URL=http://www.factordb.com/sequences.php?se=1&aq=3408&action=range&fr=1620&to=1620]C175[/URL].
Hint, hint.
The c181 is done now:
[CODE]Thu May 5 20:32:10 2016 prp71 factor: 16154997059017911740662227768113426012378939666568151702160749858675687
Thu May 5 20:32:10 2016 prp110 factor: 71778567619495893704430655892712998170338213837340401077450841986834864298530930586804110304930518105547880693[/CODE]

We'll see what we have next...
The c169 (8820260321...) for index 5260 is now in linear algebra; should be done by tomorrow morning.
c169 is factored:
[CODE]Sat May 7 01:22:26 2016 prp79 factor: 3844695644339670615607425287715145660859912603602727904626230373424319537548143
Sat May 7 01:22:26 2016 prp91 factor: 2294137465545832119813112814560093916957233303297810939424925504384223801170244092984755993[/CODE]
19 lines later and now the guide is 2^2, my second favorite :smile:
The c196 on index 5283 (1742276725...) looks to be a tough nut to crack. I've already run 10K curves at B1=26e7 and another 10K at B1=76e8, but no luck so far.
Everyone else is encouraged to run curves too... I will be starting GNFS soon.
[QUOTE=ryanp;433367]The c196 on index 5283 (1742276725...) looks to be a tough nut to crack. I've already run 10K curves at B1=26e7 and another 10K at B1=76e8, but no luck so far.
Everyone else is encouraged to run curves too... I will be starting GNFS soon.[/QUOTE]

It's 1 mod 4, so there's a 50% chance of raising the power of 2.
Here's an update on the c196 that is the next blocker:
[CODE]Wed May 11 14:54:21 2016 commencing linear algebra
Wed May 11 14:54:26 2016 read 27986211 cycles
Wed May 11 14:55:28 2016 cycles contain 77742891 unique relations
Wed May 11 15:06:48 2016 read 77742891 relations
Wed May 11 15:09:52 2016 using 20 quadratic characters above 2692049954
Wed May 11 15:18:39 2016 building initial matrix
Wed May 11 15:51:22 2016 memory use: 11806.3 MB
Wed May 11 15:51:45 2016 read 27986211 cycles
Wed May 11 15:51:54 2016 matrix is 27986032 x 27986211 (12830.1 MB) with weight 3969810634 (141.85/col)
Wed May 11 15:51:54 2016 sparse part has weight 3027502384 (108.18/col)
Wed May 11 16:01:27 2016 filtering completed in 2 passes
Wed May 11 16:01:37 2016 matrix is 27985506 x 27985685 (12830.1 MB) with weight 3969786240 (141.85/col)
Wed May 11 16:01:37 2016 sparse part has weight 3027495594 (108.18/col)
Wed May 11 16:06:46 2016 matrix starts at (0, 0)
Wed May 11 16:06:55 2016 matrix is 27985506 x 27985685 (12830.1 MB) with weight 3969786240 (141.85/col)
Wed May 11 16:06:55 2016 sparse part has weight 3027495594 (108.18/col)
Wed May 11 16:06:55 2016 saving the first 48 matrix rows for later
Wed May 11 16:07:02 2016 matrix includes 64 packed rows
Wed May 11 16:07:08 2016 matrix is 27985458 x 27985685 (12565.1 MB) with weight 3380600754 (120.80/col)
Wed May 11 16:07:08 2016 sparse part has weight 3014016578 (107.70/col)
Wed May 11 16:07:08 2016 using block size 8192 and superblock size 4423680 for processor cache size 46080 kB
Wed May 11 16:11:55 2016 commencing Lanczos iteration (64 threads)
Wed May 11 16:11:55 2016 memory use: 10771.0 MB
Wed May 11 16:13:36 2016 linear algebra at 0.0%, ETA 493h32m
Wed May 11 16:14:09 2016 checkpointing every 60000 dimensions[/CODE]

I am trying to see if a little more sieving and a higher target_density (trying 140) will produce a matrix with a shorter ETA.
48 hours. 48 hours to polyselect and sieve a GNFS-195. What the :censored:
:shock:
A bit of over-sieving and setting target_density=130 helped a bit.
[CODE]Thu May 12 19:12:53 2016 building initial matrix
Thu May 12 19:51:02 2016 memory use: 12717.1 MB
Thu May 12 19:51:22 2016 read 29261281 cycles
Thu May 12 19:51:30 2016 matrix is 29233499 x 29261281 (13648.1 MB) with weight 4232633740 (144.65/col)
Thu May 12 19:51:30 2016 sparse part has weight 3226644007 (110.27/col)
Thu May 12 20:09:35 2016 filtering completed in 3 passes
Thu May 12 20:09:46 2016 matrix is 29021153 x 29021351 (13565.0 MB) with weight 4205410151 (144.91/col)
Thu May 12 20:09:46 2016 sparse part has weight 3207726920 (110.53/col)
Thu May 12 20:16:25 2016 matrix starts at (0, 0)
Thu May 12 20:16:32 2016 matrix is 29021153 x 29021351 (13565.0 MB) with weight 4205410151 (144.91/col)
Thu May 12 20:16:32 2016 sparse part has weight 3207726920 (110.53/col)
Thu May 12 20:16:32 2016 saving the first 48 matrix rows for later
Thu May 12 20:16:38 2016 matrix includes 64 packed rows
Thu May 12 20:16:44 2016 matrix is 29021105 x 29021351 (13288.3 MB) with weight 3587263101 (123.61/col)
Thu May 12 20:16:44 2016 sparse part has weight 3193239790 (110.03/col)
Thu May 12 20:16:44 2016 using block size 8192 and superblock size 4423680 for processor cache size 46080 kB
Thu May 12 20:20:47 2016 commencing Lanczos iteration (48 threads)
Thu May 12 20:20:47 2016 memory use: 11419.8 MB
Thu May 12 20:21:42 2016 linear algebra at 0.0%, ETA 276h19m
Thu May 12 20:21:59 2016 checkpointing every 110000 dimensions[/CODE]
OK, we're almost there on the c196 blocker. Current ETA for linear algebra is about 45 hours.
And onward we go....
[CODE]Sat May 28 04:50:38 2016 prp93 factor: 323809855339456561150604956264226068193303136647143996452370244441487927592878373028105104267
Sat May 28 04:50:38 2016 prp103 factor: 5380554967805842623777304908323517352371059069412454635151090178974054931519218146235535260259241226243[/CODE]
...and now we have squared 3 :no:
which is actually good, that is how we get rid of them... :razz:
[QUOTE=LaurV;435045]which is actually good, that is how we get rid of them... :razz:[/QUOTE]
Previously there was no 3, and we hope there shan't be a 3 in future lines... if the 3 doesn't immediately push it over 200 digits. (Of course we really hope for the downdriver, but that's unlikely...)
ECM on the c192 blocker has so far come up empty. Starting sieving now.
[QUOTE=ryanp;435070]ECM on the c192 blocker has so far come up empty. Starting sieving now.[/QUOTE]
I love how you can do these so quickly. What is your record GNFS? How do your resources compare to NFS@Home?
[CODE]Mon Jun 6 20:50:42 2016 prp92 factor: 90583829105506026359000237643013656832883579762787284662453054458052382190284572040232535717
Mon Jun 6 20:50:42 2016 prp100 factor: 5255153121566145690878982195184570893697088775956594346919262419667269108357036289237543830720862767[/CODE]
The sequence has now reached 200 digits (thanks, Ryan!).
In the latest iterations, after splitting a c193, the sequence arrived at a cautiously optimistic point: in i5307, all of the larger primes (and the last composite) are 1 (mod 3). This means that if the c178 splits into two (or more) primes that are 1 (mod 3), the pesky "3" can be lost.
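To spell out the bookkeeping: for a prime p appearing to the first power, sigma(p) = p + 1, and if p = 1 (mod 3) then p + 1 = 2 (mod 3), so such a factor contributes no 3 to sigma(n). A quick check (the example primes are arbitrary):

[code]for p in (7, 13, 31, 43, 61):   # all are 1 (mod 3)
    print(p, (p + 1) % 3)       # prints 2 every time: no factor of 3[/code]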
...and it did. Congrats to Ryan!
Now, [I]poco a poco diminuendo[/I]!
ETA ~140 hours (give or take a few) on the next c192 blocker (1254673191...)
Hot off the presses.
[CODE]Wed Jul 6 10:55:56 2016 prp89 factor: 99754490980643571313943925574803766669101914698120397206884895434041499959315129701555267
Wed Jul 6 10:55:56 2016 prp103 factor: 1257761108294090514942756592563636329816364603979761582790189681354654265553393881862119345638286919503[/CODE]
Wow! :tu: I only now noticed the downdriver at this height. Congratulations!
Congrats, Ryan!
Geeezzz! :shock:
Congrats! We should call this reservation "Ryan and mf" instead of only "mf", hehe. So, one sequence gone... (of course it will terminate this time! I was always an incurable optimist!).
[QUOTE=rajula;437690]Wow! :tu: I only now noticed the downdriver at this height.[/QUOTE]
A huge BRAVO! This will be fun to watch.
Thanks for the encouragement!
Next on the chopping block is the c196. Sadly, it's ~3 weeks out. Will be trying more ECM as well (everyone else is encouraged to join the fun)...

[CODE]Fri Jul 8 17:43:53 2016 Msieve v. 1.52 (SVN 946M)
Fri Jul 8 17:43:53 2016 random seeds: 0a188ba8 6dd0187c
Fri Jul 8 17:43:53 2016 factoring 3251956794251635584858770076674957041300989096969044373660943707751109534665398232497519823473784262817149466557698710896409624274012434026784305056103790457975802019272021739096822542612279790457 (196 digits)
Fri Jul 8 17:43:55 2016 no P-1/P+1/ECM available, skipping
Fri Jul 8 17:43:55 2016 commencing number field sieve (196-digit input)
Fri Jul 8 17:43:55 2016 R0: -64137566177911881690421574407466019560
Fri Jul 8 17:43:55 2016 R1: 727175989402432419719
Fri Jul 8 17:43:55 2016 A0: 1768948024037777665563910534370782842512322150183
Fri Jul 8 17:43:55 2016 A1: 8335094363288858313022666746568929678549
Fri Jul 8 17:43:55 2016 A2: -123227375356498122313841442153326
Fri Jul 8 17:43:55 2016 A3: -613241637371315309035511
Fri Jul 8 17:43:55 2016 A4: -4490534929749039
Fri Jul 8 17:43:55 2016 A5: 2996280
Fri Jul 8 17:43:55 2016 skew 274991827.91, size 2.163e-19, alpha -7.491, combined = 6.523e-15 rroots = 3
Fri Jul 8 17:43:55 2016
Fri Jul 8 17:43:55 2016 commencing linear algebra
Fri Jul 8 17:43:58 2016 read 26344411 cycles
Fri Jul 8 17:44:59 2016 cycles contain 75574890 unique relations
Fri Jul 8 18:19:58 2016 read 75574890 relations
Fri Jul 8 18:22:40 2016 using 20 quadratic characters above 2147483588
Fri Jul 8 18:29:39 2016 building initial matrix
Fri Jul 8 19:05:17 2016 memory use: 11333.9 MB
Fri Jul 8 19:05:36 2016 read 26344411 cycles
Fri Jul 8 19:05:43 2016 matrix is 26344234 x 26344411 (12938.4 MB) with weight 3926785557 (149.06/col)
Fri Jul 8 19:05:43 2016 sparse part has weight 3075593670 (116.75/col)
Fri Jul 8 19:14:31 2016 filtering completed in 2 passes
Fri Jul 8 19:14:40 2016 matrix is 26343908 x 26344085 (12938.4 MB) with weight 3926771948 (149.06/col)
Fri Jul 8 19:14:40 2016 sparse part has weight 3075590362 (116.75/col)
Fri Jul 8 19:18:39 2016 matrix starts at (0, 0)
Fri Jul 8 19:18:46 2016 matrix is 26343908 x 26344085 (12938.4 MB) with weight 3926771948 (149.06/col)
Fri Jul 8 19:18:46 2016 sparse part has weight 3075590362 (116.75/col)
Fri Jul 8 19:18:46 2016 saving the first 48 matrix rows for later
Fri Jul 8 19:18:53 2016 matrix includes 64 packed rows
Fri Jul 8 19:18:58 2016 matrix is 26343860 x 26344085 (12630.6 MB) with weight 3386735376 (128.56/col)
Fri Jul 8 19:18:58 2016 sparse part has weight 3047583162 (115.68/col)
Fri Jul 8 19:18:59 2016 using block size 8192 and superblock size 4423680 for processor cache size 46080 kB
Fri Jul 8 19:22:41 2016 commencing Lanczos iteration (48 threads)
Fri Jul 8 19:22:41 2016 memory use: 10791.0 MB
Fri Jul 8 19:24:25 2016 linear algebra at 0.0%, ETA 483h38m
Fri Jul 8 19:24:59 2016 checkpointing every 60000 dimensions[/CODE]
Ah, my bad. The ETA has dropped quite a bit since I last looked at the log; now we're "only" looking at ~350 hours.
Never mind -- no need to wait 2-3 weeks!
[CODE]********** Factor found in step 2: 1273604103225614062909453491814449846298546658892526619568821267
Found probable prime factor of 64 digits: 1273604103225614062909453491814449846298546658892526619568821267
Probable prime cofactor 2553349809423128060012020374518094250855378311437407502435268764170442261890999277388715539273676476814574884150293437717652172741571 has 133 digits[/CODE]
I would like to send a big shout out to everyone here, past and present, who helped advance 4788 from where we picked it up to where it is today.

Looking back at the first message in the thread, the group effort started with i2350 at a height of 144 digits. I certainly never anticipated that we would ever be able to make it to 200 digits, let alone have the good fortune to have the downdriver turn up at this height. And all this in just a little over 7 years. Awesome!
[QUOTE=schickel;437872]Awesome![/QUOTE]
Thanks :smile: Beautiful graphic too... it looks exactly like peeing in the wind.. :razz: (from the origin of the graphic)
ETA ~48 hours until the next c190 blocker (1026547506...41) factors. Hold on to your butts...
c190 finished early -- a nice surprise!
[CODE]Tue Jul 19 17:26:00 2016 initial square root is modulo 17809651
Tue Jul 19 18:35:44 2016 sqrtTime: 9825
Tue Jul 19 18:35:44 2016 prp73 factor: 5075495827594259675270324339868085923458528157347661981228767241468163243
Tue Jul 19 18:35:44 2016 prp117 factor: 202255610303506276501165500260439988798020126551334405304978150438566798840510521458561879903531800206392835464611187[/CODE]
[QUOTE=ryanp;438443]c190 finished early -- a nice surprise![/QUOTE]
Well done! Now a 196-digit blocker - ugh!
[QUOTE=Prime95;438480]Well done! Now a 196-digit blocker - ugh![/QUOTE]
And it's survived quite a bit of ECM so far... throwing some more at it before going to GNFS.
We are now at ~24 hours until LA finishes for the c196 blocker, so hopefully we'll have the factors Tuesday morning or afternoon.
Well, you only need to get it about ~25 digits down...
Then we can handle it :razz: [COLOR=White](Good job, btw!)[/COLOR]
[QUOTE=LaurV;439641]Well, you only need to get it about ~25 digits down...
Then we can handle it :razz: [COLOR=White](Good job, btw!)[/COLOR][/QUOTE]

No, I'll take it from here. :)
The c196 isn't going down without a fight...
[CODE]Tue Aug 9 07:14:45 2016 commencing square root phase
Tue Aug 9 07:14:45 2016 reading relations for dependency 1
Tue Aug 9 07:14:49 2016 read 12999656 cycles
Tue Aug 9 07:15:33 2016 cycles contain 37415230 unique relations
Tue Aug 9 07:52:39 2016 read 37415230 relations
Tue Aug 9 07:59:16 2016 multiplying 37415230 relations
Tue Aug 9 09:50:48 2016 multiply complete, coefficients have about 2461.55 million bits
Tue Aug 9 09:51:08 2016 initial square root is modulo 331871
Tue Aug 9 11:39:30 2016 GCD is N, no factor found
Tue Aug 9 11:39:30 2016 reading relations for dependency 2
Tue Aug 9 11:39:32 2016 read 12996972 cycles
Tue Aug 9 11:40:00 2016 cycles contain 37404562 unique relations
Tue Aug 9 12:08:05 2016 read 37404562 relations
Tue Aug 9 12:12:18 2016 multiplying 37404562 relations
Tue Aug 9 13:55:16 2016 multiply complete, coefficients have about 2460.83 million bits
Tue Aug 9 13:55:36 2016 initial square root is modulo 330623
Tue Aug 9 15:42:17 2016 GCD is 1, no factor found
Tue Aug 9 15:42:17 2016 reading relations for dependency 3
Tue Aug 9 15:42:20 2016 read 12996728 cycles
Tue Aug 9 15:42:53 2016 cycles contain 37409626 unique relations[/CODE]

Still going. Hoping for the factors tonight or tomorrow now.
And down it goes... finally!
[CODE]Tue Aug 9 23:42:59 2016 reading relations for dependency 5
Tue Aug 9 23:43:02 2016 read 12998465 cycles
Tue Aug 9 23:43:42 2016 cycles contain 37408940 unique relations
Wed Aug 10 00:15:32 2016 read 37408940 relations
Wed Aug 10 00:20:42 2016 multiplying 37408940 relations
Wed Aug 10 02:30:05 2016 multiply complete, coefficients have about 2461.15 million bits
Wed Aug 10 02:30:28 2016 initial square root is modulo 331147
Wed Aug 10 04:34:12 2016 sqrtTime: 76767
Wed Aug 10 04:34:12 2016 prp90 factor: 197960268041791496705755968043758950086260703585482289571722979034513456400497557130637399
Wed Aug 10 04:34:12 2016 prp107 factor: 15818497134327367298413284274136162668654068956568721420160960775738778841979266908443552005546059501591211
Wed Aug 10 04:34:12 2016 elapsed time 21:19:29[/CODE]