[quote=Andi47;167640]I reported these efforts to [URL="http://factorization.ath.cx/search.php?query=2^1061-1"]Syd's Factorization database[/URL] with credit to "Mersenneforum (various contributors)". 45.212% should be worth ~[B]18996[/B] curves at B1 = 260M, B2 = default, so I reported this number of curves.[/quote]
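As an aside for readers following the conversion: the thread gives only the completion fraction (45.212%) and the resulting curve count (18996), so the full-t-level curve count has to be backed out of those two numbers. The sketch below does exactly that under a simple linear model; the implied total is an inference from the thread's figures, not a documented database parameter.

```python
import math

reported_fraction = 0.45212   # completion fraction reported in the thread
reported_curves = 18996       # curves reported at B1 = 260M

# Implied curves for one full t-level under a linear model (an inference
# from the two numbers above, not a documented figure).
implied_total = reported_curves / reported_fraction
print(round(implied_total))   # ≈ 42016

# A common non-linear way to read such a count: after n curves, each with
# roughly a 1/N chance of finding a factor of the target size, the chance
# of having missed one is about exp(-n/N).
miss_probability = math.exp(-reported_curves / implied_total)
print(f"{miss_probability:.3f}")
```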
I notice some idiot put it up for "very high limits" at one point.
[QUOTE=10metreh;167641]I notice some idiot put it up for "very high limits" at one point.[/QUOTE]
LOL!!!! I almost expected that - that's one of the reasons I reported the effort made in this thread, as well as the P-1 effort (which took a few months for stage 1!) made by Alex and me.
This appears to be the "official" M1061 thread, so I thought it important to post that ECM factoring in the B1=260M range has been marked as "Done". I would be very interested in restarting this thread so as to learn from the knowledge, conjecture, and opinions of so many great minds on this forum.
Back in July 2006 Dr. Silverman posted, [quote=R.D. Silverman;83898]I have mixed feelings about these kinds of efforts. It is very likely at this point that M1061 is out of ECM range. However, it is also out of SNFS range with current resources. [and I don't think anyone has software that will accomodate a number this large; it is still being written] I do expect that resources will increase so that it becomes doable with SNFS within 10 years. [it would be a MASSIVE effort] However, I would argue that it should be put aside for the time being and the CPU time spent on something else. There are other base 2 Cunningham numbers that still have not been fully tested to even the 50 digit level, (for example). Or one might help out on the SoB project, etc. etc. Just my tupence worth...... Comments?[/quote] Perhaps that is why M1061 was marked as "Done" now even though the 260M ECM curve-count was only around 92,000 yesterday? Maybe we are being gently nudged to pursue "better" endeavors? |
It was marked done because someone ran 8000 curves at B1=260M, B2=~1000*B1.
It would be interesting to know if the playstation gang attacked this number.
[quote=Prime95;208634]It was marked done because someone ran 8000 curves at B1=260M, B2=~1000*B1.
It would be interesting to know if the playstation gang attacked this number.[/quote] Thank you for the immediate reply, Mr. Woltman; I appreciate it. I did not see those results listed on [URL]http://www.mersenne.org/report_exponent/?exp_lo=1061[/URL] and was unaware of them. If I may ask, might you have plans to extend your ECM tracking to cover 850M and 2900M now that M1061's 260M range has been completed? Or is it perhaps that, because M1061 can have a factor of up to ~160 digits which may never be found in our lifetimes, this is not an efficient quest? Also, are PlayStations producing such results because they are equivalent in processor power and speed to current home computers, or because there are so many of them and they can run all the time? I appreciate anyone's knowledge and opinions, thank you.
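For readers checking the "~160 digits" figure: M1061 = 2^1061 - 1 has 320 decimal digits, so if it were a semiprime its smaller factor could have at most about half that many digits. This is just the standard digit-count formula applied to the exponent, nothing thread-specific:

```python
import math

exponent = 1061
# Decimal digit count of 2^1061 (and of 2^1061 - 1, since 2^1061 is not a power of 10).
digits = int(exponent * math.log10(2)) + 1
# If M1061 had exactly two prime factors, the smaller one would have
# at most about digits/2 digits.
print(digits, digits // 2)   # 320 160
```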
[quote=WVU Mersenneer;208641]Or, perhaps because M1061 can have a factor of up to ~160-digits which may never be found in our lifetimes that this is not an efficient quest?[/quote]
It's true that M1061 could have a smallest factor of up to 160 digits, but if we got anywhere near that with ECM, we'd switch to SNFS. I think that factoring M1061 within our lifetimes (unless you plan on dying soon :razz:) is a near certainty, considering the first kilobit SNFS factorization (just a little smaller than M1061) was completed in 2007. M1061 could probably be done within months from a serious start of a collaboration (over forums like this one, or through universities, or whatever). [quote=WVU Mersenneer;208641]Also, are playstations producing such results because they are equivalent in processor power and speed to current home computers,[/quote] No, they're not [I]equivalent[/I]. They're [I]faster[/I]. Drastically faster. But only for certain types of work (e.g. [URL]http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats[/URL], [URL]http://www.mersenneforum.org/showthread.php?t=12827[/URL]). For others, they're about equivalent (e.g. [URL]http://www.mersenneforum.org/showthread.php?t=12576[/URL], [URL]http://mersenneforum.org/showthread.php?t=11328[/URL]) or slower. [quote=WVU Mersenneer;208641]or because there are so many and they can run all the time?[/quote] This would be a better argument for normal computers than PS3's. :smile: But it applies to both.
[QUOTE=Mini-Geek;208656]
No, they're not [I]equivalent[/I]. They're [I]faster[/I]. Drastically faster.[/QUOTE] Yes and no. They cannot run [b]any[/b] ECM curves to completion unless one uses distinctly sub-optimal Step 2 limits [and uses the brute-force approach to Step 2]. They are faster at Step 1, but have insufficient memory to run Step 2.
Thank you, both. This keeps getting better and better, and faster, too. So much innovation in a number of areas (no pun intended), glad to be a part of it and to be able to learn what's new from so many in-the-know here.
NFS@Home is currently sieving 2,1061-
[URL="http://escatter11.fullerton.edu/nfs/"]http://escatter11.fullerton.edu/nfs/[/URL]
Just for curiosity: Which sieving parameters do they use?
Another question: how many raw relations are we looking for? (For a [URL="http://mersenneforum.org/showpost.php?p=257919&postcount=17"]162-digit[/URL] number Syd needed around 112M relations, and for a 135-digit one, 23.5M.) Naïve estimate: 162 - 135 = 27; 112/23.5 = 4.766, so the number of relations doubles every 5.66 digits (let's say 6). 320 - 162 = 158; 158/6 = 26.33; 112M * 2^26.33 = 9 447 954 834 860 241 relations? Must have done something wrong.
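The slip in the estimate above appears to be in the doubling distance: if relations grow by a factor of 4.766 over 27 digits, they double every 27 / log2(4.766) ≈ 12 digits, not every 27 / 4.766 ≈ 5.7. Below is a re-run of the same naive extrapolation with that fixed; it is still only a back-of-the-envelope model (real relation requirements depend heavily on the chosen NFS parameters), using ~320 as the SNFS difficulty of 2,1061-.

```python
import math

rels_162 = 112e6    # relations Syd needed for a 162-digit number (from the thread)
rels_135 = 23.5e6   # relations for a 135-digit number (from the thread)

ratio = rels_162 / rels_135                # ≈ 4.766 growth over 27 digits
doubling_digits = 27 / math.log2(ratio)    # ≈ 12.0 digits per doubling, not 5.66
digits_to_go = 320 - 162                   # ~320 ≈ SNFS difficulty of 2,1061-

estimate = rels_162 * 2 ** (digits_to_go / doubling_digits)
print(f"double every {doubling_digits:.1f} digits -> ~{estimate:.1e} relations")
```

With the corrected doubling distance the extrapolation comes out around 10^12 relations rather than 10^16 - still enormous, but this naive model ignores that SNFS difficulty and GNFS size are not directly comparable.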