mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   M1061 - t60 (https://www.mersenneforum.org/showthread.php?t=6148)

Andi47 2006-07-21 15:06

M1061 - t60
 
to start with the 60 digits range: :alien:

5 curves at B1=260M and B2=2328563319820 (using Prime95 for step 1 and GMP-ECM for step 2)

[I]Mod edit: Previous M1061 thread: [URL="http://www.mersenneforum.org/showthread.php?t=3192"]http://www.mersenneforum.org/showthread.php?t=3192[/URL]
[/I]

[I]Mod edit: Through msg #17: 14 curves done at B1=260M[/I]
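For a sense of scale (an editorial aside, not part of the original posts): GMP-ECM's -v output, quoted later in this thread, puts the expected number of B1=260M curves for a full t60 at about 42017, so a handful of curves has only a tiny chance of turning up a ~60-digit factor. A back-of-the-envelope sketch in Python, treating each curve as an independent trial:

```python
import math

# Rough model: if ~42017 curves at B1=260M constitute a t60, then the
# chance that n curves find a ~60-digit factor (assuming one exists)
# is approximately 1 - exp(-n / 42017).
def p60_hit_probability(curves, curves_per_t60=42017):
    return 1.0 - math.exp(-curves / curves_per_t60)

print(f"{p60_hit_probability(5):.6f}")  # prints 0.000119
```

So the 5 curves reported above buy roughly a 0.01% chance of a hit at this factor size.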

R.D. Silverman 2006-07-21 16:03

[QUOTE=Andi47]to start with the 60 digits range: :alien:

5 curves at B1=260M and B2=2328563319820 (using Prime95 for step 1 and GMP-ECM for step 2)[/QUOTE]

I have mixed feelings about these kinds of efforts.

It is very likely at this point that M1061 is out of ECM range.
However, it is also out of SNFS range with current resources.
[and I don't think anyone has software that will accommodate a number
this large; it is still being written]
I do expect that resources will increase so that it becomes doable with
SNFS within 10 years. [it would be a MASSIVE effort]


However, I would argue that it should be put aside for the time being
and the CPU time spent on something else. There are, for example, other
base-2 Cunningham numbers that still have not been fully tested even to
the 50-digit level. Or one might help out on the SoB project,
etc. etc.

Just my tupence worth......

Comments?

Greenbank 2006-07-21 17:02

"tuppence"

P.S. I agree.

akruppa 2006-07-21 19:15

I'd agree if waiting would imply that the task could be accomplished more efficiently at a later time. However, ECM with Prime95's DWT for stage 1 and GMP-ECM's CRT/FFT code for stage 2 is probably the best we are going to get any time soon. At least [I]I[/I] don't know about algorithmic enhancements that could significantly speed up the search yet...[sup]*[/sup]

As I see it, short of the possibility of an algorithmic breakthrough in integer factoring, starting now or starting later will result in the same amount of work to be done in total. Those who are keen on factoring this particular number might as well start now, but should be aware of the expected time to find a factor after so much ECM has already been done unsuccessfully.

Alex

(*) Note: we've been toying with the idea of a sliding window stage 1 in affine coordinates with many curves in parallel. This could be up to ~20% faster than Montgomery's projective coordinates with PRAC, at the expense of greatly increased memory use. I'm not sure how practical the idea is.

R.D. Silverman 2006-07-21 19:47

[QUOTE=akruppa]I'd agree if waiting would imply that the task could be accomplished more efficiently at a later time. However, ECM with Prime95's DWT for stage 1 and GMP-ECM's CRT/FFT code for stage 2 is probably the best we are going to get any time soon. At least [I]I[/I] don't know about algorithmic enhancements that could significantly speed up the search yet...[sup]*[/sup]

As I see it, short of the possibility of an algorithmic breakthrough in integer factoring, starting now or starting later will result in the same amount of work to be done in total. Those who are keen on factoring this particular number might as well start now, but should be aware of the expected time to find a factor after so much ECM has already been done unsuccessfully.

Alex

(*) Note: we've been toying with the idea of a sliding window stage 1 in affine coordinates with many curves in parallel. This could be up to ~20% faster than Montgomery's projective coordinates with PRAC, at the expense of greatly increased memory use. I'm not sure how practical the idea is.[/QUOTE]

So let's start SNFS now instead of ECM. Any data that is accumulated will be
useful, even if we won't finish until much later........ :whistle:

Wacky 2006-07-21 20:10

[QUOTE=R.D. Silverman]So let's start SNFS now instead of ECM. Any data that is accumulated will be useful, even if we won't finish until much later........ :whistle:[/QUOTE]

Bob,
I disagree. If we were to "start SNFS now", the code available would not be able to find many of the relations that would be found by a future siever.
Although the relations that are found would certainly be "good" ones, there is a significant cost: you would have "cherry picked" from the most fertile sieving area. Since I don't know of any efficient way to "scrap the field" (just pick what is left), we would either have to regenerate the previously found relations and discard them as duplicates, or waste the relations that would only be found by the improved siever. Either way, the effort is quite wasteful.

As a result, I agree that we should encourage those who want to contribute to work on some other effort that is more likely to produce useful results.

akruppa 2006-07-21 20:33

[QUOTE=R.D. Silverman]So let's start SNFS now instead of ECM. Any data that is accumulated will be
useful, even if we won't finish until much later........ :whistle:[/QUOTE]

Well, we should do enough ECM before we start sieving. By the 2/9 rule of thumb, ECM should be done up to p70, but it is quite possible that this rule of thumb is seriously off for numbers this large.

If enough (however much that may be) ECM were already done [I]and[/I] we had already decided on the final NFS parameters we're going to use [I]and[/I] had an efficient implementation of a siever for these parameters plus the hardware to run it on, anyone who wanted to start sieving now could. No work would be wasted. This [I]avant-garde[/I] would merely have to accept that their effort would not come to fruition for many more years.

Problem is, our NFS implementations aren't there yet for a number of this size. Our ECM implementations, however, are, at least for p60 parameters.

Alex

Edit: I think our disagreement is because we have not stated what exactly we are trying to optimise for. This is a scheduling problem. If we only go for maximum overall throughput and assume that the efficiency with which we can process jobs does not change over time, it does not matter in which order the jobs are started.
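To unpack the p70 figure above (my arithmetic only, reading the 2/9 rule of thumb as "ECM-pretest up to factors of about 2/9 of the composite's decimal size"):

```python
import math

# M1061 = 2^1061 - 1 has about 1061 * log10(2) ~ 319.4 decimal digits.
# Applying 2/9 of that as the ECM pretest level gives roughly p71,
# i.e. the p70 level Alex mentions.  (The precise form of the rule
# varies; this is just the arithmetic.)
digits = 1061 * math.log10(2)
pretest = (2.0 / 9.0) * digits
print(round(digits, 1), round(pretest, 1))  # prints 319.4 71.0
```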

Andi47 2006-07-23 13:14

What about P-1?

Error 404 has run P-1 on M1061 up to B1=128e8 - is this the highest B1 ever done on this number?

If not: What is the highest known B1 for a p-1 attempt on M1061?

R.D. Silverman 2006-07-24 10:58

[QUOTE=akruppa]Well, we should do enough ECM before we start sieving. By the 2/9 rule of thumb, ECM should be done up to p70, but it is quite possible that this rule of thumb is seriously off for numbers this large.

If enough (however much that may be) ECM were already done [I]and[/I] we had already decided on the final NFS parameters we're going to use [I]and[/I] had an efficient implementation of a siever for these parameters plus the hardware to run it on, anyone who wanted to start sieving now could. No work would be wasted. This [I]avant-garde[/I] would merely have to accept that their effort would not come to fruition for many more years.

Problem is, our NFS implementations aren't there yet for a number of this size. Our ECM implementations, however, are, at least for p60 parameters.

Alex

Edit: I think our disagreement is because we have not stated what exactly we are trying to optimise for. This is a scheduling problem. If we only go for maximum overall throughput and assume that the efficiency with which we can process jobs does not change over time, it does not matter in which order the jobs are started.[/QUOTE]

If we use (say) ECM followed by SNFS, the run time is a random
variable. We want to minimize the *expected* run time. My joint
paper w/ Sam Wagstaff discusses this problem: how long do we run ECM
before switching.
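A toy version of that optimisation, with entirely hypothetical numbers (the real analysis is in the Silverman-Wagstaff paper; this only illustrates why an interior switch-over point exists when we are not sure an ECM-findable factor is present at all):

```python
# Toy model: with prior probability f an ECM-findable factor exists;
# if it does, each curve independently hits it with probability p.
# A curve costs c time units, SNFS costs S, and we stop ECM on a hit.
# All parameter values below are illustrative, not from the paper.
def expected_time(k, c=1.0, p=1/42017, S=5e5, f=0.3):
    q = (1.0 - p) ** k                      # P(k curves all miss | factor exists)
    with_factor = c * (1 - q) / p + q * S   # E[time | findable factor exists]
    without = k * c + S                     # no factor: all k curves, then SNFS
    return f * with_factor + (1 - f) * without

# Minimising over k gives an interior optimum: some tens of thousands
# of curves in this toy setting, after which SNFS is the better bet.
best_k = min(range(0, 200001, 100), key=expected_time)
print(best_k)
```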

bdodson 2006-07-28 14:23

t60 - (t55 - t50) = 35.8K curves, ecm
 
[QUOTE=Andi47]What about P-1?

Error 404 has p-1ed M1061 up to 128e8 - is this the highest B1 ever done on this number?

If not: What is the highest known B1 for a p-1 attempt on M1061?[/QUOTE]

So far as I know, that would be 5e10, with 3e10 for P+1; no special
attention for M1061, I ran the complete list (c. 865 numbers) with
default B2 (and no new factors). Step 1 was on the Opterons, with
-save available (P-1/P+1, complete list). Step 2 on the Itanium 2/Altix;
four jobs occupying a nontrivial portion of the 128 GB (like 4*12).

My current b1=260M curve count on M1061 is at 297. Alex's -v
reports t50 = 1512, t55 = 7643 and t60 = 42017. Since the t55
included the previous t50, we ought to count t55-t50 = 7643-1512
= 6131 curves toward t60, for 42,017 - 6,131 = 35,886 curves needed
in this thread for t60. So 359 curves/week would take 2 years? It's
not just kilo-bit snfs that's a nontrivial computation, even ecm-pretesting
looks substantial (I ran a bit of b1=850M, p65-optimal, after which I
agreed with Paul Zimmermann that we're not ready for t65's yet). Any
chance of 690 curves/week, to bring the wait down to a year?

Meanwhile, we could start a pool for how many discouraging posts to
expect from Bob before we finish t60 (if we do). Seems to me that
suggestions for other ecm targets (if new) belong in other threads,
unless there's evidence of a newbie that hasn't heard the same before -
as long as we're counting, how many times did we get the same view
from Bob repeated in the (successful!) t55-thread? We're very lucky
Aoki wasn't listening to Bob when he found the p64 factor in the previous
kilobit snfs candidate, R311 = (10^311-1)/(10-1) = 111....1 (311 of them).
Yes, of course, factors are unlikely; but it's a proven phenomenon that
over time unlikely events do occur (cf also Alex's p63, not to mention
somebody's p66).

Bruce Dodson
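Bruce's curve accounting, spelled out (the t-level figures are the GMP-ECM -v expected-curve counts he quotes; the only subtlety is that the t60 total gets credit for the t55 work beyond t50):

```python
# GMP-ECM -v expected curve counts at B1=260M, as quoted above.
t50, t55, t60 = 1512, 7643, 42017

credit = t55 - t50          # 6131 curves already credited toward t60
needed = t60 - credit       # curves still needed in this thread
print(needed)               # prints 35886
print(round(needed / 359))  # ~100 weeks (about 2 years) at 359 curves/week
print(round(needed / 690))  # ~52 weeks (about 1 year) at 690 curves/week
```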

Andi47 2006-07-28 15:14

[QUOTE=bdodson]So far as I know, that would be 5e10, with 3e10 for P+1; no special
attention for M1061, I ran the complete list (c. 865 numbers) with
default B2 (and no new factors). Step 1 was on the Opterons, with
-save available (P-1/P+1, complete list). Step 2 on the Itanium 2/Altix;
four jobs occupying a nontrivial portion of the 128 GB (like 4*12).[/quote]

Thanks for the info. As for myself, I ran P-1 with B1=35e9 and default B2 in June (with the -save option) and have since increased B1 to 55e9 (again with -save), without doing step 2 yet. I plan to purchase additional memory (2*2 GB) for my Pentium 4 in September (hoping it gets cheaper by then - 2*199 Euro is a bit expensive), and then I can go up to maybe B1=1e11 or 2e11 - if that makes sense.

[quote]My current b1=260M curve count on M1061 is at 297. Alex's -v
reports t50 = 1512, t55 = 7643 and t60 = 42017. Since the t55
included the previous t50, we ought to count t55-t50 = 7643-1512
= 6131 curves toward t60, for 42,017 - 6,131 = 35,886 curves needed
in this thread for t60. So 359 curves/week would take 2 years? It's
not just kilo-bit snfs that's a nontrivial computation, even ecm-pretesting
looks substantial (I ran a bit of b1=850M, p65-optimal, after which I
agreed with Paul Zimmerman that we're not ready for t65's yet). Any
chance of 690 curves/week, to bring the wait down to a year?
[/QUOTE]

I can start ECMing in a month or so, don't know yet how many curves I can do per week.

