mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   OddPerfect enthusiasts & others: ECM help needed! (https://www.mersenneforum.org/showthread.php?t=13739)

frmky 2010-08-17 02:09

OddPerfect enthusiasts & others: ECM help needed!
 
NFS@Home is working its way up in difficulty toward a kilobit SNFS. We're at 285 digits now, and in about 1-2 months we will start a 290 digit number. sigma(3^606) = 3,607- is a very attractive number since it's simultaneously an OddPerfect roadblock, Cunningham 5th hole once the 3- extension is added (and 3,563- is finished, which will happen shortly), and a number with no known nontrivial factors. We would appreciate any help the OP community and others can provide. For now, the number needs more ECM. Currently a bit over 1*t55 has been completed, so it needs curves with B1 = 26e7.

The easiest way is to create a file containing the line

(3^607-1)/2

then run the command

ecm -c 10 26e7 < file.txt

Adjust 10 to the number of curves you wish to run. Just report the number of curves you have completed in this thread, and thanks in advance for any help you can provide!
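The two steps above can be sketched as a short script (the ecm invocation itself is left commented out, since a single curve at this B1 takes over an hour of CPU time):

```shell
# Step 1: create the input file holding the composite to be tested.
echo "(3^607-1)/2" > file.txt
cat file.txt

# Step 2: run the curves. Shown but not executed here; adjust -c to
# the number of curves you wish to run:
#   ecm -c 10 26e7 < file.txt
```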
[I]
All curves are included in the ECM Server at oddperfect.no-ip.com:8201[/i]
[code]Post# Curves
4 10
20 40
28 4
36 1
41 15
46 224
49 3100 included in #72
53 320
57 3
72 3508 included in ECM Server
73 3393 new in ECM Server
74 240
75 10
76 960
77 6703
87 883
89 4594
[/code]
[i]Including directly reported curves, the ECM Server stands at 23785

including lower ECM curves (13875@43e6 & 18240@11e7), this is [B]4.90 t55[/B] or [B]0.96 t60[/B][/i]

Andi47 2010-08-17 06:08

[QUOTE=frmky;225800]
The easiest way is to create a file containing the line

(3^607-1)/2

then run the command

ecm -c 10 26e7 < file.txt

Adjust 10 to the number of curves you wish to run. Just report the number of curves you have completed in this thread, and thanks in advance for any help you can provide![/QUOTE]

I would suggest a command line like

ecm [B]-nn[/B] -c 10 26e7 <file.txt [B]>>outputfile.out[/B]

With the -nn flag, ecm will run at the lowest priority.

">>outputfile.out" redirects the output to a file and thus prevents data loss if a factor is found and the command line window (or bash window) is accidentally closed (or the computer crashes for some reason).

If the PC is short on RAM, the command line flag -maxmem <number of MB to use> can be used. For example, -maxmem 500 uses no more than 500 MB of RAM.
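A minimal sketch of the redirection advice, with echo standing in for ecm's output so it runs even where GMP-ECM is not installed (filenames are just examples):

```shell
# ">>" appends, so results from successive runs accumulate instead of
# overwriting each other, and a found factor survives a closed window.
rm -f outputfile.out
echo "Run 1: Step 1 took 100ms" >> outputfile.out
echo "Run 2: ********** Factor found" >> outputfile.out

# The real invocation (with the memory cap discussed above) would be:
#   ecm -nn -maxmem 500 -c 10 26e7 < file.txt >> outputfile.out
wc -l < outputfile.out
```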

I will run a fistful of curves until my next GNFS (from aliquot sequence 10212) takes my resources for a while.

mdettweiler 2010-08-17 06:20

[quote=Andi47;225814]I would suggest a command line like

ecm [B]-nn[/B] -c 10 26e7 <file.txt [B]>>outputfile.out[/B]

With the -nn flag, ecm will run at the lowest priority.

">>outputfile.out" redirects the output to a file and thus prevents data loss if a factor is found and the command line window (or bash window) is accidentally closed (or the computer crashes for some reason).[/quote]
Even better, if you're on Linux (or Windows w/Cygwin):

./ecm -nn -c 10 26e7 < file.txt | tee -a outputfile.out

That will send the screen output both to the file outputfile.out and to the screen, which is nicer for checking progress along the way. :smile:

Those on Windows without Cygwin can probably get tee in [url=http://unxutils.sourceforge.net]UnxUtils[/url]; just place tee.exe in the same directory as ecm.exe (or somewhere on your path, if you prefer).
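A quick demonstration of what tee -a does, again with echo standing in for ecm:

```shell
# tee -a copies its stdin to the screen AND appends it to a log file,
# so progress stays visible while a permanent record is kept.
# The real pipeline would be:
#   ./ecm -nn -c 10 26e7 < file.txt | tee -a outputfile.out
rm -f ecm.log
echo "Step 1 took 4506844ms" | tee -a ecm.log
```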

debrouxl 2010-08-17 07:26

Another way to output both to a file and to the screen:
ecm -nn -c 10 26e7 < file.txt >> outputfile.out & tail -f outputfile.out
or
echo "(3^607-1)/2" | ecm -nn -c 10 26e7 >> outputfile.out & tail -f outputfile.out
(CTRL + C to end)
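A sketch of this background-job variant, with a sleep/echo pair standing in for the hours-long ecm run (note that `&` alone backgrounds the job; no semicolon is needed after it):

```shell
# "&" puts the job in the background; tail then inspects the growing
# log. With a real run one would use "tail -f outputfile.out" and
# Ctrl+C to stop following.
rm -f out.log
( sleep 1; echo "Step 2 took 879332ms" ) >> out.log &
wait       # wait for the background job so this demo is deterministic
tail -n 1 out.log
```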

I'm running 10 curves, 1 done so far without finding a factor. I won't repost if these curves run to completion without yielding a factor, so as not to clutter the topic :smile:

Andi47 2010-08-17 08:30

P-1: B1=1e9, B2=1e14, no factor.

Andi47 2010-08-17 10:03

[QUOTE=Andi47;225814]
I will run a fistful of curves until my next GNFS (from aliquot sequence 10212) takes my resources for a while.[/QUOTE]

Cancelled / crashed; not a single curve finished.

In a windoze environment it seems that it is [I]absolutely necessary[/I] to specify -maxmem 1800 (or smaller), even if you have [I]plenty[/I] of free memory and you are using Win 64 bit and a 64 bit ECM binary.

(ECM crashed when it tried to use more than ~1.8 GB (or 2 GB??) RAM, and I am pretty sure I downloaded the 64 bit binary of ecm-6.2.3 (how can I check if this is indeed a 64 bit binary?))

10metreh 2010-08-17 11:19

[QUOTE=Andi47;225828](ECM crashed when it tried to use more than ~1.8 GB (or 2 GB??) RAM, and I am pretty sure I downloaded the 64 bit binary of ecm-6.2.3 (how can I check if this is indeed a 64 bit binary?))[/QUOTE]

Run it (on something smaller so that it will run without crashing), and look in Task Manager. If it shows up as "ecm.exe *32" then it is 32-bit; if it is just "ecm.exe" then it is 64-bit.

Andi47 2010-08-17 11:23

[QUOTE=10metreh;225829]Run it (on something smaller so that it will run without crashing), and look in Task Manager. If it shows up as "ecm.exe *32" then it is 32-bit; if it is just "ecm.exe" then it is 64-bit.[/QUOTE]

Thanks. And oops, it seems that something went wrong when I downloaded the binary (maybe I copied it into the wrong directory), as it says that the binary I'm using is 32 bit.

Edit: Just downloaded the 64 bit binary and started a large p-1: it is running happily with a memory usage of 3.3 GB. :smile:

em99010pepe 2010-08-17 12:22

Does anyone here want to set up an ecmserver for this number?

R.D. Silverman 2010-08-17 12:28

[QUOTE=frmky;225800]NFS@Home is working its way up in difficulty toward a kilobit SNFS. We're at 285 digits now, and in about 1-2 months we will start a 290 digit number. sigma(3^606) = 3,607- is a very attractive number since it's simultaneously an OddPerfect roadblock, Cunningham 5th hole once the 3- extension is added (and 3,563- is finished, which will happen shortly), and a number with no known nontrivial factors. We would appreciate any help the OP community and others can provide. For now, the number needs more ECM. Currently a bit over 1*t55 has been completed, so it needs curves with B1 = 26e7.

The easiest way is to create a file containing the line

(3^607-1)/2

then run the command

ecm -c 10 26e7 < file.txt

Adjust 10 to the number of curves you wish to run. Just report the number of curves you have completed in this thread, and thanks in advance for any help you can provide![/QUOTE]

My personal opinion is that doing a number from the extensions, when
the extensions still have not been officially added to the table is
[b]ridiculous[/b].

There are many other suitable numbers, still undone, from the 1st printed
edition of the book. Let's work on finishing them.

Andi47 2010-08-17 12:39

[QUOTE=Andi47;225825]P-1: B1=1e9, B2=1e14, no factor.[/QUOTE]

extended B2 to 1e15, still no factor.

em99010pepe 2010-08-17 12:42

[quote=R.D. Silverman;225833]My personal opinion is that doing a number from the extensions, when
the extensions still have not been officially added to the table is
[B]ridiculous[/B]

There are many other suitable numbers, still undone, from the 1st printed
edition of the book. Let's work on finishing them.[/quote]

We are not interested in your personal opinion, go away if you don't want to help.

debrouxl 2010-08-17 12:56

Hey people, keep cool :smile:
It's not the first time I've seen Prof. Silverman post clear-cut, sometimes controversial, opinions. He has the right to do so, just as much as we have the right to disagree.

As Greg wrote above, factoring this number would help two projects. Likewise, William Lipp makes yoyo@home and subsequently RSALS work on a number of Brent composites. Would the undone numbers from the 1st printed edition of the book help multiple projects ?

R.D. Silverman 2010-08-17 13:15

[QUOTE=debrouxl;225837]Hey people, keep cool :smile:
It's not the first time I see Prof. Silverman posting clear-cut, sometimes controversial, opinions. He has the right to do so, just as much as we have the right to disagree.

As Greg wrote above, factoring this number would help two projects. Likewise, William Lipp makes yoyo@home and subsequently RSALS work on a number of Brent composites. Would the undone numbers from the 1st printed edition of the book help multiple projects ?[/QUOTE]

I disagree that it helps the tail-chasing for OddPerfect since I think that
project is pointless. Raising the bound on the minimal size for an odd perfect
number does nothing toward proving that none exist. All the wasted CPU
time for this project would be much better spent on (say) Seventeen or Bust
which has a definitive END.

BTW: Didn't your mother ever tell you "finish what you start before doing
something new"?

wblipp 2010-08-17 13:19

[QUOTE=em99010pepe;225832]Anyone here wants to set up an ecmserver for this number?[/QUOTE]

Next week I'll be back in the US and will change the OddPerfect Most Wanted ECM server to hand out only this number. It currently hands out this number plus several others. That server is at

oddperfect.no-ip.com:8201

William

frmky 2010-08-17 16:40

[QUOTE=R.D. Silverman;225833]My personal opinion is that doing a number from the extensions, when
the extensions still have not been officially added to the table is
[b]ridiculous[/b].[/QUOTE]
Normally I would agree with you, but in this case I'm making an exception because the confluence of attractive qualities listed in the first post for me overrides "finish what you start." For the "science" (I'm a physicist after all!) I just need to collect data on how a 290 behaves, and any ole 290 will do. :smile:

Andi47 2010-08-17 16:56

p+1: 2 runs with B1=1e9, B2=1e14, no factor.

R.D. Silverman 2010-08-17 18:34

[QUOTE=frmky;225865]Normally I would agree with you, but in this case I'm making an exception because the confluence of attractive qualities listed in the first post for me overrides "finish what you start." For the "science" (I'm a physicist after all!) I just need to collect data on how a 290 behaves, and any ole 290 will do. :smile:[/QUOTE]

So why not do e.g. 12,269-? (a second hole from the existing table)
There are plenty of numbers from the current tables that fit your needs.
I see no reason to draw from the extension(s).

fivemack 2010-08-17 19:16

But people are actually interested in 3^607-1; the fact that you don't think they should be doesn't alter the fact that they are, and does make it more interesting than some random Cunningham-table number of about the right size which isn't even the size of a finite field.

em99010pepe 2010-08-17 21:12

1 Attachment(s)
Done with 40 curves.

R.D. Silverman 2010-08-17 21:21

[QUOTE=fivemack;225909]But people are actually interested in 3^607-1; the fact that you don't think they should be doesn't alter the fact that they are, and does make it more interesting than some random Cunningham-table number of about the right size which isn't even the size of a finite field.[/QUOTE]

And a lot of people also like to read the National Enquirer.......

Chasing 3,607- shows a lack of historical perspective.

CRGreathouse 2010-08-17 21:46

[QUOTE=R.D. Silverman;225897]So why not do e.g. 12,269-? (a second hole from the existing table)
There are plenty of numbers from the current tables that fit your needs.
I see no reason to draw from the extension(s).[/QUOTE]

Bob, I'm curious as to why you feel these factorizations are so important. Your arguments (esp. with the OPN crowd) come down to "factoring your numbers isn't important", but I don't really see the intrinsic importance of the Cunningham factorizations either.

frmky 2010-08-17 22:43

[QUOTE=R.D. Silverman;225930]Chasing 3,607- shows a lack of historical perspective.[/QUOTE]
Argh! Those darn kids today are ruining everything! :wink:

[QUOTE=em99010pepe;225929]Done with 40 curves.[/QUOTE]
Thanks! Only about 15,000 left to go to reach 3*t55!

R.D. Silverman 2010-08-18 00:15

[QUOTE=CRGreathouse;225935]Bob, I'm curious as to why you feel these factorizations are so important. Your arguments (esp. with the OPN crowd) come down to "factoring your numbers isn't important", but I don't really see the intrinsic importance of the Cunningham factorizations either.[/QUOTE]


Only as an historical matter. It is the longest computational project
in history. Are you aware of some of the pre-electronic computing machines
that were built (and used successfully) for the project?

BTW, Dick Lehmer thought they were important.

CRGreathouse 2010-08-18 00:33

[QUOTE=R.D. Silverman;225959]Only as an historical matter. It is the longest computational project
in history. Are you aware of some of the pre-electronic computing machines
that were built (and used successfully) for the project?[/QUOTE]

I think I remember two such projects, one from roughly Babbage's time (don't remember if it was his) and another just before the time of electronic computers. A 'loom' and a 'bicycle', perhaps?


It's interesting to me that you would use that explanation in the particular case of OPNs whose existence has been described as the oldest open problem in mathematics. Now I agree that pushing the bounds doesn't bring the project any closer to fruition, but technically neither does factoring 12,269- bring us closer to factoring the Cunninghams.

[QUOTE=R.D. Silverman;225959]BTW, Dick Lehmer thought they were important.[/QUOTE]

I wonder on what basis. I mean, *I* think they're important -- at one point I wrote a program to do the grunt work of using the tables (find algebraic factors, look up appropriate table entries for composites, etc.). But I can't articulate a reason for that and I was wondering if someone else could.

R.D. Silverman 2010-08-18 13:04

[QUOTE=CRGreathouse;225962]I think I remember two such projects, one from roughly Babbage's time (don't remember if it was his) and another just before the time of electronic computers. A 'loom' and a 'bicycle', perhaps?


It's interesting to me that you would use that explanation in the particular case of OPNs whose existence has been described as the oldest open problem in mathematics. Now I agree that pushing the bounds doesn't bring the project any closer to fruition, but technically neither does factoring 12,269- bring us closer to factoring the Cunninghams.



I wonder on what basis. I mean, *I* think they're important -- at one point I wrote a program to do the grunt work of using the tables (find algebraic factors, look up appropriate table entries for composites, etc.). But I can't articulate a reason for that and I was wondering if someone else could.[/QUOTE]


Actually, what I would [b]really[/b] like to see is for work to be done
on the base 2 tables [b]exclusively[/b]. They are the only unfinished
numbers from the 1st edition of the book. Let's finish them off and
move on to doing something else with the CPU time.

I can suggest a number of things:

Looking for elliptic curves of high rank.

Further development in algorithms for computing class numbers/fundamental
units of number fields of degree higher than 2.

Looking for an integer that is both an ordinary pseudoprime and a LL
pseudoprime (with discriminant -5 [The Wagstaff-Pomerance challenge]).
This is a project that is finite in duration and has a definite goal.

Finishing off Seventeen or Bust
This is a project that is finite in duration and has a definite goal.

There is a whole bunch of stuff in R. Guy's "Unsolved Problems in Number Theory" that could be investigated.

R.D. Silverman 2010-08-18 13:14

[QUOTE=R.D. Silverman;226002]Actually, what I would [b]really[/b] like to see is for work to be done
on the base 2 tables [b]exclusively[/b]. They are the only unfinished
numbers from the 1st edition of the book. Let's finish them off and
move on to doing something else with the CPU time.

I can suggest a number of things:

Looking for elliptic curves of high rank.

Further development in algorithms for computing class numbers/fundamental
units of number fields of degree higher than 2.

Looking for an integer that is both an ordinary pseudoprime and a LL
pseudoprime (with discriminant -5 [The Wagstaff-Pomerance challenge]).
This is a project that is finite in duration and has a definite goal.

Finishing off Seventeen or Bust
This is a project that is finite in duration and has a definite goal.

There is a whole bunch of stuff in R. Guy's "Unsolved Problems in Number Theory" that could be investigated.[/QUOTE]


BTW, There was a time when Lehmer's sieves were actually held by
the computer museum in Boston. Both his photoelectric sieve and DLS-127
were there. They were [b]not[/b] however on display. I personally
protested this to the museum staff, pointing out that they were classic
examples of computing without digital computers. The museum staff was
completely clueless about the machines. No one, repeat no one, knew
what the hardware was, what it had been used for, or even why they had it.
How can a museum function without a curator who knows the history of
the artifacts in the museum???

I understand that the hardware has been returned to Berkeley. It
was quite proper to do so. Does anyone know of its current status?

I got permission to play around with the photoelectric sieve when I was
at MITRE. I borrowed some equipment (a high torque motor, a rubber
belt of the right size, a laser, and a photometer) and actually got the
machine to work. I had to explain to the museum staff how the machine
worked. Of course, they failed to follow my description about modular
arithmetic, quadratic residues, and exclusion moduli. Whether this was
due to my inadequate explanation I will leave for others to decide.....

I will say that the museum staff was hopelessly clueless about the
history of the machines that they had on display, how they were used,
and what they were used for.... I expect the general public to be clueless
about math and computation, but am surprised at the ignorance of the staff.
I would have expected them to know [b]something[/b].

Andi47 2010-08-18 13:29

4 curves at B1=26e7 for benchmarking purpose, no factors.

xilman 2010-08-18 13:44

[quote=R.D. Silverman;226003]How can a museum function without a curator who knows the history of the artifacts in the museum???[/quote]I think your expectations are too high.

I doubt that there is a museum worthy of the name anywhere on the planet containing only artefacts which are recognized by the institution's present curators. Every now and again I read news reports of some thingummyajig lying deep in the bowels of a museum which has escaped attention for a lengthy period of time until re-discovered by someone who knows, or is interested enough to find out, the nature of the object.


Paul

R.D. Silverman 2010-08-18 13:50

[QUOTE=xilman;226006]I think your expectations are too high.

I doubt that there is a museum worthy of the name anywhere on the planet containing only artefacts which are recognized by the institution's present curators. Every now and again I read news reports of some thingummyajig lying deep in the bowels of a museum which has escaped attention for a lengthy period of time until re-discovered by someone who knows, or is interested enough to find out, the nature of the object.


Paul[/QUOTE]

Sure. But the museum staff was clueless about MOST of the items that
they had in their possession.

fivemack 2010-08-18 16:29

If you're talking about benchmarking purposes, building gmp-5.0.1 out of the box and ecm-6.3 out of the box against it, and running on the macpro I have at work (2.66GHz i7 Xeon CPUs) one curve at 26e7 takes an hour for Step 1 and 20 minutes for step 2, and sometimes has 1700MB of memory resident.

So I could reasonably run 270 over a weekend (sixty hours, six cores), but no more, and I can't reasonably have a job running in the background on each core because I'd run out of memory. The ECM step requires noticeably larger per-CPU resources than the sieving.
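The weekend estimate above checks out arithmetically:

```shell
# Sanity check of the "270 curves over a weekend" figure:
# ~60 min for step 1 + ~20 min for step 2 = 80 minutes per curve.
minutes_per_curve=$((60 + 20))
weekend_minutes=$((60 * 60))   # sixty hours
cores=6
curves=$((weekend_minutes / minutes_per_curve * cores))
echo "$curves"                 # prints 270
```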

Andi47 2010-08-18 17:11

[QUOTE=fivemack;226012]If you're talking about benchmarking purposes, building gmp-5.0.1 out of the box and ecm-6.3 out of the box against it, and running on the macpro I have at work (2.66GHz i7 Xeon CPUs) one curve at 26e7 takes an hour for Step 1 and 20 minutes for step 2, and sometimes has 1700MB of memory resident.

So I could reasonably run 270 over a weekend (sixty hours, six cores), but no more, and I can't reasonably have a job running in the background on each core because I'd run out of memory. The ECM step is requiring noticeably larger per-CPU resources than the sieving.[/QUOTE]

For me it turned out to be a rather "dirty" benchmark: I had all 8 threads of my i7 860 @2.8 GHz (Win 7 pro 64 bit) busy (100% each) when I started, but when the other factorization I had running found a factor, 6 threads fell idle somewhere in between. Anyway, I got these timings (for one of the two threads running (3^607-1)/2 with ecm -maxmem 3000):

[code]GMP-ECM 6.3 [configured with GMP 5.0.1 and --enable-asm-redc] [ECM]
Input number is (3^607-1)/2 (290 digits)
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=3595759605
Step 1 took 4420818ms
Step 2 took 887053ms
Run 2 out of 2:
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=1767865861
Step 1 took 3253105ms [COLOR="Blue"](54 minutes)[/COLOR]
Step 2 took 770691ms [COLOR="Blue"](12.8 minutes)[/COLOR][/code]

For comparison: the 32 bit ecm-6.2.3 took 4 hours(!) for step 1.
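For reference, the millisecond timings convert to the annotated minutes like so:

```shell
# Convert the reported step timings from milliseconds to minutes
# (matching the blue annotations in the log above).
awk 'BEGIN { printf "%.1f %.1f\n", 3253105/60000, 770691/60000 }'
# prints "54.2 12.8"
```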

em99010pepe 2010-08-18 18:02

Andi47 and fivemack,

My timings:

Machine 1 - Core i5 750@3.7GHz
[code]
GMP-ECM 6.3 [configured with GMP 5.0.1 and --enable-asm-redc] [ECM]
Input number is (3^607-1)/2 (290 digits)
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=1725068313
Step 1 took 2554531ms
Step 2 took 603256ms
[/code]Machine 2 - Q6600@2.8 GHz
[code]GMP-ECM 6.3 [configured with GMP 5.0.1 and --enable-asm-redc] [ECM]
Input number is (3^607-1)/2 (290 digits)
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=3536664674
Step 1 took 3383594ms
Step 2 took 880016ms[/code]Funny, my Q6600@2.8GHz is almost as fast as your i7 860 @2.8 GHz (Andi47); maybe you should run only 4 threads.

Anyway, I started 800 curves on Machine 1 (100 done so far) and 400 curves on Machine 2 (80 done so far).

Andi47 2010-08-18 18:16

[QUOTE=em99010pepe;226020]Funny my Q6600@2.8GHz is almost as fast as your i7 860 @2.8 GHz(Andi47), maybe you should run only 4 threads.
[/QUOTE]

Even though the timings per thread are slower when running 8 threads than when running 4, running 8 threads still gives higher overall throughput.

xilman 2010-08-18 18:25

"You" in the quoted material is em99010pepe, who claimed that an unspecified "we" are not interested in Bob's opinion.
[quote=R.D. Silverman;226017]Actually, a fair number of competent people [B]ARE[/B] interested in my opinion.[/quote]Both statements are consistent. All that is stated is that "we" intersection "a fair number of competent people" is the empty set.
[quote=R.D. Silverman;226017]I have been doing computational number theory and working on factoring algorithms/prime testing algorithms/Cunningham project longer than many people in this discussion group have been alive.[/quote]Quite possibly true, likely even, though not knowing the birthdates of more than an extremely small fraction of the participants forces me to conclude that I don't know it to be true. It's certainly the case that a number of participants were born before you began your work on the Cunningham project.
[quote=R.D. Silverman;226017]The personal opinion of someone who has contributed a great deal to the topics relevant to this forum should matter a great deal. The fact that you think my opinions don't matter says a lot about [B]you[/B].[/quote]Reciprocally, your opinions say a lot about you.
[quote=R.D. Silverman;226017]
John Selfridge makes up the wanted lists. Perhaps you think his opinions
about what numbers are important don't matter either??? [/quote]Assumes facts not in evidence, to use a phrase I'm fairly sure I've read here relatively recently.

Don't worry, be happy. Both communities can disagree on priorities without needing to get upset over the priority of the other. You (both of you) might find that Sura 109 is worth reading.


Paul

richs 2010-08-18 18:30

One curve using YAFU on my P4 took 15 hours.

08/18/2010 01:51:17 v1.18 @ RICH, Finished 1 curves using Lenstra ECM method on C290 input, B1 = 260000000, B2 = 1100784651

R.D. Silverman 2010-08-18 18:33

[QUOTE=xilman;226030]
Reciprocally, your opinions says a lot about you.
[/QUOTE]

I stand by my opinions and have the courage not to post under an alias.

[QUOTE]
You (both of you) might find that Sura 109 is worth reading.
[/QUOTE]

Reference?

R.D. Silverman 2010-08-18 18:36

[QUOTE=R.D. Silverman;226035]I stand by my opinions and have the courage not to post under an alias.



Reference?[/QUOTE]

Never mind. I easily found the reference.

R.D. Silverman 2010-08-18 18:38

[QUOTE=xilman;226030]
Assumes facts not in evidence, to use a phrase I'm fairly sure I've read here relatively recently.


Paul[/QUOTE]

Which is why I asked a QUESTION, rather than stating an opinion.

bsquared 2010-08-18 18:48

[QUOTE=richs;226032]One curve using YAFU on my P4 took 15 hours.

08/18/2010 01:51:17 v1.18 @ RICH, Finished 1 curves using Lenstra ECM method on C290 input, B1 = 260000000, B2 = 1100784651[/QUOTE]

The pre-compiled YAFU executables use GMP-ECM, however they are likely not optimized for your system. If you're planning on contributing significantly to this project, I'd recommend picking up a standalone version of gmp-ecm which is tuned for your system. You might be able to find some [URL="http://www.mersenneforum.org/showthread.php?t=4087"]here[/URL].

debrouxl 2010-08-18 18:56

Core 2 Duo T7600, 3 GB DDR2 RAM, tuned GMP:
[code]GMP-ECM 6.2.3 [powered by GMP 4.3.2] [ECM]
Input number is (3^607-1)/2 (290 digits)
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=954808638
Step 1 took <~4.3e6 to ~4.5e6> ms
Step 2 took <~800e3 to ~850e3> ms[/code]

15 curves done so far.

10metreh 2010-08-18 19:16

[QUOTE=bsquared;226038]You might be able to find some [URL="http://www.mersenneforum.org/showthread.php?t=4087"]here[/URL].[/QUOTE]

[url=http://www.mersenneforum.org/showpost.php?p=172959&postcount=201]These ones[/url] are fastest on my old P4.

em99010pepe 2010-08-18 19:47

[quote=frmky;225944]
Thanks! Only about 15,000 left to go to reach 3*t55![/quote]

What about yoyo@home help?

xilman 2010-08-18 19:58

[quote=R.D. Silverman;226035]I stand by my opinions and have the courage not to post under an alias.[/quote]True, as do I, though my alias is a very loosely guarded secret.

Paul

bsquared 2010-08-18 21:08

Intel X5570 @2.93 GHz
[CODE]GMP-ECM 6.3 [configured with GMP 5.0.1 and --enable-asm-redc] [ECM]
Input number is (3^607-1)/2 (290 digits)
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=4175656254
Step 1 took 2577000ms
Step 2 took 497196ms
[/CODE]

There are many such cores available... but they are highly utilized. I'll try to put away a few curves in off hours.

bsquared 2010-08-19 11:41

224 curves done.

Raman 2010-08-19 12:29

[quote=R.D.Silverman;225839] I disagree that it helps the tail-chasing for OddPerfect since I think that
project is pointless. Raising the bound on the minimal size for an odd perfect
number does nothing toward proving that none exist. All the wasted CPU
time for this project would be much better spent on (say) Seventeen or Bust
which has a definitive END.

BTW: Didn't your mother ever tell you "finish what you start before doing
something new"? [/quote]

I approve of doing 3,607- instead. My perspective is not about raising the lower bound for odd perfect numbers, but that it is a number with no known nontrivial factors (similar to M1061), and only a bit beyond the official table limits for base 3. It will, after all, be added to the official base 3 tables someday. In my opinion, the base 3 table limit of 600 is much lower than those of the other Cunningham bases.
Only the trivial factor of 2 is known, and it will be done by SNFS only after it has received a sufficient amount of ECM.

By the way, I am thinking of doing some numbers from the base 2 LM tables (quartics) after finishing off the sieving for 5,415+: 2,2226L 2,2238L 2,1870L 2,1870M 2,1910M. Say which ones you want to do, and I will do the rest, before 2,2334M and 2,985-.

R.D. Silverman 2010-08-19 12:57

[QUOTE=Raman;226135]I approve of doing 3,607- instead. My perspective is not about raising the lower bound for odd perfect numbers, but that it is a number with no known nontrivial factors (similar to M1061), and only a bit beyond the official table limits for base 3. It will, after all, be added to the official base 3 tables someday. In my opinion, the base 3 table limit of 600 is much lower than those of the other Cunningham bases.
Only the trivial factor of 2 is known, and it will be done by SNFS only after it has received a sufficient amount of ECM.

By the way, I am thinking of doing some numbers from the base 2 LM tables (quartics) after finishing off the sieving for 5,415+: 2,2226L 2,2238L 2,1870L 2,1870M 2,1910M. Say which ones you want to do, and I will do the rest, before 2,2334M and 2,985-.[/QUOTE]

I will take 2,1870L,M. Take the rest.

bdodson 2010-08-19 13:33

[QUOTE=bsquared;226130]224 curves done.[/QUOTE]

I'm at 3100 curves, but would prefer to keep my count separate
from the OP ecm server count. I'm just finishing sieving 2,1043+
but won't have disk space for postprocessing or starting sieving on
our next sieving target (that c161 gnfs that was making the rounds,
until Serge stepped in). So I may have the quadcore cluster available
for ecm on 3,607- for a few days until 2,919- finishes (we hope!).

-Bruce

Raman 2010-08-19 13:39

[quote=R.D. Silverman;226138]I will take 2,1870L,M. Take the rest.[/quote]

Ok. It is too early to reserve for now; rather, I will reserve them when the time comes...

PS: Other comparably sized numbers are 10,550M 5,815L 10,530M (other than 5,785M of course)

frmky 2010-08-19 17:26

I don't have the privileges necessary to keep a count of the curves done in the first post. Batalov or fivemack, would you mind? Thanks!

Batalov 2010-08-19 18:11

Me neither (I am a local greenfingers). Need Jasonp.

Bruce: I will email small details about possible gzipping. -- I do like to stand under built bridges.

Anyway, M919 will vacate the premises on Monday (if all goes well); if there's no time for experiments then no need to change your plans.

em99010pepe 2010-08-19 19:02

1 Attachment(s)
~320 curves done.

xilman 2010-08-19 19:11

[quote=Batalov;226199]I do like to stand under built bridges.[/quote]You mean you're a troll?

Paul

Batalov 2010-08-19 20:31

Do trolls build bridges?


[COLOR=lemonchiffon]Yeah. I know that these days engineers don't stand under the bridges they built. That's so old fashioned. But I do.[/COLOR]

R.D. Silverman 2010-08-19 20:33

[QUOTE=Batalov;226250]Do trolls build bridges?[/QUOTE]

fee fie foe foo

Jeff Gilchrist 2010-08-19 23:07

I will run some curves for you. Some benchmark info on a Core2 Xeon with this number on a 64-bit Linux system.

GMP-ECM 6.2.3 [powered by GMP 4.3.0] [ECM]
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=679768835
Step 1 took 4600030ms
Step 2 took 879332ms

GMP-ECM 6.3 [configured with GMP 5.0.1 and --enable-asm-redc] [ECM]
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=4274582256
Step 1 took 4506844ms
Step 2 took 894422ms

GMP-ECM 6.3 [configured with MPIR 2.1.1 and --enable-asm-redc] [ECM]
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=3873338469
Step 1 took 4578511ms
Step 2 took 940473ms

R.D. Silverman 2010-08-19 23:19

[QUOTE=Jeff Gilchrist;226301]I will run some curves for you. Some benchmark info for this number on a Core2 Xeon running 64-bit Linux.

GMP-ECM 6.2.3 [powered by GMP 4.3.0] [ECM]
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=679768835
Step 1 took 4600030ms
Step 2 took 879332ms

GMP-ECM 6.3 [configured with GMP 5.0.1 and --enable-asm-redc] [ECM]
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=4274582256
Step 1 took 4506844ms
Step 2 took 894422ms

GMP-ECM 6.3 [configured with MPIR 2.1.1 and --enable-asm-redc] [ECM]
Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=3873338469
Step 1 took 4578511ms
Step 2 took 940473ms[/QUOTE]

This is sub-optimal. One should spend as much time in step 2 as in step 1.
I suggest increasing B2.
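Silverman's rule of thumb can be checked against the GMP-ECM 6.2.3 timings quoted above; a minimal sketch of the arithmetic (times are the benchmark's, in milliseconds):

```python
# Step times from the GMP-ECM 6.2.3 benchmark quoted above (milliseconds).
step1_ms = 4600030
step2_ms = 879332

# The equal-time heuristic would want this ratio near 1; here step 2
# gets only about a fifth of the step 1 time, hence the advice to raise B2.
ratio = step2_ms / step1_ms
print(f"step2/step1 = {ratio:.2f}")
```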

axn 2010-08-20 01:25

[QUOTE=R.D. Silverman;226304]This is sub-optimal. One should spend as much time in step 2 as in step 1.
I suggest increasing B2.[/QUOTE]

Isn't that analysis based on the assumption that the time for stage 2 is proportional to B2? (I vaguely recall seeing something like that, having only skimmed through your paper.)

wblipp 2010-08-20 01:39

[QUOTE=wblipp;225840]Next week I'll be back in the US and will change the OddPerfect Most Wanted ECM server to hand out only this number.

oddperfect.no-ip.com:8201[/QUOTE]

Now in operation.

I am also updating curve counts in the first post and integrating all except bdodson's in the server.

frmky 2010-08-20 05:41

Minor correction... That should be [URL="http://oddperfect.no-ip.com:8201"]oddperfect.no-ip.com:8201[/URL] And thanks!

[I]fixed. Thanks.[/I]

Jeff Gilchrist 2010-08-20 10:18

[QUOTE=em99010pepe;226020]
My timings:

Machine 1 - Core i5 750@3.7GHz
[/QUOTE]

My Core2 Q9550 @ 3.2GHz isn't far off from your i5 considering yours is overclocked a lot more:

[CODE]Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=2613054812
Step 1 took 2903381ms
Step 2 took 732534ms[/CODE]

Jeff.

firejuggler 2010-08-20 11:16

[code]GMP-ECM 6.3-rc3 [configured with GMP 5.0.1] [ECM]
Input number is (3^607-1)/2 (290 digits)
Using MODMULN
Using B1=260000000, B2=3079973376496, polynomial Dickson(30), sigma=296568008
dF=362880, k=2, d=3993990, d2=17, i0=49
Expected number of curves to find a factor of n digits:
35 40 45 50 55 60 65 70 75 80
23 83 336 1526 7676 42206 251394 1609787 1.1e+007 8e+007
Step 1 took 14294063ms
Estimated memory usage: 2683M
Initializing tables of differences for F took 49547ms
Computing roots of F took 360937ms

[/code]
Crashed there: not enough memory. Core 2 Duo E6300, 1.8GHz.
Half a curve?

10metreh 2010-08-20 11:38

[QUOTE=firejuggler;226359]crashed there, not enough memory, Core 2 duo E6300, 1.8GHz
half a curve?[/QUOTE]

Just step 1 is very unlikely to find a factor, so it doesn't mean much.

firejuggler 2010-08-20 11:44

No worry, I was just kidding.
I'll try again on a better computer.

em99010pepe 2010-08-20 12:25

[quote=Jeff Gilchrist;226357]My Core2 Q9550 @ 3.2GHz isn't far off from your i5 considering yours is overclocked a lot more:

[code]Using B1=260000000, B2=3178559884516, polynomial Dickson(30), sigma=2613054812
Step 1 took 2903381ms
Step 2 took 732534ms[/code]Jeff.[/quote]

My timings were with all four cores running at the same time, but as I said before, the Core i5 is not as fast as people think. Compared to my Q6600@2.8GHz it is only ~30% faster...

em99010pepe 2010-08-20 17:28

Can't reach oddperfect.no-ip.com:8201....

Jeff Gilchrist 2010-08-20 19:31

[QUOTE=em99010pepe;226367]My timings were with all four cores running at the same time, but as I said before, the Core i5 is not as fast as people think. Compared to my Q6600@2.8GHz it is only ~30% faster...[/QUOTE]

My timings were also when all 4 cores were running ecm at the same time.

What is the main difference between the i5 and i7: fewer memory channels, no hyperthreading, or how else does Intel separate them by feature?

Jeff.

yoyo 2010-08-21 07:50

[QUOTE=em99010pepe;226043]What about yoyo@home help?[/QUOTE]

If the number should be run in yoyo@home, William would add it.
yoyo

firejuggler 2010-08-21 12:59

For those lacking the memory to run B1=26e7 curves (requiring 265x MB of RAM for stage 2), I suggest using the -maxmem switch. Sure, it might take more time to run, but at least it will run.

yoyo 2010-08-21 13:25

How many curves are missing?

bdodson 2010-08-21 14:08

[QUOTE=wblipp;226319]Now in operation.

I am also updating curve counts in the first post and integrating all except bdodson's in the server.[/QUOTE]

Thanks. My current curve count is 3508+5006 (all B1=26e7, default B2).
The 3508 can safely be added in to the OP ecm server count (that's 2t50,
in my usual curve count reporting), and the 5006 should be kept separate
until those t50's finish. The partials are from another 5t50. -Bruce

PS - I've been using 1580 curves for t50 and 5.7t50 = c. t55,
which works out to c. 9000 curves. If we had c. t55 before Greg
started this thread (mostly with B1 = p55-optimal, 11e7; if I'm
reading the server report correctly), we must be past t55+t55,
with another t55 to go? As this is a record snfs (for NFS@Home,
presuming that ecm doesn't find a factor, as is becoming ever more
likely) we could probably head towards t60 [which I have at around
25t50, 5t55; _very_ roughly estimated]. Or we could stop at 3t55
and go on to something more likely to have a factor in ecm range.
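Bruce's conversion factors above can be sketched as simple arithmetic (these are his stated rules of thumb for bookkeeping, not exact ECM expectations):

```python
# Rules of thumb from the post above: 1580 curves per t50, and
# 5.7 t50 per t55. Both are approximate bookkeeping values.
curves_per_t50 = 1580
t50_per_t55 = 5.7

curves_per_t55 = curves_per_t50 * t50_per_t55
print(f"curves per t55: about {curves_per_t55:.0f}")  # c. 9000, as stated
```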

bdodson 2010-08-22 13:07

[QUOTE=bdodson;226481]Thanks. My current curve count is 3508+5006 (all B1=26e7, default B2).
The 3508 can safely be added in to the OP ecm server count (that's 2t50,
in my usual curve count reporting), and the 5006 should be kept separate
until those t50's finish. The partials are from another 5t50. ...[/QUOTE]

Today's count is 3508+3393+3682 = 10580 curves, with
3508+3393 = 6901 in four complete t50's (which may be added
in with the OP ecm server count), and the other 3682 in four
additional t50's that I'm still working on. -bd

jcrombie 2010-08-22 14:13

curve count
 
Here is my (puny) contribution from the last 5 days on an Athlon 2.2 Ghz dual-core.

240 curves with B1=260e6 and default B2.

Good luck!

ET_ 2010-08-23 14:01

1 Attachment(s)
My 10 curves :smile:

Luigi

bsquared 2010-08-23 14:02

960 curves done @ 260M, default B2.

bdodson 2010-08-23 21:04

[QUOTE=bdodson;226546]Today's count is 3508+3393+3682 = 10580 curves, with
3508+3393 = 6901 in four complete t50's (which may be added
in with the OP ecm server count), and the other 3682 in four
additional t50's that I'm still working on. -bd[/QUOTE]

OK, sounds like sieving's about to restart here; so I'll be winding
down this ecm run. The current count is
3508+3393+6703+3682 = 15,916 curves, of which
there are eight complete t50's, 3508+3393+6703 = 13,604 curves
(which may be added in to the OP ecm server count), and the
3682 are from an additional 2.5t50 still running.
-Bruce (all curves 26e7 with default B2)

frmky 2010-08-24 04:27

I concur that ECM can now wind down. I was shooting for about 0.5*t60, and we're about there. But we still have a while before sieving starts. 5,409- still has about 185,000 tasks to go, and at the current rate of about 6,000/day it'll take another month.
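The one-month estimate follows directly from the task counts; a trivial sketch:

```python
# Sieving ETA from the figures above: ~185,000 tasks remaining for
# 5,409- at ~6,000 tasks/day.
tasks_left = 185_000
tasks_per_day = 6_000

days = tasks_left / tasks_per_day
print(f"about {days:.0f} days")  # roughly another month, as stated
```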

R.D. Silverman 2010-08-24 11:40

[QUOTE=frmky;226775]I concur that ECM can now wind down. I was shooting for about 0.5*t60, and we're about there. But we still have a while before sieving starts. 5,409- still has about 185,000 tasks to go, and at the current rate of about 6,000/day it'll take another month.[/QUOTE]

The web site shows a whole bunch of numbers between 5,409- and 3,607-
that are waiting to be done. Are you skipping those?

debrouxl 2010-08-24 12:28

Based on [url]http://escatter11.fullerton.edu/nfs/forum_thread.php?id=211[/url], I think that the NFS@Home clients are going to sieve some of these integers and 3,607- concurrently.

jasonp 2010-08-24 12:30

Many of the NFS@Home participants do not have the gigabyte per core that the 16e lattice siever requires, so the project queues up smaller jobs suitable for the 15e siever in between the bigger ones. The smaller jobs proceed in parallel.

(Not to put words in Greg's mouth, but he's on west coast time and I'm awake now)

bdodson 2010-08-24 13:56

[QUOTE=R.D. Silverman;226813]The web site shows a whole bunch of numbers between 5,409- and 3,607-
that are waiting to be done. Are you skipping those?[/QUOTE]

There are two queues among the numbers on the "Status" page; and
3,607- is next on the queue for the 16e siever. That's "next" as in
starting in c. one month, as the 5,409- tasks finish. The numbers
in between will be done with the 15e siever (<= snfs difficulty 270,
and what were the "smallest available" gnfs). Plenty of time for more
ECM (towards the rest of t60); most of the remainder of my curves
will run on 32-bit xeons.

-Bruce

(off topic Postscript: the Batalov+Dodson number M919 finished
last night, C261 = p126*p135; one digit below the Childers/Dodson
2nd place record.)

R.D. Silverman 2010-08-24 17:00

[QUOTE=bdodson;226833]There are two queues among the numbers on the "Status" page; and
3,607- is next on the queue for the 16e siever. That's "next" as in
starting in c. one month, as the 5,409- tasks finish. The numbers
in between will be done with the 15e siever (<= snfs difficulty 270,
and what were the "smallest available" gnfs). Plenty of time for more
ECM (towards the rest of t60); most of the remainder of my curves
will run on 32-bit xeons.

-Bruce

(off topic Postscript: the Batalov+Dodson number M919 finished
last night, C261 = p126*p135; one digit below the Childers/Dodson
2nd place record.)[/QUOTE]

I would have thought that some of the other numbers (e.g. the 180+ digit
GNFS jobs) required the 16e siever as well. Is this not the case?

The status page does not show the separate queues.

frmky 2010-08-24 17:07

[QUOTE=R.D. Silverman;226813]The web site shows a whole bunch of numbers between 5,409- and 3,607-
that are waiting to be done. Are you skipping those?[/QUOTE]
See what those east of me said... :smile:

frmky 2010-08-24 17:12

[QUOTE=R.D. Silverman;226856]I would have thought that some of the other numbers (e.g. the 180+ digit
GNFS jobs) required the 16e siever as well. Is this not the case?

The status page does not show the separate queues.[/QUOTE]
GNFS-180 jobs run just fine with 15e. I have yet to see how the GNFS-184 job does, though. I've kept explicit mention of the separate 15e/16e queues off the status page for simplicity, but as a general rule SNFS < 271 will be done with 15e and larger with 16e.

jrk 2010-08-24 20:05

[QUOTE=jasonp;226824]Many of the NFS@Home participants do not have the gigabyte per core that the 16e lattice siever requires, [/QUOTE]

[QUOTE=frmky;226862]GNFS-180 jobs run just fine with 15e. I have yet to see how the GNFS-184 job does, though. I've kept explicit mention of the separate 15e/16e queues off the status page for simplicity, but as a general rule SNFS < 271 will be done with 15e and larger with 16e.[/QUOTE]

Greg, if you are interested, you may want to do some trials on those 16e jobs to see if they can benefit from using 3 large primes on the side you are sieving the special-Q on. Doing that increases the yield somewhat for the smaller special-Q in the sieving range (not as much for larger Q), and allows for the factor base limit to be reduced as well without destroying yield. With the lower memory requirement of a smaller fb you may be able to fit the 16e tasks onto more PCs.

Note that the GGNFS siever has a limit of 96 bits for mpqs, though (and you'll get a lot of "mpqs failed" at that size). 90bit algebraic cofactors were used for 6,353+, with large primes up to 31 bits. Using exactly 3*lpb was worse, probably because it is harder to find good 3-way splits near that limit without one prime being too big. So if you have 33bit large primes, 95 or 96bit mpqs should suffice.

It's hard to tell exactly how much 6,353+ benefited from this (the limited sieving trial suggested a marginal 8% or so speed gain overall, most of that coming from the small Q end). A SNFS >270 will likely benefit more clearly.

Remember to set the lambda value to something larger than log(2^mfb)/log(fblim), if you decide to try this.
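jrk's lambda bound can be computed directly; the fblim value below is an illustrative assumption, not an actual 16e job parameter:

```python
import math

# jrk's condition: lambda must be set somewhat above log(2^mfb)/log(fblim).
mfb = 96          # e.g. the 96-bit mpqs cofactor limit mentioned above
fblim = 2**27     # hypothetical factor base limit, for illustration only

lam_min = mfb * math.log(2) / math.log(fblim)
print(f"lambda should exceed {lam_min:.2f}")  # 96/27 = 3.56 here
```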

Jeff Gilchrist 2010-08-25 03:35

883 curves done @ 26e7, default B2.

frmky 2010-08-25 04:15

[QUOTE=jrk;226885]Greg, if you are interested, you may want to do some trials on those 16e jobs to see if they can benefit from using 3 large primes on the side you are sieving the special-Q on. [/QUOTE]
Do you still have the filtering log from 6,353+? I'd like to see how that progressed.

bdodson 2010-08-26 14:44

[QUOTE=bdodson;226719]OK, sounds like sieving's about to restart here; so I'll be winding
down this ecm run. The current count is
3508+3393+6703+3682 = 15,916 curves, of which
there are eight complete t50's, = 13,604 curves
... -Bruce (all curves 26e7 with default B2)[/QUOTE]

OK, the last 7 curves were hung (or something), so I sent them
on to the next NFS@Home candidate (from among the ones listed
between 5p409 and 3mx607 on the "Status" page). That's c.
10.5t50, a "light 2t55" (above 2*5t50, instead of 2*5.7t50; the
t50 -> t55 -> t60 equivalences I'm using are far from exact). The
actual curve count is
3508+3393+6703+4595 = 18199.

If we started at c. 1t55 (from the previous OP ecm effort), so above
5t50, we're surely above 15t50, with c. 25t50 = 5t55 = t60; more like
3/5 = .6t60; and >> Greg's .5t60. Suppose someone could give us a
more precise status once the OP ecm server counts are updated with
the above, and Jeff's and everyone else's, so far?

Still plenty of room to go to remove [p58..p63]'s, at 2t60
(to 80% prob instead of 62%), and this number may now be
more likely to have a p70 than a p55. A good candidate for
ecm record effort. -Bruce

(The status count would be just p55-optimal 11e7 and p60-optimal
26e7 curves, not including the p50-optimal curves, yes?)

wblipp 2010-08-27 00:56

[QUOTE=bdodson;227152](The status count would be just p55-optimal 11e7 and p60-optimal 26e7 curves, not including the p50-optimal curves, yes?)[/QUOTE]

The number I've been updating in the first post counts 43e6, 11e7, and 26e7 curves, and calculates based on the output of ecm -v. That calculation is currently 4.90 t55 or 0.86 t60. The 43e6 curves contribute 0.28 and 0.04 to those totals.

William
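William's calculation can be sketched for the single B1=26e7 level, using the expected-curve figures that ecm -v printed earlier in the thread (7676 curves per t55 and 42206 per t60 for this number); the full bookkeeping sums the analogous fractions over all three B1 levels:

```python
# Expected curve counts for (3^607-1)/2 at B1=26e7, taken from the
# ecm -v output posted earlier in the thread.
expected = {55: 7676, 60: 42206}

def t_level(curves_done, digits):
    """Fraction of a t<digits> completed by curves at this single B1."""
    return curves_done / expected[digits]

# e.g. the 23785 curves at 26e7 tallied in the first post:
print(f"{t_level(23785, 55):.2f} t55, {t_level(23785, 60):.2f} t60")
```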

bdodson 2010-08-27 15:18

[QUOTE=wblipp;227218]The number I've been updating in the first post counts 43e6, 11e7, and 26e7 curves, and calculates based on the output of ecm -v. That calculation is currently 4.90 t55 or 0.86 t60. The 43e6 curves contribute 0.28 and 0.04 to those totals.

William[/QUOTE]

Ah; updating the first post. I'd have given simpler counts
if I'd been paying better attention (seeing counts from
previous posts already included). The 0.86 t60 looks very good!

-Bruce

chris2be8 2010-08-29 12:55

15 curves at 26e7, no factors.

My 0.1% (so I can say I contributed, even if not much).

Chris K

warut 2010-09-11 21:46

3^607-1 is now in the [URL="http://homes.cerias.purdue.edu/~ssw/cun/pmain910"]main Cunningham table[/URL].

wblipp 2010-09-11 22:33

[QUOTE=warut;229430]3^607-1 is now in the [URL="http://homes.cerias.purdue.edu/~ssw/cun/pmain910"]main Cunningham table[/URL].[/QUOTE]

Just barely. The threshold was increased to 610.

bdodson 2010-09-11 23:48

[QUOTE=wblipp;229437]Just barely. The threshold was increased to 610.[/QUOTE]

Just enough to fill the 4th and 5th holes. As gnfs polynomial selection
winds down on the c176, we're requesting the reservation on
the 4th (605-), to go along with the NFS@Home reservation for 607-.
-Bruce (for) Batalov+Dodson

frmky 2010-09-15 07:14

Sieving of 3,607- has begun. If you would like to help, go to [URL="http://escatter11.fullerton.edu/nfs/"]http://escatter11.fullerton.edu/nfs[/URL], download the BOINC software, and connect to NFS@Home. By default you will also receive tasks for 7,338+ or 6,346+, but you can disable the "lasievee" application in your [URL="http://escatter11.fullerton.edu/nfs/prefs.php?subset=project"]account preferences[/URL] to only work on 3,607- (and a few recycled tasks for 5,409- that are slowly trickling out for the next week or so). The sieving will require approximately 1.5GB of free memory per core toward the end, but a bit less for now.

Edit: Windows, Linux, and Mac OS X (Intel) are all supported.

