mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Cunningham Tables (https://www.mersenneforum.org/forumdisplay.php?f=51)
-   -   5- Table Discussion and OddPerfect.org (https://www.mersenneforum.org/showthread.php?t=4425)

Zeta-Flux 2005-06-15 02:14

William,

For those of us who are interested in your progress on the other roadblock, is there a place you are posting that information?

Thanks,
Pace

wblipp 2005-06-15 14:10

[QUOTE=Zeta-Flux]
For those of us who are interested in your progress on the other roadblock, is there a place you are posting that information?[/QUOTE]

There isn't anyplace yet - that's a second tier priority among the many things that are needed at oddperfect.org.

At present the C246 from 19[sup]193[/sup]-1 is about 140 curves ahead of the C244 from 5[sup]349[/sup]-1 at the 55 digit (B1=11e7) test level. The ECM Server keeps the two counts close together, but off-server work sometimes opens a gap. The "19" is presently ahead because of recent off-server work.

The ecm server code for initializing slave servers is now working, so someone "really really" interested in this could set up a new slave server with no ecm.ini file to get an update.


Zeta-Flux 2005-06-15 16:33

William,

I understand that there are other, more important, aspects of oddperfect.org. I just thought you might post the information about the other roadblock here, when you update the information about the first roadblock. (So it won't take up that much time.)

Also, I downloaded the ECM client, and it was significantly slower (I'm assuming it was because it was version 5, and I'm using version 6). So there is a reason some of us prefer to ECM on our own.

By the way, why do you want to keep the two roadblocks about equal? It seems to me that if you don't find a factor using ECM you will need computers to do sieving on. Wouldn't it be better if one of the numbers was ready to be started on SNFS while the other was still ECMing away on those computers which didn't want to sieve?

Best,
Pace

wblipp 2005-06-15 20:59

1. Yes, I'll mention the status of "19" here when I update "5."

2. My idea was to use the slave server only as a method to peek into the master server. On reflection, a better suggestion is to use the ecmadmin program.

3. ecmclient and ecm are independent. It's best to have the latest version of each, but any combination will work.

4. I've changed the ecm server at oddperfect.no-ip.com:8201 to use the "smallest composite" method with 90% of the curves going to the smaller value. Over time this will move the "5" substantially ahead of the "19", at least based on on-server work. Since "19" is presently ahead, it would be nice to bias the work towards it, but I don't think there is a server-only method to accomplish that. Thanks for the suggestion.

Zeta-Flux 2005-06-15 22:45

1. Great! Thanks!

3. I have ECM 6.0, but when I downloaded the program from ElevenSmooth it ran version 5. :| Did I do something wrong?

4. I guess, to conform, I will work on the "5" block. ;)

wblipp 2005-06-16 02:28

[QUOTE=Zeta-Flux]
3. I have ECM 6.0, but when I downloaded the program from ElevenSmooth it ran version 5. :| Did I do something wrong?[/QUOTE]

I haven't updated the ElevenSmooth package yet, partly because I've not yet seen a compilation of ecm.exe versions optimized for various processors. So don't use the ecm.exe from the package, copy in the version 6 that you already have.

If you have a 2.6.2 ecmclient, use that (and its configuration file) instead of the ecmclient.exe in the package, too.

smh 2005-06-16 06:48

[QUOTE=wblipp]I haven't updated the ElevenSmooth package yet, partly because I've not yet seen a compilation of ecm.exe versions optimized for various processors. So don't use the ecm.exe from the package, copy in the version 6 that you already have.[/QUOTE]

I think this is v6.0

[url]http://www.mersenneforum.org/showpost.php?p=52303&postcount=105[/url]

wblipp 2005-07-29 19:48

OddPerfect.org's curves for 5[sup]349[/sup]-1 did not get integrated into the Cunningham tables - possibly because the chatter on this thread about 19[sup]193[/sup]-1 caused it to be overlooked. I've just posted a full update in the 5- table thread.

19[sup]193[/sup]-1 has completed 820 curves at 110M.

wblipp 2005-08-19 19:25

19[sup]193[/sup]-1, the second known roadblock at oddperfect.org, has completed
832 curves at 110M
11 curves at 260M

Progress has slowed substantially on this number because the majority of resources are presently committed to the first roadblock, 5[sup]349[/sup]-1.

akruppa 2005-08-19 19:30

Dear William,

can you let the ECMNET server hand out only 19^193-1 assignments from now on?

Thank you,
Alex

Zeta-Flux 2005-08-19 20:02

Alex,

Are you going to attempt the 5 roadblock? If so, I'll stop my ECM work (which is not assigned by William).

Best,
Pace

wblipp 2005-08-19 20:18

[QUOTE=akruppa]
can you let the ECMNET server hand out only 19^193-1 assignments
[/QUOTE]
It's been done. This is sooner than I was expecting!

William

alpertron 2005-08-19 20:19

I think it would be best if Alex could coordinate the SNFS sieving with the people who were trying to factor 5^349-1, in order to make this stage faster.

akruppa 2005-08-19 20:21

>Are you going to attempt the 5 roadblock?

Yes.

>If so, I'll stop my ECM work (which is not assigned by William).

That would be nice.

Thanks,
Alex

akruppa 2005-08-19 20:26

>I think it would be best if Alex could coordinate the SNFS sieving with the people who were
>trying to factor 5^349-1, in order to make this stage faster.

Part of the reason I'm taking on this number is that I wanted something that lets me run the sievers unattended for a while. Sieving is progressing nicely already and won't take terribly long - there are a couple of Athlons at my University I can use. I have little personal time at the moment and could not coordinate distributed sieving - if I may, I would like to attempt this one on my own. If there is interest (and no ECM factor), we could do the 19 roadblock with volunteer sievers later on.

Alex

philmoore 2005-08-19 23:27

What resources do you estimate you will need to do the linear algebra?

akruppa 2005-08-20 09:47

The size of the number is very close to 2,811- that NFSNET did (very slightly smaller) so I expect I will get a matrix of similar size. In [URL=http://www.mersenneforum.org/showthread.php?t=2364]this thread[/URL], xilman states the size of the (properly merged) matrix as (9.5M)^2 with 670M non-zero entries. This means the matrix could fit into 2.5GB of memory and could be done on a single-cpu 32-bit machine, if need be - it would take forever, though.
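
The 2.5GB estimate checks out, assuming the usual sparse encoding of one 32-bit column index per non-zero entry (an assumption on my part, not stated in the post):

```python
nonzeros = 670_000_000      # non-zero entries quoted for the 2,811- matrix
index_bytes = 4             # one 32-bit column index per entry (assumed encoding)
print(f"{nonzeros * index_bytes / 2**30:.2f} GiB")
```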

I plan to oversieve a lot more than NFSNET did, so I should get a smaller and less heavy matrix. I'm not yet entirely sure where I'll solve it - maybe I could use a few Athlon XPs at the chair for Efficient Algorithms (the same ones that are doing the sieving right now) with 100 Mbit Ethernet, but there may be a much better alternative.

I've often heard about an Opteron cluster with dual-cpu nodes and Infiniband interconnects at our department, which would be ideal for the matrix jobs. I'll need to talk to the people responsible and ask if I can use it, but supposedly the cluster is not utilised too much, so I have high hopes there. I'd then still have to get the parallel version of CWI's block Lanczos, and compile and test it on the cluster.

My current order of preference for the linear algebra is: 1. that Opteron cluster, 2. the Efficient Algorithms Athlons, 3. go begging CWI or Jens Franke for help, 4. buy ram, stick it in a single machine and wait for it.

Alex

akruppa 2005-09-07 07:35

Progress
 
Sieving is about 50% done now. Note, I had started sieving quite a while before officially reserving the number.

Alex

Numbers 2005-09-07 09:24

Have you sorted out where you're going to be doing the algebra?

akruppa 2005-09-07 10:52

No, not yet. I have an exam on October 2nd; studying for that is what I focus on most of the time now. Sieving will take until well after that date, so I'll go looking for suitable hardware sometime in October.

Alex

Zeta-Flux 2005-10-07 04:18

Alex,

How was your exam?

Hope it went well,
Pace

akruppa 2005-10-07 11:03

I, ummm, kinda misremembered the date :alex: it is on the 10th in fact - monday.

Alex

xilman 2005-10-07 11:31

[QUOTE=akruppa]I, ummm, kinda misremembered the date :alex: it is on the 10th in fact - monday.

Alex[/QUOTE]
Oh well, at least you misremembered in the correct direction. Turning up on the 10th for an exam that actually occurred on the 2nd would be embarrassing.


Paul

akruppa 2005-10-11 08:33

The exam is over and went much better than I had expected. Many thanks to Prof. Ernst W. Mayr for a very relaxed and successful examination! :wink:

And I just got an email saying that I'll get an account on the Opteron cluster!!! :banana:

I like how this week is starting.

Alex

akruppa 2005-10-11 14:54

I've got the account on the cluster now. It's a 36 node cluster with quad-Opteron 850 nodes - 144*2.4 GHz of Opteron power. That's one bad-ass piece of number cruncher. And the crazy part is: according to the reservation schedule, this monster is idle most of the time. Sitting there, doing nothing... I can't believe it.

Time for some [I]serious[/I] factoring work.

Alex

Zeta-Flux 2005-10-11 15:53

Sweet! Enjoy this week while it lasts!

R.D. Silverman 2005-10-11 16:47

[QUOTE=akruppa]I've got the account on the cluster now. It's a 36 node cluster with quad-Opteron 850 nodes - 144*2.4 GHz of Opteron power. That's one bad-ass piece of number cruncher. And the crazy part is: according to the reservation schedule, this monster is idle most of the time. Sitting there, doing nothing... I can't believe it.

Time for some [I]serious[/I] factoring work.

Alex[/QUOTE]

Nice. What are your plans (after 5,349-)? NFSNET will do 2,764+ after
2,761-.

I could do a lot with that kind of hardware....... :grin:

Do you have a parallel B-L running???

I just started 2,1366M. With my current hardware it will take ~10 days
to sieve, using about a dozen machines, and another week to do the LA
on a single machine... I plan on doing all 2,LM up to 2,1402L.

akruppa 2005-10-11 18:16

I asked Peter and Herman for the parallel block-Lanczos version of gauss. When I get that, I'll probably spend a few days compiling it for the Opteron/Infiniband cluster and testing. I once tried to compile their line siever for Opteron, but it didn't recognise the 64 bit architecture. If gauss has a similar problem, I may have to tweak a few things to get it to full speed, so this phase may take a while.

Once the parallel block-Lanczos runs satisfactorily, I can offer matrix solving service to other factorers.

I test filtered the 5,349- relations today (dupe removal only), there are ~110M unique relations. I expect I'll need about 150M, so sieving will take a bit more time. After that, I don't know yet - the Athlon64 and Opteron are excellent at GMP work, so I'll probably do some ECM work. I'm kinda attracted to the OPN research right now, I may do 3221^73-1 later if there's no ECM factor. This could be done with volunteer sievers. Other than that, I may do a bit of ECM and NFS on the base 3 Cunningham numbers. It would be nice to clear the tables up to exponent 500.

Alex

Edit: Oh, and I'm working with Paul Zimmermann on Aliquot sequences. Those will need a lot of ECM and GNFS, too.

akruppa 2005-10-12 06:01

Bummer. The development of CWI's parallel Block Lanczos was largely sponsored by Microsoft Research and is not available to outside parties. Maybe I can use Jens Franke's.

Alex

akruppa 2005-10-19 17:39

Did a test pruning today; there are ~2.5M more ideals than relations. With the partial output files that weren't included in this test, there should probably be enough for a dependency, but I'll go for a lot of excess.

Nothing new on the parallel block-Lanczos matter. Now I have access to the perfect hardware for the matrix step, but no software to run on it. :sad:

Alex

Mystwalker 2005-11-07 16:24

Just to satisfy my curiosity:

What is the status of the 5,349- SNFS factorization, Alex?

akruppa 2005-11-07 18:02

I did another test filtering a few days ago and it seemed that I had enough relations for a dependency. I'll sieve a while longer, though - I want a lot of excess to get a smaller matrix. I still have no access to a parallel Block-Lanczos implementation and apparently won't get one... I may have to do the matrix on a single quad-Opteron node, provided I can reserve one for a few months in one piece. So nothing dramatically new so far... I'll post a report when I start post-processing.

Alex

akruppa 2005-12-04 22:17

Did another test filtering today. I modified the CWI filter program to run on Opterons in 64-bit mode; it can handle primes up to 2^31 there (on x86 something broke with primes greater than approx. 1G). I needed that since I used lp=2^31 for the sieving, as this considerably increased yield.

CWI's filter program produces much more accurate ideal counts than the home-grown code I used for earlier test filter runs. The bad news: the excess is much less than I had hoped, only about 4M. After clique removal, there are about 35M ideals left, so merging will reduce this to a matrix of maybe (7M)^2 at best - probably larger than that. This is too large in any case.

Need to do more sieving. I used to sieve on a couple of Athlons, but I think I'll move everything over to the Opteron cluster. It's time to get this over with.

Alex

ewmayer 2005-12-05 20:32

[QUOTE=akruppa]The exam is over and went much better than I had expected. Many thanks to Prof. Ernst W. Mayr for a very relaxed and successful examination! :wink:[/QUOTE]For those who were wondering - no relation there. (My last name has an extra 'e' - and neither I nor Alex's examining professor is any relation to the recently-deceased famous Harvard zoologist and evolutionary theorist Ernst Mayr, AFAIK.) Alex and I may be buddies, but even we have our limits as far as beer-lubricated academic nepotism is concerned. ;)

akruppa 2005-12-06 09:58

I've split the patents discussion into a [URL=http://www.mersenneforum.org/showthread.php?t=5092]new thread[/URL] in the Lounge.

Alex

akruppa 2005-12-26 17:46

The CWI filter program gave me a bit of trouble. It turned out that removing dupes and singletons worked fine, but merging didn't! Internally, filter uses the 31st bit of the dword storing the primes as a flag for whether the prime has been paired yet or not, so 31-bit primes would get mangled. I changed the code to store primes shifted right by one bit - after all, almost all of them are odd, so the lsb doesn't hold a lot of information. The prime 2 is represented by 0.

Enter the next catch - free relations store their ideals in place of the primes, and the ideals can be even or odd! So another night's worth of hacking went by, making the code treat free and non-free relations differently. Only primes are shifted right now, ideals aren't. This means I'll get free relations only on primes <2^30, but that's ok - I don't think there are many primes >2^30 where all ideals are present in my relations.
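
A small sketch of the collision and the shift workaround described above (the flag position and helper names are illustrative, not CWI's actual code):

```python
FLAG = 1 << 30   # the bit filter uses to mark a prime as paired (illustrative position)

def encode(p):
    # store odd primes shifted right by one bit; represent the prime 2 as 0
    return 0 if p == 2 else p >> 1

def decode(v):
    return 2 if v == 0 else (v << 1) | 1

p = 2**31 - 1                    # a 31-bit prime
assert p & FLAG                  # unencoded, it collides with the flag bit
assert encode(p) & FLAG == 0     # encoded, the flag bit is free again
assert decode(encode(p)) == p    # and the prime round-trips
assert decode(encode(2)) == 2
```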

Started a merging run last night; it is now at pass 18 and still has 8.6M unbalanced ideals :sad:. I doubt I'll get the matrix size much below (8M)^2, which will take awfully long to solve. Maybe I borked something during the hacking and merging doesn't work like it should... need to do a regression test next.

Stay tuned to see me mumble to myself some more,
Alex

Numbers 2005-12-28 15:28

Please feel free to mumble away. I'm sure I am not alone in being fascinated to see how this turns out. In fact I have what is possibly a very silly question.

As I understand it, if we are trying to factor n, the idea is to find pairs x, y satisfying x^2 == y^2 (mod n) with GCD(xy, n) = 1, whereupon GCD(x-y, n) is a factor of n.

I get that bit. And, obviously, sometimes GCD(x-y, n) = 1, which is not a very useful factor. That explains why we need a lot of relations: some of them are not useful. But seeing as you only need one relation to find a factor, why do you start by finding millions of them? Isn't that just overkill, or have I completely misunderstood?

xilman 2005-12-28 16:06

[QUOTE=Numbers]Please feel free to mumble away. I'm sure I am not alone in being fascinated to see how this turns out. In fact I have what is possibly a very silly question.

As I understand it, if we are trying to factor n, the idea is to find pairs x, y satisfying x^2 == y^2 (mod n) with GCD(xy, n) = 1, whereupon GCD(x-y, n) is a factor of n.

I get that bit. And, obviously, sometimes GCD(x-y, n) = 1, which is not a very useful factor. That explains why we need a lot of relations: some of them are not useful. But seeing as you only need one relation to find a factor, why do you start by finding millions of them? Isn't that just overkill, or have I completely misunderstood?[/QUOTE]Close but no cigar.

What we are looking for is a set of relations which [b]when multiplied together[/b] satisfy x^2==y^2 (mod n).

For instance, suppose we have -6 == 10 mod N, -2==5 mod N and 27 == 8 mod N
(I'm just making these up as I go along!)

None of these are of the form x^2==y^2 mod N, but if you multiply them together, you find 324 == 400 mod N, or 18^2 == 20^2 mod N, from which gcd(18+20,N) may be a useful factor of N, if N is divisible by 19, that is.

The example above is greatly oversimplified, but it illustrates the fundamental idea behind virtually all factor-base methods for factoring, including CFRAC, QS and NFS.
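
Worked through in code, the toy example looks like this (the value of N is hypothetical, chosen only so that 19 divides it):

```python
from math import gcd, isqrt

# The three made-up relations: -6 == 10, -2 == 5, 27 == 8 (mod N)
lhs = (-6) * (-2) * 27    # product of left-hand sides  -> 324 = 18^2
rhs = 10 * 5 * 8          # product of right-hand sides -> 400 = 20^2
x, y = isqrt(lhs), isqrt(rhs)
assert x * x == lhs and y * y == rhs

N = 19 * 101              # hypothetical N divisible by 19
print(gcd(x + y, N))      # gcd(38, 1919) = 19, a non-trivial factor
```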

Paul

alpertron 2005-12-28 16:44

This is also explained in the introduction of the [URL=http://www.mersennewiki.org/index.php/SIQS]article about SIQS[/URL] on MersenneWiki.

akruppa 2005-12-28 17:33

The merging finished yesterday, filter reports 7.21M unbalanced ideals. With small primes added, the matrix should be about (7.35M)^2, so the last merging passes fared better than I expected.
I'm currently merging relations for 3,463+. The hacked version has finished already, I'll repeat with the same parameters and input file but using the original CWI version. Fingers crossed that the output will be identical...

Alex

Numbers 2005-12-28 23:50

[QUOTE=xilman]Close but no cigar.[/QUOTE]

Shame about the cigar, would have gone well with a brandy after dinner.

Found a nice paper on this:
[url]http://www.mersenneforum.org/attachments/pdfs/FSP200-50SNV.pdf[/url]

Highly readable and not too difficult, but probably a bit out of date.

Many thanks, again.

akruppa 2005-12-30 23:36

Oh, the joy of hacking someone else's code.

My attempts to re-run the merging pass with the original filter program were met with limited success - the program locked up hard each time. It kept running but did not even respond to kill -9. What's more, if you started it more than once, the machine locked up hard! This is now the third time the admin has had to reboot the node I was using. Not sure if he'll see the humour in that.

Turns out that the CWI tools put a time stamp into binary format files, stored simply as a time_t variable. But in 64-bit applications sizeof(time_t) is 8, in 32-bit only 4. I created the file with my 64-bit version; reading it with the original 32-bit binary causes the data to be read shifted by 4 bytes. This causes a length value to be read as 0, which after subtracting a header length becomes negative. This small negative value is then used as the length to read from a file. If I'm not mistaken, the value gets cast to 64 bits with sign extension on the way to the system call. If the kernel then interprets it as unsigned, it'll try to read 2^63-1 bytes = 10,000,000 TByte from disk. Would explain why it takes so long...
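
The mismatch is easy to reproduce (a sketch of the layout problem only, not the actual CWI file format):

```python
import struct

# A 64-bit writer: an 8-byte time_t stamp followed by a 4-byte length field.
buf = struct.pack("<qI", 1135900800, 12345)   # a 2005-era timestamp, length 12345

# A 32-bit reader assumes a 4-byte time_t, so everything after the stamp is
# read shifted by 4 bytes: the "length" it sees is the high half of the
# 64-bit stamp, which is zero for any timestamp below 2^32.
stamp32, length32 = struct.unpack_from("<iI", buf, 0)
print(length32)   # 0 instead of 12345
```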

Long story short, beware of binary relations files.

Alex

Wacky 2005-12-31 00:53

[QUOTE=akruppa]Long story short, beware of binary relations files.[/QUOTE]

Or, more generally, be careful about all of your relations.

akruppa 2006-01-20 17:58

[QUOTE=Wacky]Or, more generally, be careful about all of your relations.[/QUOTE]

The nice thing about NFS is that you can get a lot of excess relations and then choose which ones to keep! :wink:

Brief history: after sieving sq on the rational side in [30M, 70M] it turned out that a sieve region of 16384x8192 (gnfs-lasieveI14e) would use a lot of special-q values (about up to 150M) and probably produce a pretty large matrix. So sieving on the algebraic side was done with a 32768x16384 region. There is no mention of a gnfs-lasieveI15e program in Franke's documentation, but it turned out that a simple "make gnfs-lasieveI15e" worked just fine. The binary did not run on Pentiums, as the L1 cache is too small, but posed no problem on Athlons or Opterons. Sieving of sq on the algebraic side was done in the interval [20M, 70M]. In addition, some line sieving over the lattice siever factor base primes was done. Later, a part of the rational side was re-sieved with the large sieve region size, but this didn't add too many new relations, unsurprisingly.

All in all there were 174237784 unique relations, after removing singleton ideals of norm >70M, 107136618 relations remained on 79210759 ideals of norm >70M (plus about 2*4118064 smaller ideals) for a real excess of about 19689731.
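
The quoted excess is just the relation count minus the total ideal count; a quick check of the figures above:

```python
relations = 107_136_618           # unique relations after singleton removal
large_ideals = 79_210_759         # ideals of norm > 70M
small_ideals = 2 * 4_118_064      # the smaller ideals, both sides
print(relations - large_ideals - small_ideals)   # 19689731
```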

The clique algorithm was run in many small passes. By default, the clique algorithm tries to halve the excess in each pass, but experiments suggested that running the algorithm more often, discarding fewer relations each time, leads to better results. When the algorithm was allowed to only discard 100000 relations per pass at most, the resulting data set included about 260000 fewer ideals than after running it with the default behaviour (23505974 vs. 23769293). Only about a 1% improvement, but so easily obtained that it should be considered worthwhile.

Merging with 35 passes, merging ideals that appear up to 25 times and discarding relation sets with more than 17.5 relations, left 7308156 equations on 7139734 ideals >1M, the resulting matrix being 7296206x7314657. The filter program was hacked (note the pattern?) to only discard relation sets of weight >=maxrels+0.5 if the maximum allowed number of relations were discarded. That way another pass can be run quite easily to discard exactly as many of the heaviest relation sets as desired. This left 7290681 equations on 7139474 ideals >1M and produced a matrix of 7296195x7297185, oversquare by 990. This matrix is currently running.

The behaviour of the parallel block Lanczos (not the distributed one, mind you, as I don't have it) is pretty unintuitive... the single-cpu version had an estimated time of 53 days to solve the matrix, and the two-cpu version almost exactly the same, so the second cpu did not result in any speedup at all. However, the four-cpu version has an estimated time of about 28 days. That is now running. Depending on how often I get time on a cluster node, we might see factors in late February or early March.

Alex

R.D. Silverman 2006-01-20 18:53

[QUOTE=akruppa]

Brief history: after sieving sq on the rational side in [30M, 70M] it turned out that a sieve region of 16384x8192 (gnfs-lasieveI14e) would use a lot of special-q values (about up to 150M) and probably produce a pretty large matrix.

[/QUOTE]


This surprises me. A bit.

For 5x^6-1, a typical rational norm will be somewhere around 10^45 to
10^48, while an algebraic norm will be 'typically' about 10^38 to 10^42.
It is usually better to run special q's on the side with the larger norm.
The algorithm works best when the norms we hope are smooth are as close
as possible on each side. For sq ~ 10^7, rational norm/sq will
be close to the algebraic norm. The opposite is not true; the norms
would be lopsided.

akruppa 2006-01-20 21:34

Yes, the norms on the algebraic side are a bit larger and the yield with sq on the algebraic side is a bit better, but the difference isn't that great. At sq=30M, alg. sq produce about 20.9 relations per sq, rat.sq about 24.6, about 18% more. As sq increases, the norms on the algebraic side grow faster, so the difference diminishes. With sq=70M, the yield is about 15.9 and 17.9, resp., about 12% difference.
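
The percentages follow directly from the per-sq yields quoted above:

```python
# relations per special-q, as quoted above
alg_30M, rat_30M = 20.9, 24.6
alg_70M, rat_70M = 15.9, 17.9
print(f"{rat_30M / alg_30M - 1:.1%}")   # 17.7%, i.e. "about 18%"
print(f"{rat_70M / alg_70M - 1:.1%}")   # 12.6%, i.e. "about 12%"
```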

I did most of the sieving on the algebraic side on the cluster and had a few scripts to start jobs on nodes. I simply used them until sq=70M, the same as on the rational side. That wasn't optimal... but I was being lazy.

Alex

akruppa 2006-01-20 22:17

The figures in my last post were done with the smaller sieve region. With the larger region, the size of the norms shifts towards the algebraic side some more, and the yield at SQ=70M is almost exactly the same: about 32.9 vs 32.7/sq. So it didn't really matter which side I did at these high sq.

Alex

akruppa 2006-01-28 11:05

Ah well, all good things must end. Usage of the cluster has increased so much that they inquired who is actually using it for research projects, and since my diploma thesis is long finished by now, I don't really have a good excuse to keep using it. They'll delete my account there shortly. Fortunately they agreed to let me still finish the 5,349- matrix! Fingers crossed that the dependencies it'll find will be useful, after all my filter hacking...

My time with this cluster was another case where I got far more than I had hoped for, so no hard feelings here. I would have liked to finish 3+ to exponent 500, but there's still time for that later. Besides, I have to prepare for my last two finals and hunting idle time on the cluster was quite a distraction. Maybe it's for the better this way...

Due to the exams, I will rarely be online until April.

Alex

R.D. Silverman 2006-01-30 19:39

[QUOTE=akruppa]Ah well, all good things must end. Usage of the cluster has increased so much that they inquired who is actually using it for research projects, and since my diploma thesis is long finished by now, I don't really have a good excuse to keep using it.

Due to the exams, I will rarely be online until April.

Alex[/QUOTE]

To the tune of a familiar American folk song......


Oh, grad students come and grad students go,
And the best we have is gone I know,
But if he don't come back this spring,
We'll have to stop the factoring!

akruppa 2006-02-11 20:12

The matrix is about 50% done now. It will probably be finished some time in early March.

Alex

Zeta-Flux 2006-03-06 02:47

Alex,

Update?

akruppa 2006-03-06 07:25

The matrix is 87.7% done. Will probably take another week.

Alex

akruppa 2006-03-14 19:42

The matrix step is completed now. The sqrt step will be a bit of a headache again; I will need to adapt it to 31-bit primes first. It also likes to use primes just above 10^9 for CRT, and I forgot to remove relations containing such primes during the filtering, so I'll have to find a different set of CRT primes to use. I have no idea how difficult that will be and I'm rather occupied atm, so the factors may take a while yet - sorry!

Alex

akruppa 2006-03-21 18:52

Another brief update: making sqrt use different CRT primes was not hard at all; only a constant in the code needed to be changed. I found a large enough interval of primes that do not appear among my relations, so I'm using those for CRT now. Unfortunately, the program still aborted with a failed assertion. I'll have to check whether the 31st bit of primes gets mangled somewhere, but due to little spare time, this will take quite a while.

Alex

Zeta-Flux 2006-04-07 01:08

Update time yet?

akruppa 2006-04-07 04:14

Sorry, no... had an exam yesterday and will have another (the last one!) on the 21st. I'll play with sqrt again after that.

Alex

akruppa 2006-04-09 15:55

Turns out all I had to do was change a #define...

[CODE]
Original number had 244 digits:
2180075438084173168593947502718622130302257527967068979673625647025361320317722693777938884468406733797422512728450828995171625828330838682415089213199694141322162661316080591217910460859207203227286693244038606742662977922009304165840148925781
Probable prime factor 1 has 108 digits:
465248728728895394653153888943818531375975270853944206732179708325578302943996260672261958710669770611963071
Probable prime factor 2 has 136 digits:
4685827823840272482505664867616517656515469121675968181975198207240785431020525292215919392072745559212525121242512397104031773498892011
[/CODE]

Found right on the first dependency.
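
Since the C244 is a cofactor of 5^349-1, the result is easy to verify independently (a quick check using the numbers quoted above):

```python
p108 = 465248728728895394653153888943818531375975270853944206732179708325578302943996260672261958710669770611963071
p136 = 4685827823840272482505664867616517656515469121675968181975198207240785431020525292215919392072745559212525121242512397104031773498892011
c244 = p108 * p136

assert len(str(c244)) == 244       # the 244-digit composite
assert (5**349 - 1) % c244 == 0    # and it divides 5^349-1
print("factorization verified")
```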

Well, that was it! :smile:

Alex

Mystwalker 2006-04-10 08:02

Excellent work, Alex! :bow:

And clearly [b]not[/b] an ECM miss. :showoff:

R.D. Silverman 2006-04-10 11:04

[QUOTE=akruppa]Turns out all I had to do was change a #define...

[CODE]
Original number had 244 digits:
2180075438084173168593947502718622130302257527967068979673625647025361320317722693777938884468406733797422512728450828995171625828330838682415089213199694141322162661316080591217910460859207203227286693244038606742662977922009304165840148925781
Probable prime factor 1 has 108 digits:
465248728728895394653153888943818531375975270853944206732179708325578302943996260672261958710669770611963071
Probable prime factor 2 has 136 digits:
4685827823840272482505664867616517656515469121675968181975198207240785431020525292215919392072745559212525121242512397104031773498892011
[/CODE]

Found right on the first dependency.

Well, that was it! :smile:

Alex[/QUOTE]


Nice result......:bow:

jchein1 2006-04-10 16:51

Dear Alex,


May I ask you: is 5,349- OddPerfectSearch's only roadblock left to prove any opn > 10^500?

What a great pleasure it is to see your final result on the 5^349-1. Congratulations.


Cheers,


Joseph E.Z. Chein

philmoore 2006-04-10 22:45

Congratulations, Alex, on the completion of an impressive factorization! I see that this is the second-largest penultimate factor found of a Cunningham number.

To answer Joseph Chein's question, William has said that he believes this was the last remaining roadblock, but that actually putting all this together into a proof will be necessary to verify it. The methods of the first Brent and Cohen paper should now be sufficient to show that any OPN exceeds 10^500, but the methods of the second paper (with te Riele) should be able to push this limit somewhat higher, perhaps to 10^625 or even greater. I don't even know if anyone has yet established that 2[sup]520[/sup](2[sup]521[/sup]-1) and 2[sup]606[/sup](2[sup]607[/sup]-1), at 314 and 366 decimal digits respectively, are in fact the 13th and 14th perfect numbers in order of size. The next perfect number, 2[sup]1278[/sup](2[sup]1279[/sup]-1) at 770 digits, presumably the 15th, may actually be proven so given a few more roadblock SNFS factorizations. But the next one, 2[sup]2202[/sup](2[sup]2203[/sup]-1) at 1327 digits, definitely seems out of reach without new methods, as it would require SNFS factorizations of numbers with over 400 digits.
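
The digit counts quoted above are easy to confirm:

```python
# Even perfect numbers have the form 2^(p-1) * (2^p - 1) for a Mersenne prime 2^p - 1.
for p, expected_digits in [(521, 314), (607, 366), (1279, 770), (2203, 1327)]:
    n = 2**(p - 1) * (2**p - 1)
    assert len(str(n)) == expected_digits
print("all digit counts match")
```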

jchein1 2006-04-11 02:53

Dear Philmoore,

Thanks. I guess you mean that William and his company might use BCR's "lifting" algorithms to push opn > 10^625 or higher?


Regards

Joseph E.Z. Chein

akruppa 2006-04-11 09:34

Thanks, Dennis, Bob, Joseph and Phil!

Alex

philmoore 2006-04-14 23:00

[QUOTE=jchein1]Dear Philmoore,

Thanks. I guess you mean that William and his company might use BCR's "lifting" algorithms to push opn > 10^625 or higher?

Regards
Joseph E.Z. Chein[/QUOTE]

We should ask William about this. For those who are interested, the relevant paper is at:
[url]http://wwwmaths.anu.edu.au/~brent/pub/pub116.html[/url]

wblipp 2006-04-15 04:07

[QUOTE=philmoore]We should ask William about this. For those who are interested, the relevant paper is at:
[url]http://wwwmaths.anu.edu.au/~brent/pub/pub116.html[/url][/QUOTE]

Yes, the methods of the BCR paper can be used to stretch a boundary proof. But I'm under the impression that the work to stretch a boundary is essentially lost when another factorization is achieved, so we don't want to undertake that task until we are confident that we have reached a roadblock that is beyond the project's factoring resources. The next roadblock is still undergoing ECM at the 55 digit level (6453 curves have been run at B1=110M). Partway through the 60 digit level this number becomes a reasonably qualified candidate for SNFS, and we will see about interest. It would be a very large but not record breaking undertaking, and there will be less widespread interest than Alex's recent factorization because the base is not in the Cunningham range.

All of this is premature because I continue to work very long hours and we don't even have the tools to complete an "unstretched" proof yet, so it's early to squabble about whether now is the time to stretch the proof.

R.D. Silverman 2006-04-17 14:15

[QUOTE=wblipp]Yes, the methods of the BCR paper can be used to stretch a boundary proof. But I'm under the impression that the work to stretch a boundary is essentially lost when another factorization is achieved, so we don't want to undertake that task until we are confident we have reached a roadblock that is beyond the project's factoring resources. The next roadblock is still undergoing ECM at the 55-digit level (6453 curves have been run at B1=110M). Partway through the 60-digit level this number becomes a reasonably qualified candidate for SNFS, and we will see about interest. It would be a very large but not record-breaking undertaking, and there will be less widespread interest than in Alex's recent factorization because the base is not in the Cunningham range.

All of this is premature because I continue to work very long hours and we don't even have the tools to complete an "unstretched" proof yet, so it's early to squabble about whether now is the time to stretch the proof.[/QUOTE]


OK. Suppose those undertaking this project succeed in raising the bound.
I have no doubt that they will succeed.

Allow me to ask:

What insight is gained by this computation?

Allow me to quote Hamming:

The purpose of computing is insight, not numbers.

If this computation leads to any new mathematical insights about the
problem, I will applaud it heartily. Until then, it is just mindless computing.

On the other hand, if raising the bound helps convince people of the
unlikelihood that OPN's exist, and thereby dissuades such people from
trying to actually find an OPN, then I will also applaud the effort.

ewmayer 2006-04-17 17:13

[QUOTE=R.D. Silverman]Allow me to quote Hamming:

The purpose of computing is insight, not numbers.

If this computation leads to any new mathematical insights about the
problem, I will applaud it heartily. Until then, it is just mindless computing.[/QUOTE]
I think a little more care is required in applying Hamming's maxim here - after all, a skeptic could level a virtually identical accusation at the Cunningham table efforts - to what extent does each new factorization lead to appreciable progress on the theoretical/algorithmic/software fronts, and to what extent is it just another in-itself-completely-useless addition to Sam Wagstaff's "stamp collection?" (As some wag whose name I don't recall put it.)

I don't think you can so easily divorce the "mindless computation" aspect of these kinds of projects from the "genuine advance in insight" aspect - at the very least the number-crunching helps us continually improve and debug our algorithms and the software implementations thereof, and while that's going on, one hopes that there is a parallel continual ferment on the theoretical side, often spurred by frustration at the seeming intractability of "the really big" computations according to the current state of the art.

I do take your (implied) point about the dangers of computation becoming a mere end in itself though.

R.D. Silverman 2006-04-17 17:28

[QUOTE=ewmayer]I think a little more care is required in applying Hamming's maxim here - after all, a skeptic could level a virtually identical accusation at the Cunningham table efforts - to what extent does each new factorization lead to appreciable progress on the theoretical/algorithmic/software fronts, and to what extent is it just another in-itself-completely-useless addition to Sam Wagstaff's "stamp collection?" (As some wag whose name I don't recall put it.)

I don't think you can so easily divorce the "mindless computation" aspect of these kinds of projects from the "genuine advance in insight" aspect - at the very least the number-crunching helps us continually improve and debug our algorithms and the software implementations thereof, and while that's going on, one hopes that there is a parallel continual ferment on the theoretical side, often spurred by frustration at the seeming intractability of "the really big" computations according to the current state of the art.

I do take your (implied) point about the dangers of computation becoming a mere end in itself though.[/QUOTE]

I agree with you about the Cunningham project. While it holds great
historical interest for me personally, the factorizations per se are not
really useful. I did promise Dick Lehmer that I would push to finish the
base 2 tables [up to the current limits of the project].

I use the project as a vehicle for tinkering with my NFS code [and earlier
for my MPQS code etc]. Early on, it was also a good reason for me to learn
some mathematics. Then when NFS came along, it was a reason for
me to learn some algebraic field theory.

In a paper still under referee review, I showed (theoretically) how to find an improved sieve region for line sievers. I have a related argument for lattice sievers [I adjust the SIZE of the sieve region as the special-q varies]. It takes a lot of data to see whether the improvement is measurable given the many uncertainties in the run-time. Every time I do a factorization I devote one extra machine just to "sampling" various sieve-region changes to compare yields against the actual factorization.

BTW, it was Oliver Atkin who first coined the phrase "Wagstaff's Stamp Collection".

philmoore 2006-04-17 23:28

[QUOTE=R.D. Silverman]Allow me to ask:

What insight is gained by this computation?

If this computation leads to any new mathematical insights about the
problem, I will applaud it heartily. Until then, it is just mindless computing.
[/QUOTE]

Algorithmic improvements do not necessarily depend upon new mathematical insights. For example, many of George Woltman's improvements to his multiplication FFTs resulted from efficient use of cache memory. Any new implementation of the Brent, Cohen, and te Riele algorithm will undoubtedly require some clever programming, and should be fertile ground for an aspiring computationalist who wants to hone their skills.

That said, I don't think one can discount the possibility that someone working on such a project might also develop a new mathematical insight. The last few years have seen quite a few new papers on odd perfect numbers, most of them computationally based, and although I am not an expert in this area, I have studied several of these papers and have been pleased to see the application of some very clever ideas.

Pascal Ochem 2008-04-24 11:04

[QUOTE=R.D. Silverman;77866]
On the other hand, if raising the bound helps convince people of the
unlikelihood that OPN's exist, and thereby dissuades such people from
trying to actually find an OPN, then I will also applaud the effort.[/QUOTE]

Let us take Pomerance's heuristic as a measure of this unlikelihood.
[url]http://oddperfect.org/pomerance.html[/url]
As I understand it,
if the OPN is N=p^e*m^2 with gcd(p,m)=1 and if we can prove m > B,
then the heuristic expects about log(B)/B odd perfect numbers.

So, if the goal is to raise the unlikelihood, it is more efficient to modify the (Brent, Cohen, te Riele) method so that it gives a lower bound on m rather than on N.

For example, proving m > 10^150 should be a lot easier than
proving N > 10^600 while still giving the same unlikelihood:
150*log(10)/10^150 odd perfect numbers.
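Pascal's arithmetic can be checked directly; a small sketch (the function name is mine, and log is the natural logarithm, matching the 150*log(10) figure above):

```python
import math

def expected_opn_count(b_log10):
    """Pomerance-style heuristic: a proven bound m > 10^b_log10 on the
    square part m of N = p^e * m^2 leaves about log(B)/B expected odd
    perfect numbers, with B = 10^b_log10."""
    return b_log10 * math.log(10) / 10.0**b_log10

# Bound m > 10^150, as suggested above: about 3.45e-148 expected OPNs.
print(expected_opn_count(150))
```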

