mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Lone Mersenne Hunters (https://www.mersenneforum.org/forumdisplay.php?f=12)
-   -   Anyone factoring <5M? (https://www.mersenneforum.org/showthread.php?t=13302)

garo 2010-04-16 09:45

Anyone factoring <5M?
 
I am planning to do some exponents from 61 to 62 or 63. They cannot be assigned via PrimeNet so I just wanted to check here first.

axn 2010-04-16 13:47

[QUOTE=garo;212013]I am planning to do some exponents from 61 to 62 or 63. They cannot be assigned via PrimeNet so I just wanted to check here first.[/QUOTE]

What is the relative CPU cost of getting the exponent from 61->63 vs running a 20-digit level ECM?

garo 2010-04-16 13:51

Honestly, I am not sure. But my guess is that Prime95 is still faster at 61-63, and is guaranteed to find a factor if it exists. But I have some hardware that is good for factoring < 64 bits, and with 64-bit exponents in danger of running out, I thought I'd put some work towards <5M.

If ECM is more efficient at finding factors, I'd be happy to switch.

petrw1 2010-04-16 14:43

[QUOTE=garo;212032]Honestly, I am not sure. But my guess is that Prime95 is still faster at 61-63, and is guaranteed to find a factor if it exists. But I have some hardware that is good for factoring < 64 bits, and with 64-bit exponents in danger of running out, I thought I'd put some work towards <5M.

If ECM is more efficient at finding factors, I'd be happy to switch.[/QUOTE]

We know that the factoring limits are set based on relative time and potential effectiveness vs LL. [url]http://www.mersenne.org/various/math.php[/url]

I also know that when I try to assign low exponents (mind you, lower than yours) to higher levels of factoring I get an error message something like: "Invalid assignments use ECM instead". I have ASSUMED from this error that ECM would be more efficient.

Finally, in my experiments with assigning ECM to old hardware, it is more INefficient than TF<64 and even more INefficient than LL/DC, measured in Points Per Day (PPD). Mind you, ECM might just be a little more frugal on points, as I found that on every PC I tried ECM the PPD was lower.

axn 2010-04-16 15:16

[QUOTE=garo;212032]If ECM is more efficient at finding factors, I'd be happy to switch.[/QUOTE]

So I did some preliminary timings on a C2D. It looks like near the 5M range, it may still be worthwhile to TF up to 2^64. Probably not below 3M. Of course, a decent P-1 might be even better.

petrw1 2010-04-16 15:21

I see from [url]http://www.mersenne.org/primenet/[/url] that they are just starting to hand out ECM in the 5M range to 25 digits.

There are exponents up to 8.2M trial factored to less than 64 bits.
Just a thought: You might be safer at the higher end.

cheesehead 2010-04-16 17:44

[quote=garo;212013]I am planning to do some exponents from 61 to 62 or 63. They cannot be assigned via PrimeNet[/quote]Yes, they can (for exponents below 10M, that is) -- use the procedure described in the sticky thread "How to LMH using Prime95 v25.8 and PrimeNet v5" ([URL]http://mersenneforum.org/showthread.php?t=11308[/URL]). Note my recent cautionary post #4 of that thread!

I've been running TFs on exponents < 10M for a while. However, instead of selecting a range, in my case I select individual exponents based on their so-far P-1 bounds. So I'm not doing steps #1 and #2 given in the "How to ..." thread, but instead am accomplishing the same assignment-collision-avoidance by running Exponent Status reports (not the same as the Factoring Effort reports specified in step #3) to see whether any of my exponents of interest are already assigned to someone else, then proceeding with the other steps.

So, what I do (which may not be exactly what you want to do) is:

First, I do step #3 of the "How to ..." thread, in order to get the TF and P-1 limits within a range. [I]Note: check the "Exclude currently assigned exponents" box!!!![/I]

Next, I select a few specific exponents with (relatively) high P-1 bounds but (relatively) low TF limits. This is my own criterion -- substitute your own.

Next, I get an Exponent Status report for each of those selected exponents, and drop any that are already assigned to someone.

Then I proceed with steps #4-end of the "How to ..." thread.

(If I forgot to check Exponent Status reports, or if, between when I got the Exponent Status report and when I manually communicated to PrimeNet, PrimeNet assigned an exponent I specified, then PrimeNet will give me "N/A" as the assignment key. As in my "How to ..." post #4, I always check for "N/A" and delete the worktodo lines that have them, so I don't step on anyone else's assignment.)
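
(If you want to script that "N/A" check, here's a minimal Python sketch. It assumes the v5 worktodo layout Factor=<assignment key>,<exponent>,<from bits>,<to bits>; adjust the field positions if your client writes something different.)

[code]# Drop worktodo.txt Factor= lines whose assignment key came back "N/A",
# so we never step on an exponent PrimeNet declined to assign to us.
def prune_unassigned(path="worktodo.txt"):
    with open(path) as f:
        lines = f.readlines()
    kept = [ln for ln in lines
            if not (ln.startswith("Factor=") and
                    ln[len("Factor="):].split(",")[0].strip() == "N/A")]
    with open(path, "w") as f:
        f.writelines(kept)

prune_unassigned()[/code]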

This does require that you personally select and specify each exponent you want to TF, instead of having PrimeNet do [I]that[/I] part, and it does require you to regularly and systematically perform assignment-collision-avoidance activities.

If the consensus is that I should post the exponents I intend to test here: okay, but I'll warn you that I'd be posting every day or so and they'd be lists of scattered individual exponents, not simple contiguous ranges.

WVU Mersenneer 2010-04-16 19:26

[quote=axn;212031]What is the relative CPU cost of getting the exponent from 61->63 vs running a 20-digit level ECM?[/quote]
Forgive me for posting here even though I am a complete novice at all of this, but I felt a discovery by one of my computers this morning dovetails nicely with this thread.

I have almost half of my computers doing small ECM factoring work, and this morning one of my machines found that 29,166,507,389,557,009 is a factor of 2^1,620,989 - 1.

However, that factor is < 2^55, while the V5 server reports show all exponents of this size as having no factors up to 2^60.

I am wondering how this happened. Have all exponents truly been factored up to 2^60?

Similar to this is a find I made just over one year ago, that 22,049,255,272,665,169 is a factor of 2^1,104,409 - 1. It also struck me as odd to get 17-digit factors via ECM, but again, I know very little about all of this.

As to speed, my computer can trial factor M1,620,989 from 2^50 to 2^59 in 19 minutes, whereas 3 ECM curves at B1=50,000 and B2=B1*100 take one hour.

I thank everyone for their input and assistance.

Edit: here's the results file:
[code][Fri Apr 16 04:50:08 2010]
ECM found a factor in curve #2, stage #2
Sigma=7080176765941688, B1=50000, B2=5000000.
UID: /ryan, M1620989 has a factor: 29166507389557009, AID: C2A31C6E0062425DE48C2058D89018B1[/code]

cheesehead 2010-04-16 23:23

[quote=WVU Mersenneer;212060]Forgive me for posting here even though I am a complete novice at all of this[/quote]We're very glad you posted!

[quote]I have almost half of my computers doing small ECM factoring work, and this morning one of my machines found that 29,166,507,389,557,009 is a factor of 2^1,620,989 - 1.

However, that factor is < 2^55, while the V5 server reports show all exponents of this size as having no factors up to 2^60.[/quote]There have been other such reports a few times over the years.

[quote]I am wondering how this happened.[/quote]Apparently, since we've occasionally had other reports such as yours, either:

a) one or more of the trial factoring (TF) programs used in the past had some bug that caused it/them to miss some factors,

or (the following is much more likely, in my opinion):

b) when TF was performed on that exponent in the past, someone made a mistake when specifying which "bit levels" (powers of 2) were to be searched, and the range from 2^54 to 2^55 was skipped somehow.

Unfortunately, though we have a database that records all reported TF results, there has not always been a guarantee, AFAIK, that no power-of-2 range was skipped.

[quote]Have all exponents truly been factored up to 2^60?[/quote]The database indicates that it was truly "thought" that that exponent had been TFed up to 2^60. But as I explained, we don't yet have foolproof verification that all TF ranges were properly scanned.

[quote]Similar to this is a find I made just over one year ago, that 22,049,255,272,665,169 is a factor of 2^1,104,409 - 1. It also struck me as odd to get 17-digit factors via ECM, but again, I know very little about all of this.[/quote]The more reports of such discrepancies, the sooner we'll get the cause investigated and cured.

markr 2010-04-17 04:38

[QUOTE=cheesehead;212084]There have been other such reports a few times over the years.[/QUOTE]
Here's a thread from a long, [B]long[/B] time back:
[url]http://www.mersenneforum.org/showthread.php?t=1425[/url]

markr 2010-04-17 05:01

[QUOTE=garo;212013]Anyone factoring <5M?

I am planning to do some exponents from 61 to 62 or 63. They cannot be assigned via PrimeNet so I just wanted to check here first.[/QUOTE]
I'm doing TF from 61 to 62, currently in the 48xxxxx range. Using the LMH method to have them assigned in PrimeNet, I simply take the highest currently unassigned, queuing up a few days' work at a time.

Ahead of that effort I do P-1 to fairly high bounds on the few exponents that haven't had "enough" P-1. This actually finds more factors than the TF effort, measured by success rate.

cheesehead 2010-04-17 07:25

[quote=markr;212099]Here's a thread from a long, [B]long[/B] time back:
[URL]http://www.mersenneforum.org/showthread.php?t=1425[/URL][/quote]Thanks!

Two notes from re-examining that thread after all these years:

- -

1. In post #9 dswanson correctly points out that "ribwoods" (that's me, before I was "cheesehead" here) misinterpreted the second column, but he himself misinterpreted me when he speculated that I had

"missed the point, which is that nofactor.txt had claimed that this exponent had no factor below 2^62. So it is indeed a trial-factoring failure."

No, I had not missed that point. And, I now think, we were [I]both partly right[/I] about whether the misses were a failure of "trial factoring" -- depending on whether one considers "trial factoring" to mean only the actual [I]trial-factoring search computations on potential factors[/I], or to mean not only the search computations but also the [I]setup[/I] and [I]reporting[/I] of such searches.

- -

2. According to all the evidence I've seen, both back then and now, the most likely cause of those missed factors was that (note the [I]part B revision[/I] from what I stated above in post #9):

Some entire power-of-2 ranges of TF results on a few exponents were not properly reported, because either:

A) mistakes in setting up the TF runs accidentally omitted specifying some power-of-2 ranges, or

B) a few TF runs on some power-of-2 ranges were executed but never reported, yet the database was erroneously updated to show that those power-of-2 ranges had been searched and reported unsuccessful. (In some cases, the missing reports would have shown a factor found, and those factors are what we're rediscovering since then.)

[U]Possibilities A and B were both explored in a 9 Oct 2001 mailing list posting by Reto Keiser, quoted in "Missed small factors" thread post #17:[/U]

[quote]While completing some trial factoring in the 60 million range I noticed that the prime95 checksums of factoring from 65 to 66 bits and from 66 to 68 bits are the same (I did these two parts of the same exponent on different computers). That means that it is easily possible that some mistakes can happen when someone writes the 'Factor=' line into the worktodo file.[/quote]-- possibility (A)

[quote]Another reason might be that one person ran a broadband factoring range from 55 to 58 bits on one computer, from 58 to 59 on another (to split up the work) and forgot to check in the former results. As there is no information about the starting bit in the result file, the primenet server is not able to detect that problem.[/quote]-- possibility (B)

There also could be other reasons than forgetfulness for failure to check in results on some power-of-2 ranges.

- -

Personally, I think the evidence we have does not support the theory that the actual trial-factoring computations on potential factors were at fault in some TF software, but is entirely consistent with the theory that the missed factors were due to setup or reporting failures.

Prime95 2010-04-17 13:49

Any missed factors for exponents below 2,000,000 could easily be due to a program bug. There were several such bugs in the earliest prime95 versions. These exponents were likely trial factored in the 1996 - 1999 time frame.

garo 2010-04-17 20:21

Hmm! For some reason, when I first tried to reserve these exponents for TF from 61->62, I got an error saying no more TF was needed, and the exponents were removed from my worktodo. But now it seems I am able to get the exponents reserved without any trouble. I'm doing a P-1 on some exponents as well, generally those with B1,B2 at 20k,20k or less.

garo 2010-04-17 20:23

[quote=markr;212103]I'm doing TF from 61 to 62, currently in the 48xxxxx range. Using the LMH method to have them assigned in PrimeNet, I simply take the highest currently unassigned, queuing up a few days' work at a time.

Ahead of that effort I do P-1 to fairly high bounds on the few exponents that haven't had "enough" P-1. This actually finds more factors than the TF effort, measured by success rate.[/quote]

I took some exponents at 4.7M. I have reserved them via PrimeNet. What is your definition of "enough" P-1?

markr 2010-04-17 22:59

[QUOTE=garo;212187]I took some exponents at 4.7M. I have reserved them via PrimeNet. What is your definition of "enough" P-1?[/QUOTE]
Glad you got it working through PrimeNet, garo. If we're all using PrimeNet we can work in the same ranges & we won't wind up wasting work. There are lots of ECM assignments in the 4M range, too.

Quite arbitrarily, anything with B2 < 200000, I do to 100000,2500000. IIRC, roughly 5% find a factor (although my bounds were slightly lower then). TF by one bit-level finds ~1.5%.

Having them assigned in PrimeNet is not so simple, though. To specify the bounds, one needs "Pminus1" lines in the worktodo, but PrimeNet won't accept them. (It says "unsupported assignment work type: 3".) So I put in "Pfactor" lines, PrimeNet registers them & supplies the AIDs, and then I change them to what I want, leaving the AIDs in.

S485122 2010-04-18 07:39

[QUOTE=markr;212206]Having them assigned in PrimeNet is not so simple, though. To specify the bounds, one needs "Pminus1" lines in the worktodo, but PrimeNet won't accept them. (It says "unsupported assignment work type: 3".) So I put in "Pfactor" lines, PrimeNet registers them & supplies the AIDs, and then I change them to what I want, leaving the AIDs in.[/QUOTE]Since Pfactor lines are accepted, you can change the last parameter of Pfactor lines to increase the bounds:[code]Worktodo.txt line            B1 bound   B2 bound
Pfactor=1,2,4700021,-1,61,1     25000     343750
Pfactor=1,2,4700021,-1,61,2     60000     960000
Pfactor=1,2,4700021,-1,61,3     95000    1781250
Pfactor=1,2,4700021,-1,61,4    130000    2697500
Pfactor=1,2,4700021,-1,61,5    165000    3671250
...[/code]Of course the bounds are not such beautifully rounded numbers then :-)

Jacob

markr 2010-04-18 14:08

[QUOTE=S485122;212255]Of course the bounds are not so beautifully rounded numbers then :-)[/QUOTE]
No, but it is a much simpler procedure! Thank you, Jacob.

garo 2010-04-18 21:03

You can use non-integers for the tests-saved parameter as well. Try 3.2. I used the standard "2" tests saved and got bounds of 60/65k and 1200/1218k with 1.5GB of memory (just a test run on my main rig) and a 4.42% chance of finding a factor. I'll be lazy with the P-1 and only do it on exponents with B2 < 100k. I generally have rotten luck finding factors with P-1 - though I have been on a roll lately and found 5 factors in the 51-53M range in the past month.

PS: I just realized that even though you get a 5% chance of finding a factor, the P-1 that was already done also had a chance of finding a factor - usually about 2.5% at the typical bounds - so the effectiveness of your new P-1 is roughly halved.
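
(A quick sketch of that arithmetic, for anyone who wants to play with the numbers; the 5% and 2.5% are just the rough figures above, not measured values.)

[code]# If fresh P-1 to the new bounds would find a factor with probability p_new,
# and the P-1 already done covered p_old of that, the incremental chance,
# given that nothing has been found so far, is (p_new - p_old) / (1 - p_old).
p_new, p_old = 0.05, 0.025
print(f"incremental chance: {(p_new - p_old) / (1 - p_old):.2%}")  # ~2.56%, i.e. roughly half[/code]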

markr 2010-04-18 21:29

[QUOTE=garo;212334]You can use non-integers for the tests-saved parameter as well. Try 3.2. I used the standard "2" tests saved and got bounds of 60/65k and 1200/1218k with 1.5GB of memory (just a test run on my main rig) and a 4.42% chance of finding a factor.[/QUOTE]
Thanks for the tip about non-integers, garo.
[QUOTE]I generally have rotten luck finding factors with P-1 - though I have been on a roll lately and found 5 factors in the 51-53M range in the past month.[/QUOTE]
Five factors in P-1-large is excellent! My last batch of 25 P-1 in M48xxxxx had four factors, so anything's possible... :whistle:... it makes up for the batches with 0 or 1 found. Happy hunting!
[QUOTE]PS: I just realized that even though you get a 5% chance of finding a factor, the P-1 that was already done also had a chance of finding a factor - usually about 2.5% at the typical bounds - so the effectiveness of your new P-1 is roughly halved.[/QUOTE]
That some P-1 was already done is one reason why I opted for highish bounds.

garo 2010-04-19 09:42

And another 53M factor last night!

You do have a better chance of finding a factor with highish bounds, but I'd say do an efficiency analysis against TF to 62 or TF to 63. With higher bounds you are spending more time for a smaller additional chance of finding a factor, so doing TF for 1 bit first might be better.

BTW, I'm not doing any more TF in this range after my current assignments finish. I'm setting that machine back to TF-LMH as I have decided against any babysitting.

cheesehead 2010-04-19 20:19

[quote=Prime95;212152]Any missed factors for exponents below 2,000,000 could easily be due to a program bug. There were several such bugs in the earliest prime95 versions. These exponents were likely trial factored in the 1996 - 1999 time frame.[/quote]My recollection (already shown to be fallible :)) is that after those bug fixes, someone went back through all reports from the buggy version and re-ran all unsuccessful TFs that had been done with the buggy version.

Another incorrect recollection? (* sigh *) Or was that done only after some bug fixes but not others?

Prime95 2010-04-19 23:05

[QUOTE=cheesehead;212470]Another incorrect recollection? [/QUOTE]

Will Edgington made an effort. We corresponded by email to nail down the bug's impact as well as the ranges thought to be affected. He then used his own program to cover the gaps. This was likely a fairly error-prone process.

The ranges were never sent out to be trial factored again by mainstream prime95 clients.

cheesehead 2010-05-12 08:53

Poacher call-out: "GrunwalderGIMP"
 
Poacher call-out:

"GrunwalderGIMP" poached my TF assignment on 1528291

I had 1528291 reserved through PrimeNet for TF from 61 to 64.

While I had it reserved, "GrunwalderGIMP" reported no factor found from 61 to 63.

When I reported no factor found from 61 to 64 just now, PrimeNet informed me that the first two levels were not needed, and gave me no credit for them.

Note: [U]I don't care about the credit for myself![/U] (Please don't bother to credit me for it, George.)

Poachers like "GrunwalderGIMP" not only

(a) steal credit from other folks, some of whom [I]may[/I] care about [U][I]their[/I][/U] PrimeNet credits,

but also

(b) waste the time of folks who follow the rules. I [I]do[/I] care about that.

I could've run some other PrimeNet assignment instead of doing those first two levels of TF, if "GrunwalderGIMP" had had the courtesy to register with PrimeNet.

NBtarheel_33 2010-05-12 10:17

1528291? Really?
 
[quote=cheesehead;214788]Poacher call-out:

"GrunwalderGIMP" poached my TF assignment on 1528291

I had 1528291 reserved through PrimeNet for TF from 61 to 64.

[/quote]

1528291? From 61 to 64? Wouldn't ECM be a better choice on a "small" exponent like that - seems like 61->64 would take forever.

cheesehead 2010-05-12 10:45

[quote=NBtarheel_33;214793]1528291? From 61 to 64? Wouldn't ECM be a better choice on a "small" exponent like that[/quote]It depends on one's goal, and the relative balance of how far each method has been used.

Unlike P-1 or ECM, TF is exhaustive to a limit that is simply expressed as a single number, and can be continued without duplication of effort [I]-- (* ahem *) barring poaching --[/I] without having a previous save file (except that one may consider the record of previous power-of-two reached to be a tiny save file). I wanted to exhaustively extend factoring on that exponent. I was not trying to be efficient in any other sense.

P-1 is exhaustive, but to multidimensional limits which are more complicated than a single number. Also, someone who wants to extend P-1 limits from the recorded previous highs has to either duplicate the previous computation up to the previous limits, or have a save file from the previous attempt.

ECM may indeed be the most probabilistically efficient method for factoring, depending on how far other methods have been used, but it isn't deterministically exhaustive.

- - -

Hmm ... I had intended to post about the 1528291 poach in a different thread: "Anyone factoring <5M" at [URL]http://www.mersenneforum.org/showthread.php?t=13302[/URL] It's really OT here.

[U]Moderators: how about moving these three 1528291 posts to that other thread?[/U]

:-)

10metreh 2010-05-12 14:18

I believe GrunwalderGIMP is Graff on this forum. PMing him might be an idea.

cheesehead 2010-05-12 17:31

Thank you. I've sent a PM to Graff asking to confirm whether he is "GrunwalderGIMP", and explaining the duplication.

cheesehead 2010-05-12 22:54

Update: Graff confirmed that he's "GrunwalderGIMP", and explained:

"I grabbed a load of exponents in the 1.5 M range many months back for LMH work. When I tried to register the exponents with the server, it wouldn't let me ... I will check to see if other exponents I currently have are registered to others and will try again to register them if not."

and

"I see that I can register the exponents by adding them to a worktodo.txt file, but that is cumbersome for the number of exponents I work. I'm looking at using curl to register the exponents to prevent future duplication of effort."

I requested of Graff: "If you work out a reliable semi-automated method for registering large numbers of exponent assignments while avoiding duplication, please post a description in the LMH subforum!"

Graff 2010-05-15 01:17

[QUOTE=cheesehead;214898]I requested of Graff: "If you work out a reliable semi-automated method for registering large numbers of exponent assignments while avoiding duplication, please post a description in the LMH subforum!"[/QUOTE]

Well, my attempt to get all the grabbed 1.5M exponents registered to my LMH work appeared to work. I registered ~1400 exponents on May 12 using a custom DCL command file that made calls to the server via a simple curl line.

I just checked my assignments and see that all but four of the 1.5M assignments have disappeared! I think I know why this happened. Since this work is "manual testing", there is no machine hardware id that the assignments can be tied to by the server. So I used the program's self-assigned permanent id from one of my PCs that gets work from the server. But when that machine did its daily update of completion dates, the 1.5M exponents were released, since it didn't have a record of them in its worktodo.txt file.

I am currently reregistering the exponents using the self-assigned permanent id from a machine that no longer reports results. The reregistration should be completed shortly. I will check again in a few days to make sure that they are still registered.

Of course, the ideal situation would be to register a "virtual machine" that would have an id that I could use to register these LMH results. (Reporting of results is no problem, as I use the manual forms for that.) But I have not yet figured out how to achieve this (I suspect that it is not possible), as every attempt complains about missing security hashes, which the on-line documentation implies are optional but which seem to be mandatory for assignment of new machines.

Gareth (Graff/GrunwalderGIMP)

Graff 2010-05-16 23:48

[QUOTE=Graff;215057]I will check again in a few days to make sure that they are still registered.[/QUOTE]

Hmm. They've disappeared again. They were there when I checked this morning (I think that was the last time I checked). I'm reporting some more results via the manual reporting forms.

Graff

cheesehead 2010-05-17 10:19

[quote=petrw1;212034]We know that the factoring limits are set based on relative time and potential effectiveness vs LL. [URL]http://www.mersenne.org/various/math.php[/URL][/quote]Yes, and let's remind ourselves that that's [U]versus LL[/U]. That is, the TF limits coded into prime95 are for the crossovers between:

A) efficiency of eliminating (re: M[sup]p[/sup] primeness) a candidate mersenne exponent via finding a factor with TF

and

B) efficiency of eliminating (re: M[sup]p[/sup] primeness) a candidate mersenne exponent via L-L test.

They were [I]not[/I] intended to be the crossovers between:

A) efficiency of finding a factor via TF

and

C) efficiency of finding a factor via ECM

Is there a reference table for the latter comparison anywhere?

[quote]I also know that when I try to assign low exponents (mind you, lower than yours) to higher levels of factoring I get an error message something like: "Invalid assignments use ECM instead".[/quote]This is because V25 module commonc.c has a [U]hard-coded lower limit of 20,000[/U] for exponents on which it will allow TF. When a worktodo line specifies TF for lower exponents, V25 commonc.c writes the "Error: Use ECM instead of trial factoring for exponent: nn" message.

[code] if (w->work_type == WORK_FACTOR && w->n < 20000) {
char buf[100];
sprintf (buf, "Error: Use ECM instead of trial factoring for exponent: %ld\n", w->n);
OutputBoth (MAIN_THREAD_NUM, buf);
goto illegal_line;
}
[/code][quote]I have ASSUMED from this error that ECM would be more efficient.[/quote]Well, at least it means George:

(1) wants folks to do ECM instead of TF on exponents less than 20000 with V25 prime95

and

(2) deprecates V25 worktodo Factor= lines specifying exponents less than 20000 as "illegal". :smile:

My guess is that he has indeed figured that ECM is more efficient than further TF for the mass of exponents below 20000.
But there are more important things in life than efficiency ... such as one's obsessive-compulsive drive to TF low exponents to higher powers-of-two, or one's perfectionistically noting that TF is deterministically certain to find factors in a given range, whereas ECM is only probabilistic. (I was referring to TF the method there, not necessarily to TF the as-implemented-in-software-by-fallible-programmers code.)

V24 commonc.c did not have this hard-coded restriction. So ... if one were to set up a separate folder for V24 prime95, put the V24 executable in it, and create the proper V24 .ini files (paying attention to the earlier worktodo command-line formats), one could have V24 prime95 do TF on exponents under 20000. One would need to remember to use FACTOR_OVERRIDE to specify the top TF level, and not to try TFing exponents to different top levels using the same FACTOR_OVERRIDE setting!

(Hmmm... could someone persuade George to add an undocumented TF_EXPONENT_LOW_LIMIT_OVERRIDE parameter, with a default value of 20000 or something like that, to some future version?)

On a multi-CPU system, one could even have both V24 and V25 chugging away separately without bothering each other, as long as one first went to the trouble of modifying one's V25 prime95 local.txt, prime.txt and worktodo.txt parameter files so as to avoid using CPU 0. (One would, of course, have first saved copies of those files, with names such as local_for_V25_when_no_V24_is_running.txt, wouldn't one? And after the modifications, one would save copies with names such as local_for_V25_when_separate_V24_is_on_CPU_0.txt, wouldn't one?)

Then all one would have to do is to solve the problem of how to get V5 PrimeNet to register below-20000 TF assignments. Would a simple manual communication from a V24 prime95 suffice? Who knows?
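
(For concreteness, here's roughly what that separate V24 folder might contain. Treat the exact spellings and field layout as from-memory rather than gospel: old-style worktodo lines took just the exponent and the bits already done, and FACTOR_OVERRIDE supplies the target level.)

[code]prime.ini:     FACTOR_OVERRIDE=62

worktodo.ini:  Factor=19913,55
               Factor=19937,55[/code]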

garo 2010-05-17 11:26

[quote=cheesehead;215195]But there are more important things in life than efficiency ... such as one's obsessive-compulsive drive to TF low exponents to higher powers-of-two, or one's perfectionistically noting that TF is deterministically certain to find factors in a given range, whereas ECM is only probabilistic.[/quote]

How do you square that with your argument that poaching is inefficient? Other people might have obsessive-compulsive desires to clean up the trailing exponents. Everything under M39 double-checked and all that.

cheesehead 2010-05-18 17:56

[quote=garo;215197]How do you square that with your argument that poaching is inefficient?[/quote] I don't work on someone else's assigned exponent. Tell me who I'm hurting. Show me where I'm taking away someone else's hope or credit, or otherwise discouraging them.

I hope there's a way to register TF assignments below 20000 with v24 prime95 communicating to v5 PrimeNet. I haven't tested that yet.

[quote]Other people might have obsessive compulsive desires to clean up the trailing exponents.[/quote]... and they can learn to channel that into less-interfering non-duplicative efforts.

garo 2010-05-19 10:22

To-MAH-to To-MAY-to

BTW a fair proportion of poached exponents are never completed by their original assignee.

Joe O 2010-05-19 14:15

[QUOTE=garo;215381]To-MAH-to To-MAY-to

BTW a fair proportion of poached exponents are never completed by their original assignee.[/QUOTE]

Yes, but two out of my first three exponents were poached. And no, they were not on slow machines, but on fast (for the time) dedicated machines.

garo 2010-05-19 14:17

Look I am not defending poaching. Anyone who cares to search through the forum archives will note that I am strongly against it. Or rather I laid out three different forms of poaching and came out against two of those when the old server was running. I have been a victim of poaching too. But I do find cheesehead's moralizing a bit tedious at times.

cheesehead 2010-05-19 19:40

[quote=garo;215407]But I do find cheesehead's moralizing a bit tedious at times.[/quote]Understandable. It's a hot-button issue with me, and I go too far sometimes. I'm trying to develop a way of thinking about this that will enable me to stay calmer.

NBtarheel_33 2010-05-20 01:02

Be It Resolved By the PrimeNet Congress...
 
How about adopting the following new rules, perhaps with the eventual introduction of PrimeNet v6 (or immediately with regard to the EFF prize, not that that will realistically be an issue for 5+ years, most likely).

The overarching rule shall be that all exponents must report "reasonable" progress no less than every 73 days (1/5 year). Now what is "reasonable"? Well, a typical 3-GHz PIV system (conservatively) puts out at least 1% of an LL per day on a 50M exponent, and (I'd guess) at least 2% per day of an LL-D on a 30M exponent.

Of course, there are folks who for whatever reason like to hoard larger-than-normally-assigned quantities of work and distribute and maintain it over multiple systems. I believe Cheesehead has mentioned doing this in the past, and I myself have done it (and actually am doing it as we speak - take a look at my early thread about taking out 24 exponents on my birthday last year, and LL'ing them between then and my upcoming birthday - I'm 70% of the way between birthdays, with roughly 68% of the LLs completed).

I believe that this is OK... to a certain extent. I would hope that most folks would agree with me that it is unreasonable for a user with one CPU to have a worktodo containing 50 LL exponents. Yes, I took out 24 birthday exponents, but I also have as many as 33 cores to run them on.

Let us then consider adopting the following rules on exponent reporting, progress, and completion:

1. All exponents assigned via PrimeNet or PrimeNet Manual Assignment (in the current "classical" range of interest - note that I'm not really concerned with the 332M tests (right now)) must (1) check in with PrimeNet no less than once every 73 days, and (2) show progress in completion of no less than (a) 10% for an LL test or (b) 20% for an LL-D test. Note that this ideally places an upper bound on LL testing time at 2 years, and an upper bound on LL-D testing time at 1 year. (Are there many systems left out there that would need a year to run a 30M double-check? If so, maybe make the percentages in (a) and (b) both 10%, yielding a 2-year upper bound for everything.) COROLLARY: PrimeNet assignments no longer need to have an ultimate, one-year expiration date. Either you check all of your assignments in, with appropriate progress, every 73 days (and keep them), or you don't (and lose them).

2. (This is the part that would possibly require some work on the PrimeNet infrastructure.) In the case that either or both of (1) and (2) above are not satisfied, on day 74 from assignment or last progress report, two things happen. First, an e-mail is dispatched to the user owning the exponent(s) in question, warning them that if their exponent(s) do not make a satisfactory progress report within 15 days, the exponents will be released for reassignment. At the end of day 89 from assignment or last satisfactory report, if the user stays mum, the AIDs of the exponents should immediately be invalidated, and the exponents should be reassigned as regular 1st LLs or LL-Ds, as appropriate.

Secondly, the exponents without proper reports should be displayed in red on the user's PrimeNet assignments report, and the user's PrimeNet summary should display an appropriate message (e.g. "WARNING! Check in exponents now to avoid losing assignments!").

3. (Regarding poaching.) Any and all poaching of PrimeNet LL or LL-D assignments is permitted...








(gotcha Cheesehead)


...subject to the following caveats. (1) No progress report on any exponent, other than such from the exponent's registered owner, shall be acknowledged by PrimeNet. (I believe this is pretty much what happens now, right?) (2) In the event of a completion report from a user other than the registered owner, PrimeNet should handle the result in the following manner:

(a) If the result is a first LL result and no other LL result yet exists, PrimeNet should treat the result as a DC-in-waiting. The residue should be posted to the exponent's record, however, as a potential DC, and credit for a DC should be given to the submitter. If the registered user never finishes the LL assignment (due to failure to update in 73 days or due to seeing the poacher's residue), promote the DC-in-waiting to a first LL result. ***If the result is a zero residue (and thus the exponent in question likely yields a Mersenne prime), the submitter should receive DC credit, and the registered owner should be contacted at once to arrange for sending save files, etc. The registered owner is to be credited for the prime discovery, or in the case of the EFF Prize, awarded the prize. The poacher gets a "gee, thanks for helping User X by finding her prime for her!" (and what a lesson that would be).

(b) If the result is a first DC result and no other DC result yet exists, PrimeNet should treat the result as a Triple-Check-in-Waiting. No credit is immediately assigned to the poacher, but if the registered user doesn't submit their result, or if their result differs from the LL result (hence requiring a triple check), the poacher should later be credited with DC credit for their early triple check.

Note that this should discourage poaching for the sake of milestones. Say, for instance, that M43xxxxxx (assigned to Alice) is holding up a first-LL milestone. Bob comes along and poaches it. When Bob reports completion to PrimeNet, the result will be held and treated as a DC-in-waiting. This is not going to immediately help the first-LL milestone count, and in fact, won't ever, unless Alice lets 73+15 days lapse, or drops the assignment. In other words, Alice GETS TO KEEP the benefit of being the "Milestone Queen" until SHE finishes the exponent, or lets it drop by not checking in with PrimeNet. Put another way, Bob may be able to poach, but he is not able to hurt Alice's enjoyment of GIMPS. She is in sole control of whether or not she is credited for completing the LL on M43xxxxxx, and by extension, whether or not she is credited for being the one to hit the milestone. Last but certainly not least, if M43xxxxxx should happen to be a prime, Alice is credited as the discoverer, once she can provide her save files, and it can be verified that Bob's zero residue is correct.

The above scenario goes ditto for a DC milestone. In fact, we could make all of this a little more punitive on the poacher by resetting Alice's 73-day reporting window as soon as Bob reports his result to PrimeNet. This would mean that a poacher trying to rush a milestone would only delay it by as much as another 89 days!

How 'bout it guys (especially Cheesehead)? What do you think? What else might we add to put the hammer on poaching? How hard are these extra mechanisms for George/Scott to add into PrimeNet?

cheesehead 2010-05-20 03:12

[quote=NBtarheel_33;215467]How about adopting the following new rules, perhaps with the eventual introduction of PrimeNet v6[/quote]I applaud your having come up with a comprehensive proposal, not just here-and-there tweaks!

[quote]The overarching rule shall be that all exponents must report "reasonable" progress no less than every 73 days (1/5 year).[/quote]Fine. Numeric details are minor.

[quote]Of course, there are folks for whatever reason who like to hoard larger-than-normally-assigned quantities of work and distribute and maintain it over multiple systems. I believe Cheesehead has mentioned doing this in the past,[/quote]Clarification: For a couple of years, sometime before 2002, I did "hoard" a somewhat-larger-than-normally-assigned quantity (that is, larger than would normally have been assigned to a system with my system's CPU speed) of L-L work, for the specific purpose of glomming on to certain particular exponents. However, I never allowed any of my assignments to even come close to holding up any milestone.

There were times when someone who was casually monitoring my reported progress could have extrapolated from a subset of the figures that I would be holding up a milestone at some time in the future. What the figures could not show was that my active monitoring would never allow that to happen.

Let me note that at that time, the user-settable parameter for number of days of work to accumulate was not available (or else had a lower limit than now). If the current parameter and its limits had been available then, none of the amounts I ever had assigned at any one time would have been greater than the upper limit currently allowed for that parameter. That is, my queued amount of work would have been no greater than what anyone else could have requested under the current limits.

Recently, what I have done is to process a normally-assigned-amount (indeed, considerably less than the normally-available upper limit on requests) of work at the same time as I ran other projects, so that:

(A) the average progress of my GIMPS assignments was slower than it would have been if I'd devoted all my system's spare processing power to the GIMPS assignments alone.

(B) the reported progress of my GIMPS assignments reflected (A) -- that is, someone following them over time would have noticed a regular slippage of expected completion dates, but also noticed that I did actually complete assignments steadily even if each took longer than the first-reported estimated time.

(C) I've always kept an eye on milestones and made sure that none of my assignments was even close to holding up any milestone.

[quote]and I myself have done it (and actually am doing it as we speak - take a look at my early thread about taking out 24 exponents on my birthday last year, and LL'ing them between then and my upcoming birthday - I'm 70% of the way between birthdays, with roughly 68% of the LLs completed). I believe that this is OK...to a certain extent.[/quote]I agree.

[quote]I would hope that most folks would agree with me that it is unreasonable for a user with one CPU to have a worktodo containing 50 LL exponents.[/quote]That would depend on the exponents' sizes and the user's system speed. Also, a user might request exponents by communication from only one CPU, but be actually processing them on multiple CPUs (which, as you reveal a few sentences later, is identical, or at least analogous, to your situation).

I recommend not drawing conclusions from the number of LL exponents assigned to one CPU, especially not judging them unreasonable.

Drawing conclusions from actual reported completions by a user over an extended period of time is, I think, the only reliable basis on which to judge whether that user's progress is satisfactory.

[quote]Let us then consider adopting the following rules on exponent reporting, progress, and completion:[/quote]I'll post comments on these separately.

petrw1 2010-07-26 20:19

[QUOTE=markr;212103]I'm doing TF from 61 to 62, currently in the 48xxxxx range.[/QUOTE]

Still???

markr 2010-07-27 02:51

[QUOTE=petrw1;222935]Still???[/QUOTE]
Still going! Down to somewhere in the 4.6M range now. There are some left above that that are assigned to others for ECM, but eventually they'll become available.

cheesehead 2010-07-27 22:22

Whoops. I forgot to come back two months ago to finish here.

[quote=NBtarheel_33;215467]Let us then consider adopting the following rules on exponent reporting, progress, and completion:

1. All exponents assigned via PrimeNet or PrimeNet Manual Assignment (in the current "classical" range of interest - note that I'm not really concerned with the 332M tests (right now)) must (1) check in with PrimeNet no less than once every 73 days, and (2) show progress in completion of no less than (a) 10% for an LL test or (b) 20% for an LL-D test. Note that this ideally places an upper bound on LL testing time at 2 years, and an upper bound on LL-D testing time at 1 year.[/quote]Re: (2)

There are conceivable situations in which a system's reported progress might drop below 10%/20% between two successive reports, yet the system could be making fine progress overall. (Consider successive reported first-time LL progress of 25%, 25%, 2%, 2%, 6%, 20% & 20% = 1.4 years completion.) I recommend not being too strict about (2).
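
(Checking that arithmetic with a throwaway Python snippet: those seven 73-day intervals sum to a complete test in about 1.4 years, even though two of the reports fall under the proposed 10% floor.)

[code]progress = [25, 25, 2, 2, 6, 20, 20]   # percent per 73-day reporting interval
days = 73 * len(progress)
print(sum(progress), "% done in", days, "days =", round(days / 365.25, 1), "years")
# -> 100 % done in 511 days = 1.4 years[/code]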

What I recommend instead is looking at the system's past record of on-time completions. Only if that has been spotty should one judge on current assignment progress. Then, it should judge on cumulative progress-so-far, not "how much have you done for us lately" during only the most recent reporting interval.

[quote]COROLLARY: PrimeNet assignments no longer need to have an ultimate, one-year expiration date. Either you check all of your assignments in, with appropriate progress, every 73 days (and keep them), or you don't (and lose them).[/quote]See my above counterexample and counter-recommendation.

[quote]2. (This is the part that would possibly require some work on the PrimeNet infrastructure.) In the case that either or both of (1) and (2) above are not satisfied, on day 74 from assignment or last progress report, two things happen. First, an e-mail to the user owning the exponent(s) in question is dispatched, warning them that if their exponent(s) do not make a satisfactory progress report within 15 days, these exponents will be lost to be reassigned. At the end of day 89 from assignment or last satisfactory report, if the user stays mum, the AIDs of the exponents should immediately be invalidated, and the exponents should be reassigned as regular 1st LLs or LL-Ds, as appropriate.[/quote]NO. That's unnecessarily strict and tunnel-visioned. See above.

[quote]Secondly, the exponents without proper reports should be displayed in red on the user's PrimeNet assignments report, and the user's PrimeNet summary should display an appropriate message (e.g. "WARNING! Check in exponents now to avoid losing assignments!").[/quote]Okay, but I imagine most users who haven't submitted proper reports won't be checking their PrimeNet reports either. :-)

[quote]3. (Regarding poaching.) Any and all poaching of PrimeNet LL or LL-D assignments is permitted...[/quote]As I've explained elsewhere (in the "New milestone" thread at [URL]http://www.mersenneforum.org/showthread.php?t=7082[/URL] in the "Data" subforum), there is no need to resort to unethical methods in order to treat the "milestone irritation" problem. See that thread for better proposed solutions than yours.

[U]Note: I'm not accusing you of having unethical motives in proposing your ideas![/U] I think you just haven't analyzed your proposal from the same viewpoint I have, so you haven't noticed the ethical flaws I did.

[quote]...subject to the following caveats. (1) No progress report on any exponent, other than such from the exponent's registered owner, shall be acknowledged by PrimeNet. (I believe this is pretty much what happens now, right?)[/quote]No, it's not what happens now AFAIK.

[quote](2) In the event of a completion report from a user other than the registered owner, PrimeNet should handle the result in the following manner: (a) If the result is a first LL result and no other LL result yet exists, PrimeNet should treat the result as a DC-in-waiting. The residue should be posted to the exponent's record, however, as a potential DC, and credit for a DC should be given to the submitter. If the registered user never finishes the LL assignment (due to failure to update in 73 days or due to seeing the poacher's residue), promote the DC-in-waiting to a first LL result.[/quote]There are proposed solutions in the other thread that don't require changing PrimeNet reports or fiddling with assignment status.

[quote]***If the result is a zero residue (and thus the exponent in question likely yields a Mersenne prime), the submitter should receive DC credit, and the registered owner should be contacted at once to arrange for sending save files, etc. The registered owner is to be credited for the prime discovery, or in the case of the EFF Prize, awarded the prize. The poacher gets a "gee, thanks for helping User X by finding her prime for her!" (and what a lesson that would be).[/quote]So, [U]in effect[/U] your proposal would violate EFF rules in order to justify violating GIMPS rules in order to capitulate to the sense of irritation felt by some, but not all, participants.

Note: [U]I'm not accusing you of having unethical motives in proposing your ideas![/U] I think you just haven't analyzed your proposal from the same viewpoint I have, so you haven't noticed the ethical flaws.

There are proposed solutions in the other thread that don't have those ethical flaws.

[quote](b) If the result is a first DC result and no other DC result yet exists, PrimeNet should treat the result as a Triple-Check-in-Waiting. No credit is immediately assigned to the poacher, but if the registered user doesn't submit their result, or if their result differs from the LL result (hence requiring a triple check), the poacher should later be credited with DC credit for their early triple check.[/quote]Such complications arise from the (IMO) mistaken notion that it's necessary to present status reports that aren't in accord with reality.

(Note: I've recently advocated removing some information from displayed reports, which is not the same as displaying status that isn't in accord with reality. If I did do the latter in the past, please feel free to point it out to me, so I can plainly reject my earlier mistakes.)

[quote]In fact, we could make all of this a little more punitive on the poacher[/quote]As pointed out in the other thread, it's more effective to remove the motivation for rule-breaking behavior than to punish rule-breaking behavior.

Graff 2010-07-31 04:45

[QUOTE=Prime95;212152]Any missed factors for exponents below 2,000,000 could easily be due to a program bug. There were several such bugs in the earliest prime95 versions. These exponents were likely trial factored in the 1996 - 1999 time frame.[/QUOTE]

You would appear to be correct in this thought. I'm currently factoring in the 1.5-1.6M range, extending the upper limit from 2^61 (or occasionally 2^62) to 2^63, and checking the entire range up to that limit. To date, I have checked 2837 exponents and found 65 factors. Eleven of these factors are smaller than the previous upper limit recorded by GIMPS. The smallest of these "missed" factors was for M1618241 at 55.511 bits, the largest was for M1509727 at 60.912 bits.

Gareth

petrw1 2010-09-24 03:00

[QUOTE=markr;222994]Still going! Down to somewhere in the 4.6M range now. There are some left above that that are assigned to others for ECM, but eventually they'll become available.[/QUOTE]

I'm working towards you; started at 3,000,000 about a month ago with an old PIV that is 30% more efficient below 62 bits than above....currently just passed 3,02x,xxx so don't wait up for me :smile:

alpertron 2010-09-24 11:57

Someone said above that ECM is a probabilistic algorithm, so we cannot be sure whether a factor will be found using this method. But notice that the trial division method has two drawbacks:

* It is a lot slower than ECM for the same level, especially for exponents less than 1M.

* We are not sure whether the trial factoring went ok or not. With ECM the probabilistic nature of finding factors can be fought by running more curves, but in the case of TF the lost factor (if an error occurred in the computer running this algorithm) will never be found.

By completing ECM to the 25-digit level on all exponents less than 1M, we can be fairly sure that only a few factors with less than 64 bits will be missing (and a lot of prime factors of more than 64 bits will appear); those will finally be found when extending the search to the 30-digit level.

cheesehead 2010-09-28 07:04

[QUOTE=alpertron;231246]* We are not sure whether the trial factoring went ok or not.[/QUOTE]... and we're not sure whether the ECM went okay or not. ECM code is not automatically immune to programming bugs or hardware errors.

The ECM method is not more reliable than the TF method. You're noting that multiple ECM runs decrease the chance of missing a factor, but failing to mention that multiple TF runs with independent hardware and independently developed code do the same.

Correct TF code doesn't miss any factors. Correct ECM code finds as many as predicted. There could be an error in ECM code that missed as many factors, proportionally, as the buggy TF code did, but which, because of the probabilistic nature of ECM, would be harder to detect. How long would it take to detect that ECM code had a bug that was systematically missing 1/5000 (or whatever the fraction was in the TF case) of the factors that it should find?
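
(To make that last question concrete, a back-of-envelope sketch: treating found-factor counts as Poisson, a systematic 1/5000 deficit only clears ~2 sigma once the missing count N/5000 exceeds 2*sqrt(N).)

[code]deficit = 1 / 5000
n_needed = (2 / deficit) ** 2          # N must exceed (2/deficit)^2 found factors
print(f"~{n_needed:.0e} found factors needed")  # ~1e+08 -- effectively undetectable[/code]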

alpertron 2010-09-28 11:34

[QUOTE=cheesehead;231727]Correct TF code doesn't miss any factors. Correct ECM code finds as many as predicted. There could be an error in ECM code that missed as many factors, proportionally, as the buggy TF code did, but which, because of the probabilistic nature of ECM, would be harder to detect. How long would it take to detect that ECM code had a bug that was systematically missing 1/5000 (or whatever the fraction was in the TF case) of the factors that it should find?[/QUOTE]
I'm not talking about software errors, but about hardware errors, for instance due to overclocking or a defective motherboard, memory, etc. When a hardware problem occurs while running TF, the missed factor will never be found.

markr 2010-09-28 13:43

[QUOTE=markr;222994]Still going! Down to somewhere in the 4.6M range now. There are some left above that that are assigned to others for ECM, but eventually they'll become available.[/QUOTE]
Just started at the top of the 4.4M range. Someone cleaned up the few remaining above 4.5M even though they were assigned to others for ECM, or to me. Fortunately it was only a small duplication of effort, and PrimeNet still gave me credit.

[QUOTE=petrw1;231194]I'm working towards you; started at 3,000,000 about a month ago with an old PIV that is 30% more efficient below 62 bits than above....currently just passed 3,02x,xxx so don't wait up for me :smile:[/QUOTE]
Great! It will indeed be a long time before we meet, but who cares. Let's see - if it's left to my resources, 4M will be finished in April 2011, maybe. Anyone else working in this area, or thinking about it?

garo 2010-09-28 14:40

@markr, petrw1
Have you found any factors guys or have the ECM folks taken them all?

petrw1 2010-09-28 14:58

[QUOTE=garo;231760]@markr, petrw1
Have you found any factors guys or have the ECM folks taken them all?[/QUOTE]

Ahhh.... that explains it; ECM. I was just about to report that something was fishy in this range, because I was below the expected 1/61 or so factor rate.

BUT... I am still finding some; about half of that rate:
551 exponents: 5 factors found, or about 1/110.
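
(For comparison, a quick back-of-envelope using the usual heuristic that an unfactored Mersenne number has a factor between 2^x and 2^(x+1) with probability about 1/x -- a sketch only, and it deliberately ignores the ECM already done, which is exactly what's skimming off the difference.)

[code]exponents_tested = 551
bit_level = 61                      # TF pass from 2^61 to 2^62
print(f"expected ~{exponents_tested / bit_level:.0f} factors, found 5")  # ~9 expected[/code]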

petrw1 2010-09-28 15:05

[QUOTE=markr;231756]Great! It will indeed be a long time before we meet, but who cares. Let's see - if it's left to my resources, 4M will be finished in April 2011, maybe. Anyone else working in this area, or thinking about it?[/QUOTE]

My one 2.8 GHz PIV is doing just over 13 a day.
The entire 3M range has just under 24,000 exponents.

So let's see: Pi-R-Squared over the Angle of the Hypotenuse; Sine; Tangent; Cosine; carry the 1; Net Present Value; ....

I get just over 5 years....like I said: "Don't wait up".
Though I am considering sneaking in a little time on a couple of other PCs.

gjmccrac 2010-09-28 15:10

[QUOTE=markr;231756] Anyone else working in this area, or thinking about it?[/QUOTE]

I just added 20 exponents to a Pentium II that has been doing TF-LMH.

I started at 4M. I made sure the exponents were not already assigned to anyone.

The machine should start on them in 2 days once the current TF-LMH work clears out.

Grant.

alpertron 2010-09-28 15:13

Notice that almost no ECM has been run in the 3M range yet. I see that only 3 curves out of the 280 curves needed to complete the 25-digit level were run.

petrw1 2010-09-28 15:21

[QUOTE=alpertron;231767]Notice that almost no ECM has been run in the 3M range yet. I see that only 3 curves out of the 280 curves needed to complete the 25-digit level were run.[/QUOTE]

Is it reasonable that 3 out of 280 curves should have already found nearly half the factors in the 18 or so digit range that 2^62 factoring is looking for?

garo 2010-09-28 15:28

I think most of the range has had at least 3 curves at 25 digits (~2^83) done. That has a good chance of capturing factors under 66 bits - by my totally wild-ass calculation, about a 1 in 3 chance. That sort of tallies with your experience - you just need to extrapolate my wild-ass calculation for 63 bits.

Edit: An interested reader could go and read the Silverman Wagstaff paper and calculate a more precise answer. I learnt how to do it about 5 years ago but don't have time to relearn it now.
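
(For the curious, the shape of that wild-ass calculation: model each 25-digit curve as an independent trial that finds a given ~19-digit factor, if one exists, with some per-curve probability. The value below is a made-up placeholder for illustration, not a number from the Silverman-Wagstaff paper.)

[code]p = 0.13     # hypothetical per-curve chance of catching a factor of this size
curves = 3
print(f"P(found by {curves} curves) = {1 - (1 - p) ** curves:.0%}")  # ~34%, i.e. ~1 in 3[/code]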

chalsall 2010-09-28 15:31

[QUOTE=markr;231756]Anyone else working in this area, or thinking about it?[/QUOTE]

I occasionally have a few of my machines working in the <1M range, bringing the exponents from 60 to 61.

Rather silly really, I know. Out of 3964 tested, only 5 factors found (0.1261%)....

petrw1 2010-09-28 15:32

[QUOTE=markr;231756]Someone cleaned up the few remaining above 4.5M regardless that they were assigned to others for ecm, or to me.[/QUOTE]

An "Exponent Status" report well tell you who in case it is someone on this forum that you can contact.

cheesehead 2010-09-29 00:06

[QUOTE=alpertron;231743]When running a TF and a hardware problem occurs, the missing factor will never be found.[/QUOTE]Never say never.

a) What accounts for the multiple cases in which earlier-TF-missed factors actually [I]were[/I] found?

b) What exempts ECM from hardware problems? :)

markr 2010-09-29 11:48

[QUOTE=garo;231760]@markr, petrw1
Have you found any factors guys or have the ECM folks taken them all?[/QUOTE]
There's still enough. A quick bit of counting text strings in current & old results files gives 113 factored out of 10181 attempts, for TF from 61 to 62 'bits' between about 4500000 & 5000000. That's 1.1%.

I'm only doing this TF because of two machines I have that are still in use, old Athlon XPs which are good below 2^64 and really shine below 2^62. Doing more P-1 on exponents in this region with relatively little already done is far more productive, even though a P-1 result with the parameters I use is less effort than TF, as measured by the credit.

markr 2010-09-29 11:55

[QUOTE=petrw1;231776]An "Exponent Status" report well tell you who in case it is someone on this forum that you can contact.[/QUOTE]
I did check who it was. I'm hoping it was a one-off.

alpertron 2010-09-29 12:04

[QUOTE=cheesehead;231827]Never say never.

a) What accounts for the multiple cases in which earlier-TF-missed factors actually [I]were[/I] found?

b) What exempts ECM from hardware problems? :)[/QUOTE]
a) These cases were software errors, not hardware errors, as recognized by Woltman, and the factors were found by rerunning all bit levels. Some of the missing factors were found by ECM.

b) In that case the missing factors will be found on another computer using more curves. This will need to be done anyway when no factors are found.

Also notice that above some bit-level threshold, which depends on the exponent, you will find more factors per unit of time using ECM than using TF. For smaller exponents, that threshold is lower, so for them it is recommended to use ECM instead of TF.

lorgix 2010-09-29 13:09

[QUOTE=garo;212013]I am planning to do some exponents from 61 to 62 or 63. They cannot be assigned via PrimeNet so I just wanted to check here first.[/QUOTE]

I've taken a special interest in doing fast P-1 in the area of ~4.42-4.54M.

The TF limits on the ones I'm currently looking at are split roughly 50:50 between 61 and 62 bits.

(I also did 61-62 on five exponents between 4.482M and 4.485M: ~0.343 GHz-days & 0 factors found.)

I'll probably keep working my way down. Now doing P-1 with FFT length 224-256K.

I'm also taking a closer look at 1~1.4M.


Anyway, the chunks you're referring to should be so fast that interference is unlikely, no?

markr 2010-09-29 21:54

[QUOTE=lorgix;231906]I've taken a special interest in doing fast P-1 in the area of ~4.42-4.54M.

The TF limits on the ones I'm currently looking at are split roughly 50:50 between 61 and 62 bits.

(I also did 61-62 on five exponents between 4.482M and 4.485M: ~0.343 GHz-days & 0 factors found.)

I'll probably keep working my way down. Now doing P-1 with FFT length 224-256K.

I'm also taking a closer look at 1~1.4M.


Anyway, the chunks you're referring to should be so fast that interference is unlikely, no?[/QUOTE]
Welcome to the neighborhood!

My two TF machines are currently factoring to 2^62 in the 4.48M-4.50M area, with >100 exponents assigned to them. I'm working my way down, giving them bunches of exponents much as described in [URL="http://www.mersenneforum.org/showthread.php?t=11308"]this thread[/URL], avoiding others' assignments. Pragmatically, you're correct - there's not much chance of duplicated effort. But it's not difficult to avoid already-assigned exponents.

Just out of interest, do you use pfactor or pminus1 lines in your worktodo for your P-1 work, and what kind of bounds?

Pfactor=k,b,n,c,how_far_factored,num_primality_tests_saved
Pminus1=k,b,n,c,B1,B2
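
For example, a made-up pair of entries (illustrative only - the exponent is invented, so substitute a real unassigned prime) would be:

[CODE]Pfactor=1,2,4482139,-1,61,3
Pminus1=1,2,4482139,-1,100000,2000000[/CODE]
Here k=1, b=2, c=-1 select the Mersenne number 2^p-1; in the Pfactor line, 61 is how far the exponent has been trial-factored and 3 is the number of primality tests saved.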

lorgix 2010-09-29 23:06

[QUOTE=markr;231976]Welcome to the neighborhood!

My two TF machines are currently factoring to 2^62 in the 4.48M-4.50M area, with >100 exponents assigned to them. I'm working my way down, giving them bunches of exponents much as described in [URL="http://www.mersenneforum.org/showthread.php?t=11308"]this thread[/URL], avoiding others' assignments. Pragmatically, you're correct - there's not much chance of duplicated effort. But it's not difficult to avoid already-assigned exponents.

Just out of interest, do you use pfactor or pminus1 lines in your worktodo for your P-1 work, and what kind of bounds?

Pfactor=k,b,n,c,how_far_factored,num_primality_tests_saved
Pminus1=k,b,n,c,B1,B2[/QUOTE]

Mostly I just looked for exponents that hadn't been factored very far. There was a wide range of different bounds. I decided to focus on exponents that had P-1 B2 less than 362500 (pretty arbitrary), and go through them methodically. I gave those that had only gone through TF to 61 a little preference. I used Pfactor, 3 tests saved. That ended up B1~80-100k B2~1.5-2M (stage 2 using 885MB). The "worst" cases before were B1:20k B2:200k.
I stayed away from exponents that were already assigned. I did happen to notice that I had done P-1 on some exponents just a day after you had done TF on them.

Anyway, the yield was decent. Still, I have changed my strategy a bit; I'm now more a part of the collaborative effort.

I now run two workers, one for each logical CPU. One is set to do server-assigned TF once the WR-size LL tests I've gathered are done (calcs that don't need a lot of memory). The other is finishing off a few P-1 in the above-mentioned interval (should be no more than 1.5 days; I arbitrarily unregistered a few), and is then set to do ECM on small Mersennes (which requires a lot of memory).

So my plan is now to stick with server-assigned TF and ECM and not do manually assigned exponents for a while. Although P-1 is sort of my "favorite", so I might throw in a few appropriately sized (>45M) exponents for that along with the ECMs (which are currently 5M+, btw). Gonna try to stick with this for a while.

Nice to know that there are more people out there interested in finding factors, not only for the purpose of excluding prime candidates.

Next time I go into a one person sub project I think I'm gonna put in a little more planning. I mean I'm probably more stubborn and relentless than my CPU... but that's not necessarily efficient. ;P

markr 2010-09-30 07:46

[QUOTE=lorgix;231986]Mostly I just looked for exponents that hadn't been factored very far. There was a wide range of different bounds. I decided to focus on exponents that had P-1 B2 less than 362500 (pretty arbitrary), and go through them methodically. I gave those that had only gone through TF to 61 a little preference. I used Pfactor, 3 tests saved. That ended up B1~80-100k B2~1.5-2M (stage 2 using 885MB). The "worst" cases before were B1:20k B2:200k.[/QUOTE]
The reason the "worst" cases you saw had B2=200000 is that I selected exponents with B2<200000 (arbitrarily) when I went through doing P-1 ahead of TF. I used Pfactor, set up so B1 & B2 were at least about 100000 & 2000000. In the 3M range I changed to selecting with B2<180000 because there's a lot of them, and bumped up the target bounds a bit. I'm below 3060000 now.

Lucky for me I went through your area of interest before you - I found my largest low-exponent [URL="http://www.mersenneforum.org/showpost.php?p=226448&postcount=613"]factor[/URL] there!

Good luck with the ECM work!

lorgix 2010-09-30 08:27

[QUOTE=markr;232023]The reason the "worst" cases you saw had B2=200000 is that I selected exponents with B2<200000 (arbitrarily) when I went through doing P-1 ahead of TF. I used Pfactor, set up so B1 & B2 were at least about 100000 & 2000000. In the 3M range I changed to selecting with B2<180000 because there are a lot of them, and bumped up the target bounds a bit. I'm below 3060000 now.

Lucky for me I went through your area of interest before you - I found my largest low-exponent [URL="http://www.mersenneforum.org/showpost.php?p=226448&postcount=613"]factor[/URL] there!

Good luck with the ECM work![/QUOTE]

I see, good for you! And us! :D

Between us we've gone through the interval pretty well then.

Yeah, like I said: for me, Pfactor with 3 tests saved gave bounds around there.

That's a NICE factor!

My later work was more in the 4.4~4.48M range btw.

You (or anyone interested ofc) might wanna have a look as I have now unreserved all exponents in the range I was working on.

Maybe I left something behind for you (wasn't being super thorough, and didn't quite finish the "project"); the range I looked at was:

M4400131-M4519561

As of right now that range (only looking at TF 0-61) has [B]51 exponents with B1=20k[/B], [B]32 with B2<250k[/B], and [B]13 with B1=20k, B2=205k[/B].

Probably more efficient with ECM now, but if anyone wants to have a go...


Live long and factor.

//L

p.s. 205 and 250 above are NOT typos.

petrw1 2010-09-30 15:47

This is probably a silly question but what is to stop all of TF, P-1 and ECM from finding, reporting and getting credit for the very same factor?

Or is there some client or server code that ignores duplicate factors?

garo 2010-09-30 15:48

Once a factor is found, no more credit is awarded for any further work on the exponent.

petrw1 2010-09-30 16:01

[QUOTE=garo;232077]Once a factor is found, no more credit is awarded for any further work on the exponent.[/QUOTE]

That makes sense. However, on the ECM Progress report one of the sections is "ECM on Mersenne numbers with known factors". Does this not mean there are still people doing ECM on those exponents? And if so, are they NOT getting credit, but rather doing this for the "prize" of fully factoring?

Thanks

lorgix 2010-09-30 16:09

I'd certainly like to finish the factoring of M929... even if it gave me negative "credit".

petrw1 2010-09-30 16:43

[QUOTE=lorgix;232080]I'd certainly like to finish the factoring of M929... even if it gave me negative "credit".[/QUOTE]

Well, at 280 digits, with the factors found so far totaling 67 digits, the remaining composite should be 213 digits.

If I assume, since the largest factor found so far is 51 digits, that the remaining factors are all larger than that, then there could be a factor remaining of up to about 160 digits.

Yes, that would be an amazing find.

TimSorbet 2010-09-30 19:52

[QUOTE=petrw1;232079]That makes sense. However, on the ECM Progress report one of the sections is "ECM on Mersenne numbers with known factors". Does this not mean there are still people doing ECM on those exponents? And if so, are they NOT getting credit, but rather doing this for the "prize" of fully factoring?

Thanks[/QUOTE]

It won't give repeat credit for the same factor, but you can continue to find further factors. And I think it does keep giving credit... at least for ECM on small Mersenne numbers and on Fermat numbers. Maybe there are special exceptions so that ECM work still gives credit but other things don't; I don't know.

lorgix 2010-09-30 22:13

4th+ prime factor of M929 - unknown
 
[QUOTE=petrw1;232090]Well, at 280 digits, with the factors found so far totaling 67 digits, the remaining composite should be 213 digits.

If I assume, since the largest factor found so far is 51 digits, that the remaining factors are all larger than that, then there could be a factor remaining of up to about 160 digits.

Yes, that would be an amazing find.[/QUOTE]

Yes, 214 digits even.
11233987055329272412876331600598951897049736479775
75010718300556383281688180366350559792974172384557
07354841517981565066492488482182991795198282677373
04091777193052937545026022471107579402189543491305
99583977799817 to be more specific.
[B]The largest smaller factor is 912388729886745263028357724939366731341350831028977[/B]

GET TO IT PEOPLE!

cheesehead 2010-10-02 20:18

[QUOTE=alpertron;231899]a) These cases were software errors, not hardware errors, as recognized by Woltman, and the factors were found by rerunning all bit levels.[/quote]Yes, by rerunning TF.

BTW, exactly how did you determine that there have been, or not been, any hardware errors that prevented finding a factor, by any method?

lorgix 2010-10-03 07:21

[QUOTE=chalsall;231775]I occasionally have a few of my machines working in the <1M range, bringing the exponents from 60 to 61 bits.

Rather silly really, I know. Out of 3964 tested, only 5 factors found (0.1261%)....[/QUOTE]

[QUOTE=alpertron;231899]...Also notice that above some bit-level threshold, which depends on the exponent, you will find more factors per unit of time using ECM than using TF. For smaller exponents, that threshold is lower, so for them it is recommended to use ECM instead of TF.[/QUOTE]

Just the other day I TFed the biggest unassigned exponent that still hadn't been factored to 61, bringing that bound down a notch. Probably not very efficient. (Mostly doing ECM on 5M+ now, btw.)

The biggest exponent available that hasn't been factored to 61 is [B]693967[/B].

694277 has been assigned for ECM for about 6 months.

alpertron 2010-10-03 14:36

[QUOTE=lorgix;232405]The biggest exponent available that hasn't been factored to 61 is [B]693967[/B].

694277 has been assigned for ECM for about 6 months.[/QUOTE]
Many of the assigned exponents were forgotten. According to [url]http://www.mersenne.org/report_exponent/?exp_lo=694277&exp_hi=694277&B1=Get+status[/url], other people were doing ECM on that exponent after it was assigned to WileECoyote in April. That user has not returned any results on it.

markr 2010-10-12 11:37

[QUOTE=markr;231893]There are still enough [factors]. A quick bit of counting text strings in current & old results files gives 113 factored out of 10181 attempts, for TF from 61 to 62 bits between about 4500000 & 5000000. That's 1.1%.

I'm only doing this TF because of two machines I have that are still in use, old Athlon XPs which are good below 2^64 and really shine below 2^62. Doing more P-1 on exponents in this region, where relatively little has been done already, is far more productive, even though a P-1 result with the parameters I use counts as less effort than TF, going by the credit awarded.[/QUOTE]
It took a little mucking about to separate out the P-1-small, but I finally worked out some stats for my currently-main P-1-small machine. Its success rate to date is 4.7% (163/3445) in the 3M & 4M ranges. About 85% of the factors came from stage 2.

It took one core of a core2 quad in the region of 7-10 weeks, but I think it was worth it for 163 factors. :grin:

lorgix 2010-10-12 11:58

[QUOTE=markr;233213]It took a little mucking about to separate out the P-1-small, but I finally worked out some stats for my currently-main P-1-small machine. Its success rate to date is 4.7% (163/3445) in the 3M & 4M ranges. About 85% of the factors came from stage 2.

It took one core of a core2 quad in the region of 7-10 weeks, but I think it was worth it for 163 factors. :grin:[/QUOTE]

Nice job!

I'm back below 5M again btw...

Mostly doing P-1 in the 2~2.5M area (112-128K FFT), but also a little in 4.5~5M.

[Speaking of factors.... anyone who hasn't checked out [URL]http://factorization.ath.cx/[/URL] yet should do it now.]

petrw1 2010-10-18 21:18

Update in 3M range.

7 Factors out of 816 tests: 0.86%

petrw1 2010-10-19 21:08

[QUOTE=petrw1;233763]Update in 3M range.

7 Factors out of 816 tests: 0.86%[/QUOTE]

How quickly things change:

8 out of 830 = .96%

gjmccrac 2010-10-20 12:54

In the 4.0 to 4.1M range (61 to 62)

4 factors out of 541 = 0.74%

markr 2010-10-20 20:54

[QUOTE=gjmccrac;233983]In the 4.0 to 4.1M range (61 to 62)

4 factors out of 541 = 0.74%[/QUOTE]
Cool! That's consistent with a success rate of about 1%.

chessmc 2010-10-24 02:04

Have a GPU working in the 1.0-1.01M range (up to 64 bits).

alpertron 2010-10-25 11:44

I'm running ECM in the range 400000-401000 (100 curves with B1=250000, B2=25000000). The computer found 3 factors for the 15 exponents it tested. There are still 3 exponents to finish this range. So the success rate in this case was 20%.
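
For reference - if I recall the v25 worktodo syntax correctly - an entry for one such run would look something like the line below (the exponent is made up; use a real prime from the range):

[CODE]ECM2=1,2,400033,-1,250000,25000000,100[/CODE]
The last field is the number of curves to run.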

petrw1 2010-10-25 22:23

[QUOTE=petrw1;233917]How quickly things change:

8 out of 830 = .96%[/QUOTE]

10 / 911 = 1.1%

lorgix 2010-11-13 10:33

Hi everybody,

I figured this thread might be a good place to ask:

[B]Does anyone know what the status is on missed small factors?
Factors that [I]should[/I] have been found by TF already.[/B]

I know there are threads about this, but I haven't seen one that appears to be up to date.

garo 2010-11-14 22:25

In what range? It is generally agreed that it is not worth the effort to refactor all ranges to catch the few factors that may have been missed due to hardware error. A couple of years ago I did a stats analysis and found some ranges that had fewer factors than expected and these were subsequently refactored.

lorgix 2010-11-15 09:30

[QUOTE=garo;237131]In what range? It is generally agreed that it is not worth the effort to refactor all ranges to catch the few factors that may have been missed due to hardware error. A couple of years ago I did a stats analysis and found some ranges that had fewer factors than expected and these were subsequently refactored.[/QUOTE]

Thanks for responding.

Just the other day tiny factors were found <1M.

I believe George said something along the lines of "there may be missed factors in the <1M range", but that was long ago.

New question:

[B]Assuming one wanted to spend time looking for missed factors, where would you (or someone else) suggest looking?[/B]

For instance, the state of GIMPS at the time of relevant software changes could (I speculate) narrow down the ranges of interest.


According to PrimeNet, these are the 58-bit and smaller factors found this year, sorted by factor size. The largest exponents are near 9.03M.

[CODE]99023 980560602097
1016959 18653763679711
1000249 18718209673951
1012631 19848410108993
1010083 20476992464473
1019177 21488556163649
1006037 23404082594681
1019747 23799508124081
1018313 24324727758641
1008979 31637343728201
1012087 32573220546271
1013879 34625346725441
1015499 35394567725641
1015507 37283510851289
1001023 43400827672903
864007 46477709992471
1009997 48129370881049
1002289 48903508485137
1012631 50384038680457
1015549 51839698005217
1017703 61295136287513
1018207 61634570426393
1009991 65613588599249
1012751 66624391675561
1019617 73345567713311
1001549 78737816245961
1000907 80089865682031
1005913 82565258566961
1011817 85194319553513
1002191 89413003603849
1002851 90306098748169
1011733 103881086013473
1001023 105325934316809
1002809 108624581750791
1017781 119335489914337
1004657 120969487520809
1005593 121804213923937
400157 128240538662921
1011797 151518203436449
1007651 185960105727329
1002191 206961669674729
1007723 207476662211801
106243 220091409617207
1016159 227720112092783
1014199 228600194965057
1010957 242816965849993
1003913 252300058074929
1015127 259085723606873
1015471 263462537414369
1002851 265430938262447
100523 265722819270463
1002359 298610005183879
1017437 317388210528121
2420779 319464569056489
1001381 348075947962759
1005493 351900700637111
1004743 355675043217721
1006513 360597218428999
1006507 368083732552673
1011671 372929571475063
4133449 402229658827897
1010201 406245525415663
104681 421630521561673
4342111 431267678479543
4247863 446251303961191
1018109 471922881727343
1008857 504298736777233
1014193 518205426514553
1016303 539032923610121
4274969 554148465384329
1010579 561804491415863
1009457 569495154117559
1008613 570848425426423
1017131 583226334994423
108869 640334321542471
1004441 649241734499201
1018559 720553676854457
1014833 724291410620993
108401 754055767693049
1010809 808866600136687
1009837 907916064221279
1018313 984707807470577
83009 990980413706951
125399 1011662939203313
1003943 1134859295559161
1007179 1166357726794913
1018291 1200214155372671
1004371 1201615242506641
1007959 1206562695464911
1012771 1225524778255703
1002359 1346039738821537
1002173 1357535167633721
99079 1412184542980087
1002511 1475685631549127
1002143 1533246771531151
1005493 1538728354360943
1008913 1601754304362871
1011431 1783738457388967
1005527 1797217598520353
1004233 1806811609034063
1012993 1895467579412423
1012087 1936194720326063
901097 1952850613315457
98009 1988617268030023
1006193 2020846479258359
1017011 2033149477786793
901547 2124272174277161
400217 2394731204476207
1009601 2400592635533327
1014193 2447466101263849
1009637 2490076365496351
1004567 2507583067761001
1013923 2725924871816233
1002149 2803832954121263
1002569 2872444793803049
62099 3019058178003401
82307 3030500283220913
1004779 3033896470116761
1018313 3227638113018551
1010353 3236064988674431
1008809 3267646386769361
101273 3271387127383937
86857 3321995396388511
1017827 3435506187107663
1009991 3528760434779887
104513 3611892180968927
1014131 3652586416418783
100271 3661741939738607
83137 3671207650081553
1001639 3860527691238961
1004441 3899769535437313
9028337 3912521837704031
1016573 3940972432466063
1014199 4020853779326623
104827 4312079700970529
1000253 4365256714603753
1016947 4508448594046097
90821 4530157350487799
104123 4848782238266807
1001659 5095495810541057
103471 5188727978053273
85487 5316775202138161
1011779 5366906234880833
1016051 5367413221442759
1008809 5434110214311407
93719 5449249011160751
1001947 6023437950357383
1015697 6106280787650177
1001023 6239409282646471
1000507 6545453841447847
1009289 6722020453106921
1012997 6828461100959191
1019747 6860676621234559
1013671 7463026192839703
81373 7535738901877169
106087 7545325068873769
107867 8659739619596137
1009037 8710630331154617
82217 8785578980234401
1004873 8920225963481009
1019443 9048286923381911
400009 9326149030431929
1000537 9362759471355617
9028337 9720574988871041
1007459 9974630715927529
126047 10068069852964433
108343 10093628875392599
1007129 10374999104164153
1016959 10468076425908791
1016573 10489367636012537
1011071 10822284427910209
100447 10833463432273607
1008223 11020294552348481
1006339 11206106968395847
1008437 11260060308181127
88523 11278949304402127
1002523 11974269164369623
62131 12184261955089673
1013671 12587896925293649
1017857 12746205441772921
1000907 13671464359549247
60337 14157609727264231
1019209 14199609416529217
88811 15410174172800081
1012703 16826761459540961
127289 16990947052876033
1016053 19055636858196751
1018999 21187947967344409
1009559 21268826671484057
1003193 22446770078845153
1009937 23845373163654071
1182953 24042946210458353
1011677 24410729824112999
1008407 24643610973806777
1001173 25906499624318393
1002517 26137376131629977
1014743 26859693834111353
1018859 27509358587002399
1007813 27948761631467279
1001809 28253697237572929
125119 28706030530418713
1011343 29010047325596663
1620989 29166507389557009
104779 29952897673599311
1005541 30164687516194657
81131 30970934344553807
1005269 31730846049890033
100823 32718508623684503
107903 33240811694356087
66601 33598299665978119
1019267 36035719707737639
125899 37896766522865713
1003621 40858310866986911
105211 41527813489337039
885977 42768654755388593
73693 42798053430500713
1452457 44223530791070777
1011343 45702720956876761
1003363 46374008358700271
1000847 49805097491188079
1618241 51329980060216993
65357 58130304979246361
53609 60792876838136119
104231 61584794440146511
108421 63388824385479511
66221 63396230446135231
102077 63736696785276377
72341 65355437734517401
103511 65928873019240271
1006063 66167532034560599
1009037 66852840610376657
1012829 68232868550629439
1001587 68509862263995583
127219 69961396774604807
1011733 71859146656358623
109589 73530089455028033
77137 74027323761489881
1510799 78128410571491879
1017703 78831282269233657
1001327 79724596802652383
1015871 80239306685666977
1002247 81754558937865607
1000099 82815026005984871
1008863 82866332946794017
1016359 83239897060454089
98887 83256236543665489
73939 83757780760254191
1019357 86371091158126561
63521 86927908707753073
87641 89472672952096751
1010957 95992770210964313
1502689 101582513471959879
1007933 103015778813126183
1006063 104477544626947361
1501663 105064962893473657
82307 109585211839872919
1013791 110528245389881287
1012457 113619522018953569
127301 116783194462665937
72211 122869954525849481
69991 124990441490405599
97771 125898247734526409
1012399 126045865312962607
1004233 128658770240793503
1006193 138559778438573321
1001387 146249244431956031
1013581 148741927534567273
39239 151688561135612231
9000683 152951220467377351
60383 154821431322412159
103177 167230673565034081
41887 168666943367160473
101467 171665469954516919
84067 172890589120324177
66629 177840727690088273
104681 183610827669992551
106367 192625801807206121
101467 195715582128306503
109639 196708855126048247
83843 198423011702650313
108343 216924007232406889
71471 230415693584014439
38833 273437273838431047
107137 278540511429188759[/CODE]

garo 2010-11-15 11:48

A Mersenne number has a factor of x bits with probability about 1/x. Use this to estimate the average number of factors a range should have, then identify ranges that have too few. You could use 2 SD below the mean as a starting point. The resulting ranges are where I would start looking. There is an old thread here that will help you: [url]http://www.mersenneforum.org/showthread.php?t=1425[/url]
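
A minimal sketch of that screening in C (my own illustration, not GIMPS code; the TF window, range size and factor count below are made up):

[CODE]#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Heuristic from above: a Mersenne number has a factor of x bits
       with probability about 1/x. */
    int a = 56, b = 62;   /* assumed TF window: factors of a+1..b bits */
    int n = 24000;        /* exponents in the range (made-up count) */
    int found = 2300;     /* factors on record for it (made-up count) */

    double p = 0.0;
    for (int x = a + 1; x <= b; x++)
        p += 1.0 / x;     /* chance one exponent has a factor in the window */

    double mean = n * p;                  /* expected number of factors */
    double sd = sqrt(n * p * (1.0 - p));  /* binomial standard deviation */

    printf("expected %.0f +/- %.0f, observed %d\n", mean, sd, found);
    if (found < mean - 2.0 * sd)
        printf("suspiciously few factors - candidate range for re-TF\n");
    return 0;
}[/CODE]
Of course the 1/x rule is only an approximation, so a 2-SD shortfall is a hint to re-examine a range, not proof of a missed factor.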

alpertron 2010-11-15 12:59

[QUOTE=lorgix;237175]According to PrimeNet, these are the 58-bit and smaller factors found this year, sorted by factor size. The largest exponents are near 9.03M.[/QUOTE]
A lot of these factors (if not all) are not the first prime factors found for that exponent. If a prime factor is found, GIMPS does not try to continue factoring that Mersenne number.

markr 2010-11-15 13:49

[QUOTE=alpertron;237186]A lot of these factors (if not all) are not the first prime factors found for that exponent. If a prime factor is found, GIMPS does not try to continue factoring that Mersenne number.[/QUOTE]
Indeed, prior to version 5 of the server, about three years ago, GIMPS only kept one factor for each exponent in its database. (Of course other folk tried to keep track of all factors.)

lorgix 2010-11-15 15:53

[QUOTE=garo;237179]A Mersenne number has a factor of x bits with probability about 1/x. Use this to estimate the average number of factors a range should have, then identify ranges that have too few. You could use 2 SD below the mean as a starting point. The resulting ranges are where I would start looking. There is an old thread here that will help you: [URL]http://www.mersenneforum.org/showthread.php?t=1425[/URL][/QUOTE]

I wouldn't have guessed it was that simple. I've had some success finding factors using other statistical methods though.
I integrate over a given exponent range and bit depth, and then compare the results to the distribution of known factors, right?

Some of the ranges mentioned in that thread have actually caught my attention already, by other means.

I'm under the impression that truly missed factors of 58 bits or smaller are rare at this point. (For 59-60 bits I have little basis to speculate.) I will probably give this some more attention. Thanks for the pointers.

[QUOTE=alpertron;237186]A lot of these factors (if not all) are not the first prime factors found for that exponent. If a prime factor is found, GIMPS does not try to continue factoring that Mersenne number.[/QUOTE]

I am aware (I just realized the first factor in the list is composite, btw). I haven't checked the list with regard to that; it's a simple result query.

Would this be an interesting example?;

1012631,243031441,
1012631,647776000177,2008-07-16 09:30
1012631,19848410108993,2010-11-02 23:32
1012631,50384038680457,2010-11-02 23:32

[QUOTE=markr;237191]Indeed, prior to version 5 of the server, about three years ago, GIMPS only kept one factor for each exponent in its database. (Of course other folk tried to keep track of all factors.)[/QUOTE]

I did not think of that, although it sounds familiar. Might have read it in some older documentation.


Now, how does [B]FactorOverride[/B] work? Could someone please tell me where to look for an accurate description of how it currently behaves? Or just give a brief description?

cheesehead 2010-11-15 20:59

[QUOTE=lorgix;237196]Now, how does [B]FactorOverride[/B] work?[/QUOTE]All it does is override the default bit limit for TF. It changes nothing else (such as the algorithm).

Prime95 source module commonc.h specifies the default TF bit limits. In v25, they're:

[code]/* These breakeven points were calculated on a 2.0 GHz P4 Northwood: */

#define FAC80 516000000L
#define FAC79 420400000L
#define FAC78 337400000L
#define FAC77 264600000L
#define FAC76 227300000L
#define FAC75 186400000L
#define FAC74 147500000L
#define FAC73 115300000L
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L

/* These breakevens were calculated a long time ago on unknown hardware: */

#define FAC65 13380000L
#define FAC64 8250000L
#define FAC63 6515000L
#define FAC62 5160000L
#define FAC61 3960000L
#define FAC60 2950000L
#define FAC59 2360000L
#define FAC58 1930000L
#define FAC57 1480000L
#define FAC56 1000000L
[/code]Interpretation: FAC[I]nn[/I] is the low end of the range of exponents for which [I]nn[/I] is the default TF limit. That range extends up to FAC([I]nn[/I]+1).

For example, exponents between 47,450,000 and 58,520,000 have a TF bit limit of 69 by default. Unless FactorOverride is used, Prime95 will TF up through 2^69 for exponents in that range.

Exponents greater than 516,000,000 all have a default TF limit of 80 at present, but future versions of Prime95 might specify ranges for limits of 81, 82 ...

Now, if you look at the bit levels to which exponents have actually been TFed, you'll find that there's no break at 6515000. Exponents between 6000000 and 6515000 have all been TFed to 63 (or more), as well as exponents 6515000-6999999. That's because the TF default limits used to be different in earlier versions of Prime95. At the time when exponents between 6000000 and 6515000 were being TFed, the then-current version of Prime95 specified a lower exponent range for TF-to-63 than it does now. I.e., FAC63 (and most other FAC[i]nn[/i]) then had a lower value than it does now.

Note: FactorOverride can be used to set either a higher or a [I]lower[/I] TF limit than the default value.
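
To make the table concrete, here is a small C sketch (mine, not Prime95 code) of the lookup the interpretation above implies:

[CODE]#include <stdio.h>

/* Default TF limits from the v25 commonc.h table quoted above:
   FACnn is the smallest exponent whose default limit is nn bits. */
static const struct { long lo; int bits; } fac[] = {
    {516000000L, 80}, {420400000L, 79}, {337400000L, 78}, {264600000L, 77},
    {227300000L, 76}, {186400000L, 75}, {147500000L, 74}, {115300000L, 73},
    {96830000L, 72}, {75670000L, 71}, {58520000L, 70}, {47450000L, 69},
    {37800000L, 68}, {29690000L, 67}, {23390000L, 66}, {13380000L, 65},
    {8250000L, 64}, {6515000L, 63}, {5160000L, 62}, {3960000L, 61},
    {2950000L, 60}, {2360000L, 59}, {1930000L, 58}, {1480000L, 57},
    {1000000L, 56}
};

int default_tf_bits(long p)
{
    /* Scan the breakevens from the top down; the first one p reaches
       gives the default bit level. */
    for (int i = 0; i < (int)(sizeof fac / sizeof fac[0]); i++)
        if (p >= fac[i].lo)
            return fac[i].bits;
    return -1;  /* below FAC56: the table doesn't cover it */
}

int main(void)
{
    printf("%d\n", default_tf_bits(50000000L));  /* prints 69, per the example */
    return 0;
}[/CODE]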

markr 2010-11-16 04:09

[QUOTE=lorgix;237196]I wouldn't have guessed it was that simple.[/QUOTE]
As you've probably guessed, it's an approximation.

[QUOTE=lorgix]Would this be an interesting example?;

1012631,243031441,
1012631,647776000177,2008-07-16 09:30
1012631,19848410108993,2010-11-02 23:32
1012631,50384038680457,2010-11-02 23:32[/QUOTE]
The first one, without a date, would have been from PrimeNet v4; the rest from v5. It's an example of how things are [I]supposed[/I] to be, with factors found in order of size.:wink:

[QUOTE=lorgix]Now, how does [B]FactorOverride[/B] work? Could someone please tell me where to look for an accurate description of how it currently behaves? Or just give a brief description?[/QUOTE]
Perhaps you also read about FactorOverride in some older documentation. It was dropped starting with client versions 25.x.

Currently, trial-factoring worktodo lines specify both the starting and ending level, e.g. "Factor=21990487,65,66". Previously, up to client version 24.x, they looked like "Factor=21990487,65", with the starting level only, and the client determined the ending level. FactorOverride could be used to change that ending level. It did not work with PrimeNet communication turned on. You would have to find an undoc.txt from v24.x or earlier for a definitive write-up.

cheesehead 2010-11-16 04:45

[QUOTE=markr;237266]It was dropped starting with client versions 25.x.[/QUOTE]That's what I get for staying a version behind. When will I learn?

Please consider all verb tenses in post #94 to be adjusted accordingly.

henryzz 2010-11-16 16:33

[QUOTE=cheesehead;237270]That's what I get for staying a version behind. When will I learn?

Please consider all verb tenses in post #94 to be adjusted accordingly.[/QUOTE]
Two versions behind - 26.x is out.

