[QUOTE=Dubslow;299191]Heh -- I was thinking "won't become outdated"... unless "we kids" advance technology to the point where we become merged with the machine anyways. :smile:[/QUOTE]Even more reason to study human nature now, so you'll know what you're messing with (or someone else is messing with).
[QUOTE=cheesehead;299213]Even more reason to study human nature now, so you'll know what you're messing with (or someone else is messing with).[/QUOTE]
Touché. :smile:
46 retests done on my range M83xxxxx: 1 factor found, in Stage 1 (B1=95000).

I am surprised. I expected to find one or two in Stage 2, given that the first runs were very badly done and back then most people could not allocate close to 2 GB of RAM (a hot rod in those days would have been a P3 Katmai, perhaps with 256 MB total RAM). Stage 1 should have found this the first time around, or what gives? 1410 tests pending ...
[QUOTE=PageFault;299404]46 retests done on my range M83xxxxx: 1 factor found, Stage 1 (B1=95000). I am surprised. I expected to find one or two in Stage 2 given that the first runs were very badly done and back then most people could not allocate close to 2 GB of ram (back then a hot rod would have been a P3 Katmai perhaps with 256 MB total ram). Stage 1 should have found this, or what gives?[/QUOTE]Which exponent / factor? If you've submitted your results to [url]http://mersenne-aries.sili.net[/url] then you'll see where the factor falls on the P-1 graph and how it was missed by the first P-1 (whether B1 and/or B2 was too small to find it).
The "poorly done" P-1s may or may not have had stage 2 done, but either way the factor probability for that batch is somewhere just over 2%. Assuming you're giving sufficient RAM to achieve a nominal 5% probability, you'd expect around 3% to have factors (since 2% have already been found by the first round of P-1) -- 46 * 0.03 = 1.38, which is pretty close to the 1 you've found already.

The ~3% success rate isn't exciting, except by virtue of these exponents being small and being able to chew through a hundred or so of them per day (depending on what you're running it on, naturally).
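A quick sketch of the expectation arithmetic above (the 5% nominal and 2% already-found figures are the rough estimates from this post, not exact values):

```python
def expected_new_factors(num_retests, p_nominal=0.05, p_already_found=0.02):
    """Expected number of new P-1 factors in a batch of retests.

    A retest can only surface a factor that the original (weaker) P-1 run
    missed, so the per-exponent probability is the nominal success rate
    minus the fraction already harvested the first time around.
    """
    return num_retests * (p_nominal - p_already_found)

# 46 retests at a ~3% residual probability -> about 1.4 factors expected,
# consistent with the 1 factor actually found so far.
print(round(expected_new_factors(46), 2))  # -> 1.38
```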
[QUOTE=James Heinrich;299411]The ~3% success rate isn't exciting, except by virtue of these exponents being small and being able to chew through a hundred or so of them per day[/QUOTE]As a slightly larger sample size shows: my last batch of 697 exponents had 20 factors (2.87%).
James,
It isn't the incidence of factors that surprises me, rather the finding of a factor in Stage 1. This should have been harvested as low-hanging fruit the first time it was factored. Perhaps the client should do more in Stage 1 if there is insufficient RAM for a good Stage 2? What I observed was a factor with B1=95000; the data on your page indicated B1=65000 for the first run.

I'll continue doing my range, and I may try a hundred or so using the Pminus1 argument. Can you comment on bounds? I have 1920 MB RAM allocated; server-assigned default M5xxxxxxx assignments grab 1880 MB (the other core uses some for LL testing), and the M83xxxxx tests default to 1735 MB for bounds of B1=95000 and B2=1686250.
[QUOTE=PageFault;299485]This should have been harvested as low hanging fruit the first time it was factored.[/quote]That depends on the bounds involved. Which exact exponent are you referring to?
[QUOTE=PageFault;299485]Perhaps the client should do more in Stage 1 if there is insufficient ram for a good Stage 2?[/quote]It does. As a very rough guide, B2 = 20*B1, assuming normal amounts of RAM (for that range) are available, and the time to run stage 2 is roughly equal to the time to run stage 1. Finding which bounds to use is a balance between probability of factor vs. runtime. Using very rough numbers, a stage1+2 P-1 run will give you about a 6% chance of factor; a stage1-only run around 4%. This means that stage 1 of a stage1-only run will take approx 50% longer than the stage 1 part of a 1+2 run, and therefore its bounds will be 50% higher. To use an example, let's take M60,000,000:

B1=600000, B2=14375000 -> 6.00% chance of factor, 4.35 GHz-days

but, if you have insufficient RAM and therefore run stage 1 only, it would be something around:

B1=700000 -> 3.00% chance of factor, 2.25 GHz-days

Half the effort, half the chance of factor, and only a slightly higher B1 than normal.

[QUOTE=PageFault;299485]What I observed was a factor with B1=95000; the data on your page indicated B1=65000 for the first run.[/quote]Tell me which exponent you're talking about and I can comment further.

[QUOTE=PageFault;299485]I'll continue doing my range and I may try a hundred or so using the Pminus1 argument. Can you comment on bounds?[/QUOTE]My comment on bounds: let Prime95 do its thing. It's a complex process to select optimal bounds (which I have glossed over above), and setting them manually without a [i]very[/i] specific reason almost always results in worse bounds than letting Prime95 figure it out.
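To make the trade-off above concrete, a small sketch comparing the two M60,000,000 scenarios using the rough figures quoted in this post (illustrative estimates, not Prime95's actual bounds calculation):

```python
# Rough figures quoted above for M60,000,000 (illustrative estimates only).
runs = {
    "stage1+2":    {"b1": 600_000, "b2": 14_375_000, "p_factor": 0.06, "ghz_days": 4.35},
    "stage1-only": {"b1": 700_000, "b2": None,       "p_factor": 0.03, "ghz_days": 2.25},
}

for name, r in runs.items():
    # Probability of finding a factor per GHz-day of effort spent.
    efficiency = r["p_factor"] / r["ghz_days"]
    print(f"{name}: {r['p_factor']:.0%} chance in {r['ghz_days']} GHz-days "
          f"({efficiency:.4f} per GHz-day)")
```

By this rough measure the full stage1+2 run is slightly more efficient per GHz-day (~0.0138 vs ~0.0133), on top of doubling the absolute chance of a factor, which is why stage 2 is worth running whenever the RAM is available.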
Another thing to think about with the bounds: a lot of the time a machine that had too little memory will have run a P-1 with a higher B1 than would be used by default. I had several B1=B2 exponents that I had to run as Pminus1, since PrimeNet will give no credit if my B1 < the last run's B1, even if I have a B2 that is 10x the old B1.

As an FYI, the 7M to 8M(+?) range also seems to have fallen during a possible 'not up to snuff' programming period, since there are a fair number of factors found that should have been caught before. I even found one factor that was missed by both P-1 and TF when originally checked.
[QUOTE=bcp19;299501]the 7M to 8M(+?) range also seems to have been during a possible 'not up to snuff' programming time[/QUOTE]I know the issue exists; what's not clear is whether the problem lies with a) factors being found but not reported correctly; b) factors found and reported but not stored correctly and/or lost over time; or c) bad code in Prime95 not finding factors it should. I know I've found a dozen such factors myself.
The exponent in question is M8360353.

I'm planning to fire up another batch in a week or so. I'll do a few hundred or so and we shall see the results.

There is another observation. M57078733 just started its run and (as normal) the client prints "Using Pentium4 Type-0 FFT length 3M, Pass1=768, Pass2=4K". For the M83xxxxx tests it used a Core2 FFT (details were not recorded). Of course, if a particular code path is faster, the client should check for and use the best one. I'm wondering if anyone else has seen this behavior.
[QUOTE=PageFault;299503]The exponent in question is M8360353[/QUOTE]That factor is one of those that [i]bcp19[/i] mentioned: it should have been found with [url=http://mersenne-aries.sili.net/M8360353]the original P-1 bounds[/url], and can be found in stage 1 with B1 >= 63059 (or in stage 2 with B1 >= 4073 and B2 >= 63059). It [i]should've[/i] been found; maybe it was, but the record was lost for whatever reason, as discussed above between me and [i]bcp19[/i].

If you submit your results.txt to the site you'll see how the bounds overlap between the original P-1 and your more recent one that (re-)found the factor. Unfortunately PrimeNet throws away the data on what bounds were used when a P-1 factor is found, so I can't graph that data without the user submitting their F-PM1 results.
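The B1/B2 thresholds quoted above come from the smoothness of the factor: P-1 finds a factor f of 2^p-1 when the k in f = 2kp+1 is B1-smooth, except that one prime of k may lie between B1 and B2 (covered by stage 2). A minimal sketch of that check, ignoring the prime-power subtleties of how Prime95 actually builds its stage-1 exponent; since the M8360353 factor itself isn't quoted in this thread, the demo uses a known small factor of M29 (2^29-1 = 233 * 1103 * 2089):

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division (fine for small k)."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def minimal_p1_bounds(factor, exponent):
    """Smallest bounds at which P-1 can see `factor` of 2^exponent - 1.

    Returns (B1 for a stage-1-only find, (B1, B2) for a stage-2 find).
    Assumes factor = 2*k*exponent + 1 with k > 1.
    """
    k = (factor - 1) // (2 * exponent)
    primes = sorted(prime_factors(k), reverse=True)
    b1_stage1 = primes[0]                                  # all primes of k <= B1
    b1_stage2 = primes[1] if len(primes) > 1 else primes[0]  # largest prime goes to stage 2
    return b1_stage1, (b1_stage2, primes[0])

# 2089 divides M29: k = 2088 / (2*29) = 36 = 2^2 * 3^2,
# so stage 1 alone needs B1 >= 3 (the largest prime of k).
print(minimal_p1_bounds(2089, 29))  # -> (3, (2, 3))
```

Applying the same logic to the M8360353 factor reproduces the thresholds quoted above: B1 >= 63059 for stage 1 alone, or B1 >= 4073 with B2 >= 63059, i.e. the largest and second-largest primes of its k.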
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.