#716
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
Quote:
#717
"James Heinrich"
May 2004
ex-Northern Ontario
E84₁₆ Posts
If the machine had insufficient memory to do any stage 2 at all, it would (using the M54952927 example from above) start with bounds where B1=B2, scaled to a lower overall effort than if stage 2 were being done. This maintains the balance: no stage 2 => lower factor probability => worth spending less effort:
Quote:
Last fiddled with by James Heinrich on 2011-10-23 at 10:58
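To make the trade-off James describes concrete, here is a toy sketch of an expected-savings calculation. It is not Prime95's actual algorithm: the probability curve, the scaling factor, and the cost constants are all invented for illustration. Only the shape of the argument (lower factor probability => smaller optimal B1, i.e. less effort) reflects the post above.
Code:
import math

def expected_saving(b1, prob_scale, tests_saved_cost=10e6):
    # Made-up concave probability curve; only its shape matters here.
    prob = prob_scale * math.log(b1) / math.log(1e9)
    return prob * tests_saved_cost - 1.44 * b1  # payoff minus ~stage 1 cost

# Lower factor probability (no stage 2 possible) => smaller optimal B1:
for scale in (1.0, 0.6):
    best = max(range(100_000, 2_000_001, 10_000),
               key=lambda b1: expected_saving(b1, scale))
    print(f"probability scale {scale}: optimal B1 ~ {best}")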
#718
Jun 2003
7×167 Posts
Quote:
About a year or so ago, I tried the experiment of seeing how many relative primes were processed each pass of stage 2 on minimal memory settings. The answer was 2 out of 8 total. The total number of passes, therefore, is 4, not the 24 it would take if there were 48 relative primes in total, or the horrendous 240 to do 480 relative primes.

This experiment was done on a previous version of mprime. I'll try to catch this exponent just before the end of its stage 1, take a copy of the save file, then complete stage 1 with 92M available in order to repeat the experiment.

Last fiddled with by Mr. P-1 on 2011-10-23 at 14:47
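The pass counts in that experiment are just the total number of relative primes divided by how many fit in memory at once. A quick sketch of the arithmetic, with the batch size of 2 taken from the experiment above:
Code:
import math

def stage2_passes(total_rel_primes, per_pass):
    # Each pass of stage 2 processes one batch of relative primes.
    return math.ceil(total_rel_primes / per_pass)

for total in (8, 48, 480):
    print(f"{total} relative primes at 2 per pass -> {stage2_passes(total, 2)} passes")
# 8 -> 4, 48 -> 24, 480 -> 240, matching the figures above.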
#719
Jun 2003
7×167 Posts
Quote:
We don't have to guess, however. We can see for ourselves.
#720
Aug 2002
Termonfeckin, IE
2768₁₀ Posts
Thanks for those numbers, Mr. P-1. I would say that even 200MB would give you a decent P-1. After 300MB the marginal gains really do shrink.
#721
(loop (#_fork))
Feb 2006
Cambridge, England
3³·239 Posts
This is interesting data. It would be even more interesting if we could see the 'effort=' lines that James posted for the different memory settings that Mr. P-1 posted.
Code:
100M:    2.42% from 2.26 GHz-days = one factor per 93.39 GHz-days
300M:    4.19% from 2.76 GHz-days = one factor per 65.87 GHz-days
10000M:  4.74% from 3.92 GHz-days = one factor per 82.70 GHz-days

Is it possible to do ECM on these large exponents?
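Those "one factor per N GHz-days" figures are simply effort divided by success probability; a few lines reproduce them from Mr. P-1's numbers:
Code:
# expected GHz-days per factor = effort spent / probability of success
for mem, prob, cost in [("100M", 0.0242, 2.26),
                        ("300M", 0.0419, 2.76),
                        ("10000M", 0.0474, 3.92)]:
    print(f"{mem}: one factor per {cost / prob:.2f} GHz-days")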
#722
Jun 2003
7·167 Posts
Quote:
What certainly still is true is that, of those assignments going to LL machines without having been pre-P-1'd, no more than about half are getting any stage 2.
#723
Jun 2003
7·167 Posts
Quote:
#724
Jun 2003
10010010001₂ Posts
The percentage success rate is not the whole story. You also need to take into account the running time. For example, I would guess that B1=460000, B2=2185000 is cheaper to run, even with just 100MB, than B1=B2=795000.
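One rough way to sanity-check that guess: stage 1 costs about 1.44·B1 squarings (the bit length of E = lcm(1..B1)), and stage 2 costs very roughly one multiplication per prime between B1 and B2. This model is an assumption for illustration; it ignores prime pairing and stage 2 setup overhead, so treat the numbers as ballpark only:
Code:
from sympy import primepi

def p1_cost(b1, b2):
    stage1 = 1.44 * b1                       # ~bit length of E = lcm(1..B1)
    stage2 = int(primepi(b2) - primepi(b1))  # one multiply per stage 2 prime
    return stage1 + stage2

print(p1_cost(460000, 2185000))  # two-stage plan
print(p1_cost(795000, 795000))   # stage-1-only plan (B1 = B2)
Under this crude model the two-stage plan comes out roughly 30% cheaper, consistent with the guess above.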
#725
Jun 2003
7·167 Posts
Quote:
I can't get it to start stage 2 with any less than 112MB. With 112MB, it uses 92MB to process 1 relative prime out of 48. With 113MB available, however, it uses 112MB to process 2 relative primes out of 48. It looks to me as though there are two, possibly three, separate bugs here.

Bug 1: Presumably the intention is that stage 2 will only be run when there is sufficient memory to process 2 relative primes; however, there appears to be an off-by-one error in handling the case where the available memory is exactly enough to process 2 relative primes. It starts stage 2, but then processes only 1 relative prime.

Bug 2: When calculating optimal bounds and deciding whether or not it can do stage 2 at all, it assumes it can if it has enough memory to process only one relative prime, not two. This is a significant bug: anyone allowing exactly 100MB will, for exponents of this size, accumulate unfinished P-1 save files without ever completing them.

Possible Bug 3: In earlier versions, I'm sure I recall it choosing plans with just 8 relative primes in total. Shouldn't it have chosen such a plan here?

There are two other minor output bugs. When restarting stage 2, it reports that instead of the optimal bounds it calculates, it is "using B1=560000 from the save file". 560000 is the B1 bound it computed based upon the generous memory allocation at the very start of its stage 1 calculation. However, stage 1 was finished with a much lower memory allocation, and consequently a much lower optimal B1, but it never told me during stage 1 that it was using B1 from the save file. Finally, the message "other threads are using lots of memory now" is confusing when you have no other threads running.

Linux, Prime95 v26.5, build 5. I will PM George to draw his attention to this post.

Last fiddled with by Mr. P-1 on 2011-10-23 at 21:39
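For what it's worth, Bug 1 has the flavor of a boundary-condition error. The sketch below is purely hypothetical (none of it is Prime95's actual code), with memory figures back-calculated from the 92MB/112MB observations above:
Code:
# Hypothetical figures inferred from the observations above: 1 relative prime
# fits in 92MB and 2 fit in 112MB, suggesting ~72MB fixed overhead plus ~20MB
# per relative prime.
BASE_MB, PER_REL_PRIME_MB = 72, 20

def rel_primes_that_fit(avail_mb):
    return max(0, (avail_mb - BASE_MB) // PER_REL_PRIME_MB)

# Intended rule: start stage 2 only if at least 2 relative primes fit.
# A strict '>' where '>=' was meant, e.g.
#   avail_mb > BASE_MB + 2 * PER_REL_PRIME_MB    # bug: should be >=
# would match the symptom: at exactly 112MB one code path decides stage 2 is
# worth starting while the batching path drops back to a single relative prime.
for avail in (111, 112, 113):
    print(f"{avail}MB -> {rel_primes_that_fit(avail)} relative primes fit")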
#726
Oct 2011
1247₈ Posts
Quote: