[QUOTE=James Heinrich;287383]NTP = [url=http://en.wikipedia.org/wiki/Network_Time_Protocol]Network Time Protocol[/url]. If you run Windows, you're familiar with the concept where Windows can sync time over the internet.[/QUOTE]Yes, I understand now; I had just never seen it named specifically before. It's set to sync once a week. I'm also now worried that my fan is dying: even though vacuuming the keyboard and vents on my laptop made it run noticeably faster again, the fan still makes the laptop buzz and vibrate after a while of gaming or video playback.
[QUOTE=Mini-Geek;287412]p*B1[SUP]pi(B1)[/SUP][/QUOTE]
With B1=620000, this has nearly 300,000 digits. Needless to say, the probability of actually finding a factor near that size is nearly 0.
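As a sanity check on that digit count (a sketch, not part of the original post): π(620000) is about 50,600, so B1[SUP]π(B1)[/SUP] has roughly 50,600 × log10(620000) ≈ 293,000 digits. The actual stage-1 exponent, the product of all maximal prime powers up to B1, is somewhat smaller, on the order of B1/ln 10 ≈ 270,000 digits.

```python
from math import log10

def prime_count(n):
    """Count primes <= n with a basic sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = b"\x00" * ((n - p * p) // p + 1)
    return sum(sieve)

B1 = 620000
pi_b1 = prime_count(B1)       # pi(B1): number of primes up to B1
digits = pi_b1 * log10(B1)    # decimal digits of B1**pi(B1)
print(pi_b1, round(digits))   # about 50,600 primes, ~293,000 digits
```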
P-1 done deep enough?
For my current P-1 assignment I have allocated 1200 MB of memory, so I am getting this output:
[CODE][Jan 30 10:04] Worker starting
[Jan 30 10:04] Setting affinity to run worker on any logical CPU.
[Jan 30 10:04] Optimal P-1 factoring of M56****** using up to 1200MB of memory.
[Jan 30 10:04] Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
[Jan 30 10:04] Optimal bounds are B1=590000, B2=12685000
[Jan 30 10:04] Chance of finding a factor is an estimated 4.73%
[Jan 30 10:04] Using Core2 type-3 FFT length 3M, Pass1=3K, Pass2=1K, 2 threads
[Jan 30 10:04] Setting affinity to run helper thread 1 on any logical CPU.
[Jan 30 10:05] Using 1196MB of memory. Processing 44 relative primes (396 of 480 already processed).
[Jan 30 10:05] M56****** stage 2 is 85.16% complete.
[Jan 30 10:21] M56****** stage 2 is 86.57% complete. Time: 950.475 sec.
[Jan 30 10:37] M56****** stage 2 is 87.98% complete. Time: 939.702 sec.[/CODE]
As the chance of finding a factor is only an estimated 4.73%, I worry that my work is not very useful to the project. Should I continue doing P-1 assignments with this amount of memory, or should I leave it to the 'big guns', who can use much more memory than I can?
[QUOTE=M0CZY;287744]As the chance of finding a factor is only an estimated 4.73%, I worry that my work is not very useful to the project. Should I continue doing P-1 assignments using this amount of memory, or should I leave it to the 'big guns', who can use much more memory than me?[/QUOTE]You're making a difference. The chance of finding a factor may look small, but you save a lot of time if you do find a factor. (For that particular assignment you may want to adjust the memory settings so that 40 or 48 relative primes can be processed at once.)
[QUOTE](For that particular assignment you may want to adjust the memory settings so that 40 or 48 relative primes can be processed at once.)[/QUOTE]
OK, for my next assignment I'll increase the memory allocation to 1400 MB and see what happens. But that particular machine I was using is at the public library, and only has 2 GB of RAM, so I can't use too much more before it starts thrashing and becoming unresponsive during stage 2! My own computer has 3 GB of RAM, but is very much slower (3.00 GHz Pentium 4 Prescott core), which is why I use the fast dual-core 'whizz box' at the library for as long as I can every day.
[QUOTE=M0CZY;287744]As the chance of finding a factor is only an estimated 4.73%, I worry that my work is not very useful to the project.
Should I continue doing P-1 assignments using this amount of memory, or should I leave it to the 'big guns', who can use much more memory than me?[/QUOTE]Absolutely keep going. 1200MB may not seem like much compared to the "big guns" around here, but the extra RAM generally provides only a marginal increase in factor probability. For example, here is my own big system with 10GB per worker:[code]Optimal P-1 factoring of M58407191 using up to 10000MB of memory.
Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
Optimal bounds are B1=625000, B2=14687500
Chance of finding a factor is an estimated 4.92%[/code]So I've given it more than 8x as much RAM as you did, and the factor probability is only 0.19% higher... Conversely, 1200MB is [i]much[/i] better than 200MB, and hugely better than what can be expected from a random GIMPS user doing P-1 as the initial part of an L-L test.
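A rough way to see why even a 4.73% chance is worthwhile: P-1 pays off whenever (success probability) × (tests saved × test cost) exceeds the cost of the P-1 run itself. The timings below are hypothetical placeholders, not measured values from either machine:

```python
# Hypothetical costs in hours; substitute your own machine's timings.
p1_cost_hours = 8.0      # one full P-1 run (stage 1 + stage 2)
ll_cost_hours = 120.0    # one primality test at this exponent size
tests_saved = 2          # first-time test plus double-check

# P-1 breaks even when its success probability exceeds this threshold.
break_even = p1_cost_hours / (tests_saved * ll_cost_hours)
print(f"break-even probability: {break_even:.2%}")  # 3.33%

# At the 4.73% chance from the log, expected hours saved per assignment:
chance = 0.0473
expected_savings = chance * tests_saved * ll_cost_hours - p1_cost_hours
print(f"expected net savings: {expected_savings:.1f} hours")  # 3.4 hours
```

With these (made-up) numbers the 4.73% chance is comfortably above break-even, which is the point being made above: even a small per-assignment probability saves time on average across many assignments.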
I was just noticing that my P-1 machine has produced some weird bounds with no change to the memory. Can anyone explain whether this is caused by something other than how far the exponent was TF'd?
[code][Sat Jan 28 09:29:51 2012] P-1 found a factor in stage #2, B1=430000, B2=8707500.
UID: bcp19/HP-NEW, M45048023 has a factor: 360751991413212824008821379007
[Sat Jan 28 17:23:52 2012] UID: bcp19/HP-NEW, M45122951 completed P-1, B1=340000, B2=5780000, E=6, We4: 8F2BB36D
[Sun Jan 29 04:22:56 2012] UID: bcp19/HP-NEW, M45158209 completed P-1, B1=430000, B2=8707500, E=12, We4: 90ECBFD6
[Sun Jan 29 15:21:47 2012] UID: bcp19/HP-NEW, M45159679 completed P-1, B1=430000, B2=8707500, E=12, We4: 90C4BFFD
[Mon Jan 30 02:20:51 2012] UID: bcp19/HP-NEW, M45163583 completed P-1, B1=430000, B2=8707500, E=12, We4: 90E7BF9E[/code]
If the memory allocation (and TF level) was the same, then I don't know. Are you sure it isn't the TF level? I got bounds like the last three at TF=72, but with TF=76 (thanks to roswald) I got something like the first one (even lower, I think, though I could easily see TF=73 giving those bounds).
[QUOTE=bcp19;287761]I was just noticing that my P-1 machine has had a weird bounds with no change to the memory[/QUOTE]Bounds selection is a complex process. My guess would be that you passed some threshold where it could no longer fit a "nice" number of relative primes into a single pass; since it would have to do more passes anyway, the balance of efficiency said that higher bounds and better Brent-Suyama extension usage (E=12 instead of E=6) were worth it.
[QUOTE=bcp19;287761]I was just noticing that my P-1 machine has had a weird bounds with no change to the memory, can anyone explain if this is something other than how far the exp was TF'd?[/QUOTE]
Here's a possibility that affected me recently: if Prime95's RollingAverage (a measure of how fast you really are vs how fast it expects you to be) is off, it will choose bounds differently. I saw Prime95 choose much higher bounds when I manually bumped up the rolling average to be more accurate (it was about half what it should've been).
[QUOTE=Mini-Geek;287830]Here's a possibility that affected me recently: if Prime95's RollingAverage (a measure of how fast you really are vs how fast it expects you to be) is off, it will choose bounds differently. I saw Prime95 choose much higher bounds when I manually bumped up the rolling average to be more accurate (it was about half what it should've been).[/QUOTE]
I think you might be onto something here. I have sometimes tried to lie to mprime about what the system can do, and it "fixes" itself within 24 hours. And some people have left mprime/Prime95 alone, and it sometimes goes insane and decides a single instance on a supercomputer runs at the speed of a 6502. George?