[QUOTE=kladner;275130]That's good to know, but it's way down the road for me. I am running on my "new computer". I do have two more RAM slots, though. <G>[/QUOTE]
I didn't think anyone was actually planning such a build. My comment was really intended to convey just how insignificant all that extra memory really is. GIMPS needs more P-1ers, and nobody should be put off from doing this kind of work because they don't have multiple GB to allocate to it. 500MB per HighMemWorker is ample. |
I do take your meaning, Mr. P-1. And I appreciate getting a clear picture of the relative importance of the different aspects of a computer that make a difference.
I couldn't resist joking, though. I was gobsmacked at the thought that 10GB was not excessive, even if it became clear that the returns don't exactly justify it unless those 10 gigs are lying fallow. Still, I can throw a fair amount of RAM at P-1 when I'm not using the box. |
[QUOTE=Mr. P-1;275124]The B-S extension finds more factors, but it also increases the running time, so the benefits are actually quite marginal. [/QUOTE]
Actually, it is not at all obvious that it _is_ beneficial. Period. The current P-1 code just makes WAGs about B/S efficiency. It is probably more beneficial if people don't give enough memory for B/S so that they can run thru more P-1 tests -- rather than some tests getting super P-1 and some getting none at all. |
WAGs?
I don't understand the meaning of WAGs in this context. What does it mean?
[URL]http://en.wikipedia.org/wiki/Wag_(disambiguation)[/URL] |
Forced B/S test?
If you really, very much, would like your P-1 test to include a B/S extension, is it possible to force P95 to include one even though you don't have "enough" memory?
Of course I understand that it would slow down the test and would not be the most rational thing to do, but is it possible? |
[QUOTE=aketilander;275161]I don't understand the meaning of WAGs in this context. What does it mean?
[URL]http://en.wikipedia.org/wiki/Wag_(disambiguation)[/URL][/QUOTE] [url]http://www.acronymfinder.com/WAG.html[/url] I'll let you figure out which one I meant :smile: |
[QUOTE=axn;275154]Actually, it is not at all obvious that it _is_ beneficial. Period. The current P-1 code just makes WAGs about B/S efficiency.[/QUOTE]
That's true. [QUOTE]It is probably more beneficial if people don't give enough memory for B/S so that they can run thru more P-1 tests -- rather than some tests getting super P-1 and some getting none at all.[/QUOTE] I've argued before that people taking P-1 assignments are trying to solve a different optimisation problem from that faced by those doing P-1 as a preliminary to LLing the same exponent, and that the former should do P-1 to lower bounds than the latter. I wouldn't, however, recommend reducing the available memory to speed up the computation. Rather, I would suggest increasing the nominal number of bits factored in the worktodo file. But I honestly don't think this makes a lot of difference, and I don't do this myself. |
[QUOTE=aketilander;275161]I don't understand the meaning of WAGs in this context. What does it mean?
[URL]http://en.wikipedia.org/wiki/Wag_(disambiguation)[/URL][/QUOTE] Try: [url]http://www.urbandictionary.com/define.php?term=wag[/url] Def #5. |
[QUOTE=Mr. P-1;275170]I've argued before that people taking P-1 assignments are trying to solve a different optimisation problem from that faced by those doing P-1 as a preliminary to LLing the same exponent, and that the former should do P-1 to lower bounds than the latter.[/quote]
Indeed. In the context of this thread, we're trying to avoid "no P-1 LL" as much as possible, so the volunteers should ensure the maximum number of exponents get "reasonably" P-1'ed. [QUOTE=Mr. P-1;275170]I wouldn't, however recommend reducing the available memory to speed up the computation. Rather I would suggest increasing the nominal number of bits factored in the worktodo file. But I honestly don't think this makes a lot of difference, and I don't do this myself.[/QUOTE] As long as increasing memory decreases the run time, go for it. But after a certain point, B/S will kick in and increase the run time. Not good (at least from this thread's context). Another factor: larger memory usage means that, if you're stopping and starting the P-1 run frequently, the overhead will be higher [naturally, this is not relevant if you allow it to run to completion] ---- I'd be interested to see the precise relationship b/w memory allocated and stage2 run time for the current group of exponents under consideration [modulo the caveat that a memory change will slightly alter the bounds chosen and success probability]. I'd like to see some real-world data. To avoid the effect of different cpus, this can be measured as the ratio of stage2 run time : stage1 run time. I can haz data? :smile: |
[QUOTE=axn;275177]I'd like to see some realworld data. To avoid the effect of different cpus, this can be measure as the ratio of stage2 run time : stage1 run time. I can haz data? :smile:[/QUOTE]
That doesn't necessarily avoid the effect of different cpus. A while back I experimented with underclocking my PC by reducing the multiplier, effectively giving it a "different" cpu. This had appreciably more effect on stage 1 than it did on stage 2, presumably because stage 2 is bound by memory bandwidth and perhaps also latency to a greater degree than stage 1. Also I find that, when it's otherwise idle, my dual core system spends considerably longer in stage 2 than stage 1. Consequently, with maxhighmemworkers=1, it accumulates uncompleted stage 2 over time. To counteract this, I run the stage 2 process at high priority. Stage 2 gets the entire core almost all the time, and produces results to a quite regular schedule, while stage 1 competes with every other process for the other core, and timings vary wildly depending upon what other applications I'm using. I still accumulate stage 2 over time, and have to switch to maxhighmemworkers=2 every now and again, but not as often as I otherwise would. So I'm not sure how informative the ratio would be to you. |
[QUOTE=axn;275177]As long as increasing memory decreases the run time, go for it. But after a certain point, B/S will kick in, and increase the run time. Not good (at least from this thread's context). [/QUOTE]
Isn't it only bad if the increased runtime doesn't give a proportionate increase in the probability of finding a factor? Tom |
Changing worktodo after additional TF work
When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?
Chuck |
[QUOTE=Chuck;275185]When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?
Chuck[/QUOTE] I don't think any changes are critical, but I think you should change the second-to-last parameter, which indicates the bits of TF done. |
[QUOTE=fivemack;275181]Isn't it only bad if the increased runtime doesn't give a proportionate increase in probability of factor?
Tom[/QUOTE] Yes. Problem is p95 doesn't have any hard data for calculating the probability. And AFAICT, it overestimates the worth of B/S. But let's say, for the sake of argument, that it is around a 10% increase (which would mean roughly 1 in 10 P-1 factors found where B-S was used could only have been found that way -- not sure empirical data supports that). OTOH, _I_ don't have hard numbers as to how much worse the runtime is with different degrees of B/S. If it is, say, 20% more, then B/S is probably not worth it (in the context of this thread). |
[QUOTE=Chuck;275185]When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?
Chuck[/QUOTE] Sort of. The level of TF done does affect the optimal P-1 bounds. |
[QUOTE=Chuck;275185]When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?[/QUOTE]
It's not critically important. The client uses this information to compute the optimal bounds. If the client thinks the exponent has been factored less deeply than it actually has been (or will be, the order of factoring doesn't make any difference to the factors you will actually find), then it will choose somewhat higher bounds than is optimal. |
[QUOTE=fivemack;275181]Isn't it only bad if the increased runtime doesn't give a proportionate increase in probability of factor?[/QUOTE]
The problem is, as AXN points out, that we don't know that it does. In the case where the assignment is a P-1 (rather than an LL assignment getting an initial P-1) there is another issue: the more time spent on each assignment, the fewer the client is able to complete in a given period of time, and the more assignments which pass through to LL testing without having been P-1ed first. Of these, about half never get a stage 2. This means that, in exchange for a slightly increased chance of finding a factor with the exponents we do test, we're losing even more with the exponents we don't. |
The correct optimality criterion is, for the vast majority of Mersenne exponents, how to prove as many of them composite as possible for the least amount of effort. Factors found per GHz-day is the correct metric.
Mr P-1 points out that by doing relatively deep P-1, we have many exponents not getting any stage 2 P-1, which has a significantly higher return of factors found per time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead. This is also happening with TF, though in this case, the change is due to an increase in the ease of doing TF on GPUs. |
B-S extension
How does P95/64 indicate that B-S has kicked in? Also, what is Stage 1 GCD?
|
[QUOTE=kladner;275214]How does P95/64 indicate that B-S has kicked in?[/QUOTE]
You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor. For some reason it doesn't say when it finds one. [QUOTE]Also, what is Stage 1 GCD?[/QUOTE] The client performs a GCD ([URL=http://en.wikipedia.org/wiki/Greatest_common_divisor]Greatest Common Divisor[/URL]) calculation at the end of each stage. The GCD extracts the factor(s) found (if any) from the result of the computation in each stage. |
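To make that end-of-stage GCD concrete, here's a toy P-1 stage 1 in Python on the small Mersenne number M29 = 2^29 - 1 = 233 * 1103 * 2089. This is a sketch of the textbook algorithm, not how Prime95 actually implements it (Prime95's bound handling and exponent construction are more sophisticated):

```python
from math import gcd

# Toy P-1 stage 1 on M29 = 2^29 - 1 = 233 * 1103 * 2089.
# Every factor q of M(p) has q - 1 = 2*k*p, so the stage 1 exponent E
# bundles 2, p, and all prime powers up to B1; q is found iff k is B1-smooth.
p = 29
N = 2**p - 1
B1 = 10
E = 2 * p
q = 2
while q <= B1:
    if all(q % d for d in range(2, q)):   # q is prime (trial division; toy sizes)
        pw = q
        while pw * q <= B1:               # highest power of q that is <= B1
            pw *= q
        E *= pw
    q += 1

x = pow(3, E, N)       # 3^E mod M29 -- this is the whole of stage 1
g = gcd(x - 1, N)      # the end-of-stage GCD extracts the factors found
print(g, g == 233 * 2089)
# -> 486737 True  (233 has k=4 and 2089 has k=36, both 10-smooth;
#    1103 has k=19, which is not, so it is missed at this B1)
```

Bumping B1 up to 19 would pick up the remaining factor as well; that's the B1 trade-off in miniature.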
Thanks, Mr. P-1!
|
[QUOTE=Christenson;275207]Mr P-1 points out that by doing relatively deep P-1, we have many exponents not getting any stage 2 P-1, which has a significantly higher return of factors found per time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead[/QUOTE]
It's too bad this is happening. I know a lot of us have been trying to get P-1 done before the first LL, but I guess there is still too little P-1 getting done before PrimeNet hands exponents out for first LL? I've turned practically all of my computers that have enough memory over to doing P-1. So is it just that too few of us are doing P-1? |
[QUOTE=delta_t;275220]
I've practically turned all my computers that have enough memory to doing P-1. So is it that we are just too few doing P-1[/QUOTE] Basically yes. Turning every computer that has enough memory to doing P-1 is probably the best thing you could be doing for GIMPS. The only exception is if you have TF-capable GPUs. Currently the GPU factoring programs also need a great deal of CPU time (typically an entire core or two) to support the GPU. Depending on the specific work you do, this may be even more beneficial to GIMPS than devoting those cores to P-1. |
[QUOTE=Christenson;275207]The correct optimality criterion is, for the vast majority of mersenne exponents, how to prove the most of them composite for the least amount of effort. Factors found per GHz-Day is the correct metric.[/QUOTE]
In fact a dedicated P-1er's contribution is optimal if he maximises the number of factors he finds [i]that would otherwise not be found[/i] (and minimises the number of factors he fails to find that would otherwise be found.) It's easy to see that P-1ers "should" choose lower bounds than LL testers doing preliminary P-1s, but quantifying how much lower is extraordinarily difficult. Despite the logic, it "feels" wrong to deliberately reduce the bounds in any way, so I don't do this. A dedicated P-1er with a reasonable amount of memory who uses prime95's default bounds calculation is making a contribution to GIMPS that is significantly greater than if he devoted his cores to LL testing. And that is good enough for me. |
Quote Mr. P-1: "You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor."
Ah! Like this:
[Wed Oct 19 21:45:54 2011] UID: kladner/pod64, M52315441 completed P-1, B1=610000, B2=15555000, E=6, We4: 498F4FED, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[Wed Oct 19 22:32:00 2011] UID: kladner/pod64, M52310233 completed P-1, B1=610000, B2=15555000, E=6, We4: 49964FA4, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
So given the discussion just previous, perhaps I don't need to allocate quite so much RAM; perhaps instead I could dedicate another worker to P-1 and spread the benefits further. |
It's been mentioned that if enough RAM is dedicated to P-1, the Brent-Suyama extension will be used at the cost of more time completing the assignment. Do we know what kind of time difference (savings) there would be if the Brent-Suyama extension were not used? It sounds like more P-1 might get done, but is that much time really saved?
|
[QUOTE=delta_t;275232]It's been mentioned that if enough RAM is dedicated to P-1, the Brent-Suyama extension will be used at the cost of more time completing the assignment. Do we know what kind of time difference (savings) there would be if the Brent-Suyama extension were not used? It sounds like more P-1 might get done, but is that much time really saved?[/QUOTE]
Much of the discussion has been that we don't have a clue as to the answers for these questions. Axn was asking if anybody had data, but so far not. |
[QUOTE=Dubslow;275235]Much of the discussion has been that we don't have a clue as to the answers for these questions. Axn was asking if anybody had data, but so far not.[/QUOTE]
I think the results I posted above came about with just shy of 1GB per worker doing P-1. I had 2048MB allocated with 2 workers. But I could be mistaken. These might have happened in the wee hours when 4096MB were available, and I wasn't watching. |
[QUOTE=Dubslow;275235]Much of the discussion has been that we don't have a clue as to the answers for these questions. Axn was asking if anybody had data, but so far not.[/QUOTE]
Yeah I saw that earlier. Unfortunately I don't have any numbers either. I'm thinking of trying a test of it though. |
[QUOTE=kladner;275238]I think the results I posted above came about with just shy of 1GB per worker doing P-1. I had 2048MB allocated with 2 workers. But I could be mistaken. These might have happened in the wee hours when 4096MB were available, and I wasn't watching.[/QUOTE]
I meant the data about the effectiveness of B-S finding factors. |
[QUOTE=Dubslow;275240]I meant the data about the effectiveness of B-S finding factors.[/QUOTE]
Oops. OK. |
S'all good. :omg:
Sorry, that's the problem with the interblags, a.k.a. wobitubes, is that it seems so harsh :down: :loco: :huh: :mad: :cmd: :ermm: :hello: :wink: :lol: :mellow: :chevy: :sick: :ernst: :big grin: :yawn: :nuke: :sleep: :evil: :popcorn: :rofl: :love: :flex: :toot: :squash: Weeeeeeeeeeeeeeeeeeeeeeeeeeeee!!!!!!!!!!! </shit> <thread> |
[QUOTE=KingKurly;275046]Some day, I'll catch up to the LL wavefront. Maybe. :wink:[/QUOTE]I'm also doing the same thing, and definite progress is being made. 6 months ago there were stragglers down to about 30M; we're now cleaned up to 47.4M. Of course, new assignments become available all the time as older first-time LLs with no P-1 come in, but we'll keep cleaning them up. I generally run my "near-current" P-1s with 2 tests saved, and re-do old ones (e.g. 10M range) with 3 tests saved.
|
[QUOTE=Brain;275058]256MB are not enough for current P-1 assignments. PrimeNet will assign other work then. Give 512MB per core a try. I'm happy with that.[/QUOTE][QUOTE=kladner;275081]It certainly seems that P-1 will gulp down as much as it is allowed.[/QUOTE]To do a proper P-1 on a [url=http://mersenne-aries.sili.net/prob.php?exponent=54000000&b1=595000&b2=14875000&factorbits=&K=1&C=-1&submitbutton=Calculate]54M range exponent[/url], the minimum RAM requirement is somewhere around 466MB (so 512MB is reasonable); a "generous" allocation would be about 1400MB; and Prime95 should be able to use up to around 11GB (if processing all stage2 relative primes in one pass).
|
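Those min/good/max figures fit a simple back-of-envelope model: a fixed overhead plus one FFT-length buffer of doubles per relative prime processed per pass. The FFT length and the fixed overhead below are my own guesses fitted to James's numbers, not Prime95's actual accounting:

```python
# Crude sketch of where James's 466/1,387/11,340MB figures could come from.
# Assumed model (not Prime95's real code): each stage 2 temporary is one
# FFT-length array of doubles, plus fixed overhead for the stage 2 machinery.
FFT_LEN  = 3072 * 1024           # plausible FFT length for a ~54M exponent
BUF_MB   = FFT_LEN * 8 / 2**20   # one buffer of 8-byte doubles, in MB (24.0)
FIXED_MB = 282                   # fitted so the totals line up

def stage2_mem_mb(rel_primes):
    """Estimated RAM (MB) to process `rel_primes` relative primes per pass."""
    return FIXED_MB + rel_primes * BUF_MB

for r in (8, 48, 480):
    print(r, round(stage2_mem_mb(r)))   # -> 474, 1434, 11802
```

That lands within about 5% of the quoted 466/1,387/11,340MB, which suggests the marginal cost of each extra relative prime is roughly one FFT-sized buffer.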
[QUOTE=James Heinrich;275359]To do a proper P-1 on a [url=http://mersenne-aries.sili.net/prob.php?exponent=54000000&b1=595000&b2=14875000&factorbits=&K=1&C=-1&submitbutton=Calculate]54M range exponent[/url], the minimum RAM requirement is somewhere around 466MB (so 512MB is reasonable); a "generous" allocation would be about 1400MB; and Prime95 should be able to use up to around 11GB (if processing all stage2 relative primes in one pass).[/QUOTE]
[QUOTE=http://mersenne-aries.sili.net/prob.php?exponent=54000000&b1=595000&b2=14875000&factorbits=&K=1&C=-1&submitbutton=Calculate]Recommended RAM allocation: min=466MB; good=1,387MB; max=11,340MB; insane=55,577MB;[/QUOTE] That's the minimum recommended, not the minimum needed to do any stage 2 at all, right? Given that half of all exponents never get any stage 2 at all, a machine with 300MB available (the minimum needed to get a P-1 assignment) will on average do a better P-1 than if this task were left to the LL-testing machine. |
[QUOTE=Mr. P-1;275364]That's the minimum recommended, not the minimum needed to do any stage 2 at all, right?[/QUOTE]Min/good/max/insane memory values are estimated amounts required to process 8/48/480/2400 relative primes respectively. I'm not sure what Prime95 would do with 300MB for a 54M P-1 assignment. Probably worth a quick experiment if you're curious. Just starting stage1 shows this:
[quote]Optimal P-1 factoring of M54952927 using up to [b]300MB[/b] of memory.
Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
Optimal bounds are [b]B1=535000, B2=7222500[/b]
Chance of finding a factor is an estimated [b]4.19%[/b]
[i]effort = [b]2.76GHz-days[/b][/i][/quote][quote]Optimal P-1 factoring of M54952927 using up to [b]10000MB[/b] of memory.
Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
Optimal bounds are [b]B1=580000, B2=13340000[/b]
Chance of finding a factor is an estimated [b]4.74%[/b]
[i]effort = [b]3.92GHz-days[/b][/i][/quote]How it plays out in Stage 2 I'm not sure. |
[QUOTE=Mr. P-1;275364]That's the minimum recommended, not the minimum needed to do any stage 2 at all, right?
Given that half of all exponents never get any stage 2 at all, a machine with 300MB available (the minimum needed to get a P-1 assignment) will on average do a better P-1 than if this task were left to the LL-testing machine.[/QUOTE] Processing only 8 relative primes is not very many, so I wouldn't expect 300MB to do a very good Stage 2, if one at all. OTOH, I can't say what the average LL machine is like, though guessing at what something like curtisc might have, I don't think the average college workstation has too much memory, so you're probably right. |
If the machine had insufficient memory to do any stage2 at all, it would (using the M54952927 example from above) start with bounds where B1=B2, scaled to a lower overall effort than if stage2 were being done, maintaining the balance of no-stage2 => lower factor probability => worth spending less effort:[quote]Optimal P-1 factoring of M54952927 using up to [b]100MB[/b] of memory.
Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
Optimal bounds are [b]B1=735000, B2=735000[/b]
Chance of finding a factor is an estimated [b]2.42%[/b]
[i]effort = [b]2.26GHz-days[/b][/i][/quote] |
[QUOTE=James Heinrich;275395]If the machine had insufficient memory to do any stage2 at all, it would (using the M54952927 example from above) start with bounds where B1=B2, scaled to a lower overall effort than if stage2 were being done to maintain the balance of no-stage2 => lower factor probability => worth spending less effort:[/QUOTE]
Here's what I get for an exponent TFed to 68 at various available memory levels:
[quote][Worker #1 Oct 23 15:05] Optimal P-1 factoring of M46221811 using up to 1050MB of memory.
[Worker #1 Oct 23 15:05] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:05] Optimal bounds are B1=555000, B2=14013750
[Worker #1 Oct 23 15:05] Chance of finding a factor is an estimated 6.34%
[Worker #1 Oct 23 15:08] Optimal P-1 factoring of M46221811 using up to 800MB of memory.
[Worker #1 Oct 23 15:08] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:08] Optimal bounds are B1=555000, B2=13597500
[Worker #1 Oct 23 15:08] Chance of finding a factor is an estimated 6.3%
[Worker #1 Oct 23 15:09] Optimal P-1 factoring of M46221811 using up to 500MB of memory.
[Worker #1 Oct 23 15:09] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:09] Optimal bounds are B1=545000, B2=12398750
[Worker #1 Oct 23 15:09] Chance of finding a factor is an estimated 6.18%
[Worker #1 Oct 23 15:11] Optimal P-1 factoring of M46221811 using up to 300MB of memory.
[Worker #1 Oct 23 15:11] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:11] Optimal bounds are B1=530000, B2=10070000
[Worker #1 Oct 23 15:11] Chance of finding a factor is an estimated 5.93%
[Worker #1 Oct 23 15:11] Optimal P-1 factoring of M46221811 using up to 200MB of memory.
[Worker #1 Oct 23 15:11] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:11] Optimal bounds are B1=510000, B2=6885000
[Worker #1 Oct 23 15:11] Chance of finding a factor is an estimated 5.5%
[Worker #1 Oct 23 15:12] Using Core2 type-3 FFT length 2560K, Pass1=640, Pass2=4K
[Worker #1 Oct 23 15:12] M46221811 stage 1 is 94.09% complete.
[Worker #1 Oct 23 15:12] Optimal P-1 factoring of M46221811 using up to 150MB of memory.
[Worker #1 Oct 23 15:12] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:12] Optimal bounds are B1=500000, B2=4750000
[Worker #1 Oct 23 15:12] Chance of finding a factor is an estimated 5.11%
[Worker #1 Oct 23 15:13] Optimal P-1 factoring of M46221811 using up to 100MB of memory.
[Worker #1 Oct 23 15:13] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:13] Optimal bounds are B1=460000, B2=2185000
[Worker #1 Oct 23 15:13] Chance of finding a factor is an estimated 4.31%
[Worker #1 Oct 23 15:18] Optimal P-1 factoring of M46221811 using up to 90MB of memory.
[Worker #1 Oct 23 15:18] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Worker #1 Oct 23 15:18] Optimal bounds are B1=795000, B2=795000
[Worker #1 Oct 23 15:18] Chance of finding a factor is an estimated 3.38%[/quote]
In fact stage 2 is possible on this exponent with a minimum of 92MB. About a year or so ago, I tried the experiment of seeing how many relative primes were processed each pass of stage 2 on minimal memory settings. The answer was 2 out of 8 total. The total number of passes, therefore, is 4, not the 24 it would take if there were 48 relative primes in total, or the horrendous 240 to do 480 relative primes. This experiment was done on a previous version of mprime. I'll try to catch this exponent just before the end of its stage 1, take a copy of the save file, then complete stage 1 with 92M available in order to repeat the experiment. |
[QUOTE=Dubslow;275380]Processing only 8 relative primes is not very many, so I wouldn't expect 300MB to do very good Stage 2, if at all.[/QUOTE]
As you can see from my previous post, a 300MB P-1 on a 46M exponent isn't too bad, and it wouldn't be that much worse on a 55M exponent. The Stage 2 code has a variety of plans to choose from, and some of these are optimised for memory-restricted scenarios. More is better, but less isn't necessarily terrible. [QUOTE]OTOH, I can't say what the average LL machine is like, though guessing at something like curtisc might have, I don't think the average college workstation has too much memory, so you're probably right.[/QUOTE] It's not the amount of memory the workstation has that matters, but the amount that the client is configured to use. I would imagine that for most institutions which allow the client to be installed, minimizing the impact upon performance is a priority. We don't have to guess, however. We can [url=http://mersenne.org/report_factoring_effort/?exp_lo=49000000&exp_hi=50000000&bits_lo=0&bits_hi=999&B1=Get+Data]see for ourselves[/url]. |
Thanks for those numbers Mr. P-1. I would say that even 200MB would give you a decent P-1. After 300MB the marginal gains really do shrink.
|
This is interesting data, it would be even more interesting if we could see the 'effort=' lines that James posted for the different memory settings that Mr P-1 posted.
[code]  100M: 2.42% from 2.26 GHz-days = one factor per 93.39 GHz-days
  300M: 4.19% from 2.76 GHz-days = one factor per 65.87 GHz-days
10000M: 4.74% from 3.92 GHz-days = one factor per 82.70 GHz-days[/code] so it would be interesting to see what the percentage and effort marks were for (say) 200M, 400M, 600M. Is it possible to do ECM on these large exponents? |
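The table above is just effort divided by probability; for anyone who wants to extend it to other memory settings, the arithmetic is:

```python
# Reproducing the table: GHz-days of effort per expected factor found
# is simply effort / probability, using James's figures from earlier.
data = {          # memory setting -> (chance of factor, effort in GHz-days)
    "100M":   (0.0242, 2.26),
    "300M":   (0.0419, 2.76),
    "10000M": (0.0474, 3.92),
}
for mem, (prob, effort) in data.items():
    print(mem, round(effort / prob, 2))   # -> 93.39, 65.87, 82.7
```

Plug in the "Chance of finding a factor" and "effort" lines for 200M/400M/600M as they become available and the same ratio falls out.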
[QUOTE=Mr. P-1;275364]Given that half of all exponents never get any stage 2 at all...[/QUOTE]
That statement [url=http://mersenne.org/report_factoring_effort/?exp_lo=49000000&exp_hi=50000000&bits_lo=0&bits_hi=999&B1=Get+Data]used to be true[/url]. Now that the LL wavefront has reached the point where P-1 assignments first started, I'm not sure it [url=http://mersenne.org/report_factoring_effort/?exp_lo=51000000&exp_hi=52000000&bits_lo=0&bits_hi=999&B1=Get+Data]still is[/url]. What certainly is still true is that, of those assignments going to LL machines without having been pre-P-1ed, no more than about half are getting any stage two. |
[QUOTE=fivemack;275417]This is interesting data, it would be even more interesting if we could see the 'effort=' lines that James posted for the different memory settings that Mr P-1 posted.
[code]  100M: 2.42% from 2.26 GHz-days = one factor per 93.39 GHz-days
  300M: 4.19% from 2.76 GHz-days = one factor per 65.87 GHz-days
10000M: 4.74% from 3.92 GHz-days = one factor per 82.70 GHz-days[/code][/QUOTE] The relevant metric here, surely, is not "factors per GHz-Days" but "expected GHz-Days saved by running this P-1". [QUOTE]Is it possible to do ECM on these large exponents?[/QUOTE] I don't see why not. My understanding is that the reason we don't do ECM (or P+1) isn't because we can't, but because it's not cost efficient. |
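That "expected GHz-days saved" metric can be sketched as: probability of a factor times the LL cost avoided, minus the P-1 effort. The 35 GHz-days-per-LL figure below is purely a placeholder of mine, not a real credit value; substitute the actual GHz-days credit for the exponent in question:

```python
# "Expected GHz-days saved" sketched as a formula. ll_cost is a made-up
# placeholder here -- plug in the real GHz-days credit for an LL test.
def expected_saving(prob, p1_effort, ll_cost, tests_saved=2):
    """Expected GHz-days saved by running this P-1 before LL testing."""
    return prob * tests_saved * ll_cost - p1_effort

# With James's 300MB and 10000MB numbers and an illustrative 35 GHz-days/LL:
print(round(expected_saving(0.0419, 2.76, 35), 3))   # 300MB case
print(round(expected_saving(0.0474, 3.92, 35), 3))   # 10000MB case
```

Note that which memory setting wins (and even whether the deeper P-1 is net-positive) flips depending on the assumed LL cost and tests saved, which is exactly why this metric is hard to pin down without hard data.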
[QUOTE=garo;275416]Thanks for those numbers Mr. P-1. I would say that even 200MB would give you a decent P-1. After 300MB the marginal gains really do shrink.[/QUOTE]
The percentage success rate is not the whole story. You also need to take into account the running time. For example, I would guess that B1=460000, B2=2185000 is cheaper to run, even with just 100MB, than B1=B2=795000. |
[QUOTE=Mr. P-1;275411]This experiment was done on a previous version of mprime. I'll try to catch this exponent just before the end of its stage 1, take a copy of the save file, then complete stage 1 with 92M available in order to repeat the experiment.[/QUOTE]
This is interesting: with as little as 92MB available, mprime chooses bounds with B2 > B1. However, when it comes to actually doing stage 2, it declares "other threads are using lots of memory now" and moves on to the next assignment. I can't get it to start stage 2 with any less than 112MB. With 112MB, it uses 92MB to process 1 relative prime out of 48. However with 113MB available, it uses 112MB to process 2 relative primes out of 48. It looks to me as though there are two, possibly three, separate bugs here.

Bug 1: Presumably the intention is that stage 2 will only be run when there is sufficient memory to process 2 relative primes; however, there appears to be an [url=http://en.wikipedia.org/wiki/Off-by-one_error]off-by-one error[/url] in handling the case where the available memory is exactly enough to process 2 relative primes. It starts stage 2, but then only processes 1 relative prime.

Bug 2: When calculating optimal bounds and deciding whether or not it can do stage 2 at all, it assumes it can if it has enough memory to process only one relative prime, not two. This is a significant bug. Anyone allowing exactly 100MB will, for exponents of this size, accumulate unfinished P-1 save files without ever completing them.

Possible Bug 3: In earlier versions, I'm sure I recall it choosing plans with just 8 relative primes in total. Shouldn't it have chosen such a plan here?

There are two other minor "output" bugs. When re-starting stage 2, it reports that instead of the optimal bounds it calculates, it is "using B1=560000 from the save file". 560000 is the B1 bound it computed based upon the generous memory allocation at the very start of its stage 1 calculation. However stage 1 was finished with a much lower memory allocation, and consequently a much lower optimal B1, but it never told me during stage 1 that it was using B1 from the save file. Finally, the message "other threads are using lots of memory now" is confusing when you have no other threads running.

Linux, Prime95, v26.5, build 5. I will PM George to draw his attention to this post. |
[QUOTE=Mr. P-1;275441]Bug 1: Presumably the intention is that stage 2 will only be run when there is sufficient memory to process 2 relative primes, however there appears to be an [URL="http://en.wikipedia.org/wiki/Off-by-one_error"]off by one error[/URL] in handling the case where the available memory is exactly enough to process 2 relative primes. It starts stage 2, but then only processes 1 relative prime.
[/QUOTE] I thought P-1 was only supposed to use 90% of available memory, which would kinda explain why at 112MB available it uses 92M, but doesn't explain the use of 112MB at 113MB available... |
[QUOTE=bcp19;275444]I thought P-1 was only supposed to use 90% of available memory[/QUOTE]Prime95 will only let you allocate up to 90% of your system memory, but it will use 100% of what you allocate to it where possible (it might not be possible to use every last MB since it's allocated in discrete chunks depending how many relative primes are being processed).
|
[QUOTE=bcp19;275444]I thought P-1 was only supposed to use 90% of available memory, which would kinda explain why at 112MB available it uses 92M, but doesn't explain the use of 112MB at 113MB available...[/QUOTE]
Earlier versions of the client would never use all the memory you allowed it, but I haven't seen this behaviour for a long time now. Even if it did, it shouldn't decide to do stage 2 with a certain amount of memory available, then be unable to do it with that amount of memory. |
[QUOTE=James Heinrich;275446]Prime95 will only let you allocate up to 90% of your system memory, but it will use 100% of what you allocate to it where possible (it might not be possible to use every last MB since it's allocated in discrete chunks depending how many relative primes are being processed).[/QUOTE]
You can put any amount of memory, even > 100%, into local.txt and have it "use" that when calculating P-1 bounds. Of course, you must ensure that it never actually goes to stage 2 in this condition. I do this on my secondary machine, which only has 256MB of physical memory. I use LowMemWhileRunning=prime95 to stop it from going to stage 2. Every once in a while I shift the accumulated stage-2-ready assignments to my main box. |
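The trick described above amounts to two settings in Prime95's configuration files. This is a sketch only: the memory value is illustrative, and which file each setting lives in can vary by client version, so check the undoc.txt shipped with your build.

```
Memory=1024                  ; MB the client believes it may use when choosing P-1 bounds
LowMemWhileRunning=prime95   ; client sees "prime95" always running, so stage 2 never starts
```

With this in place, bounds are chosen as if plenty of RAM were available, but stage 2 is permanently deferred, leaving stage-2-ready save files to move to a bigger machine.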
Any progress on P-1 on GPUs?
P-1 loves RAM - most top-end cards have >=1GB. P-1 loves fast RAM - a somewhat modest GTX560Ti manages 52.61GB/s (which beats the definitely-not-modest SNB-E quad-channel theoretical max of 51.2GB/s). Anything on the drawing board? -- Craig |
Is there any way to get a more accurate credit for the P-1s? I am asking because I only do one P-1 at a time and always have a handful queued up to start. For some reason, even though each P-1 only takes a bit over a day to compute, the credit given to me is several days' worth, as if it's crediting the time the P-1 tests are sitting there unprocessed. It feels like an unforeseen cheat that the user isn't committing on purpose.
|
The credit given is based on the difficulty of the assignment alone, it doesn't matter how long you've had the assignment. [url=http://www.mersennewiki.org/index.php/GHz-days]GHz-days[/url] is what credit is given in, and that's how many days it would take to complete the assignment on a 1.0GHz single-core Intel Core² CPU. Your CPU is likely more powerful and can complete a (for example) 4GHz-days assignment in one day.
|
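The GHz-days arithmetic above can be sketched in one line. This assumes completion time scales linearly with clock speed, which is a simplification: real throughput also depends on architecture, the FFT code, and memory bandwidth.

```python
def days_to_complete(assignment_ghz_days, core_clock_ghz):
    """Rough single-core completion time, assuming throughput scales
    linearly with clock speed (a simplification: real speed also
    depends on architecture, FFT code, and memory bandwidth)."""
    return assignment_ghz_days / core_clock_ghz

# A 4 GHz-day assignment on one core of a hypothetical 1.6GHz CPU:
print(days_to_complete(4.0, 1.6))  # 2.5 days
```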
[QUOTE=James Heinrich;275537]The credit given is based on the difficulty of the assignment alone, it doesn't matter how long you've had the assignment. [url=http://www.mersennewiki.org/index.php/GHz-days]GHz-days[/url] is what credit is given in, and that's how many days it would take to complete the assignment on a 1.0GHz single-core Intel Core² CPU. Your CPU is likely more powerful and can complete a (for example) 4GHz-days assignment in one day.[/QUOTE]Thank you for explaining it. Just FYI personally, I have an i7 Q720 1.6 GHz quad core.
|
[QUOTE=James Heinrich;275537]The credit given is based on the difficulty of the assignment alone, it doesn't matter how long you've had the assignment. [url=http://www.mersennewiki.org/index.php/GHz-days]GHz-days[/url] is what credit is given in, and that's how many days it would take to complete the assignment on a 1.0GHz single-core Intel Core² CPU. Your CPU is likely more powerful and can complete a (for example) 4GHz-days assignment in one day.[/QUOTE]
James: Mr P-1 seems to have found a problem where he gets a different amount of credit depending on manual or automatic submission. E |
[QUOTE=Christenson;275591]Mr P-1 seems to have found a problem where he gets a different amount of credit depending on manual or automatic submission.[/QUOTE]Yes, there are apparently some issues there, and I'll try and have a look as to why that's happening. But that's a bug in the manual results submission code somewhere, and it'll be several weeks before I can try and track that down.
|
Didn't P95 say that was due to a lack of FFT size information? Or is this something else I missed?
|
CPU cycles available for P-1
To whom it may concern (Mr. P-1,etc.):
I am currently running 3 1090T cores on P-1. 2 cores are feeding mfaktc on a GTX 460. (But that belongs in another thread.) 1 core is cleaning up the LL assignments which I moved from the other cores. In Win7-64 I have 8GB RAM, so I can handle giving fully adequate amounts to 4 cores once the LL's finish. I have been taking whatever PrimeNet gives me in the P-1 department, without changing the depth. I would gladly accept assignments from other sources, if it would be helpful. |
The credit in fact should be based not only on how difficult an assignment is, but also on how "needed" it is. I need this more than that, so I would pay more money for it, even if its value is the same, or lower. This is the key of any "marketing" system. So George, or the other guys at the helm of the boat, could give any credit they like: if they consider some work more necessary than other work, that would stimulate people to go for those "most needed" assignments. I started to do P-1 since this thread started**, because I was reading some arguments written by guys who seem to know what they are talking about, in this very thread. And I feel OK doing it; despite the fact that I never did it for credit, I still feel "important", as I am doing some "useful" work :smile:
**edit: correction: This thread started long ago, so in fact I have been doing P-1 since [B]I read[/B] this thread, in June, when I joined the forum and read all the interesting topics from the beginning. The argument that P-1 could be more needed than other work types still sounds reasonable to me now, especially after GPUs took over trial factoring and cudaLL tests. |
[QUOTE=Dubslow;275596]Didn't P95 say that was due to a lack of FFT size information? Or is this something else I missed?[/QUOTE]
Indeed, P95 did say this was due to the server having to guess the FFT size. I put this in the same class as getting odd amounts of credit for mfaktc factors found... not a very big problem. Indeed, my CPUs are doing majority P-1, and P-1 is making some headway thanks to your help, guys. At the moment, credit seems to be effort-based, with a big correction due because TF is so much faster on GPUs. The advantage of an effort-based system is that it is relatively easy to figure out the minimum effort needed to get the maximum effect, that is, to prove that the most Mersenne exponents are not in fact associated with Mersenne primes in the smallest amount of time.
[QUOTE=kladner;275614]To whom it may concern (Mr. P-1,etc.):
I am currently running 3 1090T cores on P-1. 2 cores are feeding mfaktc on a GTX 460. (But that belongs in another thread.) 1 core is cleaning up the LL assignments which I moved from the other cores. In Win7-64 I have 8GB RAM, so I can handle giving fully adequate amounts to 4 cores once the LL's finish. I have been taking whatever PrimeNet gives me in the P-1 department, without changing the depth. I would gladly accept assignments from other sources, if it would be helpful.[/QUOTE] At this point, PM either ckdo (27M TF assignments) or Mr P-1 (48-50M TF assignments) to get the best assignments. Mr P-1, I *assume* (with all the risks thereunto) that what the server is handing out for P-1 assignments is good, though I did note a 60M P-1 assignment in one of my queues this weekend. Or should I be running some of those non P-1'ed expired assignments you've been looking at? |
[QUOTE=LaurV;275616]The credit in fact should be based not only on how difficult an assignment is, but also on how "needed" it is.[/QUOTE]
Yes! I was thinking something similar: a weighted GHz-days (WGD) metric. Rankings would then be based on WGD, and the project people could adjust the weights to steer people's workloads. You'd have to keep the weights similar, as you don't want _all_ the nodes doing the highest-ranked work type. Weighting examples: TF: 0.9, LL: 1.0, LLD: 1.2, P-1: 1.5. Yes, I understand it may not be feasible due to the extra work it would create for the server. -- Craig |
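For what it's worth, the proposal boils down to a one-line multiplier. The weights below are Craig's illustrative numbers from the post, not anything PrimeNet actually uses.

```python
# One-line sketch of the weighted-GHz-days (WGD) idea. The weights are
# Craig's illustrative numbers, not anything PrimeNet actually uses.
WEIGHTS = {"TF": 0.9, "LL": 1.0, "LLD": 1.2, "P-1": 1.5}

def weighted_credit(worktype, ghz_days):
    return WEIGHTS[worktype] * ghz_days

print(weighted_credit("P-1", 10.0))   # 10 GHz-days of P-1 -> 15.0 WGD
print(weighted_credit("TF", 10.0))    # 10 GHz-days of TF  ->  9.0 WGD
```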
[QUOTE=Christenson;275630]At this point, PM either ckdo (27M TF assignments) or Mr P-1 (48-50M TF assignments) to get the best assignments. Mr P-1, I *assume* (with all the risks thereunto) that what the server is handing out for P-1 assignments is good, though I did note a 60M P-1 assignment in one of my queues this weekend. Or should I be running some of those non P-1'ed expired assignments you've been looking at?[/QUOTE]
Thanks, Christenson. |
[QUOTE=Christenson;275630]At this point, PM either ckdo (27M TF assignments) or Mr P-1 (48-50M TF assignments) to get the best assignments.[/QUOTE]
My assignments range from 45M-52M, with a very small number higher, but still within the Test wavefront. Also, with all due respect to ckdo, I question whether the assignments he supplies are "the best", whether your criterion is value to the project or factors found. The value of a TF bit level on a DC exponent is the same as that four bits higher on an LL exponent twice the size. Most of my assignments are from 69-70; unless ckdo's are 65-66 or lower, his are not as valuable as mine. Also, many of mine have not had P-1 done, so there are more factors to find.

[QUOTE]Mr P-1, I *assume* (with all the risks thereunto) that what the server is handing out for P-1 assignments is good, though I did note a 60M P-1 assignment in one of my queues this weekend. Or should I be running some of those non P-1'ed expired assignments you've been looking at?[/QUOTE]

My own assumptions are that there is enough GPU capacity working on Primenet-assigned TFs to keep that wavefront ahead of the LLs, even if some are diverted to TFing at the LL wavefront, but there is insufficient CPU capacity to keep the P-1 wavefront ahead of the LLs. If those assumptions are correct, then diverting P-1 effort to the LL test range would be like digging holes further up the road in order to fill them in where you are now.

Despite this, I still think there is some benefit to doing P-1s at the LL test range. First, your results are put to use immediately rather than months or years down the line. Second, the P-1 bounds calculated are more likely to be based upon the actual amount of TF that will ever be done on the exponent, and thus more likely to be optimal. Third, we cannot tell the future: perhaps P-1 capacity will increase, so that by the time we reach the holes further up the road, they will have been filled in.

I currently have a small number of P-1-ripe exponents in the 45M range and above. As more and more TF workers report back, I will soon have a large number of them.
There will be very little additional work involved in farming these out to whoever wants them. |
[QUOTE=nucleon;275657]Yes! I was thinking something similar. A weighted Ghz-days (WGD) metric.
So the rankings would be based on a WGD. Then the project people can adjust metrics and people change the workload. You'd have to keep them similar as you don't want _all_ the nodes to do the highest rank. Weighting examples: TF: 0.9 LL:1.0 LLD:1.2 P-1: 1.5 Yes I understand it may not be feasible due to the extra work it would create for the server. -- Craig[/QUOTE] This is the daftest post I've ever read. It is tricky enough to come up with a sensible measure of "work done" without a subjective "value-added" component. This is what is important: GPUs TF as far as possible before first-time LLs are assigned to CPUs. The CPU does P-1 if necessary, and completes its test. DCs will look after themselves. End of story.

David |
[QUOTE=davieddy;275687]This is the daftest post I've ever read.[/QUOTE]Congrats [i]nucleon[/i], that's quite the achievement! (You even beat out [url=http://www.mersenneforum.org/showpost.php?p=275247&postcount=711]post #711[/url] on the previous page of this thread :smile:)
Back to topic, I understand where Craig was going with his idea, but it's probably best to achieve the desired effect with more tweaking to the "Whatever makes sense" PrimeNet worktype. If L-Ls were only handed out to machines that [i]can't[/i] reasonably run P-1 (due to lack of RAM) and anyone cable of running it was assigned P-1, then the assignment landscape would change very quickly. |
[QUOTE=James Heinrich;275688]Congrats [i]nucleon[/i], that's quite the achievement! (You even beat out [url=http://www.mersenneforum.org/showpost.php?p=275247&postcount=711]post #711[/url] on the previous page of this thread :smile:)
[/QUOTE] Ehh... I swear that's not a usual occurrence. Usually. :blush: I did have fun making that post though, for what it's worth. |
Cable physicist
[QUOTE=James Heinrich;275688]If L-Ls were only handed out to machines that [I]can't[/I] reasonably run P-1 (due to lack of RAM) and anyone cable of running it was assigned P-1, then the assignment landscape would change very quickly.[/QUOTE]
I allowed 400MB of my Gig of RAM for stage two, and noted here the effect it had on my browsing. But it only took a day or so, so I don't really see what the fuss is about. David |
[QUOTE=Mr. P-1;275423]The relevant metric here, surely, is not "factors per GHz-Days" but "expected GHz-Days saved by running this P-1".[/QUOTE]
But optimising one is the same as optimising the other: you save GHz-days only by finding a factor, and the number of GHz-days you save (for the two LL tests) is not a function of the effort required by the P-1. (A 54M exponent takes about 180 GHz-days for two LL tests, so P-1 testing that costs less than 180 GHz-days per expected factor is worth doing; but parameters that cost 65 GHz-days per expected factor should definitely be used ahead of parameters that cost 90 GHz-days per expected factor.) |
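fivemack's threshold can be written out as a small sketch. The 180 GHz-days figure is his estimate for two LL tests on a ~54M exponent; the example cost and probability values below are made up for illustration.

```python
# Sketch of fivemack's "worth doing" threshold. LL_SAVINGS is his estimate
# for two LL tests on a ~54M exponent; the example values are made up.
LL_SAVINGS = 180.0

def ghz_days_per_expected_factor(p1_cost_ghz_days, factor_probability):
    return p1_cost_ghz_days / factor_probability

def worth_running(p1_cost_ghz_days, factor_probability):
    return ghz_days_per_expected_factor(p1_cost_ghz_days, factor_probability) < LL_SAVINGS

print(worth_running(3.0, 0.04))   # 75 GHz-days per expected factor < 180 -> True
print(worth_running(20.0, 0.05))  # 400 GHz-days per expected factor > 180 -> False
```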
[QUOTE=James Heinrich;275688]Congrats [I]nucleon[/I], that's quite the achievement! (You even beat out [URL="http://www.mersenneforum.org/showpost.php?p=275247&postcount=711"]post #711[/URL] on the previous page of this thread :smile:)
Back to topic, I understand where Craig was going with his idea, but it's probably best to achieve the desired effect with more tweaking to the "Whatever makes sense" PrimeNet worktype. If L-Ls were only handed out to machines that [I]can't[/I] reasonably run P-1 (due to lack of RAM) and anyone cable of running it was assigned P-1, then the assignment landscape would change very quickly.[/QUOTE] I'll throw one at you: I have a 4-core Phenom II X4, and I just brought it online for GIMPS. It proceeded to start 4 LL tests BEFORE I could configure it to do P-1 and tell it that it could use a gigabyte or two of RAM for P-1. So I got 4 preliminary P-1s, 4 LL tests got started, and since I didn't feel like fixing it up, they will complete before it starts turning in a bit more P-1. Is there a way to detect multi-gigabyte machines at installation/startup and ask whether the required 300M of RAM can be used for P-1 testing, before the LL tests get started? |
I think I have the components to put together for an Opteron dual-core, 2.4GHz, with 3GB of RAM. No real GPU ability, I think. It's an ATI All in Wonder 9800 Pro, and the motherboard is only AGP. But this would work pretty well as a P-1 machine? Something else?
|
[QUOTE=fivemack;275740]But optimising one is the same as optimising the other; you save GHz-days only by finding a factor, and the number of GHz-days you save (for the two LL tests) is not a function of the effort required by the P-1.
(a 54M takes about 180 GHz-days for two LL tests, so P-1 testing that costs less than 180 GHz-days per expected factor is worth doing; but parameters that cost 65 GHz-days per expected factor should definitely be used ahead of parameters that cost 90 GHz-days per expected factor)[/QUOTE] Not quite. Consider the hypothetical case below:
[CODE]LL     P-1    Prob   Effort  Effort  Factors/
Cost   Cost   Succ   saved   spent   Unit time
----------------------------------------------
100    1.0    0.10    9.0    91.0    0.100
100    1.5    0.12   10.5    89.5    0.080
100    2.0    0.13   11.0    89.0    0.065
[/CODE]
Highest factors/unit time is option 1. Most profitable for the project is option 3, and this is the basis of the calculation today. However, in the context of this thread, this is not the correct calculation. It is not a local (i.e. per-exponent) optimization problem, but a global optimization problem: given the amount of compute power we have for dedicated P-1, what is the greatest number of factors we can find while staying ahead of the LL wave? Naturally, this is a dynamic point which will change based on the ratio of compute power available for LL vs compute power available for P-1. So it might turn out that option 2 is the right one in the given context. Anyway, my gut feeling is that the B-S extension can safely be ditched in this context. |
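The table's derived columns can be recomputed from its first three inputs, which makes the relationships explicit. This is purely a restatement of axn's hypothetical numbers, not real Prime95 cost data.

```python
# Recomputing axn's hypothetical table from its three inputs.
# "Effort saved" = prob * LL_cost - P-1_cost (expected LL work avoided,
# net of the P-1 itself); "Effort spent" = LL_cost - saved (net expected
# cost per exponent); "Factors/unit time" = prob / P-1_cost.
LL_COST = 100.0
options = [(1.0, 0.10), (1.5, 0.12), (2.0, 0.13)]  # (P-1 cost, success prob)

rows = []
for p1_cost, prob in options:
    saved = prob * LL_COST - p1_cost
    net = LL_COST - saved
    factors_per_unit = prob / p1_cost
    rows.append((saved, net, factors_per_unit))
    print(f"cost={p1_cost:4.1f}  saved={saved:5.1f}  net={net:5.1f}  f/ut={factors_per_unit:.3f}")
```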
[QUOTE=axn;275770]However, in the context of this thread, this is not the correct calculation.[/QUOTE]
To make your point more explicitly, it is the correct calculation of a client which is doing a P-1 prior to LLing the same exponent. It is not the correct calculation for a client which takes P-1 assignments specifically. [QUOTE]It is not a local (i.e. per exponent) optimization problem, but a global optimization problem.[/QUOTE] Yes. This is a point I have made repeatedly over the months. [QUOTE]i.e Given the amount of compute power we have for dedicated P-1, what is the most number of factors we can find, while staying ahead of the LL wave.[/QUOTE] No, it is not the number of factors which we find, which needs to be maximised, but the GHz-Days saved on other machines, by us doing the P-1 calculation, per GHz day spent by us. This depends, not just upon the P-1 we do, but upon the P-1 those other machines would do if we don't. So using your hypothetical figures, in the same time, we could do six of option 1, four of option 2, or three of option 3. Suppose the average LLing machine does an option 1 P-1 on every non-P-1ed exponent it gets. If we did 6 option 1s, we would save just the P-1s on the LLing machines, for a ratio of work saved to work done by us of 1:1. Or we could do 4 option 2s. There is a 10% chance that we find a factor that the LLing machines Option 1 would also find, saving only that LLing machines Option 1 P-1. There is also a 2% chance of finding a factor that the Option 1 machine would not have found, saving an Option 1 + LL cost. There is a 88% chance of finding no factor, again, only saving the LL machines' option 1 P-1. combining these probabilities, we will certainly save the LL machines' option 1 P-1 and we have a 2% chance of saving the LL cost [i]which would not otherwise have been saved[/i]. The total saving is 4 * (1 + 0.02 * 100) = 12 the cost to us is 6 giving us a ratio of 2:1 Or we could do 3 option 3s. Using similar reasoning the total saving is 3 * (1 + 0.03 * 100) = 12, also giving us a ratio of 2:1. 
In this example, options 2 and 3 are equally good, but if option 1 were slightly more expensive, that would tip the balance in favour of us doing option 2, while a cheaper option 1 would tip the balance the other way.

[QUOTE]Naturally, this is a dynamic point which will change based on the ratio of compute power available for LL vs compute power available for P-1.[/QUOTE]

That's another consideration. The above analysis assumes that we really can do 4 option 2s on exponents to be LL tested in the near future. If in fact there are only three such available, and our fourth test just pushes the P-1 wavefront further and further ahead of the LLs, then option 3 would be more desirable. |
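The worked example above can be checked mechanically. The option costs and probabilities are the hypothetical ones from axn's table; `extra_factor_prob` is the chance of finding a factor that the LLing machine's default option 1 run would have missed.

```python
# Checking the worked example above. A dedicated P-1er's run always saves
# the LLing machine its default option-1 P-1; with some extra probability
# it also saves an LL test that option 1 would not have saved.
LL_COST = 100.0
OPT1_COST = 1.0   # cost of the P-1 the LLing machine would do anyway

def savings_ratio(n_tests, cost_per_test, extra_factor_prob):
    saved = n_tests * (OPT1_COST + extra_factor_prob * LL_COST)
    spent = n_tests * cost_per_test
    return saved / spent

print(savings_ratio(4, 1.5, 0.02))  # option 2: 12/6 = 2.0
print(savings_ratio(3, 2.0, 0.03))  # option 3: 12/6 = 2.0
```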
[QUOTE=kladner;275761]I think I have the components to put together for an Opteron dual-core, 2.4GHz, with 3GB of RAM. No real GPU ability, I think. It's an ATI All in Wonder 9800 Pro, and the motherboard is only AGP. But this would work pretty well as a P-1 machine? Something else?[/QUOTE]
P-1 with 2 cores and 2 Gigs of RAM?? Yep....just get it an Xubuntu disk, so the OS doesn't eat it alive. |
[QUOTE=Christenson;275909]P-1 with 2 cores and 2 Gigs of RAM?? Yep....just get it an Xubuntu disk, so the OS doesn't eat it alive.[/QUOTE]
(Aside from the fact that there are 3 Gigs of RAM.) Yes. I was thinking that I would do a Linux setup if I get the hardware together. The alternative would be XP-32, and I have daily examples of how slowly things run there, compared with 64bit. I switch back and forth between XP-32 and Win7-64 every day and the difference, especially in mfaktc is glaring. I haven't made the same comparisons in Prime95, but I suspect that the situation is similar. This just involves deconstructing an original Athlon box and putting the Opteron motherboard and PSU into the case. It will only have IDE drives, but I think that is a minor issue as none of the Prime apps are HDD constrained. It won't matter otherwise as this will be a dedicated box. Does VPN work on Linux? That would be the easiest way for me to control such a system: over my network. |
[QUOTE=kladner;275920]Does VPN work on Linux? That would be the easiest way for me to control such a system: over my network.[/QUOTE]
SSH won't work? |
[QUOTE=delta_t;275928]SSH won't work?[/QUOTE]
lol Why do you still use XP-32 at all? |
[QUOTE=delta_t;275928]SSH won't work?[/QUOTE]
Ignorance in play at this end. I only need something I can set up to control over the LAN from a Windows box to Linux. VPN is what I know for Windows to Windows. |
[QUOTE=Dubslow;275929]lol
Why do you still use XP-32 at all?[/QUOTE] How is SSH XP-32? It's how I access my Unix servers. Ahh, the simultaneous posting at work again. There are some XP-32 SSH freeware clients. |
[QUOTE=Dubslow;275929]lol
Why do you still use XP-32 at all?[/QUOTE] I have legacy hardware and software that doesn't like Win7 (Nikon film scanner, for one). |
[QUOTE=kladner;275933]I have legacy hardware and software that doesn't like Win7 (Nikon film scanner, for one).[/QUOTE]
So Linux then, or you can try FreeBSD (my personal favorite). |
[QUOTE=delta_t;275935]So Linux then, or you can try FreeBSD (my personal favorite).[/QUOTE]
The thing in question is only a dedicated machine to run P-1 (or something P95 related.) My main computer is dual boot XP-32 and Win7-64. I suppose that my monitor actually has multiple connections, but that still leaves mouse and keyboard issues. As I hope to just set up some old components to run with minimal attention, I was looking for some way to check up on it once in a while with what I currently have. Remote via LAN seemed like the way to go. |
[QUOTE=kladner;275938] I was looking for some way to check up on it once in a while with what I currently have. Remote via LAN seemed like the way to go.[/QUOTE]
I'll PM you and try to see if I can't get you started. |
[QUOTE=delta_t;275958]I'll PM you and try to see if I can't get you started.[/QUOTE]
Thanks, delta_t! |
You guys do know about [URL="http://mersenne-aries.sili.net/p1small.php"]this[/URL], right? Just making sure.
|
[QUOTE=Dubslow;276192]You guys do know about [URL="http://mersenne-aries.sili.net/p1small.php"]this[/URL], right? Just making sure.[/QUOTE]
Uh..... now I do. Did that just appear? Whatever the case, thanks for making sure. It looks very interesting in light of the recent discussions of assignments. EDIT: The distribution graphs are going to need some study for me to fully grasp them. There are more categories in the search functions than I had conceived of existing, too. (Looks again) Whoa! This widget actually hands out assignments!? |
[QUOTE=kladner;276196]Uh.....now I do. Did that just appear? Whatever the case, thanks for making sure. It looks very interesting in light of the recent discussions of assignments.[/QUOTE]No, it's not new. But it is handy, if that's the kind of P-1 work you like to do. :smile: Documentation, such as it is, is [url=http://www.mersennewiki.org/index.php/Mersenne-aries.sili.net#P-1_Work_Finder]here[/url]. Or just ask, of course.
|
I was mostly aiming this at Mr. P-1, but anyone such as petrw1 who has dedicated boxes would find it useful too. Thanks, Mr. James Heinrich.
|
Thanks, James, for the answer and the link.....and for the "just ask".
Actually, Dubslow, my reaction is pretty much, "Wow that's interesting! But I sure am glad that others understand the math and the system well enough, and are willing to find good stuff for me to do." I do appreciate your putting something new (to me) out there. |
Actually, my last post was aimed at Mr. Heinrich, but maybe if we stop posting, we'll stop misinterpreting each other. :wink::lol:
|
[QUOTE=Dubslow;276192]You guys do know about [URL="http://mersenne-aries.sili.net/p1small.php"]this[/URL], right? Just making sure.[/QUOTE]
Yes |
"....maybe if we stop posting, we'll stop misinterpreting each other."
You're right. Please excuse my misunderstanding. |
:smile:
|
[QUOTE=James Heinrich;276197]No, it's not new. But it is handy, if that's the kind of P-1 work you like to do. :smile: Documentation, such as it is, is [URL="http://www.mersennewiki.org/index.php/Mersenne-aries.sili.net#P-1_Work_Finder"]here[/URL]. Or just ask, of course.[/QUOTE]1. How about adding a Help link to the page (and other similar pages)?
2. How about adding a button for just looking at what would have been (or would be) assigned, instead of actually assigning? (Could such would-be assignments be locked for 60 seconds or so to give the user the chance to actually get the assignments shown?) After all, the page text starts with "Looks for exponents ...", not "Assigns exponents ...". |
[QUOTE=cheesehead;276335]1. How about adding a Help link to the page (and other similar pages)?[/quote]You know that real programmers never document anything, right? :razz:
[QUOTE=cheesehead;276335]2. How about adding a button for just looking at what would have been (or would be) assigned, instead of actually assigning?[/QUOTE]:huh: My page just shows a list of available assignments, which you can copy-paste into Prime95 (or similar) and the assignments will then get assigned to you (if available). I don't have any magic powers of reservation to actually reserve or create assignments. |
[QUOTE=James Heinrich;276356]You know that real programmers never document anything, right? :razz:[/QUOTE]:-)
We had our fun, but need to be more responsible nowadays. [quote]:huh: My page just shows a list of available assignments, which you can copy-paste into Prime95 (or similar) and the assignments will then get assigned to you (if available). I don't have any magic powers of reservation to actually reserve or create assignments.[/quote]Oh ... so it's the text on the button, not at page top, that's [strike]misleading[/strike] too easy to misunderstand IMO. |
[QUOTE=cheesehead;276361]Oh ... so it's the text on the button, not at page top, that's [strike]misleading[/strike] too easy to misunderstand IMO.[/QUOTE]Sorry, I still don't think I follow. What text on the page and/or buttons is open to misunderstanding?
|