Of course you can read anything into the numbers if you trample the logic. You did 68 and 69; that is where you saved your 20k GHzD. It has nothing to do with our discussion. Look at flashjh, for example, who did DOUBLE the amount of 70-bit work compared with you, and only saved 2000 GHzD.
Of course, everybody is free to do whatever work he likes. Be my guest to do as many DCTF as you want.... If you look in that table, you will see that I even took 11 expos to 72 myself. But that was "ages ago" when I joined the project and did not realize how I was wasting my resources. Axn and a few others convinced me (the posts are on the forum). If your card can do DCLL, then 69 is the MAX you may want to TF to, for this range. Over 69, you clear them faster doing DCLL. |
[QUOTE=LaurV;325782]If your card can do DCLL, then 69 is MAX you may want to TF, for this range. Over 69, you clear them faster doing DCLL.[/QUOTE]
Since my CPU does in fact do DCs faster than my GPU (yes, I did benchmark that), I consider myself part of the "my card can not do DCLL" team. How far should I be factoring? |
[QUOTE=ckdo;325795]Since my CPU does in fact do DCs faster than my GPU [/QUOTE]
Whoops! :blush: Then you can TF as high as you like :D as long as you have no alternative... You may consider doing LLTF, however... which is worth doing more than DCTF to 71 (my opinion; I can't argue here, and other people can contradict me; my argument was GPU-DC-TF against GPU-DC-LL only, but as long as you can't do the latter...). But of course, it is entirely up to your preference. |
[QUOTE=ckdo;325795]Since my CPU does in fact do DCs faster than my GPU (yes, I did benchmark that), I consider myself part of the "my card can not do DCLL" team. How far should I be factoring?[/QUOTE]
This logic is flawed; you are comparing apples to oranges. Look at it this way: you say your GPU is not suited to DCLL, but by your logic, if you moved your GPU to an old Pentium 4 box, the same GPU would now be well suited to DCLL. What LaurV and I recommend is comparing a GPU's TF throughput to the same GPU's CUDALucas throughput: from that, one can determine whether the GPU should be doing more TF on an exponent or should run LL on the exponent (or let someone else run the LL). |
[QUOTE=Prime95;325800]What LaurV and I recommend is comparing a GPU's TF throughput to a GPU's CUDALucas throughput, one can determine whether a GPU should be doing more TF on an exponent or should run LL on the exponent (or let someone else run the LL).[/QUOTE]
Do this, go to James H's page: [url]http://www.mersenne.ca/cudalucas.php[/url] Click on your card. The table that pops up will tell you where the break point is. The green-yellow semi-diagonal line is the break even for DC GPU-LL vs GPU-TF for that card. |
[QUOTE=Uncwilly;325817]Do this, go to James H's page: [url]http://www.mersenne.ca/cudalucas.php[/url]
Click on your card. The table that pops up will tell you where the break point is. The green-yellow semi-diagonal line is the break even for DC GPU-LL vs GPU-TF for that card.[/QUOTE] As noted earlier, this chart assumes no P-1 has been done. It would be great if James could change the chart to show the DC breakeven point when P-1 has been performed (the factoring chart LaurV pointed to, [url]http://www.gpu72.com/reports/factoring_cost/[/url], gives enough information to produce a fairly accurate adjustment to the chance of finding a factor). For extra credit, the web page could offer a checkbox to show the crossover points assuming no P-1 done. Such a chart would help end the speculation as to where we should increase GPU72's DC TF bit depth. |
Adding to the confusion: doesn't James' page still rely on data from mfaktc 0.19, not 0.20? Isn't the new version quite a bit more efficient, as it allows the CPU to be used for DC/LL?
|
[QUOTE=Prime95;325832]Such a chart would help end the speculation as to where we should increase GPU72's DC TF bit depth.[/QUOTE]
OK... I've finally got myself a semi-reasonable GPU (EVGA GTX 560 2GB "Factory Overclocked") to replace my pathetic FX 1800. Wow! ($600 BDS; $300 US -- not the best and overpriced, but it was the best 2G card available here in Bimshire and I wanted to buy locally to avoid shipping hassles etc.) The purchase was motivated by my computer vision work, but I'm going to run a few hundred TF runs immediately in front of the wave front in 31M to 70 so we can get a reasonable idea as to what type of percentage we might expect. (31M to 70 has only had 83 attempts before, with zero factors found, so we don't actually have any useful data on what we might expect exactly there.) The card is only doing one every 25 minutes, so I won't be able to keep ahead of the wave, but I'll do what I can. Anyone else interested? |
[QUOTE=chalsall;325850]OK... I've finally got myself a semi-reasonable GPU (EVGA GTX 560 2GB "Factory Overclocked") to replace my pathetic FX 1800. Wow! ($600 BDS; $300 US -- not the best and overpriced, but it was the best 2G card available here in Bimshire and I wanted to buy locally to avoid shipping hassles etc.)
The purchase was motivated by my computer vision work, but I'm going to run a few hundred TF runs immediately in front of the wave front in 31M to 70 so we can get a reasonable idea as to what type of percentage we might expect. (31M to 70 has only had 83 attempts before, with zero factors found, so we don't actually have any useful data on what we might expect exactly there.) The card is only doing one every 25 minutes, so I won't be able to keep ahead of the wave, but I'll do what I can. Anyone else interested?[/QUOTE] Yes me! Tell me what to do. |
[QUOTE=swl551;325851]Yes me! Tell me what to do.[/QUOTE]
Cool. OK. Give me a bit of time (I just got back from the office and we haven't had dinner yet). I need to modify the Assignment form, then bring in some more candidates. |
[QUOTE=chalsall;325850]OK... I've finally got myself a semi-reasonable GPU (EVGA GTX 560 2GB "Factory Overclocked") to replace my pathetic FX 1800. Wow! ($600 BDS; $300 US -- not the best and overpriced, but it was the best 2G card available here in Bimshire and I wanted to buy locally to avoid shipping hassles etc.)
The purchase was motivated by my computer vision work, but I'm going to run a few hundred TF runs immediately in front of the wave front in 31M to 70 so we can get a reasonable idea as to what type of percentage we might expect. (31M to 70 has only had 83 attempts before, with zero factors found, so we don't actually have any useful data on what we might expect exactly there.) The card is only doing one every 25 minutes, so I won't be able to keep ahead of the wave, but I'll do what I can. Anyone else interested?[/QUOTE] Sure! Although my firepower sucks, Count me in :smile: |
[QUOTE=chalsall;325850]... I'm going to run a few hundred TF runs immediately in front of the wave front in 31M to 70 so we can get a reasonable idea as to what type of percentage we might expect. (31M to 70 has only had 83 attempts before, with zero factors found, so we don't actually have any useful data on what we might expect exactly there.)[/QUOTE]
I have no reason to believe the hit rate would be any different than in the 32M range -- and we have a lot of data there. |
[QUOTE=Prime95;325863]I have no reason to believe the hit rate would be any different than in the 32M range -- and we have a lot of data there.[/QUOTE]
I agree. Also in the 30M range. And the average between the two is 1.1546% -- better than the ~1% suggested. |
TF level
At the risk of getting my head bitten off [B]again[/B], may I remind you all that if it's worth TFing 30M to 71, then it's worth taking 60M to 75.
David |
[QUOTE=kracker;325861]Sure! Although my firepower sucks, Count me in :smile:[/QUOTE]
OK, thanks guys. I have started bringing in candidates. Only a thousand at a time until we've got a comfortable margin. Go to the [URL="https://www.gpu72.com/account/getassignments/dctf/"]DC TF Assignment form[/URL], set the "Will factor to" field to be 70, and the "Option" to be "Lowest Exponent". I would suggest that people try to only take a hundred or so at a time, and try to return results at least every hour or so, at least at the start of this exercise. |
[QUOTE=chalsall;325868]OK, thanks guys.
I have started bringing in candidates. Only a thousand at a time until we've got a comfortable margin. Go to the [URL="https://www.gpu72.com/account/getassignments/dctf/"]DC TF Assignment form[/URL], set the "Will factor to" field to be 70, and the "Option" to be "Lowest Exponent". I would suggest that people try to only take a hundred or so at a time, and try to return results at least every hour or so, at least at the start of this exercise.[/QUOTE] Thanks :smile: |
[QUOTE=chalsall;325868]OK, thanks guys.
I have started bringing in candidates. Only a thousand at a time until we've got a comfortable margin. Go to the [URL="https://www.gpu72.com/account/getassignments/dctf/"]DC TF Assignment form[/URL], set the "Will factor to" field to be 70, and the "Option" to be "Lowest Exponent". I would suggest that people try to only take a hundred or so at a time, and try to return results at least every hour or so, at least at the start of this exercise.[/QUOTE] This work can be fetched via the GPU72WorkFetcher utility by setting the URL in the config file to [B]URL:[url]https://www.gpu72.com/account/getassignments/dctf/[/url][/B] (set the pledge and option as Chris has stated) |
I will join with one gtx580, for a week or so, starting tonight (my time), to help settle this debate once and for all :smile:
|
[QUOTE=LaurV;325906]I will join with one gtx580, for a week or so, starting tonight (my time) to help setting this debate once forever :smile:[/QUOTE]
Coolness. Thanks. :smile: |
Took 30 expos for 69-70.... just enough for the night (GTX 560, normal); around 26 minutes for a 31075xxx exponent (3.85 GHzD).
|
[QUOTE=henryzz;325845]Adding to the confusion: doesn't James' page still rely on data from mfaktc 0.19, not 0.20? Isn't the new version quite a bit more efficient, as it allows the CPU to be used for DC/LL?[/QUOTE]My [url=http://www.mersenne.ca/cudalucas.php?model=13]GPU TF-LL comparison page[/url] is purely GPU crossover points, although you are correct that GPU-sieving does allow the CPU to be used for other tasks. But the performance numbers reflect mfaktc v0.20 performance (as does the [url=http://www.mersenne.ca/mfaktc.php]GPU-TF expected performance page[/url]).
[QUOTE=Prime95;325832]It would be great if James could change the chart to show the DC breakeven point when P-1 has been performed (the factoring chart LaurV pointed to, [url]http://www.gpu72.com/reports/factoring_cost/[/url], gives enough information to produce a fairly accurate adjustment to the chance of finding a factor). For extra credit, the web page could offer a checkbox to show the crossover points assuming no P-1 done.[/QUOTE]I'll see what I can come up with over the next few days. |
[QUOTE=chalsall;325850]... but I'm going to run a few hundred TF runs immediately in front of the wave front in 31M to 70 so we can get a reasonable idea as to what type of percentage we might expect. (31M to 70 has only had 83 attempts before, with zero factors found, so we don't actually have any useful data on what we might expect exactly there.)[/QUOTE]
[url]http://www.mersenne.org/various/math.php[/url] says: [QUOTE]the chance of finding a factor between 2[SUP]X[/SUP] and 2[SUP]X+1[/SUP] is about 1/x[/QUOTE] So, assuming all exponents are already at 69 and being brought to 70, we should expect 1/69, or 1.45%, to have factors, right? Edit: I think I just realized why it's not that simple: P-1 factoring will find a portion of the factors. Since not every exponent was P-1'd to the same extent, this becomes much harder to calculate over the range, and to get an accurate figure without simply testing and looking at results, you'd have to get all complete P-1 bounds and calculate based on that. |
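Mini-Geek's estimate above can be sketched numerically. This is only an illustration of the rough 1/x rule from the math page; the 25% P-1 overlap figure is a made-up placeholder, since (as noted) the real adjustment depends on each exponent's actual P-1 bounds:

```python
# Rough sketch of expected TF yield for one bit level, per the 1/x rule:
# P(factor between 2^x and 2^(x+1)) ~ 1/x.
def tf_factor_chance(bit_level, p1_done=False, p1_overlap=0.25):
    """Chance that TF from 2^bit_level to 2^(bit_level+1) finds a factor.

    p1_overlap is a HYPOTHETICAL fraction of those factors that a prior
    P-1 run would already have removed; the real figure depends on the
    P-1 bounds used for each exponent.
    """
    p = 1.0 / bit_level
    if p1_done:
        p *= 1.0 - p1_overlap
    return p

print(f"{tf_factor_chance(69):.2%}")                # ~1.45% with no P-1
print(f"{tf_factor_chance(69, p1_done=True):.2%}")  # reduced when P-1 done
```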
[QUOTE=Mini-Geek;325958] Since not every exponent was P-1'd to the same extent, this becomes much harder to calculate over the range...[/QUOTE]
All true, but we aren't seeking perfection here. I'd say if we know our chance of finding a factor to within 10%, that's good enough to determine a very reasonable DCTF crossover point. Using chalsall's 1.1546% number we get 1 chance in 86.6 of finding a factor. James knows how to turn that into crossover points. |
Just started... it is doing one every 40 min, about ~200M/s with mfakto on a 7770.
|
Took 100 last night; I'm doing one every 12 minutes.
|
[QUOTE=Chuck;325971]Took 100 last night; I'm doing one every 12 minutes.[/QUOTE]
Nice!!! Thanks. I really wish I had been able to get me a 580 rather than a 560. Next time... So everyone knows, we need to do at least ~117 a day to pull ahead of the wavefront again. And as you can [URL="https://www.gpu72.com/reports/workers/dctf/70/day/"]see here[/URL], we're comfortably doing more than twice that. I don't want to commit to doing the entire range until we've heard from James as to whether or not this actually "Makes Sense", and we have some more idea what success rate we're going to have. So far it's not looking so good -- only 1 factor found (by Scott) out of a total of 300 attempts -- but it's still too early to read anything into that. I have, however, adjusted the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]Estimated Completion[/URL] report to show what is required to complete the range. Please note that the report shows the estimated completion if [B][I][U]all[/U][/I][/B] of our TFing resources were brought to bear, not just what our DC TFing rate is. |
add a second (at least) from me, haven't reported yet. I had to sleep (and still have 3 expo to run for my batch)
M31076021 has a factor: 820637132297246887879 |
Took 200. Did about 30. Found one factor (641116418205117719911 of 31082819). I consider I got extremely lucky. I don't expect another factor in the whole bunch (otherwise my theory would be totally wrong:smile:). I do one in about 6:30 min (I temporarily re-allocated a second card which was TF-ing in the 332M range), or about 9 per hour, so I am "insured" for the next ~20 hours.
[edit: in fact, if I find a second factor, then I cleared two exponents in the same amount of time I would use to clear them doing DCLL with both cards, so no time was lost] |
Another one.
M31028891 has a factor: 893642580498913519343 For me that's one in 84 attempts in this group. |
My 30 assignments are done; only got one factor, reported above.
|
And another one... M31042729 has a factor: 1089392747939138080577.
1 for 49 (so far) for me. |
Wow, I look away from the forum for a couple days and missed big news! I'll toss a couple hundred DCs in the hopper with my 480.
|
Or maybe not - site down?
|
[QUOTE=Aramis Wyler;326045]Or maybe not - site down?[/QUOTE]
It's down for me as well. |
[QUOTE=Dubslow;326046]It's down for me as well.[/QUOTE]
Sorry -- doing a complete back-up in preparation of a potential big announcement. It will be back shortly.... |
I took 30 for my 460. It is turning out 177 GHz-days/day at this range.
5 assignments down. The fourth test turned up a factor: M31108411 has a factor: 930013238991861808921. CPU credit is 6.3583 GHz-days. |
another factor
M31080983 has a factor: 787976240107172165047 [TF:69:70:mfaktc 0.20 barrett76_mul32_gs]
|
[QUOTE=chalsall;326048]Sorry -- doing a complete back-up in preparation of a potential big announcement. It will be back shortly....[/QUOTE]
Sorry everyone. That took longer than expected... She's back. |
[QUOTE=chalsall;326048]...a potential big announcement....[/QUOTE]
Yours or George's? |
[QUOTE=petrw1;326055]Yours or George's?[/QUOTE]
Let me guess.. GPU72 is with child? GPU73? |
[QUOTE=petrw1;326055]Yours or George's?[/QUOTE]
George's. The "SlashDot Effect" can bring unwanted attention.... |
I pulled 200 DCs and the 2nd one had a factor:
[CODE]got assignment: exp=31110319 bit_min=69 bit_max=70 (3.84 GHz-days)
Starting trial factoring M31110319 from 2^69 to 2^70 (3.84 GHz-days)
k_min = 9487138500840
k_max = 18974277003032
Using GPU kernel "barrett76_mul32_gs"
Date   Time  | class  Pct  | time   ETA   | GHz-d/day  Sieve  Wait
Jan 26 21:34 |  3896 84.4% | 0.845  2m07s |    409.34  69941  n.a.%
M31110319 has a factor: 718932871683122603489
found 1 factor for M31110319 from 2^69 to 2^70 (partially tested) [mfaktc 0.20 barrett76_mul32_gs]
tf(): total time spent:  11m 26.995s[/CODE]
409 GHz-days/day at this range. I am not sure why the GHz-days/day goes up based on the range I'm factoring. Shouldn't the smaller numbers be worth fewer days if they factor so much more quickly? |
Wow, I missed a big discussion. Since you seem to have the firepower working, I will continue with my 33M to 71 runs. I've had the 'crossover' discussion with LaurV too many times already (back when CPUs affected the outcome), but looking at what has been posted: 12 min per TF = 5 per hour, and a 20-hour LL = 1 LL per 100 TF. 80-120 TF per LL is considered the 'gray' zone; some feel it is not worth it, some do. The problem here has always come down to, as George pointed out, 'apples and oranges'. Suffice it to say, if some people think it worthwhile, what's wrong with letting them do it?
|
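bcp19's ratio above checks out; restating his timings (nothing here beyond the numbers in his post):

```python
# bcp19's back-of-envelope numbers: one TF run every 12 minutes on a card
# whose DC LL test takes ~20 hours.
tf_minutes = 12
ll_hours = 20
tf_runs_per_ll = ll_hours * 60 / tf_minutes
print(tf_runs_per_ll)   # 100.0 -- inside the 80-120 "gray zone"
```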
bcp19: it is a volunteer project so you are free to do what you want. LLTF is still the most useful for the project as a whole. As davieddy said (and he is right this one time) taking 31M to 70 is like taking 62M to 74 and we are having trouble making sure all 62M will be at 73 let alone 74.
But whatever floats your boat. Everything helps. My observation was simply that 31M to 70 is better than 33M to 71. |
[QUOTE=swl551;326050]M31080983 has a factor: 787976240107172165047 [TF:69:70:mfaktc 0.20 barrett76_mul32_gs][/QUOTE]
Chris, should we continue processing this range or go back to our regular work? |
I did my 200, got 3 factors.
That is, I got some profit, having cleared 3 expos in the time I would only clear 2 by DCLL. I was lucky :D Not convincing. I will take 100 more. |
I also reserved 100 DC 31M tasks, my 480 and 470 should start working on them in about ~1 hour.
|
[QUOTE=swl551;326123]Chris, should we continue processing this range or go back to our regular work?[/QUOTE]
I'd suggest that people shouldn't move too much firepower from LLTF to DCTF, but those who do regularly do DCTF (or LMH (hint... hint... :wink:)) should continue in the 31M range to 70 for the time being. Once we hear back from James we'll have a definitive answer, but it appears that 31M to 70 does make sense. 10 for 939 (1.065%) so far. |
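Whether the 10-for-939 rate so far is meaningfully below Prime95's 1-in-86.6 estimate can be checked with a quick binomial tail calculation, using only the standard library (a rough sketch, not anyone's official analysis):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# 10 factors in 939 attempts, against the ~1/86.6 per-attempt chance:
prob = binom_cdf(10, 939, 1 / 86.6)
print(f"{prob:.2f}")  # probability of seeing 10 or fewer factors by luck
```

The result is nowhere near either tail, i.e. 10/939 is entirely consistent with the 1-in-86.6 estimate, which supports the "too early to read anything into it" caution above.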
Meh, my CPUs can do a DC faster than my GPU can find a factor in this range.
|
[QUOTE=kracker;326159]Meh, my cpu's can do a DC faster than my GPU finding a factor in this range.[/QUOTE]
Yes, that will be true for some GPU/CPU combinations. If 31M -> 70 doesn't make sense for you, move to somewhere else where it does. |
[QUOTE=Prime95;325832]As noted earlier, this chart assumes no P-1 has been done. It would be great if James could change the chart to show the DC breakeven point when P-1 has been performed[/QUOTE]I have updated my chart page to both be a little easier to read, and to address the above concerns:
[url]http://www.mersenne.ca/cudalucas.php?model=13[/url] The chart is now interactive: you can mouse-over any point and get the breakeven point at 1M granularity without having a big table of numbers. The data is now calculated based on the number of seconds to clear an exponent of that range:
[list]
[*]TF 1st LL: time to run TF to this bit level multiplied by the probability of finding a factor (assuming no P-1 done)
[*]TF 2nd LL: time to run TF to this bit level multiplied by the probability of finding a factor, with the factor probability reduced on the assumption that P-1 was already done
[*]LL 1st test: time to run two LL tests
[*]LL 2nd test: time to run one LL test (ignoring the chance of non-matching residues)
[/list]
Note the last point: I don't take into account the fact that some percentage of L-L tests won't match residues, necessitating a 3rd (or more) L-L test. If someone can let me know the current average rate of mismatched L-L tests, I can factor that in. |
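The costs James lists can be turned into a toy breakeven check. Everything below is illustrative: the timings are invented, and the probability is just the rough 1/69 figure for this bit level, not James's actual code:

```python
def tf_worthwhile(tf_seconds, ll_seconds, p_factor, first_time=True):
    """Is one more bit level of TF cheaper than just running the LL test(s)?

    A found factor saves two LL tests for a first-time exponent, but only
    one for a double-check, so TF pays off when the expected LL time saved
    exceeds the TF time spent.
    """
    ll_tests_saved = 2 if first_time else 1
    return p_factor * ll_tests_saved * ll_seconds > tf_seconds

# Invented example timings, roughly the scale discussed in this thread:
print(tf_worthwhile(tf_seconds=25 * 60,      # ~25 min 69->70 TF run
                    ll_seconds=20 * 3600,    # 20-hour DC LL on same card
                    p_factor=1 / 69,
                    first_time=False))
```

Note how the same card and the same exponent can give opposite answers for first-time LL vs DC, which is why the chart draws separate crossover lines for each.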
[QUOTE=James Heinrich;326167]The chart is now interactive: you can mouse-over any point and get the breakeven point at 1M granularity without having a big table of numbers.[/QUOTE]
[B][I][U]Very[/U][/I][/B] nice work!!! Thanks a lot! :smile: So, (assuming I'm reading this correctly) taking 31M to 70 is definitely profitable for higher-end cards. And 71 is slightly profitable for the very high-end cards. Because we're not doing as much LLTFing as we should at the moment, I'd suggest we just take 31M to 70, not 71. |
[QUOTE=chalsall;326170]So, (assuming I'm reading this correctly) taking 31M to 70 is definitely profitable for higher-end cards. And 71 is slightly profitable for the very high-end cards.[/QUOTE]At 31M for DC the cutoff point is TF to 2[sup]70.775[/sup] on CC 2.0, 2[sup]70.676[/sup] on CC 2.1, 2[sup]70.448[/sup] on CC 3.0. So definitely to 2[sup]70[/sup], to 2[sup]71[/sup] is debatable for CC 2.0 GPUs but isn't really worth it for CC 3.0. But as you said, better to spend the effort on LLTF rather than borderline DCTF.
[b]edit:[/b] but I don't want to read too much into TF cutoff points until I have data on percentage of LL tests needing a triple-check. |
[LEFT]That's some awesomesauce +1 right there, James.
[/LEFT] |
[QUOTE=James Heinrich;326167]I have updated my chart page to both be a little easier to read, and to address the above concerns:
[url]http://www.mersenne.ca/cudalucas.php?model=13[/url][/QUOTE] Very nice! Now we can make informed decisions :smile: As to triple-checking rates, 2% is probably a good first estimate. I think there are some posts in the Data subforum doing more rigorous analysis. |
Worker's Progress for last X is working great, :smile:
[COLOR=Gray][URL="https://www.gpu72.com/reports/workers/dc/week/"][SIZE=1]:P[/SIZE][/URL][/COLOR] |
[QUOTE=James Heinrich;326173]At 31M for DC the cutoff point is TF to 2[sup]70.775[/sup] on CC 2.0...[/QUOTE][QUOTE=Prime95;326219]As to triple-checking rates, 2% is probably a good first estimate.[/QUOTE]After adding in a correction factor on the assumption that 2% of exponents will require a triple-check, the breakeven numbers vary but slightly. For example 31M DCTF breakeven point on CC 2.0 shifts from 2[sup]70.775[/sup] to 2[sup]70.803[/sup].
|
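The size of that shift is consistent with the cost model: TF time roughly doubles per bit level, so making a found factor worth about 2% more moves the breakeven bit level up by roughly log2(1.02) bits:

```python
import math

# TF cost roughly doubles per bit level, so scaling the value of a found
# factor by 1.02 (covering the expected triple-checks) raises the
# breakeven bit level by about log2(1.02) bits.
shift = math.log2(1.02)
print(f"{shift:.3f}")   # ~0.029, in line with James's 70.775 -> 70.803
```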
Chalsall, what kind of sample size are you looking for in the DCTF 31M-to-70 project? I.e., are we pretty much good to get back to regular LLTF work, do you need another day, or should we aim for 5k or so? Seems like we churned out about 1200 yesterday.
|
[QUOTE=Aramis Wyler;326240]Chalsall, what kind of sample size are you looking for in the dctf 31M to 70 project? aka, we pretty much good to get back to regular lltf work, need another day, or aim for 5k or so? Seems like we churned out about 1200 yesterday.[/QUOTE]
I was looking for about 2000 to 3000 samples. But I trust James' analysis enough to take George's advice, and I'm just bringing in everything in 31M until it's TFed to 2^70. We're now approximately 10 days ahead of the DC wave. It won't be a loss if a few candidates in 32M (which are already at 2^70) are assigned for DCLLing before the regular DCTFers finish 31M. So, thanks for the work everyone! :smile: But at the end of the day, LLTFing is far more important. |
[QUOTE=chalsall;326246]I was looking for about 2000 to 3000 samples. But I trust James' analysis enough to take George's advice, and I'm just bringing in everything in 31M until it's TFed to 2^70.
So, thanks for the work everyone! :smile: But at the end of the day, LLTFing is far more important.[/QUOTE] OK, I'm done here then; I have a group of LLTF queued up waiting to finish. I checked 200 DCTF 31M and found one factor. |
Aye, I think I have another 20 hours or so of dctf queud up, then it's back to regularly scheduled programming. :)
|
[QUOTE=James Heinrich;326167]I have updated my chart page to both be a little easier to read, and to address the above concerns:
[URL]http://www.mersenne.ca/cudalucas.php?model=13[/URL][/QUOTE] So I was looking at James' fabulous chart, and its recommendations for using mfaktc. At 60M, for example, it recommends 74.674 for first timers. Since 74.674 isn't actually a legal value for mfaktc, shouldn't the lines have a ceiling, floor, or round function applied? |
Very nice work James!
That chart fits my timing perfectly (in spite of the fact I never sent benchmarks to the site!). Meantime, [B]I switched to the jinxed side[/B]: after finding 3 factors in 200 trials, and considering myself on the lucky side, I got another 100 assignments, finished them overnight, and did not find any new factor. So I now have 1 in 100 successful hits, therefore slower than DCLL-ing them. But not much slower; in fact I am in the gray territory bcp19 was talking about, only a few minutes behind the line. Therefore I will stay with this DCTF-ing activity for a while, but I can't take new assignments till tonight when I reach home. I mean, I can take the assignments, but there is no way to update the worktodo files if I am not in front of that computer, so for the next 6 hours I still do LMH-TF. After that, I will let chalsall argue with Uncwilly for a while :bump2: :razz: |
[QUOTE=Aramis Wyler;326270]Since 74.674 isn't actually a legal value for mfaktc, shouldn't the lines have a ceiling, floor, or round function applied?[/QUOTE]No, it's intended as an analysis only, to help those who generate assignments for the rest of us (George/Primenet and Chris/GPU72) decide what cutoff points make the most sense. And to a lesser extent anyone who is crafting their own set of assignments. When generating actual assignments you would of course want to use only integer bit levels, but the choice of method of rounding is left to the user.
As an aside, I don't see any reason why mfakt* [i]couldn't[/i] work with non-integer bitlevels in assignments, but it could make tracking of factored status of exponents considerably more complex so I understand why that functionality perhaps [i]shouldn't[/i] be exposed. |
It's not very clear from your post whether you want to round() them or not. Please don't! Leave the decimals there; they are very helpful for guys like me who tune their FFTs. For this range, a 580, and CuLu, the default 1600k FFT (as I said in the past) is not the optimum; it can be tuned (up) to get about 3%-6% faster DC results. When "balancing" the work (like between DC and TF), those decimals could be really important (I don't know how yet, but I may find a use for them in the future). I like the graphic with decimals!
|
[QUOTE=Aramis Wyler;326270]Since 74.674 isn't actually a legal value for mfaktc, shouldn't the lines have a ceiling, floor, or round function applied?[/QUOTE]
No, they shouldn't. There are at least two reasons. First, on a bigger-scale basis, a project like GPU72 may choose to take all candidates in your range to 74, and then start taking random candidates to 75, until the average is 74.674. That process might even factor in candidates that will never reach 74 and ones that have exceeded 75 already. Second, it's a flaw of PrimeNet and related software that it deals in bit levels. k values, as used elsewhere, are a much better option, and would allow you to hit 74.674 (or whatever) pretty much spot on. mfaktc can actually be limited to searching only a certain range of k values, which is what the self-test does. That feature is not available to the end user, though, I think, and you wouldn't be able to submit such results to PrimeNet anyway, found factors aside. |
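ckdo's k-value point can be checked against the mfaktc log posted a few posts up: candidate factors of M_p all have the form q = 2kp + 1, so any bit level, integer or not, maps directly onto a k range:

```python
# Candidate factors of a Mersenne number M_p have the form q = 2*k*p + 1,
# so "TF from 2^69 to 2^70" is really just a k-range -- and a fractional
# target like 74.674 would map onto a k-range in exactly the same way.
p = 31110319                 # exponent from swl551's mfaktc log above
k_min = 2**69 // (2 * p)     # mfaktc reported k_min = 9487138500840
k_max = 2**70 // (2 * p)     # mfaktc reported k_max = 18974277003032
print(k_min, k_max)          # matches the log to within class rounding
```

(mfaktc rounds the k bounds outward to class boundaries, which is why the log's k_min is slightly below the exact quotient.)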
[QUOTE=James Heinrich;326167]I have updated my chart page to both be a little easier to read, and to address the above concerns:
[URL]http://www.mersenne.ca/cudalucas.php?model=13[/URL] [/QUOTE] Awesome chart! You've taken what I visualize in my head and turned it into something others can understand. @Chalsall: How many days ahead are we with all the 32M exps available and how many 31M exps are left to take to 70? |
Still on the jinxed side: I took another 200 expos, of which I did 40. No factor. From a [URL="https://www.gpu72.com/reports/workers/dctf/70/week/"]total of 340 done[/URL] - only 3 factors. So, before going deeper into the sh.. mud, I'd better stop.
So, I will do this batch to the end, and then stop and go to more profitable things if no factor is found. That will be 500 trials with only 3 factors, well behind the return I would have doing DC. If I find a factor on the way, I will stop immediately and unreserve the rest of the expos, to make sure I get "maximum efficiency" (factors found over trials). Continuing to get "no factors" will just get me deeper into the jinxed side. If I found a factor right now, then I might continue, because I would again be in the gray area. But as this does not show... I will go to sleep and let the batches think. 12:45 AM here. I would have already cleared 4 expos (two rounds of DC on two cards), and be well into the fifth/sixth clearance, if I had been doing DC all this time. Sorry Chris, but with these scores, Uncwilly's side is more tempting... |
[QUOTE=LaurV;326361]Sorry Chris, but with these scores, Uncwilly's side is more tempting...[/QUOTE]
Don't apologize. We appreciate what you've done. But may I suggest that perhaps doing current LLTFing makes more sense than going back to Uncwilly? (Sorry, Uncwilly, but as George himself said, 332M is for the foolhardy... :smile:) |
Yikes — I see there are now 18,000+ DCTF available in the 31M range. Perhaps I should devote one day per week to working on these.
OR, Chris could get really clever and create a "blended" option for getting TF work which would include the appropriate mix of LLTF and DCTF. |
[QUOTE=Chuck;326366]OR, Chris could get really clever and create a "blended" option for getting TF work which would include the appropriate mix of LLTF and DCTF.[/QUOTE]
No. I'm not that clever... LLTF is the most important work right now. Only by explicit request will DCTF be made available.... :smile: |
Put a few 31M expos onto my spare GPU (GT640, 73GHzD/D).
30 expos (1.5 days :yucky:) and found 2 factors. |
[QUOTE=Antonio;326369]Put a few 31M expos onto my spare GPU (GT640, 73GHzD/D).
30 expos (1.5 days :yucky:) and found 2 factors.[/QUOTE] Thanks. We're interested in the empirical and theoretical value of factors found / factoring attempts across a large sample set. Unfortunately we don't have infinite time, nor infinite candidates. We work with what we have. :smile: |
[QUOTE=LaurV;326361]Still on the jinxed side, ... From a [URL="https://www.gpu72.com/reports/workers/dctf/70/week/"]total of 340 done[/URL] - only 3 factors.
I would have already cleaned 4 expos (two rounds DC in two cards), and being well into the half of the fifth/sixth clearance if I would do DC all this time.[/QUOTE] When operating near the crossover point, some will be "jinxed" and someone will be "blessed". |
[QUOTE=chalsall;326362](Sorry Uncwilly, but as George himself said, 332M is for the foolhardy... :smile:)[/QUOTE]I am trying to prevent those that decide to do 100M-digit LL's without doing the TF from throwing away cycles. There are ~1500 LL's assigned in the range, and there are nowhere near that many that have taken the expos to 77 or higher.
BTW, I prefer to TF there than LMH TF generically. The borged boxes are not doing any LL's, because they are borged. |
I found 1 factor out of 100 tasks in the 31M range on a GTX480/GTX470 rig, which is somewhat average I believe.
Even if CUDALucas would clear more exponents per day, I would still rather do TF (and clear a bit less exponents), since CUDALL gives ~25GHzdays/day and DCTF 300+GHzdays/day for each card. |
[QUOTE=Uncwilly;326406]I am trying to prevent those that decide to do 100M digit LL's without doing the TF from throwing away cycles. There are ~1500 LL's assigned in the range and there is not nearly that many that have taken the expos to 77 or higher.
BTW, I prefer to TF there than LMH TF generically. The borged boxes are not doing any LL's, because they are borged.[/QUOTE] Yes, I agree that what you're doing Makes Sense[SUP](TM)[/SUP]. Those who are attempting LLs up there, on the other hand.... :wink: |
[QUOTE=Chuck;326366]Perhaps I should devote one day per week to working on these.[/QUOTE]
That would be handy and appreciated. We only need about 120 a day to stay ahead of the wave, and we've got about a 10.4 day lead at the moment. I have adjusted the DCTF assignment form to default to TFing to 70 again. For bcp19 et al who want to keep going to 71 in 33M, just changing the pledge level to 71 will default to candidates at 33M or above for "What Makes Sense". If anyone wants to take 31M to 71 (not recommended), just change the Option to "Lowest Exponent" and it will assign candidates in the 31M range. |
[QUOTE=chalsall;326486]That would be handy and appreciated. We only need about 120 a day to stay ahead of the wave, and we've got about a 10.4 day lead at the moment.
I have adjusted the DCTF assignment form to default to TFing to 70 again. For bcp19 et al who want to keep going to 71 in 33M, just changing the pledge level to 71 will default to candidates at 33M or above for "What Makes Sense". If anyone wants to take 31M to 71 (not recommended), just change the Option to "Lowest Exponent" and it will assign candidates in the 31M range.[/QUOTE] I'll just grab from the default then, it'll be a few days before I grab more though. |
[QUOTE=bcp19;326330]@Chalsall: How many days ahead are we with all the 32M exps available and how many 31M exps are left to take to 70?[/QUOTE]
I realized I hadn't answered your question... Taking into account the fact that all of 32M is already at 70, and the 652 in 33M which are at 71, we are currently about 191 days ahead of the DC wavefront. As to the number of 31Ms to take to 70, that is currently reported on the Available Assignments page: 18,404. At George's suggestion, I've brought in everything available in 31M, so if we fall behind in this new work, Primenet will assign candidates from 32M. There are currently 713 candidates assigned for DCLLing in 31M that are only at 69. If and when they become available (read: expire on Primenet), Spidy will bring them in for processing. So, if you're willing, moving back down into 31M to 70 would make sense... We definitely have the time.... |
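A back-of-envelope check on the figures above, combining the 18,404 candidates reported here with the ~120 clears per day quoted earlier in the thread as the pace needed to stay ahead of the wave:

```python
# Figures quoted in the thread; purely a back-of-envelope check.
candidates_31M = 18_404     # 31M candidates still to take to 70 bits
wavefront_per_day = 120     # clears per day needed to stay ahead

days_of_tf_work = candidates_31M / wavefront_per_day
print(f"About {days_of_tf_work:.0f} days of 31M work at the wavefront's pace")
```

That fits comfortably inside the ~191-day lead, consistent with "We definitely have the time".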
[QUOTE=bcp19;326501]I'll just grab from the default then, it'll be a few days before I grab more though.[/QUOTE]
(We cross-posted.) Cool. Thanks. I'm going to keep my 560 in this range for the time being (when it's not SIFTing images), so between you, Chuck and myself we should be good. |
While I have not yet done any in the 31M range, I think these numbers should help the TF-to-70 cause:
29M - 3,406 exp tested, 38 factors found (1.115%)
30M - 3,839 exp tested, 43 factors found (1.12%)
32M - 10,261 exp tested, 122 factors found (1.189%) |
More complete data [URL="http://en.gpu72.com/reports/factor_percentage/"]here[/URL].
|
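With samples this large the per-range rates above are fairly tight. A short sketch attaching 95% intervals to those counts (using the simple normal/Wald approximation, chosen here only for brevity):

```python
from math import sqrt

def factor_rate(found, tested, z=1.96):
    """Observed factor rate with a normal-approximation 95% interval."""
    p = found / tested
    half = z * sqrt(p * (1 - p) / tested)
    return p, p - half, p + half

# Counts quoted above (DCTF taken to 70 bits, by exponent range).
for label, tested, found in [("29M", 3406, 38),
                             ("30M", 3839, 43),
                             ("32M", 10261, 122)]:
    p, lo, hi = factor_rate(found, tested)
    print(f"{label}: {p:.3%}  (95% CI {lo:.3%} .. {hi:.3%})")
```

The intervals all overlap, so the apparent climb from 1.115% to 1.189% is well within sampling noise.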
SIFTing images?
|
[QUOTE=flashjh;326891]SIFTing images?[/QUOTE]
[quote=http://en.wikipedia.org/wiki/SIFT][url=http://en.wikipedia.org/wiki/Scale-invariant_feature_transform]Scale-invariant feature transform[/url], an algorithm in computer vision to detect and describe local features in images[/quote]I don't believe Chris has elaborated as to exactly the nature of his project. I think he's planning on sending an autonomous rover to Mars. :smile: |
[URL="https://www.gpu72.com/reports/workers/saved/"]What would we do without you Chris? :smile:[/URL]
(Work Saved) |
[QUOTE=kracker;327092](Work Saved)[/QUOTE]
Whoops... Thanks for pointing that out. I need to filter out my brief LMH test run.... |
[QUOTE=chalsall;327095]Whoops... Thanks for pointing that out.
I need to filter out my brief LMH test run....[/QUOTE] Bad chalsall! We know how you programmers work! |
[QUOTE=kracker;327092][URL="https://www.gpu72.com/reports/workers/saved/"]What would we do without you Chris? :smile:[/URL]
(Work Saved)[/QUOTE] I noticed that some time ago ... I thought maybe it was a perk given to the creator of this system. |
[QUOTE=Chuck;327115]I noticed that some time ago ... I thought maybe it was a perk given to the creator of this system.[/QUOTE]
Nope. I don't grant myself any perks not offered to anyone else, except occasionally running experiments intended for public use. Sometimes such experiments cause unexpected results, which is the purpose of the experiments in the first place. |
As a connected discussion:
We talked about this in the past (here on the forum): LMH-TF saved work should not be mixed into those tables, because:
1. at the bit levels we work at in the 332-334M area, it is still very easy to find factors (10 or more factors per day on a GTX580);
2. each factor found saves "a lot" of time, due to the HUGE amount of work needed to LL and DC such exponents. I saved a few million GHzDays in just the last 5 days, with only two cards.
Therefore, we discussed in the past that if LMH-TF assignments were ever offered by GPU72 as an option, the "saved work" of such assignments should be accounted separately, or better, ignored, because even counting it separately won't be accurate; there is a big difference between exponents and bit levels. It would be the same as counting how many GHzDays one saved by finding up-to-40-bit factors for exponents under 100M: you could find all of them in two or three days even with a Pari script, eliminating half of the exponents; how many billions of GHzDays would that be? :whistle: |
[QUOTE=LaurV;327139]LMH-TF saved work should not be mixed into those tables
if TF-LMH assignment will be offered by GPU72 as an option, then the "saved work" of such assignments should be accounted separate, or better ignored, because even counting it separate won't be accurate, there is big difference between exponents and bitlevels.[/QUOTE]I still like my suggestion of a [url=http://www.mersenneforum.org/showpost.php?p=323022&postcount=1645]self-balancing work-saved value[/url] (post #1645 in this thread) which works equally well for any exponent and factor size (including both TF and P-1). I haven't done so yet, but I will soon include this value on my site in preference to absolute GHz-days-saved for ranking of factors. I think it would also fit well on GPU72. Or at least something similar, improvements to my idea are welcome. |
[QUOTE=James Heinrich;327164]I still like my suggestion of a [url=http://www.mersenneforum.org/showpost.php?p=323022&postcount=1645]self-balancing work-saved value[/url] (post #1645 in this thread) which works equally well for any exponent and factor size (including both TF and P-1).[/QUOTE]
I agree -- it's a logical solution which removes the need for estimating the time it will take Primenet to reach higher ranges (which was my initial suggestion). [QUOTE=James Heinrich;327164]I haven't done so yet, but I will soon include this value on my site in preference to absolute GHz-days-saved for ranking of factors. I think it would also fit well on GPU72. Or at least something similar, improvements to my idea are welcome.[/QUOTE] Again, I agree. This will mean, however, that I need to write a script to go through GPU72's database and recalculate the GHzDaysSaved values to instead be "Worth", or "Value" or some other nomenclature. (Not a big deal -- probably an hour or two's worth of work.) Also, everyone's GHz Days Saved metric will change to be different. Some might not like this. But I do think it's a good idea. |
I'm all for it, no big deal on the changes to GHz saved/done :smile:
|
Blast. There goes my evil plan of beating everyone with a factor of MM127. :devil:
|
Just noticed swl551 joined the gpu72 team. :smile:
|
[QUOTE=kracker;327545]Just noticed swl551 joined the gpu72 team. :smile:[/QUOTE]
Hope he's using MISFIT......:cool: |