Chalsall is right, in that TF gets easier the higher you go. TF'ing a 150M exponent from x to y bits is twice as much work as TF'ing a 300M exponent over the same range: candidate factors have the form 2kp+1, so halving the exponent doubles the number of candidates in a given bit range. So the fact that the gaps are increasing is to be expected, even without an increase in CPU power.
The reason we say GPUs have had no effect on this is that the programs designed for GPUs are very, [i]very[/i] bad at the very short assignments in that range. They are much better suited to long-running assignments, such as taking a 50M exponent from 69 to 71 bits rather than a 300M exponent from 65 to 66. There is also now a need for higher bit depths at the current LL range, so that's where just about everybody with a GPU is working. (I believe there are some people who do TF-LMH work on GPUs, as there is a special modification of the GPU program, but that is a very, very small percentage of GPU people.)
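The size difference between those assignments can be made concrete with a back-of-envelope model. This is a rough sketch of my own (the function name is mine, not from any GIMPS tool), assuming TF work is simply proportional to the number of candidate factors q = 2kp+1 in the bit range, ignoring sieving and per-candidate cost:

```python
# Rough model: TF work ~ number of candidate factors to test.
# Factors of M_p have the form q = 2*k*p + 1, so the count of candidates
# between 2^lo and 2^hi is roughly (2^hi - 2^lo) / (2*p).
def tf_candidates(p, lo_bits, hi_bits):
    """Approximate candidate count for TF of exponent p from lo_bits to hi_bits."""
    return (2**hi_bits - 2**lo_bits) / (2 * p)

# Same bit range, half the exponent -> exactly twice the candidates:
print(tf_candidates(150_000_000, 65, 66) / tf_candidates(300_000_000, 65, 66))  # -> 2.0

# A long GPU-friendly assignment vs. a short LMH-style one
# (the former is hundreds of times more work under this model):
print(tf_candidates(50_000_000, 69, 71) / tf_candidates(300_000_000, 65, 66))
```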
[QUOTE=chalsall;277623]
Remember that TF, unlike LL, gets faster the higher you go. [/QUOTE] Ah, that's the missing element! It makes sense, thanks. Now, that would account for steady increases over time. Any educated guesses as to why TF might have suddenly (from one month to the next) leapt from a ~12M monthly increase to >16M? Is there a way to tell how many CPUs were doing TF 6 months ago vs. 7 months ago? Rodrigo
[QUOTE=Dubslow;277631]The reason we say GPUs have had no effect on this is that the programs designed for GPUs are very, [I]very[/I] bad at the very short assignments in that range. They are much better suited to long-running assignments, such as taking a 50M exponent from 69 to 71 bits rather than a 300M exponent from 65 to 66.[/QUOTE]
How interesting (really!). I'll root around for the reasons for this in the GPU computing subforum as soon as I get the chance. I appreciate the explanation. Glad I asked. Rodrigo
[QUOTE=Rodrigo;277621]Help me to understand. Are there that many more CPUs doing TF this fall than there were last fall? What happened in April/May of this year to account for the sudden (and growing) jump in the rate of increase? Are certain ranges being skipped?[/QUOTE]If you want stable exponent size, you could let your CPU live in the 332,500,000 to 333,000,000 range. There are 10,700 exponents that need to go from 67 to 68 bits.
Or if you want faster turnover, there are about 130,000 from 334,000,000 to 340,000,000 that need to go from 65 to 66.
[QUOTE=Rodrigo;277643]Ah, that's the missing element! It makes sense, thanks.
Now, that would account for steady increases over time. Any educated guesses as to why TF might have suddenly (from one month to the next) leapt from a ~12M monthly increase to >16M? Is there a way to tell how many CPUs were doing TF 6 months ago vs. 7 months ago? Rodrigo[/QUOTE] Well, if you notice two months before that, it went down, rather than up as expected; I'd say it's a small enough gap to call it random noise in the data. (That is to say, I don't think it's very significant. Plot the total exponents versus time and fit a parabola to it: the points won't be very far off the curve.)
[QUOTE=Dubslow;277658]Well, if you notice two months before that, it went down, rather than up as expected; I'd say it's a small enough gap to call it random noise in the data. (That is to say, I don't think it's very significant. Plot the total exponents versus time and fit a parabola to it: the points won't be very far off the curve.)[/QUOTE]
Dubslow, Yeah, my thought was that maybe there'd been a transient drop in the number of participants to account for that one-month decrease. (Hmm, I just realized that it took place between December 8 and January 8. What could possibly be going on during that time to decrease output?? :wink: ) Regarding the plot line, you've given me a reason to learn how to do graphs in Excel. :smile: Rodrigo
[QUOTE=Dubslow;277631]
(I believe there are some people who do TF-LMH work on GPUs, as there is a special modification of the GPU program, but that is a very, very small percentage of GPU people.)[/QUOTE] At the moment, Lavalamp and I are actively working in the OBD range with our GPUs, with Lavalamp outdoing me by about an order of magnitude. There is no special modification of the GPU program involved, however; just put the following in your worktodo.txt: Factor=3321926177,80,81 and mfaktc will be happy. Be sure to report the line to the reservation page on OBD, though, and post whatever you get for a result in the results thread. As for "completing" TF: GPUs also do halfway-quick LL tests with CUDALucas. It's just less automated right now.
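For reference, that line follows the general Factor=<exponent>,<from_bits>,<to_bits> shape of mfaktc worktodo entries. A tiny sketch for generating such lines (the helper name is mine, not part of mfaktc):

```python
# Hypothetical helper emitting mfaktc worktodo.txt lines in the
# Factor=<exponent>,<from_bits>,<to_bits> format shown above.
def worktodo_line(exponent, from_bits, to_bits):
    return f"Factor={exponent},{from_bits},{to_bits}"

print(worktodo_line(3321926177, 80, 81))  # -> Factor=3321926177,80,81
```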
That's Operation Billion Digits, i.e. exponents north of 3.32 billion. Christenson, a rough calculation shows that 80 to 81 bits is around 70 GHz-days of work. That is certainly not a short-running assignment; it is long even by GPU standards. The TF-LMH worktype on PrimeNet will assign 150M-500M exponents for factoring from 65 to 66 bits, though Rodrigo will have to check me on that. That certainly qualifies as a microscopic assignment: it takes minutes on a CPU, which would be seconds on a GPU, and would thus be very inefficient the way mfaktc/mfakto are designed.
Does anybody actually do the 150M-500M LMH stuff with a GPU? OBD is the extreme of the extreme...
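Under a simplified "work is proportional to candidate count" model (candidates have the form 2kp+1; this ignores sieving and is not PrimeNet's actual GHz-day credit formula), a quick sanity check shows why 80 to 81 bits at the OBD exponent dwarfs an LMH assignment:

```python
# Back-of-envelope comparison, assuming TF work ~ number of candidates
# q = 2*k*p + 1 in the bit range. Not PrimeNet's real credit formula.
obd_p = 3_321_926_177   # the OBD exponent from the worktodo line above
lmh_p = 300_000_000     # a typical TF-LMH exponent

obd = (2**81 - 2**80) / (2 * obd_p)   # 80 -> 81 bits at the OBD exponent
lmh = (2**66 - 2**65) / (2 * lmh_p)   # 65 -> 66 bits at a 300M exponent
print(obd / lmh)  # roughly 3,000x the work: anything but a short assignment
```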
I had been doing TF at the LL wavefront with my slower, memory-bound CPUs, but have now realized how inefficient that is compared to GPUs. So I will be moving them to either DC or TF-LMH.
[QUOTE=petrw1;277805]I had been doing TF at the LL wavefront with my slower, memory-bound CPUs, but have now realized how inefficient that is compared to GPUs. So I will be moving them to either DC or TF-LMH.[/QUOTE]I have all of my borged boxen doing TF. Ones that I have regular access to are working in the 100M-digit range. Others are working on TF-LMH or standard TF. This intentionally prevents them from possibly finding a prime and causing issues about money.
Or, if you have extra memory, you can devote a core to the P-1 effort. That is something which is desperately needed. In fact, calls have gone out to GPU TFers to extend their bit levels to compensate for the shortfall in P-1 work.