Whither TF?
Taking a quick break from the rigors of the publishing industry. I have an observation, and a question based on it.
One of my computers is assigned to do TF. When it started crunching some 15 months ago, it was getting exponents in the low 100M range. Now, some 15 months later, it's receiving exponents north of 320M. Obviously, the advent of GPU computing has had an effect. At this rate (which no doubt will continue to increase), trial factoring will hit OBD territory sometime in 2015. When that happens, how will the TF portion of GIMPS proceed -- will it move into billion-digit exponents; recap previous exponents at deeper bit levels; or something else? Just curious. Rodrigo |
My 2p worth
[QUOTE=Rodrigo;277543]Taking a quick break from the rigors of the publishing industry. I have an observation, and a question based on it.
One of my computers is assigned to do TF. When it started crunching some 15 months ago, it was getting exponents in the low 100M range. Now, some 15 months later, it's receiving exponents north of 320M. Obviously, the advent of GPU computing has had an effect. At this rate (which no doubt will continue to increase), trial factoring will hit OBD territory sometime in 2015. When that happens, how will the TF portion of GIMPS proceed -- will it move into billion-digit exponents; recap previous exponents at deeper bit levels; or something else? Just curious. Rodrigo[/QUOTE] TF on GPUs is so shit hot that CPUs had better find something else to do. And since it takes as long to TF from X to X+1 bits as it does from 0 to X, all work above 60M is of negligible value as far as GIMPS is concerned. David |
[QUOTE=Rodrigo;277543]One of my computers is assigned to do TF. When it started crunching some 15 months ago, it was getting exponents in the low 100M range. Now, some 15 months later, it's receiving exponents north of 320M. Obviously, the advent of GPU computing has had an effect. [/quote]
Not at all. It is strictly the CPUs (TF to low limits) that are causing that wavefront to advance. GPUs have had practically zero influence on this. And it'll stay that way -- GPU TF is not yet automated. Only a handful of enthusiasts on this board are actually using it. [QUOTE=Rodrigo;277543]At this rate (which no doubt will continue to increase), trial factoring will hit OBD territory sometime in 2015.[/quote] GIMPS currently has a hard stop at 1,000,000,000. I don't think George is in any rush to extend that. [QUOTE=Rodrigo;277543]When that happens, how will the TF portion of GIMPS proceed -- will it move into billion-digit exponents; recap previous exponents at deeper bit levels; or something else?[/QUOTE] |
[QUOTE=Rodrigo;277543]recap previous exponents at deeper bit levels[/quote]
You might remember this is what happened after we ran out of 63->64 bit assignments in August 2010. Once we finish TFing everything to 65, the LMH assignments will likely start over again at 100M, this time TFing to 66. Note that this will take twice as long as the effort from 64 to 65. So if it takes, say, 18 months (August 2010 - February 2012) to finish everything to 65 bits, we will be busy for three years (!) with finishing everything to 66 bits. When you also consider that the nine-figure exponents all need to be TFed to at least 72, we will certainly be busy for the foreseeable future, even with the lumberjacks and the GPUs. |
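The timeline arithmetic above is just repeated doubling. A throwaway sketch of my own, assuming aggregate CPU throughput stays constant:

```python
def months_for_level(base_months, levels_up):
    """Months to finish a bit level `levels_up` steps past the baseline,
    assuming each level doubles the work and throughput is constant."""
    return base_months * 2 ** levels_up

# 64 -> 65 took ~18 months (August 2010 - February 2012), so:
print(months_for_level(18, 1))  # 65 -> 66: 36 months, i.e. three years
print(months_for_level(18, 2))  # 66 -> 67: six years, if nothing changes
```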
It's better to keep TFing exponents below 1 billion to higher bit levels instead of moving beyond 1 billion, since the LL wavefront probably won't get there in our lifetimes unless there are major advances in algorithms or quantum computing.
[QUOTE=ATH;277550] since LL wavefront probably won't get there in our lifetimes [/QUOTE]
Once I jump the broom into the Great Beyond(TM), I plan on picking up right where I left off. First thing is to talk the angel processing new arrivals into letting me install Prime95 on her quantum computer. (And she'll say, "But, sir, I already have it testing MMMM127!". Or, everyone will start laughing, as I get handed a paper containing a 2-line proof that there are no Mersenne primes above 2^60,000,000-1.) |
[QUOTE=NBtarheel_33;277552]Once I jump the broom into the Great Beyond(TM), I plan on picking up right where I left off. First thing is to talk the angel processing new arrivals into letting me install Prime95 on her quantum computer.
(And she'll say, "But, sir, I already have it testing MMMM127!". Or, everyone will start laughing, as I get handed a paper containing a 2-line proof that there are no Mersenne primes above 2^60,000,000-1.)[/QUOTE] :lol: So higher bit levels it is. Thanks, guys! Rodrigo |
[QUOTE=axn;277546]Not at all. This is strictly the CPUs (TF to low limits) that is causing that wavefront to advance. GPUs have had practically zero influence on this.[/QUOTE]
axn, The month-to-month jump in the size of the TF exponents my computer is completing (recently assigned), has grown from 12.4M (10/8/10 - 11/8/10) to 24.2M (10/8/11 - 11/8/11). This was not a fluke: the jump in the period 9/8/10 - 10/8/10 was 9.7M, while the jump in the period 9/8/11 - 10/8/11 was 20.7M. Here are the monthly jumps (every month on day 8):

Date - Exponent - Difference
11/10 1259xxxxx
12/10 1383xxxxx +124xxxxx
01/11 1478xxxxx +095xxxxx
02/11 1614xxxxx +136xxxxx
03/11 1737xxxxx +123xxxxx
04/11 1862xxxxx +125xxxxx
05/11 2022xxxxx +160xxxxx
06/11 2193xxxxx +171xxxxx
07/11 2369xxxxx +176xxxxx
08/11 2549xxxxx +180xxxxx
09/11 2739xxxxx +180xxxxx
10/11 2944xxxxx +205xxxxx
11/11 3186xxxxx +242xxxxx

So the month-to-month jumps were rather flat from 11/10 to 4/11, and since then they've been rising at an increasing pace. Help me to understand. Are there that many more CPUs doing TF this fall, than there were last fall? What happened in April/May of this year, to account for the sudden (and growing) jump in the rate of increase? Are certain ranges being skipped? Please note -- I'm not being contentious, just trying to get a handle on how this works. :smile: Thanks! Rodrigo |
[QUOTE=Rodrigo;277621]The month-to-month jump in the size of the TF exponents my computer is completing (recently assigned), has grown from 12.4M (10/8/10 - 11/8/10) to 24.2M (10/8/11 - 11/8/11).[/QUOTE]
You are off by an order of (base 10) magnitude. To your above, it's 124M to 242M. [QUOTE=Rodrigo;277621]So the month-to-month jumps were rather flat from 11/10 to 4/11, and since then they've been rising at an increasing pace.[/QUOTE] Remember that TF, unlike LL, gets faster the higher you go. [QUOTE=Rodrigo;277621]Please note -- I'm not being contentious, just trying to get a handle on how this works. :smile:[/QUOTE] And please note we try to be as gentle as we can, but will still point out when you (or anyone) makes an error. It's in our nature.... :smile: |
[QUOTE=chalsall;277623]You are off by an order of (base 10) magnitude. To your above, it's 124M to 242M.[/QUOTE]
He had it right, the exponent 'jumped' by 12.4 mil and the jumps increased to 24.2 million. He was talking about jumps, not the actual exponent. |
[QUOTE=bcp19;277624]He had it right, the exponent 'jumped' by 12.4 mil and the jumps increased to 24.2 million. He was talking about jumps, not the actual exponent.[/QUOTE]
Thanks for the correction. You (and Rodrigo) are correct -- I misread it. :smile: |
Chalsall is right, in that TF gets easier the higher you go. TF'ing a 150M exponent from x to y bits is twice as much work as a 300M exponent from x to y, so the increasing gaps are to be expected even without an increase in CPU power.
The reason we say GPU has had no effect on this is because the programs designed for the GPUs are very, [i]very[/i] bad at the very short assignments that are going on in that range. They are much better suited to long running assignments, such as 50M from 69 to 71 bits rather than 300M from 65 to 66. And there is also now a need for higher bit depths at the current LL range, and that's what just about everybody with a GPU is doing. (I believe there are some people who do TF-LMH work on GPUs, as there is a special modification of the GPU program, but they are a very, very small percentage of GPU people.) |
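Both points above follow from the same rough scaling: per-bit-level TF effort is proportional to 2^bits / p. A quick sketch with my own illustrative numbers (not anyone's actual benchmark):

```python
def relative_work(p, from_bits, to_bits):
    """Relative TF effort for exponent p, in arbitrary units, using the
    rough scaling work ~ (2**to_bits - 2**from_bits) / p."""
    return (2 ** to_bits - 2 ** from_bits) / p

# Halving the exponent doubles the work for the same bit range:
print(relative_work(150_000_000, 65, 66) / relative_work(300_000_000, 65, 66))  # -> 2.0

# A long GPU-style job vs. a short LMH one (the examples from the post):
long_job = relative_work(50_000_000, 69, 71)
short_job = relative_work(300_000_000, 65, 66)
print(long_job / short_job)  # -> 288.0, which is why GPUs prefer the former
```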
[QUOTE=chalsall;277623]
Remember that TF, unlike LL, gets faster the higher you go. [/QUOTE] Ah, that's the missing element! It makes sense, thanks. Now, that would account for steady increases over time. Any educated guesses as to why TF might have suddenly (from one month to the next) leapt from a ~12M monthly increase, to >16M? Is there a way to tell how many CPUs were doing TF 6 months vs. 7 months ago? Rodrigo |
[QUOTE=Dubslow;277631]The reason we say GPU has had no effect on this is because the programs designed for the GPUs are very, [I]very[/I] bad at the very short assignments that are going on in that range. They are much better suited to long running assignments, such as 50M from 69 to 71 bits rather than 300M from 65 to 66.[/QUOTE]
How interesting (really!). I'll root around for the reasons for this in the GPU computing subforum as soon as I get the chance to. I appreciate the explanation. Glad I asked. Rodrigo |
[QUOTE=Rodrigo;277621]Help me to understand. Are there that many more CPUs doing TF this fall, than there were last fall? What happened in April/May of this year, to account for the sudden (and growing) jump in the rate of increase? Are certain ranges being skipped?[/QUOTE]If you want stable exponent size, you could let your CPU live in the 332,500,000 to 333,000,000 range. There are 10,700 expos that need to go from 67 to 68 bits.
Or if you want faster turn over there are about 130,000 from 334,000,000 to 340,000,000 that need to go from 65 to 66. |
[QUOTE=Rodrigo;277643]Ah, that's the missing element! It makes sense, thanks.
Now, that would account for steady increases over time. Any educated guesses as to why TF might have suddenly (from one month to the next) leapt from a ~12M monthly increase, to >16M? Is there a way to tell how many CPUs were doing TF 6 months vs. 7 months ago? Rodrigo[/QUOTE] Well, if you notice two months before that, it went down, rather than up as expected; I'd say it's a small enough gap to call it random noise in the data. (That is to say, I don't think it's very significant. Plot the total exponents versus time, and fit a parabola to it: they won't be very far off.) |
[QUOTE=Dubslow;277658]Well, if you notice two months before that, it went down, rather than up as expected; I'd say it's a small enough gap to call it random noise in the data. (That is to say, I don't think it's very significant. Plot the total exponents versus time, and fit a parabola to it: they won't be very far off.)[/QUOTE]
Dubslow, Yeah, my thought was that maybe there'd been a transient drop in the number of participants to account for that one-month decrease. (Hmm, I just realized that it took place between December 8 and January 8. What could possibly be going on during that time to decrease output?? :wink: ) Regarding the plot line, you've given me a reason to learn how to do graphs in Excel. :smile: Rodrigo |
[QUOTE=Dubslow;277631]
(I believe there are some people who do TF-LMH work on GPU's, as there is a special modification of the GPU program, but that is a very very small percentage of GPU people.)[/QUOTE] At the moment, Lavalamp and I are actively working in the OBD range with our GPUs, with Lavalamp outdoing me by about an order of magnitude. There is no special modification of the GPU program involved, however; just put the following in your worktodo.txt: Factor=3321926177,80,81 and mfaktc will be happy. Be sure to report the line to the reservation page on OBD, though, and post whatever you get for a result in the results thread. As for "completing" TF: GPUs also do halfway-quick LL tests with CUDALucas. It's just less automated right now. |
That's Operation Billion Digits, i.e. exponents north of 3.32 billion. Christenson, a rough calculation shows that 80-->81 is around 70 GHz-days of work. That certainly is not a short-running assignment; it is long even by GPU standards. The TF-LMH worktype on PrimeNet will assign 150M-500M exponents for factoring from 65 to 66 bits, though Rodrigo will have to check me there. That certainly qualifies as a microscopic assignment, taking minutes on a CPU and seconds on a GPU, and would thus be very inefficient the way mfaktc/mfakto are designed.
Does anybody actually do the 150M-500M LMH stuff with a GPU? OBD is the extreme of the extreme... |
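For scale, the same crude 2^bits / p scaling can be applied to the OBD assignment above. This is my own relative comparison, not PrimeNet's actual GHz-days credit formula:

```python
def relative_work(p, from_bits, to_bits):
    # Rough TF-effort scaling: candidates tested ~ (2**to - 2**from) / (2*p)
    return (2 ** to_bits - 2 ** from_bits) / (2 * p)

obd = relative_work(3_321_926_177, 80, 81)     # the OBD worktodo line above
long_gpu = relative_work(50_000_000, 69, 71)   # a typical "long" GPU job

print(obd / long_gpu)  # roughly 10x even the long GPU assignment
```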
I had been doing TF at the LL wavefront with my slower, memory-bound CPUs, but have now realized how inefficient that is compared to GPUs. So I will be moving them to either DC or TF-LMH.
[QUOTE=petrw1;277805]I had been doing TF at the LL wavefront with my slower, memory bound CPUs but have now realized how inefficient that is compared to GPUs. So I will be either moving them to DC or TF-LMH.[/QUOTE]I have all of my borged boxen doing TF. Ones that I have regular access to are working in the 100M digit range. Others are working on TF-LMH or standard TF. This intentionally prevents them from possibly finding a prime and causing issues about money.
Or, if you have extra memory, you can devote a core to the P-1 effort. That is something which is desperately needed. In fact, calls have gone out to GPU-TFers to extend the bit levels to overcome the shortfalls of not doing (or not having enough) P-1 work.
[QUOTE=RichD;277876]Or, if you have extra memory, one can devote a core to the P-1 effort. That is something which is desperately needed. In fact, calls have gone out to GPU-TFers to extend the bit levels to overcome the short-falls of not doing (having enough) P-1 work.[/QUOTE]
I have two 1090T cores and two Opteron 180 cores doing P-1. Both of those setups have up to 2GB of RAM allowed. The Opteron has nothing else to do. The Phenom II is my working computer. Three of the 1090T cores are supporting a GTX-460 on mfaktc. My current batch of TF work, from GPU to 72, is taking exponents from 69 to 72 bits. The last 1090T core is doing LL. I have gathered that perhaps I'm overshooting on the mfaktc work, and that going to 70 would be adequate. However, I'm committed to that for ~3+ days until those assignments are cleared out. One other issue is that I got a Kill a Watt EZ a couple of days ago. I'm still analyzing, but the initial numbers are a bit scary for electricity consumption. I may need to cut back my working hours from 24/7 for everything. I'm considering leaving the Opteron Ubuntu box running full time, because it's headless and harder to start up, at least with my current level of understanding of remote administration. |
OK, this leap can't be attributed to any kind of normal progression.
On November 17, my TF PC got an assignment for a 329M exponent. The next exponent that was assigned to it, on November 18, is in the 435M range. :w00t: What could account for the sudden long skip in PrimeNet assignments? If it helps to track down the cause, this computer has been doing TF from 64 to 65 bits. Rodrigo |
[QUOTE=Rodrigo;279210]What could account for the sudden long skip in PrimeNet assignments? If it helps to track down the cause, this computer has been doing TF from 64 to 65 bits.[/QUOTE]
Rodrigo, please see [URL="http://www.mersenne.info/trial_factored_tabular_data_180/1/400000000/"]Trial Factored Depth for Range 400,000,000 to 500,000,000 as of 2011.05.23[/URL]. I.E. the 400M to 500M range as of six months ago. 400M to 435M had already been completed by a LMH "Lumberjack".... |
[QUOTE=chalsall;279212]Rodrigo, please see [URL="http://www.mersenne.info/trial_factored_tabular_data_180/1/400000000/"]Trial Factored Depth for Range 400,000,000 to 500,000,000 as of 2011.05.23[/URL]. I.E. the 400M to 500M range as of six months ago.
400M to 435M had already been completed by a LMH "Lumberjack"....[/QUOTE] chalsall, Thanks a bunch for the link. Check out this zoom page: [URL]http://www.mersenne.info/trial_factored_tabular_data/4/435500000/[/URL] If I'm reading this right, there are still a few dozen exponents in the 435M range to do from 64 to 65 bits. Then PrimeNet will continue assigning TF more or less normally to that PC through the 510M range, at which point it may skip again to around the 680M range, depending on what the TF vacuum cleaners sweep up in the meantime. :smile: Rodrigo |
[QUOTE=Rodrigo;279215]...from 64 to 65 bits. Then PrimeNet will continue assigning TF more or less normally to that PC through the 510M range, at which point it may skip again to around the 680M range, depending on what the TF vacuum cleaners sweep up in the meantime.[/QUOTE]Correct, TF to low limits is working a single bit level up to 1B. Then it will come back and do the next bit level. [URL="http://en.wikipedia.org/wiki/Lather,_rinse,_repeat"]Lather, rinse, repeat.[/URL]
[QUOTE=RichD;277876]Or, if you have extra memory, one can devote a core to the P-1 effort. That is something which is desperately needed. In fact, calls have gone out to GPU-TFers to extend the bit levels to overcome the short-falls of not doing (having enough) P-1 work.[/QUOTE]
I've had about 20 cores doing P-1 since August...well over 1,000 completions since. |
Keep in mind there are 200,000 exponents between 45M and 55M
[QUOTE=Uncwilly;279219][URL="http://en.wikipedia.org/wiki/Lather,_rinse,_repeat"]Lather, rinse, repeat.[/URL][/QUOTE]
LOL Thanks for the details, BTW. Rodrigo |
[QUOTE=Uncwilly;279219]Correct, TF to low limits is working a single bit level up to 1B. Then it will come back and do the next bit level. [URL="http://en.wikipedia.org/wiki/Lather,_rinse,_repeat"]Lather, rinse, repeat.[/URL][/QUOTE]
Why stop at 1 Billion? |
[QUOTE=davieddy;279250]Why stop at 1 Billion?[/QUOTE]That is the end of the PrimeNet database at the moment.
[QUOTE=Christenson;277757]be sure to report the line to the reservation page on OBD, though, and whatever you get for a result on the results thread.[/QUOTE]My [url=http://mersenne-aries.sili.net]site[/url] also accepts results up to M(2^32) so feel free to dump whatever results you like in there (factor or no).