[QUOTE=Christenson;264843]
P-1 on my CPUs has actually been significantly more productive in eliminating LL candidates than running LL tests.[/QUOTE] Why is that? David |
[QUOTE=R.D. Silverman;264849]He also failed to discuss any mathematics! All he did was claim that
my arguments were bull; a reflection of his mathematical ignorance. Claiming that a 2 yr old would see that my arguments were wrong without bothering to even discuss the mathematics [B]IS[/B] a form of derision.[/QUOTE] Touché |
[QUOTE=R.D. Silverman;264848]And you somehow think that this is a mathematical argument???
[/QUOTE] Did you ever try to teach your grandmother how to suck eggs? [url=http://www.youtube.com/watch?v=kQFKtI6gn9Y] I paid for a mathematical argument[/url] No you didn't |
[QUOTE=R.D. Silverman;264848]I am suggesting that you can SAVE TIME by NOT running TF at all; Just run P-1 with slightly higher bounds than are currently used.[/QUOTE]
Getting this thread back to the math/optimization problem.... P-1 and LL tests are done on the same caliber of Intel/AMD machine. The P-1 bounds are selected comparing the time it takes to run P-1 to the cost of running 2 LL tests times the chance of P-1 finding a factor. Barring any bugs in my understanding of P-1's chance of finding a factor or in my programming, increasing P-1 bounds would be a bad idea because that extra P-1 time would remove more candidates if it were used for LL testing instead. The "problem" GIMPS faces is we now have GPUs which are 100x faster at TF than the Intel/AMD machines. What is the best use for this resource? I've been using them to do more TF on LL candidates. A recent run of one extra TF bit level on 9000 exponents that had already had P-1 done found factors in 1 out of 105 cases instead of the usual 1 out of 70 cases. Thus, the extra TF effort is of some significant value -- in essence making GIMPS' LL resource more effective. When extra TF is done before P-1 has been performed, the extra TF reduces the P-1 bounds (because the chance of finding a factor is reduced). Thus the extra TF lets GIMPS' P-1 resource test more candidates. I can see merit to your argument that we should do more P-1 rather than more TF, but the GPUs are only 100x faster at TF. IIRC, they are 4x faster at LL. GPUs cannot do P-1, but if they could they'd also be only 4x faster (or less). |
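George's break-even rule above can be restated as a one-line inequality. A toy sketch (the function name and the numbers are invented for illustration; the real Prime95 code optimizes B1/B2 jointly against measured timings and factor-probability estimates):

```python
def p1_worth_running(t_p1, t_ll, prob_factor):
    """Toy restatement of the bounds rule above: P-1 time is justified
    only while it stays below the expected LL time saved.  A factor
    found before testing saves two LL runs (first test plus
    double-check).  All times are in the same unit."""
    return prob_factor * 2 * t_ll > t_p1

# e.g. a 4% factor chance and a 100-hour LL test justify roughly
# 0.04 * 2 * 100 = 8 hours of P-1 work:
print(p1_worth_running(5, 100, 0.04))    # 5 h of P-1 pays off
print(p1_worth_running(12, 100, 0.04))   # 12 h does not
```

Raising the bounds raises both `t_p1` and `prob_factor`, so the optimizer's job is to stop exactly where the inequality flips.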
[QUOTE=davieddy;264850]Why is that?
David[/QUOTE] I think it's because P-1, like LL-D, is *never* going to get a prize for finding M48. It only makes it easier for someone else to find it. Therefore, not too many do it...or, as Mr P-1 states, do a good job....leaving those of us doing a good job finding factors more quickly than we would if we ran LL. Remember, the minimum work to proving status of any given large set of exponents happens when we are indifferent to whether we prove the status by any particular method. P95: If we get a 4x speedup on LL or (eventually) P-1 on GPUs, and LL isn't all that memory hungry, does it make sense to run multiple exponents in parallel on LL? [I'm doubtful this will speed things up much on P-1, due to its memory-intensiveness]. How bad is the CPU impact of running LL on CUDA? |
[QUOTE=Prime95;264854]
I can see merit to your argument that we should do more P-1 rather than more TF, but the GPUs are only 100x faster at TF. IIRC, they are 4x faster at LL. GPUs cannot do P-1, but if they could they'd also be only 4x faster (or less).[/QUOTE] If they (GPU) can do LL, then they can do P-1; the iterations are the same. They are modular multiplications mod M_p. And, as you say, they would only be 4x faster. The difference in speed between TF and LL on a GPU does mean that it is better to spend more time on the former, but one gets fairly rapid diminishing returns on TF. If there is no factor less than (say) 70 bits, it is unlikely that there is one of 71 or 72 bits..... One can indeed optimize the parameter selections, but the calculations would be quite sensitive to the speed differences in the various pieces of hardware. |
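Bob's diminishing-returns point can be made concrete with the usual heuristic that M_p has a factor between 2^k and 2^(k+1) with probability roughly 1/k, while the number of TF candidates (hence the cost) doubles with each extra bit. A small sketch (function and numbers are hypothetical):

```python
def tf_level_table(start_bit, levels):
    """For each extra TF bit level, pair the ~1/k heuristic chance of
    a factor in [2^k, 2^(k+1)) with the relative cost, which doubles
    per bit because the count of candidate factors doubles."""
    cost = 1.0
    table = []
    for k in range(start_bit, start_bit + levels):
        table.append((k, k + 1, round(1.0 / k, 4), cost))
        cost *= 2.0
    return table

# From 2^70 upward: ~1.4% chance per level, at 1x, 2x, 4x the cost.
for row in tf_level_table(70, 3):
    print(row)
```

The near-constant yield against exponentially growing cost is the "fairly rapid diminishing returns" in the post above.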
[QUOTE=R.D. Silverman;264857]If they (GPU) can do LL, then they can do P-1; the iterations are
the same. They are modular multiplications mod M_p. And, as you say, they would only be 4x faster. The difference in speed between TF and LL on a GPU does mean that it is better to spend more time on the former, but one gets fairly rapid diminishing returns on TF. If there is no factor less than (say) 70 bits, it is unlikely that there is one of 71 or 72 bits..... One can indeed optimize the parameter selections, but the calculations would be quite sensitive to the speed differences in the various pieces of hardware.[/QUOTE] All becomes clear. This is what a math argument is. David |
[QUOTE=Christenson;264856]I think it's because P-1, like LL-D, is *never* going to get a prize for finding M48. It only makes it easier for someone else to find it.
[/QUOTE] The question was intended as "rhetorical". But since you have risen to the bait (like others:smile:) I may as well point out that a double check LL [i]might[/i] find M48, but TF/P-1 won't. David |
[QUOTE=davieddy;264859]The question was intended as "rhetorical".
But since you have risen to the bait (like others:smile:) I may as well point out that a double check LL [i]might[/i] find M48, but TF/P-1 won't. David[/QUOTE] David, it's the first LL that finds it and the second that confirms it, so LL-D will not find it; it will confirm it or deny it. Even I know this and I'm usually the thickest-skulled person on these forums. |
If someone finds a zero residue, then there will be a double-check done immediately on the fastest hardware available to confirm.
What LL-D offers is the possibility of finding a number for which the [i]correct[/i] residue is zero, but the [b]first[/b] run, maybe years earlier, hit a hardware problem and so the Mersenne prime was missed. |
Bob: It is certainly possible to get a GPU to do P-1. However, at the moment, the possibility is still theoretical -- you can't download the program just yet. And, as a student, I'm not quite ready to make it real.
I'm curious as to why LL is only 4x faster on a GPU...that is, comparatively, where are the bottlenecks on TF versus LL iterations, and why does that leave us with a 100x speedup on TF but only a 4x speedup on LL? |
[QUOTE=Christenson;264867]Bob: It is certainly possible to get a GPU to do P-1. However, at the moment, the possibility is still theoretical -- you can't download the program just yet. And, as a student, I'm not quite ready to make it real.
I'm curious as to why LL is only 4x faster on a GPU...that is, comparatively, where are the bottlenecks on TF versus LL iterations, and why does that leave us with a 100x speedup on TF but only a 4x speedup on LL?[/QUOTE] Memory bandwidth will be a bottleneck. TF takes only a few bytes of storage per processor. TF is also embarrassingly parallel; FFTs are not. There may be other problems as well. |
[QUOTE=Prime95;264854][QUOTE=R.D. Silverman;264848]I am suggesting that you can SAVE TIME by NOT running TF at all; Just run P-1 with slightly higher bounds than are currently used.[/QUOTE]Getting this thread back to the math/optimization problem....
P-1 and LL tests are done on the same caliber of Intel/AMD machine. The P-1 bounds are selected comparing the time it takes to run P-1 to the cost of running 2 LL tests times the chance of P-1 finding a factor. Barring any bugs in my understanding of P-1's chance of finding a factor or in my programming, increasing P-1 bounds would be a bad idea because that extra P-1 time would remove more candidates if it were used for LL testing instead.[/QUOTE]Since Silverman's suggestion is to run P-1 [I]without any preceding (server-assigned) TF[/I], the P-1 bounds selection algorithm will find that the TF bit level for an exponent is in the low 60s (to which the past LMH projects had taken all exponents), rather than the high 60s or low 70s as it now usually faces. In that case, it will select higher bounds as optimal, because of the new possibility of finding factors in the high 60s to low 70s bits. Since the probability that P-1 will find a factor for such an exponent will be greater than it used to be, higher bounds are justified. The extra P-1 time will be less than the skipped TF time, and thus a net savings, by Silverman's argument. (Personally, though that seems reasonable, I'd be more comfortable seeing specific numbers and time-costs/savings for how many would-also-have-been-TF-found factors P-1 will find between the powers of 2 we've been covering with server-assigned TF. How many would P-1 with those higher B1/B2 bounds miss that TF to currently standard power-of-2 limits would have found? I see that Silverman has anticipated me in post #172.) This is simply the mirror image of your later statement "When extra TF is done before P-1 has been performed, the extra TF reduces the P-1 bounds (because the chance of finding a factor is reduced)." <= less, less, increases, increased - - - [QUOTE=R.D. Silverman;264762] Suppose you run P-1 to steps B1/B2. 
Thus, we are sure there are no factors smaller than 2 B2 p + 1.[/QUOTE]For exponents in the range 55M, I see these current factoring limits: [code] 55000031,71,675000,29750000 55000181,71,660000,19140000 55000189,71,660000,19140000 55000207,71,660000,19140000 55000277,71,660000,19140000 . . .[/code]So, the first line shows that P-1 by itself with B1,B2 = 675000,29750000 eliminated all factors below 2 * 29750000 * 55000031 + 1 = 3272501844500001, which is below 2^52 and not very close to 2^71. In order to [I]guarantee[/I] finding factors up to 2^71 with P-1 alone, we'd have to use B2 = 2^71 / p = 42930580192487, over 100,000 time higher than currently-chosen P2. Which P-1 routines can handle B2 ~ 2^46? So the no-TF scheme might be more efficient at eliminating LL tests, but it would leave much lower power-of-2 limits to which one could say definitely there were no smaller factors. Indeed, since exponents in the 55M range were already TFed to at least 2^60 by LMH, a P-1 B2 bound of 2^60 / p = 20962197360, which is ~1000 times as large as our current customary B2, would be necessary just to guarantee the same level of completeness that we already have through LMH TF. |
Oops. I used p instead of 2p in the divisions. Revised numbers are below.
[QUOTE=cheesehead;264874] In order to [I]guarantee[/I] finding factors up to 2^71 with P-1 alone, we'd have to use B2 = 2^71 / p = 42930580192487, over 100,000 time higher than currently-chosen P2. Which P-1 routines can handle B2 ~ 2^46? So the no-TF scheme might be more efficient at eliminating LL tests, but it would leave much lower power-of-2 limits to which one could say definitely there were no smaller factors. Indeed, since exponents in the 55M range were already TFed to at least 2^60 by LMH, a P-1 B2 bound of 2^60 / p = 20962197360, which is ~1000 times as large as our current customary B2, would be necessary just to guarantee the same level of completeness that we already have through LMH TF.[/QUOTE] In order to [I]guarantee[/I] finding factors up to 2^71 with P-1 alone, we'd have to use B2 = 2^71 / 2p = 21465290096243, over 700,000 times as large as currently-chosen P2. Which P-1 routines can handle B2 ~ 2^45? So the no-TF scheme might be more efficient at eliminating LL tests, but it would leave much lower power-of-2 limits to which one could say definitely there were no smaller factors. Indeed, since exponents in the 55M range were already TFed to at least 2^60 by LMH, a P-1 B2 bound of 2^60 / 2p = 10481098679, which is over 300 times as large as our current customary B2, would be necessary just to guarantee the same level of completeness that we already have through LMH TF. Completeness isn't everything, but eliminating server-assigned TF would dash the current GIMPS claim to comprehensive factor search up to levels of 2^70 and beyond. Would LMH or any other specialty group take on the project of extending TF on P-1ed-only-and-already-LLed exponents if doing so made no contribution to the search for Mersenne primes? |
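The revised figures are easy to check with integer arithmetic (a quick sketch; the variable names are for illustration only, and the customary B2 is taken from the first line of the factoring-limits table quoted earlier):

```python
# Verify the revised B2 bounds for p = 55000031.
p = 55000031
b2_for_71 = (1 << 71) // (2 * p)   # B2 needed to guarantee factors to 2^71
b2_for_60 = (1 << 60) // (2 * p)   # B2 needed to match the 2^60 LMH level
current_b2 = 29750000              # customary B2 from the quoted table

print(b2_for_71)                   # 21465290096243, as stated
print(b2_for_60)                   # 10481098679, as stated
print(b2_for_71 // current_b2)     # ~721522x the customary B2
print(b2_for_60 // current_b2)     # ~352x, i.e. "over 300 times"
```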
[QUOTE=science_man_88;264860]David it's the first ll that found it it's the second that confirms it so LL-D will not find it it will confirm it or deny it. Even I know this and I'm usually the thickest skulled person on these forums.[/QUOTE]
[QUOTE=fivemack;264865]If someone finds a zero residual, then there will be a double-check done immediately on the fastest hardware available to confirm. What LL-D offers is the possibility of finding a number for which the [I]correct[/I] residual is zero, but the [B]first[/B] run maybe years earlier hit a hardware problem and so the Mersenne prime was missed.[/QUOTE] See [url=http://mersenneforum.org/showthread.php?t=15700]"Is this a bug...or...?"[/url] thread in the Primenet forum. David |
[QUOTE=davieddy;264887]See [url=http://mersenneforum.org/showthread.php?t=15700]"Is this a bug...or...?"[/url] thread in the Primenet forum.
David[/QUOTE] it looks like I'm logged out on that page but not on this one. |
[QUOTE=science_man_88;264892]it looks like I'm logged out on that page but not on this one.[/QUOTE]
This problem sounds a bit technical. Better ask Christenson (or Silverman) David |
[QUOTE=science_man_88;264892]it looks like I'm logged out on that page but not on this one.[/QUOTE]It's because the link doesn't have "www." in front of "mersenneforum.org". The cookie that keeps you logged in is for "www.mersenneforum.org", not "mersenneforum.org".....
It took me a couple of times to figure that one out.... |
[QUOTE=schickel;264899]It's because the link doesn't have "www." in front of "mersenneforum.org". The cookie that keeps you logged in is for "www.mersenneforum.org", not "mersenneforum.org".....
It took me a couple of times to figure that one out....[/QUOTE] That's what we need: More smartarses:smile: David |
No Sacrifice
[QUOTE=Prime95;264818] I understand you and Bob have a rather unpleasant history. Let it go.
[/QUOTE] [url=http://www.youtube.com/watch?v=NrLkTZrPZA4]Two hearts beating in two separate worlds[/url] David |
[QUOTE=davieddy;264918]That's what we need:
More smartarses:smile: David[/QUOTE] I'd rather smartarses than smarta$$es like a few people I know ( sometimes me included) |
Spelling differences across the pond
[QUOTE=science_man_88;264956]I'd rather smartarses than smarta$$es like a few people I know ( sometimes me included)[/QUOTE]
In my days (father born 1903) "Ass"* meant either 1) A sort of donkey 2) An idiotic person The common name for the rectum was spelt "arse". Further confusion arises because you Yanks think "fanny" means "bottom" whereas here in civilization it means "front bottom". David *Not sure whether anyone in the US knows how to pronounce a short "A" anymore. PS "Jesus rode into somewhere (Jerusalem?) on an arse". Doesn't sound right to me. |
[QUOTE=davieddy;264842]
Look at the size of the typical factor found via P-1 compared with the TF max to judge how little the two enterprises overlap each other. David[/QUOTE] [QUOTE=R.D. Silverman;264848]And you somehow think that this is a mathematical argument??? ..... I am suggesting that you can SAVE TIME by NOT running TF at all; Just run P-1 with slightly higher bounds than are currently used.[/QUOTE] TF limit = 2^70 (say). As you pointed out, there is enough data to get an accurate distribution, but factors found by P-1 can easily exceed 2^70 by 6 digits (20 bits) P-1 "hit rate" ~6% 4 more bits of TF hit rate ~6% Overlap not too much. I would like to think Einstein would consider this as an "argument" or at least a "thought experiment" David |
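The ~6% figure for four more bits of TF is consistent with the usual heuristic that a Mersenne number has a factor between 2^k and 2^(k+1) with probability about 1/k. A quick check (the function is illustrative, not from any GIMPS code):

```python
def tf_hit_rate(from_bit, extra_bits):
    """Sum the ~1/k per-bit-level factor probabilities: four more
    bits beyond 2^70 give about 1/70 + 1/71 + 1/72 + 1/73 ~ 5.6%,
    close to the ~6% quoted above."""
    return sum(1.0 / k for k in range(from_bit, from_bit + extra_bits))

print(round(tf_hit_rate(70, 4), 3))
```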
[QUOTE=Prime95;264828]
The moderators are merely trying to nip any flame wars in the bud. They were annoying to the combatants, the readers, and the moderators. [/QUOTE] Yep. I can't understand why HRB laid into the sensitive censorship so much, but he is no fool, and (especially via PM) VERY funny. He made the point that (for better or worse) the usual suspects around here are non-stupid adults. David |
even further OT
[QUOTE=davieddy;264982]In my days (father born 1903) "Ass"* meant either
1)A sort of donkey 2)An idiotic person The common name for the rectum was spelt "arse". Further confusion arises because you Yanks think "fanny" means "bottom" whereas here in civilization it means "front bottom". David *Not sure whether anyone in the US knows how pronounce a short "A" anymore. PS "Jesus rode into somewhere (Jerusalem?) on an arse". Doesn't sound right to me.[/QUOTE]Just think of how much more influence on American speech[sup]*[/sup] that England would have now if it just hadn't so stubbornly insisted on taxation without colonial representation during a previous administration. :-) - - [sup]*[/sup] ... at least, influence on American speech east of the Appalachians, if not on the other languages that would now be prevalent westward of that range. |
[QUOTE=cheesehead;265078]Just think of how much more influence on American speech[sup]*[/sup] that England would have now if it just hadn't so stubbornly insisted on taxation without colonial representation during a previous administration. :-)
[/QUOTE] And the only "Tea Party" would be full of [url=http://www.youtube.com/watch?v=WANNqr-vcx0]Mad Hatters[/url] |
[QUOTE=storm5510;185801]Before Y2K, there were many software applications using only the right two digits of the year. i.e. 98. [/QUOTE]
Sorry to dig up such an old post. You think Y2K was an issue, I am glad I won't be around for the mad scramble in the year 9999. :) |
[QUOTE=cheesehead;265078]Just think of how much more influence on American speech[sup]*[/sup] that England would have now if it just hadn't so stubbornly insisted on taxation without colonial representation during a previous administration. :-)[/QUOTE]
I'm not enjoying that taxation [B]with [/B]representation so much either! |
Breadth First Best? MY ARSE
22 factors found for exponents between 50M and 60M in 15 hours.
Bravo. 70x as many tests. Helping the LL wavefront speed???? NOT. David |
Davie:
That's 22 pairs of LL tests that don't need to be done anymore....and no one can do those in 15 hours. It's something.... |
[QUOTE=Christenson;265140]Davie:
That's 22 pairs of LL tests that don't need to be done anymore....and noone can do those in 15 hours. It's something....[/QUOTE] But they were going to get factored anyway. Why now? 2 years before 60M range will get LL tested, by which time the optimum balance between TF and LL may have changed. Meantime in the last 24 hours, 29 more DCs were completed than first time LLs. The factoring effort is best focussed on the 53M range ATM. And say we can do 3 more bits while keeping pace with the LL wavefront: the fairest division of labour among the GPUs is for each participant to do the 3 levels at once. Of course more P-1 would be good as well, but the firepower is currently woefully inadequate. David PS I was sincere when I expressed my gratitude to you for increasing my chance of finding a prime by 73/67. A pity that this message may have become garbled by the various "noises off"! |
[QUOTE=davieddy;265145]But they were going to get factored anyway.
Why now? 2 years before 60M range will get LL tested, by which time the optimum balance between TF and LL may have changed. Meantime in the last 24 hours, 29 more DCs were completed than first time LLs. The factoring effort is best focussed on the 53M range ATM. And say we can do 3 more bits while keeping pace with the LL wavefront: the fairest division of labour among the GPUs is for each participant to do the 3 levels at once. Of course more P-1 would be good as well, but the firepower is currently woefully inadequate. David PS I was sincere when I expressed my gratitude to you for increasing my chance of finding a prime by 73/67. A pity that this message may have become garbled by the various "noises off"![/QUOTE] NP with the thank you, it was understood -- I am also contributing some of that P-1 (and LL and LL-D) firepower. As for TF, you are taking a slightly short-term view. Two years is not that far in the future. I've said what I think should happen...raise the bit level on EVERYTHING that doesn't have a LL-D or LL on it in progress. 3 bit levels =5%. In two years, I hope most of us will be running P-1 and LL on our GPUs, both CUDA and OpenCL. Breadth-first, of course, finds the most factors with the least effort...and then really slows down as the work doubles with each bit level. |
[QUOTE=Christenson;265163]
As for TF, you are taking a slightly short-term view. Two years is not that far in the future. I've said what I think should happen...raise the bit level on EVERYTHING that doesn't have a LL-D or LL on it in progress. 3 bit levels =5%. In two years, I hope most of us will be running P-1 and LL on our GPUs, both CUDA and OpenCL. [/QUOTE] That's the most self-contradictory paragraph imaginable. What is "everything"? Maybe a moratorium on LL might seem quite appropriate for the near future, but how would the momentum GIMPS achieved (notably 96 till 04) be recouped? I was rather surprised that only 2500 users have completed LL tests in the last year. I've done 5 since November on my one machine. The "patience" aspect is easily overcome: just forget it's even running! David |
[QUOTE=Christenson;265163]NP with the thank you, it was understood -- I am also contributing some of that P-1 (and LL and LL-D) firepower.
[/QUOTE] As you will have noticed by now, being obsequious is not one of my many faults! I was trying to emphasize the point that for anyone wavering about whether to go through with an LL test or not, a little help from friends will not go amiss. It was pleasing that my request for someone to do my "dirty work" elicited two prompt and enthusiastic replies. A spirit of collaboration can outweigh cold probability:smile: David PS I missed Bob's post entitled "An apology for a Mathematician". |
Moratorium on LL? No way! I'm just saying that if we did TF an additional 3 bits, we'd need to do 5% fewer LLs....which means that 95% of the LLs are still needed!
The real upgrade comes when GPUs do lots of LL. And it's a bit hard to forget it's running, under Windows....Opera has a minimum-priority download function that is interfered with....and there seem to be some very low-priority things that happen along the way to disk access, too. |
As you know, I am enjoying our correspondence.
I would appreciate a slightly closer relationship between your replies and the points I raise though! Let's do those 3 bits now i.e. ~53M David |
[QUOTE=davieddy;265184]As you know, I am enjoying our correspondence.
I would appreciate a slightly closer relationship between your replies and the points I raise though! Let's do those 3 bits now i.e. ~53M David[/QUOTE] It would greatly help that close relationship if you weren't raising your points while I was composing the previous answer.....:razz: And we are missing a link to a classic Beatles song about getting by with a little help from my friends.... And, if anyone has a day or three to wait, I have no problem doing a little more special purpose TF, although it's not on the world's hottest GPU. |
[QUOTE=Christenson;265205]It would greatly help that close relationship if you weren't raising your points while I was composing the previous answer.....:razz:
And we are missing a link to a classic Beatles song about getting by with a little help from my friends.... And, if anyone has a day or three to wait, I have no problem doing a little more special purpose TF, although it's not on the world's hottest GPU.[/QUOTE] That is the inevitable consequence of a snappy exchange of emails! I know styles vary on this, but I almost invariably quote all or part of an email I am responding to, even if the original is likely to directly precede it: 1) It is clearer for third parties to interpret. 2) If someone else posts in between, it looks as if you have ignored them, even though you had no such intention. Hmmm. If you remember the Beatles, you weren't there! I've never quite seen the point of "teams", but if any GPU owner wanted to assist LL progress as directly as possible with extra TF, (s)he could do worse than enlist say 50 LL testing CPU owners. I didn't wait 3 days, but would have been pleased to be informed had you found a factor nonetheless. David |
[QUOTE=davieddy;265211]That is the inevitable consequence of a snappy exchange of emails!
[/QUOTE] Back in '69, I was corresponding with a Parisian girlfriend. Letters took ~3 days to deliver. When you both send letters every 3 days, you get used to the inevitable lag in response. You also need to have confidence about being on the same wavelength: e.g. you don't want to have written "I love you" at the same time as she is writing "You're Chucked"! David |
[QUOTE=davieddy;264982]
*Not sure whether anyone in the US knows how pronounce a short "A" anymore. PS "Jesus rode into somewhere (Jerusalem?) on an arse". Doesn't sound right to me.[/QUOTE] Come to think of it, they can't pronounce a long one either. An irritating nasal diphthong (is that the right word?) seems to do for either. (Obama excepted). David (Oxbridge/West London snob:smile:) On reconsideration, I didn't mean short and long "A". I think you colonials can say "Hay" okay. It is the relevance or otherwise of the "r" that is in question. And possibly the double "s". [URL="http://www.youtube.com/watch?v=0Bwbkr8-GyM&feature=fvst"]Zed Zed Top[/URL] |
What's your exponent guess for M48?
Mine is 49,291,591.
|
[QUOTE=LiquidNitrogen;265322]Mine is 49,291,591.[/QUOTE]
My guess is that you are testing it ATM See the "Predict M48" thread in the lounge. LiquidHelium. |
[QUOTE=LiquidNitrogen;265322]Mine is 49,291,591.[/QUOTE]
Bad guess; that exponent has already been factored; M49291591 is divisible by 219347579951 (k = 2225 = 5*5*89) [url]http://mersenne.org/report_exponent/?exp_lo=49291591[/url] |
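The reported factor is easy to verify from the fact that any factor of M_q (q prime) has the form 2kq+1:

```python
q, f, k = 49291591, 219347579951, 2225

assert f == 2 * k * q + 1        # the factor has the required 2kq+1 form
assert k == 5 * 5 * 89           # the k given in the report above
assert pow(2, q, f) == 1         # f really divides 2^q - 1
print("M49291591 is divisible by", f)
```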
[QUOTE=davieddy;265323]My guess is that you are testing it ATM[/QUOTE]
Nope, I am working on M46789177 which has no prime factors below 2^68th. |
[QUOTE=LiquidNitrogen;265332]Nope, I am working on M46789177 which has no prime factors below 2^68th.[/QUOTE]
Me M45xxxxxx. No factors < 2^74. And some P-1 done. David |
[QUOTE=KingKurly;265324]Bad guess[/QUOTE]
I change my guess then :) |
[QUOTE=davieddy;265333]And some P-1 done.[/QUOTE]
And what does P-1 refer to? |
P-1 is a factoring method that depends on the fact that mersenne numbers are one different than an even power of two. It has a larger and different feasible search space for factors than TF...but it doesn't have the big speedup on CUDA that TF does. Have a look at the math page.
Do you want me to feed M46789177 to mfaktc for 68 to 71 bits? It would start sometime tomorrow, GMT-4=E. Coast US time. |
[QUOTE=Christenson;265340]
Do you want me to feed M46789177 to mfaktc for 68 to 71 bits? It would start sometime tomorrow, GMT-4=E. Coast US time.[/QUOTE] Well I'm already working on it, core #3 of 4. It's 17% done and will be finished before the end of this month (Jul 26 estimate). |
[QUOTE=Christenson;265340]P-1 is a factoring method that depends on the fact that mersenne numbers are one different than an even power of two.[/QUOTE]
Not exactly. P-1 factoring works for any number, and will find a factor P of the composite N if P-1 is sufficiently smooth. "Sufficiently smooth" means that all but the largest prime factor of P-1 are less than "B1," and the largest prime factor of P-1 is less than "B2." It is especially effective on Mersenne numbers because we know, from theory, that the divisors of 2^q-1 are all of the form 2*k*q+1, so we know that 2q is a divisor of P-1; this makes the unfactored part of P-1 smaller, increasing the odds it is sufficiently smooth for the method to find a factor. |
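A minimal stage-1 sketch of the method William describes, in pure Python (no stage 2, schoolbook arithmetic only; real implementations such as Prime95 use transform-based multiplication and far larger bounds — this function is illustrative, not production code). The base starts from 3^(2q), folding in the known 2q divisor of f-1 up front:

```python
import math

def p_minus_1_stage1(q, B1):
    """Stage-1 P-1 on N = 2^q - 1: raise 3 to the product of 2*q and
    every prime power <= B1.  If some prime factor f of N has f-1
    built entirely from those pieces, gcd(3^E - 1, N) reveals it."""
    N = (1 << q) - 1
    x = pow(3, 2 * q, N)                    # every factor f = 2kq+1, so 2q | f-1
    for r in range(2, B1 + 1):
        if all(r % d for d in range(2, math.isqrt(r) + 1)):  # r is prime
            e = r
            while e * r <= B1:              # largest power of r not exceeding B1
                e *= r
            x = pow(x, e, N)
    g = math.gcd(x - 1, N)
    return g if 1 < g < N else None         # None: no factor found (or all found)

# M11 = 2047 = 23 * 89, and 23 - 1 = 22 = 2*11 is as smooth as it gets:
print(p_minus_1_stage1(11, 3))    # finds 23
```

Here 89 escapes because 89 - 1 = 88 = 2^3 * 11 needs a power of 2 beyond the B1 = 3 bound, which is exactly the smoothness condition in action.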
[QUOTE=Christenson;265340]P-1 is a factoring method that depends on the fact that mersenne numbers are one different than an even power of two.[/QUOTE]No.
There's no relationship at all between the P-1 method and the fact that Mersenne numbers differ from a power of two by 1. The "-1" in "P-1" has nothing to do with the [I]-1[/I] in 2[sup]p[/sup]-1. The "P" in "P-1" has nothing to do with the [I]p[/I] in 2[sup]p[/sup]-1. [QUOTE=wblipp;265362]Not exactly.[/QUOTE]Not even approximately! That is the only part of your response with which I disagree, because it may leave the reader with the mistaken idea that Christenson's misstatement is somehow partially correct. |
I'd redact my post if I was allowed...mods, you are invited....
|
[QUOTE=Christenson;265455]I'd redact my post if I was allowed...mods, you are invited....[/QUOTE]
No mods, please don't! "P-1 is a factoring method" is a good start to a reply to someone who had never heard of it, and the erroneous details elicited the erudite response from William. (See another thread in the Misc Math forum) This place is instructive on a lot of levels, as long as you don't Bowdlerize the posting history. David |
[QUOTE=LiquidNitrogen;265337]And what does P-1 refer to?[/QUOTE]
Maybe if you bothered to do some reading about this subject you would find out. Or don't you know how to use Google? Do us all a favor. Go away until you have read (and done the exercises) at least one book on number theory. Maybe then, you might have sufficient knowledge to actually say something meaningful about this subject. We can suggest some references. |
[QUOTE=R.D. Silverman;265512] Or don't you know how to use Google?
[/QUOTE] if he doesn't I'd be happy to give him a link to make advanced searches ( though I don't do them enough) . I've tried teaching my mom why [TEX].\bar {9} =1 [/TEX] so teaching someone else Google shouldn't be that hard. |
You keep coming back for more, Richard...
[QUOTE=cheesehead;265446]No.
There's no relationship at all between the P-1 method and the fact that Mersenne numbers differ from a power of two by 1. The "-1" in "P-1" has nothing to do with the [I]-1[/I] in 2[sup]p[/sup]-1. The "P" in "P-1" has nothing to do with the [I]p[/I] in 2[sup]p[/sup]-1. Not even approximately! That is the only part of your response with which I disagree, because it may leave the reader with the mistaken idea that Christenson's misstatement is somehow partially correct.[/QUOTE] The "-1" has got a lot to do with factors of 2[SUP]p[/SUP]-1 being 2kp+1 as William explained. Now respond to my post in "CPU 100%". Too many arselickers/nitpickers around here. David |
[QUOTE=R.D. Silverman;265512]Or don't you know how to use Google?
[/QUOTE] As one of their first 100 employees, I guess not. |
[QUOTE=science_man_88;265514]if he doesn't I'd be happy to give him a link to make advanced searches ( though I don't do them enough) . I've tried teaching my mom why [TEX].\bar {9} =1 [/TEX] so teaching someone else Google shouldn't be that hard.[/QUOTE]
And I did write a Factoring Program back in 1998 for the Mac: [URL]http://www.tucows.com/preview/205405[/URL] "It's About Prime" The curtailed nomenclature used on this site is not so ubiquitous, and the site itself is not exactly intuitive either (or will someone point me to the "Outstanding Interface Awards" it has won?)
[QUOTE=LiquidNitrogen;265533]And I did write a Factoring Program back in 1998 for the Mac:
[URL]http://www.tucows.com/preview/205405[/URL] "It's About Prime" The curtailed nomenclature used on this site is not so ubiquitous, and the site itself is not exactly intuitive either (or will someone point me to the "Outstanding Interface Awards it has won?)[/QUOTE] I'm an idiot and even I can find most things. And by the way: [url]http://www.google.ca/search?hl=en&q=%22Outstanding+Interface+Award%22+%2B+google&oq=%22Outstanding+Interface+Award%22+%2B+google&aq=f&aqi=&aql=&gs_sm=s&gs_upl=0l0l0l0l0l0l0l0l0l0l0ll0[/url] needs more results if you're going to bring it up. |
[QUOTE=LiquidNitrogen;265532]As one of their first 100 employees, I guess not.[/QUOTE]
Just putting this out there for consideration... Might you give us your name? |
[QUOTE=chalsall;265540]
Might you give us your name?[/QUOTE] No, I intend to use it for a while longer. |
[QUOTE=LiquidNitrogen;265533]And I did write a Factoring Program back in 1998 for the Mac:
[URL]http://www.tucows.com/preview/205405[/URL] "It's About Prime" The curtailed nomenclature used on this site is not so ubiquitous, and the site itself is not exactly intuitive either (or will someone point me to the "Outstanding Interface Awards it has won?)[/QUOTE] Apologies for the alphabet soup and the nomenclature....it's par for the course in any relatively small group of people working on the same or closely related technical things. As for not winning awards, the interface works well enough, and is similar enough to lots of other forums I have used. The people here who could change the interface have a lot of other, more important things to do with their time. ************** see [url]http://www.mersenne.org/various/math.php[/url] and scroll down for an explanation of P-1 factoring. ************** How long would "It's about prime" take to factor M1061, incidentally? Oh, and moderators: A short note as to the problem with my previous post and the explanation of P-1 would be my idea of properly editing it. |
[QUOTE=LiquidNitrogen;265545]No, I intend to use if for a while longer.[/QUOTE]
But... You made a claim that you were one of the first 100 employees of Goggle. Might you be lying? |
[QUOTE=Christenson;265549] As for not winning awards, the interface works well enough, and is similar enough to lots of other forums I have used.[/QUOTE]
Understood, but I was referring to the Mersenne.org site, which has "accumulated" (and that's being nice about it :smile: ) a fair amount of info over the years which could be organized better. [QUOTE=Christenson;265549] How long would "It's about prime" take to factor M1061, incidentally? [/QUOTE] Somewhere between when the Andromeda Galaxy starts colliding with the Milky Way and when our own sun starts to burn out :smile: It was nothing more than a clever RAM-resident trial-division prime solver that was able to handle 64-bit ints before any other Mac program. Mostly used as a benchmark by those who were confused by the preponderance of Macs spawned during the Gil Amelio era. Featuring "amazing" optimizations such as adding +2, +2, +2, then +4 to candidate primes using a circular linked list after pre-loading 2, 3, 5, and 7 as the seed entries. That way, it would never generate a candidate prime that ended in 5. You might laugh, but no Mac programs would do such things before it, and they were all constrained to 30 bits or less since Apple used the upper 2 bits of memory for "Handle states" in their Memory Manager ROM routines. I also used a memory paging buffer for those with < 4 MB (not GB!) of RAM. Those were the days. |
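[Editor's note] The +2/+2/+2/+4 stepping described above is a mod-10 wheel: after the seeds 2, 3, 5, 7, the increments cycle so that candidate last digits run 9, 1, 3, 7 and a number ending in 5 (or an even digit) never appears. Here is a minimal Python sketch of the idea — my reconstruction, not the original Mac code:

```python
from itertools import cycle, islice

def candidates():
    """Yield trial-division candidates that never end in 5.

    Seeds 2, 3, 5, 7 first, then walks a mod-10 wheel from 7 using the
    repeating increments +2, +2, +2, +4, so last digits cycle 9, 1, 3, 7.
    """
    yield from (2, 3, 5, 7)
    n = 7
    for step in cycle((2, 2, 2, 4)):
        n += step
        yield n

def is_prime(n):
    """Trial division over the wheel candidates."""
    if n < 2:
        return False
    for d in candidates():
        if d * d > n:
            return True
        if n % d == 0:
            return n == d

print(list(islice(candidates(), 12)))  # [2, 3, 5, 7, 9, 11, 13, 17, 19, 21, 23, 27]
print([p for p in range(2, 50) if is_prime(p)])
```

The wheel still emits some composites (9, 21, 27, ...), but it skips all multiples of 2 and 5, cutting the trial-division work to 40% of naive counting.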
[QUOTE=LiquidNitrogen;265533]And I did write a Factoring Program back in 1998 for the Mac:
[URL]http://www.tucows.com/preview/205405[/URL] "It's About Prime" The curtailed nomenclature used on this site is not so ubiquitous, and the site itself is not exactly intuitive either (or will someone point me to the "Outstanding Interface Awards it has won?)[/QUOTE] Yeah, no offense, but I'd rather not look at something from an infiniteloop.com; infinite loops mean bad programming, last I checked. |
[QUOTE=chalsall;265554]But... You made a claim that you were one of the first 100 employees of Goggle.
Might you be lying?[/QUOTE] I can tell you what it used to say at [URL="http://www.7427466391.com"]www.7427466391.com[/URL] [B]Congratulations. You've made it to level 2. Go to [url]www.Linux.org[/url] and enter Bobsyouruncle as the login and the answer to this equation as the password. f(1)= 7182818284 f(2)= 8182845904 f(3)= 8747135266 f(4)= 7427466391 f(5)= __________ [/B] This particular conversation is over. I know who I am and who I worked for and what my stock portfolio is worth today. And I don't have to answer to the likes of people such as yourself. |
[QUOTE=science_man_88;265558]Yeah, no offense, but I'd rather not look at something from an infiniteloop.com; infinite loops mean bad programming, last I checked.[/QUOTE]
1. It was InfiniteLoop.org (not .com) so pay closer attention to details. 2. Apple Computer's Address was "1 Infinite Loop Way" at one point, and this company was a homage to that. 3. [B]EVERY [/B]program [B]ALWAYS [/B]executes an infinite loop while it runs. Usually in the form of "While(1)" in the old Pascal/C days when you created a busy Event Loop. 4. The name is a double-entendre for the exact same reason you mentioned. Most people THINK it's bad, while those who actually know, understand the underlying meaning. Thanks for demonstrating which class you belonged to (and yet another double-entendre to the object oriented programmers out there, or will you enlighten us further with your explanation of OOPS?) |
[QUOTE=davieddy;265516]The "-1" has got a lot to do with factors of 2[SUP]p[/SUP]-1
being 2kp+1 as William explained. [/QUOTE] Another bit of nonsense from someone who refuses to study this subject. The factors of 2^p + 1 are [b]also[/b] of the form 2kp+1. The "-1" in 2^p-1 is irrelevant. The "-1" in P-1 has almost [b]nothing[/b] to do with the "-1" in 2^p-1. The "-1" in P-1 has everything to do with the fact that the group order of Z/pZ* is p-1. [QUOTE] Too many arselickers/nitpickers around here. David[/QUOTE] Too many ignorant posters who prattle about mathematics and fail to understand that math is all about nitpicking. Getting the minute details right is fundamental to the subject. Too many people who have spewed nonsense for years and who refuse to read. |
[QUOTE=R.D. Silverman;265567]Another bit of nonsense from someone who refuses to study this
subject. The factors of 2^p + 1 are [b]also[/b] of the form 2kp+1. The "-1" in 2^p-1 is irrelevant. The "-1" in P-1 has almost [b]nothing[/b] to do with the "-1" in 2^p-1. The "-1" in P-1 has everything to do with the fact that the group order of Z/pZ* is p-1. Too many ignorant posters who prattle about mathematics and fail to understand that math is all about nitpicking. Getting the minute details right is fundamental to the subject. Too many people who have spewed nonsense for years and who refuse to read.[/QUOTE] Anyway, I forgot this was a thread for predicting Mersenne prime #48. Part of my problem is that even if I do read something, I'm stupid enough to misinterpret it. |
[QUOTE=R.D. Silverman;265567]Too many ignorant posters who prattle about mathematics and fail to understand that math is all about nitpicking.[/QUOTE]
Can someone multiply this guy by (1 + i^2) already? |
[QUOTE=LiquidNitrogen;265570]Can someone multiply this guy by (1 + i^2) already?[/QUOTE]
He's as valid as you would be multiplied by that expression: (1+i^2)(R.D. Silverman) = (1+(-1))(R.D. Silverman) = (1-1)(R.D. Silverman) = 0(R.D. Silverman). Assuming he's an integer (or a real, which is higher up, if I remember correctly), it evaluates to 0, the same as you would. However, he's about 100 times the use of most of the people he's complaining about. I always knew liquid nitrogen had a cold heart, but you have made a new definition for cold. |
[QUOTE=R.D. Silverman;265567]The factors of 2^p + 1 are [B]also[/B] of the form 2kp+1.
The "-1" in 2^p-1 is irrelevant. The "-1" in P-1 has almost [B]nothing[/B] to do with the "-1" in 2^p-1. [/QUOTE] [URL="http://www.youtube.com/watch?v=kA5-hZ73Tiw"]Please Mr Silverman, Mr Silverman Please[/URL] Re-read my post. Did I suggest anything other than the above?? David PS You even quoted it verbatim: "The "-1" has got a lot to do with factors of 2[SUP]p[/SUP]-1 being 2kp+1 as William explained." |
[QUOTE=R.D. Silverman;265512]Or don't you know how to use Google?[/QUOTE]
[url]http://www.lmgtfy.com/?q=p-1+factoring[/url] |
[QUOTE=Mini-Geek;265580][url]http://www.lmgtfy.com/?q=p-1+factoring[/url][/QUOTE]
My question was aimed not at you, but rather at someone exhibiting a great deal of willful ignorance. |
[QUOTE=cheesehead;264884]
In order to [I]guarantee[/I] finding factors up to 2^71 with P-1 alone, we'd have to use B2 = 2^71 / 2p = 21465290096243, over 70,000 times as large as currently-chosen P2. Which P-1 routines can handle B2 ~ 2^45?[/QUOTE] [QUOTE=davieddy;265023]TF limit = 2^70 (say). As you pointed out, there is enough data to get an accurate distribution, but factors found by P-1 can easily exceed 2^70 by 6 digits (20 bits) P-1 "hit rate" ~6% 4 more bits of TF hit rate ~6% Overlap not too much. I would like to think Einstein would consider this as an "argument" or at least a "thought experiment" David[/QUOTE] [QUOTE=R.D. Silverman;265567]Another bit of nonsense from someone who refuses to study this subject. snip Too many ignorant posters who prattle about mathematics and fail to understand that math is all about nitpicking. Getting the minute details right is fundamental to the subject. Too many people who have spewed nonsense for years and who refuse to read.[/QUOTE] If reading and nitpicking enables you to get an answer 5 orders of magnitude wide of the mark, I suggest you try to accompany these admirable habits with a much larger dose of thinking for yourself, doing and teaching. Has the art of arriving at sensible "order of magnitude" estimates been lost completely? The first question on the Oxbridge entrance physics exam invariably asked for such estimates. My Head of Department (no fool himself, but a bit of a pedant) thought that memorizing the number of protons in the earth etc would help candidates with this acid test of "nous". "What if they are asked to estimate the number of protons in the earth?" I replied. He thought "What is the capacitance of a thundercloud?" to be an unfair question. "How big/high is a thundercloud?" "Between 1 and 10 km?" "So what is your order of magnitude estimate?" say I. David |
[QUOTE=R.D. Silverman;265586]My question was aimed not at you, but rather at someone exhibiting a
great deal of willful ignorance.[/QUOTE] Yeah, and you were butting into the middle of a real-time conversation. My question was not posted @ the group, my question was posted to an individual who was online at the same time as me. [QUOTE=LiquidNitrogen;265332]Nope, I am working on M46789177 which has no prime factors below 2^68th.[/QUOTE] [QUOTE=davieddy;265333]Me M45xxxxxx. No factors < 2^74. And some P-1 done. David[/QUOTE] If you could have used your "math powers" you would have observed these posts were a mere 8 minutes apart. Someone with non-retarded socializing skills that was used to interacting with people in "real life" would understand something of this nature. I saw him online, so I tossed out the question. If you want to accuse me of something, then why don't you "nitpick" and accuse me of the correct thing: being too lazy to send him a personal message on the forum. |
[QUOTE=davieddy;265589]If reading and nitpicking enables you to get an answer 5 orders of magnitude wide of the mark,[/QUOTE]I see nothing in Silverman's proposal that is 5 orders of magnitude out. His suggestion of substituting more P-1 for TF makes good sense (my desire to see more numbers notwithstanding) from the standpoint of eliminating the possibility of Mersenne primality as efficiently as possible.
I was simply pointing out an aspect that Mr. Silverman did not mention (and is probably not concerned with), the difference adopting his scheme would make to one minor side goal of GIMPS: exhaustively eliminating the possibility of factors up to certain sizes. That side goal has no particular bearing on the finding of Mersenne primes, and was simply a byproduct of the procedures GIMPS has employed. That there happen to be numbers of magnitude near 10^5 in my analysis of a certain aspect does not imply that Silverman made any mistake or oversight in his analysis [I]of the aspects to which his proposal was addressed[/I]. |
[QUOTE=R.D. Silverman;265586]My question was aimed not at you, but rather at someone exhibiting a
great deal of willful ignorance.[/QUOTE] [url=http://www.youtube.com/watch?v=yqdSl2jFjlE]Brute Force and Ignorance[/url] I think it was Flatlander turned me on to this one. Scour the "Music thread":smile: David |
[QUOTE=cheesehead;265592]I see nothing in Silverman's proposal that is 5 orders of magnitude out. His suggestion of substituting more P-1 for TF makes good sense (my desire to see more numbers notwithstanding) from the standpoint of eliminating the possibility of Mersenne primality as efficiently as possible.
I was simply pointing out an aspect that Mr. Silverman did not mention (and is probably not concerned with), the difference adopting his scheme would make to one minor side goal of GIMPS: exhaustively eliminating the possibility of factors up to certain sizes. That side goal has no particular bearing on the finding of Mersenne primes, and was simply a byproduct of the procedures GIMPS has employed. That there happen to be numbers of magnitude near 10^5 in my analysis of a certain aspect does not imply that Silverman made any mistake or oversight in his analysis [I]of the aspects to which his proposal was addressed[/I].[/QUOTE] If you are suggesting that dropping the "certainly no factors less than 70 bits" criterion would speed up GIMPS, then I am all for it, and welcome Bob's suggestion from the [strike]arse[/strike]bottom of my heart. David |
[QUOTE=LiquidNitrogen;265590]Yeah, and you were butting into the middle of a real-time conversation. My question was not posted @ the group, my question was posted to an individual who was online at the same time as me.
If you could have used your "math powers" you would have observed these posts were a mere 8 minutes apart. Someone with non-retarded socializing skills that was used to interacting with people in "real life" would understand something of this nature. I saw him online, so I tossed out the question. If you want to accuse me of something, then why don't you "nitpick" and accuse me of the correct thing: being too lazy to send him a personal message on the forum.[/QUOTE] Que? Who He? And anyway, [URL="http://www.youtube.com/watch?v=NztfOSyCCFM"]What DID Della wear?[/URL] David PS I don't need encouragement:smile: |
Cool It!
LN2, you don't *have* to prove your statement about being a very early employee of google...but we don't have to believe it, either. Right now, the evidence is very ambiguous, and the worst of us is out, not the best. Time for me to worry about how to get a bond line below .010" on a 1 meter scale without spending weeks polishing a mirror, and go argue with the mfaktc code.
:smile::drama::batalov::never again::deadhorse::explode: |
[QUOTE=Christenson;265610]LN2, you don't *have* to prove your statement about being a very early employee of google...but we don't have to believe it, either. [/QUOTE]
I saw a billboard ad one day. Find the first 10-digit prime number in sequential digits of "e" and go to that number ".com" I wrote some code, took a stab at it, and posted the result above ^^^. It took me to the next step, which I answered correctly also, and that led me to a recruiting site for what turned out to be Google. Take it or leave it, just stop bringing it up already. |
[QUOTE=LiquidNitrogen;265620]I saw a billboard ad one day. Find the first 10-digit prime number in sequential digits of "e" and go to that number ".com"
I wrote some code, took a stab at it, and posted the result above ^^^. It took me to the next step, which I answered correctly also, and that led me to a recruiting site for what turned out to be Google. Take it or leave it, just stop bringing it up already.[/QUOTE] You weren't responsible for mis-spelling Googol I hope. Or was that previously patented? [url=http://www.youtube.com/watch?v=WANNqr-vcx0]Can't let opportunities go by[/url] David |
[QUOTE=LiquidNitrogen;265620]I saw a billboard ad one day. Find the first 10-digit prime number in sequential digits of "e" and go to that number ".com"
I wrote some code, took a stab at it, and posted the result above ^^^. It took me to the next step, which I answered correctly also, and that led me to a recruiting site for what turned out to be Google. Take it or leave it, just stop bringing it up already.[/QUOTE] That provides a bit of detail that makes you a lot more believable...now hopefully you've seen the worst of us. We have all levels here, both technically and emotionally. I'm reminded, gently, that identity through the internet is quite complicated; just ask the various people who thought they were talking to children in chat, only to find out it was the cops when the cuffs came out after they went to meet said child in person. Now, figuring out that the Apple OS was keeping 2 bits of 32 in RAM for tags, and working around it...that's a decent "real programming" hack, in the vein of "Real Programmers Don't use Pascal" (they just use negative subscripts in FORTRAN to modify the operating system)...one of my pet projects is figuring out how to make an OS truly secure in the presence of arbitrary code. The virus situation is truly getting scary. |
[QUOTE=chalsall;265540]Just putting this out there for consideration...
Might you give us your name?[/QUOTE] Anyone like to play Clue? The name of the game may seem at first, 'mystery'. I like to play more than once, but not more than twice. Google is my friend. Three minutes of no error and the game is often done. When you choose the right search terms, you'll find what you've won. [COLOR="White"][SIZE="1"]sudo chmod 200 w.file[/SIZE][/COLOR] |
[QUOTE=Christenson;265623]
Now, figuring out that the Apple OS was keeping 2 bits of 32 in RAM for tags, and working around it...that's a decent "real programming" hack[/QUOTE] But even for a long time, Windows wasn't "32-bit clean," was it? Do you know anyone who could (in the past) assign more than 2 GB to a single executable even when they had more RAM? Somehow Micro$oft was doing something with one of its bits, or just not allowing it for other reasons, since 2^31 is 2 GB. |
[QUOTE=LiquidNitrogen;265620]I saw a billboard ad one day. Find the first 10-digit prime number in sequential digits of "e" and go to that number ".com"
I wrote some code, took a stab at it, and posted the result above ^^^. It took me to the next step, which I answered correctly also, and that led me to a recruiting site for what turned out to be Google. Take it or leave it, just stop bringing it up already.[/QUOTE] Neat story! In your opinion, is there one book out there that gives the best history of Google? There must be some fascinating stories in the background of that company. Thanks- Norm |
I see this thread has gotten sidetracked.
:smile: I will just refrain from posting so this eventually dies down. |
Windows? Clean? In no sense of the word I can think of...I use it because I have to, not because I want to...it's a classic example of the "Second System Effect" (see "The Mythical Man-Month" by Fred Brooks).
|
[QUOTE=LiquidNitrogen;265590]Yeah, and you were butting into the middle of a real-time conversation. My question was not posted @ the group, my question was posted to an individual who was online at the same time as me.
[/QUOTE] This is an open, public forum. If you don't like it, you can leave. [QUOTE] If you could have used your "math powers" you would have observed these posts were a mere 8 minutes apart. Someone with non-retarded socializing skills that was used to interacting with people in "real life" would understand something of this nature. [/QUOTE] Why should I or anyone else bother to observe the timing of the posts? And the moderators have expressed a severe distaste for the name-calling that you exhibit here (i.e. 'retarded'). [QUOTE] If you want to accuse me of something, then why don't you "nitpick" and accuse me of the correct thing: being too lazy to send him a personal message on the forum.[/QUOTE] I do accuse you of being too lazy to study the mathematics required to conduct an intelligent discussion. |
[QUOTE=LiquidNitrogen;265649]Somehow Micro$oft was doing something with one of its bits, or just not allowing it for other reasons, since 2 ^31 is 2 GB.[/QUOTE]Microsoft was probably just dealing with issues similar to those IBM encountered back in the 1970-80s when it extended its S/370 addresses from 24 bits to 31 (not 32) bits. The high-order bit in 32-bit integers is treated as a sign bit by many machine instructions. In IBM's case, it was non-trivial to comb an operating system to find all cases where the earlier programmers, operating under the assumption that addresses occupied only the low-order 24 bits of a fullword integer, used instructions that might give different results depending on whether the sign bit was 0 or 1.
[URL]http://en.wikipedia.org/wiki/31-bit[/URL] has a more detailed explanation. So does [URL]http://edwardbosworth.com/My3121Textbook_DOC/MyText3121_Ch08_V01.doc[/URL] but mixed with a lot of other S/370 addressing non-31-bit details. Now, Microsoft wasn't extending its addressing from 24 bits AFAIK, but the same considerations of treating the high-order bit differently from all the rest would still apply. |
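[Editor's note] The sign-bit hazard cheesehead describes is easy to demonstrate: take one 32-bit pattern and read it back as unsigned versus signed. A small sketch with Python's struct module:

```python
import struct

def as_unsigned_and_signed(bits):
    """Reinterpret one 32-bit pattern as unsigned and as signed two's complement."""
    raw = struct.pack("<I", bits)       # serialize as unsigned 32-bit
    (u,) = struct.unpack("<I", raw)     # read back unsigned
    (s,) = struct.unpack("<i", raw)     # read back signed
    return u, s

# Low 24 bits only: both views agree, so old 24-bit-address code is unaffected.
print(as_unsigned_and_signed(0x00FFFFFF))   # (16777215, 16777215)

# High bit set (an address with bit 31 on): the signed view goes negative.
print(as_unsigned_and_signed(0x80000001))   # (2147483649, -2147483647)
```

Any instruction (or C comparison, or array-index computation) that treats the value as signed behaves differently the moment addresses cross 2^31 — which is exactly why combing an OS for such assumptions was non-trivial.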
One thing to note is that [i]int[/i] is easier to type than [i]unsigned[/i]. Therein lies the problem, lazy programmers using ints for everything, even in situations when unsigned was the proper choice. How many times have we seen array index variables being declared as ints?
|
[QUOTE=retina;265700]One thing to note is that [I]int[/I] is easier to type than [I]unsigned[/I]. Therein lies the problem, lazy programmers using ints for everything, even in situations when unsigned was the proper choice. How many times have we seen array index variables being declared as ints?[/QUOTE]... and that's just when the coding is in a high-level language rather than assembler. :smile:
|
[QUOTE=retina;265700]One thing to note is that [i]int[/i] is easier to type than [i]unsigned[/i]. Therein lies the problem, lazy programmers using ints for everything, even in situations when unsigned was the proper choice. How many times have we seen array index variables being declared as ints?[/QUOTE]
Do we have an uncomputable number here???? :smile: |
[QUOTE=retina;265700] How many times have we seen array index variables being declared as ints?[/QUOTE]
Whenever I need to do this: [CODE] for (i = x; i >= 0; i--) { some stuff; } [/CODE] :smile: |
[QUOTE=bsquared;265704]Whenever I need to do this:
[CODE] for (i = x; i >= 0; i--) { some stuff; } [/CODE][/QUOTE]With this instead you don't limit yourself to 2GB of memory.[CODE] for (j = x + 1; j > 0; j--) { i = j - 1; some stuff; } [/CODE] |
[QUOTE=retina;265706]With this instead you don't limit yourself to 2GB of memory.[CODE]
for (j = x + 1; j > 0; j--) { i = j - 1; some stuff; } [/CODE][/QUOTE] What made things really tough for me was going from Complex Variables class where i was sqrt(-1) to circuits class where j was sqrt(-1) since we used i for current, then to programming class where we had for(i = 0; ...) and for(j = 1;...) |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.