You don't seem to be giving any very clear idea as to why doing the math formally will give us insight that the statistics computed from current experiments don't offer.

You can answer a question about what distribution will be observed either with complicated modelling or by expecting it to be like the distribution that was actually observed last time, but it's not helpful to say that looking at the statistics is completely irrelevant. If you want the complicated modelling done, do it yourself; stop nagging people who are happy with extrapolation to do it. It appears that nobody is interested in discussing the details of the modelling, because you have failed to convince them that it will be interesting and useful. Such is life. As you said thirty posts ago, [quote] Experience and prior analysis has shown that the response surface for this optimization will be very shallow in the region of the optimum. Almost anything reasonable will be close to optimal, and I doubt whether it would even be possible to discern from actual computation if the values are not optimal. [/quote]
[QUOTE=alpertron;268655]If the P-1 algorithm can find a 95-bit factor p of a Mersenne number with a 26-bit exponent, that means that p-1 is a multiple not only of that 26-bit exponent, which comes for free, but also of several prime factors which multiplied together have 69 or 70 bits. This means that with the same bounds used for P-1, the P+1 algorithm should find prime factors p in the 69-70 bit range where p+1 is smooth.[/QUOTE]
That is a reasonable argument, and it suggests that there is little point in implementing P+1, because prime factors in that range will already have been found by the GPU trial-factoring stage.
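To make the "for free" part of the argument concrete: any prime factor p of 2^q - 1 (q prime) satisfies p ≡ 1 (mod 2q), so the exponent's contribution to p-1 costs P-1 nothing, and only the remaining cofactor needs to be smooth. A minimal sketch (my own illustration, not code from the thread; the function name is made up):

```python
# Every prime factor p of 2^q - 1 (q prime) satisfies p == 1 (mod 2*q),
# so P-1 gets the factor 2*q of p-1 "for free"; only the remaining
# cofactor of p-1 has to be B1-smooth for P-1 to succeed.
def mersenne_factor_structure(q, p):
    assert pow(2, q, p) == 1          # check p really divides 2^q - 1
    assert (p - 1) % (2 * q) == 0     # the "free" factor 2*q of p-1
    return (p - 1) // (2 * q)         # cofactor that must be smooth

# 2^11 - 1 = 2047 = 23 * 89
print(mersenne_factor_structure(11, 23))   # -> 1
print(mersenne_factor_structure(11, 89))   # -> 4
```

So a 95-bit factor of a 26-bit exponent leaves a cofactor of roughly 95 - 27 = 68-69 bits, which is the range alpertron's comparison with P+1 rests on.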
[QUOTE=fivemack;268669]You don't seem to be giving any very clear idea as to why doing the math formally will give us insight that the statistics computed from current experiments don't offer.
You can answer a question about what distribution will be observed either with complicated modelling or by expecting it to be like the distribution that was actually observed last time, but it's not helpful to say that looking at the statistics is completely irrelevant. If you want the complicated modelling done, do it yourself; stop nagging people who are happy with extrapolation to do it; it appears that nobody is interested in discussing the details of the modelling, because you have failed to convince them that it will be interesting and useful. Such is life. As you said thirty posts ago,[/QUOTE]:goodposting: Thanks, Tom. You spared me saying exactly that.

Most of us can find the maximum value of a function of several variables. Throw in the "imponderables" associated with the project, and pragmatic empiricism becomes more of a necessity than a "cop-out". Add to that the point that you, I and others are making: walking a few feet from the summit of a round-shaped hill will barely decrease your altitude, and so on.

Is it just me, or is Bob finally going completely round the bend?

David
[QUOTE=fivemack;268669]You don't seem to be giving any very clear idea as to why doing the math formally will give us insight that the statistics computed from current experiments don't offer.
[/QUOTE] You do not know if you have the right answer.

[QUOTE] You can answer a question about what distribution will be observed either with complicated modelling or by expecting it to be like the distribution that was actually observed last time, but it's not helpful to say that looking at the statistics is completely irrelevant. [/QUOTE] I recall a comment from a class that I took from W. Zangwill in grad school. We were discussing an O.R. problem, and Prof. Zangwill asked a generic question about how to go about solving it. The first suggestion was "go out and gather statistics and look for a pattern". Prof. Zangwill's response was "this is a truly horrible way to do it; may we have a better suggestion?". I then spoke up: "Start by defining the variables that will affect your decision". Zangwill's reply was that this was precisely the correct way to approach an optimization problem. Perhaps you think you know more than Willard?

[QUOTE] If you want the complicated modelling done, do it yourself; stop nagging people who are happy with extrapolation to do it; [/QUOTE] But I don't care about the correct bit level for TF. I do care about discussing mathematics in a forum that is supposed to be for said discussion. In case you didn't notice, this subforum is supposed to be for the discussion of mathematics. If they don't want to discuss the math, fine. GO SOMEWHERE ELSE.

[QUOTE] it appears that nobody is interested in discussing the details of the modelling, because you have failed to convince them that it will be interesting and useful. Such is life. [/QUOTE] I [i]strongly[/i] doubt the correctness of what you say. It is far more probable that they ignore the math because they don't understand it and can't do it. They would much rather indulge in their typical prattling and hand-waving. This is NOT a 'complicated model'. It is very straightforward.

The only hard part is (as I have said) working out the details of how to compute the conditional probability of P-1 succeeding after TF has failed. If you don't want to discuss the math behind selecting the optimal TF level, then GO AWAY. Take it somewhere else.
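For what it's worth, here is one crude way to sketch that conditional probability. This is a toy model of my own, not anything from the thread: it combines the standard heuristic that a Mersenne candidate has a prime factor in [2^k, 2^(k+1)) with probability about 1/k, with the rough u^-u approximation to the Dickman rho function for the smoothness of the cofactor of p-1. Conditioning on TF to tf_bits having failed simply removes the low bit levels from the sum. The numbers in the example call are illustrative only, and the B2 stage is ignored entirely:

```python
import math

# Toy model: probability that stage-1 P-1 with bound B1 finds a factor
# of 2^q - 1, conditional on trial factoring to 2^tf_bits finding nothing.
# Heuristics used (both standard but crude):
#   * P(a prime factor lies in [2^k, 2^(k+1))) ~ 1/k for Mersenne candidates
#   * P(an n-bit integer is B1-smooth) ~ u^-u with u = n*ln(2)/ln(B1),
#     a rough stand-in for the Dickman rho function
def p1_success_given_tf_miss(q, tf_bits, B1, max_bits=120):
    free_bits = math.log2(2 * q)          # p-1 is divisible by 2*q for free
    total = 0.0
    for k in range(tf_bits + 1, max_bits):
        p_factor_here = 1.0 / k           # factor in bit level k
        u = max((k - free_bits) * math.log(2) / math.log(B1), 1.0)
        p_smooth = u ** (-u)              # cofactor of p-1 is B1-smooth
        total += p_factor_here * p_smooth
    return total

print(p1_success_given_tf_miss(q=57_000_000, tf_bits=72, B1=700_000))
```

Even this toy version exposes the structure under discussion: one can finite-difference it with respect to B1 or tf_bits and see directly how flat the surface is near the optimum.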
[QUOTE=R.D. Silverman;268665]starts with the correct MODEL.[/QUOTE]
I don't think anybody has publicly grappled with modelling the GPU/CPU distinction and its implications. As Bob points out, for the homogeneous computing scenario the situation is well modelled, although many here are less familiar with the modelling than Bob would like. But this thread isn't really about that well-understood problem. This thread is about how GPUs change the situation.

In the current situation, with a relatively small amount of GPU power available, the heuristic solution reached by exhortation and consensus (do as much GPU trial factoring as you can in front of the new LL work) is almost certainly correct. It is also obvious that this heuristic does not scale: at the extreme, you would stop LL tests and have all computers doing trial factoring. Figuring out the limits of the heuristic should start with the correct modelling.
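One way to see where the consensus heuristic stops scaling is to write out the marginal-value rule it implicitly uses. The sketch below is my own construction, not the rule GIMPS actually applies: it assumes TF of bit level k costs about twice bit level k-1, that a factor lies in bit level k with probability about 1/k, and that a found factor saves roughly two LL tests (the first test plus its double-check). All parameter values in the test call are made up:

```python
# Toy marginal-value rule: deepen trial factoring one more bit only
# while the expected LL time saved exceeds the cost of that bit.
# Assumptions (all crude):
#   * TF of bit level k costs tf_unit_cost * 2^k (doubles per bit)
#   * P(factor in bit level k) ~ 1/k
#   * a found factor saves about two LL tests (test + double-check)
def optimal_tf_depth(ll_cost, tf_unit_cost, start_bits=60, max_bits=90):
    k = start_bits
    while k < max_bits:
        bit_cost = tf_unit_cost * 2.0 ** k
        expected_saving = (1.0 / k) * 2.0 * ll_cost
        if bit_cost >= expected_saving:   # the next bit no longer pays
            break
        k += 1
    return k
```

With separate GPU and CPU pools the two costs are measured in different currencies; as GPU supply grows, the effective tf_unit_cost falls and the loop runs deeper, which is exactly the scaling question raised above.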
[QUOTE=wblipp;268676]I don't think anybody has publicly grappled with modelling the GPU/CPU distinction and its implications. As Bob points out, for the homogeneous computing scenario the situation is well modelled, although many here are less familiar with the modelling than Bob would like. [/QUOTE]
I don't care about their familiarity. I do care about their reluctance to discuss the math! It seems to be yet more 'willful ignorance'. If they are unfamiliar, then they can learn by participating in the discussion. One can't learn if one does not participate. And even the homogeneous situation is not well modelled.

And finally, let me point out that the numbers one gets out of a correct model are not in themselves important. What a correct model gives is the ability to do a SENSITIVITY ANALYSIS, i.e. shadow prices, the values of the dual variables. This allows us to see the impact of CHANGING one of the bounds upon the overall cost. It also lets us see the partial derivatives of one variable with respect to another, e.g. how does changing B1 in P-1 affect the best value for B2?

Finally, let me ask: if they don't think that getting the right answer is important, then why are they bothering with the question (what are optimal bounds) in the first place?
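As a concrete illustration of the sensitivity analysis being described (my own toy example; the cost function below is a made-up stand-in, not a real P-1 cost model), central finite differences on any cost(B1, B2) give the partial derivatives, and evaluating them near the optimum shows how shallow the surface is:

```python
import math

# Made-up stand-in for a P-1 cost model: linear work terms minus a
# logarithmic "benefit" term.  Its optimum is at B1 = 1e5, B2 = 5e6.
def cost(B1, B2):
    return B1 / 1e5 + B2 / 1e7 - math.log(B1) - 0.5 * math.log(B2)

# Central finite differences: numerical partial derivatives of f at (x, y).
def sensitivity(f, x, y, h=1e-3):
    dfdx = (f(x * (1 + h), y) - f(x * (1 - h), y)) / (2 * x * h)
    dfdy = (f(x, y * (1 + h)) - f(x, y * (1 - h))) / (2 * y * h)
    return dfdx, dfdy

dB1, dB2 = sensitivity(cost, 1e5, 5e6)
print(dB1, dB2)                           # both ~0 at the optimum
# a 20% perturbation of B1 barely moves the cost: the "round hill"
print(cost(1.2e5, 5e6) - cost(1e5, 5e6))
```

The near-zero change from a 20% perturbation is the quantitative version of the "walking a few feet from the summit" remark earlier in the thread.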
[QUOTE=R.D. Silverman;268677]Finally, let me ask: If they don't think that getting the right answer is important, then why are they bothering with the question (what are optimal bounds) in the first place?[/QUOTE]Perhaps they are interested in an answer which is "good enough", as evaluated by their own utility function?
Paul
[QUOTE=xilman;268682]Perhaps they are interested in an answer which is "good enough", as evaluated by their own utility function?
Paul[/QUOTE] Which begs two questions:
'What is good enough?'
'How do they know that their own utility function is anywhere close to being correct (in terms of being "reasonably" close to optimal)?'
Well, at least we're out of the math forum now.
[QUOTE=R.D. Silverman;268685]Which begs two questions:
'What is good enough?' ...drivel (cut) [/QUOTE] [URL="http://www.youtube.com/watch?v=z4sKdiWlLR8"]Google "beg the question"[/URL]

David

PS Next stop, the soap box :smile:
[QUOTE=R.D. Silverman;268685]Which begs two questions:
'What is good enough?' 'How do they know that their own utility function is anywhere close to being correct (in terms of being "reasonably" close to optimal)?'[/QUOTE]That which satisfies their own curiosity, to both questions.

Paul
[QUOTE=xilman;268691]That which satisfies their own curiosity, to both questions.
Paul[/QUOTE] One hopes that these people are not involved in 'designing bridges' or other similar activity. Another hackneyed phrase: "they don't set the bar very high, do they?"