[QUOTE=R.D. Silverman;398398]I ran 1000 curves at first limit 500M. I won't speak for others as to what they have done.[/QUOTE]
Bob, can you expand on the ECM work you've done? Were these 1000 curves only for the seven mentioned numbers (now six, since 7+3,293 has been factored), or have you run that many on all Homogeneous Cunningham composites? And did you run stage 1 only, or stage 2 as well for each?

I've run about 3700 curves at B1 = 11e7 on these six numbers, and there had previously been close to 1000 curves done via Paul's ECMNet server. If Bob's curves ran both stages, then I calculate this is equivalent to a total of approximately 8900 curves run at 11e7. So debrouxl, it sounds like another 1100 curves will get you to the level you want. When I finish up some other work, I can do that.

Paul, would it make any sense to resurrect the ECMNet server that you used to run for the Homogeneous Cunninghams, so people can easily contribute to ECM pre-testing prior to running NFS on these? Based on SNFS difficulties, there are maybe 40 or so composites that already have enough ECM, with the amount needed for the others varying from epsilon to about 20000@11e7 (or maybe a little less, if Bob already did 1000@5e8 on all of them). They could perhaps be added to the server in groups with roughly the same SNFS difficulty. If NFS@Home is only interested in difficulties in the 240s (as seems to be the case), then we have about 75 composites which fit that bill and would make a good starting point. |
[QUOTE=jyb;398438]Bob, can you expand on the ECM work you've done? Were these 1000 curves only for the seven mentioned numbers (now six, since 7+3,293 has been factorized), or have you run that many on all Homogeneous Cunningham composites? And did you run stage 1 only, or stage 2 as well for each?
[/QUOTE] I ran 1000 curves with B1 = 500M and B2 = default GMP-ECM limit on the entire set of composites. [QUOTE] So debrouxl, it sounds like another 1100 curves will get you to the level you want. When I finish up some other work, I can do that. [/QUOTE] Pointless. The time would be better spent on NFS. I suggest that you compute the marginal probability of finding another factor given the work already done. Multiply by the number of remaining composites to get the total number of expected factors. Then compare it against the number of factors NFS will find if the same amount of time is applied.......... |
[QUOTE=R.D. Silverman;398439]I ran 1000 curves with B1 = 500M and B2 = default GMP-ECM limit on the entire set of composites.[/QUOTE]
Thanks. So that's equivalent to 4200 curves @11e7, which gives us approximately 5000 curves for all composites, plus more for the ones I mentioned.

[QUOTE=R.D. Silverman;398439]Pointless. The time would be better spent on NFS.[/QUOTE] You may be right. However, it's really not up to us. If whoever runs NFS@Home demands that the numbers get that much ECM before they will allow them to be queued for NFS, then we don't have the luxury of deciding on our own how to allocate our work. And given that the ECM work which *was* done didn't manage to find a P49, I can understand their thinking. Sure, it's just one example (essentially nothing but anecdotal evidence), but it's most unpleasant to do NFS on a 172-digit number and find that it has a P49 and a P52 which could have been found with much less work.

[QUOTE=R.D. Silverman;398439]I suggest that you compute the marginal probability of finding another factor given the work already done. Multiply by the number of remaining composites to get the total number of expected factors. Then compare it against the number of factors NFS will find if the same amount of time is applied..........[/QUOTE] That's a good suggestion. Unfortunately I don't know how to do that. Given a factor known to be of a certain size, I can calculate the probability that it will be found by a given amount of ECM work, but I don't know how to calculate the prior. Perhaps you can point me in the right direction?

When you decided to run 1000 curves @5e8 on these numbers, did you do that calculation? And did it tell you that a run of 1000 curves was worth it, but that running any more than that was not? In general there's a 2/9 rule of thumb for ECM pre-testing on SNFS numbers. I don't know the calculations behind that rule of thumb, but I always assumed that somebody had determined it was worth it, i.e. that the probabilities made it worthwhile. Are you suggesting that this rule is flawed? That we would find more factors if we stopped running ECM and moved to NFS earlier (where feasible)? |
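The curve bookkeeping in the post above can be sketched as follows. The conversion factor of roughly 4.2 curves at B1 = 11e7 per two-stage curve at B1 = 5e8 is inferred from the totals quoted in this thread, not derived from first principles, so treat it as an assumption:

```python
# Rough ECM curve bookkeeping for the six remaining numbers.
# Assumption: one two-stage curve at B1 = 5e8 contributes about as much
# progress toward the 50-digit level as ~4.2 curves at B1 = 11e7
# (a ratio consistent with the totals quoted in the thread).
EQUIV_5E8_TO_11E7 = 4.2

curves_11e7_local = 3700    # jyb's own curves at B1 = 11e7
curves_11e7_ecmnet = 1000   # prior work via Paul's ECMNet server
curves_5e8 = 1000           # Bob's curves at B1 = 5e8, both stages

total_equiv = (curves_11e7_local + curves_11e7_ecmnet
               + curves_5e8 * EQUIV_5E8_TO_11E7)
print(f"equivalent curves at B1 = 11e7: {total_equiv:.0f}")  # ~8900
```

With ~10000 curves as the target level, this is where the "another 1100 curves" figure comes from.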
[QUOTE=jyb;398444]Thanks. So that's equivalent to 4200 curves @11e7 which gives us approximately 5000 curves for all composites, plus more for the ones I mentioned.
You may be right. However, it's really not up to us. If whoever runs NFS@Home demands that the numbers get that much ECM before they will allow them to be queued for NFS, then we don't have the luxury of deciding on our own how to allocate our work.[/QUOTE] They don't know what they are talking about....... What matters is the marginal return for additional CPU time....

[QUOTE]And given that the ECM work which *was* done didn't manage to find a P49, I can understand their thinking.[/QUOTE] They are not thinking..... They should READ my paper, cited below.

[QUOTE]Sure, it's just one example (essentially nothing but anecdotal evidence), but it's most unpleasant to do NFS on a 172-digit number and find that it has a P49 and a P52 which could have been found with much less work.[/QUOTE] They are idiots. ECM is [b]always[/b] going to miss some factors. It is a probabilistic method. Optimal parameter selection only gives a (1-1/e) probability of success. What matters is not that some factors get missed; what matters is how many are found in total for a GIVEN amount of work. Besides...... SNFS rips through a C172 very quickly.

[QUOTE]That's a good suggestion. Unfortunately I don't know how to do that.[/QUOTE] Discussed in my joint paper with Sam Wagstaff: A Practical Analysis of ECM. Read it! It also discusses how to know when to switch to NFS...... |
[QUOTE=R.D. Silverman;398454][QUOTE=jyb;398444]You may be right. However it's really not up to us. If whoever runs NFS@Home demands that the numbers get that much ECM before they will allow them to be queued for NFS, then we don't have the luxury of deciding on our own how to allocate our work.[/QUOTE]
They don't know what they are talking about....... What matters is the marginal return for additional CPU time....[/QUOTE] I agree wholeheartedly that that's what matters. But my point still stands. No matter what you (or anyone else) may think of their requirements, if you want to play in their game you have to play by their rules.

[QUOTE=R.D. Silverman;398454]They are idiots. ECM is [b]always[/b] going to miss some factors. It is a probabilistic method. Optimal parameter selection only gives a (1-1/e) probability of success. What matters is not that some factors get missed; what matters is how many are found in total for a GIVEN amount of work.[/QUOTE] Yes, we are in complete agreement about ECM's probabilistic nature, and about what matters. I don't have enough information to know whether I agree with your assertion that the time to start NFS is now, though I'm working on that. But I do know that calling them idiots is both gratuitously unkind and highly likely to be incorrect.

[QUOTE=R.D. Silverman;398454]Besides...... SNFS rips through a C172 very quickly.[/QUOTE] Not when its SNFS difficulty is 247, as it was in this case. Finding the P52 by ECM, leaving a C120 for GNFS, saved an enormous amount of time.

[QUOTE=R.D. Silverman;398454][QUOTE=jyb;398444]That's a good suggestion. Unfortunately I don't know how to do that. Given a factor known to be of a certain size, I can calculate the probability that it will be found by a given amount of ECM work, but I don't know how to calculate the prior. Perhaps you can point me in the right direction?[/QUOTE] Discussed in my joint paper with Sam Wagstaff: A Practical Analysis of ECM. Read it! It also discusses how to know when to switch to NFS......[/QUOTE] Thanks. I will, at long last, do that. However, I note that you neglected to answer the really important part of my post:

[QUOTE=jyb;398444]When you decided to run 1000 curves @5e8 on these numbers, did you do that calculation? And did it tell you that a run of 1000 curves was worth it, but that running any more than that was not? In general there's a 2/9 rule of thumb for ECM pre-testing on SNFS numbers. I don't know the calculations behind that rule of thumb, but I always assumed that somebody had determined it was worth it, i.e. that the probabilities made it worthwhile. Are you suggesting that this rule is flawed? That we would find more factors if we stopped running ECM and moved to NFS earlier (where feasible)?[/QUOTE] Lest you just send me to your paper again, I'll point out that each of these questions can be answered with a single word. Will you oblige? |
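The marginal-return computation under discussion can be illustrated with a toy model. It uses the standard heuristic that a random large integer has a prime factor with between a and b decimal digits with probability roughly ln(b/a); every concrete number below (the digit range, the 0.4 coverage, the count of 100 composites) is an illustrative assumption, not data from this thread:

```python
import math

# Toy version of the marginal-return computation suggested above.
# Heuristic: a random large integer has a prime factor with between
# a and b decimal digits with probability ~ ln(b/a).
def p_factor_in_digit_range(a, b):
    return math.log(b / a)

# Suppose prior ECM has effectively cleared factors below ~50 digits,
# and the extra curves would give partial coverage of 50-55 digits.
p_exists = p_factor_in_digit_range(50, 55)  # ~0.095
p_found_given_exists = 0.4                  # assumed coverage (illustrative)
n_composites = 100                          # assumed remaining count

expected_factors = n_composites * p_exists * p_found_given_exists
print(f"expected factors from the extra ECM: {expected_factors:.1f}")
```

The comparison then pits this expectation against the number of complete factorizations the same CPU time would buy via NFS.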
[QUOTE=R.D. Silverman;398454]....What matters is the marginal return for additional CPU time....
... What matters is how many are found in total for a GIVEN amount of work.[/QUOTE] It's the right answer to the wrong problem. The Wagstaff and Silverman paper treats the "GIVEN amount of work" as the total computing resources applied to the problem, and seeks to optimize the rate at which factors are found. But in today's situation, only some of the resources are under the control of people seeking to optimize that rate. Large amounts of ECM and sieving are being done by people interested in optimizing BOINC credit, collecting BOINC badges, and winning BOINC challenges. Some of the post-processing power is under the control of people with still different goals, not directly related to factor generation.

In the short run, a "solution" that asserts the "uncontrollable resources" should behave differently is infeasible. In the longer run, changing the behavior of the "uncontrollable resources" to better suit our goals is a social engineering / sales problem, not a resource allocation problem. |
[QUOTE=fivemack;398406]That may or may not be true, but was definitely unkind. You should not have said it.[/QUOTE]
Now, if all of you are jumping on my head, I am very sorry, and I apologize to RDS for saying it. I didn't see Serge saying he won't work on this project anymore (in fact, I didn't find his post even after the discussion, but I didn't look very hard), and I was a bit upset about RDS omitting him when I saw it was mostly his name all over the list. I just wanted to bite back at RDS.

Now, you all know that RDS is one of the guys I respect here, in spite of his "go back to the books" attitude (or in fact due to that attitude, I am not sure, :razz: and although I find myself the target of it sometimes, generally he is right about it). In the end, it was just a joke, and I wouldn't get upset if I were the object of such a joke in such a situation; I would laugh together with the guy making it. But well, our public interfaces are different. One more reason I should not have said it is that I was not following or contributing to this project myself. But that is a different story.

Now you give me a ~C160 for GNFS (as I said, I didn't follow the project) and I will make it up to you all by factoring it this week. |
Requesting about t55 of ECM work for an SNFS difficulty-247 job is indeed simply the 2/9 rule of thumb in action: (2/9) × 247 ≈ 55.
Don't bother running those last ~1100 curves at B1=11e7 :smile: I have just queued "7293_minus_5293" to NFS@Home's 14e. The coefficients of the polynomial produced by the "phi" tool are the expected ones: [code]n: 679660429198449109556771567955002013165529448982166244468167757307695812529238339253490512873694755480821951426706147823508039414953995600613513153702352796883089439899876132284219106050323937644445448391660394457621413543312558713
# 7^293-5^293, difficulty: 249.16, skewness: 1.06, alpha: 0.00
# cost: 7.78633e+18, est. time: 3707.78 GHz days (not accurate yet!)
skew: 1.058
type: snfs
c6: 5
c0: -7
Y1: -17763568394002504646778106689453125
Y0: 256923577521058878088611477224235621321607
m: 340511660667102771498497213973450856145348094424677003409399811765429266470542518671790975266516389868486340919727252725924306298970862958729767872565756007180630746543683124968335540213895275373933212053138138462212906329799119842
rlim: 134217727
alim: 134217727
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6[/code] |
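As a sanity check, the coefficients above can be verified algebraically: with Y0 = 7^49 and Y1 = -5^49, the homogenized sextic 5x^6 - 7 satisfies 5·(7^49)^6 - 7·(5^49)^6 = 35·(7^293 - 5^293), which vanishes modulo any divisor of 7^293 - 5^293. A quick sketch (computing Y0 and Y1 from the powers rather than relying on the transcribed literals):

```python
# Verify the SNFS polynomial 5*x^6 - 7 for 7^293 - 5^293.
# Rational root x = 7^49 / 5^49, encoded as Y0 = 7^49, Y1 = -5^49.
Y0 = 7 ** 49
Y1 = -(5 ** 49)

N = 7 ** 293 - 5 ** 293

# Homogenized evaluation F(Y0, -Y1) for F(x) = 5*x^6 - 7:
value = 5 * Y0 ** 6 - 7 * (-Y1) ** 6
assert value == 35 * N   # 5*7^294 - 7*5^294 = 35*(7^293 - 5^293)
assert value % N == 0    # so the root is valid modulo any cofactor of N
print("polynomial identity holds")
```

The factor of 35 appears because 293 ≡ 5 (mod 6), so the expression must be multiplied by 5·7 before it becomes a value of the sextic.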
[QUOTE=LaurV;398468]Now you give me a ~C160 for GNFS (as I said, I didn't follow the project) and I will make it up to you all by factoring it this week.[/QUOTE]
Unless anyone has any better ideas, in which case I'd happily defer to them, this one would be nice. [code] 15245685683654194070528451784367735927134564102584982532265254072253746820313011591722538299058918175786099377638450142276578178842074686064451121444334808941 2,862- [/code] It's a C158 from the GCW project and much easier by GNFS than by SNFS (difficulty around 263 digits). The smallest remaining HCN is this C161 from 8,3+,248 [code]67913146583098242945234710034040860323734180924846788218077857823909839832664083294529021164010342768692899249006235945630154935616014528457767660874893572764737[/code] but I don't know whether anyone is working on it. Paul |
[QUOTE=wblipp;398465]It's the right answer to the wrong problem. The Wagstaff and Silverman paper treats the "GIVEN amount of work" as the total computing resources applied to the problem, and seeks to optimize the rate a which factors are found. But in today's situation, only some of the resources are under control of people seeking to optimize the rate at which factors are found. Large amounts of ECM and Sieving are being done by people interested in optimizing BOINC credit, collecting BOINC Badges, and winning BOINC Challenges.
[/QUOTE] So? They don't get to decide which numbers get done or how they are done. NFS assignments are not in their control. People who know what they are doing are in control. |
[QUOTE=R.D. Silverman;398476]So? They don't get to decide which numbers get done or how they are done. NFS assignments are not in their control. People who know what they are doing are in control.[/QUOTE]They do decide which get done. I've had some accepted and some declined.
The disconnect, as I see it, is that you are applying your cost-benefit analysis to a system which is controlled by those with a different analysis. Just as you don't have to use their service because it goes against your analysis, they don't have to provide a service to anyone who disagrees with theirs. Seems fair to me. |