(my emphasis)
[quote=R.D. Silverman;530088]It should be obvious to anyone reading this thread that "missed it" refers to being missed by ECM trials. It was not missed. [b]It was simply never run.[/b] To be an "ECM miss" would require that it was run.[/quote] I see at least two people reading the thread to whom this was [i]not[/i] obvious. Possibly because you had recently stated as a fact that you [i]had[/i] run it: [quote=R.D. Silverman;530044][b]I ran 5,2,- a long time ago.[/b] It is likely that it was not among the first holes [b]at the time I ran it.[/b][/quote]So, which [b]R.D. Silverman[/b] are we supposed to believe? |
[QUOTE=Dr Sardonicus;530105](my emphasis)
I see at least two people reading the thread to whom this was [i]not[/i] obvious. Possibly because you had recently stated as a fact that you [i]had[/i] run it: So, which [b]R.D. Silverman[/b] are we supposed to believe?[/QUOTE] I ran numbers from the 5,2- TABLE. The number in question was erroneously omitted from the list of numbers that I ran. I was not aware that the number had been missed until it was just now factored. Surely this is clear? People were aware that I was running the first five holes in each table. I told others which numbers I intended to run. |
[QUOTE=R.D. Silverman;530110]I ran numbers from the 52,- TABLE. The number in question was
erroneously omitted from the list of numbers that I ran. I was not aware that the number was missed until it was just factored. Surely this is clear? People were aware that I was running the first five holes in each table. I told others which numbers I intended to run.[/QUOTE]I have no reason to doubt you. However, it might have led to fewer misunderstandings if you had stated "I ran [b]some[/b] numbers from the 5,2- table" or, perhaps, "I believe I ran the remaining numbers in the 5,2-table". Pedants are everywhere. I plead guilty as charged. :poop: happens to all of us. Your alleged mis-statement pales into insignificance compared with some I've made. |
[QUOTE=Dr Sardonicus;530105](my emphasis)
I see at least two people reading the thread to whom this was [i]not[/i] obvious. Possibly because you had recently stated as a fact that you [i]had[/i] run it: So, which [b]R.D. Silverman[/b] are we supposed to believe?[/QUOTE] I stated that I ran the 5,2- table. People who matter (those who have been involved in this effort, people who have contributed and are contributing, and people who are actively setting up new trials) knew that this meant the first 5 holes to t55 and the rest of the table to t45. I [b]did[/b] run the entire table. There is no central repository that tracks how many curves/limits have been run. Those who have not been involved and are not in the process of getting involved are irrelevant to this discussion. They do not "have a need to know". And a subject that I have repeatedly harped on over the years is that people are too concerned about "ECM misses". They seem to get upset if a small factor was missed by ECM. I say "so what?". It happens. It is not important. |
For me, even as a bystander (I do not participate in the project, but only ran one or two pointy assignments in the past, without knowing much about what I was doing), it was very clear from RDS's first post what happened, and I think that all the pedantry was uncalled for. I am with RDS here. Further "you are more guilty than me" posts will be moved/deleted.
Don't piss me off today, when SWMBO put sugar in my coffee by mistake (instead of putting it in hers). :rant: |
[QUOTE=R.D. Silverman;530144]I stated that I ran the 5,2- table. People who matter (those who have been involved
in this effort, people who have contributed and are contributing, and people who are actively setting up new trials) knew that this meant the first 5 holes to t55 and the rest of the table to t45. I [b]did[/b] run the entire table. [COLOR="Red"]There is no central repository that tracks how many curves/limits have been run. Those who have not been involved and are not in the process of getting involved are irrelevant to this discussion. They do not "have a need to know". [/COLOR] And a subject that I have repeatedly harped on over the years is that people are too concerned about "ECM misses". They seem to get upset if a small factor was missed by ECM. I say "so what?". It happens. It is not important.[/QUOTE] With regard to the emphasized lines above, strictly speaking this is true. However, for anybody who is considering factoring one of these numbers with NFS, it is always nice to have a sense of how much ECM has previously been done on it first. "Need" might be a little strong, but it's certainly a reasonable thing to "[I]want[/I] to know". And though there isn't any real way to know for sure how much ECM people have done on their own without telling anyone, I have been trying for a while now, little by little, to create such a central repository. In particular, my ECMnet server at ecm.unshlump.com:8194 contains all the information I know about curve counts/limits, and I am reasonably diligent about keeping it up to date with everything that gets reported to me. |
[QUOTE=jyb;530172]
<snip> And though there isn't any real way to know for sure how much ECM people have done on their own without telling anyone, I have been trying for a while now, little by little, to create such a central repository. In particular, my ECMnet server at ecm.unshlump.com:8194 contains all the information I know about curve counts/limits.[/QUOTE] This page shows some factorizations that, at first glance, seem impossible. For example, it lists 11,4,291- as p99.p122, but the "method" listed says "ECM" with B1 = 43M ...... I presume the method used was NFS, but the web page does not say so. Also: Do the curve counts reflect just your work? Do they include the YoYo counts? The data does not seem to include YoYo counts. Many of the numbers were done with 19k curves at 110M by YoYo.... Isn't that data available at [url]http://www.rechenkraft.net/yoyo/download/download/stats/ecm/[/url] ??? BTW, when I run numbers with GMP-ECM, I do [b]not[/b] use the default B2 for a given B1. I use a value of B2 that yields equal run times in step 1 and step 2. This is typically quite a bit higher than the default. It also reduces the number of curves needed to achieve a given digit level for factors that are sought. [reduces with respect to the GMP-ECM default]. Note that the B2/B1 ratio needed to achieve equal time is a function of the size of the number and of the machine being used. For an HCN near 10^200 I was using B1 = 1G and B2 = 10^14 on my home PC for some of my trials. |
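The equal-time rule described above can be sketched numerically. This is a hypothetical illustration, not Silverman's actual procedure or anything GMP-ECM provides: the two timing functions below are made-up cost models (in practice you would time a few real curves on your own machine and number), and `equal_time_b2` is an assumed helper that simply bisects for the B2 at which the modeled step 2 time matches the step 1 time.

```python
# Toy cost models (assumptions, not measurements): step 1 cost taken as
# linear in B1, step 2 cost taken as growing like sqrt(B2).
def step1_time(b1):
    return 1.0e-6 * b1

def step2_time(b2):
    return 2.0e-3 * b2 ** 0.5

def equal_time_b2(b1, tol=1e-3):
    """Bisect for the B2 at which the modeled step 2 time equals
    the modeled step 1 time for the given B1."""
    target = step1_time(b1)
    lo, hi = float(b1), float(b1) * 1e6      # bracket for the search
    while hi / lo > 1 + tol:
        mid = (lo * hi) ** 0.5               # geometric midpoint
        if step2_time(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

b2 = equal_time_b2(1_000_000_000)            # B1 = 1G under the toy model
```

With these invented cost constants the equal-time B2 comes out near 2.5e11; the B2/B1 ratio depends entirely on the cost models, which is exactly the point made above about it varying with the number and the machine.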
[QUOTE=R.D. Silverman;530192]This pages shows some factorizations that, at first glance, seem impossible.
For example, it lists 11,4,291- as p99.p122, but the "method" listed says "ECM" with B1 = 43M ...... I presume the method used was NFS, but the web page does not say so. Also: Do the curve counts reflect just your work? Do they include the YoYo counts? The data does not seem to include YoYo counts. Many of the numbers were done with 19k curves at 110M by YoYo.... Isn't that data available at [url]http://www.rechenkraft.net/yoyo/download/download/stats/ecm/[/url] ??? BTW, when I run numbers with GMP-ECM, I do [b]not[/b] use the default B2 for a given B1. I use a value of B2 that yields equal run times in step 1 and step 2. This is typically quite a bit higher than the default. It also reduces the number of curves needed to achieve a given digit level for factors that are sought. [reduces with respect to the GMP-ECM default]. Note that the b2/B1 ratio needed to achieve equal time is a function of the size of the number and of the machine being used. For an HCN near 10^200 I was using B1 = 1G and B2 = 10^14 on my home PC for some of my trials.[/QUOTE] Note also that your count webpage seems to omit some trials that I told you I did. I told you I ran t55 on the first 5 holes, but that data seems to be omitted. This was done with B1 = 1G for all numbers and (typically) 2000 curves. There was some variation in curve counts depending on the value of B2 that I chose (which depended on the machines being used and the size of the numbers). |
[QUOTE=R.D. Silverman;530192]This pages shows some factorizations that, at first glance, seem impossible.
For example, it lists 11,4,291- as p99.p122, but the "method" listed says "ECM" with B1 = 43M ...... I presume the method used was NFS, but the web page does not say so. Also: Do the curve counts reflect just your work? Do they include the YoYo counts? The data does not seem to include YoYo counts. Many of the numbers were done with 19k curves at 110M by YoYo.... Isn't that data available at [url]http://www.rechenkraft.net/yoyo/download/download/stats/ecm/[/url] ??? BTW, when I run numbers with GMP-ECM, I do [b]not[/b] use the default B2 for a given B1. I use a value of B2 that yields equal run times in step 1 and step 2. This is typically quite a bit higher than the default. It also reduces the number of curves needed to achieve a given digit level for factors that are sought. [reduces with respect to the GMP-ECM default]. Note that the b2/B1 ratio needed to achieve equal time is a function of the size of the number and of the machine being used. For an HCN near 10^200 I was using B1 = 1G and B2 = 10^14 on my home PC for some of my trials.[/QUOTE] When I receive a factor report for an HCN, I want it reflected in the ECMnet server so that the server stops handing out that number. I could do this by manually editing the server's data file, but that requires taking the server down while I do that, plus it's always a slightly risky thing to do. Instead I just prepare a factor report and send it to the server. The ECMnet server assumes that all work is performed by an ECMnet client and returned that way, so "method" is expected to only be ECM, P-1, or P+1, and it expects a B1 value to display. You can identify these reports on the status page by the fact that the "Machine" field is listed as "special". All others should be real reports of work done by an ECMnet client. Essentially, if you're looking at such a factor in the ECMnet server status page, you can take the factors as correct, but ignore the method and B1 fields.
The curve counts do not reflect just my work, they reflect all work I know about, including that done by you and Yoyo@home, but with the following proviso: curve counts are normalized to make comparison between different composites easy. What I mean by "normalized" is that we assume that work is done with the standard B1 values, and that a particular B1 level is completed before moving on to the next one. So if e.g. only 3000 curves have been completed at 43M, but then 19000 are done at 110M, then we first fill up the B1 level to 7557, using a conversion factor between curves at 110M and curves at 43M, then add the residual curves done to the count for curves at 110M. The conversion factors used between different levels are simply the ratio of B1 values. I.e. a curve at 110M counts as 110/43 ≈ 2.558 curves at 43M. Converting work between different B1 values in this way is obviously not entirely accurate (in the sense of reflecting the probabilities of finding factors at various digit counts), but it's probably close enough to still give useful information, and it's the one that the server software uses when assessing work done for each composite. So while Yoyo@home may have done 19600 curves @110M for e.g. 5+3,1395M, the server status page shows this as 17884 curves, because the rest of them went toward filling up the remaining curves at 43M. As for your work on the first 5 holes, you never told me until two days ago that you used the B1 and B2 values that you did. All you said in the past was that you had done a t55. So the server reflects 17884 curves @110M for those numbers. And of course the whole notion of "first 5 holes" is quite problematic, since that's a moving target. Many composites have moved into the first 5 holes since you did the work you did, and since you didn't keep any records of which actual numbers you did the work on, it's quite difficult to know what to record. 
Nonetheless, I did a great deal of work trying to figure it out based on when you made reports and when factors were found in the past. [QUOTE=R.D. Silverman;530194]Note also that your count webpage seems to omit some trials that I told you I did. I told you I ran t55 on the first 5 holes, but that data seems to be omitted. This was done with B1 = 1G for all numbers and (typically) 2000 curves. There was some variation in curve counts depending on the value of B2 that I chose. (which depended on the machines being used and the size of the numbers).[/QUOTE] Can you give me an example of a specific number which you think you took to t55 that isn't reflected in the server? Just because it's a first-5-hole now doesn't mean it was when you did your work. |
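The fill-up arithmetic jyb describes can be sketched as follows. This is my reading of the description in the post, not the actual ECMnet server code; the two B1 levels and the 7557-curve target for the 43M level match the example given, but the real level targets and the server's bookkeeping may differ.

```python
# Sketch of the curve-count normalization described above. One curve at the
# higher B1 is credited as b1_high/b1_low curves at the lower B1 until the
# lower level is full; the remaining curves are credited at their own level.
def normalize(done_low, done_high, b1_low=43_000_000, b1_high=110_000_000,
              target_low=7557):
    """Return (credited_low, credited_high) curve counts."""
    ratio = b1_high / b1_low                     # 110/43 ~ 2.558
    deficit = max(0.0, target_low - done_low)    # curves still owed at low B1
    used_high = min(done_high, deficit / ratio)  # high-B1 curves spent on it
    return done_low + used_high * ratio, done_high - used_high

# The example from the post: 3000 curves done at 43M, 19000 at 110M.
low, high = normalize(3000, 19000)
# The 43M level fills to its 7557 target; roughly 1781 of the 19000
# high-B1 curves are consumed doing so, leaving about 17219 at 110M.
```

The exact credited totals on the server will differ whenever its internal level targets differ from the illustrative 7557 used here.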
[QUOTE=jyb;530204]
Can you give me an example of a specific number which you think you took to t55 that isn't reflected in the server? Just because it's a first-5-hole now doesn't mean it was when you did your work.[/QUOTE] 4,3,427-. It doesn't show any curves at B1 = 1G. Indeed, I don't see any listings that have B1 = 1G. |
[QUOTE=jyb;530204] What I mean by "normalized" is that we assume that work is done with the standard B1 values, and that a particular B1 level is completed before moving on to the next one. So if e.g. only 3000 curves have been completed at 43M, but then 19000 are done at 110M, then we first fill up the B1 level to 7557, using a conversion factor between curves at 110M and curves at 43M, then add the residual curves done to the count for curves at 110M. The conversion factors used between different levels are simply the ratio of B1 values. I.e. a curve at 110M counts as 110/43 ≈ 2.558 curves at 43M.
[/QUOTE] A theoretical note. It is not a critique of your methodology. This "conversion" is not really correct. It works, more or less, with respect to the lower target size, but not with respect to larger factors. A t55 "requires" about 2300 curves at B1 = 1G. It also "requires" about 8800 curves at B1 = 260M. [according to the -v option in GMP-ECM] But the former has a HIGHER probability of finding 60 or 65 digit factors. The probabilities are not linear with respect to the amount of work done. 2300 curves at 1G is about the same amount of [b]work[/b] as 9200 curves at 250M, but the probabilities for the former are larger for larger factors. To see this, note that if we run a t50, the probability of success is 1-1/e. But running 2*t50 does not double the probability. It simply increases to 1-1/e^2. Running at B1 = 1G is actually sub-optimal if the target is 55 digits. I could achieve a t55 with less work by running with a lower B1, but I would reduce the chance of finding larger factors. When running a trial I prefer to run with a larger, sub-optimal B1 on the chance of finding factors larger than my target. [QUOTE] Converting work between different B1 values in this way is obviously not entirely accurate (in the sense of reflecting the probabilities of finding factors at various digit counts), but it's probably close enough to still give useful information, and it's the one that the server software uses when assessing work done for each composite. [/QUOTE] We agree on this. See just above. |
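The diminishing-returns point above is easy to check numerically, assuming the usual model where completing k full t-levels of work gives success probability 1 - exp(-k):

```python
import math

def p_success(t_levels):
    # Probability of at least one success after completing t_levels full
    # t-levels, under the standard 1 - exp(-k) model referenced above.
    return 1.0 - math.exp(-t_levels)

p1 = p_success(1)   # one full t50: about 0.632, i.e. 1 - 1/e
p2 = p_success(2)   # double the work: about 0.865, i.e. 1 - 1/e^2
```

So the second t50 adds only about 0.23 of probability, not another 0.63, which is exactly why "twice the curves" does not mean "twice the chance".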