I deleted a couple of my posts and a response and edited another after reading back through many of the posts in this thread. I should have done that to begin with. Most of my questions have been answered. It's nice to know that we only need to doublecheck tests with fftlen differences. That seems like a very manageable task.
|
[QUOTE=gd_barnes;229623]I deleted a couple of my posts and a response and edited another after reading back through many of the posts in this thread. I should have done that to begin with. Most of my questions have been answered. It's nice to know that we only need to doublecheck tests with fftlen differences. That seems like a very manageable task.[/QUOTE]
Let me know how you want to proceed. The task is large, but manageable. I have been able to update the files here, [url]http://sourceforge.net/projects/openpfgw/[/url], so there is less to navigate through. The website should link the platform-specific version for your OS behind the big green button. I didn't link to specific files/versions because users who want to double-check will want both 3.3.5 and 3.3.6, but if someone wants to put links to each version into a post, they are welcome to do so. |
I had over 87000 tests complete with PFGW on R928. Only 19 used different FFT lengths with PFGW 3.3.6. I was expecting to re-run thousands of tests, so this will only set me back about 30 minutes. I still need to look again at 1000 < n < 15000, though.
|
[quote=rogue;229666]Let me know how you want to proceed. The task is large, but manageable.
I have been able to update the files here, [URL]http://sourceforge.net/projects/openpfgw/[/URL], so there is less to navigate through. The website should link the platform-specific version for your OS behind the big green button. I didn't link to specific files/versions because users who want to double-check will want both 3.3.5 and 3.3.6, but if someone wants to put links to each version into a post, they are welcome to do so.[/quote] Let's work backwards from base 1029, since the greatest chance of error is in the larger bases, but let's skip bases with > 25 k's remaining for now. We'll knock out the low-hanging fruit first. My preference is to have entirely accurate 1k-remaining and proven/1k/2k/3k-remaining lists. In other words, it is better to have a base that currently shows 500 k's remaining when it should really show only 498 than a base that shows 1 or 2 k's remaining when it should actually be proven, since proving bases is the point of the project. This is only a suggestion. If anyone wants to doublecheck a base that he personally likes, regardless of its size, k's remaining, or whoever originally worked on it, we certainly won't argue. Anyone feel like doublechecking base 3? :smile: |
[QUOTE=gd_barnes;229673]Let's work backwards from base 1029, since the greatest chance of error is in the larger bases, but let's skip bases with > 25 k's remaining for now. We'll knock out the low-hanging fruit first. My preference is to have entirely accurate 1k-remaining and proven/1k/2k/3k-remaining lists. In other words, it is better to have a base that currently shows 500 k's remaining when it should really show only 498 than a base that shows 1 or 2 k's remaining when it should actually be proven, since proving bases is the point of the project.[/QUOTE]
I agree that we should start on the larger bases. The problem will be managing the list of k's that need double-checking. Do we need a sticky thread for that? Once I complete my current reservations on small conjectures, I should be able to rip through dozens of bases fairly quickly. Gary, since we only need to check the k's that are remaining, base 3 shouldn't be too hard. Just use the list of remaining k's to produce an ABC2 file (as shown earlier by Max). It might need to be done in multiple iterations, but it should be really fast. I put over 700 k's on a single line, so even if one puts only 100 values on a single line, it wouldn't take too many iterations with an ABC2 file. I recommend that double-checking only sieve up to about 1e9; 1e10 is too deep considering the percentage of k's that will remain. I suspect more time might be spent on conjectures where n is really high. A few tests at high n can take more time than entire small bases. Finally, we need to ensure that users are using 3.3.6 so that no work going forward needs to be double-checked. |
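The ABC2 approach described above can be sketched with a short helper. This is a minimal sketch, not an official tool: the `in { ... }` list syntax for the k values is my reading of PFGW's ABC2 format, so verify it against the abcfileformats documentation shipped with PFGW before relying on it.

```python
# Sketch: build an ABC2 input file for double-checking the remaining k's of a
# base. The "a: in { ... }" list syntax is an assumption about PFGW's ABC2
# format -- check the abcfileformats docs shipped with PFGW.

def make_abc2(base, ks, n_min, n_max, sign=-1):
    """Return ABC2 file text covering k*base^n+sign for every k in ks."""
    op = "+1" if sign == 1 else "-1"
    lines = [
        f"ABC2 $a*{base}^$b{op}",                          # candidate form
        "a: in { " + " ".join(str(k) for k in ks) + " }",  # remaining k's
        f"b: from {n_min} to {n_max}",                     # n range to recheck
    ]
    return "\n".join(lines) + "\n"

# Hypothetical example: three remaining k's for R928, rechecking 1000 <= n <= 20000.
text = make_abc2(928, [2, 14, 36], 1000, 20000)
print(text)
```

The same file would then be fed to both PFGW 3.3.5 and 3.3.6 so their FFT-length choices can be compared.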
R928 has been double-checked. I only had to redo fewer than 600 tests for 1000 < n < 20000, out of over 600,000 tests in total.
|
I've had some more time today to check some of my past/current bases with the proper 3.3.5. It's difficult to estimate the amount of double-checking effort beforehand. Similar bases can behave differently.
For reference, the percentages of tests that had different FFT lengths and needed a double-check were:

S35: (n<10K) 1.6%, (10K<n<31K) 0.5%
S55: (n<25K) 3.3%, (25K<n<50K) 3.1%, (50K<n<115K) 0%
S100: (n<25K) 0.7%, (25K<n<100K) 4.2%
S102: (n<25K) 2.1%, (25K<n<100K) 1.2%
R275: (n<150K) 0%
S914: (n<25K) 0.1%, (25K<n<100K) 0.2%
S930: (n<25K) 9.4%, (25K<n<100K) 14.1%

I've re-run the tests with 3.3.6 for S35, S55, S100, S102 and S914. All residues matched. These bases, along with R275, should be considered completely double-checked up to their current testing limits. I will take on double-checking the rest of the bases I've worked on (R42, R133, S133, S189, R272, S917, S930). I'll do 1 or 2 bases per day (depending on the number of tests) and give an update next week when done. |
So, if I understand post #14 correctly, I should first recheck my reserved bases S17 and S19 up to n=25K to see if there is any difference, right?
If so, what then? Do I have to retest the complete base? S17 has been tested really, really far; a recheck would take many months :( What about S63? I think I have to recheck it too, right? |
[quote=Xentar;229989]So, if I understand post #14 correctly, I should first recheck my reserved bases S17 and S19 up to n=25K to see if there is any difference, right?
If so, what then? Do I have to retest the complete base? S17 has been tested really, really far; a recheck would take many months :( What about S63? I think I have to recheck it too, right?[/quote] It is just the tests for which the new version uses a different FFT length that need doublechecking. There shouldn't be that many. See post #14 for instructions on finding which tests need redoing. The lower the base, the fewer there are. AFAIK no wrong residues have been found b!=n |
[QUOTE=henryzz;229991]It is just the tests for which the new version uses a different FFT length that need doublechecking. There shouldn't be that many. See post #14 for instructions on finding which tests need redoing. The lower the base, the fewer there are. AFAIK no wrong residues have been found b!=n[/QUOTE]
Ok, I re-read the post. So, when using the -F parameter, PFGW just writes the FFT size to a file but doesn't do a PRP test. And then I just have to re-test the candidates where the FFT size differs. Thank you. :smile: |
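Comparing the FFT-size dumps from the two versions can be scripted. Here is a sketch under a simplifying assumption: each version's output has already been reduced to plain `candidate fftlen` pairs, one per line. The real `-F` output format is not shown in this thread, so the parsing below is illustrative, not PFGW's actual format.

```python
# Sketch: list candidates whose FFT length differs between two PFGW versions.
# Input format is an assumption: one "candidate fftlen" pair per line, e.g.
#   17*63^1234+1 6144
# Actual pfgw -F output will need its own parsing.

def load_ffts(lines):
    """Map candidate -> FFT length from 'candidate fftlen' lines."""
    fft = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            fft[parts[0]] = int(parts[1])
    return fft

def needs_retest(old_lines, new_lines):
    """Candidates present in both dumps whose FFT length changed."""
    old, new = load_ffts(old_lines), load_ffts(new_lines)
    return sorted(c for c in old if c in new and old[c] != new[c])

# Hypothetical data: only the second candidate changed FFT length.
old = ["17*63^100+1 4096", "17*63^200+1 4608"]
new = ["17*63^100+1 4096", "17*63^200+1 5120"]
print(needs_retest(old, new))  # → ['17*63^200+1']
```

Only the candidates this returns (minus any k's already eliminated by a prime) would need a fresh PRP test with 3.3.6.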
[QUOTE=Xentar;229992]And then I just have to re-test the candidates where the FFT size differs.[/QUOTE]
And only if a prime hasn't been found. I don't expect S63 to be too bad, but one can only guess at the number of retests needed. I have found missing primes for Steven Harvey's Generalized Woodall project by looking for retests. The interesting thing is that the new primes I found were missed by an older version of PFGW (possibly 1.x) and would have been found had PFGW 3.3.4 been used. |