Thank you. As I had just finished interval #13, I was going to have a run-off between the two versions. So I interrupted my running version, took a duplicated output file and removed the +1s for the new ABC format, placed that file in the folder containing the new xyyxsieve on my other computer, and ran it. Unfortunately it terminated with a "Segmentation fault: 11".
[QUOTE=pxp;554974]Thank you. As I had just finished interval #13, I was going to have a run-off between the two versions. So I interrupted my running version, took a duplicated output file and removed the +1s for the new ABC format, placed that file in the folder containing the new xyyxsieve on my other computer, and ran it. Unfortunately it terminated with a "Segmentation fault: 11".[/QUOTE]
It is possible that you do not have enough memory. The version of the software I provided is not designed to support very large ranges of x. I suggest you continue with the version I posted a few weeks ago.
After I restarted the interrupted run (using its output file as my new input), the program began with an ETC of September 1. That's quite a change from the previous mid-January 2021. Now those early ETC calculations aren't firm, but I wonder if the initial file size somehow skews the ETC guess. As I have a handful of other sieves going, I will interrupt a couple of those to see if I get a similar advance in ETC dates using the size-reduced output files as new inputs.
[QUOTE=pxp;554998]Now those early ETC calculations aren't firm but I wonder now if the initial file size somehow skews the ETC guess. As I have a handful of other sieves going I will interrupt a couple of those to see if I get a similar advance in ETC dates using the size-reduced output files as new inputs.[/QUOTE]
I am seeing much earlier ETC dates on restarted interruptions. Perhaps a better guess than file size is the multi-core implementation. As all of my sieves are run as a single process on a 6-core machine, I use -W6. Perhaps the ETC does not take that performance improvement into account.
[QUOTE=pxp;555009]I am seeing much earlier ETC dates on restarted interruptions. Perhaps a better guess than file size being the cause is multi-core implementation. As all of my sieves are run as a single process on a 6-core machine, I use -W6. Perhaps the ETC does not take that performance improvement into account.[/QUOTE]
The ETC is based upon when the sieving started compared to where it currently is, using the last prime that has been successfully sieved. Various things impact this calculation, including the type of sieve and the number of threads. Regarding the type of sieve: sieves such as xyyxsieve and gcwsieve start slow, but "p/sec" increases as terms are removed. This means that each "chunk" takes longer for small p than for large p, which causes the ETC to move earlier as p increases. I could change this, but I haven't thought much about it since so few sieves are affected by it.
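The linear extrapolation described here can be sketched as follows (my own illustration, not mtsieve's actual code; the function name and parameters are hypothetical):

```cpp
// Naive ETC estimate as described above: extrapolate the remaining time from
// the average sieving rate (in p) observed since the sieve started. For
// sieves like xyyxsieve/gcwsieve, whose p/sec rises as terms are removed,
// this average understates the future rate, so the estimate keeps shrinking
// as p grows -- matching the "much earlier ETC dates" seen on restarts.
#include <cassert>

double etcSeconds(double elapsedSeconds, double pStart, double pNow, double pEnd) {
    assert(pNow > pStart && elapsedSeconds > 0);
    double rate = (pNow - pStart) / elapsedSeconds;  // average p sieved per second
    return (pEnd - pNow) / rate;                     // time left at that rate
}
```

For example, a sieve halfway through its p range after 100 seconds reports an ETC of another 100 seconds, even if the second half will actually run faster.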
I have verified all PRPs for x <= 14000. The range for 14000 < x <= 20000 has about 1.5 million terms in it. I will be starting on that soon.
[QUOTE=rogue;557236]I have verified all PRPs for x <= 14000. The range for 14000 < x <= 20000 has about 1.5 million terms in it. I will be starting on that soon.[/QUOTE]
Verification done through x <= 15000. I estimate about 7 weeks to finish the double check for x <= 20000. I made an interesting observation when looking at primes/PRPs of this form. The last column is the number of primes in the range. Note the relatively even distribution despite the geometric growth of the number of terms per range (approximately (max x)^2). Is that expected or is that unusual? The numbers for x > 15000 have not been verified yet.
[code]
    0 <= x <  1000  87
 1000 <= x <  2000  87
 2000 <= x <  3000  92
 3000 <= x <  4000  80
 4000 <= x <  5000  80
 5000 <= x <  6000  72
 6000 <= x <  7000  69
 7000 <= x <  8000  80
 8000 <= x <  9000  79
 9000 <= x < 10000  61
10000 <= x < 11000  75
11000 <= x < 12000  63
12000 <= x < 13000  70
13000 <= x < 14000  67
14000 <= x < 15000  68
15000 <= x < 16000  66
16000 <= x < 17000  50
17000 <= x < 18000  71
[/code]
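For what it's worth, the rough evenness is what a prime-number-theorem heuristic predicts. A sketch of the argument (my own, not from this thread; the function is hypothetical): a random N is prime with probability about 1/ln N, and ln(x^y + y^x) is approximately max(y ln x, x ln y), so although a band of x contains many more terms, each term is a far larger number and individually far less likely to be prime.

```cpp
// Heuristic expected count of Leyland primes x^y + y^x (x > y >= 2) per band
// of x, using prob(N prime) ~ 1/ln N and ln(x^y + y^x) ~= max(y ln x, x ln y).
// Order-of-magnitude only: it ignores divisibility constraints (e.g. the term
// is even whenever x and y have the same parity), so absolute values are off,
// but it shows why per-band counts need not grow like the ~x^2/2 term count.
#include <algorithm>
#include <cmath>

double expectedPrimes(int xLo, int xHi) {
    double total = 0.0;
    for (int x = std::max(xLo, 3); x < xHi; x++)
        for (int y = 2; y < x; y++) {
            double lnN = std::max(y * std::log((double)x), x * std::log((double)y));
            total += 1.0 / lnN;  // heuristic chance this term is prime
        }
    return total;
}
```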
Here are updated counts based upon pxp's searching (barring mistakes in my counting):
[code]
    0 <= x <  1000  87
 1000 <= x <  2000  87
 2000 <= x <  3000  92
 3000 <= x <  4000  80
 4000 <= x <  5000  80
 5000 <= x <  6000  72
 6000 <= x <  7000  69
 7000 <= x <  8000  80
 8000 <= x <  9000  79
 9000 <= x < 10000  61
10000 <= x < 11000  75
11000 <= x < 12000  63
12000 <= x < 13000  70
13000 <= x < 14000  67
14000 <= x < 15000  68
15000 <= x < 16000  66
16000 <= x < 17000  50
17000 <= x < 18000  71
18000 <= x < 19000  72
19000 <= x < 20000  79
20000 <= x < 21000  62
21000 <= x < 22000  79
22000 <= x < 23000  71
23000 <= x < 24000  73
24000 <= x < 25000  56
25000 <= x < 26000  47
26000 <= x < 27000  33
[/code]
Note that for x > 24000 the distribution changes, but that is because those ranges are not fully tested. I'm not even certain that the entire search space for x < 24000 has been fully tested. Every x < 23000 looks like it has been tested, but I can't speak for x > 23000, as there appear to be some gaps (from my perspective).
[QUOTE=rogue;557866]Note that for x > 24000 the distribution changes, but that is because those ranges are not fully tested. I'm not even certain of the entire search space for x < 24000 has been fully tested. Every x < 23000 looks like it has been tested, but I can't speak for x > 23000 as there appear to be some gaps (from my perspective).[/QUOTE]
You are correct. By mid-October I will have finished interval #16 which will guarantee x < 24000. To get to x < 25000 I will need to finish interval #17, which I haven't started yet. My current long-term goal is 150000 decimal digits which will bring this up to x < 33000.
[QUOTE=pxp;557883]You are correct. By mid-October I will have finished interval #16 which will guarantee x < 24000. To get to x < 25000 I will need to finish interval #17, which I haven't started yet. My current long-term goal is 150000 decimal digits which will bring this up to x < 33000.[/QUOTE]
Sieving is really slow for 20000 < x <= 40000. There are over 10 million terms remaining at a little over 1e9, and it will take me months to sieve deeply enough using 6 cores. The problem is that xyyxsieve needs a lot of memory to sieve the range efficiently (over 10GB for 6 workers), and the memory access gets expensive. I tried adding a prefetch, as that should help with speed, but my initial attempts have hurt performance. I'm sure prefetching is the key, but I am very likely doing something wrong. With 6 cores I am only testing about 30 p per second. I could possibly gain some speed by looking at x with few y terms and avoiding building a power table for those x in memory. The same could be said for y with few x terms. I'm not certain there are enough x or y with few terms for sieving to benefit from it. As soon as I complete x <= 20000, I will peel off ranges of y in groups of 1000 from the 10-million-term data set. Those are the smaller terms from the big range, and as I pull them out sieving should pick up a little speed.
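The prefetch idea can be illustrated in isolation (my own sketch; xyyxsieve's actual inner loop and data layout differ, and the function name and DIST value here are made up for the example):

```cpp
// Software prefetch for a loop that gathers from a large table, as discussed
// above. __builtin_prefetch (a GCC/Clang builtin) hints the cache to fetch a
// *future* element while the current one is processed. The prefetch distance
// is the tricky part: too short and the line isn't in cache yet, too long and
// it is evicted before use -- mistuning it can easily make the loop slower,
// consistent with the performance regression described above.
#include <cstddef>
#include <cstdint>
#include <vector>

uint64_t sumWithPrefetch(const std::vector<uint64_t>& table,
                         const std::vector<uint32_t>& order) {
    const size_t DIST = 16;  // prefetch distance in iterations; machine-dependent
    uint64_t sum = 0;
    for (size_t i = 0; i < order.size(); i++) {
        if (i + DIST < order.size())
            __builtin_prefetch(&table[order[i + DIST]], 0, 1);  // read, low locality
        sum += table[order[i]];  // the real work on the current element
    }
    return sum;
}
```

The payoff only appears when the table is much larger than the last-level cache, which matches the >10GB working set mentioned above.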
Actually there are 68 primes for 15000 <= x < 16000. I miscounted above.
The range for x < 16000 has now been double-checked. No missing primes/PRPs.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.