[QUOTE=Ken_g6;232069]FYI, I have a new version of TPSieve out, based on, and in the same archive as, the newer PPSieve: [URL="https://sites.google.com/site/kenscode/prime-programs/ppsieve-bin.zip?attredirects=0&d=1"]v0.3.10[/URL] ([URL="https://sites.google.com/site/kenscode/prime-programs/ppsieve-src.zip?attredirects=0&d=1"]source[/URL]).
Despite being newer, this version is unfortunately a little slower in many cases. But it's likely to be faster for people with AMD processors.[/QUOTE]
On my Phenom II X6 1055T:

32-bit SSE2 old version: 138M p/sec
32-bit SSE2 new version: 131M p/sec
64-bit old version: 85M p/sec
64-bit new version: 163M p/sec

:smile: Good job!
I just noticed something rather odd with tpsieve (CPU) today. All along, I have been using a batch file to run the tpsieve commands for my sieving ranges here, and currently it looks like:
[I]tpsieve-x86-windows-sse2.exe -i 480000-484999_30aug2010.txt -p 1450e12 -P 1455e12 -N 485000[/I]

As in the officially suggested command line from the variable-n sieve reservation thread, I have the "-N 485000" parameter on there. But I got to thinking just now that it was really rather redundant, since we're running with a sieve file. So I tried leaving it off once, and was surprised to see that memory usage jumped to over a gigabyte! :shock: Normally, it's only 25MB or so once it gets into the main sieve loop. Needless to say, I changed it back and restarted the program right away (having only 2 GB of total RAM in the system).

Ken, any idea why this is? At first I thought that including the "-N 485000" parameter might be forcing it into no-sieve-file mode, but that can't be, since it takes a good 45-60 seconds to load the sieve file each time I run the program. Surely it would be skipping that part if it was really running without a sieve file. Yet if it's got the whole sieve file in there somewhere, how come it's only using 25 MB of memory? (FYI, it does have 1 GB of virtual memory allocated...though that would raise the question of why all that's in virtual memory with -N 485000, but in active memory without it.)
[QUOTE=mdettweiler;234133](FYI, it does have 1 GB of virtual memory allocated...though that would raise the question of why all that's in virtual memory with -N 485000, but in active memory without it.)[/QUOTE]
That would seem to be the correct question. -N might crop out some N's from the sieve file that aren't in the range you want to sieve. That would save memory, but it would also decrease the virtual memory use.

Edit: The reason to use -k, -K, and -N (and -n? I'll have to check on that.) is to avoid a first pass on the sieve file to find those values.
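To illustrate the idea, here's a rough sketch in Python (hypothetical only; this is not tpsieve's actual code, and the one-"k n"-pair-per-line file format is just an assumption) of why knowing the bounds up front saves a pass over the sieve file:

[CODE]# Hypothetical sketch of why passing bounds (-N, -k, -K) can skip a first
# pass over the sieve file. This is NOT tpsieve's actual source; the file
# format assumed here (one "k n" pair per line) is an assumption too.

def load_with_bounds(path, n_max):
    # One pass: the N bound is known up front, so out-of-range N's can be
    # cropped while reading and never accumulate in memory.
    candidates = []
    with open(path) as f:
        for line in f:
            k, n = map(int, line.split())
            if n <= n_max:
                candidates.append((k, n))
    return candidates

def load_without_bounds(path):
    # Two passes: scan the whole file once just to discover the N range,
    # then read it again to actually load the candidates.
    with open(path) as f:
        n_max = max(int(line.split()[1]) for line in f)   # first pass
    with open(path) as f:
        candidates = [tuple(map(int, line.split())) for line in f]  # second pass
    return n_max, candidates
[/CODE]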
[QUOTE=Ken_g6;234138]That would seem to be the correct question. -N might crop out some N's from the sieve file that aren't in the range you want to sieve. That would save memory, but it would also decrease the virtual memory use.
Edit: The reason to use -k, -K, and -N (and -n? I'll have to check on that.) is to avoid a first pass on the sieve file to find those values.[/QUOTE]

Hmm, I see. Might the "first pass" on the sieve file load it into active memory, but the "second pass" into virtual memory if the former wasn't done already--even though the end result is of course the same (possibly a bug)?
I guess this is more of a historical curiosity than anything else, but I have some doubts about PrimeGrid's claimed sieve depth of p=200T for the n=666666 quad sieve: [url]https://www.primegrid.com/forum_thread.php?id=1450[/url]
The number of candidates remaining simply doesn't reflect a sieve depth that high. There were 34,190,344 remaining candidates after sieving k=1-41T, which would equate to 833,911 candidates per 1T, or 416,955 candidates/T if only odd values of k were included.

I looked through my old progress save files and found that at p=200T, the n=1.7M file only had 350,799 candidates/T, and the n=3.322M file only had 350,830 candidates/T. For both n values, I reached 416,955 candidates/T at p=50T, not p=200T.

FWIW, their posted twin and Sophie probabilities (42.3% chance of at least one twin and 66.7% chance of at least one Sophie) are correct if all of their 34,190,344 candidates were indeed sieved to p=200T. At p=200T, the odds of a random n=666666 candidate being prime (not necessarily twin or Sophie) are around 1 in 7880. Their actual number was around 8700 tasks per prime, but a small percentage of those tasks were likely doublechecks.
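For anyone who wants to check the arithmetic, these figures fall out of the usual Mertens-style heuristic, Pr(prime) ≈ e^γ·ln(p)/ln(N) for a candidate near N sieved to depth p. A minimal back-of-the-envelope sketch (my own check, assuming ln N ≈ 666666·ln 2 and that both members of a surviving pair are independently prime; not PrimeGrid's actual calculation):

[CODE]# Sanity check of the figures above using the standard Mertens-style
# heuristic: a candidate near N, sieved to depth p, is prime with
# probability about exp(gamma) * ln(p) / ln(N). A sketch only, not
# PrimeGrid's actual method; all variable names are mine.
from math import exp, log

GAMMA = 0.5772156649015329      # Euler-Mascheroni constant

ln_N = 666666 * log(2)          # ln(k * 2^666666) ~ 666666 * ln 2 for small k
p_sieve = 200e12                # claimed sieve depth, p = 200T
candidates = 34_190_344         # remaining after sieving k = 1-41T

print(candidates / 41)          # ~833,911 candidates per 1T of k
print(candidates / 41 / 2)      # ~416,955 candidates/T, odd k only

pr_prime = exp(GAMMA) * log(p_sieve) / ln_N
print(1 / pr_prime)             # ~7879, i.e. about 1 in 7880

# At least one twin: both members of a surviving pair must be prime, so
# expect roughly candidates * pr_prime^2 twins in the whole file.
expected_twins = candidates * pr_prime**2
print(1 - exp(-expected_twins)) # ~0.423, matching the posted 42.3%
[/CODE]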
1 Attachment(s)
[QUOTE=biwema;102672]
I have the feeling that we should rethink the credit system of the sieving effort. Right now, the person who did the most sieving will also get credit for the twin prime find. The result is that many people try to start sieving their own n (sometimes even beyond 500000) or their own range of a k (which makes absolutely no sense mathematically). If some people see that they have no chance of doing the most effort, they may also move to some other task. Also, constellations like lucky plus or minus are forgotten.

I suggest not specially crediting one person for sieving (like GIMPS itself). It is not fair either, but it will concentrate the sieving effort on the current and next candidate only.

biwema
[/QUOTE]

Now that the n=1.7M search is underway, I figured that I'd officially change the policy. If a twin or SG is found for either n=1.7M or n=3.322M, the top siever will not share credit with the twin or SG finder on [url]https://primes.utm.edu/[/url] or on the successor site at [url]https://t5k.org/[/url]. Only a footnote on the official announcement will be provided (similar to [url]https://www.primegrid.com/download/SGS_2618163402417_1290000.pdf[/url]; see attached for reference).

However, unlike PrimeGrid's n=1.29M search, credit will continue to be shared between the twin/SG discoverer and the top LLR tester in terms of the number of candidates tested for that n-value.

The reason for this change is that the LLR work for those n-values is far, far greater than the quad-sieving work. There's also the issue of me monopolizing the quad-sieving efforts, which isn't really fair for twin/SG credit sharing. So I'm donating my sieving credit, but I expect a lot of LLR tests done in return :smile: