Newpgen worthwhile for n*2^n+/-1 numbers?
PrimeGrid has finished their sieving, and they only got up to about 2^33 (a little under 8.6G).
I'm wondering if it would be a good idea to throw NewPGen at these numbers? And if so, how high should I go? Also, I'm running Linux, and I've heard that NewPGen can be run from the command line under Linux; is that true? I tried a number and it was banging along at about 10G every 1:30 (m:s).
[QUOTE=jasong;119918]PrimeGrid has finished their sieving and they only got up to about 2^33(a little under 8.6G).
I'm wondering if it would be a good idea to throw newpgen at these numbers? And, if, so, how high should I go? Also, I'm running Linux, and I've heard that newgen can be run commandline under Linux, is that true? I tried a number and it was banging along at about 10G every 1:30(m:s).[/QUOTE] Your thread title and thread text do not belong together. If you have paid any attention to PrimeGrid, you would know that n*2^n+/-1 are known as Cullens and Woodalls. Also, if you have paid any attention to the PrimeGrid message boards you would know that you need to use a combination of MultiSieve and gcwsieve to sieve numbers of this form. BTW, I also gave Geoff the code so that gcwsieve would not need to rely on MultiSieve for the initial sieving, but AFAIK, he has not implemented it. |
NewPGen cannot handle these series. It can do "fixed-n variable-k" or "fixed-k variable-n" sieves, but Cullen-Woodalls are "variable-n variable-k" (ok, well, n /is/ the k), and so NewPGen can't.
I am aware of only two publicly available programs -- rogue's MultiSieve and geoff's gcwsieve -- that can handle these series, and the latter is being used by PrimeGrid. EDIT: Beaten to the post by rogue :)
I don't think you two considered my post for much longer than it took to skim it. 10G in a minute and a half is nothing to sneeze at, and you'll note that I was informed enough about gcwsieve to know that the PrimeGrid software had gotten to about 2^33. (I suppose I didn't state that last part straight out, but I think it would be kind of obvious, given a little thought, to someone knowledgeable about the sieving scene in general.)
So I'll restate my question, which I don't consider answered: does anyone think it would be worthwhile to take the various numbers individually higher using NewPGen -- say, a bit level or two? Or maybe not even a full bit level.
[QUOTE=jasong;119934]I don't think you two considered my post for much longer than it took to skim it. 10G in a minute and a half is nothing to sneeze at, and you'll note that I was informed enough about gcwsieve to know that it had gotten to about 2^33 using PrimeGrid software(I suppose I didn't state that last part straight out, but I think it would be kind of obvious to someone who was knowledgeable of the sieving scene in general, given a little thought)
So, I'll restate my question, which I don't consider answered. Does anyone think it would be worthwhile to take the various numbers individually higher using NewPGen, like say, a bit-level or two. Or, maybe not even a bit level.[/QUOTE] Are you asking if it is faster to take a specific n and sieve it deeper? The easy answer is "no". There is an economy of scale in doing many n concurrently; by doing fewer n, the factor removal rate goes down considerably. Based on my knowledge, doubling the sieving depth will give you about a 3% chance of finding a factor. If you can quadruple the sieving depth in the same amount of time it takes to do a PRP test, then the PRP test is clearly what you want to do. BTW, I believe that gcwsieve will be faster than NewPGen even for a single n, but only testing that hypothesis will prove it correct.
OK, thanks for the answer, and sorry for my stiff response. 10G in a minute thirty seemed fast to me, and I didn't think that had been noticed.
[QUOTE=jasong;119934]I don't think you two considered my post for much longer than it took to skim it.[/quote]
Mea culpa. I am so used to thinking of NewPGen as being used for sieving a series that I didn't even register your question. [QUOTE=jasong;119934]So, I'll restate my question, which I don't consider answered. Does anyone think it would be worthwhile to take the various numbers individually higher using NewPGen, like say, a bit-level or two. Or, maybe not even a bit level.[/QUOTE] OK, let's plug in the numbers:

90 seconds to 2^34 - 2.9% chance of factor
270 seconds to 2^35 - 5.7% chance of factor
630 seconds to 2^36 - 8.3% chance of factor
1350 seconds to 2^37 - 10.8% chance of factor

Breakeven points (i.e., how long an LLR test must take for these to be worthwhile):

To 2^34: 90 + (1 - 2.9%)*t = t ==> t = 90/0.029 = 3100 sec
To 2^35: t = 270/0.057 = 4700 sec
To 2^36: t = 630/0.083 = 7500 sec
To 2^37: t = 1350/0.108 = 12500 sec

So, depending on the t value (LLR test time), you can decide how far you can sieve. NOTE that these are /breakeven/ points -- the optimal sieving point will be somewhat lower (or, alternately, the LLR test time should be greater than what is indicated).
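The figures above line up with the standard heuristic that sieving from p1 up to p2 finds a factor with probability about 1 - ln(p1)/ln(p2). A quick Python sketch (the function names are invented for illustration, not from any sieve program) reproduces both the chances and the breakeven times:

```python
# Heuristic: a candidate already sieved to 2^a gains a factor when sieving
# continues to 2^b with probability roughly 1 - ln(2^a)/ln(2^b) = 1 - a/b.
def factor_chance(sieved_bits, target_bits):
    return 1 - sieved_bits / target_bits

# Sieving pays off when sieve_time + (1 - chance)*t < t, i.e. t > sieve_time/chance.
def breakeven_llr_seconds(sieve_seconds, chance):
    return sieve_seconds / chance

# PrimeGrid's C/W sieve stopped near 2^33; times are from the post above.
for bits, secs in [(34, 90), (35, 270), (36, 630), (37, 1350)]:
    c = factor_chance(33, bits)
    t = breakeven_llr_seconds(secs, c)
    print(f"to 2^{bits}: {c:.1%} chance of factor, breakeven LLR time ~{t:.0f} s")
```

The printed chances (2.9%, 5.7%, 8.3%, 10.8%) match the hand calculation in the post, which suggests that is the estimate being used.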
[QUOTE=jasong;119918]PrimeGrid has finished their sieving and they only got up to about 2^33(a little under 8.6G).[/QUOTE]
I don't think this is right; my current manual reservation is 6300G-6400G, much higher than 2^33. [QUOTE]I'm wondering if it would be a good idea to throw newpgen at these numbers? And, if, so, how high should I go? Also, I'm running Linux, and I've heard that newgen can be run commandline under Linux, is that true? I tried a number and it was banging along at about 10G every 1:30(m:s).[/QUOTE] That is 90 sec for one number? You are using NewPGen with a one-line input file? My 2.88GHz Celeron is sieving about 58,000 numbers for PrimeGrid at a rate of about 46,000 p/sec. That is equivalent to 240G for one number in 90 sec. Unless your machine is 24 times slower than mine, you are wasting your time.
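The 240G and 24x figures follow from straightforward arithmetic, since gcwsieve tests every candidate in the file against each prime p, so one pass of work is shared across the whole file. A sketch (variable names invented here):

```python
# Throughput comparison from the post above.
gcw_rate = 46_000          # primes/sec on the Celeron, whole file at once
gcw_candidates = 58_000    # Cullen/Woodall candidates in that file
newpgen_range = 10e9       # NewPGen: 10G of range per 90 s, single number

# Equivalent single-candidate range gcwsieve covers in the same 90 seconds:
gcw_equiv = gcw_rate * gcw_candidates * 90
print(f"{gcw_equiv / 1e9:.0f}G per candidate per 90 s")  # 240G
print(f"ratio: {gcw_equiv / newpgen_range:.0f}x")        # 24x
```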
Thanks for the response, and sorry for the harsh words. I'm positive I've rushed to respond to something I didn't read adequately at some point in the past; I just can't think of anything in particular. In the meantime, I need to Google 'mea culpa' to find out precisely what it means, since I'm only familiar with the context it's used in, not the precise meaning.
[QUOTE=geoff;119940]I don't think this is right, my current manual reservation is 6300G-6400G, much higher than 2^33.
That is 90sec for one number? You are using NewPGen with a one-line input file? My 2.88GHz Celeron is sieving about 58,000 numbers for PrimeGrid at the rate of about 46,000 p/sec. That is equivilent to 240G for one number in 90 sec. Unless your machine is 24 times slower than mine, you are wasting your time.[/QUOTE] Um, Geoff, there's more than one sieving effort going on at PrimeGrid. Actually, over its short life, there have been three separate sieving projects for three separate objectives: (1) Cullen/Woodall sieving, which uses gcwsieve, the best method at least for the lower p-values (this is the project I'm talking about); (2) the twin prime search, which involves n=333,333 at the moment and involved n=195,000 in the past; and (3) what you're involved in, the Sierpinski sieve, which helps both the Prime Sierpinski Project and the Seventeen or Bust project, which have some overlapping and non-overlapping k's in their initiatives.
[QUOTE=jasong;119942]Um, Geoff, there's more than one one sieving effort that's been going on at PrimeGrid. Actually, over it's short life, there have been three separate sieving projects for three separate objectives, (1) Cullen/Woodall sieving, which uses gcwsieve, which is the best method, at least for the lower p-values (and this is the project I'm talking about),(2) twin prime search, which involves n=333,333 at the moment, and involved n=195,000 in the past, and (3) what you're involved in, which is the Sierpinski sieve, which helps both the Prime Sierpinski project and the Seventeen or Bust project which have some overlapping and nonoverlapping k's in their initiatives.[/QUOTE]
LOL. Now you gone done it :wink: The Sierpinski sieve is running at values at least 1000 times higher than what geoff said. And I'm pretty sure, considering that geoff wrote 2 out of 3 of those sieve programs, that he knows what he's talkin about :smile: Ah, [URL="http://www.primegrid.com/forum_thread.php?id=745"]here[/URL] is the thread in question -- looks like the sieve reservation has reached 2^[B]4[/B]3. [PS: I must be having a really off day -- getting a C/W sieve to 2^33 is like a week's job on a single PC]
[QUOTE=jasong;119942]Um, Geoff, there's more than one one sieving effort that's been going on at PrimeGrid. Actually, over it's short life, there have been three separate sieving projects for three separate objectives, (1) Cullen/Woodall sieving, which uses gcwsieve, which is the best method, at least for the lower p-values (and this is the project I'm talking about),(2) twin prime search, which involves n=333,333 at the moment, and involved n=195,000 in the past, and (3) what you're involved in, which is the Sierpinski sieve, which helps both the Prime Sierpinski project and the Seventeen or Bust project which have some overlapping and nonoverlapping k's in their initiatives.[/QUOTE]
I am not sieving for PSP on PrimeGrid; I am talking about sieving for Cullen/Woodall primes using gcwsieve. gcwsieve is running on my Celeron at 46,000 p/sec for 58,000 Cullen and Woodall numbers. Unless NewPGen has added new capabilities since I last used it, it is the wrong tool for this job. If it does somehow work out to be worthwhile doing more trial factoring on an individual Cullen/Woodall candidate, then that will just show that sieving was stopped too soon. P-1 or ECM factoring is a different matter; it may well be worthwhile doing P-1 or ECM factoring before LLR testing.
If you want to be accurate, the range in P-1 testing was done up to 2^44.7 :wink:
Also, P-1 factoring is a great way to go, as I'm currently finding a factor every 45,444 seconds using Pfactor=. Two LLR tests on my computer take approx. 86,400 seconds, so I'm getting excellent results. ~BoB
Mea [b]maxima[/b] culpa. Pride goeth before a fall (my pride, my fall) and all that stuff.
[QUOTE=geoff;119940]
That is 90sec for one number? You are using NewPGen with a one-line input file? My 2.88GHz Celeron is sieving about 58,000 numbers for PrimeGrid at the rate of about 46,000 p/sec. That is equivilent to 240G for one number in 90 sec. Unless your machine is 24 times slower than mine, you are wasting your time.[/QUOTE] Just to clarify: if you are using NewPGen to sieve a single number of the form k*2^n+1, then there are two different algorithms the program can use. The one used for fixed n is really fast, whereas the one used for fixed k will be slow. So you have to adjust the header line to make sure it is 'fixed n', not 'fixed k'. :smile:
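For intuition on why the fixed-n case is so much faster: with n fixed, each prime p eliminates exactly one residue class of k, found with a single modular exponentiation per prime. A minimal Python sketch of that idea (this is not NewPGen's actual code; the function names are invented):

```python
def small_primes(limit):
    """Basic sieve of Eratosthenes supplying the demo primes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = bytearray(len(flags[i * i :: i]))
    return [i for i, f in enumerate(flags) if f]

def fixed_n_sieve(n, k_lo, k_hi, p_limit):
    """Keep the k in [k_lo, k_hi] where k*2^n + 1 has no factor p <= p_limit."""
    survivors = set(range(k_lo, k_hi + 1))
    for p in small_primes(p_limit):
        if p == 2:
            continue  # k*2^n + 1 is always odd
        # p divides k*2^n + 1 exactly when k = -2^(-n) (mod p);
        # pow with a negative exponent is the modular inverse (Python 3.8+)
        r = (-pow(2, -n, p)) % p
        first = k_lo + ((r - k_lo) % p)  # smallest k >= k_lo in that class
        for k in range(first, k_hi + 1, p):
            survivors.discard(k)
    return sorted(survivors)

print(fixed_n_sieve(10, 1, 50, 100))
```

One modexp handles the entire k-range at once, which is the economy NewPGen's fixed-n mode exploits; the fixed-k (variable-n) direction has no such shortcut.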
If you really want to do a proper test then you would need to do the following...
1) Create a text file.
2) Put this as the first line: 0:M:0:2:1 (the first zero being the sieve start point, and M meaning minus for Woodall units... use P for Cullen).
3) Put all the n and k values in, separated by a space. It should look something like this:

179044366:M:0:2:1
3850248 3850248
3850249 3850249
... etc.

Make sure the Cullens and Woodalls are separated, though. Edit the first number on the first line to be the sieve start point, then set your max p value in NewPGen and run. This would be the only proper way to compare NewPGen and gcwsieve. Also, this method is a lot more work for users in the end, because they need to edit the first number before every sieve! I personally think there will be little difference, but you never know... ~BoB
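The recipe above can be scripted. This sketch writes a Woodall input file in the described format; the file name and n values are placeholders, and since k equals n for Cullen/Woodall numbers the two columns on each line are identical:

```python
def write_woodall_input(path, n_values, sieve_start=0):
    """Write the header and number lines described in the post above."""
    with open(path, "w") as f:
        # first line: sieve start point, M = minus (Woodall); use P for Cullen
        f.write(f"{sieve_start}:M:0:2:1\n")
        for n in n_values:
            # for n*2^n - 1 the multiplier k equals n, so both fields are n
            f.write(f"{n} {n}\n")

write_woodall_input("woodall_demo.txt", [3850248, 3850249, 3850250])
```

A separate file would be written the same way for the Cullen side, per the note about keeping the two forms separated.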
I have just done the conversion and I will be running some tests momentarily...
~BoB
I am having a strange problem... it is not finding the factors it should... (I'm using the test available with gcwsieve's source code)
[quote=popandbob;120261]I am having a strange problem... it is not finding the factors it should... (Im using the test available with gcwsieve's source code)[/quote]
Guess I should open my eyes every once in a while... (fixed k or n only!) Doh! :whistle::no::blahblah: