[QUOTE=Oddball;218623]With the new sieve, it's expected to take 4-5 hours: 1 hour and 10 minutes using the new sieve to p=100M[/QUOTE]
[quote]Can you double both SieveSize and SmallPrimes and rerun it?[/quote] The sweet spot for my PC seems to be increasing SieveSize and SmallPrimes by 50% (SieveSize = 1500000 and SmallPrimes = 9000000). It then takes 1 hour and 3 minutes to sieve a 1T range to p=160M. |
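For reference, those depths are consistent with SmallPrimes simply being the number of primes the sieve loads, so that the final depth is roughly the SmallPrimes-th prime. A minimal sketch of that estimate, assuming that interpretation (the program and function names are mine, not from the sieve program):
[CODE]program DepthEstimate;
{$mode objfpc}

// Rough size of the n-th prime: n*(ln n + ln ln n - 1); within ~1% at these sizes.
function NthPrimeApprox(n: Double): Double;
begin
  Result := n * (Ln(n) + Ln(Ln(n)) - 1);
end;

begin
  WriteLn('SmallPrimes = 6e6 -> p ~ ', NthPrimeApprox(6e6):0:0);  // ~104M, close to the "p=100M" quoted above
  WriteLn('SmallPrimes = 9e6 -> p ~ ', NthPrimeApprox(9e6):0:0);  // ~160M, matching the depth reported here
end.[/CODE]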
[QUOTE=Oddball;218623]3-4 hours to use NewPGen to sieve from p=100M to p=100G.[/QUOTE]
Sounds wrong. Do you mean 30-40 hrs? |
[QUOTE=axn;218677]Sounds wrong. Do you mean 30-40 hrs?[/QUOTE]
Actually, I meant 15-16 hours, which is 12 hours longer than my original estimate. The time was estimated to be from 7PM - 10:30AM, but I accidentally calculated it as 7PM - 10:30PM :blush: |
Clueless.
[QUOTE=axn;218622]I am currently trying to compile a Win64 build. But running into some weird runtime error. Need to troubleshoot :yucky:[/QUOTE]
I haven't used FreePascal before but fed your text file into 64 bit Lazarus and an .exe fell out! :cool: Anyway, the exe runs okay but I got the following compiler messages:
[CODE]lm(44,19) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
lm(82,23) Hint: Converting the operands to "Int64" before doing the multiply could prevent overflow errors.
lm(102,23) Hint: Converting the operands to "Int64" before doing the multiply could prevent overflow errors.
lm(124,23) Hint: Converting the operands to "Int64" before doing the multiply could prevent overflow errors.
lm(205,17) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
lm(212,48) Hint: Converting the operands to "DWord" before doing the subtract could prevent overflow errors.
lm(218,51) Hint: Converting the operands to "DWord" before doing the subtract could prevent overflow errors.
lm(223,18) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
lm(225,51) Hint: Converting the operands to "DWord" before doing the subtract could prevent overflow errors.
lm(312,35) Hint: Converting the operands to "DWord" before doing the add could prevent overflow errors.
Project "lm" successfully built. :)[/CODE]
Is the exe 'safe' to use? (Windows 7 64bit.) |
[QUOTE=Flatlander;218771]
Is the exe 'safe' to use? [/QUOTE] Should be. I have used "proper" data types, so it should be OK. You can spot-check a 100G k range against NewPGen, sieved to the same depth. |
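For anyone else who sees those hints: they are FPC pointing out places where a 32-bit add/multiply is widened to 64 bits only after the operation, so the intermediate result could wrap before the conversion. I don't have the flagged lm lines in front of me, so this is only a generic illustration of the pattern and of the explicit cast that avoids it, not the actual code:
[CODE]program HintDemo;
{$mode objfpc}

var
  a, b: LongWord;   // 32-bit unsigned, i.e. DWord
  prod: Int64;
begin
  a := 100000;
  b := 100000;

  // Hinted form: the multiply is done in 32 bits and only then widened,
  // so it silently wraps (here to 1410065408) before the assignment.
  prod := a * b;
  WriteLn('32-bit multiply, then widen: ', prod);

  // Quiet form: widen one operand first so the whole multiply is done in 64 bits.
  prod := Int64(a) * b;
  WriteLn('64-bit multiply:             ', prod);   // 10000000000
end.[/CODE]
Whether any of the flagged lines can actually overflow depends on the ranges the variables take, which is presumably what the "proper" data types guarantee.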
Not thinking big enough.
SieveSize and SmallPrimes can be bumped up quite a bit. I have tried SmallPrimes of 60e6 (p ~ 1.2e9) and SieveSize of 6e6, and it runs in under 2 hrs. That should shave a lot more hours off the NewPGen sieving.
|
axnSieve rocks!
Using SmallPrimes of 60e6 ("p<=1,190,494,759"), and SieveSize of 6e6 as above:
2hr 55m for a 1T range on one core of a DualCore T4400 laptop. 2.2 GHz, 1 MB L2 cache. Windows 7, 64 bit. Very nice. :smile::tu: Uses 1,268,972 KB of RAM in Task Manager. As you suggest, I'll compare a sample with NPG's output. |
axnSieve
SmallPrimes of 60e6, SieveSize of 60e5 produces identical results to NPG over a 0.05T sample. (The only difference was the header where NPG stopped at a P ten less than axnSieve.)
Testing underway for 90e6/90e5, P=1,824,261,409. Looks like 2T will take about 6hr 15m. (Uses 1,833,380 KB.) I reached a compiler error at 93e6/93e5, but 92e6/92e5 compiled fine. (I won't go that high though.) With 60e6/60e5 a 1T sieve uses 192 MB in NPG, so 2T will fit comfortably, but even with 90e6/90e5 I don't think 3T will be <485 MB. (Hmmm. Might be worth tweaking the program to do 2.5T.) |
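A back-of-the-envelope from those two Task Manager figures (my arithmetic; it blames the whole increase on the larger prime table and ignores the SieveSize bump): going from 60e6 to 90e6 primes costs roughly 19 bytes per extra prime, which gives a feel for how far SmallPrimes can be pushed on a given amount of RAM.
[CODE]program MemPerPrime;
{$mode objfpc}

const
  RamAt60M = 1268972;      // KB in Task Manager with SmallPrimes = 60e6
  RamAt90M = 1833380;      // KB with SmallPrimes = 90e6
  ExtraPrimes = 30000000;  // 90e6 - 60e6

var
  bytesPerPrime: Double;
begin
  bytesPerPrime := (RamAt90M - RamAt60M) * 1024.0 / ExtraPrimes;
  WriteLn('~', bytesPerPrime:0:1, ' bytes per extra prime');  // ~19.3
end.[/CODE]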
[QUOTE=Flatlander;218881]With 60e6/60e5 a 1T sieve uses 192 MB in NPG so 2T will fit comfortably, but even with 90e6/90e5 I don't think 3T will be <485 MB. (Hmmm. Might be worth tweaking the program to do 2.5T.)[/QUOTE]
192 MB is what NewPGen automatically chooses. But it can work with much less while still using fast array mode (96 MB, maybe even 48 MB). I am going to go out on a limb and say that a 384 MB fast array can handle ranges much larger than 20T (yes, 20, not 2). PS:- I remember there being a rule saying something like 6 bytes per k. That'd mean 384 MB can handle 64M (=67108864) candidates in fast array mode. |
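Taking that remembered 6-bytes-per-k figure at face value (it is only a rule of thumb, as noted), the fast-array capacities it would imply for the memory settings mentioned above are just arithmetic:
[CODE]program FastArrayCapacity;
{$mode objfpc}

const
  BytesPerK = 6;   // the remembered rule of thumb; not verified
  Sizes: array[0..3] of Integer = (48, 96, 192, 384);   // MB settings mentioned above

var
  i: Integer;
begin
  for i := 0 to High(Sizes) do
    WriteLn(Sizes[i]:4, ' MB -> ', (Int64(Sizes[i]) * 1024 * 1024) div BytesPerK,
            ' candidates');   // 384 MB -> 67108864, i.e. the 64M quoted
end.[/CODE]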
[QUOTE=Flatlander;218881]Testing underway for 90e6/90e5, P=1,824,261,409. Looks like 2T will take about 6hr 15m. (Uses 1,833,380KB.)
I reached a compiler error at 93e6/93e5 but 92e6/92e5 compiled fine. (I won't go that high though.)[/QUOTE] If there are real savings to be had in allowing the program to sieve higher than 1G, I have a few ideas that can reduce the memory requirement for the bigger primes, possibly allowing you to sieve as high as p=3G. However, I am not sure it is worth it.

Basically, going from 60e6 (~1.2G) to 90e6 (~1.8G) costs you (6h15)/2 - 2h55 = 12 min (assuming both numbers are from the same machine). Implementing the memory-saving measures will probably introduce a slowdown of 10-15% (pure speculation). Let's say that instead of 3h07 for a 1T range, it takes 3h30. So the effective delta would be 35 min instead of 12 min. Can NewPGen cover the same range (i.e. 1.2-1.8G) in 30 min?

I realise that, since 1.8G is already done, the correct analysis should be from 1.8G to _optimal sieve point_. Fine. Can you post some timings for NewPGen to take a 1T (or 2T or whatever) range from p=1.8G to p=3G in increments of 0.2G? That'll give me a clue as to what is a good cutoff point.

PS:- There is another idea that'll give me a 5x speed improvement. But this involves sieving candidates out of order (technically, residue classes mod 7*11*13). So the candidates will have to be sorted after the sieving step. |
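To spell out the break-even arithmetic in that post (the times are the ones quoted above; the 3h30 figure is the guessed 10-15% slowdown, not a measurement):
[CODE]program BreakEven;
{$mode objfpc}

const
  MinsPerHour = 60;
  T60 = 2 * MinsPerHour + 55;           // measured: 1T range at SmallPrimes = 60e6 (2h55)
  T90 = (6 * MinsPerHour + 15) div 2;   // measured: 2T at 90e6 (6h15), halved to a 1T equivalent
  T90Slow = 3 * MinsPerHour + 30;       // guessed: 1T at 90e6 with the memory-saving changes (3h30)

begin
  WriteLn('Extra cost of sieving 1.2G -> 1.8G in-program now:  ', T90 - T60, ' min');      // ~12
  WriteLn('Extra cost after the memory-saving changes (guess): ', T90Slow - T60, ' min');  // 35
  // The deeper built-in sieve only pays off if NewPGen needs longer than that
  // extra cost to take the same 1T range from p=1.2G to p=1.8G.
end.[/CODE]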
[QUOTE=axn;218882]PS:- I remember there being a rule saying something like 6 bytes per k. That'd mean 384 MB can handle 64M (=67108864) candidates in fast array mode.[/QUOTE]
Unfortunately that is not what I experienced. With 32.5M candidates it used normal array mode. |