mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Computer Science & Computational Number Theory (https://www.mersenneforum.org/forumdisplay.php?f=116)
-   -   Wilson-prime search practicalities (https://www.mersenneforum.org/showthread.php?t=16028)

R. Gerbicz 2011-09-01 17:55

Wilson-prime search practicalities
 
[QUOTE=rogue;270580]Thanks. You should consider updating the wiki with the new near-Wilson primes.[/QUOTE]

Done on Wikipedia. I noticed that p=111310567 was also missing: (p-1)! == -1+22p mod p^2. But maybe this was a typo, since a nearby prime was duplicated on the wiki. The table is now sorted.
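For anyone wanting to reproduce such a check independently, here is a minimal brute-force sketch using GMP; it is not the search code, just a direct O(p) verification of a single reported prime, with the prime hard-coded as an example:

[code]
/* Brute-force check that (p-1)! == -1 + k*p (mod p^2) and recovery of k.
   O(p) multiplications: fine for confirming one reported near-Wilson
   prime, far too slow for searching. Build: gcc check.c -lgmp */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    unsigned long p = 111310567UL;    /* prime to verify */
    mpz_t p2, f, k;
    mpz_inits(p2, f, k, NULL);

    mpz_set_ui(p2, p);
    mpz_mul_ui(p2, p2, p);            /* p2 = p^2 */

    mpz_set_ui(f, 1);                 /* f = (p-1)! mod p^2 */
    for (unsigned long i = 2; i < p; i++) {
        mpz_mul_ui(f, f, i);
        mpz_mod(f, f, p2);
    }

    mpz_add_ui(k, f, 1);              /* (p-1)!+1 is divisible by p ... */
    mpz_divexact_ui(k, k, p);         /* ... by Wilson's theorem */
    if (mpz_cmp_ui(k, p / 2) > 0)     /* reduce k to (-p/2, p/2] */
        mpz_sub_ui(k, k, p);

    gmp_printf("%lu -1%+Zdp\n", p, k);  /* e.g. "111310567 -1+22p" */
    mpz_clears(p2, f, k, NULL);
    return 0;
}
[/code]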

rajula 2011-09-03 07:37

[QUOTE=R. Gerbicz;270578]As I promised here you can download and use my code: [url]https://sites.google.com/site/robertgerbicz/wilson[/url]

I've finished the search up to 1e10. There were two new near-Wilson primes.[/QUOTE]

Impressive speed! I felt tempted to leave one core running the code, but without coordination that would most likely be a waste of computing resources.

Speaking of wasted resources: I checked what they are doing at the Ibercivis project: the server gave me 30 primes of size ~2e9 to check, with a looong expected completion time. Needless to say, I didn't finish those.

I was surprised that Toshio wanted a source for the new near-Wilson primes (on Wikipedia), given that he did not provide a valid source for the previous near-Wilson primes found by rogue. Perhaps someone should put the up-to-date information on some other webpage...

MrRepunit 2011-09-03 08:29

[QUOTE=rajula;270707]Impressive speed! I felt tempted to leave one core running the code, but without coordination that would most likely be a waste of computing resources.
[/QUOTE]

Actually, I am already running the range 1E10 to 5E10. We could coordinate the search here in the forum, maybe in ranges of size 1E9, because the current version cannot resume if aborted.

So far I have found 3 near-Wilson primes:

10242692519 -1-97p
11355061259 -1-45p
20042556601 -1+27p

Cheers,
Danilo

R. Gerbicz 2011-09-03 10:27

[QUOTE=MrRepunit;270711]Actually, I am already running the range 1E10 to 5E10. We could coordinate the search here in the forum, maybe in ranges of size 1E9, because the current version cannot resume if aborted.

So far I have found 3 near-Wilson primes:

10242692519 -1-97p
11355061259 -1-45p
20042556601 -1+27p

Cheers,
Danilo[/QUOTE]
Thanks, I just checked with the slow function: these really are near-Wilson primes.
I have modified the code; now it is possible to abort and continue it. It saves the progress after each block of primes that is processed.

fivemack 2011-09-03 11:21

There is a tiny problem in the downloaded code when I use some compilers: the prototype for func() doesn't match the definition of func() because you've changed the first two parameters from mpz_t to lli. Trivial to fix.

Is it asymptotically better to run this on several cores using a small interval on each, or on one core using the largest interval that fits in memory? I have quite a large machine and can allocate 32GB to the process if that would help.

I am a little surprised that when I do

echo -e "100000000000\n100010000000\n10000\n1\n" | time ./a.out

I get

Testing p=100000000073...100001026973, p==5 mod 12, time=1 sec.

and then no further output for at least a thousand seconds; is there something in the implementation to make 5 mod 12 unusually quick? I tried 10^9 .. 10^9+10^8 and it worked fine, so I don't think it's hanging.

MrRepunit 2011-09-03 11:26

[QUOTE=R. Gerbicz;270720]Thanks, I just checked with the slow function: these really are near-Wilson primes.
I have modified the code; now it is possible to abort and continue it. It saves the progress after each block of primes that is processed.[/QUOTE]

Nice work, indeed! Now we can really go for a distributed search.
(Still, I am letting the range 1E10 to 5E10 run with the older version.)

I found yet another "very near" Wilson prime, damn close:
11774118061 -1-1p

R. Gerbicz 2011-09-03 12:11

[QUOTE=fivemack;270726]There is a tiny problem in the downloaded code when I use some compilers: the prototype for func() doesn't match the definition of func() because you've changed the first two parameters from mpz_t to lli. Trivial to fix.

Is it asymptotically better to run this on several cores using a small interval on each, or on one core using the largest interval that fits in memory? I have quite a large machine and can allocate 32GB to the process if that would help.

I am a little surprised that when I do

echo -e "100000000000\n100010000000\n10000\n1\n" | time ./a.out

I get

Testing p=100000000073...100001026973, p==5 mod 12, time=1 sec.

and then no further output for at least a thousand seconds; is there something in the implementation to make 5 mod 12 unusually quick? I tried 10^9 .. 10^9+10^8 and it worked fine, so I don't think it's hanging.[/QUOTE]

Thanks, I will correct that; in one of the first versions of my code func() used the mpz_t type for n1, n2.

In fact, when the code prints "Testing ..." it is still testing those primes: they are not yet checked, but it has found all primes of that type in the interval and computed their product. The speed ratio for the different types is 3:2:1 for large primes, so p==1 mod 3 is the fastest and p==11 mod 12 is the slowest.

It is better to run, on one core, the largest interval that fits in memory. That was the reason I used only one core for my search. (Running t cores, each testing interval/t primes, yields the same speed as one core testing interval primes.)

fivemack 2011-09-03 12:58

OK, I'll take 5e10 to 6e10 (interval size 4e7 seems about right for that)

MrRepunit 2011-09-03 17:40

Wilson-prime search practicalities
 
[QUOTE=R. Gerbicz;270728]
It is better to run, on one core, the largest interval that fits in memory. That was the reason I used only one core for my search. (Running t cores, each testing interval/t primes, yields the same speed as one core testing interval primes.)[/QUOTE]

I am not sure about that. I made some benchmarks up to 5E7 using different intervals and found that the optimal interval size is [B]always[/B] (about) 1/1000 of the upper boundary. Though I did not check very large values, my guess is that there should also be some optimal value that does not use the full memory. I will do some more testing with higher values...

CRGreathouse 2011-09-03 17:43

[QUOTE=MrRepunit;270746]I am not sure about that. I made some benchmarks up to 5E7 using different intervals and found that the optimal interval size is [B]always[/B] (about) 1/1000 of the upper boundary. Though I did not check very large values, my guess is that there should also be some optimal value that does not use the full memory. I will do some more testing with higher values...[/QUOTE]

Your tests showed that memory (cache, etc.) size is irrelevant? :surprised

MrRepunit 2011-09-03 17:50

[QUOTE=CRGreathouse;270748]Your tests showed that memory (cache, etc.) size is irrelevant? :surprised[/QUOTE]
Not irrelevant, but there is an optimal value of the interval.
Here are my timings:

[code]
    pmax  interval  seconds
50000000   2000000     1494
50000000     50000     1024
10000000  10000000      178
10000000   2000000      166
10000000   1000000      163
10000000    500000      168
10000000     10000      110
 5000000   2000000       69
 5000000   1000000       66
 5000000    500000       64
 5000000    100000       73
 5000000     50000       61
 5000000     10000       44
 5000000      5000       42
 5000000      1000       52
 2000000    500000       19
 2000000    200000       18
 2000000    100000       17
 2000000      2000       12
 1000000    500000        7
 1000000    200000        7
 1000000    100000        6
 1000000      1000        5
[/code]

You can see that the optimal interval is always about 1/1000 of the upper boundary.

fivemack 2011-09-03 19:25

Those are figures starting from zero; I think the behaviour is a bit different with (for example) start=1e9 end=1e9+1e8

which I have just run four times, getting

interval=10000: 23287.8 seconds (lots of blocks)
interval=100000: 5348.9 seconds (about 60 blocks)
interval=1000000: 2816 seconds (seven blocks)
interval=10000000: 2162.8 seconds (three blocks)

start=1e11, end=1e11+1e9, interval=1e8 (i.e. very much larger than the ~40 million primes in the range, of which we never look at more than half at once) took 22489 seconds for the first chunk, which was [b]all[/b] the 5-mod-12 primes (whereas interval=1e6 took 13169 seconds for about 10% of the 5-mod-12 primes); it used quite a lot of RAM (up to 3.3 gigabytes), but I have quite a lot of RAM so that's OK.

start=5e10, end=6e10, interval=4e7 is taking up to 3.6 gigabytes at times and hasn't yet finished an interval; this would suggest that interval=2e8, which would hold all the primes and finish the job in three bites, would take about twenty gigabytes.

R. Gerbicz 2011-09-03 20:59

[QUOTE=MrRepunit;270749]Not irrelevant, but there is an optimal value of the interval.
Here are my timings:

[code]
   pmax  interval  seconds
5000000   2000000       69
5000000   1000000       66
5000000    500000       64
5000000    100000       73
5000000     50000       61
5000000     10000       44
5000000      5000       42
5000000      1000       52
[/code]

You can see that the optimal interval is always about 1/1000 of the upper boundary.[/QUOTE]

Yes, there is obviously an optimal interval. But these are really small tests and the times are very close (42 to 73 sec.), owing to the fast calculation of n! mod T^2 at this size.

[QUOTE=fivemack;270756]
start=1e11, end=1e11+1e9, interval=1e8 (i.e. very much larger than the ~40 million primes in the range, of which we never look at more than half at once) took 22489 seconds for the first chunk, which was [b]all[/b] the 5-mod-12 primes (whereas interval=1e6 took 13169 seconds for about 10% of the 5-mod-12 primes); it used quite a lot of RAM (up to 3.3 gigabytes), but I have quite a lot of RAM so that's OK.[/QUOTE]

The code uses O(log(en)*interval) memory; yes, the test uses about ~20 million primes of that range for p==1 mod 3, so the product tree holds only the product of ~20 million primes. But you will also see (much) larger product trees in this test, because when we calculate the tail of (p-1)! mod p^2 the "func" computes the product of approx. interval/2 consecutive integers with a product tree.
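For readers unfamiliar with the trick: a product tree multiplies a range of numbers by recursively combining balanced halves, so GMP's subquadratic multiplication always sees operands of similar size. The sketch below is only an illustration of that idea (the function name and parameters are made up here), not the actual func():

[code]
#include <stdio.h>
#include <gmp.h>

/* product of the consecutive integers n1..n2 via a balanced tree */
static void range_product(mpz_t rop, unsigned long n1, unsigned long n2)
{
    if (n2 - n1 < 8) {                 /* small base case: multiply directly */
        mpz_set_ui(rop, n1);
        for (unsigned long i = n1 + 1; i <= n2; i++)
            mpz_mul_ui(rop, rop, i);
        return;
    }
    unsigned long mid = n1 + (n2 - n1) / 2;
    mpz_t left, right;
    mpz_inits(left, right, NULL);
    range_product(left, n1, mid);      /* lower half */
    range_product(right, mid + 1, n2); /* upper half */
    mpz_mul(rop, left, right);         /* balanced multiply at each node */
    mpz_clears(left, right, NULL);
}

int main(void)
{
    mpz_t f;
    mpz_init(f);
    range_product(f, 2, 100000);       /* = 100000! */
    printf("100000! has %zu bits\n", mpz_sizeinbase(f, 2));
    mpz_clear(f);
    return 0;
}
[/code]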

Thanks for your timings; they show that an interval of k*1e7 at these high p values (p>>1e10) uses a little less than k GB of RAM.

rajula 2011-09-04 07:33

Now that we have some coordination: I'll check the range 6e10 to 7e10.

I decided to collect the data (at least temporarily) on [URL="http://users.jyu.fi/~tamaraja/Wilson.html"]this web page[/URL] to give an easier reference for the reservations and results. If you come up with a more appropriate place (a dedicated and updated thread here? Or another place where other results are also gathered?) I'll remove that page.

fivemack 2011-09-04 09:54

I am currently also doing 100G .. 102G (might as well do my calibration runs in useful territory); 100G-101G is currently in the final stage, writing out about an 80M range of residues every eleven minutes with 600M to go. Reserve 102G-110G also.

fivemack 2011-09-04 10:45

got one!
 
100660783343 -1+87p

(100G..101G complete, took 79427.6 seconds utime and 3.647G maximum memory, using interval size '1e8' i.e. whole interval at once, on one core of 1.9GHz Magny Cours)

fivemack 2011-09-06 07:38

101G-102G complete; 200194.42 seconds on one core of 1.9GHz Magny Cours, using interval 4e6 (11 blocks). Maximum memory usage 0.397GB. No interesting primes found.

[code]
Testing p=101000000051...101405292671, p==11 mod 12, time=131 sec. 35080
Testing p=101000000011...101202702379, p==1 mod 3, time=35211 sec. 10799
Testing p=101000000033...101405467049, p==5 mod 12, time=46010 sec. 19699
Testing p=101202702439...101405417359, p==1 mod 3, time=65679 sec. 12196
Testing p=101405292839...101810835047, p==11 mod 12, time=77875 sec. 40467
Testing p=101405417539...101608187797, p==1 mod 3, time=118342 sec. 11902
Testing p=101405467061...101811138809, p==5 mod 12, time=130244 sec. 18794
Testing p=101608187887...101810900473, p==1 mod 3, time=149038 sec. 11539
Testing p=101810835143...101999999987, p==11 mod 12, time=160577 sec. 25263
Testing p=101810900479...101999999923, p==1 mod 3, time=185840 sec. 11258
Testing p=101811138917...101999999897, p==5 mod 12, time=197098 sec. 12504
Done the st=101000000000-en=102000000000 interval. Time=209602 sec.
200194.42user 1538.25system 58:13:21elapsed 96%CPU
[/code]

R. Gerbicz 2011-09-06 10:20

[QUOTE=fivemack;270936]101G-102G complete; 200194.42 seconds on one core of 1.9GHz Magny Cours, using interval 4e6 (11 blocks). Maximum memory usage 0.397GB. No interesting primes found.
[/QUOTE]

Thanks, your table also shows that the speed ratio is fairly close to the theoretical 1:2:3 for the different types of primes.
(Disregard the last three blocks, because they contain fewer than "interval" primes.)

rajula 2011-09-06 17:57

First (near) hit here:
[INDENT]60373446719 -1-48p[/INDENT]
[SIZE="1"]My new (cheap) computer should arrive soon so I can use more than just my laptop to this and other things...[/SIZE]

MrRepunit 2011-09-06 19:03

I also found a few more:

12896325149 -1+86p
13286279999 -1+52p
21950810731 -1+93p
23607097193 -1+97p
41659815553 -1+55p
42647052491 -1+10p

The range is not completed yet, though...

fivemack 2011-09-06 22:50

Reserve 110G..111G for further calibration exercise: how does interval=2e7 behave differently memory-wise from interval=1e8 when both are larger than the total number of primes?

Jeff Gilchrist 2011-09-06 23:38

I would like to help out with the search. You guys are taking R. Gerbicz's code and compiling it, but what command line parameters are you using?

I see talk about some ranges being better than others, but if you have Core 2 based CPUs, what would the best settings be? Did I see someone say that it is better to run it on 1 core, because running it on multiple cores ends up being the same speed?

Christenson 2011-09-07 01:31

[QUOTE=Jeff Gilchrist;271011]I would like to help out with the search. You guys are taking R. Gerbicz's code and compiling it, but what command line parameters are you using?

I see talk about some ranges being better than others, but if you have Core 2 based CPUs, what would the best settings be? Did I see someone say that it is better to run it on 1 core, because running it on multiple cores ends up being the same speed?[/QUOTE]
Time to rename this thread...something like "Wil(l there be more )son(s of) primes search(ed for?)"

MrRepunit 2011-09-07 07:44

[QUOTE=Jeff Gilchrist;271011]I would like to help out with the search. You guys are taking R. Gerbicz's code and compiling it, but what command line parameters are you using?[/QUOTE]

My compile command is:
icc -O3 -lgmp wilsontest.c -o wilsontest.exe
(needs GMP installed properly, of course)

If you mean the parameters for running it:
There are no real command line parameters, as you have to type the input interactively. But you can mimic the input with echo (assuming that you start from scratch):
echo -e "<start>\n<end>\n<interval>\n0\n" | ./wilsontest.exe

Jeff Gilchrist 2011-09-07 07:53

[QUOTE=MrRepunit;271039]My compile command is:
icc -O3 -lgmp wilsontest.c -o wilsontest.exe
(needs GMP installed properly, of course)[/QUOTE]

No, I meant command line parameters to *run* the program, not compile it. I now have a working 64bit Windows binary if anyone is interested.

I understand the start and end value part, but when it asks how many primes to test, what are people using? Is it best to keep increasing that until you use as much RAM as possible, to speed things up? And someone was saying it is better to use 1 core with lots of RAM instead of multiple cores with the RAM split between them?

I will try the range 7e10 to 8e10 then.

Jeff.

MrRepunit 2011-09-07 08:04

[QUOTE=Jeff Gilchrist;271041]No, I meant command line parameters to *run* the program, not compile it. I now have a working 64bit Windows binary if anyone is interested.

I understand the start and end value part, but when it asks how many primes to test, what are people using? Is it best to keep increasing that until you use as much RAM as possible, to speed things up? And someone was saying it is better to use 1 core with lots of RAM instead of multiple cores with the RAM split between them?

I will try the range 7e10 to 8e10 then.

Jeff.[/QUOTE]

That is what I was thinking after I answered, so I edited my last post before seeing yours.
Concerning the runtime with and without splitting:
I don't have exact timings, but my feeling is that using more cores should be faster. Looking at fivemack's timings, I see that a factor of ten in the interval used (4E6 to 4E7) gives a speedup of only about 2.5. So ten cores instead of one on that range would be faster.

MrRepunit 2011-09-07 08:11

[QUOTE=Jeff Gilchrist;271041]I now have a working 64bit Windows binary if anyone is interested.
[/QUOTE]
I am interested. Is it on your usual webpage?

fivemack 2011-09-07 10:29

I've split this into algorithm-discussion and practicalities threads; I'm happy to rearrange if it becomes unwieldy again.

fivemack 2011-09-07 10:33

[QUOTE=MrRepunit;271042]
I don't have exact timings, but my feeling is that using more cores should be faster. Looking at fivemack's timings, I see that a factor of ten in the interval used (4E6 to 4E7) gives a speedup of only about 2.5. So ten cores instead of one on that range would be faster.[/QUOTE]

I'm still not quite sure what the right answer is; it's going to depend on how much memory you have per core, and it is clear that the memory usage does depend on the value of 'interval' even when interval is much larger than the number of primes in the range.

At the moment, if you have about (maybe a little over) one gigabyte per core, I would run a 1e9 range on each core and use 2e7 as the interval parameter. But I haven't yet completed a calibration run with that parameter choice; I expect it to take a bit over 24 hours, but I'm not at all sure how much over.

Jeff Gilchrist 2011-09-07 12:11

[QUOTE=fivemack;271053]I'm still not quite sure what the right answer is; it's going to depend on how much memory you have per core, and it is clear that the memory usage does depend on the value of 'interval' even when interval is much larger than the number of primes in the range.

At the moment, if you have about (maybe a little over) one gigabyte per core, I would run a 1e9 range on each core and use 2e7 as the interval parameter. But I haven't yet completed a calibration run with that parameter choice; I expect it to take a bit over 24 hours, but I'm not at all sure how much over.[/QUOTE]

I have 8GB on my system, but I tried running the tool over the full 7e10 to 8e10 start and finish with a 20,000,000 interval and it seemed to use about 1.2GB of RAM. Now I'm trying just 7e10 to 7.1e10 but with a 50,000,000 interval, and the maximum RAM usage only went up slightly, to 1.4GB, though most of the time it hovers around 550MB.

Jeff Gilchrist 2011-09-07 12:17

[QUOTE=MrRepunit;271044]I am interested. Is it on your usual webpage?[/QUOTE]

It is but not publicly listed. You can download the core2 optimized version for Windows 64bit here:
[url]http://gilchrist.ca/jeff/factoring/wilsontest_win64.zip[/url]

MrRepunit 2011-09-07 12:22

[QUOTE=Jeff Gilchrist;271056]I have 8GB on my system, but I tried running the tool over the full 7e10 to 8e10 start and finish with a 20,000,000 interval and it seemed to use about 1.2GB of RAM. Now I'm trying just 7e10 to 7.1e10 but with a 50,000,000 interval, and the maximum RAM usage only went up slightly, to 1.4GB, though most of the time it hovers around 550MB.[/QUOTE]

I just took a two-hour memory profile of a run over the range 1e9 to 5e9 with an interval of 50000000 primes (the test itself had been running much longer before I started profiling). It is normally around 3GB, but with lots of peaks up to 4GB. So one can say that with an interval of k*10^7 the program uses at most about k*0.8 GB of RAM.

(If someone is interested in the profiling script I can upload it.)
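(In case the script does not get uploaded: a profiler along these lines is easy to improvise. The sketch below is a guess at the general approach, not my actual script; it samples the VmRSS line of /proc/<pid>/status every 10 seconds on Linux, producing exactly the kind of time/kilobyte pairs plotted in the graphs that follow.)

[code]
/* memprof.c - print "<sample#> <VmRSS value>" every 10 s for a pid.
   Usage: ./memprof <pid> > mem.log ; plot column 1 vs column 2. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64], line[256];
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);

    for (long t = 0; ; t++) {
        FILE *f = fopen(path, "r");
        if (!f)
            break;                      /* target process has exited */
        while (fgets(line, sizeof line, f))
            if (!strncmp(line, "VmRSS:", 6))
                printf("%ld %s", t, line + 6);   /* resident set, in kB */
        fclose(f);
        fflush(stdout);
        sleep(10);
    }
    return 0;
}
[/code]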

fivemack 2011-09-07 12:51

[1 attachment: memory-usage graph]
The memory usage is very peaky: the graph below is for running 50G..60G with interval=4e7. The X axis is in ten-second units since program start; the Y axis is memory use in kilobytes.

The drops to ~400M occur at the start of intervals; you can see three phases for each interval: the large-memory phase, an intermediate phase with a sort of square-root curve visible in the growth of memory use, and the output phase where the memory use is flat.

The moduli in this graph are [11, 1, 5, 1, 1, 5] (the last 5 is not finished yet); you can clearly see the 1-2-3 timing ratio.

fivemack 2011-09-07 12:53

102..110 graph
 
[1 attachment: memory-usage graph]
This graph has the same axes as before, and covers one 1-mod-3 and one 5-mod-12 pass on 102G..110G with interval=8e8. For this huge interval the printout subphase does not use memory as flatly as before, and the sqrt-like behaviour in subphase 2 appears to be absent.

fivemack 2011-09-07 12:58

110..111 i=2e7 graph
 
[1 attachment: memory-usage graph]
It looks as if the really peaky memory usage is in part a function of the interval being much larger than the number of primes in the range: here's the graph from running 110..111 with i=2e7. Note that the 1-mod-3 section takes twice as much memory as the others, but less time.

[code]
Testing p=110000000003...110999999927, p==11 mod 12, time=312 sec.
Testing p=110000000041...110999999977, p==1 mod 3, time=52119 sec. about 20% slower than 1e8
Testing p=110000000189...110999999837, p==5 mod 12, time=79959 sec. 24% slower than 1e8
Done the st=110000000000-en=111000000000 interval. Time=107912 sec. 27% slower in total
97210.03user 1412.42system 29:58:31elapsed 91%CPU (0avgtext+0avgdata 7641440maxresident)k

Peak memory usage observed is 1.904G
[/code]

Jeff Gilchrist 2011-09-07 16:46

I just updated my mingw64 environment. The previous binary I posted was compiled with GMP 5.0.1 and this new one is with MPIR 2.4.0 which might be faster. Can someone please help me benchmark this to see which one is faster?

GMP5: [url]http://gilchrist.ca/jeff/factoring/wilsontest_win64.zip[/url]

MPIR: [url]http://gilchrist.ca/jeff/factoring/wilsontest_win64_mpir.zip[/url]

Try running the same small range with the same parameters and please let me know if there is a big speed difference.

Christenson 2011-09-07 17:33

[QUOTE=fivemack;271052]I've split this into algorithm-discussion and practicalities threads; am happy to rearrange if it becomes unwieldy again[/QUOTE]

:no: I said to find a FUNNY thread title!!! :smile:

R. Gerbicz 2011-09-07 17:54

I have slightly improved code: [url]https://sites.google.com/site/robertgerbicz/wilson[/url] (wilsontest2.c). Doing st=10000000033-en=11000000000 with interval=1e7, the first block (p==1 mod 3) was done in 4514 sec. with 774MB of RAM. The older code did the same block in 5499 sec. with 902MB of RAM. So it seems to be an improvement in both speed and memory for large primes.

For smallish starting values it is possible that it uses more memory: say, testing all primes less than 5e7 with interval=2e6, the time is 569 sec. but it used more than 100MB of RAM.

Jeff Gilchrist 2011-09-07 19:12

I haven't really looked at the algorithm you are using, but if you re-do the same range with the same parameters, should you get the same results?

Doing 3 runs, I get mostly the same values in wilson.txt, but there are some entries that are in one run but not the other, and vice versa.

For example with a wilsonwork.txt file of:
[CODE]st=5-en=10000000
S[0]=7
S[1]=5
S[2]=11
interval=2000192
printtofile=0
[/CODE]

I get mostly the same values but these values are in 1 run but not the other:
[CODE]1750901 -1+34p
1851953 -1-50p
2031053 -1-18p
1666403 -1+99p
2278343 -1+21p
2313083 -1+15p
[/CODE]

and these values are in the second run but not the first:
[CODE]780887 -1-1p
890231 -1+62p
898787 -1-53p
1308491 -1-55p
1638347 -1-45p
1640147 -1-88p[/CODE]

If that is normal, that is fine, but if not, there might be a bug in there somewhere.

Jeff Gilchrist 2011-09-07 19:15

Also, I'm wondering if I found another bug. One of my runs is producing a huge amount of results in wilson.txt but they don't seem of actual value.

I'm getting about 3.6MB so far with a ton of lines like:
[CODE]70039080617 -1+0p
70039083185 -1+0p
70039087061 -1+0p
70039088333 -1+0p
70039089617 -1+0p
70039091345 -1+0p
70039091669 -1+0p
70039092905 -1+0p
70039093649 -1+0p
70039094777 -1+0p
[/CODE]

Using the work file:
[CODE]st=70000000000-en=71000000000
S[0]=70000000003
S[1]=70000000001
S[2]=70000000007
interval=50000256
printtofile=0
[/CODE]

R. Gerbicz 2011-09-07 19:27

[QUOTE=Jeff Gilchrist;271096]I haven't really looked at the algorithm you are using but if you re-do the same range with the same parameters should you get the same results?

Doing 3 runs, I get mostly the same values in wilson.txt, but there are some entries that are in one run but not the other, and vice versa.

For example with a wilsonwork.txt file of:
[CODE]st=5-en=10000000
S[0]=7
S[1]=5
S[2]=11
interval=2000192
printtofile=0
[/CODE]
If that is normal, that is fine, but if not, there might be a bug in there somewhere.[/QUOTE]

I don't know how you got that, but interval%BR==0 should be true (the code modifies interval a little to satisfy this); I have now inserted an assert to check it.
(and BR=128 in the code)

Jeff Gilchrist 2011-09-07 19:33

[QUOTE=R. Gerbicz;271098]I don't know how you got that, but interval%BR==0 should be true (the code modifies interval a little to satisfy this); I have now inserted an assert to check it.
(and BR=128 in the code)[/QUOTE]

So it is supposed to produce the same results every time?

I just entered the start as 1, the end as 10000000 and put 2000000 as the interval. When it wrote the wilsonwork.txt file it created 2000192 as the interval all on its own.

Jeff Gilchrist 2011-09-07 19:40

[QUOTE=Jeff Gilchrist;271075]I just updated my mingw64 environment. The previous binary I posted was compiled with GMP 5.0.1 and this new one is with MPIR 2.4.0 which might be faster. Can someone please help me benchmark this to see which one is faster?[/QUOTE]

I was able to finish the testing myself, and the MPIR build is *much* faster. I have now re-compile the latest wilsontest and wilsontest2 with the fix Robert just made linked with MPIR and they are both available here:

[url]http://gilchrist.ca/jeff/factoring/wilsontest_win64.zip[/url]

R. Gerbicz 2011-09-07 19:40

[QUOTE=Jeff Gilchrist;271100]So it is supposed to produce the same results every time?

I just entered the start as 1, the end as 10000000 and put 2000000 as the interval. When it wrote the wilsonwork.txt file it created 2000192 as the interval all on its own.

You mean the code posted for wilson1 and 2 has been updated now?[/QUOTE]

I have just inserted an assert in the two codes.
Yes, the code saves the modified interval value and uses that if you stop the program and rerun. Moreover, it would not be a problem to modify this value after the stop (as long as it is positive and divisible by 2*BR=256).

R. Gerbicz 2011-09-07 19:46

[QUOTE=Jeff Gilchrist;271097]Also, I'm wondering if I found another bug. One of my runs is producing a huge amount of results in wilson.txt but they don't seem of actual value.

I'm getting about 3.6MB so far with a ton of lines like:
[CODE]70039080617 -1+0p
70039083185 -1+0p
70039087061 -1+0p
70039088333 -1+0p
70039089617 -1+0p
70039091345 -1+0p
70039091669 -1+0p
70039092905 -1+0p
70039093649 -1+0p
70039094777 -1+0p
[/CODE]

Using the work file:
[CODE]st=70000000000-en=71000000000
S[0]=70000000003
S[1]=70000000001
S[2]=70000000007
interval=50000256
printtofile=0
[/CODE][/QUOTE]

What I don't understand is that all of those numbers are composite; some of them are divisible by 5, so surely they would also be found composite by the mpz_probab_prime_p function. Have you got about 5 GB of RAM on that machine? (interval=5e7 requires approximately that amount for large primes.)

When I post code, that means it has survived thousands of different runs for different st, en, interval inputs with en<10^5, and passed a large test with en~1e10, just to check there is no int/long long int problem in the code.

Jeff Gilchrist 2011-09-07 19:58

[QUOTE=R. Gerbicz;271103]What I don't understand is that all of those numbers are composite; some of them are divisible by 5, so surely they would also be found composite by the mpz_probab_prime_p function. Have you got about 5 GB of RAM on that machine? (5e7 requires approximately that amount.)

When I post code, that means it has survived thousands of different runs for different st, en, interval inputs with en<10^5, and passed a large test with en~1e10, just to check there is no int/long long int problem in the code.[/QUOTE]

Note that the interval is 50000256 which is not divisible by BR=128 so could that be screwing things up?

I have 8GB of RAM on that machine, but I never saw it use more than 1.4GB of RAM though. I'm now re-trying the range with the wilsontest2 code using MPIR. There also might have been a bad GMP5 library or something since my old mingw64 environment was misbehaving.

Jeff.

R. Gerbicz 2011-09-07 20:19

[QUOTE=Jeff Gilchrist;271106]Note that the interval is 50000256 which is not divisible by BR=128 so could that be screwing things up?

I have 8GB of RAM on that machine, but I never saw it use more than 1.4GB of RAM though. I'm now re-trying the range with the wilsontest2 code using MPIR. There also might have been a bad GMP5 library or something since my old mingw64 environment was misbehaving.

Jeff.[/QUOTE]

Now I understand you: the correct assertion is that interval should be divisible by 2*BR (=256, since BR=128), because sg=interval/BR must also be even. (Your problem could be that in your case sg is an integer but odd; the primes are not stored, only the differences, in 2 bytes each, so 2 differences per unsigned int.) I have modified the two codes again.

Yes, that uses less than 5 GB; the reason is that your 71e9-70e9=1e9 range contains far fewer than 2*interval primes.

MrRepunit 2011-09-08 21:05

[QUOTE=Jeff Gilchrist;271101]I was able to finish the testing myself, and the MPIR build is *much* faster.[/QUOTE]

I can confirm this. The same holds under 64-bit Linux: a speedup factor of 1.5 to 2 compared to GMP5, depending on the input parameters.

Jeff Gilchrist 2011-09-09 00:00

Some results with different intervals:

interval=50000128
Done the st=71000000000-en=72000000000 interval. Time=57585 sec.

interval=5000192
Done the st=72000000000-en=73000000000 interval. Time=111822 sec.

Huge difference in speed with those intervals. Testing 75000064 right now.

MrRepunit 2011-09-09 05:23

[QUOTE=Jeff Gilchrist;271230]Some results with different intervals:

interval=50000128
Done the st=71000000000-en=72000000000 interval. Time=57585 sec.

interval=5000192
Done the st=72000000000-en=73000000000 interval. Time=111822 sec.

Huge difference in speed with those intervals. Testing 75000064 right now.[/QUOTE]

My guess is that it will come in at around 57500 sec.
The reason is that there are only 40M primes in that interval, so apart from other effects the speed should be the same.

fivemack 2011-09-09 12:38

It will be faster; the interval parameter is used in two places in the software: in one place min(interval length, number of primes) is relevant, and in the other it isn't.

Compare the timings in my posts

[url]http://www.mersenneforum.org/showpost.php?p=270756&postcount=12[/url]

and

[url]http://www.mersenneforum.org/showpost.php?p=271065&postcount=35[/url]

in both of which the interval is larger than the number of primes in the range.

To compare:
[code]
interval  RAM(GB)  time(sec)  slices
1e8       3.647     79427.6        3
2e7       1.904     97210.0        3
1e7       0.983    126835.4        4
4e6       0.397    200194.4       11
[/code]

(this is with wilsontest.c linked against gmp-5; I am rerunning one parameter choice with wilsontest2.c linked against mpir-2.4)

R.D. Silverman 2011-09-09 13:15

[QUOTE=MrRepunit;271260]My guess is that it will come in at around 57500 sec.
The reason is that there are only 40M primes in that interval, so apart from other effects the speed should be the same.[/QUOTE]

Everyone needs to keep in mind that Wilson primes are going to be very, very rare. Much rarer than Mersenne primes. They should be comparable in density (heuristically) to Wieferich or Mirimanoff primes.

As John Selfridge said: We know that loglog N goes to infinity. However, no one has ever actually observed it doing so.

rajula 2011-09-09 13:32

[QUOTE=R.D. Silverman;271288]Everyone needs to keep in mind that Wilson primes are going to be very, very rare. Much rarer than Mersenne primes. They should be comparable in density (heuristically) to Wieferich or Mirimanoff primes.

As John Selfridge said: We know that loglog N goes to infinity. However, no one has ever actually observed it doing so.[/QUOTE]

To put some actual figures on it: it should be reasonably easy to get to 10[SUP]13[/SUP], and the current known search limit is around 10[SUP]11[/SUP]. The (heuristically) expected number of Wilson primes in that interval is [TEX]\log\left(\frac{13}{11}\right) \approx 17%[/TEX], which [I]for me[/I] is still worth the shot. However, at some point one has to think whether the odds are too low...
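(For reference, the arithmetic behind that figure, taking C=1 in the ~C loglog N heuristic discussed below:

[TEX]\log\log 10^{13} - \log\log 10^{11} = \log\frac{13\ln 10}{11\ln 10} = \log\left(\frac{13}{11}\right) \approx 0.167.[/TEX])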

rogue 2011-09-09 13:43

[QUOTE=rajula;271289]To put some actual figures on it: it should be reasonably easy to get to 10[SUP]13[/SUP], and the current known search limit is around 10[SUP]12[/SUP]. The (heuristically) expected number of Wilson primes in that interval is [TEX]\log\left(\frac{13}{12}\right) \approx 8%[/TEX], which [I]for me[/I] is still worth the shot. However, at some point one has to think whether the odds are too low...[/QUOTE]

I'm not certain where you got 1e12. Right now they are around 1e11.

R.D. Silverman 2011-09-09 13:53

[QUOTE=rajula;271289]To put some actual figures on it: it should be reasonably easy to get to 10[SUP]13[/SUP], and the current known search limit is around 10[SUP]12[/SUP]. The (heuristically) expected number of Wilson primes in that interval is [TEX]\log\left(\frac{13}{12}\right) \approx 8%[/TEX], which [I]for me[/I] is still worth the shot. However, at some point one has to think whether the odds are too low...[/QUOTE]

Slight nitpick: the use of a percentage here is wrong. A percentage represents a fraction with respect to some other value. The expected number (which is an absolute number) of Wilson primes up to N is ~C loglog(N), giving .08C as the expected number of primes in [1e12, 1e13] for some constant C.

I have never seen a strong argument that gives the value of C. Is it 1??

Jeff Gilchrist 2011-09-09 13:55

[QUOTE=MrRepunit;271260]My guess is that it will be at around 57500 sec.
Reason is that there are only 40M primes in that interval. So except from other effects the speed should be the same.[/QUOTE]

It is slightly faster. This is on Opteron with wilsontest.c

[CODE]Range Interval Time(sec) RAM(MB)
------ -------- --------- -------
71e9-72e9 50000128 57585 2230
72e9-73e9 5000192 111822 543
73e9-74e9 75000064 54651 2758
[/CODE]

Same thing on my core2 with wilsontest2.c having larger still improves speed:

[CODE]Range Interval Time(sec)
------ -------- ---------
70e9-71e9 50000128 64988
76e9-77e9 75000064 58059
77e9-78e9 75000064 58325
[/CODE]

I'm almost finished my range, so I will reserve 80e9 to 100e9 as well.

rajula 2011-09-09 14:03

[QUOTE=R.D. Silverman;271293]Slight nitpick: the use of a percentage here is wrong. A percentage represents a fraction with respect to some other value. The expected number (which is an absolute number) of Wilson primes up to N is ~C loglog(N), giving .08C as the expected number of primes in [1e12, 1e13] for some constant C.

I have never seen a strong argument that gives the value of C. Is it 1??[/QUOTE]

A percentage is mostly used to represent a fraction, but by itself it just means hundredths. I used the value C=1, as was done in [URL="http://www.ams.org/journals/mcom/1997-66-217/S0025-5718-97-00791-6/S0025-5718-97-00791-6.pdf"]Crandall-Dilcher-Pomerance[/URL], accepting their heuristic estimates without thinking about it too much.

(Thanks rogue for pointing out the wrong exponent.)

R. Gerbicz 2011-09-09 16:36

[QUOTE=fivemack;271284]It will be faster; the interval parameter is used in two places in the software, in one place min(interval length, number of primes) is relevant, and in the other it isn't.
...
in both of which the interval is larger than the number of primes in the range.
...
(this is with wilsontest.c linked against gmp-5; I am rerunning one parameter choice with wilsontest2.c linked against mpir-2.4)[/QUOTE]

Yes, it is probably a little confusing that you still see a speed improvement when there are (much) fewer than ~interval/2 primes in your range. But interval is used in many places in the algorithm. The most time-consuming part of the wilson code (for large ranges) is ff(), where in each chunk you multiply interval primes (up to n), followed by a division. In this part a larger interval gives a little speedup, the reason being that division is more expensive than multiplication (between numbers of the same size): so it is better to multiply and finally do a division between unbalanced numbers.
A larger interval also helped the func() routine in wilsontest; however, wilsontest2 handles this much better with modproducttree().
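To make the "multiply first, divide once" point concrete, here is a toy complete program (the modulus and chunk size are arbitrary choices, and this is not the wilsontest code): it accumulates a product and reduces mod T^2 only once per chunk, so one division of an unbalanced (much larger) operand replaces many balanced ones.

[code]
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t T2, acc;
    mpz_inits(T2, acc, NULL);
    mpz_set_ui(T2, 1000003UL);
    mpz_mul(T2, T2, T2);             /* modulus T^2 */

    mpz_set_ui(acc, 1);
    for (unsigned long i = 2; i <= 100000; i++) {
        mpz_mul_ui(acc, acc, i);     /* no reduction inside the chunk */
        if (i % 1024 == 0)
            mpz_mod(acc, acc, T2);   /* one division per 1024 multiplies */
    }
    mpz_mod(acc, acc, T2);           /* final reduction */
    gmp_printf("100000! mod T^2 = %Zd\n", acc);

    mpz_clears(T2, acc, NULL);
    return 0;
}
[/code]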

R. Gerbicz 2011-09-09 16:47

[QUOTE=Jeff Gilchrist;271294]It is slightly faster. This is on Opteron with wilsontest.c

[CODE]Range Interval Time(sec) RAM(MB)
------ -------- --------- -------
71e9-72e9 50000128 57585 2230
72e9-73e9 5000192 111822 543
73e9-74e9 75000064 54651 2758
[/CODE]

Same thing on my core2 with wilsontest2.c having larger still improves speed:

[CODE]Range Interval Time(sec)
------ -------- ---------
70e9-71e9 50000128 64988
76e9-77e9 75000064 58059
77e9-78e9 75000064 58325
[/CODE]

I'm almost finished my range, so I will reserve 80e9 to 100e9 as well.[/QUOTE]

Would it not be faster to check a range of 2e9 (for st>=70e9) with interval=42e6? (3 blocks: one for each type.)

Or a range of 4e9 with an interval of 42e6? (4 blocks: two of p==1 mod 3, one of p==5 mod 12 and one of p==11 mod 12.) Looking only at the number of primes, this is close to optimal, because each block then holds approx. 40e6=4e7 primes. What do you think about it? I know that a larger interval helps a little, but a semi-filled block of primes is time consuming.

Jeff Gilchrist 2011-09-09 17:07

[QUOTE=R. Gerbicz;271315]Would it not be faster to check a range of 2e9 (for st>=70e9) with interval=42e6? (3 blocks: one for each type.)

Or a range of 4e9 with an interval of 42e6? (4 blocks: two of p==1 mod 3, one of p==5 mod 12 and one of p==11 mod 12.) Looking only at the number of primes, this is close to optimal, because each block then holds approx. 40e6=4e7 primes. What do you think about it? I know that a larger interval helps a little, but a semi-filled block of primes is time consuming.[/QUOTE]

Could be faster; I was just splitting the work so it was easy to manage between my CPUs. I never really thought about finding the optimal range as well as the interval. I will try ranges of 2e9 and 4e9 to see.

For fun I also tried a 1e9 range with interval 100000256 and it came back at 55263 sec (3,249 MB RAM), so it starts to slow down up that high.

Jeff Gilchrist 2011-09-10 11:38

I have now finished the 70e9 to 80e9 range, nothing found.

I haven't finished my next range but I do have some results:
[B]80248324571 -1+46p
80908082573 -1-20p
[/B]

Should we be manually checking this with the slow function and should they be reported anywhere in particular?

R. Gerbicz 2011-09-10 12:52

[QUOTE=Jeff Gilchrist;271379]
Should we be manually checking this with the slow function and should they be reported anywhere in particular?[/QUOTE]

Reporting them only here is fine. As far as I can remember, I have checked all new near-Wilson primes reported here. Just to be sure, I will now check all of these (so far 14 new primes).

Jeff Gilchrist 2011-09-10 22:35

[QUOTE=R. Gerbicz;271315]Would it not be faster to check a range of 2e9 (for st>=70e9) with interval=42e6? (3 blocks: one for each type.)[/QUOTE]

It is indeed faster: using a 2e9 range and an interval of 42e6, it took 99,053 sec compared to 2 x 54651 (109,302 sec).

Now that I'm using the larger 2e9 and 4e9 ranges, all my currently reserved ranges are running. I'm now reserving 111e9 to 113e9.

Jeff.

R. Gerbicz 2011-09-11 00:38

[QUOTE=R. Gerbicz;271393]Reporting them only here is fine. As far as I can remember, I have checked all new near-Wilson primes reported here. Just to be sure, I will now check all of these (so far 14 new primes).[/QUOTE]

Checked; all of them are near-Wilson primes.

[QUOTE=Jeff Gilchrist;271425]It is indeed faster: using a 2e9 range and an interval of 42e6, it took 99,053 sec compared to 2 x 54651 (109,302 sec).
[/QUOTE]

I thought there would be a larger speedup, but that's still nice.

fivemack 2011-09-11 16:35

Jeff, I'm afraid I've already done 111G to 113G as calibration runs.

Found 112825721339 -1+70p at 12:52 10/9/2011

Also:
102G-110G (interval 8e8) complete, no hits
[code]
Testing p=102000000061...109999999987, p==1 mod 3, time=4414 sec.
Testing p=102000000041...109999999937, p==5 mod 12, time=136766 sec.
Testing p=102000000227...109999999979, p==11 mod 12, time=266342 sec.
Done the st=102000000000-en=110000000000 interval. Time=501789 sec.
493311.22user 8250.93system 139:23:10elapsed 99%CPU (0avgtext+0avgdata 107148784maxresident)k
1540776inputs+22162544outputs (26611major+1927975376minor)pagefaults 0swaps

Maximum memory use 26.743G
[/code]

50G-60G, interval 4e7 complete, no hits
[code]
Testing p=50000000423...53947922087, p==11 mod 12, time=1011 sec.
Testing p=50000000023...51972279967, p==1 mod 3, time=132601 sec.
Testing p=50000000021...53947813781, p==5 mod 12, time=165829 sec.
Testing p=51972280039...53947594627, p==1 mod 3, time=234953 sec.
Testing p=53947594681...55925963257, p==1 mod 3, time=268054 sec.
Testing p=53947813793...57907253993, p==5 mod 12, time=300460 sec.
Testing p=53947922123...57907614407, p==11 mod 12, time=369817 sec.
Testing p=55925963263...57907169977, p==1 mod 3, time=508006 sec.
Testing p=57907170049...59891256703, p==1 mod 3, time=541926 sec.
Testing p=57907254221...59999999861, p==5 mod 12, time=575239 sec.
Testing p=57907614419...59999999999, p==11 mod 12, time=607262 sec.
Testing p=59891256769...59999999959, p==1 mod 3, time=667609 sec.
Done the st=50000000000-en=60000000000 interval. Time=672907 sec.
666107.93user 6265.94system 186:55:07elapsed 99%CPU (0avgtext+0avgdata 14905040maxresident)k
4192inputs+27350896outputs (49major+2469411071minor)pagefaults 0swaps

Maximum memory 3.726G
[/code]

Taking 120G to 144G

Jeff Gilchrist 2011-09-11 16:44

[QUOTE=fivemack;271459]Jeff, I'm afraid I've already done 111G to 113G as calibration runs.[/QUOTE]

OK, no problem. I had just started that, so I killed it now; no time wasted. So you just went up to 113G and stopped?

In that case I will take 113e9 to 120e9 to fill in the gap from your latest reservation.

Jeff.

fivemack 2011-09-11 16:47

I ran 111G-112G to get a speed and memory-use measurement for interval size 1e7 with wilsontest compiled against gmp-5.0, and then 112G-113G to get the same measurement for wilsontest2.c compiled against mpir-2.4.0.

Jeff Gilchrist 2011-09-11 17:18

[QUOTE=fivemack;271462]I ran 111G-112G to get a speed and memory-use measurement for interval size 1e7 with wilsontest compiled against gmp-5.0, and then 112G-113G to get the same measurement for wilsontest2.c compiled against mpir-2.4.0.[/QUOTE]

And what did you find for speed and memory difference? I have seen a big difference in speed compared to gmp5 and mpir.

Jeff.

fivemack 2011-09-11 18:02

ah yes, results often come in useful
 
111e9 .. 112e9 interval 1e7
[code]
Testing p=111000000049...111508672567, p==1 mod 3, time=343 sec.
Testing p=111000000029...111999999989, p==5 mod 12, time=19874 sec.
Testing p=111000000071...111999999983, p==11 mod 12, time=55544 sec.
Testing p=111508672579...111999999961, p==1 mod 3, time=124224 sec.
Done the st=111000000000-en=112000000000 interval. Time=143094 sec.
126835.38user 1555.64system 39:44:53elapsed 89%CPU (0avgtext+0avgdata 4027728maxresident)k
[/code]

112e9..113e9, new code + mpir, interval 1e7
[code]
Testing p=112000000289...112999999877, p==5 mod 12, time=237 sec.
Testing p=112000000003...112508982601, p==1 mod 3, time=25044 sec.
Testing p=112000000031...112999999859, p==11 mod 12, time=42104 sec.
Testing p=112508982637...112999999951, p==1 mod 3, time=95729 sec.
Done the st=112000000000-en=113000000000 interval. Time=112854 sec.
99689.07user 2899.91system 31:20:53elapsed 90%CPU (0avgtext+0avgdata 4067952maxresident)k
[/code]

So the new code runs in 80% of the time and uses essentially the same amount of memory. Note that the 'maxresident' figures are overstated by precisely a factor of four: actually both processes use about a gigabyte.

Jeff Gilchrist 2011-09-11 18:18

I will also reserve 144e9 to 160e9.

MrRepunit 2011-09-12 07:30

Finished my range 1E10 to 5E10.
Found a few more near Wilson primes:

35525054743 -1+26p
24664241321 -1+46p
28737804211 -1-58p
44034466379 -1+39p

Also reserving 16E10 to 20E10.

Jeff Gilchrist 2011-09-12 13:51

Thanks, rajula, for maintaining your reservation page; it is very helpful, and the new green blocks for completed work make it easier to read. You are fast too: sometimes the page is updated almost immediately. :smile:

Now looking at an i5-based system, comparing a 1e9 range to a 4e9 range.

1e9 range = 54219 sec
4e9 range = 189560 sec (47390 sec/1e9 range)

Some savings.

Jeff.

rajula 2011-09-12 15:09

[QUOTE=Jeff Gilchrist;271534]Thanks, rajula, for maintaining your reservation page; it is very helpful, and the new green blocks for completed work make it easier to read. You are fast too: sometimes the page is updated almost immediately. :smile:
[/QUOTE]

You are welcome. I am happy to maintain the list and I try to make updates as fast as I can. I do check the forums quite frequently... Sometimes I am away for a week or so, so expect some delay from time to time. :smile:

Jeff Gilchrist 2011-09-13 00:12

I just finished my 80e9 to 100e9 range with nothing new that hasn't already been reported.

I will now reserve 200e9 to 250e9.

Jeff.

fivemack 2011-09-13 08:24

120G to 144G done, nothing of interest found.

Jeff Gilchrist 2011-09-16 17:57

113e9 to 120e9 finished, nothing found.

Jeff Gilchrist 2011-09-17 10:20

144e9 to 160e9 finished, nothing found.

rajula 2011-09-18 10:47

60e9 to 70e9 done. Three new near-Wilson primes found after the first one I already reported:

[CODE]64643245189 -1-21p
66966581777 -1+91p
67133912011 -1+9p[/CODE]

Reserving 250e9 to 270e9.

Jeff Gilchrist 2011-09-18 11:27

[QUOTE=rajula;271945]60e9 to 70e9 done. Three new near-Wilson primes found after the first one I already reported:[/QUOTE]

You picked a good range; so far it's all dry at this end.

Jeff Gilchrist 2011-09-20 12:24

I'm now reserving:
[B]270e9 to 300e9[/B]

And new discovery
[CODE]231939720421 -1+41p[/CODE]

R.D. Silverman 2011-09-20 13:44

[QUOTE=Jeff Gilchrist;272134]I'm now reserving:
[B]270e9 to 300e9[/B]

And new discovery
[CODE]231939720421 -1+41p[/CODE][/QUOTE]

Please reduce my ignorance by explaining the notation.

Jeff Gilchrist 2011-09-20 13:44

Now that we are getting up into higher numbers, the memory usage is going up a lot. Trying to do a 3e9 range with 45M primes is pushing over 4GB per process now. In order to run 2 processes at the same time I'm going to have to start cutting down to smaller ranges again.

Jeff Gilchrist 2011-09-20 13:49

[QUOTE=R.D. Silverman;272144]Please reduce my ignorance by explaining the notation.[/QUOTE]

(p-1)! == -1+k*p mod p^2 for abs(k) <= 100

p=231939720421
k=41

MrRepunit 2011-09-22 07:52

I completed the range 160e9-200e9; found no near-Wilson primes.

Reserving the range 300e9-340e9.

jasonp 2011-09-22 15:55

[QUOTE=R.D. Silverman;272144]Please reduce my ignorance by explaining the notation.[/QUOTE]
You probably have pieced it together already, but the idea is to quantify the 'closeness' to a Wilson prime by expressing the factorial in base p. The smaller the multiple of p, the closer (p-1)! is to -1 mod p^2.

R.D. Silverman 2011-09-22 16:42

[QUOTE=jasonp;272403]You probably have pieced it together already, but the idea is to quantify the 'closeness' to a Wilson prime by expressing the factorial in base p. The smaller the multiple of p, the closer (p-1)! is to -1 mod p^2.[/QUOTE]

Note that 'closeness' as defined here conflicts with the notion of closeness
over the p-adics.

rogue 2011-09-22 17:50

[QUOTE=R.D. Silverman;272408]Note that 'closeness' as defined here conflicts with the notion of closeness over the p-adics.[/QUOTE]

Do you mean that jason used the term "closeness" incorrectly?

Could you elaborate on your statement? I don't understand what you mean when you say "closeness over the p-adics". I presume that is something different than what is referred to here as "special instance".

R.D. Silverman 2011-09-22 19:18

[QUOTE=rogue;272415]Do you mean that jason used the term "closeness" incorrectly?

Could you elaborate on your statement? I don't understand what you mean when you say "closeness over the p-adics". I presume that is something different than what is referred to here as "special instance".[/QUOTE]

Serre's "A Course In Arithmetic" contains a good exposition of p-adic fields.
I can provide other references as well. I have neither the time (or the space!) in this forum to give a "P-adics 101" course.

jasonp 2011-09-23 11:21

See David Yun's paper 'Algebraic Algorithms Using P-adic Constructions' for a light introduction to p-adic arithmetic. Basically if you have a number modulo p it's treated as an approximation to another number mod p^k for k>1. In p-adic arithmetic closeness is based on how large the residue is, not on its precise value.

(That's at least based on my understanding, of which anything p-adic is close to the limit)

It's traditional, though, for Wilson residues to be measured in terms of how close a 'near miss' they are to declaring p a Wilson prime, modified by the fact that they're -1 mod p by construction.

R.D. Silverman 2011-09-23 12:17

[QUOTE=jasonp;272502]See David Yun's paper 'Algebraic Algorithms Using P-adic Constructions' for a light introduction to p-adic arithmetic. Basically if you have a number modulo p it's treated as an approximation to another number mod p^k for k>1. In p-adic arithmetic closeness is based on how large the residue is, not on its precise value.
[/QUOTE]

Almost. A p-adic field is a non-Archimedean valuation domain. (Google!) 'Closeness' is measured by how close a number is to the [i]nearest[/i] power of p.

Elements are constructed as follows:

Let f be a monic, irreducible polynomial. Let a be a root mod p. Lift a (via Hensel) to a root mod p^2 (a_2), then p^3 (a_3), ..... The sequence

.....a_3 a_2 a_1 a_0

is a p-adic number. Note that it extends infinitely to the LEFT. These numbers (believe it or not) form a field, known as a p-adic field.
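The Hensel lifting step is easy to play with in GMP. The sketch below is illustrative only (the polynomial x^2-2 and p=7 are arbitrary choices, and it doubles the precision each step via the Newton form a <- a - f(a)/f'(a), rather than adding one digit at a time as described above):

[code]
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t m, a, fa, dfa, inv, t;
    mpz_inits(m, a, fa, dfa, inv, t, NULL);

    mpz_set_ui(m, 7);         /* current modulus p^k, starting at p = 7 */
    mpz_set_ui(a, 3);         /* 3^2 = 9 == 2 (mod 7): a root of x^2-2 */

    for (int step = 0; step < 4; step++) {
        mpz_mul(m, m, m);                 /* modulus: p^k -> p^(2k) */

        mpz_mul(fa, a, a);                /* f(a) = a^2 - 2 */
        mpz_sub_ui(fa, fa, 2);
        mpz_mul_ui(dfa, a, 2);            /* f'(a) = 2a */
        mpz_invert(inv, dfa, m);          /* f'(a)^(-1) mod p^(2k) */
        mpz_mul(t, fa, inv);
        mpz_sub(a, a, t);                 /* a <- a - f(a)/f'(a) */
        mpz_mod(a, a, m);

        gmp_printf("root of x^2-2 mod %Zd: %Zd\n", m, a);
    }
    mpz_clears(m, a, fa, dfa, inv, t, NULL);
    return 0;
}
[/code]

The successive values, read in base 7, reveal the tail .....a_3 a_2 a_1 a_0 of the 7-adic square root of 2.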

Random Poster 2011-09-24 09:29

[QUOTE=R.D. Silverman;272507]'closeness' is measured by how close a number is to the [I]nearest [/I]power of p.[/QUOTE]
I have no idea what that's supposed to mean. By definition, the distance between two p-adic numbers a and b is determined by the largest integer n such that p^n divides a-b; the larger n is, the smaller the distance. In the context of this thread, the question is the p-adic distance between (p-1)! and -1, or in other words, the largest n such that p^n divides (p-1)!+1. For true Wilson primes n>=2, but for any other prime n=1, so this is completely useless for distinguishing near-Wilson primes.

R. Gerbicz 2011-09-25 07:12

There will be a parallel version of the code for multicore computers (though it is still not finished). Moreover, there are some really nice algorithmic improvements from Edgar Costa and David Harvey.

rogue 2011-09-25 12:55

[QUOTE=R. Gerbicz;272683]There will be a parallel version of the code for multicore computers (though it is still not finished). Moreover, there are some really nice algorithmic improvements from Edgar Costa and David Harvey.[/QUOTE]

I pointed them to this search. It'll be interesting to see what you come up with once you put your brains together.

Jeff Gilchrist 2011-09-25 18:24

[QUOTE=R. Gerbicz;272683]There will be a parallel version of the code for multicore computers (though it is still not finished). Moreover, there are some really nice algorithmic improvements from Edgar Costa and David Harvey.[/QUOTE]

That would be great since things are starting to slow down now.

Any idea how much of an improvement these algorithmic changes will be? As in should we pause our search temporarily until the new code is ready?

R. Gerbicz 2011-09-25 19:15

[QUOTE=Jeff Gilchrist;272705]
Any idea how much of an improvement these algorithmic changes will be? As in should we pause our search temporarily until the new code is ready?[/QUOTE]

Considering only the parallel part: running on c cores, the code would be faster by a factor of approx. c/2, even for c=16. David Harvey's improvements give a gain factor of (at least) two in speed.

You can pause, but what I can tell you is that due to the algorithmic changes the new code's savefile will be different from what we are currently using in wilsontest, so it won't recognize the old one.

Jeff Gilchrist 2011-09-25 23:51

[QUOTE=R. Gerbicz;272708]You can pause, but what I can tell you is that due to the algorithmic changes the new code's savefile will be different from what we are currently using in wilsontest, so it won't recognize the old one.[/QUOTE]

Sounds good. When I said pause, I meant waiting until the new code is ready once I have finished my current jobs in progress, not literally pausing a file part-way through. But that is good to know.

Any ETA on when this will be ready?

R. Gerbicz 2011-09-26 18:40

[QUOTE=Jeff Gilchrist;272722]
Any ETA on when this will be ready?[/QUOTE]

I hope it will be less than a month.

