mersenneforum.org Question about these fascinating primes

2020-01-27, 02:06   #23
CRGreathouse

Aug 2006

2·32·52·13 Posts

Quote:
 Originally Posted by storm5510 This must be an expression calculator. I found a few online but none could return a value of any size.
PARI/GP is a programmable calculator. Its official website is
https://pari.math.u-bordeaux.fr/
and we have a subforum here devoted to it (a good place to ask questions). You can use it online but it's designed to run on your computer in Windows, Mac, Linux, etc.

2020-01-28, 13:55   #24
Dr Sardonicus

Feb 2017
Nowhere

3,011 Posts

Quote:
 Originally Posted by VBCurtis Using 2 + floor is equivalent to using 1 + ceiling, no?
No. If N is an integer, then floor(N) = ceiling(N) = N, so 2 + floor(N) = N + 2 while 1 + ceiling(N) = N + 1.
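The distinction can be checked directly: the two expressions agree for non-integer arguments and differ exactly at integers.

```python
import math

def two_plus_floor(x):
    return 2 + math.floor(x)

def one_plus_ceil(x):
    return 1 + math.ceil(x)

print(two_plus_floor(3.7), one_plus_ceil(3.7))  # 5 5  (agree off-integers)
print(two_plus_floor(4.0), one_plus_ceil(4.0))  # 6 5  (differ at integers)
```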

2020-01-28, 15:43   #25
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2·1,993 Posts

Quote:
 Originally Posted by Dr Sardonicus No. If N = integer, then floor(N) = ceiling(N) = N.
Thanks- but in the case that was being discussed, I didn't think it was possible for N to be an integer.

2020-02-27, 18:12   #26
diep

Sep 2006
The Netherlands

2×337 Posts

Quote:
 Originally Posted by storm5510 After some thought, I think I can explain what I was getting at. Consider the following: Code: 2^50-1 = 1,125,899,906,842,623 2^51-1 = 2,251,799,813,685,247 There is a lot of open space between. The larger they are, the more space there is. These are just minuscule to what many typically run. My line of thinking is to be able to go deep between and still maintain the form. Something like this: Code: ? * (2^1951871-1) - (2^366497) It would take a massive coefficient to equate and allow the second exponentiation to drop off. I do not know if any of the current software forms could do it. If my thinking seems sort of loony, please forgive me. I have had the flu for a week and, at times, the fever which goes with it.
I don't think the interesting question is whether 'some software' could calculate all this. The real question is: "how fast would such software be?"

That's why most people try small k's for Riesels.

Riesel refers to the form k * 2^n - 1:

https://en.wikipedia.org/wiki/Riesel_number

Proth numbers being the +1 form, k * 2^n + 1.

The smaller k is, the faster the LLR software (with Woltman's gwnum library inside) can calculate. It comes down to how fast you can run LLR on each exponent, and you really notice the slowdown for larger k's if you try something there - even up to 1M bits.
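The Riesel form is easy to play with at small sizes. A sketch below enumerates n for which k*2^n - 1 passes a Miller-Rabin probable-prime filter; this is only a stand-in for illustration, not the LLR test that the thread's software actually runs:

```python
import random

def is_probable_prime(n, rounds=10):
    # Miller-Rabin probable-prime test (not the LLR test used by the thread).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

k = 5  # a small k, per the advice above
riesel_n = [n for n in range(1, 16) if is_probable_prime(k * 2**n - 1)]
print(riesel_n)  # [2, 4, 8, 10, 12, 14]
```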

2020-02-28, 05:49   #27
storm5510
Random Account

Aug 2009
U.S.A.

22·52·11 Posts

Quote:
 Originally Posted by diep ...Proth's being the +1 form. The smaller k is, the faster the LLR software with the library from Woltman inside, can calculate. It's about how fast you can run LLR on each exponent and the slowdown for larger k's you really notice if you try something there - even until 1M bits.
There is not much interest in Proth primes here beyond using the form to check LLR results for twin primes.

It has been my experience that the larger the n, the longer LLR will spend on each candidate. I didn't see k making much of a difference, though I imagine there is some.

2020-02-28, 06:44   #28
Happy5214

"Alexander"
Nov 2008
The Alamo City

3·7·13 Posts

Quote:
 Originally Posted by storm5510 There is not much interest in Proth's here beyond using it to check LLR results for twin primes. It has been my experience that the larger the n the longer LLR will spend on each. I didn't see k making much of a difference. There is some I imagine.
The LLRTools library (see the post above https://www.mersenneforum.org/showth...2303#post62303) uses the following formula to scale max n values for each gwnum FFT length from the Mersenne max to the max for any k less than 1e6 (higher k values, and some 6-digit k values with a few n values, generally require zero-padding, which makes the FFT length decision more complicated):

$N_k = N_{\text{Mersenne}} - \left(\log_2 k + \frac{\log_2 k \cdot \textit{fftlen}}{2}\right)$
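Taken literally, the posted formula can be transcribed as below. The N_mersenne and fftlen values in the usage line are hypothetical placeholders chosen only to show the shape of the scaling, not actual gwnum FFT limits:

```python
import math

def max_n_for_k(n_mersenne, k, fftlen):
    # Literal transcription of the posted formula:
    #   N_k = N_mersenne - (log2(k) + log2(k) * fftlen / 2)
    return n_mersenne - (math.log2(k) + math.log2(k) * fftlen / 2)

# Hypothetical numbers, purely illustrative:
print(max_n_for_k(1_000_000, 98475, 4))
```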

2020-03-03, 17:36   #29
storm5510
Random Account

Aug 2009
U.S.A.

44C16 Posts

Sieving

I am wondering if there is a rule-of-thumb about how deep to sieve any particular group of n's. It seems everyone has their own way of sieving and none seem to match.

For the past few weeks, I have been working on k = 98475. I keep track of everything in a spreadsheet. I have been experimenting with a formula to determine how deep to sieve. It goes like this: n / 780 * 1e9. Example: let's say the lower end of a group is 700,000. Then 700,000 / 780 * 1e9 = 897,435,897,435. I do not mess with all these digits, so I would round it up to 898e9. I have been gradually lowering the divisor to increase the sieve depth.

My goal is to sieve as deeply as possible, but not so far that the sieving time exceeds the time required to run the group through LLR. The current group I am running will take roughly 18 hours in LLR. The next group sieving will take about 10 hours to complete. There is a point where this becomes self-defeating. The removal rate for the sieve is currently 216 seconds per n. LLR is completing each n in 71 seconds on this hardware.

I am using sr2sieve for the bulk of the process. It has the -R switch, which forces a stop when the removal rate exceeds the set value, in seconds. I have not used it to date. My quandary is where to stop the sieve, or whether to simply let it run.
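The heuristic in the post above can be sketched directly; the divisor 780 and the 700,000 example come from the post itself:

```python
import math

def sieve_depth(n, divisor=780):
    # storm5510's heuristic: depth = n / divisor * 1e9
    return n / divisor * 1e9

depth = sieve_depth(700_000)
print(f"{depth:,.0f}")                # 897,435,897,436
print(f"{math.ceil(depth / 1e9)}e9")  # rounded up, as in the post: 898e9
```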
2020-03-03, 17:45   #30
paulunderwood

Sep 2002
Database er0rr

61038 Posts

I have found that for a variable-n search the rule of thumb is to measure the time for an LLR test at 80% of the maximum n and sieve until the elimination time is the same. Also, it is better to sieve one big batch, since the time to sieve is proportional to the square root of the range.

Last fiddled with by paulunderwood on 2020-03-03 at 17:50
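The square-root scaling is why one big batch wins. A toy cost model (arbitrary units, my own illustration rather than anything from the sieve software) makes it concrete:

```python
import math

def sieve_cost(n_lo, n_hi):
    # Toy model: sieve time proportional to the square root of the range width.
    return math.sqrt(n_hi - n_lo)

one_batch = sieve_cost(700_000, 1_000_000)
ten_slices = 10 * sieve_cost(700_000, 730_000)
print(ten_slices / one_batch)  # sqrt(10) ~= 3.16x more total work
```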
2020-03-03, 17:55   #31
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

2×1,993 Posts

How to calculate optimal sieve depth depends a bit on the range of exponents in the sieve.

If you're doing a small sieve, like n=700k to n=1M, picking an exponent around 70-80% of the way through the file will do for matching LLR time to sieve-removal time.

If you're doing a large sieve, like n=700k-10M, then the removal time should be a bit over twice the LLR time for the lowest candidate. When you reach that time, cut off a piece of the sieve for LLR, and continue sieving the rest until the removal time again matches twice the time for the (new) lowest candidate. Doubling the LLR time is related to the marginal productivity of the sieve software; that is, sieve speed doesn't increase linearly when you cut off a chunk of candidates. Sieve speed scales with the square root of the n-range of the sieve, so when cutting off small chunks (say, 700k-750k) doubling LLR time is a good rule of thumb.

When the "large sieve" rule of thumb gives a result longer than the "small sieve" rule of thumb, you're now running a small sieve. It turns out this happens about when N = 2n; say, 1M to 2M.

Edit: As Paul said, it's a massive waste to sieve small ranges. You should run one sieve for a range a bit bigger than you think you are likely to ever test for that k. You're wasting a ton of sieve time; it's typical to sieve for about 5% of the total time LLR takes, but this ratio changes when you sieve a tiny range.

Last fiddled with by VBCurtis on 2020-03-03 at 18:00
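The two rules of thumb above can be paraphrased as code. The example timings (71 s per LLR test, 100 s at 80% of the file) echo numbers mentioned earlier in the thread and are illustrative only:

```python
def target_removal_time(llr_time_low_n, llr_time_80pct, n_low, n_max):
    # VBCurtis's rules of thumb, paraphrased:
    #  - small sieve (roughly n_max < 2*n_low): sieve until removal time
    #    matches the LLR time ~70-80% of the way through the file;
    #  - large sieve: sieve until removal time is about twice the LLR time
    #    of the lowest candidate, then cut off a chunk for LLR.
    if n_max < 2 * n_low:
        return llr_time_80pct
    return 2 * llr_time_low_n

print(target_removal_time(71, 100, 700_000, 1_000_000))   # small sieve -> 100
print(target_removal_time(71, 100, 700_000, 10_000_000))  # large sieve -> 142
```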
2020-03-03, 23:13   #32
storm5510
Random Account

Aug 2009
U.S.A.

22×52×11 Posts

Quote:
 Originally Posted by VBCurtis How to calculate optimal sieve depth depends a bit on the range of exponents in the sieve. If you're doing a small sieve, like n=700k to n=1M, picking an exponent around 70-80% of the way through the file will do for matching LLR time to sieve-removal time. If you're doing a large sieve, like n=700k-10M, then the removal time should be a bit over twice the LLR time for the lowest candidate. When you reach that time, cut off a piece of the sieve for LLR, and continue sieving the rest until the removal time again matches twice the time for the (new) lowest candidate. Doubling the LLR time is related to the marginal productivity of the sieve software; that is, sieve speed doesn't increase linearly when you cut off a chunk of candidates. Sieve speed scales with the square root of the n-range of the sieve, so when cutting off small chunks (say, 700k-750k) doubling LLR time is a good rule of thumb. When the "large sieve" rule of thumb gives a result longer than the "small sieve" rule of thumb, you're now running a small sieve. Turns out this happens about when N = 2n; say, 1M to 2M. Edit: As Paul said, it's a massive waste to sieve small ranges. You should run one sieve for a range a bit bigger than you think you are likely to ever test for that k. You're wasting a ton of sieve time; it's typical to sieve for about 5% of the total time LLR takes, but this ratio changes when you sieve a tiny range.
You are both correct. I have been sieving small ranges. An example would be 700,000 to 710,000. Why? So I can keep the two machines running LLR relatively close when it comes to run time.

An option: sieve much larger and break off pieces for each machine according to its capability. My i5 runs somewhere between 80% and 85% of what the i7 can do. It fluctuates considerably at times; other times, not.

I wrote a small binary to split the sieve results in such a way that the i7 gets proportionally more of the load. This can easily be modified as needed. Sieving and running LLR at the same time creates a bottleneck for either machine. Oddly enough, it is worse on the i7. So, I sieve on the i5.
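The proportional split described above can be sketched in a few lines; the 0.82 speed ratio is taken from the 80-85% figure in the post, and the candidate list here is just dummy data:

```python
def split_candidates(candidates, fast_speed=1.0, slow_speed=0.82):
    # Split a sieved candidate list between two machines in proportion
    # to their throughput (the i5 here runs at ~82% of the i7).
    cut = round(len(candidates) * fast_speed / (fast_speed + slow_speed))
    return candidates[:cut], candidates[cut:]

i7_share, i5_share = split_candidates(list(range(182)))
print(len(i7_share), len(i5_share))  # 100 82
```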

2020-03-04, 03:55   #33
Happy5214

"Alexander"
Nov 2008
The Alamo City

11116 Posts

Since you're only working with one k, you should use sr1sieve instead of sr2sieve. sr1sieve is significantly faster when working with one or two k values (run separately for two k's), while sr2sieve is better for three or more k values. sr1sieve also has the advantage of outputting (in-place) to an LLR-format file, so there's no need to work with srfile.

