
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   LMH > 100M (https://www.mersenneforum.org/forumdisplay.php?f=46)
-   -   332.2M - 333.9M (aka 100M digit range) (https://www.mersenneforum.org/showthread.php?t=10693)

Uncwilly 2013-01-02 07:08

1 Attachment(s)
[QUOTE=LaurV;323364]Don't understand why there are more exponents now than then? (over 700k, against under 700k, if I correctly read the 0x scale?)
Personally, I am waiting for mfaktc 0.20 to be released, then I will pump a new factoring session on this range.[/QUOTE]The graph shows the total [U]effort[/U] expended for each of the exponents at a particular bit level. This is not a count of the [U]number of exponents[/U].
The attached shows the exponent count over time and effort.

LaurV 2013-01-11 03:27

Since mfaktc 0.20 is out, as I promised, I made a list of unreserved expos in the range that were TF'd under 75 bits, and put two GTX 580s on them for a while. I am using this list to play with mfaktc 0.20; I don't know for how long, it depends on my mood and other tasks which may have higher priority.

You can see the progress if you look at the reports: over 100 expos brought to 75 bits, and three 75-bit factors found already in the last few days. There are still about 15000 expos in the list :razz: (split in two, for the two cards).

I might redo work done by others in the meantime (I mean: the work to take some exponent X to 75 bits is not done right now, but by the time I reach that exponent, if I ever do, someone else might have done it; in that case I will redo the work, because I don't have time to check and update the worktodo files every hour). It takes mfaktc 0.20 about an hour to raise an expo from 73 to 75 bits in this range, for a credit of about 17 GHzDays (about 20 minutes and 6 GHzDays to 74 bits, and 45 minutes and 11 GHzDays to 75 bits).

I queued exponents which were NOT assigned by PrimeNet. The worst that can happen is that I "step on someone's toes", i.e. poach someone's work without intending to. Therefore, if your (unassigned) exponents take a long time, please check that they have not already been done before starting them (and check from time to time while you work on them too), so you don't waste all that time. I don't know who is working on which exponents; many people work without assignments in this range, at their own risk, and it is too complicated for me to keep track.

Edit:
TL;DR version: If you don't already have an assignment from PrimeNet in the range, then stay away from TFing 332M to 75 bits for a while. :razz:

LaurV 2013-01-14 06:13

Somewhere in the past, George indicated that all the exponents should go to 74 bits, then P-1, then to 76. Putting the newest mfaktc into the equation (at my speed; other people may get different results), they would all go to something like 76(.4) = 76 or 77 bits, then P-1, then to 78(.4) = 78 or 79 bits.

[edit: this also corresponds "somehow" to releasing 60M expos at 73-74 bits, the "highly debated" cutoff of the GPU272 project, because 332/60 ≈ 5.5, and log2(5.5) ≈ 2.4, therefore we have to go about 2.4 bits higher, therefore the cutoff should be 76 bits, then P-1, then 79 bits]
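The arithmetic in that edit can be checked in a few lines of Python (a sketch; the 60M figure and the 73-74 bit cutoffs are the ones quoted above):

```python
import math

# TF cost per bit level scales roughly like 1/p (fewer 2kp+1 candidates
# for a larger exponent p), so moving from ~60M to ~332M exponents
# pushes the break-even TF depth up by about log2(332/60) bits.
extra_bits = math.log2(332 / 60)
print(f"332/60 = {332/60:.2f}, extra depth = {extra_bits:.2f} bits")

# Applied to the GPU272 cutoffs quoted above (73-74 bits at 60M):
for base in (73, 74):
    print(f"{base} bits at 60M -> about {base + extra_bits:.1f} bits at 332M")
```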

Would it make any sense to TF them higher, to 80 bits or so? (I see about 30 expos TF'd to 80-84 bits, which I think is a waste of time.)

Uncwilly 2013-01-14 14:04

[QUOTE=LaurV;324630]Somewhere in the past, George indicated that all the exponents should go to 74, then P-1, then to 76. Inserting the newest mfaktc into equation[/QUOTE]It was initially (IIRC) 76, P-1, 77 or 75, P-1, 77.
Since that time I believe that James H indicated that the value should be 80 or even 81. Look at one of these charts: [url]http://www.mersenne.ca/cudalucas.php?model=9[/url]

I am trying to balance wave-front issues (catching all of the exponents that people are asking for LL, as many have never turned in a lick of TF on them) against proper depth. There are 1500 LLs assigned. If we actually had 1500 CPUs working on TF and then LL, we would have seen a significant difference.

LaurV 2013-01-14 15:18

OK, clear about the 80 bits. Somewhere in my calculations I missed how long an LL can take for a 332M exponent. Putting that into the calculation, it matches.

Regarding the 1500 expos reserved and never worked: it bothers me too. You can see [URL="http://www.mersenne.info/trial_factored_tabular_data/4/332200000/"]from here[/URL] that my factoring-to-75 front has reached 332M27; everything behind it that is factored to 74 only (about 92 expos) is "reserved", and I skipped those. They have not moved an inch for ages, and I am tempted to ignore the fact that they are reserved and give them a run, but I can't do that without upsetting the very few serious guys who indeed do some factoring work there.

James Heinrich 2013-01-14 17:00

1 Attachment(s)
[QUOTE=Uncwilly;324668]Since that time I believe that James H indicated that the value should be 80 or even 81. Look at one of these charts: [url]http://www.mersenne.ca/cudalucas.php?model=9[/url][/QUOTE]For what it's worth, this is what my chart looks like if I expand the range to cover 50M-400M.
So yes, ~333M is right on target for TF to 2[sup]81[/sup].

c10ck3r 2013-01-14 17:57

Okay, I'm confused on what would probably be considered the basics of the graph. I'm reading the numbers as saying that the time to add an additional bit depth is actually slightly [B]over[/B] twice the previous bit. Why is this? (Or why am I reading this wrong?) I would think that a lower prime density at the higher range would reduce the number of candidates and speed up the test as such. Please help/move this to a more suited place if I picked a bad thread...

kracker 2013-01-14 20:09

Taking 332.5 to 71.

VictordeHolland 2013-01-15 01:37

[QUOTE=c10ck3r;324692]Okay, I'm confused on what would probably be considered the basics of the graph. I'm reading the numbers as saying that the time to add an additional bit depth is actually slightly [B]over[/B] twice the previous bit. Why is this? (Or why am I reading this wrong?) I would think that a lower prime density at the higher range would reduce the number of candidates and speed up the test as such. Please help/move this to a more suited place if I picked a bad thread...[/QUOTE]
I'm not sure what you are asking exactly, but here is a possible answer.

Let's take some (fictional, not so Mersenne) exponents to make calculations easier.
1,000,000
10,000,000
100,000,000

And let's take the TF range 2^64 to 2^65. This range contains 2^64 numbers, let's round that down to 18,400,000,000,000,000,000 (18.4*10^18) numbers for simplicity. We do not need to test all those numbers, since we know that a factor of a Mersenne number:
- must be of the algebraic form 2kp+1
- must leave a remainder of either 1 or 7 upon division by 8
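A quick sanity check of these two restrictions, using the classic small example M11 = 2047 = 23 × 89 (chosen here just for illustration; the same properties hold for factors in the 332M range):

```python
p = 11  # M11 = 2^11 - 1 = 2047 = 23 * 89

for q in (23, 89):
    k = (q - 1) // (2 * p)
    assert q == 2 * k * p + 1        # algebraic form 2kp+1 (k = 1 and 4)
    assert q % 8 in (1, 7)           # remainder 1 or 7 upon division by 8
    assert (2 ** p - 1) % q == 0     # and q really divides M11
    print(f"{q} = 2*{k}*{p}+1, {q} mod 8 = {q % 8}")
```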

By using the first restriction (2kp+1 form):
18,400,000,000,000,000,000 / (2*p) = the number of candidates that need to be tested. Let's call those Factor Candidates (FCs).
18,400,000,000,000,000,000 /(2*1,000,000) = 9,200,000,000,000 FCs
18,400,000,000,000,000,000 /(2*10,000,000) = 920,000,000,000 FCs
18,400,000,000,000,000,000 /(2*100,000,000) = 92,000,000,000 FCs
You can clearly see that as exponents get larger, fewer FCs need to be tested for the same TF range.

Next, it is possible to sieve the FCs, so we do not have to test the FCs that are divisible by small prime numbers (3, 5, 7, 11) or that do not satisfy the second restriction (leave 1 or 7 upon division by 8). Mfakt(c/o) uses 'classes' to do this very efficiently. The program puts the FCs into 4620 classes. 3660 of these classes do not need to be tested, because they only contain FCs that do not satisfy the second restriction or that are divisible by small primes (3, 5, 7, 11). So only 960 out of 4620 classes (20.8%) need to be tested. TheJudger explains this in his post: [URL="http://www.mersenneforum.org/showpost.php?p=200887&postcount=37"]http://www.mersenneforum.org/showpost.php?p=200887&postcount=37[/URL]
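The 960-out-of-4620 figure can be reproduced by brute force. Since 4620 = 4·3·5·7·11, the residue of k mod 4620 fixes q = 2kp+1 modulo 8, 3, 5, 7 and 11, so each class either survives both restrictions or is eliminated wholesale. A sketch (the exponent below is an arbitrary illustration; any odd p not divisible by 3, 5, 7 or 11 gives the same count):

```python
def surviving_classes(p):
    """Count residues k mod 4620 whose q = 2kp+1 can still be a factor."""
    survivors = 0
    for k in range(4620):
        q = 2 * k * p + 1
        if q % 8 not in (1, 7):
            continue  # whole class fails the mod-8 restriction
        if any(q % s == 0 for s in (3, 5, 7, 11)):
            continue  # whole class is divisible by a small prime
        survivors += 1
    return survivors

n = surviving_classes(332_200_037)  # arbitrary exponent in the 332M range
print(n, "classes out of 4620 =", round(100 * n / 4620, 1), "%")
```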

The FCs in the classes can be further sieved by the next primes (13, 17, 19, etc.). In Mfakto (and Mfaktc v0.19 and lower) this is done by the CPU, and the number of primes used is called SievePrimes. Between 2,000 and 200,000 SievePrimes are used to sieve the candidates. Something like 90-95% of the FCs are removed by this sieving. It is faster to test the remaining ~5% (which still include composite numbers) on a GPU than to sieve them even further. (The sieving part can be done on the GPU in Mfaktc v0.20.)

In our example:
9,200,000,000,000 * 960/4620 classes * 5% ≈ 95,600,000,000 FCs to be tested
920,000,000,000 * 960/4620 classes * 5% ≈ 9,560,000,000 FCs to be tested
92,000,000,000 * 960/4620 classes * 5% ≈ 956,000,000 FCs to be tested
So even after sieving, more FCs need to be tested for low exponents than for high ones.
TFing to a higher bit level, in this case 2^65 to 2^66, means roughly twice as many FCs need to be tested.
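Victor's three worked examples can be reproduced with the same constants (a sketch; the 5% figure is the approximate SievePrimes survival rate quoted above, not an exact value):

```python
RANGE = 18_400_000_000_000_000_000  # ~2^64 numbers between 2^64 and 2^65
CLASS_FRACTION = 960 / 4620         # classes that survive (about 20.8%)
SIEVE_SURVIVAL = 0.05               # ~5% of FCs survive SievePrimes sieving

for p in (1_000_000, 10_000_000, 100_000_000):
    raw = RANGE // (2 * p)                       # FCs of the form 2kp+1
    tested = raw * CLASS_FRACTION * SIEVE_SURVIVAL
    print(f"p = {p:>11,}: {raw:>16,} raw FCs, ~{tested:,.0f} left to test")
```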

It takes less work (i.e. it is faster) to TF a large exponent from 2^64 to 2^65 than a small one. Also, an LL test takes more time (more work) on a higher exponent than on a low one. These two facts together make it efficient to TF higher exponents to higher bit levels.

I hope I didn't make a calculation error and I hope this helps.

LaurV 2013-01-15 01:58

[QUOTE=c10ck3r;324692]Okay, I'm confused on what would probably be considered the basics of the graph. I'm reading the numbers as saying that the time to add an additional bit depth is actually[COLOR=Red] slightly [B]over[/B] twice[/COLOR] the previous bit. Why is this? (Or why am I reading this wrong?) I would think that [COLOR=Red]a lower prime density[/COLOR] at the higher range would reduce the number of candidates and speed up the test as such. Please help/move this to a more suited place if I picked a bad thread...[/QUOTE]
Shorter version of what Victor (correctly) said above: we do not use "primes" as factor candidates (primes indeed have lower density as they grow larger). We use "numbers sieved to some depth", which, when the depth is the same, have the same density. If you only sieve with 2, you have all odd numbers, regardless of size. Sieve with 3 as well, and you have all numbers which are 1 or 5 (mod 6), regardless of size. Our factor candidates are sometimes composite, because it is cheaper (faster) to test whether the candidate divides Mp than to test whether the candidate is prime. Once a candidate survives sieving, we check if it divides Mp, which is faster, without wondering whether it is in fact prime or not. There are 105,097,565 primes below 2^31. So, even if you sieved with a hundred million primes, there would still be plenty of composites surviving the sieve when you TF above 2^62.

That covers the density (and the doubling of the work with each bit). The "slightly over" part comes from actually testing the remaining candidates, which are bigger when the bit level is higher, and therefore each takes more "modular work".
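The reason primality of a candidate never matters is visible in the trial-factoring test itself: q divides Mp exactly when 2^p ≡ 1 (mod q), which is checked with a single modular exponentiation. A small illustration with M11 (the candidate values are picked here just for the example):

```python
def divides_mersenne(p, q):
    """True if q divides Mp = 2^p - 1; no primality check on q needed."""
    return pow(2, p, q) == 1

p = 11  # M11 = 2047 = 23 * 89
print(divides_mersenne(p, 23))    # prime factor of M11
print(divides_mersenne(p, 89))    # prime factor of M11
print(divides_mersenne(p, 2047))  # composite (23*89), still passes the test
print(divides_mersenne(p, 67))    # 67 = 2*3*11+1, valid candidate, not a factor
```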

Uncwilly 2013-01-15 01:58

[QUOTE=LaurV;324673]my factoring to 75 front reached 332M27, everything what is behind of this and is factored to 74 only (about 92 expos) are "reserved" and I skipped them. They did not move a dime for ages, and I am tempted to ignore the fact they are reserved, and give them a run, but I can't do this without upsetting the very few serious guys who indeed do some work factoring there.[/QUOTE]Let me work on them slowly with that one core that I have set aside for that. I will push them all to 76 as I pass.
[QUOTE=kracker;324709]Taking 332.5 to 71.[/QUOTE]I have a few reserved at the beginning of that range. Let me know if you want me to dump them. I can move elsewhere to stay out of your hair.

