[QUOTE=Uncwilly;258931]So I will now have a way to figure P-1 credit for the 100M digit range.[/QUOTE]Since they'll all use an 18M FFT, the Excel version of the GHz-days formula should be:[code]=1.2852 * ((1.45 * <B1>) + (0.079 * (<B2> - <B1>))) / 86400[/code](replacing <B1> and <B2> with cell references, of course).
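For anyone scripting this outside Excel, here is a minimal Python port of the formula above. This is a sketch, assuming the 1.2852 constant is the fixed per-unit cost at the 18M FFT size and that B1/B2 are the P-1 stage bounds, exactly as in the quoted formula:

```python
def p1_ghz_days(b1, b2):
    """Estimate GHz-days credit for a P-1 run at the 18M FFT size.

    b1, b2 are the stage 1 and stage 2 bounds (b2 >= b1).
    Mirrors the Excel formula quoted above; the constants are
    taken from that post, not re-derived here.
    """
    return 1.2852 * ((1.45 * b1) + (0.079 * (b2 - b1))) / 86400

# Hypothetical example bounds, e.g. B1=600,000 and B2=15,000,000:
print(round(p1_ghz_days(600_000, 15_000_000), 2))
```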
I find it odd that just as P-1 finished the 53M range, many of my recent P-1 assignments are in the high 54M to low 55M range.
Virtually every exponent in the 54M range has been factored to at least 69 bits and is currently unassigned.
[QUOTE=petrw1;260149]I find it odd that just as P-1 finished the 53M range, many of my recent P-1 assignments are in the high 54M to low 55M range.
Virtually every exponent in the 54M range has been factored to at least 69 bits and is currently unassigned.[/QUOTE]Guilty, m'Lud.

A bunch of us have been using mfaktc for the last week or few; it really chews through the TF allocations given out by default. In just over a week I alone have processed well over 1500 exponents in the 53-54M range and have gone from nowhere to 92nd in the TF league table. The current batch is factoring to 70 bits; the previous ones went to 69.

Paul
[QUOTE=xilman;260151]Guilty, m'Lud.
A bunch of us have been using mfaktc for the last week or few; it really chews through the TF allocations given out by default. In just over a week I alone have processed well over 1500 exponents in the 53-54M range and have gone from nowhere to 92nd in the TF league table. The current batch is factoring to 70 bits; the previous ones went to 69.

Paul[/QUOTE]Exactly: with you and yours completing the factoring to the necessary limits, all these exponents should be available for P-1.
[QUOTE=petrw1;260165]Exactly: with you and yours completing the factoring to the necessary limits, all these exponents should be available for P-1.[/QUOTE]
IIRC, within each 1-million range the server hands out the least-TF'ed exponents for P-1 first. Thus, it will first hand out all the exponents TF'ed to 2^68, then those to 2^69, etc.
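The handout order described there is easy to sketch. This is purely illustrative, with made-up exponents and TF depths, not actual server code:

```python
# Hypothetical candidate pool within one 1M range: (exponent, bits TF'ed to).
candidates = [
    (53000021, 69),
    (53000063, 68),
    (53000117, 70),
    (53000149, 68),
]

# Hand out the least-TF'ed exponents first: sort by TF depth, then exponent.
queue = sorted(candidates, key=lambda c: (c[1], c[0]))
print(queue)  # the two 68-bit exponents come out first, then 69, then 70
```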
My P-1 assignments have been in the 53.9M range, and I just turned in a 54.0M this afternoon.
My CUDA card came up last night; it will start doing a little OBD TF for real in half a day, after repeating one of my successful P-1 assignments. This probably means the six-core beast isn't getting much ECM done. Any chance of a CUDA P-1 program?
[QUOTE=Prime95;260167]IIRC, within each 1 million range the server hands out the least TF'ed exponents for P-1. Thus, it will first hand out all the exponents TFed to 2^68, then those to 2^69, etc.[/QUOTE]
That makes sense.
If there is a factor f with 2^68 < f < 2^69, what is the chance of P-1 finding it?

David
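The answer hinges on smoothness: any factor q of M(p) = 2^p - 1 has the form q = 2kp + 1, so P-1 stage 1 finds q exactly when the prime factors of q - 1 (i.e. of 2kp) all fit under the bounds. A toy stage-1 sketch on a small Mersenne number illustrates this; it is my own illustration, not gwnum/Prime95 code:

```python
from math import gcd

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def p_minus_1_stage1(n, b1):
    """Toy Pollard P-1 stage 1: raise a base to every prime power <= b1.

    Finds a factor q of n when q - 1 is b1-smooth (and the remaining
    cofactor of n is not, so the gcd stays proper).
    """
    a = 3
    for p in range(2, b1 + 1):
        if is_prime(p):
            pe = p
            while pe * p <= b1:   # largest power of p not exceeding b1
                pe *= p
            a = pow(a, pe, n)
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# M(37) = 2^37 - 1 has the factor q = 223 = 2*3*37 + 1 (k = 3).
# q - 1 = 2 * 3 * 37 is 40-smooth, so stage 1 with B1 = 40 pulls it out.
print(p_minus_1_stage1(2**37 - 1, 40))  # 223
```

Whether a 68-to-69-bit factor of a 54M exponent is caught thus comes down to whether its k happens to be smooth to the chosen B1/B2, which is a matter of luck per candidate.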
And the race is on.

The server has just started handing out LL assignments in the 53M range (I think; unless someone manually grabbed 50 or so). Keep in mind, though, that with ample lower exponents still expiring daily, there won't be a full daily complement in that 53M range for a while.

Before George decided to add another TF bit in the 53-59M range, the P-1'ers had almost 18,000 exponents ready for LL in the 53M range. We should see a similar number back in the LL Available column in a few days, once the 53M TF is done to that extra bit. Then we can watch whether the LL Available number grows or drops in the coming months, as an indication of whether P-1 is keeping up or not.
My recent P-1 efforts have found 4 factors in perhaps 90 attempts and 360 GHz-days. I'm doing better at that than at deep (extra bit or two) TF, which has only found one factor.

Question: would P-1 on CUDA offer the same kind of speedup that TF gets on CUDA?
[QUOTE=Christenson;261100]My recent P-1 efforts have found 4 factors in perhaps 90 attempts and 360 GHz-days. I'm doing better at that than at deep (extra bit or two) TF, which has only found one factor.

Question: would P-1 on CUDA offer the same kind of speedup that TF gets on CUDA?[/QUOTE]Good question. I'd argue that P-1 is a somewhat pensioned-off algorithm: one year older than Pollard rho, yet suffering from the same randomness. There is certainly room for a version of it adapted to a GPU. But if you're going to make that effort anyway, why not go for a new algorithm? It's not so hard to design one. And if you're looking at old ones anyway, I'd argue: why not take a look at Pomerance? Basically, it needs to sieve quickly.

What you really want to avoid is algorithms that must do arithmetic modulo P, where P is some millions of bits long. Regrettably, that's nearly all of them... GPUs have little room for that type of calculation. Sure, they can do it faster than a CPU, provided the GPU has enough RAM. The question is also whether you want to throw such code onto the net once you have it working :)

Regards,
Vincent