#12

Dec 2005
668 Posts
Sorry for bumping this old thread again, but something crossed my mind.
The estimated time to complete, which was said to be around 10 years: does that already account for double checks, or will those push it up to 20 years instead?
#13

Jun 2003
2·7·113 Posts
Really, it depends on the number of primes, number of users, etc. The project might be over in 5 years; it is hard to give any firm estimate.
Though if we take the same time as SOB, the project should be over in the next 6-7 years (assuming we soon have as many users as SOB).

Citrix
#14

Dec 2005
54₁₀ Posts
My doubts were more about the length of the double-checking work.
Is it necessary to double check every candidate, or only to retest values that come back with strange results? If every candidate must be retested, that will effectively double the "theoretical maximum time" the project would take, right? Or is the double-checking process faster?

Last fiddled with by NeoGen on 2006-01-15 at 07:30
#15

Apr 2003
1404₈ Posts
In principle, all residues must be double checked, as there are only a few cases where you can really tell that a residue is wrong.
But because we continue sieving all the time, we reduce the number of double checks needed. For example, for the k values where no prime has been found yet, there are 5782 n values where we have done a first PRP test but a factor was found afterwards, so no PRP double check is needed anymore. This shows again the importance of sieving: at the moment (thanks to Brucifer, but also to all the other sievers) we have removed ~1% of the open tests by sieving within 15 days!!! Those numbers will never see a PRP test.

Lars

Edit: Forgot to answer the last question: a double check takes the same time as the first PRP test. (But let's hope there are new, faster clients in the future.)

Last fiddled with by ltd on 2006-01-15 at 07:56
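To put those sieving savings in perspective, here is a rough sketch. The pool size and per-test cost below are illustrative assumptions, not project figures; only the "~1% of open tests in 15 days" rate comes from the post above:

```python
# Hedged illustration of CPU time saved when sieving removes open tests.
# open_tests and hours_per_prp_test are hypothetical numbers chosen for
# illustration; the 1% removal rate is the figure quoted in the post.
open_tests = 500_000          # assumed size of the open-test pool
removal_rate = 0.01           # ~1% of open tests removed in 15 days
hours_per_prp_test = 10.0     # assumed cost of one PRP test

tests_removed = open_tests * removal_rate
cpu_hours_saved = tests_removed * hours_per_prp_test
print(tests_removed, cpu_hours_saved)   # → 5000.0 50000.0
```

Every removed candidate also removes its eventual double check, so under these assumptions the real saving would be roughly twice the figure printed.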
#16

Jun 2003
1582₁₀ Posts
[QUOTE=NeoGen]10 years?? Oh well... plenty of work for the future then.[/QUOTE]

NeoGen, just to explain things better: we are currently finding 1000 factors daily (thanks to Brucifer and others). If we could maintain this rate of factors, it would take us 5 years to reach 50M. As we sieve larger and larger ranges it gets more difficult to find factors, so this rate will drop. If we can match the rate drop by adding new machines, the project will reach 50M in 5 years; if we could double our effort, 2.5 years.

This assumes we are not doing PRP. Each prime found through PRP removes about 6 months of work at a time, but primes are difficult to predict, so the PRP aspect is difficult to estimate. Still, if we can keep getting new users as the project gets more and more difficult, and we maintain our rate of factors, we should reach 50M in less than 5 years.

I hope that answers your question.

Citrix
#17

Dec 2005
2·3³ Posts
I'm getting more of the "big picture" every day! Thanks, guys!
And that time might even be shortened, as users will (hopefully) get faster, more powerful machines over the years. :)
#18

Jun 2003
2×7×113 Posts
We would get an exponential boost as soon as SOB joins forces with us. I hope that time comes soon (i.e., that we are able to catch up with them really fast).

Citrix
#19

Dec 2005
2·3³ Posts
By the way... I've read in places that sieving is only worthwhile up to a certain point, and beyond that it is better to just run the primality tests, because the factors found are too few compared to the effort spent finding them.
How do we know if we have reached that point?
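A common rule of thumb in sieving projects (an outside observation, not something stated in this thread) is to keep sieving only while finding one more factor costs less CPU time than simply PRP-testing the candidate that factor would eliminate. A minimal sketch:

```python
def keep_sieving(hours_per_factor_found, hours_per_prp_test):
    """Return True while removing a candidate by sieving is still
    cheaper than eliminating it with a PRP test."""
    return hours_per_factor_found < hours_per_prp_test

print(keep_sieving(2.0, 10.0))   # → True: sieving still pays off
print(keep_sieving(15.0, 10.0))  # → False: switch effort to PRP tests
```

In practice, hours_per_prp_test should count everything a factor saves, including the eventual double check, which shifts the breakeven point toward deeper sieving.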
#20

Jun 2003
2·7·113 Posts
Quote:
Citrix |
#21

Jul 2004
Potsdam, Germany
831₁₀ Posts
I think it's not that easy to explain, because the effort-increasing factor (pun not intended) is different for the two tasks.

For primality tests, effort increases with bigger n values. For sieving, factor density decreases (and thus effort per factor increases) with greater sieving depth.

Furthermore, primality testing and sieving affect each other's usefulness. There is no sense in sieving deep when you are not doing much primality testing, because the early primality tests are not that hard (and you also have to consider that found primes make the remaining sieving easier...). On the other hand, a lot of primality testing without thorough sieving is inefficient as well, especially as the n values increase. This is because sieving speed does not depend on the n value itself, only on the size of the n range.

You have to keep a good balance between primality testing and sieving to get the most out of it. One could say that the higher the n values you test, the deeper you have to sieve. But since it is most efficient to sieve large ranges, the lower bound of the sieved n range can be minimal.
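The diminishing factor density described above can be approximated with Mertens' theorem: among random integers, the fraction with no prime factor below P falls off like e^(-γ)/ln P, so doubling the sieve depth removes only a thin extra slice of candidates. A hedged sketch (candidates of the form k·2^n±1 are not random integers, so this is only a proportional guide, not a project formula):

```python
import math

EULER_GAMMA = 0.5772156649015329

def survivor_fraction(sieve_depth):
    # Mertens' third theorem: the product over primes p <= P of (1 - 1/p)
    # is asymptotically e^(-gamma) / ln(P).
    return math.exp(-EULER_GAMMA) / math.log(sieve_depth)

# Deepening the sieve from 1e12 to 1e15 removes only a small extra
# fraction of candidates, while each factor costs far more to find.
print(survivor_fraction(1e12))
print(survivor_fraction(1e15))
```

The slow 1/ln P decay is exactly why, at some depth, the remaining candidates are cheaper to eliminate by PRP testing than by more sieving.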