Philosophical question
If I understand correctly, as the size of the exponent increases, the amount of time to perform an LL test increases while the amount of time required to factor to a specified depth decreases.
Therefore, a given number of computers performing factoring assignments will tend to get further and further "ahead" of another number of computers doing LL assignments. Does this represent a potential problem? Should the factoring depth be increased beyond the currently specified minimums to slow down factoring? If I misunderstand this, please explain it to me. Thank you.
The number of bits that an exponent is factored to increases as the size of the exponent increases. This increase isn't smooth though; for instance, there is some cutoff line around 28M where the bit depth increases from 67 to 68 (my numbers might not be exactly right, but you get the idea).
|
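The staircase behaviour jinydu describes can be sketched as a simple lookup table. This is a sketch with invented breakpoints (the real table is compiled into Prime95 and the exact cutoffs vary by version); only the step from 67 to 68 bits around 28M comes from the post above:

```python
def tf_bit_depth(exponent: int) -> int:
    """Return an illustrative trial-factoring depth (in bits) for a
    Mersenne exponent. Depth steps up at hard cutoffs, so the curve
    is a staircase, not a smooth function of the exponent.

    The breakpoints below are made up for illustration, apart from
    the 67 -> 68 step near 28M mentioned in the thread."""
    breakpoints = [
        (23_000_000, 66),  # invented cutoff
        (28_000_000, 67),  # below ~28M: 67 bits (per the post)
        (35_000_000, 68),  # at/above ~28M: 68 bits
    ]
    for cutoff, bits in breakpoints:
        if exponent < cutoff:
            return bits
    return 69  # everything above the last cutoff

print(tf_bit_depth(27_000_000))  # just below the 28M cutoff -> 67
print(tf_bit_depth(29_000_000))  # just above it -> 68
```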
[QUOTE=jinydu]The number of bits that an exponent is factored to increases as the size of the exponent increases. This increase isn't smooth though; for instance, there is some cutoff line around 28M where the bit depth increases from 67 to 68 (my numbers might not be exactly right, but you get the idea).[/QUOTE]Thank you, jinydu, for the reminder that the depth increases as the exponents get bigger. I just wonder whether the depth grows large enough, fast enough, to avoid needing a boatload of exponents available for factoring assignments while still not frustrating people by running out all the time.
|
The depth is decided by how efficient it is to trial factor at a particular level vs. running two LL tests, rather than with a view to keeping all computers busy. The latter is the Berzerkeley philosophy :rolleyes:
As far as running out of exponents is concerned: part of the "problem", and it is a nice problem to have, is caused by the recent run on factoring by TheJudger and TPr. However, now that George has released about 25,000 exponents we should be set for a while.
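The trade-off garo describes, trial factoring one more bit level vs. the expected savings of avoiding two LL tests, can be sketched with the standard heuristic that a Mersenne number has a factor between 2^b and 2^(b+1) with probability roughly 1/b. The cost figures in the example are invented; this is a sketch of the trade-off, not Prime95's actual formula:

```python
def worth_factoring_deeper(tf_cost_next_bit: float,
                           ll_test_cost: float,
                           current_bits: int) -> bool:
    """Return True if trial factoring from 2^b to 2^(b+1) is expected
    to save more LL time than it costs.

    Heuristic: the chance of finding a factor in the next bit level
    is roughly 1/b. A factor found saves two LL tests (the first-time
    test plus its double-check)."""
    p_find = 1.0 / current_bits
    expected_savings = p_find * 2 * ll_test_cost
    return tf_cost_next_bit < expected_savings

# Invented numbers: an LL test takes 100 hours and we are at 67 bits,
# so the expected savings of one more bit is (1/67) * 200 h, about 3 h.
print(worth_factoring_deeper(2.0, 100.0, 67))  # 2 h of TF is worth it
print(worth_factoring_deeper(4.0, 100.0, 67))  # 4 h is not
```

Because p_find shrinks as the depth grows while the cost of each extra bit roughly doubles, the break-even point arrives quickly, which is why factoring past the cutoff wastes cycles rather than helping.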
Well, I'm afraid that I'm not familiar with Professor Berzerkeley's work. :wink:
I am asking this under the heading "Philosophical Question" because I think it would be bad form to say "Go away. We don't need help factoring." if there is a method to provide work to those who wish to factor without crushing the server with the number of exponents it would need to serve and track. Is there any harm in "overfactoring" a given exponent? (Other than the fact that someone might spend time factoring that someone else might profitably spend performing an LL test.)

However, maybe I'm obsessing about an issue that could or would never come to pass. It's too bad that so many decisions are made at the client level (compiled into the Prime95 program). If they were made at the server level then the factoring depth could be adjusted dynamically as the supply of and demand for exponents ebbed and flowed.
[QUOTE=JHagerson]If they were made at the server level then the factoring depth could be adjusted dynamically as the supply of and demand for exponents ebbed and flowed.[/QUOTE]
I think you're missing the point here. The criterion for when to stop factoring is determined by how to make the most effective use of the available resources. If you factor beyond this pre-determined cutoff point, you're not making the most effective use of the pool of processors participating in GIMPS. If factoring assignments run out, it means that more computers should be LL testing rather than factoring, not that they aren't factoring far enough.

Correct me if I'm wrong, but the server *does* decide dynamically what the threshold is for whether a computer is assigned a factoring or LL test (in all the cases where the user allows the server the freedom to do this, anyway).

Drew
No, there is a default built into the client that decides whether the computer should pursue first-time LL testing, double-checking or trial factoring. This decision is based solely on processor speed.
|
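The client-side default described above can be sketched as a simple speed lookup. The MHz thresholds below are invented for illustration; only the fact that the choice depends solely on processor speed comes from the post:

```python
def default_work_type(cpu_mhz: int) -> str:
    """Illustrative sketch of the client-side default: the work type
    is chosen from processor speed alone. The thresholds here are
    invented, not Prime95's actual values."""
    if cpu_mhz < 300:
        return "trial factoring"      # slow machines factor
    elif cpu_mhz < 800:
        return "double-checking"      # mid-range machines double-check
    else:
        return "first-time LL testing"  # fast machines run new LL tests

print(default_work_type(200))
print(default_work_type(1200))
```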
[QUOTE]However, maybe I'm obsessing about an issue that could or would never come to pass.[/QUOTE]
Yes, I think so. And what drew said. It is more important to make sure that resources are used efficiently to increase the overall throughput of GIMPS. There is no shortage of numbers to factor. If George wanted, he could release all exponents up to 40M for factoring and that would keep us busy for a year or more. The only reason exponents are released for factoring in small batches is to let the LMHers have a go. I do not understand why you think we may run out of numbers to factor and hence need to factor to a greater depth. On what assumptions do you base this claim?
[QUOTE=garo]I do not understand why you think we may run out of numbers to factor and hence we need to factor to a greater depth. On what assumptions do you base this claim?[/QUOTE]I would not say "run out." I would say "overtax the server with too many to track simultaneously."
Again, maybe this is a non-issue. I was under the (probably mistaken) impression that there is an army of people who choose to be involved with the project solely to do PrimeNet-scored factoring (to generate stats credit) and that these people would be [disappointed|disillusioned|dismayed|dispeptic] if there was no factoring work available. As to drew's point that "if that happens, more people should do LL tests," I personally agree wholeheartedly, but am [thinking|worried] about the army of factoring-stats-motivated participants who may or may not exist.
[QUOTE=JHagerson]"overtax the server with too many to track simultaneously."[/quote]I don't recall ever seeing any indication that the server's capacity for keeping track of assignments (TF or otherwise) was in any danger of being exceeded.
[quote]Again, maybe this is a non-issue.[/quote]Yup.

[quote]I was under the (probably mistaken) impression that there is an army of people who choose to be involved with the project solely to do PrimeNet-scored factoring (to generate stats credit)[/quote]Well, my only army experience is that once upon a time I was part of "a small army of IBM programmers" (quoted from a trade journal) who chose to be hired to develop a real-time network for running automatic teller machines. But just as the Pacific isn't terrific, so too did that army resemble the Atlantic in that it wasn't quite [i]what it's cracked up to be[/i].

Hmmm -- "an army of people who choose to be involved with the project solely to do PrimeNet-scored factoring (to generate stats credit)", eh?

[quote]and that these people would be [disappointed|disillusioned|dismayed|d[/quote]y[quote]speptic] if there was no factoring work available.[/quote]Well, if a significant number of such soldiers were dis/dis/dis/dys-ed, surely one or more would find their way to this forum and post their complaints themselves, wouldn't they? So, if we don't currently see a significant number of such complaints here, then maybe the supposed army is only a platoon at most? :-)
Forgive my ignorance, but is there a tangible benefit to "padding your stats", so to speak, with factoring assignments, or is it just a friendly competition? And couldn't the same competitive spirit be in place for LL testers (plus the potential to win some money, as well)?

Drew