Rodrigo 2014-09-06 05:02

Attempts vs. Successes oddity
Over time I've developed a sense of the GPU factoring process, including an idea of the general proportion of TF exponents that come out as Not Prime and how long (in GHz-Days) it takes to factor to various exponent levels.

Thus, for example, I can grasp how "dbaugh," who stands at #5 on the [URL=""]Top Trial Factoring Producers[/URL] list, could have such a large number of GHz-Days for the small number of TF attempts: I attribute this to factoring to very deep levels. (Right?)

However, there is one remarkable phenomenon that leaves me scratching my head. How can #16, Bill Staffen, have found only 2 factors in more than eight thousand tries? Is it simply bad luck, or can someone explain (in not-too-technical terms -- I'm no mathematician :smile: ) how that can be?

Typical success seems to run in the 1 - 1.5 percent range. How does one manage to get less than 0.025 percent? What exponent range and to what levels might one work on to achieve this?
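To see why plain bad luck is an unconvincing explanation, here is a quick back-of-the-envelope sketch in Python (my own illustration, assuming the ~1.2% per-attempt success rate quoted above and independent attempts):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summed from log terms to avoid overflow."""
    total = 0.0
    for i in range(k + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                    + i * math.log(p) + (n - i) * math.log(1 - p))
        total += math.exp(log_term)
    return total

# 8000 attempts at a ~1.2% success rate: chance of finding 2 or fewer factors
print(binom_cdf(2, 8000, 0.012))  # a number far below 1e-30
```

So if those were ordinary TF assignments at typical bit depths, chance alone can't plausibly produce that record.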

Just curious...


LaurV 2014-09-06 05:52

It may be attributed to bad hardware, or ignorance. As some big gun says here on the forum, don't attribute to malice what can be attributed to stupidity, hehe. I have also suspected some people of wrongdoing on those lists, and even have proof for some, but what can we do? :smile:

Rodrigo 2014-09-06 06:23

Huh, I hadn't even thought about that sort of possibility (malice or ignorance). I was wondering basically about how, mathematically speaking, such a low ratio of successes to attempts could come about.

I'm not sure how I could achieve that kind of ratio, even if I set out to do it on purpose. The laws of probability would seem to preclude it, and how would I know beforehand which exponents to avoid? :unsure:


LaurV 2014-09-06 06:42

You are thinking too much like an honest guy.

Just make a list of 100 lines of "no factor for exponent xxxx from aa to bb [mfakto blah blah]" with Notepad, and send it to the server. It will be digested and you will get the credit. No need to do any work. Or pick an ECM assignment (3 curves is enough) and do it with P95 offline, so it won't be able to submit, then submit the results manually, but before submission change "3 curves" to "150 curves". They generate the same checksum, and you even have a valid assignment key. Otherwise how do you explain some guy like NOOE (the one with the palindromic name) going from zero to hero in the ECM lifetime top lists in such a short time? (The same guy who both LLed and "double checked" the largest exponent, ~383M or so. George said he knows the guy and he is not a faker, but let me doubt it.)

I mean, I am also a bit of a "credit whore", but in a different direction: I like to get the right credit for the work I did, but I won't go so far as to deliberately falsify results. Other people do. This subject is over-debated, if you look around the forum. In the end, they don't cause too much damage, as all exponents will end up either with a factor or with a double/triple check done by independent users. The only "bad" thing is that if a factor is missed because the range was fake-reported, then someone will lose a few days on an LL test which would not have been needed if the factor had been known.

[edit: it might be nice to know which assignments your guy fulfilled; they are not many, and if reasonable, I can repeat them with my farm. I say "if reasonable" because he may be doing low exponents, where lots of P-1 and ECM have already been done and the chances of finding factors are much lower. This would also explain the high credit: for example, doing 100k expos to 2^63 is the same effort as doing 100M to 2^73 (a factor of 2^10 in both cases, which cancels out), but the chance of finding a factor is null in comparison. Doing this, he gets high credit and invests even more time, as the tools for factoring low-expo ranges are not as efficient: think of mfaktc, which does 400 GHzD/day on our frontline TF, but only 200 GHzD/day or so, on the same card, for low expos.]
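The effort comparison in that edit can be sanity-checked with a toy model (my own assumption for illustration, not GIMPS's exact credit formula): since trial divisors of 2^p - 1 have the form 2kp + 1, TF cost for an exponent p at bit level b scales roughly as 2^b / p.

```python
def relative_cost(p, b):
    """Rough relative TF effort for exponent p at bit level b (assumed model)."""
    return 2.0 ** b / p

# LaurV's example: 100k exponents taken to 2^63 vs 100M exponents taken to 2^73
low = relative_cost(100_000, 63)
high = relative_cost(100_000_000, 73)
print(low / high)  # ~0.98: roughly the same effort, since the 2^10 factors cancel
```

The two 2^10 ratios (bit depth and exponent size) nearly cancel, which is exactly why the two workloads earn comparable GHz-days credit.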

VictordeHolland 2014-09-06 06:48

Looking at: [URL][/URL]
he found at least 18 factors this year (2014)

Looking at P-1 factoring: 509 attempts, 148 factors (which is a lot), so I think his TF factors are being reported as P-1 factors.

LaurV 2014-09-06 07:06

Good catch! Those are all TF factors. They are a bit below 2^74, and many have huge (HUGE!) values for B2, so they can't be P-1 factors. I think we just witnessed the "TF factor recorded as P-1 factor" bug in action. This bug will not bother us anymore; James just said he solved it :smile: and it is confirmed as solved, see some posts from today in a parallel thread.
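For readers wondering how one can tell that a factor "can't be a P-1 factor": P-1 with bounds B1 and B2 can only reveal a factor f of 2^p - 1 when f - 1 is smooth enough. Here is a rough checker (a sketch of my own; the function names and the simplified smoothness test are mine, and it ignores prime-power subtleties):

```python
def prime_factors(n):
    """Trial-division factorization; fine for the illustratively small k used here."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def p1_can_find(f, p, B1, B2):
    """Could P-1 with stage bounds B1, B2 plausibly find factor f of 2^p - 1?"""
    k = (f - 1) // (2 * p)  # factors of 2^p - 1 have the form 2*k*p + 1
    big = [q for q in prime_factors(k) if q > B1]
    return len(big) == 0 or (len(big) == 1 and big[0] <= B2)
```

For example, 1103 divides 2^29 - 1, and 1102 = 2 * 29 * 19, so P-1 finds it only once B1 or B2 reaches 19.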

I think the poor guy was separating the "factor" lines from the "no factor" lines (I used to do this a long time ago too, as I was recording the factors I found) and sending them separately; therefore, if there is no "no factor" line in the file, all the factors get recorded as P-1.

Mystery solved...

Rodrigo 2014-09-06 15:58

Thanks LaurV and Victor, that was enlightening.

Makes sense now. I'll go look for that other thread about the TF/P1 bug.


TheMawn 2014-09-06 16:03

Yep, it's TF factors being reported as P-1, somehow. I had this question a while back and that was the answer.

snme2pm1 2014-09-19 08:44

[QUOTE=LaurV;382271]wrong doing on those lists, even have proof for some, but what can we do?[/QUOTE]

Surely if you have clear evidence then it can be stated.
If the evidence is vague, then perhaps better not.
If such feedback were to expose faulty hardware, then is that not also useful?
I'm a tiny bit curious as to the nature of the wrongdoing evidence that you possess.

Also, don't call me Shirley.
