mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

chalsall 2012-02-22 14:44

[QUOTE=Bdot;290449]I just randomly checked what primenet thinks about some of my assignments, and I noticed that almost all of my DC assignments have been assigned to other people by primenet ...

For example [B][URL="http://www.mersenne.org/report_exponent/?exp_lo=26173363"]26173363[/URL] [/B]was assigned to me by GPU272 on 2012-02-03. Primenet assigned it as Double-checking to "KYOJI_KAMEI" on 2012-02-03 (which is not me :smile:).[/QUOTE]

Yes -- this was discussed over on the [URL="http://www.mersenneforum.org/showthread.php?t=16352"]Assignment discrepancy[/URL] thread yesterday. For some reason PrimeNet mapped the Anonymous DC account to "KYOJI_KAMEI" a few days ago. I had to create a new account, unreserve all the candidates which had not yet been assigned to workers, and then re-reserve them.

[QUOTE=Bdot;290449][B][URL="http://www.mersenne.org/report_exponent/?exp_lo=26087387"]26087387[/URL][/B] was assigned to me by GPU272 on 2012-01-11. Primenet lists it as Double-checking to "ANONYMOUS" on 2012-02-22 --- if that is me, then why was it assigned today? I assume it was reassigned to someone else.[/QUOTE]

Nope -- if you look at that page, you'll see that PrimeNet reports you completed that DC today. I checked a few of your other assignments, and those assigned today are reported by PrimeNet as belonging to "Anonymous". I suspect this is because your Prime95 client hasn't checked in with PrimeNet to claim them yet.

[QUOTE=Bdot;290449]I already unreserved a few of my DC's from GPU272 when I noticed that many of them are almost complete on machines that have no internet access.

I now decided to kick all DC assignments that have no or less than 10% of their work done. Assignments that have more than 75% done already will keep going (there were none between 10-75%).[/QUOTE]

Even if "assigned" to "KYOJI_KAMEI", it is safe to complete them. The AID is valid, and KYOJI_KAMEI's machines have [U]not[/U] claimed them.

[QUOTE=Bdot;290449]But what is happening here? And do I need to check each of my GPU272 assignments if they are still valid? Or just DC and LL which are supposed to be assigned to ourselves (which is not happening for machines with no internet)? What about P-1?

Is there some way GPU272 could help identifying cases where primenet reassigned stuff differently than GPU272?[/QUOTE]

I didn't release and re-reserve any which were already assigned to workers because there is a small chance Spidy wouldn't be successful in re-reserving them, and the keys would change.

However, if anyone sees this and wants this done, please PM me. I'm going to do this today for one user who already has asked for it to be done.

Note that this did [B][U]NOT[/U][/B] happen for any LL assignments. And P-1 and TF assignments are owned by the non-anonymous Spidy accounts.

chalsall 2012-02-22 14:52

[QUOTE=Dubslow;290453]I do agree that we still need it for DCTF and for the lower LLTF -- but not necessarily the higher stuff. Perhaps other users can see what sort of TF they get from PrimeNet?[/QUOTE]

My thinking is since Spidy is always "on the job", that the system will continue to be useful for coordinating these efforts even if we pull ahead of the LL "wave" like we have the DC wave.

Keep in mind that Spidy is not grabbing any work above 55M which is already TFed to 71 bits or above. However, once it finds a candidate at 70 or below, it keeps it until TFed to 72 (or 73 above 59.69M).
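Spidy's reservation behaviour, as described above, boils down to a couple of predicates. Here's a minimal sketch (function names are hypothetical; the thresholds are the ones just stated):

```python
def target_depth(exponent: int) -> int:
    """Target TF depth per the thread: 72 bits, or 73 above 59.69M."""
    return 73 if exponent > 59_690_000 else 72

def spidy_grabs(exponent: int, bits_done: int) -> bool:
    """Reservation rule: above 55M, skip any candidate already
    trial-factored to 71 bits or beyond."""
    return not (exponent > 55_000_000 and bits_done >= 71)

def spidy_keeps(exponent: int, bits_done: int) -> bool:
    """Once holding a candidate found at 70 bits or below, keep it
    until it reaches the target depth."""
    return bits_done < target_depth(exponent)
```

So a 60M exponent grabbed at 70 bits would be held through 73, while one already at 71 bits would never be grabbed in the first place.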

And, yes, certainly people can work directly with PrimeNet. But I think some enjoy seeing the pretty graphs and stats (and competing with others) that GPU72 offers, and it does generally have lower candidates than are available from PrimeNet.

Graff 2012-02-22 23:28

[QUOTE=chalsall;290079]Hey all. Just so people know, over the weekend I:

- Added a "Released Trend" (Linear Regression) line to the Trial Factoring Depth per Day graphs on the [URL="http://www.gpu72.com/reports/overall/graph/month/"]Overall System Progress Graphs[/URL].

- Tweaked the LR lines on the other graphs so the current day is not included in the linear regression. This prevents the trend line from varying during the day and tending toward a lower slope than it should have.

- Made both of the daily GHz Days graphs on the [URL="http://www.gpu72.com/reports/worker/4d89b8ff781e27a0fe80450cd4cd74b6/"]Individual Stats[/URL] pages have a 30 day moving average rather than an overall running average.

- Added a conditional for the Individual graphs so the X-axis date labels don't bunch up / overwrite for those who have been with the project for a while.

- Fell off a Segway at 15 km/h onto coral stone. Three times... Semi-serious abrasions on both arms, and possibly a broken toe... Boy was it fun!!! :smile:[/QUOTE]

Are the top-n graphs generated dynamically? Or are they prepared
at intervals? I ask because the P-1 graphs don't seem to match the
P-1 table.

Gareth

LaurV 2012-02-23 01:56

[QUOTE=Graff;290498]Are the top-n graphs generated dynamically? Or are they prepared
at intervals? I ask because the P-1 graphs don't seem to match the
P-1 table.

Gareth[/QUOTE]

@chalsall: ...and could you do linear regressions on the graphs mentioned above WITHOUT the last day? As long as the last candle is not completed, it always "pulls down" the right side of the regression lines; that is one of the reasons the lines always look bearish-biased. Trust me, I know that from Forex :P

Bdot 2012-02-23 09:35

[QUOTE=chalsall;290455]
...

Even if "assigned" to "KYOJI_KAMEI", it is safe to complete them. The AID is valid, and KYOJI_KAMEI's machines have [U]not[/U] claimed them.

...
[/QUOTE]

Oh boy, I obviously do not spend sufficient time reading the forums (though I already thought it took up too much of my time).

And now I'm in big sh...

I'm currently finishing
26073517 [83.45%] "Double-checking to "ANONYMOUS" on 2012-02-22"
26087389 [79.11%] "Double-checking to "Kyle Askine" on 2012-02-22"
25834519 [94.67%] "Double-checking to "Frederick Menninger" on 2012-02-22"
26078791 [90.99%] "Double-checking to "ANONYMOUS" on 2012-02-22"

When I saw in primenet that they were assigned to someone else, I unreserved them from GPU272 before I noticed that these were the active ones.

I guess I have no right to finish them, but I'm sorry for the wasted cycles. I can send the 26087389 save file to Kyle, but have no contact information for the others. Any suggestions?

Dubslow 2012-02-23 09:48

[QUOTE=Graff;290498]Are the top-n graphs generated dynamically? Or are they prepared
at intervals? I ask because the P-1 graphs don't seem to match the
P-1 table.

Gareth[/QUOTE]

Yes, they definitely don't match. E.g., chalsall, [URL="http://gpu72.com/reports/workers/p-1/graph/11-20/"]My Favorite Graph[/URL]: it shows Craig > Geek, though they are correctly sorted the opposite way. Also note it shows Geek at ~600, Craig at just over 600, and me below 600, when we're actually at 670, 650 and 620, respectively. (And the simplest explanation, that it's just lagging, doesn't hold, just by looking at the dates.)

@Bdot: That's terrible :P I wonder what cheesehead thinks.

LaurV 2012-02-23 10:04

[QUOTE=Bdot;290529]Oh boy, I obviously do not spend sufficient time reading the forums (though I already thought it is too much of my time).

And now I'm in big sh...

I'm currently finishing
26073517 [83.45%] "Double-checking to "ANONYMOUS" on 2012-02-22"
26087389 [79.11%] "Double-checking to "Kyle Askine" on 2012-02-22"
25834519 [94.67%] "Double-checking to "Frederick Menninger" on 2012-02-22"
26078791 [90.99%] "Double-checking to "ANONYMOUS" on 2012-02-22"

When I saw in primenet that they were assigned to someone else, I unreserved them from GPU272 before I noticed that these were the active ones.

I guess I have no right to finish them, but I'm sorry for the wasted cycles. I can send the 26087389 save file to Kyle, but have no contact information for the others. Any suggestions?[/QUOTE]
If I were you, I would certainly finish them. If you get matches, you could notify the assignees (so they can stop wasting their time). Or not; I wouldn't. It is not your fault that the system went crazy. And with such high percentages already finished, it would be a pity to throw them away.

KyleAskine 2012-02-23 12:09

I am not due to start 26087389 until tomorrow. I can keep it assigned so no one else gets it and starts on it. I will pull a new exponent and put it first in line tonight when I get home from work.

chalsall 2012-02-23 14:15

[QUOTE=Graff;290498]Are the top-n graphs generated dynamically? Or are they prepared
at intervals? I ask because the P-1 graphs don't seem to match the
P-1 table.[/QUOTE]

Everything you see on the site is generated dynamically in "real-time".

The discrepancy you pointed out on the P-1 graphs was a "Stupid Programmer Error" -- I wasn't including the GHz Days credit for factors found. This has been fixed.

chalsall 2012-02-23 14:18

[QUOTE=LaurV;290504]@chalsall: ...and could you do linear regressions on the graphs mentioned above, WITHOUT the last day? As long as the last candle is not completed, it always "pulls down" the right-side of the regression lines, that is one of the reason the line always look bearish biased. Trust me, I know that from Forex :P[/QUOTE]

This is already the case. And noted in my message above.

You can confirm this yourself by looking at the graphs shortly after midnight UTC, and then again shortly before on the same day -- the LR trend line will not change during the day.
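The "exclude the current day" trick amounts to dropping the last (still-accumulating) bucket before fitting. A rough illustration, not GPU72's actual code:

```python
def trend_slope(daily_totals):
    """Least-squares slope over completed days only: the final bucket
    (today, still accumulating) is excluded so the fitted trend does
    not sag during the day."""
    ys = daily_totals[:-1]  # drop the current, incomplete day
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```

With a steadily rising series like [10, 20, 30] followed by a partial day of 5, this returns the true slope of 10 rather than a dragged-down one.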

chalsall 2012-02-23 14:21

[QUOTE=Bdot;290529]And now I'm in big sh...[/QUOTE]

Me more than you... :cry:

I'm going to do a full audit of all DC assignments, and try to prevent as much duplication of effort as I can...

Sorry about this everyone.... :sad:

Bdot 2012-02-23 14:55

LL too
 
My LL assignments may be affected too:

In order to avoid duplicated effort on my machines, I tried to set up proxies so they can be hooked up to primenet directly. Upon running mprime -c on the first one, it deleted all my LL assignments and assigned a DC (29M) to me:

[code]
[Comm thread Feb 23 15:26] pnErrorResult=43
[Comm thread Feb 23 15:26] pnErrorDetail=ap: no such assignment key, GUID: e6f558f28db5775a1d850021449a26df, key: A75CD9F19D9BFBB088D7608E007F0DB2
...
[Comm thread Feb 23 15:26] pnErrorResult=43
[Comm thread Feb 23 15:26] pnErrorDetail=ap: no such assignment key, GUID: e6f558f28db5775a1d850021449a26df, key: 1ABAAFB82129CC31B08DEAF0D6C76F8A
...
[Comm thread Feb 23 15:26] pnErrorResult=43
[Comm thread Feb 23 15:26] pnErrorDetail=ap: no such assignment key, GUID: e6f558f28db5775a1d850021449a26df, key: C995206A12CE5FC81D745C9EF920EEBA
...
[Comm thread Feb 23 15:26] pnErrorResult=43
[Comm thread Feb 23 15:26] pnErrorDetail=ap: no such assignment key, GUID: e6f558f28db5775a1d850021449a26df, key: 37F36E3766FD122CB8AB5236CFB7C7DA
[/code]

45343873 [61.07%] "LL testing to "ANONYMOUS" on 2012-02-19"
45340277 [32.01%] "LL testing to "ANONYMOUS" on 2012-02-19"
45220037 [58.22%] "LL testing to "GPU Factoring" on 2012-02-02"
45135169 [18.89%] "LL testing to "GPU Factoring" on 2012-02-03"

(the dates do not match the dates on which GPU272 assigned the LLs to me).

They are still in my assignments list in GPU272, so I can use that to recreate the worktodo lines, but I guess whenever it connects to primenet it will delete these lines again ... ?

I'm really afraid to try that for the other machines ...

chalsall 2012-02-23 15:01

[QUOTE=Bdot;290561]My LL assignments may be affected too:[/QUOTE]

GRRRRRR!!!!

OK, I've got some work todo today...

I've temporarily disabled both DC and LL reservations until I've had a chance to get to the bottom of this....

Edit: Also, I find it [B][I][U]very[/U][/I][/B] strange that Prime95 deleted these assignments which had had work done on them. My (mis?)understanding was that no assignment would be deleted even if the AID was "invalid" in such a case.

kladner 2012-02-23 15:24

Just for the record, I have one DC and one LL reserved through GPUto72. PrimeNet has my name on them.

Bdot 2012-02-23 15:33

[QUOTE=kladner;290563]Just for the record, I have one DC and one LL reserved through GPUto72. PrimeNet has my name on them.[/QUOTE]
I have a few other LL and DC assignments with my name as well. This problem only seems to appear for machines that never talked to primenet. Maybe because those assignments expire faster? But they should not expire within 2 months ...

mprime only deleted the lines from worktodo. Fortunately it did not delete the save files.

Graff 2012-02-23 15:33

[QUOTE=chalsall;290556]Everything you see on the site is generated dynamically in "real-time".

The discrepency you pointed out on the P-1 graphs was a "Stupid Programmer Error" -- I wasn't including the GDs credit for factors found. This has been fixed.[/QUOTE]

Thanks, the graphs and tables now match.

Gareth

KyleAskine 2012-02-23 15:39

[QUOTE=Bdot;290564]I have a few other LL and DC assignments with my name as well. This problem only seems to appear for machines that never talked to primenet. Maybe because those assignments expire faster? But they should not expire within 2 months ...

mprime only deleted the lines from worktodo. Fortunately it did not delete the save files.[/QUOTE]

I thought that you had to claim the assignments from Primenet within 30 days or they would be kicked back into the queue by GPUto72?

chalsall 2012-02-23 16:01

[QUOTE=KyleAskine;290568]I thought that you had to claim the assignments from Primenet within 30 days or it would be kicked back into the queue by GPUto72?[/QUOTE]

That's only for DCTF or LLTF assignments. P-1s are valid for 90 days; DCs and LLs go by PrimeNet's expiry rules.
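These lifetimes can be captured in a toy lookup (hypothetical names; DC/LL are simply left to PrimeNet here):

```python
from datetime import date, timedelta

# Reservation lifetimes as described in this thread. DC and LL follow
# PrimeNet's own expiry rules, represented here as None.
EXPIRY_DAYS = {"DCTF": 30, "LLTF": 30, "P-1": 90, "DC": None, "LL": None}

def is_expired(work_type: str, assigned: date, today: date) -> bool:
    """Whether a GPU to 72 reservation has outlived its lifetime."""
    days = EXPIRY_DAYS[work_type]
    if days is None:
        return False  # deferred to PrimeNet's expiry handling
    return today - assigned > timedelta(days=days)
```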

But the magic "two months" has led me to the probable problem... Working it....

KyleAskine 2012-02-23 16:23

[QUOTE=chalsall;290574]That's only for DCTF or LLTF assignments. P-1s are valid for 90 days; DCs and LLs go by PrimeNet's expiry rules.

But the magic "two months" has lead me to the probable problem... Working it....[/QUOTE]

But he said he didn't contact primenet on those machines?

Maybe I am missing something.

chalsall 2012-02-23 16:38

[QUOTE=KyleAskine;290579]But he said he didn't contact primenet on those machines?

Maybe I am missing something.[/QUOTE]

No, you're not missing things -- I misread your post. And yes, this appears to have been the problem.

"Spidy" wasn't checking in with PrimeNet for DC and LL assignments, as I considered such assignments to be the responsibility of the "owner".

I'm doing some analysis to determine who has been affected by this, and having Spidy check in with PrimeNet for those DC/LL assignments which PrimeNet still thinks are owned by Anonymous.

Also, it appears that many which were "lost" by workers have been reclaimed by Spidy (and fortunately not yet reassigned to others).

Let me keep working the problem -- I'll report back once I've completed what I need to do.

chalsall 2012-02-24 00:48

[QUOTE=chalsall;290583]Let me keep working the problem -- I'll report back once I've completed what I need to do.[/QUOTE]

Just so everyone knows, I'm still working this. A few will have already received PMs indicating that a few of their AIDs have changed as I reclaim the assignments from "KYOJI_KAMEI" and/or "GPU Factoring" (read: Spidy).

Manually going through log files, running SQL queries, and spidering PrimeNet was not how I had hoped to spend the day.... :sad:

chalsall 2012-02-24 04:07

OK, time for bed...

I've done a sanity check on the currently available DC and LL assignments, and reactivated the DC and LL assignment pages.

Will finish off the clean-up tomorrow.

flashjh 2012-02-24 04:08

[QUOTE=chalsall;290661]OK, time for bed...

I've done a sanity check on the currently available DC and LL assignments, and reactivated the DC and LL assignment pages.

Will finish off the clean-up tomorrow.[/QUOTE]

Thanks

ckdo 2012-02-24 21:00

My "View Assignments" page states

[quote]You currently have 4568 Trial Factoring Assignments totaling 4110.091 GHz Days of work[...][/quote]at the top vs.

[quote]4297 Assignments.[/quote]at the bottom. The latter is correct; I'm not sure about the GHzd total. :smile:

chalsall 2012-02-24 21:07

[QUOTE=ckdo;290750]The latter is correct; I'm not sure about the GHzd total. :smile:[/QUOTE]

Whoops. Thanks.

While PrimeNet was suffering load issues, I turned off a bunch of spiders in the crontab. I turned off the script which recalculates this total by mistake. It's been reactivated.

KyleAskine 2012-02-24 22:15

Now that the only assignments available are to take things from 71-72 I suspect that we will need a lot more P-1 power soon...

Dubslow 2012-02-24 22:50

I suspect we'll see quite a few more 69 expos once the server is back up.

chalsall 2012-02-24 23:05

Tweaks to Individual Workers' Graphs...
 
Since PrimeNet was having issues today, I wasn't able to finish the clean-up work I had started yesterday.

So, I spent a little bit of time experimenting with my suggestion of spreading large batches of returned results over the previous days on which the work was actually done. I know at least one person doesn't like the idea, but I think it's worthwhile.

For example, [URL="http://www.gpu72.com/reports/worker/829a683f5d991e17d4cca0453117d491/"]Dubslow's[/URL] and [URL="http://www.gpu72.com/reports/worker/89edb735f68ff3faa688634ac776d5f8/"]Carsten's[/URL] graphs are now actually readable -- their Y-axis is no longer unreasonably compressed.

Note that I am not actually destroying data -- this is a new field in the table which holds an "Adjusted" return date. Dubslow's and Carsten's adjustments were done by hand to test the idea and develop an algorithm. This will be automated if it's agreed this has more upside than downside.

Anyone want to be excluded from having their charts adjusted in this way?
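One possible shape for such an adjustment algorithm (purely illustrative -- the actual adjustments so far were done by hand): walk backwards from the return date, crediting roughly one day's worth of GHz Days per day.

```python
from datetime import date, timedelta

def spread_batch(batch_ghzd, returned, ghzd_per_day):
    """Spread a batch's credit backwards over the days it was plausibly
    earned. Returns (date, ghzd) pairs, newest day first. Integer
    GHz Days are used here to keep the example exact."""
    out = []
    day = returned
    remaining = batch_ghzd
    while remaining > 0:
        credit = min(remaining, ghzd_per_day)
        out.append((day, credit))
        remaining -= credit
        day -= timedelta(days=1)
    return out
```

A 250 GHz-Day batch returned on Feb 24 at a 100 GHz-Day/day pace would be booked as 100 on the 24th, 100 on the 23rd, and 50 on the 22nd.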

Dubslow 2012-02-24 23:18

I like it, but some others may want to compare the old and new side by side. (Remembering my own graph previously, I can say without direct comparison that this is a major improvement.)


(In case you're wondering about the DCTF I'm doing for all the hubbub I made, I'm just trying to get each individual work-type rank above the overall rank :razz::smile:)

chalsall 2012-02-24 23:26

1 Attachment(s)
[QUOTE=Dubslow;290765]I like it, but some others may want to compare the old and new side by side. (Remembering my own graph previously, I can say without direct comparison that this is a major improvement.)[/QUOTE]

Well, as a comparison, I'm attaching your graph without the Adjustment function....

chalsall 2012-02-24 23:26

1 Attachment(s)
And here is Carsten's....

flashjh 2012-02-25 01:48

[QUOTE=KyleAskine;290756]Now that the only assignments available are to take things from 71-72 I suspect that we will need a lot more P-1 power soon...[/QUOTE]

I'm going to move a few more cores to P-1 once I get done with my current batch of TF. It won't help much, but hopefully an extra P-1 or two a day.

Dubslow 2012-02-25 02:06

Wait, you mean shift a GPU to CUDALucas? :confused: (If a CPU is doing TF, that means mfaktc?)

flashjh 2012-02-25 02:19

[QUOTE=Dubslow;290785]Wait, you mean shift a GPU to CUDALucas? :confused: (If a CPU is doing TF, that means mfatkc?)[/QUOTE]

No, right now my 6-core CPU is running TF through a 580. It takes a core per instance, so I figured I could stop TFing on a couple cores and start another P-1 on the system.

Dubslow 2012-02-25 02:30

So you mean you had 6 instances on one 580?

James Heinrich 2012-02-25 02:38

[QUOTE=flashjh;290787]No, right now my 6-core CPU is running TF through a 580.[/QUOTE]That seems a lot. I can feed a 570 with only 2 cores of a 3930K, leaving 4 cores left for P-1... what is your 6-core CPU?

flashjh 2012-02-25 02:43

[QUOTE=James Heinrich;290791]That seems a lot. I can feed a 570 with only 2 cores of a 3930K, leaving 4 cores left for P-1... what is your 6-core CPU?[/QUOTE]

[QUOTE=Dubslow;290789]So you mean you had 6 instances on one 580?[/QUOTE]

Ah, yes, it would be a lot... Four cores and 4 instances (one core per instance). This is a 1055T, so not as efficient as Intel (and definitely not SB-E).

Right now the other two cores do P-1 and run CUDALucas on another 580; this also leaves some headroom for the system to be usable. I was running 5 instances on a 580 along with CUDALucas on the other 580 and a P-1, but that made the system so unresponsive it was irritating.

I'm actually quite happy with how everything runs right now, but if we need a shift to P-1 as we mature in G272, then I can slow down on TFing a bit and add to P-1. Either way, if we notice more TF needs to be done, I can shift back.

chalsall 2012-02-25 03:04

[QUOTE=flashjh;290792]I'm actually quite happy with how everything runs right now, but if we need a shift to P-1 as we mature in G272, then I can slow down on TFing a bit and add to P-1. Either way, if we notice more TF needs to be done, I can shift back.[/QUOTE]

I would argue GPUs should be maximally used to do TFing, rather than forgo the resource for P-1ing.

We can always release candidates without P-1 done back to PrimeNet if we find we have too many cached. But only GPUs can do the TFing we're doing.

bcp19 2012-02-25 04:56

[QUOTE=flashjh;290792]Ah, yes it would be a lot... Four cores and 4 instances (one core per instance). This is a 1055T, so not as efficient as Intel (and definitely not SB-E)

Right now the other two cores do P-1 and run CUDALucas on another 580; this also leaves some headroom for the system to be usable. I was running 5 instances on a 580 along with CUDALucas on the other 580 and a P-1, but that made the system so unresponsive it was irritatng.

I'm actually quite happy with how everything runs right now, but if we need a shift to P-1 as we mature in G272, then I can slow down on TFing a bit and add to P-1. Either way, if we notice more TF needs to be done, I can shift back.[/QUOTE]

I have to agree with chalsall, we have numerous people who have cores doing DC or LL that could help on the P-1 effort. I personally have 12 cores doing DC, 2 doing LL, 1 doing P-1, 1 doing a 332M and 11 cores running GPUs. With the advent of the 27.3 64 bit, I'll be switching a couple of cores to P-1 once I finish a few more DC's.

kladner 2012-02-25 05:06

[QUOTE=bcp19;290811]I have to agree with chalsall, we have numerous people who have cores doing DC or LL that could help on the P-1 effort. I personally have 12 cores doing DC, 2 doing LL, 1 doing P-1, 1 doing a 332M and 11 cores running GPUs. With the advent of the 27.3 64 bit, I'll be switching a couple of cores to P-1 once I finish a few more DC's.[/QUOTE]

I guess I could take a hint from this. I have 1 out of 6 cores on LL/DC now. With 2 mfaktc cores and 3 P-1, it is pretty low maintenance. The setup occasionally has to block an S2, but it also sometimes runs 2 S1's.

When I finish my current DC I will shift that worker back to P-1.

Dubslow 2012-02-25 06:49

I'd actually say that 4 P-1 workers will probably cause you to get less overall throughput, as you'd then need MaxHighMemWorkers=3, and that'll cause memory issues. You're more than 50% P-1, which is better than most people. I have 3/4 P-1 and 1/4 mfaktc, and on an i3M laptop, 1 P-1 and one LL/DC (low memory). Since I switched to 27.3, I've run into a lot more memory-bandwidth bottlenecking (mfaktc dropped 8M/s) and was actually considering switching to 2 P-1 and 1 LL/DC on my main box.

chalsall 2012-02-25 16:00

Team "GPU to 72" is now 1, 2 and 3!
 
Now that PrimeNet is back, I'm happy to report that Team "GPU to 72" is now:

[URL="http://www.mersenne.org/report_top_teams_TF/"]#1 for Trial Factoring.[/URL]
[URL="http://www.mersenne.org/report_top_teams_P-1/"]#2 for P-1 Factoring.[/URL]
[URL="http://www.mersenne.org/report_top_teams/"]#3 Overall.[/URL]

(For the last year. Not bad for four months of work.)

Thanks for everything Team Mates!!! :smile:

kladner 2012-02-25 16:17

[QUOTE=Dubslow;290828]I'd actually say that 4 P-1 workers will probably cause you to get less overall throughput, as you'll need to have MaxHighMemWorkers=3 then, and that'll cause memory issues. You're more than 50% P-1, which is better than most people. .............[/QUOTE]

Good points. I would have to cut back the memory allocation per worker to run MaxHighMemWorkers=3. Just a minor detail: the CPU is 50% P-1, 33.3% feeding mfaktc, and 16.7% alternating between LL and DC right now.

KyleAskine 2012-02-25 16:42

[CODE]Sorry KyleAskine, but you already have too many assignments.

In the last 30 days you have done on average 496.450 GHz Days of work per day.

You currently have 2275 assignments totalling 9532.391 GHz Days of work assigned, or 19 days worth based on your history. The oldest is 9 days old.[/CODE]

I thought you could check out 30 days at a time?

chalsall 2012-02-25 17:14

[QUOTE=KyleAskine;290868]I thought you could check out 30 days at a time?[/QUOTE]

Up to a maximum of 2000 assignments...

This was added at the same time as the GHzDays Saved metric was added, to prevent people from grabbing all the low TF level exponents as they become available.

As I said above, I have no problem with people doing low TFing only one or two levels. But I think that everyone should have the opportunity to do so (limited, of course, by what's available from PrimeNet).
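Reading the quota message and this reply together, the checkout cap appears to be the tighter of two limits: 30 days of work at your recent pace, and an absolute ceiling of 2000 assignments. A guess at the logic (names are hypothetical):

```python
def allowed_more(avg_ghzd_per_day, assigned_ghzd, assigned_count,
                 max_days=30, max_assignments=2000):
    """GPU to 72's checkout cap as described in this thread: up to 30
    days of work at your 30-day average pace, and never more than
    2000 outstanding assignments."""
    if assigned_count >= max_assignments:
        return False
    return assigned_ghzd < avg_ghzd_per_day * max_days
```

With KyleAskine's numbers (496.45 GHz Days/day, 9532 GHz Days assigned, 2275 assignments), he's under the 30-day limit at 19 days' worth but over the 2000-assignment ceiling, which matches the refusal message he quoted.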

KyleAskine 2012-02-25 17:18

[QUOTE=chalsall;290876]Up to a maximum of 2000 assignments...

This was added at the same time as the GHzDays Saved metric was added, to prevent people from grabbing all the low TF level exponents as they become available.

As I said above, I have no problem with people doing low TFing only one or two levels. But I think that everyone should have the opportunity to do so (limited, of course, by what's available from PrimeNet).[/QUOTE]

No worries! I would love those 69->70's sitting there, but I took all of the 70->71's yesterday, so I will deal with it!

kladner 2012-02-25 17:27

[QUOTE=KyleAskine;290877]No worries! I would love those 69->70's sitting there, but I took all of the 70->71's yesterday, so I will deal with it![/QUOTE]

Thanks for the tip! I grabbed some of those, though I'm taking them 69-72.

Dubslow 2012-02-25 21:50

Observation concerning why Spidey might have been assigned double checks before 45M:

[code]Thresholds for first-time LL assignments
Force double-checks if CPU reliability less than 0.7
and CPU confidence level is greater than or equal to 2.0[/code]
So an LL doesn't necessarily need a nonzero error code to have a forced double check.
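In other words, the quoted thresholds reduce to a single predicate (a sketch; parameter names are hypothetical):

```python
def force_double_check(cpu_reliability: float, cpu_confidence: float) -> bool:
    """PrimeNet's forced-DC rule as quoted above: a machine with low
    measured reliability, but enough history to trust that measurement,
    gets double-checks instead of first-time LL tests."""
    return cpu_reliability < 0.7 and cpu_confidence >= 2.0
```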

ckdo 2012-02-25 23:13

[QUOTE=chalsall;290876]Up to a maximum of 2000 assignments...[/QUOTE]

Bummer. 2000 assignments being only 7 days worth probably doesn't justify being exempted, I guess. :unsure:

chalsall 2012-02-26 00:30

[QUOTE=ckdo;290912]Bummer. 2000 assignments being only 7 days worth probably doesn't justify being exempted, I guess. :unsure:[/QUOTE]

On the other hand... 2000 assignments from 67 to 69, rather than only to 68, is 21 days worth.... (Hint, hint, hint... :smile:)

Again, I don't have a problem with anyone only going one bit level. But I don't think it's fair that only a few have the chance to do so....

oswald 2012-02-26 02:15

[QUOTE=chalsall;290915]Again, I don't have a problem with anyone only going one bit level. But I don't think it's fair that only a few have the chance to do so....[/QUOTE]
I can go

67 to 72
OR
70 to 72.

Which works out better for everyone?

kladner 2012-02-26 03:14

My current practice is to try to get assignments as low-factored as possible, and take them to 72. That says nothing about what others do. Mostly this has meant doing 70-72 lately, and seeing [U]very[/U] few factors found. :(

"Sometimes the magic works. Sometimes it doesn't."

EDIT: The 69-72 runs with stages disabled I have right now take about 4.5 hours. These are 56-58+M exponents. I'm running 2 at a time on a GTX 460. This seems to come out to about 10.5 completions per day.

chalsall 2012-02-26 16:26

[QUOTE=kladner;290925]My current practice is to try to get assignments as low-factored as possible, and take them to 72. That says nothing about what others do. Mostly this has meant doing 70-72 lately, and seeing [U]very[/U] few factors found. :(

"Sometimes the magic works. Sometimes it doesn't."[/QUOTE]

Thank you for going to 72, and I'm sorry to hear you perceive that you're not finding many factors. However, I did an analysis of your results vs the overall stats, and you're pretty close to what you should be seeing.

From the empirical data on the [URL="http://www.gpu72.com/reports/factoring_cost/"]Trial Factoring Cost[/URL] page, there is approximately a 1.03% chance of finding a factor going from 68 to 69 bits, 1.01% from 69 to 70, 1.00% from 70 to 71, and 0.93% from 71 to 72. Note that this is a summary of results over the entire LL range.

Based on the work you've done, you should have found approximately 24.1 factors. You've actually found 23.
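For anyone wanting to run the same sanity check on their own numbers, the arithmetic is just attempts multiplied by the per-level probability. A sketch using the percentages quoted above (the attempt counts in the example are made up):

```python
# Per-bit-level factor-found probabilities from the Trial Factoring
# Cost page, as quoted above (averages over the whole LL range),
# keyed by the target bit level.
P_FACTOR = {69: 0.0103, 70: 0.0101, 71: 0.0100, 72: 0.0093}

def expected_factors(attempts_per_level: dict) -> float:
    """Expected number of factors, given how many exponents were
    taken to each bit level."""
    return sum(n * P_FACTOR[level] for level, n in attempts_per_level.items())
```

For example, 1000 runs from 71 to 72 bits should yield about 9.3 factors; finding 8 or 11 instead would be well within the noise.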

James Heinrich 2012-02-26 16:35

[QUOTE=chalsall;290957]Based on the work you've done, you should have found approximately 24.1 factors. You've actually found 23.[/QUOTE]I'm sure that's an interesting tidbit for many people -- could you put that on the user stats page (expected vs found factors; by bit range and overall)?

Of course, if you do that there's bound to be somebody complaining that they're missing some factors because they're below average.

kladner 2012-02-26 16:53

[QUOTE=chalsall;290957]Thank you for going to 72, and I'm sorry to hear you perceive that you're not finding many factors. However, I did an analysis of your results vs the overall stats, and you're pretty close to what you should be seeing.

From the empirical data on the [URL="http://www.gpu72.com/reports/factoring_cost/"]Trial Factoring Cost[/URL] page, there is approximately a 1.03% chance of finding a factor going from 68 to 69 bits, 1.01% from 69 to 70, 1.00% from 70 to 71, and 0.93% from 71 to 72. Note that this is a summary of results over the entire LL range.

Based on the work you've done, you should have found approximately 24.1 factors. You've actually found 23.[/QUOTE]

Thanks! It's good to know the odds. I should not make a fuss over it, anyway. I have had far more fruitful periods to arrive at those averages. That should suffice. It all serves the overall project.

(It did seem that the DC range had a lot more factors lurking. But that leaves aside the fact that I ran through a lot more exponents in the ~67-69 range.)

chalsall 2012-02-26 18:15

[QUOTE=James Heinrich;290960]I'm sure that's a tidbit of interesting for many people -- could you put that on the user stats page (expected vs found factors; by bit range and overall)?[/QUOTE]

I knew I shouldn't have opened my mouth... :smile:

I would be hesitant to do that for each bit range for each user's individual attempts, because the statistical data is quite noisy. Please see a new report I just created, [URL="http://www.gpu72.com/reports/factor_percentage/"]Factor Found Percentages[/URL] to see what I mean. (And, yes, I'll do a similar report for P-1 percentages as well.)

However, I agree such statistics based on overall percentages for each bit level and meta range (DC and LL) would be interesting. Added to my To Do list... (Sigh... It seems the more work I do on the system, the longer (rather than shorter) the list becomes... :wink:)

LaurV 2012-02-27 02:50

[QUOTE=ckdo;290750]My "View Assignments" page states
You currently have 4568 Trial Factoring Assignments totaling 4110.091 GHz Days of work[...]
at the top vs.
4297 Assignments.
at the bottom. The latter is correct; I'm not sure about the GHzd total. :smile:[/QUOTE]
For me they NEVER matched, but they were ALWAYS correct. Both of them. Remark: the first number counts only the TF assignments, but the last is the TOTAL. It is normal for them to differ (however, not normal for the first to be bigger than the second). The first one is used to estimate the time needed to work them out. I think you should not change them.

ckdo 2012-02-27 06:09

I had (and have) 38 non-TF assignments. The upper count simply wasn't updating any more.

Bdot 2012-02-27 08:41

[QUOTE=Bdot;290529]
I'm currently finishing
26073517 [83.45%] "Double-checking to "ANONYMOUS" on 2012-02-22"
26087389 [79.11%] "Double-checking to "Kyle Askine" on 2012-02-22"
25834519 [94.67%] "Double-checking to "Frederick Menninger" on 2012-02-22"
26078791 [90.99%] "Double-checking to "ANONYMOUS" on 2012-02-22"

[/QUOTE]


26087389 still has ~ 7hrs to go, the other 3 are successfully completed now. I wrote a PM to bcp19 who got the 2 "ANONYMOUS" ones. I hope Frederick's machine picks up that the DC is complete ...

Kyle, I'll let you know when yours is submitted so you can remove it from your machine. Thanks for your help.

diamonddave 2012-02-27 16:07

1 Attachment(s)
[QUOTE=chalsall;289787]Just so everyone knows, Dubslow has once again beaten me (:smile:) to announcing that I've added an overall GHz Days per Day of Work Saved graph, and a Linear Regression Trend line to many of the graphs, on the [URL="http://www.gpu72.com/reports/overall/graph/month/"]Overall System Progress Graphs[/URL] page.

I've been thinking about how to do this for the TF Depth graphs. The problem I'm having is what would I regress?

The aggregate of each completion type?

Those that have been TFed to the release level?

I need to run some experiments, and see what visually communicates the information best.[/QUOTE]

Re-posting from 4 pages back because I think it went under the radar...

That graph looks highly suspicious... Just roughly adding the DC saved in the graph would give about [B]45,000[/B] GD in the [B]last 30 days[/B], but if we look at the overall report it shows we have only saved about 16,925 GD (Now 22,278) for DC [B]since the project started[/B].

chalsall 2012-02-27 16:28

[QUOTE=diamonddave;291062]Re-posting from 4 page back because, I think it went under the radar...[/QUOTE]

It didn't go under the radar -- Dubslow speculated, and I confirmed he was correct.

But to be explicit... The graph shows the amount of DC, LL and P-1 saved (although the latter is so small in comparison to the other two that it can barely be seen).

The chart is showing the total amount of DC, LL and P-1 (if it was actually needed) saved by each of the factoring work types -- DC TF, LL TF and P-1. As in, a successful DC TF will save a DC, while a successful LL TF will save an LL, a DC and possibly a P-1. A successful P-1 (currently) saves an LL and a DC; in the future it might only save a DC if we ever find a DC candidate which has not yet had P-1 work done.

I have added a note to the page to hopefully make it clearer.

James Heinrich 2012-02-27 16:35

[QUOTE=chalsall;291065]if we ever find a DC candidate which has not yet had P-1 work done.[/QUOTE]There's a huge pile of them. Although, if you're looking <40M you're unlikely to find any since I (and a few helpers) cleared them all out last year. There's a bunch starting at around [url=http://mersenne-aries.sili.net/p1small.php]44M[/url].

diamonddave 2012-02-27 16:40

[QUOTE=chalsall;291065]It didn't go under the radar -- Dubslow speculated, and I confirmed he was correct.

But to be explicit... The graph shows the amount of DC, LL and P-1 saved (although the latter is so small in comparison to the other two that it can barely be seen).

The chart is showing the total amount of DC, LL and P-1 (if it was actually needed) saved by each of the factoring work types -- DC TF, LL TF and P-1. As in, a successful DC TF will save a DC, while a successful LL TF will save an LL, a DC and possibly a P-1. A successful P-1 (currently) saves an LL and a DC; in the future it might only save a DC if we ever find a DC candidate which has not yet had P-1 work done.

I have added a note to the page to hopefully make it clearer.[/QUOTE]

So in the last week we saved about 10,000 GHz-Days of DC LL? Yet since the project started we saved 23,000 GHz-Days of DC LL?

I really don't understand how those numbers make any sense.

1 successful DC TF saves about 43.75 GHz-Days (let's be very generous and give a 35M credit here)

So if I add all those little blue bars in the graph I included in the attachment, we should have found at LEAST 1000 DC factors in the last month to get 44k GHz-Days of work saved.

Yet since the project started we only found 727 DC factors...

James Heinrich 2012-02-27 16:54

[QUOTE=diamonddave;291068]1 successful DC TF saves about 43.75 GHz-Days (let's be very generous and give a 35M credit here)... Yet since the project started we only found 727 DC factors...[/QUOTE]Remember that a DC-TF factor saves a DC-LL. But an LL-TF factor also saves a DC-LL. So does a P-1 factor. DC work saved isn't based on DC-TF factors, but on [i]all[/i] factors found.

chalsall 2012-02-27 17:01

[QUOTE=diamonddave;291068]So in the last week we saved about 10,000 GHz-Days of DC LL? Yet since the project started we saved 23,000 GHz-Days of DC LL?

I really don't understand how those numbers make any sense.[/QUOTE]

You're comparing two different things.

Since the project started DCTFing saved 22,178 GDs of DC work.

But since the project started, DCTFing, LLTFing and P-1'ing saved 161,595.11 GDs of DC work.

Let me give you an example raw query from the database:

[CODE]mysql> select date(Completed) as Date,count(*) as Facts,
sum(GHzDaysLL),sum(GHzDaysDC),sum(GHzDaysP1) from Facts
where Completed>"2012-02-21" group by date(Completed) order by Date(Completed);
+------------+-------+-----------------+-----------------+----------------+
| Date | Facts | sum(GHzDaysLL) | sum(GHzDaysDC) | sum(GHzDaysP1) |
+------------+-------+-----------------+-----------------+----------------+
| 2012-02-21 | 26 | 736.2905807495 | 1357.4044265747 | 13.1904993057 |
| 2012-02-22 | 38 | 1371.8684616089 | 2208.5607604980 | 13.2259583473 |
| 2012-02-23 | 23 | 1070.4165115356 | 1514.3111228943 | 13.2968802452 |
| 2012-02-24 | 19 | 736.9928588867 | 1122.5219669342 | 8.7936668396 |
| 2012-02-25 | 32 | 1221.2140426636 | 1898.4470863342 | 11.4979257584 |
| 2012-02-26 | 27 | 1369.0946960449 | 1822.6814956665 | 13.2426235676 |
| 2012-02-27 | 29 | 2142.2888793945 | 2448.9804286957 | 18.7429766655 |
+------------+-------+-----------------+-----------------+----------------+
7 rows in set (0.00 sec)[/CODE]

GHzDaysLL is the amount of LL saved, etc. GHzDaysLL will never be more than GHzDaysDC.

If you (or anyone) would like this data exposed as a CSV, please let me know. I'm happy to share data, and welcome sanity checking and integrity auditing.

diamonddave 2012-02-27 17:14

[QUOTE=chalsall;291070]You're comparing two different things.

Since the project started DCTFing saved 22,178 GDs of DC work.

But since the project started, DCTFing, LLTFing and P-1'ing saved 161,595.11 GDs of DC work.

Let me give you an example raw query from the database:

[CODE]mysql> select date(Completed) as Date,count(*) as Facts,
sum(GHzDaysLL),sum(GHzDaysDC),sum(GHzDaysP1) from Facts
where Completed>"2012-02-21" group by date(Completed) order by Date(Completed);
+------------+-------+-----------------+-----------------+----------------+
| Date | Facts | sum(GHzDaysLL) | sum(GHzDaysDC) | sum(GHzDaysP1) |
+------------+-------+-----------------+-----------------+----------------+
| 2012-02-21 | 26 | 736.2905807495 | 1357.4044265747 | 13.1904993057 |
| 2012-02-22 | 38 | 1371.8684616089 | 2208.5607604980 | 13.2259583473 |
| 2012-02-23 | 23 | 1070.4165115356 | 1514.3111228943 | 13.2968802452 |
| 2012-02-24 | 19 | 736.9928588867 | 1122.5219669342 | 8.7936668396 |
| 2012-02-25 | 32 | 1221.2140426636 | 1898.4470863342 | 11.4979257584 |
| 2012-02-26 | 27 | 1369.0946960449 | 1822.6814956665 | 13.2426235676 |
| 2012-02-27 | 29 | 2142.2888793945 | 2448.9804286957 | 18.7429766655 |
+------------+-------+-----------------+-----------------+----------------+
7 rows in set (0.00 sec)[/CODE]

GHzDaysLL is the amount of LL saved, etc. GHzDaysLL will never be more than GHzDaysDC.

If you (or anyone) would like this data exposed as a CSV, please let me know. I'm happy to share data, and welcome sanity checking and integrity auditing.[/QUOTE]

Yeah, took me a while, but I finally figured out the graph while walking to go grab me some lunch...

I really thought it was a breakdown of the summary exposed at the top of the page for each work type over the last 30 days...

Sorry,

chalsall 2012-02-27 17:20

[QUOTE=diamonddave;291075]Sorry,[/QUOTE]

No problem. You've helped catch mistakes I've made in the past. :smile:

And since this GDs saved metric is new, I should probably add another table to the Overall Stats page which shows what the graph does.

KyleAskine 2012-02-27 17:40

[QUOTE=Bdot;291039]26087389 still has ~ 7hrs to go, the other 3 are successfully completed now. I wrote a PM to bcp19 who got the 2 "ANONYMOUS" ones. I hope Frederick's machine picks up that the DC is complete ...

Kyle, I'll let you know when your one is submitted so you can remove it from your machine. Thanks for your help.[/QUOTE]

Crap... I removed it from my worktodo.txt thinking that as long as I didn't manually remove the assignment from primenet I would keep it.

I was wrong: [URL="http://mersenne.org/report_exponent/?exp_lo=26087389&exp_hi=26087389&B1=Get+status"]http://mersenne.org/report_exponent/?exp_lo=26087389&exp_hi=26087389&B1=Get+status[/URL]

I'm really sorry to you, but more to the person who got it.

chalsall 2012-02-28 01:07

[QUOTE=KyleAskine;291081]Crap... I removed it from my worktodo.txt thinking that as long as I didn't manually remove the assignment from primenet I would keep it.[/QUOTE]

The problem was you Unreserved it on your G72 Assignments page, and Spidy then unreserved it from PrimeNet but wasn't able to recapture it in time.

So everyone knows, I finished with the manual clean-up of this, and sent PMs to those last three people affected. Most were simply given new keys, but unfortunately Bdot actually lost a few.

So this doesn't happen again, I have modified Spidy and AnonSpidy such that if it recaptures a candidate which is assigned as LL or DC work to one of our workers, it will be put into a "quarantine" state. I'll be notified, and it will not be assigned to another worker nor released back to PrimeNet.

But in addition to this, for anyone who's going to work on such assignments with either a GPU-based system, or a CPU system which isn't going to talk to PrimeNet until the work is finished, could I ask that you still use Prime95/mprime to claim the assignments from PrimeNet when you get them?

In this way the assignment will be officially transferred to you and appear in your PrimeNet assignments page. Then you can extend it on PrimeNet via the Manual Testing -> Extensions function if needed.

And please note that PrimeNet appears to expire any and all candidates after approximately 60 days of no communications.

KyleAskine 2012-02-28 02:19

[QUOTE=chalsall;291126]The problem was you Unreserved it on your G72 Assignments page, and Spidy then unreserved it from PrimeNet but wasn't able to recapture it in time.
[/QUOTE]

Sorry about that :/ - I thought once it was mine in PrimeNet only I could unreserve it.

Dubslow 2012-02-28 04:44

Are Double Checks still on hold? I set the range to 0-28M, and could not get any. It says there are 46 available in the 26M range. I just now tried it with default settings; it still says there are none available matching the criteria.

Unrelated: Can we have the CPUs and Num per CPU options for getting P-1 work? That would make the copy and pasting easier.

chalsall 2012-02-28 15:15

[QUOTE=Dubslow;291144]Are Double Checks still on hold? I set the range to 0-28M, and could not get any. It says there are 46 available in the 26M range. I just now tried it with default settings, still says no availables matching criteria.[/QUOTE]

No; nothing's on hold. And are you talking DCs, or DCTFs?

I just did a test DCTF allocation, setting the maximum to 28000000 and leaving the option "What Makes Sense" (Pledge to 70), and I was given a 26M assignment.

And so everyone knows, since we're so far ahead of the DC wavefront now, I'm keeping a few candidates below 29.69M for those who are willing to take them to 70. Adjust the maximum to be below 29.69, or choose the new "Lower Exponents to 70" option.

[QUOTE=Dubslow;291144]Unrelated: Can we have the CPUs and Num per CPU options for getting P-1 work? That would make the copy and pasting easier.[/QUOTE]

Sure. Added to the To Do list.

KyleAskine 2012-02-28 16:55

[QUOTE=chalsall;291164]
And so everyone knows, since we're so far ahead of the DC wavefront now, I'm keeping a few candidates below 29.69M for those who are willing to take them to 70. Adjust the maximum to be below 29.69, or choose the new "Lower Exponents to 70" option.
[/QUOTE]

Once we reach 45M there will be no DC TF queue anymore, I guess!

Dubslow 2012-02-28 18:08

[QUOTE=chalsall;291164]No; nothing's on hold. And are you talking DCs, or DCTFs?
[/QUOTE]

The former, actual LL tests. I just tried again, and am still unable to acquire any, default settings. The available report says there are 50 at 26M, 39 at 69 bits and 11 at 70 bits.

chalsall 2012-02-28 18:16

[QUOTE=Dubslow;291175]The former, actual LL tests. I just tried again, and am still unable to acquire any, default settings.[/QUOTE]

Stupid programmer error.... My new quarantine conditional wasn't quite sane...

Fixed.

chalsall 2012-02-28 18:17

[QUOTE=KyleAskine;291169]Once we reach 45M there will be no DC TF queue anymore, I guess![/QUOTE]

That's years off, but it will simply mean the LL and DC "windows" will have shifted, and the DC work will be going to higher bit levels.

Xyzzy 2012-02-28 19:00

[QUOTE]But in addition to this, for anyone who's going to work on such assignments with either a GPU based system, or a CPU system which isn't going to talk to PrimeNet until the work is finished, could I ask that you still use Prime95/mprime to claim the assignment from PrimeNet when you get them?[/QUOTE]Does this apply to LLTF? Our boxes have no network access.

:sad:

chalsall 2012-02-28 19:09

[QUOTE=Xyzzy;291183]Does this apply to LLTF? Our boxes have no network access.[/QUOTE]

No -- only "real" LL and DC work.

All the candidates which are having LL-TF, DC-TF and P-1 work done are "owned" by Spidy, which checks in with PrimeNet at least once a month for each candidate to ensure they are not expired until all such work is completed.

Dubslow 2012-02-29 00:42

I can now see the SQL query when I get DC work. It's just an aesthetic thing, but it does show the "Status=" part of the query as well, and somehow, I don't think you want that shown :wink::smile:
[QUOTE=chalsall;291164]
And so everyone knows, since we're so far ahead of the DC wavefront now, I'm keeping a few candidates below 29.69M for those who are willing to take them to 70. Adjust the maximum to be below 29.69, or choose the new "Lower Exponents to 70" option.[/QUOTE]
Note that the DC wavefront has now passed 29M, into the ~29.2M range, or at least, that's what PrimeNet gave me before I got some from here. I personally would not hold back those below the wave for extra factoring.

chalsall 2012-02-29 01:08

[QUOTE=Dubslow;291214]I can now see the SQL query when I get DC work. It's just an aesthetic thing, but it does show the "Status=" part of the query as well, and somehow, I don't think you want that shown :wink::smile:[/QUOTE]

Whoops... Not my best day... :cry: Although there's nothing "secret" about the Status field. It's 4 if owned by Spidy for LL/DC assignment.

[QUOTE=Dubslow;291214]Note that the DC wavefront has now passed 29M, into the ~29.2M range, or at least, that's what PrimeNet gave me before I got some from here. I personally would not hold back those below the wave for extra factoring.[/QUOTE]

Yeah, I know. I'm holding anything above 29.5M. (If you saw lower candidates available earlier today, it was because I was doing maintenance on another sub-system and had turned off the returning script.)

Dubslow 2012-02-29 01:10

[QUOTE=chalsall;291217]Whoops... Not my best day... :cry: Although there's nothing "secret" about the Status field. It's 4 if owned by Spidy for LL/DC assignment.[/quote]Oh. I thought that was the trust level. And considering all the work that has gone into this, you can be way more than excused for a few minor mishaps that don't cause damage. :smile:

[QUOTE=chalsall;291217]

Yeah, I know. I'm holding anything above 29.5M. (If you saw lower candidates available earlier today, it was because I was doing maintenance on another sub-system and had turned off the returning script.)[/QUOTE]
Not only today, but also most of yesterday (Mon) as well, that's how I came to that conclusion. Oh well, all is well in this corner of the internet.

Dubslow 2012-02-29 02:02

Fun Shtuff
 
1 Attachment(s)
Hit the sweet spot:
[code][Worker #2 Feb 28 19:53:42] Optimal P-1 factoring of M54628073 using up to 10000MB of memory.
[Worker #2 Feb 28 19:53:42] Assuming no factors below 2^73 and 2 primality tests saved if a factor is found.
[Worker #2 Feb 28 19:53:42] Optimal bounds are [U]B1=500000, B2=10000000[/U]
[Worker #2 Feb 28 19:53:42] Chance of finding a factor is an estimated 3.57%[/code]
Don't you just love round numbers? :smile:

Edit: And in other news, check this out: [url]http://gpu72.com/reports/worker/829a683f5d991e17d4cca0453117d491/[/url]
My ranks in all individual work types are higher than my overall rank :smile:
@kracker: I'm sorry I rather one-upped you (three upped you?), but I think it was for a decent cause :razz:
Seeing as this is likely to change in the next few days, I've attached a screenshot for reference, but of course it's much cooler at the proper link.

flashjh 2012-02-29 05:17

Hey chalsall,

On the available TF Assignments page:

[CODE]
Note: The yellow cells are the (approximate) factoring depth we release candidates back to PrimeNet at.
For LLTF we release at 73 above 58.52M, and for DCTF we release at 70 above 29.69M
[/CODE]

If possible, and when you get a chance, can you break the 58M and 29M blocks into two sections each so we can see the actual available exponents <58.52M & <29.69M versus >=58.52M & >=29.69M? It would make it easier to see which ones are available for higher TF and which ones just need P-1.

Thanks.

ckdo 2012-02-29 08:49

Try [URL="http://www.gpu72.com/reports/available/p-1/"]this[/URL] or [URL="http://www.gpu72.com/reports/available/nop-1/"]that[/URL].

chalsall 2012-02-29 23:05

Now Xyzzy can't break the graphs...
 
Just a quick update...

During a break today I codified the Assignment Completion Adjustment algorithm discussed earlier, and ran it against all the Workers' data. I'm quite pleased with the results.

The algorithm simply looks at each person's data grouped by days, and when there are several days of no results followed by a large batch, it spreads the results over the preceding empty days. Importantly, no adjustment is made such that the Adjusted date is earlier than the Assigned date.

This has resulted in improved graphs for just about all of our top twenty or so users. And you can actually now see meaningful data in [URL="http://www.gpu72.com/reports/worker/7e6a2e592a37a719fac4f765eb0f6ca8/"]Xyzzy's[/URL] graphs. :smile:

Equally important in my mind, it has also resulted in a smoothing of the Overall Graphs -- no longer are the graphs compressed because some large producer hasn't checked in for a week or so. This is particularly evident in the new [URL="http://www.gpu72.com/reports/overall/graph/quarter/"]Quarterly Overall System Progress Graphs[/URL].

And, to be clear, this is not destroying any data; the graphs are working on a conditional -- if the Adjusted field is > 0, that date is used. Otherwise the Completed field is. This means if people insist on it the original noisy graphs could be made available as well.
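The spreading rule described above might look something like this sketch (hypothetical, not the actual GPU72 implementation), assuming per-day result counts keyed by date and a known assignment date:

```python
from datetime import date

def adjust_completions(daily_counts, assigned):
    """Spread each large batch of results back over the empty days that
    precede it, never earlier than the assignment date. Totals are
    preserved, so no data is destroyed -- only redistributed.

    daily_counts: dict {date: results reported that day}
    assigned:     the date the work was assigned
    """
    adjusted = {d: 0 for d in daily_counts}
    gap = []  # run of consecutive zero-result days
    for d in sorted(daily_counts):
        n = daily_counts[d]
        if n == 0:
            gap.append(d)
            continue
        # Spread this batch over the gap plus the reporting day itself,
        # clipped so nothing lands before the assignment date.
        span = [g for g in gap if g >= assigned] + [d]
        per_day, extra = divmod(n, len(span))
        for i, g in enumerate(span):
            adjusted[g] += per_day + (1 if i < extra else 0)
        gap = []
    return adjusted

# Three quiet days followed by a batch of 12 become 3 results per day:
counts = {date(2012, 2, d): 0 for d in (20, 21, 22)}
counts[date(2012, 2, 23)] = 12
print(adjust_completions(counts, date(2012, 2, 20)))
```

The key property is that each day's total is only moved, never changed, so the original noisy series can always be reconstructed if needed.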

James Heinrich 2012-03-01 00:12

[QUOTE=chalsall;291326]This has resulted in improved graphs for just about all of our top twenty or so users.[/QUOTE]Very nice, I like it, thanks. :smile:

Dubslow 2012-03-01 05:34

Even more Brent-Suyama 0_o

[url]http://mersenne-aries.sili.net/exponent.php?factordetails=3546977485247966555997217[/url]

:mellow:


Edit: For every other expo, E=12, but P95 doesn't report E when a factor is found.

flashjh 2012-03-01 05:42

[QUOTE=Dubslow;291359]Even more Brent-Suyama 0_o

[URL]http://mersenne-aries.sili.net/exponent.php?factordetails=3546977485247966555997217[/URL]

:mellow:[/QUOTE]

So this post got me to thinking about which factors I've found with Brent-Suyama 0_o

So I found this one: [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=52733041"]M52733041[/URL] (Edit: not Brent-Suyama 0_o :redface:)

But, it's only 68.831 bits... why wasn't this found in trial factoring? Is it because James factored to 62 and bdot started at 69?

[QUOTE]
Edit: For every other expo, E=12, but P95 doesn't report E when a factor is found.[/QUOTE]

That's probably because it no longer matters how well it was TFd or P-1d since a factor was found. I was asking the other day about getting 'more' information from PrimeNet on exponents with factors found, but apparently it gets removed from the database to save room.

Dubslow 2012-03-01 05:45

That k is within B2 bounds...? Although that it wasn't TFd is a bit funky.

James Heinrich 2012-03-01 12:51

[QUOTE=flashjh;291361][URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=52733041"]M52733041[/URL] But, it's only 68.831 bits... why wasn't this found in trial factoring? Is it because James factored to 62 and bdot started at 69?
...apparently it gets removed from the database to save room.[/QUOTE]Data from me and Bdot are there because of manually-submitted results.txt, otherwise I wouldn't know who did what when. PrimeNet keeps a record of all LL tests performed (1st time, DC, TC+, etc) but just a record of how far an exponent claims to have been factored, but not who did it when. We just have to trust PrimeNet's record that it was TF'd unsuccessfully by a series of people from zero to wherever it says it is now. That's accurate far more than 99% of the time, but inevitably a few failures slip through. Whether that's from TF client error, network error, PrimeNet error or pure malice can't easily be determined.

I'm sure this problem is spread in small quantities throughout the data (random errors of various kinds). But I did find a [url=http://www.mersenneforum.org/showpost.php?p=283553&postcount=1022]large cluster of missed TF factors[/url] when I redid P-1 on many small exponents in the M6-M9 range. Short of re-TF'ing everything, there's not much that can be done to validate a no-factor TF, so just ignore it and assume it's valid.

chalsall 2012-03-01 13:21

[QUOTE=James Heinrich;291386]PrimeNet keeps a record of all LL tests performed (1st time, DC, TC+, etc) but just a record of how far an exponent claims to have been factored, [U]but not who did it when[/U].[/QUOTE]

That's not entirely true.

If you do a [URL="http://www.mersenne.org/report_exponent/?exp_lo=54103603&exp_hi=&B1=Get+status"]Exponent Status[/URL] query on PrimeNet, the report will show you who did what factoring when, so long as it was after the PrimeNet V5 migration.

All historical knowledge seems to be thrown away (or at least becomes inaccessible) once a factor is found.

nucleon 2012-03-01 13:57

I'll redo it on my farm: M52733041,68,69.

That should turn up the factor right?

-- Craig

KyleAskine 2012-03-01 14:14

[QUOTE=nucleon;291391]I'll redo it on my farm: M52733041,68,69.

That should turn up the factor right?

-- Craig[/QUOTE]

It should.

James Heinrich 2012-03-01 16:14

[QUOTE=chalsall;291388]That's not entirely true.[/QUOTE]I wasn't entirely sure, but of course PrimeNet was broken at the time so I spoke from memory. Which, apparently, isn't all that good. :smile:

Dubslow 2012-03-01 21:22

We suddenly have like 500 more expos below 50M... why would we suddenly get such a large group? Before they were done being collected, I grabbed some of them, and checked on PrimeNet, and those 10 or so did seem to be legitimate... it's just rather suspicious.

James Heinrich 2012-03-01 21:26

[QUOTE=Dubslow;291449]why would we suddenly get such a large group?[/QUOTE]Because PrimeNet has been down and Spidy has been sleeping?

chalsall 2012-03-01 21:32

Spidy learnt a new trick today...
 
As a side effect of other work, "Spidy" learnt a new reservation technique today. Without going into details, it basically makes PrimeNet follow its own expiry rules.

After several hundred candidates were reserved at below 70 bits, someone came in and reserved a few hundred, pledging to take them to 72 bits. Cool, thought I.

Then a little bit later someone else came in and reserved a few hundred, but only pledged to take them up one bit level. Fsck, thought I...

While Spidy continues to work, I have modified the LL-TF reservation page such that any candidates TFed to 69 bits and below are now considered "preferred". Only those who pledge to take them to 72 bit or above will have access.



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.