[QUOTE=garo;382725]Thanks for looking at this. I tried to get 20 assignments and it took ~2 minutes. I just repeated that right now: same deal. Yes, I guess a big part of the problem is repeating the SQL query for each assignment; it should really do only a single query for each request. Note that if I don't specify a range it is more or less instant (and I get 64M assignments, or Cat 3). So I don't think the problem has to do with the checkout or PHP; it's much more likely inefficient SQL.[/QUOTE]
It's an interesting thing for sure. If the range min/max are left blank, it actually does the same query but with default values of 0 and 1 billion for the min/max, and for whatever reason that executes a lot faster (sub-second).

Trying to get a first-time test actually pulls 3 different exponents on each iteration: the lowest LL candidate, the lowest P-1 candidate, and the lowest TF candidate, and then picks the one with the lowest "difficulty" rating as the final choice. The difficulty is based on exponent size, whether an LL candidate has already been trial factored, etc. So it does this each time, grabbing 3 types of results and then picking one of them.

By process of elimination (running each of the 3 sub-queries on its own), I saw that the one that picks the smallest TF is the one that slows things down. The other two run splendidly fast, but that third one takes a long time when there are zero matching records, as is the case when the requested range is low, below where TF work has already been done. Run that 3rd part with basically no min/max (the default 0 to 1 billion) and it finds the smallest one lickety-split. It probably has to do with the way the table is organized or whether there's a good index at work: if it has to scan the entire table before realizing there aren't any matching results, yeah, that would take a while, whereas without any range it finds an entry before scanning the whole table.

An interesting little puzzle there. I presume that trying to get "first time work" isn't exclusive to LL tests on purpose. In my mind, when I saw "smallest available first time tests" I just assumed first time *LL* tests.

Anyway, in my fictional and artificial test, running the SQL queries by hand, instead of picking the "top 1" result from each sub-query I could take, for example, the top 12 of each type, as if I were going to try and reserve 12 exponents at once. Still no TF results in that low range, but it easily found 12 P-1 and 12 LL tests, so 24 altogether, still sorted by difficulty. *In theory*, taking the top 12 of those *should* give you the same result as going through this 12 separate times, picking the top 1 of each type and then the top 1 of those. Plus, in whatever SQL-magic way, I can get those top 12 results faster (2 seconds) than I can get even one at a time (3 seconds). Someone at Microsoft would have to explain that to me... that's like arriving at your destination before you've left, or something. :smile:

It could probably be made even quicker with a few other minor tweaks, or by doing it as one single query rather than 3 sub-queries unioned together, but in the back of my mind I think George will post a reply and tell me why it does it the way it does. :smile:
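The top-N-per-type trick described above can be mocked up outside MSSQL. Below is a minimal sketch using SQLite with an invented toy table (the real PrimeNet schema, column names, and difficulty values are not public): take the top N of each work type in sub-queries, union them, then keep the overall top N by difficulty.

```python
import sqlite3

# Toy stand-in for the real assignment table; schema and rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work (exponent INTEGER, work_type TEXT, difficulty INTEGER)")
conn.executemany(
    "INSERT INTO work VALUES (?, ?, ?)",
    [
        (60000001, "TF", 60003),
        (60500000, "P-1", 60001),
        (60999999, "LL", 60000),
        (61000001, "LL", 61000),
    ],
)

# Instead of three "TOP 1" sub-queries per assignment, grab the top N of
# each type in one pass, then keep the overall best N by difficulty.
N = 2
query = """
SELECT * FROM (SELECT * FROM work WHERE work_type = 'LL'  ORDER BY difficulty LIMIT ?)
UNION ALL
SELECT * FROM (SELECT * FROM work WHERE work_type = 'P-1' ORDER BY difficulty LIMIT ?)
UNION ALL
SELECT * FROM (SELECT * FROM work WHERE work_type = 'TF'  ORDER BY difficulty LIMIT ?)
ORDER BY difficulty LIMIT ?
"""
best = conn.execute(query, (N, N, N, N)).fetchall()
print(best)
```

With the toy data this returns the LL and P-1 candidates with the two lowest difficulties, matching the "pick by lowest difficulty" behavior the post describes.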
[QUOTE=Madpoo;382728]It's an interesting thing for sure. If the range min/max are left blank, it actually does the same query but with default values of 0 and 1 billion for the min/max. And for whatever reason, that actually does execute a lot faster (sub-second).
It could probably be made even quicker with a few other minor tweaks or doing it as one single query rather than 3 sub-queries unioned together but in the back of my mind I think George will post a reply and tell me why it does it the way it does. :smile:[/QUOTE] That query was a bear to optimize in SQL 2005. If you save the query in c:\code\badquery.sql I'll take a look at it. I do this by using "Display Estimated Execution Plan" to determine whether MSSQL is using the best possible index.

The reason 3 queries are unioned together is that we must find the smallest (or close to it) exponent available. For an LL request, we can hand out an exponent that is marked available for TF, P-1, or LL. Since we prefer to hand out an LL assignment if we can, the difficulty column is a manufactured column that helps us do this. Difficulty groups all exponents in the same million range together, then adds a value so exponents still needing P-1 or TF sort after ones that are ready for LL. For example, if M60999999 is ready for LL it has a difficulty of 60000. If M60000001 needs 3 more levels of TF then it has a difficulty of 60003. Thus, requesting an LL test will return 60999999 rather than the smaller 60000001, because of its smaller difficulty value. Such shenanigans are mostly obsolete now, as GPU72 has pretty much assured that all exponents have been TFed.
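The difficulty scheme George describes can be sketched in a few lines. This is a hypothetical reconstruction, assuming the base is the exponent's million range times 1000 and the penalty is one point per outstanding TF level (the P-1 weight is a guess; the server's actual values aren't given). The two worked numbers match the examples in the post.

```python
def difficulty(exponent, tf_levels_needed=0, needs_p1=False):
    """Hypothetical reconstruction of PrimeNet's 'difficulty' column.

    Exponents in the same million range share a base (M60,999,999 -> 60000),
    and outstanding factoring work adds a small penalty so LL-ready
    exponents sort first. Penalty weights are assumptions, not the
    server's actual values.
    """
    base = (exponent // 1_000_000) * 1000
    penalty = tf_levels_needed + (1 if needs_p1 else 0)
    return base + penalty

print(difficulty(60999999))                      # 60000: ready for LL
print(difficulty(60000001, tf_levels_needed=3))  # 60003: 3 TF levels left
```

Sorting ascending by this value hands out the LL-ready M60999999 before the smaller M60000001, exactly as described.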
Not updating since 0400
Summary page
Recent results and Recent cleared have not been updated since 0400 UTC. LL results and Known factors are updating.
Manual Result Check in Error
Found 2 lines to process.
processing: TF no-factor for M45936097 (272-273)
Error code: 40, error text: TF result for M45936097 was not needed
Done processing:
Notice: Undefined variable: expected_error_text_list in C:\inetpub\www\manual_result\default.php on line 1041
Warning: array_diff(): Argument #2 is not an array in C:\inetpub\www\manual_result\default.php on line 1041
* Parsed 1 lines.
* Found 0 datestamps.

GHz-days:
Qty | Work | Submitted | Accepted | Average
1 | Trial Factoring: no factor | 20.823 | - | 20.823
1 | all | - | - | 20.823

Status codes:
Code | Meaning | Count
40 | invalid assignment key | 1

Lines by error code:
Error Text | Lines with this error
TF result for <exponent> was not needed |

Did not understand 0 lines.
Recognized, but ignored 0/1 of the remaining lines.
Skipped 0 lines already in the database.
Accepted 1 lines.
[QUOTE=Gordon;382744]Notice: Undefined variable: expected_error_text_list[/QUOTE]Thanks, fixed.
[QUOTE=flagrantflowers;382737]Summary page
Recent results and Recent cleared have not been updated since 0400 UTC. LL results and Known factors are updating.[/QUOTE] Gah! I thought I fixed that. I did a little tweak to the hourly task that generates those stats so it outputs them in UTF-8 (accented characters in usernames, for example, were showing up funny). That meant an extra step in the process, and I had a little difficulty getting the syntax just right. I thought I got it going last night before calling it good, but I probably missed one last step. I'm on it now, thanks for the reminder. I meant to check it today but, you know... real jobs and all that. :smile: Didn't have the time earlier.

EDIT: And... fixed. During my testing I actually had it skip the SQL steps and just do the ANSI->UTF-8 conversion... guess what I forgot to do? Have it create fresh reports again after I was done with the testing.
[QUOTE=Prime95;382736]That query was a bear to optimize in SQL2005. If you save the query in c:\code\badquery.sql I'll take a look at it. I do this by using "Display Estimated Execution Plan" to determine if MSSQL is using the best possible index.
The reason 3 queries are unioned together is that we must find the smallest (or close to it) exponent available. For an LL request, we can hand out an exponent that is marked available for TF, P-1, or LL. Since we prefer to hand out an LL assignment if we can, the difficulty column is a manufactured column that helps us do this. Difficulty groups all exponents in the same million range together, then adds a value so exponents still needing P-1 or TF sort after ones that are ready for LL. For example, if M60999999 is ready for LL it has a difficulty of 60000. If M60000001 needs 3 more levels of TF then it has a difficulty of 60003. Thus, requesting an LL test will return 60999999 rather than the smaller 60000001, because of its smaller difficulty value. Such shenanigans are mostly obsolete now, as GPU72 has pretty much assured that all exponents have been TFed.[/QUOTE] Hmm... I think I see. I'll email you the SQL query I was using to test (basically pulled from the PHP, with static values in place of some variables), and I'll also toss out a few proposals for optimizing it.

In the meantime, if you're having performance issues like that, I'd recommend leaving the min/max values blank; it'll pick the lowest possible values. If you actually want some higher ones, enter that for the minimum value but put in 1 billion for the max. That will keep it running nice and fast.
[QUOTE=Madpoo;382768]I'll email you the SQL query I was using to test (basically pulled from the PHP with static values in place of some variables). [/QUOTE]
Nevermind. I think I've got a fix in place. Give it a try, garo. BTW, MSSQL was using the proper index. It was my fault for not putting enough information in the where clause to make this particular case super fast. |
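The "not enough information in the where clause" point can be illustrated with any engine's plan output. Here's a small sketch using SQLite's EXPLAIN QUERY PLAN on an invented table (the actual PrimeNet table and George's actual fix are not shown in the thread): with explicit exponent bounds in the WHERE clause, the planner can seek into a compound (work_type, exponent) index and stop early, rather than walking rows before concluding a range is empty.

```python
import sqlite3

# Invented table and index names, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work (exponent INTEGER, work_type TEXT)")
conn.execute("CREATE INDEX idx_type_exp ON work (work_type, exponent)")

# With the range bounds present in the WHERE clause, the planner can
# seek directly to the (work_type, exponent) slice of the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT MIN(exponent) FROM work "
    "WHERE work_type = 'TF' AND exponent BETWEEN 1000000 AND 2000000"
).fetchall()
for row in plan:
    print(row[-1])  # plan detail line, e.g. a SEARCH using idx_type_exp
```

MSSQL's "Display Estimated Execution Plan" (mentioned earlier in the thread) serves the same role there: confirming the query actually seeks on an index instead of scanning.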
The graph of the PrimeNet activity has become remarkably flat at 160 TFlops for the past 28 hours.
[QUOTE=Prime95;382769]Nevermind. I think I've got a fix in place. Give it a try, garo.
BTW, MSSQL was using the proper index. It was my fault for not putting enough information in the where clause to make this particular case super fast.[/QUOTE] Superfast! Thanks. |
Primenet's having a bad day...
Just to let the admins know, I got back to my main workstation a little while ago to discover that GPU72's spiders are reporting many extremely long timeouts, and some "500 read timeout" errors, since about 1910 UTC today.