[QUOTE=chalsall;412120]A follow up.
Just look at [URL="http://www.mersenne.org/assignments/?exp_lo=80000000&exp_hi=90000000"]this query[/URL] to get an idea what we're dealing with. No disingenuous assumption intended, but a stupid AI has a bit of a problem dealing with that with a scarce resource on a five minute timescale.[/QUOTE] Wow! There's a name I haven't seen for a LOOOONG time. (Mr. P-1) |
[QUOTE=chalsall;412120]A follow up.
Just look at [URL="http://www.mersenne.org/assignments/?exp_lo=80000000&exp_hi=90000000"]this query[/URL] to get an idea what we're dealing with. No disingenuous assumption intended, but a stupid AI has a bit of a problem dealing with that with a scarce resource on a five minute timescale.[/QUOTE] Aurashift has some nice resources and it seems like he blazes through the P-1 work quickly and efficiently, so that should be fine. Mr P-1, however, looks like he has a lot of old assignments in there, over a month old and with estimated completions sometime early next year. I have no knowledge of that user so I don't have an opinion one way or another. :smile: |
[QUOTE=Madpoo;412157]Mr P-1 however, looks like a lot of old assignments in there, over a month old and estimated completions sometime early next year. I have no knowledge of that user so I don't have an opinion one way or another on that. :smile:[/QUOTE]
Mr. P-1 was actually the genesis of GPU72. He was coordinating GPU TF'ing (via PMs and email) before GPU72 came online. To put it on the record, I don't think anyone is being malicious here. But, when being only a few hours ahead in the TF'ing for the P-1'ing, sometimes things get handed out sub-optimally TF'ed. Sub-optimal, but not the end of the world. |
[QUOTE=LaurV;412145]Today, after this discussion, I enabled the proxy again. Remark that the computer is set to get First Time Tests. Guess what kind of assignments I have? Of course, you guessed right: I just got 22 pfactor lines in the last hour.[/QUOTE]
Do you remember what I suggested you do? In case you do not... Change the settings for each client, and have the client communicate with Primenet through the proxy. Then change the settings back to what you want (in this case, LL) and communicate again. That should fix things. It's probably an SPE on my part, but this should solve the problem. :smile: |
[QUOTE=chalsall;412165]But, when being only a few hours ahead in the TF'ing for P-1'ing, sometimes things get handed out sub-optimally TF'ed.[/QUOTE]
Let me finish Manfred's DC (275M mark passed with a match, ETA ~11 days) and then I promise I'll bring 1800 GHzDays/day of LLTF power till the end of the year :razz: Regarding the second part, LL versus P-1 assignments, I have to end the working day and go home to try that (it's lunch break now, but I'm still at the job), and I am skeptical, because I did that in the past (and I just "did it", well, partially, when switching back to the proxy, after all these months when I was getting the right assignments from PrimeNet). I remember someone else (Petrw?) having the same problem, and you "tickled" some small wire inside of that server, which solved the problem. But wait till I get home, and I will get back to you. The P-1 assignments may have been finished by now, and if your suggestion doesn't work, it is just the right time to get some new ones when requesting LL work :razz: |
[QUOTE=chalsall;412166]... a SPE on my part ...[/QUOTE][url=https://www.acronymfinder.com/SPE.html]Submassive Pulmonary Embolism[/url]?
[size=1][url=http://acronyms.thefreedictionary.com/SPE]Swedish Penis Enlarger[/url]?[/size] |
@Chris: don't reply to retina, he knows the meaning, he only likes to hear you saying it...
edit: actually, you can reply, we all like to hear you saying it... :razz: |
[QUOTE=LaurV;412218]@Chris: don't reply to retina, he knows the meaning, he only likes to hear you saying it...
edit: actually, you can reply, we all like to hear you saying it... :razz:[/QUOTE] It's a term I've incorporated in my mental reviews of my own code :smile: |
Nope, it won't work. Going back and forth how many times you want, it cycles like that:
[QUOTE]Getting assignment from server
PrimeNet success code with additional info: [COLOR=Red][B]GPU72[/B][/COLOR] assigned [COLOR=Red]P-1[/COLOR] factoring work.
Got assignment c10c95e323f2f65...: [COLOR=Red]P-1 M73578259[/COLOR]
Sending expected completion date for M73578259: Oct 10 2015
Getting assignment from server
PrimeNet success code with additional info: [COLOR=Red][B]GPU72[/B][/COLOR] assigned [COLOR=Red]P-1[/COLOR] factoring work.
Got assignment 3b2e431a68a2ff63...: [COLOR=Red]P-1 M73578199[/COLOR]
Sending expected completion date for M73578199: Oct 11 2015
Updating computer information on the server
Getting assignment from server
PrimeNet success code with additional info: [COLOR=Red][B]Server[/B][/COLOR] assigned [COLOR=Red]Lucas Lehmer[/COLOR] primality test work.
Got assignment 84839C0DBCB...1C61B: [COLOR=Red]LL M76914287[/COLOR]
Sending expected completion date for M76914287: Nov 03 2015
Getting assignment from server
PrimeNet success code with additional info: [COLOR=Red][B]Server[/B][/COLOR] assigned [COLOR=Red]Lucas Lehmer[/COLOR] primality test work.
Got assignment A6ACC9637F5...E96: [COLOR=Red]LL M76915441[/COLOR]
Sending expected completion date for M76915441: Nov 03 2015[/QUOTE]Your proxy doesn't give even a little pain in the button about what type of assignment I want. You need to tickle that little wire. Now, going through this procedure again, I remember the odd things that happened at that time (documented on the forum), which made me delete the proxy: one core was getting P-1 assignments and another core was getting normal LL/DC assignments when I was using the proxy. Which was pissing me off. It doesn't matter anyhow; I won't use the proxy (and I will do the P-1 work, I won't unreserve it, in spite of the fact it was unwanted, as I need some P-1 activity on that account :smile:) and it is not affecting me. 
I only resurrected this discussion because people wonder "why so many P-1 assignments", "we are concentrating too much power on P-1", etc. This may be a reason: they get P-1 assignments when they ask for another type of work. |
[QUOTE=LaurV;412234]Nope, it won't work. Going back and forth how many times you want, it cycles like that:.[/QUOTE]
I found the sequence of events is very specific... give me a little time to dig out the exact post somewhere in the previous couple hundred pages. :P |
For me, this has worked:[INDENT]Stop P95
Change work type to something you don't want to end up with
Have P95 communicate and send completion dates
Change work types to what you want
Have P95 communicate with server
Log into PrimeNet and see what it says your work types are. Correct, if necessary.
Have P95 communicate with server
See if the settings in P95 are correct
Unreserve unwanted assignments, which will have come in during these contortions.
Make sure P95 settings and PrimeNet settings agree
[/INDENT]YMMV. Not all of these steps may be necessary. This sequence is aimed at beating things into submission. :deadhorse: |
[QUOTE=kladner;412243]YMMV. Not all of these steps may be necessary. This sequence is aimed at beating things into submission. :deadhorse:[/QUOTE]
LaurV, could you please give this a try? For that particular machine, the GPU72 database doesn't see a change for each Computer_CPU record since 2013-11-26. Again, it's probably a Stupid Programmer Error (SPE) (:wink:) on my part, but it would be interesting to analyse the logs generated by the attempt. |
[QUOTE=chalsall;412245]LaurV, could you please give this a try?
For that particular machine, the GPU72 database doesn't see a change for each Computer_CPU record since 2013-11-26. Again, it's probably a Stupid Programmer Error (SPE) (:wink:) on my part, but it would be interesting to analyse the logs generated by the attempt.[/QUOTE] Butting in here, uninvited... Since Primenet stores user preferences on the server and hands out assignments based on that, is it a fair assumption that when using the GPU72 proxy, since it can't divine those user preferences itself, it has its own (hopefully matching) settings for the user? And the only way it would know is if it sees the client trying to change those settings as it proxies that particular request? Thus the "fix" of changing and setting back the work type will allow GPU72 to pick that up and assign the expected work? Butting back out now... |
[QUOTE=Madpoo;412251]Butting in here, uninvited...[/QUOTE]
Please always feel free to "butt in". :smile: [QUOTE=Madpoo;412251]Thus the "fix" of changing and setting back the work type will allow GPU72 to pick that up and assign the expected work?[/QUOTE] That's the hope. The Primenet API is rather complicated. And, I'm not convinced all the clients fully honor the protocol. |
"Just in time" in LLTF, well over a year ahead in DCTF.
Now that James has brought (back) to the table [URL="http://www.mersenne.ca/status/tf/"]Delta Reports[/URL] for the GIMPS project (thanks again mate; great job!), I thought it might be time to do another Status Report on the GPU72 sub-project.
[CODE]Range     Available  LL'ed 30 Day  LL'ed/Day  Days Ahead  TF'ed 30 Day  TF'ed/Day  Days G/L
DC            75187          5657     188.57      398.73         31423    1047.43      4.55
LL 1 & 2       4386          1044      34.80      126.03             3       0.10     -1.00
LL 3           7182          5275     175.83       40.85          8486     282.87      0.61
[U]LL 4           1018          2018      67.27       15.13          3004     100.13      0.49[/U]
Totals        12586          8337     277.90       45.29         11493     383.10      0.38[/CODE]
Some notes:
1. "Days G/L" == "Days Gained/Lost".
2. As we are so far ahead in the DCTF'ing domain, I simply merged all the "Categories" together.
2.1. In fact, we're even further ahead than 399 days; I simply counted the appropriately TF'ed candidates up to (but not including) 45M. There are another 15,000 or so candidates already ready between 45M and 50M.
3. As before, very little LL Cat 2 is being done, so I merged it with Cat 1.
4. The -1 value for the "LL 1 & 2" row is a bit misleading, as just about everything in those categories is already appropriately TF'ed.
5. What is not readily apparent from this is just how close to the wire we are with "feeding" the P-1'ers. We're ~45 days ahead of the LL'ers, but still only a few days ahead of the P-1'ers.
Please let me know if anyone has any questions or comments. |
How far ahead of P-1 are we if we change strategy so that rather than attempt a full TF before P-1, we plan to TF to one bit level lower, currently 74, before P-1, then complete the final bit level after P-1? With the excess P-1 power right now, it should be much easier to stay ahead of LL.
[QUOTE=frmky;412521]How far ahead of P-1 are we if we change strategy so that rather than attempt a full TF before P-1, we plan to TF to one bit level lower, currently 74, before P-1, then complete the final bit level after P-1? With the excess P-1 power right now, it should be much easier to stay ahead of LL.[/QUOTE]
"Spidy" is already coded for this. It releases P-1 candidates at "only 74" if needed. We actually have the firepower to take everything to 75, so long as there are no unexpected (and great) immediate demands for P-1 candidates. I hope that makes sense. |
[QUOTE=chalsall;412523]I hope that makes sense.[/QUOTE]
Sure it does. Thanks. I advocated for a long time for what Greg said. The 75th bit is a bit too much unless you do it on AMD cards (and then it's worth it, because the OpenCL FFT library is slower, therefore LL tests are slower on those cards, which makes them much more appropriate for TF. I myself have a 7970 which was, and is, running DCTF since it was installed, without stop, except for my annual leave, and there was a time when a 7990 was doing that too, but I sold it). (Edit: I didn't forget that I said I will bring my LLTF contribution to 1.8THzD/D after I finish the current LL/DC work; about 6-7 days left.) |
[QUOTE=LaurV;412540]Sure it does. Thanks. I advocated for a long time for what Greg said. The 75th bit is a bit too much unless you do it on AMD cards (and then it worth because openCL FFT library is slower, therefore LL tests are slower on these cards, which make them much more appropriate for TF - I myself have a 7970 which were (and is) running DCTF since it was installed, without stop, except for my annual leaving holiday - and there was a time when a 7990 was doing that too, but I sold it)[/QUOTE]
When I look at the graph on mersenne.ca for a [url=http://www.mersenne.ca/cudalucas.php?model=12]GTX 580[/url] it shows the cutoff to be 75 bits above 66M (76 above 84M). What is the reason for saying the 75th bit is too much? |
[QUOTE=Mark Rose;412542]When I look at the graph on mersenne.ca for a [URL="http://www.mersenne.ca/cudalucas.php?model=12"]GTX 580[/URL] it shows the cutoff to be 75 bits above 66M (76 above 84M). What is the reason for saying the 75th bit is too much?[/QUOTE]
My 580s take a hit at 71 and 72 bits on wavefront TF. They open up at 73. Of course, things are different in the 320M zone. |
Yes, and those tables are "theoretical" values; they don't consider how much P-1 was done, etc. Expected factors at 75 bits can go down from "1 in 75" to as low as "1 in a hundred" or so. We have already discussed this many times: you have to "tune" your system. Do TF for a few days, do P-1 for a few days, and see how many exponents you clear (by finding factors). Double the numbers for the LL range (because you save two LLs if you find a factor, and a little bit of P-1), and multiply by some number below 1 for the DC range (due to the huge amount of P-1 done in that area, your chances of finding factors by TF are lower). [B]Then[/B], if you can [U]clear more exponents[/U] by doing TF to 80 bits than you would by doing LL and/or DC tests [U]with your particular system[/U], go for TF. You can go to 81 if you consider yourself lucky :wink: (this was a joke, in spite of the fact that I behave like that sometimes; also please consider that 80 and 81 are intentionally exaggerated). You should do your homework for [U]your particular rig[/U], which includes the GPGPU card, memory, CPU, etc. (a very busy CPU will slow the LL or TF rate of the GPU card, in spite of the fact that they don't "wait for each other").
There are many factors to consider which are not "in the tables". In the end, consider that by doing TF you provide a free lunch for others; by doing LL/DC you have a "negligible" chance to find a prime (lunch for yourself). |
P95 takes a percent off of GPU usage on both my cards when it kicks in. However, this is not so true when running the Small FFT Torture Test. This leads me to believe that memory contention is the culprit on this FX-8350, dual-channel-memory system.
There are other system tasks which eat into GPU usage, but none so demonstrably. EDIT 2: This only concerns mfaktc. CUDALucas [U]always[/U] runs at 99% GPU, at least the way I have it set. |
Yeah, my DCTF factors found are 10% below expected, probably due to all the P-1 effort. But that shouldn't be a problem in front of the P-1 wave, right?
I don't do any LL. I usually use my available CPU power for SoB. Sometimes I'll help out with the random DC work Madpoo comes up with, or when someone wants an immediate triple check. My three GTX 580's run at factory overclocks, getting 430 to 433 GHz-d/d doing current DCTF work after mfaktc.ini tweaks. That's all they've ever done. I've never even plugged a monitor into them :) |
Yes, a small part of mfaktc is still a CPU task.
It just came to my mind reading your post that, if I am not mistaken, the GCD step of cudaPm1 is still a CPU task too, so when you think about computing times, running P95 slows you down there as well (but the GCD step takes only a minuscule fraction of the total time, depending on E too). |
[QUOTE=frmky;412521]How far ahead of P-1 are we if we change strategy so that rather than attempt a full TF before P-1, we plan to TF to one bit level lower, currently 74, before P-1, then complete the final bit level after P-1? With the excess P-1 power right now, it should be much easier to stay ahead of LL.[/QUOTE]
To speak to this a bit further... My thinking is that it is better to go to 75 before P-1 (where possible) because it lets the P-1 run search with higher bounds. Further, to speak to LaurV's argument: on James' graph, yes, the 75-bit crossover point is indeed 66M, but keep in mind that even down at 60M it's still 74.6545 bits. In my mind this means it would still be "profitable" going to 75 down there (or ideally to 74.6545 bits, if mfaktX supported it). Happy to be shown that my thinking is wrong. |
[QUOTE=chalsall;412572]To speak to this a bit further... My thinking is that it is better to go to 75 before P-1 (where possible) because it lets the P-1 run search with higher bounds.[/QUOTE]
Actually with a higher TF level, P95 runs with lower bounds since there is a smaller chance of finding a factor. To get actual numbers, I used 73412063 which is currently TF'd to 76. Given TF to 76, P95 on my computer runs with B1=555k, B2=7.77M. For lower TF levels, we find ... [CODE]TF  B1    B2
74  635K  9.68M
75  605K  8.77M
76  555K  7.77M[/CODE] |
[QUOTE=frmky;412620]Actually with a higher TF level, P95 runs with lower bounds since there is a smaller chance of finding a factor.[/QUOTE]It's the complex relationship between bounds, factor probability and runtime. Higher bounds mean higher chance of factor, but longer runtime. Runtime goes up a lot quicker with higher bounds than factor probability does, so there's a break-even point somewhere. Prime95 tries to pick bounds to maximize factors-per-time (through an iterative trial-error process if I understand correctly).
Expanding on the above table a bit: [CODE]TF  B1    B2     FactorProb
74  635K  9.68M  3.406898%
75  605K  8.77M  2.993129%
76  555K  7.77M  2.591607%[/CODE] |
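The bounds-picking tradeoff described above (higher bounds raise the factor probability, but runtime grows faster) can be sketched as a grid search. This is a toy model: the probability and cost functions below are invented placeholders, not Prime95's actual estimates; only the maximize-probability-per-runtime structure is the point.

```python
import math

# Toy stand-ins (assumptions, NOT Prime95's real formulas): probability
# grows roughly logarithmically with the bounds, cost grows linearly.
def toy_prob(b1: float, b2: float) -> float:
    # Chance of a factor with stage-1 bound b1 and stage-2 bound b2 > b1.
    return 0.01 * math.log(b1) + 0.002 * math.log(b2 / b1)

def toy_cost(b1: float, b2: float) -> float:
    # Stage 1 cost scales with B1; stage 2 cost with the B1..B2 interval.
    return b1 + 0.1 * (b2 - b1)

def best_bounds(b1_grid, b2_multipliers):
    # Pick the (B1, B2) pair that maximizes probability per unit runtime,
    # mimicking Prime95's iterative trial-and-error bounds selection.
    candidates = [(b1, b1 * m) for b1 in b1_grid for m in b2_multipliers]
    return max(candidates, key=lambda c: toy_prob(*c) / toy_cost(*c))
```

With these placeholder curves, `best_bounds([400e3, 600e3, 800e3], [10, 15, 20])` favors the cheapest bounds, because the toy cost grows linearly while the toy probability grows only logarithmically; Prime95's real search balances the same kind of competing curves.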
GPU72 not noticing completed 65M TF work
GPU72 assigned me exponents in the 65M range to take to 75 bits, but it doesn't seem to be noticing the completion of that work.
........ It has since picked them up. |
[QUOTE=Chuck;412624]It has since picked them up.[/QUOTE]
Yeah; thanks for the "ping". Spidy wasn't watching that range because we haven't had any candidates down there until recently; that range has slipped down into Cat 1, so very old assignments are being recycled. |
[QUOTE=James Heinrich;412621]Expanding on the above table a bit:[/QUOTE]
OK, help me out here guys. As you know, I do code, not math. What is optimal for GIM[B][U]P[/U][/B]S? As in, what will find the most factors per time unit based on our available resources? Should we indeed only go to 74 (or even 73 and below) before the P-1 run, and then 75 if a factor isn't found? My (perhaps mis-) understanding was going as high as we could with TF'ing first was better, but if it's not then we should change our strategy. GPU72 was created and is managed to help find the next Mersenne Prime, not just to find factors. |
[QUOTE=chalsall;412645]OK, help me out here guys. As you know, I do code, not math.[/QUOTE]Same here... :cool:
But I can generate some numbers to work with. Continuing the above example, I set Prime95 to use 4GB and ran the same exponent through at different TF levels (just long enough to see the bounds and ETA):[code]Exponent  TF      B1        B2  Prob  Runtime
73412063  65  785000  30026250  9.87   27h07m
73412063  66  805000  28980000  9.06   26h41m
73412063  67  805000  27168750  8.24   25h33m
73412063  68  805000  25357500  7.46   24h25m
73412063  69  800000  23600000  6.73   23h18m
73412063  70  795000  21663750  6.02   22h03m
73412063  71  765000  19698750  5.35   20h30m
73412063  72  730000  16972500  4.69   18h26m
73412063  73  690000  15007500  4.12   16h45m
73412063  74  655000  13263750  3.62   15h18m
73412063  75  610000  11895000  3.17   13h58m
73412063  76  565000  10452500  2.75   12h35m
73412063  77  535000   9496250  2.40   11h39m
73412063  78  495000   8415000  2.07   10h35m
73412063  79  455000   7166250  1.76    9h22m
73412063  80  410000   6047500  1.47    8h11m[/code] |
[QUOTE=James Heinrich;412653]But I can generate some numbers to work with.[/QUOTE]
Interesting... And, adding another column, "Probability per Hour", we get:[CODE]Level  Prob/Hour
65     0.3640
66     0.3395
67     0.3225
68     0.3055
69     0.2888
70     0.2730
71     0.2610
72     0.2544
73     0.2460
74     0.2366
75     0.2270
76     0.2185
77     0.2060
78     0.1956
79     0.1879
80     0.1796[/CODE] OK, I'm beginning to be convinced that releasing for P-1'ing at lower levels might make sense for two reasons. First, it would slow down the P-1'ing, since the runs take longer. And secondly, it would mean that the GPU TF'ers would have (slightly) less work to do. I'm wondering though... This test was with 4GB allocated. Is the same trend evident with less? Separately, is the same trend present for all candidate ranges? What about when Stage 2 isn't done? Perhaps Aaron could speak to what percentage of P-1 runs have both stages done? Thoughts? |
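The Prob/Hour column is just the probability divided by the runtime from James's table. A short sketch of that arithmetic (values copied from the posts above; only a few rows shown):

```python
# Probability-per-hour for a few TF levels, from James's table above.
table = {  # TF level: (P-1 success probability in %, Prime95 runtime)
    65: (9.87, "27h07m"),
    70: (6.02, "22h03m"),
    75: (3.17, "13h58m"),
    80: (1.47, "8h11m"),
}

def prob_per_hour(prob_percent: float, runtime: str) -> float:
    # Convert "27h07m" into decimal hours, then divide probability by it.
    hours, minutes = runtime.rstrip("m").split("h")
    return prob_percent / (int(hours) + int(minutes) / 60)

rates = {tf: round(prob_per_hour(p, rt), 4) for tf, (p, rt) in table.items()}
# rates matches the column above: {65: 0.364, 70: 0.273, 75: 0.227, 80: 0.1796}
```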
[QUOTE=James Heinrich;412653]Same here... :cool:
But I can generate some numbers to work with. Continuing the above example, I set Prime95 to use 4GB and ran the same exponent through at different TF levels (just long enough to see the bounds and ETA):[code]Exponent  TF      B1        B2  Prob  Runtime  970TF
73412063  65  785000  30026250  9.87   27h07m
73412063  66  805000  28980000  9.06   26h41m
73412063  67  805000  27168750  8.24   25h33m
73412063  68  805000  25357500  7.46   24h25m
73412063  69  800000  23600000  6.73   23h18m
73412063  70  795000  21663750  6.02   22h03m
73412063  71  765000  19698750  5.35   20h30m
73412063  72  730000  16972500  4.69   18h26m
73412063  73  690000  15007500  4.12   16h45m  0h38m
73412063  74  655000  13263750  3.62   15h18m  1h15m
73412063  75  610000  11895000  3.17   13h58m  2h30m
73412063  76  565000  10452500  2.75   12h35m
73412063  77  535000   9496250  2.40   11h39m
73412063  78  495000   8415000  2.07   10h35m
73412063  79  455000   7166250  1.76    9h22m
73412063  80  410000   6047500  1.47    8h11m[/code][/QUOTE] I partially filled in another column... one can simply extrapolate to the bit levels above and below. So my GTX-970 Extreme can complete the TF from 73 to 74 bits above in about 1.25 hours and save James's P-1 run 1.45 hours. The next bit, TF from 74 to 75, would take my card 2.5 hours and save P-1 only 1.33 hours. However, the odds that the TF finds a factor are about 1/3 those of P-1. Mind you, if the TF does find a factor it saves the entire P-1 time... OK, maybe I need a REAL math/stats person too. |
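Those numbers invite an expected-time comparison of the two orderings. The sketch below is illustrative only: the times come from the posts above (2.5 GPU hours for the 74-to-75 bit level on a GTX 970, James's P-1 runtimes at TF=74 and TF=75), the 1-in-75 TF factor chance is a rule of thumb, and it naively mixes GPU hours with CPU hours, which, as pointed out later in the thread, is comparing apples to oranges.

```python
# Expected hours to finish one exponent's last-bit TF plus P-1 work,
# under two orderings. All inputs are illustrative assumptions.
TF_HOURS = 2.5       # GTX 970 time for the 74->75 bit level (per the post)
P_TF = 1 / 75        # rough chance the last TF bit level finds a factor
P1_AT_74 = 15.30     # P-1 hours with TF done to 74 (15h18m, from the table)
P1_AT_75 = 13.97     # P-1 hours with TF done to 75 (13h58m, lower bounds)
P_P1_AT_74 = 0.0362  # P-1 success probability with TF done to 74

def expected_hours(tf_first: bool) -> float:
    if tf_first:
        # TF to 75 first; run the (cheaper) P-1 only if no factor found.
        return TF_HOURS + (1 - P_TF) * P1_AT_75
    # P-1 at 74 first; TF the last bit only if P-1 found nothing.
    return P1_AT_74 + (1 - P_P1_AT_74) * TF_HOURS
```

With these inputs, TF-first comes out around 16.3 combined hours versus about 17.7 for P-1-first, but the comparison only makes sense if a GPU hour and a CPU hour are valued equally.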
I think if we have the TF capacity, we should still fully TF before P-1. It's work that needs to be done anyway, and if it saves P-1 time, it increases the overall system throughput.
If I understand the math correctly, by eliminating the smaller factors, any remaining factor would have to be [url=https://en.wikipedia.org/wiki/Smooth_number#Powersmooth_numbers]more smooth[/url], so the range of potential factors is smaller. P-1 is like TF in that, beyond some point, it's eventually cheaper to just run the full LL test. So by TF'ing higher, we save P-1 work at no detriment, if I understand correctly. So the real problem is that we have too many resources doing P-1 work and not LL/DC or TF. |
I don't think this is a math problem, rather it is a resource allocation problem.
You can calculate the optimal order to do TF, P-1, and LL if all work is done on your GPU. If you start looking at prime95's P-1 speed then you are comparing apples (CPUs) to oranges (GPUs). I've not done these GPU-only calculations. My gut tells me that there is only a small change in total throughput (and I expect doing P-1 earlier would be better). If we look at this as resource allocation problem, then to me it appears we have an excess of P-1 capacity vs TF capacity. Therefore, we'd want to release exponents for P-1 one bit earlier. This would increase the amount of work P-1 users do and reduce the amount of work TFers have to do. |
[QUOTE=Mark Rose;412671]I think if we have the TF capacity, we should still fully TF before P-1. It's work that needs to be done anyway, and if it saves P-1 time, it increases the overall system throughput.[/QUOTE]
But, we're currently [B][U]right at the edge[/U][/B] of feeding the P-1'ers. If it's agreed that P-1'ing at lower bit levels makes more sense, then when "Spidy" needs to pull its rip-cord it releases at lower bit-levels rather than higher, and then recaptures for final TF'ing those candidates not factored to take to 75 before being re-released for LL'ing. [QUOTE=Mark Rose;412671]So the real problem is that we have too many resources doing P-1 work and not LL/DC or TF.[/QUOTE] Definitely don't disagree with that! DC'ing, in particular, needs some love (read: it continues to fall behind LL'ing by ~90 candidates a day).... :smile: |
[QUOTE=Prime95;412676]If we look at this as resource allocation problem, then to me it appears we have an excess of P-1 capacity vs TF capacity. Therefore, we'd want to release exponents for P-1 one bit earlier. This would increase the amount of work P-1 users do and reduce the amount of work TFers have to do.[/QUOTE]
Agreed. But not necessarily only one bit early. Perhaps as many bits lower as are available and can be safely released for P-1'ing without the risk of having the candidate assigned to an LL'er (who might not do the P-1 run "well", or even at all). BTW George, if I may ask... Why does the Probability per Hour drop so much for each bit level? |
[QUOTE=chalsall;412667]This test was with 4GB allocated. Is the same trend evident with less?[/QUOTE]Sorry, I tried to be modest with 4GB, I normally have 8GB allocated per worker :smile:
The same general trend will exist, with slightly different numbers, down to a very low memory allocation, at which point Prime95 will give up on Stage 2 and run Stage 1 only with a larger B1. Much more RAM translates to a very slight efficiency increase, specifically in that each pass of Stage 2 has a small fixed overhead, so if you can do it in fewer passes you can afford to spend a little more time with higher bounds [SIZE="1"](which translates into more RAM used, potentially more passes required... see why this is a complex optimization? :)[/SIZE] [QUOTE=chalsall;412667]Perhaps Aaron could speak to what percentage of P-1 have both stages done?[/QUOTE]I'm no Aaron, but here are all submitted results for 2015-Jan-01 through 2015-Sep-30:[code]SELECT COUNT(*) AS `howmany`, `result_type`,
       (`message` LIKE "%B1=%") AS `stage1`,
       (`message` LIKE "%B2=%") AS `stage2`
  FROM `primenet_results_archive`
 WHERE (`date_received` > "2015-01-01")
   AND (`result_type` IN ("F-PM1", "NF-PM1"))
 GROUP BY `result_type` ASC, `stage1` ASC, `stage2` ASC;
+---------+-------------+--------+--------+
| howmany | result_type | stage1 | stage2 |
+---------+-------------+--------+--------+
|   35040 | NF-PM1      |      1 |      0 |
|  113495 | NF-PM1      |      1 |      1 |
|     451 | F-PM1       |      0 |      0 |
|    1442 | F-PM1       |      1 |      0 |
|    7318 | F-PM1       |      1 |      1 |
+---------+-------------+--------+--------+
5 rows in set (21.54 sec)[/code]So, for no-factor P-1 results: 23.6% were done with no stage2 (presumably due to lack of available memory; the default Prime95 setting of 8MB is a strong cause, I assume, since that's insufficient to run stage2 on current exponents). For P-1 factors, 80% were found in stage2, 15% in stage1, and 5% are indeterminate (not reported in the results data). Which I find unexpected, since in my experience there should be pretty much an even split between stage1 and stage2 factors. In fact, I just checked my last 86 F-PM1 results, and exactly 43 were stage1 and 43 were stage2. 
I suspect it's something to do with the amount of RAM available. If you allocate 8MB (default) you won't get stage2. If you allocate a tiny amount (I dunno, 50MB?) then it's enough to do a feeble stage2, but in that case B1 is set so low that many stage1 factors that could have been found aren't found in stage1 but are found in stage2. |
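The quoted percentages follow directly from the counts in the query result. A quick sketch of the arithmetic (counts copied from the table above):

```python
# Counts from the primenet_results_archive query result above.
NF_NO_STAGE2 = 35040   # NF-PM1, stage1 only
NF_BOTH = 113495       # NF-PM1, stage1 and stage2
F_UNKNOWN = 451        # F-PM1, stage not reported
F_STAGE1 = 1442        # F-PM1, factor found in stage1
F_STAGE2 = 7318        # F-PM1, factor found in stage2

nf_total = NF_NO_STAGE2 + NF_BOTH
f_total = F_UNKNOWN + F_STAGE1 + F_STAGE2

no_stage2_pct = 100 * NF_NO_STAGE2 / nf_total  # ~23.6% ran stage1 only
stage2_pct = 100 * F_STAGE2 / f_total          # ~79.4% ("80%") of factors
stage1_pct = 100 * F_STAGE1 / f_total          # ~15.7% ("15%") of factors
unknown_pct = 100 * F_UNKNOWN / f_total        # ~4.9% ("5%") indeterminate
```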
[QUOTE=James Heinrich;412680]Sorry, I tried to be modest with 4GB, I normally have 8GB allocated per worker :smile:[/QUOTE]
LOL... If I may go tangentially nostalgic... The first computer I (and many) owned was a TRS-80 model 1; 4 [B][U]K[/U][/B]B of RAM. At my high-school we first worked on Commodore PETs (again with 4 KB of RAM). There was ongoing heated (but friendly) argument amongst all the teachers and the students as to what was better, Z-80 or 6502 assembly (BASIC was of course, by definition, for Beginners :wink:)... When you step back a bit, it is truly stunning just how much progress has been made in a very short period of time. |
I think it would be time to change the default Prime95 mem allocation from 8MB to something more in line with the amount of memory modern computers are usually fit with.
I joined GIMPS more than 13 years ago; at that time 256MB of total memory was not uncommon on a mid-range desktop PC, and Prime95's default allocation was already 8MB. It seems to me that 128MB, or even 256MB, would be a reasonable amount to allocate. |
[QUOTE=lycorn;412685]I think it would be time to change the default Prime95 mem allocation from 8MB to something more in line with the amount of memory modern computers are usually fit with.[/QUOTE]
Makes sense. And should be done. But, there are a great many workers which have been "fired and forgotten" still working. No chance of changing their settings nor their code. |
[QUOTE=lycorn;412685]It seems to me that 128MB, or even 256 MB. would be a reasonable amount to allocate.[/QUOTE]Let's look at that with some numbers (using the aforementioned M73412063 as a test case):[code]
MB     B1      B2        Prob
8      965000  965000    2.48
16     965000  965000    2.48
32     965000  965000    2.48
64     965000  965000    2.48
128    965000  965000    2.48
256    640000  6400000   3.86
512    695000  11815000  4.37
1024   720000  15120000  4.59
2048   730000  16242500  4.66
4096   730000  16972500  4.69
8192   735000  17272500  4.71
16384  730000  17337500  4.71
32768  730000  17337500  4.71[/code]In fact, 8MB or 128MB is the same thing -- no stage2 is attempted. For current exponents 256MB would be just enough to get into stage2, but 512MB would be much better (and give a little breathing room to still be useful a year or two from now). Beyond 1GB there is some benefit, but not much. |
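The diminishing returns in that table are easy to see from the marginal probability gain per RAM doubling. A small sketch using the Prob column above (values in %):

```python
# P-1 success probability (%) vs Prime95 memory allocation, from the
# table above for M73412063.
prob = {8: 2.48, 128: 2.48, 256: 3.86, 512: 4.37, 1024: 4.59,
        2048: 4.66, 4096: 4.69, 8192: 4.71, 16384: 4.71}

# Probability gained by each doubling of allocated memory.
gain = {mb: round(prob[mb] - prob[mb // 2], 2)
        for mb in (256, 512, 1024, 2048, 4096, 8192, 16384)}
```

The jump into stage2 at 256MB dominates (about +1.4 points); past 1GB each further doubling buys only a few hundredths of a percent.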
[QUOTE=chalsall;412684]There was ongoing heated (but friendly) argument amongst all the teachers and the students as to what was better, Z-80 or 6502 assembly…[/QUOTE][URL="http://tlindner.macmess.org/wp-content/uploads/2006/09/byte_6809_articles.pdf"]6809[/URL]!
:tu: |
[QUOTE=chalsall;412679]BTW George, if I may ask... Why does the Probability per Hour drop so much for each bit level?[/QUOTE]
Because the GPU found the "easy" factors at the lower bit levels. In other words, the probabilities aren't going down so much due to the changing bounds as due to the fact that there are fewer possible small factors to find. |
[QUOTE=Prime95;412698]Because the GPU found the "easy" factors at the lower bit levels. In other words, the probabilities aren't going down so much due to the changing bounds as due to the fact that there are fewer possible small factors to find.[/QUOTE]
OK. Cool. But... If the P-1 code knew what had already been found, would it not optimize itself to take this into account? |
[QUOTE=Xyzzy;412690]6809[/QUOTE]
I have to say that at one point in time I fell in love with 68000 assembly. Very symmetrical. I mostly hand coded "Amoeba Invaders" in 68K assembly. Unrolled loops, etc. We thought we would get a lucrative contract. Instead we (Late Night Developments) almost got our asses sued off.... |
[url=https://www.youtube.com/watch?v=O4J8kaqBhXg]This?[/url]
It does have your name on it :smile: |
[QUOTE=James Heinrich;412720][url=https://www.youtube.com/watch?v=O4J8kaqBhXg]This?[/url]
It does have your name on it :smile:[/QUOTE] Yup. That's it. :smile: We were about to get a serious contract right up until we were told we were about to be sued because it was too accurate. I'm not joking.... |
[QUOTE=James Heinrich;412653]I set Prime95...[/QUOTE]
We are comparing, as Chris would say, apples and oranges. The comparison has to be between [U]GPU[/U]-TF, [U]GPU[/U]-LL and [U]GPU[/U]-P-1: which can clear more exponents per time unit? This with and without P95 running in the background (on the [U]CPU[/U]). And if you have an Nvidia card and [U]can[/U] run cudaPm1, then it is [B][U]definitely[/U][/B] better to run P-1 [U]before[/U] going to that last bit (i.e. 75 for the LL front). If you have even a 2% chance of finding a factor, that means you will find one factor in 50 trials, while TF will roughly find 1 in 75, for comparable running time. In fact, well before the times become comparable, you should switch to P-1. For a 3.9% or 4% chance of a factor (which is the usual one at the LL front right now, without manually changing the Pfactor lines), you will find one in ~25 exponents. So, if your last-bit TF takes even a third of your P-1 time, you are [B][U]better off[/U][/B] doing P-1. Some may remember RDS's plea for P-1 a long time ago; he was right. George also said long ago, when all TF/P-1/LL was done on the CPU only, that P-1 before the last bit is better. Now we have only moved from CPU to GPU, but the conclusion stays: the GPUs may be faster at TF, so we raise a few bit levels, but the logic is exactly the same, since we became able to do [U]all[/U] three types of work on the [U]GPU only[/U]. |
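LaurV's rule of thumb reduces to simple arithmetic. In the sketch below, the 4% P-1 chance and the 1-in-75 last-bit TF chance are the figures from the post; the break-even ratio that falls out matches his "a third of the P-1 time" claim.

```python
# Expected attempts per factor found, for P-1 vs the last TF bit level.
P_P1 = 0.04    # ~4% P-1 factor chance at the LL wavefront (per the post)
P_TF = 1 / 75  # rough chance of a factor in the last TF bit level

attempts_per_factor_p1 = 1 / P_P1  # ~25 exponents per P-1 factor
attempts_per_factor_tf = 1 / P_TF  # ~75 exponents per TF factor

# P-1 clears more exponents per unit time whenever its per-exponent
# runtime is less than (P_P1 / P_TF) times the TF runtime; equivalently,
# last-bit TF only wins if it takes under a third of the P-1 time.
breakeven_ratio = P_P1 / P_TF  # ~3.0
```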
[QUOTE=chalsall;412704]OK. Cool.
But... If the P-1 code knew what had already been found, would it not optimize itself to take this into account?[/QUOTE] (sorry for double posting, I continued reading through the thread and saw George's post too) It doesn't work like that. Big factors can be smooth (i.e. P-1-discoverable) and small factors can be "rough" (i.e. not discoverable by P-1). The fact that we TF to some limit doesn't help P-1 directly; or rather, it helps only as a side effect, because we only do P-1 if we don't find any TF factors, and it slightly changes the probability calculation (i.e. instead of asking "how many B1-powersmooth numbers", we should ask "how many B1-powersmooth numbers over xx bits" when we calculate the chances of finding a factor). |
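To make "smooth" vs "rough" concrete: stage 1 of P-1 finds a factor q exactly when q-1 is B1-powersmooth (every prime power dividing q-1 is at most B1), regardless of how big q itself is. A toy sketch, with arbitrary demo numbers rather than real Mersenne factors:

```python
# Toy illustration of "smooth" vs "rough" (numbers here are arbitrary,
# not actual Mersenne factors).  Stage 1 of P-1 finds a factor q exactly
# when q - 1 is B1-powersmooth, i.e. every prime power dividing q - 1
# is <= B1 -- no matter how large q itself is.

def is_b1_powersmooth(n, b1):
    """True if every prime power dividing n is at most b1 (naive trial division)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            prime_power = 1
            while n % d == 0:
                n //= d
                prime_power *= d
            if prime_power > b1:
                return False
        d += 1
    return n <= b1  # whatever is left is a prime appearing to the first power

# A "big" q with a very smooth q - 1: 5040 = 2^4 * 3^2 * 5 * 7,
# so B1 = 16 already covers every prime power.
print(is_b1_powersmooth(5040, 16))   # True

# A "small" q with a rough q - 1: 58 = 2 * 29 needs B1 >= 29.
print(is_b1_powersmooth(58, 16))     # False
```

This is why TF depth and P-1 success are nearly independent: TF filters by the size of q, P-1 filters by the smoothness of q-1.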
[QUOTE=LaurV;412726]We compare, as Chris would say, apple and oranges. The comparison has to be between [U]GPU[/U]-TF, [U]GPU[/U]-LL and [U]GPU[/U]-Pm1[/QUOTE]I must respectfully disagree.
What you say is perhaps true if you want to optimize what work you as a user with your particular set of GPU/CPU resources should work on, but I think here we're looking at the overall GIMPS picture, where 90+% of TF is done on GPUs, and 90+% of P-1 is done (usually by a different person) on a CPU. |
[QUOTE=James Heinrich;412730]I must respectfully disagree.
What you say is perhaps true if you want to optimize what work you as a user with your particular set of GPU/CPU resources should work on, but I think here we're looking at the overall GIMPS picture, where 90+% of TF is done on GPUs, and 90+% of P-1 is done (usually by a different person) on a CPU.[/QUOTE] You appear to claim that solving for the most-efficient path should take into account the typical hardware used for each task, rather than judging by what a single piece of hardware can do for each task. How would you go about accurately doing that? How do you value 1 hr of GPU time vs 1 hr of CPU-core time (or 1 hr of a full CPU)? I don't see how making those judgments is an improvement over pretending one's own GPU is the only item that is going to work on a particular number, and determining the most (expected) time-efficient way to go about testing that candidate. The solution is likely different on CPU vs GPU (for instance, it may be optimal to P-1 higher bounds on a GPU than a CPU because the code is relatively faster than TF code on a GPU, or vice versa), but it seems folly to declare one solution that holds for all of GIMPS for any machine. |
Well, Nash said with his equilibrium concept that the group does best when each individual does what is best for the group [U]and himself[/U].
You mean that if I can clear one exponent per day doing "this type of work" with [U]my hardware[/U], but I can only clear one exponent every X>1 days doing "this other type of work", [U]is there any situation[/U] where the project (in its totality) would do better if I do the "other type of work"? For the heck of me, I can't believe that. The project is better off if every user clears as many exponents as he can, as fast as he can, with the hardware he has. Period. If your hardware can clear more exponents per day (week, month) doing TF, then go for TF. I might be wrong... edit: and of course, this does not consider that some guys want to find primes and don't care about us, the small fish doing the dirty work for them - remember davieddy. Or curtisc, who only does LL. Others want to find factors, etc... |
Change in P-1 release policy
Just so everyone knows, based on this discussion GPU72 will now release candidates for P-1'ers back to Primenet as needed sorted by Factored level ahead of the Cat 3 range. This is so there's little to no chance of them being assigned for LL'ing.
If a factor isn't found, GPU72 will then recapture the candidate to take it to 75 bits before re-releasing it back to Primenet for LL assignment. Those using the GPU72 manual P-1 assignment page may wish to choose the option "Lowest TF level". The default "What Makes Sense", and the Proxy, will continue to assign candidates sorted by Lowest Exponent in order to appropriately feed the LL'ers. |
[QUOTE=LaurV;412758]You mean that if I can clear one exponent per day doing "this type of work" with [U]my hardware[/U], but I can only clean one exponent every X>1 days doing "this other type of work", [U]is there any situation[/U] when the project (in its totality) would do better if I do the "other type of work"? For the hack of me, I can't believe that. The project is better if any user clears how many exponents he can, how fast he can, with the hardware he has. Period.[/QUOTE]Try this: Look for small factors in the 900M range. You'll clear more exponents per day (by finding [SIZE="1"]small[/SIZE] factors) than you do now, but it's arguably less useful for the project overall.
|
[QUOTE=LaurV;412758]You mean that if I can clear one exponent per day doing "this type of work" with [U]my hardware[/U], but I can only clean one exponent every X>1 days doing "this other type of work", [U]is there any situation[/U] when the project (in its totality) would do better if I do the "other type of work"? For the hack of me, I can't believe that. The project is better if any user clears how many exponents he can, how fast he can, with the hardware he has. Period.[/QUOTE]Simplistic greedy algorithms are rarely the best if everyone is being greedy. Cooperation and coordination are usually the best way for everyone to make maximal progress towards a shared goal.
|
[QUOTE=LaurV;412758]The project is better if any user clears how many exponents he can, how fast he can, with the hardware he has. Period.
If your hardware can clear more exponents per day (week, month) doing TF, then go for TF. [/QUOTE]As James already replied, this would mean all GPUs should do only TF at low levels: no need to stop at 1G exponents. That would indeed maximise the number of cleared exponents.[QUOTE=VBCurtis;412734]You appear to claim that solving for the most-efficient path should take into account the typical hardware used for each task, rather than judging by what a single piece of hardware can do for each task. How would you go about accurately doing that? How do you value 1 hr of GPU time vs 1 hr of CPU-core time (or 1 hr of a full CPU)? I don't see how making those judgments is an improvement over pretending one's own GPU is the only item that is going to work on a particular number, and determining the most (expected) time-efficient way to go about testing that candidate. The solution is likely different on CPU vs GPU (for instance, it may be optimal to P-1 higher bounds on a GPU than a CPU because the code is relatively faster than TF code on a GPU, or vice versa), but it seems folly to declare one solution that holds for all of GIMPS for any machine.[/QUOTE]The reasoning you apply (and which is encoded in the Prime95 program), that a single machine will do all the work on an exponent, was almost true at the beginning of GIMPS and is still visible in the very poorly P-1'd exponents. But now different machines do the TF to different levels, other machines do the P-1, and still others do the first LL and the double check(s) on a single exponent. It will indeed be difficult to define an accurate typical hardware profile (one reason being that the hardware mix continuously changes). But there are trends, and one can calculate approximate TF levels and P-1 bounds based on them, especially now that people specialise in particular types of work. Jacob |
[QUOTE=S485122;412776]But there are trends and one can calculate approximate TF levels, P-1 bounds based on that. Especially now that people specialise on particular types of work.[/QUOTE]
Completely agree. Economics has been called a "Bastard Science" (even by The Economist). Rightly so. I particularly enjoyed the lessons learnt from the mistakes made by Reinhart and Rogoff of Harvard University in their spreadsheet (which, because they were PhDs, was assumed to be correct). Interestingly, fiscal policy in many nations was informed by this (mis-)information, to their detriment. |
[QUOTE=chalsall;412778]Completely agree. Economics has been called a "Bastard Science" (even by The Economist). Rightly so.[/QUOTE]
Economics is really a humanity. Economics is fundamentally about human behaviour. |
[QUOTE=Mark Rose;412799]Economics is really a humanity. Economics is fundamentally about human behaviour.[/QUOTE]
So, then, a "social science". Nowhere near as accurate as Asimov predicted in "The Foundation". Read: not worth its weight in salt. |
[QUOTE=chalsall;412802]So, then, a "social science". No where near as accurate as Asimov predicted in "The Foundation".
Read: not worth its weight in salt.[/QUOTE] Isn't the Foundation series based on some sort of psycho-history? That is at least what I can remember (I read the first 3 books of the series in Dutch some 10 years ago). |
[QUOTE=VictordeHolland;412818]Isn't the Foundation serie based on some sort of psycho-history? That is at least what I can remember (I read the first 3 books of the serie in Dutch some 10 years ago).[/QUOTE]
Yes. "Psycho-history" is remarkably close to what we now call economics. Equally non-predictive. |
[QUOTE=S485122;412776]As James already replied this would mean all GPUs would do only TF on low levels : no need to stop at 1 G exponents. [/QUOTE]
You (and James) miss the fact that [U]yes[/U], the best way to clear exponents in the 1G range is - [B][U]still[/U][/B] - trial factoring. Why don't you do LL in 1G if you think otherwise? :razz: The goal of the project is to find primes, and you only find primes by doing LL. LL is more "reasonable" to do for lower exponents (this was davieddy's argument, wasn't it? haha), for whatever reasons, therefore the lower exponents are where you have to clear more exponents, by any means, with [U]your[/U] hardware. Of course I am not against cooperation; see my activity in the forum and in the project, for years. But if it takes me 3 hours to do a P-1 in 70M and I find a factor every 25 trials (say a 4% chance of finding a factor), then it makes no sense to TF at a bit level higher than 75 if that would take me more than 1 hour. Because in the P-1 case I can find 1 factor in 75 hours, and in the TF case I would find less than a factor in 75 hours. With the same hardware and power consumption. Of course, if that hardware is the newest Tesla and I can do one 70M LL test in 37 hours (just an example!), then neither the TF nor the P-1 would make sense, because I could clear that exponent in 74 hours by doing one LL and one DC on it, and THAT is a sure thing (not probabilistic, like finding factors); additionally, it may bring me a prime. What I want you to understand is that I am not arguing against you. What you say is fine, and again, I am not against cooperation. But I want [U]every[/U] user to understand the goal, and to make this type of calculation for him/herself, for the hardware [U]he owns/uses[/U], and see for himself how he can help better. And when I say help, I mean "help the project" [U]and[/U] "help yourself". Help better, feel better, do whatever you like. But think first. And that should be the attitude. Don't forget that the main goal is to find primes, not to clear exponents. 
But somehow, they go together - the Lord works in mysterious ways :razz: edit: Disclaimer: the numbers used are only examples for the calculation |
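The post's three-way comparison (TF, P-1, LL+DC) can be put on one axis: expected hours of work per exponent cleared. A sketch using only the post's explicitly hypothetical numbers (3 h P-1 at 4%; a 37 h LL on some imagined Tesla, doubled for the matching double-check):

```python
# Hours per *cleared* exponent, using the post's hypothetical numbers.

def hours_per_cleared(hours_per_attempt, p_clear):
    """Expected hours of work per exponent cleared, when an attempt
    clears the exponent with probability p_clear."""
    return hours_per_attempt / p_clear

pm1 = hours_per_cleared(3.0, 0.04)       # P-1: 4% chance -> 75.0 h/exponent
lldc = hours_per_cleared(2 * 37.0, 1.0)  # LL + matching DC always clears -> 74.0 h

# On this hypothetical card the deterministic LL+DC path (74 h) edges out
# probabilistic P-1 (75 h expected), which is the post's conclusion.
print(pm1, lldc)
```

The same formula handles TF by plugging in the bit-level time and its ~1-in-75 success rate, so each user can rank all three work types for their own hardware.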
[QUOTE=LaurV;412830]... that [U]every[/U] user understand [color=red][b]the[/b][/color] goal ...[/QUOTE]I think this is where you are perhaps creating a problem. There is no one single goal. People have different goals. Some just want to test their hardware for errors. Some want to find lots of factors. Some want to find primes. Some want to be the top of the producers report. Some just want to be part of the community. etc. People will optimise for [color=red][b]their[/b][/color] goal, whatever that goal may be.
|
That is exactly what I was saying. You picked on my English :yucky: I was just arguing for "the group does well when the individual does what's better for the group [U]and himself[/U]" before, and you and others jumped on me about the "greedy" part. It was my reply to that. This is not a complaint; I like this discussion. We are all in "violent agreement" here. I will follow whatever is decided, in my own way (i.e. doing the work at which I "feel" I am most efficient).
|
[QUOTE=LaurV;412834]You piked on my English[/QUOTE]Okay, I didn't realise it was just a translation thing. But for others reading the sentence it is perhaps good that the confusion is cleared up.
|
[QUOTE=retina;412835]I didn't realise it was just a translation thing[/QUOTE]
[COLOR=White]Honestly it wasn't. And I didn't really think that you picked on it. But every time we are cornered, we blame the fact that we are not native speakers... Where is that picture from Mike? (pity we can't make that picture white-only, too) :kitten:[/COLOR] |
More DCTF exponents are needed... Anonymous might grab nearly all of the available ones this weekend if history repeats.
|
[QUOTE=Mark Rose;412890]More DCTF exponents are needed... Anonymous might grab nearly all of the available ones this weekend if history repeats.[/QUOTE]
Do we still need more DC? I can move around 1000gHzd/d from TF to DC if needed. |
[QUOTE=Mark Rose;412890]More DCTF exponents are needed... Anonymous might grab nearly all of the available ones this weekend if history repeats.[/QUOTE]
I don't think so; as we get to 73 bits, the time doubles now, so he would only get half of the assignments :razz: |
When I wrote that two weeks ago the situation was different.
|
[QUOTE=dragonbud20;414783]Do we still need more DC? I can move around 1000gHzd/d from TF to DC if needed.[/QUOTE]
No, please don't, unless you want to. There is zero need to move resources to DCTF. I was talking about GPU72 needing more available exponents for DCTF assignment. Some of us are working on finishing all the DCTF years ahead of time, for fun. It would be better for the project if we were doing LLTF, so I don't encourage anyone to switch from LLTF to DCTF. |
[QUOTE=Mark Rose;414793]When I wrote that two weeks ago the situation was different.[/QUOTE]
:redface: (didn't see the date, the post that followed confused me, sorry; you were unlucky to be at the beginning of the page this time, hehe) |
Honestly I always have this page open in a tab so I quickly forget when everything was posted lol
|
[QUOTE=Mark Rose;412890]More DCTF exponents are needed... Anonymous might grab nearly all of the available ones this weekend if history repeats.[/QUOTE]
And again...hey but it's a good thing ANONYMOUS is helping out so much. Spidey just needs to put in a double shift. |
Chris, please check those assignment limits. That is, 47M and 48M are supposed to be released at 72.
I switched my DCTF preference to "factor to 73". Before, it was "to 70". I had not modified this for a long time, following your update to feed us with 71 and later with 72, which was working fine for me, so I was too lazy to "move my butt" and update my local work preference. But then I did it yesterday, and now I am getting 48M to 73 (!?!), of which I already did a bucket before seeing what was going on. But you can't expect me to continue doing so. I modified all locally cached work by hand from "71,73" to "71,72". The last bit will "hang" on my assignments; I will try to go through them as they complete to 72 and unreserve, but this is some work to do, and I can't promise I will really do it. I think that if I select "73", you should either feed me with 48M to 72 (only! because that is the "release" point), or, if you give me 73, then it should be 50M+ exponents. That is how I see it. |
Alas. There are people like me who when requesting assignments to a factoring limit, would like to get assignments that go to that factoring limit. I go to 76 on LLTF because it fits what I think I can do best with my machine to help the overall prime95 effort. I wouldn't want the system to derate my requests automatically to 75.
-Walt |
[QUOTE=LaurV;415221]Chris, please check those assignment limits.[/QUOTE]
I've noticed this behaviour if I select "Lowest Exponent". If I pick "What makes sense" it works as expected, giving out assignments at 50M+. |
[QUOTE=Mark Rose;415318]I've noticed this behaviour if I select "Lowest Exponent". If I pick "What makes sense" it works as expected, giving out assignments at 50M+.[/QUOTE]
"Lowest Exponent" can go to quite high exponents if no upper limit is set. Careful balancing between range and request types can pull in just about anything which is available. Reference [URL]http://www.gpu72.com/reports/available/[/URL] to set your parameters. I occasionally get the urge to take assignments from lower levels (factor lust). This gets me into 73M to 76+M territory. Doing LLTF, until recently, GPU72 was giving me 67M assignments from 74 to 75, with either WMS or Let GPU Decide selected. I see that the ones I have queued since a few days are high 74M's and low 75M's from 74 to 75. Would this be to keep the churners fed? EDIT: I realize that 67M has been essentially wiped out. |
[QUOTE=kladner;415357]"Lowest Exponent" can go to quite high exponents if no upper limit is set. Careful balancing between range and request types can pull in just about anything which is available. Reference [URL]http://www.gpu72.com/reports/available/[/URL] to set your parameters. I occasionally get the urge to take assignments from lower levels (factor lust). This gets me into 73M to 76+M territory.
Doing LLTF, until recently, GPU72 was giving me 67M assignments from 74 to 75, with either WMS or Let GPU Decide selected. I see that the ones I have queued since a few days are high 74M's and low 75M's from 74 to 75. Would this be to keep the churners fed? EDIT: I realize that 67M has been essentially wiped out.[/QUOTE] I just picked off three stray 67M's, and then got a couple of 68M's using WMS. It's fun to empty a column. |
[QUOTE=kladner;415357]"Lowest Exponent" can go to quite high exponents if no upper limit is set. Careful balancing between range and request types can pull in just about anything which is available. Reference [URL]http://www.gpu72.com/reports/available/[/URL] to set your parameters. I occasionally get the urge to take assignments from lower levels (factor lust). This gets me into 73M to 76+M territory.[/QUOTE]
I think you're describing the behaviour of "Lowest TF level" not "Lowest Exponent". |
"Workers' Overall Progress for the last Day" report
I thought this report included only the previous 24 hours of work, but I noticed that AirSquirrels' numbers have not changed in several days.
If a user does no work in a 24 hour period, I thought this report should show zeroes. Does it default to the last 24 hour period of activity when no new work is being reported? |
[QUOTE=Mark Rose;415375]I think you're describing the behaviour of "Lowest TF level" not "Lowest Exponent".[/QUOTE]
Oops! |
[QUOTE=Chuck;415382]I thought this report included only the previous 24 hours of work, but I noticed that AirSquirrels numbers have not changed in several days. If a user does no work in a 24 hour period, I thought this report should show zeroes. Does it default to the last 24 hour period of activity when no new work is being reported?[/QUOTE]
You have caught me out. :smile: Because of the computational load of doing a full recompute, it can take a while for inactive participants to be marked as "dirty" (in a cache sense). This actually annoys me. Some do an amazing amount of work, but submit their results only every week or so. This is, at the end of the day, my mistake, not theirs. I expected results to be submitted within 24 hours of completion. This doesn't always happen. Again, my mistake. I thank the heavy hitters for the fire-power. |
1 Attachment(s)
[QUOTE=chalsall;415452]I thank the heavy hitters for the five-power.[/QUOTE]Yes, thanks!
... Is that like something[sup]5[/sup], or 5[sup]something[/sup]? :unsure: |
It's not just you! [URL]http://www.gpu72.com[/URL] looks down from here.
"500 Internal Server Error" It's Baaaack! :smile: |
[QUOTE=kladner;416329]It's not just you! [URL]http://www.gpu72.com[/URL] looks down from here. ?
"500 Internal Server Error" It's Baaaack![/QUOTE] Hmmm.... |
It's happening again, but now it is reported as "Just Me."
Time to power cycle the cable modem. |
[QUOTE=kladner;416336]It's happening again, but now it is reported as "Just Me."
Time to power cycle the cable modem.[/QUOTE] Just for information: the problem seems to be somewhere in my connection. Well, the cable modem reset has not helped. DNS for the router seems to be in order; "Is It Everyone" still says it's just me. At one point I was able to see the GPU72 home page, but not the individual statistics or assignments pages. Now they all respond very quickly with the 500 Server Error message. Oh well. Almost time for work, anyway. |
[QUOTE=kladner;416342]The problem seems to be somewhere in my connection.[/QUOTE]It's not just you. It was showing error when you first posted, then it worked again (at least the home page), and now it's not working again.
|
[QUOTE=James Heinrich;416344]It's not just you. It was showing error when you first posted, then it worked again (at least the home page), and now it's not working again.[/QUOTE]
Thanks, James. :smile: |
Apparently I still have the old proxy setup.
[code][Main thread Nov 16 19:45:58] Starting workers.
[Comm thread Nov 16 19:45:58] Updating computer information on the server
[Worker #1 Nov 16 19:45:58] Worker starting
[Worker #1 Nov 16 19:45:58] Setting affinity to run worker on logical CPU #1
[Worker #3 Nov 16 19:45:58] Waiting 10 seconds to stagger worker starts.
[Worker #4 Nov 16 19:45:58] Waiting 15 seconds to stagger worker starts.
[Worker #2 Nov 16 19:45:58] Waiting 5 seconds to stagger worker starts.
[Comm thread Nov 16 19:45:58] PnErrorResult value missing.  Full response was:
[Comm thread Nov 16 19:45:58] <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
[Comm thread Nov 16 19:45:58] <html><head>
[Comm thread Nov 16 19:45:58] <title>500 Internal Server Error</title>
[Comm thread Nov 16 19:45:58] </head><body>
[Comm thread Nov 16 19:45:58] <h1>Internal Server Error</h1>
[Comm thread Nov 16 19:45:58] <p>The server encountered an internal error or
[Comm thread Nov 16 19:45:58] misconfiguration and was unable to complete
[Comm thread Nov 16 19:45:58] your request.</p>
[Comm thread Nov 16 19:45:58] <p>Please contact the server administrator,
[Comm thread Nov 16 19:45:58] chalsall@ideas4lease.com and inform them of the time the error occurred,
[Comm thread Nov 16 19:45:58] and anything you might have done that may have
[Comm thread Nov 16 19:45:58] caused the error.</p>
[Comm thread Nov 16 19:45:58] <p>More information about this error may be available
[Comm thread Nov 16 19:45:58] in the server error log.</p>
[Comm thread Nov 16 19:45:58] <hr>
[Comm thread Nov 16 19:45:58] <address>Apache/2.2.23 (CentOS) Server at v5.mersenne.org Port 80</address>
[Comm thread Nov 16 19:45:58] </body></html>
[Comm thread Nov 16 19:45:58] Visit http://mersenneforum.org for help.
[Comm thread Nov 16 19:45:58] Will try contacting server again in 300 minutes.[/code] |
[QUOTE=gpu72site]The server encountered an internal error or misconfiguration and was unable to complete your request.[/QUOTE]
This is what I am getting from here, and the "only me?" says "no". edit: interesting, if I only type gpu72.com (without www) the isup.me says it is only me. But with www it says not. DNS problem? |
[QUOTE=LaurV;416368]This is what I am getting from here, and the "only me?" says "no".
edit: interesting, if I only type gpu72.com (without www) the isup.me says it is only me. But with www it says not. DNS problem?[/QUOTE] Interesting observation. The same happens for me with and without the www. |
[QUOTE=LaurV;416368]This is what I am getting from here, and the "only me?" says "no".
edit: interesting, if I only type gpu72.com (without www) the isup.me says it is only me. But with www it says not. DNS problem?[/QUOTE] I don't know enough to say what it means, but I'm getting the same thing: with www it's down for all, and without it's just me. I also tried isitdownrightnow.com, and that reports both variations as being down for everyone. |
"He's dead, Jim!"
As Mr Stewart says of Lindsey Graham, "Ahm gettin' the vapuhs!" :wink: |
Checking the fresh satellite images, Barbados is still on the map, no meteorite strike, no tsunami*... so it must be something on the server....
:smile: OTOH, ideas4lease is extremely slow too (but it goes through - aren't they the same server? we still suspect it is a DNS problem)** *edit: we also checked for whales, sharks, etc; that country is so big that we were afraid it got swallowed by a [URL="https://en.wikipedia.org/wiki/Kraken"]kraken[/URL] or something... :razz: ** edit 2: no, it does not go through, only partially. Clicking on "blog" from the said site comes back with "error: the database cannot be contacted" |
| All times are UTC. |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.