The output I got when I re-ran it just now was
[CODE]Starting M24036583 fft length = 1835008
Iteration 10000 M( 24036583 )C, 0xcbdef38a0bdc4f00, n = 1835008, CUDALucas v2.03 err = 0.0002 (0:35 real, 3.4479 ms/iter, ETA 23:00:17)
This residue is correct.
Starting M25964951 fft length = 1310720
iteration = 22 < 1000 && err = 0.319336 >= 0.25, increasing n from 1310720
Starting M25964951 fft length = 1572864
iteration = 501 < 1000 && err = 0.5 >= 0.25, increasing n from 1572864
Starting M25964951 fft length = 1835008
iteration = 9901 >= 1000 && err = 0.5 >= 0.35, fft length = 1835008, writing checkpoint file (because -t is enabled) and exiting.

C:\Users\Patrik Johansson\Documents\CUDALucas>[/CODE]
and
[CODE]C:\Users\Patrik Johansson\Documents\CUDALucas>CUDALucas-2.03-cuda4.0-sm_20-x86-64.exe -t worktodo.txt
Starting M33271093 fft length = 1835008
iteration = 533 < 1000 && err = 0.5 >= 0.25, increasing n from 1835008
Starting M33271093 fft length = 1966080
iteration = 11 < 1000 && err = 0.499999 >= 0.25, increasing n from 1966080
Starting M33271093 fft length = 2097152
iteration = 1201 >= 1000 && err = 0.5 >= 0.35, fft length = 2097152, writing checkpoint file (because -t is enabled) and exiting.

C:\Users\Patrik Johansson\Documents\CUDALucas>[/CODE]
but I think I have seen different output as well.

Another point worth making: before I started using -t (check round-off error on all iterations), I saw (by comparing with two later matching runs) that the residues were already wrong a few tens of thousands of iterations before CUDALucas detected the rounding error.
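For readers unfamiliar with what the err values above are guarding: CUDALucas computes an exact integer recurrence (the Lucas-Lehmer test) using floating-point FFT multiplication, and the round-off check is how it detects when the FFT length is too small for the exponent. A minimal sketch of the underlying integer recurrence in Python — exact arithmetic, so it has no round-off error at all, but it is far too slow for real exponents and is only meant to show what the residue in the log means:

```python
def is_mersenne_prime(p):
    """Lucas-Lehmer test for an odd prime p:
    M(p) = 2^p - 1 is prime iff s_(p-2) == 0 (mod M(p)),
    where s_0 = 4 and s_(k+1) = s_k^2 - 2.
    CUDALucas runs this same recurrence with floating-point FFTs,
    which is why it must watch the round-off error each iteration."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    # The low 64 bits of the final s are the hex "residue" printed in
    # the log; a zero residue means M(p) is prime.
    return s == 0
```

A double check succeeds when two independent runs produce the same residue, which is why residues that silently go wrong tens of thousands of iterations before the error is detected are a real problem even though the run would still finish.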
[QUOTE=patrik;313047]I think we have identical cards: GV-N570OC-13I V2.0
The error in CUDALucas is not reproducible and happens at different iterations. Also, in the self-test it fails at different exponents (but most often at M20996011).[/QUOTE] That's not good :razz: I would say it's a card problem, except that kladner has been testing his own 570 for a while now, and everything except CUDALucas works great on it. I haven't known what to say to him, and I still don't know what to say to you, sorry :razz:
Successful CL -r run on GTX 570!
1 Attachment(s)
OMFG! :shock:
I thought that I had tried everything, but it seems I had not. I downclocked the GPU to nVidia stock (732 MHz) and the memory to 1900 MHz. While it still threw an error on a test run of a DC (because -t is DISabled!?), it continued on a restart. It then completed on -r. See attached. This has never happened before. I am going to try again with memory underclocked to 1800. Results anon. This seems to back LaurV's hunch that memory is the culprit. Too bad I can't do an elimination test as on a motherboard, by removing parts of the RAM and testing each piece.

EDIT: Completed -r at the factory GPU OC of 781 MHz, with RAM at 1800. The 570 is reporting a beginning ETA of 22.5 hrs on M27361157 vs ~37 hrs on the GTX 460 in the same general range. While this makes me feel rather foolish for not having tried these settings before :redface:, it is very gratifying to see it actually work. :grin:

EDIT2: This is puzzling. -r just completed with factory settings of 781 GPU, 1900 RAM. Perhaps this is due to cooler conditions, though the readouts aren't that different from warmer days. Of course, I can't monitor the VRAM temps, so that might be involved. If I put this card to DC work in CuLu I will probably go with the RAM underclocked to 1800, just to be a bit safer.

EDIT3: Stop the presses! I just realized that I did not Apply the 1900 RAM setting in Afterburner. I'll have to run it again.

EDIT4: It craps out in less than a minute at 1900. It seems that this is the answer.
[QUOTE=kladner;313723]I downclocked the GPU to nVidia stock (732 MHz) and the memory to 1900 MHz. ... EDIT4: It craps out in less than a minute at 1900. It seems that this is the answer.[/QUOTE] Glad you got it figured out! I run all my cards at 1600; my MSI cards come at 1600 from the factory, but I have to OC the others a bit to get there.
[QUOTE=flashjh;313811]Glad you got it figured out! I run all my cards at 1600, though my MSI cards come @ 1600 from the factory, I have to OC the others a bit to get that.[/QUOTE]
nVidia spec for the 570 is 732 MHz for the GPU and 1900 for the RAM. Gigabyte runs this card at 780/1900, but 1900 doesn't cut it. I have it at 780/1750. It seemed OK at 1800, but I backed off a hair more just for margin. It has run overnight without problem and should complete ~0230 UTC, 10/7/12. We'll see if it matches the first LL.

What's funny is that I can push the GTX 460 much harder without issues. It has turned out one DC after another running at 823 MHz GPU, 2000 RAM. nVidia spec is 675/1800; Gigabyte's is 715/1800. Still, in the 27M range the 460 runs at ~58% of the speed of the 570: 4.86 ms/iter vs 2.83 ms/iter.

ATM, I have both running CL, plus 4 P-1s on the CPU, with 2 cores unassigned. I'm waiting to clear the current CL assignments (which are due within a few minutes of each other) before I decide whether to leave the 570 on CuLu. That would mean putting 2x or 3x mfaktc on the 460 and adjusting the P-1s if necessary. 3 and 4 P-1 instances are "good" numbers, in that they stay in relative balance between S1 and S2 with 2 or 3 HighMemWorkers respectively.
I just completed a successful double-check of M33273391 on my GPU with its memory downclocked to 1800 MHz. This seems to be the solution for this GPU.
The main problem for me was that I had never underclocked (or overclocked) a GPU before, so I had to learn that nVidia had some tools that I could download.
[QUOTE=patrik;314706]I just completed a successful double-check of M33273391 on my GPU with its memory downclocked to 1800 MHz. ...[/QUOTE] MSI Afterburner is freely available and provides monitoring and control functions. I should have mentioned that. [url]http://event.msi.com/vga/afterburner/download.htm[/url]
Does anyone know what would be involved in writing a small program (or modifying CUDALucas) to automatically assign exponents?
What I mean is: instead of getting LL-DC work from GPU72, putting it into P95 to get it assigned to me, and then moving it to the CUDALucas worktodo, just have a small program in the same directory. Any time the worktodo file gets updated, you double-click it and it updates your PrimeNet account with your exponent and an expected completion date, maybe 30 days or a settable date. I'm thinking even a Perl script might be enough; I just don't know how to do it.
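The worktodo-reading half of such a script is the easy part; the PrimeNet registration half is the open question. As a starting point, here is a sketch in Python of parsing the standard GIMPS Test=/DoubleCheck= entry format and turning the per-iteration timing CUDALucas prints into a completion estimate. The AID below is a made-up placeholder, and the 2.83 ms/iter figure is just the example number quoted earlier in the thread:

```python
import re
from datetime import timedelta

# Matches standard GIMPS worktodo entries of the form
#   DoubleCheck=<32-hex-digit AID>,<exponent>,<TF bits>,<P-1 done flag>
ENTRY = re.compile(r'^(Test|DoubleCheck)=([0-9A-Fa-f]{32}),(\d+),(\d+),(\d+)')

def parse_worktodo(lines):
    """Extract (work type, assignment ID, exponent) from worktodo lines."""
    out = []
    for line in lines:
        m = ENTRY.match(line.strip())
        if m:
            out.append({'type': m.group(1), 'aid': m.group(2),
                        'exponent': int(m.group(3))})
    return out

def eta(exponent, ms_per_iter):
    """An LL test of M(exponent) runs exponent - 2 iterations, so the
    ms/iter figure from CUDALucas gives the remaining-time estimate."""
    return timedelta(milliseconds=(exponent - 2) * ms_per_iter)

# Example with a placeholder AID:
work = parse_worktodo(
    ['DoubleCheck=0123456789ABCDEF0123456789ABCDEF,27361157,71,1'])
print(work[0]['exponent'], eta(work[0]['exponent'], 2.83))
```

The actual call that registers the exponent and expected completion date with PrimeNet is deliberately not sketched here, since exactly what the server requires in such requests is the part nobody in the thread is sure about.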
[QUOTE=flashjh;318815]Does anyone know what would be involved in writing a small program (or modifying CUDALucas) to automatically assign exponents? ...[/QUOTE] I don't do it quite that way, but such a feature would be great. I had been using a separate instance of P95 to get LL-DC via the proxy, then moving the assignments to the CuLu worktodo.txt. Right now, I'm short-handed in the hardware department, so I haven't been doing CL.
[QUOTE=kladner;318817]I had been using a separate instance of P95 to get LL-DC via the proxy, then moving the assignments to CuLu worktodo.txt. ...[/QUOTE]
Really, using one directory for both CuLu and P95, one could share a worktodo.txt file. Any time P95 was fired up, it would update all the exponents and delete the completed ones. I just don't want to end up in the same boat, getting DC assignments I don't want. It seems like it would be easier to use a program or script designed just for CuLu processing. If nothing surfaces, I'll probably go with P95 in the directory and just make sure the computer name is different from the main worker, to hopefully avoid any trouble.
[QUOTE=flashjh;318815]Does anyone know what would be involved in writing a small program (or modifying CUDALucas) to automatically assign exponents? ...[/QUOTE] The problem is that you'd need a fairly decent understanding of the PrimeNet protocols. That isn't so hard; more importantly, I'm not sure how strict PrimeNet is about the information in account/work requests. When I looked at the protocol, it [i]seemed[/i] to require a lot of things, such as computer name, GUID, hardware ID, basic computer info, and various other miscellanea on top of the exponent/AID. Once that is understood, writing such a script would be easy.

I would ask chalsall for his advice; AFAIK, he, Prime95, Scott Kurowski and perhaps Christenson (whom I haven't seen for a year) are the only ones who are more than a bit knowledgeable about the protocol. (There is a public description of the API, but like I said, I don't know how strictly it must be adhered to, which is why I would ask chalsall.)

[QUOTE=flashjh;318818]Really, using one directory for CuLu and P95, one could share a worktodo.txt file. ...[/QUOTE] That's a pretty good idea in the meantime.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.