I switched to build 8 in the middle of a double check test. The test finished successfully and matches the first test, but the residue is not displayed in my work results details. Instead, n/a is displayed.
You can also see this on the [URL="https://www.mersenne.org/report_exponent/?exp_lo=46655069&full=1"]exponent status[/URL] page.
[QUOTE=Chuck;506594]I switched to build 8 in the middle of a double check test. The test finished successfully and matches the first test, but the residue is not displayed in my work results details. Instead, n/a is displayed.
You can also see this on the [URL="https://www.mersenne.org/report_exponent/?exp_lo=46655069&full=1"]exponent status[/URL] page.[/QUOTE] I've got Aaron working on it. I didn't think the new JSON results would cause any problems since the old text is also sent to the server. |
[QUOTE=ATH;506558]
[Work thread Jan 17 17:51:17] Iteration: 54083596/88479649, Possible error: round off (0.4366392078) > 0.42188[/QUOTE]
That error is not significant enough to change the crossovers. Your previous reports had several errors, some as high as 0.45+ and 0.46+. BTW, build 8 will not do any round-off checking during a Gerbicz PRP (build 6 did when near the limit of an FFT size).
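For anyone curious why the Gerbicz check makes round-off monitoring unnecessary, here is a toy Python sketch of the underlying identity (my reconstruction of the published check, not Prime95's actual code; the parameters are arbitrary). For the PRP-3 squaring sequence u_{i+1} = u_i^2 mod N with u_0 = 3, keep a running product d of every L-th term; then d_new must equal d_old^(2^L) * u_0 (mod N), so a mis-squared value anywhere in a block is caught with overwhelming probability.

```python
def gerbicz_demo(p=89, L=8, blocks=5):
    """Toy Gerbicz consistency check on the PRP-3 squaring sequence
    for N = 2^p - 1 (illustrative only; not a full PRP test)."""
    N = (1 << p) - 1
    u = 3            # u_0 = 3
    d = u            # checksum d_0 = u_0
    for _ in range(blocks):
        d_old = d
        for _ in range(L):          # one block of L squarings
            u = u * u % N
        d = d * u % N               # d_{k+1} = d_k * u_{(k+1)L}
        # identity: d_{k+1} == d_k^(2^L) * u_0 (mod N) must hold;
        # a hardware error in any squaring breaks it
        if d != pow(d_old, 1 << L, N) * 3 % N:
            raise RuntimeError("Gerbicz check failed: hardware error")
    return True
```

The check costs only one extra multiply per block plus a rare modular exponentiation, which is why it can replace per-iteration round-off checks.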
[QUOTE=Prime95;506603]I've got Aaron working on it. I didn't think the new JSON results would cause any problems since the old text is also sent to the server.[/QUOTE]
I think I have the new JSON results being parsed okay for the history section now. I may be missing the proper parsing for PRP results, but I haven't seen any live examples of those pass through from the new P95 build yet. It may be different from what gpuowl is doing for its JSON, so it may or may not show up correctly. I'll deal with it when the time comes. :smile:
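For reference, pulling the residue out of a JSON result line might look like the sketch below. The field name [c]res64[/c] and the line shapes are my guesses at the schema, not confirmed from the build 8 source; the point is that a missing field should fall back to exactly the "n/a" the report page was showing.

```python
import json

def residue_for_display(json_line):
    """Return the 64-bit residue from one JSON result line, or 'n/a'
    when the field is absent (guessed field name: res64)."""
    rec = json.loads(json_line)
    return rec.get("res64", "n/a")

# hypothetical result lines, shaped loosely like the new output
print(residue_for_display('{"exponent":46655069,"res64":"ABCDEF0123456789"}'))
print(residue_for_display('{"exponent":46655069,"worktype":"DC"}'))  # -> n/a
```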
FixedHardwareUID=1 is not working in 29.5b8, or maybe I am doing it wrong.
I removed FixedHardwareUID=1 and the HardwareGUID= line from prime.txt, then started mprime again for about 1 minute. Then I stopped it, added FixedHardwareUID=1 back to prime.txt, and started it again, but some hours later I got:

[Comm thread Jan 22 03:57:02] Updating computer information on the server
[Comm thread Jan 22 03:57:02] PrimeNet error 33: CPU identity mismatch
[Comm thread Jan 22 03:57:02] CPU identity mismatch: g=481073DBD354B3EA38D6C9286ADA4D03 hg=daf2d7cfa4eefcf9c6f2696915f78d9f wg=
[Comm thread Jan 22 03:57:02] Updating computer information on the server
[Comm thread Jan 22 03:57:02] Exchanging program options with server

and it created a new computer account again for most of my instances. Is it because several instances cannot use the same ComputerGUID= value in local.txt at the same time?
[QUOTE=ATH;506634]The FixedHardwareUID=1 is not working in 29.5b8 or maybe I am doing it wrong.[/QUOTE]
I changed my scripts to try a new way, and it seems to solve the problem.

When my scripts launch a new instance, they install a prime.txt file with only [c]FixedHardwareUID=1[/c]; there is no [c]HardwareGUID=[/c] line and no [c]WindowsGUID=[/c] line. They also install a local.txt file with only [c]ComputerID=c5.large[/c] (for example, or whatever the AWS instance type is); there is no [c]ComputerGUID=[/c] line.

After the new instance launches, mprime itself creates a [c]HardwareGUID=[/c] line in prime.txt and fills in a value, creates a [c]WindowsGUID=[/c] line in prime.txt and leaves it blank (it's Linux, after all), and creates a [c]ComputerGUID=[/c] line in local.txt and fills in a value.

When an old instance terminates and a new instance launches later and takes over the existing working directory, the script never overwrites the prime.txt file, and it only overwrites the local.txt file if the instance type differs from the one in the ComputerID line. The latter only happens if I manually moved the working directory to a different parent directory, for instance if c5.xlarge spot instances temporarily became cheaper than c5.large spot instances.

With this setup, I find it no longer keeps creating new entries in [url]https://www.mersenne.org/cpus/[/url]

This setup differs from what I described in my how-to guide, so I need to update the guide. Unfortunately, AWS also changed their configuration screens, so a bunch of other edits to the guide need to be made as well.
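Concretely, the two seed files described above would look something like this before first launch (my reconstruction from the description; mprime fills in the GUID lines itself on first run):

```
--- prime.txt (as installed by the launch script) ---
FixedHardwareUID=1

--- local.txt (as installed by the launch script) ---
ComputerID=c5.large
```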
[QUOTE=GP2;506654]I changed my scripts to try a new way and it seems to solve the problem.[/QUOTE]
However, this still isn't ideal. A better long-term solution would probably be to switch to AWS Batch and run each exponent as a separate batch job in a container. But then the problem of proliferating CPUs reappears: a new CPU entry would get created for every single exponent you test.

Another problem is that when mprime runs out of worktodo lines, it just idles; a batch job should instead terminate.

It ought to be possible to create a modified mprime that is more cloud-native: have it write its savefiles to S3 buckets instead of to a filesystem, avoid trying to identify which physical machine ran a particular exponent (that doesn't make sense in a world of virtual machines and containers), and so forth.
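Until something like that exists, one way to approximate "terminate when out of work" without modifying mprime is a wrapper that watches worktodo.txt and kills the process when no assignment lines remain. This is only a sketch; the working directory and the exact set of worktodo keywords are assumptions on my part.

```python
import os
import re
import subprocess
import time

WORKDIR = "/var/mprime"   # hypothetical working directory

def has_work(path):
    """True if worktodo.txt still contains an assignment line
    (keyword list is an assumption, not exhaustive)."""
    try:
        with open(path) as f:
            return any(re.match(r"\s*(Test|DoubleCheck|PRP|Pminus1|PFactor)", line)
                       for line in f)
    except FileNotFoundError:
        return False

def run_until_done():
    """Launch mprime and poll; terminate it so the batch job can exit."""
    proc = subprocess.Popen(["./mprime", "-d"], cwd=WORKDIR)
    while has_work(os.path.join(WORKDIR, "worktodo.txt")):
        time.sleep(300)           # check every 5 minutes
    proc.terminate()
```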
[QUOTE=Madpoo;506621]I think I have the new JSON results being parsed okay for the history section now. I may be missing the proper parsing for PRP results but I haven't seen any live examples of those pass through from the new P95 build yet. It may be different than what gpuowl is doing for it's JSON, so it may or may not show up correctly. I'll deal with it when the time comes. :smile:[/QUOTE]
I've got a [URL="https://www.mersenne.org/report_exponent/?exp_lo=79160167&full=1"]PRP double check[/URL] running now.
[QUOTE=Madpoo;506621]I think I have the new JSON results being parsed okay for the history section now. I may be missing the proper parsing for PRP results but I haven't seen any live examples of those pass through from the new P95 build yet. It may be different than what gpuowl is doing for it's JSON, so it may or may not show up correctly. I'll deal with it when the time comes. :smile:[/QUOTE]
This PRPDC will finish in ~25 hours with 29.5b8: [url]https://mersenne.org/M78106811[/url]
It was started with 29.5b6, but I assume that does not matter.
Here are two examples of a PRPCF and a PRPCFDC that were fully run with 29.5b8:
[url]https://mersenne.org/M8786537[/url]
[url]https://mersenne.org/M6915737[/url]
@ATH, GP2: I just changed FixedHardwareUID so that it does not send any HardwareGUID info to the server. This seems to work just fine in my limited testing. I think that will address both of your scenarios.
Just generate a ComputerGUID and use it in as many places as you like.
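Generating such a ComputerGUID is trivial from a script; a minimal sketch, assuming only that the value is a 32-hex-character string like the g=... value in ATH's log:

```python
import uuid

def make_computer_guid():
    """One random 32-hex-character GUID, uppercased to match the log format."""
    return uuid.uuid4().hex.upper()

# generate once, then reuse the same value in every local.txt
guid = make_computer_guid()
print(f"ComputerGUID={guid}")
```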
Started testing 29.5b8 x64 for Windows and noted that PRP and LL worker windows don't indicate what sort of calculation is being done in the worker title bar updates; same behavior as earlier builds, see [URL]https://www.mersenneforum.org/showpost.php?p=505790&postcount=158[/URL]
It's really handy to have the computation type displayed when it's P-1 or ECM, and indicating PRP or LL would be a very welcome addition.