Was there any change in handling Affinity between 29.4 and 29.6?
[code] [Worker #1] Affinity=4,6 [Worker #2] Affinity=0,2 [/code] Those settings do not seem to work in 29.6 as they did in 29.4; CPU cores are chosen randomly and change during calculation.
[QUOTE=Cruelty;510084]"Small FFTs" 12h stress test passed - I'd suggest lowering Max FFT size from 586K to maybe 128K,[/QUOTE]
Is this for 20 torture threads on 13.75MB L3 cache?
[QUOTE=Cruelty;510108]Was there any change in handling Affinity between 29.4 and 29.6?[/quote]
Not to my knowledge. [quote] [code] [Worker #1] Affinity=4,6 [Worker #2] Affinity=0,2 [/code] Those settings do not seem to work in 29.6 as they did in 29.4; CPU cores are chosen randomly and change during calculation.[/QUOTE] 29.6 is linked with a newer version of hwloc. Is it reporting an accurate description of your architecture in results.bench.txt?
[QUOTE=Prime95;510122]Is this for 20 torture threads on 13.75MB L3 cache?[/QUOTE]Yes
[quote]29.6 is linked with a newer version of hwloc. Is it reporting an accurate description of your architecture in results.bench.txt?[/quote] I don't see such a file in the prime95 directory :no: CPU information in prime95 seems accurate, though.
[QUOTE=Cruelty;510141]Yes
I don't see such a file in the prime95 directory :no: CPU information in prime95 seems accurate, though.[/QUOTE] You get results.bench.txt whenever you do any benchmark. Just start one and abort it.
[QUOTE=Prime95;510122]Is this for 20 torture threads on 13.75MB L3 cache?[/QUOTE]
Something is off. A small torture test running in L3 cache should use at most 13.75MB / 20 threads = 704KB per test. Each FFT word is an 8-byte double, so the max FFT size should be under 88K.
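As a sanity check, the cache-sizing arithmetic above can be reproduced in a few lines of Python. The 13.75MB and 20-thread figures come from the posts above; everything else is just unit conversion:

```python
# Each torture-test thread gets an equal share of the L3 cache, and each
# FFT word is an 8-byte double, so the per-thread FFT size is bounded by:
L3_BYTES = 13.75 * 1024 * 1024   # 13.75 MB shared L3 cache
THREADS = 20                     # torture-test threads
BYTES_PER_WORD = 8               # one double per FFT word

per_thread_bytes = L3_BYTES / THREADS              # share of L3 per test
max_fft_words = per_thread_bytes / BYTES_PER_WORD  # upper bound on FFT size

print(per_thread_bytes / 1024)   # 704.0 (KB per test)
print(max_fft_words / 1024)      # 88.0  (K words, i.e. max FFT size ~88K)
```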
1 Attachment(s)
[QUOTE=Prime95;510148]You get results.bench.txt whenever you do any benchmark. Just start one and abort it.[/QUOTE] Here you go.
[QUOTE=Cruelty;510108]Was there any change in handling Affinity between 29.4 and 29.6?
[code] [Worker #1] Affinity=4,6 [Worker #2] Affinity=0,2 [/code] Those settings do not seem to work in 29.6 as they did in 29.4; CPU cores are chosen randomly and change during calculation.[/QUOTE] So 29.6 should bind worker #1 to PU#4 and PU#6 (which are logical CPUs on two different cores) as reported in results.bench.txt. Are you getting messages on screen saying which logical CPU each worker is being bound to?
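For readers following the PU numbering: on a typical 2-way hyperthreaded chip, Windows and hwloc usually enumerate SMT siblings adjacently, so PUs 2k and 2k+1 share physical core k. A minimal sketch under that assumption (the numbering convention is an assumption about this machine, not something confirmed in the thread):

```python
def core_of(pu: int, threads_per_core: int = 2) -> int:
    """Physical core index for a logical CPU (PU), assuming SMT siblings
    are numbered adjacently (PU 0,1 -> core 0; PU 2,3 -> core 1; ...)."""
    return pu // threads_per_core

# Worker #1's Affinity=4,6 lands on two distinct cores under this numbering:
print(core_of(4), core_of(6))  # 2 3
# Worker #2's Affinity=0,2 likewise:
print(core_of(0), core_of(2))  # 0 1
```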
2 Attachment(s)
[QUOTE=Prime95;510178]So 29.6 should bind worker #1 to PU#4 and PU#6 (which are logical CPUs on two different cores) as reported in results.bench.txt. Are you getting messages on screen saying what logical CPU each worker is being bound to?[/QUOTE]
Yes, but that's not what actually happens when I watch the graphs in Task Manager.
[QUOTE=Cruelty;510188]Yes, but that's not what actually happens when I watch the graphs in Task Manager.[/QUOTE]
Hmmm. I looked at the code and see no problems (though that is far from conclusive!). Are there any Windows system tools that tell you whether a process's threads are bound to specific logical CPUs? I'll try replicating on my non-hyperthreaded quad-core.
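One programmatic way to inspect affinity (a sketch, not a Prime95 feature): Python's `os.sched_getaffinity` reports the logical CPUs a process may run on, but it is POSIX-only and process-level rather than per-thread. On Windows, a third-party package such as psutil (`Process.cpu_affinity()`) or Sysinternals Process Explorer would be the closer equivalents:

```python
import os

def current_affinity():
    """Set of logical CPUs the current process may run on, or None where
    the POSIX call is unavailable (e.g. Windows)."""
    if hasattr(os, "sched_getaffinity"):
        return os.sched_getaffinity(0)  # POSIX-only (e.g. Linux)
    # On Windows, psutil.Process().cpu_affinity() offers similar info.
    return None

if __name__ == "__main__":
    print(current_affinity())
```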
[QUOTE=harlee;509885]Madpoo,
It appears that JSON isn't displaying PRP-C results the same way as non-JSON - known factors are not being shown. Please see [URL="https://www.mersenne.org/M5342101"]M5342101[/URL] and [URL="https://www.mersenne.org/M7039433"]M7039433[/URL] and the PrimeNet Most Recent Results page (sort by Type). Even the entry on my Account Results Details page is different:[/QUOTE] Fixed. The latest builds have the known factors in an array in the JSON data... not sure how the previous P95 builds would have included the known factors if there were more than one - maybe just as a comma-delimited single value.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.