mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Software (https://www.mersenneforum.org/forumdisplay.php?f=10)
-   -   Prime95 version 29.6/29.7/29.8 (https://www.mersenneforum.org/showthread.php?t=24094)

Prime95 2019-02-18 03:56

Prime95 version 29.6/29.7/29.8
 
Prime95 version 29.8 build 6 is available.

From whatsnew.txt:

[code]1) Support added for AVX-512 FFTs.
2) FMA3 FFTs now have slightly higher FFT crossover points. Soft crossovers are
no longer used by default. See undoc.txt.
3) Torture test dialog box options now based on cache sizes. Options for performing
a weaker torture test are available. Torture tests that use all RAM are now more
stressful. In-place vs. not in-place memory accesses now displayed on screen.
On machines with more than 4GB of memory, blend defaults to 1/16th of RAM.
4) Add & subtract operations for AVX-512 FFTs are now multithreaded. This should
improve performance for P-1 and ECM when using multiple threads.
5) Benchmark results are now written to results.bench.txt.
6) JSON results are now available for all work performed. JSON results are
written to results.json.txt.
7) PRP tests with Gerbicz error checking are more immune to hardware errors.
[/code]

This is a release candidate version. If no serious bugs are reported this will become the official release on the mersenne.org download page.

There are no known serious bugs. There should be no problem with replacing your existing prime95/mprime executable with the new one (the Gerbicz PRP changes will in rare situations cause a save file to be discarded and a previous save file used).

Thanks to all who helped by reporting bugs during the AVX-512 development in the 29.5 thread.


Download links:
Windows 64-bit: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.win64.zip[/URL]
Linux 64-bit: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.linux64.tar.gz[/URL]
Mac OS X: [URL]https://mersenne.org/ftp_root/gimps/p95v298b7.MacOSX.tar.gz[/URL]
Mac OS X no GUI: [URL]https://mersenne.org/ftp_root/gimps/p95v298b7.MacOSX.noGUI.tar.gz[/URL]
Windows 32-bit: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.win32.zip[/URL]
Linux 32-bit: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.linux32.tar.gz[/URL]
FreeBSD11 64-bit: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.FreeBSD11-64.tar.gz[/URL]
Source: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.source.zip[/URL]
Windows 64-bit service: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.win64.service.zip[/URL]
Windows 32-bit service: [URL]https://mersenne.org/ftp_root/gimps/p95v298b6.win32.service.zip[/URL]

Prime95 2019-02-18 03:57

Placeholder for bugs reported and bugs fixed.

1) Gerbicz PRP tests of (2^N+1)/factors fail in the last few iterations. Fixed in build 2.
2) Bulldozer users got a "no available FFT lengths" error trying to torture test with FMA3 or AVX FFTs. Fixed in build 2. Note that for the first time ever Bulldozer users can torture test using FMA3 and AVX FFTs. However, SSE2 FFTs may be more stressful.
3) x87 FFTs broken. Fixed in future build 2.
4) In weak torture test, Windows and Mac users are allowed to select options that cause "no FFT lengths" found errors. Fixed in build 6.
5) ScaleOutputFrequency=1 did not work when testing tiny numbers on AVX-512 machines. Fixed in build 3.
6) Older architectures, such as Pentium-4, may not select the best FFT implementation from gwnum.txt benchmark data. Fixed in build 3.
7) Older architectures, such as Pentium M or 4, do not display L1/L2 cache sizes. Fixed in build 3.
8) Torture test dialog box incorrectly calculated which FFTs will fit in the L3 or L4 cache. Fixed in build 7.
9) AVX-512 FFT fails on 18347731*109^1536-1 (a "rational" FFT -- no FFT weights). Fixed in 29.7.
10) FMA3 FFTs for exponents from 595.7M to 922.6M failed. Fixed in 29.7.
11) Some SkylakeX CPUs would say "no FFT sizes available" for default small torture test. Upper bound on FFT size changed to use L2+L3 cache size since the L3 cache is not inclusive. Fixed in 29.7.
12) The ability to stop individual torture test threads was not working. Running a benchmark without first stopping the torture test could cause a hang or crash. Fixed in 29.7.
13) Windows only!!! Zero-padded AVX-512 FFTs are not working. Fixed in 29.8.
14) If an error occurs writing worktodo.txt, then prime95 will hang at some later point in time. Fixed in 29.8 build 2.
15) If an error occurs deleting a worktodo.txt entry, the worker stopped computing. In 29.8 build 3, the worker will ignore the error and move on to the next entry in worktodo.txt.
16) Incorrect default value for torture test type in Linux menuing system. Fixed in 29.8 build 4.
17) URL of Mersenne Wiki was wrong. Fixed in 29.8 build 4.
18) Mojave dark mode not supported properly. Relinked with the latest Xcode. Fixed in 29.8 build 4, but untested on pre-Mojave OS.
19) Setting NumCPUs in local.txt to less than the number of L2 caches can cause the Torture Test dialog box to crash. Fixed in 29.8 build 4.
20) Benchmark incorrectly stated that results are written to results.txt. Changed message to say results.bench.txt. Fixed in 29.8 build 5.
21) For PRP work, TF depth was not being written to worktodo.txt unless P-1 was required. This caused Test/Status to underestimate the chance that the PRP test would find a new Mersenne prime. Fixed in 29.8 build 5.
22) P-1 is frequently missing factors of numbers of the form 2*3^n+1. Fixed in 29.8 build 6.
23) In a throughput benchmark on machines with multiple L3 caches, some combinations of number-of-cores / number-of-workers would raise errors setting affinity or crash. Fixed in 29.8 build 6.

kriesel 2019-02-18 17:26

[QUOTE=Prime95;508841]Prime95 version 29.6 build 1 is available.

From whatsnew.txt:

[code]1) Support added for AVX-512 FFTs.
2) FMA3 FFTs now have slightly higher FFT crossover points. Soft crossovers are
no longer used by default. See undoc.txt.
3) Mprime now creates a pid file.
4) Torture test dialog box options now based on cache sizes. Options for performing
a weaker torture test are available. Torture tests that use all RAM are now more
stressful. In-place vs. not in-place memory accesses now displayed on screen.
5) Add & subtract operations for AVX-512 FFTs are now multithreaded. This should
improve performance for P-1 and ECM when using multiple threads.
6) Benchmark results are now written to results.bench.txt.
7) JSON results are now available for all work performed. JSON results are
written to results.json.txt.
8) Default memory available for prime95 changed from 8MB to 1/16th of RAM.
9) PRP tests with Gerbicz error checking are more immune to hardware errors.
[/code]Note #3 does not appear in the Windows version, so the numbering of the rest of the items differs by one. Maybe put OS-specific items at the end of the whatsnew lists. If they were identified separately you might get away with a single version of the file for multiple OSes.

kriesel 2019-02-18 18:51

skipped interim residues; primality test type
 
Welcome changes and a few small things that might be cleaned up.
The following brief test was run on an HP G72 notebook (i3-370M, Win7 x64).

0) no benchmarking tested

1) The skipped interim residue case first reported for V29.4 remains in V29.6b1. See [URL]https://www.mersenneforum.org/showpost.php?p=508229&postcount=427[/URL] Note that iterations 11 and 17 below have no interim residue output.
In prime.txt, [CODE]OutputIterations=1
InterimResidues=1
[/CODE]Worker window contains[CODE][Feb 18 12:01] Worker starting
[Feb 18 12:01] Setting affinity to run worker on CPU core #1
[Feb 18 12:01] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:01] Starting primality test of M82589933 using FFT length 4480K, Pass1=896, Pass2=5K, clm=4, 2 threads
[Feb 18 12:01] Iteration: 3 / 82589933 [0.00%], ms/iter: 59.833, ETA: 57d 04:40
[Feb 18 12:01] M82589933 interim LL residue 000000000000000E at iteration 3
[Feb 18 12:01] Iteration: 4 / 82589933 [0.00%], ms/iter: 62.653, ETA: 59d 21:22
[Feb 18 12:01] M82589933 interim LL residue 00000000000000C2 at iteration 4
[Feb 18 12:01] Iteration: 5 / 82589933 [0.00%], ms/iter: 85.160, ETA: 81d 09:43
[Feb 18 12:01] M82589933 interim LL residue 0000000000009302 at iteration 5
[Feb 18 12:01] Iteration: 6 / 82589933 [0.00%], ms/iter: 63.056, ETA: 60d 06:36
[Feb 18 12:01] M82589933 interim LL residue 00000000546B4C02 at iteration 6
[Feb 18 12:01] Iteration: 7 / 82589933 [0.00%], ms/iter: 67.032, ETA: 64d 01:49
[Feb 18 12:01] M82589933 interim LL residue 1BD696D9F03D3002 at iteration 7
[Feb 18 12:01] Iteration: 8 / 82589933 [0.00%], ms/iter: 62.132, ETA: 59d 09:24
[Feb 18 12:01] M82589933 interim LL residue 8CC88407A9F4C002 at iteration 8
[Feb 18 12:01] Iteration: 9 / 82589933 [0.00%], ms/iter: 63.589, ETA: 60d 18:50
[Feb 18 12:01] M82589933 interim LL residue 55599F9D37D30002 at iteration 9
[Feb 18 12:01] Iteration: 10 / 82589933 [0.00%], ms/iter: 63.583, ETA: 60d 18:42
[Feb 18 12:01] M82589933 interim LL residue F460D65DDF4C0002 at iteration 10
[Feb 18 12:01] Iteration: 11 / 82589933 [0.00%], ms/iter: 65.185, ETA: 62d 07:26
[Feb 18 12:01] Stopping primality test of M82589933 at iteration 11 [0.00%]
[Feb 18 12:01] Worker stopped.
[Feb 18 12:01] Worker starting
[Feb 18 12:01] Setting affinity to run worker on CPU core #1
[Feb 18 12:01] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:01] Running Jacobi error check. Passed. Time: 1.159 sec.
[Feb 18 12:01] Resuming primality test of M82589933 using FFT length 4480K, Pass1=896, Pass2=5K, clm=4, 2 threads
[Feb 18 12:01] Iteration: 12 / 82589933 [0.00%].
[Feb 18 12:01] M82589933 interim LL residue 9BDB491DF4C00002 at iteration 12
[Feb 18 12:01] Iteration: 13 / 82589933 [0.00%], ms/iter: 31.265, ETA: 29d 21:15
[Feb 18 12:01] M82589933 interim LL residue 4CEBB477D3000002 at iteration 13
[Feb 18 12:01] Iteration: 14 / 82589933 [0.00%], ms/iter: 60.381, ETA: 57d 17:14
[Feb 18 12:01] M82589933 interim LL residue 0B97D1DF4C000002 at iteration 14
[Feb 18 12:01] Iteration: 15 / 82589933 [0.00%], ms/iter: 91.954, ETA: 87d 21:34
[Feb 18 12:01] M82589933 interim LL residue ACEF477D30000002 at iteration 15
[Feb 18 12:01] Iteration: 16 / 82589933 [0.00%], ms/iter: 61.444, ETA: 58d 17:37
[Feb 18 12:01] M82589933 interim LL residue 9CBD1DF4C0000002 at iteration 16
[Feb 18 12:01] Iteration: 17 / 82589933 [0.00%], ms/iter: 65.462, ETA: 62d 13:48
[Feb 18 12:01] Stopping primality test of M82589933 at iteration 17 [0.00%]
[Feb 18 12:01] Worker stopped.
[Feb 18 12:02] Worker starting
[Feb 18 12:02] Setting affinity to run worker on CPU core #1
[Feb 18 12:02] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:02] Running Jacobi error check. Passed. Time: 1.752 sec.
[Feb 18 12:02] Resuming primality test of M82589933 using FFT length 4480K, Pass1=896, Pass2=5K, clm=4, 2 threads
[Feb 18 12:02] Iteration: 18 / 82589933 [0.00%].
[Feb 18 12:02] M82589933 interim LL residue 0BD1DF4C00000002 at iteration 18
[Feb 18 12:02] Iteration: 19 / 82589933 [0.00%], ms/iter: 34.795, ETA: 33d 06:14
[Feb 18 12:02] M82589933 interim LL residue 2F477D3000000002 at iteration 19
[Feb 18 12:02] Iteration: 20 / 82589933 [0.00%], ms/iter: 68.893, ETA: 65d 20:30
[Feb 18 12:02] M82589933 interim LL residue BD1DF4C000000002 at iteration 20
[Feb 18 12:02] Iteration: 21 / 82589933 [0.00%], ms/iter: 64.226, ETA: 61d 09:27
[Feb 18 12:02] M82589933 interim LL residue F477D30000000002 at iteration 21
[Feb 18 12:02] Iteration: 22 / 82589933 [0.00%], ms/iter: 65.461, ETA: 62d 13:47
[Feb 18 12:02] M82589933 interim LL residue D1DF4C0000000002 at iteration 22
[Feb 18 12:02] Iteration: 23 / 82589933 [0.00%], ms/iter: 66.030, ETA: 63d 02:50
[Feb 18 12:02] Stopping primality test of M82589933 at iteration 23 [0.00%]
[Feb 18 12:02] Worker stopped.
[/CODE]2) This run indicates LL in the worker title bar, a nice addition to distinguish it from PRP3. Similarly, a PRP test under way indicates x.xx% of PRP (exponent) in the worker title bar. Interim residue lines in the worker window also indicate LL or PRP; the format is approximately as follows (see the parsing sketch at the end of this post):
Mexponent interim LL residue DEADBEEF12340002 at iteration n

3) Status counts both LL and PRP3 assignment lines in worktodo.txt toward total exponents to be tested, and presumably toward odds of finding a prime.

4) This worktodo.txt content briefly showed PRP in the worker title bar and then went back to computing LL iterations, despite the worktodo order. [CODE]PRP=N/A,1,2,82589933,-1
Test=N/A,82589933,76,1[/CODE]Are the save files named the same for LL and PRP3? Worker window content:[CODE][Feb 18 12:10] Worker starting
[Feb 18 12:10] Setting affinity to run worker on CPU core #1
[Feb 18 12:10] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:10] Error reading intermediate file: p82589933
[Feb 18 12:10] Renaming p82589933 to p82589933.bad1
[Feb 18 12:10] Trying backup intermediate file: p82589933.bu
[Feb 18 12:10] Error reading intermediate file: p82589933.bu
[Feb 18 12:10] Renaming p82589933.bu to p82589933.bad2
[Feb 18 12:10] Trying backup intermediate file: p82589933.bu2
[Feb 18 12:10] Error reading intermediate file: p82589933.bu2
[Feb 18 12:10] Renaming p82589933.bu2 to p82589933.bad3
[Feb 18 12:10] All intermediate files bad. Temporarily abandoning work unit.
[Feb 18 12:10] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:10] Trying backup intermediate file: p82589933.bad3
[Feb 18 12:10] Running Jacobi error check. Passed. Time: 1.200 sec.
[Feb 18 12:10] Resuming primality test of M82589933 using FFT length 4480K, Pass1=896, Pass2=5K, clm=4, 2 threads
[Feb 18 12:10] Iteration: 12 / 82589933 [0.00%].
[Feb 18 12:10] M82589933 interim LL residue 9BDB491DF4C00002 at iteration 12
[Feb 18 12:10] Iteration: 13 / 82589933 [0.00%], ms/iter: 30.680, ETA: 29d 07:50
[Feb 18 12:10] M82589933 interim LL residue 4CEBB477D3000002 at iteration 13
[Feb 18 12:10] Iteration: 14 / 82589933 [0.00%], ms/iter: 63.538, ETA: 60d 17:39
[Feb 18 12:10] M82589933 interim LL residue 0B97D1DF4C000002 at iteration 14
[Feb 18 12:10] Iteration: 15 / 82589933 [0.00%], ms/iter: 61.414, ETA: 58d 16:56
[Feb 18 12:10] M82589933 interim LL residue ACEF477D30000002 at iteration 15
[Feb 18 12:10] Iteration: 16 / 82589933 [0.00%], ms/iter: 66.542, ETA: 63d 14:35
[Feb 18 12:10] M82589933 interim LL residue 9CBD1DF4C0000002 at iteration 16
[Feb 18 12:10] Iteration: 17 / 82589933 [0.00%], ms/iter: 71.187, ETA: 68d 01:08
[Feb 18 12:10] M82589933 interim LL residue 02F477D300000002 at iteration 17
[Feb 18 12:10] Iteration: 18 / 82589933 [0.00%], ms/iter: 62.478, ETA: 59d 17:20
[Feb 18 12:10] M82589933 interim LL residue 0BD1DF4C00000002 at iteration 18
[Feb 18 12:10] Iteration: 19 / 82589933 [0.00%], ms/iter: 60.849, ETA: 58d 03:59
[Feb 18 12:10] M82589933 interim LL residue 2F477D3000000002 at iteration 19
[Feb 18 12:10] Iteration: 20 / 82589933 [0.00%], ms/iter: 64.582, ETA: 61d 17:37
[Feb 18 12:10] M82589933 interim LL residue BD1DF4C000000002 at iteration 20
[Feb 18 12:10] Iteration: 21 / 82589933 [0.00%], ms/iter: 62.924, ETA: 60d 03:34
[Feb 18 12:10] M82589933 interim LL residue F477D30000000002 at iteration 21
[Feb 18 12:10] Iteration: 22 / 82589933 [0.00%], ms/iter: 63.104, ETA: 60d 07:43
[Feb 18 12:10] M82589933 interim LL residue D1DF4C0000000002 at iteration 22
[Feb 18 12:10] Iteration: 23 / 82589933 [0.00%], ms/iter: 82.314, ETA: 78d 16:24
[Feb 18 12:10] M82589933 interim LL residue 477D300000000002 at iteration 23
[Feb 18 12:10] Iteration: 24 / 82589933 [0.00%], ms/iter: 67.763, ETA: 64d 18:35
[Feb 18 12:10] M82589933 interim LL residue 1DF4C00000000002 at iteration 24
[Feb 18 12:10] Iteration: 25 / 82589933 [0.00%], ms/iter: 60.561, ETA: 57d 21:22
[Feb 18 12:10] M82589933 interim LL residue 77D3000000000002 at iteration 25
[Feb 18 12:10] Iteration: 26 / 82589933 [0.00%], ms/iter: 65.237, ETA: 62d 08:38
[Feb 18 12:10] M82589933 interim LL residue DF4C000000000002 at iteration 26
[Feb 18 12:10] Iteration: 27 / 82589933 [0.00%], ms/iter: 63.078, ETA: 60d 07:06
[Feb 18 12:10] Stopping primality test of M82589933 at iteration 27 [0.00%]
[Feb 18 12:10] Worker stopped.
[/CODE]There were a few intermediate save files labeled bad in the working folder, presumably because they were from the LL run while the first worktodo item was PRP for the same exponent. This case should be very rare; it may occur by accident or in some testing scenarios.

The worker was stopped and all intermediate files for the exponent were moved to a separate folder. On resume, PRP3 started from iteration 1.
[CODE][Feb 18 12:19] Worker starting
[Feb 18 12:19] Setting affinity to run worker on CPU core #1
[Feb 18 12:19] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:19] Starting Gerbicz error-checking PRP test of M82589933 using FFT length 4480K, Pass1=896, Pass2=5K, clm=4, 2 threads
[Feb 18 12:19] Iteration: 1 / 82589933 [0.00%], ms/iter: 1332.191, ETA: 1273d 10:40
[Feb 18 12:19] Iteration: 2 / 82589933 [0.00%], ms/iter: 212.367, ETA: 203d 00:02
[Feb 18 12:19] M82589933 interim PRP residue 000000000000001B at iteration 1
[Feb 18 12:19] Iteration: 3 / 82589933 [0.00%], ms/iter: 213.918, ETA: 204d 11:38
[Feb 18 12:19] M82589933 interim PRP residue 000000000000088B at iteration 2
[Feb 18 12:19] Iteration: 4 / 82589933 [0.00%], ms/iter: 246.031, ETA: 235d 04:21
[Feb 18 12:19] M82589933 interim PRP residue 0000000000DAF26B at iteration 3
[Feb 18 12:19] Iteration: 5 / 82589933 [0.00%], ms/iter: 210.429, ETA: 201d 03:34
[Feb 18 12:19] M82589933 interim PRP residue 000231C54B5F6A2B at iteration 4
[Feb 18 12:19] Iteration: 6 / 82589933 [0.00%], ms/iter: 226.473, ETA: 216d 11:39
[Feb 18 12:19] M82589933 interim PRP residue D310B7D97DD4E9AB at iteration 5
[Feb 18 12:19] Iteration: 7 / 82589933 [0.00%], ms/iter: 205.084, ETA: 196d 00:58
[Feb 18 12:20] M82589933 interim PRP residue 2AC0B180838228AB at iteration 6
[Feb 18 12:20] Iteration: 8 / 82589933 [0.00%], ms/iter: 244.553, ETA: 233d 18:26
[Feb 18 12:20] M82589933 interim PRP residue 9B5ACA650265A6AB at iteration 7
[Feb 18 12:20] Iteration: 9 / 82589933 [0.00%], ms/iter: 211.244, ETA: 201d 22:16
[Feb 18 12:20] M82589933 interim PRP residue B47759B0D250A2AB at iteration 8
[Feb 18 12:20] Iteration: 10 / 82589933 [0.00%], ms/iter: 250.999, ETA: 239d 22:19
[Feb 18 12:20] M82589933 interim PRP residue DF36E033DAB69AAB at iteration 9
[Feb 18 12:20] Iteration: 11 / 82589933 [0.00%], ms/iter: 215.197, ETA: 205d 16:58
[Feb 18 12:20] M82589933 interim PRP residue C94525688DC28AAB at iteration 10
[Feb 18 12:20] Iteration: 12 / 82589933 [0.00%], ms/iter: 218.688, ETA: 209d 01:04
[Feb 18 12:20] M82589933 interim PRP residue FC7BEC947CDA6AAB at iteration 11
[Feb 18 12:20] Iteration: 13 / 82589933 [0.00%], ms/iter: 213.036, ETA: 203d 15:24
[Feb 18 12:20] Stopping PRP test of M82589933 at iteration 13 [0.00%]
[Feb 18 12:20] Worker stopped.
[/CODE]5) Note the seemingly out-of-order output above for PRP3; the interim residue for iteration 11 appears after the iteration timing for iteration 12, for example. This behavior was also observed in v29.4. See also [URL]https://www.mersenneforum.org/showpost.php?p=508241&postcount=428[/URL]

6) The skipped interim residue observed for LL also occurs for PRP3. Note there's no interim residue given for iteration 12 below. It's not known whether this skipping also occurs for interim intervals greater than 1, but it seems likely it does, with lower probability as the interval is increased.[CODE][Feb 18 12:20] M82589933 interim PRP residue C94525688DC28AAB at iteration 10
[Feb 18 12:20] Iteration: 12 / 82589933 [0.00%], ms/iter: 218.688, ETA: 209d 01:04
[Feb 18 12:20] M82589933 interim PRP residue FC7BEC947CDA6AAB at iteration 11
[Feb 18 12:20] Iteration: 13 / 82589933 [0.00%], ms/iter: 213.036, ETA: 203d 15:24
[Feb 18 12:20] Stopping PRP test of M82589933 at iteration 13 [0.00%]
[Feb 18 12:20] Worker stopped.
[Feb 18 12:29] Worker starting
[Feb 18 12:29] Setting affinity to run worker on CPU core #1
[Feb 18 12:29] Setting affinity to run helper thread 1 on CPU core #2
[Feb 18 12:29] Resuming Gerbicz error-checking PRP test of M82589933 using FFT length 4480K, Pass1=896, Pass2=5K, clm=4, 2 threads
[Feb 18 12:29] Iteration: 14 / 82589933 [0.00%].
[Feb 18 12:29] M82589933 interim PRP residue 628CE0A31369AAAB at iteration 13
[Feb 18 12:29] Iteration: 15 / 82589933 [0.00%], ms/iter: 121.541, ETA: 116d 04:21
[Feb 18 12:29] M82589933 interim PRP residue B332521E7C28AAAB at iteration 14[/CODE]
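
As an aside, for anyone scripting analysis of these worker logs, a minimal parsing sketch (Python; the log file name here is just an example, and the line format is assumed to match item 2 above exactly) could flag gaps like the missing iteration 11 and 17 residues:
[CODE]import re

# Matches lines of the form shown above, e.g.
#   [Feb 18 12:01] M82589933 interim LL residue F460D65DDF4C0002 at iteration 10
RESIDUE_RE = re.compile(
    r"M(?P<exp>\d+) interim (?P<kind>LL|PRP) residue "
    r"(?P<res64>[0-9A-Fa-f]{16}) at iteration (?P<iter>\d+)"
)

def interim_residues(path):
    """Yield (exponent, kind, res64, iteration) for each interim residue line."""
    with open(path) as f:
        for line in f:
            m = RESIDUE_RE.search(line)
            if m:
                yield (int(m.group("exp")), m.group("kind"),
                       m.group("res64"), int(m.group("iter")))

# Report gaps such as the missing residues at iterations 11 and 17 above.
last = {}
for exp, kind, res64, it in interim_residues("worker1.log"):  # example file name
    key = (exp, kind)
    if key in last and it != last[key] + 1:
        print(f"M{exp}: no {kind} residue between iterations {last[key]} and {it}")
    last[key] = it
[/CODE]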

GP2 2019-02-18 20:50

I am doing PRP tests of Wagstaff exponents.

[STRIKE]Several dozen machines resumed 29.5 savefiles without a problem, but one has a problem that seems similar to the one reported by Simon.[/STRIKE]

All of the exponents on all of the machines seem to have a problem similar to the one reported by Simon.

I tried moving away the 29.5 savefiles and starting the same exponent from scratch with 29.6, but the same problem occurred.

The Wagstaff exponent in question is 9081307 ([B]edit[/B]: it's not just this exponent). I am testing another exponent now to see if the problem is the exponent or the machine.


The exponent passes the Gerbicz error check at iteration p−1 (9081306), but then fails somehow in the final processing. It continues in an infinite error loop and does not attempt to resume from earlier savefiles. Only when I interrupt the program with SIGINT does it start to process earlier savefiles, but then it terminates right after, obviously.

When I manually delete the more recent savefiles and try to resume from older savefiles (at iteration 9 million and at 8 million), the same problem happens: success until iteration p−1 and then the same infinite error loop.


[CODE]
PRP=1,2,9081307,1,"3"
[/CODE]

[CODE]
WorkerThreads=1
CoresPerTest=2
HyperthreadLL=1
[/CODE]

[CODE]
PRPBase=3
PRPResidueType=5
[/CODE]

results.txt:

[CODE]
[Mon Feb 18 05:44:04 2019]
ERROR: Comparing PRP double-check values failed. Rolling back to iteration 9081306.
Continuing from last save file.
ERROR: Comparing PRP double-check values failed. Rolling back to iteration 9081306.
Continuing from last save file.
ERROR: Comparing PRP double-check values failed. Rolling back to iteration 9081306.
Continuing from last save file.
ERROR: Comparing PRP double-check values failed. Rolling back to iteration 9081306.
Continuing from last save file.
...
(about 80,000 lines like this, and growing rapidly)
[/CODE]

I sent a SIGINT to stop the program. Then the following lines appeared at the bottom of the results.txt file, after all the tens of thousands of ERROR lines:

[CODE]
Error reading intermediate file: p9081307
Renaming p9081307 to p9081307.bad1
Trying backup intermediate file: p9081307.bu
[/CODE]

The rename succeeded, but obviously the program terminated right after, due to the SIGINT.


I tried restarting from each savefile (at iterations 8 million, 9 million, and higher), and all of them eventually gave the same error.

For instance, resuming from iteration 9 million (the .bu3 file):

[CODE]
$ ./mprime -d
[Main thread Feb 18 18:23] Mersenne number primality test program version 29.6
[Main thread Feb 18 18:23] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 2x1 MB, L3 cache size: 25344 KB
[Main thread Feb 18 18:23] Starting worker.
[Work thread Feb 18 18:23] Worker starting
[Work thread Feb 18 18:23] Setting affinity to run worker on CPU core #1
[Work thread Feb 18 18:23] Setting affinity to run helper thread 1 on CPU core #1
[Work thread Feb 18 18:23] Setting affinity to run helper thread 3 on CPU core #2
[Work thread Feb 18 18:23] Setting affinity to run helper thread 2 on CPU core #2
[Work thread Feb 18 18:23] Trying backup intermediate file: p9081307.bu3
[Work thread Feb 18 18:23] Resuming Gerbicz error-checking PRP test of (2^9081307+1)/3 using all-complex AVX-512 FFT length 480K, Pass1=128, Pass2=3840, clm=2, 4 threads
[Work thread Feb 18 18:23] Iteration: 9000001 / 9081307 [99.10%].
[Work thread Feb 18 18:23] Iteration: 9010000 / 9081307 [99.21%], ms/iter: 0.712, ETA: 00:00:50
[Work thread Feb 18 18:23] Iteration: 9020000 / 9081307 [99.32%], ms/iter: 0.710, ETA: 00:00:43
[Work thread Feb 18 18:23] Iteration: 9030000 / 9081307 [99.43%], ms/iter: 0.710, ETA: 00:00:36
[Work thread Feb 18 18:23] Iteration: 9040000 / 9081307 [99.54%], ms/iter: 0.712, ETA: 00:00:29
[Work thread Feb 18 18:23] Iteration: 9050000 / 9081307 [99.65%], ms/iter: 0.710, ETA: 00:00:22
[Work thread Feb 18 18:23] Iteration: 9060000 / 9081307 [99.76%], ms/iter: 0.710, ETA: 00:00:15
[Work thread Feb 18 18:23] Iteration: 9070000 / 9081307 [99.87%], ms/iter: 0.711, ETA: 00:00:08
[Work thread Feb 18 18:23] Iteration: 9080000 / 9081307 [99.98%], ms/iter: 0.711, ETA: 00:00:00
[Work thread Feb 18 18:24] Gerbicz error check passed at iteration 9081225.
[Work thread Feb 18 18:24] Gerbicz error check passed at iteration 9081306.
[Work thread Feb 18 18:24] ERROR: Comparing PRP double-check values failed. Rolling back to iteration 9081306.
[Work thread Feb 18 18:24] Continuing from last save file.
[Work thread Feb 18 18:24] Setting affinity to run helper thread 1 on CPU core #1
[Work thread Feb 18 18:24] Setting affinity to run helper thread 3 on CPU core #2
[Work thread Feb 18 18:24] Setting affinity to run helper thread 2 on CPU core #2
[Work thread Feb 18 18:24] Trying backup intermediate file: p9081307.bu3
[Work thread Feb 18 18:24] Resuming Gerbicz error-checking PRP test of (2^9081307+1)/3 using all-complex AVX-512 FFT length 480K, Pass1=128, Pass2=3840, clm=2, 4 threads
[Work thread Feb 18 18:24] Iteration: 9000001 / 9081307 [99.10%].
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9010000 / 9081307 [99.21%], ms/iter: 0.714, ETA: 00:00:50
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9020000 / 9081307 [99.32%], ms/iter: 0.713, ETA: 00:00:43
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9030000 / 9081307 [99.43%], ms/iter: 0.712, ETA: 00:00:36
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9040000 / 9081307 [99.54%], ms/iter: 0.713, ETA: 00:00:29
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9050000 / 9081307 [99.65%], ms/iter: 0.713, ETA: 00:00:22
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9060000 / 9081307 [99.76%], ms/iter: 0.714, ETA: 00:00:15
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9070000 / 9081307 [99.87%], ms/iter: 0.715, ETA: 00:00:08
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Iteration: 9080000 / 9081307 [99.98%], ms/iter: 0.713, ETA: 00:00:00
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 1 Gerbicz/double-check error.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:24] Gerbicz error check passed at iteration 9081225.
[Work thread Feb 18 18:24] Gerbicz error check passed at iteration 9081306.
[Work thread Feb 18 18:24] ERROR: Comparing PRP double-check values failed. Rolling back to iteration 9081306.
[Work thread Feb 18 18:24] Continuing from last save file.
[Work thread Feb 18 18:24] Setting affinity to run helper thread 1 on CPU core #1
[Work thread Feb 18 18:24] Setting affinity to run helper thread 3 on CPU core #2
[Work thread Feb 18 18:24] Setting affinity to run helper thread 2 on CPU core #2
[Work thread Feb 18 18:24] Trying backup intermediate file: p9081307.bu3
[Work thread Feb 18 18:24] Resuming Gerbicz error-checking PRP test of (2^9081307+1)/3 using all-complex AVX-512 FFT length 480K, Pass1=128, Pass2=3840, clm=2, 4 threads
[Work thread Feb 18 18:24] Iteration: 9000001 / 9081307 [99.10%].
[Work thread Feb 18 18:24] Hardware errors have occurred during the test!
[Work thread Feb 18 18:24] 2 Gerbicz/double-check errors.
[Work thread Feb 18 18:24] Confidence in final result is excellent.
[Work thread Feb 18 18:25] Iteration: 9010000 / 9081307 [99.21%], ms/iter: 0.711, ETA: 00:00:50
[Work thread Feb 18 18:25] Hardware errors have occurred during the test!
[Work thread Feb 18 18:25] 2 Gerbicz/double-check errors.
[Work thread Feb 18 18:25] Confidence in final result is excellent.
[Work thread Feb 18 18:25] Iteration: 9020000 / 9081307 [99.32%], ms/iter: 0.711, ETA: 00:00:43
[Work thread Feb 18 18:25] Hardware errors have occurred during the test!
[Work thread Feb 18 18:25] 2 Gerbicz/double-check errors.
[Work thread Feb 18 18:25] Confidence in final result is excellent.
[Work thread Feb 18 18:25] Iteration: 9030000 / 9081307 [99.43%], ms/iter: 0.711, ETA: 00:00:36
[Work thread Feb 18 18:25] Hardware errors have occurred during the test!
[Work thread Feb 18 18:25] 2 Gerbicz/double-check errors.
[Work thread Feb 18 18:25] Confidence in final result is excellent.
[Main thread Feb 18 18:25] Stopping all worker threads.
[Work thread Feb 18 18:25] Stopping PRP test of (2^9081307+1)/3 at iteration 9036608 [99.50%]
[Work thread Feb 18 18:25] Worker stopped.
[Main thread Feb 18 18:25] Execution halted.
[/CODE]

If you let it continue, eventually it reaches "15 or more Gerbicz/double-check errors."

GP2 2019-02-18 20:52

[STRIKE]PS, if you have any test debug programs, I can try running them on this particular virtual machine.[/STRIKE]

[STRIKE]However it's a spot instance, so it could go away at any time with only two minutes' warning.[/STRIKE]

The problem is occurring for all Wagstaff exponents on all machines.

Prime95 2019-02-18 20:53

[QUOTE=GP2;508888]
For instance, resuming from iteration 9 million (the .bu3 file):[/QUOTE]

Please email this save file (as well as worktodo.txt). Thanks.

Prime95 2019-02-18 20:53

BUG: x87 FFTs are not working in the 32-bit builds.

kriesel 2019-02-18 21:25

Minimum processor, OS version?
 
1 Attachment(s)
Tried the win32 Prime95 29.6b1 on a Pentium 133 running NT 4 SP6a, and got a Dr. Watson illegal-instruction fatal error very early, before the app GUI appeared or any progress files were created. I guess that's expected. It appears the last version I had run successfully on this box was 25.11.

V29.6b1 seems to run on Pentium M on Vista or XP.

kriesel 2019-02-18 21:46

readme update
 
Although the default behavior has been changed to 1/16 of RAM, the readme.txt still says
[QUOTE]If at all in doubt, leave the settings at 8MB.[/QUOTE]

chalsall 2019-02-18 22:05

[QUOTE=kriesel;508898]Although the default behavior has been changed to 1/16 of RAM, the readme.txt still says[/QUOTE]

Please note that except for those who work outside of the nominal range(s), P-1 will have already been done, and so the RAM needed will be small.

Not to say the "readme" shouldn't be updated to be accurate....

kriesel 2019-02-18 23:18

throughput benchmark on the infamous i7-8750H / Win 10
 
V29.6b1 x64, set to "1-3,6" workers, with and without hyperthreading (but not all possibilities), 1024K-32768K FFTs, completed in about 2 hours with no stall.

Prime95 2019-02-19 18:46

Build 2 is out for Linux and Windows 64-bit

harlee 2019-02-19 19:38

I'm doing P-1 testing and I've noticed that the results for the B.S. extension (i.e., E=12) aren't being reported by mersenne.org. Logs show that the B.S. value is being sent to the server.

Prime95 2019-02-19 20:07

[QUOTE=harlee;508942]I'm doing P-1 testing and I've noticed that the results for the B.S. extension (i.e., E=12) aren't being reported by mersenne.org. Logs show that the B.S. value is being sent to the server.[/QUOTE]

Can you give Aaron an exponent to look at? I'm guessing he is parsing the new JSON format to create the web page.

harlee 2019-02-19 20:45

Sorry, forgot to include the exponents as I was being called away. Here are some exponents from two different systems (hardware and OS):

5337463, 5337443, 5337373
18506447, 18506309, 18506269

from prime.log:

Sending result to server: UID: harlee/i5-5250U_1600, M18506447 completed P-1, B1=330000, B2=8745000, E=12, Wh10: <snip>, AID: <snip>

GP2 2019-02-20 00:15

[QUOTE=Prime95;508938]Build 2 is out for Linux and Windows 64-bit[/QUOTE]

The problem I reported for PRP testing of Wagstaff numbers in build 1 is no longer happening.

kriesel 2019-02-20 01:20

[QUOTE=Prime95;508938]Build 2 is out for Linux and Windows 64-bit[/QUOTE]
What does build 2 address?

Prime95 2019-02-20 02:30

[QUOTE=kriesel;508959]What does build 2 address?[/QUOTE]

see [url]https://www.mersenneforum.org/showpost.php?p=508842&postcount=2[/url]

kracker 2019-02-20 03:12

Running the torture test with Small FFTs and only "Disable AVX" checked gives me a "no FFT lengths available" error. (29.6 build 2)

Prime95 2019-02-20 03:51

[QUOTE=kracker;508965]Running the torture test with Small FFTs and only "Disable AVX" checked gives me a "no FFT lengths available" error. (29.6 build 2)[/QUOTE]

I need to know your CPU architecture as reported in the main window at startup. I'll also need cache sizes as reported by the CPU dialog box. How many torture test threads were you running? What were the FFT sizes shown in the greyed area of the torture test dialog box?

Flammo 2019-02-20 22:28

[QUOTE=Prime95;508968]I need to know your CPU architecture as reported in the main window at startup. I'll also need cache sizes as reported by the CPU dialog box. How many torture test threads were you running? What were the FFT sizes shown in the greyed area of the torture test dialog box?[/QUOTE]

Hi, I can reproduce the same issue in much the same way as kracker.

CPU Info:
Intel(R) Xeon(R) CPU E3-1225 v5 @ 3.30GHz
CPU speed: 3183.73 MHz, 4 cores
CPU features: Prefetchw, SSE, SSE2, SSE4, AVX, AVX2, FMA
L1 cache size: 4x32 KB, L2 cache size: 4x256 KB, L3 cache size: 8 MB

To replicate the issue described by kracker, I open Torture Test:
Number of torture tests threads to run: 4
Small FFTs
Torture test settings (area all greyed out, but showing the following):
Min FFT size: 42
Max FFT size: 682
Run FFTs in-place: checked
Memory to use: 0
Time to run each FFT: 3 minutes

Run a weaker torture test:
Disable AVX-512: greyed-out
Disable AVX: checked
Disable AVX2 (fused multiply-add): unchecked
Disable SSE2: unchecked

The error message is "No FFT lengths available in the range specified. Worker stopped."

If I repeat the above by going to the Torture Test menu, I notice Blend mode is now selected; it didn't save my settings (probably by design). If I then reselect Small FFTs without disabling AVX, it works fine, starting at an FMA3 384K FFT length.

Using other torture test types, I find everything works fine with Smallest FFTs when disabling the three available AVX/SSE2 options one at a time.

Disabling AVX with Largest FFTs results in "No FFT lengths available in the range specified". But disabling either AVX2 or SSE2 works fine with the Largest FFTs.

Hope that helps.

Falkentyne 2019-02-21 02:24

[QUOTE=Prime95;508968]I need to know your CPU architecture as reported in the main window at startup. I'll also need cache sizes as reported by the CPU dialog box. How many torture test threads were you running? What were the FFT sizes shown in the greyed area of the torture test dialog box?[/QUOTE]

I can confirm this bug.
i9 9900K.

Bug:
Clicking "Disable AVX" but NOT disable AVX2 on large FFT and on Blend, causes "No FFT's in range" error.

Clicking "disable AVX2 *AND* Disable AVX does not cause the error.
I can only assume (Guessing) that doing this with custom ranges may fail also.

Prime95 2019-02-21 03:07

[QUOTE=Falkentyne;509025]
Clicking "Disable AVX" but NOT disable AVX2 on large FFT and on Blend, causes "No FFT's in range" error.[/QUOTE]

Grrr, I had not contemplated turning off AVX but not AVX2. In fact, using the Linux menus this is impossible to do.

ric 2019-02-21 10:30

2 Attachment(s)
Two minor points with respect to 29.6.2 (mprime, 64-bit):[LIST]
[*]PRP-CF results are written to the JSON file, but no longer appear in results.txt (see attachments 1 & 2) - in full disclosure, I use a different name for the results.txt file, via a specific prime.txt entry
[*]PRP-CF results are parsed by the server (so this observation should probably go to another thread) as if they were "regular" PRP tests (see [URL="https://www.mersenne.org/report_exponent/?exp_lo=20585021&full=1"]M20585021[/URL], where I tested M20585021/factor, as per the json file)
[/LIST]

GP2 2019-02-21 15:35

[QUOTE=ric;509034]PRP-CF results are written to json file, but no longer appear in results.txt[/QUOTE]

This is desirable behavior; it would be redundant otherwise.

PRP-CF results have been output exclusively in JSON format since version 29.4 build 2 in October 2017, so it's not a matter of writing new parsing code, you just need to look for them in a different file.

Factoring results do get written to both files (in different formats). That makes sense, since people looking for non-Mersenne factors that don't get reported to Primenet might have their own code that parses the old format.

[QUOTE]PRP-CF results are parsed by the server (so probably this observation should go to another thread) as if the result were "regular" PRP tests (see [URL="https://www.mersenne.org/report_exponent/?exp_lo=20585021&full=1"]M20585021[/URL], where I tested M20585021/factor, as per json file)
[/QUOTE]

It does get parsed correctly. The log entry in History no longer displays the list of factors, but the result goes into the PRP Cofactor section.

GP2 2019-02-22 19:22

For the very smallest exponents, there is way too much output:

I am using [c]ScaleOutputFrequency=1[/c] in prime.txt but it doesn't help.

In a matter of minutes, the log file grew to 800 MB.

[CODE]
ECM2=1,2,1087,1,43000000,43000000,8700,"3,121398914459,144288053938906556201,59015046142680340633"
[/CODE]

[CODE]
[Main thread Feb 22 18:58] Starting worker.
[Work thread Feb 22 18:58] Worker starting
[Work thread Feb 22 18:58] Using all-complex FMA3 FFT length 64
[Work thread Feb 22 18:58] ECM on 2^1087+1: curve #1 with s=2147430289016201, B1=43000000, B2=43000000
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 3 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 5 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 7 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 11 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 13 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 17 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 19 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 23 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 29 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 31 [0.00%]. Time: 0.000 sec.
[Work thread Feb 22 18:58] 2^1087+1 curve 1 stage 1 at prime 37 [0.00%]. Time: 0.000 sec.
...
[/CODE]


By contrast, for slightly larger exponents and identical prime.txt, I get:

[CODE]
ECM2=1,2,1201,1,43000000,43000000,8700,"3,107117096323,12227544337,1924003,2983620281"
[/CODE]

[CODE]
[Main thread Feb 22 19:15] Starting worker.
[Work thread Feb 22 19:15] Worker starting
[Work thread Feb 22 19:15] Using all-complex FMA3 FFT length 64
[Work thread Feb 22 19:15] ECM on 2^1201+1: curve #1 with s=1747100447905230, B1=43000000, B2=43000000
[Work thread Feb 22 19:16] 2^1201+1 curve 1 stage 1 at prime 3832079 [8.91%]. Time: 12.277 sec.
[Work thread Feb 22 19:16] 2^1201+1 curve 1 stage 1 at prime 7658591 [17.81%]. Time: 12.285 sec.
[Work thread Feb 22 19:16] 2^1201+1 curve 1 stage 1 at prime 11480467 [26.69%]. Time: 12.281 sec.
[Work thread Feb 22 19:16] 2^1201+1 curve 1 stage 1 at prime 15299671 [35.58%]. Time: 12.279 sec.
[Work thread Feb 22 19:17] 2^1201+1 curve 1 stage 1 at prime 19118647 [44.46%]. Time: 12.263 sec.
[Work thread Feb 22 19:17] 2^1201+1 curve 1 stage 1 at prime 22936033 [53.33%]. Time: 12.273 sec.
[Work thread Feb 22 19:17] 2^1201+1 curve 1 stage 1 at prime 26750747 [62.21%]. Time: 12.263 sec.
[Work thread Feb 22 19:17] 2^1201+1 curve 1 stage 1 at prime 30561731 [71.07%]. Time: 12.265 sec.
[Work thread Feb 22 19:17] 2^1201+1 curve 1 stage 1 at prime 34377377 [79.94%]. Time: 12.265 sec.
[Work thread Feb 22 19:18] 2^1201+1 curve 1 stage 1 at prime 38188457 [88.81%]. Time: 12.261 sec.
[Work thread Feb 22 19:18] 2^1201+1 curve 1 stage 1 at prime 41999567 [97.67%]. Time: 12.263 sec.
[Work thread Feb 22 19:18] Stage 1 complete. 1126236177 transforms, 1 modular inverses. Time: 3.220 sec.
[Work thread Feb 22 19:18] Stage 1 GCD complete. Time: 0.000 sec.
[Work thread Feb 22 19:18] ECM on 2^1201+1: curve #2 with s=5405473801738053, B1=43000000, B2=43000000
[Work thread Feb 22 19:18] 2^1201+1 curve 2 stage 1 at prime 3832067 [8.91%]. Time: 12.276 sec.
[/CODE]

and also (note different B1, B2):

[CODE]
ECM2=1,2,2011,1,11000000,11000000,5200,"3,328434864016035321284153784239230639028291"
[/CODE]

[CODE]
[Main thread Feb 22 18:22] Starting worker.
[Work thread Feb 22 18:22] Worker starting
[Work thread Feb 22 18:22] Using all-complex FMA3 FFT length 96
[Work thread Feb 22 18:22] ECM on 2^2011+1: curve #1 with s=491274290942190, B1=11000000, B2=11000000
[Work thread Feb 22 18:23] 2^2011+1 curve 1 stage 1 at prime 3835597 [34.86%]. Time: 16.495 sec.
[Work thread Feb 22 18:23] 2^2011+1 curve 1 stage 1 at prime 7661827 [69.65%]. Time: 16.523 sec.
[Work thread Feb 22 18:23] Stage 1 complete. 287339925 transforms, 1 modular inverses. Time: 14.418 sec.
[Work thread Feb 22 18:23] Stage 1 GCD complete. Time: 0.000 sec.
[Work thread Feb 22 18:23] ECM on 2^2011+1: curve #2 with s=3310755350124869, B1=11000000, B2=11000000
[Work thread Feb 22 18:24] 2^2011+1 curve 2 stage 1 at prime 3835577 [34.86%]. Time: 16.588 sec.
[Work thread Feb 22 18:24] 2^2011+1 curve 2 stage 1 at prime 7661821 [69.65%]. Time: 16.609 sec.
[Work thread Feb 22 18:24] Stage 1 complete. 287339925 transforms, 1 modular inverses. Time: 14.493 sec.
[Work thread Feb 22 18:24] Stage 1 GCD complete. Time: 0.000 sec.
[/CODE]

Prime95 2019-02-22 21:30

[QUOTE=ric;509034][*]PRP-CF results are written to json file, but no longer appear in results.txt [/QUOTE]

See undoc.txt:

[CODE]Setting OutputPrimes=1 will cause prime95 to output a non-JSON message for probable primes to results.txt.
Setting OutputComposites=1 will cause prime95 to output a non-JSON message for composite PRP tests to results.txt.
Setting OutputJSON=0 will cause prime95 to not output the JSON messages to results.json.txt.
[/CODE]
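
So, for example, adding these two lines to prime.txt should restore the old-style prime/composite lines in results.txt while leaving the JSON output in results.json.txt unchanged:

[CODE]OutputPrimes=1
OutputComposites=1
[/CODE]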

Mark Rose 2019-02-22 22:47

How hard would it be to include a benchmark mode that is run from the CLI without any interaction? I think mprime would make a nice addition to the Phoronix Test Suite. It would give us benchmarks on much more hardware.

We'd probably want benchmarks for single-core and worker-per-NUMA-node configurations, testing a single FFT for time, but also picking an FFT size that will stay applicable for years. Maybe 8192K?

It would also be nice if there were a parameter to automatically submit the results to PrimeNet, too.

Madpoo 2019-02-23 06:02

[QUOTE=Prime95;508943]Can you give Aaron an exponent to look at? I'm guessing he is parsing the new JSON format to create the web page.[/QUOTE]

Found it... snippet of the JSON looks like:
[CODE]"b1":330000, "b2":8745000, "brent-suyama":12[/CODE]

I just need to snag that "brent-suyama" value if present and display it as the E.
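
Something along these lines on the parsing side (just a Python sketch for illustration - the actual site code is different, and I'm assuming the key is simply absent when the extension wasn't used):
[CODE]import json

def brent_suyama_e(json_line):
    """Return the Brent-Suyama E value from one results.json.txt entry, or None."""
    result = json.loads(json_line)
    # Assumed: "brent-suyama" only appears when the B-S extension was used.
    return result.get("brent-suyama")

print(brent_suyama_e('{"b1":330000, "b2":8745000, "brent-suyama":12}'))  # -> 12
[/CODE]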

Madpoo 2019-02-24 03:12

[QUOTE=Madpoo;509215]Found it... snippet of the JSON looks like:
[CODE]"b1":330000, "b2":8745000, "brent-suyama":12[/CODE]

I just need to snag that "brent-suyama" value if present and display it as the E.[/QUOTE]

Done... see this for an example:
[URL="https://www.mersenne.org/M5337373"]M5337373[/URL]

harlee 2019-02-24 11:54

Found an issue with the Windows 32-bit version. Noticed that 29.6b1 is running much slower than 29.4b7. Yes, I know I should stop using this system - I'm using it to do P-1 work on smaller exponents with low B1 & B2 bounds.

29.4b7
[CODE][Feb 23 12:22] Using Pentium4 FFT length 288K, Pass1=384, Pass2=768, clm=2[/CODE]

29.6b1
[CODE][Feb 23 20:17] Using Pentium4 FFT length 288K, Pass1=384, Pass2=768, clm=4
[/CODE]

Noticed that 29.6b1 selected a different "clm".

This morning 29.6b1 ran another benchmark. Notice the benchmark was for FFTlen=288K while the P-1 was using FFTlen=320K.

[CODE]

[Feb 24 05:19] Optimal P-1 factoring of M5339599 using up to 2048MB of memory.
[Feb 24 05:19] Assuming no factors below 2^62 and 10 primality tests saved if a factor is found.
[Feb 24 05:19] Optimal bounds are B1=320000, B2=9520000
[Feb 24 05:19] Chance of finding a factor is an estimated 7.47%
[Feb 24 05:19] Using Pentium4 type-0 FFT length 320K, Pass1=320, Pass2=1K, clm=4

<SNIP>

[Feb 24 05:54] M5339599 stage 1 is 43.31% complete. Time: 105.074 sec.
[Feb 24 05:56] Worker stopped while running needed benchmarks.
[Feb 24 05:58] Benchmarks complete, restarting worker.
[Feb 24 05:58] Optimal P-1 factoring of M5339599 using up to 2048MB of memory.
[Feb 24 05:58] Assuming no factors below 2^62 and 10 primality tests saved if a factor is found.
[Feb 24 05:58] Optimal bounds are B1=320000, B2=9520000
[Feb 24 05:58] Chance of finding a factor is an estimated 7.47%
[Feb 24 05:58] Using Pentium4 type-0 FFT length 320K, Pass1=320, Pass2=1K, clm=4

[/CODE]

[CODE]
[Sun Feb 24 05:56:40 2019]
FFTlen=288K, Type=3, Arch=3, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 11.31 ms. Throughput: 88.45 iter/sec.
FFTlen=288K, Type=3, Arch=2, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 11.27 ms. Throughput: 88.71 iter/sec.
FFTlen=288K, Type=3, Arch=1, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 10.07 ms. Throughput: 99.32 iter/sec.
FFTlen=288K, Type=3, Arch=2, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 11.37 ms. Throughput: 87.93 iter/sec.
FFTlen=288K, Type=2, Arch=3, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 11.39 ms. Throughput: 87.77 iter/sec.
FFTlen=288K, Type=2, Arch=2, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 11.37 ms. Throughput: 87.96 iter/sec.
FFTlen=288K, Type=2, Arch=1, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 10.09 ms. Throughput: 99.07 iter/sec.
FFTlen=288K, Type=2, Arch=2, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 11.43 ms. Throughput: 87.48 iter/sec.
[/CODE]
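
For what it's worth, here is a rough Python sketch (the results.bench.txt file name and the exact line format are assumptions based on the output above) that picks out the fastest Arch/clm combination per FFT length, which makes comparing the 29.4b7 and 29.6b1 runs easier:

[CODE]import re

# Matches benchmark lines like:
#   FFTlen=288K, Type=3, Arch=1, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 10.07 ms. Throughput: 99.32 iter/sec.
BENCH_RE = re.compile(
    r"FFTlen=(?P<fft>\S+), Type=(?P<type>\d+), Arch=(?P<arch>\d+), "
    r"Pass1=(?P<p1>\d+), Pass2=(?P<p2>\d+), clm=(?P<clm>\d+).*?"
    r"Throughput: (?P<thr>[\d.]+) iter/sec"
)

def best_per_fft(path):
    """Return {fft_length: (throughput, full_line)} for the fastest config per FFT length."""
    best = {}
    with open(path) as f:
        for line in f:
            m = BENCH_RE.search(line)
            if not m:
                continue
            fft, thr = m.group("fft"), float(m.group("thr"))
            if fft not in best or thr > best[fft][0]:
                best[fft] = (thr, line.strip())
    return best

for fft, (thr, line) in best_per_fft("results.bench.txt").items():
    print(f"{fft}: {thr:.2f} iter/sec  {line}")
[/CODE]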

harlee 2019-02-24 12:18

[QUOTE=Madpoo;509285]Done... see this for an example:
[URL="https://www.mersenne.org/M5337373"]M5337373[/URL][/QUOTE]

Madpoo, please look at [URL="https://www.mersenne.org/M18521411"]M18521411[/URL], as 29.6b1 found and reported a factor but the P-1 bounds are not being shown.

[CODE]
[Sat Feb 23 14:42:47 2019]
{"status":"F", "exponent":18521411, "worktype":"P-1", "factors":"165718901219828172167153", "b1":330000, "b2":8745000, "brent-suyama":12, "fft-length":983040, "security-code":"DFB5B559", "program":{"name":"Prime95", "version":"29.6", "build":1, "port":10}, "timestamp":"2019-02-23 19:42:47", "user":"harlee", "computer":"i5-5250U_1600", "aid":"<deleted>"}
[/CODE]

harlee 2019-02-24 14:32

[QUOTE]
This morning 29.6b1 ran another benchmark. Notice the benchmark was for FFTlen=288K while the P1 was using a FFTlen=320K.
[/QUOTE]

Stopped and restarted Prime95 29.6b1 and now it is reporting a FFTlen=288K.

[CODE]
[Feb 24 14:30] Worker starting
[Feb 24 14:30] Optimal P-1 factoring of M5339707 using up to 2048MB of memory.
[Feb 24 14:30] Assuming no factors below 2^62 and 10 primality tests saved if a factor is found.
[Feb 24 14:30] Optimal bounds are B1=320000, B2=9520000
[Feb 24 14:30] Chance of finding a factor is an estimated 7.47%
[Feb 24 14:30] Using Pentium4 FFT length 288K, Pass1=384, Pass2=768, clm=4
[/CODE]

harlee 2019-02-24 14:48

[CODE]8) Default memory available for prime95 changed from 8MB to 1/16th of RAM.[/CODE]

The 32-bit Windows 29.6b1 version is still defaulting to 8MB of memory. As you can see from my earlier posts, I've allocated 2GB of memory for Prime95 P-1 use.

[CODE][Feb 24 14:40] Worker starting
[Feb 24 14:40] Optimal P-1 factoring of M5339707 using up to 8MB of memory.
[Feb 24 14:40] Assuming no factors below 2^62 and 10 primality tests saved if a factor is found.
[Feb 24 14:40] Optimal bounds are B1=525000, B2=525000
[Feb 24 14:40] Chance of finding a factor is an estimated 4.1%
[Feb 24 14:40] Using Pentium4 type-0 FFT length 320K, Pass1=320, Pass2=1K, clm=4
[/CODE]

Same thing on my MacBook Air with 8GB of memory:

[QUOTE]
[Feb 24 09:51] Optimal P-1 factoring of M18524839 using up to 8MB of memory.
[Feb 24 09:51] Assuming no factors below 2^65 and 3 primality tests saved if a factor is found.
[Feb 24 09:51] Optimal bounds are B1=480000, B2=480000
[Feb 24 09:51] Chance of finding a factor is an estimated 3.4%
[Feb 24 09:51] Using FMA3 FFT length 960K, Pass1=384, Pass2=2560, clm=2
[Feb 24 09:51] Worker stopped.
[Feb 24 09:52] Worker starting
[Feb 24 09:52] Optimal P-1 factoring of M18524839 using up to 4096MB of memory.
[Feb 24 09:52] Assuming no factors below 2^65 and 3 primality tests saved if a factor is found.
[Feb 24 09:52] Optimal bounds are B1=330000, B2=8745000
[Feb 24 09:52] Chance of finding a factor is an estimated 6.4%
[Feb 24 09:52] Using FMA3 FFT length 960K, Pass1=384, Pass2=2560, clm=2
[Feb 24 09:52] M18524839 stage 1 is 0.32% complete.
[/QUOTE]

Prime95 2019-02-24 17:41

[QUOTE=harlee;509323][CODE]8) Default memory available for prime95 changed from 8MB to 1/16th of RAM.[/CODE]

The 32-bit Windows 29.6b1 version is still defaulting to 8MB of memory.[/QUOTE]

[strike]That new default is for new installs on machines with more than 4GB of memory. If you are upgrading prime95 then the maximum amount of RAM is determined from the existing prime.txt.[/strike]

Edit: My bad, the new default is only for the blend torture test. I'm changing whatsnew.txt to reflect this. The options/cpu memory-to-use-for-P-1/ECM remains 8MB by default.
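
Upgraders who want more P-1/ECM memory can still raise it via the Options/CPU dialog; that should correspond to a local.txt entry along the lines of the following (value in MB):

[CODE]Memory=2048
[/CODE]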

Prime95 2019-02-24 18:41

[QUOTE=harlee;509312]Found an issue with the Windows 32-bit version. Noticed that 29.6b1 is running much slower than 29.4b7. [/QUOTE]

I think I'll need gwnum.txt, prime.txt, local.txt and your CPU info to investigate.

harlee 2019-02-24 19:50

sent via e-mail

Madpoo 2019-02-26 06:03

[QUOTE=harlee;509316]Madpoo, please look at [URL="https://www.mersenne.org/M18521411"]M18521411[/URL], as 29.6b1 found and reported a factor but the P-1 bounds are not being shown.

[CODE]
[Sat Feb 23 14:42:47 2019]
{"status":"F", "exponent":18521411, "worktype":"P-1", "factors":"165718901219828172167153", "b1":330000, "b2":8745000, "brent-suyama":12, "fft-length":983040, "security-code":"DFB5B559", "program":{"name":"Prime95", "version":"29.6", "build":1, "port":10}, "timestamp":"2019-02-23 19:42:47", "user":"harlee", "computer":"i5-5250U_1600", "aid":"<deleted>"}
[/CODE][/QUOTE]

I don't remember ever showing the bounds for a "factor found" result in that history section. I could be wrong, but it's not ringing a bell. I'll double-check and if the old style non-JSON was doing it, I'll match that.

Madpoo 2019-02-26 06:50

[QUOTE=Madpoo;509473]I don't remember ever showing the bounds for a "factor found" result in that history section. I could be wrong, but it's not ringing a bell. I'll double-check and if the old style non-JSON was doing it, I'll match that.[/QUOTE]

I was wrong... I just hadn't been including that info since the switch to the JSON result format. :smile:

Fixed now.

M0CZY 2019-02-26 14:39

I am testing 32-bit Linux 29.6b1, and I have noticed that the program is unable to detect my CPU cache sizes.
[CODE]CPU Information:
Intel(R) Pentium(R) M processor 2.26GHz
CPU speed: 2260.72 MHz
CPU features: Prefetch, SSE, SSE2
L1 cache size: unknown, L2 cache size: unknown[/CODE]
I don't know if this is very important in the correct running of the program.

Prime95 2019-02-26 17:11

[QUOTE=M0CZY;509504]I am testing 32-bit Linux 29.6b1, and I have noticed that the program is unable to detect my CPU cache sizes.
I don't know if this is very important in the correct running of the program.[/QUOTE]

In 29.6 I switched to using hwloc to determine cache sizes. Apparently, that library has trouble with older architectures. I've fixed this in build 3 (prime95 falls back to old methods if hwloc cannot detect cache sizes).

kriesel 2019-02-26 17:24

1 Attachment(s)
[QUOTE=M0CZY;509504]I am testing 32-bit Linux 29.6b1, and I have noticed that the program is unable to detect my CPU cache sizes.
[CODE]CPU Information:
Intel(R) Pentium(R) M processor 2.26GHz
CPU speed: 2260.72 MHz
CPU features: Prefetch, SSE, SSE2
L1 cache size: unknown, L2 cache size: unknown[/CODE]I don't know if this is very important in the correct running of the program.[/QUOTE]
Similar occurs in Windows 29.6b1 x32 on a Pentium M. Not an issue in 29.4b7 on the same system as it occurs in 29.6b1. Win XP affected, as is Vista; possibly later OS versions also. Not an issue in 29.6b1 x64 on Win 10 i7-8750H, or 29.6b1 x64 on Win 7 i3-370M, or Vista X64 prime95 v29.6b1 X64 on Core 2 Duo.

Prime95 2019-02-26 23:24

Build 3 is now available. This should fix the reported bugs. Please re-test (some of the reported bugs on older hardware I cannot test here).

harlee 2019-02-27 00:57

[QUOTE=Prime95;509543]Build 3 is now available. This should fix the reported bugs. Please re-test (some of the reported bugs on older hardware I cannot test here).[/QUOTE]

Looks good so far. Below are some test results on my Windows 32-bit system.

[CODE]
[Main thread Feb 27 00:34] Mersenne number primality test program version 29.6
[Main thread Feb 27 00:34] Optimizing for CPU architecture: Pentium 4, L2 cache size: 512 KB, L3 cache size: -1 KB
[/CODE]

Since the P4 doesn't have an L3 cache, it is being reported as -1 KB. No worries here.

[CODE]
[Wed Feb 27 00:21:29 2019]
Compare your results to other computers at http://www.mersenne.org/report_benchmarks
Intel(R) Pentium(R) 4 CPU 2.60GHz
CPU speed: 2593.85 MHz, with hyperthreading
CPU features: Prefetch, SSE, SSE2
L1 cache size: 8 KB, L2 cache size: 512 KB, L3 cache size: -1 KB
L1 cache line size: 64 bytes, L2 cache line size: 128 bytes
Machine topology as determined by hwloc library:
Machine#0 (total=2074292KB, Backend=Windows, hwlocVersion=2.0.3, ProcessName=prime95.exe)
Package (total=2074292KB, CPUVendor=GenuineIntel, CPUFamilyNumber=15, CPUModelNumber=2, CPUModel="Intel(R) Pentium(R) 4 CPU 2.60GHz", CPUStepping=9)
Core (cpuset: 0x00000003)
PU#0 (cpuset: 0x00000001)
PU#1 (cpuset: 0x00000002)
Prime95 32-bit version 29.6, RdtscTiming=1
FFTlen=288K, Type=3, Arch=3, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 15.51 ms. Throughput: 64.48 iter/sec.
FFTlen=288K, Type=3, Arch=2, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 15.37 ms. Throughput: 65.06 iter/sec.
FFTlen=288K, Type=3, Arch=1, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 14.22 ms. Throughput: 70.32 iter/sec.
FFTlen=288K, Type=3, Arch=2, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 15.65 ms. Throughput: 63.91 iter/sec.
FFTlen=288K, Type=2, Arch=3, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 15.70 ms. Throughput: 63.70 iter/sec.
FFTlen=288K, Type=2, Arch=2, Pass1=384, Pass2=768, clm=4 (1 core, 1 worker): 15.55 ms. Throughput: 64.31 iter/sec.
FFTlen=288K, Type=2, Arch=1, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 14.66 ms. Throughput: 68.21 iter/sec.
FFTlen=288K, Type=2, Arch=2, Pass1=384, Pass2=768, clm=2 (1 core, 1 worker): 15.77 ms. Throughput: 63.39 iter/sec.
[/CODE]

I can provide the benchmark results for 29.4b7 if you want to compare the results. Looks like build 3 is reading the gwnum.txt file properly. Question: how do we know that the correct Type / Arch is being selected?
[CODE]
[Feb 27 00:34] Using Pentium4 FFT length 288K, Pass1=384, Pass2=768, clm=2
[/CODE]

Prime95 2019-02-27 01:10

[QUOTE=harlee;509547]Question, how do we know that the correct Type / Arch is being selected?
[CODE]
[Feb 27 00:34] Using Pentium4 FFT length 288K, Pass1=384, Pass2=768, clm=2
[/CODE][/QUOTE]

I don't think you can tell (other than if the timings are way off).

kriesel 2019-02-27 01:13

[QUOTE=Prime95;509543]Build 3 is now available. This should fix the reported bugs. Please re-test (some of the reported bugs on older hardware I cannot test here).[/QUOTE]
Prime95 v29.6b3 on Vista, Pentium M 750. "[Main thread Feb 26 18:23] Mersenne number primality test program version 29.6
[Main thread Feb 26 18:23] Optimizing for CPU architecture: Pentium M, L2 cache size: 2 MB, L3 cache size: -1 KB" CPU-Z indicates no L3 cache on this system.

The skipping of a residue at stop/continue described in post 4 is still occurring in build 3; note that the iteration #71 interim residue is missing in the following.

[CODE][Feb 26 18:38] Iteration: 68 / 82589933 [0.00%], ms/iter: 443.897, ETA: 424d 07:43
[Feb 26 18:38] M82589933 interim LL residue CCCA452E1C116AB5 at iteration 68
[Feb 26 18:38] Iteration: 69 / 82589933 [0.00%], ms/iter: 416.372, ETA: 398d 00:14
[Feb 26 18:38] M82589933 interim LL residue 43F6A14DA66096A3 at iteration 69
[Feb 26 18:38] Iteration: 70 / 82589933 [0.00%], ms/iter: 406.014, ETA: 388d 02:37
[Feb 26 18:38] M82589933 interim LL residue 7FC9987DAFA68A81 at iteration 70
[Feb 26 18:38] Iteration: 71 / 82589933 [0.00%], ms/iter: 409.558, ETA: 391d 11:56
[Feb 26 18:38] Stopping primality test of M82589933 at iteration 71 [0.00%]
[Feb 26 18:38] Worker stopped.
[Feb 26 18:39] Worker starting
[Feb 26 18:39] Running Jacobi error check. Passed. Time: 259.729 sec.
[Feb 26 18:43] Resuming primality test of M82589933 using Pentium4 FFT length 4480K, Pass1=896, Pass2=5K, clm=4
[Feb 26 18:43] Iteration: 72 / 82589933 [0.00%].
[Feb 26 18:43] M82589933 interim LL residue 9E38150B427A99E1 at iteration 72
[Feb 26 18:43] Iteration: 73 / 82589933 [0.00%], ms/iter: 208.759, ETA: 199d 13:15
[Feb 26 18:43] M82589933 interim LL residue 3260B712A16C7199 at iteration 73
[Feb 26 18:43] Iteration: 74 / 82589933 [0.00%], ms/iter: 409.600, ETA: 391d 12:52
[Feb 26 18:43] M82589933 interim LL residue BE64D87AE6907EE7 at iteration 74
[/CODE]The offset appearance of PRP interim residues, and the skipped interim residue at stop/start of a PRP test, also remain. Here, the interim residue output for iteration 36 is skipped.[CODE][Feb 26 18:58] Iteration: 34 / 1257787 [0.00%], ms/iter: 4.300, ETA: 01:30:08
[Feb 26 18:58] M1257787 interim PRP residue 199770DE73DBE343 at iteration 33
[Feb 26 18:58] Iteration: 35 / 1257787 [0.00%], ms/iter: 3.852, ETA: 01:20:45
[Feb 26 18:58] M1257787 interim PRP residue BFCE96FBA995F5E1 at iteration 34
[Feb 26 18:58] Iteration: 36 / 1257787 [0.00%], ms/iter: 4.170, ETA: 01:27:24
[Feb 26 18:58] M1257787 interim PRP residue 59EFF1E40063D039 at iteration 35
[Feb 26 18:58] Iteration: 37 / 1257787 [0.00%], ms/iter: 3.848, ETA: 01:20:39
[Feb 26 18:58] Stopping PRP test of M1257787 at iteration 37 [0.00%]
[Feb 26 18:58] Worker stopped.
[Feb 26 18:58] Worker starting
[Feb 26 18:58] Resuming Gerbicz error-checking PRP test of M1257787 using Pentium4 FFT length 64K, Pass1=256, Pass2=256, clm=4
[Feb 26 18:58] Iteration: 38 / 1257787 [0.00%].
[Feb 26 18:58] M1257787 interim PRP residue FE5C44613154E578 at iteration 37
[Feb 26 18:58] Iteration: 39 / 1257787 [0.00%], ms/iter: 2.156, ETA: 00:45:11
[Feb 26 18:58] M1257787 interim PRP residue 794A82BDC4619CF4 at iteration 38
[Feb 26 18:58] Iteration: 40 / 1257787 [0.00%], ms/iter: 3.850, ETA: 01:20:42
[Feb 26 18:58] M1257787 interim PRP residue 61DA20FFA80D2AE1 at iteration 39
[/CODE]The application crash on launch documented in post 9 is reproduced in build 3 also.

Prime95 2019-02-27 01:14

[QUOTE=harlee;509547]L3 cache it is being reported as -1 KB.[/QUOTE]

Fixed in build 4 (if there is one).

M0CZY 2019-02-27 09:40

32-bit Linux 29.6b3 detects my CPU cache a bit better than 29.6b1.
[CODE]CPU Information:
Intel(R) Pentium(R) M processor 2.26GHz
CPU speed: 1063.79 MHz
CPU features: Prefetch, SSE, SSE2
L1 cache size: 32 KB, L2 cache size: 2 MB, L3 cache size: -1 KB[/CODE]
My CPU doesn't actually have an L3 cache, and it seems to report the CPU speed incorrectly!

Mark Rose 2019-02-27 19:05

[QUOTE=M0CZY;509574]32-bit Linux 29.6b3 detects my CPU cache a bit better than 29.6b1.
[CODE]CPU Information:
Intel(R) Pentium(R) M processor 2.26GHz
CPU speed: 1063.79 MHz
CPU features: Prefetch, SSE, SSE2
L1 cache size: 32 KB, L2 cache size: 2 MB, L3 cache size: -1 KB[/CODE]
My CPU doesn't actually have an L3 cache, and it seems to report the CPU speed incorrectly![/QUOTE]

It's not a 2.26 GHz processor? The reported speed is probably the actual clock it was running at when measured.

Flammo 2019-02-27 21:32

In build 3, I can't load the torture test at all. A dialog box says 'An unsupported operation was attempted'.

This is the same PC I previously reported torture test issues on in this thread with build 2.

If I load a new installation of Prime95 and click 'just stress testing', then this same dialog box appears.

harlee 2019-02-27 22:07

Running 29.6b3 on a MacBook Air. Selected Torture Test and then Prime95 closed without updating the save file. After the second attempt, a Problem Report for Prime95 window popped up. I'll forward the info to Prime95 via e-mail.

harlee 2019-02-28 00:38

[QUOTE=harlee;509635]Running 29.6b3 on a MacBook Air. Selected Torture Test and then Prime95 closed without updating the save file. After the second attempt, a Problem Report for Prime95 window popped up. I'll forward the info to Prime95 via e-mail.[/QUOTE]

Never mind - my bad!!! I thought I had upgraded to 29.6b3 but was still running b1. Testing the Torture Test now and it is working properly so far.

kriesel 2019-02-28 03:19

1 Attachment(s)
[QUOTE=Prime95;509543]Build 3 is now available. This should fix the reported bugs. Please re-test (some of the reported bugs on older hardware I cannot test here).[/QUOTE]
Prime95 v29.6b3 x32 on Win XP and PentiumII-400, not primenet connected:

Comm window shows it taking exception to the same exponent I have currently running P-1 on a primenet connected system. Corresponding worktodo entry is

Test=N/A,90094027,75,1
(same form as for the 77263057 entry that runs without error). Perhaps FFT lengths required for p>77300000 are unsupported on ancient processors because of the huge ETA.[CODE][Main thread Feb 27 17:51] Mersenne number primality test program version 29.6
[Main thread Feb 27 17:51] Error: Worktodo.txt file contained bad LL exponent: 90094027
[Main thread Feb 27 17:51] Illegal line in worktodo.txt file: Test=90094027,75,1
[Main thread Feb 27 17:51] Optimizing for CPU architecture: Pre-SSE2, L2 cache size: 512 KB, L3 cache size: -1 KB
[/CODE]Worker window[CODE][Feb 27 17:53] Worker starting
[Feb 27 17:54] Starting primality test of M77263057 using x87 FFT length 4M, Pass1=1K, Pass2=4K, clm=1
[Feb 27 17:54] Iteration: 3 / 77263057 [0.00%], ms/iter: 4190.434, ETA: 3747d 06:56
[Feb 27 17:54] M77263057 interim LL residue 000000000000000E at iteration 3
[Feb 27 17:54] Iteration: 4 / 77263057 [0.00%], ms/iter: 3689.181, ETA: 3299d 01:02
[Feb 27 17:54] M77263057 interim LL residue 00000000000000C2 at iteration 4
[Feb 27 17:54] Iteration: 5 / 77263057 [0.00%], ms/iter: 3756.399, ETA: 3359d 03:41
[Feb 27 17:55] M77263057 interim LL residue 0000000000009302 at iteration 5
[Feb 27 17:55] Iteration: 6 / 77263057 [0.00%], ms/iter: 3739.550, ETA: 3344d 02:04
[Feb 27 17:55] M77263057 interim LL residue 00000000546B4C02 at iteration 6
[Feb 27 17:55] Iteration: 7 / 77263057 [0.00%], ms/iter: 3770.378, ETA: 3371d 15:41
[Feb 27 17:55] M77263057 interim LL residue 1BD696D9F03D3002 at iteration 7
[Feb 27 17:55] Iteration: 8 / 77263057 [0.00%], ms/iter: 3703.744, ETA: 3312d 01:36
[Feb 27 17:55] M77263057 interim LL residue 8CC88407A9F4C002 at iteration 8
[Feb 27 17:55] Iteration: 9 / 77263057 [0.00%], ms/iter: 3762.832, ETA: 3364d 21:45
[Feb 27 17:55] M77263057 interim LL residue 55599F9D37D30002 at iteration 9
[Feb 27 17:56] Iteration: 10 / 77263057 [0.00%], ms/iter: 3883.107, ETA: 3472d 11:04
[Feb 27 17:56] M77263057 interim LL residue F460D65DDF4C0002 at iteration 10
[/CODE]Taskmgr says prime95 is getting 95% of the cpu during these iterations.

Jacobi check timings were interesting; the restart at iteration 30 took 26 minutes, a far cry from 13 seconds at iteration 12. I switched from InterimResidues=1 to 10 at iteration 30. Iteration 30's interim residue was skipped, but 31 and 32 were output. Iteration times improved to 3.0 to 3.5 seconds/iter.

A try on a P3-450 running Win98 crashed almost immediately on launch, though it ran long enough to show some cache info first. Since this is quite an old system, it could be a hardware reliability issue, or maybe it's the old OS. It did not get as far as the P2-400 running XP did (no LL iterations of M77263057 from the worktodo.txt). It only crashes if there's a worktodo file.

Prime95 2019-02-28 03:38

[QUOTE=kriesel;509650]Prime95 v29.6b3 x32 on Win XP and PentiumII-400, not primenet connected:

Comm window shows it taking exception to the same exponent I have currently running P-1 on a primenet connected system. Corresponding worktodo entry is

Test=N/A,90094027,75,1
(same form as for the 77263057 entry that runs without error). Perhaps FFT lengths required for p>77300000 are unsupported on ancient processors because of the huge ETA.[/QUOTE]

Yes, x87 FFTs only support exponents up to about 79.3 million.

bbb120 2019-02-28 07:41

Prime95 can only run ECM on numbers of the form k*b^n+c,
but I want to use it to factor any integer!
How can I do this?

Flammo 2019-02-28 10:47

On another PC (Windows 10, Prime95 x64) with build 3, neither Manual Communication nor Unreserve Exponent is selectable in the menu. They're both greyed out.

Falkentyne 2019-02-28 11:01

[QUOTE=Prime95;509652]Yes, x87 FFTs only support exponents up to about 79.3 million.[/QUOTE]

Can't run beta 3 on a 9900K
Same error as the above poster: "an unsupported operation was attempted" when clicking on the stress test. Can't even open the stress test window.
Clean folder installation.

pepi37 2019-02-28 12:24

[QUOTE=bbb120;509662]Prime95 can only run ECM on numbers of the form k*b^n+c,
but I want to use it to factor any integer!
How can I do this?[/QUOTE]

Pminus1=5,4534567,1,45000,450000

Prime95 2019-02-28 16:36

[QUOTE=bbb120;509662]Prime95 can only run ECM on numbers of the form k*b^n+c,
but I want to use it to factor any integer!
How can I do this?[/QUOTE]

Use GMP-ECM
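(GMP-ECM is a standalone command-line tool; in a typical build it reads the number, or an expression for it, on standard input and takes the B1 bound as an argument, for example echo "10^71-1" | ecm 1e6.)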

Prime95 2019-02-28 18:15

[QUOTE=Falkentyne;509670]Can't run beta 3 on a 9900K
Same error as the above poster: "an unsupported operation was attempted" when clicking on the stress test. Can't even open the stress test window.
Clean folder installation.[/QUOTE]

Sorry. Testing on Windows 10 is not easy for me. I usually test on Windows Vista 32-bit and if that works, I assume the 64-bit Windows version will also work.

Anyway, I found the problem (the torture test dialog box is different in Windows 32 and Windows 64). Please try build 4.

Prime95 2019-02-28 18:16

[QUOTE=Flammo;509669]On another PC (Windows 10, Prime95 x64) with build 3, neither Manual Communication nor Unreserve Exponent is selectable in the menu. They're both greyed out.[/QUOTE]

That will happen if UsePrimenet=0 is set in prime.txt.

Uncwilly 2019-03-01 05:33

Unrelated posts have been moved here: [url]https://mersenneforum.org/showthread.php?t=24122[/url]

Flammo 2019-03-02 06:38

[QUOTE=Prime95;509692]That will happen if UsePrimenet=0 is set in prime.txt.[/QUOTE]
Thanks, I had accidentally loaded it from a backup copy I made just prior to copying the new build over. You're right, it's working fine. :smile:

Cruelty 2019-03-02 14:05

3 Attachment(s)
I am unable to perform any stress test on an i9 7900X using AVX-512. I am using the latest build 4 on Windows 10 Pro x64. After disabling this instruction set ("run a weaker torture test") everything is fine.
I am running this CPU at a fixed 3.5 GHz; the RAM is running in a quad-channel configuration at 2666 with stock voltage. The CPU cooler is a Corsair H115i PRO.
The same applies to another CPU, an i7 7820X, on a different platform.

After launching the "Small FFTs" test, the program freezes with "worker starting" messages (see attached). CPU utilization is at 100%, but nothing else happens in the worker window. Stopping the torture test produces the output "Stopping all worker threads." in the comm window, but utilization remains at 100% and again nothing else happens. Selecting "Exit" only minimizes the application to the tray icon, and again nothing else changes (utilization is still at 100%). Afterwards I have to kill it manually from Task Manager.

Launching any other stress test makes the application minimize to the tray icon (after 2-3 seconds), and after a moment (another 2-3 seconds) it disappears. This scenario generates two entries in the Windows logs, "Information" and "Error" - attached are txt files containing the details.

As I've mentioned, I can reproduce this behavior on another PC with a different CPU / motherboard / RAM configuration.

Any ideas what might be wrong? :confused2:

Update: running everything on stock settings doesn't change anything.

Prime95 2019-03-02 15:26

[QUOTE=Cruelty;509873]I am unable to perform any stress test on an i9 7900X using AVX-512. I am using the latest build 4 on Windows 10 Pro x64.[/QUOTE]

I don't have an AVX-512 machine running Windows here. I use linux for my AVX-512 testing.

If you run just one torture thread instead of 16 or 20 does the torture test begin OK? If one works, does two? etc.

Have you checked the BIOS for AVX-512 offset (voltage)? Mystical has reported many early motherboards had this set improperly or not at all.

kriesel 2019-03-02 16:25

[QUOTE=Prime95;509879]I don't have an AVX-512 machine running Windows here. I use linux for my AVX-512 testing.

If you run just one torture thread instead of 16 or 20 does the torture test begin OK? If one works, does two? etc.

Have you checked the BIOS for AVX-512 offset (voltage)? Mystical has reported many early motherboards had this set improperly or not at all.[/QUOTE]
No dual-boot on your test AVX512 system? Windows VM?

harlee 2019-03-02 16:39

Madpoo,

It appears that JSON isn't displaying PRP-C results the same way as non-JSON - known factors are not being shown. Please see [URL="https://www.mersenne.org/M5342101"]M5342101[/URL] and [URL="https://www.mersenne.org/M7039433"]M7039433[/URL] and PrimeNet Most Recent Results page (sort by Type). Even the entry on my Account Results Details page is different:

i5-5250U_1600 7039433 PRP-C 2019-03-02 16:03 0.2 M7039433 is not prime. Res64: 0D75599B41FA5DCE 1.5350

[CODE][Comm thread Mar 2 11:03] Sending result to server: UID: harlee/i5-5250U_1600, M7039433/10781201374553 is not prime. Type-5 RES64: 0D75599B41FA5DCE. Wh10: 8EA45244,6058500,00000000, AID: 271744B661831555BCD839BE279B3753[/CODE]

Entry from results.json.txt:
[CODE][Sat Mar 2 11:03:34 2019]
{"status":"C", "exponent":7039433, "known-factors":["10781201374553"], "worktype":"PRP-3", "res64":"0D75599B41FA5DCE", "residue-type":5, "res2048":"BF64055C4BC12A8B6BB02D32678F9AC7D28028D06479C2DE0E13CABE921EF793DF416F1A1BE61283525215AF2AD19FDFD28EC8D211CEFE5302E2AB49C1BC64F675EDC8687CD2C986024B0C94511372189A394B3C39493416543CC27B45CA5B76FD1B3DE38917EEC86C079CB833C11B672789A967EE8355574F35A002F5A1EDA89A97D0B95A5DFD17718382B1298907CC211C071FB9FC8BF38D5992661563550D588E0C9FA4FF543121485FFA794C84764AA2E7686229CC7D066B35632FCAFE10291D5B7173A5BCED3A1899DDA532096645C67A5F5843D6E14DA3DA9123D5C2CB46B303E8FFD3666C59D08552EF7BD7C765AB0054F56389C40D75599B41FA5DCE", "fft-length":393216, "shift-count":6058500, "error-code":"00000000", "security-code":"8EA45244", "program":{"name":"Prime95", "version":"29.6", "build":3, "port":10}, "timestamp":"2019-03-02 16:03:34", "errors":{"gerbicz":0}, "user":"harlee", "computer":"i5-5250U_1600", "aid":"271744B661831555BCD839BE279B3753"}[/CODE]

Prime95,

I noticed the PRP-C results were only listed in results.json.txt and not in results.txt as in previous versions. Is this expected? Results for P-1 testing are being reported in both files.

Cruelty 2019-03-02 19:26

[QUOTE=Prime95;509879]If you run just one torture thread instead of 16 or 20 does the torture test begin OK? If one works, does two? etc.[/quote]
Just tried with one thread - the same result.

[quote=Prime95;509879]Have you checked the BIOS for AVX-512 offset (voltage)? Mystical has reported many early motherboards had this set improperly or not at all.[/QUOTE]
Tried "Auto" / "0" / "-3" / "-5" - no change in behavior. My motherboard is MSI X299 Tomahawk with latest BIOS.
Just out of curiosity:
29.6.b3 - results in "An unsuported operation was attempted."
29.6.b2 - works fine
29.6.b1 - works fine
29.5.b9 - works fine
Something has happened between b2 and b3.

ATH 2019-03-02 19:30

[QUOTE=harlee;509885]I noticed the PRP-C results were only listed in results.json.txt and not in results.txt as in previous versions. Is this expected? Results for P-1 testing are being reported in both files.[/QUOTE]

See post #26 + #28:
[url]https://mersenneforum.org/showpost.php?p=509044&postcount=26[/url]
[url]https://mersenneforum.org/showpost.php?p=509177&postcount=28[/url]

Prime95 2019-03-03 01:12

[QUOTE=Cruelty;509902]
Just out of curiosity:
29.6.b3 - results in "An unsuported operation was attempted."
29.6.b2 - works fine
[/QUOTE]

Good info. I'll review all changes from b2 to b4.

Prime95 2019-03-03 03:23

[QUOTE=Cruelty;509873]Any ideas what might be wrong? :confused2:[/QUOTE]

Yes! Please try build 5.

Cruelty 2019-03-03 11:12

[QUOTE=Prime95;509951]Yes! Please try build 5.[/QUOTE]
No change from build 4.

Prime95 2019-03-03 16:43

[QUOTE=Cruelty;509986]No change from build 4.[/QUOTE]

Crap. Mis-built. Build 5 is the same as build 4 except for the build number.

Build 6 now available.

Cruelty 2019-03-03 17:11

[QUOTE=Prime95;510018]Build 6 now available.[/QUOTE]
It's working now, thanks! :thumbs-up:

Cruelty 2019-03-04 08:33

[QUOTE=Cruelty;510022]It's working now, thanks! :thumbs-up:[/QUOTE]
"Small FFTs" 12h stress test passed - I'd suggest lowering Max FFT size from 586K to maybe 128K, above this value heat generated is lower (highest temperature reported during run):
[code]
120k = 90C
128k = 89C
144k = 84C
192k = 74C
200k = 74C
240k = 71C
280k = 69C[/code]BTW: only LinpackXtreme manages to heat up CPU higher (93C) than Prime95 v.29.6.b6 in my case.

kriesel 2019-03-04 15:32

[QUOTE=Prime95;510018]Crap. Mis-built. Build 5 is the same as build 4 except for the build number.

Build 6 now available.[/QUOTE]
Thanks for the fast fixes. Stuff happens.
Please update [URL]https://www.mersenneforum.org/showpost.php?p=508842&postcount=2[/URL] for builds 4-6. Nice idea, that: one place to look for a summary of issues and fixes by build.

Cruelty 2019-03-04 15:39

Was there any change in handling Affinity between 29.4 and 29.6?
[code]
[Worker #1]
Affinity=4,6

[Worker #2]
Affinity=0,2
[/code]
Those settings do not seem to work in 29.6 the way they did in 29.4; CPU cores seem to be chosen randomly and to change during the calculation.

Prime95 2019-03-04 20:16

[QUOTE=Cruelty;510084]"Small FFTs" 12h stress test passed - I'd suggest lowering Max FFT size from 586K to maybe 128K,[/QUOTE]

Is this for 20 torture threads on 13.75MB L3 cache?

Prime95 2019-03-04 20:35

[QUOTE=Cruelty;510108]Was there any change in handling Affinity between 29.4 and 29.6?[/quote]

Not to my knowledge.


[quote]
[code]
[Worker #1]
Affinity=4,6

[Worker #2]
Affinity=0,2
[/code]
Those settings do not seem to work in 29.6 the way they did in 29.4; CPU cores seem to be chosen randomly and to change during the calculation.[/QUOTE]

29.6 is linked with a newer version of hwloc. Is it reporting an accurate description of your architecture in results.bench.txt?

Cruelty 2019-03-04 23:34

[QUOTE=Prime95;510122]Is this for 20 torture threads on 13.75MB L3 cache?[/QUOTE]Yes
[quote]29.6 is linked with a newer version of hwloc. Is it reporting an accurate description of your architecture in results.bench.txt?[/quote] I don't see such a file in the prime95 directory :no: The CPU information in prime95 seems accurate, though.

Prime95 2019-03-05 03:16

[QUOTE=Cruelty;510141]Yes
I don't see such a file in the prime95 directory :no: The CPU information in prime95 seems accurate, though.[/QUOTE]

You get results.bench.txt whenever you do any benchmark. Just start one and abort it.

Prime95 2019-03-05 03:22

[QUOTE=Cruelty;510122]Is this for 20 torture threads on 13.75MB L3 cache?[/QUOTE]

Something is off. A small torture test trying to run in L3 cache should assume 13.75MB / 20 threads = 704KB / test. Each FFT word is an 8-byte double. Thus max FFT size should be under 88K.
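For reference, the same arithmetic as a throwaway check (the constants are just the ones quoted above, not values taken from prime95 itself):

[CODE]#include <stdio.h>

/* How big an FFT fits per torture worker if every worker's data must stay
   inside the shared L3 cache. Constants are the ones discussed above. */
int main(void)
{
    double l3_bytes   = 13.75 * 1024 * 1024;   /* 13.75 MB of L3 */
    int    workers    = 20;                    /* 20 torture threads */
    double per_worker = l3_bytes / workers;    /* ~704 KB per test */
    double max_words  = per_worker / 8;        /* 8-byte doubles per FFT word */

    printf("budget per worker: %.0f KB\n", per_worker / 1024);   /* 704 KB */
    printf("max FFT length:    %.0f K\n",  max_words / 1024);    /* 88 K  */
    return 0;
}[/CODE]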

Cruelty 2019-03-05 07:11

1 Attachment(s)
[QUOTE=Prime95;510148]You get results.bench.txt whenever you do any benchmark. Just start one and abort it.[/QUOTE] Here you go.

Prime95 2019-03-05 15:59

[QUOTE=Cruelty;510108]Was there any change in handling Affinity between 29.4 and 29.6?
[code]
[Worker #1]
Affinity=4,6

[Worker #2]
Affinity=0,2
[/code]
Those settings do not seem to work in 29.6 the way they did in 29.4; CPU cores seem to be chosen randomly and to change during the calculation.[/QUOTE]

So 29.6 should bind worker #1 to PU#4 and PU#6 (which are logical CPUs on two different cores) as reported in results.bench.txt. Are you getting messages on screen saying what logical CPU each worker is being bound to?
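For context, binding a worker's thread to a pair of PUs through hwloc looks roughly like the sketch below (hwloc 2.x API, illustrative only and not prime95's actual code, with error handling trimmed):

[CODE]#include <hwloc.h>

/* Bind the calling thread to PU#4 and PU#6 (hwloc logical indices),
   roughly what an "Affinity=4,6" worker line is expected to request. */
int bind_to_pu4_and_pu6(hwloc_topology_t topo)
{
    hwloc_obj_t pu4 = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, 4);
    hwloc_obj_t pu6 = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, 6);
    hwloc_bitmap_t set = hwloc_bitmap_alloc();
    int rc = -1;

    if (set != NULL && pu4 != NULL && pu6 != NULL) {
        hwloc_bitmap_or(set, pu4->cpuset, pu6->cpuset);   /* union of the two PUs */
        rc = hwloc_set_cpubind(topo, set, HWLOC_CPUBIND_THREAD);
    }
    if (set != NULL)
        hwloc_bitmap_free(set);
    return rc;   /* 0 on success, -1 if binding failed or objects were missing */
}[/CODE]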

Cruelty 2019-03-05 18:43

2 Attachment(s)
[QUOTE=Prime95;510178]So 29.6 should bind worker #1 to PU#4 and PU#6 (which are logical CPUs on two different cores) as reported in results.bench.txt. Are you getting messages on screen saying what logical CPU each worker is being bound to?[/QUOTE]
Yes, but it's not what actually happens when I watch the graphs in task manager.

Prime95 2019-03-06 00:22

[QUOTE=Cruelty;510188]Yes, but it's not what actually happens when I watch the graphs in task manager.[/QUOTE]

Hmmm. I looked at the code and see no problems (though that is far from conclusive!).

Are there any Windows system tools that tell you if a process' threads are bound to specific logical CPUs?

I'll try replicating on my non-hyperthreaded quad-core.

Madpoo 2019-03-06 03:03

[QUOTE=harlee;509885]Madpoo,

It appears that JSON isn't displaying PRP-C results the same way as non-JSON - known factors are not being shown. Please see [URL="https://www.mersenne.org/M5342101"]M5342101[/URL] and [URL="https://www.mersenne.org/M7039433"]M7039433[/URL] and PrimeNet Most Recent Results page (sort by Type). Even the entry on my Account Results Details page is different:[/QUOTE]

Fixed. The latest builds have the known-factors in an array in the JSON data... not sure how the previous P95 builds would have included the known-factors if there were more than one, maybe just as a comma-delimited single value.

Cruelty 2019-03-06 07:00

[QUOTE=Prime95;510206]Are there any Windows system tools that tell you if a process' threads are bound to specific logical CPUs?[/QUOTE]
I use:
[code]start /AFFINITY [n] [.exe][/code]
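(The /AFFINITY value is a hexadecimal mask of logical CPUs, so pinning to logical CPUs 4 and 6, as in the Affinity=4,6 example above, would be start /AFFINITY 0x50 prime95.exe, with bits 4 and 6 set. Note that this constrains the whole process rather than reporting how each worker thread ends up bound.)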

harlee 2019-03-06 20:28

[QUOTE=Madpoo;510213]Fixed. The latest builds have the known-factors in an array in the JSON data... not sure how the previous P95 builds would have included the known-factors if there were more than one, maybe just as a comma-delimited single value.[/QUOTE]

Thanks. Just checked and noticed that all of the older PRP results are now showing the known factors.

newalex 2019-03-07 09:11

I started using Prime95 v29.6 on a machine with AVX512F for ECM work yesterday.

Today it found its first factor, for [URL]https://www.mersenne.org/report_exponent/?exp_lo=350249&full=1[/URL]
But the result is displayed weirdly in my account and in the exponent history.
It shows only "Factor:" without anything else.

lycorn 2019-03-07 15:22

And it doesn't show up in the Recent Cleared report, either. The corresponding PRP-C test is there, tho. :confused2:

Prime95 2019-03-07 21:24

[QUOTE=Cruelty;510084]"Small FFTs" 12h stress test passed - I'd suggest lowering Max FFT size from 586K to maybe 128K,[/QUOTE]

Should be fixed in build 7.

No progress on your affinity issue -- no idea on how to proceed. Cannot reproduce the problem on my simple 4-core machine.

kriesel 2019-03-08 15:23

truncating benchmark output in worker window
 
The following is from prime95 v29.6b6, x64, on Win10 Pro, dual E5-2690. Benchmark output is truncated in the worker window for high worker counts on high-core-count systems. This has also been reported for previous prime95 versions. Scroll to the far right of the 16-worker line; of "throughput xxxx.xx iter/sec", only "through" appears.[CODE][Mar 8 09:10] Worker starting
[Mar 8 09:10] Your timings will be written to the results.txt file.
[Mar 8 09:10] Compare your results to other computers at http://www.mersenne.org/report_benchmarks
[Mar 8 09:10] Benchmarking multiple workers to measure the impact of memory bandwidth
[Mar 8 09:10] Timing 1024K FFT, 16 cores, 1 worker. Average times: 0.72 ms. Total throughput: 1382.83 iter/sec.
[Mar 8 09:11] Timing 1024K FFT, 16 cores, 2 workers. Average times: 1.01, 1.01 ms. Total throughput: 1982.69 iter/sec.
[Mar 8 09:11] Timing 1024K FFT, 16 cores, 4 workers. Average times: 1.96, 1.98, 1.95, 1.96 ms. Total throughput: 2036.00 iter/sec.
[Mar 8 09:11] Timing 1024K FFT, 16 cores, 8 workers. Average times: 4.54, 4.54, 4.54, 4.54, 4.13, 4.11, 4.11, 4.13 ms. Total throughput: 1851.38 iter/sec.
[Mar 8 09:11] Timing 1024K FFT, 16 cores, 16 workers. Average times: 9.80, 9.80, 9.82, 9.88, 9.86, 9.81, 9.84, 9.79, 8.48, 8.42, 8.55, 8.41, 8.41, 8.41, 8.42, 8.48 ms. Total[B][COLOR=Red] through[/COLOR][/B]
[Mar 8 09:12] Timing 1024K FFT, 16 cores hyperthreaded, 1 worker. Average times: 1.00 ms. Total throughput: 995.87 iter/sec.
[Mar 8 09:12] Timing 1024K FFT, 16 cores hyperthreaded, 2 workers. Average times: 1.12, 1.11 ms. Total throughput: 1798.63 iter/sec.
(etc)[/CODE]

Cruelty 2019-03-08 18:39

[QUOTE=Prime95;510372]Should be fixed in build 7.[/QUOTE]
OK, I'll give it a try.

newalex 2019-03-10 12:52

The second ECM factor was found using Prime95 29.6, and it isn't displayed correctly in the reports either.

[URL]https://www.mersenne.org/report_exponent/?exp_lo=351457&full=1[/URL]

pepi37 2019-03-10 17:37

[QUOTE]Benchmark type (0 = Throughput, 1 = FFT timings, 2 = Trial factoring) (0): 1

FFTs to benchmark
Minimum FFT size (in K) (256):
Maximum FFT size (in K) (512):
Benchmark with round-off checking enabled (Y): n
Benchmark all-complex FFTs (for LLR,PFGW,PRP users) (N): y
Limit FFT sizes (mimic older benchmarking code) (N): y

CPU cores to benchmark
Number of CPU cores (comma separated list of ranges) (4):

Accept the answers above? (Y):
Main Menu

1. Test/Primenet
2. Test/Worker threads
3. Test/Status
4. Test/Continue
5. Test/Exit
6. Advanced/Test
7. Advanced/Time
8. Advanced/P-1
9. Advanced/ECM
10. Advanced/Manual Communication
11. Advanced/Unreserve Exponent
12. Advanced/Quit Gimps
13. Options/CPU
14. Options/Preferences
15. Options/Torture Test
16. Options/Benchmark
17. Help/About
18. Help/About PrimeNet Server
[Main thread Mar 10 18:29:53] Starting worker.
Your choice: [Work thread Mar 10 18:29:53] Worker starting
[Work thread Mar 10 18:29:53] Your timings will be written to the results.txt file.
[Work thread Mar 10 18:29:53] Compare your results to other computers at [url]http://www.mersenne.org/report_benchmarks[/url]
[Work thread Mar 10 18:29:53] Timing FFTs using 4 cores.
[Work thread Mar 10 18:29:53] Timing 100 iterations of 256K all-complex FFT length. Best time: 0.000 sec., avg time: 0.000 sec.
[Work thread Mar 10 18:29:53] Timing 100 iterations of 320K all-complex FFT length. Best time: 0.000 sec., avg time: 0.000 sec.
[Work thread Mar 10 18:29:53] Timing 100 iterations of 384K all-complex FFT length. Best time: 0.000 sec., avg time: 0.000 sec.
[Work thread Mar 10 18:29:53] Timing 86 iterations of 512K all-complex FFT length. Best time: 0.001 sec., avg time: 0.001 sec.
[Work thread Mar 10 18:29:53] FFT timings benchmark complete.
[Work thread Mar 10 18:29:53] Worker stopped.
[/QUOTE]


Linux x64, Prime95 29.6 beta 3.
So I ran FFT timings from 256K to 512K on 4 cores; note that the reported best and average times are essentially zero (0.000-0.001 sec).

