Thread: GIMPS progress
Old 2018-11-28, 20:27   #7
kriesel
P-1 progress

As of 2021-01-28, PrimeNet is issuing P-1 assignments around 102.4M.

Sampling runs made well ahead of the production wavefront can be useful. If an issue exists that depends on fft length, exponent, bounds, gpu model, or available memory, early detection allows considerable time for debugging, retesting, or documentation of limits before it becomes a problem at the wavefront. Determining and documenting gpu-specific limits also helps avoid doomed or inadequate lengthy runs. I've made a lot of such runs scattered through the mersenne.org range p < 10^9 for those purposes.
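To make concrete what one of these runs computes: P-1 stage 1 raises a base to a huge B1-smooth exponent mod M_p and takes a gcd. The sketch below is a toy Python version for small exponents only (the function names pminus1_stage1 and primes_up_to are mine for illustration; production clients like Prime95, CUDAPm1, and GpuOwl do the same arithmetic with large-FFT multiplication, not bigints):

```python
from math import gcd, isqrt

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1_stage1(p, B1):
    """Toy P-1 stage 1 on the Mersenne number M_p = 2^p - 1.

    Any factor q of M_p has the form q = 2*k*p + 1, so the known
    2*p part of q - 1 is folded into the starting exponent.  Stage 1
    finds q when the rest of q - 1 is B1-smooth (all prime-power
    factors <= B1); stage 2 would extend the reach to one extra
    prime factor up to B2.
    """
    M = (1 << p) - 1
    x = pow(3, 2 * p, M)              # fold in the guaranteed 2*p
    for q in primes_up_to(B1):
        e = q
        while e * q <= B1:            # largest power of q not exceeding B1
            e *= q
        x = pow(x, e, M)
    g = gcd(x - 1, M)
    return g if 1 < g < M else None   # None: no factor isolated
```

For example, M_29 = 536870911 = 233 * 1103 * 2089, and 233 - 1 = 2^3 * 29, so even B1 = 5 finds 233: `pminus1_stage1(29, 5)` → 233.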

Completed P-1 is not listed in https://www.mersenne.org/primenet/
Repeated use of https://www.mersenne.org/report_exponent/ yields the P-1 results binned below, each bin spanning one million of exponent value, with at least one P-1 run per bin (historically checked on the lowest 2000 of each million span). Current status can be seen by P-1 bounds in https://www.mersenne.org/report_factoring_effort/?exp_lo=137000000&exp_hi=999999999&bits_lo=77&bits_hi=99 or similar. Some P-1 runs do not reach the bounds or the combined factor probability level of the PrimeNet or GPU72 goals; those runs are mostly omitted from the listing below.
Runs of consecutive bins with at least one P-1 result each are condensed in the listing below. For example, 74-79 means bins 74, 75, 76, 77, 78, and 79 each have at least one reported P-1 result. Bins with P-1 completed only through stage 1, not stage 2, are marked (s1). These occur naturally while exploring application and gpu model limits, since stage 2 requires much more memory than stage 1.
All CUDAPm1 application- and hardware-specific limits indicated below assume 2 primality tests saved if a factor is found.

Note the following detail-specific limits observed:
Quadro K4000 CUDAPm1 V0.20 GPU72-suitable limit ~250M
GTX1050Ti-4GB or GTX1060-3GB CUDAPm1 v0.20 bounds GPU72-suitable limit ~300M
376M Tesla C2075, 377M GTX1080x, CUDAPm1 v0.20 bounds GPU72-suitable limits
432.5M GTX1060-3GB CUDAPm1 v0.20 reduced bounds feasible 2-stage limit
510M < GpuOwl v6.7 on GTX1080Ti limit < 511M
Prime95 on pre-FMA3 cpu, limit 595M
GpuOwl ~v6.11 (Fan Ming build) on Colab Tesla K80: limit >665M to GPU72 bounds; verification ongoing
GpuOwl ~v6.11 (Fan Ming build) on Colab Tesla P100: limit >1G to GPU72 bounds
GpuOwl ~v6.11 (Fan Ming build) on Colab Tesla P4: limit >400M; verification ongoing
GpuOwl ~v6.11 (Fan Ming build) on Colab Tesla T4: limit >400M; verification ongoing

Prime95 FMA3 limit 920.8M
Prime95 AVX512 limit >1.16G


0-465 (including 271828171, 314159257)
471 (s1)
480-481

500-503
510
511 (s1)
514
517
520
543 (543656371) (s1)
550 (s1)
552
554
562
564
565
570
580 (s1)
595
599 (s1)

600-606
613
617
623
625
627
628 (628318517) (s1)
642
647
652
655
665
666 (s1)

700
701
704
720 (s1)
731 (s1)
735 (s1)
737 (s1)
757
764

800-801
843
852
857

900-901
931
937
957
984
993
999

(The mersenne.org limit of 999M is below the expected AVX512 prime95 limit, if system RAM is adequate)

(To my knowledge, no others are completed through 999 to the PrimeNet or GPU72 combined factor odds targets; some s1's omitted. The above list is as of 2019 June 7, except as edited subsequently when I completed gap-filling P-1 runs.)


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2021-01-28 at 17:05 Reason: updated for wavefront