Kriesel blog sub-site map
kriesel, 2019-07-19

Welcome to the GIMPS forum and to the GIMPS computing reference blog site map / table of contents. (See the end of this post for acknowledgments of the many who have helped.)

Please see also the Announcements page.

This page is l.....o....n....g. You may find it useful to search it for a keyword or phrase with your web browser's find-in-page function. I do, and I wrote it! In Firefox, for example, press Ctrl-F, type the search string into the "Find in page" box at the lower left, then click ^ or v to search upward or downward.
This collection began as GPU-oriented only, and grew (evolved, though some might say ballooned or metastasized!) to include CPU-oriented application coverage, introductory material, history, and more.
For those who would like a less mammoth map, there's an outline/thread-only version following. You may want to bookmark that post, or this thread.

The whole blog is a work in progress. Please be patient and understanding. It's a big job. (Five plus years and counting...)
Polls and poll-related threads are not (yet) included in this site map. To find those you'll need to go here for now.

Post comments in discussion threads, not in reference threads. Discussion threads are shown here in green bold italic font and are few (about 1.5% of the total).

This is a reference thread; please do not post in it. Comments posted in reference threads may be moved or deleted without warning or recourse. (Rarely, a discussion post, regardless of its location, is incorporated into a reference thread, with credit to the poster. So unattributed posts can be assumed to be authored by kriesel.)

Discussion thread posts will not be enumerated here. Reference thread posts will be.

Reference material discussion thread: I suggest new participants skip this discussion thread, at least initially. Come back later if you're curious.

Suggested reading for new participants begins with the new participant reference thread immediately following, continues through at least post 6 of the Mersenne Prime GPU computing reference material thread, and, for CPU application to GIMPS, includes both the prime95/mprime and the Mlucas threads (both far down this page).

New participant reference thread
  1. Intent and table of contents
  2. New participant guidance (forum etiquette and tips)
  3. Background
  4. How much work is it to do x
  5. GIMPS glossary
  6. Older reference thread
  7. Best practices
  8. OS fundamentals for GIMPS GPU application use
  9. Why no one should run LL if they can run PRP with proof generation instead
  10. Nick's Basic Number Theory Series
  11. Assignments background
  12. Proof and Cert handling
  13. Getting or avoiding Cert assignments

Mersenne Prime mostly-GPU computing reference material
  1. Intro
  2. Available Mersenne Prime hunting software
  3. Available Mersenne prime hunting client management software
  4. Disclaimer
  5. Ancestry of available software
  6. Utilities for GPU Computing, etc.
  7. List of FFT lengths
  8. Four primality test programs' performance charted together (clLucas, CUDALucas, gpulucas, and gpuOwL)
  9. Mersenne prime hunt work coordination sites vs type and exponent
  10. Devcon (Automating recovery from Windows TDR events for GPUs)
  11. Table of megadigits: which Mersenne exponents have decimal-digit counts of various orders of magnitude
  12. TF & P-1 optimization/tradeoff with each other and primality testing
  13. Assorted handy links
  14. Found a new prime? Really? What next?
  15. NVIDIA-smi
  16. TF & LL GhzD/day ratings & ratios and SP/DP ratios for certain GPUs
  17. P-1 bounds determination
  18. What limits trial factoring?
  19. Error rates
  20. Costs
  21. Reserving a specific exponent
  22. Worktodo entry formats
  23. GPUto72 and PrimeNet P-1 bounds
  24. Moving work in progress
  25. GPU P-1 applicability
  26. Save file (in)compatibility
  27. GPU benchmarks
  28. Result formats
  29. Application vs. operating system availability & compatibility
  30. PrimeNet P-1 bounds
  31. P-1 selftest candidates
  32. GPU serial numbers or other stable unique ids
  33. Result formats accepted by mersenne.*
  34. Optimal prp proof power versus exponent
  35. Requirements for comparability of interim residues
  36. Exponent limits
  37. File sizes and computing costs of proof generation & cert
  38. Gerbicz Error Check block size
  39. GPU device mapping and stability of mapping
  40. Available FFT lengths and their corresponding nominal exponent limits
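The megadigit table in item 11 rests on a simple identity: 2^p - 1 has floor(p * log10(2)) + 1 decimal digits. A minimal sketch of the calculation (the function name is mine, not from the reference post):

```python
import math


def mersenne_digits(p: int) -> int:
    """Decimal digit count of 2**p - 1, for p >= 1.

    2**p - 1 and 2**p have the same digit count, because 2**p is
    never a power of 10; digits(2**p) = floor(p * log10(2)) + 1.
    Double-precision log10 is ample for exponents in GIMPS range.
    """
    return math.floor(p * math.log10(2)) + 1


# M82589933, the largest known Mersenne prime as of this post's
# last edit, has 24,862,048 decimal digits.
print(mersenne_digits(82589933))
```

So a "100-megadigit" exponent must satisfy p * log10(2) >= 10^8 - 1, i.e. p of roughly 332.2 million and up.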

System management
  1. Intro and table of contents
  2. Partial checklist for system maintenance and reliability
  3. Drivers and GPUs trivia / traps / tricks
  4. Application logging and tee
  5. Memory error control
  6. Windows 10
  7. Running multiple computation types on multiple GPUs per system
  8. Power settings
  9. WSL
  10. Linux

GPU-specific reference thread
  1. Intro / Table of contents
  2. GPU temperature limits
  3. GPU TF and LL benchmarks, ratios, and SP:DP ratios
  4. NVIDIA GPU model, compute capability level, CUDA level, OS versions, and driver version
  5. KaBo's post about configuring system for power efficiency
  6. How many GPUs can go in one system?

Integrated graphics processor specific reference thread
  1. Intro and table of contents
  2. Intel i7-7500U/HD620
  3. Intel i7-8750H/UHD630
  4. Intel i7-4790/HD4600
  5. Intel i5-1035G1/UHD920
  6. Other possibilities
  7. Intel Celeron G1840/HD
  8. Intel i3-4170/HD4400
  9. Intel i7-1165G7/Iris Xe

Xeon Phi specific material (draft in progress)

Application-specific reference threads (GPU-oriented first, then CPU-oriented considerably farther down)

Cloud computing specific reference threads or links:
  1. Intro
  2. How to
  3. Mprime attempt
  4. Mfaktc attempt
  5. CUDAPm1 attempt
  6. CUDALucas attempt
  7. GpuOwL attempt
  8. Combined CPU and GPU usage
  9. Worktodo replenishment and result reporting
  10. Mlucas attempt
  11. Notebook instance reverse ssh and http tunnels
  12. The Google drive access authorization sequence
  13. When a VM or GPU is not available
  14. Issues, questions, support
  15. GPU models available through Google Colab
  16. Multiple branches for CPU-only, or various GPU models
  17. Work assignment and result submission
  18. GMP-ECM for Colab
  19. Persistent storage
  20. Moving Colab instances
  21. Embellishments
  22. Trial factoring double Mersennes with mmff
  23. Google Colab VM OS updates, upgrades, Python version upgrades, and other breakage


Hall of fame
  1. Intro
  2. Near misses
  3. Unsung heroes
  4. The pre-computer era
  5. The pre-GIMPS era (1952-1995)
  6. The GIMPS era (1996 and forward)
  7. Why do it, or Why do we do it?
  8. New discovery verification
  9. Software of historical interest
  10. Timeline of a new Mersenne prime verification and announcement
Hall of shame

PrimeNet API
  1. Table of contents; PrimeNet API documentation and notes
  2. Sample gpu device parameters
  3. Draft Extension to PrimeNet Server Web API to support GPU applications
  4. Constraints imposed by server hardware

Code development
  1. Intro and table of contents
  2. Why don't we double check TF or P-1 runs?
  3. Why don't we start primality tests at an iteration count above zero?
  4. Why don't we consider all integers as exponents, not just primes?
  5. Why don't we use the information in the series of p values for the known Mp to predict the next?
  6. Why don't we compute the Lucas series without the modulo, once, and apply the modulo at the end?
  7. Why don't we run lots of really old computers, on individual assignments each?
  8. Why don't we use several computing devices together to primality test one exponent faster?
  9. Why don't we use the initial iterations of a standard primality test as a self-test?
  10. Why don't we build statistical checks into the GIMPS applications and server?
  11. Why don't we compute multiple P-1 runs at the same time allowing multiple use of some interim values?
  12. Why don't we save interim residues on the PrimeNet server?
  13. Why don't we skip double checking of PRP tests protected by the very reliable Gerbicz check?
  14. Why don't we self test the applications, immediately before starting each primality test?
  15. Why don't we occasionally manually submit progress reports for long-duration manual primality tests?
  16. Why don't we extend B1 or B2 of an existing no-factor P-1 run?
  17. Why don't we do proofs and certificates instead of double checks and triple and higher?
  18. Why don't we run GPU P-1 factoring's GCDs on the GPUs?
  19. Why don't we use 2 instead of 3 as the base for PRP or P-1 computations?
  20. Why don't we use interim 64-bit residues as checking input on later runs?
  21. Why don't we merge the leading CPU and GPU applications?
  22. Why don't we use FPGAs along with or instead of CPUs or GPUs?
  23. Why don't we compute GCD for a P-1 stage in parallel with the next stage or assignment?
  24. Why don't we preallocate PRP proof disk space in parallel with the computation?
  25. Why don't we run GIMPS computations on ASICs?
  26. Why don't we use direct GPU-storage transfers in GIMPS apps?
  27. Why don't we use a formula to discover more Mersenne primes instead of doing lots of slow computations?

  1. Intro and table of contents
  2. Mersenne prime exponent LL worktodo lines (gpuowl <v0.7, CUDALucas and prime95 formats)
  3. Mersenne prime exponent PRP and PRP-1 worktodo lines (GpuOwL format)
  4. Interim 64-bit residues for LL sequences of known Mersenne prime exponents
  5. Interim 64-bit residues for PRP3 sequences of known Mersenne prime exponents
  6. Seed value ten LL series final residues
  7. How fast can we multiply 2 integers, or square an integer?
  8. PRP residue types
  9. The Gerbicz error check for PRP, and on LL and P-1 with restrictions
  10. The Jacobi check for LL
  11. CUDA Toolkit compatibility vs CUDA level
  12. Interim residues for large exponents
  13. LL with shift
  14. Challenges of large exponents
  15. Proof file format
  16. Known good interim LL residues for checking run-time scaling and correct computation
  17. Known good interim PRP residues for checking run-time scaling and correct computation
  18. GIMPS application and user interface design
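Posts 2 and 3 of the list above cover worktodo line formats in detail. As a hedged illustration only (exponent and bit levels chosen arbitrarily; <AID> stands for the 32-character hexadecimal assignment id issued by PrimeNet; see those posts for authoritative, per-application syntax), typical prime95-style worktodo lines look like:

```
Factor=<AID>,332192831,76,77
Test=<AID>,332192831,77,1
PRP=<AID>,1,2,332192831,-1
Pminus1=<AID>,1,2,332192831,-1,800000,30000000
```

The PRP and Pminus1 forms carry k,b,n,c fields describing k*b^n+c, so 1,2,332192831,-1 denotes M332192831; the Factor line gives the trial-factoring bit range, and the trailing Test fields give trial-factored depth and whether P-1 has been done.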
Blog development

Off topic
  1. Intro
  2. About me
  3. Personal bests in GIMPS
  4. Landau's fourth problem
  5. Blogger blunder backup blackout blues
  6. Fermat numbers
  7. What is this beast called?
  8. Beyond Mersennes
  9. Perfect Numbers
  10. Factoring Methods

My thanks go to all who have contributed in some way to this compilation. In rough order of appearance in the few discussion threads here:
Uncwilly, ET_, LaurV, axn, SELROC, Dr. Sardonicus, retina, nomead, Dylan14, Batalov, hansl, ewmayer, chalsall, GP2, kar_bon, mysticial, jwaltos
Also, too numerous to identify and list, are those whose many questions in various threads led to ideas for new posts, additions, or edits here.

This is an attempt at completeness, but it is not possible to cover in advance every question or misconfiguration that users may produce.


Last fiddled with by kriesel on 2023-09-28 at 16:28 Reason: added file size scaling and older versions to mlucas thread, fix xeon phi thread url, misc edits