2015-01-20, 01:54  #1 
Jan 2015
2 Posts 
Extremely basic questions
Hello all,
Very new to the great prime search, but figured I'd start some workers while I learn about everything. Have a few basic questions about how prime95 operates itself: First, I don't have a particularly powerful machine (which also happens to be a laptop), and I doubt I'll ever be able to singlehandedly test a number before having to move the machine again. While reading through some of the settings I noticed that p95 will be making some backup files. 1) If I only partially test a number for primality, can I (or anyone else), pick up where I left off? Or is an impartial test just lost progress? 2) If a (suspected) prime is found, what is the process of verifying it? Test, retest, and factor? I noticed some 'milestones' stating that all numbers (not necessarily primes) below N number were tested twice. Is this simply protection against random errors? 3) And finally, I've always been told that GPUs were faster at certain calculations, what is the benefit of using CPU processing vs GPU? I've had some tell me that the way GPUs operate, they are more prone to erroneous calculations, but I would imagine a GPU producing XX times more probable primes, and simply retesting the ones that seemed like they would be true primes using CPUs would be a good approach. I'm probably wrong, but I'm interested as to the reasoning. Thanks, 
2015-01-20, 04:57  #2  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2·11^{2}·19 Posts 
1. NO... any partial prime testing is throwaway. On the other hand, Trial Factoring assignments completed to a full bit level are fine. For example, if you take an assignment to TF (Trial Factor) a number from 65 to 70 bits and submit the results for each bit level up to 68, those are all good. If you stop in the middle of bit level 69, that work is lost.

2. It is tested at least twice more, by different people and with at least one different prime-testing program: CUDALucas, for one.

3. GPUs are MUCH MUCH faster at TF than a CPU (20 to 100 times faster) and somewhat faster at prime testing, maybe 2 to 5 times. My GPU (fast, but NOT the fastest) will do 515 GHz-days of TF work per day (GHz-days is the standard unit of work used here). My top CPU (fast, but not the fastest) will do about 8 GHz-days of work per day per core. This same GPU will do about 25 GHz-days per day of prime testing. Mine is one of the fastest at TF but more average at prime testing; some are not as fast at TF but faster than mine at prime testing. I could be wrong, but I do NOT believe there is widely available software to let a GPU do P-1 or ECM. If you don't know what those are, there are lots of forum topics on them too. Other topics on this forum go into great detail about which GPU is best suited for each work type and how to best set it up, keep it cool, etc. 

2015-01-20, 05:30  #3 
Dec 2012
2×139 Posts 

2015-01-20, 06:24  #4 
"Curtis"
Feb 2005
Riverside, CA
127A_{16} Posts 
I disagree about #1 being no. If you stop running Prime95, it saves your progress into a file. Starting up that Prime95 copy again, with the status file in the folder, will resume the calculation. There is no system for sharing these progress files, so petrw1 meant that if you give up on a test partway through, nobody else will pick it up where you left off. But, you can resume any time without lost work.
I haven't tried resuming on a different architecture; for instance, running Prime95 from a USB thumb drive and bouncing from laptop to desktop to work machine probably isn't a great idea. But even that might work; chances are someone else will let us know. 
2015-01-20, 08:22  #5  
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
2×7×677 Posts 
If your computer reports a new prime, here is what will happen, in approximate order, once your machine reports it to PrimeNet:

1. George and a select few others will be notified.
2. George will start a double-check with a copy of Prime95, and I think also CUDALucas.
3. George will try to contact you. Your machine should have saved the last checkpoint file; if he can get that from you, he will rerun the test from there to make sure there are no problems.
4. One or more of the other trusted folks (either those notified directly by the server or by George) will also start a double-check, using CUDALucas on a GPU or MLucas on other hardware (this depends on who is available and what equipment is available).
5. As these tests proceed, the testers will compare the interim residues to see if they match each other. If at the end they produce a new prime, party time.

While that is happening, some of us will try to guess what the number is (from the clues that can be found). Once people think they know what the number is, some will try their own double-check, and others will try doing extra factoring (on the extremely remote chance that it is not prime and that a factor can be found before the double-checks finish), both TF and P-1. 

2015-01-20, 08:36  #6 
May 2013
East. Always East.
11·157 Posts 
The interim file contains the residue from whatever step you happened to be on. The data is no different from architecture to architecture (though it is handled differently). Using modular arithmetic, the residue is kept no larger than the candidate itself. That said, it is still quite a large and clunky number: in binary it appears to be a more or less random assortment of 1's and 0's, whereas the Mersenne number itself is a string of 1's.
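(The test behind all this is the Lucas-Lehmer iteration, where that residue is squared and reduced mod M_p = 2^p - 1 at every step; real implementations do the squaring with huge floating-point FFTs rather than native big integers. A toy Python sketch, purely illustrative, with function names of my own:)

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for M_p = 2**p - 1, p an odd prime.
    Returns True exactly when M_p is prime."""
    m = (1 << p) - 1             # the Mersenne number being tested
    s = 4                        # the standard LL starting value
    for _ in range(p - 2):
        s = (s * s - 2) % m      # reduction keeps the residue within p bits
    return s == 0                # final residue of zero <=> M_p is prime

def ll_residue64(p):
    """The low 64 bits of the final residue: the part kept for
    double-checking composite results."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s & 0xFFFFFFFFFFFFFFFF
```

For example, `lucas_lehmer(13)` confirms 8191 is prime, while `lucas_lehmer(11)` correctly flags 2047 = 23 × 89 as composite; two machines whose `ll_residue64` values agree on a composite are presumed to have run the same correct test.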
Interim files have occasionally been saved to serve as checkpoints, especially in the longer tests, which are only now starting to be feasible in under a year without dedicating multiple cores of a CPU (or a GPU, which as you said can be more error-prone). By default, though, we don't bother keeping them; we only care whether the result is zero (prime) or non-zero (composite), and we keep the last 64 bits of it for double-checks on composites.

I would recommend sticking with double-checks if you feel you won't be able to complete the longer tests. Although it is possible to hand off intermediate results to someone else, it is much simpler to have each test done by a single machine. That said, you are of course free to work on whichever work type you wish, and your contributions, whatever their size, are appreciated. Welcome to GIMPS, by the way!

If a candidate returns prime, it is rigorously retested on different machines with different implementations of the algorithm to make absolutely sure it is prime. There isn't any factoring officially involved at this point, because there aren't any factors to be found. A lot of work does go into factoring within GIMPS, but that is simply to eliminate candidates with very small factors (relatively speaking, anyway). To fully trial factor the current largest Mersenne prime would take more years than there are seconds in the expected lifetime of the universe, and more computers than there are particles in the known universe. The Lucas-Lehmer algorithm has been proven correct to death, for positives and negatives; it is a sufficient proof of primality.

GPUs are indeed faster overall, but there are limitations. Mainstream GPUs are meant more for gaming than productivity, and gaming has a much, much higher tolerance for errors than the LL test does. 
A single bit flip (a 1 mistaken for a 0, or vice versa) will ruin an LL test, whereas even graphical glitches you can see usually aren't bad enough to ruin the gaming experience (some people will disagree with me here). These glitches come from corruption of the textures loaded into GPU memory, more often than not when the memory is pushed too far with overclocking. The major ones are a sign of real instability, while the little ones (one pixel off by a slight shade for one frame, nearly impossible to spot) are perfectly acceptable... by gaming standards. That same memory glitch ruins an LL test. It's entirely possible to stress test a GPU with CUDALucas to see if it can actually run the tests, and GPUs can be underclocked for better stability, but that reduces their speed.

Additionally, graphics are processed using math that is very simple but done on a massive scale. This math requires much less precision than the LL test; you'll hear the terms Single Precision and Double Precision. The LL test requires the double variety, which takes a serious performance hit depending on the GPU model: some have DP throughput reduced to 1/8 of the SP figure, others down to 1/24 or even 1/32. Trial factoring, on the other hand, only requires single-precision math, so it can take full advantage of everything the GPU has to offer. TF is also much less dependent on memory bandwidth, and thus much less susceptible to the instabilities incurred by pushing the bandwidth to its limits. 
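(The single/double distinction is easy to see with plain integers: a single-precision float has a 24-bit significand and starts dropping whole numbers above 2^24, while double precision holds exact integers up to 2^53. A quick Python illustration; it round-trips values through `struct` to simulate 32-bit floats, since Python's own floats are already doubles:)

```python
import struct

def to_single(x):
    """Round-trip a number through IEEE-754 single precision (32-bit float)."""
    return struct.unpack('f', struct.pack('f', float(x)))[0]

# Double precision (Python's float) represents every integer up to 2**53 exactly:
assert float(2**53) == 2**53
assert float(2**53 + 1) == float(2**53)   # first integer it can no longer distinguish

# Single precision gives up much sooner, at 2**24:
assert to_single(2**24) == 2**24
assert to_single(2**24 + 1) == 2**24      # 16777217 rounds back to 16777216
```

Since the big LL squarings are done with floating-point FFTs, lost precision like this turns directly into wrong residues, which is why a card's DP rate matters so much for prime testing while SP is plenty for TF.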
2015-01-20, 19:26  #7 
Jan 2015
2 Posts 
Thank you all for the replies. This entire process is very interesting to me; I'll be sure to learn as much as I can about it.
Huh, it's interesting to me that if a prime is found, it shoots straight up to a select few people. But then again, I suppose it takes quite a while to finally find one. The note on single- and double-precision numbers got me thinking about the architecture limitations inherent in everyday PCs. How is it possible for a 64-bit computer (which I believe can count to 2^{64}) to test these massive primes? Or does that limitation only come into play when trying to factor them? And Jayder, sorry: yes, by "When a prime is found... do we factor it" I meant 'verify that there are no factors'. Thanks again for all of the info, 