#34

"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2519₁₆ Posts
Quote:
Quote:
This data is also obviously incompressible (because it is truly random).
#35

"David"
Jul 2015
Ohio
11·47 Posts
Quote:
#36

If I May
"Chris Halsall"
Sep 2002
Barbados
2×67×73 Posts
Quote:
At the end of the day, it probably makes more sense for trusted users/machines to double- or triple-check candidates that were initially LL'ed by suspect machines than to attempt a major overhaul of the client and server code-base.

Separately, as George mentioned, an untrusted machine might taint the results of a trusted machine.

Further, there's the whole question of "credit": who gets to claim (or at least be named in) the find? The machine/user who did the last iterations, the machine/user who did the majority, or everyone who did a few? (Hint: if the latter were the case, some would do a few iterations on many candidates!)
#37

"Carlos Pinho"
Oct 2011
Milton Keynes, UK
4947₁₀ Posts
Dear airsquirrels,

Your machines would be much appreciated for sieving at NFS@Home: http://escatter11.fullerton.edu/nfs/.

Kind Regards,
Carlos
#38

Just call me Henry
"David"
Sep 2007
Cambridge (GMT/BST)
16FE₁₆ Posts
Quote:
As far as I am aware, much of the reason a cluster is used is that people don't usually want to commit an expensive machine (that amount of memory isn't cheap) to one job for 6+ months. As memory gets cheaper with DDR4, I imagine larger jobs will be done on home PCs again.
#39

Serpentine Vermin Jar
Jul 2014
3,313 Posts
Quote:
See this thread for the gory details... I think it's where we discussed most of it: http://www.mersenneforum.org/showthread.php?t=13185

In short, on a good Xeon chip you can keep adding all of the cores on one CPU (and even one core on the other CPU) with decreasing gains in LL performance, but each added core is still slightly faster. It's only when you start adding additional cores on the other CPU (past the first one) that performance actually starts to get worse, as you flood the QPI channel.

That thread was specifically about larger exponents, but the same holds true for smaller ones as well. It wasn't until I got down to some really tiny exponents (sub-5M) that I noticed cores waiting on memory, which was weird. If I'm testing a 50M exponent, all of the cores are at 100%, but on a 5M exponent the first core is at 100% and the rest might be between 70% and 90% utilized. Oh well... they finish really fast at any rate.

For GMP-ECM doing ECM work, you can run one instance per core and, if you have sufficient free RAM, set the parameters of each instance to use as much as it needs. Depending on the exponent in question, stage 2 may need a pretty large chunk of memory. I was running curves on small exponents like M1277 with some pretty large bounds... stage 2 could take 25-30 GB per instance (I think that was k=2), so obviously if you wanted to do that on 32 cores, you'd want a LOT of RAM. :) But doing ECM on "normal" exponents with "normal" bounds wouldn't use anywhere near that much. I just had this thing about 1277 since it's the lowest exponent without a known factor yet.

Some other thread has my gory details about getting GMP-ECM working well, with Prime95 doing stage 1 and feeding that to GMP-ECM. It's not the easiest process, but if that's something you're interested in, it'll work. Depending on your OS (Windows or Linux), the actual process of launching multiple gmp-ecm instances and setting affinity for each one will vary. I think I went into enough detail on how I did it with Windows to get you started, should you go down that path.

For now I'm still devoting resources to triple-checking exponents where the first two results didn't match, so I'm not currently doing any ECM work... I may go back to it at some point.
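The stage-1-to-GMP-ECM hand-off above is left OS-specific. As a rough Linux-only sketch of the launch step (not the poster's actual procedure: the `ecm` binary name, the `stage1_residues.txt` file name, and all numbers here are illustrative assumptions), one instance per core can be pinned with `taskset`, with GMP-ECM's `-maxmem` option capping stage-2 memory per instance:

```python
# Hypothetical sketch: build one gmp-ecm command line per core, pinning
# each instance to its own core via taskset and capping stage-2 memory
# with GMP-ECM's -maxmem option (value in MB).  The resume file is
# assumed to already hold stage-1 residues in GMP-ECM's save format.
def ecm_commands(cores, b1, maxmem_mb, resume_file="stage1_residues.txt"):
    cmds = []
    for core in range(cores):
        cmds.append(
            f"taskset -c {core} ecm -resume {resume_file} "
            f"-maxmem {maxmem_mb} {b1}"
        )
    return cmds

# Example: 4 cores, B1 = 1e6, 8 GB of stage-2 memory per instance.
for cmd in ecm_commands(4, 1000000, 8192):
    print(cmd)
```

On Windows the same idea would use `start /affinity` instead of `taskset`; either way, the point is one pinned instance per core, each with its own memory cap.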
#40

Serpentine Vermin Jar
Jul 2014
3,313 Posts
Quote:
Of course, people who climb aboard the GIMPS train after a new discovery are probably doing so in hopes of finding another one, as if it would turn up in the next few days... so DC work on their first assignment or two would be a buzzkill.
#41

Serpentine Vermin Jar
Jul 2014
3313₁₀ Posts
Quote:
Primenet saves the final residue (or the last 64 bits of it, anyway). Maybe George or someone could see some benefit in also saving a partial residue at the 50% point, so that a double-checker would have some idea at the halfway mark of whether or not they match the first check.

I'm not entirely sure that would be useful... it will either match at that point or not. If it matches, the two runs could still diverge by the end, so you'd complete the test to know. If it mismatches, the first run may be the bad one, so you'd still complete the test to know. Either way you do the full test, but maybe there'd be some interest in knowing well ahead of time that a mismatch had occurred.

Unfortunately, there's probably no way to know at what point a bad result went off the rails... it could have been in the first hundred iterations or in the final one. So I'm just arbitrarily saying "50%". Maybe the rare people who save their residues, do simultaneous runs of the same work, and have had mismatches could shed some light on "at what % did the results diverge?"
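To make the halfway-residue idea concrete, here is a minimal Lucas-Lehmer sketch (toy exponents only; real LL code uses FFT-based squaring, and the 50% checkpoint is an arbitrary choice, as noted above). It records the low 64 bits of the interim value at the halfway iteration alongside the final residue:

```python
def ll_residues(p, checkpoint_frac=0.5):
    """Lucas-Lehmer test of M_p = 2^p - 1 for odd prime p.

    Returns (is_prime, checkpoint_res64, final_res64): the checkpoint
    residue is the low 64 bits of the interim value after the given
    fraction of the p - 2 squarings, mirroring the idea of storing a
    partial residue so a double-check can spot a mismatch early.
    """
    m = (1 << p) - 1
    iterations = p - 2
    checkpoint_iter = max(1, int(iterations * checkpoint_frac))
    checkpoint_res64 = None
    s = 4
    for i in range(1, iterations + 1):
        s = (s * s - 2) % m          # one LL squaring step mod M_p
        if i == checkpoint_iter:
            checkpoint_res64 = s & 0xFFFFFFFFFFFFFFFF
    return s == 0, checkpoint_res64, s & 0xFFFFFFFFFFFFFFFF

# M_7 = 127 is prime: interim value after the halfway squaring is 67,
# and the final residue is 0.
print(ll_residues(7))   # → (True, 67, 0)
```

A double-check whose checkpoint residue differs from the stored one would know by the halfway point that at least one of the two runs is bad, though (as the post says) it would still have to finish to learn which.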
#42

"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
Quote:
#43

"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
9,497 Posts
#44

"David"
Jul 2015
Ohio
1005₈ Posts
Quote:
Having DC so far behind is the real problem I'm looking at here: either lots of resources are spent with a low chance of learning anything new, or very little is spent on double checks and a mistake somewhere along the line "misses" an important Mersenne.

What is the current stat for how many double checks are started and never completed? I'm not sure credit would be as important for DC, and the whole project would benefit from all the work of churners who abandon the current low-granularity work units.
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| Large Gaps >500,000 | mart_r | Prime Gap Searches | 119 | 2017-08-21 12:48 |
| 48-bit large primes! | jasonp | Msieve | 24 | 2010-06-01 19:14 |
| a^n mod m (with large n) | Romulas | Math | 3 | 2010-05-08 20:11 |
| Is this a relatively large number? | MavsFan | Math | 3 | 2003-12-12 02:23 |
| New Server Hardware and price quotes, Funding the server | Angular | PrimeNet | 32 | 2002-12-09 01:12 |