2019-03-03, 00:08   #1
kriesel

Errors

This thread is intended as a reference for the errors that may occur in GIMPS computations and the methods used to detect, prevent, or correct them. For discussion, please use the reference discussion thread https://www.mersenneforum.org/showthread.php?t=23383.
  1. This post
  2. TF https://www.mersenneforum.org/showpo...36&postcount=2
  3. P-1 https://www.mersenneforum.org/showpo...37&postcount=3
  4. Primality testing https://www.mersenneforum.org/showpo...40&postcount=4
  5. Hardware https://www.mersenneforum.org/showpo...12&postcount=5
  6. tbd etc.

Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2019-11-19 at 06:38
2019-03-03, 00:18   #2
kriesel

TF

TF errors are of multiple types. Their impact is slight.
1. Missed factor
2. False factor
3. Reporting error
4. Malicious reports

A missed factor causes additional TF effort, unless it occurs very near the end of the last TF level to be done. A missed TF factor, if no other factor is found in the remaining TF, leads to spending additional resources performing P-1, and about a 97% chance of spending the effort of two or more Lucas-Lehmer primality tests, or a PRP test and proof. A missed factor has an estimated 20% chance of being found later in the P-1 pass; sufficiently smooth factors may be found there, but others will not.
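
To illustrate what "smooth" means here: P-1 can find a factor f of Mp only when f-1 = 2kp is a product of primes within the chosen bounds. Here's a minimal sketch with real small numbers (1103 is a known factor of M29) and arbitrary toy bounds; it ignores the prime-power subtlety of real stage 1 bounds.
Code:
use strict;
use warnings;
# Sketch: P-1 on M_p with bounds B1 (stage 1) and B2 (stage 2) can find a
# factor f when f-1 = 2*k*p is B1-smooth after setting aside the exponent p,
# with at most one remaining prime in (B1, B2].  Small-number toy only.
sub p1_reachable {
    my ($f, $p, $B1, $B2) = @_;
    my $n = $f - 1;
    $n /= $p if $n % $p == 0;     # p itself is always included in stage 1
    my @primes;
    for (my $q = 2; $q * $q <= $n; $q++) {
        while ($n % $q == 0) { push @primes, $q; $n /= $q; }
    }
    push @primes, $n if $n > 1;
    my @big = grep { $_ > $B1 } @primes;
    return @big == 0 || (@big == 1 && $big[0] <= $B2);
}
# 1103 divides M29; 1103 - 1 = 2 * 19 * 29, so B1 >= 19 (or B2 >= 19) suffices.
for my $bounds ([5, 5], [5, 20], [19, 19]) {
    printf "P-1 with B1=%d, B2=%d %s find 1103 | M29\n", @$bounds,
        p1_reachable(1103, 29, @$bounds) ? "can" : "cannot";
}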

A false factor will be quickly determined to be false. The PrimeNet server checks that each reported factor divides the Mersenne number for which it is reported, by performing the TF computation for that single reported factor. Confirming a factor is very many times faster than searching for one. The PrimeNet server also checks whether a submitted factor is itself prime or composite. False factors can be generated in mfaktc, and probably other software, by unreliable hardware, including memory errors on GPUs, or other error sources. Here's what that may look like.
Code:
[Mon May 28 01:18:34 2018]
UID: Kriesel/dodo-gtx480-0, M329000033 has a factor: 38814612911305349835664385407 [TF:80:81:mfaktc 0.20 barrett87_mul32_gs] 
[Mon May 28 05:40:46 2018]
UID: Kriesel/dodo-gtx480-0, M329000033 has a factor: 38814612911305349835664385407 [TF:80:81:mfaktc 0.20 barrett87_mul32_gs]
[Mon May 28 05:44:16 2018]
UID: Kriesel/dodo-gtx480-0, M329000033 has a factor: 38814612911305349835664385407 [TF:80:81:mfaktc 0.20 barrett87_mul32_gs]
[Fri Jun 01 01:02:12 2018]
UID: Kriesel/dodo-gtx480-0, no factor for M331000037 from 2^79 to 2^80 [mfaktc 0.20 barrett87_mul32_gs]
[Sat Jun 02 06:57:35 2018]
UID: Kriesel/dodo-gtx480-0, M331000037 has a factor: 38814612911305349835664385407 [TF:80:81:mfaktc 0.20 barrett87_mul32_gs]
[Sat Jun 02 11:45:02 2018]
UID: Kriesel/dodo-gtx480-0, M331000037 has a factor: 38814612911305349835664385407 [TF:80:81:mfaktc 0.20 barrett87_mul32_gs]
[Sun Jun 03 00:31:23 2018]
UID: Kriesel/dodo-gtx480-0, M331000037 has a factor: 38814612911305349835664385407 [TF:80:81:mfaktc 0.20 barrett87_mul32_gs]
The reported factor occurs sometimes for a variety of exponent/TF levels. It is easily determined to be wrong. It is a composite factor for M3,321,928,619 and is included as part of mfaktc's selftest routine. It corresponds to kp = 19407306455652674917832192703 = 3^6 × 31081 × 65381 × 3943673 × 3321928619. In at least some cases, its occurrence is related to Windows TDR events.
See the mfaktc bug and wish list for more details.
Here's a short perl program to check a factor; alter to suit the exponent and reported factor.
Code:
use Math::BigInt;     # arbitrary-precision integers (core perl module)
my $f = Math::BigInt->new('2056121949903925392617977');  # the factor being confirmed, as a decimal string
my $b = 87357233;     # the exponent of the Mersenne number for which it is reported as a factor
my $m = Math::BigInt->new('2');
$m->bmodpow($b, $f);  # m = 2^b mod f; f divides 2^b - 1 iff m == 1
                      # see https://perldoc.perl.org/5.14.1/Math/BigInt.html#bmodpow()
if ($m == 1) { print "M$b has factor $f\n"; }
else { $m--; print "M$b/$f leaves nonzero remainder $m\n"; }
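
As a worked example with the values from this post, the same bmodpow approach confirms that the recurring bogus value reassembles from the factorization above and divides M3321928619, but not the exponents it was reported against:
Code:
use Math::BigInt;
my $f = Math::BigInt->new('38814612911305349835664385407');
# k = 3^6 * 31081 * 65381 * 3943673 and p = 3321928619, from the factorization above
my $k = Math::BigInt->new(3)->bpow(6)->bmul(31081)->bmul(65381)->bmul(3943673);
my $ok = ($k * 3321928619 * 2 + 1) == $f;
print "2kp+1 ", $ok ? "matches" : "does not match", " the reported factor\n";
for my $b (3321928619, 329000033, 331000037) {
    my $m = Math::BigInt->new(2)->bmodpow($b, $f);   # 2^b mod f
    print $m == 1 ? "$f divides M$b\n" : "$f does not divide M$b\n";
}
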
Reporting errors may consist of duplicate reports of a given TF bit level and exponent combination, failure to report, or some sort of transmission or transcription error. Duplicate reports do not harm or slow project progress. Failure to report a factor found has the same substantial impact as a missed factor, plus the TF is likely to be reassigned.

Some aspects of result reporting essentially operate on the honor system, which has worked very well for the most part for thousands of participants over many years. It's rare, but occasionally someone chooses to try to be disruptive, or to gain computing credit for work they have not done. There are many ways of dealing with such individuals; some require server administrator effort. One of the aspects they try to exploit is that there is currently no verification in the TF no-factor-found report that the work described was actually performed. Robert Gerbicz has discovered an approach which may allow a modest-overhead proof of work to be incorporated straightforwardly into the TF computation for the no-factor-found case. It is described at https://www.mersenneforum.org/showpo...7&postcount=30
and discussed at https://www.mersenneforum.org/showpo...7&postcount=34, https://www.mersenneforum.org/showpo...9&postcount=40, https://www.mersenneforum.org/showpo...2&postcount=41, https://www.mersenneforum.org/showpo...5&postcount=44, https://www.mersenneforum.org/showpo...8&postcount=49, https://www.mersenneforum.org/showpo...9&postcount=51, https://www.mersenneforum.org/showpo...1&postcount=52, https://www.mersenneforum.org/showpo...7&postcount=58, through post 64; https://www.mersenneforum.org/showpo...1&postcount=67, https://www.mersenneforum.org/showpo...1&postcount=71, https://www.mersenneforum.org/showpo...7&postcount=74, https://www.mersenneforum.org/showpo...&postcount=122, https://www.mersenneforum.org/showpo...&postcount=123
There's another discussion of TF verification of work done in https://mersenneforum.org/showthread.php?t=25493

Assignment IDs (AIDs) help detect some transmission or transcription errors. Exponents or bit levels that do not match what was assigned for that AID will be detected by the server, and error messages issued.

The observed TF error rates and their impact are low enough that it is better for total project throughput to accept them than to double-check TF. Duplicate TF of a bit level / exponent combination is a waste. It is recorded in the PrimeNet database, but computing credit is not given, to encourage efficiency.

The rates of TF error are, as far as I know, not well characterized. They could be roughly estimated from primality-test error rates, by assuming errors occur at a constant rate over run time, although the justification for that is weak, since TF and primality testing are very different computations. Since TF run times are short compared to primality test run times, such an estimate would necessarily be low: roughly 2% x 2.5% = ~0.05% for the full set of TF levels for a given exponent.


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-12-26 at 13:38 Reason: updated for prp&proof
2019-03-03, 00:23   #3
kriesel

P-1

P-1 errors are of multiple types. Their impact is slight.
1. Missed factor
2. False factor
3. Reporting error
4. Malicious reports

A missed factor costs additional P-1 effort if the miss occurs in stage one of a two-stage run, since stage two is then performed. A missed P-1 factor, if no other factor is found in the remainder of the P-1 run, also leads to spending the effort of two or more primality tests.

A false factor will be quickly determined to be false. The PrimeNet server checks that each reported factor divides the Mersenne number for which it is reported, by performing the TF computation for that single reported factor. Confirming a factor is very many times faster than searching for one. The PrimeNet server also checks whether a submitted factor is itself prime or composite. CUDAPm1 has been observed to occasionally report a small false factor such as 3, 5, or 7, not of the necessary form 2kp+1 and 1 or 7 mod 8. This may be a bug in the GCD execution. It has been observed to occur in stage 1 or stage 2. Rerunning with the same inputs on the same GPU and CPU did not reproduce the error. Multiple occurrences were observed on exponents in the range M86.1M to M86.3M.
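
Such obviously wrong factors can be screened cheaply before even attempting the divisibility check: any factor f of Mp, with p an odd prime, must have the form f = 2kp+1 and be congruent to 1 or 7 mod 8. A minimal sketch; the exponent and suspect "factor" below are example values chosen for illustration:
Code:
use Math::BigInt;
# Any factor f of M_p must satisfy f == 1 (mod 2p) and f == 1 or 7 (mod 8).
my $p = Math::BigInt->new('82589933');   # example exponent
my $f = Math::BigInt->new('7');          # suspect reported factor, e.g. a bogus small prime
my $mod8  = $f->copy->bmod(8);
my $mod2p = $f->copy->bdec->bmod(2 * $p);   # (f - 1) mod 2p must be 0
if (($mod8 == 1 || $mod8 == 7) && $mod2p == 0) {
    print "$f has the required form for a factor of M$p\n";
} else {
    print "$f cannot be a factor of M$p: wrong form\n";
}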

Reporting errors may consist of duplicate reports of a given P-1 bounds set and exponent combination, failure to report, or some sort of transmission or transcription error. Duplicate reports do not harm or slow project progress. Failure to report a factor found has the same substantial impact as a missed factor, plus the P-1 factoring is likely to be reassigned.

Some aspects of result reporting essentially operate on the honor system, which has worked very well for the most part for thousands of participants over many years. It's rare, but occasionally someone chooses to try to be disruptive, or to gain computing credit for work they have not done. There are many ways of dealing with such individuals; some require server administrator effort. One of the aspects they try to exploit is that there is currently no verification in the P-1 or TF no-factor-found report that the work described was actually performed. Robert Gerbicz has discovered an approach which may allow a modest-overhead proof of work to be incorporated straightforwardly into the TF computation for the no-factor-found case; see https://www.mersenneforum.org/showpo...7&postcount=30 and the TF post above for many links to subsequent discussion of TF verification.

This has not yet, to my knowledge, been implemented in any factoring code. No definite equivalent for P-1 is known, although it may be possible to adapt the approach to P-1 or parts of it. See https://www.mersenneforum.org/showpo...&postcount=127

Assignment IDs (AIDs) help detect some transmission or transcription errors. An exponent not matching what was assigned for that AID will be detected by the server, and error messages issued. Anomalous P-1 bounds could also be detected.

The observed P-1 error rates and their impact are low enough that it is better for total project throughput to accept them than to double-check all P-1. Duplicate P-1 work on a bounds-set / exponent combination is a waste. It may be recorded in the PrimeNet database, but I think computing credit is not given, to encourage efficiency. It may be worthwhile to rerun P-1 if an obviously wrong small factor is produced.

The rates of P-1 error are, as far as I know, not well characterized. They could be roughly estimated from primality-test error rates, assuming errors occur at a constant rate over run time: ~2% x 2.5% = ~0.05%. If the rate is proportional to memory footprint, a sizable multiplier should be applied to that. I think the complexities of CUDAPm1 make its error rate higher; its bug list is nontrivial.

Known implementations of P-1 factoring for GIMPS contain little in the way of error detection or correction. There are possibilities for adding error detection, at some increase in computation time. Gpuowl V7 has more P-1 error detection than most if not all others.

The standard method of checking a hardware and software combination for reliable P-1 factoring operation is to attempt to reproduce factors on exponents with known factors. It's best to use exponents with the same FFT length as what will be run on unfactored exponents, or as close as possible if no test value is available. See the list of selected test exponent and factor combinations at https://www.mersenneforum.org/showpo...8&postcount=31


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-12-26 at 13:41 Reason: minor edits
2019-03-03, 00:52   #4
kriesel

Primality testing

How reliable are the major Mersenne testing codes, and how is it achieved? What more might be done?

There are three types of errors in primality testing we would like to avoid, and one we generally accept:
  1. A false positive: indicating that the Mersenne number corresponding to a given prime exponent is prime, when it is actually composite. This sounds serious, but is very likely to be detected quickly in the verification process applied to any suspected prime (multiple parallel runs by multiple trusted people on multiple hardware architectures with multiple software packages). There are known software issues that generate this case at much higher than random probability.
  2. A false negative: mistaking a Mersenne prime as composite. This has serious consequences. It both reduces achievement of the GIMPS goal of finding Mersenne primes, and distorts the perceived or empirical distribution of Mersenne primes that number theorists use to inform their study of them. Its impact is limited by (eventually) testing all exponents that survive factoring at least twice.
  3. A false residue: correctly concluding a composite Mersenne number is composite, but via a computation that contained an error sufficient to give an incorrect final residue. Since nearly all Mersenne numbers are composite, probability heavily favors this type. These are less serious; they are found when a second run does not match the first.
  4. Recoverable errors, which are recovered from during the computation so that they do not affect the final result. We'd like to moderate both their frequency and their throughput impact. Some level of recoverable error is accepted, to the extent that it enables higher aggregate correct throughput than reducing or avoiding them would allow.
Looked at another way, there are two categories of error: those we find, and those we don't.

GIMPS uses two types of primality tests. The Lucas-Lehmer sequence is understood to be a conclusive primality test for Mersenne numbers when performed correctly with a properly selected seed value. There is a small set of seed values suitable for any prime exponent; GIMPS convention is to use the seed 4. The Fermat pseudoprime test (PRP) is not conclusive regarding primality; it indicates whether a candidate is definitely composite, or extremely likely but not certainly prime. Usually the seed (base) 3 is used. Verification with multiple LL tests, and probably also a pseudoprime double check, with different users, hardware, and software, would promptly follow a Fermat pseudoprime test indicating "probably prime".
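
As a concrete illustration of the LL recurrence with seed 4, here is a toy sketch usable only for tiny exponents; real implementations use FFT-based multiplication mod 2^p - 1, shift offsets, and the error checks described below.
Code:
use strict;
use warnings;
use Math::BigInt;
# Lucas-Lehmer: M_p = 2^p - 1 is prime iff s_(p-2) == 0, where s_0 = 4 and
# s_(i+1) = s_i^2 - 2 mod M_p.  Toy version for tiny odd prime exponents only.
sub ll_is_prime {
    my ($p) = @_;
    my $m = Math::BigInt->new(2)->bpow($p)->bdec();   # M_p
    my $s = Math::BigInt->new(4);                     # GIMPS-conventional seed
    $s = ($s * $s - 2) % $m for 1 .. $p - 2;
    return $s == 0;
}
printf "M%d is %s\n", $_, ll_is_prime($_) ? "prime" : "composite"
    for (3, 5, 7, 11, 13);    # expect prime, prime, prime, composite, prime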

Lucas-Lehmer primality tests have produced, as of 18-20 Dec 2018, the following "interesting" bad residues that occurred near minimum or maximum value:

LL seed 4 64-bit residues near zero regarded as bad: https://www.mersenneforum.org/showpo...&postcount=142
Code:
Residue                          Count
0000000000000000    311 (known issue with CUDALucas < v2.06 and other software)
0000000000000002     14 (known issue with CUDALucas < v2.06 and other software)
0000000000000003      1 attributed to V17 prime95 shift bug
0000000000000004      1 cause uncertain
000000000000006C      1 attributed to V17 prime95 shift bug
0000000000000269      1 attributed to V17 prime95 shift bug
total            329
LL seed 4 64-bit residues near ffffffffffffffff regarded as bad: https://www.mersenneforum.org/showpo...&postcount=150
Code:
exponent      Partial Residue     Status
 3370013     FFFFFFFFFFFFFFFE      Bad
37614691     FFFFFFFFFFFFFFFD      Bad
47359579     FFFFFFFFFFFFFFFF      Bad
64847711     FFFFFFFF80000000      Bad
67077133     FFFFFFFFFFFFFFFD      Bad
67151753     FFFFFFFFFFFFFFFD      Bad
68834723     FFFFFFFFFFFFFFFD      Bad
81857519     FFFFFFFF80000000      Bad
81857537     FFFFFFFF80000000      Bad
81860447     FFFFFFFF80000000      Bad
81860479     FFFFFFFF80000000      Bad
88283761     FFFFFFFF80000000      Bad
88283807     FFFFFFFF80000000      Bad
So, by residue, we have counts:
Code:
Residue             Count
ffffffffffffffff        1  (residue is equivalent to 0)
fffffffffffffffe        1
fffffffffffffffd        4  (residue is equivalent to -2; known issue with algebra, and CUDALucas < v2.06)
ffffffff80000000        7  (known issue with CUDALucas; 6 of 7 were from the same user in 2018)
total                  13

The total number of PRP primality tests done to date is not nearly as large as of LL tests, and the highly effective Gerbicz error check prevents most errors from persisting, so error statistics for PRP are thin. From a PRP results export available 2019-05-13 at https://www.mersenneforum.org/showpo...1&postcount=11, a dash of perl after a residue sort in OpenOffice yielded:
1 header record
9945 exponent records
0 residue-64s shared among multiple exponents (no duplication of res64 values)

Duplication would be highly unlikely by random chance in such a small sample sprinkled over a 2^64 res64 space, so if duplication occurred, it would be indicative of some sort of error. Inspection of the low and high ends showed no special or suspicious residues (0, 3, ff...ffc, ff...ff), just as you'd expect if the Gerbicz error check was working well.
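
The duplicate scan itself needs only a few lines of perl. A minimal sketch, assuming a text export with one result per line containing the res64 as a 16-hex-digit field (the actual export layout may differ):
Code:
use strict;
use warnings;
# Count res64 values appearing in more than one record.  Assumes one result
# per input line containing a 16-hex-digit res64 field.
my %seen;
while (my $line = <>) {
    $seen{lc $1}++ if $line =~ /\b([0-9a-fA-F]{16})\b/;
}
my @dups = grep { $seen{$_} > 1 } keys %seen;
print scalar(@dups), " res64 values shared among multiple records\n";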

State indicators found were as follows
R Reliable (which means its Gerbicz check passed, but it is not yet verified)
S Suspect (had error codes attached)
U Unverified (one result for the exponent; not R or S)
V Verified (more than one result for the exponent; matched)
Perl counted 7111 reliable, 3 suspect, 1878 unverified, 953 verified, total 9945. Two of those 3 suspect are by the same user and have 0 offset. A suspect rate of 3 / 953 = 0.0031 is markedly better than the typical LL track record of ~2.0% bad results.

Some of the codes have ancestry in common. My crude diagram of the ancestry of the various codes is attached at https://www.mersenneforum.org/showpo...04&postcount=5 If there were an issue in common, present in a bit of shared "DNA", despite the best efforts of the authors and extreme care in using differing programs on differing hardware in multiple confirmations by multiple individuals, how could we know?

The ancestry-in-common question occurs on multiple levels.
  1. If there is any mersenne-search-specific code in common, that code may contain a flaw;
  2. If there is reuse of libraries with issues, such as completely independently written programs that nevertheless both use a library such as the NVIDIA CUDA fft library, or the gwnum routines used in many packages;
  3. Commonality of conceptual error (writers of different software making the same mistake or invalid assumption);
  4. A flaw in the hardware design on which the programs run;
  5. An underlying flaw in the mathematical basis for the programs. (Imagine if the Lucas-Lehmer test were not actually a conclusive primality test, but only _almost_ certain, a pseudoprime test that gave a false primality indication for 0.1 ppm of exponents. Or if there were some sort of issue with the irrational base discrete weighted transform that gave a similar rare false-positive effect.)
We can rerun new software on inputs for which we know or think we know the correct results. Those results are obtained from earlier software, or in small-input cases, from other methods.
If they all pass, it does not necessarily mean the existing codes are without flaw.
If one fails, it does not necessarily mean the existing codes are flawed; it could be a transient hardware issue.

There's a sliver of a chance of a computation error occurring outside the loop of iterations that are guarded for accuracy by the Jacobi check or Gerbicz check. Bugs have been identified outside the checking loop.
No disrespect to any of the authors (or users, or hardware designers for that matter; some tasks are just very hard). I've fixed code that failed nonreproducibly at low rates. Bugs at ppb or lower frequency can be hard to show exist at all, much less identify and resolve. Current assignments can take of order 10^18 core clocks to complete.

The output of programs, and the best understandings and conjectures of number theorists, are compared.

In the case of prime95, which can process not only Mersenne number computations but also Proth numbers, Fermat numbers, etc., or of its underlying gwnum routines that are also used in other software, an error detected in those other computations might be revealed to lie in FFT or other code common to the various computations. The more eyes on it and the more cases checked, the better for the code's reliability.

Over time, and with a basis in number theory, some interim results (indicated by the low 64 bit values) have been shown to sometimes occur when they should not, and checks for specific incorrect early residues have been incorporated into most of the various software applications. Some of these problem residues are application-specific, not merely algorithm-specific.

Most of the programs use
  1. either the conclusive Lucas-Lehmer primality test or a (usually base 3) pseudoprime test,
  2. standard libraries or carefully developed and tested math routines for performing very efficient computations,
  3. pseudorandom varied shift (offset), as a means of having different numerical error occur in different runs for the same exponent by the same algorithm,
  4. double precision floating point for performance, at the cost of some imprecision, addressed by various error checks and correction methods,
  5. roundoff error checking,
  6. checking for sum of inputs ~ sum of outputs, within a set tolerance,
  7. interim save files, for resumption from an earlier point if serious error is detected,
  8. some sort of checksum or CRC stored in the save files, and checked upon subsequent read,
  9. output of interim and final 64-bit residues, for comparison, and determination of at what point a given computation went wrong (see the sketch after this list),
  10. known-bad-interim-residue checks, which are algorithm-dependent or application-dependent,
  11. logging for later comparison and diagnosis.
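
As a toy illustration of item 9, here is a sketch that prints interim res64 values (the low 64 bits of the full residue) from a base-3 squaring chain on a small modulus, for later cross-program comparison; the modulus and reporting interval are arbitrary choices for illustration.
Code:
use strict;
use warnings;
use Math::BigInt;
# Print interim res64 values (low 64 bits of the full residue) from a base-3
# squaring chain mod M_127.  Toy illustration of interim-residue logging only.
my $p    = 127;
my $N    = Math::BigInt->new(2)->bpow($p)->bdec();
my $mask = Math::BigInt->new(2)->bpow(64)->bdec();
my $s    = Math::BigInt->new(3);
for my $i (1 .. $p) {
    $s = ($s * $s) % $N;
    if ($i % 32 == 0 || $i == $p) {
        (my $r = $s->copy->band($mask)->as_hex) =~ s/^0x//;
        printf "iteration %3d res64 %016s\n", $i, $r;
    }
}
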
Some programs additionally use the periodic Jacobi test, which detects LL computation errors with 50% probability, or the Gerbicz error check, which detects errors in the base-3 pseudoprime computation with nearly 100% probability. Jacobi symbol check: https://mersenneforum.org/showpost.p...3&postcount=30 Gerbicz check: https://www.mersenneforum.org/showpo...1&postcount=88

To my knowledge, only prime95/mprime and gpuOwL had incorporated the Jacobi test or Gerbicz PRP check as of early March 2019; Mlucas has since added them. The Gerbicz check has quite low overhead, costing only about 0.2% of run time at a one-million-iteration check interval (Gerbicz block size 1000). https://www.mersenneforum.org/showpo...0&postcount=31
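
To make the mechanism concrete, here's a toy sketch of the Gerbicz check (my illustration, not any production implementation's code). It relies on the identity d_(k+1) = 3 * d_k^(2^L) mod N, where d_k is the running product of every L-th residue of the base-3 squaring chain. The toy verifies every block; real implementations verify every L blocks, so the L extra squarings amortize to the small overhead described above.
Code:
use strict;
use warnings;
use Math::BigInt;
my $p = 127;                                     # toy exponent
my $N = Math::BigInt->new(2)->bpow($p)->bdec();  # modulus M_p
my $L = 10;                                      # Gerbicz block size (toy value)
my $u = Math::BigInt->new(3);                    # PRP residue, squared each iteration
my $d = $u->copy;                                # running check product, d_0 = s_0 = 3
for my $block (1 .. 5) {
    my $dprev = $d->copy;
    $u = ($u * $u) % $N for 1 .. $L;             # one block of L squarings
    $d = ($d * $u) % $N;                         # d_(k+1) = d_k * s_((k+1)L)
    # Gerbicz identity: d_(k+1) == 3 * d_k^(2^L) (mod N); a mismatch means an
    # error crept into the block, and the run rolls back to a known-good point.
    my $chk = $dprev;
    $chk = ($chk * $chk) % $N for 1 .. $L;
    $chk = ($chk * 3) % $N;
    die "Gerbicz mismatch in block $block: roll back\n" unless $d == $chk;
}
print "all Gerbicz checks passed\n";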

A more recent development is the PRP proof and verification (certificate) process. Using MD5 hash and other safeguards, this provides proof of correct completion of a PRP test and protects against falsified results. Gpuowl and prime95 now include this feature. It is being added to Mlucas. There is an analogous proof proposed by R. Gerbicz for the LL test, which requires more interim residues and data storage, and has not yet been implemented by anyone in any application.

Another layer of error checking, detection, and correction occurs at the project / PrimeNet server level.

The GIMPS project records the 64-bit residues returned for composite primality tests. At least two tests are done per unfactored Mersenne number (except for a PRP first test with successful proof, which is more reliable). Additional tests are done if the first two are of different types, or the first n are of the same type but do not match, until two results of matching type and residue, produced by different users, are obtained.

Some programs maintain and report error counts. Certain errors substantially increase the probability of a wrong final residue. Those results are flagged for early checking. (See the strategic double check and triple check thread on mersenneforum.org.)

Reliability of a given set of hardware declines over time. Users can help maintain the reliability of their hardware by avoiding overclocking and environmental extremes, and by periodically checking logs, testing their hardware with memory test programs, running double checks, and running the self-tests incorporated in the software.

Users can help maintain and improve the accuracy of their own results and the project as a whole by logging runs, reviewing the logs for anomalies, and reporting anomalies seen, to the software authors, to the GIMPS community, and/or to me for inclusion in the bug and wish lists I am maintaining for gpu software.

If we assume the hardware error rate is roughly constant per unit time, averaged over the fleet, then as the project moves to larger exponents over time we can expect the error rate per primality test to increase. Addition of recently identified error checks allowing restart from a known-good save file will help counter this effect, as will switching from LL to PRP with its excellent Gerbicz error check. It is possible that other error checks or periodic self-test will be added in the future.

The current error rate (in the absence of substantial hardware issues, software misconfiguration, or certain logged error types) is around 2% per primality test, so about 4% per double-tested exponent. The error rate rises to around 40% for prime95 LL tests with illegal sumout errors logged. The error rate of PRP3 with the Gerbicz check is small by comparison, but nonzero; hardware errors or bugs affecting software operation "outside" the Gerbicz-checked blocks of iterations can occur and affect the results, and have occurred since the introduction of the Gerbicz check.

https://www.mersenne.org/various/math.php indicates an error rate per primality test of 1.5% when no serious errors (excluding illegal sumout as a serious error) are reported; 50% when a serious error is reported.

Historically, primality test error rate varied considerably, up to nearly 4% per test. https://www.mail-archive.com/mersenn.../msg07476.html

See also https://www.mersenneforum.org/showpo...3&postcount=19 for more discussion of error rates. This post is already rather long.

https://www.mersenneforum.org/showpo...&postcount=102 late 2016 error rate charts
https://www.mersenneforum.org/showpo...&postcount=104 late 2017 error rate charts
https://www.mersenneforum.org/showpo...&postcount=105 late 2018 error rate charts
https://www.mersenneforum.org/showpo...&postcount=111 late 2019 error rate charts
(Thanks Patrik for all those!)

You can read a bit about the history circa 1999 of GIMPS bugs and QA at https://www.mersenneforum.org/showth...=23877&page=13

Additional error checks, after discovery, that save more than they cost, and are otherwise deemed worthwhile, may be implemented by the various software titles' authors or maintainers at their discretion.


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2020-12-26 at 13:45 Reason: misc edits; added late 2019 error rate charts link
2019-05-21, 20:51   #5
kriesel

Hardware

For what a gpu's memory going bad looks like, and how to detect it, see the GPU RIP thread https://www.mersenneforum.org/showthread.php?t=23472


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1

Last fiddled with by kriesel on 2019-11-19 at 06:40