[QUOTE=bsquared;172125]Right now, yafu stores A coefficients as a bigint, while msieve stores the indices into the factor base of the factors of A. Also, I think I use an N on the first line and msieve doesn't. I believe relations are stored the same.
So they are not compatible as is, but the differences are slight. I guess there's no reason for them not to be, so I can try to make them compatible on the next revision of yafu.[/QUOTE]I did replace the first two lines of the yafu dat file with the first two lines from msieve.dat and started msieve, but I get errors on every line. I did this because the 32-bit yafu failed to find factors of a c105 and aliqueit started all over again with msieve. Using the same relations, the 64-bit version of yafu did find the factors.
[quote=smh;172203]I did replace the first two lines of the yafu dat file with the first two lines from msieve.dat and started msieve, but I get errors on every line.
I did this because the 32-bit yafu failed to find factors of a c105 and aliqueit started all over again with msieve. Using the same relations, the 64-bit version of yafu did find the factors.[/quote] What's your gnfs_cutoff? I would have thought GNFS would be faster for a C105.
[QUOTE=10metreh;172205]What's your gnfs_cutoff? I would have thought GNFS would be faster for a C105.[/QUOTE]The PC has no Perl interpreter, so I can't run GGNFS on that machine.
|
Quick question about msieve poly selection. After the time limit is reached, does it read through the poly dat file to figure out which was the best polynomial or does it keep the best one to date in memory?
I have a system that lost its NFS (network filesystem) connection to the drive I was writing to. If msieve keeps its best in memory and will write that out at the end, I should still be OK, but if it parses the file I will likely have to redo the search.
Jeff: the save file is never read at all; it's there for reference only. The best poly found is kept in memory throughout the run. Note that if msieve already found a good poly, polynomial selection will not restart unless you rename the .fb file.
Ben: the big problem with msieve's current format is that it's stateful. You get one A value and then a block of relations that depend on it. As long as all you do is concatenate relation files it's fine, but many users have done things like make their distributed clients write to the same savefile, and in that case two clients colliding in the savefile will invalidate hundreds of relations instead of just one. Also, the format itself is quite wasteful: you don't need to know that the polynomial value is negative, since you'll have to compute it and will find out anyway. Factors < 256 don't need to be stored, and the full multiplicity of each factor isn't needed because that will get discovered too. Finally, storing factor base offsets in order means all the sieving clients have to be using the same version of msieve.
[quote=jasonp;168335]The notion of optimality is difficult to apply here; NFS poly selection is a combinatorial optimization problem, and the search space is much too large to have any hope of getting completely explored.[/quote]
Wouldn't this be a good candidate for searching via Genetic Algorithm, which is excellent at exploring large search spaces? |
[quote=jasonp;172248] Ben: the big problem with msieve's current format is that it's stateful. You get one A value and then a block of relations that depend on it. As long as all you do is concatentate relation files it's fine, but many users have done things like make their distributed clients write to the same savefile, and in that case two clients colliding in the savefile will invalidate hundreds of relations, instead of just one. [/quote]
I agree it's undesirable, but it's also easy to work around. A large distributed run pretty much begs for a script anyway, so why not increment the output savefile name for each instance of msieve in the script? *shrug* Probably the users aren't entirely aware of the consequences. To make the savefile non-stateful, maybe store all 'A' coefficients in a separate msieve.A file, indexed by a hash of the coefficient or something? Then each relation in the savefile would need to store that hash and the 'B' index. The filtering routine would also need to be smart enough to resolve hash collisions. This would add back some file size that is saved by the ideas below... [quote=jasonp;172248] Also, the format itself is quite wasteful: you don't need to know that the polynomial value is negative, since you'll have to compute it and will find out anyway. Factors < 256 don't need to be stored, and the full multiplicity of each factor isn't needed because that will get discovered too. [/quote] Yes, all good ideas. [quote=jasonp;172248] Finally, storing factor base offsets in order means all the sieving clients have to be using the same version of msieve. [/quote] To be truly version independent, wouldn't you need to store the primes themselves, since the factor base could change from version to version? The fact that they are in order makes it easier to merge them with the factors of A, but otherwise wouldn't be necessary, unless I'm forgetting something.
Your suggestions are exactly what I'm planning: every A value gets a small hash, and relations specify which hash they correspond to, with the relation-reading code resolving collisions. And yes, version independence means relations should store their primes, in arbitrary order, with the large primes somewhere in the list and not separated out like they are now.
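As a sketch of how the hash-keyed format might work (hypothetical Python, not msieve's actual code; the 8-hex-digit hash width and the 'msieve.A' name are made up for illustration):

```python
import hashlib

def a_hash(a_coeff):
    """Short hash identifying an 'A' coefficient (hypothetical 32-bit scheme)."""
    return hashlib.sha256(str(a_coeff).encode()).hexdigest()[:8]

# A separate file (say 'msieve.A') maps hash -> full A value:
a_file = {a_hash(a): a for a in (1234567890123, 9876543210987)}

# Each relation then carries (a_hash, b, primes) and is self-contained,
# so one corrupted line can no longer invalidate a whole block of relations:
relation = (a_hash(1234567890123), 17, [2, 3, 5, 101, 4500017])

# A reader resolves the hash independently of the relation's position in
# the file; the filtering code would have to resolve the rare collision
# where two A values share the same short hash.
a = a_file[relation[0]]
```

The point of the short hash is that it costs only a few bytes per relation while removing the ordering dependence between relations and A-value headers.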
Regarding the use of general combinatorial techniques like genetic search and simulated annealing: these have been described as 'the second best way of solving almost any problem', second to custom methods that exploit the underlying structure of the problem. Nevertheless, right now the method used to select polynomials with good size and root properties is to optimize for good size, then optimize for good roots, then fix the size that was messed up by the good roots. That doesn't sound very satisfying; it would be neat to optimize for size and roots simultaneously. But separating the search into two halves allows the fast part of the search to come first and eliminate most candidates, leaving the slow part of the search to find the best one. I don't know how to change things to make a better search process, but even with Kleinjung's improved algorithm we only explore a small fraction of the available search space. Many have wondered about a better way.
Just a question:
When polynomial search was interrupted (e.g. due to a power outage) - is there an option just to read the outputfile and give the best polynomial which is stored in the outputfile, without doing any further poly search? |
Afraid not; if you Ctrl-C out of the run then the best poly found so far is automatically saved, but that doesn't happen if there's a crash. You have to manually find the best E value in the file.
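For anyone stuck doing that by hand, a few lines of script can pull out the largest E value. This is only a sketch and assumes each candidate is scored by a line of the form "combined = <E>", like the scores msieve logs; adjust the pattern to your file's actual layout:

```python
import re

def best_e(lines):
    """Return the largest 'combined' (Murphy E) score seen, or None."""
    best = None
    for line in lines:
        m = re.search(r"combined\s*=\s*([\d.]+e[-+]?\d+)", line)
        if m:
            e = float(m.group(1))
            if best is None or e > best:
                best = e
    return best

sample = [
    "skew 11007.57, size 1.096739e-009, alpha -5.321200, combined = 2.784860e-009",
    "skew 136315.64, size 8.633867e-012, alpha -6.832790, combined = 2.027516e-010",
]
best = best_e(sample)   # larger E is better, so the first line wins here
```

In practice you would iterate over `open("worktodo.p")` (or whatever your save file is called) instead of the `sample` list.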
|
Store the leading coefficient range searched?
I have used the polynomial search capabilities several times by restarting and specifying the last leading coefficient as the place to start the new search.
Fortunately, I have always remembered, or been able to scroll up to, the previous upper bound of leading coefficients examined. A record in the log of which coefficient ranges have been searched would help, for those obsessive-compulsives who think the magic polynomial lies just a few minutes past the last coefficient searched. |
the known bug ...Laguerre?
Another core dump debug case, just in case; very small:
[CODE]
N 5571256368958985423567427396399955429949048328116611460580828800356560407613375067108315353369597147516739093
SKEW 1.98
A5 44
A4
A3
A2
A1
A0 1325
R1 1
R0 -50000000000000000000000
FAMAX 580000
FRMAX 580000
SALPMAX 33554432
SRLPMAX 33554432
[/CODE]
This is x86_64 Linux. 1.41 crashes with a segv; 1.36 runs through (but the SKEW line has to be removed).
Yes, this is likely a rootfinder problem. SNFS polynomials tend to have floating point roots that all have nearly the same magnitude, and apparently this throws off the Laguerre rootfinder. As a workaround, if you just want the NFS postprocessing to run then comment out the call to analyze_one_poly in gnfs/gnfs.c
v1.42 will have a much more sophisticated rootfinder. |
Root cellar
[QUOTE=jasonp;172704]
v1.42 will have a much more sophisticated rootfinder.[/QUOTE] Are you implementing Jenkins-Traub? |
Brian Gladman has kindly produced his own complex- and real-valued Jenkins-Traub rootfinders by converting the original Fortran to C and applying heavy cleanup. I'm in the process of applying much more cleanup to the complex version, so that it can find roots to (at most) double precision. Then I'll switch to Laguerre's or Newton's method to get down to quad-precision accuracy. J-T is not fooled by multiple roots or multiple roots with the same magnitude, so hopefully these annoying crashes will go away.
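The polish step described above, taking a double-precision root estimate and refining it with Newton's method at higher precision, can be sketched as follows (illustrative Python using the decimal module; msieve itself does this in C with its own extended-precision arithmetic):

```python
from decimal import Decimal, getcontext

def horner(coeffs, x):
    """Evaluate a polynomial (coefficients highest degree first) at x."""
    acc = Decimal(0)
    for c in coeffs:
        acc = acc * x + c
    return acc

def polish_root(coeffs, x0, digits=50):
    """Refine a double-precision root estimate to roughly 'digits' digits."""
    getcontext().prec = digits + 5
    c = [Decimal(v) for v in coeffs]
    n = len(c) - 1
    dc = [c[i] * (n - i) for i in range(n)]   # derivative coefficients
    x = Decimal(repr(x0))
    for _ in range(200):                      # Newton converges long before this
        dfx = horner(dc, x)
        if dfx == 0:
            break                             # multiple root: Newton stalls
        step = horner(c, x) / dfx
        x -= step
        if step == 0:
            break
    return +x                                 # round to the working precision

# Polish a crude double-precision estimate of sqrt(2), a root of x^2 - 2:
root = polish_root([1, 0, -2], 1.4, digits=50)
```

Each Newton step roughly doubles the number of correct digits, which is why a double-precision starting value from J-T is plenty to reach quad precision in two or three iterations.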
|
Is there a vague ETA for 1.42 (and when is version 2 coming along)?
|
Real life is intervening again, no idea. Sheesh, didn't I just pay bills [i]last[/i] month?
|
"Yes, but what have you done for us [I]lately[/I]!"
:popcorn: |
[QUOTE=Batalov;172876]"Yes, but what have you done for us [I]lately[/I]!"
:popcorn:[/QUOTE] That's not fair! :smile::smile: Luigi |
For the record, I was referring to the need to have a day job to pay bills that look familiar :)
|
SQRT failed
I just got an "algebraic square root failed" error in the first dependency of a c102 GNFS factorization. The second one succeeded:
[code]
Sun May 17 14:35:25 2009  Msieve v. 1.41
Sun May 17 14:35:25 2009  random seeds: e7b39168 68c42838
Sun May 17 14:35:25 2009  factoring 853929635599422763695461633612823384788514896160663790367275722570272427322586836076040691299795492743 (102 digits)
Sun May 17 14:35:27 2009  searching for 15-digit factors
Sun May 17 14:35:29 2009  commencing number field sieve (102-digit input)
Sun May 17 14:35:29 2009  R0: -49477531833368853558
Sun May 17 14:35:29 2009  R1: 35633560421
Sun May 17 14:35:29 2009  A0: -225941343291346407191225
Sun May 17 14:35:29 2009  A1: -135008887690178691988
Sun May 17 14:35:29 2009  A2: 17514637356596807
Sun May 17 14:35:29 2009  A3: 2916207462008
Sun May 17 14:35:29 2009  A4: -97016682
Sun May 17 14:35:29 2009  A5: 2880
Sun May 17 14:35:29 2009  skew 11007.57, size 1.096739e-009, alpha -5.321200, combined = 2.784860e-009
Sun May 17 14:35:29 2009  
Sun May 17 14:35:29 2009  commencing relation filtering
Sun May 17 14:35:29 2009  commencing duplicate removal, pass 1
Sun May 17 14:36:59 2009  found 292946 hash collisions in 4845545 relations
Sun May 17 14:37:36 2009  added 649 free relations
Sun May 17 14:37:36 2009  commencing duplicate removal, pass 2
Sun May 17 14:37:53 2009  found 263960 duplicates and 4582234 unique relations
Sun May 17 14:37:53 2009  memory use: 40.3 MB
Sun May 17 14:37:53 2009  reading rational ideals above 1769472
Sun May 17 14:37:53 2009  reading algebraic ideals above 1769472
Sun May 17 14:37:53 2009  commencing singleton removal, pass 1
Sun May 17 14:39:23 2009  relations with 0 large ideals: 68426
Sun May 17 14:39:23 2009  relations with 1 large ideals: 520708
Sun May 17 14:39:23 2009  relations with 2 large ideals: 1471072
Sun May 17 14:39:23 2009  relations with 3 large ideals: 1745334
Sun May 17 14:39:23 2009  relations with 4 large ideals: 722405
Sun May 17 14:39:23 2009  relations with 5 large ideals: 23413
Sun May 17 14:39:23 2009  relations with 6 large ideals: 30876
Sun May 17 14:39:23 2009  relations with 7+ large ideals: 0
Sun May 17 14:39:23 2009  4582234 relations and about 4482739 large ideals
Sun May 17 14:39:23 2009  commencing singleton removal, pass 2
Sun May 17 14:40:54 2009  found 1995969 singletons
Sun May 17 14:40:54 2009  current dataset: 2586265 relations and about 2098891 large ideals
Sun May 17 14:40:54 2009  commencing singleton removal, pass 3
Sun May 17 14:41:47 2009  found 436832 singletons
Sun May 17 14:41:47 2009  current dataset: 2149433 relations and about 1632475 large ideals
Sun May 17 14:41:47 2009  commencing singleton removal, final pass
Sun May 17 14:42:35 2009  memory use: 35.5 MB
Sun May 17 14:42:35 2009  commencing in-memory singleton removal
Sun May 17 14:42:35 2009  begin with 2149433 relations and 1678596 ...
<some of the filtering snipped>
Sun May 17 14:43:43 2009  reduce to 797782 relations and 649522 ideals in 6 passes
Sun May 17 14:43:43 2009  max relations containing the same ideal: 21
Sun May 17 14:43:43 2009  relations with 0 large ideals: 11842
Sun May 17 14:43:43 2009  relations with 1 large ideals: 80737
Sun May 17 14:43:43 2009  relations with 2 large ideals: 208309
Sun May 17 14:43:43 2009  relations with 3 large ideals: 263248
Sun May 17 14:43:43 2009  relations with 4 large ideals: 169083
Sun May 17 14:43:43 2009  relations with 5 large ideals: 54397
Sun May 17 14:43:43 2009  relations with 6 large ideals: 9509
Sun May 17 14:43:43 2009  relations with 7+ large ideals: 657
Sun May 17 14:43:43 2009  commencing 2-way merge
Sun May 17 14:43:44 2009  reduce to 493899 relation sets and 345639 unique ideals
Sun May 17 14:43:44 2009  commencing full merge
Sun May 17 14:43:52 2009  memory use: 25.2 MB
Sun May 17 14:43:52 2009  found 230803 cycles, need 211839
Sun May 17 14:43:52 2009  weight of 211839 cycles is about 14921060 (70.44/cycle)
Sun May 17 14:43:52 2009  distribution of cycle lengths:
Sun May 17 14:43:52 2009  1 relations: 20437
Sun May 17 14:43:52 2009  2 relations: 19168
Sun May 17 14:43:52 2009  3 relations: 19961
Sun May 17 14:43:52 2009  4 relations: 19394
Sun May 17 14:43:52 2009  5 relations: 18886
Sun May 17 14:43:52 2009  6 relations: 17704
Sun May 17 14:43:52 2009  7 relations: 16051
Sun May 17 14:43:52 2009  8 relations: 14779
Sun May 17 14:43:52 2009  9 relations: 12996
Sun May 17 14:43:52 2009  10+ relations: 52463
Sun May 17 14:43:52 2009  heaviest cycle: 18 relations
Sun May 17 14:43:52 2009  commencing cycle optimization
Sun May 17 14:43:53 2009  start with 1391589 relations
Sun May 17 14:43:59 2009  pruned 48328 relations
Sun May 17 14:43:59 2009  memory use: 34.9 MB
Sun May 17 14:43:59 2009  distribution of cycle lengths:
Sun May 17 14:43:59 2009  1 relations: 20437
Sun May 17 14:43:59 2009  2 relations: 19759
Sun May 17 14:43:59 2009  3 relations: 20794
Sun May 17 14:43:59 2009  4 relations: 20259
Sun May 17 14:43:59 2009  5 relations: 19717
Sun May 17 14:43:59 2009  6 relations: 18404
Sun May 17 14:43:59 2009  7 relations: 16553
Sun May 17 14:43:59 2009  8 relations: 15202
Sun May 17 14:43:59 2009  9 relations: 13248
Sun May 17 14:43:59 2009  10+ relations: 47466
Sun May 17 14:43:59 2009  heaviest cycle: 18 relations
Sun May 17 14:44:00 2009  RelProcTime: 514
Sun May 17 14:44:00 2009  elapsed time 00:08:35
Sun May 17 14:44:00 2009  Msieve v. 1.41
Sun May 17 14:44:00 2009  random seeds: 7feb18a8 0850d81a
<poly snipped>
Sun May 17 14:44:04 2009  commencing linear algebra
Sun May 17 14:44:05 2009  read 211839 cycles
Sun May 17 14:44:05 2009  cycles contain 706357 unique relations
Sun May 17 14:44:22 2009  read 706357 relations
Sun May 17 14:44:23 2009  using 20 quadratic characters above 67105692
Sun May 17 14:44:33 2009  building initial matrix
Sun May 17 14:44:49 2009  memory use: 77.0 MB
Sun May 17 14:44:49 2009  read 211839 cycles
Sun May 17 14:44:50 2009  matrix is 211645 x 211839 (59.3 MB) with weight 19909678 (93.98/col)
Sun May 17 14:44:50 2009  sparse part has weight 14066502 (66.40/col)
Sun May 17 14:44:56 2009  filtering completed in 2 passes
Sun May 17 14:44:56 2009  matrix is 210938 x 211132 (59.2 MB) with weight 19864437 (94.09/col)
Sun May 17 14:44:56 2009  sparse part has weight 14041674 (66.51/col)
Sun May 17 14:45:01 2009  read 211132 cycles
Sun May 17 14:45:12 2009  matrix is 210938 x 211132 (59.2 MB) with weight 19864437 (94.09/col)
Sun May 17 14:45:12 2009  sparse part has weight 14041674 (66.51/col)
Sun May 17 14:45:12 2009  saving the first 48 matrix rows for later
Sun May 17 14:45:13 2009  matrix is 210890 x 211132 (56.9 MB) with weight 15716425 (74.44/col)
Sun May 17 14:45:13 2009  sparse part has weight 13660065 (64.70/col)
Sun May 17 14:45:13 2009  matrix includes 64 packed rows
Sun May 17 14:45:13 2009  using block size 65536 for processor cache size 2048 kB
Sun May 17 14:45:16 2009  commencing Lanczos iteration (2 threads)
Sun May 17 14:45:16 2009  memory use: 56.6 MB
Sun May 17 14:55:59 2009  lanczos halted after 3336 iterations (dim = 210889)
Sun May 17 14:56:00 2009  recovered 29 nontrivial dependencies
Sun May 17 14:56:00 2009  BLanczosTime: 716
Sun May 17 14:56:00 2009  elapsed time 00:12:00
Sun May 17 14:56:00 2009  
Sun May 17 14:56:00 2009  
Sun May 17 14:56:00 2009  Msieve v. 1.41
Sun May 17 14:56:00 2009  random seeds: 3e0dbac8 fdda6c41
<poly snipped>
Sun May 17 14:56:04 2009  commencing square root phase
Sun May 17 14:56:04 2009  reading relations for dependency 1
Sun May 17 14:56:05 2009  read 105164 cycles
Sun May 17 14:56:05 2009  cycles contain 437700 unique relations
Sun May 17 14:56:18 2009  read 437700 relations
Sun May 17 14:56:22 2009  multiplying 351452 relations
Sun May 17 14:58:06 2009  multiply complete, coefficients have about 13.76 million bits
Sun May 17 14:58:08 2009  initial square root is modulo 80180141
[B][COLOR="Red"]Sun May 17 15:00:32 2009  Newton iteration failed to converge
Sun May 17 15:00:32 2009  algebraic square root failed[/COLOR][/B]
Sun May 17 15:00:32 2009  reading relations for dependency 2
Sun May 17 15:00:32 2009  read 105233 cycles
Sun May 17 15:00:33 2009  cycles contain 438941 unique relations
Sun May 17 15:00:45 2009  read 438941 relations
Sun May 17 15:00:49 2009  multiplying 352772 relations
Sun May 17 15:02:25 2009  multiply complete, coefficients have about 13.81 million bits
Sun May 17 15:02:26 2009  initial square root is modulo 86059027
Sun May 17 15:04:50 2009  sqrtTime: 526
Sun May 17 15:04:50 2009  prp42 factor: 197289354911829793712749335520867417551553
Sun May 17 15:04:50 2009  prp61 factor: 4328310749361265920008552701826983603115943218253862379228231
Sun May 17 15:04:50 2009  elapsed time 00:08:50
[/code]
That's very unusual, and may be a bug somewhere in the multiple-precision math library used. I probably won't be able to reproduce it unless I have your relations, which may be too much data transfer for a C102.
|
[quote=jasonp;174563]That's very unusual, and may be a bug somewhere in the multiple-precision math library used. I probably won't be able to reproduce it unless I have your relations, which may be too much data transfer for a C102.[/quote]
I saw that as well recently with 1.41. It isn't ridiculously rare.
[QUOTE=jasonp;174563]That's very unusual, and may be a bug somewhere in the multiple-precision math library used. I probably won't be able to reproduce it unless I have your relations, which may be too much data transfer for a C102.[/QUOTE]
Couldn't you choose fixed "random seeds" for bug hunting? I don't have the relations anymore, as I used aliqueit.exe to run an aliquot sequence, and that program automatically deletes the relations to save disk space when it has finished the factorization. I used factMsieve.pl for this factorization; maybe you can reproduce the sieving with the parameters used by that script?
[QUOTE=Andi47;174619]Couldn't you choose fixed "random seeds" for bug hunting? I don't have the relations anymore[/QUOTE]
If the problem is in the multiple-precision library, then it will depend on having all the same relations in the order in which they occurred in the savefile. Since you used the GGNFS lattice siever I can't guarantee being able to generate that by myself. I'm not inclined to worry about it much; there are other things that need fixing more. |
1 Attachment(s)
The filtering of the complete data set for 2,908+ failed. The log is attached. At this point, though, I wouldn't worry about it since Tom has taught us that with proper selection of parameters, many fewer relations are sufficient.
|
[QUOTE=frmky;174998]The filtering of the complete data set for 2,908+ failed. The log is attached. At this point, though, I wouldn't worry about it since Tom has taught us that with proper selection of parameters, many fewer relations are sufficient.[/QUOTE]
The heavy-relation code worked as expected, but the filtering was not shown enough ideals to produce a correctly sized matrix in the first place (a matrix of size 32M is just yucky). Clearly just deleting relations is not going to thin out the dataset so that the merge can run immediately; at the least you need to rerun the clique processing. But it's possible that nothing will work in this situation unless you know about almost all the ideals, not just those with low weight.
"switching to small primes"
A curious case with square root modulo 53 (yes, simply 53!) happened while factoring 6,672M, the last Cunningham table number with difficulty less than 200:
[CODE]
Thu May 28 19:21:26 2009  multiplying 5583346 relations
Thu May 28 19:25:59 2009  multiply complete, coefficients have about 161.97 million bits
Thu May 28 19:26:01 2009  [B]warning: no irreducible prime found, switching to small primes[/B]
Thu May 28 19:26:04 2009  initial square root is modulo 53
Thu May 28 19:37:15 2009  sqrtTime: 1052
Thu May 28 19:37:15 2009  prp78 factor: 797755198398378136992989897173778551819524893213077735137125857196400787268209
Thu May 28 19:37:15 2009  prp82 factor: 5537679016126244829242773647897792916417225946706147891702969058169485639839312841
Thu May 28 19:37:15 2009  elapsed time 00:17:34
[/CODE]
The log and details are in the Cunningham 6+ thread.
This explanation will be a little long-winded:
The algebraic square root needs to pretend each relation is a degree-1 polynomial. Then multiply all the relations together modulo the algebraic poly F, yielding a polynomial R with huge coefficients. Then find a square root of R (i.e. a polynomial that equals R when squared modulo F). The latter is done via p-adic Newton iteration: choose a small prime p and find the square root of R mod p somehow, then run Newton iteration k times to make the square root work mod p^k. When p^k is larger than the coefficients of the square root polynomial, then you will have the square root required. Per Leslie Jensen showed this worked well for small factorizations, but msieve was the first code that showed the algorithm was practical for industrial-size factorizations; I had no idea if it would be fast enough when I was writing it.

The only requirement for p above is that F is irreducible mod p. When that's true, the square root algorithm is guaranteed to work and there's a clever way to find the square root of R mod p. Unfortunately, a few degree-4 and degree-6 F polynomials are not irreducible modulo any prime p, so the clever algorithm for the initial square root will not work and the complete algebraic square root may not work either. When that happens, the code chooses a small p for which F mod p has no *linear* polynomial factors (this is easy to check) and finds the initial square root polynomial by brute force, trying all polynomials with coefficients between 0 and p-1 until it finds one that equals R mod p when squared modulo F mod p. Because this is expensive, p is chosen as the first prime above 50 that will work (not the customary 6-10 digit value the code ordinarily chooses).

You can make p smaller, but that increases the chance that the Newton iteration will fail. Newton failure happens when the two square roots of (R mod p) mod (F mod p) happen to coincide. This is impossible when F is irreducible mod p, but for general p it can happen, and it becomes more likely when p becomes smaller, since there are fewer polynomials to choose from. Experience shows that with this choice of p the Newton iteration fails about half the time, so on average you need about twice as many dependencies to finish the factorization. A surprising number of XYYXF numbers have degree-4 SNFS polynomials where this mess is necessary. To date I think only one degree-6 job (a monster by Greg) has needed the special-case p.
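The p-adic Newton iteration is easier to see on plain integers than on polynomials. Below is a toy integer analogue (purely illustrative Python, not msieve's polynomial arithmetic): find a starting square root mod a small p by brute force, exactly like the small-prime fallback described above, then let each Newton step double the power of p. The modulus 53 and the sample square are chosen only to echo the log above.

```python
def sqrt_mod_p(n, p):
    """Brute-force square root of n mod p (fine for the small p used here)."""
    for r in range(p):
        if r * r % p == n % p:
            return r
    raise ValueError("n is not a square mod p")

def lift_sqrt(n, p, steps):
    """Hensel-lift a square root of n mod p up to mod p^(2^steps)."""
    r, m = sqrt_mod_p(n, p), p
    for _ in range(steps):
        m = m * m
        # Newton step for f(r) = r^2 - n:  r <- r - f(r)/f'(r)  (mod m)
        r = (r - (r * r - n) * pow(2 * r, -1, m)) % m
    return r, m

# 104329 = 323^2; start modulo 53, as in "initial square root is modulo 53"
r, m = lift_sqrt(104329, 53, 4)    # afterwards r^2 == 104329 (mod 53^16)
```

Once the modulus exceeds the coefficients of the true square root, the lifted value is the exact answer; the polynomial version works the same way, coefficient by coefficient.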
Just had a weird little error with msieve 1.41 and factMsieve.pl with NUM_CPUS=2:
[code]
Tue Jun 09 06:49:53 2009  Msieve v. 1.41
Tue Jun 09 06:49:53 2009  random seeds: cf59a860 97b0d595
Tue Jun 09 06:49:53 2009  factoring 2378863810492383364784724016929764517542240407863155206841742219996638335582297762782527116002786833789317163196667710293879 (124 digits)
Tue Jun 09 06:49:54 2009  searching for 15-digit factors
Tue Jun 09 06:49:54 2009  ECM stage 1 factor found
Tue Jun 09 06:49:54 2009  ECM stage 1 factor found
Tue Jun 09 06:49:54 2009  ECM stage 1 factor found
Tue Jun 09 06:49:54 2009  commencing number field sieve (124-digit input)
Tue Jun 09 06:49:54 2009  R0: -681346488621441544095199
Tue Jun 09 06:49:54 2009  R1: 34093445133979
Tue Jun 09 06:49:54 2009  A0: -166525874832126379938206323680
Tue Jun 09 06:49:54 2009  A1: 10947187180220372892784692
Tue Jun 09 06:49:54 2009  A2: 56968751045950668526
Tue Jun 09 06:49:54 2009  A3: -1664642868411548
Tue Jun 09 06:49:54 2009  A4: 10775598711
Tue Jun 09 06:49:54 2009  A5: 16200
Tue Jun 09 06:49:54 2009  skew 136315.64, size 8.633867e-012, alpha -6.832790, combined = 2.027516e-010
Tue Jun 09 06:49:54 2009  
Tue Jun 09 06:49:54 2009  commencing relation filtering
Tue Jun 09 06:49:54 2009  commencing duplicate removal, pass 1
Tue Jun 09 06:50:31 2009  found 734509 hash collisions in 8925897 relations
Tue Jun 09 06:50:47 2009  added 184 free relations
Tue Jun 09 06:50:47 2009  commencing duplicate removal, pass 2
Tue Jun 09 06:50:52 2009  found 646138 duplicates and 8279943 unique relations
Tue Jun 09 06:50:52 2009  memory use: 48.6 MB
Tue Jun 09 06:50:52 2009  reading rational ideals above 4653056
Tue Jun 09 06:50:52 2009  reading algebraic ideals above 4653056
Tue Jun 09 06:50:52 2009  commencing singleton removal, pass 1
Tue Jun 09 06:51:29 2009  relations with 0 large ideals: 102572
Tue Jun 09 06:51:29 2009  relations with 1 large ideals: 758670
Tue Jun 09 06:51:29 2009  relations with 2 large ideals: 2327601
Tue Jun 09 06:51:29 2009  relations with 3 large ideals: 3207157
Tue Jun 09 06:51:29 2009  relations with 4 large ideals: 1744235
Tue Jun 09 06:51:29 2009  relations with 5 large ideals: 78614
Tue Jun 09 06:51:29 2009  relations with 6 large ideals: 61092
Tue Jun 09 06:51:29 2009  relations with 7+ large ideals: 2
Tue Jun 09 06:51:29 2009  8279943 relations and about 9184006 large ideals
Tue Jun 09 06:51:29 2009  commencing singleton removal, pass 2
Tue Jun 09 06:52:29 2009  found 3935349 singletons
Tue Jun 09 06:52:29 2009  current dataset: 4344594 relations and about 4199819 large ideals
Tue Jun 09 06:52:29 2009  error: singleton2 can't rename output file

(and in the console:)
Return value 65280. Terminating...
[/code]
It had been running successfully for quite a while, gathering ~1GB of relations. Any idea what went wrong?
[quote=mklasson;176736]<snip>[/quote]
Retry with 1.42, see if it makes any difference. |
[QUOTE=10metreh;176741]Retry with 1.42, see if it makes any difference.[/QUOTE]
Well, just trying again with 1.41 worked too. Still, though, it would be nice if it were a bug that could be squashed.
Are you low on disk space at all? Another possibility is that the destination file was deleted just before the file in question was renamed; maybe the delete took a little too long the first time and windows had locked the file while the rename was attempted. The files in question are quite small (the library never copies the relation file itself).
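If it is a transient lock (a virus scanner, or a delete that hasn't fully completed), one application-side workaround is to retry the rename with a short backoff. A hypothetical sketch in Python (msieve's actual code is C, and the file names here are invented):

```python
import os
import tempfile
import time

def replace_with_retry(src, dst, attempts=5, delay=0.25):
    """os.replace() with retries, in case dst is transiently locked
    (e.g. by a virus scanner or a delete that hasn't finished)."""
    for i in range(attempts):
        try:
            os.replace(src, dst)          # atomic rename, overwrites dst
            return
        except OSError:
            if i == attempts - 1:
                raise                      # give up: report the real error
            time.sleep(delay * (i + 1))    # simple linear backoff

# Demo on throwaway files:
d = tempfile.mkdtemp()
src = os.path.join(d, "relations.tmp")
dst = os.path.join(d, "relations.dat")
with open(src, "w") as f:
    f.write("dummy relation data\n")
replace_with_retry(src, dst)
```

On Windows in particular, a rename can fail with a sharing violation even when the conflicting handle is about to close, so a retry loop usually papers over exactly the race described above.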
|
[QUOTE=jasonp;176782]Are you low on disk space at all? Another possibility is that the destination file was deleted just before the file in question was renamed; maybe the delete took a little too long the first time and windows had locked the file while the rename was attempted. The files in question are quite small (the library never copies the relation file itself).[/QUOTE]
Nope, 100+ GB free. The disk could very well have been working hard, as I'm running four aliqueits (and hence possibly msieves) in parallel, but I've been doing that for several months now. Surely any file deletion should be completed before Windows passes control back to msieve, no? For what it's worth, there was a ~15MB test.dat.1(sp?) alongside the ~1GB test.dat in the dir after the failure. Ah well... EDIT: I just noticed my system acting a little weird overall now, so maybe this was all down to some other program going nuts.