- **Cunningham Tables**
(*https://www.mersenneforum.org/forumdisplay.php?f=51*)

- **Parameter Underestimation**
(*https://www.mersenneforum.org/showthread.php?t=13983*)

**Parameter Underestimation**

I am currently sieving 2,2166L. I have been slowed down somewhat by electrical system shutdowns and network shutdowns for maintenance. However, my biggest slowdown has been a mis-estimate of the input parameters.

I am using factor base bounds of 86 million and 14 million for the rational and algebraic polynomials respectively, 30-bit large primes for both, and a sieve region of 8K x 16K for each special q. I had originally estimated that I would need about 12 million special q's; I am already at 14 million and have only about 77% of the needed relations. Further, the yield rate is now quite low: only about 3.2 relations per q. I am going to have to do quite a bit more sieving than I anticipated. YECH.

**Siever Bug**

I just found an interesting bug in my siever. I discovered it while making some improvements, including a larger sieve region. Here is one instance: for special_q = 256300217 with root 137499035, my lattice reduction routine returned (36738082, 6593) (137499035, 1). This is not correct. The problem, of course, is integer overflow. The routine uses signed ints.

Since the reduced lattice is wrong, everything that follows is wrong and no relations get produced. I had been getting a very low relation yield rate for the larger special q's on my current factorization. This explains it. I've been losing valid relations. I am going to have to change the latred routine to use 64-bit ints.

[QUOTE=R.D. Silverman;231832]I just found an interesting bug in my siever. [...] I am going to have to change the latred routine to use 64-bit ints.[/QUOTE]

I just realized that there are other problems as well. We hope that norm(c + d*alpha) is smooth. Let v1, v2 be the reduced lattice row vectors for some special_q. Then (c,d) = i*v1 + j*v2, where (i,j) is a point in the reduced lattice. i and j are bounded by the size of the sieve region, but for large special_q it is possible that either i*v1 or j*v2 (or both) may overflow, giving a wrong value for c and/or d when they are defined as signed 32-bit ints. I will need to deal with this as well.

Does anyone know if GGNFS uses 64 bits for any/all of these variables?

For 2,2166L, I am now sieving special_q near 250 million. The average yield is now only about 3 relations/q, and a fair fraction of the q's produce no relations at all. I may need to fix my code, and go back and RESIEVE a large set of q's that I have already done.

[QUOTE=R.D. Silverman;231849]I just realized that there are other problems as well. [...] I may need to fix my code, and go back and RESIEVE a large set of q's that I have already done.[/QUOTE]

I've been trying to get to sleep without success; I've been too busy thinking about the difficulties. It is clear that I need to fix the lattice reduction. However, the problem with (c,d) possibly overflowing still remains. It would take a MAJOR rewrite to allow (c,d) to exceed 32 bits, and the CWI tools can only handle 30-bit (c,d) as well.

There is also the question of how to proceed with my current effort. I can let the siever keep running on ever-larger special q's, but the yield rate will only get worse; at the current rate I will need to sieve another 6 million special q's, on top of the 14 million I have already done. Or I can fix the latred problems (which will take time) and go back and resieve q's that have already been done. I have reserved 2,1870L, but may have to release it until I can fix my siever.

In redu2.c, the GGNFS/Franke siever uses [FONT=Courier New]double[/FONT] in the variant of the code that is used by practically everyone. However, the [URL="http://mersenneforum.org/showthread.php?p=169889#post169889"]very latest version[/URL] has the gmp mpz_t version:

[FONT=Arial Narrow]int reduce2(i32_t*,i32_t*,i32_t*,i32_t*,i64_t,i64_t,i64_t,i64_t,double);[/FONT]
[FONT=Arial Narrow]int reduce2gmp(i64_t*,i64_t*,i64_t*,i64_t*,mpz_t,mpz_t,double);[/FONT]

[code]@ Note that we don't use lattice reduction to organize sieving with the
factor base primes which are larger than the line length. Instead, we
use a method based on the relation with continued fractions and
diophantine approximations. Therefore, the code contained in this file
is not time critical.

@(redu2.h@>=
int reduce2(i32_t*,i32_t*,i32_t*,i32_t*,i64_t,i64_t,i64_t,i64_t,double);
[COLOR=teal]int reduce2gmp(i64_t*,i64_t*,i64_t*,i64_t*,mpz_t,mpz_t,double);[/COLOR]

@ This is done in the straightforward way. A basis is given by the fifth
to eighth argument (|a0|,|b0|), (|a1|,|b1|). The reduced basis is written
to the addresses given by the first four arguments. The ninth argument
will often be larger than 1 and punishes large |b|. As a safeguard
against numerical instability, the scalar products are recalculated from
the modified vectors after each reduction step and not updated in a way
which would save multiplications. Note again that this function is not
likely time critical for lattice sieving. The scalar products of the two
vectors with themselves are called |a0sq| and |a1sq|. Their scalar
product is called |s|.
[/code]

[QUOTE=R.D. Silverman;231854]I've been trying to get to sleep without success..... [...] Or I can fix the latred problems (which will take time) and go back and resieve q's that have already been done.[/QUOTE]

At least until you get your siever fixed properly, have you thought about using composite special-q in the range which still works (assuming you've used only prime special-q)? Assuming you are sieving the special-q on the rational side, I think it should not be too difficult to allow composite special-q's (on the algebraic side it would take more work). Might be worth doing as a quick measure to finish your current job.

[QUOTE=jrk;231863]At least until you get your siever fixed properly, have you thought about using composite special-q in the range which still works (assuming you've used only prime special-q)? [...][/QUOTE]

It won't work. Almost all relations found with a composite q will be duplicates: if norm(a + b*alpha)/(pq) is smooth, the relation will already have been found with either special_q = p or special_q = q (or both!).

[QUOTE=R.D. Silverman;231854]I've been trying to get to sleep without success..... [...] I have reserved 2,1870L, but may have to release it until I can fix my siever.[/QUOTE]

Does msieve handle lattice points whose coordinates are larger than single-precision ints? Even if I fix the latred problem, I am still faced with losing valid relations (c,d) = i*v1 + j*v2 when the multiplications overflow 32 (signed) bits.

Msieve doesn't have a lattice sieve at all. Both the line sieve and the postprocessing can handle 63-bit signed 'a' and 32-bit 'b' values.

Perhaps the most expedient solution is to use the Kleinjung siever to resieve enough, then postprocess the generated relations into CWI format (which is easy).

[QUOTE=jasonp;231898]Msieve doesn't have a lattice sieve at all. [...] Perhaps the most expedient solution is to use the Kleinjung siever to resieve enough, then postprocess the generated relations into CWI format (which is easy).[/QUOTE]

I fixed the latred bug. I need to rebuild my code and re-install it.

[QUOTE=R.D. Silverman;231901]I fixed the latred bug. I need to rebuild my code and re-install it.[/QUOTE]

I rebuilt my code. Sort of. For some &*(@#@& reason, my copy of Visual Studio at work produces code that seg-faults in the release (non-debug) build; the debug build runs just fine, albeit slower. I will need to rebuild everything on my home PC. I have encountered (and reported on) this problem before.

