I don't have the issue either! Tested on all four Windows binaries, modified on 09.06.2011 at 14:36-14:37.
[QUOTE=jasonp;263420]I can't help with the latter, but currently the target minimum combined score for a poly to be reported is the result of exponential interpolation from the tables at the top of gnfs/poly/poly_skew.c in the Msieve source. The combined score is the last entry in each structure in the various lists.
The degree-5 numbers were cribbed from the pol51 readme within GGNFS, except that the cutoffs for C154-C155 were made a little more lenient. The speculation right now is that the targets up to C140 were determined by experiment, and the targets for C140-C155 were extrapolated, because empirically the larger cutoffs are somewhat more stringent than one would expect. For degree 4 and 6, and the larger degree-5 sizes, the numbers were chosen strictly by experiment. Of course that doesn't predict the maximum score we can reasonably expect.[/QUOTE]

So would gathering statistics by doing psearch for c140-155 be useful? Earlier I wrote "seems like a more thorough search over a smaller area would be a better use of the time"; does that make sense?
[QUOTE=lorgix;263986]So gathering statistics by doing psearch for c140-155 would be good?
Earlier I wrote "seems like a more thorough search over a smaller area would be a better use of the time", does that make sense?[/QUOTE]

Doing psearch with "wide" in that range would let us see whether "fast" would have found the same poly. If you do that, just record somewhere how many threads you used, and save the .p files.

There does seem to be a slight bias toward good polys having a smaller leading coefficient, but that is not a statistically significant statement at this point. That would argue for doing a "deep" psearch, probably only above c155.
[QUOTE=bsquared;263998]There does seem to be a slight bias for good polys having a smaller leading coefficient - but that is not a statistically significant statement at this point. That would argue for doing "deep" psearch - probably only above c155.[/QUOTE]
It may be that the current scoring method isn't accurate enough when the polynomials being compared have very large differences in skew. This doesn't mean that polynomials with a larger leading coefficient (and thus smaller skew) differ in quality from those with a smaller leading coefficient, only that comparing the two directly by score isn't always adequate.
[QUOTE=jrk;264002]It may be that the current scoring method isn't accurate enough when the polynomials being compared have very large differences in skew. This doesn't mean that polynomials having larger leading coefficient (and thus smaller skew) are different in quality from those having smaller leading coefficient, only that comparing the two directly by score isn't always adequate.[/QUOTE]
Good point - I posted that without much thought. Anyway, doing deep searches (more than one search of a leading coefficient) is probably only useful for bigger numbers (C155+), right?
[QUOTE=bsquared;264004]Good point - I posted that without much thought. Anyway, doing deep searches (more than one search of a leading coefficient) is probably only useful for bigger number (C155+), right?[/QUOTE]
If by "deep" you mean running the search on the same leading algebraic coefficient multiple times, this will only be useful when the search space is large and randomized. Current msieve versions print a line that reads "randomizing rational coefficient: using piece X of Y" when randomization occurs. This only happens on problems of size about c150 and larger. |
[QUOTE=bsquared;263548]To run nfs do:
yafu "nfs(number)" -v -threads <number of threads>

To run a general-purpose factoring routine do:

yafu "factor(number)" -v -threads <number of threads>

which will then do some pretesting with rho, p+/-1, and ECM before proceeding to NFS or SIQS. I think the default cutoff is 95 digits. As debrouxl said, if you run tune() first, a more optimal cutoff will be determined. tune() takes 15-30 minutes or so.[/QUOTE]

What do the lines in yafu.ini that specify the folders where GGNFS and msieve are located have to look like?
[QUOTE=Andi47;264127]What do the lines in yafu.ini that specify the folders where GGNFS and msieve are located have to look like?[/QUOTE]
B1pm1=100000
B1pp1=20000
B1ecm=11000
rhomax=1000
threads=2
ggnfs_dir=C:\Faktorisierung\tools\ggnfs\
ggnfs_dir=C:\Faktorisierung\tools\ggnfs\

This is my yafu.ini with the specified folders.
[QUOTE=Andi_HB;264129]B1pm1=100000
B1pp1=20000
B1ecm=11000
rhomax=1000
threads=2
ggnfs_dir=C:\Faktorisierung\tools\ggnfs\
ggnfs_dir=C:\Faktorisierung\tools\ggnfs\
This is my yafu.ini with the specified folders.[/QUOTE]

Thanks. Did you intend to specify ggnfs_dir twice?
-threads 6 doesn't work?
Just started yafu "nfs(<c121>)" -v -threads 6, but it seems the "-threads 6" flag was ignored: the output looks like the msieve output for one thread, only one of the 8 threads of my i7 is busy (CPU load ~12%), and the msieve.log it created says the time limit for poly search is somewhat more than 4 hours (which seems normal for a c121). (BTW: my yafu.ini contains the line "threads=6"; this seems to be ignored too with yafu "nfs()".)
(BTW2: I killed the job after 2 minutes and switched back to aliqueit, which now uses factmsieve.pl to factor the c121.)
Oops - no, it's not necessary to specify ggnfs_dir twice.
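For reference, the same yafu.ini posted above with the duplicate line removed (the paths are the original poster's; substitute your own install locations):

```ini
B1pm1=100000
B1pp1=20000
B1ecm=11000
rhomax=1000
threads=2
ggnfs_dir=C:\Faktorisierung\tools\ggnfs\
```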