mersenneforum.org (https://www.mersenneforum.org/index.php)
-   CADO-NFS (https://www.mersenneforum.org/forumdisplay.php?f=170)
-   -   CADO NFS (https://www.mersenneforum.org/showthread.php?t=11948)

EdH 2019-09-30 13:03

[QUOTE=VBCurtis;526951]Yes, that's what I did with 2330L, and you definitely could alter the snapshot file from I=14 to I=13 and restart. A wild guess is to do so after 30% of relations for c135 params, 50% for 140, 75% for 145, and not to bother (leave it all I=14) for c150. I'm actually planning to test I=13 for c135 params; that might be too soon for I=14.

I also agree that if msieve fails to build a matrix, you can add 10% to rels_wanted in the snapshot file and restart CADO. 5% might build a matrix, but above ~140 digits it's better to have a bit of oversieving for a faster matrix phase.[/QUOTE]Since I'm working in the c15x area, I guess I'll skip the switch for now. However, since the ETA moved from ~7 PM to just before midnight, and the last couple of hours were a constant cycle of check/more rels, check/more rels, I think I need to give the relations wanted a major increase. Of course, this is something we've discussed with regard to my huge ratio of sieve cores to LA cores, anyway.

On the plus side, LA "appears" to be screaming on this run. If it turns out as it's looking, I might still elect CADO-NFS LA over msieve LA for now.

EdH 2019-10-01 02:14

It looks like I need to learn how to use CADO-NFS to run SNFS jobs. I've read the README that says how to use the params.F9 file and read a lot of posts within this thread on SNFS, but I need a lot more background in setting everything up. It mentions that a .poly file is needed. I also saw mention of an example in the CADO-NFS package, but haven't found it.

Any help would be appreciated. Consider me totally clueless. . .

I suppose if I could be taught how to create a poly file for the remaining composite factor of HCN 10+7,306, it might be helpful to me.

Thanks. . .

Dylan14 2019-10-01 02:30

[QUOTE=EdH;527032]It looks like I need to learn how to use CADO-NFS to run SNFS jobs. I've read the README that says how to use the params.F9 file and read a lot of posts within this thread on SNFS, but I need a lot more background in setting everything up. It mentions that a .poly file is needed. I also saw mention of an example in the CADO-NFS package, but haven't found it.

Any help would be appreciated. Consider me totally clueless. . .

I suppose if I could be taught how to create a poly file for the remaining composite factor of HCN 10+7,306, it might be helpful to me.

Thanks. . .[/QUOTE]


To do 10+7,306 with Cado, first use the phi program to generate the SNFS polynomial. I get the following:
[CODE]n: 449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417
# 10^306+7^306, difficulty: 204.00, skewness: 1.00, alpha: 1.29
# cost: 1.99661e+17, est. time: 95.08 GHz days (not accurate yet!)
skew: 1.000
c4: 1
c2: -1
c0: 1
Y1: -12589255298531885026341962383987545444758743
Y0: 1000000000000000000000000000000000000000000000000000
m: 448265168344012624707056615816612553542075492443936645977115074950107989626327149959802632849642891486339725070211112975638435429225772067627090165835534065
type: snfs
[/CODE]Then make a 10_7_306plus.poly file that looks like this:
[CODE]n: 449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417
skew: 1.000
c4: 1
c2: -1
c0: 1
Y1: -12589255298531885026341962383987545444758743
Y0: 1000000000000000000000000000000000000000000000000000[/CODE]Of course, you can improve the skew using the optimal skew calculator. Then, create a params.10_7_306 file, containing the name and N. Then for poly select, use the following lines:


[CODE]#Polyselect - We are supplying the poly here.
tasks.polyselect.admin = 0
tasks.polyselect.admax = 0
tasks.polyselect.adrange = 0
#Instruct cado to import the polynomial
tasks.polyselect.import = /path/to/your/file/called/10_7_306plus.poly[/CODE] which will instruct CADO to import the SNFS polynomial. Then set up the rest of the parameters (for the lattice sieving, filtering, LA and sqrt phases).
In the sieving, you may also want to test whether the rational or algebraic side sieves better, and to do that, you'll want to use the tasks.sieve.sqside key. Setting this to 1 sieves the algebraic side. Setting it to 0 sieves the rational side.
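For example, forcing rational-side sieving is just the one line:

[CODE]# 0 = rational side, 1 = algebraic side (the default)
tasks.sieve.sqside = 0[/CODE]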

RichD 2019-10-01 02:41

[QUOTE=EdH;527032]I suppose if I could be taught how to create a poly file for the remaining composite factor of HCN 10+7,306, it might be helpful to me.[/QUOTE]

One helpful program might be phi, developed by [B]akruppa[/B] and modified by [B]jyb[/B]; it can be found [url=https://mersenneforum.org/showpost.php?p=372732&postcount=19]here[/url].

VBCurtis 2019-10-01 04:25

[QUOTE=Dylan14;527034].... Then set up the rest of the parameters (for the lattice sieving, filtering, LA and sqrt phases).
In the sieving, you may also want to test whether the rational or algebraic side sieves better, and to do that, you'll want to use the tasks.sieve.sqside key. Setting this to 1 sieves the algebraic side. Setting it to 0 sieves the rational side.[/QUOTE]

This part is where experience is required. It's not hard to choose parameters that double the sieve effort! I've done more than my share of jobs where I forget to set sieve side to algebraic, not noticing until the job is 25% done (about when it should have been 60% done).

For SNFS jobs that are the 'correct' degree (deg 5 under 210 digits, deg 6 over 190 digits), the SNFS-to-GNFS conversion formula given in the H-C thread gives you an indication of how tough the job should be; you can use lim0 and lim1 of the corresponding GNFS difficulty, though you should reverse 0 and 1 for lim and LP sizes.
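For instance, with placeholder values A/B and a/b (not real numbers, just to show the swap):

[code]
# gnfs-difficulty params file:    snfs job should use:
# tasks.lim0 = A                  tasks.lim0 = B
# tasks.lim1 = B                  tasks.lim1 = A
# tasks.lpb0 = a                  tasks.lpb0 = b
# tasks.lpb1 = b                  tasks.lpb1 = a
[/code]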

I don't have any quartic experience, so I have no advice on params for your sample/learning job.

R.D. Silverman 2019-10-01 10:59

[QUOTE=Dylan14;527034]To do 10+7,306 with Cado, first use the phi program to generate the SNFS polynomial. I get the following:
[CODE]n: 449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417
# 10^306+7^306, difficulty: 204.00, skewness: 1.00, alpha: 1.29
# cost: 1.99661e+17, est. time: 95.08 GHz days (not accurate yet!)
skew: 1.000
c4: 1
c2: -1
c0: 1
Y1: -12589255298531885026341962383987545444758743
Y0: 1000000000000000000000000000000000000000000000000000
m: 448265168344012624707056615816612553542075492443936645977115074950107989626327149959802632849642891486339725070211112975638435429225772067627090165835534065
type: snfs
[/CODE]
[/QUOTE]

A sextic should be better.

EdH 2019-10-01 14:10

Thanks, everyone. I ran phi with its defaults and got the same output as Dylan14. Then I ran it with -deg6 and got the following:
[code]
n: 449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417
# 10^306+7^306, difficulty: 204.00, skewness: 1.00, alpha: 0.00
# cost: 1.99661e+17, est. time: 95.08 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c3: -1
c0: 1
Y1: -54116956037952111668959660849
Y0: 10000000000000000000000000000000000
m: 187606456870364126268604301435455915704166184803758377200392272179051964299732299305291455710155276927052247786999030836910476481702189554253833935334091002
type: snfs
[code]Removing the comments, m, and type lines left me with:
[code]
n: 449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417
skew: 1.000
c6: 1
c3: -1
c0: 1
Y1: -54116956037952111668959660849
Y0: 10000000000000000000000000000000000
[/code]I "think I" understand the polyselect part of the params file, but for the rest, would I simply use the appropriate params.cXXX for the digit size, or smaller? Or, do I need to construct the rest differently?

bsquared 2019-10-01 14:42

YAFU has poly generation capability, FWIW. It is fairly well automated. For 10^306+7^306 it decides on the degree 6 poly and fills out the rest of the parameters. I don't know if CADO needs those or will even acknowledge them, but they are there at least as a starting point. It also looks at the norms and recommends a side to sieve. For this one the norms are pretty close in size so it prefers the algebraic side.

Here's the command I ran:

[CODE]./yafu "nfs(10^306+7^306)" -v -np[/CODE]

Here is the full poly that yafu generated.

[CODE]n: 1115831964056746693173944871340898545187408148579931318367221152898787588403091685243564910688536511851916604110911381751676702001116124923129039665737934378007741517555469326341509284050395201
# 10^306+7^306, difficulty: 204.00, anorm: 1.00e+36, rnorm: 1.00e+40
# scaled difficulty: 204.00, suggest sieving algebraic side
# size = 4.382e-10, alpha = 0.000, combined = 9.460e-12, rroots = 0
type: snfs
size: 204
skew: 1.0000
c6: 1
c3: -1
c0: 1
Y1: -54116956037952111668959660849
Y0: 10000000000000000000000000000000000
m: 673459021354637686003102332363972410077866955805404890289115927014878349091344166074431272011686237313366047234890553232840846196958447978413202228067610762301253916568778325760381253726054425

rlim: 18000000
alim: 18000000
lpbr: 28
lpba: 28
mfbr: 56
mfba: 56
rlambda: 2.6
alambda: 2.6
[/CODE]

Note that if you have a cofactor that is much smaller (e.g., a large factor has been pulled out by ecm) then you can run this command instead:

[CODE]./yafu "snfs(10^306+7^306,449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417)" -v -np[/CODE]

And it will generate the file with the specified cofactor as input (it still needs the full form as the first argument in order to generate the poly).

VBCurtis 2019-10-01 15:35

[QUOTE=EdH;527081]I "think" I understand the polyselect part of the params file, but for the rest, would I simply use the appropriate params.cXXX for the digit size, or smaller? Or, do I need to construct the rest differently?[/QUOTE]

I didn't know YAFU produced such a detailed file when it makes a poly! That's really nice, and makes SNFS on CADO a reasonable endeavor.

Ed- if you were to try params on your own, I'd use lim's and LP's from the equivalent GNFS-difficulty file, rather than the actual digit length of the input.

GGNFS generally gets fed symmetric params (lim0 = lim1, LP0 = LP1), while the CADO folks create GNFS params that are not equal on both sides. I don't know which way SNFS params would be faster in CADO.

bsquared 2019-10-01 16:57

[QUOTE=VBCurtis;527088]I didn't know YAFU produced such a detailed file when it makes a poly! That's really nice, and makes SNFS on CADO a reasonable endeavor.
[/QUOTE]

Thanks! Just a quick FYI on some of the polygen features:

As seen, it will attempt to make an informed decision about degree when there are options available (e.g., between deg4 and deg6).

The params are based on heuristics depending on SNFS difficulty. They could probably stand to be updated... but are hopefully reasonably close.

If there are poly options that are close in difficulty (e.g., between deg5 and deg6) and the difficulty is large enough, it will do some quick test sieving to decide between them.

It will autodetect several common special forms:
a*b^n +/- c
hcunningham
xyyx

Example:
[CODE]./yafu "nfs(2482485277884007653366719332982741394223119625933951084701373647104820848070151480868007612371259085802393865202687527348043374765670905911896619515341407566478213077259112447300)" -v -np

nfs: checking for job file - no job file found
nfs: checking for poly file - no poly file found
nfs: commencing nfs on c178: 2482485277884007653366719332982741394223119625933951084701373647104820848070151480868007612371259085802393865202687527348043374765670905911896619515341407566478213077259112447300
nfs: searching for brent special forms...
nfs: searching for homogeneous cunningham special forms...
nfs: searching for XYYXF special forms...
nfs: input divides 91^89 + 89^91
number of factors: 2, 0, total polys = 24
actual polys = 18, 6, total actual polys = 108

gen: ========================================================
gen: considering the following polynomials:
gen: ========================================================

<snip a bunch of possible polys>

gen: ========================================================
gen: best 3 polynomials:
gen: ========================================================

n: 10485465254928230221916080017700758861688267596294526891318294350422402442653357885593952278796591267181350270692127290071053966636681673375700437521425776473805666016877
# 91^89+89^91, difficulty: 177.69, anorm: 1.80e+32, rnorm: -2.27e+41
# scaled difficulty: 179.20, suggest sieving rational side
# size = 1.078e-12, alpha = 0.815, combined = 8.779e-11, rroots = 1
type: snfs
size: 177
skew: 6.0490
c5: 1
c0: 8099
Y1: -122749609618842355502421774953773681
Y0: 183123913839120657539940631629904921
m: 3699384008416351650355001964949374336038142520851390612728167131188442837523409534228944132714564357313360743025413180599705673321384006725706838460793477953173641713712
n: 10485465254928230221916080017700758861688267596294526891318294350422402442653357885593952278796591267181350270692127290071053966636681673375700437521425776473805666016877
# 91^89+89^91, difficulty: 177.69, anorm: 1.80e+38, rnorm: -2.54e+35
# scaled difficulty: 178.16, suggest sieving algebraic side
# size = 3.491e-09, alpha = 0.388, combined = 3.800e-11, rroots = 0
type: snfs
size: 177
skew: 4.4813
c6: 1
c0: 8099
Y1: -174120577810999285787632895849
Y0: 243008175525757569678159896851
m: 7124387887525993318623900846196223689008132478314648176802059638170261637280228424104948008208491322907361629524008821306475612802608512947742477617249653452186994543120
n: 10485465254928230221916080017700758861688267596294526891318294350422402442653357885593952278796591267181350270692127290071053966636681673375700437521425776473805666016877
# 91^89+89^91, difficulty: 180.59, anorm: 8.82e+39, rnorm: 9.59e+34
# scaled difficulty: 181.42, suggest sieving algebraic side
# size = 1.825e-09, alpha = -0.261, combined = 2.374e-11, rroots = 0
type: snfs
size: 180
skew: 1.5620
c6: 1157
c0: 16807
Y1: -34715453646536795668308556693
Y0: 174120577810999285787632895849
m: 3530358777361406046635868507948279304126963615332859663763864395731271122871681373641585765430900541949890230936345853159467302527872966015207444051070729089967396825300

<snip a bunch of test sieving output>

gen: ========================================================
gen: selected polynomial:
gen: ========================================================

n: 10485465254928230221916080017700758861688267596294526891318294350422402442653357885593952278796591267181350270692127290071053966636681673375700437521425776473805666016877
# 91^89+89^91, difficulty: 177.69, anorm: 1.80e+32, rnorm: -2.27e+41
# scaled difficulty: 179.20, suggest sieving rational side
# size = 1.078e-12, alpha = 0.815, combined = 8.779e-11, rroots = 1
type: snfs
size: 177
skew: 6.0490
c5: 1
c0: 8099
Y1: -122749609618842355502421774953773681
Y0: 183123913839120657539940631629904921
m: 3699384008416351650355001964949374336038142520851390612728167131188442837523409534228944132714564357313360743025413180599705673321384006725706838460793477953173641713712
nfs: job file is missing params, filling them

[/CODE]
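(Aside: the skew values above appear to follow the common rule of thumb skew ≈ |c0/c_deg|^(1/deg); a quick check with bc:)

[CODE]echo "e(l(8099/1)/5)" | bc -l     # 6.0490..., the deg-5 poly's skew
echo "e(l(8099/1)/6)" | bc -l     # 4.4813..., the deg-6 c6=1 poly's skew
echo "e(l(16807/1157)/6)" | bc -l # 1.5620..., the deg-6 c6=1157 poly's skew[/CODE]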

EdH 2019-10-01 22:30

Let me see if I have any little handle on this:

I took YAFU and ran:
[code]
./yafu "snfs(10^306+7^306,449458915470356816816666313204584211109371927064357869496293486531128830533581400086646461650935964393479133719097995871295297603887683902441000615512687417)" -v -np[/code]One of the first thing I noticed is that it guessed a difficulty of snfs - 204 and gnfs - 144. Since this is a 156 digit composite, I'm guessing SNFS is the better choice, also considering it came up with a degree 6 set of values, rather than degree 4.

The .job file appears to have lots of the info I need to build a params file to feed CADO-NFS. But, there are differences. If I'm correct, there is a direct correspondence in which CADO-NFS's 0 refers to the rational side and 1 to the algebraic side, e.g.:
[code]
rlim == lim0
alambda == lambda1
etc.
[/code]What I am wondering is if I now use the corresponding YAFU values for all the parameters and whether the other parameters that aren't covered by the YAFU info should be from the c145 file (GNFS difficulty) or the c155 file (actual digits).
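If I have the mapping right, the full correspondence would be something like:
[code]
# GGNFS name -> CADO-NFS name (0 = rational side, 1 = algebraic side)
rlim    -> tasks.lim0            alim    -> tasks.lim1
lpbr    -> tasks.lpb0            lpba    -> tasks.lpb1
mfbr    -> tasks.sieve.mfb0      mfba    -> tasks.sieve.mfb1
rlambda -> tasks.sieve.lambda0   alambda -> tasks.sieve.lambda1
[/code]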

Or, if I'm actually off track somewhere. . .

Thanks All!

RichD 2019-10-01 23:17

For a little clarity, the YAFU output says:
[CODE]nfs: guessing snfs difficulty 204 is roughly equal to gnfs difficulty 144[/CODE]
Meaning if the size (number of digits) is less than 144, GNFS would be better; if greater than 144, SNFS would be better.
(144-30) * 1.8 ~= 204

This is probably answered better by VBCurtis (or others). YAFU generates parameters for Msieve & GGNFS. If you plan to use CADO, all the parameters should come from the CADO files (i.e., params.c155).

Edit: You are correct, SNFS would be better. Therefore you should use params.c145 if you run the job as SNFS. GNFS would take more than four times as long. Roughly speaking, each increase of five digits for GNFS doubles the work. (For SNFS, each increase of nine in difficulty doubles the work.)
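In script form, that rule of thumb and its inverse (assuming bc is available):
[CODE]echo "(144 - 30) * 1.8" | bc      # 205.2, i.e. roughly snfs 204
echo "scale=1; 204/1.8 + 30" | bc # 143.3, i.e. roughly gnfs 144[/CODE]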

VBCurtis 2019-10-01 23:28

Ed-
Use other params from the difficulty-sized (c145) file, not the actual length.

SNFS jobs are likely to need a different number of rels_wanted than a GNFS job of similar difficulty; the duplicate rate is also likely to be different. So, I'd do my first job with a guess of rels_wanted 10% lower than the equivalent GNFS difficulty, and adjust for my second job.
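(E.g., if the GNFS-difficulty file asks for 95M relations, as the params file later in this thread does, the first-job guess would be about:)
[code]
echo "0.90 * 95000000" | bc   # 85500000.00
[/code]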

EdH 2019-10-02 01:49

OK! I have an SNFS job running, although if I am correct, it is one that isn't really "that" much better as SNFS: 10+3,259. It's 156 digits with a guess value of gnfs 154.

But, so far the server is happily handing out and collecting WUs. . .

EdH 2019-10-03 02:06

[QUOTE=EdH;527119]OK! I have an SNFS job running, although if I am correct, it is one that isn't really "that" much better as SNFS: 10+3,259. It's 156 digits with a guess value of gnfs 154.

But, so far the server is happily handing out and collecting WUs. . .[/QUOTE]
Well, that was disappointing! I found the server caught in a loop sending out continuous stop signals to all the clients. CTRL-C, due to an errant file and one of my scripts, caused CADO-NFS to immediately restart, wiping out all the relations from the previous run.:sad:

I will have to study SNFS with a smaller number at some point, but for now, I'll take a short break.

I suppose on the bright side, it was a relatively small loss time-wise. . .

EdH 2019-10-04 19:05

OK! I give up! I tried constructing smaller candidates for SNFS from factors around 40 digits within some of the other HCNs, and they all came up with relative difficulties in the 190s region. I went ahead and ran one for >12 hours with ~70 threads and sure enough, it got nowhere. Meanwhile, I ran the same ~80-digit composite with GNFS on 8 threads and had factors in about 10 minutes. I killed the SNFS run and have gone back to GNFS for now.

R.D. Silverman 2019-10-04 22:32

[QUOTE=EdH;527316]OK! I give up! I tried constructing smaller candidates for SNFS from factors around 40 digits within some of the other HCNs
[/QUOTE]

Huh? How does one do this?

EdH 2019-10-05 00:53

[QUOTE=R.D. Silverman;527328]Huh? How does one do this?[/QUOTE]
The manner I used was to first look through the table of 3^n+2^n HCNs for any with two factors around 40 digits each. I multiplied those together to get a composite cofactor of around 80 digits. I then tried to treat it as an SNFS candidate.

Example:
[code]
3+2,400 = 4388625601.4556236801.25520929094046215917722455225911068200801.40218427089555210518770868836210456180801.P53

25520929094046215917722455225911068200801 * 40218427089555210518770868836210456180801 = 1026411626026606047165091747606726828040556793476879314585548048206850817029021601

$ ./yafu "snfs(3^400+2^400,1026411626026606047165091747606726828040556793476879314585548048206850817029021601)" -v -np
. . .
nfs: guessing snfs difficulty 152 is roughly equal to gnfs difficulty 115
nfs: creating ggnfs job parameters for input of size 115
. . .
gen: selected polynomial:
gen: ========================================================

n: 1026411626026606047165091747606726828040556793476879314585548048206850817029021601
# 3^400+2^400, difficulty: 152.68, anorm: 1.00e+24, rnorm: 1.48e+44
# scaled difficulty: 156.04, suggest sieving rational side
# size = 1.104e-15, alpha = 1.694, combined = 1.157e-09, rroots = 0
type: snfs
size: 152
skew: 1.0000
c4: 1
c3: -1
c2: 1
c1: -1
c0: 1
Y1: -1208925819614629174706176
Y0: 147808829414345923316083210206383297601
m: 434316362509514600056605743684942369000152414814327386642884688465272936756005516
[/code]Although much smaller than the earlier 19x's, this one is still more difficult than its actual size of 82 digits would suggest.:sad:

R.D. Silverman 2019-10-05 10:09

[QUOTE=EdH;527331]The manner I used was to first look through the table of 3^n+2^n HCNs for any with two factors around 40 digits each. I multiplied those together to get a composite cofactor of around 80 digits. I then tried to treat it as an SNFS candidate.

[/QUOTE]

That will not work. It will work for GNFS. There is no reason why such a number will yield a polynomial with very small coefficients.

Instead, *start with a polynomial* of small degree; any irreducible polynomial will do; (say) f(x) = x^5 + x + 1. Select a value for x, near (say) 10^20, and put N = f(x). Done. --> 100-digit N amenable to SNFS.
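A minimal bash sketch of this construction (the x value is an arbitrary illustration; note that x^5 + x + 1 happens to factor as (x^2+x+1)(x^3-x^2+1), so a genuinely irreducible choice such as x^5 + x^2 + 1 is safer):

[code]
# pick x near 10^20; N = f(x) is then ~100 digits, and the SNFS poly is f itself with m = x
x=100000000000000000003
echo "$x^5 + $x^2 + 1" | bc | tr -d '\\\n'
[/code]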

R.D. Silverman 2019-10-05 13:35

[QUOTE=EdH;527331]The manner I used was to first look through the table of 3^n+2^n HCNs for any with two factors around 40 digits each. I multiplied those together to get a composite cofactor of around 80 digits. I then tried to treat it as an SNFS candidate.

Although much smaller than the earlier 19x's, this one is still more difficult than its actual size of 82 digits would suggest.:sad:[/QUOTE]

A followup:

You are not factoring an 82-digit number, if I read your inputs correctly. You are factoring (3^400+2^400)/(3^80+2^80). This is a 153-digit number that you are factoring with a quartic.

EdH 2019-10-05 14:12

[QUOTE=R.D. Silverman;527355]A followup:

You are not factoring an 82-digit number, if I read your inputs correctly. You are factoring (3^400+2^400)/(3^80+2^80). This is a 153-digit number that you are factoring with a quartic.[/QUOTE]
I wondered if I might have had something backwards, but I couldn't see it. I must go work on this and your previous post.

Thanks much!

RichD 2019-10-05 19:07

[QUOTE=EdH;527214]I will have to study SNFS with a smaller number at some point, but for now, I'll take a short break.[/QUOTE]

The ReadMe file states an SNFS parameter file is included, but there only appears to be one, for a single size. It is /parameters/factor/parameters.F9. I guess one would have to extrapolate for other sizes. I may try my luck in the coming days.

EdH 2019-10-05 21:08

[QUOTE=RichD;527380]The ReadMe file states an SNFS parameter file is included, but there only appears to be one, for a single size. It is /parameters/factor/parameters.F9. I guess one would have to extrapolate for other sizes. I may try my luck in the coming days.[/QUOTE]
I wrote some scripts to use the values generated by YAFU and the appropriate size cXX(X) file to produce an SNFS parameter file. I'm going to try to work with the info R.D. Silverman provided me earlier. My current HCN reservation is supposed to finish in about a half-hour, so if I can, I'll work on getting an SNFS job completed once it's done.

If any/all of it works, I'll post my scripts here, and if I get a solid workable setup, I'll put something in my "How I. . ." section, eventually.

RichD 2019-10-05 22:04

Many SNFS jobs require sieving on the rational side, with the algebraic side being the default. One needs to include "tasks.sieve.sqside = 0", which is the complete opposite of GGNFS.

EdH 2019-10-05 22:51

[QUOTE=RichD;527390]Many SNFS jobs require sieving on the rational side, with the algebraic side being the default. One needs to include "tasks.sieve.sqside = 0", which is the complete opposite of GGNFS.[/QUOTE]
My script catches the suggestion from YAFU's nfs.job file and makes that adjustment.

EdH 2019-10-06 03:09

Well, somehow I sent the wrong composite to my factoring scripts, so 10+3,259 is still unresolved.

I tried to construct some test subject composites per R.D. Silverman's earlier post, but every one of them was immediately solved by YAFU instead of giving me a candidate for SNFS. I don't know why I was so "lucky" in my choices for x.

That aside, my current experiment will be with 10+3,259, since I didn't factor it after all.

My experiment from the start:

First, I made three entries in my script (shown further below), which is located in the cado-nfs folder on my server machine:
[code]
. . .
hcn="10^259+3^259"
comp=5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069
pfile=c155
. . .
[/code]pfile will choose which gnfs params file to use to complete the parameters for the SNFS run.

The full script:
[code]
#!/bin/bash

hcn="10^259+3^259"
comp=5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069
pfile=c155

sqside=0
rm -f factor.log nfs.fb nfs.job session.log YAFU_get_poly_score.out

./yafu "snfs($hcn,$comp)" -v -np

exec <"nfs.job"
while read line
do
case $line in
*"algebraic side"*) let sqside=${sqside}+1
;;
esac
case $line in
"n: "*) echo $line >parameters/polynomials/snfsTest.poly
echo "# SNFS Test File" >parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "name = snfsTest" >>parameters/factor/params.snfsTest
echo "N = ${line:3}" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "# Polynomial selection" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "tasks.polyselect.admin = 0" >>parameters/factor/params.snfsTest
echo "tasks.polyselect.admax = 0" >>parameters/factor/params.snfsTest
echo "tasks.polyselect.adrange = 0" >>parameters/factor/params.snfsTest
echo "tasks.polyselect.import = parameters/polynomials/snfsTest.poly" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "# Sieve" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"skew: "*|"c8: "*|"c7: "*|"c6: "*|"c5: "*|"c4: "*|"c3: "*|"c2: "*|"c1: "*|"c0: "*|"Y1: "*|"Y0: "*) echo $line >>parameters/polynomials/snfsTest.poly
;;
esac
case $line in
"rlim: "*) echo "tasks.lim0 = ${line:6}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"alim: "*) echo "tasks.lim1 = ${line:6}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"lpbr: "*) echo "tasks.lpb0 = ${line:6}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"lpba: "*) echo "tasks.lpb1 = ${line:6}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"mfbr: "*) echo "tasks.sieve.mfb0 = ${line:6}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"mfba: "*) echo "tasks.sieve.mfb1 = ${line:6}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"rlambda: "*) echo "tasks.sieve.lambda0 = ${line:9}" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"alambda: "*) echo "tasks.sieve.lambda1 = ${line:9}" >>parameters/factor/params.snfsTest
;;
esac
done

exec <"parameters/factor/params.${pfile}"
while read line
do
case $line in
"tesks.sieve.ncurves"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.I ="*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.qmin"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.sieve.qrange"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.sieve.rels_wanted"*) echo $line >>parameters/factor/params.snfsTest
echo "tasks.sieve.sqside = $sqside" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "# Filtering" >>parameters/factor/params.snfsTest

echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.filter.purge"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.filter.target"*) echo $line >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "# Linear algebra" >>parameters/factor/params.snfsTest

echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.linalg.bwc"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.linalg.m"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.linalg.n"*) echo $line >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "# Characters" >>parameters/factor/params.snfsTest

echo "###########################################################################" >>parameters/factor/params.snfsTest
echo "" >>parameters/factor/params.snfsTest
;;
esac
case $line in
"tasks.linalg.characters"*) echo $line >>parameters/factor/params.snfsTest
;;
esac
done
[/code]As shown, after YAFU runs, its nfs.job file and the appropriate params.cXX(X) file will be used to create a .poly and the params.snfsTest file in their standard locations. Note that I copied yafu and yafu.ini into my cado-nfs folder. The script run:
[code]
10/05/19 22:54:15 v1.35-beta @ math79, System/Build Info:
Using GMP-ECM 7.0.4, Powered by GMP 6.1.2
detected Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
detected L1 = 32768 bytes, L2 = 8388608 bytes, CL = 64 bytes
measured cpu frequency ~= 3392.240600
using 1 random witnesses for Rabin-Miller PRP checks

===============================================================
======= Welcome to YAFU (Yet Another Factoring Utility) =======
======= bbuhrow@gmail.com =======
======= Type help at any time, or quit to quit =======
===============================================================
cached 78498 primes. pmax = 999983

>> nfs: checking for job file - no job file found
nfs: checking for poly file - no poly file found
nfs: commencing nfs on c260: 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003753228214182907784653374243320855184083908971830286725773330259610973961003693770407047767160959457088914969594820284759067
nfs: searching for brent special forms...
nfs: searching for homogeneous cunningham special forms...
nfs: input divides 10^259 + 3^259
nfs: guessing snfs difficulty 222 is roughly equal to gnfs difficulty 154
nfs: creating ggnfs job parameters for input of size 154

gen: ========================================================
gen: selected polynomial:
gen: ========================================================

n: 5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069
# 10^259+3^259, difficulty: 222.00, anorm: 1.00e+36, rnorm: 1.00e+43
# scaled difficulty: 223.17, suggest sieving rational side
# size = 5.218e-11, alpha = 2.428, combined = 2.108e-12, rroots = 0
type: snfs
size: 222
skew: 1.0000
c6: 1
c5: -1
c4: 1
c3: -1
c2: 1
c1: -1
c0: 1
Y1: -450283905890997363
Y0: 10000000000000000000000000000000000000
m: 4955669040169060042220455827376023114584810834987038702389584545813463114454624952182025091999205978481086968484253198950361852341154281801634403132051732
nfs: job file is missing params, filling them

***factors found***

***co-factor***
C154 = 5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069

ans = 5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069
[/code]The params file for the SNFS run:
[code]
# SNFS Test File

name = snfsTest
N = 5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069

###########################################################################
# Polynomial selection
###########################################################################

tasks.polyselect.admin = 0
tasks.polyselect.admax = 0
tasks.polyselect.adrange = 0
tasks.polyselect.import = parameters/polynomials/snfsTest.poly

###########################################################################
# Sieve
###########################################################################

tasks.lim0 = 30600000
tasks.lim1 = 30600000
tasks.lpb0 = 29
tasks.lpb1 = 29
tasks.sieve.mfb0 = 58
tasks.sieve.mfb1 = 58
tasks.sieve.lambda0 = 2.6
tasks.sieve.lambda1 = 2.6
tasks.I = 14
tasks.qmin = 100000
tasks.sieve.qrange = 10000
tasks.sieve.rels_wanted = 95000000
tasks.sieve.sqside = 0

###########################################################################
# Filtering
###########################################################################

tasks.filter.purge.keep = 175
tasks.filter.target_density = 135.0

###########################################################################
# Linear algebra
###########################################################################

tasks.linalg.bwc.interval = 2000
tasks.linalg.bwc.interleaving = 0
tasks.linalg.m = 64
tasks.linalg.n = 64

###########################################################################
# Characters
###########################################################################

tasks.linalg.characters.nchar = 50
[/code]And, the poly file:
[code]
n: 5815504780296148997503012312763782756080450913112033025539962021634164522276914515466481803925153863579601454828153470121155436709931443715028713024505069
skew: 1.0000
c6: 1
c5: -1
c4: 1
c3: -1
c2: 1
c1: -1
c0: 1
Y1: -450283905890997363
Y0: 10000000000000000000000000000000000000
[/code]I'm going to wait until tomorrow morning to start CADO-NFS. I expect to use the following invocation:
[code]
./cado-nfs.py parameters/factor/snfsTest . . .
[/code]I would appreciate any comments, etc.

Thanks all!

EdH 2019-10-07 21:16

Well, I've made some progress on this SNFS run, including the discovery of why the earlier failed run had troubles.

I had the same thing as before happen again:

The server "appeared" to be stuck in a 410 workers stop loop, and it actually was. The reason however, was a "field" of my "farm" refusing to quit. Strangely, to me anyway, a whole set of machines kept asking for work every time the server told them to stop. This continued until I told every one of the errant machines to "Cut it out!" After the last trouble-maker quit pestering the server, it moved into Linear Algebra and all appears well at this time.

This is an SNFS job of difficulty ~222, equivalent to GNFS ~154, so is that telling me that it should finish in roughly the time a GNFS job of 154 digits would?

jyb 2019-10-07 22:09

[QUOTE=EdH;527472]This is an SNFS job of difficulty ~222, equivalent to GNFS ~154, so is that telling me that it should finish in roughly the time a GNFS job of 154 digits would?[/QUOTE]

I'm not sure exactly who is telling you that, or in what form, so I can't comment on how you should interpret that. But I can tell you that it's definitely the case that an SNFS job of difficulty 222 will be about as hard (in the sense of time and resources required) as a GNFS job of 154 digits. I hope that adequately answers your question.

EdH 2019-10-07 22:32

[QUOTE=jyb;527475]I'm not sure exactly who is telling you that, or in what form, so I can't comment on how you should interpret that. But I can tell you that it's definitely the case that an SNFS job of difficulty 222 will be about as hard (in the sense of time and resources required) as a GNFS job of 154 digits. I hope that adequately answers your question.[/QUOTE]Both the HCN page and YAFU gave me those values. I was asking if they should take the equivalent wall-clock time, or thereabouts, with the same equipment. I'm guessing that is what is meant by difficulty, but wanted to check.

Thanks!

jyb 2019-10-08 00:11

[QUOTE=EdH;527476]Both the HCN page and YAFU gave me those values. I was asking if they should take the equivalent wall-clock time, or thereabouts, with the same equipment. I'm guessing that is what is meant by difficulty, but wanted to check.

Thanks![/QUOTE]

I see. Well, the SNFS difficulty is determined by the particulars of the given polynomial and the value which is the common root of the algebraic and rational polynomials (i.e., the "m" value). (I can provide a simple example if desired.)

Through years of experience with NFS, some folks have determined empirically that the relative hardness (in terms of time, etc) of SNFS (s) and GNFS (g) is something like g = .69 * s, or maybe g = .56 * s + 30 (each is sometimes used). But note that:

1) This is just an empirical guideline. It's useful when you're starting out, but YMMV.
2) The "hardness" of GNFS includes time spent for polynomial selection, so if you've already found a polynomial, then you might find GNFS easier than what would otherwise be the equivalent SNFS difficulty.

EdH 2019-10-08 01:12

[QUOTE=jyb;527479]I see. Well, the SNFS difficulty is determined by the particulars of the given polynomial and the value which is the common root of the algebraic and rational polynomials (i.e., the "m" value). (I can provide a simple example if desired.)

Through years of experience with NFS, some folks have determined empirically that the relative hardness (in terms of time, etc) of SNFS (s) and GNFS (g) is something like g = .69 * s, or maybe g = .56 * s + 30 (each is sometimes used). But note that:

1) This is just an empirical guideline. It's useful when you're starting out, but YMMV.
2) The "hardness" of GNFS includes time spent for polynomial selection, so if you've already found a polynomial, then you might find GNFS easier than what would otherwise be the equivalent SNFS difficulty.[/QUOTE]
Thanks, again! This all makes sense to me.

EdH 2019-10-08 01:30

Now, for something that makes no sense to me at all:

(I hope it's something simple, even stupid on my part, but I can't see it.)

While my primary CADO-NFS server is working on LA, I want to use a secondary server to task all those CADO-NFS clients. Sounds easy, but NOOO, that would be too much to expect.

I have set up a secondary server and changed all the appropriate machine names and it happily performs factorizations by itself. All seems good there.

However, if I add a client (redirected toward the secondary server), any client, it crashes:
[code]
. . .
Warning:Polynomial Selection (root optimized): Invalid polyselect file '/tmp/cadofactor/c60.upload/c60.polyselect2.r421ahir.opt_6': Key n in line n: 90377629292003121684002147101760858109247336549001090677693
has occurred before
. . .
Error:Polynomial Selection (root optimized): No polynomial found. Consider increasing the search range bound admax, or maxnorm
. . .
Critical:Complete Factorization: Premature exit within Polynomial Selection (root optimized). Bye.
Error occurred, terminating
[/code]I've tried different servers and different clients, all with the same results. I've made sure the download/upload areas are cleared and tried different ways to connect the clients. Always the same: if the client(s) connect(s), the server crashes. If not, the server provides factors.

RichD 2019-10-08 02:24

Just guessing. If you are trying to run an SNFS job, there is no need for poly selection. A few of those "tasks.ad..." should be zero. If you are running GNFS, then those previously mentioned parameters need to be set appropriately. Did you carry over parameters from SNFS to a GNFS job? Wild thoughts of mine. :smile:

Edit: Or N is not a factor of the poly.

EdH 2019-10-08 02:39

[QUOTE=RichD;527488]Just guessing. If you are trying to run an SNFS job, there is no need for poly selection. A few of those "tasks.ad..." should be zero. If you are running GNFS, then those previously mentioned parameters need to be set appropriately. Did you carry over parameters from SNFS to a GNFS job? Wild thoughts of mine. :smile:

Edit: Or N is not a factor of the poly.[/QUOTE]
Actually, I'm simply running the sample in the README right now. I figure if I can't get that to run, there must be a problem, although maybe it's too small for a client to be useful. It did grab the default params.c60 as expected.

Maybe tomorrow I'll see what RSA-100 runs like. . .

RichD 2019-10-08 02:53

[QUOTE=EdH;527484]... and it happily performs factorizations by itself. All seems good there.[/QUOTE]

I missed that part so all of my post can be ignored...

EdH 2019-10-08 12:32

The SNFS job succeeded! Thanks everyone!

Dylan14 2019-10-09 00:23

something strange going on in the square root phase...
 
So I am using CADO to factor 8+5,321 from the HCN tables. The program had no problem in the lattice sieving, filtering, and matrix solving steps. However, when it gets to the square root phase, it is able to create the dependencies, but when it goes to the first dependency, it gets through the rational part fine but hangs as soon as it gets to the algebraic part, happily eating up a CPU core. The relevant output is shown below:


[CODE]dylan@ubuntu:~/bin/cado/cado-nfs$ /home/dylan/bin/cado/cado-nfs/build/ubuntu/sqrt/sqrt -poly /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.poly -prefix /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.dep.gz -purged /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.purged.gz -index /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.index.gz -ker /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.kernel -dep 0 -t 8 -side0 -side1 -gcd
/home/dylan/bin/cado/cado-nfs/build/ubuntu/sqrt/sqrt.r52eebdbc5 -poly /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.poly -prefix /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.dep.gz -purged /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.purged.gz -index /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.index.gz -ker /home/dylan/Desktop/factoring/deepsearch/8_5_321plus.kernel -dep 0 -t 8 -side0 -side1 -gcd
Using GMP 6.1.2
Using GMP 6.1.2
Using GMP 6.1.2
Using GMP 6.1.2
Using GMP 6.1.2
Using GMP 6.1.2
Using GMP 6.1.2
Using GMP 6.1.2
Rat(7): read 1000000 pairs in 57.83s, size 21M (peak 857M)
Rat(2): read 1000000 pairs in 58.04s, size 21M (peak 857M)
Rat(6): read 1000000 pairs in 58.28s, size 21M (peak 857M)
Rat(0): read 1000000 pairs in 58.80s, size 21M (peak 857M)
Rat(1): read 1000000 pairs in 59.46s, size 21M (peak 857M)
Rat(4): read 1000000 pairs in 59.63s, size 21M (peak 857M)
Rat(5): read 1000000 pairs in 59.89s, size 21M (peak 857M)
Rat(3): read 1000000 pairs in 60.19s, size 21M (peak 857M)
Rat(2): read 2000000 pairs in 142.10s, size 43M (peak 1484M)
Rat(7): read 2000000 pairs in 144.02s, size 43M (peak 1484M)
Rat(6): read 2000000 pairs in 144.08s, size 43M (peak 1484M)
Rat(0): read 2000000 pairs in 144.10s, size 43M (peak 1484M)
Rat(1): read 2000000 pairs in 144.44s, size 43M (peak 1484M)
Rat(5): read 2000000 pairs in 145.79s, size 43M (peak 1484M)
Rat(3): read 2000000 pairs in 146.12s, size 43M (peak 1484M)
Rat(4): read 2000000 pairs in 146.30s, size 43M (peak 1484M)
Rat(2): read 3000000 pairs in 251.93s, size 64M (peak 2733M)
Rat(6): read 3000000 pairs in 254.44s, size 64M (peak 2733M)
Rat(7): read 3000000 pairs in 255.63s, size 64M (peak 2733M)
Rat(5): read 3000000 pairs in 256.09s, size 64M (peak 2733M)
Rat(1): read 3000000 pairs in 256.75s, size 64M (peak 2733M)
Rat(3): read 3000000 pairs in 257.33s, size 64M (peak 2733M)
Rat(0): read 3000000 pairs in 257.40s, size 64M (peak 2733M)
Rat(4): read 3000000 pairs in 258.20s, size 64M (peak 2733M)
Rat(2): read 4000000 pairs in 340.63s, size 86M (peak 2733M)
Rat(6): read 4000000 pairs in 342.46s, size 86M (peak 2733M)
Rat(7): read 4000000 pairs in 343.87s, size 86M (peak 2733M)
Rat(5): read 4000000 pairs in 344.76s, size 86M (peak 2733M)
Rat(1): read 4000000 pairs in 345.47s, size 86M (peak 2733M)
Rat(0): read 4000000 pairs in 345.60s, size 86M (peak 2733M)
Rat(3): read 4000000 pairs in 346.64s, size 86M (peak 2733M)
Rat(4): read 4000000 pairs in 346.73s, size 86M (peak 2733M)
Rat(2): read 4913480 (a,b) pairs, including 320990 free
Rat(2): size of product = 881572967 bits (peak 4774M)
Rat(2): starting rational square root at 497.26s
Rat(7): read 4915938 (a,b) pairs, including 321580 free
Rat(7): size of product = 881994421 bits (peak 4774M)
Rat(7): starting rational square root at 507.52s
Rat(4): read 4913404 (a,b) pairs, including 321054 free
Rat(4): size of product = 881551104 bits (peak 4774M)
Rat(4): starting rational square root at 523.92s
Rat(6): read 4917508 (a,b) pairs, including 321182 free
Rat(6): size of product = 882291684 bits (peak 4774M)
Rat(6): starting rational square root at 546.44s
Rat(0): read 4916902 (a,b) pairs, including 321120 free
Rat(0): size of product = 882190229 bits (peak 5083M)
Rat(0): starting rational square root at 569.57s
Rat(1): read 4916160 (a,b) pairs, including 321470 free
Rat(1): size of product = 882035345 bits (peak 5083M)
Rat(1): starting rational square root at 591.31s
Rat(5): read 4913230 (a,b) pairs, including 320554 free
Rat(5): size of product = 881533079 bits (peak 5424M)
Rat(5): starting rational square root at 613.39s
Rat(4): computed square root at 613.39s
Rat(7): computed square root at 613.39s
Rat(2): computed square root at 613.39s
Rat(6): computed square root at 613.39s
Rat(3): read 4917898 (a,b) pairs, including 321096 free
Rat(3): size of product = 882367677 bits (peak 5824M)
Rat(3): starting rational square root at 636.65s
Rat(2): reduced mod n at 636.65s
Rat(2): computed g1^(nab/2) mod n at 636.65s
Rat(6): reduced mod n at 636.66s
Rat(0): computed square root at 636.66s
Rat(7): reduced mod n at 636.66s
Rat(4): reduced mod n at 636.66s
Rat(4): computed g1^(nab/2) mod n at 636.66s
Rat(2): square root is 108353163759881600643894679536078468820402057173097413949138909519156751140550342443578088701109860135899766622436893581704775995414313180461317459390632941167200055610731146829
Rat(2): square root time: 636.71s
Rat(6): computed g1^(nab/2) mod n at 636.71s
Rat(7): computed g1^(nab/2) mod n at 636.71s
Rat(6): square root is 204016267615125619978352034156136128497615124930984073137683408536868621946584070745593145367799471026667584236348551090474004696260742128357144779037959322531996725629094023237
Rat(6): square root time: 636.75s
Rat(4): square root is 53705843804086516539230617583639378592193158633256713204143907090312318593416823578922810277659821256643442825103867636008232655456161679510096601053303037581781919950967068799
Rat(4): square root time: 636.76s
Rat(7): square root is 213593806083670740848816055933204752741729262767834671378006160962352110024503160512233018770623107056426602436268741439495733588074449749243022795924896659819047774904389709342
Rat(7): square root time: 636.83s
Rat(0): reduced mod n at 637.47s
Rat(0): computed g1^(nab/2) mod n at 637.47s
Rat(0): square root is 95700751676702838988654317293570603376820089246383021493578812711978005199691151792539059463278937707494190849430823110992470364468209608718262618106869516992952136965347297727
Rat(0): square root time: 637.55s
Rat(1): computed square root at 646.10s
Rat(1): reduced mod n at 646.63s
Rat(1): computed g1^(nab/2) mod n at 646.63s
Rat(1): square root is 100177008842319066167478554512477703721488147337428495464894724260551817302834262129313454018174152265955205906856575359538552850754471282622579693292339850074025822115849144074
Rat(1): square root time: 646.70s
Rat(5): computed square root at 656.28s
Rat(5): reduced mod n at 656.60s
Rat(5): computed g1^(nab/2) mod n at 656.60s
Rat(5): square root is 315089031833869189828103609310552217499888422784399432355902759629919996167185060811818485689191415613990951402492208965391479751813642347860103097972624178374074507231610975241
Rat(5): square root time: 656.61s
Rat(3): computed square root at 660.67s
Rat(3): reduced mod n at 660.80s
Rat(3): computed g1^(nab/2) mod n at 660.80s
Rat(3): square root is 381947265433452190345574052318913733219924009795436619180598909604207607650546531704889996482255142032007976009496695733193596979172567317910360363175693667475280599240253232452
Rat(3): square root time: 660.80s
Alg(5): reading ab pair #0 at 660.80s (peak 5824M)
Alg(1): reading ab pair #0 at 660.80s (peak 5824M)
Alg(4): reading ab pair #0 at 660.80s (peak 5824M)
Alg(7): reading ab pair #0 at 660.80s (peak 5824M)
Alg(0): reading ab pair #0 at 660.80s (peak 5824M)
Alg(2): reading ab pair #0 at 660.81s (peak 5824M)
Alg(6): reading ab pair #0 at 660.85s (peak 5824M)
Alg(3): reading ab pair #0 at 660.88s (peak 5824M)
Alg(1): reading ab pair #1000000 at 749.12s (peak 5824M)
Alg(0): reading ab pair #1000000 at 749.86s (peak 5824M)
Alg(7): reading ab pair #1000000 at 750.08s (peak 5824M)
Alg(3): reading ab pair #1000000 at 750.51s (peak 5824M)
Alg(6): reading ab pair #1000000 at 750.57s (peak 5824M)
Alg(4): reading ab pair #1000000 at 750.58s (peak 5824M)
Alg(2): reading ab pair #1000000 at 750.65s (peak 5824M)
Alg(5): reading ab pair #1000000 at 751.15s (peak 5824M)
Alg(1): reading ab pair #2000000 at 856.92s (peak 5824M)
Alg(7): reading ab pair #2000000 at 858.05s (peak 5824M)
Alg(0): reading ab pair #2000000 at 858.50s (peak 5824M)
Alg(2): reading ab pair #2000000 at 858.93s (peak 5824M)
Alg(6): reading ab pair #2000000 at 859.82s (peak 5824M)
Alg(4): reading ab pair #2000000 at 860.14s (peak 5824M)
Alg(5): reading ab pair #2000000 at 860.39s (peak 5824M)
Alg(3): reading ab pair #2000000 at 860.68s (peak 5824M)
Alg(1): reading ab pair #3000000 at 985.26s (peak 5824M)
Alg(7): reading ab pair #3000000 at 987.83s (peak 5824M)
Alg(0): reading ab pair #3000000 at 988.03s (peak 5824M)
Alg(2): reading ab pair #3000000 at 988.39s (peak 5824M)
Alg(5): reading ab pair #3000000 at 989.27s (peak 5824M)
Alg(6): reading ab pair #3000000 at 989.56s (peak 5824M)
Alg(3): reading ab pair #3000000 at 989.71s (peak 5824M)
Alg(4): reading ab pair #3000000 at 989.86s (peak 5824M)
Alg(1): reading ab pair #4000000 at 1093.64s (peak 5824M)
Alg(0): reading ab pair #4000000 at 1096.56s (peak 5824M)
Alg(7): reading ab pair #4000000 at 1097.14s (peak 5824M)
Alg(2): reading ab pair #4000000 at 1097.38s (peak 5824M)
Alg(3): reading ab pair #4000000 at 1097.77s (peak 5824M)
Alg(5): reading ab pair #4000000 at 1097.86s (peak 5824M)
Alg(6): reading ab pair #4000000 at 1097.99s (peak 5824M)
Alg(4): reading ab pair #4000000 at 1098.46s (peak 5824M)
Alg(1): read 4916160 including 321470 free relations
Alg(0): read 4916902 including 321120 free relations
Alg(7): read 4915938 including 321580 free relations
Alg(3): read 4917898 including 321096 free relations
Alg(6): read 4917508 including 321182 free relations
Alg(2): read 4913480 including 320990 free relations
Alg(4): read 4913404 including 321054 free relations
Alg(5): read 4913230 including 320554 free relations
Alg(1): product tree took 135Mb (peak 5824M)
Alg(1): finished accumulating product at 1309.40s
Alg(1): nab = 4916160, nfree = 321470, v = 4594687
Alg(1): maximal polynomial bit-size = 144071522
[/CODE]After the last line it hangs until I send Ctrl+C. This behavior seems to persist for other dependencies as well (I have tried deps 0, 1 and 2 and got the same behavior). I'm not sure what the issue is, or whether there is an issue.

If it helps, I am using commit 52eebdbc5dcb59ca254f3fc97a842e4fe926c06a, dated Jan 9 2019.

EdH 2019-10-21 22:37

How truly disappointing:
[code]
Info:Linear Algebra: Aggregate statistics:
Info:Linear Algebra: Krylov: WCT time 2103.47, iteration CPU time 0.06, COMM 0.01, cpu-wait 0.0, comm-wait 0.0 (32000 iterations)
Info:Linear Algebra: Lingen CPU time 204.53, WCT time 59.6
Info:Linear Algebra: Mksol: WCT time 1176.55, iteration CPU time 0.07, COMM 0.01, cpu-wait 0.0, comm-wait 0.0 (16000 iterations)
Info:Quadratic Characters: Total cpu/real time for characters: 35.38/10.0432
Info:Square Root: Total cpu/real time for sqrt: [B]0.01/0.0108812[/B]
Info:HTTP server: Shutting down HTTP server
Info:[B]Complete Factorization[/B] / Discrete logarithm: Total cpu/elapsed time for entire factorization: 4.86737e+06/31386.1
4567667762603194757852398320680609911185256595933512055509347681004328939176021536679224687954633015313533157719712867770065756562808774560089924477412533383204712638551151
[/code]No factors returned. . .

I suppose I could redo the LA with msieve, as Dylan did for his earlier failure, but is there a way to have CADO-NFS rerun the Square Root step, or is it not actually worth it?

Dylan14 2019-10-21 23:09

[QUOTE=EdH;528531]How truly disappointing:
[code]
Info:Linear Algebra: Aggregate statistics:
Info:Linear Algebra: Krylov: WCT time 2103.47, iteration CPU time 0.06, COMM 0.01, cpu-wait 0.0, comm-wait 0.0 (32000 iterations)
Info:Linear Algebra: Lingen CPU time 204.53, WCT time 59.6
Info:Linear Algebra: Mksol: WCT time 1176.55, iteration CPU time 0.07, COMM 0.01, cpu-wait 0.0, comm-wait 0.0 (16000 iterations)
Info:Quadratic Characters: Total cpu/real time for characters: 35.38/10.0432
Info:Square Root: Total cpu/real time for sqrt: [B]0.01/0.0108812[/B]
Info:HTTP server: Shutting down HTTP server
Info:[B]Complete Factorization[/B] / Discrete logarithm: Total cpu/elapsed time for entire factorization: 4.86737e+06/31386.1
4567667762603194757852398320680609911185256595933512055509347681004328939176021536679224687954633015313533157719712867770065756562808774560089924477412533383204712638551151
[/code]No factors returned. . .

I suppose I could redo the LA with msieve, as Dylan did for his earlier failure, but is there a way to have CADO-NFS rerun the Square Root step, or is it not actually worth it?[/QUOTE]

Hmm... that’s the opposite of the issue I had, where the square root phase just kept going and didn’t get anywhere. You could try to run the square root manually, with some invocation like:

[CODE]/path/to/sqrt/executable/in/build/folder -poly polyfile -prefix depfile -purged purgedfile -index indexfile -ker something.kernel -dep 0 -t 4 -side0 -side1 -gcd [/CODE]

where you replace the files with the actual paths to those files and the number next to -t with the number of threads you actually have. If that doesn’t work, you could try a different dependency. As a last resort with CADO, you could rebuild the dependencies with

[CODE]/path/to/characters/exec/in/build/folder -poly polyfile -purged purgedfile -index indexfile -heavyblock dense.bin file -out nameofcomposite.kernel -ker something.bwc/W -lpb0 # -lpb1 # -nchar # -t n[/CODE]

where the values for -lpb0 and -lpb1 come from the params file, and densefile is the dense.bin file from the working directory.
And if that doesn’t work you could just move the relations to msieve and run filtering there. You may have to sieve some more though.
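For a concrete (hypothetical) example, assuming the default /tmp/cadofactor work directory and a job named snfsTest, the first command would look something like:

[CODE]/path/to/cado-nfs/build/hostname/sqrt/sqrt -poly /tmp/cadofactor/snfsTest.poly -prefix /tmp/cadofactor/snfsTest.dep.gz -purged /tmp/cadofactor/snfsTest.purged.gz -index /tmp/cadofactor/snfsTest.index.gz -ker /tmp/cadofactor/snfsTest.kernel -dep 0 -t 4 -side0 -side1 -gcd[/CODE]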

EdH 2019-10-22 00:27

Well, I have discovered the problem, much to my embarrassment.

I put in 3[B]+[/B]2,627 rather than 3[B]-[/B]2,627.

msieve has told me that the number I was working on is a [B]p[/B]172.:blush:

Back to the real 3[B]-[/B]2,627.

Sorry for the trouble, Dylan, but thanks for the help!

EdH 2019-10-23 18:30

Dev Branch for CADO-NFS Errors Out on Build Attempt
 
[code]
[ 59%] Building C object linalg/bwc/CMakeFiles/bwc_mpfq.dir/mpfq/mpfq_m128.c.o
In file included from /usr/lib/gcc/x86_64-linux-gnu/5/include/smmintrin.h:811:0,
from /usr/lib/gcc/x86_64-linux-gnu/5/include/x86intrin.h:41,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:13,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:
/home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h: In function ‘mpfq_m128_simd_hamming_weight’:
/usr/lib/gcc/x86_64-linux-gnu/5/include/popcntintrin.h:42:1: error: inlining failed in call to always_inline ‘_mm_popcnt_u64’: target specific option mismatch
_mm_popcnt_u64 (unsigned long long __X)
^
In file included from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:0:
/home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:804:20: error: called from here
_mm_popcnt_u64(_mm_extract_epi64(*r, 1));
^
In file included from /usr/lib/gcc/x86_64-linux-gnu/5/include/smmintrin.h:811:0,
from /usr/lib/gcc/x86_64-linux-gnu/5/include/x86intrin.h:41,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:13,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:
/usr/lib/gcc/x86_64-linux-gnu/5/include/popcntintrin.h:42:1: error: inlining failed in call to always_inline ‘_mm_popcnt_u64’: target specific option mismatch
_mm_popcnt_u64 (unsigned long long __X)
^
In file included from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:0:
/home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:803:20: error: called from here
return _mm_popcnt_u64(_mm_extract_epi64(*r, 0)) +
^
In file included from /usr/lib/gcc/x86_64-linux-gnu/5/include/smmintrin.h:811:0,
from /usr/lib/gcc/x86_64-linux-gnu/5/include/x86intrin.h:41,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:13,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:
/usr/lib/gcc/x86_64-linux-gnu/5/include/popcntintrin.h:42:1: error: inlining failed in call to always_inline ‘_mm_popcnt_u64’: target specific option mismatch
_mm_popcnt_u64 (unsigned long long __X)
^
In file included from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:0:
/home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:803:20: error: called from here
return _mm_popcnt_u64(_mm_extract_epi64(*r, 0)) +
^
In file included from /usr/lib/gcc/x86_64-linux-gnu/5/include/smmintrin.h:811:0,
from /usr/lib/gcc/x86_64-linux-gnu/5/include/x86intrin.h:41,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:13,
from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:
/usr/lib/gcc/x86_64-linux-gnu/5/include/popcntintrin.h:42:1: error: inlining failed in call to always_inline ‘_mm_popcnt_u64’: target specific option mismatch
_mm_popcnt_u64 (unsigned long long __X)
^
In file included from /home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.c:4:0:
/home/math71/Math/cado-nfs/linalg/bwc/mpfq/mpfq_m128.h:804:20: error: called from here
_mm_popcnt_u64(_mm_extract_epi64(*r, 1));
^
linalg/bwc/CMakeFiles/bwc_mpfq.dir/build.make:86: recipe for target 'linalg/bwc/CMakeFiles/bwc_mpfq.dir/mpfq/mpfq_m128.c.o' failed
make[2]: *** [linalg/bwc/CMakeFiles/bwc_mpfq.dir/mpfq/mpfq_m128.c.o] Error 1
CMakeFiles/Makefile2:3434: recipe for target 'linalg/bwc/CMakeFiles/bwc_mpfq.dir/all' failed
make[1]: *** [linalg/bwc/CMakeFiles/bwc_mpfq.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Makefile:7: recipe for target 'all' failed
make: *** [all] Error 1
[/code]Very repeatable!

System Core2 Quad Q8400, Ubuntu 16.04 LTS

EdH 2019-10-23 19:41

Actually, is the git version I normally use,
[code]
https://scm.gforge.inria.fr/anonscm/git/cado-nfs/cado-nfs.git
[/code]the development branch, or is it the latest release? And, is there a download area for older versions, maybe something like the revision system for subversion? I had to grab a local directory from a working machine and recompile. That one is probably quite old.

Dylan14 2019-10-24 04:49

[QUOTE=EdH;528719]Actually, is the git version I normally use,
[code]
https://scm.gforge.inria.fr/anonscm/git/cado-nfs/cado-nfs.git
[/code]the development branch, or is it the latest release? And, is there a download area for older versions, maybe something like the revision system for subversion? I had to grab a local directory from a working machine and recompile. That one is probably quite old.[/QUOTE]


The git version that you use is the development version. Older versions are available here:
[URL]http://cado-nfs.gforge.inria.fr/download.html[/URL]
Do note, the latest non-development version is over 2 years old at this point, so it is likely to be slower than the current version in the repository.

EdH 2019-10-24 14:05

[QUOTE=Dylan14;528751]. . .
Do note, the latest non-development version is over 2 years old at this point, so it is likely to be slower than the current version in the repository.[/QUOTE]The version I'm running is an earlier git version, but I don't know how early.

Since the current git download won't compile (see earlier post), I was hoping there was a way to d/l an earlier revision of the git clone, like with subversion.

Thanks much!

Nooks 2019-10-24 15:42

Run "git checkout <commit>" where commit is the hash of an earlier commit. You can find earlier commits by looking at the output of "git log":

[CODE]commit 15143a94f3eb2cea193c838bb4e3a1885757767e
Author: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Date: Mon Oct 7 08:45:42 2019 +0200

typo

commit ea1c46714c526341905eca37572595bfc78dc51d (get_arg_max)
Author: Alexander Kruppa <akruppa@gmail.com>
Date: Fri Oct 4 20:16:43 2019 +0200

Bugfix: one Newton iteration was missing in u64arith_invmod()
[/CODE]

Running "git checkout 15143a94f3eb2cea193c838bb4e3a1885757767e" (or some shorter prefix of that SHA checksum) will set your checkout to that point in the revision history. "git diff 15143a" will show the differences between what's currently checked out and that commit (note that I've used the short form of the commit name).

The command you want to find the change that causes your build failure is "git bisect", which helps you perform a binary search through the git commit history to find the bad change, assuming that there is one.
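A typical bisect session looks something like this (using the commit above as a known-good point):

[CODE]git bisect start
git bisect bad                 # the currently checked-out revision fails to build
git bisect good ea1c4671       # a revision known to build
# git now checks out a commit roughly halfway in between; try building it:
make
git bisect good                # or "git bisect bad", depending on the result
# repeat until git names the first bad commit, then clean up with:
git bisect reset[/CODE]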

EdH 2019-10-25 00:16

[QUOTE=Nooks;528790]Run "git checkout <commit>" where commit is the hash of an earlier commit. You can find earlier commits by looking at the output of "git log":

[CODE]commit 15143a94f3eb2cea193c838bb4e3a1885757767e
Author: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Date: Mon Oct 7 08:45:42 2019 +0200

typo

commit ea1c46714c526341905eca37572595bfc78dc51d (get_arg_max)
Author: Alexander Kruppa <akruppa@gmail.com>
Date: Fri Oct 4 20:16:43 2019 +0200

Bugfix: one Newton iteration was missing in u64arith_invmod()
[/CODE]Running "git checkout 15143a94f3eb2cea193c838bb4e3a1885757767e" (or some shorter prefix of that SHA checksum) will set your checkout to that point in the revision history. "git diff 15143a" will show the differences between what's currently checked out and that commit (note that I've used the short form of the commit name).

The command you want to find the change that causes your build failure is "git bisect", which helps you perform a binary search through the git commit history to find the bad change, assuming that there is one.[/QUOTE]
Excellent! Thank you! After I recover from an extended power outage** I'll have to check this info out and learn more about the git workings.

**Had to disconnect the server and transfer the /tmp/cado* directory before restarting when the power came back. I know, I need to use a different working directory, but for some reason that failed when I last tried it. . .

EdH 2019-10-26 00:51

Well, the current git version just won't compile on my Q8400 Core2 Quads. It appears to compile fine everywhere else. . .

Happy5214 2019-10-26 02:34

[QUOTE=EdH;528961]Well, the current git version just won't compile with my Q8400 Core2 Quads. It appears to compile otherwise. . .[/QUOTE]


Core 2 Quad Q8400? I didn't think anyone here (other than myself) used CPUs that old. My primary computer has a Q8300. I'm too cheap to buy a new machine.

EdH 2019-10-26 03:15

[QUOTE=Happy5214;528971]Core 2 Quad Q8400? I didn't think anyone here (other than myself) used CPUs that old. My primary computer has a Q8300. I'm too cheap to buy a new machine.[/QUOTE]
I do have some ancient hardware. I did retire the Pentium 4s a little while back.:smile: The Q8400s do run a slightly older version of CADO-NFS, though.

EdH 2019-10-31 14:02

[strike]I'm getting hit by frequent power outages. I have a parallel machine running with my CADO-NFS server doing some msieve testing, so it captures the relations files to another machine and runs msieve LA in parallel, if possible.

This last power outage destroyed the CADO-NFS directory during CADO-NFS LA. Msieve won't do LA on the saved relations. There is no snapshot file. (They haven't worked for me after a power outage, anyway.)

I don't remember how to start CADO-NFS at the filtering stage. Can someone refresh my memory as to how to pick up CADO-NFS at the end of relations gathering?

Thanks![/strike]
I found the README with the info I was looking for to try to build a restart. . .

EdH 2019-10-31 17:42

[QUOTE=EdH;529357]. . .
I found the README with the info I was looking for to try to build a restart. . .[/QUOTE]
In case there is interest, I could not get CADO-NFS to correctly accept the 8.0 GB of relations it had produced; it kept telling me there were only 14k relations out of over 60M. I had to create a modified poly file and manually run some sieving to add a few thousand relations to the existing ones in order for msieve to create a matrix. msieve LA is now counting down. CADO-NFS has been retasked. . .

EdH 2019-12-03 16:54

I hadn't really given it much thought as I was tied up for the Thanksgiving time frame, but this current Homogeneous Cunningham Number (5-2,395) is kicking my . . . I knew it would be a bit rough as a quartic, but it has been much more troublesome than I expected. After ten plus days, it is still running krylov. It stepped itself up to over 150M relations prior to that, and msieve wouldn't even build a matrix with those:[code]
Wed Nov 27 15:42:07 2019 found 43148403 hash collisions in 150481636 relations
Wed Nov 27 15:42:17 2019 added 3657731 free relations
Wed Nov 27 15:42:17 2019 commencing duplicate removal, pass 2
Wed Nov 27 15:45:40 2019 found 58197613 duplicates and 95941754 unique relations
Wed Nov 27 15:45:40 2019 memory use: 852.8 MB
Wed Nov 27 15:45:40 2019 reading ideals above 104267776
Wed Nov 27 15:45:40 2019 commencing singleton removal, initial pass
Wed Nov 27 15:54:06 2019 memory use: 2756.0 MB
Wed Nov 27 15:54:06 2019 reading all ideals from disk
Wed Nov 27 15:54:07 2019 memory use: 1781.1 MB
Wed Nov 27 15:54:11 2019 commencing in-memory singleton removal
Wed Nov 27 15:54:14 2019 begin with 95941754 relations and 89951500 unique ideals
Wed Nov 27 15:54:53 2019 reduce to 48565925 relations and 36703205 ideals in 19 passes
Wed Nov 27 15:54:53 2019 max relations containing the same ideal: 28
Wed Nov 27 15:54:55 2019 reading ideals above 720000
Wed Nov 27 15:54:55 2019 commencing singleton removal, initial pass
Wed Nov 27 16:01:22 2019 memory use: 1378.0 MB
Wed Nov 27 16:01:22 2019 reading all ideals from disk
Wed Nov 27 16:01:23 2019 memory use: 1846.4 MB
Wed Nov 27 16:01:28 2019 keeping 48438181 ideals with weight <= 200, target excess is 250010
Wed Nov 27 16:01:32 2019 commencing in-memory singleton removal
Wed Nov 27 16:01:37 2019 begin with 48565925 relations and 48438181 unique ideals
Wed Nov 27 16:02:34 2019 reduce to 48463653 relations and 48333630 ideals in 14 passes
Wed Nov 27 16:02:34 2019 max relations containing the same ideal: 200
Wed Nov 27 16:02:38 2019 filtering wants 1000000 more relations
[/code]Did I just have bad luck, or was the polynomial a truly poor one?[code]
n: 15511640641470902861412193110950053902622414575896230622010872491557176010272780951376514004554647568057529677786967709412454141325321707454094419046730623511926783273119854156313082841
skew: 1.0000
c4: 1
c3: 1
c2: 1
c1: 1
c0: 1
Y1: -604462909807314587353088
Y0: 16543612251060553497428173841399257071316242218017578125
[/code]

jyb 2019-12-03 17:18

[QUOTE=EdH;531923]I hadn't really given it much thought as I was tied up for the Thanksgiving time frame, but this current Homogeneous Cunningham Number (5-2,395) is kicking my . . . I knew it would be a bit rough as a quartic, but it has been much more troublesome than I expected. After ten plus days, it is still running krylov. It stepped itself up to over 150M relations prior to that, and msieve wouldn't even build a matrix with those:[code]
Wed Nov 27 15:42:07 2019 found 43148403 hash collisions in 150481636 relations
Wed Nov 27 15:42:17 2019 added 3657731 free relations
Wed Nov 27 15:42:17 2019 commencing duplicate removal, pass 2
Wed Nov 27 15:45:40 2019 found 58197613 duplicates and 95941754 unique relations
Wed Nov 27 15:45:40 2019 memory use: 852.8 MB
Wed Nov 27 15:45:40 2019 reading ideals above 104267776
Wed Nov 27 15:45:40 2019 commencing singleton removal, initial pass
Wed Nov 27 15:54:06 2019 memory use: 2756.0 MB
Wed Nov 27 15:54:06 2019 reading all ideals from disk
Wed Nov 27 15:54:07 2019 memory use: 1781.1 MB
Wed Nov 27 15:54:11 2019 commencing in-memory singleton removal
Wed Nov 27 15:54:14 2019 begin with 95941754 relations and 89951500 unique ideals
Wed Nov 27 15:54:53 2019 reduce to 48565925 relations and 36703205 ideals in 19 passes
Wed Nov 27 15:54:53 2019 max relations containing the same ideal: 28
Wed Nov 27 15:54:55 2019 reading ideals above 720000
Wed Nov 27 15:54:55 2019 commencing singleton removal, initial pass
Wed Nov 27 16:01:22 2019 memory use: 1378.0 MB
Wed Nov 27 16:01:22 2019 reading all ideals from disk
Wed Nov 27 16:01:23 2019 memory use: 1846.4 MB
Wed Nov 27 16:01:28 2019 keeping 48438181 ideals with weight <= 200, target excess is 250010
Wed Nov 27 16:01:32 2019 commencing in-memory singleton removal
Wed Nov 27 16:01:37 2019 begin with 48565925 relations and 48438181 unique ideals
Wed Nov 27 16:02:34 2019 reduce to 48463653 relations and 48333630 ideals in 14 passes
Wed Nov 27 16:02:34 2019 max relations containing the same ideal: 200
Wed Nov 27 16:02:38 2019 filtering wants 1000000 more relations
[/code]Did I just have bad luck, or was the polynomial a truly poor one?[code]
n: 15511640641470902861412193110950053902622414575896230622010872491557176010272780951376514004554647568057529677786967709412454141325321707454094419046730623511926783273119854156313082841
skew: 1.0000
c4: 1
c3: 1
c2: 1
c1: 1
c0: 1
Y1: -604462909807314587353088
Y0: 16543612251060553497428173841399257071316242218017578125
[/code][/QUOTE]

I can't speak to the particulars of using CADO-NFS vs. msieve, but a quartic of difficulty 221 is really just hard. Based on prior experience (using ggnfs/msieve), it's about as hard as a sextic of difficulty 245 or 250, i.e. pushing the limit of what one person can do on personal hardware.

EdH 2019-12-03 20:23

[QUOTE=jyb;531927]I can't speak to the particulars of using CADO-NFS vs. msieve, but a quartic of difficulty 221 is really just hard. Based on prior experience (using ggnfs/msieve), it's about as hard as a sextic of difficulty 245 or 250. I.e. pushing the limit of what one person can do on personal hardware.[/QUOTE]If this isn't out of the ordinary, then I'll continue with these quartic forced HCNs, since I'm capable. I may need to revisit a couple things, though. If I could have gotten msieve to do the LA, I could already be sieving the next one. As my setup currently stands, I have a lot of idle machines while this one finishes.

chris2be8 2019-12-04 16:49

Looking at the msieve output it would probably be able to build a matrix with 10% more relations (or possibly a few % less).

But I've not done quite that large a quartic. If I had to I'd probably try test sieving the octic against the quartic (assuming it has a reasonable octic).

Chris

EdH 2019-12-04 17:57

[QUOTE=chris2be8;532012]Looking at the msieve output it would probably be able to build a matrix with 10% more relations (or possibly a few % less).

But I've not done quite that large a quartic. If I had to I'd probably try test sieving the octic against the quartic (assuming it has a reasonable octic).

Chris[/QUOTE]
Thanks,

I'll probably leave the quartics for someone else, for now, after the current two finish.

EdH 2019-12-08 16:21

After more than two weeks, a bit of disappointment. (I know, two weeks is but a minor bit compared to some projects.):
[code]Info:Quadratic Characters: Starting
Info:Quadratic Characters: Total cpu/real time for characters: 489.84/145.22
Info:Square Root: Starting
Info:Square Root: Creating file of (a,b) values
Warning:Command: Process with PID 29213 finished with return code -9
Error:Square Root: Program run on server failed with exit code -9
Error:Square Root: Command line was: /home/math90/Math/cado-nfs/build/math90/sqrt/sqrt -poly /tmp/cadofactor/snfsTest.poly -prefix /tmp/cadofactor/snfsTest.dep.gz -purged /tmp/cadofactor/snfsTest.purged.gz -index /tmp/cadofactor/snfsTest.index.gz -ker /tmp/cadofactor/snfsTest.kernel -dep 0 -t 8 -side0 -side1 -gcd > /tmp/cadofactor/snfsTest.sqrt.stdout.2 2> /tmp/cadofactor/snfsTest.sqrt.stderr.2
Error:Square Root: Stderr output (last 10 lines only) follow (stored in file /tmp/cadofactor/snfsTest.sqrt.stderr.2):
Error:Square Root: Rat(0): read 24052344 (a,b) pairs, including 1378888 free
Error:Square Root: Rat(1): read 24000000 pairs in 4232.32s, size 578M (peak 18720M)
Error:Square Root: Rat(2): read 24000000 pairs in 4233.25s, size 578M (peak 18720M)
Error:Square Root: Rat(1): read 24066996 (a,b) pairs, including 1378412 free
Error:Square Root: Rat(2): read 24061340 (a,b) pairs, including 1378834 free
Error:Square Root: Rat(6): size of product = 4856295942 bits (peak 24415M)
Error:Square Root: Rat(6): starting rational square root at 4400.28s
Error:Square Root: Rat(5): size of product = 4856688979 bits (peak 25743M)
Error:Square Root: Rat(5): starting rational square root at 4462.11s
Error:Square Root:
Traceback (most recent call last):
File "./cado-nfs.py", line 122, in <module>
factors = factorjob.run()
File "./scripts/cadofactor/cadotask.py", line 5885, in run
last_status, last_task = self.run_next_task()
File "./scripts/cadofactor/cadotask.py", line 5977, in run_next_task
return [task.run(), task.title]
File "./scripts/cadofactor/cadotask.py", line 4871, in run
raise Exception("Program failed")
Exception: Program failed
[/code]Can this be solved with a large swap file?

VBCurtis 2019-12-08 17:24

Lame!
If swap doesn't cut it, you can upload the entire work directory to the server we used for 2330L, and complete the job there. PM me for details, if you don't finish it yourself.

EdH 2019-12-08 23:07

[QUOTE=VBCurtis;532370]Lame!
If swap doesn't cut it, you can upload the entire work directory to the server we used for 2330L, and complete the job there. PM me for details, if you don't finish it yourself.[/QUOTE]
Thanks! I'm going to "experiment" here first, because I've got another one right behind this one that will need processing. Will msieve do the root for this, even though it refused to build a matrix?

VBCurtis 2019-12-09 00:39

I haven't tried that; I'm interested to know if it works, and if it does which files you had to copy/rename to feed to msieve.

EdH 2019-12-09 04:32

[QUOTE=VBCurtis;532398]I haven't tried that; I'm interested to know if it works, and if it does which files you had to copy/rename to feed to msieve.[/QUOTE]
Well, I got out of trying the msieve approach. A large swapfile did the trick.:smile:
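In case anyone else hits the same wall, the standard Linux recipe for a large temporary swapfile is something like this (the size is illustrative; pick one comfortably above the peak reported in the sqrt stderr, ~25 GB in my case):
[code]
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# later: sudo swapoff /swapfile && sudo rm /swapfile
[/code]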

SethTro 2020-01-13 06:53

[QUOTE=wombatman;497391]I was able to get the linear algebra to re-run by deleting the bwc folder under the /tmp/ work directory. Just posting this in case someone else runs into the same issue (or I do again...)[/QUOTE]

I'm trying to fix what I think is a bug in the mksol ETA (the ETA seems to increase in direct proportion to the iteration count). I'd love to be able to rerun just that step.

Deleting the bwc directory had no impact.

Does anyone know how to rerun just mksol? Or whether the ETA bug has been fixed by someone else?

Thanks.

SethTro 2020-01-13 07:28

By also running [CODE]sqlite3 /tmp/cado.jtdbnixu/c75.db "delete from linalg"[/CODE] I was able to get mksol to rerun.

EdH 2020-01-22 15:42

Is there a way to find the version?
 
I thought this had been referenced somewhere, but I can't find it.

I have several machines, some of which may have older versions. Is there a way I can tell how old an install is?

Dylan14 2020-01-22 17:06

[QUOTE=EdH;535711]I thought this had been referenced somewhere, but I can't find it.

I have several machines, some of which may have older versions. Is there a way I can tell how old an install is?[/QUOTE]

Try typing the following in a terminal at the root of the cado directory:

[CODE]git log[/CODE]this will yield all the revisions. The one that says (HEAD -> master, origin/master, origin/coverage, origin/HEAD) is the one you're on. For example, I am on rev f65aab7f3ccca673cb9086d67f2526f1fde10a91, dated Oct 10, 2019.
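If you just want the newest commit and its date, [C]git log -1[/C] (or, more compactly, [C]git log -1 --format="%h %ad"[/C]) prints only that.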

EdH 2020-01-22 21:43

Thanks! I really must study git more. . .

Of two machines checked, one had the Head, etc. info and the other didn't. It just gave four entries with their dates and a brief description.

fivemack 2020-01-27 19:57

well that was silly
 
I had thought I could use -admin X -incr 420 and -admin {X+210} -incr 420 on two machines to search a wide c5 range efficiently. But in fact polyselect rounds admin up to the next multiple of incr and is deterministic, so both machines ended up searching the exact same multiples of 420, and all 36 thread-days run on the second computer were wasted.

EdH 2020-02-02 20:01

This is quite disappointing!

The last three CADO-NFS runs I've worked have had server failures like the ones that were plaguing the 2,2330L team project. The server quits handing out WUs. This last one has stopped twice in one day. Restarting isn't really an issue, but the loss of all the hours when it happens overnight, or even during the day if it isn't noticed, is quite annoying.

I don't think I ever had this issue prior to the last time I installed a current revision. After this run finishes, I will be moving back to a prior one.

henryzz 2020-02-26 16:17

What is the sublat option on las? As far as I can tell, it sieves a subset of the sieve region and should reduce memory usage (presumably with a speed penalty). On the version I am using, it just crashes.
Is this option worth anything? Does it work on the latest source?

EdH 2020-03-09 20:08

I have a few issues that warrant further study:

In all my SNFS runs, I need to explicitly invoke local machine clients. I don't need to do this for GNFS. I don't see (recognize) any setting(s) in the params.cxx files.

In my latest GNFS run, I noticed that during LA, only 50% of my CPU was in use. I'm running an i7 2600 with four cores and two threads/core. Is CADO-NFS limiting the use to core count? Can/should I override it to use thread count?

In my latest GNFS runs, using a recent revision of CADO-NFS, I have noticed the server stop issuing WUs, at which point I need to stop/restart it. It still accepts WUs, but doesn't issue any. (This appears to be the same as we had with the 2,2330L project.) I never experienced this in earlier GNFS/SNFS runs.

RichD 2020-03-11 04:35

[QUOTE=EdH;539237]In my latest GNFS run, I noticed that during LA, only 50% of my CPU was in use. I'm running an i7 2600 with four cores and two threads/core. Is CADO-NFS limiting the use to core count? Can/should I override it to use thread count?[/QUOTE]

I have a Core i7-2600K standalone running a GNFS job. (No network clients.) It finally got to the "krylov" stage and I am noticing the same thing. Recalling several years ago, Msieve did not benefit from HT in Block Lanczos. Perhaps there is something similar in their implementation of Block Wiedemann, which may be intentional.

EdH 2020-03-11 13:24

[QUOTE=RichD;539361]I have a Core i7-2600K standalone running a GNFS job. (No network clients.) It finally got to the "krylov" stage and I am noticing the same thing. Recalling several years ago, Msieve did not benefit from HT in Block Lanczos. Perhaps there is something similar in their implementation of Block Wiedemann, which may be intentional.[/QUOTE]
I've never looked at that with msieve. But, msieve runs LA faster and when I'm not lazy, I use it for the LA portion of larger jobs. I suppose this warrants more study on my part. . .

EdH 2020-04-01 15:29

I am working on a Colab Instance of CADO-NFS to pick up the 100k-150k section of relations of a current snfs run. (I hope to eventually document my adventure in my "How I . . ." threads.)

I will not be accessing the server, but will, instead, initiate the Colab session as a separate server with all the parameters of the first except the qmin setting. I am concerned about the addition of the new relations to the original set, since the original server has no record of issuing the work area. Is there a method of adding the alternate relations to the existing ones?

I'm expecting that my msieve side process, which I'm also going to run, will have no trouble with them, but that the CADO-NFS process will have issues.

Thoughts from anyone?

VBCurtis 2020-04-01 15:57

I have the same question/doubt, and I agree that msieve won't care. When I've done this myself, I've just cat'ed together the relations files from the side run, then fed the main-run relations and the side-run file together to remdups and then msieve.

I don't know how to tell CADO to assimilate those extra relations; it would be nice if there was a feature similar to factmsieve's spairs.add where we could manually drop new relations in and CADO would pick them up via an occasional check for a ".add" file.

EdH 2020-04-01 17:03

I have run into a problem that I already have here and normally work around:

When I run an snfs job, the server doesn't start any local sievers. Here, I separately start some on the server machine, but I can't figure out how to do so on my Colab session.

-------
When you use remdups, do you decompress the *.gz files first? When I tried to cat all the *.gz files and send them through remdups4, I ended up with 1 good relation and a couple hundred million bad relations.

Xyzzy 2020-04-01 17:53

[C]zcat relations.dat.gz | ./remdups4 10000 > relations.dat[/C]

EdH 2020-04-01 18:33

Thanks, Xyzzy!

So, they must be decompressed first. I've been "cat"ing the .gz files into a composite for msieve, which has no trouble with the relations being compressed. I will play with this for a bit and see whether it makes more sense for me to decompress and use remdups or simply let msieve do its "thing."
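For my multi-file case, the equivalent of Xyzzy's pipeline would presumably be (paths illustrative):
[code]
zcat main-run/*.gz colab-run/*.gz | ./remdups4 10000 > combined.dat
[/code]since zcat decompresses and concatenates all of its arguments on the way into remdups4.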

EdH 2020-04-01 19:23

I got back 31k relations from my first sieve run in my Colab instance and was able to get those added to the msieve parallel run. Hopefully, that will help with the matrix build.

However, I was too late to try to add the relations to the CADO-NFS run, as it had already declared the move into "Filtering - Merging: Starting."

I wonder whether the associated *.stderr0 file for the relations might allow them to be added to the server run, or whether simply existing within the directory would be enough? Something for another experiment on another day. . .

EdH 2020-04-02 13:34

Well, . . . My server finally declared it had enough relations at 274402507 and told all the clients to "Knock it off!"

Unfortunately, it then proceeded to get stuck during the merge, using 99.8% of 15.6 GiB RAM and 50% of 8.0 GiB swap. I'm leaving it sitting, out of curiosity, for the time being.

Disappointingly, my msieve parallel (to the CADO-NFS) process didn't think the relations sufficient to build a matrix:
[code]**remdups4 was run prior**
Wed Apr 1 20:03:57 2020 found 6580868 hash collisions in 172527688 relations
Wed Apr 1 20:04:05 2020 added 3657741 free relations
Wed Apr 1 20:04:05 2020 commencing duplicate removal, pass 2
Wed Apr 1 20:09:23 2020 found 0 duplicates and 176185429 unique relations
Wed Apr 1 20:09:23 2020 memory use: 506.4 MB
Wed Apr 1 20:09:23 2020 reading ideals above 81068032
Wed Apr 1 20:09:23 2020 commencing singleton removal, initial pass
Wed Apr 1 20:22:05 2020 memory use: 5512.0 MB
Wed Apr 1 20:22:05 2020 reading all ideals from disk
Wed Apr 1 20:22:44 2020 memory use: 3903.9 MB
Wed Apr 1 20:22:52 2020 commencing in-memory singleton removal
Wed Apr 1 20:23:00 2020 begin with 176185429 relations and 182285468 unique ideals
Wed Apr 1 20:24:36 2020 reduce to 77811022 relations and 69781418 ideals in 21 passes
Wed Apr 1 20:24:36 2020 max relations containing the same ideal: 32
Wed Apr 1 20:24:41 2020 reading ideals above 720000
Wed Apr 1 20:24:41 2020 commencing singleton removal, initial pass
Wed Apr 1 20:32:30 2020 memory use: 1506.0 MB
Wed Apr 1 20:32:30 2020 reading all ideals from disk
Wed Apr 1 20:32:57 2020 memory use: 3071.3 MB
Wed Apr 1 20:33:06 2020 keeping 78830602 ideals with weight <= 200, target excess is 406990
Wed Apr 1 20:33:14 2020 commencing in-memory singleton removal
Wed Apr 1 20:33:21 2020 begin with 77811022 relations and 78830602 unique ideals
Wed Apr 1 20:34:02 2020 reduce to 77808983 relations and 78828557 ideals in 6 passes
Wed Apr 1 20:34:02 2020 max relations containing the same ideal: 200
Wed Apr 1 20:34:08 2020 filtering wants 1000000 more relations
[/code]I'm currently doing some more sieving in an attempt to convince msieve it should take on the LA for this project.

On a positive note, I was successful in running the 100k-150k area via my Colab instance and retrieved around 150k relations from that experiment.

RichD 2020-04-02 14:29

From my limited experience, you will need about 180M (or greater) unique relations for a 31-bit job. The sweet spot would be around 200M if you want to build a matrix at TD=120-130.

EdH 2020-04-02 16:29

Thanks, RichD. I became impatient with my stuck server, so I restarted it to gather more relations for my msieve run. This was done partly because I can't get any of my other machines to accept clients when I run a CADO-NFS server on them. I use the same setup as my current server (excepting, of course, IPs), but cannot get clients running.

EdH 2020-04-07 11:42

[QUOTE=RichD;541596]From my limited experience, you will need about 180M (or greater) unique relations for a 31-bit job. The sweet spot would be around 200M if you want to build a matrix at TD=120-130.[/QUOTE]
Not sure if you're following my current thread on this run (in my blog area), but you were right on with 180M:
[code]
Thu Apr 2 21:15:02 2020 found 0 duplicates and 181713072 unique relations[/code]

EdH 2020-04-10 16:31

I'm working on a Colab project with CADO-NFS and have a couple questions that I will eventually discover the answers to, if I don't get them here, but I thought I'd try here first, and possibly save some work:

1. Is the siever (las) a complete stand-alone program which can run on its own without any of the rest of the CADO-NFS package?

2. Is there a method of compiling las without building the entire CADO-NFS package?

3. Is there any advantage to recompiling las for different Xeon CPUs across different Colab sessions?

Thanks for any thoughts. . .

VBCurtis 2020-04-10 16:49

1. Definitely.
2. Yes, but I don't know how; a guess is sketched after this post. The documentation refers to the possibility of compiling just las for windows, even though the overall package doesn't compile on windows.
3. Probably? There are massive speed differences among architectures in las, and I don't imagine the full range of possible architecture optimizations exists within each compiled binary; that seems particularly unlikely since there is no officially-released binary - rather, each one is self-compiled by users by design. This is just a guess based on function, rather than from any personal experience with the code.
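For #2, my untested guess: since the build is CMake-based, configuring once and then asking for only the las target might do it (the build directory and the target name here are assumptions on my part):
[code]
# from the cado-nfs top level:
mkdir -p build/standalone && cd build/standalone
cmake ../..
make las
[/code]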

EdH 2020-04-16 03:13

Well, I'm becoming "annoyed!" I would probably have been through with sieving if I hadn't had to restart the server 8 times to get it to assign WUs. I was sure I was using the install that never gave this trouble in the past. Is it possible it has something to do with the size of the composite/corresponding params?

VBCurtis 2020-04-16 06:00

Yes, but it's hard to understand how. It seems to be the database side that's failing, yet bigger numbers gather relations and hand out WUs more slowly than smaller jobs. So, things *should* function better, because the rate of transactions is smaller... at least, I would think? (well, I do think... but I'm wrong)

In the Cunningham 2330L C207 thread last summer, one of the CADO contributors suggested a different database backend. I have zero experience with such things, so I didn't consider exactly what he suggested, but that's why I think it's something about the server/backend rather than a sieve parameter; certainly, one might trigger the other for reasons unknown.

EdH 2020-04-16 13:43

I think I might have to set my laziness aside and join the CADO-NFS mailing list to see if I can gain a little more insight into the package. I'm a very amateur programmer, so I can't readily figure out involved programs in depth, but I do sometimes try alterations.

A "pet peeve" with CADO-NFS is its manner of crashing (dying) instead of ending gracefully if you use "tasks.filter.run = false." Now I have 30+ pieces of "farm" equipment that are still looking for assignments. I'll probably have to restart without the "false" and crash that run to "gracefully" stop the clients, rather than going to each one to issue Ctrl+C. . .

Another difficulty is that I was unable to get a remote Colab install to run las on its own because it didn't like something about the roots1 data.

I do realize it's ME, but a lot of these things I keep trying appear to be just the opposite of what the programmers intended. If I just had a better understanding (and a longer attention span). . .

EdH 2020-04-23 21:10

This is speculation, but I think I have discovered why CADO-NFS stops issuing WUs.

If WUs are tardy, the server tries to wait for the late WUs until timedout is reached, at which point it reissues them. But, it appears that if too many are late, it stops sending all others until they are caught up.

I have changed my timedout to 43200 (12 hours) to cover the sleeping time for some of my machines and have had no noticeable instances of WUs not being handed out for my current run.

Let's see if this theory is disproven now that I have posted it. . .
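(If anyone wants to replicate this: assuming I recall the parameter name correctly, it is "tasks.wutimeout" (seconds before a WU is considered lost), settable in the params file or the snapshot file, so the line I changed was effectively:
[code]
tasks.wutimeout = 43200
[/code]Treat the exact name as my best recollection rather than gospel.)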

Dylan14 2020-04-29 17:11

So I ran into a problem with the cado-nfs-client.py file; in particular, it couldn't find my las executable:

[code]FileNotFoundError: [Errno 2] No such file or directory: "'/home/dylan/bin/cado/cado-nfs/build/dylan-xps159570/sieve/las'"[/code]

Looking into the code and adding a print statement in the run_command function I got this as the command_list:

[CODE]["'/home/dylan/bin/cado/cado-nfs/build/dylan-xps159570/sieve/las'", '-poly', "'download/sean198.c198.poly'", '-q0', '11290000', '-A', '30', '-q1', '11292000', '-lim0', '536000000', '-lim1', '536000000', '-lpb0', '33', '-lpb1', '33', '-mfb0', '64', '-mfb1', '95', '-ncurves0', '30', '-ncurves1', '15', '-fb1', "'download/sean198.c198.roots1.gz'", '-out', "'dylan-xps159570.79cb1ba5.work/sean198.c198.11290000-11292000.gz'", '-t', '6', '-stats-stderr'][/CODE]

The path inside all the quotes is indeed the location of my las executable, but somehow all the quotes confused it. So, to fix this I did the following:
(*) imported the shlex module
(*) replaced
[CODE]command_list = command if isinstance(command, list) else command.split(" ")[/CODE]
with
[code]command_list = shlex.split(command_str)[/code]

and then this worked (at least to start the run up; waiting to see if it submits successfully - which it did!)

henryzz 2020-04-30 07:14

[QUOTE=Dylan14;544203]So I ran into a problem with the cado-nfs-client.py file; in particular, it couldn't find my las executable:

[code]FileNotFoundError: [Errno 2] No such file or directory: "'/home/dylan/bin/cado/cado-nfs/build/dylan-xps159570/sieve/las'"[/code]

Looking into the code and adding a print statement in the run_command function I got this as the command_list:

[CODE]["'/home/dylan/bin/cado/cado-nfs/build/dylan-xps159570/sieve/las'", '-poly', "'download/sean198.c198.poly'", '-q0', '11290000', '-A', '30', '-q1', '11292000', '-lim0', '536000000', '-lim1', '536000000', '-lpb0', '33', '-lpb1', '33', '-mfb0', '64', '-mfb1', '95', '-ncurves0', '30', '-ncurves1', '15', '-fb1', "'download/sean198.c198.roots1.gz'", '-out', "'dylan-xps159570.79cb1ba5.work/sean198.c198.11290000-11292000.gz'", '-t', '6', '-stats-stderr'][/CODE]

The path inside all the quotes is indeed the location of my las executable, but somehow all the quotes confused it. So, to fix this I did the following:
(*) imported the shlex module
(*) replaced
[CODE]command_list = command if isinstance(command, list) else command.split(" ")[/CODE]
with
[code]command_list = shlex.split(command_str)[/code]

and then this worked (at least to start the run up; waiting to see if it submits successfully - which it did!)[/QUOTE]
You might want to report this error to the devs. It looks like the sort of thing that could be a broken python 2/3 compatibility issue. I believe CADO uses python 2, but they aim for it to work in 3 as well. I don't have much python experience, so I may be wrong.

Dylan14 2020-04-30 14:34

[QUOTE=henryzz;544259]You might want to report this error to the devs. It looks like the sort of thing that could be broken python 2/3 compatibility. I believe CADO uses python 2 but they aim for it to work in 3 as well. I don't have much python experience so I may be wrong.[/QUOTE]


Just submitted the issue on their repository. Waiting for a response.

Dylan14 2020-04-30 16:42

The issue is fixed in a new commit; basically, it was a version conflict between a "new" client and an "old" server.

EdH 2020-08-26 19:55

I'm happy to mention that I have the git version:
[code]
commit a1dbe6b800a0bef436f0723e62c5b502955e0c2a
Author: Emmanuel Thomé <Emmanuel.Thome@inria.fr>
Date: Sat Aug 22 18:56:34 2020 +0200
[/code]running on all my machines, including the Core2 ones that wouldn't compile last year.:smile:

VBCurtis 2020-08-26 20:41

Excellent! That means I can update my Core2-Xeons this winter for another factorization. They sat unused this past winter, as the version of CADO installed on them wouldn't work as a client with the current version as server and the then-current version wouldn't compile. I missed out on all that "free" room heating!

EdH 2020-08-26 21:25

[QUOTE=VBCurtis;555057]Excellent! That means I can update my Core2-Xeons this winter for another factorization. They sat unused this past winter, as the version of CADO installed on them wouldn't work as a client with the current version as server and the then-current version wouldn't compile. I missed out on all that "free" room heating![/QUOTE]
My Core2s are Intel Quads (Q8400 and Q9???). They wouldn't compile the then-current git last year, but worked fine as clients with an earlier commit until just recently, when there was a change that forced me to update everything - none of my clients would communicate with the newer version server, and I was experiencing an occasional failure with the one I was running. ATM, all is well.

EdH 2020-08-27 18:01

[QUOTE=VBCurtis;555057]Excellent! That means I can update my Core2-Xeons this winter for another factorization. They sat unused this past winter, as the version of CADO installed on them wouldn't work as a client with the current version as server and the then-current version wouldn't compile. I missed out on all that "free" room heating![/QUOTE]
I must have jinxed it! Don't upgrade just yet. More later.

EdH 2020-08-28 16:14

[QUOTE=EdH;555124]. . . More later.[/QUOTE]
The more:

There is an incompatibility barrier that was crossed sometime in March 2020 that causes WUs to fail across the barrier. This means that the server and clients need to be on the same side of March 2020.

Currently, there is an open issue that causes an Error -6 interruption on rare occasions. When this error occurs, CADO-NFS cannot complete filtering and won't move on. However, I have been able to run Msieve on the relations found, and even rerun CADO-NFS to add relations by setting "tasks.sieve.rels_wanted" in the snapshot file and restarting the run from that snapshot. It still failed at filtering in my tests.
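For concreteness, the line I edit in the snapshot file before restarting looks like this (the count shown is illustrative):
[code]
tasks.sieve.rels_wanted = 190000000
[/code]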

