mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   CADO-NFS (https://www.mersenneforum.org/forumdisplay.php?f=170)
-   -   Improved params files for CADO (https://www.mersenneforum.org/showthread.php?t=24274)

VBCurtis 2019-04-08 21:30

Improved params files for CADO
 
The CADO thread contains some improved params files, such as the ones Gimarel supplied, but they're quite difficult to locate. Let's use this thread for discussions about params and improvements to the default params files.

The forum does not allow attachments with arbitrary suffixes, so remember to append .txt to the end of any params files you post!

VBCurtis 2019-04-08 21:39

3 Attachment(s)
Attached are my best effort at parameters for C90, C95, and C100. These files all use Gimarel's excellent development with tight lambda settings near 1.80 combined with loose large-prime bounds and very low Q values.
Timing data: For C90, I ran single-threaded. The stock CADO git-install from Feb '19 took 2236 seconds, while my params took 941 seconds.
For C95 on 6 threads of an otherwise busy 6-core i7-haswell, CADO-default takes 1008 seconds while this params file takes 625 seconds.
For C100 on 6 threads, CADO-default takes 1904 seconds while this params file takes 1288 seconds. Poly-select time should probably be reduced a bit on this file, as I just noticed poly select takes 10% of sieving time.
Running multi-threaded, I believe the YAFU-CADO crossover is somewhere near 90 digits! Please run your own tests and report back here.
Edit 15 Apr: C90 file fixed to comment out the input value N. The c90 file is the one that CADO chose to explain all the parameters, so they included a sample N; I do the same.

VBCurtis 2019-04-08 21:50

5 Attachment(s)
Attached are improved params files for C105, C110, and C115. I leave it to the user to compare timings against stock files (please post results here!).
Under the conditions of these files, with low Q values and tight lambda settings, I find the crossover from I=12 to I=13 to be around 118 digits.
Edit 9 April: params.c115 was edited to change "tasks.sieve.qmin" to "tasks.qmin". CADO changed the name of this setting in 2019, and this file hadn't been updated.
Edit 13 April: params.c120 added. Tested on RSA120: Stock CADO (April '19 git) 15538 wall-clock seconds, this file 11302 seconds; measured 6-threaded on a 6-core i7 running 5xLLR concurrently.
23Apr21: Added c125 file.

VBCurtis 2019-04-08 21:51

3 Attachment(s)
C130 params have been posted.
15 Apr edit: Tested on 6 threads i7-5820, RSA130: Apr '19 git 63043 seconds, this file 40360 seconds wall-clock time. Also, tasks.sieve.qmin changed to tasks.qmin.
22 Sep edit: C140 params added. Tested on 12 threads i7-5820, RSA140: Sep '19 git 83417 seconds, this file 62729 seconds wall-clock time.
23Apr21 edit: C135 added. Best timing 27500 sec for a C137 on 12x2.5 Haswell-Xeon.

VBCurtis 2019-04-08 21:52

{post reserved for future file attachments for C145 to C165}

VBCurtis 2019-04-08 21:53

2 Attachment(s)
Poly select params posted for C190 and C195. Sieve params are CADO default, and are not optimal.
These params take about one-third of the CADO-default time; if you are not impressed with the E-score from your run, double P and run again.
Thanks to user hnoey for extensive testing on a series of C193s to develop these parameters.

VBCurtis 2019-04-13 16:06

The current git release for CADO has a note in the params.c120 file that says they've verified experimentally that matrix density 100 is optimal at that size. I haven't tested it yet, but my files use densities of 135-155; if 100 proves faster, I'll be updating all of the files.

CADO originally used 170 for all sizes; I thought I was already using a pretty low density...

EdH 2019-04-13 22:49

Some Timing Data for the Recent Unmodified git Version params
 
This is from the unmodified params files in the Development-version runs I posted in the CADO-NFS thread. I should have some comparison runs with modified params soon:
[code]
dig cpu/elapsed time
--- ----------------------
74 1245.62/49.9081
75 1579.75/41.1682
75 1484.11/45.9239
79 2170.13/54.9493
79 1780.97/60.1481
80 2431.27/58.9629
80 2206.38/67.7262
80 1904.68/66.3247
80 2046.36/64.2789
80 2471.99/67.8799
80 2159.99/72.5948
82 2354.01/58.6331
84 2559.15/62.1541
84 3038.62/78.0739
85 3400.2/78.9644
85 3504.66/85.8065
86 3518.26/98.8682
86 3655.63/89.1821
86 4071.04/106.596
86 4017.55/100.512
86 3184.24/74.8409
87 4202.14/128.446
87 3997.62/236.189
88 5078.94/138.644
90 6581.7/129.619
90 6786.46/160.069
91 6789.82/140.023
91 6685.29/134.413
94 10033.3/251.869
94 7691.76/212.026
94 9167.48/236.614
94 10231/247.654
94 9694.67/259.649
95 11464.4/230.579
96 12163.4/232.13
96 11602.3/273.837
96 12517.3/233.369
96 10671.7/253.824
97 13343.6/248.417
98 15737/328.048
98 14776.7/296.881
98 14768.1/347.099
99 16861.4/375.305
99 14919.2/375.983
100 18859.4/378.495
102 23535.8/491.217
102 22007.8/400.009
102 23544.4/531.072
102 21766.3/459.406
102 22909.6/438.363
103 19677.6/544.91
103 19994.5/614.039
104 22606.8/618.271
105 25752.2/657.456
107 32252.9/641.667
108 33323.8/719.982
108 29268/614.038
110 36442/874.006
110 39789.6/904.136
111 39385.7/985.466
113 59437.2/1206.31
114 62543.9/1259.54
114 63621.9/1721.12
115 63498.4/1682.17
117 87274.5/1820.56
117 78749.6/1779.51
117 88810.1/2200.56
118 96917.9/1977.22
118 101399/2625.73
119 116296/2590.59
120 107719/2345.83
120 110556/2303.2
120 129214/2763.6
121 139975/2996.79
121 141585/3000.22
122 146063/2690.93
123 181634/4156.62
124 205371/4196.39
125 183532/4387.97
126 209732/4763.29
127 281775/5911.33
127 279243/6556.18
128 301283/11037
128 283924/10980.6
129 304850/10683
129 307423/10925.6
130 374315/9931.01
131 418847/14039.8
133 551879/12586.8
135 680225/16017.4
138 709733/16633.3
[/code]
If you need/want to move this elsewhere, go ahead. :smile:

EdH 2019-04-14 14:35

[QUOTE=VBCurtis;513166]...
Edit 9 April: params.c115 was edited to change "tasks.sieve.qmin" to "tasks.qmin". CADO changed the name of this setting in 2019, and this file hadn't been updated.
...[/QUOTE]
It looks like c130 also needs this edit.

EdH 2019-04-15 19:52

Hi Curtis,

Your params.c90.txt file appears to have an N value within...

-Ed

VBCurtis 2019-04-15 21:53

[QUOTE=EdH;513790]Hi Curtis,

Your params.c90.txt file appears to have an N value within...

-Ed[/QUOTE]

Fixed! I simply commented out the N value, as CADO itself does for the c90 default file.
Also, the line to add to a work file to *not* use /tmp for all work is tasks.workdir = ./{jobfoldername}
If you don't use work files, you can pass any param flag to CADO on the command line by prepending with double-dash: ./cado-nfs.py {input number} --tasks.workdir=./{jobfoldername}
Default CADO behavior is to use /tmp directories for all job data; a power loss during a job eradicates all progress. The downside to using your own directories within /cado-nfs is that you must remember to delete the data after jobs finish, else you'll run out of disk space.
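Putting the pieces above together, a minimal work-file sketch might look like this (all values are placeholders, not from any actual params file):

```
# Minimal work-file sketch (placeholder values): keep job data in a
# local folder rather than /tmp, so progress survives a power loss.
N = <number to factor>
tasks.workdir = ./c95_job
```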

EdH 2019-04-16 03:59

[QUOTE=VBCurtis;513798]Fixed! I simply commented out the N value, as CADO itself does for the c90 default file.
Also, the line to add to a work file to *not* use /tmp for all work is tasks.workdir = ./{jobfoldername}
If you don't use work files, you can pass any param flag to CADO on the command line by prepending with double-dash: ./cado-nfs.py {input number} --tasks.workdir=./{jobfoldername}
Default CADO behavior is to use /tmp directories for all job data; a power loss during a job eradicates all progress. The downside to using your own directories within /cado-nfs is that you must remember to delete the data after jobs finish, else you'll run out of disk space.[/QUOTE]
I had to comment out the value for a C90 run. That's how I discovered it. CADO-NFS told me of the conflict. I have a fixed job directory in my scripts so I can find the log file for the factors. I also concatenate the log into a compiled file for data retrieval and reuse the fixed job directory.

EdH 2019-04-19 03:47

Some Timing Comparisons
 
1 Attachment(s)
I've attached a file of comparisons based on composite sizes. Here's a sample:
[code]
Unmodified Modified

Digits CPU (s) WCT (s) Digits CPU (s) WCT (s)
...
128 283924 10981 128 188998 4265
128 301283 11037
129 304850 10683
129 307423 10926
130 374315 9931 130 294086 7241
130 317382 6831
131 418847 14040 131 312883 7090
131 354817 8030
...

[/code]

bsquared 2019-04-25 21:35

[QUOTE=VBCurtis;513162]Attached are my best effort at parameters for C90, C95, and C100. These files all use Gimarel's excellent development with tight lambda settings near 1.80 combined with loose large-prime bounds and very low Q values.
Timing data: For C90, I ran single-threaded. The stock CADO git-install from Feb '19 took 2236 seconds, while my params took 941 seconds.
For C95 on 6 threads of an otherwise busy 6-core i7-haswell, CADO-default takes 1008 seconds while this params file takes 625 seconds.
For C100 on 6 threads, CADO-default takes 1904 seconds while this params file takes 1288 seconds. Poly-select time should probably be reduced a bit on this file, as I just noticed poly select takes 10% of sieving time.
Running multi-threaded, I believe the YAFU-CADO crossover is somewhere near 90 digits! Please run your own tests and report back here.
Edit 15 Apr: C90 file fixed to comment out the input value N. The c90 file is the one that CADO chose to explain all the parameters, so they included a sample N; I do the same.[/QUOTE]

Just curious what version of yafu was used in this comparison, and how it was run (always nfs, or siqs for the c90 and c95 and nfs elsewhere, or factor, or something else)?

VBCurtis 2019-04-26 04:09

[QUOTE=bsquared;514726]Just curious what version of yafu was used in this comparison, and how it was run (always nfs, or siqs for the c90 and c95 and nfs elsewhere, or factor, or something else)?[/QUOTE]

Yafu 1.34 linux-64, running siqs for both c90 and c95 as I don't have GGNFS on that machine.

LaurV 2019-04-26 05:39

That is OK; the YAFU SIQS-vs-NFS crossover is somewhere over 100 digits on most of the machines I played with (around 101-106 digits).

bsquared 2019-04-26 16:00

[QUOTE=VBCurtis;514753]Yafu 1.34 linux-64, running siqs for both c90 and c95 as I don't have GGNFS on that machine.[/QUOTE]

Ok, thanks.

I tried out the various versions on a new skylake X processor; looks like AVX-512 helps to the tune of about 25%. This instruction set will someday be more commonplace.

[CODE]
4 threads, (Xeon 5122 Gold)
c90 = 308204495124600567361475233684732529849778775819664070284023650219790546206058182493911073
sse41 207
avx2 197
avx512 167

4 threads, (Xeon 5122 Gold)
c95 = 38105527381517286355640997328621412161117687510374225914020362885840036150320501125450844393617
sse41 748
avx2 721
avx512 608

[/CODE]

I'll try to get CADO configured on a virtualbox to compare. Excited to test out all your improvements to the parameters!

EdH 2019-06-09 02:58

Hey Curtis,

Have you ever gotten anywhere with modified params for c125 and c135? I'm compiling lots of these ranges with default params, but none with modified params.

Ed

VBCurtis 2019-06-09 04:52

I have files that are decent, but not yet fast enough to match the trend set by the smaller files. I'll email the drafts to you soon (I'm in finals at work, might be a day or two).

RichD and I have close-to-trend c140 and c145 files, hopefully posted in the next couple weeks. Our first c150 job was *below* trend! We're running a couple more to see if it was a fluke, but c150 might get posted before the smaller ones.

My hobby time the past month has been spent on the 2330L factorization setup in the cunningham forum. If you haven't had a peek, we're using CADO to run a c207!

EdH 2019-06-09 15:15

[QUOTE=VBCurtis;518919]I have files that are decent, but not yet fast enough to match the trend set by the smaller files. I'll email the drafts to you soon (I'm in finals at work, might be a day or two).

RichD and I have close-to-trend c140 and c145 files, hopefully posted in the next couple weeks. Our first c150 job was *below* trend! We're running a couple more to see if it was a fluke, but c150 might get posted before the smaller ones.

My hobby time the past month has been spent on the 2330L factorization setup in the cunningham forum. If you haven't had a peek, we're using CADO to run a c207![/QUOTE]
Thanks Curtis,

I looked a little at 2330L in the beginning and have noticed your activity for the thread, but haven't delved into the finer details. Not ready to commit to a long term project ATM. All my current stuff is day-to-day only. The machines have their nights off.

Ed

VBCurtis 2019-09-22 15:25

It took much more than a couple of weeks, but C140 params have finally been added to post #4. This file is about 25% faster than CADO default.

I hope to have c125 and c135 up in October, while RichD continues to help me work on c145 and c150.

EdH 2019-09-25 21:14

[QUOTE=VBCurtis;526290]It took much more than a couple of weeks, but C140 params have finally been added to post #4. This file is about 25% faster than CADO default.

I hope to have c125 and c135 up in October, while RichD continues to help me work on c145 and c150.[/QUOTE]
Thanks, Curtis. If I run composites >142, would the new c140 be better with those than the default files?

VBCurtis 2019-09-26 04:42

That's a good question! I bet this c140 file would be faster for c145, but not for c150. A ballpark way to adjust from this c140 file is to add ~30% to lim0 and lim1 for each 5 digits bigger, and to add ~40% to P and admax for each 5 digits. I'd also add 1M to rels_wanted for each *digit*.

Those adjustments should give decent performance up to the low 150s.
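As a sketch, the ballpark adjustment rule above can be written out as a small calculator. The base values below are illustrative placeholders, not the actual c140 file's settings:

```python
def scale_params(base, from_digits, to_digits):
    """Extrapolate CADO-NFS sieving parameters upward in size using the
    ballpark rule above: +30% to lim0/lim1 and +40% to P/admax per
    5 digits of added size, plus 1M extra relations per digit."""
    steps = (to_digits - from_digits) / 5.0
    scaled = dict(base)
    for key in ("lim0", "lim1"):
        scaled[key] = int(round(base[key] * 1.30 ** steps))
    for key in ("P", "admax"):
        scaled[key] = int(round(base[key] * 1.40 ** steps))
    scaled["rels_wanted"] = base["rels_wanted"] + 1_000_000 * (to_digits - from_digits)
    return scaled

# Hypothetical c140-style base values, for illustration only:
c140 = {"lim0": 30_000_000, "lim1": 45_000_000,
        "P": 1_000_000, "admax": 500_000, "rels_wanted": 80_000_000}
c145 = scale_params(c140, 140, 145)
```

Per the rule, one 5-digit step multiplies lim0/lim1 by 1.3 and P/admax by 1.4, and 5 extra digits add 5M relations.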

VBCurtis 2019-09-27 21:26

RichD and I each tested the new c140 file this week on c141-142 numbers, and we both had to filter multiple times to build a matrix. This means there wasn't enough oversieving, so I have posted a new file with rels_wanted set to 82M rather than 80M.
That is the only change, so if you already downloaded it you can just change that one setting rather than re-downloading.

VBCurtis 2019-10-11 21:16

1 Attachment(s)
Ed-
On my i7-5820 (6 core haswell), I once completed a C156 with a modified factmsieve.py in *barely* under 7 days (167 hours). I was quite happy with my modifications, as I think the default factmsieve.py had taken 8.x days for a C155.

The attached CADO params file factored a C155 with leading digit 7 on the same machine in 133 hours. I lack the patience to do a comparison run to the factory params, but the front page of the CADO website cites 5.3 days on 16 cores of 2ghz xeon for RSA-155. 32 ghz * 5.3 days = ~170 ghz-days. My C155 took 6*3.3ghz * 5.5 days = 110 ghz-days.
2.7M thread-seconds sieve, 311k thread-seconds matrix. A bit more poly select than I used would be better, so I added 20% to admax for this attached file versus my actual run.
As usual, if you want to reduce matrix time, add ~5% to rels_wanted.
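As a sanity check on the GHz-days arithmetic above (a GHz-day here meaning cores times clock in GHz times days):

```python
# Rough throughput comparison in GHz-days (cores * GHz * days).
cado_site_run = 16 * 2.0 * 5.3   # RSA-155, 16 cores of 2 GHz Xeon
this_c155_run = 6 * 3.3 * 5.5    # C155, 6 cores at 3.3 GHz, 133 hours ~ 5.5 days

print(round(cado_site_run))   # -> 170
print(round(this_c155_run))   # -> 109, i.e. roughly the ~110 quoted above
```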

EdH 2019-10-12 02:51

[QUOTE=VBCurtis;527767]Ed-
On my i7-5820 (6 core haswell), I once completed a C156 with a modified factmsieve.py in *barely* under 7 days (167 hours). I was quite happy with my modifications, as I think the default factmsieve.py had taken 8.x days for a C155.

The attached CADO params file factored a C155 with leading digit 7 on the same machine in 133 hours. I lack the patience to do a comparison run to the factory params, but the front page of the CADO website cites 5.3 days on 16 cores of 2ghz xeon for RSA-155. 32 ghz * 5.3 days = ~170 ghz-days. My C155 took 6*3.3ghz * 5.5 days = 110 ghz-days.
2.7M thread-seconds sieve, 311k thread-seconds matrix. A bit more poly select than I used would be better, so I added 20% to admax for this attached file versus my actual run.
As usual, if you want to reduce matrix time, add ~5% to rels_wanted.[/QUOTE]
Thanks! I will be running SNFS jobs for a little while, but some will rely on this file to fill in the SNFS parameters. Not sure when I'll get back to GNFS, but I'll keep you posted.

Ferrier 2019-10-27 23:39

[QUOTE=VBCurtis;513162]Attached are my best effort at parameters for C90, C95, and C100. These files all use Gimarel's excellent development with tight lambda settings near 1.80 combined with loose large-prime bounds and very low Q values.
Timing data: For C90, I ran single-threaded. The stock CADO git-install from Feb '19 took 2236 seconds, while my params took 941 seconds.
For C95 on 6 threads of an otherwise busy 6-core i7-haswell, CADO-default takes 1008 seconds while this params file takes 625 seconds.
For C100 on 6 threads, CADO-default takes 1904 seconds while this params file takes 1288 seconds. Poly-select time should probably be reduced a bit on this file, as I just noticed poly select takes 10% of sieving time.
Running multi-threaded, I believe the YAFU-CADO crossover is somewhere near 90 digits! Please run your own tests and report back here.
Edit 15 Apr: C90 file fixed to comment out the input value N. The c90 file is the one that CADO chose to explain all the parameters, so they included a sample N; I do the same.[/QUOTE]

tested RSA-100 with your c100 param file (20 threads "Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz")

cado-nfs finished in ~324s
yafu(latest trunk version, compilation option: NFS=1 USE_SSE41=1) finished in ~500s

VBCurtis 2019-10-27 23:56

Thanks for the report! I'd appreciate a comparison at 94 or 95 digits too; that should be a closer battle.
EDIT: At that size, the close battle should be with the quadratic sieve within YAFU, rather than NFS.

5 minutes to crack a C100 is pretty cool!

Ferrier 2019-10-29 04:42

[QUOTE=VBCurtis;529078]Thanks for the report! I'd appreciate a comparison at 94 or 95 digits too; that should be a closer battle.
EDIT: At that size, the close battle should be with the quadratic sieve within YAFU, rather than NFS.

5 minutes to crack a C100 is pretty cool![/QUOTE]

cado-nfs with default c95 param:

[CODE]
./cado-nfs.py 48404068520546498995797968938385187958997290617596242601254422967869040251141325866025672337021
...
Info:Complete Factorization / Discrete logarithm: Total cpu/elapsed time for entire factorization: 9562.25/[B]269.406[/B]
[/CODE]

cado-nfs with VBCurtis c95 param:
[CODE]
./cado-nfs.py 48404068520546498995797968938385187958997290617596242601254422967869040251141325866025672337021
...
Info:Complete Factorization / Discrete logarithm: Total cpu/elapsed time for entire factorization: 6511.55/[B]193.041[/B]
[/CODE]


yafu trunk version (compile option: NFS=1 USE_SSE41=1):
[CODE]
./yafu 'siqs(48404068520546498995797968938385187958997290617596242601254422967869040251141325866025672337021)'

starting SIQS on c95: 48404068520546498995797968938385187958997290617596242601254422967869040251141325866025672337021

==== sieving in progress (20 threads): 92992 relations needed ====
==== Press ctrl-c to abort and save state ====
93748 rels found: 24010 full + 69738 from 1277813 partial, (6962.11 rels/sec)

SIQS elapsed time = [B]206.5399[/B] seconds.
...
[/CODE]

yafu wip r379 version (compile option: NFS=1 USE_AVX2=1 SKYLAKEX=1)
[CODE]
./yafu 'siqs(48404068520546498995797968938385187958997290617596242601254422967869040251141325866025672337021)'

starting SIQS on c95: 48404068520546498995797968938385187958997290617596242601254422967869040251141325866025672337021

==== sieving in progress ( 20 threads): 92992 relations needed ====
==== Press ctrl-c to abort and save state ====
95438 rels found: 24356 full + 71082 from 1274175 partial, (8081.20 rels/sec)

freeing 1410 poly_a's
building matrix with 95438 columns
SIQS elapsed time = [B]179.2129[/B] seconds.
...
[/CODE]

VBCurtis 2019-10-29 05:27

Nice!! So, for non-skylake users, the SIQS/CADO crossover is 93 or 94 digits. For skylake CPUs, more like 96 or 97.

Kudos to BSquared (& co?) for the AVX512 development on YAFU!

bsquared 2019-10-29 12:41

[QUOTE=VBCurtis;529165]Nice!! So, for non-skylake users, the SIQS/CADO crossover is 93 or 94 digits. For skylake CPUs, more like 96 or 97.

Kudos to BSquared (& co?) for the AVX512 development on YAFU![/QUOTE]

Thank you!

Factoring RSA-100 with a two-socket Intel 5122 Gold system (16 threads) I get:
[CODE]SIQS elapsed time = 781.4394 seconds.[/CODE]

And for yafu's NFS I get:
[CODE]NFS elapsed time = 728.6211 seconds.[/CODE]

Probably a crossover of about 98-99 digits here, but I know that CADO-NFS is faster than yafu's nfs with this number. Unfortunately this system doesn't have python 3 and I can only install in my home directory so I've got a bit of figuring/tinkering to do before I can test it.

VBCurtis 2019-11-03 16:47

Post #6 now contains polyselect-optimized params for 190 and 195 digit inputs. The only difference is a slightly smaller P for 190.
User hnoey has run a wide variety of poly select params for a set of 6 C193s, and has determined with some certainty that these choices of NQ and incr produce generally better results than other common alternatives (incr values tested: 60, 210, 420, and 4620, with 4620 best; nq values tested: 3125, 15625, and 78125, with 15625 clearly best). However, no P value was always best; for the C193 tests, P=3e6 was often good, but when the resulting poly scored below expectations, a higher P value often produced a more acceptable score.
So, our advice: use the files as posted, but if the score doesn't meet your expectations, try again with double the P value. Those two runs combined would still take less time than the default CADO suggestion. Halving P is also a reasonable option.
Cownoise scores for the list of 6 C193s ranged from 1.35e-14 to 1.44e-14.

VBCurtis 2020-04-05 04:11

1 Attachment(s)
I upgraded the CPU in my home system (it was an i7-5820k Haswell, 6x3.3GHz; now a Xeon of the same era, 12x2.5GHz, on the same mobo), so I started over at C100 trying to optimize settings.

I managed to crack a C100 in under 6 minutes wall-clock on this single-socket 2013-era system. :shock:
This is roughly a 15% improvement on the previous C100 file posted in this thread last year.

Attached is the params file I used.

bur 2021-04-22 09:10

[QUOTE=VBCurtis;513613]The current git release for CADO has a note in the params.c120 file that says they've verified experimentally that matrix density 100 is optimal at that size. I haven't tested it yet, but my files use densities of 135-155; if 100 proves faster, I'll be updating all of the files.

CADO originally used 170 for all sizes; I thought I was already using a pretty low density...[/QUOTE]
I saw your files still have a density between 135 and 155, while current git CADO has 100. So from your tests, your density setting was still faster?

VBCurtis 2021-04-22 15:03

No, I just haven't updated the files in quite a few months. I continue to track my job timing data in a spreadsheet, and "one of these days" I'll post a new series of params files that is marginally faster than the ones currently public, something on the order of a 5% improvement for most sizes.

Now that you've reminded me, I may get C100-C130 up shortly, as I've quit developing those. I'm working on C135-155 now.

bur 2021-04-23 07:17

So the current public params are faster than your custom 2019 ones?

VBCurtis 2021-04-23 14:31

Huh? By "public" I mean "available in this thread". So, your question seems to compare the same two things. The CADO package's params files aren't very good; the ones posted in this thread are 15-30% faster across the board. You're welcome to test that claim yourself: run the same number with stock CADO, and with one of my files.

bur 2021-04-23 17:28

Ok, sorry, I thought by public you meant the ones bundled with CADO.


Good to know, I'll use the files you posted.

EdH 2021-04-23 17:39

I have a comparison for CADO supplied vs. VBCurtis' in my blog area, if you're interested in how my "farm" did with comparisons.

VBCurtis 2021-04-23 17:58

1 Attachment(s)
I posted C125 and C135 files to the top-of-thread posts. I'll wait to update C120/130/140 until I compare against my old files; I changed hardware around the time Charybdis started suggesting ideas for params, so I don't have good A/B comparison timings.

C145-150-155 currently in testing. The drafts are attached as a .zip; they should be faster than stock CADO, but I believe I'll find more speed so I'll hold off on posting to the top of the thread.

Thanks to bur for nudging me to get my work posted.

bur 2021-05-08 06:12

Any chance a C165 file will be available shortly?

VBCurtis 2021-05-08 13:59

Sure, I can post a draft. There hasn't been a ton of testing at that size, but Charybdis did a few jobs. I'll track down the data this weekend and get one posted to this thread.

bur 2021-05-08 14:28

Thanks, I want to factor 10*102!+1 for [URL]http://oeis.org/A095194[/URL].


Judging from the factorization times so far it should take about 2 weeks, so it's certainly worth the wait.

VBCurtis 2021-05-10 02:32

I've posted a draft c165 file in the thread for 165-170 digit work:
[url]https://mersenneforum.org/showthread.php?t=25535[/url]

It should be decent, as it is based on the c170 file that has had a few jobs run; the main difference for c165 is to use I = 14.

bur 2021-05-23 13:29

I want to start a C159, could you post a draft of params.c160?

VBCurtis 2021-05-23 14:14

1 Attachment(s)
Here you go!

bur 2021-05-23 15:58

Thanks, I'll let you know how it went.

