mersenneforum.org > Factoring Projects > CADO-NFS
2016-03-31, 07:27   #1
Dubslow

Playing with CADO's individual polyselect binaries

Since I haven't been able to get the regular Python scripts running correctly, and since even if I could their operation seems extraordinarily inflexible to me, I've taken PZ's suggestion of manually using the polyselect binaries that the scripts delegate to.

Here's my first attempt on the A4788_C195:

Code:
~/cado/build/Gravemind/polyselect $ nice -n 19 ./polyselect -degree 5 -P 5000000 -admax 50000000 -nq 1000 -keep 1000 -v -t 8 -N $(cat num)
# (17c31b8) ./polyselect -degree 5 -P 5000000 -admax 50000000 -nq 1000 -keep 1000 -v -t 8 -N 105720747827131650775565137946594727648048676926428801477497713261333062158444658783837181718187127016255169032147325982158719006483898971407998273736975091062494070213867530300194317862973608499
# Compiled with gcc 4.9.2
# Compilation flags -std=c99 -g -W -Wall -O2  -msse3 -mssse3 -msse4.1 -mpclmul
# Info: initializing 316066 P primes took 16ms, nq=1000
# Info: estimated peak memory=1388.00MB (8 thread(s), batch 20 inversions on SQ)
# thread 0: ad=60
# thread 1: ad=120
# thread 6: ad=180
# thread 3: ad=240
# thread 4: ad=300
# thread 5: ad=360
# thread 2: ad=420
# thread 7: ad=480
# thread 7: ad=540
# thread 4: ad=600
# thread 2: ad=660

# sopt: start size-optimization with polynomials:
# sopt: f_raw = 120*x^5 + -37*x^4 + ...
# sopt: g_raw = 2157420360827950350617*x^1 + -244903953537066202453401636126542860406*x^0
# sopt: with lognorm = 71.270838
Keeping x = 0.260108, f(x) = 2016795408842364209203141872461793618276549461594849023160013414058503962624.000000, f''(x) = 43184121449821530246129924935985504342081783347318205967447779341609100201112371200.000000
# sopt: q-roots of Res(c2,c3) or Res(c2,c3)' = { 0.260108 }
# sopt: 8 values for the translations were computed and added in list_k
# sopt: k = 0 was added list_k
# sopt: It remains 9 values after sorting and removing duplicates
# sopt: Calling improve_list_k with sopt_effort = 0
# sopt: skew0 = 4275.446290
# sopt: skew[0] = 65
# sopt: skew[1] = 529
# sopt: skew[2] = 4275
# sopt: skew[3] = 34572
# sopt: skew[4] = 279558
# sopt: Start processing all k in list_k of length 9
# sopt:       better lognorm 65.28 (previous was 71.27) for skew[2] = 4275
# sopt:       better lognorm 65.23 (previous was 65.28) for skew[3] = 34572
# sopt:       better lognorm 64.54 (previous was 65.23) for skew[4] = 279558
# sopt:       better lognorm 64.17 (previous was 64.54) for skew[3] = 34572
# sopt:       better lognorm 63.84 (previous was 64.17) for skew[3] = 34572
# sopt:       better lognorm 63.72 (previous was 63.84) for skew[3] = 34572
# sopt: end of size-optimization, best polynomial are
# sopt: f_opt = 2760*x^5 + 500106045506149*x^4 + ...
# sopt: g_opt = 2157420360827950350617*x^1 + -244903875353083220572872777923498836651*x^0
# sopt: with lognorm = 63.724256

# Raw polynomial:
# n: 105720747827131650775565137946594727648048676926428801477497713261333062158444658783837181718187127016255169032147325982158719006483898971407998273736975091062494070213867530300194317862973608499
# Y1: 2157420360827950350617
# Y0: -244903953537066202453401636126542860406
# c5: 120
# c4: -37
# c3: 1188753689271031975437528
# c2: 63701574612453595484553171865323265035
# c1: -5231971310990810577211204041041772504
# c0: 39181211010549708386751205216207022495
# raw lognorm 71.27, skew 372185759744.00, alpha 0.09 (proj: -1.21), E 71.36, exp_E 63.14, 1 rroots
# Size-optimized polynomial:
n: 105720747827131650775565137946594727648048676926428801477497713261333062158444658783837181718187127016255169032147325982158719006483898971407998273736975091062494070213867530300194317862973608499
Y1: 2157420360827950350617
Y0: -244903875353083220572872777923498836651
c5: 2760
c4: 500106045506149
c3: 63601533977108612470151786
c2: -2972736829782803317901917056
c1: -77606935545503482645832413970733012323
c0: 556082758789900711444681471769664993163
# lognorm 63.72, skew 1493504.00, alpha 0.76 (proj: -1.34), E 64.49, exp_E 60.24, 3 rroots
etc. A fair bit of output is produced per coefficient, and I hope to let this run for a long while, so we'll see what happens. It looks as though I'll have to pass the results file from here to polyselect_ropt manually.
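Presumably the hand-off will look something like the sketch below (the polyselect_ropt flag names used here, -inputpolys, -ropteffort and -t, are from memory and may differ between CADO revisions, so check ./polyselect_ropt -h before relying on them).

Code:
# Hypothetical hand-off: feed the saved size-optimized candidates (the n:/Y1:/Y0:/c*:
# blocks above, written to a file) into CADO's root-optimization binary.
# -inputpolys, -ropteffort and -t are assumed flag names, not verified against this build.
nice -n 19 ./polyselect_ropt -inputpolys sizeopt_best.poly -ropteffort 5 -t 8 > ropt.out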

It might be interesting to combine (e.g.) RichD's GPU sizeopted output with polyselect_ropt and see what we can come up with.

Given how modular CADO's binaries are (if not the scripts), mayhaps it's worth hacking GPU-Msieve to work with sopt and polyselect_ropt? (It seems that polyselect delegates to sopt? Or are they totally separate somehow...?)

2016-04-15, 06:25   #2
Dubslow

It finally completed. Looks like somewhere on the order of 12-14 days of a hyperthreaded Sandy Bridge quad core... I may have bitten off more than I could truly chew. Oh well, it's complete now.

Next stage: backups and root opt


Code:
# sopt: start size-optimization with polynomials:
# sopt: f_raw = 49999980*x^5 + 87342730*x^4 + ...
# sopt: g_raw = 75531706090637747253061*x^1 + -18409334084348638436044486591897373801*x^0
# sopt: with lognorm = 71.191531
# sopt: find_translations_extra 49394349.452506
# sopt: find_translations_extra -49394329.184788
Not keeping x = -0.236905, f(x) = -881266452414298567385496367451512255442834743137534323397114924012974975049793536.000000, f''(x) = 42362913788293809589383387284727357529134974824061360749448886093185664868476268579334914048.000000
# sopt: q-roots of Res(c2,c3) or Res(c2,c3)' = { -0.236899, -0.236912 }
# sopt: 50 values for the translations were computed and added in list_k
# sopt: k = 0 was added list_k
# sopt: It remains 27 values after sorting and removing duplicates
# sopt: Calling improve_list_k with sopt_effort = 0
# sopt: skew0 = 904.912995
# sopt: skew[0] = 30
# sopt: skew[1] = 165
# sopt: skew[2] = 905
# sopt: skew[3] = 4963
# sopt: skew[4] = 27221
# sopt: Start processing all k in list_k of length 27
# sopt:       better lognorm 70.30 (previous was 71.19) for skew[2] = 905
# sopt:       better lognorm 70.20 (previous was 70.30) for skew[2] = 905
# sopt:       better lognorm 65.96 (previous was 70.20) for skew[3] = 4963
# sopt:       better lognorm 65.31 (previous was 65.96) for skew[3] = 4963
# sopt:       better lognorm 65.28 (previous was 65.31) for skew[3] = 4963
# sopt:       better lognorm 64.25 (previous was 65.28) for skew[2] = 905
# sopt:       better lognorm 63.67 (previous was 64.25) for skew[4] = 27221
# sopt:       better lognorm 62.84 (previous was 63.67) for skew[2] = 905
# sopt: end of size-optimization, best polynomial are
# sopt: f_opt = 324399870240*x^5 + -80029460397523177360*x^4 + ...
# sopt: g_opt = 75531706090637747253061*x^1 + -18409337811083621201759661159905138289*x^0
# sopt: with lognorm = 62.837342

# Raw polynomial:
# n: 105720747827131650775565137946594727648048676926428801477497713261333062158444658783837181718187127016255169032147325982158719006483898971407998273736975091062494070213867530300194317862973608499
# Y1: 75531706090637747253061
# Y0: -18409334084348638436044486591897373801
# c5: 49999980
# c4: 87342730
# c3: -1202006018615283019865612
# c2: -4361271323601108006810005241882801604
# c1: 5777297089772007359416979193471163928
# c0: 1364051122127173272601888644114195017
# raw lognorm 71.19, skew 2042101760.00, alpha 0.07 (proj: -1.40), E 71.26, exp_E 63.99, 3 rroots
# Size-optimized polynomial:
n: 105720747827131650775565137946594727648048676926428801477497713261333062158444658783837181718187127016255169032147325982158719006483898971407998273736975091062494070213867530300194317862973608499
Y1: 75531706090637747253061
Y0: -18409337811083621201759661159905138289
c5: 324399870240
c4: -80029460397523177360
c3: -17398904458268124477723493
c2: 22144827537958081761266148134238
c1: 888071204935334410586667012400944573
c0: -256020541443086763599669214408591268137446
# lognorm 62.84, skew 361088.00, alpha -1.25 (proj: -1.47), E 61.59, exp_E 59.67, 5 rroots

# Stat: potential collisions=750929.12 (1.55e-01/s)
# Stat: raw lognorm (nr/min/av/max/std): 756882/58.91/70.98/71.82/0.83
# Stat: optimized lognorm (nr/min/av/max/std): 756882/58.91/64.02/67.26/0.76
# Stat: tried 833333 ad-value(s), found 756882 polynomial(s), 756882 size-optimized, 0 rootsieved
# Stat: best logmu after size optimization: 58.91 59.05 59.05 59.23 59.25 59.26 59.33 59.47 59.48 59.54 59.57 59.60 59.62 59.64 59.64 59.69 59.70 59.74 59.78 59.78 59.80 59.81 59.82 59.84 59.85 59.86 59.87 59.88 59.88 59.91 59.92 59.94 59.99 60.00 60.01 60.02 60.02 60.03 60.05 60.06 60.07 60.09 60.09 60.10 60.10 60.10 60.11 60.11 60.11 60.12 60.13 60.13 60.14 60.14 60.15 60.15 60.16 60.17 60.17 60.19 60.19 60.21 60.21 60.22 60.22 60.22 60.22 60.23 60.23 60.23 60.24 60.24 60.26 60.26 60.26 60.27 60.27 60.27 60.29 60.29 60.29 60.30 60.30 60.31 60.32 60.33 60.34 60.34 60.34 60.35 60.35 60.35 60.36 60.36 60.37 60.37 60.38 60.38 60.39 60.39 60.39 60.43 60.43 60.43 60.43 60.44 60.44 60.44 60.44 60.44 60.44 60.44 60.45 60.45 60.46 60.46 60.47 60.47 60.48 60.48 60.48 60.48 60.48 60.48 60.48 60.48 60.49 60.51 60.51 60.51 60.51 60.52 60.52 60.53 60.53 60.54 60.54 60.54 60.54 60.54 60.54 60.55 60.55 60.55 60.56 60.56 60.56 60.56 60.56 60.57 60.57 60.57 60.58 60.58 60.58 60.59 60.59 60.59 60.60 60.60 60.60 60.60 60.60 60.60 60.60 60.61 60.61 60.61 60.61 60.62 60.62 60.62 60.62 60.62 60.63 60.63 60.63 60.63 60.63 60.63 60.63 60.63 60.64 60.65 60.65 60.65 60.66 60.66 60.66 60.66 60.67 60.67 60.67 60.67 60.67 60.67 60.67 60.68 60.68 60.68 60.68 60.69 60.69 60.69 60.69 60.69 60.69 60.70 60.70 60.70 60.70 60.70 60.71 60.71 60.71 60.71 60.71 60.72 60.72 60.72 60.72 60.72 60.72 60.72 60.72 60.72 60.73 60.73 60.73 60.73 60.73 60.73 60.74 60.74 60.74 60.74 60.74 60.74 60.74 60.74 60.75 60.75 60.75 60.75 60.75 60.75 60.75 60.76 60.76 60.76 60.76 60.76 60.76 60.77 60.77 60.77 60.77 60.77 60.77 60.77 60.78 60.78 60.78 60.78 60.78 60.78 60.79 60.79 60.79 60.79 60.79 60.79 60.79 60.79 60.79 60.79 60.80 60.80 60.80 60.80 60.80 60.80 60.80 60.80 60.81 60.81 60.81 60.81 60.81 60.81 60.82 60.82 60.82 60.82 60.82 60.82 60.82 60.82 60.83 60.83 60.83 60.83 60.83 60.83 60.84 60.84 60.84 60.84 60.84 60.84 60.84 60.84 60.84 60.85 60.85 60.85 60.85 60.85 60.85 60.85 60.85 60.85 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.86 60.87 60.87 60.87 60.87 60.87 60.87 60.87 60.87 60.88 60.88 60.88 60.88 60.88 60.88 60.88 60.88 60.88 60.88 60.89 60.89 60.89 60.89 60.89 60.89 60.90 60.90 60.90 60.90 60.90 60.90 60.90 60.91 60.91 60.91 60.91 60.91 60.91 60.91 60.91 60.91 60.91 60.91 60.92 60.92 60.92 60.92 60.92 60.92 60.92 60.92 60.92 60.92 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.93 60.94 60.94 60.94 60.94 60.94 60.94 60.94 60.94 60.94 60.94 60.94 60.95 60.95 60.95 60.95 60.95 60.95 60.95 60.95 60.95 60.95 60.95 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.96 60.97 60.97 60.97 60.97 60.97 60.97 60.97 60.97 60.97 60.97 60.97 60.98 60.98 60.98 60.98 60.98 60.98 60.98 60.98 60.98 60.98 60.98 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 60.99 61.00 61.00 61.00 61.00 61.00 61.00 61.00 61.00 61.00 61.00 61.00 61.00 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.01 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.02 61.03 61.03 61.03 61.03 61.03 61.03 61.03 61.03 61.03 61.04 61.04 61.04 61.04 61.04 61.04 61.04 61.04 61.04 61.04 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.05 61.06 61.06 61.06 61.06 61.06 61.06 61.06 61.06 61.06 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 61.07 
61.07 61.07 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.08 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.09 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.10 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.11 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.12 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.13 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.14 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.15 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.16 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.17 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.18 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.19 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.20 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.21 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.22 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.23 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.24 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.25 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.26 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27 61.27
# Stat: total phase took 4856295.06s
# Stat: size-optimization took 39902.63s
2016-04-15, 06:47   #3
Dubslow

It doesn't save them to a file unless you specify -out (which gets you msieve-style output). I did not save the stdout; only the last 5K lines are kept by my tty, which covers roughly 0.014% of the total output.
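Next time the obvious workaround is to capture stdout from the start, e.g. re-using the invocation from the first post:

Code:
# Keep the full size-optimization output around for later parsing while still
# watching it scroll by; nothing here beyond plain shell redirection.
nice -n 19 ./polyselect -degree 5 -P 5000000 -admax 50000000 -nq 1000 -keep 1000 -v -t 8 -N $(cat num) | tee polyselect.out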

2016-04-20, 23:24   #4
Dubslow

Okay, it looks like the Python scripts that normally run CADO redirect stdout to a file and then parse that output for the top n lognorm polys themselves. That is, the scripts pick out the best n hits, not the polyselect binary. The -keep parameter is effectively meaningless (except for the stats at the end).
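Something in the spirit of what the scripts do can be approximated with a one-liner over a captured log (this assumes the output was saved as polyselect.out as sketched earlier, that each size-optimized record ends with a "# lognorm ..." line as in the first post, and GNU sort for -g):

Code:
# List the 40 best (smallest) size-optimized lognorms, with the line numbers
# where they occur, so the corresponding poly blocks can be pulled out by hand.
grep -n '^# lognorm ' polyselect.out | sort -g -k3,3 | head -n 40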


For reference, here is the params.c90 documentation:

Code:
###########################################################################
# Polynomial selection task with Kleinjung's algorithm (2008)
###########################################################################

tasks.polyselect.degree = 4		# degree of the algebraic polynomial

#tasks.polyselect.threads = 2
#    # How many threads to use per polyselect process

tasks.polyselect.P = 10000		# choose lc(g) with two prime factors in [P,2P]
    # Setting a large value will most likely find better polynomials,
    # but the number of found polynomials will be smaller.
    # As a rule of thumb, we want at least 100 found polynomials in total
    # (without norm limitation, see below).

tasks.polyselect.admin = 0          # min value for leading coefficient of f(x)
    # If not set, the default is 0.

tasks.polyselect.admax = 150e3      # max value for leading coefficient of f(x)

tasks.polyselect.incr = 60	    # increment for leading coefficient of f(x)
    # This factor is usually a smooth number, which forces projective roots in
    # the algebraic polynomial. 60 is a good start, 210 is popular as well.
    # Warning: to ensure lc(f) is divisible by incr, admin should be divisible
    # by incr too.

    # The polynomial selection search time is proportional to the
    # length of the search interval, i.e., (admax-admin)/incr.

tasks.polyselect.nrkeep = 40

tasks.polyselect.adrange = 5000		# size of individual tasks
    # Polynomial selection is split into several individual tasks. The
    # complete range from admin to admax has to be covered for the polynomial
    # selection to complete. The number of individual tasks is
    # (polsel_admax-polsel_admin)/polsel_adrange. Each such task is issued as
    # a workunit to a slave for computation.

tasks.polyselect.nq = 256
    # Number of small primes in the leading coefficient of the linear polynomial
    # Safe to leave at the default value
    # Recommended values are powers of the degree, e.g., 625 for degree 5,
    # or 1296 for degree 6. Here 256 = 4^4 thus the leading coefficient of
    # the linear polynomial will be q1*q2*q3*q4*p1*p2 where q1,q2,q3,q4 are
    # small primes, and P <= p1, p2 < 2*P.
And here are the suggested params for a C190:

Code:
###########################################################################
# Polynomial selection
###########################################################################

tasks.polyselect.degree = 5

tasks.polyselect.P = 5000000
tasks.polyselect.admax = 5e7
tasks.polyselect.adrange = 5e5
tasks.polyselect.incr = 60
tasks.polyselect.nq = 1000
tasks.polyselect.nrkeep = 1000
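
For comparison, those C190 settings map almost directly onto the flags of the standalone binary; the invocation in the first post used exactly these values. admin/adrange have no counterpart on that command line, since the scripts only use them to split the ad-range into workunits, and nrkeep is handled by the scripts' own parsing of the output (per the above, the binary's -keep only affects the end-of-run stats).

Code:
# tasks.polyselect.degree -> -degree, P -> -P, admax -> -admax, nq -> -nq,
# nrkeep -> -keep (stats only); -t and -N are per-run extras.
nice -n 19 ./polyselect -degree 5 -P 5000000 -admax 50000000 -nq 1000 -keep 1000 -v -t 8 -N $(cat num)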

2016-04-21, 05:32   #5
VBCurtis

Quote:
Originally Posted by Dubslow
It might be interesting to combine (e.g.) RichD's GPU sizeopted output with polyselect_ropt and see what we can come up with.
I didn't find the Python scripts inflexible, once I figured out what each parameter controlled. It was pretty easy to pick a coefficient range, P-value (which correlates to how long to spend on each coeff), and the number of polys to keep for the root-opt step.

I can post my top ~500 size-opt hits from msieve, if you'd like to try root-opting them. But, my best poly was worse than Rich's, so my hits may just not be very good. Lemme know if you'd like them.
2016-04-21, 05:58   #6
Dubslow

Quote:
Originally Posted by VBCurtis
I didn't find the Python scripts inflexible, once I figured out what each parameter controlled. It was pretty easy to pick a coefficient range, P-value (which correlates to how long to spend on each coeff), and the number of polys to keep for the root-opt step.
There are endless parameters to tweak, and that's wonderful, but what I consider "inflexible" is the fact that you must set *all* parameters for *all* stages merely to run poly select. I have no idea how many large primes we're going to use, thank you very much, and I certainly don't know what filtering density to use, etc. Essentially, there's no -np or -ns or -nc equivalent.

Quote:
Originally Posted by VBCurtis
I can post my top ~500 size-opt hits from msieve, if you'd like to try root-opting them. But, my best poly was worse than Rich's, so my hits may just not be very good. Lemme know if you'd like them.
As I recall from the RSA-896 experimental poly search, CADO's root siever is noticeably better than Msieve's. Whether or not that's enough to beat RichD's poly, I'm not sure, but I definitely want to try it for posterity's sake, as well as, of course, trying CADO's first stage/size opt, which has been the point here so far.
2016-04-21, 22:18   #7
VBCurtis

Attached are my top 500 hits from 15 GPU-days. The worst norm is 1.46e26, the best 1.33e25. Please post the best poly or two from CADO root-opt on this file.
Attached Files
File Type: 7z 4788c195top500.ms.7z (53.6 KB)

2016-04-21, 23:26   #8
Dubslow

jasonp, are the size-optimized hit "norms" calculated the same way in Msieve as in CADO? I recall that the Murphy E is definitely different, but it seems that the norms are calculated the same way (though CADO gives lognorms, here varying from 60-70, where for reference e^65 ~ 1.695e28).

The recorded best 1000 norms from my first run (whose results are lost, but whose best lognorms are in the second post of this thread) seem to range from 3.84e25 to 4.97e26.
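Just to put the two scales side by side (assuming for the moment that the two norms are directly comparable, which is exactly the question): since the lognorm is a natural log, the conversion is plain exp/log, so the best size-optimized lognorm of 58.91 from the run above sits at about 3.8e25 on the msieve scale, while Curtis's best norm of 1.33e25 would be a lognorm of about 57.9.

Code:
# Convert between CADO lognorms and msieve-style norms, assuming norm ~ e^lognorm.
awk 'BEGIN { printf "exp(58.91)   = %.3g\n", exp(58.91);
             printf "log(1.33e25) = %.2f\n",  log(1.33e25) }'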

2016-04-22, 00:53   #9
jasonp

The norm (the last field in the .ms file) is a rectangular integral very similar to the one used by the (now obsolete) GGNFS polynomial selection tool. CADO-NFS uses the same integral but in radial coordinates, which leads to a different expression to optimize. The two versions will in general not compute the same answer, but the hope is that the two norms rank polynomials in approximately the same way.
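Schematically (ignoring the exact region, skew convention and normalization, which differ between implementations and are not meant to match either code base exactly), the quantity in question is a skewed L2-type integral of the homogenized polynomial F, written either in rectangular or in radial coordinates:

Code:
I_{\mathrm{rect}} = \iint F(\sqrt{s}\,x,\; y/\sqrt{s})^{2}\, dx\, dy ,
\qquad
I_{\mathrm{rad}}  = \iint F(\sqrt{s}\,r\cos\theta,\; r\sin\theta/\sqrt{s})^{2}\, r\, dr\, d\theta .
Over the same region these are the same number, but once F^2 is expanded and integrated term by term, the two parameterizations give the optimizer different-looking expressions in the coefficients, with different gradients and Hessians (more on that below).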

CADO-NFS now also uses lattice-reduction based size optimization techniques that can potentially search a larger space than Msieve does. They took our RSA-768 stage 1 dataset and produced a polynomial that was actually better than the one used for the factorization.

The E-values computed by Msieve and CADO-NFS should be exactly comparable now.

2016-04-22, 09:01   #10
Dubslow

Quote:
Originally Posted by jasonp
The norm (the last field in the .ms file) is a rectangular integral very similar to the one used by the (now obsolete) GGNFS polynomial selection tool. CADO-NFS uses the same integral but in radial coordinates, which leads to a different expression to optimize. The two versions will in general not compute the same answer, but the hope is that the two norms rank polynomials in approximately the same way.

CADO-NFS now also uses lattice-reduction based size optimization techniques that can potentially search a larger space than Msieve does. They took our RSA-768 stage 1 dataset and produced a polynomial that was actually better than the one used for the factorization.

The E-values computed by Msieve and CADO-NFS should be exactly comparable now.
1) That's a shame. They at least seem to be nearly the same, within an order of magnitude. (Btw, when you say "identical integral except for the coordinates", I would otherwise interpret that to mean the same final result, since the value of an integral is independent of the coordinates used to compute it... presumably you meant something slightly different here.)

2) You mean what Msieve calls stage 1, pre-size-optimization, that CADO doesn't bother to separate from size optimization? As in, they used CADO to perform size optimization on Msieve's stage 1 hits?

3) Really? Since when? Which code base changed to match the other? And also I recall that CADO's Murphy E depends on the selected sieve region size, called I in most places?

Thanks for the awesome knowledge bombs regardless

2016-04-22, 15:07   #11
jasonp

1) The value of the integrals would be the same, but remember that we are optimizing very large polynomial expressions, and rectangular vs. radial coordinates yield different polynomials that have different gradients and Hessians. Our experience is that radial coordinates are better behaved numerically, presumably because polynomial size in the sieving region naturally has radial patterns in it.

2) In CADO-NFS the size optimization is considered part of the initial hashtable-based searching. I think they made an exception for the RSA-768 dataset, because all we had was the Msieve stage 1 output and it would have taken quite a while to regenerate it.

3) My memory is hazy on what the difference was; the formula for the E-value does include the sieving bound and sieve area size as parameters, but those are pretty much always chosen to be the same numbers across the different implementations I know about. Actually, I remember now: they gave different answers because Dickman's function is computed in segments, and the RSA-896 poly search required a few more segments than that version of CADO had tabulated. So we should always have produced the same answers, but in that specific case there was a small bug that they fortunately found quickly.
