mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   2^947+1 status (https://www.mersenneforum.org/showthread.php?t=18719)

fivemack 2013-10-20 16:34

2^947+1 status
 
As an attempt to steer away from instant gratification, I'm doing 2^947+1 myself.

I am 3% through; it will take about four months.

R.D. Silverman 2013-10-21 00:33

[QUOTE=fivemack;356866]As an attempt to steer away from instant gratification, I'm doing 2^947+1 myself.

I am 3% through; it will take about four months.[/QUOTE]

It is [i]so[/i] nice to have resources!

Batalov 2014-02-25 22:46

[QUOTE=fivemack;356866]I am 3% through; it will take about four months.[/QUOTE]
Could be any day now, then? :tu:

retina 2014-02-26 06:40

Are we there yet?

fivemack 2014-02-26 10:19

725 million relations collected so far. I've sieved 20-114, 140-173, 190-200. Probably another month and a half to go, depending entirely on whether the machines crash mysteriously the day after I go to Mexico.

It turns out that I do not have the required temperament just to leave all my computers working on a problem and go and do something else for six months, so the estimates (30 hours per MQ on machine one, 72 hours per MQ on machine two, 60 hours per MQ on machine three) don't correspond well with the actual elapsed time.
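A quick back-of-envelope on the status above, assuming the ranges are special-q in millions (MQ), as the per-MQ rates suggest (a sketch of the arithmetic, not the actual scheduler; the "remaining gaps" are my assumption about what gets sieved next):

```python
# Ranges already sieved, in millions of special-q (MQ), from the post above.
done_ranges = [(20, 114), (140, 173), (190, 200)]
done = sum(hi - lo for lo, hi in done_ranges)       # MQ completed so far

# Hypothetical assumption: the remaining gaps get filled next.
todo_ranges = [(114, 140), (173, 190)]
todo = sum(hi - lo for lo, hi in todo_ranges)

# Quoted rates (hours per MQ) for the three machines; combined MQ/hour.
rates = [30, 72, 60]
throughput = sum(1 / r for r in rates)

print(f"{done} MQ done, {todo} MQ to go")
print(f"ideal wall time: {todo / throughput / 24:.0f} days")
```

With all three machines running flat out this comes to roughly four weeks, which is consistent with the "another month and a half" estimate once real-world downtime is factored in.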

fivemack 2014-04-12 16:09

Linear algebra started
 
[code]
Sun Apr 6 23:30:56 2014 found 241393747 hash collisions in 907238334 relations
Sun Apr 6 23:31:17 2014 added 1218600 free relations
Sun Apr 6 23:31:17 2014 commencing duplicate removal, pass 2
Sun Apr 6 23:41:08 2014 found 214115403 duplicates and 694341531 unique relations
Mon Apr 7 01:32:52 2014 begin with 694341531 relations and 605484726 unique ideals
...
Mon Apr 7 06:58:22 2014 commencing in-memory singleton removal
Mon Apr 7 06:58:28 2014 begin with 118530589 relations and 97145799 unique ideals
Mon Apr 7 06:59:36 2014 reduce to 118083954 relations and 91048903 ideals in 9 passes
Mon Apr 7 06:59:36 2014 max relations containing the same ideal: 17
Mon Apr 7 07:00:31 2014 relations with 0 large ideals: 4682145
Mon Apr 7 07:00:31 2014 relations with 1 large ideals: 21135870
Mon Apr 7 07:00:31 2014 relations with 2 large ideals: 42173655
Mon Apr 7 07:00:31 2014 relations with 3 large ideals: 35687344
Mon Apr 7 07:00:31 2014 relations with 4 large ideals: 12631170
Mon Apr 7 07:00:31 2014 relations with 5 large ideals: 1591912
Mon Apr 7 07:00:31 2014 relations with 6 large ideals: 86220
Mon Apr 7 07:00:31 2014 relations with 7+ large ideals: 95638
Mon Apr 7 07:00:31 2014 commencing 2-way merge
Mon Apr 7 07:01:34 2014 reduce to 69068937 relation sets and 42033886 unique ideals
Mon Apr 7 07:01:34 2014 commencing full merge
Mon Apr 7 07:10:06 2014 memory use: 3570.7 MB
Mon Apr 7 07:10:12 2014 found 31495168 cycles, need 28242086
Mon Apr 7 07:10:20 2014 weight of 28242086 cycles is about 3163276282 (112.01/cycle)
Mon Apr 7 07:10:20 2014 distribution of cycle lengths:
Mon Apr 7 07:10:20 2014 1 relations: 4746924
Mon Apr 7 07:10:20 2014 2 relations: 1955327
Mon Apr 7 07:10:20 2014 3 relations: 1756761
Mon Apr 7 07:10:20 2014 4 relations: 1682800
Mon Apr 7 07:10:20 2014 5 relations: 1695689
Mon Apr 7 07:10:20 2014 6 relations: 1670962
Mon Apr 7 07:10:20 2014 7 relations: 1663574
Mon Apr 7 07:10:20 2014 8 relations: 1637271
Mon Apr 7 07:10:20 2014 9 relations: 1582850
Mon Apr 7 07:10:20 2014 10+ relations: 9849928
Mon Apr 7 07:10:20 2014 heaviest cycle: 21 relations
Mon Apr 7 07:10:22 2014 commencing cycle optimization
Mon Apr 7 07:11:31 2014 start with 209594857 relations
Mon Apr 7 07:18:56 2014 pruned 9926842 relations
Mon Apr 7 07:18:56 2014 memory use: 6192.3 MB
Mon Apr 7 07:18:56 2014 distribution of cycle lengths:
Mon Apr 7 07:18:56 2014 1 relations: 4746924
Mon Apr 7 07:18:56 2014 2 relations: 2017354
Mon Apr 7 07:18:56 2014 3 relations: 1841433
Mon Apr 7 07:18:56 2014 4 relations: 1772997
Mon Apr 7 07:18:56 2014 5 relations: 1804867
Mon Apr 7 07:18:56 2014 6 relations: 1785619
Mon Apr 7 07:18:56 2014 7 relations: 1788893
Mon Apr 7 07:18:56 2014 8 relations: 1754704
Mon Apr 7 07:18:56 2014 9 relations: 1696359
Mon Apr 7 07:18:56 2014 10+ relations: 9032936
Mon Apr 7 07:18:56 2014 heaviest cycle: 21 relations
Mon Apr 7 07:19:22 2014 RelProcTime: 35197
Mon Apr 7 07:19:22 2014 elapsed time 09:46:38
...
Sat Apr 12 14:16:17 2014 commencing linear algebra
Sat Apr 12 14:16:21 2014 read 28242086 cycles
Sat Apr 12 14:17:17 2014 cycles contain 100827430 unique relations
Sat Apr 12 14:27:20 2014 read 100827430 relations
Sat Apr 12 14:30:41 2014 using 20 quadratic characters above 4294917296
Sat Apr 12 14:38:06 2014 building initial matrix
Sat Apr 12 14:55:00 2014 memory use: 13022.3 MB
Sat Apr 12 14:55:13 2014 read 28242086 cycles
Sat Apr 12 14:55:16 2014 matrix is 28240603 x 28242086 (12265.0 MB) with weight 3500853676 (123.96/col)
Sat Apr 12 14:55:16 2014 sparse part has weight 2904542345 (102.84/col)
Sat Apr 12 15:06:26 2014 filtering completed in 3 passes
Sat Apr 12 15:06:31 2014 matrix is 28194648 x 28194848 (12254.5 MB) with weight 3497602261 (124.05/col)
Sat Apr 12 15:06:31 2014 sparse part has weight 2902289708 (102.94/col)
Sat Apr 12 15:08:17 2014 matrix starts at (0, 0)
Sat Apr 12 15:08:21 2014 matrix is 28194648 x 28194848 (12254.5 MB) with weight 3497602261 (124.05/col)
Sat Apr 12 15:08:21 2014 sparse part has weight 2902289708 (102.94/col)
Sat Apr 12 15:08:21 2014 saving the first 48 matrix rows for later
Sat Apr 12 15:08:25 2014 matrix includes 64 packed rows
Sat Apr 12 15:08:28 2014 matrix is 28194600 x 28194848 (11776.5 MB) with weight 2983072463 (105.80/col)
Sat Apr 12 15:08:28 2014 sparse part has weight 2805193976 (99.49/col)
Sat Apr 12 15:08:29 2014 using block size 8192 and superblock size 1179648 for processor cache size 12288 kB
Sat Apr 12 15:11:05 2014 commencing Lanczos iteration (6 threads)
Sat Apr 12 15:11:05 2014 memory use: 10232.0 MB
Sat Apr 12 15:12:50 2014 linear algebra at 0.0%, ETA 515h34m
Sat Apr 12 15:13:23 2014 checkpointing every 60000 dimensions
[/code]

ETA 4th May (the gap from 7/4 to 12/4 was to run gpu-msieve and cuda-ecm on some aliquot-sequence numbers, and also factor a C148)
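The 4th May ETA follows directly from the log: 515h34m from the Lanczos start line works out to the small hours of 4 May (a quick datetime check, not part of the original log):

```python
from datetime import datetime, timedelta

# "linear algebra at 0.0%, ETA 515h34m" was logged at this time.
start = datetime(2014, 4, 12, 15, 12, 50)
eta = start + timedelta(hours=515, minutes=34)
print(eta)
```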

fivemack 2014-05-03 22:42

Linear algebra done
 
[code]
Sat Apr 12 15:08:28 2014 matrix is 28194600 x 28194848 (11776.5 MB) with weight 2983072463 (105.80/col)
Sat Apr 12 15:08:28 2014 sparse part has weight 2805193976 (99.49/col)
Sat Apr 12 15:08:29 2014 using block size 8192 and superblock size 1179648 for processor cache size 12288 kB
Sat Apr 12 15:11:05 2014 commencing Lanczos iteration (6 threads)
Sat Apr 12 15:11:05 2014 memory use: 10232.0 MB
Sat Apr 12 15:12:50 2014 linear algebra at 0.0%, ETA 515h34m
Sat Apr 12 15:13:23 2014 checkpointing every 60000 dimensions
Sat May 3 21:18:59 2014 lanczos halted after 445870 iterations (dim = 28194599)
Sat May 3 21:19:24 2014 recovered 39 nontrivial dependencies
Sat May 3 21:19:25 2014 BLanczosTime: 1839788
Sat May 3 21:19:25 2014 elapsed time 511:03:09
Sat May 3 21:21:25 2014
Sat May 3 21:21:25 2014
Sat May 3 21:21:25 2014 Msieve v. 1.52 (SVN 952)
Sat May 3 21:21:25 2014 random seeds: ece05c6c 791a6b70
Sat May 3 21:21:25 2014 factoring 10043208114491264804179083298391231973312377937445803901159034772482570674044331352202149635565259461053937738648225843572812880677442189386001220582374066889927639514347376185241129744973725819976341678434342729 (212 digits)
Sat May 3 21:21:26 2014 no P-1/P+1/ECM available, skipping
Sat May 3 21:21:26 2014 commencing number field sieve (212-digit input)
Sat May 3 21:21:26 2014 R0: 365375409332725729550921208179070754913983135744
Sat May 3 21:21:26 2014 R1: -1
Sat May 3 21:21:26 2014 A0: 2
Sat May 3 21:21:26 2014 A1: 0
Sat May 3 21:21:26 2014 A2: 0
Sat May 3 21:21:26 2014 A3: 0
Sat May 3 21:21:26 2014 A4: 0
Sat May 3 21:21:26 2014 A5: 0
Sat May 3 21:21:26 2014 A6: 1
Sat May 3 21:21:26 2014 skew 1.00, size 5.487e-14, alpha 1.888, combined = 1.246e-14 rroots = 0
Sat May 3 21:21:26 2014
Sat May 3 21:21:26 2014 commencing square root phase
Sat May 3 21:21:26 2014 reading relations for dependency 1
Sat May 3 21:21:29 2014 read 14096426 cycles
Sat May 3 21:22:03 2014 cycles contain 50392692 unique relations
Sat May 3 21:30:33 2014 read 50392692 relations
Sat May 3 21:36:28 2014 multiplying 50392692 relations
Sat May 3 22:35:29 2014 multiply complete, coefficients have about 1371.80 million bits
Sat May 3 22:35:34 2014 initial square root is modulo 1424149
Sat May 3 23:35:00 2014 sqrtTime: 8014
Sat May 3 23:35:01 2014 prp65 factor: 22975205222554546114010145307351411496025395212488281695009106987
Sat May 3 23:35:01 2014 prp147 factor: 437132465943413660702686151864039550227452294488019446708578944533056499456088034550429379770729594712071066859290963976253884943867357239818955867
Sat May 3 23:35:01 2014 elapsed time 02:13:36
[/code]

I have also informed Sam Wagstaff of this.
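The result can be cross-checked against the log in a few lines of Python (a sanity check on the numbers above, not part of the original run):

```python
# The 212-digit input and the two prp factors, copied from the msieve log.
c212 = int(
    "10043208114491264804179083298391231973312377937445803"
    "90115903477248257067404433135220214963556525946105393"
    "77386482258435728128806774421893860012205823740668899"
    "27639514347376185241129744973725819976341678434342729")
p65 = 22975205222554546114010145307351411496025395212488281695009106987
p147 = int(
    "4371324659434136607026861518640395502274522944880"
    "1944670857894453305649945608803455042937977072959"
    "4712071066859290963976253884943867357239818955867")

assert p65 * p147 == c212           # the factors multiply back to the input
assert (2**947 + 1) % c212 == 0     # and the input divides 2^947+1
print("factorization verified")
```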

jasonp 2014-05-04 00:17

Kudos on a huge solo job!

Batalov 2014-05-04 00:35

Congrats, Tom! You are a seasoned marathoner!

bsquared 2014-05-04 00:46

Nice work Tom, congrats!

pinhodecarlos 2014-05-04 01:13

I take my hat off.

debrouxl 2014-05-04 06:53

Great job :smile:

WraithX 2014-05-04 12:53

[QUOTE=fivemack;367854]725 million relations collected so far. I've sieved 20-114, 140-173, 190-200. Probably another month and a half to go, depending entirely on whether the machines crash mysteriously the day after I go to Mexico.

It turns out that I do not have the required temperament just to leave all my computers working on a problem and go and do something else for six months, so the estimates (30 hours per MQ on machine one, 72 hours per MQ on machine two, 60 hours per MQ on machine three) don't correspond well with the actual elapsed time.[/QUOTE]

Excellent Work! You did all this with only 3 machines?!? Very impressive. :tu:

How many cores/threads did you have sieving? How did you farm out the work to the different machines? Was it all by hand, or did you automate it somehow?

debrouxl 2014-05-04 15:17

AFAIK, for automation, he has his own set of scripts, developed over time.

I once looked into coercing SaltStack into the kind of work distribution / orchestration system that we need for NFS sieving. At the time, there was nothing to make it fulfill the "here's a set of N tasks, distribute them onto this set of computers on an FCFS basis, and gather the results on one computer" usage pattern, akin to a "poor man's BOINC" without BOINC-ifying the executables.
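The pattern is simple to state: a shared pool of tasks, and each worker pulls the next one as soon as it is free. A minimal in-process illustration with a thread-safe queue (hypothetical names throughout; real sieving farms out to separate machines, not threads):

```python
import queue
import threading

# FCFS work distribution sketch: a shared queue of special-q ranges;
# each "machine" claims the next range as soon as it is idle.
tasks = queue.Queue()
for start in range(20_000_000, 20_100_000, 10_000):
    tasks.put((start, start + 10_000))   # one sieving range per task

results = []
lock = threading.Lock()

def worker(name):
    while True:
        try:
            lo, hi = tasks.get_nowait()  # first come, first served
        except queue.Empty:
            return
        # Real code would run the siever here; we just record the claim.
        with lock:
            results.append((name, lo, hi))

threads = [threading.Thread(target=worker, args=(f"machine{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} ranges completed by {len(threads)} workers")
```

The central gather step then amounts to collecting the relation files each worker produces for its claimed ranges.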

fivemack 2014-05-04 17:22

Only three machines, but one of them is a four-socket 48-CPU 64GB Opteron that I bought when a special offer made it about the most cost-effective, and certainly the most convenient, way to acquire that many computrons. The others are a 16GB i7-4770 and a 32GB i7-4930K; the post-processing is done on that last one.

I basically just run jobs with make
[code]
G=$(shell seq 0 12999)
S=/home/nfsworld/gnfs-batalov/gnfs-lasieve4I16e

all: $(patsubst %,%.t1,$G)

%.t1:
	$(S) snfs -r -f $(shell echo $*\*10000+20000000 | bc) -c 10000 2> $*.t
	wc -c $*.t > $*.t1
[/code]

and manually write makefiles that run for about a month on however-many CPUs.

WraithX 2014-05-05 03:46

[QUOTE=fivemack;372632]I basically just run jobs with make

and manually write makefiles that run for about a month on however-many CPUs.[/QUOTE]

That's a very creative use of make! :cool: I don't know if I would have thought of something like that. I wonder what other kinds of unix tools could be used in a creative way like this to do large batches of sieving?

Personally, I rewrote parts of factmsieve.py to connect to my web server and get work, gzip the relations, and then ftp the gz file to my home computer. The web server is running a simple php script that keeps track of the next unit of work to hand out. I'll be writing a more detailed description of all of this if I ever finish the large factoring project I'm working on! :max: :smile:

xilman 2014-05-05 12:07

[QUOTE=WraithX;372673]That's a very creative use of make! :cool: I don't know if I would have thought of something like that. I wonder what other kinds of unix tools could be used in a creative way like this to do large batches of sieving?

Personally, I rewrote parts of factmsieve.py to connect to my web server and get work, gzip the relations, and then ftp the gz file to my home computer. The web server is running a simple php script that keeps track of the next unit of work to hand out. I'll be writing a more detailed description of all of this if I ever finish the large factoring project I'm working on! :max: :smile:[/QUOTE]
Many years ago I wrote cabal[cd].c (client and daemon respectively) to co-ordinate NFS factorizations. If you search on "The Cabal" and NFS you should find the reason for the name and just how many years ago. It was last used in anger at several sites for RSA-768; one of the RSA-768 papers describes its use. Finding that paper is also left as an exercise for the reader.

The code may well be out there, but anyone who wants it for adaptation to their projects is welcome to a copy from me if it can't be found otherwise.

Batalov 2014-05-06 18:00

I spun out the push for the next factorization in a [URL="http://mersenneforum.org/showthread.php?t=19334"]separate thread[/URL].

