Old 2011-12-17, 10:04   #89
xilman

Originally Posted by Stargate38:
Any news on the c181? It could go to NFS@Home, then it would be factored a lot quicker.
Hmm, perhaps akruppa and I should have another run at it.

Old 2012-02-29, 15:45   #90
Stargate38

Any ETA? It's been almost 3 months.
Old 2012-09-08, 14:11   #91
WraithX

Good news everyone! I have factored HP49(110).c181 = p79*p103. You can see the details in the email I sent to Patrick:
Hello Patrick,

I have quite a few developments in the HP49 saga to report to you.  I
spent about six to eight months trying to factor the HP49(110).c181 via
ecm.  However, that never proved fruitful.  Then earlier in the year I
started factoring the c181 via the General Number Field Sieve using
ggnfs and msieve to do the factorization.  I started gathering
relations on 2012/05/17 and finished gathering relations on 2012/08/11.
On that day a 22M^2 matrix was built and linear algebra ran on it until
2012/08/30. 1.5 hours later, the square root step found the factors on
the first dependency.  The c181 split into p79*p103, with:
p79 = 13242639228855682038275386963913139191902992119830964965826611351\
p103 = 5257875980823060025161989259479167407618986741511789127217197204\

From here I have started using an excellent factoring utility called
yafu, which can very quickly find small factors and can even keep
working until it fully factors a number.  In order to factor a number,
it checks for small factors, it tries the Fermat method, Pollard rho,
p-1, ecm, the quadratic sieve, and it can try factoring via the number
field sieve.  Some of the functionality depends on external binaries,
each of which is easy to find online.  I typically use yafu to find
small factors of these numbers, and then I will manually run gmp-ecm
to try to find larger factors.

*** The above factorization leads us to HP49(111), which is a c236.

It had an easy factorization of:
3 * 7 * 3119 * 30168011 * 859257036259 * p212, with:
p212 = 2215672318292438329341551789093919668756597710700506491362229289\

*** This leads us to HP49(112), which is a c238.

It partially factored into:
131 * 2721660787 * 364148211209 * 4332696358733373457 * c196

I was able to factor HP49(112).c196 with gmp-ecm with B1=110e6 and lucky
sigma=426853020.  This gives us the split c196 = p46*p151, with:
p46 = 2871080232471495934021653967701541108613371057
p151 = 2310258942683190562148481349981529646166666457710725946445425378\

*** This leads us to HP49(113), which is a c241.

It partially factored into:
3 * 13 * 23 * 521845650935569 * 868711762772471 * 319988447520300554621
 * 28389161986882946018325701897 * c159

I was able to factor HP49(113).c159 with gmp-ecm with B1=110e6 and lucky
sigma=1608282488.  This gives us the split c159 = p46*p113, with:
p46 = 4476784590773507504219451975358661227634604289
p113 = 7937968436512120054007404759114714054246049623545813848950867773\
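
As a side note, the complete 46-digit cofactors above are far past trial division, but a few rounds of Miller-Rabin (a plain-Python sketch below, not part of the original workflow) are enough to confirm them as probable primes:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probable-prime test using only the standard library."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n-1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

# The two p46 values reported above
p46_112 = 2871080232471495934021653967701541108613371057
p46_113 = 4476784590773507504219451975358661227634604289
print(is_probable_prime(p46_112), is_probable_prime(p46_113))
```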

*** This leads us to HP49(114), which is a c244.

It factored pretty quickly into:
19 * 983 * 2663 * 78607 * 9934389995249 * 21656051585046364524395089
 * 45811515442003960460099942651 * p164, with:
p164 = 8128977805895626607033264670100486500595772941131584619352172246\

*** This leads us to HP49(115), which is a c246.

It factored pretty quickly into:
3 * 3 * 3 * 339257 * 256784956591 * 36693424661311252997
 * 12089711795346540523800293 * p183, with:
p183 = 1915138220008002714613864800803463985954766228490424773556933617\

*** This leads us to HP49(116), which is a c250.

It took a short while to factor into:
227 * 52386283 * 39852303700003 * 34918470225660868578167
 * 71390396918591830182237959705744641 * p169, with:
p169 = 2821594399022506045260907988881750768134579956275599251807250458\

*** This leads us to HP49(117), which is a c252.

It has the partial factorization:
3 * 23 * 99525233 * 12143755081 * 2844434001269627828783 * c210

The decimal expansion of HP49(117).c210 is:

I am continuing to work on HP49(117).c210.  The search continues!

Since sending that email, I have kept running curves on HP49(117).c210, currently at the B1=345e6 level. According to gmp-ecm:
Using B1=345000000, B2=3973241221966, polynomial Dickson(30), sigma=...
dF=262144, k=5, d=2852850, d2=17, i0=104
Expected number of curves to find a factor of n digits:
35      40      45      50      55      60      65      70      75      80
21      73      286     1258    6111    32413   186083  1147805 7549361 5.3e+007
I have already finished more than 6500 curves at B1=345e6. So, if I'm reading the table right, this means that I have done > 1.06*t55 and > 5.17*t50.
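
Those multiples can be checked against the table (which gives ~1258 expected curves for t50 and ~6111 for t55 at this B1):

```python
# Expected curve counts at B1=345e6, taken from the gmp-ecm table quoted above
expected = {35: 21, 40: 73, 45: 286, 50: 1258, 55: 6111,
            60: 32413, 65: 186083, 70: 1147805}
curves_done = 6500

for d in (50, 55):
    print(f"t{d}: {curves_done / expected[d]:.2f}")  # t50: 5.17, t55: 1.06
```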

Here are some extra details that I didn't provide to Patrick. All of this was done on one computer with dual Xeon E5-2687W CPUs (16 physical cores) and 64GB of DDR3-1600 registered ECC RAM. I am also attaching the ggnfs/msieve log files so that others can see any extra details they might be interested in. (I trimmed out a lot of the sieving output since that made the zip file too big to upload.)

The search continues!
Attached Files: zip (17.9 KB)
Old 2012-09-08, 15:00   #92
xilman

Originally Posted by WraithX:
Good news everyone! I have factored HP49(110).c181 = p79*p103.
Nice work!

Traditional wisdom suggests a t70 before turning to GNFS. However, a c210 would be a nice showcase in its own right, so perhaps a t65 could be justified.

Old 2012-09-08, 15:34   #93
schickel

Originally Posted by WraithX:
Good news everyone! I have factored HP49(110).c181 = p79*p103.
All I can say is: wow!
Old 2012-09-09, 20:19   #94
sean

Congratulations. That's an excellent individual effort.
There are now 308 numbers which the current blocker is on the home prime path for.

Old 2012-09-14, 12:31   #95
em99010pepe

I ran a few curves on HP49(117).c210.

For B1=345000000: 14 curves.
For B1=850000000: 132 curves.
Attached Files: zip (19.4 KB)
Old 2012-10-20, 03:54   #96
Jwb52z

Why hasn't a new thread been started since the title of this thread concerns a number factored 2 years ago last February?
Old 2012-10-20, 19:23   #97
kar_bon

Because the more general thread for HP is on the second page of this thread: see here.

There's also a thread for more HP on other bases here.
Old 2014-12-03, 23:25   #98
WraithX

Good news everyone! I have factored HP49(117).c210 = p95*p116 via GNFS.

From 2012/08/31 to 2013/04/01, I was able to complete 6*t60 (= 1.2*t65) with GMP-ECM.
From 2013/03/16 to 2013/07/15, I used msieve to do GPU polynomial selection.
From 2013/07/02 to 2013/07/17, I did some test sieving to find the best poly.
The polynomial/parameters I used are given below.
From 2013/08/01 to 2014/05/19, I collected relations from the ggnfs siever.
I wrote a simple web interface to hand out work assignments.
I wrote a simple python script to get work, run the gnfs-lasieve4I16e siever,
gzip the relations when that thread would finish processing its batch of work,
and then send that gzip file to my ftp server.
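
The worker loop described above can be sketched roughly as follows. Everything concrete in this sketch (the work-server URL, FTP host, job-file name, and siever flags) is a hypothetical stand-in, since the actual scripts weren't posted:

```python
# Hypothetical sketch of the sieving worker described above; the server URL,
# FTP host, job-file name, and siever flags are assumptions for illustration.
import gzip
import shutil
import subprocess
import urllib.request
from ftplib import FTP

WORK_SERVER = "http://example.org/hp49/getwork"  # hypothetical work server
FTP_HOST = "ftp.example.org"                     # hypothetical upload target

def parse_assignment(text):
    """Parse a 'qstart-qend' special-q range handed out by the server."""
    lo, hi = text.strip().split("-")
    return int(lo), int(hi)

def run_one_assignment():
    # 1. fetch a special-q range from the work server
    with urllib.request.urlopen(WORK_SERVER) as resp:
        q0, q1 = parse_assignment(resp.read().decode())
    relfile = f"rels.{q0}-{q1}"
    # 2. sieve that range (flags are illustrative, not the real command line)
    subprocess.run(["gnfs-lasieve4I16e", "-a", "hp49_117.job",
                    "-f", str(q0), "-c", str(q1 - q0), "-o", relfile],
                   check=True)
    # 3. gzip the relations and upload them
    with open(relfile, "rb") as src, gzip.open(relfile + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    with FTP(FTP_HOST) as ftp:
        ftp.login()
        with open(relfile + ".gz", "rb") as fh:
            ftp.storbinary(f"STOR {relfile}.gz", fh)
```

Each instance would run one such worker per thread, calling run_one_assignment() in a loop.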
I used the Amazon Elastic Cloud Compute service to help with sieving. I purchased
spot instances to help keep costs down. On several occasions the price of a spot
instance would exceed my high bid, and so my instances were shut down. Then later,
when the prices fell to their low point again, I would start up several new instances.
From 2013/08 to 2014/02, when prices were low, I had 5 instances running.
From 2014/03 to 2014/05, when prices were low, I had 10 instances running.
Each instance was a cc2.8xlarge instance, which was a dual Xeon E5-2670 (total of
16 cores, 32 threads) with 60.5GB ram. On each instance I ran the Amazon Linux
AMI x86_64 HVM EBS OS and had 32 threads collecting relations. In total, I ran
29882 instance hours x 32 threads = 956224 thread hours. I also ran sieving on my
personal computers, but those probably only contributed about 10% of the total
relations. I sieved over the range from 10^6 to 2^31 and collected a total of
859026466 relations. On 2014/05/19 I ran remdups4 and found that I had
520615500 unique, 338410717 duplicate, and 249 bad relations.
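
Those relation counts can be cross-checked directly (the figures below are copied from the remdups4 output above):

```python
# Relation counts reported by remdups4, quoted from above
total  = 859026466
unique = 520615500
dupes  = 338410717
bad    = 249

assert unique + dupes + bad == total
print(f"duplicate rate: {dupes / total:.1%}")  # duplicate rate: 39.4%
```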
From 2014/05/20 to 2014/05/29, I used msieve to test different target densities.
One interesting thing here: I ended up using target density 125. I ran that 3
different times, and the estimated run time was different each time. I ran this
on 13 cores of my dual Xeon E5-2687W (v1). I think the reason there was a big difference
in time estimates was probably due to where the dataset was loaded into memory, in
relation to where most of the threads were running on the processors. (Hyperthreading
was off, so that wasn't an issue) I let the 3rd instance with target density 125 run
since it had the lowest overall runtime estimate (3054h). The size of this matrix was:
Thu May 29 09:09:16 2014  matrix is 71776202 x 71776379 (35323.1 MB) with weight 10890899799 (151.73/col)
Thu May 29 09:09:16 2014  sparse part has weight 8398415017 (117.01/col)
Thu May 29 09:09:16 2014  saving the first 48 matrix rows for later
Thu May 29 09:09:44 2014  matrix includes 128 packed rows
Thu May 29 09:10:09 2014  matrix is 71776154 x 71776379 (33924.4 MB) with weight 9386761820 (130.78/col)
Thu May 29 09:10:09 2014  sparse part has weight 8031767895 (111.90/col)
Thu May 29 09:10:09 2014  using block size 8192 and superblock size 1966080 for processor cache size 20480 kB
Thu May 29 09:24:16 2014  commencing Lanczos iteration (13 threads)
Thu May 29 09:24:16 2014  memory use: 30587.0 MB
Thu May 29 09:24:36 2014  restarting at iteration 352 (dim = 22260)
Thu May 29 09:28:28 2014  linear algebra at 0.0%, ETA 3054h11m
From 2014/05/29 to 2014/10/03, I used msieve to run Linear Algebra on that 71.77M^2 matrix.
Fri Oct 03 13:39:34 2014  lanczos halted after 1135057 iterations (dim = 71776151)
Fri Oct 03 13:42:24 2014  recovered 26 nontrivial dependencies
Fri Oct 03 13:51:48 2014  BLanczosTime: 10989982
Fri Oct 03 13:51:48 2014  elapsed time 3052:46:23
From 2014/10/03 to 2014/10/04, I used msieve to run the square root step on the dependencies.
Each one took about 10.5 hours, and the factorization was found on the 2nd dependency.
Sat Oct 04 11:54:37 2014  sqrtTime: 75794
Sat Oct 04 11:54:37 2014  prp95 factor: 57999185025539581493090229022659057407046561926157771218233736154789469010389546134055763746997
Sat Oct 04 11:54:37 2014  prp116 factor: 16537654195779115452085989138575434836851289636159339542929128160183686876788663786928798769682882741367314660621029
Sat Oct 04 11:54:37 2014  elapsed time 21:03:15
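
Finding the factors on the 2nd of 26 dependencies is about what the standard heuristic predicts: each nontrivial dependency splits the number with probability roughly 1/2, independently, so:

```python
# Chance that at least one of the first k dependencies yields the
# factorization, under the usual ~1/2-per-dependency heuristic
def p_split_by(k):
    return 1 - 0.5 ** k

print(p_split_by(2))   # 0.75
print(p_split_by(26))  # all 26 failing is essentially impossible
```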
Here are the polynomial and parameters I used for ggnfs:
type: gnfs
# norm 1.276810e-020 alpha -9.238464 e 9.932e-016 rroots 5
skew: 380780879.24
c0: -162062780465967901494709111193197313231897108452480
c1: 13812347733683567053764230784091491451134952
c2: 22102839514830809629173664060748362
c3: -520746958301071507022048503
c4: -168875771147350788
c5: 535643820
Y0: -17807540461984351122128386845862353501171
Y1: 108424014196861749931
rlim: 500000000
alim: 500000000
lpbr: 32
lpba: 33
mfbr: 64
mfba: 96
rlambda: 2.7
alambda: 3.7
So now, after 765 days (or 2 years, 1 month, and 5 days) of work, with
568 days (or 1 year, 6 months, 19 days) of that spent on GNFS on the C210,
we find that HP49(117) has the full factorization:
3 * 23 * 99525233 * 12143755081 * 2844434001269627828783 * p95 * p116

Since I'm not sure what information from this job people might be
interested in, I'm posting a zip file that contains several msieve log files,
a txt file that contains a breakdown of relation counts by range of 10M q,
a remdups file that shows the duplicate removal process, and a job_details
txt file that gathers a lot of the above information into one place. If
anyone has any additional questions about this process, please let me know.

Also, HP49 is now at step 119 and blocked by a C251. I've run about 2*t60
on it so far with no luck. I definitely do not plan on running this through
GNFS! ;) I'm hoping that this has a factor within range of ecm and that
I, or anyone really!, will find a lucky curve.
Attached Files: zip (62.9 KB)


Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.