mersenneforum.org > Factoring Projects > Msieve
Old 2010-12-24, 22:14   #23
tal
Here are my results on a 512-bit semiprime (an RSA key) using -np X,Y segmenting. My analysis queries are still a work in progress, and this is a trimmed version of the output (some fields would not make sense out of context), but it does give some insight.

Code:
sendtime		startbucket		concurrent		realtime	polys		start	end	spn 	host fpops		calced fpops
12/21/10 19:45:51	12/21/10 19:30:00	1			189m45s		0		501	751	250	1307792578.59553	14889218507310.1
12/19/10 20:06:59	12/19/10 20:00:00	16			306m0s		1		1001	1251	250	2287235009.41987	41993634772948.8
12/19/10 20:06:59	12/19/10 20:00:00	16			347m22s		0		1251	1501	250	2287235009.41987	47670552066328.9
12/19/10 20:06:59	12/19/10 20:00:00	16			328m0s		1		1501	1751	250	2287235009.41987	45012784985383
12/19/10 20:07:24	12/19/10 20:00:00	16			314m47s		0		5251	5501	250	2287235009.41987	43199007622913
12/19/10 20:07:12	12/19/10 20:00:00	16			396m15s		1		5501	5751	250	2287235009.41987	54379012348957.4
12/19/10 20:07:12	12/19/10 20:00:00	16			451m16s		0		5751	6001	250	2287235009.41987	61929175115052.3
12/19/10 20:07:12	12/19/10 20:00:00	16			382m32s		1		6001	6251	250	2287235009.41987	52496617936204.8
12/19/10 20:07:12	12/19/10 20:00:00	16			437m40s		0		6251	6501	250	2287235009.41987	60062791347365.7
12/19/10 20:07:12	12/19/10 20:00:00	16			373m10s		0		6501	6751	250	2287235009.41987	51211191860910.8
12/19/10 20:07:57	12/19/10 20:00:00	2			686m32s		0		6751	7001	250	1402067608.44444	57753968927043.5
12/20/10 11:18:30	12/20/10 11:00:00	1			698m1s		1		7001	7251	250	1152603758.65945	48272198016416.5
12/20/10 15:58:59	12/20/10 15:30:00	1			857m39s		3		7751	8001	250	1402067608.44444	72148997062942.6
12/19/10 20:07:37	12/19/10 20:00:00	16			246m6s		0		8501	8751	250	2287235009.41987	33773312149093.8
12/21/10 06:16:38	12/21/10 06:00:00	1			349m52s		0		9001	9251	250	1402067608.44444	29432203236465.7
12/20/10 06:33:15	12/20/10 06:30:00	1			565m44s		0		11001	11251	250	1402067608.44444	47591782901038.2
12/19/10 20:08:09	12/19/10 20:00:00	2			625m6s		0		11251	11501	250	1402067608.44444	52585947722317.3
12/20/10 01:34:59	12/20/10 01:30:00	1			117m30s		0		13501	13751	250	2287235009.41987	16125006816410.1
12/21/10 02:06:10	12/21/10 02:00:00	4			288m58s		0		14251	14501	250	1307792578.59553	22674507727689.3
12/19/10 20:10:39	12/19/10 20:00:00	2			741m43s		0		15251	15501	250	1152603758.65945	51294325071621.6
12/19/10 20:10:39	12/19/10 20:00:00	2			907m51s		0		15501	15751	250	1152603758.65945	62783479337939.1
12/21/10 15:46:41	12/21/10 15:30:00	1			317m42s		0		16501	16751	250	1307792578.59553	24929142133188
12/20/10 00:40:12	12/20/10 00:30:00	2			302m58s		1		18751	19001	250	2287235009.41987	41577358001234.4
12/21/10 16:55:36	12/21/10 16:30:00	1			370m25s		3		19751	20001	250	1152603758.65945	25616618536206.3
12/21/10 16:23:45	12/21/10 16:00:00	1			344m58s		0		22501	22751	250	1307792578.59553	27068690791770.3
12/20/10 17:20:47	12/20/10 17:00:00	1			726m21s		0		23001	23251	250	1402067608.44444	61103508443617.3
12/19/10 20:06:59	12/19/10 20:00:00	16			376m56s		0		25001	25251	250	2287235009.41987	51728106973039.7
12/19/10 20:06:59	12/19/10 20:00:00	16			360m14s		0		25251	25501	250	2287235009.41987	49436297493601
12/21/10 05:27:08	12/21/10 05:00:00	1			480m22s		0		26251	26501	250	1402067608.44444	40410392610585.7
12/19/10 20:07:24	12/19/10 20:00:00	16			272m48s		0		27251	27501	250	2287235009.41987	37437462634184.4
12/19/10 20:07:24	12/19/10 20:00:00	16			404m12s		1		27501	27751	250	2287235009.41987	55470023448450.6
12/19/10 20:07:24	12/19/10 20:00:00	16			289m4s		0		27751	28001	250	2287235009.41987	39669804003378.2
12/19/10 20:07:24	12/19/10 20:00:00	16			358m30s		0		28001	28251	250	2287235009.41987	49198425052621.4
12/21/10 22:08:43	12/21/10 22:00:00	1			46m53s		0		29251	29501	250	1307792578.59553	3678820523589.23
12/21/10 06:55:08	12/21/10 06:30:00	2			239m17s		1		30001	30251	250	1307792578.59553	18775978050896
12/20/10 02:29:44	12/20/10 02:00:00	1			273m7s		16		31001	31251	250	2287235009.41987	37480920099363.4
12/21/10 09:43:17	12/21/10 09:30:00	1			602m34s		0		31251	31501	250	1307792578.59553	47281932886542.8
12/21/10 18:01:20	12/21/10 18:00:00	1			304m28s		0		33001	33251	250	1402067608.44444	25612971071063.1
12/20/10 00:56:28	12/20/10 00:30:00	2			366m23s		2		34251	34501	250	2287235009.41987	50280287212077
12/20/10 03:24:52	12/20/10 03:00:00	1			217m59s		1		34751	35001	250	2287235009.41987	29914746688202.5
12/20/10 07:34:29	12/20/10 07:30:00	1			586m18s		0		35251	35501	250	1402067608.44444	49321934329858.6
12/21/10 06:33:42	12/21/10 06:30:00	2			552m59s		0		36251	36501	250	1307792578.59553	43391249965221.1
12/20/10 08:32:22	12/20/10 08:30:00	1			499m46s		1		37001	37251	250	1152603758.65945	34561976307162.4
12/20/10 22:56:31	12/20/10 22:30:00	1			1347m29s	1		39001	39251	250	1152603758.65945	93186861283858.1
12/21/10 02:06:22	12/21/10 02:00:00	4			857m23s		0		41501	41751	250	1307792578.59553	67276773620689.9
12/21/10 02:06:22	12/21/10 02:00:00	4			456m55s		0		41751	42001	250	1307792578.59553	35853133542196.5
12/21/10 02:06:22	12/21/10 02:00:00	4			267m19s		0		42001	42251	250	1307792578.59553	20975685168093.7
12/21/10 13:27:30	12/21/10 13:00:00	1			578m18s		2		42751	43001	250	1402067608.44444	48648941877805.3
12/21/10 21:04:23	12/21/10 21:00:00	1			111m13s		6		43251	43501	250	1307792578.59553	8726899876967.98
12/20/10 16:52:08	12/20/10 16:30:00	1			1443m28s	8		44751	45001	250	1152603758.65945	99824706329977.9
12/21/10 10:54:26	12/21/10 10:30:00	1			721m10s		0		46751	47001	250	1307792578.59553	56588184875828.6
12/21/10 12:06:30	12/21/10 12:00:00	1			354m50s		3		47751	48001	250	1402067608.44444	29850019383782.2
12/21/10 21:24:01	12/21/10 21:00:00	1			102m0s		0		48001	48251	250	1152603758.65945	7053935002995.85
startbucket = FLOOR(r.sent_time/1800) * 1800
concurrent = number of jobs per host starting in that startbucket
polys = number of polys that matched the criteria and would therefore be output to .dat.p
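The bucketing used for the startbucket and concurrent columns can be sketched in Python (the helper names are mine, not part of the analysis queries; the 1800-second width matches the FLOOR(sent_time/1800)*1800 expression above):

```python
# Group job start times into 30-minute "start buckets", mirroring
# FLOOR(sent_time / 1800) * 1800 from the analysis query.
from collections import Counter

def start_bucket(sent_time: int, width: int = 1800) -> int:
    """Round a Unix timestamp down to the start of its half-hour bucket."""
    return (sent_time // width) * width

def concurrency(sent_times):
    """Number of jobs starting in each bucket (the 'concurrent' column)."""
    return Counter(start_bucket(t) for t in sent_times)

# Example: three jobs start inside one half-hour window, one just outside it.
counts = concurrency([1000, 1500, 1799, 1800])
```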

I can pull up the output for any individual item, but I edited msieve not to output all the debugging information, since the job distribution system would otherwise be overwhelmed with output. The output does still contain the random seeds, which ought to let any individual problem be reproduced (I think). I was not running the version merged to trunk, but the branch one or two revisions earlier. I'm on a sieving step right now, but for the next poly I plan to upgrade to trunk.

Last fiddled with by tal on 2010-12-24 at 22:14
Old 2011-01-14, 19:32   #24
fivemack
Tiny calibration problem

I'm running on 94689285701167475723997539761252176360612091760157996390039119173394183793423467218004659819255804122621254823776089987171612793165075463128641351439942459 and have now run 48 hours on 4 CPUs (effectively -np 1,24000, though I'm running in 1k slices) without getting a single polynomial.

Is there anything I can do with the 'poly' lines in nohup.out to try to search with effectively lower parameters, or should I stop, recompile with the bounds reduced by 30% and start again?
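The "1k slices" scheduling fivemack describes could be generated like this (np_slices is a hypothetical helper, and the inclusive-endpoint convention is an assumption; msieve itself just takes the X,Y range on the command line):

```python
def np_slices(lo: int, hi: int, width: int = 1000):
    """Split a stage-1 leading-coefficient range into fixed-width slices,
    each suitable for a separate 'msieve -np X,Y' invocation."""
    slices = []
    start = lo
    while start <= hi:
        end = min(start + width - 1, hi)  # inclusive endpoints assumed
        slices.append((start, end))
        start = end + 1
    return slices

# Covering -np 1,24000 in 1k slices:
slices = np_slices(1, 24000)
```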
Old 2011-01-17, 14:46   #25
jasonp

If you just want to run stage 2 with more lenient bounds, you can save the 'poly' lines to msieve.dat.m (keep just the numbers) and rerun with -np2 after recompiling the code. To change the bounds, just update the tables at the top of gnfs/poly/poly_skew.c.

In other news: I've just merged a long-needed update to stage 2 that extends the degree-6 root sieve infrastructure to degrees 4 and 5. The updated code finds similar (though not identical) polynomials to the original, but does so in seconds instead of hours for large problems.
Old 2011-01-17, 14:53   #26
fivemack

Superb, thanks!

Would you mind updating the last two lines in the stolen-from-GGNFS part of the table in prebuilt_params_deg5 in poly_skew.c to

Code:
	{154, 8.00E+023, 1.00E+022, 2.40E-012},
	{155, 1.00E+024, 1.50E+022, 2.00E-012},
without which a 240 CPU-hour run on N ~ 10^154.96 found not a single polynomial?

I'll run some local tests and see if I can get you better 150 and 155 lines; I suspect that the version of GGNFS from which they were stolen had never run a 512-bit problem.
Old 2011-01-17, 15:06   #27
jasonp

Sure.

Note that at Paul Zimmermann's request I ran an older version of the GPU code on RSA-512 for two weeks straight (using a 9800GT), and it did find 11 polynomials with the bounds as given. jrk has also noticed that something is fishy with the bounds on 155-digit inputs; they are very stringent compared to, e.g., the 140-digit bounds.
Old 2011-01-17, 18:39   #28
fivemack

I appreciate that this is pedantic and probably difficult, but is there any way that msieve could be made to notice that it has output the same polynomial several times? On a test case (SVN, built ten minutes ago, running on GPU) I'm getting every polynomial reported twice, and some three times.

(ah, this only seems to happen for very small c5; so not a problem)

Last fiddled with by fivemack on 2011-01-17 at 20:07
Old 2011-01-17, 21:01   #29
jasonp

Yes, one of the big changes here is that the root sieve is run several times in a row, with the search space allowed to be progressively larger. Larger search spaces are processed more approximately, and any good polynomials found are thrown into a priority queue that is keyed by a combined size and root score. This allows polynomials with good size and mediocre root score to compete fairly with polynomials whose root score is better but size score is worse.

An unfortunate side effect is that if the search space does not grow too much from one step to the next, it has the effect of rerunning the root sieve at the same level of fidelity and finding the same hits over again, along with possibly some new ones. For degree 4 the duplicates go away if you only run the root sieve once, but then once in a while you miss a good polynomial. Maybe the right way to handle this is to manually remove duplicates from the priority queue.
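The bounded priority queue with manual duplicate removal that jasonp describes can be sketched in a few lines (a toy illustration only: the class, the coefficient tuples, and the single combined score are stand-ins, not msieve's actual data structures):

```python
import heapq

class PolyQueue:
    """Keep the k best polynomials by a combined size/root score,
    discarding duplicate hits from repeated root-sieve passes."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap = []     # min-heap: worst kept polynomial on top
        self.seen = set()  # coefficient tuples already queued

    def push(self, coeffs: tuple, combined_score: float):
        if coeffs in self.seen:  # same hit found again: drop it
            return
        self.seen.add(coeffs)
        heapq.heappush(self.heap, (combined_score, coeffs))
        if len(self.heap) > self.capacity:
            heapq.heappop(self.heap)  # evict the worst entry

    def best(self):
        return max(self.heap)  # entry with the highest combined score

q = PolyQueue(2)
q.push((1, 2, 3), 0.5)
q.push((1, 2, 3), 0.5)  # duplicate from a rerun of the root sieve
q.push((4, 5, 6), 0.9)
q.push((7, 8, 9), 0.1)  # evicted: worse than the two kept polys
```

A single combined score lets a good-size/mediocre-root polynomial compete fairly with a good-root/worse-size one, which is the point of the change.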
Old 2011-01-27, 19:29   #30
chris2be8

I've tested the new version against 1.44, 1.41 and pol5m0b/pol5opt with the rsa100 test case shipped with ggnfs.

Code:
Ver   Time      Yield    Elapsed
1.48  00:29:18  1125098  02:07:48
1.44  01:02:54  1342408  02:12:22
1.41  01:04:31  1165126  02:04:42
pol5  00:42:55  1129275  02:06:33

Time is the elapsed time to select the polynomial.
Yield is the number of relations from sieving q=1600000 to 1700000.
Elapsed is the time to sieve that range using the 64-bit assembly version of gnfs-lasieve4I13e on my Pentium 4.

So the polynomials seem similar, but 1.48 took half as long to find one. Nice.

Should I expect bigger gains for large cases?

Chris K
Old 2011-01-29, 16:46   #31
jasonp

Thanks for the comparison; I'm actually not clear on whether the improved poly selection method will make a big difference for anything other than huge problems. Kleinjung has shown that for the largest problems one can find noticeably better polynomials with the new algorithm, if you work quite hard at it.
Old 2011-02-02, 18:20   #32
chris2be8
Parallel polynomial selection

Testing the same number in two stages:
np1 took 00:14:50
np2 took 00:12:56

About the same in total as doing it in one go (which took 00:29:18).

Then I did both stages in parallel:
Code:
mkfifo msieve.dat.m
~/msieve-1.48/msieve -np1 -v -t 2 -l msieve.np1.log -p $N >msieve.np1.sysout &
~/msieve-1.48/msieve -np2 -v -t 2 -l ggnfs.log -p $N
Both stages completed in 00:21:09 on a single core with hyperthreading. I hope for even better results on a dual-core CPU.
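The named-pipe trick above can be demonstrated in a self-contained way; in this sketch a writer thread and the reader stand in for -np1 and -np2, and the filename is only an illustration (os.mkfifo is POSIX-only):

```python
import os
import tempfile
import threading

# A named pipe lets one process stream output straight into another,
# the way chris2be8 feeds -np1's msieve.dat.m into -np2 above.
fifo = os.path.join(tempfile.mkdtemp(), "msieve.dat.m")
os.mkfifo(fifo)

def stage1():
    # Stand-in for 'msieve -np1': writes hits as they are found.
    # open() blocks until the reader end is opened.
    with open(fifo, "w") as f:
        for line in ("poly 1", "poly 2", "poly 3"):
            f.write(line + "\n")

writer = threading.Thread(target=stage1)
writer.start()

# Stand-in for 'msieve -np2': consumes hits concurrently until EOF.
with open(fifo) as f:
    received = [line.strip() for line in f]

writer.join()
os.unlink(fifo)
```

Because neither side touches the disk, the consumer starts working as soon as the producer emits its first hit, which is where the overlap in chris2be8's timing comes from.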

Some of the speedup probably comes from sending stage 1's output to a file: it writes enough to the screen that displaying it takes a significant amount of CPU.

And sieving from 1600000 to 1700000 took 02:09:05 to generate 1125098 relations.

Chris K

Last fiddled with by chris2be8 on 2011-02-02 at 18:24 Reason: Added sieving stats.
Old 2011-02-04, 20:38   #33
debrouxl

Some stats about polynomial selection for the remaining C164 of near-repdigit http://hpcgi2.nifty.com/m_kamada/f/c.cgi?q=77771_297 , which has been running on one of my cores for nearly 98 hours:
* 22558 polys found so far, 18623 unique;
* the Murphy E values range from ~5.1e-13 to ~8.5e-13;
* 1601 of the 18623 unique polys (~8.6%) have e >= 6e-13; 13 polys have e >= 7.5e-13, and 2 have e >= 8e-13.
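The quoted fractions check out; a quick recomputation from the figures above (all numbers are taken directly from the post):

```python
total = 22558    # polys found so far
unique = 18623   # unique polys among them
above_6e13 = 1601

# Share of unique polys with Murphy E >= 6e-13.
pct = round(100 * above_6e13 / unique, 1)            # ~8.6%

# Duplicate rate implied by the total vs. unique counts.
dup_rate = round(100 * (1 - unique / total), 1)      # ~17.4%
```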

Next time I perform polynomial selection on C160+ integers, I'll try chris2be8's trick, or split ranges across multiple instances of msieve.
Similar Threads
Code:
Thread                                  Thread Starter  Forum      Replies  Last Post
Polynomial selection                    Max0526         NFS@Home   9        2017-05-20 08:57
SNFS Polynomial selection help?         mhill12         Factoring  59       2013-09-09 22:40
Best way to scale polynomial selection  pastcow         Msieve     6        2013-05-08 09:01
2^877-1 polynomial selection            fivemack        Factoring  47       2009-06-16 00:24
Polynomial selection                    CRGreathouse    Factoring  2        2009-05-25 07:55
