[quote=Andi47;161874]WOOOHOOOOOO!! :w00t::bow wave::showoff: :banana:
This might be the biggest number factored with QS ever!![/quote] Not quite: [URL]http://homes.cerias.purdue.edu/~ssw/cun/champ[/URL] But needless to say, I'm quite happy about getting this far!! Hopefully someday I'll get around to getting triple large primes implemented, and then I can go a bit farther.
[QUOTE=bsquared;161872]Here are the factors for the C130 cofactor of 11^173-10^173
<snip> This otherwise unremarkable result is perhaps more significant because I did it with QS. It took about 5 1/2 days on 50 CPUs to gather ~1.4 million relations. [/QUOTE] VERY NICE!!!:smile: But 50 machines??? I am only using 1. And it is 9 years old! This project isn't important enough to be using 50 machines. It would be better to use such resources on other projects. I could do a lot with 50 machines.......
[quote=R.D. Silverman;161878]VERY NICE!!!:smile:
[/quote] Thanks! :smile: That means a lot to me coming from a pioneer of fast QS implementations. [quote=R.D. Silverman;161878] But 50 machines??? I am only using 1. And it is 9 years old! This project isn't important enough to be using 50 machines. It would be better to use such resources on other projects. I could do a lot with 50 machines.......[/quote] Very much agreed that there are better places to apply the horsepower. But this was a test of the software, and I needed a somewhat-non-trivial-yet-not-completely-impossible test subject; this one seemed to qualify. No Cunningham number is accessible to QS right now, nor likely ever will be again. I also could have just let one or two CPUs go for 50 to 25 times as long, but was unwilling to wait that long to gather the test data. I was also less concerned about the waste because these resources sit idle much of the time anyway; I don't have the human time to devote to keeping them busy. Regrettable, but unavoidable for me right now :(
130-digit MPQS runs
Okay, 130 digits is not usual these days, or on any day, for MPQS.
I am exploring other factorization techniques. Blind luck, in particular: a p53 fell out of 3^463-2^463 after 770 curves at 43e6: 45529717192860405382141829116728537388219911192760133

This factorization stuff is no trouble at all! Takes just a few hours for these SNFS 221 numbers to crack -- I don't know what the fuss is about. Maybe I will try breeding pandas next -- how hard can it be?

Nobody come near me. I'm on fire right now. For the rest of the evening I will be busy letting this go to my head.
[quote=bsquared;161876]Not quite:
[URL]http://homes.cerias.purdue.edu/~ssw/cun/champ[/URL] But needless to say, I'm quite happy about getting this far!! Hopefully someday I'll get around to getting triple large primes implemented, and then I can go a bit farther.[/quote] Once you've done TLP, have a go at RSA-140 with QS!
[quote=10metreh;161955]Once you've done TLP, have a go at RSA-140 with QS![/quote]
It would be very nice to see the QS record broken. Congrats on the huge QS, bsquared!!! When would QLP be of use?
[QUOTE=henryzz;161968]
when would QLP be of use?[/QUOTE] Likely never; with three large primes, only about 2% of the relations that survive the sieving step have three correct-size large primes. With four large primes, you have exponentially more relations to test, exponentially fewer that will survive, and almost-exponentially longer to test each candidate. NFS gets to cheat because rational and algebraic sieve reports can each have two large primes, so you get most of the benefit of four large primes without most of the pain needed to find them.
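Partial relations only pay off once their large primes match up, and the extra full relations available from a batch of partials are usually tracked by counting cycles in a graph whose vertices are the large primes (single-large-prime partials get an edge to a dummy vertex 1). Here is a minimal union-find sketch of that bookkeeping -- the tuple representation of relations is my own assumption for illustration, not anything from bsquared's code:

```python
# Union-find sketch of large-prime cycle counting: each partial relation
# contributes one edge, and every edge that closes a cycle yields one
# extra full relation after combination.
class DisjointSet:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already connected: this edge closes a cycle
        self.parent[ra] = rb
        return True

def count_cycles(relations):
    """relations: tuples of large primes -- (p,) for a single-large-prime
    partial, (p, q) for a double partial.  Singles become edges to the
    dummy vertex 1."""
    ds = DisjointSet()
    cycles = 0
    for rel in relations:
        a, b = (1, rel[0]) if len(rel) == 1 else rel
        if not ds.union(a, b):
            cycles += 1
    return cycles
```

For example, two singles sharing the prime 101 combine into one full relation, as does a double partial whose two primes are each matched by a single.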
[QUOTE=wblipp;159265]That seems reasonable. I think the selection probability within each strategy is more important than the strategy choice when trying to avoid the overwork of any one number. I usually leave the time-adjustments near zero, but I like to make the base probability 10-30%. That spreads the choices among the top numbers in a geometric distribution. This especially helps at the transitions where it is possible for several people to get a P-1 assignment before any of them are returned.
William[/QUOTE]I'm configuring a new server. Could you post or PM or mail me with the parameters you are using please? It will be interesting to see how much better yours performs than mine. Paul
[QUOTE=xilman;162003]I'm configuring a new server. Could you post or PM or mail me with the parameters you are using please? It will be interesting to see how much better yours performs than mine.[/QUOTE]
It depends some on the goals. If the smallest numbers are going to be siphoned off for QS/NFS, I emphasize the length strategy. If the goal is to find at least one factor for each number or to bring all numbers to the same level, I emphasize the B1 strategy. If the goal is endless mindless ECM, I emphasize the difficulty strategy. Here are the parameters I'm presently using in an endless mindless ECM server:
[code]
// These five lines control how candidates are picked for factoring work.
// They can be set up to cause individual candidates to get the most work, or
// they can be set up to evenly distribute how factoring work is sent to
// clients. These are all in ascending sequence.
// The 5 parameters for each are defined as follows:
//   The percent of candidates to be selected using this strategy
//   The percent chance that each candidate in the list is selected for factoring
//   A breakpoint (in hours) used to increase or decrease the percent chance
//   that a candidate is chosen
//   A percentage to add for each hour less than the breakpoint for candidates
//   that have had no work sent out or completed
//   A percentage to subtract for each hour over the breakpoint for candidates
//   that have had no work sent out or completed
// strategy0 - random
// strategy1 - by B1
// strategy2 - by total work done
// strategy3 - by difficulty
// strategy4 - by length
strategy0=00:100:10:0:0
strategy1=10:10:10:2:0
strategy2=05:30:10:0:0
strategy3=83:10:10:3:2
strategy4=02:20:10:1:1
[/code]
In a "siphon off the smallest ones" server I am presently using:
[code]
// strategy0 - random
// strategy1 - by B1
// strategy2 - by total work done
// strategy3 - by difficulty
// strategy4 - by length
strategy0=00:100:10:0:0
strategy1=10:70:10:2:0
strategy2=00:30:10:0:0
strategy3=10:40:10:3:2
strategy4=80:20:10:1:1
[/code]
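For anyone puzzling over the field layout, here is a short sketch of how those five colon-separated fields might be parsed and applied. The clamping to [0, 100] and the exact breakpoint arithmetic are my guesses from the comments above, not the server's actual code:

```python
def parse_strategy(line):
    """Split a line like 'strategy3=83:10:10:3:2' into its five fields:
    share of picks (%), base per-candidate chance (%), breakpoint (hours),
    %/hour bonus under the breakpoint, %/hour penalty over it."""
    name, _, fields = line.partition('=')
    share, chance, breakpoint, bonus, penalty = map(int, fields.split(':'))
    return {'name': name, 'share': share, 'chance': chance,
            'breakpoint': breakpoint, 'bonus': bonus, 'penalty': penalty}

def selection_chance(strat, idle_hours):
    """Per-candidate selection chance (percent), adjusted by how long the
    candidate has sat with no work sent out or completed; clamped to
    [0, 100] as a guessed sanity bound."""
    if idle_hours < strat['breakpoint']:
        adj = strat['chance'] + strat['bonus'] * (strat['breakpoint'] - idle_hours)
    else:
        adj = strat['chance'] - strat['penalty'] * (idle_hours - strat['breakpoint'])
    return max(0, min(100, adj))
```

With `strategy3=83:10:10:3:2`, a candidate idle for 5 hours would be offered at 10 + 3*5 = 25%, while one idle for 20 hours would fall to 10 - 2*10, clamped to 0%.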
Thanks. My strategy is much more simple-minded:
[code]
// strategy0 - random
// strategy1 - by lowest B1 that needs curves done and total work done
// strategy2 - by total work done
// strategy3 - by difficulty
// strategy4 - by length and total work done
strategy0=20:20:0:0:0
strategy1=80:20:0:0:0
strategy2=00:20:0:0:0
strategy3=00:20:0:0:0
strategy4=00:20:0:0:0
[/code]
Whether I change it or not depends, at least in part, on future comments in this thread. Paul
IIRC, you often set up servers with the task of bringing a batch of numbers to a particular level. Rinse and repeat. The heavy emphasis on strategy 1 makes sense for that.
Do you get too many P-1 and P+1 curves at the transitions? The 20% selection probability might be enough to solve that, depending on the rate at which the servers are getting hit. If you still get 2 or 3 P-1 curves at every level, I'd fiddle with the other parameters to greatly reduce the rate for the first hour or two after a number is sent. William
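The way a flat per-candidate probability spreads picks geometrically down the ranked list, as William described earlier, can be sketched as follows. The walk-the-list selection loop is my assumption about how a per-candidate chance would be implemented, not the server's actual mechanism:

```python
import random

def pick_candidate(ranked, p=0.2, rng=random):
    """Walk a ranked candidate list, selecting each entry with probability p;
    fall back to the last entry if nothing is chosen.  Rank k (1-based) is
    then picked with probability (1-p)**(k-1) * p, a geometric distribution."""
    for cand in ranked[:-1]:
        if rng.random() < p:
            return cand
    return ranked[-1]

def rank_probability(k, p=0.2):
    """Probability that the candidate at rank k is the one chosen."""
    return (1 - p) ** (k - 1) * p
```

At p = 0.2, the top-ranked number gets 20% of the picks, the second 16%, the third 12.8%, and so on, so no single number monopolizes the assignments even right after a transition.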