Now what (VI)
OK, it is possible to factor a 197-digit general number using the current tools and fifty CPU-years of sieving time on Bruce's cluster. Good.
My inclination is to do 2^1009-1 next. It's 204 digits and SNFS difficulty 304, so it may be that after a CPU-year of polynomial selection it still turns out to be easier by SNFS; but a CPU-year of polynomial selection is not a disastrous amount of time to waste. I'm slightly saddened by how little external interest there was in the 197-digit GNFS sieving - maybe I over-egged the description of how difficult the sieving would be to the point that it put people off, and likely the other people with significant resources (I'm thinking of Serge and Ben) have more interesting things to do with them. |
I'm willing to help, but I only have 4 cores and I am on Windows. I know Linux is faster at sieving...
For polynomial selection a GPU is faster, right? So probably most of us are on Windows without a good GPU card. |
[QUOTE=fivemack;281834]I'm slightly saddened how little external interest I had in the 197-digit GNFS sieving - maybe I over-egged the description of how difficult the sieving would be to the point that it put people off, and likely the other people with significant resources (I'm thinking of Serge and Ben) have more interesting things to do with them.[/QUOTE]
If your phrase "external interest" means "interest of people here to join factoring", this could have some of the following reasons:
* There is a nasty not-yet-resolved crash bug (possibly somehow related to [URL="http://www.mersenneforum.org/showthread.php?t=14812"]this one[/URL]?) in the Windows binaries of GGNFS, which happens especially with the 15e (and maybe 16e) sievers (it also happens occasionally with the 11e and 12e sievers, but *very* seldom with 13e and 14e). So people with a sufficiently fast PC (e.g. an Intel i5 or i7) who run Windows will keep finding one or more threads sitting idle with error message boxes on the screen when they come back from work or get up in the morning - at least this happened to me.
* The aliquot project seems to have grown in the meantime, tying up people's resources with aliquot sequences. And I don't mean just IGG-type stuff, but also bigger factorizations: the c155 to c16x team factorizations which keep popping up there are far less prone to trigger the GGNFS crash bug on Windows systems.
* (Probably just a very minor reason: unlike your previous threads, the thread title ("G197") was a bit... erm... "what's that?". OK, for you it is clear at first glance that this means "gnfs-197", but for others? Maybe something like "Team sieve: GNFS-197 from <number>" would have been a bit more appealing. Or, like your previous factoring threads: something like "poly search for..." and then change to "sieving...") |
I am sorry to report that I don't really have significant resources. (I had borged some two or three years ago, but that all ended abruptly with "a sound of a breaking string dying away sadly", as in Chekhov's Cherry Orchard.)
Then, I only had a multiple-personality-disordered Phenom that started life as a 940 4-core with 8 GB of DDR2 memory; then I replaced its brain with a 1055T (and shelved the 940), and just two months ago I dissected its personalities into two boxes: restored the 4-core and gave the 6-core 16 GB of DDR3 memory... but that's it, that's the 10 cores I have. :sigh: |
[QUOTE=fivemack;281834]My inclination is to do 2^1009-1 next. It's 204 digits and SNFS difficulty 304, so it may be that after a CPU-year of polynomial selection it turns out to be easier by SNFS; but a CPU-year of polynomial selection is not a disastrous amount of time to waste.[/QUOTE]
M1009 indeed could turn out to be snfs. There's a definite gnfs
[code]
204  3 748 +    324.4  0.65
[/code]
but it probably needs more ECM. Or there's
[code]
206  2 2218 M   333.8  0.615  /gnfs
[/code]
That's a tall order for sure. |
As it happens, I now have access to even more horsepower (~120 X5687 cores + the 60 X5570 cores I already had access to). The new gear is all other people's workstations, and the rest is a shared resource - so all of it can only be used part-time (non-office hours). This means more manual effort on my part to re-start jobs daily. This last chore is largely why I didn't participate in the last job...
I've thought of using daily recurring scheduled tasks for the 120 Windows cores, but haven't implemented it yet. I'm sure there's a way to schedule jobs on Linux too, but I don't know the particulars. If my babysitting time could be reduced, I would definitely participate in the next job. |
[QUOTE=Batalov;281843]M1009 indeed could turn out to be snfs. [/QUOTE]
There's a minor annoyance about M1009 in the epfl counts. It was left off of the June version of the epfl report, presumably because it wasn't an attractive candidate for large snfs. The subsequent/final version of the epfl report gives a range of M values where they ran ecm (to t65 or 2t65, depending on size, split at M1125, iirc), but PaulZ subsequently found a small factor from M1115 that they couldn't possibly have missed with t65. I got no reply from an email request for clarification that included our friends Thorsten and Joppe (in a cc: on a reply to PaulZ's factor report). I haven't been running either 2- or 2+ numbers since then, and my curve count on M1009 is below 2t55. Perhaps a direct inquiry regarding M1009 would help.

[QUOTE=Batalov;281843] There's a definite gnfs 204 3 748 + 324.4 0.65 but it probably needs more ECM. Or there's 206 2 2218 M 333.8 0.615 /gnfs That's a tall order for sure.[/QUOTE]

Another problem with ecm pretests on 3p748: it has just been added to the regular tables this week, and the curve counts are mostly whatever Sam and PaulZ may have run (quite likely substantial, judging by the factor sizes they found during testing on the extension). This could be fixed during poly selection (in parallel). Uhm, well - suppose sorting sieving candidates shows how reliable curve count ranges are (depending, perhaps, on whose they are).

The 2M2218 C206 is someone's cofactor from an earlier C262, and doesn't appear to have been run at all on our pc/grid. It did make the new 8-core cluster input file, though. The count there (non-first holes, not 2- or 2+, C20x's) is 24119 curves with B1=900M, where t60 = 13061. That's c. 1.8t60, on a file that initially had 11 numbers and has just five left, with a p65 found. (That's c. to the nearest t55 = c. 2700 curves, 9/5.)

On the topic of attracting a larger pool of forum contributors, I wonder whether a warmup gnfs191 - one of 2M2090 or 2M2110, perhaps - is a possibility?
We'd get a chance to try the new poly select, and might hope to do better than what we had for gnfs187. These are at 17644/13061 with an extra 3t55, resp. 2t55, from the pc/grid. That's 9/10, resp. 8/10, of 2t60. Well, I'll stop clogging bandwidth; the matrix for the gnfs167 cofactor of 3p608 just finished. -Bruce |
RSA-210 and RSA-704 ? :smile:
(yeah, according to the rules of thumb, these are more than twice as hard as a GNFS 204, and close to M1061, so they may be a bit tall for now...) [quote=bsquared]I've thought of using daily recurring scheduled tasks for the 120 windows cores, but haven't implemented it yet. I'm sure there's a way to schedule jobs on linux too but I don't know the particulars.[/quote] cron/anacron does the job on *nix, and can launch any executable (native code, BASH, Perl, PHP, Python, whatever). RSALS uses cron to trigger PHP scripts (made mainly by squalyl) every few minutes for automatic work generation and for concatenating gzipped results onto the large files containing raw relations. Note that once it's set up, BOINC is a rather easy way to babysit a group of computers :smile: It takes me 10-15 minutes to build a .poly for a number (usually from William Lipp's lists), make a short test-sieving run at (r|a)lim/2 (during which I usually do something else), create and fill an entry in the automatic work generation system, and move the number from "queued for sieving" to "sieving" state. But now that, thanks to frmky, I've been able to make sense of BOINC WU priorities, I could even put many numbers in "sieving" state and, actually, do largely without the automatic work generation infrastructure (like I did in the beginning, and like frmky does for NFS@Home). In case you didn't notice the topic here ( [url]http://www.mersenneforum.org/showthread.php?t=16114[/url] ), [url]https://github.com/GDSSecurity/cloud-and-control[/url] could be an interesting starting point, should you want to deploy a BOINC server to manage your own sieving :smile: |
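For anyone in the same boat as bsquared, a minimal sketch of the cron-driven babysitting described above - the script, the siever binary name, and the paths are all placeholders of my own, not RSALS's actual scripts:

```shell
#!/bin/sh
# Hypothetical babysitter for a siever process; SIEVER and WORKDIR are
# placeholders - point them at your own binary and job directory.
SIEVER="${SIEVER:-gnfs-lasieve4I15e}"
WORKDIR="${WORKDIR:-$HOME/sieving}"

check_and_restart() {
    if pgrep -x "$SIEVER" > /dev/null 2>&1; then
        echo "siever already running"
    else
        echo "siever not running, restarting"
        # cd "$WORKDIR" && nohup "./$SIEVER" ... >> sieve.log 2>&1 &
    fi
}

check_and_restart
```

A crontab entry such as `0 18 * * * /home/user/check_and_restart.sh` would then fire it daily at 18:00, i.e. at the start of the non-office-hours window.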
[QUOTE=debrouxl;281866]RSA-210 and RSA-704 ? :smile:
[/QUOTE] I like the idea... |
Note that if anyone wants to do numbers larger than ~GNFS200 in difficulty then we should invest some development effort in getting the sieving tools to handle large primes bigger than 33 bits in size. Though pretty soon we'll be reaching the 2^32 relation limit in Msieve, since 2,1031- already needed 2^30 relations after duplicates were removed manually.
|
So, we will need a new implementation/algorithm/code? I'm nowhere near qualified to do that.
|
There's still time before 'need' is the correct word for it. A public factorization effort that will generate more than 4 billion relations would probably need both Bruce and Greg's resources combined, for extended periods of time. Modifying Msieve to handle more than that many relations is not difficult per se, but the assumption of 32-bit relation numbers is all over the postprocessing code and would have to be removed in stages. Then there's the matter of making sure it still works afterwards...
More troubling is removing the next bottlenecks in line: building on-disk clique removal, for one. Parallel duplicate and singleton removal would also be nice, in case the initial dataset has trouble fitting in 48GB or 64GB of memory. In contrast, getting lasieve to spit out larger large primes might just be a matter of removing a single size check in the code; I just don't know. Tom did do a preliminary skim of the source and nothing looked obviously impossible about it. |
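For scale, the 2^32 ceiling mentioned above can be sketched in a few lines (the byte counts are just the widths of the index types, not Msieve's actual in-memory layout):

```python
# Back-of-the-envelope view of the 32-bit relation-number ceiling.
REL_LIMIT_32 = 2**32              # largest count addressable by a uint32 index

rels_2_1031m = 2**30              # 2,1031- already needed ~2^30 unique relations
print(rels_2_1031m / REL_LIMIT_32)    # 0.25: already a quarter of the ceiling

# Storage for the relation indices alone, at 32-bit vs 64-bit widths:
for width_bytes in (4, 8):
    print(f"{width_bytes * 8}-bit indices for 2^32 relations: "
          f"{REL_LIMIT_32 * width_bytes // 2**30} GiB")
```

Which is why widening the indices to 64 bits is cheap arithmetically but, as noted, the 32-bit assumption is scattered all over the postprocessing code.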
[QUOTE=debrouxl;281866]RSA-210 and RSA-704 ? :smile:
...[/QUOTE] I can tell you about the factors of RSA-704. Two 352-bit primes, more-or-less. Any reason you'd be interested to know which primes? -Bruce [(2^351+epsilon1)*(2^351+epsilon2) is short of 2^703+epsilon3; maybe one 352-bit and a 353-bit? Inquiring minds need to know?] (By contrast, factoring a wanted Cunningham number with a known ecm pre-test has some suspense. Three prime factors? A near-miss for ecm ... & etc.) |
They would have the same number of digits, so a factor of 8 between them is not to be excluded. Theoretically it can also be 351 and 354 bits, so the search space is almost triple.
|
Modern standards for RSA keys require the factors of the modulus to have the exact same number of bits. If the modulus has p-bit primes, each prime must be larger than sqrt(2)*2^(p-1)
|
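jasonp's lower bound is easy to sanity-check with exact integer arithmetic; the two "primes" below are just arbitrary stand-ins above the bound, chosen to show the bit lengths, not actual primes:

```python
from math import isqrt

p = 352                                # per-prime bit length for a 704-bit modulus
# Smallest integer whose square exceeds 2 * 2^(2p-2) = 2^(2p-1):
bound = isqrt(2 * 4**(p - 1)) + 1      # ~ sqrt(2) * 2^(p-1)

p1 = bound + 12345                     # stand-ins above the bound (not real primes)
p2 = bound + 67891
n = p1 * p2
print(p1.bit_length(), p2.bit_length(), n.bit_length())  # 352 352 704
```

So with both primes above sqrt(2)*2^(p-1), the product is forced above 2^(2p-1) and the modulus gets exactly 2p bits, which is the point of the rule.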
[QUOTE=bsquared;281851]I've thought of using daily recurring scheduled tasks for the 120 windows cores, but haven't implemented it yet. I'm sure there's a way to schedule jobs on linux too but I don't know the particulars.[/QUOTE]
Do you know you can scheduled it over the LAN? For example: schtasks.exe /Create /S <ip> /U <username> /P <password> /SC DAILY /ST <startime> /TN <name of task> /TR <file to run> Use "schtasks /?" and "schtasks /Create /?" to see more. |
[QUOTE=jasonp;281896]Modern standards for RSA keys require the factors of the modulus to have the exact same number of bits. If the modulus has p-bit primes, each prime must be larger than sqrt(2)*2^(p-1)[/QUOTE]
I was thinking epsilon1, epsilon2 around 100 bits; large enough that p*q would be safely away from snfs, small enough to keep the bit count simple. Do we know whether RSA-704 meets modern standards? Must be relatively ancient. With p1, p2 both larger than (sqrt(2)/2)*2^p we'd get p1*p2 > .5*2^(2p) = 2^(2p-1) ... so 2p-1 = 703, and p1, p2 have 352 bits each: p1 = 2^351 + sigma1, p2 = 2^351 + sigma2, both larger than sqrt(2)*2^351. Now sqrt(2) is about 1.41 decimal, which is 1.0110101... in binary, so the bound is roughly 2^351 + 2^349 + 2^348 + ..., and the sigmas are supposed to be big, like sigma > 2^349 + 2^348; and at the bound itself, p1*p2 = 2^703 exactly. Sounds like RSA-704 isn't modern? Suppose I could check RSA233. -Bruce |
[QUOTE=jasonp;281896]Modern standards for RSA keys require the factors of the modulus to have the exact same number of bits. If the modulus has p-bit primes, each prime must be larger than sqrt(2)*2^(p-1)[/QUOTE]
Talking about RSA-210 and RSA-704: they are not "modern standards", and everything we know about their factors is that they have the same number of digits. From the already-factored RSA numbers we can see that the "same number of bits" rule was not exactly followed in all cases. There are 3 or 4 exceptions; for example RSA-120 has a ratio of 2.11759 between its factors (>2, so no way they have the same number of bits!), same for RSA-170 (2.02626) or the former RSA-200 (2.243724), and some of the others with a ratio under 2 also have different numbers of bits. In fact I did not compute it, but for one of the "exceptions" above it could be more than one bit of difference. Like going from 29 to 67: both have 2 digits, their ratio is about 2.3, but 29 has 5 bits and 67 has 7 bits. So, for RSA numbers (including 210 and 704) we can expect any ratio of their factors between 1 and 10, so a 3-bit difference between their factors is NOT to be excluded. Just my tuppence.

Edit: in fact a difference of 4 bits must not be excluded either, as for example 1009 and 9973 are both prime, their ratio is smaller than 10, they both have 4 digits, but the first has 10 bits and the second has 14 bits. Note that having the same number of digits is a condition stronger than the ratio being smaller than 10; this is important, otherwise a smaller pair could be found with the ratio also smaller than 10 but a different number of digits, like 29 and 257 (5 bits and 9 bits). The primality is important too, otherwise the smaller example could be 13 and 129 (4 bits and 8 bits respectively). In fact, 7 and 67 would suffice too, and by chance they are also primes, but they don't have the same number of digits. |
RSA 210 will have two 105-digit factors.
RSA 704 will have two 352-bit factors. This much we can be sure of. EDIT:- Using these facts: The factors of RSA 704 should be between 8070373681869514072734228812145027220094678854297782287717332620153464615011270601823360738857596970569828 and 2^352 (a ratio of 1.13675) RSA 210 should have factors between 245246644900278211976517663573088018467026787678332759743414451715061600830038587216952208399332071549104 and 10^105-1 (a ratio of 4.0775) |
I suggested RSA-210 or RSA-704, here and to Greg several weeks ago, because while the standards for choosing RSA key sizes already strongly discourage the creation of RSA keys <= 1024 bits, public efforts such as MersenneForum / NFS@Home, using FLOSS tools and making headlines by factoring RSA-210 or RSA-704, could help accelerate the phase-out of existing 1024-bit RSA keys :smile:
Double bonus points for improving the FLOSS tools in the process, in the directions mentioned by jasonp - I'm not among those who have a clue about the math and the implementation of these tools. Phasing out 1024-bit RSA keys is a double-edged sword: it would help eliminate some certificates used for SSL/TLS, but it would also eliminate one of the ways people trying to make permanent installs of third-party software on the hardware they own can achieve their goals. For instance, TI-Nspire calculators use 1024-bit RSA signing keys on the 2007-2010 models, and have already switched to 2048-bit RSA signing keys on the 2011 models. It's not desirable that other manufacturers of devices which users should be able to control follow suit... |
Why GNFS?
Why does the next one need to be GNFS? Why not SNFS up in the kilobit range, where there are many candidates?
The only answer I can see at the moment is that it gives a chance to stress-test Jason's polynomial finding code. Paul |
[QUOTE=fivemack;281834]I'm slightly saddened how little external interest I had in the 197-digit GNFS sieving - maybe I over-egged the description of how difficult the sieving would be to the point that it put people off, and likely the other people with significant resources (I'm thinking of Serge and Ben) have more interesting things to do with them.[/QUOTE]I played no role for the second reason --- too many things to do with too few resources. That has been the cry of factorers for decades, if not longer.
Paul |
Speaking for myself, I'd feel like I was holding things up if I tried to stake out any significant range of a large job.
I'm getting just over 2.5 million relations a day on a c164 with 8 (out of 10) cores running on the job, and I shudder to think what my output would be on something larger. |
[QUOTE=xilman;281929]Why does the next one need to be GNFS? Why not SNFS up in the kilobit range, where there are many candidates?
The only answer I can see at the moment is that it gives a chance to stress-test Jason's polynomial finding code.[/QUOTE] NFS@home is very happily doing large SNFS jobs, with I think significantly more resources than the forum can offer (apart of course from Bruce's enormous capacity), so organising more such work by hand feels redundant to me. The polynomial-finding code has been worked on a lot recently; getting the parameters right for larger numbers is probably not completely uninteresting. |
[QUOTE=bdodson;281883](By contrast, factoring a wanted Cunningham number with a known ecm pre-test has some suspense. Three prime factors? A near-miss for ecm ... & etc.)[/QUOTE]
Factoring the C187 from [URL="http://www.mersenneforum.org/showthread.php?t=13003"]this thread[/URL] would have even more suspense; not only is it unknown what sizes the factors are, but there's also a message to recover. |
[QUOTE=jasonp;281896]Modern standards for RSA keys require the factors of the modulus to have the exact same number of bits. If the modulus has p-bit primes, each prime must be larger than sqrt(2)*2^(p-1)[/QUOTE]
Second try. (Please ignore the first!) Magma has a function RandomPrime. I checked a few and got [code] p2=6795865632092182381090554801611929751291254620153213141853637792040419908863678815206226270618836356620243 p3=8425088594444363083756584497069027163097594304627681595990294439570195850206502157856636064495940696446767 [/code] An online converter reports that both have 106 digits, 352 bits. Next, Maple reports [code] evalf(sqrt(2))*2^351; 0.6486993694*10^106 [/code] which seems to be Canadian for 6.4e105, so both p2 and p3 are in Jason's range. No reason RSA-704 couldn't be equally modern. By contrast, our most famous factorization, RSA-129, at p64*p65 = 3.49e63*3.27e64, definitely isn't modern. -Bruce |
Or 2^1755+1 and 2^1755-1 (C183 and C209) which the Links to factoring projects thread says would assist a result about base-2 pseudoprimes. But I don't know how significant the result is (the math's beyond me but that's a weak condition).
Chris K |
[QUOTE=debrouxl;281922]For instance, TI Nspire calculators, use 1024-bit RSA signing keys on 2007-2010 models, and have already switched to 2048-bit RSA signing keys on the 2011 models. It's not desirable that other manufacturers of devices which users should be able to control, follow suit...[/QUOTE]
But it is rather inspiring (pun intended) to see people in the TI forums grappling with understanding why they have no chance to do what they want by breaking calculator crypto first. Several of them have already resolved to come up with new factorization algorithms :) |
chris2be8: I thought of the Wieferich primes as well, when reading the topic for the first time :smile:
GNFS 183 is quite a bit easier than GNFS 197, while GNFS 209 is quite a bit harder. jasonp: yeah, once in a while, someone doesn't have the orders of magnitude in their heads, but usually stops working soon enough after someone tells them. |
Lest anyone think I was being disparaging, I'm sincerely not; it's a pleasure to see people discover math in their way, then figure out how it works.
|
[QUOTE=jasonp;281964]But it is rather inspiring (pun intended) to see people in the TI forums grappling with understanding why they have no chance to do what they want by breaking calculator crypto first. Several of them have already resolved to come up with new factorization algorithms :)[/QUOTE]
Could you kindly post here the link to the thread on that forum? |
Try [url="http://www.omnimaga.org/index.php?topic=3821.0"]here[/url], [url="http://www.omnimaga.org/index.php?topic=3639.0"]here[/url], [url="http://www.omnimaga.org/index.php?topic=6810.0"]here[/url], [url="http://www.omnimaga.org/index.php?topic=8707"]here[/url] and [url="http://www.unitedti.org/forum/index.php?showtopic=9557"]here[/url] (it's a popular topic). Lionel can point to others that I missed.
|
Heh, jasonp kept a better track of the topic links than I did :smile:
Note that I intentionally never posted links to those topics on MersenneForum, so as to protect both the guilty, and the hopes-dasher (myself) from their wrath. But we may be venturing into somewhat off-topic territory... mentioning the adventures of some people in the TI community was not my intention when posting about RSA-210 and RSA-704 :smile: |
[QUOTE=debrouxl;282074]Heh, jasonp kept a better track of the topic links than I did :smile:
Note that I intentionally never posted links to those topics on MersenneForum, so as to protect both the guilty, and the hopes-dasher (myself) from their wrath. But we may be venturing into somewhat off-topic territory... mentioning the adventures of some people in the TI community was not my intention when posting about RSA-210 and RSA-704 :smile:[/QUOTE] Just for fun I'll carry the off-topic stuff a little further :smile:. (mods do as you see fit) Do I understand the cspire_lfsr code you posted correctly in that it attempts to trial-divide by random 512-bit odd integers (with a tiny bit of trial division on the random p before doing the full n mod p)? The TFing could be made 100x faster by using a sieve starting from some random p to eliminate a lot of composites prior to testing n % p. Not that that would get you any closer to factoring n, but thought I'd point it out :whistle: [edit] For the heck of it, I tried this out over my lunch break. Saw about a 70x speedup. |
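For the curious, the sieve trick described above can be sketched like this - the window size, prime bound, and function name are arbitrary choices of mine, not the actual cspire_lfsr or lunch-break code:

```python
import random

def sieve_survivors(start, window, small_primes):
    """Return the odd candidates start, start+2, ..., start+2*(window-1)
    (start rounded up to odd) that have no factor among small_primes."""
    if start % 2 == 0:
        start += 1
    flags = [True] * window            # flags[i] corresponds to start + 2*i
    for q in small_primes:             # q must be odd: all candidates are odd
        r = (-start) % q               # start + r is the first multiple of q
        if r % 2:                      # start + r is even; step one q further
            r += q
        for i in range(r // 2, window, q):
            flags[i] = False
    return [start + 2 * i for i in range(window) if flags[i]]

small_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
start = random.getrandbits(512) | 1
survivors = sieve_survivors(start, 10_000, small_primes)
# Only the survivors get the expensive n % p test, skipping roughly
# two-thirds of the candidates with this small prime bound.
print(len(survivors))
```

Pushing the prime bound higher eliminates a larger fraction of candidates, which is presumably where the reported ~70x comes from (relative to the full n mod p cost per candidate).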
[QUOTE=fivemack;281933]NFS@home is very happily doing large SNFS jobs, with I think significantly more resources than the forum can offer (apart of course from Bruce's ...)
The polynomial-finding code has been worked on a lot recently; getting the parameters right for larger numbers is probably not completely uninteresting.[/QUOTE] More than mine, as well; one kilobit down, the next one to finish sieving soon (a month, perhaps). On gnfs, our previous was gnfs197, and doubling the difficulty leaves us well short of RSA210 (and who would be contributing to the sieving?). Are there other candidates to consider, with M1009 and 3p748 at 204 digits the best current? Perhaps having the snfs poly for M1009 has the advantage of setting a target for (gnfs) poly selection, the way the poly actually used to factor RSA768 (Thorsten et al.) served for comparison in the recent test of the gpu-vs-cpu msieve upgrade. -Bruce |
Less than c204s, there's only
[code]
c202  12 299 +   297.8  0.678  /13
[/code]
which looks much less likely to be gnfs than 2,1009-. And there's the newcomer
[code]
c202  3 745 +    284.3  0.8  /5q/likely_gnfs
[/code]
(or octic? :-) ) |
[QUOTE=Batalov;282087]Less than c204s, there's only
c202 12 299 + 297.8 0.678 /13 that looks much less likely to be gnfs than 2,1009-. And there's the newcomer c202 3 745 + 284.3 0.8 /5q/likely_gnfs(or octic? :-) )[/QUOTE] 2,2218M? |
[QUOTE=R.D. Silverman;282099]2,2218M?[/QUOTE]
Mentioned in Serge's previous post, [QUOTE=Serge] Or there's 206 2 2218 M 333.8 0.615 /gnfs That's a tall order for sure. [/QUOTE] Seemed to me that gnfs197 to gnfs204 is a steep enough step, without stretching to gnfs206 (cf. gnfs210). Depends on how ambitious Tom wants to be --- if you're expressing a preference for a base-2, wouldn't M1009 be your first choice? -Bruce |
Old, validated* gnfs targets are sparse. Nothing between 197 and 204 (maybe, 202 with a stretch of imagination).
[SIZE=1]*/ Pharma lingo. A gene target (for a future drug) is validated when a significant amount is known about it. Unlike last decade, no department wants "novel" targets anymore - because they drop like dead flies later in the pipeline, when a significant effort has already been spent and then there's nothing to show for it. I think the parallels are clear.[/SIZE] |
They would need more ECM, but [URL="http://factorization.ath.cx/index.php?id=1100000000012549379"]1361^107-1[/URL] and [URL="http://factorization.ath.cx/index.php?id=1100000000126320612"]22576753^47-1[/URL] seem to be in the range you are looking at.
|
FWIW, I've launched msieve -np1 on the composite cofactor of M1009, [url]http://factordb.com/index.php?id=1100000000019320428[/url] .
|
I've run -np1 1,1000 (7.5 hours on geforce 275, 9419 hits), then -np2 which took 46 hours and gave 52638 hits (and myriads of 'too many line iterations' messages). Best was E=1.919e-15 and still significantly slower than the SNFS polynomial on a trial-sieve ... indeed, slow enough that I'm wondering if there are problems with the siever at ludicrous skew: with
[code] lpbr: 32 lpba: 33 mfbr: 64 mfba: 96 alambda: 3.6 rlambda: 2.6 alim: 400000000 rlim: 400000000 [/code] and siever 16e, I'm still getting [code] gnfs-lasieve4I16e -a -f 400000000 -c 10000 total yield: 4513, q=400010057 (24.43317 !!! sec/rel) [/code] (against a yield of about 2.3 and times around 1.5 sec/rel for 7+374) snfs for comparison is also intractably slow: [code] n: 511723119647870982096966393397179429123821780791609426362473853284317421442953557435166610895933390722879695053104391499366375434314647024796083015465856876642832067947449473241839773740706589007306261239 skew: 0.9 c6: 2 c0: -1 Y1: -1 Y0: 374144419156711147060143317175368453031918731001856 lpbr: 32 lpba: 33 mfbr: 64 mfba: 96 alambda: 3.6 rlambda: 2.6 alim: 400000000 rlim: 400000000 gnfs-lasieve4I16e -a -f 400000000 -c 10000 total yield: 8018, q=400010057 (5.69129 sec/rel) [/code] (but maybe the SNFS should be done on the rational side, or maybe I should be using three big primes both sides ... trying those tonight) |
What am I doing wrong? Can someone explain to me, step by step, how to run polynomial search? What flags and files do I have to use and create? Thank you.
[code] E:\msieve>msieve -np1 -v Msieve v. 1.49 Fri Dec 16 22:04:50 2011 random seeds: 96f7f1f8 d3e7bad4 factoring 5117231196478709820969663933971794291238217807916094263624738532843174 21442953557435166610895933390722879695053104391499366375434314647024796083015465 856876642832067947449473241839773740706589007306261239 (204 digits) searching for 15-digit factors commencing number field sieve (204-digit input) commencing number field sieve polynomial selection error: no parameters for 204 digit inputs [/code] |
SNFS poly has E = 2.775e-15.
GNFS may reach that with a bit of luck. (But whether it will sieve as well is indeed an open question.) 6th-degree gnfs, perhaps? SNFS -r sieving seems to be better for the posted poly. |
[QUOTE=pinhodecarlos;282485]What am I doing wrong? Can someone explain me step by step how to run polynomial search?[/QUOTE]
The parameters for very large inputs have (at present) to be set by changing the file gnfs/poly_skew.c in the source code and recompiling (make x86_64 on linux, on Windows I have no idea); if you're not happy with recompiling msieve then I'm afraid there's not much opportunity to help out with this particular project at this very early stage. |
If anyone's interested in tweaking the stage-2 parameters for M1009
840 1532426581029323681171 14353279785776501210693417737211891227753 is the best of ten thousand stage-1 hits I've found so far (stage-2 root score 4e+27 as against a median of 1.7e+29) and the source of the 1.919e-15 score. Stage-2 root score 2.4e+28 is the 99th-percentile. (I don't know whether the new stage 2 has changed the double role of max_norm; for the 197-digit polynomial search I found it seemed to help to compute the norm for every polynomial, cut down to the top 1%, then run -np2 on that top 1% with max_norm set much larger. But I'm not sure that wasn't credibly dismissed as voodoo even at the time) |
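The two-pass trick described above - score every stage-1 hit, keep only the best 1% by norm, then run -np2 on that shortlist with a relaxed max_norm - is just a percentile cut. A generic sketch, where the scores merely stand in for stage-2 root scores and nothing here calls msieve itself:

```python
import random

def top_fraction(hits, score, frac=0.01):
    """Keep the best `frac` of `hits` under `score` (lower is better, as with norms)."""
    ranked = sorted(hits, key=score)
    keep = max(1, int(len(ranked) * frac))
    return ranked[:keep]

# Stand-ins for 10,000 stage-1 hits with lognormal-ish stage-2 root scores.
stage1_hits = [random.lognormvariate(0, 2) for _ in range(10_000)]
shortlist = top_fraction(stage1_hits, score=lambda norm: norm)
print(len(shortlist))  # the 100 best hits would go on to -np2
```

Whether the cut helps (or is voodoo, as suspected above) depends on how well the cheap norm predicts the final E-score; the mechanism itself is this simple.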
The new stage 2 works exactly the same as before, it's just much more robust when choosing degree-6 polynomials.
|
[QUOTE=fivemack;282496]The parameters for very large inputs have (at present) to be set by changing the file gnfs/poly_skew.c in the source code and recompiling (make x86_64 on linux, on Windows I have no idea); if you're not happy with recompiling msieve then I'm afraid there's not much opportunity to help out with this particular project at this very early stage.[/QUOTE]
I don't know how to compile, but I would really like to help out (Windows 32- and 64-bit versions).
[QUOTE=fivemack;282496]The parameters for very large inputs have (at present) to be set by changing the file gnfs/poly_skew.c in the source code and recompiling (make x86_64 on linux, on Windows I have no idea); if you're not happy with recompiling msieve then I'm afraid there's not much opportunity to help out with this particular project at this very early stage.[/QUOTE]
Could you give me an exact description of the files that need to be changed and the exact details of the changes needed? |
Or just wait until msieve 1.50 is out, which has params for this composite and you won't have to compile anything.
|
I'm running -np1 1000,5000.
|
I've done (very nearly) 20k@260M on 3,589- c213. This one should be ready for SNFS if you are interested.
|
That's a lot of curves, but I'm afraid I have an irrational preference for Mersenne numbers; I've reserved M929 for the forum with Wagstaff and am trying to get good parameters (in particular, to see if I can get it to be a 15e job)
|
[QUOTE=fivemack;283053]That's a lot of curves, but I'm afraid I have an irrational preference for Mersenne numbers; I've reserved M929 for the forum with Wagstaff and am trying to get good parameters (in particular, to see if I can get it to be a 15e job)[/QUOTE]
Why would the Mersenne forum have a preference for such numbers? I can't imagine. |
[QUOTE=R.D. Silverman;283061]Why would the Mersenne forum have a preference for such numbers?
I can't imagine.[/QUOTE]Ooh! Irony, and from an American too. That's something you don't see every day. |