6 table
[code]Size Base Index  Diff.  Ratio  Notes
 281    6   421   327.6  0.856
 321    6   431   335.3  0.955
 273    6   437   340    0.801
 293    6   439   341.6  0.856
 259    6   445   277    0.933  /5q
 336    6   449   349.3  0.96
 250    6   457   355.6  0.701
 337    6   461   358.7  0.938
 258    6   463   360.2  0.714
 310    6   467   363.3  0.86
 293    6   473   334.6  0.874  /11q
 317    6   479   372.7  0.849
 290    6   481   345.4  0.837  /13
 299    6   485   301.9  0.988  /5q
 320    6   487   378.9  0.843
 301    6   491   382    0.786
 276    6   493   383.6  0.718
 277    6   497   331.4  0.834  /7
 379    6   499   388.2  0.974[/code]
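For the unmarked (plain sextic) rows, the Diff. column is just n*log10(6), the size of the number 6^n-1 that the polynomial represents; the rows marked /5q, /7, /11q, /13 instead use a polynomial on the smaller primitive part, so their Diff. is lower. A quick sketch of the plain case (the helper name is mine, and it deliberately does not handle the marked rows):

```python
import math

def snfs_difficulty(n):
    # SNFS difficulty of 6^n-1 with a plain sextic polynomial:
    # the represented number is about 6^n, so difficulty = n * log10(6).
    # (Rows marked /5q, /7, /11q, /13 use a polynomial on the primitive
    # part instead, giving the smaller Diff. values in the table.)
    return n * math.log10(6)

print(round(snfs_difficulty(421), 1))  # → 327.6, as in the first row
```

The table appears to truncate rather than round: 6,449 shows 349.3 while 449*log10(6) = 349.39.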
6,387
[CODE]N=44031017740982928067538705953801189246013052570402834343832374510583225456135162211816299899613030974567524511477153154642766468275281718659648235588552347206547842353356774275612337 (182 digits)
SNFS difficulty: 200 digits. Divisors found:
r1=396617565007083620931188002164448757655876754586218585374821747799540417379 (pp75)
r2=111016307964566649045756932943929716969554706760914531131640167955206717252780433501309849995957928482083803 (pp108)[/CODE] 
From Raman:
6,305 [code]prp53 factor: 24506226188880631899928133376464081634967825718604821 prp103 factor: 1068071855703783761181123461268973104294098322369041790833437139214193724057795181478916448908089214641[/code] 
[quote=Xyzzy;121649]From Raman:
6,305 ...[/quote] Happy New Year to everyone. I can contribute many things to this forum. Please take me in. Please give me a chance to show off my good behaviour. Please cooperate. What is the purpose of factoring 6,305 otherwise? How would you feel if I did not let you join my forum when you were interested in joining it? 
[quote=Raman;121845] ... I can contribute many things to this forum.
... Please give me chance to show off my good behaviour. Please cooperate. What is the purpose of factoring of 6,305 otherwise? ...[/quote] I'm replying against my better judgement, not wishing to have my email filtering software burdened by months of email bombs from you, again.

While there may be many things you can contribute, I'd like to make a suggestion, intended to be helpful: consider _not_ replying to some of the posts you have an interest in. I find many posts with things that I could comment on; but readers of the forum have heard my comments before and/or other people do just as well at replying. If you feel that you just have to post your comment on everything that floats by, without considering whether it's actually a positive (i.e., not negative) contribution, readers will tire of hearing from you sooner, rather than later.

The Gerbils, in their wisdom, didn't consult me on readmitting you to the forum; if they had, I'd have suggested a somewhat longer probation; say, long enough to finish that second Cunningham you've had reserved for months.

Peace, bdodson 
[QUOTE=bdodson;121908]The Gerbils, in their wisdom, didn't consult me on readmitting you to the forum; if they had, I'd have suggested a somewhat longer probation;
say, long enough to finish that second Cunningham you've had reserved for months.[/QUOTE]Mea culpa. The consultation was with me, as I'd posted an article telling him everything he needs to know to find good NFS parameters for his factorization. Posting a succinct pointer to it seemed a less bad alternative to enduring several more months of whinging.

Raman: my earlier advice to you stands. Come back here [b]after[/b] you have factors and not before. Not everyone here is as soft-hearted/headed (choose 1) as I am, and I assure you that our collective tolerance is still extremely low. You will find a period of quiet contemplation will serve you very well indeed. Meditation has a lot to recommend it.

Paul 
[quote=xilman;121911]
Raman: my earlier advice to you stands. Come back here [B]after[/B] you have factors and not before.[/quote] So, you mean the factors for 7,295? BTW, it will take a long time (probably one year) unless I add more machines to the computation. I can use additional machines besides my 2.8 GHz dual-core processor anyway (especially my uncle's 3.06 GHz Pentium IV). Thanks. I will use this chance properly. 
6,347
Sieving by Bruce Dodson, parameter selection and completion by Tom Womack. This may be the first job with 32-bit large primes on both sides to be finished with msieve.

Polynomials x^6-6, x-6^58. Small primes up to 160 million on both sides, sieved with 15e for Q=10M-170M on the algebraic side and Q=10M-260M on the rational side. 367372454 unique relations from something over half a billion raw (better estimate of runtime and raw-relation count coming soon).

36 hours on one CPU of a 12GB i7 running at 2.8GHz, with peak memory usage around 10GB, to get to

Sun Mar 29 21:56:52 2009  weight of 19120844 cycles is about 1338865042 (70.02/cycle)

and another two hours to get to

Mon Mar 30 00:06:39 2009  matrix is 19036824 x 19037072 (5329.0 MB) with weight 1283623590 (67.43/col)
Mon Mar 30 00:06:39 2009  sparse part has weight 1206600171 (63.38/col)

The slight oddity in the filtering was 19311242 "warning: zero character" messages appearing on stderr.

Then four threads of the i7 crunched fairly solidly (with one small pause caused by the system disc on the i7 machine failing) for 821 hours, using ~6.5GB RAM, to get 14 dependencies. Square root done on two threads separately (I tried four, but it needs 4.5GB RAM peak per thread), three hours per sqrt, initially two dependencies per thread, and each thread found one of the P96 factors.

Oh yes, the factors: 6^347-1 = 5 * 16657 * 92013588619490399 * P58 * P96a * P96b where [code]
P58  = 8023776342054310550242315692074754087050026551393750990167
P96a = 112962017521735300449115732149174215721837276361901343007283764634643624748720079471271422964001
P96b = 150229032135327752933222419558205115221308398344159056674278560696885280711039602252138197654667
[/code] 
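A reader can sanity-check both the polynomial selection and the result in a few lines of Python: with m = 6^58 the sextic x^6 - 6 vanishes modulo 6^347 - 1 (since m^6 = 6^348 = 6*6^347), and the listed factors multiply back to 6^347 - 1 (values copied from the post above):

```python
# SNFS setup: root m = 6^58 for the sextic x^6 - 6, paired with the
# rational polynomial x - 6^58
m = 6**58
assert m**6 - 6 == 6 * (6**347 - 1)   # so f(m) ≡ 0 mod 6^347-1

# The reported factorization, multiplied back together
P58 = 8023776342054310550242315692074754087050026551393750990167
P96a = 112962017521735300449115732149174215721837276361901343007283764634643624748720079471271422964001
P96b = 150229032135327752933222419558205115221308398344159056674278560696885280711039602252138197654667
assert 5 * 16657 * 92013588619490399 * P58 * P96a * P96b == 6**347 - 1
print("6,347 checks out")
```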
Wow, congratulations!
[QUOTE=fivemack;172102]... 32-bit large primes both sides ... 367372454 unique relations ... matrix is 19036824 x 19037072 (5329.0 MB) with weight 1283623590 (67.43/col) ... four threads of the i7 crunched fairly solidly ... for 821 hours[/QUOTE] Not much oversieving. I would have expected the matrix to be much bigger. Even our 2,908+ matrix is a bit bigger. And the i7 is fast! That matrix took only a month. The 2,908+ matrix should finish in a couple of weeks, and it will have taken about 3.5 months on a 2GHz Barcelona K10.

Greg 
[QUOTE=fivemack;172102]Sieving by Bruce Dodson, parameter selection and completion by Tom Womack. This may be the first job with 32-bit large primes both sides to be finished with msieve. ... Oh yes, the factors: 6^347-1 = 5 * 16657 * 92013588619490399 * P58 * P96a * P96b[/QUOTE] :bow wave:

P.S.: I just posted the factors to Syd's database. 
How much ECM was run? Was the P58 an ECM miss?

[QUOTE=frmky;172123]
Not much oversieving. I would have expected the matrix to be much bigger. Even our 2,908+ matrix is a bit bigger. And the i7 is fast! That matrix took only a month. The 2,908+ matrix should finish in a couple of weeks, and it will have taken about 3.5 months on a 2GHz Barcelona K10. Greg[/QUOTE] I'd say there was a fair amount of oversieving; initially Bruce sieved 10M-160M on both sides, getting 278146913 unique relations, and the matrix that arrived was noticeably bigger:

Tue Mar 24 21:52:20 2009  matrix is 22586885 x 22587133 (6499.2 MB) with weight 1573087910 (69.65/col)

with an ETA of about 1130 hours.

There seem to be advantages in the linear algebra as well as in sieving yield to having a fairly large small-prime bound; 2+908 had to deal with an enormous duplication rate to get its relations.

The i7 has a very good memory controller, and I think it benefits significantly from being in a single-processor system, so there's no requirement to check ownership of cache lines with a processor not on the same piece of silicon. I am surprised to have finished before 2+908 did. 
[QUOTE=10metreh;172127]How much ECM was run? Was the P58 an ECM miss?[/QUOTE]
The number was C249, diff 270 when Tom found it, so only 2*t50. I added 7*t50, as 11020 curves with B1 = 260M (default B2). Also, Tom reports [QUOTE]Taking out the P58 would have left a number probably slightly harder by GNFS than the SNFS was. [/QUOTE] perhaps illustrating Bob's point that these large composites aren't very good candidates for ECM factoring. My recollection (from late Jan/early Feb) is that this was the last hard number before my adjusting to the p59/p60 factors found in SNFSs from Greg and Tom. I'm just finishing c. 14*t50 on Serge's 2,2068M, at c268 = diff 268. Bruce 
Timing and duplication estimates
I reran 0.01% of the sieving (Q=k*10^7 .. k*10^7+10^3) on one CPU of the i7 machine and extrapolated up (using per-ideal measures) for the yield and timings.
So I would estimate that the A 10-170 / R 10-260 run produced 430 million R-side and 280 million A-side raw relations, for a duplicate rate of near enough 50% (367M unique), and took about 350 million CPU-seconds: call it a hundred thousand CPU-hours. This is about 30% longer than the C180 GNFS took last year, and rather over twice as long as 109!+1 has taken to sieve.

A 10-160 / R 10-160 would have been about 540 million raw relations (so a duplicate rate still essentially 50%, since 278M unique) in about 250 million CPU-seconds, so we used about 10^8 CPU-seconds on the cluster to save (1130-821)*3600 ~ 10^6 seconds of real time on the linalg machine. I think the cluster's big enough that this was a saving in terms of total time. 
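Restating the trade-off arithmetic from the post (all numbers as given above, nothing new):

```python
# Duplicate rate of the A 10-170 / R 10-260 run:
# 430M + 280M raw relations, 367M unique
raw = 430e6 + 280e6
unique = 367e6
dup_rate = 1 - unique / raw
print(f"duplicate rate: {dup_rate:.1%}")  # → duplicate rate: 48.3%

# Cost of the extra rational-side sieving vs. linear algebra saved
extra_sieving_cpu_s = 350e6 - 250e6        # ~1e8 CPU-seconds on the cluster
linalg_saved_wall_s = (1130 - 821) * 3600  # ~1.1e6 s on the one LA machine
print(extra_sieving_cpu_s, linalg_saved_wall_s)
```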
Congratulations! Very impressive all around, and a very fast job for such a huge matrix!
The 96-96 split is a nice entry for a modern Kunstkammer. (Sadly, there exists a [URL="http://hpcgi2.nifty.com/m_kamada/f/c.cgi?q=60001_198"]97-97[/URL] split.) But anyway! S 
Finding 14 dependencies in the presence of those zero-character messages is also a relief. The other possibility was that too many quadratic characters generated these messages, so that you would get dependencies from the linear algebra but the square root would fail on all of them (or perhaps just half of them, with complaints that Newton iteration did not converge).
There's a fairly simple workaround to minimize the chance of that happening in the future, and it will become especially important now that jobs with 32-bit large primes are becoming more common. 
[QUOTE=10metreh;172127]How much ECM was run? Was the P58 an ECM miss?[/QUOTE]
OK, that was the long version. Here's the short version: if I had found the p58, it would have been the 2nd largest on the current top10, after four months of global ECM effort by everyone. Factors above p57 are a gift, not a computational objective.

On Xilman/Paul's point that ECM pretesting on hard sieving candidates, with small and medium-sized factors removed, is less likely to give a top10 factor: I now have three of these candidates with small factors p58, p59 and p60 (as well as a bunch with smallest factor p80+). I'm still puzzled why untested numbers ought to be any more likely to give up a p62+ than one of these near-term sieving candidates. Bruce 
One of the four dependencies did give me a 'Newton iteration did not converge' message, which presumably means that half of them would have, but that I was lucky.

I may well not understand this correctly, but I thought the quadratic characters were there to kill off the 2-part of the unit group of the underlying number field, and that there's no reason to believe that that 2-part will be terribly large: Aoki's factorisations which say 'we found 64 dependencies and reduced by quadratic characters to 61' presumably mean that the 2-part turned out to have precisely three generators. If the groups are normally that small, I wonder if Aoki's strategy of applying the characters afterwards might not be the right way to go. 
The groups typically are that small; most of the time allocating 5 quadratic characters is enough to guarantee that the square root will work correctly, and using more than the (unknown) minimum just uses up dense matrix rows. But that requires that each quadratic character doesn't divide any of the relations, and if you can't assure that then that character is useless for guarantee purposes.
The only reasons the quadratic characters are computed at the start of the LA instead of the end are 1) the Lanczos code already has to solve a small Gauss elimination problem and that code would have to be duplicated elsewhere, and 2) the relations are already in memory when the LA starts, so they don't have to be read again.

Could you print the p and r values inside the for() loop on line 210 of gnfs/gf2.c, then exit after the loop finishes? This requires restarting the LA from scratch, but only running long enough to read the relations from disk. The fact that you got a Newton failure at all, and a number of dependencies approximately equal to (expected number minus number of quadratic characters), all makes me suspect that only one or two quadratic characters are valid. 
What do you mean by a quadratic character dividing a relation? I suppose these are quadratic characters chi_p for some rational prime p, and the concern is that p shouldn't appear on either side in any relation, which would explain why it was hard to find one having sieved with 32-bit primes on both sides.
In which case, allowing 64-bit p and working down from 2^64 feels as if it ought to be safe for quite a while ... even Dan Bernstein doesn't propose large primes of more than 40 bits! Or is it terribly slow to compute the values of the character for large p? 
64-bit p would definitely solve the problem; I'm reluctant to go that route because msieve only has optimized code to find roots of polynomials with coefficients modulo 32-bit p. The time needed to compute the characters is not a big concern.
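For readers following the exchange: a quadratic character here is, concretely, a Legendre symbol evaluated at each relation. Pick a character prime q with a root r of the algebraic polynomial mod q; the character of a relation (a, b) is the symbol of a - b*r mod q, and it is only usable as a guarantee if q never divides a - b*r. A minimal sketch (the function names are mine, not msieve's):

```python
def legendre(a, q):
    # Legendre symbol (a|q) for an odd prime q, via Euler's criterion
    a %= q
    if a == 0:
        return 0  # q divides the value: useless as a character here
    return -1 if pow(a, (q - 1) // 2, q) == q - 1 else 1

def character(a, b, r, q):
    # Quadratic character of relation (a, b) at character prime q,
    # where r is a root of the algebraic polynomial modulo q
    return legendre(a - b * r, q)

# e.g. with f(x) = x^2 - 2, q = 7 has the root r = 3 (3^2 ≡ 2 mod 7)
print(character(1, 1, 3, 7))  # → -1 (1 - 3 ≡ 5, a non-residue mod 7)
```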

6,335
6,335 c170 = p83 * p88
[code]p83 = 37844794094580139581697623770911579688837081742561513466850889366516267662341180891
p88 = 1327309015857828899623999948822386264843491918815374735541893912578511688338311537475701[/code] 
[B]6, 341[/B] C224 = p94 . p130
(exp divisible by 11, and therefore a quintic with difficulty 241) [SIZE=1]A tongue-in-cheek recipe for 'success': [/SIZE] [SIZE=1]"If you only wait long enough, any number will become the [I]1st [/I]hole." :smile:[/SIZE] Batalov+Dodson snfs 
6,355
6,355 is an ECM miss rather?
6,355 c206 = p[spoiler]58[/spoiler] * p[spoiler]148[/spoiler] You know that every prime number of the form 1 (mod 4) can be uniquely represented as a sum of two squares, right? p[spoiler]58[/spoiler] = a[sup]2[/sup]+b[sup]2[/sup] where [code] a = [spoiler]26954637581188276770322320890[/spoiler] b = [spoiler]53660966062879867364046240361[/spoiler] [/code]Expecting the factors of 2,935 on February 20, 2010; that is the expected completion time of that number right now. Due to posting about this number on 5 February 2010, I got extremely late to my cousin's marriage betrothal (I actually reached there when the whole function was over), and by then they had already taken the photos and videos of all my other beloved relatives except me and my parents (family). :furious: Very frustrating it is. :censored: The marriage is upon the summer solstice day. But in fact I slept for 3 hours before writing up that post, due to lack of patience in writing it. I should cut out irregular sleep from now onwards. I wish that I had gone there earlier, instead of lying down and getting to sleep; that post could have been done later on. [COLOR=White]The photographs of me and my cousins are attached hereby.[/COLOR] 
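The two-squares decomposition itself is cheap to reproduce for any prime p ≡ 1 (mod 4): find x with x^2 ≡ -1 (mod p), then run the Euclidean algorithm on (p, x) down to the first remainder below sqrt(p). A sketch of this classical Hermite-Serret method:

```python
def two_squares(p):
    # Write a prime p ≡ 1 (mod 4) as a^2 + b^2 (Hermite-Serret descent)
    assert p % 4 == 1
    # find a quadratic non-residue z; then x = z^((p-1)/4) has x^2 ≡ -1
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    x = pow(z, (p - 1) // 4, p)
    # Euclidean descent: the first remainder below sqrt(p) is one leg,
    # and the next remainder is the other
    a, b = p, x
    while b * b > p:
        a, b = b, a % b
    return b, a % b

print(two_squares(73))  # → (8, 3): 73 = 8^2 + 3^2
```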
How many days are there in a year?
Mystery number - find out that candidate
by using the following hint: how many days are there within any given year?

mystery number: number of days within any given year = 365

365 = 5 * 73 = (2^2 + 1^2) * (8^2 + 3^2) = 19^2 + 2^2 = 14^2 + 13^2

What about that for 689, 1457, 1001, 1009...? 
[QUOTE=Raman;216197]Mystery number  find out that candidate
by using this following hint: How many days are there within any given year? mystery number number of days within any given year = 365 365 = 5 * 73 = (2^2 + 1^2) * (8^2 + 3^2) = 19^2 + 2^2 = 14^2 + 13^2 What about that for 689, 1457, 1001, 1009...[/QUOTE] To be honest, there are 365.25 days in a year... :smile: Luigi 
[quote=ET_;216201]To be honest, there are 365.25 days in a year... :smile:
Luigi[/quote] I meant the calendar year, not the Earth's rotation. The Earth rotates in 23 hours 56 minutes 4.09 seconds, and revolves around the Sun in precisely 365 days 5 hours 48 minutes 45.51 seconds [B]= 365.2422 days[/B]. The Earth's axial tilt = 23.44 degrees.

How many days does the Gregorian calendar have within 10000 years? Absolutely, that is 3652425 days, no? Thus, how many days do you think the February of year 10000 should have in order to synchronize with the Earth's rotation calendar?

Of course, it is true that the Earth's rotation is being slowed down regularly due to tidal friction from the Moon; the Moon goes into a farther orbit around the Earth at the rate of around 3 cm per year. This will continue until the Earth is tidally locked with the Moon, as the Moon is with the Earth right now. At that point, the rotation period of the Earth will be equal to the revolution period of the Moon around the Earth, at 47 days, up from 27.3 days right now. This will take up to 50 billion years, but within another 5 billion years the Sun as a red giant star will rather swallow up both Earth and Moon, then? If not, once Earth and Moon are both tidally locked with each other, as in the Pluto-Charon system, the Moon will start moving closer to the Earth. Once it crosses within the Roche limit, the Earth's gravity can break the Moon up into millions of fragments that will orbit the planet in the form of rings. The Earth's rotation is right now being slowed down at the rate of about 1 second within every 500000 years. 
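The 3652425 figure is just the Gregorian leap-year rule summed over 10000 years, which is easy to verify (the resulting mean year of 365.2425 days is what approximates the 365.2422-day tropical year quoted above):

```python
import calendar

# Days and leap years in the proleptic Gregorian calendar, years 1..10000
days = sum(366 if calendar.isleap(y) else 365 for y in range(1, 10001))
leaps = sum(calendar.isleap(y) for y in range(1, 10001))
print(days, leaps)  # → 3652425 2425 (2500 - 100 + 25 leap years)
```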
[QUOTE=Raman;216197] ...[/QUOTE]
For those of us that don't open zipfiles: [code] 5910 6,365 c185 1552875106954286892749964394710986213899516972480933644523600802712902591 . p113 Raman snfs [/code] Also Batalov+Dodson's c171 gnfs, p83*p89. Suppose the above p73 counts as a miss ..., ah, no; not a Mersenne number, guess not. Bruce 
[quote=bdodson;216205]For those of us that don't open zipfiles
[code] 5910 6,365 c185 1552875106954286892749964394710986213899516972480933644523600802712902591 . p113 Raman snfs [/code]Also Batalov+Dodson's c171 gnfs, p83*p89. Suppose the above p73 counts as a miss ..., ah, no; not a Mersenne number, guess not. Bruce[/quote] No chance of being an ECM miss at all. For a p73 factor, imagine how many curves you need to run at B1=3*10[sup]9[/sup]. Certainly it is a lot easier to factor by using SNFS, which has become my pet algorithm right now, [SIZE=1]by now itself[/SIZE]. p54 factors like the one from 5,427+ can only be called an ECM miss; I won't accept even p65 factors as an ECM miss at all. By the way, is that Bos+Kleinjung ECM parallelization trick only applicable to that list of Mersenne numbers, actually? I want to know more about that case, [SIZE=1]much within that fact[/SIZE] 
[QUOTE=Raman;216217] ...
By the way, is that Bos+Kleinjung ECM parallelization trick only applicable to that list of Mersenne numbers, actually?...[/QUOTE] Alex posted Thorsten's announcement: [QUOTE=Thorsten] Stage 1: we implemented arithmetic functions for Playstation3s for Mersenne numbers. Stage 1 for 24 curves in parallel and for B1=3*10^9 took less than 23 hours on one PS3, i.e., less than one hour per curve per PS3. [/QUOTE] which appears to me to say that B1=3*10^9 only works for Mersenne numbers. I'd be happy to be wrong; perhaps there's a workaround, or the timing for non-Mersenne numbers isn't off by a full magnitude (like 1e9, instead of 3e9?). Until we hear otherwise, p73s are only for Mersenne numbers, due to B1=3e9 being only for Mersenne numbers.

Meanwhile, looking at the "who's" list, are we going to have a new Smaller-but-Needed above C180 in another page or two? Bruce

PS - Here's another version from Nmbrthry: [QUOTE=Thorsten] Stage 1: We implemented arithmetic functions for PS3s for Mersenne numbers. We used a recently developed 4-way SIMD carry-less Karatsuba multiplier based on a radix-2^12 signed digit representation, thereby obtaining a speedup of approximately a factor of 2 over our previous unoptimized 4-way SIMD PS3 multiplier. Stage 1 for 24 curves in parallel and for B1=3*10^9 took less than 23 hours on one PS3, i.e., less than one hour per curve per PS3. [/QUOTE] Also, the parallelization matters. They did 30,000 step 1s (and then found that first p73 after just 8800 of the step 2s), so without any parallelizing (an extreme case), they'd only have had 30000/24 = c. 300*4 = 1200 curves. Our only reason for not objecting that the entire method looks too ad hoc to consider seriously is that they found the second one. I wonder whether we'll get another 70-digit+ any time soon. 
NFS@Home has finished 6,385 by SNFS. Log is attached.
[CODE]prp84 factor: 848309686035087642620840193724699651536186090795063231766401545260286503867205343001 prp103 factor: 8069925129421512315078607836272335020663433851059267139141811772952369600025478699376883002767164883851 [/CODE] 
And 6,377 is finished.
[CODE]prp64 factor: 4163738647660343644736593693349157640855934319038312150518137743 prp77 factor: 10225045916601248647752444357593002700492832183893285320674260124721671052319 prp109 factor: 3180086558308793081180085210976696863166710882711790023757500947744933108800529269662639263040161216121494069[/CODE] 
Add 6,349 to the finished list. Actually a while ago as you can see by the date. Finals week intervened, though!
[CODE]Fri May 13 01:03:56 2011 prp68 factor: 42718986495841359540531087907270796421704343217605115730109486979483 Fri May 13 01:03:56 2011 prp141 factor: 271696650711235713248500256972633590883334525976176892387276364981832992375182426818192208801216461941493240842717505382975945252028996771663 [/CODE] 
6,447 (snfs difficulty 232) is waiting for a knight in shining armor and a reasonably modern home bread maker or a pressure cooker. I heard that those have ridiculously strong CPUs recently.
Seriously, a good project for a home computer (a month on a quad-core, less on a hexa)! Write to Sam before it's too late. 
SNFS 232 would be within reach of RSALS, if the polynomial is quintic or sextic; but before I reserve this, I'd like to do some test sieving.
What's the best SNFS polynomial for that number? :smile: 
Like,
[CODE]n: 616206951833849099509404360836151448327434546464783674219667801836184526666386100726932863976883607096993792653424911139417437242190871728765383549637572368393220053069548102628443735964624521073
type: snfs
skew: 1.82
Y1: 1
Y0: -808281277464764060643139600456536293376
c6: 1
c3: 6
c0: 36
rlim: 55000000
alim: 55000000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
[/CODE] but try 3LP on both sides and see if it gets better? (For this size, maybe not.) 
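That polynomial can be checked symbolically (note that Y0 carries a minus sign, so the common root is m = 6^50): m^6 + 6m^3 + 36 equals 36 times the primitive cubic cofactor of 6^447 - 1, so the algebraic and rational sides share a root modulo the number being factored:

```python
m = 6**50  # = -Y0 from the .poly file above

# 6^447 - 1 = (6^149 - 1) * (6^298 + 6^149 + 1); the sextic represents
# 36 times the second, primitive factor
f = m**6 + 6 * m**3 + 36
assert f == 36 * (6**298 + 6**149 + 1)
assert (6**447 - 1) % (6**298 + 6**149 + 1) == 0

print(len(str(6**298 + 6**149 + 1)))  # → 232, the quoted SNFS difficulty
```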
On one core of an otherwise idle i7-2670QM, at q=rlim/2, 2LP sieving and 3LP sieving produce the same yield at the same speed (only the fifth decimal digit changes, which is way below noise). It doesn't hurt to use 3LP sieving.

I've sent an email to Sam Wagstaff to reserve 6,447. 
BTW, before I queue it to the bread makers, pressure cookers and whatever else composes RSALS: how much ECM has 6,447 received?
I can't find the information (or at least, it's not obvious to me ^^) through [url]http://homes.cerias.purdue.edu/~ssw/cun/[/url]. 
Tons. Rest assured.
S.S.W. himself found very large factors in this portion while it was still in the [URL="http://homes.cerias.purdue.edu/~ssw/cun/xtend/"]future extension[/URL] stage. Also Bouvier recently added many more curves (supposedly with GPU GMP-ECM) and found some impressive factors. There's no direct evidence, though; but it has been observed that they ECM'd even less SNFS-difficult numbers very heavily in the past.

I'd like to reserve the post-processing. 
[QUOTE=debrouxl;303720]It doesn't hurt to make a 3LP sieving.[/QUOTE]
FYI, you might see an increase in the duplication rate unless you also increase the large prime size. 
The 6 tables were officially extended (from 451 to 500).
There may be some projects accessible to home-style enthusiasts, e.g. 6,471. 
6,383 is done.
[PASTEBIN]N0hgKLks[/PASTEBIN] 