mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   NFS@Home (https://www.mersenneforum.org/forumdisplay.php?f=98)
-   -   Linear algebra reservations, progress and results (https://www.mersenneforum.org/showthread.php?t=20023)

chris2be8 2016-01-13 17:48

As far as I know all the holes in the Brent tables have had enough ECM run against them. Certainly none of the ones I factored were ECM misses.

@Jcrombie, did Prof Brent send you any details of how much ECM he had run against them?

Chris

PS. Should posts 580, 581 and this one (583) be in the queue management thread?

VBCurtis 2016-01-13 18:34

[QUOTE=xilman;422241]
What are the true limits on the 14e queue? If the upper limit can be stretched there are six C166 and 28 <S252, of which 12 are < S251.
Paul[/QUOTE]

Ignorance is bliss: I recently did my first C165 and used 14e. It finished in about the expected time (extrapolated as 4x the time for a C155), using 32LP. I bet C166 and C167 are fine for the 14e queue, particularly if 32LP are used.

Stats from C165: Msieve E-score 6.67e-13, 265M raw rels produced a 10.3M matrix at density = 96. I can find special-q range sieved if interested.

xilman 2016-01-13 19:26

[QUOTE=chris2be8;422258]As far as I know all the holes in the Brent tables have had enough ECM run against them. Certainly none of the ones I factored were ECM misses.

@Jcrombie, did Prof Brent send you any details of how much ECM he had run against them?

Chris

PS. Should posts 580, 581 and this one (583) be in the queue management thread?[/QUOTE]Quite possibly for 581. I'm never quite sure where to post.

unconnected 2016-01-13 22:23

Reserving C220_120_79 and C170_119_79.

swellman 2016-01-13 22:54

[QUOTE=Xyzzy;422231]It looks like we are about to run out of work to sieve.

[url]http://escatter11.fullerton.edu/nfs/crunching.php[/url]

:mike:[/QUOTE]

[url=http://www.mersenneforum.org/showthread.php?t=20024&page=32]cough cough[/url]

Xyzzy 2016-01-13 23:23

[QUOTE=swellman;422310][URL="http://www.mersenneforum.org/showthread.php?t=20024&page=32"]cough cough[/URL][/QUOTE]We only know how to edit the reservations and enter results. We have no idea how to enter new work.

:sorry:

swellman 2016-01-14 01:09

Reserving 11119_61_minus1.

frmky 2016-01-14 02:43

[QUOTE=swellman;422310][url=http://www.mersenneforum.org/showthread.php?t=20024&page=32]cough cough[/url][/QUOTE]
Queued.

Dubslow 2016-01-14 03:26

Is there a particular reason for the use of gzip for compression? bzip2 has a rather better compression ratio (at a computational cost) while still being more-or-less widely available.

frmky 2016-01-14 07:36

[QUOTE=Dubslow;422348]Is there a particular reason for the use of gzip for compression? bzip2 has a rather better compression ratio (at a computational cost) while still being more-or-less widely available.[/QUOTE]
The results are compressed by the BOINC clients before they are returned to the server. The server just checks that they are valid compressed relations then concatenates them into the file you download.

Dubslow 2016-01-14 08:06

[QUOTE=frmky;422366]The results are compressed by the BOINC clients before they are returned to the server. The server just checks that they are valid compressed relations then concatenates them into the file you download.[/QUOTE]

I suppose that answers my question about whether or not the .gz format supports arbitrary concatenation. :smile: Good to know that I can still grab relations that come in while I start the initial download.
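For the curious, the gzip format does indeed permit compressed members to be concatenated byte-for-byte and then decompressed as a single stream, which is presumably what lets the server simply concatenate validated client uploads. A minimal Python demonstration (the sample "relation" contents are made up):

```python
import gzip

# Two independently gzip-compressed chunks, as separate BOINC clients
# might return them to the server.
chunk1 = gzip.compress(b"relation 1\n")
chunk2 = gzip.compress(b"relation 2\n")

# Plain byte-level concatenation, as the server does with valid results.
combined = chunk1 + chunk2

# gzip handles multi-member streams, yielding the joined plaintext.
print(gzip.decompress(combined))  # → b'relation 1\nrelation 2\n'
```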

fivemack 2016-01-14 10:37

C161_P170_plus_1
 
1 Attachment(s)
[code]
Thu Jan 14 04:14:26 2016 p74 factor: 54052766629980734999562244122502065112987992580302950918724141982758469031
Thu Jan 14 04:14:26 2016 p87 factor: 538334666188011347785904608538080112624666237830477754503884561471369042454503441501091
[/code]

17.8 hours for 6.57M density-120 matrix on 7 threads E5-2650v2

[pastebin]zDFJRN12[/pastebin]

fivemack 2016-01-14 18:16

C221_118_81 now running.

Reserving C219_127_57 and C197_129_53 to run over the weekend

debrouxl 2016-01-14 20:10

I'll attempt to queue the OP numbers posted by William probably tomorrow.

15e would be more efficient in that range, but the 15e queue is full; 14e can usually deal with GNFS difficulty 165-170 tasks, especially with a good poly. SNFS difficulty 250+ with a sextic polynomial on 14e is a stretch.

xilman 2016-01-14 20:48

[QUOTE=debrouxl;422457]I'll attempt to queue the OP numbers posted by William probably tomorrow.

15e would be more efficient in that range, but the 15e queue is full; 14e can usually deal with GNFS difficulty 165-170 tasks, especially with a good poly. SNFS difficulty 250+ with a sextic polynomial on 14e is a stretch.[/QUOTE]Thanks. That indicates where I should devote GPU effort in the next month or two.

Paul

Dubslow 2016-01-15 09:07

[QUOTE=Dubslow;422348]Is there a particular reason for the use of gzip for compression? bzip2 has a rather better compression ratio (at a computational cost) while still being more-or-less widely available.[/QUOTE]

[QUOTE=frmky;422366]The results are compressed by the BOINC clients before they are returned to the server. The server just checks that they are valid compressed relations then concatenates them into the file you download.[/QUOTE]

Furthermore, removing duplicates would also be a massive savings of bandwidth. It would perhaps require uncompressing, unless someone extended remdups4 with zlib, but in that case it would become practical to use bzip2.

I'm bringing this up because I have substantially worse internet than I've had in years past; it took on the order of 12 hrs to download 22GiB of just over 400M relations, of which 63M were duplicates (and so a waste of bandwidth). Besides the connection being slower, it's also a connection where the total bandwidth consumption is monitored -- and 22 GiB is not insubstantial. (It definitely didn't help that I messed up and needed to do it *again* -- but that was my fault :razz:)
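As an aside, server-side duplicate removal is conceptually simple: relations in the GGNFS/msieve text format begin with their a,b coordinates, and two relations are duplicates iff they share that pair (the same relation can be found from different special-q). A hypothetical sketch of the idea behind tools like remdups, in a few lines of Python (the sample relation strings are invented):

```python
def unique_relations(lines):
    """Yield relations whose leading 'a,b' coordinate pair is new.
    This mirrors what a dedup pass does with a hash table."""
    seen = set()
    for line in lines:
        key = line.split(":", 1)[0]  # the "a,b" part before the first colon
        if key not in seen:
            seen.add(key)
            yield line

rels = [
    "1,2:abc:def",
    "3,4:aaa:bbb",
    "1,2:abc:def",  # duplicate, e.g. found from a different special-q
]
print(list(unique_relations(rels)))  # → ['1,2:abc:def', '3,4:aaa:bbb']
```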

fivemack 2016-01-15 09:41

C221_118_81 done
 
1 Attachment(s)
[code]
Fri Jan 15 07:37:13 2016 p56 factor: 66604751882840716203190547146724002766120346665371044853
Fri Jan 15 07:37:13 2016 p165 factor: 489911686445026569674165955928329541002751620798887655641477577459794398906029747989127094102551808131327740546125837492604682107848229561628575767032878867858107417
[/code]

12.8 hours for 5.7M matrix on E5-2650v2 -t 7

Xyzzy 2016-01-15 15:21

[QUOTE=Dubslow;422531]I'm bringing this up because I have substantially worse internet that I've had years past; it took on the order of 12 hrs to download 22GiB of just over 400M relations, of which 63M were duplicates (and so a waste of bandwidth).[/QUOTE]It takes us days, but we start downloading relations as soon as they start coming in. At first it is hard to keep up but things balance out nicely towards the end.

[c]while true; do nice wget --continue --limit-rate=64k --user=rsals_data --password=***** http://escatter11.fullerton.edu/nfs_data/12_226_plus_7_226/12_226_plus_7_226.dat.gz; sleep 3600; done[/c]

VictordeHolland 2016-01-15 21:01

1373_79_minus1 results
 
1 Attachment(s)
[B]1373_79_minus1[/B]
[code]
p63 factor: 167155760887752250734824423685255209540133209961357160033907273
p126 factor: 525234033640980062974735094976085151972433095270924958325303347977625755049053805905136035990038690127663787469942009602039239
[/code]12.4M matrix with TD=110
about 109h on all 4 cores 3770k

VictordeHolland 2016-01-15 21:12

[QUOTE=Dubslow;422531]Furthermore, removing duplicates would also be a massive savings of bandwidth. It would perhaps require uncompressing, unless someone extended remdups4 with zlib, but in that case it would become practical to use bzip2.
[/QUOTE]
Bandwidth is not an issue for me (theoretically 90 Mbit, in practice 50-80 Mbit depending on the time of day). But needless to say, if we can limit the download to uniques then I would support that.

However, in that case you can't check the duplicate ratio, unless the original number of relations is stored in a table.


[edit]
I'll be on a skiing holiday from tomorrow till Saturday the 23rd. My machines will be off in that time window.

pinhodecarlos 2016-01-15 21:14

[QUOTE=xilman;422241]

These are the sub-S250 remainders:
[c]226.37 7,265- C168
227.22 7,266- C173
227.27 8,249+ C178
227.27 8,249- C173
227.35 6,289- C160
227.44 2,746- C219
227.58 5,322+ C207
227.74 4,374- C215
[/c]

Paul[/QUOTE]

Paul,

Let's add them to the queue to help you complete a project milestone. I can throw in two machines to help you with the post-processing.

Carlos

swellman 2016-01-15 21:49

1 Attachment(s)
[QUOTE=swellman;421340]Reserving 1847_71_minus1.[/QUOTE]

A nice triple.

[code]
prp54 factor: 751675369654905088678231667221559046734934416341718309
prp60 factor: 860344312225890325626692969152005944727762796494291088905929
prp113 factor: 97939282732986918883657823741022254384311180414447883360700495089044693365385695961569969049390093405927645342427
[/code]

TD=98(!) as attempts at 114 and 106 failed to build a matrix.

Dubslow 2016-01-15 21:50

It shouldn't be hard for the server to track the original rel count before we download only uniques. Thanks for the tip Mike, I'll probably use that myself.

Jarod 2016-01-15 21:57

[QUOTE=fivemack;421370]Nine hours is pretty short (though remember that post-processing saves checkpoints, and you can stop with ^C and restart with -npr); the newly-queued SNFS(22x) jobs from XYYXF may be small enough, C220_120_79 is the one I'd go for.[/QUOTE]
I am aware of the checkpoint side of things. The reason I like short numbers is that, as a rule, they are smaller to download and the post-processing turnaround is faster. Unfortunately I missed the number that you suggested. I am looking at taking C170_122_63, as it is a 220 number. If somebody wants it before I reserve it, please go ahead.

Jarod 2016-01-16 03:18

C170_122_63 reservation query
 
[QUOTE=Speedy51;422610]I am aware of the checkpoint side of things. The reason I like short numbers is that, as a rule, they are smaller to download and the post-processing turnaround is faster. Unfortunately I missed the number that you suggested. I am looking at taking C170_122_63, as it is a 220 number. If somebody wants it before I reserve it, please go ahead.[/QUOTE]
Thank you to whoever put my name beside this number. However, I notice it has been reserved under my forum name and not my real name, Jarod McClintock. Does anybody have an idea of how big the DAT file will be and how long it will take to run on an i7-5960X at stock speed (3 GHz) with 16 GB RAM? Thanks for the information.

Dubslow 2016-01-16 03:54

It looks like many other jobs you've done. It's around 59 bytes per relation (compressed), and the page suggests ~110M relations for 30 bit large primes, so around 6 GiB. The time should certainly be less than a day, though I don't know which side of 12 hours it would be.
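The estimate above is simple arithmetic; as a sketch (the 59 bytes/relation figure is Dubslow's observation from previous jobs, not a specification):

```python
bytes_per_rel = 59           # observed compressed size per relation
rels = 110_000_000           # ~110M relations suggested for 30-bit large primes
size_gib = bytes_per_rel * rels / 2**30
print(f"{size_gib:.1f} GiB")  # → 6.0 GiB
```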

Jarod 2016-01-16 05:42

[QUOTE=Dubslow;422647]It looks like many other jobs you've done. It's around 59 bytes per relation (compressed), and the page suggests ~110M relations for 30 bit large primes, so around 6 GiB. The time should certainly be less than a day, though I don't know which side of 12 hours it would be.[/QUOTE]

Thanks. Can I use the following command to start downloading the file? I am aware there are certain parts missing: [c]wget --continue --limit-rate=64k --user= --password= http://escatter11.fullerton.edu/nfs_data/C170_122_63/C170_122_63.dat.gz; sleep 3600; done[/c] If I download the file this way, do I have to do anything special at the end, or will it all be joined as one file? And I gather we can set --limit-rate to whatever we want? This is the first time I have used such a command.

Dubslow 2016-01-16 10:03

[QUOTE=Speedy51;422659]Thanks. Can I use the following command to start downloading the file? I am aware there are certain parts missing: [c]wget --continue --limit-rate=64k --user= --password= http://escatter11.fullerton.edu/nfs_data/C170_122_63/C170_122_63.dat.gz; sleep 3600; done[/c] If I download the file this way, do I have to do anything special at the end, or will it all be joined as one file? And I gather we can set --limit-rate to whatever we want? This is the first time I have used such a command.[/QUOTE]

You are downloading one file, and one file you will get, just like all the other times you've post-processed. --limit-rate tells wget what the maximum download rate (bandwidth usage) should be.

swellman 2016-01-17 14:22

Reserving 8821_61_minus1.

Dubslow 2016-01-18 08:34

C175_4788_5241
 
1 Attachment(s)
The composite from aliquot sequence 4788 has been factored with a very nice split, p2/p1 < 10:

[code]commencing square root phase
reading relations for dependency 1
read 3876794 cycles
cycles contain 13857722 unique relations
read 13857722 relations
multiplying 13857722 relations
multiply complete, coefficients have about 766.73 million bits
initial square root is modulo 7563427
sqrtTime: 2414
p87 factor: 430802617242534521281556392152002775332022453248704238219208064962458560609604821206017
p88 factor: 3624044865067865720135307373451057373557807330604385686289032335250978044812423406553167
elapsed time 00:40:15[/code]

It took a bit shy of 70 hours on all 8 threads of an i7-2600K, though it's my personal computer and was not idle all the time, including at least two hours where the LA was totally paused. The log will show my various attempts at getting msieve file names and switches correct -- among other things, apparently -ncr doesn't imply -nc3.

Edit: The matrix was around 7.75M, built with target_density=130 (which, as I understand, is quite high, but it also seemed quite over-sieved to me).

[code]Fri Jan 15 04:42:30 2016 commencing linear algebra
Fri Jan 15 04:42:31 2016 read 7752972 cycles
Fri Jan 15 04:42:48 2016 cycles contain 27716916 unique relations
Fri Jan 15 04:52:25 2016 read 27716916 relations
Fri Jan 15 04:53:12 2016 using 20 quadratic characters above 4294917295
Fri Jan 15 04:54:53 2016 building initial matrix
Fri Jan 15 05:00:15 2016 memory use: 3857.5 MB
Fri Jan 15 05:00:19 2016 read 7752972 cycles
Fri Jan 15 05:00:21 2016 matrix is 7752793 x 7752972 (3916.4 MB) with weight 1189883819 (153.47/col)
Fri Jan 15 05:00:21 2016 sparse part has weight 933627118 (120.42/col)
Fri Jan 15 05:02:02 2016 filtering completed in 2 passes
Fri Jan 15 05:02:04 2016 matrix is 7752457 x 7752636 (3916.4 MB) with weight 1189865271 (153.48/col)
Fri Jan 15 05:02:04 2016 sparse part has weight 933620115 (120.43/col)
Fri Jan 15 05:03:42 2016 matrix starts at (0, 0)
Fri Jan 15 05:03:43 2016 matrix is 7752457 x 7752636 (3916.4 MB) with weight 1189865271 (153.48/col)
Fri Jan 15 05:03:43 2016 sparse part has weight 933620115 (120.43/col)
Fri Jan 15 05:03:43 2016 saving the first 48 matrix rows for later
Fri Jan 15 05:03:45 2016 matrix includes 64 packed rows
Fri Jan 15 05:03:46 2016 matrix is 7752409 x 7752636 (3815.1 MB) with weight 1026143909 (132.36/col)
Fri Jan 15 05:03:46 2016 sparse part has weight 922568351 (119.00/col)
Fri Jan 15 05:03:46 2016 using block size 8192 and superblock size 786432 for processor cache size 8192 kB
Fri Jan 15 05:04:21 2016 commencing Lanczos iteration (8 threads)
Fri Jan 15 05:04:21 2016 memory use: 3194.3 MB[/code]

fivemack 2016-01-18 10:03

C219_127_57 done
 
1 Attachment(s)
[code]
Mon Jan 18 08:21:15 2016 p53 factor: 30616003696099954381642063749711730195963203825482041
Mon Jan 18 08:21:15 2016 p166 factor: 7714764781728414548741417186472049141202290325908411867168772990125828157277628427664514249440253270539277416395045457380765814639295127918491951474043017703261065229
[/code]

20.9 hours for 6.58M density-120 matrix on seven cores E5-2650v2 (probably not entirely idle). Log attached.

unconnected 2016-01-18 14:22

2 Attachment(s)
C220_120_79: 11 hours for pretty small 5.3M matrix (TD=120)

[CODE]prp53 factor: 11128045331010217413015279396287804977183688795427299
prp168 factor: 174886193797798712318441610702475795938459285367516715405060911091657516643307944940231653782840826902173758801359189202098933930086984072963064666660932198044232626163
[/CODE]C170_119_79: 18.7 hours for 6.7M matrix (TD=120)

[CODE]prp59 factor: 62816776104045212956746785494921086815091413384959342959841
prp111 factor: 199015708111943239095750091572500193580824535235215583939669395129315035963269076644650780893898808146459131781
[/CODE]

Logs attached.

swellman 2016-01-18 14:29

128^95+95^128 under-sieved?
 
Does 128^95+95^128 have enough relations to ultimately build a matrix? I can attempt to post-process it, but TD will probably be very low. While I am aware that 1 day of BOINC >> 1 day on my machine, wouldn't a bit more sieving be prudent? Either way it will be a few days before I can start downloading.


ETA: 142^141+141^142 should be factored tomorrow.

fivemack 2016-01-18 17:01

I've put 128^95+95^128 up for another 20MQ of sieving - if you won't be downloading it for a few days, it might as well be sieved for those few days. It should get to nearly 400M relations, which ought to run happily at td=120.

131^97+97^131 is going to be another ten days or so.

fivemack 2016-01-18 17:02

Reserving C185_144_74 (ETA Wednesday morning)

fivemack 2016-01-19 10:03

C197_129_53 done
 
1 Attachment(s)
[code]
Tue Jan 19 06:07:27 2016 p70 factor: 1085491588288805069436520806674356851134281433511744858997665696680993
Tue Jan 19 06:07:27 2016 p128 factor: 27061118089147790361908736622070446679633485705722720185082440794491462738556929523387218368030813435778089934853460603652462917
[/code]

17.6 hours for 6.42M density-120 matrix on E5-2650v2, 7 threads. Log attached.

swellman 2016-01-19 23:52

142^141+141^142
 
1 Attachment(s)
[code]
prp80 factor: 66875582620915829456450187931617113249069061933547766065897384063151385935018179
prp105 factor: 239425949455322682181915017148216037791868832164041123360437216429731608295117977387407386730317905645913
[/code]

TD=116, log attached.

fivemack 2016-01-20 11:01

C185_144_74 done
 
1 Attachment(s)
[code]
Wed Jan 20 10:30:36 2016 p50 factor: 93604794623904155804533630242361902033604614039361
Wed Jan 20 10:30:36 2016 p135 factor: 416737245347120216181558111096119309982467264318165393760959208256969107878930999933023875808954169059931534950000225898479463315929729
[/code]

9.7 hours for 4.69M density-120 matrix on E5-2650v2 -t 7

pinhodecarlos 2016-01-20 19:43

Reserving C172_128_61.

pinhodecarlos 2016-01-21 06:04

[QUOTE=pinhodecarlos;423253]Reserving C172_128_61.[/QUOTE]

Unreserve please. Msieve crashing here. Didn't manage to get the error details, msieve just stopped working on the filtering stage. I was trying out the new binaries available here on the forum for the sandy bridge processor.

RichD 2016-01-21 22:27

I'll take C172_128_61 next.

RichD 2016-01-23 00:20

C172_128_61 factored
 
1 Attachment(s)
21.5 hours to solve a 5.95M matrix on a Core-i5/2500K with -t 4, TD=122.
[CODE]p75 factor: 176324105188309864912122164853541463541354532849750720872282127914640356121
p98 factor: 49642668236861617268183142150151873100816078564871983172681613023692994197983614788594739864073441[/CODE]

swellman 2016-01-23 03:32

[QUOTE=swellman;422789]Reserving 8821_61_minus1.[/QUOTE]


[code]
prp67 factor: 2524554436936986001334414048406510203205491562289851627742338660203
prp115 factor: 2274051369955518885228728257978452887762138388986235025993437573289298914028686442688884425694406002151123740071547
[/code]

Xyzzy 2016-01-23 04:27

L1397
 
1 Attachment(s)
[CODE]p56 factor: 18591404391295654344917858476030266274675995857742273569
p204 factor: 358749411103286588128921044837671952971342876890315788275238171354391125411273641633082566812099118401989319401620589468961278338670391751798161393961173502740900021894729866560610180343639080950899816231[/CODE]

Xyzzy 2016-01-23 04:31

As far as we can tell, that p204 is the biggest factor on the "D" and "E" pages.

:mike:

Dubslow 2016-01-23 12:23

[QUOTE=Xyzzy;422548]It takes us days, but we start downloading relations as soon as they start coming in. At first it is hard to keep up but things balance out nicely towards the end.

[c]while true; do nice wget --continue --limit-rate=64k --user=rsals_data --password=***** http://escatter11.fullerton.edu/nfs_data/12_226_plus_7_226/12_226_plus_7_226.dat.gz; sleep 3600; done[/c][/QUOTE]

I've added this to my own script:

[code]url='http://escatter11.fullerton.edu/nfs_data'
name='4051_71_minus1'
user=****
pass=****
slptime=3600
rate=250k
exts='fb ini poly'

for ext in $exts; do
wget -N --user=$user --password=$pass "$url/$name/$name.$ext"
done

while true; do
nice wget --continue --limit-rate=$rate --user=$user --password=$pass "$url/$name/$name.dat.gz"
sleep $slptime
done
[/code]

I'm curious though, why the [c]nice[/c]? Surely a simple (rate limited) TCP download consumes negligible processing power?

Xyzzy 2016-01-23 16:08

[QUOTE=Dubslow;423720][code]exts='fb ini poly'[/code][/QUOTE]
Why do you need the .poly file?

[quote]I'm curious though, why the [c]nice[/c]? Surely a simple (rate limited) TCP download consumes negligible processing power?[/quote]That is just a habit we have from way back when CPU resources were tight. It was always the polite thing to do. (And on one server we deal with even today, running a process that way allows us to skirt the process CPU time limitation.)

FWIW, when we have downloaded the .fb and .ini file, we rename them to msieve.fb and worktodo.ini so that the command line to invoke msieve is simpler.

:mike:

Dubslow 2016-01-23 21:33

[QUOTE=Xyzzy;423740]Why do you need the .poly file?

That is just a habit we have from way back when CPU resources were tight. It was always the polite thing to do. (And on one server we deal with even today, running a process that way allows us to skirt the process CPU time limitation.)

FWIW, when we have downloaded the .fb and .ini file, we rename them to msieve.fb and worktodo.ini so that the command line to invoke msieve is simpler.

:mike:[/QUOTE]

Hmm, there's no :shrug:

Honestly, no real reason. I had the .projection too for a while before I realized it's large and a waste.

Jarod 2016-01-24 04:31

C170_122_63 Done
 
1 Attachment(s)
LA took 13 hours 51 minutes to complete using either 6 or 8 threads on an i7-5960X at 3 GHz.

[C]prp52 factor: 7900083352228908935782292679688341172833928993595953
prp118 factor: 2028402787009378656990445950154672782198265845967931765722769217754896599149319369650857058346177958425237023520001137[/C]


Please find log attached

jyb 2016-01-24 19:01

I can take 8_269_minus_7_269 for post-processing.

unconnected 2016-01-25 07:44

1 Attachment(s)
C165_842592_8032 completed - 35 hours for 8.3M matrix (TD=140).

[CODE]prp67 factor: 1050566961306451588674530244517205327913164128373334255482224505743
prp99 factor: 664910913443536911339260986610284630155514114445454204744433715236688283538021166444218857757023173
[/CODE]

swellman 2016-01-25 15:41

Reserving C178_149_35.

Xyzzy 2016-01-25 15:49

We are pretty sure we know what the difference is between a "p" and a "prp".

Why does msieve sometimes report a "prp"?

:mike:

Xyzzy 2016-01-25 16:05

We had an idea a few days ago:

We are willing to download, package and snail-mail files in case someone is in a situation where they want to do LA work but they do not have the resources to download big files.

Mailing a USB key is slow but we think (?) there is no race to get this work done.

Anyways, if anyone wants a deal like this just let us know.

:mike:

wombatman 2016-01-25 16:07

[QUOTE=Xyzzy;423991]We are pretty sure we know what the difference is between a "p" and a "prp".

Why does msieve sometimes report a "prp"?

:mike:[/QUOTE]

I'm pretty sure msieve just doesn't prove primality on the factors. I believe it checks (via Fermat, maybe?) for probable primality and whether the factor divides the input number.

VBCurtis 2016-01-25 18:31

[QUOTE=wombatman;423994]I'm pretty sure msieve just doesn't prove primality on the factors. I believe it checks (via Fermat, maybe?) for probable primality and whether the factor divides the input number.[/QUOTE]

I am pretty sure this is correct, but below a certain size prp and p are equivalent, in that it has been shown that every number below n digits which passes the test msieve uses is in fact prime. I don't remember what n is: 25ish?

fivemack 2016-01-25 19:14

It depends on the version of msieve; the most recent ones use a full-strength primality test on the factors and report them as 'p'.

jasonp 2016-01-25 23:55

The older versions use 20 passes of Miller-Rabin with random bases. Not much thought had gone into the primality tests Msieve uses until David Cleaver added his APRCL code. Even with the old version, inputs less than the square of the trial factoring bound (10^5) are declared prime. There had been requests for primality proving for years, but I couldn't justify the effort required. It was hard to argue with someone doing all the work, though :)
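As an aside, the "20 passes of Miller-Rabin with random bases" jasonp describes is the standard textbook test; a minimal Python sketch (not msieve's actual code, which is C) might look like this:

```python
import random

def is_probable_prime(n, passes=20):
    """Miller-Rabin with random bases, in the spirit of what older msieve
    versions do. A composite survives each pass with probability < 1/4,
    so 20 passes leave odds under 4**-20 of a false 'prp'."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(passes):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is definitely composite
    return True  # n is a probable prime ("prp")

# The p54 reported earlier in the thread passes, as expected:
print(is_probable_prime(751675369654905088678231667221559046734934416341718309))
```

Note the asymmetry: a `False` result is a proof of compositeness, while `True` only certifies "prp", which is exactly why proving primality (as with the APRCL code) is a separate, harder step.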

RichD 2016-01-26 20:57

I'll take 6883_61_minus1 next.

Xyzzy 2016-01-26 21:27

We wonder how far "iceweasel" will "grow" over time?

There is only one way to find out!

[CODE]top - 15:25:14 up 16 days, 17:59, 6 users, load average: 1.68, 1.41, 1.29
Tasks: 171 total, 2 running, 169 sleeping, 0 stopped, 0 zombie
%Cpu0 : 1.4 us, 0.2 sy, 79.8 ni, 18.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 1.2 us, 0.3 sy, 79.4 ni, 19.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 5.6 us, 1.1 sy, 7.2 ni, 85.8 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 5.1 us, 1.0 sy, 11.5 ni, 82.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem: 16010.64+total, 12386.28+used, 3624.359 free, 54.762 buffers
MiB Swap: 0.000 total, 0.000 used, 0.000 free. 7141.863 cached Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ nTH P COMMAND
15228 m 39 19 2755.7m 2.679g 1.9m R 100.0 17.1 5308:46 1 2 ./msieve -v -nc target_density=112 -t 1
5235 m 20 0 6698.2m 1.690g 53.2m S 6.4 10.8 2344:49 480 1 iceweasel[/CODE]:max:

fivemack 2016-01-26 22:37

I'll take 4091^71-1

ETA evening of 3 February

Xyzzy 2016-01-27 02:14

C211_121_75
 
1 Attachment(s)
[CODE]p68 factor: 22005275904080406107054935816620608456610359842370122577578909913467
p144 factor: 179228860624350281984700611345290739523366763846812168770969826799804801442859659626123581024327695739728437738059506035178171116927770754000849[/CODE]

jyb 2016-01-27 22:50

8_269_minus_7_269 complete
 
1 Attachment(s)
[QUOTE=jyb;423893]I can take 8_269_minus_7_269 for post-processing.[/QUOTE]

Whoa, this one did not get enough ECM, apparently. Although given the 3-way split, even if the 47-digit factor were known, it would still have required the same NFS effort.

[code]
p47 factor: 63114004184218450772318216431038158607109067561
p67 factor: 4741515390811528985749791051638497031569341941056417119817991873989
p120 factor: 747244692061433908388323714497667107975207477527795863826868403853526380407802412454124839197349709661613622645915326801
[/code]

VBCurtis 2016-01-28 00:57

What SNFS difficulty was it? SNFS-230 would merit something near t48, which would have a 20-25% chance to miss a factor that size.

RichD 2016-01-28 07:22

6883_61_minus1 factored
 
1 Attachment(s)
32.5 hours to solve a 7.45M matrix on mostly idle Core-i5/2500K with -t 4, target_density=116 (TD=120 failed).
[CODE]p68 factor: 84273559293405121928979399486848406700565800617077898448855241626079
p115 factor: 5245956891208220007358651770865998231614272255190924227608335479651400690815208991333212112292678008643404661706329[/CODE]

fivemack 2016-01-28 13:44

C259_131_97 done
 
1 Attachment(s)
[code]
Thu Jan 28 02:53:50 2016 prp63 factor: 283614117787866468094423360694273484828422170197058254732041677
Thu Jan 28 02:53:50 2016 prp83 factor: 18465028266988351471348374598083345672675509533258185596008206139569807614139701509
Thu Jan 28 02:53:50 2016 prp114 factor: 981108110414542945638245088752661429623158530523661605003815000859986790006193284288333961708447242654941015393793
[/code]

363.9 hours on six threads i7/5820K for a 25.8M matrix with density 110.

Log attached

jyb 2016-01-28 18:44

[QUOTE=VBCurtis;424359]What SNFS difficulty was it? SNFS-230 would merit something near t48, which would have a 20-25% chance to miss a factor that size.[/QUOTE]

How did you calculate t48 for an SNFS 230? I thought the rule of thumb was about 2/9 for SNFS jobs, making it more like a t51.

But in any case, this job was actually an SNFS 245, as you can see from the NFS@Home status page.

VBCurtis 2016-01-28 23:26

I use 0.21 for SNFS and 0.31 for GNFS. I also rounded down, as 230*0.21 is 48.3.
One of RDS' tirades was about wasted effort on too much ECM, including a challenge to consider the expected number of factors one would find during the second half of the 2/9ths ECM run compared to the time spent on that second half.

Running a t51 vs a t48 should find roughly 3/50 factors, or a factor 6% of the time. If I followed his suggestion correctly, if the time to run from t48 to t51 is more than 6% of the SNFS time one should just jump to SNFS and skip the t48 to t51 curves.

I used to run ECM for 20-25% of the time I expected SNFS to take; now I run for ~15% of the time. I imagine it's not a whole lot more efficient overall, but I enjoy trying to eke out percentage-points of efficiency.
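These rules of thumb are easy to capture in a few lines. A sketch using the 0.21/0.31 multipliers from the post above (the multipliers are VBCurtis's heuristics, not established constants):

```python
def ecm_target(digits, kind="snfs"):
    """Rule-of-thumb ECM pretesting level ("t-level") before NFS:
    0.21 * difficulty for SNFS, 0.31 * size for GNFS, rounded down."""
    factor = {"snfs": 0.21, "gnfs": 0.31}[kind]
    return int(digits * factor)

print(ecm_target(230, "snfs"))  # → 48 (230 * 0.21 = 48.3, rounded down)
print(ecm_target(245, "snfs"))  # → 51 (matching jyb's ~t51 for SNFS 245)
```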

swellman 2016-01-29 00:56

I agree that running a lot of extra ECM seems to be the norm, myself included. Many seem to fear the dreaded "ECM miss". Fivemack discusses his [url=http://www.mersenneforum.org/showpost.php?p=416442&postcount=2370]philosophy on the issue here[/url].

[quote=Fivemack]
I have my own rule of thumb for estimating.

A 175-digit number will take about 20,000 thread-hours to sieve.

Once you've done a t55, the probability of finding anything by doing a t60 is about (60-55)/55, so one in eleven or so.

So it's not worth doing more than 2,000 thread-hours of ECM.

The 17700@11e7 is already about 2,000 thread-hours of ECM (one curve at that level takes about ten minutes).

So I would say you've done enough ECM, and start polynomial selection now.[/quote]

I'm a hobbyist, so my opinion is unimportant, but I like this approach. If one estimates that sieving a number will take, say, 16 weeks, why spend 9 weeks performing ECM to t55 with only a small chance of finding a factor? At an assumed hit chance of 10%, one should stop ECM after 1.6 weeks. In other words, read RDS' paper. :razz:
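fivemack's stopping rule from the quote can be sketched as follows (the (60-55)/55 probability estimate is his heuristic, not a theorem):

```python
def extra_ecm_budget(sieve_hours, t_done, t_next):
    """After completing ECM to level t_done, the chance that pushing on to
    t_next finds a factor is roughly (t_next - t_done) / t_done, so cap
    further ECM at that fraction of the expected sieving time."""
    p_hit = (t_next - t_done) / t_done
    return p_hit * sieve_hours

# The C175 example from the quote: ~20,000 thread-hours to sieve, t55 done.
print(round(extra_ecm_budget(20_000, 55, 60)))  # → 1818, i.e. ~2,000 hours
```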

swellman 2016-01-29 23:22

11119^61-1
 
1 Attachment(s)
Nice split

[code]
prp114 factor: 347465404642111419190929084225284009368912210169089255954683658838468291976138075959889956129218858290508491165827
prp130 factor: 1671365001528609652619239678423862909748470021652718667395482160027770270390209697006988658244653691454863777261043290054058325963
[/code]


128^95+95^128 should finish on Feb 7.

Xyzzy 2016-01-30 15:46

C175_127_66
 
1 Attachment(s)
[CODE]p75 factor: 276732322901760807490821515624819447404789382450767782997296862812222450299
p101 factor: 35226442252520220836091533003767704920725300996778426791647330489565330439241141377295887822892752357[/CODE]

Xyzzy 2016-02-01 00:12

[QUOTE=Xyzzy;424183]We wonder how far "iceweasel" will "grow" over time?

There is only one way to find out!

[CODE]top - 15:25:14 up 16 days, 17:59, 6 users, load average: 1.68, 1.41, 1.29
Tasks: 171 total, 2 running, 169 sleeping, 0 stopped, 0 zombie
%Cpu0 : 1.4 us, 0.2 sy, 79.8 ni, 18.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 1.2 us, 0.3 sy, 79.4 ni, 19.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 5.6 us, 1.1 sy, 7.2 ni, 85.8 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 5.1 us, 1.0 sy, 11.5 ni, 82.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem: 16010.64+total, 12386.28+used, 3624.359 free, 54.762 buffers
MiB Swap: 0.000 total, 0.000 used, 0.000 free. 7141.863 cached Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ nTH P COMMAND
15228 m 39 19 2755.7m 2.679g 1.9m R 100.0 17.1 5308:46 1 2 ./msieve -v -nc target_density=112 -t 1
5235 m 20 0 6698.2m 1.690g 53.2m S 6.4 10.8 2344:49 480 1 iceweasel[/CODE]:max:[/QUOTE]Today "iceweasel" refused to create new tabs, so we had to kill it.

[CODE]top - 10:24:21 up 21 days, 13:03, 6 users, load average: 2.71, 2.31, 2.17
Tasks: 173 total, 2 running, 171 sleeping, 0 stopped, 0 zombie
%Cpu0 : 2.1 us, 0.3 sy, 65.3 ni, 32.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 1.6 us, 0.3 sy, 65.8 ni, 32.3 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 5.3 us, 1.0 sy, 9.7 ni, 83.8 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 4.4 us, 0.9 sy, 25.2 ni, 69.5 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem: 16010.64+total, 15802.32+used, 208.324 free, 117.777 buffers
MiB Swap: 0.000 total, 0.000 used, 0.000 free. 9995.824 cached Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ nTH P COMMAND
5235 m 20 0 7920.1m 1.932g 59.9m S 24.7 12.4 3107:08 607 2 iceweasel
28675 m 39 19 2820.7m 2.672g 2.0m R 172.8 17.1 2917:11 2 0 ./msieve -v -nc target_density=124 -t 2[/CODE]:mike:

swellman 2016-02-01 22:08

1 Attachment(s)
[QUOTE=swellman;423989]Reserving C178_149_35.[/QUOTE]

[code]
prp81 factor: 145741348464954811277944598989536993974825906039681318872935013904739051062333203
prp98 factor: 17470146436308210690850836799475529833777583094451053278828373089719561867487272819402774127013327
[/code]

Xyzzy 2016-02-02 15:11

C183_125_69
 
1 Attachment(s)
[CODE]p87 factor: 163973299401013821937972360705685684560159584251985578961493966656541070584091698955807
p97 factor: 4331338347805912174992013703582393901274763393517556040001510362053544628969416669400405928078457[/CODE]

Xyzzy 2016-02-02 15:15

We have run into a problem with 12269_61_minus1.

No matter what target density we ask it to work with, it fails with the same error. It even fails with no target density assigned.

[CODE]Mon Feb 1 22:12:51 2016 Msieve v. 1.53 (SVN 991)
Mon Feb 1 22:12:51 2016 random seeds: 3df29c83 8a0e2986
Mon Feb 1 22:12:51 2016 factoring 213100950507038902323424992988338306706137172668512676217767551268467441188182705072231889035199625853826867350764930484042586708996940146726901352417099130307985502480978489510231192438490927623416476902553155595412922322827511674649043799356901 (246 digits)
Mon Feb 1 22:12:53 2016 no P-1/P+1/ECM available, skipping
Mon Feb 1 22:12:53 2016 commencing number field sieve (246-digit input)
Mon Feb 1 22:12:53 2016 R0: -77284368857697826790963712513825284497801
Mon Feb 1 22:12:53 2016 R1: 1
Mon Feb 1 22:12:53 2016 A0: -1
Mon Feb 1 22:12:53 2016 A1: 0
Mon Feb 1 22:12:53 2016 A2: 0
Mon Feb 1 22:12:53 2016 A3: 0
Mon Feb 1 22:12:53 2016 A4: 0
Mon Feb 1 22:12:53 2016 A5: 0
Mon Feb 1 22:12:53 2016 A6: 12269
Mon Feb 1 22:12:53 2016 skew 1.00, size 1.037e-12, alpha 2.123, combined = 1.145e-13 rroots = 2
Mon Feb 1 22:12:53 2016
Mon Feb 1 22:12:53 2016 commencing relation filtering
Mon Feb 1 22:12:53 2016 estimated available RAM is 16010.6 MB
Mon Feb 1 22:12:53 2016 commencing duplicate removal, pass 1
Mon Feb 1 22:26:28 2016 error -11 reading relation 59038814
Mon Feb 1 22:38:04 2016 error -15 reading relation 110815505
Mon Feb 1 22:47:22 2016 skipped 1211 relations with b > 2^32
Mon Feb 1 22:47:22 2016 skipped 1 relations with composite factors
Mon Feb 1 22:47:22 2016 found 9320901 hash collisions in 147973299 relations
Mon Feb 1 22:47:48 2016 commencing duplicate removal, pass 2
Mon Feb 1 22:49:21 2016 found 0 duplicates and 147973299 unique relations
Mon Feb 1 22:49:21 2016 memory use: 394.4 MB
Mon Feb 1 22:49:21 2016 reading ideals above 100728832
Mon Feb 1 22:49:21 2016 commencing singleton removal, initial pass
Mon Feb 1 23:15:35 2016 memory use: 3012.0 MB
Mon Feb 1 23:15:36 2016 reading all ideals from disk
Mon Feb 1 23:15:41 2016 memory use: 3079.3 MB
Mon Feb 1 23:15:57 2016 commencing in-memory singleton removal
Mon Feb 1 23:16:13 2016 begin with 147973299 relations and 152497506 unique ideals
Mon Feb 1 23:19:06 2016 reduce to 64353012 relations and 56043611 ideals in 21 passes
Mon Feb 1 23:19:06 2016 max relations containing the same ideal: 26
Mon Feb 1 23:19:15 2016 reading ideals above 720000
Mon Feb 1 23:19:16 2016 commencing singleton removal, initial pass
Mon Feb 1 23:33:37 2016 memory use: 1506.0 MB
Mon Feb 1 23:33:37 2016 reading all ideals from disk
Mon Feb 1 23:33:41 2016 memory use: 2590.8 MB
Mon Feb 1 23:33:56 2016 keeping 67306010 ideals with weight <= 200, target excess is 335210
Mon Feb 1 23:34:09 2016 commencing in-memory singleton removal
Mon Feb 1 23:34:22 2016 begin with 64353012 relations and 67306010 unique ideals
Mon Feb 1 23:36:49 2016 reduce to 64347395 relations and 67300393 ideals in 13 passes
Mon Feb 1 23:36:49 2016 max relations containing the same ideal: 200
[B]Mon Feb 1 23:37:00 2016 filtering wants 1000000 more relations[/B]
Mon Feb 1 23:37:00 2016 elapsed time 01:24:09[/CODE]:help:
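For anyone reading such logs: the "filtering wants 1000000 more relations" line is foreshadowed by the counts further up. A crude external check (not anything msieve does for you) is to compare unique relations against unique ideals before singleton removal:

```python
# Crude undersieving check, using the counts from the filtering log above.
# A matrix can only be built once relations exceed ideals by a comfortable
# surplus; the exact excess msieve wants is internal, this only gives the sign.

def relation_excess(unique_relations, unique_ideals):
    """Positive surplus suggests a matrix can be built; negative means sieve more."""
    return unique_relations - unique_ideals

# Counts from the first in-memory singleton-removal pass in the log:
print(relation_excess(147973299, 152497506))  # negative -> more sieving needed
```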

pinhodecarlos 2016-02-02 16:38

Mike, it's too soon to download data from the server; more relations are needed.

fivemack 2016-02-02 16:38

That's fine, it just needs a bit more sieving. I have given it 40MQ more and put it back into the queue.

VictordeHolland 2016-02-02 20:00

[QUOTE=Xyzzy;424798]Today "iceweasel" refused to create new tabs, so we had to kill it.
:mike:[/QUOTE]
Browsers are resource hogs; Firefox sometimes uses more than 1GB on my machine. After a restart it usually goes down significantly.

fivemack 2016-02-03 09:50

4091-71 done
 
1 Attachment(s)
[code]
Wed Feb 3 08:48:00 2016 p95 factor: 12988503186222327646421248730895222403293905784149398502620481281112320157272209614019269690491
Wed Feb 3 08:48:00 2016 p159 factor: 518335114642293378691256234379505566150951390081075444098532223237755512044427487395657159499307225212792813634256606208660579492770434929349443574020113409031
[/code]

119.7 hours on six cores of an i7-5820K for a 15.6M density-120 matrix. Log attached.

swellman 2016-02-04 00:22

Reserving C161_P201_plus_1 for post processing, once sieving nears completion.

swellman 2016-02-04 21:51

146^61+61^146
 
Reserving 146^61+61^146 for post processing.

unconnected 2016-02-05 14:04

1 Attachment(s)
4051^71-1 completed - 90 hours for a 12.99M matrix. Log attached.

[CODE]prp75 factor: 279274616733523689420549341866562257753979070922335553016705017106816974741
prp179 factor: 12118051067257084674115490347233065965201598080089357315147723455637969877312374084872855670357510735066190365056976013646056286675455603648094759669868167197319449527159692130881
[/CODE]

swellman 2016-02-06 16:43

I'll take 12479_61_minus1. Thanks.

swellman 2016-02-07 14:13

128^95+95^128 Factored
 
[code]
p68 = 23144924940060930504142723222526535829414212140961804786225308913977
p71 = 67007404223119987076182324317164971756140037268878967582148260231233633
p72 = 878501540133886952102725149264266226184054224758023789229451091417720817
[/code]

swellman 2016-02-07 15:54

1 Attachment(s)
Log file for 128^95+95^128.

Xyzzy 2016-02-10 19:02

9199_59_minus1
 
1 Attachment(s)
[CODE]p72 factor: 263615508766572376761965157810265545216275677830154922343498587550270229
p115 factor: 7994398119225158258804307836393693167720834296122169067092886389267152091868349999345814661476258564700539077285423[/CODE]

Xyzzy 2016-02-10 20:30

Usually "D" jobs are around 100 to 125 hours on our NUC.

We just started 12269_61_minus1 (TD=100) and the ETA is over 1,100 hours.

Did we do something wrong?

:mike:

[CODE]Wed Feb 10 13:24:55 2016 commencing linear algebra
Wed Feb 10 13:25:10 2016 read 20345397 cycles
Wed Feb 10 13:26:06 2016 cycles contain 66603558 unique relations
Wed Feb 10 13:33:11 2016 read 66603558 relations
Wed Feb 10 13:36:04 2016 using 20 quadratic characters above 4294917295
Wed Feb 10 13:42:09 2016 building initial matrix
Wed Feb 10 13:56:21 2016 memory use: 8343.3 MB
Wed Feb 10 13:56:35 2016 read 20345397 cycles
Wed Feb 10 13:56:38 2016 matrix is 20345220 x 20345397 (8219.0 MB) with weight 2351373112 (115.57/col)
Wed Feb 10 13:56:38 2016 sparse part has weight 1951095079 (95.90/col)
Wed Feb 10 14:05:03 2016 filtering completed in 2 passes
Wed Feb 10 14:05:08 2016 matrix is 20344558 x 20344735 (8218.9 MB) with weight 2351354016 (115.58/col)
Wed Feb 10 14:05:08 2016 sparse part has weight 1951089631 (95.90/col)
Wed Feb 10 14:06:37 2016 matrix starts at (0, 0)
Wed Feb 10 14:06:40 2016 matrix is 20344558 x 20344735 (8218.9 MB) with weight 2351354016 (115.58/col)
Wed Feb 10 14:06:40 2016 sparse part has weight 1951089631 (95.90/col)
Wed Feb 10 14:06:40 2016 saving the first 48 matrix rows for later
Wed Feb 10 14:06:43 2016 matrix includes 64 packed rows
Wed Feb 10 14:06:46 2016 matrix is 20344510 x 20344735 (7917.1 MB) with weight 1985767002 (97.61/col)
Wed Feb 10 14:06:46 2016 sparse part has weight 1871966901 (92.01/col)
Wed Feb 10 14:06:47 2016 using block size 8192 and superblock size 294912 for processor cache size 3072 kB
Wed Feb 10 14:08:43 2016 commencing Lanczos iteration (2 threads)
Wed Feb 10 14:08:43 2016 memory use: 6798.9 MB
Wed Feb 10 14:14:03 2016 linear algebra at 0.0%, ETA 1160h 1m
Wed Feb 10 14:15:48 2016 checkpointing every 20000 dimensions[/CODE]

debrouxl 2016-02-10 21:29

A 20Mx20M matrix with density 100 is fantastically large for an SNFS difficulty 249 task, so it looks like the task was significantly undersieved. How much RAM does your NUC have?

If the numbers on the "Crunching" page are to be believed (= if the new numbers are not subject to the recent glitch), all of C164_P207_plus_1, C162_842592_8035, C157_933436_12482 and C174_135_52, which were moved to the "Queued for post-processing" state, are way undersieved, and at least C157_933436_12482 has fewer relations than the minimum count to build a (huge) matrix.
I do intentionally queue numbers with short ranges, and adjust them later - don't mistake my short ranges for proper ranges :smile:
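As a rough sanity check on how badly a 20M matrix hurts: block Lanczos runs about N iterations over ~N*density nonzeros, so runtime grows roughly as N² * density. This is a back-of-envelope model, not msieve's own estimator; the ~13M/density-120/~110 h baseline is assumed from the "D jobs" figure above, and 20.3M at ~96/col is from the posted log.

```python
# Back-of-envelope block Lanczos scaling: time ~ dimension^2 * density.
# Baseline (assumed "D" job): ~13M matrix, density ~120, ~110 hours.

def scaled_hours(base_hours, base_dim_m, base_density, dim_m, density):
    """Scale a known LA runtime to a new matrix size and density."""
    return base_hours * (dim_m / base_dim_m) ** 2 * (density / base_density)

print(round(scaled_hours(110, 13.0, 120, 20.3, 96)))  # -> 215
```

Even this model only predicts a few hundred hours, so an ETA of 1160 h suggests the NUC is also hitting memory limits, not just a bigger matrix.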

Xyzzy 2016-02-10 21:52

Our NUC has 16GiB memory.

We are trying to eliminate downtime between jobs. We have C164_P207_plus_1 mostly downloaded, but it might also need additional sieving, so we are not sure what to do.

:mike:

pinhodecarlos 2016-02-11 04:24

[QUOTE=Xyzzy;425910]Our NUC has 16GiB memory.

We are trying to eliminate downtime between jobs. We have C164_P207_plus_1 mostly downloaded but it might also need additional sieving so we are not sure what to do.

:mike:[/QUOTE]

Mike, I think you should wait until the "Est. Pending Rels" figure is less than 1M before you start downloading. You'll always have some downtime between jobs; maybe put the cores to sieving while you wait for the numbers to be post-processed?!

fivemack 2016-02-13 12:02

Post-processing C162_842592_8035

unconnected 2016-02-14 00:19

Reserving C157_933436_12482.

swellman 2016-02-14 01:27

Reserving C174_135_52.

swellman 2016-02-15 16:22

1 Attachment(s)
[QUOTE=swellman;425139]Reserving C161_P201_plus_1 for post processing, once sieving nears completion.[/QUOTE]

Log attached.

[code]
prp79 factor: 1807477082986816951166282697707063909104383145306133523585065496662325622935467
prp83 factor: 46301491712468944269319808184915150324840849319698869338015195602580697770742797771
[/code]

swellman 2016-02-15 23:04

Reserving C217_134_64 for post processing.

12479_61_minus1 and C174_135_52 are both in LA and should be factored by Thursday.

fivemack 2016-02-16 10:37

C162_842592_8035 done
 
[code]
Mon Feb 15 17:15:19 2016 p72 factor: 451249779191541589867087927176202587301358260921847679079661458462065601
Mon Feb 15 17:15:19 2016 p90 factor: 248795141121757851813715374557702115003250234458368592829899368129068471680674246278461977
[/code]

51 hours for a 6.50M matrix with density 120 on four cores of an i5-750 (original 27" iMac)

Log at [url]http://pastebin.com/f18nYh5E[/url]

VictordeHolland 2016-02-17 00:30

I'll take [B]C166_P172_plus_1[/B]

