C221_118_81 now running.
Reserving C219_127_57 and C197_129_53 to run over the weekend.
I'll attempt to queue the OP numbers posted by William probably tomorrow.
While 15e is more efficient in that range (but the 15e queue is full), 14e can usually deal with GNFS difficulty 165-170 tasks, especially with a good poly. SNFS difficulty 250+ with a sextic polynomial on 14e is a stretch.
[QUOTE=debrouxl;422457]I'll attempt to queue the OP numbers posted by William probably tomorrow.
While 15e is more efficient in that range (but the 15e queue is full), 14e can usually deal with GNFS difficulty 165-170 tasks, especially with a good poly. SNFS difficulty 250+ with a sextic polynomial on 14e is a stretch.[/QUOTE]Thanks. That indicates where I should devote GPU effort in the next month or two. Paul
[QUOTE=Dubslow;422348]Is there a particular reason for the use of gzip for compression? bzip2 has a rather better compression ratio (at a computational cost) while still being more-or-less widely available.[/QUOTE]
[QUOTE=frmky;422366]The results are compressed by the BOINC clients before they are returned to the server. The server just checks that they are valid compressed relations, then concatenates them into the file you download.[/QUOTE] Furthermore, removing duplicates would also be a massive savings of bandwidth. It would perhaps require uncompressing, unless someone extended remdups4 with zlib, but in that case it would become practical to use bzip2. I'm bringing this up because I have substantially worse internet than I've had in years past; it took on the order of 12 hours to download 22 GiB of just over 400M relations, of which 63M were duplicates (and so a waste of bandwidth). Besides the connection being slower, it's also a connection where total bandwidth consumption is monitored -- and 22 GiB is not insubstantial. (It definitely didn't help that I messed up and needed to do it *again* -- but that was my fault :razz:)
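A minimal sketch of the server-side dedup idea discussed above: stream the gzipped relation file and keep only unique relations, keyed on the leading "a,b" pair (the same key remdups uses). The filename handling and exact line format are assumptions for illustration only, not the NFS@Home implementation.

```python
import gzip

def dedup_relations(in_path, out_path):
    """Copy a gzipped relation file, dropping duplicate relations.

    Relations are assumed to be one per line, in the usual
    "a,b:primes:primes" text format; the "a,b" prefix identifies
    a relation. Returns (raw_count, unique_count).
    """
    seen = set()
    raw = kept = 0
    with gzip.open(in_path, "rt") as fin, gzip.open(out_path, "wt") as fout:
        for line in fin:
            raw += 1
            key = line.split(":", 1)[0]  # "a,b" prefix
            if key not in seen:
                seen.add(key)
                kept += 1
                fout.write(line)
    return raw, kept
```

For relation sets in the hundreds of millions the `set` would be replaced by a hash-on-disk or the split-file approach remdups takes, but the bandwidth argument is the same: duplicates never leave the server.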
C221_118_81 done
1 Attachment(s)
[code]
Fri Jan 15 07:37:13 2016  p56 factor: 66604751882840716203190547146724002766120346665371044853
Fri Jan 15 07:37:13 2016  p165 factor: 489911686445026569674165955928329541002751620798887655641477577459794398906029747989127094102551808131327740546125837492604682107848229561628575767032878867858107417
[/code]
12.8 hours for a 5.7M matrix on an E5-2650v2 with -t 7
[QUOTE=Dubslow;422531]I'm bringing this up because I have substantially worse internet that I've had years past; it took on the order of 12 hrs to download 22GiB of just over 400M relations, of which 63M were duplicates (and so a waste of bandwidth).[/QUOTE]It takes us days, but we start downloading relations as soon as they start coming in. At first it is hard to keep up but things balance out nicely towards the end.
[c]while true; do nice wget --continue --limit-rate=64k --user=rsals_data --password=***** http://escatter11.fullerton.edu/nfs_data/12_226_plus_7_226/12_226_plus_7_226.dat.gz; sleep 3600; done[/c]
1373_79_minus1 results
1 Attachment(s)
[B]1373_79_minus1[/B]
[code]
p63 factor: 167155760887752250734824423685255209540133209961357160033907273
p126 factor: 525234033640980062974735094976085151972433095270924958325303347977625755049053805905136035990038690127663787469942009602039239
[/code]
12.4M matrix with TD=110; about 109 hours on all 4 cores of a 3770K
[QUOTE=Dubslow;422531]Furthermore, removing duplicates would also be a massive savings of bandwidth. It would perhaps require uncompressing, unless someone extended remdups4 with zlib, but in that case it would become practical to use bzip2.
[/QUOTE] Bandwidth is not an issue for me (theoretical 90 Mbit, in practice 50-80 Mbit depending on the time of day). But needless to say, if we can limit the download to uniques then I would support that. However, in that case you can't check the duplicate ratio, unless the original number of relations is stored in a table. [edit] I'll be on a skiing holiday from tomorrow till Saturday the 23rd. My machines will be off in that time window.
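As a back-of-the-envelope check of that bookkeeping: the server only needs to store the raw relation count alongside the deduplicated file to recover the duplicate ratio. Using the rough figures reported earlier in the thread (just over 400M relations downloaded, 63M of them duplicates):

```python
# Sketch of the duplicate-ratio bookkeeping; the counts below are the
# approximate figures from the 22 GiB download mentioned above.
raw_relations = 400_000_000   # stored server-side before dedup
duplicates = 63_000_000
unique = raw_relations - duplicates
dup_ratio = duplicates / raw_relations  # 0.1575, i.e. about 15.75%
print(f"unique: {unique}, duplicate ratio: {dup_ratio:.2%}")
```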
[QUOTE=xilman;422241]
These are the sub-S250 remainders:
[c]226.37 7,265- C168
227.22 7,266- C173
227.27 8,249+ C178
227.27 8,249- C173
227.35 6,289- C160
227.44 2,746- C219
227.58 5,322+ C207
227.74 4,374- C215[/c]
Paul[/QUOTE]
Paul,

Let's add them to the queue to help you complete a project milestone. I can throw two machines at the post-processing.

Carlos
1 Attachment(s)
[QUOTE=swellman;421340]Reserving 1847_71_minus1.[/QUOTE]
A nice triple.
[code]
prp54 factor: 751675369654905088678231667221559046734934416341718309
prp60 factor: 860344312225890325626692969152005944727762796494291088905929
prp113 factor: 97939282732986918883657823741022254384311180414447883360700495089044693365385695961569969049390093405927645342427
[/code]
TD=98(!), as attempts at 114 and 106 failed to build a matrix.
It shouldn't be hard for the server to track the original relation count before we download only uniques. Thanks for the tip, Mike; I'll probably use that myself.