A lot of CPU time has been wasted
NFS@Home has run GW_3_515 as a difficulty-248 SNFS job with 31-bit large primes, even though it is a 156-digit composite that GNFS with 30-bit large primes could finish in about ten days on a single quad-core Haswell!
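For anyone wondering how SNFS difficulty compares to GNFS digits: the standard heuristic running times of the two variants differ only in the constant in the exponent (a back-of-the-envelope comparison, not a precise cost model):
[CODE]GNFS: L_N[1/3, (64/9)^(1/3)] = exp( (1.923 + o(1)) (ln N)^(1/3) (ln ln N)^(2/3) )
SNFS: L_N[1/3, (32/9)^(1/3)] = exp( (1.526 + o(1)) (ln N)^(1/3) (ln ln N)^(2/3) )[/CODE]
A conversion factor often quoted around here is GNFS-equivalent digits of roughly 0.69 times the SNFS difficulty, which puts SNFS-248 in the neighborhood of GNFS-170, far more work than running GNFS on the 156-digit composite directly.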
Greg has four new toys sieving for NFS@Home.
[QUOTE=pinhodecarlos;410787]Greg has four new toys sieving for NFS@Home.[/QUOTE]
Not new. Just otherwise idle for a bit.
[QUOTE=frmky;410809]Not new. Just otherwise idle for a bit.[/QUOTE]
Did you try to run any post-processing tasks on them (like the ones you did for 14e)?
G6p490
From 6 Apr 2015 to 10 Aug 2015, the number 6^490+1 was sieved using GNFS with q below 2000M.
Then, starting around 11 Sep 2015, a task named G6p490b appeared, and its q range seems to be the same. Is there anything wrong?
One is the rational factor base sieve, the other the algebraic factor base sieve.
[QUOTE=pinhodecarlos;410840]One is the rational factor base sieve, the other the algebraic factor base sieve.[/QUOTE]
I hope so, but as far as I remember, G6p490 was sieved with the -a option, which means it sieved on the algebraic side. G6p490b still uses the -a option; shouldn't this option change to -r? [url]http://escatter11.fullerton.edu/nfs/result.php?resultid=47484710[/url]
[CODE]<core_client_version>7.2.42</core_client_version>
<![CDATA[
<stderr_txt>
boinc initialized
work files resolved, now working
-> ../../projects/escatter11.fullerton.edu_nfs/lasieve5f_1.10_x86_64-pc-linux-gnu
[COLOR="Red"]-> -a[/COLOR]
-> -f
-> 160170000
-> -c
-> 2000
-> -R
-> ../../projects/escatter11.fullerton.edu_nfs/G6p490.poly
-> -o
-> ../../projects/escatter11.fullerton.edu_nfs/G6p490b_160170_0_0
total yield: 1358, q=160172003 (1.42758 sec/rel, 100.15000 % done of 2000)
called boinc_finish
</stderr_txt>
]]>[/CODE]
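To make the difference concrete, here is the invocation from that log with only the side-selection flag changed (assuming lasieve5f follows the usual ggnfs convention where -a sieves the algebraic side and -r the rational side; paths shortened):
[CODE]# As actually run (algebraic side), per the stderr above:
./lasieve5f -a -f 160170000 -c 2000 -R G6p490.poly -o G6p490b_160170_0_0
# Rational-side equivalent, if that is what was intended:
./lasieve5f -r -f 160170000 -c 2000 -R G6p490.poly -o G6p490b_160170_0_0[/CODE]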
[QUOTE=wreck;410838]From 6 Apr 2015 to 10 Aug 2015, the number 6^490+1 was sieved using GNFS with q below 2000M.
Then, starting around 11 Sep 2015, a task named G6p490b appeared, and its q range seems to be the same. Is there anything wrong?[/QUOTE] Yep. A corrupted file got propagated to the backup before being discovered. We need to redo everything up to 1500M. And I'm now saving incremental backups. :davieddy:
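In case anyone is curious, a minimal sketch of one way to keep hard-link incremental snapshots with rsync (the paths are made up, and this is not necessarily what the server actually runs):
[CODE]# Each snapshot hard-links unchanged files against the previous one,
# so a corrupted file overwriting the latest copy no longer destroys history.
today=$(date +%F)
rsync -a --link-dest=/backup/latest /data/G6p490/ /backup/$today/
ln -sfn /backup/$today /backup/latest[/CODE]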
[QUOTE=frmky;410857]Yep. A corrupted file got propagated to the backup before being discovered. We need to redo everything up to 1500M. And I'm now saving incremental backups. :davieddy:[/QUOTE]
Maybe a call to arms for help with re-sieving should be issued in the news. Double points for this rework so we can easily sieve back up to 1500M. Edit: Requests for help posted in several team forums.
Hey Greg, you're flying, great output.
[QUOTE=pinhodecarlos;410829]Did you try to run any post-processing tasks on them (like the ones you did for 14e)?[/QUOTE]
Those very ones were run on a node in this cluster.