#595
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
1001101010011₂ Posts
Installed JDownloader and managed to download 11_10_233m; now running the filtering phase...
Carlos
#596
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
4947₁₀ Posts
11_10_233m LA ETA is ~10 days. Anyway, I think more sieving was needed, but I will keep going with the post-processing.
Carlos
#597
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3×17×97 Posts
I think we need to go back to the sieving subject. I really think that more sieving is needed for the lasieved integers. Examples, on my IB Core i7, with 4 threads running:
L1201, 218-digit input, lasievee, SNFS 251.0:
setting target matrix density to 110.0
found 74947407 hash collisions in 314763320 relations
found 80865618 duplicates and 235116244 unique relations
matrix is 10065336 x 10065561 (4163.7 MB) with weight 1053317911 (104.65/col)
sparse part has weight 990830628 (98.44/col)
linear algebra at 0.0%, ETA 161h38m

L1803, 216-digit input, lasievee, SNFS 251.2:
setting target matrix density to 110.0
found 74270715 hash collisions in 329955633 relations
found 76882467 duplicates and 253885458 unique relations
matrix is 10657840 x 10658088 (2884.9 MB) with weight 695106766 (65.22/col)
sparse part has weight 649681132 (60.96/col)
linear algebra at 0.0%, ETA 128h26m

11_10_233m, 183-digit input, lasieved, SNFS 244:
setting target matrix density to 110.0
found 39972611 hash collisions in 217657588 relations
found 39223437 duplicates and 179652801 unique relations
matrix is 11770611 x 11770836 (4927.7 MB) with weight 1263586357 (107.35/col)
sparse part has weight 1174068997 (99.74/col)
linear algebra at 0.0%, ETA 229h50m
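For a quick comparison of the three runs, the duplicate fractions can be computed from the figures quoted above (a small sketch; the `jobs` dict just holds numbers copied from the logs, it is not msieve output):

```python
# Duplicate fractions for the three filtering runs quoted above.
# Each entry is (raw relations, duplicates), copied from the msieve logs.
jobs = {
    "L1201":      (314_763_320, 80_865_618),
    "L1803":      (329_955_633, 76_882_467),
    "11_10_233m": (217_657_588, 39_223_437),
}

for name, (raw, dups) in jobs.items():
    print(f"{name}: {dups / raw:.1%} duplicates")
```

The two lasievee (14e) jobs sit around 23-26% duplicates, while the lasieved 11_10_233m job is at about 18%.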
#598
Sep 2009
977 Posts
217M raw relations, with less than 20% duplicates, is alright for a 31-bit LPs task of SNFS difficulty 244 sieved with 14e. That's about what we've used at RSALS for a long time.
The forecast for L1752, also of SNFS difficulty 244, is around 277M raw relations, which is a huge amount of oversieving. Sieving the 60M extra raw relations took two extra days, so the factorization will not finish much sooner than it would have with the usual amount of sieving (in which case post-processing could have started a couple of days ago).
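As a rough back-of-the-envelope check on the figures above (a sketch; it assumes sieving cost scales roughly linearly with the number of raw relations, which is only approximately true):

```python
# Extra sieving effort implied by the L1752 forecast versus the usual target.
usual_target = 217e6    # typical raw-relation count for a 31-bit LPs, ~SNFS 244 job
l1752_forecast = 277e6  # forecast raw relations for L1752, from the post above

extra_fraction = (l1752_forecast - usual_target) / usual_target
print(f"extra raw relations: {extra_fraction:.0%}")
```

That works out to roughly 28% more raw relations than the long-standing target.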
#599
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3×17×97 Posts
Quote:
NFS@Home sievers only care about points; they don't understand the math or the post-processing work that is needed. We need to take advantage of the available CPU, so why not oversieve everything, as we are doing with L1752? You need to read what I already said in this thread, as did Batalov (http://www.mersenneforum.org/showpos...&postcount=449).
Last fiddled with by pinhodecarlos on 2013-05-13 at 17:00. Reason: Added link
#600
Sep 2009
977 Posts
Quote:
Quote:
I don't run much NFS LA nowadays; my own cores have been working mainly on World Community Grid since December 2004.
Quote:
Quote:
I can fully understand that post-processors wish otherwise, but as long as post-processing is not the bottleneck, why should sieving spend 10%, 20% or 30% extra effort (on top of the extra effort of producing 210M-220M raw relations instead of 200M for a 31-bit LPs task) just to reduce post-processing, which is far below 10% of the total factorization cost, even by 50%? A bit of oversieving is necessary to make the matrix more palatable, but too much oversieving is a waste of resources. The main reason L1752 is so oversieved is that its sieving was highly productive, more so than for most other L and F numbers. Sieving should have been stopped much earlier, at q=140M; by now we'd already have several dozen million more relations for L1758!
Last fiddled with by debrouxl on 2013-05-13 at 17:58
#601
Nov 2003
2²×5×373 Posts
Quote:
Knowledge". If you believe otherwise, please show us the new mathematical knowledge you claim to be generating...... Quote:
than we do by performing many mundane factorizations. Quote:
|
#602
|
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
13531₆ Posts
Quote:
1) I only have one laptop available, and it is my work computer, so I need to manage its processing time and the energy it spends; 2) optimal parameterization of SNFS has been achieved. This implies a bunch of parametric choices where I think you were totally wrong at RSALS. Yes, I've been reading a lot of papers...
Carlos
Last fiddled with by pinhodecarlos on 2013-05-13 at 18:12
#603
"William"
May 2003
New Haven
2·7·13² Posts
Are you claiming that oversieving to get smaller, more easily solved matrices is pushing the state of the art? If not, then you are completely irrelevant to the discussion of how best to tap the BOINC power. I think the issues are social, not mathematical.
#604
|
Sep 2009
3D1₁₆ Posts
Let's make my reply more explicit.
TL;DR: I'd certainly go for 10 days of LA instead of 7 days, since it saves CPU-months for others while costing me less than half a CPU-month. Explanation: I basically don't post-process tasks of those difficulties, although I do have the gear to. However, for the many smaller tasks I have post-processed over time, I didn't raise the target number of raw relations in such epic proportions as 20% (or at all) just to make my own life easier, no matter the CPU cost to others; so I effectively went with the harder post-processing tasks.
Quote:
Minor changes to the time-vetted, practice-validated target numbers of relations (for the record: up to ~55M-65M raw relations for 29-bit LPs tasks, up to ~110M-120M for 30-bit LPs tasks, up to ~210M-220M for 31-bit LPs tasks) are possible, and might even be desirable. Maybe we should target ~220M-230M raw relations for 31-bit LPs tasks; however, we should certainly not target 270M, as that is not a good tradeoff between wasted sieving CPU time on one side and lengthier post-processing on the other.
That said, I wouldn't want to be seen as a dictator. If you can convince the stakeholders (Greg and Serge for NFS@Home, the leads of the projects which act as sources of ECM-ized numbers, and the other post-processors) that you'll do a better job than I'm doing (even though the oversieving you're advocating yields higher overall CPU consumption for any given number and fewer integers sieved per unit of time, all other things being equal), then you can become one of the people with access to NFS@Home's work-feeding infrastructure.
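The rule-of-thumb targets listed above can be jotted down as a small lookup table (a sketch using the thread's informal figures; `within_target` is a hypothetical helper, not part of any NFS@Home tooling):

```python
# Rule-of-thumb raw-relation targets by large-prime bound, as quoted in the thread.
# These are informal community figures, not official NFS@Home parameters.
RAW_RELATION_TARGETS = {
    29: (55_000_000, 65_000_000),    # 29-bit LPs
    30: (110_000_000, 120_000_000),  # 30-bit LPs
    31: (210_000_000, 220_000_000),  # 31-bit LPs
}

def within_target(lp_bits: int, raw_relations: int) -> bool:
    """Return True if a raw-relation count falls inside the rule-of-thumb range."""
    lo, hi = RAW_RELATION_TARGETS[lp_bits]
    return lo <= raw_relations <= hi

print(within_target(31, 217_657_588))  # the 11_10_233m count: just within range
print(within_target(31, 277_000_000))  # the L1752 forecast: well past it
```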
#605
|
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
11523₈ Posts
Lionel, I really don't give a damn about taking your place; that's not in question.
I just think that when there is plenty of processing power available to sieve, and when the sievers don't care about energy efficiency but rather about points and heating their rooms, we should give more consideration to the size of the matrix for the LA, since the post-processing jobs are done by dedicated fellows.
Carlos