#122
"Curtis"
Feb 2005
Riverside, CA
4,861 Posts
Some RSA-768 stats, for reference:

- 2^16 x 2^15 sieve area (identical to 16e)
- 4 large primes (!!!!!) on the algebraic side, 3 on the rational side
- mfba = 140, mfbr = 110
- lpba = lpbr = 40 (they note mfba/mfbr are set optimally for lp = 37, but they accepted up to 40-bit large primes)
- Factor base: 1100M on one side, 200M on the other
- 64G raw relations, 27% duplicate rate. At 150 bytes per relation, that's about 10TB of data.
- 193M matrix of total weight 27G (144 per row; jasonp has noted this is not quite equivalent to target-density in msieve). The authors noted they expected a 250M matrix of weight 37G, so they conclude they oversieved by nearly a factor of 2.
- 1500 core-years of sieving, normalized to a 2.2 GHz Opteron core.

If we accept their conclusion of oversieving, 800 core-years might've been enough. Today's standard core is perhaps 50% faster than that 2.2 GHz Opteron, so let's say 550 core-years on modern 3.4 GHz hardware. GNFS-251 should take roughly 15 times longer to sieve than RSA-768, so 8000 modern core-years might produce a rather large matrix, with 10000 possibly a good plan to produce a more manageable one.
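The arithmetic behind that estimate can be reproduced in a few lines. The 1.5x per-core speedup and the 15x GNFS-251 scaling factor are the assumptions stated in the post, not measured values:

```python
# Back-of-envelope scaling from RSA-768 sieving effort to GNFS-251.
# The speedup and scaling factors are the poster's assumptions, not measurements.
rsa768_core_years = 1500      # as published, normalized to a 2.2 GHz Opteron core
oversieved_core_years = 800   # if we accept the authors' ~2x oversieving conclusion
modern_speedup = 1.5          # assumed: modern ~3.4 GHz core vs. 2.2 GHz Opteron
gnfs251_scaling = 15          # assumed sieving-difficulty ratio GNFS-251 / RSA-768

modern_core_years = oversieved_core_years / modern_speedup   # ~533, "say 550"
gnfs251_core_years = modern_core_years * gnfs251_scaling     # ~8000

print(round(modern_core_years), round(gnfs251_core_years))
```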
#123
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3×17×97 Posts
A few notes if we use NFS@Home:

1) Ask Greg for authorisation and confirm whether we can use the 5f application.
2) Confirm that the NFS@Home server has plenty of disk space, with backup.
3) Before sieving starts, NFS@Home should release a statement explaining that this run is for a GNFS world-record factorisation, so people can mobilise their computers to help.
4) Give the NFS@Home clients an estimate of the sieve time as a function of the number of cores allocated.
5) Confirm with Greg whether the cluster can be used for the post-processing stage; I think his grant has ended, but this is a medium-term goal, so he might have it back next year.

From my side I can push several teams to support us on sieving.

PS: I forgot about the polynomial search... who's gonna do it?!

Last fiddled with by pinhodecarlos on 2016-10-07 at 05:30
#124
"Curtis"
Feb 2005
Riverside, CA
4,861 Posts
Carlos-

Is the 96-bit mfbr/mfba limit lifted on the 5f siever? It didn't occur to me that anything other than CADO would work for this job.

Edit: and, oh yeah, how are you going to get 17e or 18e from lasieve?

Last fiddled with by VBCurtis on 2016-10-07 at 06:09
#125
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3×17×97 Posts
Quote:
First question, I don't know; second, not a clue... lol. Can CADO be integrated into the NFS@Home platform?!
#126
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
29·3·7 Posts
#127
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
115238 Posts
That's good.

In the meantime, this is a little off topic, but if my calculations taken from the BOINC NFS@Home page are correct, in the last 24 hours almost 3200 cores were connected to the NFS@Home grid running the 5f siever, each core being equivalent to one core of an i7-3630QM CPU @ 3.20GHz. The calculation is based on my laptop taking an estimated one hour to process one 5f siever task; faster computers can run a task in 2500 seconds or less. 80k 5f tasks were done in the last 24 hours. Let's see over the next couple of days during the challenge, because the core count might be overestimated due to clients bunkering.

Last fiddled with by pinhodecarlos on 2016-10-07 at 11:11
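That core estimate is easy to reproduce. The one-hour-per-task figure is the poster's own laptop benchmark, so the result is only a ballpark figure, landing in the same range as the ~3200 quoted:

```python
# Rough core-count estimate: tasks completed grid-wide per day divided by
# tasks one core can complete per day (at ~1 hour per 5f task).
tasks_per_day = 80_000
hours_per_task = 1.0            # poster's i7-3630QM-class benchmark
tasks_per_core_per_day = 24 / hours_per_task

cores = tasks_per_day / tasks_per_core_per_day   # ~3333
print(round(cores))
```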
#128
"Curtis"
Feb 2005
Riverside, CA
4,861 Posts
Quote:
Comparing B1 = 4.2e9, 6e9, and 7e9, B1 = 6e9 minimizes expected time to complete a t70. With B1 = 6e9, increasing B2 by 30% further reduced the expected t70 time, with higher B2 bounds still to be tested. I am now working to find what bounds minimize expected t75 time on this 16GB machine. I also have two 32GB machines to use to see what effect doubling maxmem (and thus quartering k) has on the minimum. At under one curve per day, this testing may take a while!
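The comparison being run here is expected-time minimization: curves needed for a t70 at each B1, times the per-curve cost. The numbers below are hypothetical placeholders to show the shape of the calculation, not the poster's measurements:

```python
# Minimize expected ECM time: (curves needed for a t70) x (seconds per curve).
# All numbers here are ILLUSTRATIVE placeholders, not measured values.
candidates = {
    4.2e9: (1100, 95_000),   # B1: (curves for t70, seconds per curve)
    6.0e9: (850, 110_000),
    7.0e9: (780, 125_000),
}

def expected_seconds(b1):
    curves, secs_per_curve = candidates[b1]
    return curves * secs_per_curve

best_b1 = min(candidates, key=expected_seconds)
print(best_b1)
```

With these placeholder inputs the middle bound wins, matching the structure of the post's conclusion: larger B1 means fewer curves but more time per curve, and the product has an interior minimum.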
#129
Jul 2018
238 Posts
Quote:
I've been working on steadily larger factorizations, currently just under 200 digits, using CADO-NFS with a fair bit of success, and am trying to improve my understanding of GNFS parameters and sizing.
#130
"Curtis"
Feb 2005
Riverside, CA
4,861 Posts
Quote:
You should have a look at the CADO-NFS thread in this subforum, where we discuss improvements to the default CADO-NFS params files. If you're doing jobs of 160+ digits, I'd really like to see timing data and parameter choices from your runs.
#131
Jul 2018
1910 Posts
Quote:
30GB per process would certainly limit one's ability to find machines to run on and use all the cores.

Quote:
My latest factorization, of a 179-digit N, is running mksol now and should finish in about a week; I'll update the CADO-NFS thread then. Perhaps another route to take here is to have a conversation about how to write a c250 parameter file for CADO-NFS. I've previously played around with the included c270 params file, and I'm learning how the parameters relate to one another, but I still don't understand it fully.
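For anyone following along, a CADO-NFS params file is just `key = value` lines. A hypothetical skeleton of the kind of c250 file being discussed might look like the fragment below; every value is an illustrative guess, not a vetted choice, and the `parameters/factor/params.c*` files shipped with CADO-NFS (including the c270 one mentioned above) are the real reference:

```
# Hypothetical c250 skeleton -- all values are placeholders, not recommendations
name = c250
tasks.polyselect.degree = 6
tasks.lim0 = 400000000
tasks.lim1 = 600000000
tasks.lpb0 = 36
tasks.lpb1 = 37
tasks.sieve.mfb0 = 72
tasks.sieve.mfb1 = 111
tasks.sieve.qrange = 10000
tasks.I = 16
```

The interplay the poster mentions is visible even in a skeleton like this: mfb bounds are usually set as small multiples of the lpb bounds (2x or 3x depending on how many large primes are allowed per side), and I, lim, and lpb together drive relation yield and memory use.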
#132
"Curtis"
Feb 2005
Riverside, CA
4,861 Posts
Have a look at this thread about M1277 (http://mersenneforum.org/showthread....523#post487523) for some discussion of parameter selection for a sort-of-similar-sized SNFS job. Test-sieving in CADO is a pain, as one must invoke las with the entire parameter list as command-line arguments; I plan to figure out how to do it with a GNFS-180 I'm about to factor with CADO. Specifically, I want to see if CADO likes 3 large primes better at this size, and if so what MFBR is most efficient.
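For the record, a standalone las test-sieve invocation looks something like the sketch below. The option names are as I recall them from CADO-NFS 2.x-era builds, and the file names and values are placeholders, so treat the whole line as an assumption to be checked against `las -help` on your build:

```
./las -poly c180.poly -fb c180.roots -I 15 \
      -lim0 50000000 -lim1 80000000 \
      -lpb0 31 -lpb1 32 -mfb0 62 -mfb1 96 \
      -q0 100000000 -q1 100100000 -out c180.testrels.gz
```

The pain point the post describes is exactly this: there is no single "test-sieve" switch, so every factor-base, large-prime, and special-q parameter has to be restated by hand for each trial range.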