2012-07-10, 06:37   #1
yoyo (Berlin, Germany; joined Oct 2006)

Thumb rule for runtime prediction


Let's assume I run yafu on an 8-core system to factor a C110, and yafu runs through all phases up to NFS.
Is there a rule of thumb for how the runtime grows for a C111, C112, C113, and so on? E.g., does the runtime double with every additional 3 digits?

2012-07-10, 06:47   #2
Batalov (joined Mar 2008)

Roughly speaking, "Runtime will double for every 5 digits for gnfs, and for every 9 digits of difficulty for snfs."

EDIT: this is when everything is optimized to perfection. For small jobs, however, too much time is spent selecting a polynomial (on 1 CPU), and that step can easily dominate the whole estimate. Poly selection has to be either parallelized with updated exit conditions or stopped manually (not within yafu, but with some scripts or code wrangling). For larger jobs, the rule holds well up to about GNFS-170-180... maybe 190.
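The doubling rule above can be turned into a rough extrapolation formula. This is only a sketch: the baseline of 6 hours for a C110 is a placeholder assumption, not a measured figure — you would calibrate it with one completed job on your own machine.

```python
def estimated_runtime(digits, base_digits=110, base_hours=6.0, doubling=5):
    """Extrapolate GNFS runtime from a measured baseline.

    Runtime is assumed to double every `doubling` digits
    (per the rule of thumb: 5 for GNFS, 9 for SNFS difficulty).
    `base_hours` at `base_digits` digits is a placeholder baseline.
    """
    return base_hours * 2 ** ((digits - base_digits) / doubling)

# Example extrapolation from a (hypothetical) 6-hour C110:
for d in (110, 113, 115, 120, 125):
    print(f"C{d}: ~{estimated_runtime(d):.1f} h")
```

By this model a C113 takes about 1.5x the C110, and every +5 digits doubles the time; note it ignores the poly-selection overhead discussed above, which dominates for small jobs.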

Last fiddled with by Batalov on 2012-07-10 at 06:53
2012-07-12, 02:48   #3
RichD (joined Sep 2008)

I think I know where you are going with this. One possibility, though probably too cumbersome, is to divide the job into three work units: an ECM WU, a poly-select WU, and a [sieving & LA] WU. The poly exchange is only 400-500 kB of data.

With several queues you will inevitably end up with bottlenecks. Most likely at the poly select unless you can recruit some CUDA select workers. Then again, these may be too small for CUDA. Perhaps jrk or jasonp can add some comments.

Maybe just have two different WUs -- ECM & NFS. ??
2012-07-12, 16:32   #4
jasonp (Tribal Bullet; joined Oct 2004)

The GPU code is not tuned well enough to let it loose on non-technical people's computers, the slowdown is very noticeable when it's running full tilt. Also, you would currently need a seriously high-end card to manage a substantial speedup (2x or more) over a modern CPU.

I also have concerns that people who don't know how the LA works will be surprised when it soaks up all the CPU, memory and memory bandwidth on their machine for extended periods of time. By the time you get to the LA for a ~120-digit input, you would need 1-2 GB of disk space and maybe 500 MB of memory for the matrix. Restarting the matrix from disk would take several minutes, which won't work very well when the BOINC client is set up to stop and restart often.

