There are some people (including myself) who are doing TF, P-1 and/or ECM in the 1M-9M region. Currently the 2M range is TFed to 65 bits. The goal is to find factors for exponents without known factors.

The lower the exponent, the more effort (GHzd) it takes to TF to the same bitlevel. The lower exponents are 'cheaper' to do P-1/ECM on, due to the smaller FFT sizes. Which poses the question: at what point does P-1/ECM on a CPU make more sense than TF on a GPU? I know comparing CPUs and GPUs is a bit like comparing apples to oranges, but the idea is to spend electricity wisely (read: lowest kWh/factor).

Of course there are other uses of GPU resources that are more helpful to GIMPS (DCTF, LLTF), but let's take them out of the equation for the moment.

Power and GHzd/day approximation

CPU: Intel i5 2500K, doing 30-33 GHzd/day of P-1/ECM at 135W.

GPU: AMD 280X, doing ~600 GHzd/day of TF (<69 bits) in these low ranges at 250W.

Power per GHzd (nice numbers for easier calc):

CPU: 135W * 24h / 32.4 GHzd/day = 100 Wh/GHzd

GPU: 250W * 24h / 600 GHzd/day = 10 Wh/GHzd
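The two Wh/GHzd figures follow directly from the wattage and throughput numbers; a quick sketch (hardware figures are the rounded ones above):

```python
def wh_per_ghzd(watts, ghzd_per_day):
    """Watt-hours of electricity spent per GHz-day of work done."""
    return watts * 24 / ghzd_per_day

print(wh_per_ghzd(135, 32.4))  # CPU (i5 2500K, P-1/ECM): 100 Wh/GHzd
print(wh_per_ghzd(250, 600))   # GPU (280X, TF): 10 Wh/GHzd
```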

Assuming TF finds 1 factor per 200 attempts (lower than theory would suggest, because some P-1/ECM has already been done).

Rng | 65->66 bits | effort/factor | Wh/factor
----|-------------|---------------|----------
2M  | 3.74 GHzd   | 748 GHzd      | 7,480
4M  | 1.87 GHzd   | 374 GHzd      | 3,740
6M  | 1.25 GHzd   | 250 GHzd      | 2,500
8M  | 0.93 GHzd   | 186 GHzd      | 1,860
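The TF column arithmetic is just effort per attempt times attempts per factor times the GPU's Wh/GHzd; a small sketch using the figures above (note that 1.87 GHzd * 200 works out to 374 GHzd for the 4M line):

```python
# Assumptions from the text: 1 factor per 200 TF attempts, 10 Wh/GHzd on the GPU.
ATTEMPTS_PER_FACTOR = 200
GPU_WH_PER_GHZD = 10

tf_effort = {"2M": 3.74, "4M": 1.87, "6M": 1.25, "8M": 0.93}  # GHzd for 65->66 bits

wh_per_factor = {rng: ghzd * ATTEMPTS_PER_FACTOR * GPU_WH_PER_GHZD
                 for rng, ghzd in tf_effort.items()}

for rng, wh in wh_per_factor.items():
    print(f"{rng}: {wh / GPU_WH_PER_GHZD:.0f} GHzd/factor, {wh:,.0f} Wh/factor")
```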

I've been doing some P-1 (B1=10e6, B2=200e6) in the 1.5-1.7M range and so far have found 83 factors in ~1300 attempts, which works out to about 1 in 16 (remember: nice numbers ;-) ). Expanding that to the higher ranges:

Rng | P-1 GHzd  | effort/factor | Wh/factor
----|-----------|---------------|----------
2M  | 1.94 GHzd | 31.04 GHzd    | 3,104
4M  | 3.68 GHzd | 58.88 GHzd    | 5,888
6M  | 5.23 GHzd | 83.68 GHzd    | 8,368
8M  | 7.71 GHzd | 123.36 GHzd   | 12,336

With ECM I ran 2300 curves (B1=5e4, B2=5e6) in the 1.5-1.7M range and found 2 factors. The experts will probably kill me for saying this, but let's assume 1 factor per 1000 curves.

Rng | ECM GHzd    | effort/factor | Wh/factor
----|-------------|---------------|----------
2M  | 0.0845 GHzd | 84.5 GHzd     | 8,450
4M  | 0.180 GHzd  | 180 GHzd      | 18,000
6M  | 0.270 GHzd  | 270 GHzd      | 27,000
8M  | 0.397 GHzd  | 397 GHzd      | 39,700
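The two CPU tables come from the same kind of multiplication, with the assumed hit rates from my own runs; as a sketch:

```python
# Assumptions from the text: P-1 finds 1 factor per 16 attempts, ECM finds
# 1 factor per 1000 curves, and the CPU costs 100 Wh/GHzd.
CPU_WH_PER_GHZD = 100

pm1_effort = {"2M": 1.94, "4M": 3.68, "6M": 5.23, "8M": 7.71}       # GHzd per P-1 attempt
ecm_effort = {"2M": 0.0845, "4M": 0.180, "6M": 0.270, "8M": 0.397}  # GHzd per curve

pm1_wh = {rng: g * 16 * CPU_WH_PER_GHZD for rng, g in pm1_effort.items()}
ecm_wh = {rng: g * 1000 * CPU_WH_PER_GHZD for rng, g in ecm_effort.items()}

for rng in pm1_wh:
    print(f"{rng}: P-1 {pm1_wh[rng]:,.0f} Wh/factor, ECM {ecm_wh[rng]:,.0f} Wh/factor")
```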

So the rule of thumb would be: keep doing GPU TF until Wh/F(TF on GPU) for the next bitlevel > Wh/F(P-1/ECM on CPU)?

That would imply:

2M: no further GPU TF

4M: TF to 66 bits

6M: TF to 67 bits

8M: TF to 68 bits
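That stopping rule can be sketched as a small loop, assuming TF effort per exponent doubles with each extra bit and reusing the per-factor figures derived above:

```python
# Keep TFing while the Wh/factor of the *next* bitlevel is still below the
# CPU P-1 cost. All figures are the assumptions used in the tables above.
tf_65_66 = {"2M": 3.74, "4M": 1.87, "6M": 1.25, "8M": 0.93}  # GHzd, 65->66 bits
pm1_wh = {"2M": 3104, "4M": 5888, "6M": 8368, "8M": 12336}   # Wh/factor, CPU P-1

stop_bits = {}
for rng, ghzd in tf_65_66.items():
    bits = 65                        # ranges are currently TFed to 65 bits
    wh_next = ghzd * 200 * 10        # 1 factor/200 attempts, 10 Wh/GHzd on GPU
    while wh_next < pm1_wh[rng]:     # next bitlevel still cheaper than P-1
        bits += 1
        wh_next *= 2                 # TF effort doubles per bitlevel
    stop_bits[rng] = bits
    print(f"{rng}: TF to {bits} bits")
```

A result of 65 for the 2M range means no TF beyond the current bitlevel, matching the list above.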

Is there something fundamentally wrong with my assumptions, or is GPU TF really still that efficient in the >4M region?

**Disclaimer:** Just to be __very clear__, this endeavour is purely for FUN! Nothing scientific to be gained here.