mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   mfaktc: a CUDA program for Mersenne prefactoring (https://www.mersenneforum.org/showthread.php?t=12827)

Aramis Wyler 2013-01-21 01:14

I did go back and muck with the GPUSievePrimes value, but this time, in the range of 50000 to 90000, I could only budge it by a few tenths of a GHz-day. Much less volatile than before. 70000 was still the sweet spot for me in the 60/61M range.

TheJudger 2013-01-21 18:54

Hi,

[QUOTE=Aillas;325298]Hi,

could someone please make a version of mfaktc 0.20 for cuda 4.0?

Thanks a lot.[/QUOTE]

I [B]guess[/B] that you have a good reason for not upgrading your video driver?
Do you need a Windows or Linux executable?

On Windows it is no fun to switch CUDA toolkit versions (at least for me...).
On Linux it is pretty easy:[LIST][*]install CUDA toolkit 4.0 to /usr/local/cuda (default path)[*]mv /usr/local/cuda /usr/local/cuda_4.0[*]install CUDA toolkit 4.1 to /usr/local/cuda (default path)[*]mv /usr/local/cuda /usr/local/cuda_4.1[*]...[/LIST]
And then choose your toolkit version by just executing the following command: [CODE]ln -snf /usr/local/cuda_<version> /usr/local/cuda[/CODE]
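To illustrate, the whole switch can be exercised end-to-end with throwaway directories standing in for /usr/local (the paths and version files below are stand-ins, not a real CUDA install):

```shell
# Stand-ins for /usr/local/cuda_4.0 and /usr/local/cuda_4.1.
set -e
base=$(mktemp -d)
mkdir "$base/cuda_4.0" "$base/cuda_4.1"
echo 4.0 > "$base/cuda_4.0/version.txt"
echo 4.1 > "$base/cuda_4.1/version.txt"

# Select toolkit 4.0, then retarget the same symlink to 4.1.
# -n: do not dereference an existing symlink, -f: replace it.
ln -snf "$base/cuda_4.0" "$base/cuda"
ln -snf "$base/cuda_4.1" "$base/cuda"

current=$(cat "$base/cuda/version.txt")
echo "active toolkit: $current"
```

The -n flag is what makes retargeting work: without it, ln would follow the existing cuda symlink and create the new link inside the old toolkit directory.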

I want something similar for Windows! But I'm afraid it is not that simple because of[LIST][*][B]registry[/B][*]integration into MS Visual Studio (from Windows SDK)[*]my lack of Windows experience[*]...[/LIST]
Oliver

Ken_g6 2013-01-22 00:25

Wow, I wasn't aware George had made a GPU sieve until now. It looks awesome! :w00t:

Now, I'd like to integrate it into ppsieve/tpsieve. But I'm not sure where to begin. I can't find where George's sieve outputs its "small" primes. Are they a list of 64-bit numbers (which would be best), a list of some-other-bit numbers, or a bitmap that you somehow get numbers out of?


Meanwhile, while perusing the code, I think I've found a potential speedup! In mod_p there's a mul.lo.s32 followed by a sub.s32. Ideally, they could be combined into a mad.lo.s32, except that the second instruction is a sub and not an add. If we combine them anyway, the problem then becomes how to negate r before getting to the add. I [i]think[/i] that negating either (but not both of) p or pinv will do the trick. So let's call this new function mod_neg_p.

Edit: to spell out what I'm thinking of:
[code]
__device__ __inline static int mod_neg_p (int x, int p, int pinv)
{
// int q, r, a, b;

// q = __mulhi (x, pinv); // quotient = x * -inverse_of_p (<= 0)
// a = x + q * p; // x mod p (but may be too small by one p)
// b = a + p; // x mod p (the alternative return value)
// asm("slct.s32.s32 %0, %1, %2, %1;" : "=r" (r) : "r" (a) , "r" (b));

int r;
asm ("mul.hi.s32 %0, %1, %2;\n\t" // r = __mulhi (x, pinv);
"mad.lo.s32 %1, %0, %3, %1;\n\t" // x = r * p + x; (now in (-p, p))
"add.s32 %0, %1, %3;\n\t" // r = x + p;
"slct.s32.s32 %0, %1, %0, %1;" // r = (x >= 0) ? x : r
: "=r" (r), "+r" (x) : "r" (pinv), "r" (p));

#ifdef GWDEBUG
pinv = -pinv;
if (pinv != gen_pinv (p))
printf ("p doesn't match pinv!! p = %d, pinv = %d\n", p, pinv);
if (r < 0 || r >= p)
printf ("x mod p out of range!! x = %d, p = %d, pinv = %d, r = %d\n", x, p, pinv, r);
#endif

return r;
}
[/code]

Modifying mod_const_p to use mod_neg_p looks pretty easy:

[code]
#define gen_neg_pinv(p) (1-(0xFFFFFFFF / (p)))
// Inline to calculate x mod p where p is a constant

__device__ __inline static int mod_const_p (int x, int p)
{
return mod_neg_p (x, p, gen_neg_pinv (p));
}[/code]
I'm sure there are other opportunities to use mod_neg_p in the code as well.

Prime95 2013-01-22 00:40

[QUOTE=Ken_g6;325406]I can't find where George's sieve outputs its "small" primes. Are they a list of 64-bit numbers (which would be best), a list of some-other-bit numbers, or a bitmap that you somehow get numbers out of?[/QUOTE]

It does not output a list of 64-bit numbers. It outputs a bitmap that the TF kernels must "read" (convert set bits into the factor candidate). Look at the start of each TF kernel to see how the bitmap is read.

chalsall 2013-01-22 00:42

[QUOTE=Ken_g6;325406]Wow, I wasn't aware George had made a GPU sieve until now. It looks awesome! :w00t:[/QUOTE]

Have you been asleep for the last three quarters?

Ken_g6 2013-01-22 00:49

[QUOTE=Prime95;325409]It does not output a list of 64-bit numbers. It outputs a bitmap that the TF kernels must "read" (convert set bits into the factor candidate). Look at the start of each TF kernel to see how the bitmap is read.[/QUOTE]I was afraid of that. This makes things awkward as I expect an exact number of primes to test in my sieves. (Proportional to the number of compute units.)

[QUOTE=chalsall;325410]Have you been asleep for the last three quarters?[/QUOTE]Three quarters? Nine months? Is there someplace his sieve is being discussed other than this thread? I don't get over here much. I knew about his trial factoring in CUDA, but I didn't expect a public-domain small-prime sieve to go with it!

I'd looked at the "World's second-dumbest CUDA program" thread, but it didn't have any conclusive results. And I rarely look at this thread - I was lucky to see it as soon as I did!

Chuck 2013-01-22 01:13

[QUOTE=Aramis Wyler;325327]I did go back and muck with the GPUSievePrimes value, but this time, in the range of 50000 to 90000, I could only budge it by a few tenths of a GHz-day. Much less volatile than before. 70000 was still the sweet spot for me in the 60/61M range.[/QUOTE]

I also switched to 70000; it gives me an additional 1 GHz-d/day.

Prime95 2013-01-22 01:41

[QUOTE=Ken_g6;325406]
Meanwhile, while perusing the code, I think I've found a speedup potential! In mod_p there's a mul.lo.s32 followed by a sub.s32. Ideally, they could be combined into a mad.lo.s32, except that the second instruction is a sub and not an add. [/QUOTE]

Good catch! I'll put it on my todo list for mmff / mfaktc. Unfortunately, I'm up to my eyeballs in more optimizations for prime95 FFTs (no, don't get excited). It should be possible to convert every mod_p call.

Dubslow 2013-01-22 01:41

[QUOTE=Ken_g6;325411]
I'd looked at the "World's second-dumbest CUDA program" thread, but it didn't have any conclusive results. And I rarely look at this thread - I was lucky to see it as soon as I did![/QUOTE]

Some months ago in that thread, B^2 had been steadily improving a CUDA sieve of Eratosthenes, with help from axn and rcv; AFAIK, George used that as a launching point for the mfaktc sieve.

Prime95 2013-01-22 01:50

[QUOTE=Ken_g6;325411]I was afraid of that. This makes things awkward as I expect an exact number of primes to test in my sieves. (Proportional to the number of compute units.)[/QUOTE]

Mfaktc used to have the same requirement. It turns out that by processing a big enough chunk of the bitmap (mfaktc does 8KB or 16KB) and then spreading the set bits evenly over 256 threads, there is relatively little wastage of compute resources. In fact, it may be more efficient because memory I/O is reduced since each factor candidate is represented by 1 bit instead of 64 bits.

LaurV 2013-01-22 02:19

[QUOTE=Ken_g6;325411]Three quarters? Nine months? Is there someplace his sieve is being discussed other than this thread?[/QUOTE]
It all started with the [URL="http://www.mersenneforum.org/showthread.php?t=17162"]mmff thread[/URL] and factoring Fermat and double-Mersenne numbers. The sieve came to mfaktc much later.

ixfd64 2013-01-22 02:33

[QUOTE=Prime95;325420]Good catch! I'll put it on my todo list for mmff / mfaktc. Unfortunately, I'm up to my eyeballs in more optimizations for prime95 FFTs (no, don't get excited). It should be possible to convert every mod_p call.[/QUOTE]

How much of a speedup (for mfaktc) are we looking at?

Prime95 2013-01-22 03:10

[QUOTE=ixfd64;325429]How much of a speedup (for mfaktc) are we looking at?[/QUOTE]

Just a guess: less than 2%, probably less than 1%

Aramis Wyler 2013-01-22 22:09

1%
 
When you're cranking out nearly 400 GHz-days/day with the GPU, a 1% increase is still more than any one of the cores on my CPU is doing. :)

Aillas 2013-01-23 09:11

Hi,

[QUOTE=TheJudger;325373]Hi,

I [B]guess[/B] that you have a good reason for not upgrading your video driver[/QUOTE]
Yes, on this computer the system is maintained, updated and monitored by IT. I can't upgrade the video drivers myself :no:

[QUOTE]Do you need a Windows or Linux executable?[/QUOTE]
A Windows version, please.

[QUOTE]On Windows it is no fun to switch CUDA toolkit versions (at least for me...).[/QUOTE]

No problem. I will stick with my current version. And I will try to find a way to upgrade my NVIDIA drivers :smile:

Thanks.

flashjh 2013-01-23 14:13

[QUOTE=Aillas;325551]...A windows version please.[/QUOTE]

[URL="http://www.mersenneforum.org/showthread.php?p=324896#post324896"]Here[/URL] you go.

ixfd64 2013-01-23 17:17

Dumb question: if I have GPU sieving enabled, do I need to run multiple instances for dual-GPU cards (such as the GTX #90), or is that only necessary for multiple cards?

flashjh 2013-01-23 17:33

[QUOTE=ixfd64;325572]Dumb question: if I have GPU sieving enabled, do I need to run multiple instances for dual-GPU cards (such as the GTX #90), or is that only necessary for multiple cards?[/QUOTE]

Yes, but you would anyway.

TObject 2013-01-23 20:26

This new GPU-sieving mfaktc release got me thinking. Provided with enough power and adequate cooling, one could stick four GTX 580s (or even four GTX 590s, if the drivers support that) into a single box and get an insane combined throughput.

[img]http://lab501.ro/wp-content/uploads/2010/11/4Way-02.jpg[/img]

Dubslow 2013-01-23 20:32

[QUOTE=TObject;325585]This new GPU-sieving mfaktc release got me thinking. Provided with enough power and adequate cooling, one could stick four GTX 580s (or even four GTX 590s, if the drivers support that) into a single box and get an insane combined throughput.

[img]http://lab501.ro/wp-content/uploads/2010/11/4Way-02.jpg[/img][/QUOTE]

Wowzers!! That setup would require significant cooling.

And I think the issue on the forum is performance per dollar. :rolleyes:

kracker 2013-01-23 20:32

[QUOTE=TObject;325585]This new GPU-sieving mfaktc release got me thinking. Provided with enough power and adequate cooling, one could stick four GTX 580s (or even four GTX 590s, if the drivers support that) into a single box and get an insane combined throughput.

[img]http://lab501.ro/wp-content/uploads/2010/11/4Way-02.jpg[/img][/QUOTE]

...if you have enough money to spare :smile:

EDIT:@Dubslow my case sucks, so I keep it open :max:

Dubslow 2013-01-23 20:42

[QUOTE=kracker;325589]
EDIT:@Dubslow my case sucks, so I keep it open :max:[/QUOTE]

As do I, but more for "it's fun to look at" reasons than cooling reasons :smile: (though of course the latter is a welcome benefit).

firejuggler 2013-01-23 20:55

nobody told you that your case should blow air rather than suck it?

chalsall 2013-01-23 21:41

[QUOTE=firejuggler;325593]nobody told you that your case should blow air rather than suck it?[/QUOTE]

I know this was a joke... But...

Non-specialized fans in a case should pull air in (suck), to assist the specialized fans immediately near high-heat producers like CPUs, GPUs and PSUs, which should push air out.

Unless something is [I]very[/I] wrong, air_in == air_out.

Edit: Assuming a closed system. Some think leaving the cover off helps them. This is a fallacy unless there aren't enough fans, and the ambient temperature is low enough that convection does the work.

James Heinrich 2013-01-23 21:57

[QUOTE=chalsall;325599]... and the ambient temperature is low enough that convection does the work.[/QUOTE]Works fine here, just open a window. It was -37°C this morning... :ick:

Dubslow 2013-01-23 22:10

[QUOTE=James Heinrich;325605]Works fine here, just open a window. It was -37°C this morning... :ick:[/QUOTE]

Heh, it's not quite that bad here, but the heating in the dorm is suspect, so temps are below 21°C, and I think maybe even 19°C. It's pretty cold in here (for being indoors).

chalsall 2013-01-23 22:12

[QUOTE=James Heinrich;325605]Works fine here, just open a window. It was -37°C this morning... :ick:[/QUOTE]

Bloody hell! :smile:

I felt cold last night when it got down to 22°C! :wink:

Xyzzy 2013-01-23 22:56

[QUOTE]Provided with enough power and adequate cooling, one could stick four GTX 580s (or even four GTX 590s, if the drivers support that) into a single box and get an insane combined throughput.[/QUOTE]Two GTX 590 cards in a box would be nice. Too bad the GTX 590 is discontinued. Even GTX 580 cards are getting hard to find.

:sad:

TObject 2013-01-24 01:00

A newbie question here:

What is the difference between different GPU kernels?
If I start an exponent using the [b]barrett92_mul32[/b] kernel and then continue with the [b]barrett87_mul32_gs[/b] kernel from the last finished class, will I compromise the results?

Thank you.

Dubslow 2013-01-24 01:54

That changes how the trial divide is done, not the actual results of the divide. You should be fine.

TObject 2013-01-24 02:07

Excellent

LaurV 2013-01-24 02:11

[QUOTE=TObject;325585]Provided with enough power and adequate cooling, one could stick four GTX 580s (or even four GTX 590s, if the drivers support that) into a single box and get an insane combined throughput.[/QUOTE]
That is actually what some of us ARE doing. (And were doing, with CudaLucas, before the mfaktc 0.20 release.)
P.S. Why the ".ro" link? Some special reason?

[QUOTE=Xyzzy;325610]Two GTX 590 cards in a box would be nice. Too bad the GTX 590 is discontinued. Even GTX 580 cards are getting hard to find.:sad:[/QUOTE]
Try [URL="https://www.google.co.th/search?q=Mars+II&hl=en&client=firefox-a&hs=DHt&tbo=u&rls=org.mozilla:en-US:official&tbm=isch&source=univ&sa=X&ei=tpwAUf-FLcrprQeqzIG4Cg&ved=0CDAQsAQ&biw=1452&bih=824"]Mars II[/URL], with two of those you need nothing else... :razz:

rjbelans 2013-01-24 02:28

[QUOTE=Xyzzy;325610]Two GTX 590 cards in a box would be nice. Too bad the GTX 590 is discontinued. Even GTX 580 cards are getting hard to find.

:sad:[/QUOTE]


I guess I'll give these 590s I've got a whirl and see what they can do. Maybe then I can go see what four 285 Classifieds and three 580 SCs will give. I'm doing other DC projects too, so no promises on how long before I get to doing all of this.

TObject 2013-01-24 02:50

[QUOTE=LaurV;325627]
P.S. why the ".ro" link? some special reason?
[/QUOTE]

No reason. A picture found on Google.

rjbelans 2013-01-24 04:33

[QUOTE=rjbelans;325628]I guess I'll give these 590s I've got a whirl and see what they can do. Maybe then I can go see what 4 285 classifieds and 3 580SCs will give. I'm doing other DC projects too, so no quotes on how long before I will get to doing all of this.[/QUOTE]

Just a quick update. I started running all four of the GPUs on my 2 590s and they are getting an average of about 140 GHz-d/day each, for 560 GHz-d/day total. These cards are watercooled and running 720/1440/1728 clocks. The CPU is a 980X @ 4.0GHz running 1 worker of Prime95 on 10 threads.


FYI - I noticed a post earlier in this thread where someone was disappointed that a 590 doesn't equal 2 x 580. That was never expected to happen with these cards, because of the lower clocks needed to fit two GPUs on a single card and meet all of NVIDIA's power, heat, etc. requirements. Even with these reduced clocks, I've never had any complaints with these cards.:tu:

Dubslow 2013-01-24 05:34

Hmm... what program are you running?

With mfaktc 0.20, [URL="http://www.mersenne.ca/mfaktc.php?sort=ghdpd&noA=1"]a single 580 should be north of 400 Eq. GHz, and a 590 should be between 300 and 350 Eq. GHz per GPU.[/URL]

LaurV 2013-01-24 06:03

Running two GTX 580s, clocked at 781 MHz (factory), with mfaktc 0.20, TF-ing the 332M3 range to 72, 73, 74 bits, with the parameters of mfaktc tweaked "down" (i.e. to keep the card only 96% busy; with the default parameters and an occupancy of 98%++, the computer was not very responsive), I get a [B]stable[/B] 392 GHz-days/day/card. When I go TF-ing to 75 bits (same parameters, same range), I get a stable 396 GHz-days/day/card.

You should NOT get lower than this! (Scale it for your clock only; the same goes for the 590: just scale my figures for the 590's lower clock.)

OTOH, keeping the CPU [B]extremely[/B] busy will decrease the mfaktc output. Of course, 0.20 sieves on the GPU and does not need the same CPU power as 0.19, but it [B]still[/B] needs the CPU, which coordinates things. The GPU does not run by itself. For example, when I start 8 workers (HT enabled on my 4-physical-core CPU), the output of each GTX goes down a few (5-10) GHz-days and is not stable anymore (it oscillates between 380-390 or so). If I remember right, the 980X is a 6-physical-core CPU, so running 10 workers on it may be "overcrowding" it a little... Try pausing P95 for a few minutes, and if the output of the cards does not improve, then you may be doing something wrong. Also, proper cooling affects the speed: those new thingies have the bad habit of throttling when they get hot.

rjbelans 2013-01-24 12:27

[QUOTE=LaurV;325645]Running two GTX 580s, clocked at 781 MHz (factory), with mfaktc 0.20, TF-ing the 332M3 range to 72, 73, 74 bits, with the parameters of mfaktc tweaked "down" (i.e. to keep the card only 96% busy; with the default parameters and an occupancy of 98%++, the computer was not very responsive), I get a [B]stable[/B] 392 GHz-days/day/card. When I go TF-ing to 75 bits (same parameters, same range), I get a stable 396 GHz-days/day/card.

You should NOT get lower than this! (Scale it for your clock only; the same goes for the 590: just scale my figures for the 590's lower clock.)

OTOH, keeping the CPU [B]extremely[/B] busy will decrease the mfaktc output. Of course, 0.20 sieves on the GPU and does not need the same CPU power as 0.19, but it [B]still[/B] needs the CPU, which coordinates things. The GPU does not run by itself. For example, when I start 8 workers (HT enabled on my 4-physical-core CPU), the output of each GTX goes down a few (5-10) GHz-days and is not stable anymore (it oscillates between 380-390 or so). If I remember right, the 980X is a 6-physical-core CPU, so running 10 workers on it may be "overcrowding" it a little... Try pausing P95 for a few minutes, and if the output of the cards does not improve, then you may be doing something wrong. Also, proper cooling affects the speed: those new thingies have the bad habit of throttling when they get hot.[/QUOTE]

[QUOTE=Dubslow;325642]Hmm... what program are you running?

With mfaktc 0.20, [URL="http://www.mersenne.ca/mfaktc.php?sort=ghdpd&noA=1"]a single 580 should be north of 400 Eq. GHz, and a 590 should be between 300 and 350 Eq. GHz per GPU.[/URL][/QUOTE]


I'm running 0.20, but I did play with some settings in the .ini file, and my CPU is at a constant 90%+ usage because of the other things running. Once the current units are completed, after I get home from work tonight, I will try running with no other programs and I'll put the settings back to defaults.

swl551 2013-01-24 12:56

Stages=0 vs Stages=1
 
What are the pros/cons of factoring with Stages=0 vs Stages=1 with wide bit ranges like 79957723,70,74

Beyond a reduction in result rows, I'm not seeing anything obvious related to performance or reliability with a GTX 570 and 0.20.

I know that mfaktc would/could switch kernels when factoring different ranges with Stages on (0.19). I don't see any difference with 0.20. Did 0.20 make Stages obsolete?

thx

Andi_HB 2013-01-24 14:26

GTX 560 with 268 GHz-days/day
 
The GTX 560's performance is listed as 205 GHz-days/day, but that is only with the default settings.

I decreased GPUSieveProcessSize to 8
and increased GPUSieveSize to 128.

This increased my throughput from 205 to 268 GHz-days/day on the GTX 560 with mfaktc 0.20.

:D

(Win 7, 64bit)

TheJudger 2013-01-24 16:15

[QUOTE=swl551;325662]What are the pros/cons of factoring with Stages=0 vs Stages=1 with wide bit ranges like 79957723,70,74

Beyond a reduction in result rows, I'm not seeing anything obvious related to performance or reliability with a GTX 570 and 0.20.

I know that mfaktc would/could switch kernels when factoring different ranges with Stages on (0.19). I don't see any difference with 0.20. Did 0.20 make Stages obsolete?

thx[/QUOTE]

Stages=1 is faster than Stages=0 (thinking about cleared exponents per unit time, not GHz-d/day...).
With stages=1 in your example there is a ~1.4% chance that there is a factor between 2[SUP]70[/SUP] and 2[SUP]71[/SUP], in this case 14/15 of the work is saved. If there is a factor between 2[SUP]71[/SUP] and 2[SUP]72[/SUP] there is another ~1.4% chance to save 12/15 of the work. If there is a factor between 2[SUP]72[/SUP] and 2[SUP]73[/SUP] there is another ~1.4% chance to save 8/15 of the work. Of course this depends on "StopAfterFactor", too.

The different kernels are still there in mfaktc 0.20. Actually there are 3 new kernels in 0.20.

Oliver

swl551 2013-01-24 16:30

[QUOTE=TheJudger;325675]Stages=1 is faster than Stages=0 (thinking about cleared exponents per unit time, not GHz-d/day...).
With stages=1 in your example there is a ~1.4% chance that there is a factor between 2[SUP]70[/SUP] and 2[SUP]71[/SUP], in this case 14/15 of the work is saved. If there is a factor between 2[SUP]71[/SUP] and 2[SUP]72[/SUP] there is another ~1.4% chance to save 12/15 of the work. If there is a factor between 2[SUP]72[/SUP] and 2[SUP]73[/SUP] there is another ~1.4% chance to save 8/15 of the work. Of course this depends on "StopAfterFactor", too.

The different kernels are still there in mfaktc 0.20. Actually there are 3 new kernels in 0.20.

Oliver[/QUOTE]

Thanks!

James Heinrich 2013-01-24 16:52

[QUOTE=TheJudger;325675]Of course this depends on "StopAfterFactor", too.[/QUOTE]To clarify, if StopAfterFactor=2 (stop after current class when factor is found) then there's almost no difference in terms of time, right? Except of course each class takes a bit longer if Stages=0, but the difference should be only a matter of seconds or minutes, not hours like it would be for StopAfterFactor=1.

TheJudger 2013-01-24 17:04

Well, it's not that simple, but my feeling tells me that Stages=0 is slower anyway!
Using the same example, MORE_CLASSES and the time for a single class from 2[SUP]70[/SUP] to 2[SUP]71[/SUP] is T.

First class of 2[SUP]70[/SUP] to 2[SUP]74[/SUP]: 15T (T + 2T + 4T + 8T), chance for a factor: (1/71 + 1/72 + 1/73 + 1/74) / 960: 5.75e-5
In the same time you can do 15 classes from 2[SUP]70[/SUP] to 2[SUP]71[/SUP]: 15T, chance for a factor: 1/71 * 15 / 960: 2.20e-4.

Feel free to do the math to the end, but I'm pretty sure that Stages=1 is faster on average.

Oliver

TheJudger 2013-01-24 19:48

Hi,

OK, unless I've calculated something wrong, here are the numbers.
As before, the time for a single class from 2[SUP]70[/SUP] to 2[SUP]71[/SUP] is T, and it doubles for each bitlevel (ignoring that different kernels will be used):

2[SUP]70[/SUP] to 2[SUP]74[/SUP]
StopAfterFactor=0: T[SUB]average[/SUB] = 14400 = (1 + 2 + 4 + 8) * 960
StopAfterFactor=2, Stages=0: T[SUB]average[/SUB] = ~14003.078
StopAfterFactor=2, Stages=1: T[SUB]average[/SUB] = ~13847.314

The difference is not that big, but keep in mind that the selected kernel can additionally make a big difference. A really bad case: 2[SUP]78[/SUP] to 2[SUP]80[/SUP].
With Stages=0, mfaktc will choose the slow 95-bit kernel [B]without[/B] GPU sieving support.
With Stages=1, mfaktc will choose barrett87 for each bitlevel, including GPU sieving support.

Technically GPU sieving is possible for the older kernels... but why should somebody spend time on these old and slow kernels?
Barrett87, 88 and 92 can only handle a single bitlevel at a time.
So usually Stages=1 is what you want!

Oliver

flashjh 2013-01-24 20:34

1 Attachment(s)
I've set Stages=1 (in light of the info above)

This is typical for my 580s in the 61M range:

Dubslow 2013-01-24 21:00

[QUOTE=rjbelans;325660]I'm running 0.20, but I did play with some settings in the .ini file and my CPU is at a constant 90% + usage because of the other things running. Once the current units are completed, after I get home from work tonight, I will try running with no other programs and I'll put the settings back to defaults.[/QUOTE]

The CPU shouldn't affect it as much as LaurV says. I ran mfaktc with full CPU usage and with no CPU usage, and noticed maybe a 1 Eq. GHz drop, from like 206 to 205. That doesn't explain your half-performance discrepancy.

[QUOTE=Andi_HB;325668]The GTX 560's performance is listed as 205 GHz-days/day, but that is only with the default settings.

I decreased GPUSieveProcessSize to 8
and increased GPUSieveSize to 128.

This increased my throughput from 205 to 268 GHz-days/day on the GTX 560 with mfaktc 0.20.

:D

(Win 7, 64bit)[/QUOTE]

Brilliant! I, of course, had to reduce the sieve size to its minimum as well, to keep screen lag minimal :razz: (throughput was ~185 Eq. GHz at those settings).

ckdo 2013-01-24 21:03

Nice try redacting that exponent out. Too bad k_min and k_max are giving it away. :devil:

flashjh 2013-01-24 21:27

:blush:

swl551 2013-01-24 21:34

Is there a reason to keep exponents secret? Have there been attacks or malware built that targets TFers working certain ranges? Am I in danger?:sos:

chalsall 2013-01-24 21:39

[QUOTE=swl551;325714]Is there a reason to keep exponents secret? Have there been attacks or malware built that targets TFers working certain ranges? Am I in danger?:sos:[/QUOTE]

A very few people have found themselves the subject of targeted "poaching". Usually in the LLing domain, however.

Those doing serious TFing probably wouldn't even notice if they were poached. And if they did, probably wouldn't care all that much other than possibly wondering why.

rjbelans 2013-01-24 23:58

[QUOTE=Dubslow;325703]CPU shouldn't affect it as much as LaurV says. I ran mfaktc with full CPU usage and no CPU usage, and noticed maybe a 1 Eq. GHz drop, from like 206 to 205. That doesn't explain your half-performance discrepancy.[/QUOTE]

It was the settings in the .ini file. After I put everything back to defaults, each core is getting about 325, for a total of 1300 GHz-d/day.

chalsall 2013-01-25 00:06

[QUOTE=rjbelans;325719]It was the settings in the .ini file. After I put everything back to defaults, each core is getting about 325 for a total of 1300GHz-d/day.[/QUOTE]

[URL="http://www.americanscientist.org/issues/pub/thats-funny/1"]I assume you kept a copy of the .ini file which was causing you trouble[/URL]?

LaurV 2013-01-25 02:25

[QUOTE=Dubslow;325703]CPU shouldn't affect it as much as LaurV says. I ran mfaktc with full CPU usage and no CPU usage, and noticed maybe a 1 Eq. GHz drop, from like 206 to 205.[/QUOTE]
Try decreasing the priority of mfaktc or increasing the priority of P95 :razz: until they both have the same chance to grab the CPU ticks. Of course, if P95 runs at idle priority, mfaktc does not wait for it...
Edit: Disclaimer: don't do that at home! :smile:

rjbelans 2013-01-25 04:32

[QUOTE=chalsall;325720][URL="http://www.americanscientist.org/issues/pub/thats-funny/1"]I assume you kept a copy of the .ini file which was causing you trouble[/URL]?[/QUOTE]

You shouldn't ass.u.me anything! :razz:

Essentially, I set the last few settings to their maximum number to see what it would do. I would suggest not doing that.

Dubslow 2013-01-25 07:58

[QUOTE=LaurV;325731]Try decreasing the priority of mfaktc or increasing the priority of P95 :razz: until they both have the same chance to grab the CPU ticks. Of course, if P95 runs at idle priority, mfaktc does not wait for it...
Edit: Disclaimer: don't do that at home! :smile:[/QUOTE]
'Twasn't P95, but was rather hyperthreaded lasieve. :razz:
[QUOTE=rjbelans;325747]You shouldn't ass.u.me anything! :razz:

Essentially, I set the last few settings to their maximum number to see what it would do. I would suggest not doing that.[/QUOTE]

Heh, thanks for the tip :smile:

rjbelans 2013-01-25 20:00

So, I did a little testing of what changes to settings would do for me. I found that the only change that netted me an improvement was to set GPUSieveSize=128. This improved performance from 325 to 330. I also noticed that setting GPUSievePrimes at 150000 or more resulted in decreased performance, so I now have it at 100000 with no noticeable decrease compared to the default of 82486.


It looks like I will need to play with my OC to get any additional performance gains. That's something to look at later; I'm happy with where it is right now.


330 GHz-d/day on each core with clocks set to 720/1440/1728 @ stock 0.925V.

RichD 2013-01-26 00:29

[QUOTE=rjbelans;325828]... I'm happy with where it is right now.


330 GHz-d/day on each core with clocks set to 720/1440/1728 @ stock 0.925V.[/QUOTE]

And we're happy with your contributions. Welcome aboard, and have a safe journey. :smile:

ixfd64 2013-01-26 04:16

I'm still having trouble compiling mfaktc. It turned out that VC++ doesn't handle custom makefiles very well, so I decided to use nmake instead. However, I'm still getting errors:

[CODE]C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin>nmake.exe -f C:\Users\danny\Desktop\mfaktc-0.20\src\Makefile.win

Microsoft (R) Program Maintenance Utility Version 10.00.30319.01
Copyright (C) Microsoft Corporation. All rights reserved.

C:\Users\danny\Desktop\mfaktc-0.20\src\Makefile.win(26) : fatal error U1001: syntax error : illegal character '^' in macro
Stop.[/CODE]

I tried removing the caret from the file, but that results in a different error. *sigh*

Why does Microsoft have to make this so complicated?

TheJudger 2013-01-26 13:55

[QUOTE=ixfd64;325905]I'm still having trouble compiling mfaktc. It turned out that VC++ doesn't handle custom makefiles very well, so I decided to use nmake instead. However, I'm still getting errors:

[CODE]C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin>nmake.exe -f C:\Users\danny\Desktop\mfaktc-0.20\src\Makefile.win

Microsoft (R) Program Maintenance Utility Version 10.00.30319.01
Copyright (C) Microsoft Corporation. All rights reserved.

C:\Users\danny\Desktop\mfaktc-0.20\src\Makefile.win(26) : fatal error U1001: syntax error : illegal character '^' in macro
Stop.[/CODE]

I tried removing the caret from the file, but that results in a different error. *sigh*

Why does Microsoft have to make this so complicated?[/QUOTE]

nmake is not GNU make. You'll need to modify the makefile or install GNU make (or a compatible make).

Oliver

flashjh 2013-01-26 14:15

(as best I can remember and some are only for CuLu)

First thing is to go [URL="http://www.mersenneforum.org/showthread.php?p=290851#post290851"][COLOR=#0066cc]here[/COLOR][/URL] and check out post 808 and on. You'll see what I had to do to get compiling working. It has been a while, so I don't remember all the details, but the instructions there are pretty good.

I know I have all CUDA toolkits installed from 3.2 up so I can compile all versions. Also, I have MSVS 2008 and 2010 installed.

Once everything is installed, I use make for Windows ([URL="http://gnuwin32.sourceforge.net/packages/make.htm"][COLOR=#0066cc]here[/COLOR][/URL]). The only file that's required is make.exe. Once you have that, open the command window from the required MSVS location. Mine are in C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC.

I place a shortcut to the command windows in my source along with make.exe.

You might have to update makefile.win to the CUDA version and SM you want.

The command line used is 'make -f makefile.win' or 'make -f makefile.win32'

Once it completes, you can run 'make -f makefile.win clean' to get rid of the .obj file (associated with the current makefile.win)

If you want to get rid of everything, run 'make -f makefile.win cleaner' to delete all .obj and .exe files (associated with the current makefile.win)

I'm no expert. I'm just learning as I go along. I really hope to have the project files working in MSVS, but that will have to wait until I have a little more time.

Let me know if you have any questions.

Jerry

ixfd64 2013-01-30 03:03

Could someone please compile mfaktc for Mac OS X?

ixfd64 2013-02-02 19:01

Sorry for the double post, but does anyone know why I'm getting "n.a." for the ETA?

flashjh 2013-02-02 19:02

The run is too short.

ixfd64 2013-02-02 19:14

[QUOTE=flashjh;327198]The run is too short.[/QUOTE]

Thanks. I had a feeling that was the reason, but I was unable to find any mention of it. Do you know the minimum run length before an ETA will be printed?

flashjh 2013-02-02 19:18

From output.c:

[CODE]if(mystuff->stats.class_time > 250)[/CODE]

So it looks like anything where a class takes longer than 250 ms will output an ETA

ixfd64 2013-02-02 19:22

[QUOTE=flashjh;327204]From output.c:

[CODE]if(mystuff->stats.class_time > 250)[/CODE]

So it looks like anything where a class takes longer than 250 ms will output an ETA[/QUOTE]

I see. Thanks again for your quick reply!

ixfd64 2013-02-02 21:56

1 Attachment(s)
I just had an instance of mfaktc printing the ETA for an assignment that is too short for one. The figure isn't even correct because an assignment in this range (81.95M from 67 to 68 bits) consistently takes 3 minutes and 26 seconds on my GTX 555. I wonder if it's a bug.

Dubslow 2013-02-02 22:08

[QUOTE=ixfd64;327240]I just had an instance of mfaktc printing the ETA for an assignment that is too short for one. The figure isn't even correct because an assignment in this range (81.95M from 67 to 68 bits) consistently takes 3 minutes and 26 seconds on my GTX 555. I wonder if it's a bug.[/QUOTE]

If the assignment is that short (or for any assignment, really), then the "sample size" for an ETA estimate, especially *right at the beginning* of an assignment, is very small, so the estimate varies a lot. It's well within probability that one out of so many would have an unusually high ETA.

lycorn 2013-02-04 14:03

[QUOTE=kladner;323880]That's very good to know. Thanks! I'm running 310.70, but will upgrade.
[/QUOTE]

I'm still running 306.97. I checked the NVIDIA site for new versions, but all I read about 310.90 was related to improvements in some games, which doesn't really interest me. The point is: would installing 310.90 be worth the trouble as far as mfaktc goes? Does anybody have any hard figures on that?
Thx

Aramis Wyler 2013-02-06 22:20

I installed 310.90 last night and didn't notice any difference in mfaktc. It didn't hurt and it was an easy upgrade, so I figure maybe I got a free bugfix in there, but performance did not improve. It stayed at 431.8<x<432.68.

swl551 2013-02-06 23:52

I have it running on 4 570s and 1 560. No issues. All overclocked.

TheJudger 2013-02-12 19:49

Running mfaktc on GPU while CPU is busy with other stuff:
[LIST][*]Windows 7 64bit, Xeon W3690, GTX 470: when I start prime95 on all 6 CPU cores the throughput of mfaktc [B]increases[/B] by ~1GHz/day[*]Windows 7 64bit, Core i7 3770k, GTX 680: when I start prime95 on all 4 CPU cores the throughput of mfaktc [B]decreases[/B] by ~3-5GHz/day[*]Linux 64bit, Xeon E5-2650, K20: when I put some heavy load on all CPU cores the throughput of mfaktc [B]increases[/B] by ~1-2GHz/day[/LIST]
Ideas? Comments?

Oliver

P.S. my GTX 680 stopped working on sunday... :sad:

chalsall 2013-02-12 20:38

[QUOTE=TheJudger;329142]Ideas? Comments?[/QUOTE]

As [URL="http://en.wikipedia.org/wiki/Isaac_Asimov"]Isaac Asimov[/URL] said, "The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' (I found it!) but 'That's funny ...'".

rcv 2013-02-12 22:50

[QUOTE=TheJudger;329142]Ideas? Comments?
[/QUOTE]
Just brainstorming...
1. Measurement error. With the CPU busy, your GPU measurements aren't as precise.
2. You are unwittingly running an app that uses CPU and a little GPU. When the CPU is busy due to Prime95, the witless app uses less GPU. (Perhaps something supplied by NVIDIA.)

flashjh 2013-02-13 04:08

I experience a drop of ~5GHz/Day when I start P95 on each machine.

Where did you get a K20? How does it perform with mfaktc?

Bdot 2013-02-13 11:55

Running mfaktc with GPU sieving on a Quadro 2000 + ancient Xeon 5140, I still get some benefit from running two instances (1-1.5%, which is ~1-1.5 GHz-days/day).

I assume the rather slow CPU leaves some "holes" in scheduling the GPU kernels that are filled by the other instance.

If prime95 runs on all cores including hyper-threaded ones, I'd expect to see similar "holes" due to the CPU scheduling granularity. Also it is quite likely that the mfaktc-code has to be fetched from memory for running the outer loop that schedules the GPU kernels - it probably has long expired from the CPU caches when a GPU kernel finishes.

On the other hand, if no prime95 is running, modern CPUs significantly lower their core clock when idle, and it takes a few microseconds to spin up again. So if you have a "spare" hyper-thread that issues the next kernel immediately, it may even be faster if the CPU is not allowed to go into power-save mode.

I think, depending on which effect is stronger, you'll see things change for better or worse ...

Maybe for you, too, the sum of two mfaktc instances is more than a single one ...

TheJudger 2013-02-13 18:42

[QUOTE=flashjh;329252]Where did you get a K20? How does it perform with mfaktc?[/QUOTE]

A bit faster than my GTX 680. For some (unknown) reason CC 3.5 is worse than CC 3.0 (comparing performance / (number of cores * clock rate))...
A GTX 580 is still the fastest GPU for mfaktc.
I did a quick test with CUDALucas, too; with ECC enabled on the K20 it seems to be faster than a stock GTX 580 by a small margin.

Oliver

TheJudger 2013-02-24 17:51

[QUOTE=TheJudger;329142]P.S. my GTX 680 stopped working on sunday... :sad:[/QUOTE]

Well... I received my replacement yesterday. Different "vendor", again reference design... after two hours of playing Diablo 3 the card stopped working. After one hour the problems started: the game was somehow laggy (low framerates for fractions of a second, then full performance for fractions of a second, then again low framerates). I checked temperatures (below 70°C for the GPU) and power target (<75% while playing Diablo 3). After two hours: blinking pixels, corrupt triangles/textures and black display for a couple of seconds (nvlddmkm reloaded).
So either I had bad luck (two defective GTX 680s) or my system kills GTX 680s (but I can't imagine how, and why only 680s).[LIST][*]Both GTX 680s failed in my main rig (Asus P8Z68-V/Gen3, i7 3770k); the errors are reproducible in my secondary rig (Intel DX58SO, Xeon W3690).[*]GTX 275 and GTX 470 are running fine in both systems.[*]I used the GTX 470 for ~one year in the P8Z68-V/Gen3 until I decided to upgrade to the GTX 680.[*]The GTX 470 consumes more power than the GTX 680s; the power supply is a 665W single-rail.[/LIST]While assembling the system I took care to avoid ESD.

Oliver

P.S. new .plan for mfaktc 0.21: The features planned for 0.21 are moved to 0.22. 0.21 features support for Wagstaff numbers.

Redarm 2013-02-24 18:46

please check if the pci-e connector is slightly black

kracker 2013-02-24 19:12

How is your power supply?

TheJudger 2013-02-24 21:31

[QUOTE=Redarm;330818]please check if the pci-e connector is slightly black[/QUOTE]

Perfect condition, I've already checked this.

[QUOTE=kracker;330824]How is your power supply?[/QUOTE]

665W single-rail in both systems (Supermicro PWS-665-PQ, 54A @ 12V); power consumption of the i7 system is below 300W. The W3690 with the GTX 470 consumes up to ~400W.

Oliver

Rodrigo 2013-03-05 06:24

GPU temps with mfaktc
 
I just installed an NVIDIA GeForce GT 630 in an HP dx-7500 Microtower (Core2 Duo E7600, Vista Business x86) and am doing some testing. Two things I've noticed so far:

1. Prime95 running on both cores doesn't seem to be affected by mfaktc running at all -- the LL per-iteration times remain unchanged. Is this expected behavior? I'd thought that it was necessary to "dedicate" a CPU core to mfaktc. (I'm getting 44+ GHz-days/day on the 640.)

2. However, according to the CPUID Hardware Monitor, the GPU's temperature with mfaktc running goes from a baseline of 40C to as high as 83C, with a steady level at 82C. Is this excessive, or normal? (The GPU fan is running at 74%. Other temperature sensor readings don't change all that much.)

Thanks for any insights or info.

Rodrigo

Batalov 2013-03-05 07:01

That's (sort of) normal (the 82-83 C temps).

However, you can lower your temperature (and the fan noise) without losing much throughput by lowering the memory clock. Try it in steps of 100 MHz (wait and listen for a couple of minutes each time; once you cross a certain zone, you will hear the fans spin down; watch the mfaktc window at the same time), then go back up in steps of 10 MHz. Your mileage may vary, though.

LaurV 2013-03-05 07:52

[QUOTE=Rodrigo;332030]
1. Prime95 running on both cores doesn't seem to be affected by mfaktc running at all -- the LL per-iteration times remain unchanged. Is this expected behavior? I'd thought that it was necessary to "dedicate" a CPU core to mfaktc. (I'm getting 44+ GHz-days/day on the 640.)
[/QUOTE]
Before mfaktc 0.20, yes, the CPU was used for sieving the factor candidates. With v0.20 and later, the GPU is used to sieve, so the CPU is (almost) totally free to do other tasks (P95). So yes, that is normal behavior if you run mfaktc v0.20.

[QUOTE=Rodrigo;332030]
2. However, according to the CPUID Hardware Monitor, the GPU's temperature with mfaktc running goes from a baseline of 40C to as high as 83C, with a steady level at 82C. Is this excessive, or normal? (The GPU fan is running at 74%. Other temperature sensor readings don't change all that much.)[/QUOTE]
Around 80C is "normal" for that card, in the sense that such a temperature won't damage it. But when hotter, it draws more power and does less work. You may try playing with the mfaktc ini file to get the occupancy down (say, from 98-99% to 95-97%). Your computer (in case the card drives your display, too) will become more responsive, cooler and less noisy, for a small 2-3% of output sacrificed.
As Batalov said, your mileage may vary.

Rodrigo 2013-03-05 18:03

Thanks Batalov and LaurV for the information and useful suggestions.

I'll look into how to adjust the clock. As for tweaking the INI file, which values should one consider changing for these purposes?

Rodrigo

Batalov 2013-03-05 19:59

There are two aspects:

1. The application cannot change the memory clock (or other clocks/frequencies); this is not in the .ini. You can do that with system tools, and the system tools will abstract the access rights for you. You won't be able to do it unless your account has administrator rights, or unless you right-click on the tool (MSI Afterburner, the Gigabyte one whose name I don't remember, EVGA Precision) and "Run as administrator". Run it and manipulate specifically the "memory clock", not the shader etc. clocks. It may be that it is the memory that gets hot, even though it only seems to be used for register spills (most of the work happens in the registers). Well, I don't have a good explanation. Maybe the authors observed that and would comment?

2. Via the application's .ini parameters you can control some behaviors that will affect responsiveness. One parameter that many people change is GPUSieveSize, e.g. from GPUSieveSize=16 to GPUSieveSize=8.

Rodrigo 2013-03-06 01:14

Very good, I'll experiment with different values of GPUSieveSize.

I'll go to the NVIDIA site and see what turns up with respect to over/underclocking.

Rodrigo

Batalov 2013-03-06 01:46

What is your card's brand? You may want to start from their website; usually they use the clocking tools to lure you to register your product (which is not necessarily a bad idea). An older version of the tool may be on the CD/DVD that was included with the card...
[URL="http://www.evga.com/precision/"]EVGA Precision[/URL] is here,
[URL="http://event.msi.com/vga/afterburner/download.htm"]MSI Afterburner[/URL] ,
Gigabyte is [URL="http://www.gigabyte.com/support-downloads/utility.aspx?cg=3"]somewhere in there[/URL]...
I am sure that Zotac, and others have similar web pages.

TheJudger 2013-03-06 19:40

Wanted: help with Makefile
 
Hello,

since I screwed up the Windows makefile in mfaktc 0.20 (you can use it to compile mfaktc on Windows, but when you change some code and recompile, it won't work as expected because of some missing dependencies), I would like to merge the three Makefiles into one (if possible and feasible). For all three Makefiles (Linux, Windows and Windows 32bit) the rules are the same; what changes is the compiler, the compiler options, and the names of the object files.

The second question: is it easily possible to build mfaktc in subdirectories of src/ (i.e., where to place the object files)?

I'm dreaming of[LIST][*]make linux (build mfaktc in src/build.linux)[*]make win32 (build mfaktc in src/build.win32)[*]make win64 (build mfaktc in src/build.win64)[*]make wagstaff linux (build mfaktc for wagstaff numbers in src/build.wagstaff.linux (add -DWAGSTAFF to CFLAGS))[*]make wagstaff win32 (build mfaktc for wagstaff numbers in src/build.wagstaff.win32 (add -DWAGSTAFF to CFLAGS))[*]make wagstaff win64 (build mfaktc for wagstaff numbers in src/build.wagstaff.win64 (add -DWAGSTAFF to CFLAGS))[/LIST]Of course stuff like make clean needs to be adjusted, too.
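
One way to get close to that with plain GNU make is a dispatch Makefile where each goal just re-invokes make with the right compiler, flags and build directory. This is only a sketch under stated assumptions (compiler names, flags and the 'mfaktc' sub-target are illustrative; hyphenated goals like 'make wagstaff-linux' are used because one goal word cannot easily modify another goal's flags):

```make
# Sketch only -- compilers, flags and paths are illustrative.
CFLAGS := -O2

define BUILD
mkdir -p src/build.$(1) && $(MAKE) -C src OBJDIR=build.$(1) CC=$(2) CFLAGS="$(CFLAGS) $(3)" mfaktc
endef

linux:          ; $(call BUILD,linux,gcc,)
win32:          ; $(call BUILD,win32,cl,-m32)
win64:          ; $(call BUILD,win64,cl,)
wagstaff-linux: ; $(call BUILD,wagstaff.linux,gcc,-DWAGSTAFF)
wagstaff-win32: ; $(call BUILD,wagstaff.win32,cl,-m32 -DWAGSTAFF)
wagstaff-win64: ; $(call BUILD,wagstaff.win64,cl,-DWAGSTAFF)

clean:          ; rm -rf src/build.*
.PHONY: linux win32 win64 wagstaff-linux wagstaff-win32 wagstaff-win64 clean
```

make clean then only has to wipe src/build.*, and each flavour's object files stay isolated in their own subdirectory.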

Oliver

chalsall 2013-03-06 20:09

[QUOTE=TheJudger;332189]For all three Makefiles (Linux, Windows and Windows 32bit) the rules are the same; what changes is the compiler, the compiler options, and the names of the object files.[/QUOTE]

I'm afraid I'm [I][U]WAY[/U][/I] too busy at the moment to do anything but offer a suggestion...

The [URL="http://en.wikipedia.org/wiki/GNU_build_system"]GNU build system[/URL] (AKA Autotools) [I][U]might[/U][/I] help you achieve what you want.

YMMV. I [I]never[/I] compile under Windows....

Rodrigo 2013-03-07 06:42

[QUOTE=Batalov;332129]What is your card's brand? You may want to start from their website; usually they use the clocking tools to lure you to register your product (which is not necessarily a bad idea). An older version of the tool may be on the CD/DVD that was included with the card...
[URL="http://www.evga.com/precision/"]EVGA Precision[/URL] is here,
[URL="http://event.msi.com/vga/afterburner/download.htm"]MSI Afterburner[/URL] ,
Gigabyte is [URL="http://www.gigabyte.com/support-downloads/utility.aspx?cg=3"]somewhere in there[/URL]...
I am sure that Zotac, and others have similar web pages.[/QUOTE]
Thanks, Batalov. This one is a Zotac card, so I'll go in there and see what they have.

Or maybe I'll check out the included CD and actually use it for something! :smile:

Rodrigo

akruppa 2013-03-09 14:50

Btw, is there a particular reason for disallowing composite exponents?

TheJudger 2013-03-09 15:41

Hi Alex,

I didn't spend time on composite exponents; do the same rules apply to them?
At least for even exponents there are factors which are +/-3 mod 8; these are excluded in mfaktc because for prime exponents all factors are +/-1 mod 8.

Oliver

akruppa 2013-03-09 16:10

I guess we can limit ourselves to primitive divisors; divisors of algebraic factors may or may not be found but we can leave those for the user to figure out. For a primitive divisor p of 2^n-1, we have n|p-1, like for prime exponents (where every divisor > 1 is primitive).

As you note, the +-1 (mod 8) thing may not work when n is even. You could still do some tricks with the quadratic character of 2: for example, when p == 1 (mod 4) and 2||n, 2 must be a QR mod p for 2^n == 1 (mod p) to hold, so we can actually restrict ourselves to p == 1 (mod 8), i.e., no need to test p == 5 (mod 8); but I think that can also be left alone for a start.

Of course you don't need to sieve by small primes q|n, since p=kn+1 is never divisible by q - if you don't skip those, the sieve init will probably try an impossible mod inverse.

To get something that can be used at all, I think it's enough to allow composite exponents, allow +-3 (mod 8) factors when the exponent is even, and not sieve with primes that divide the exponent.

James Heinrich 2013-03-10 02:12

For what it's worth, the mfaktc performance chart now includes data (from a single benchmark, so interpret loosely) for the GTX Titan. Short story: about the same mfaktc performance as a GTX 570.
[url]http://www.mersenne.ca/mfaktc.php[/url]

ixfd64 2013-03-10 02:35

[QUOTE=akruppa;332531]Btw, is there a particular reason for disallowing composite exponents?[/QUOTE]

Considering that mfaktc was designed to eliminate Mersenne prime candidates, it's probably a bit pointless to have it TF Mersenne numbers with composite exponents, which are already known to be composite.

