
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

storm5510 2019-11-02 12:36

[QUOTE=chalsall;529320]Yup, that's how this thing works... :smile:

Not clear what you mean by ceiling value. [U]We "release" back to Primenet at 77[/U] (currently).[/QUOTE]

Sorry, I should have written end bits.

Sometime during the night, I crossed over into 96M, still running to 74 bits. I may have mentioned this before. It takes about 30 minutes for each of these with [I]mfaktc[/I]. 2[SUP]77[/SUP] would take four hours each so I would not get many done.
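The four-hour figure follows from the geometry of trial factoring: each extra bit level doubles the number of candidate factors to test, so runtime roughly doubles per bit (assuming constant mfaktc throughput). A minimal sketch of that scaling:

```python
def tf_minutes(known_minutes, known_bits, target_bits):
    """Scale a known trial-factoring time to a higher bit level.

    Each additional bit level doubles the candidate-factor count,
    so runtime roughly doubles per bit (constant-throughput assumption).
    """
    return known_minutes * 2 ** (target_bits - known_bits)

# ~30 minutes per exponent to 74 bits implies, at 77 bits:
print(tf_minutes(30, 74, 77))  # 240.0 minutes, i.e. about four hours
```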

I do not run my 1080 at full capacity. The reason is that, if anything were to happen to it, I would never be able to replace it. When I bought it, I was still working. I am retired now. I use [I]Afterburner[/I] to throttle it back, with the power limit set at 80%. At that level, it still runs over 1,000 GHz-d/day. That's good enough, for me anyway. :smile:

chalsall 2019-11-02 23:33

[QUOTE=storm5510;529463]Sometime during the night, I crossed over into 96M, still running to 74 bits. I may have mentioned this before. It takes about 30 minutes for each of these with [I]mfaktc[/I]. 2[SUP]77[/SUP] would take four hours each so I would not get many done.[/QUOTE]

Yup. A 76 -> 77 bit run takes a bit over ten hours on a K80, and 3:40 on a P100.

But /someone/ has to do them (hint-hint... LG72D Luke)... :wink:

Just to reminisce a bit... When we all started this adventure way back in 2011, we had no idea where we should actually TF to. GPU72 was named as it was because it rhymed, not because we thought we would regularly reach that level.

A huge shout-out to James, and his [URL="https://www.mersenne.ca/cudalucas.php?model=745"]economic cross-over analysis[/URL] which shows exactly where it "makes sense" to TF to, vs. where there will be more throughput by just running a CUDA-based LL job.

Which brings something up... We're going to have to start planning on going up to 78 soon... :smile:
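The cross-over analysis boils down to an expected-value comparison. The sketch below is a simplified reading of it, not James's actual model: the factor probability (roughly 1/b for a factor between 2[SUP]b[/SUP] and 2[SUP]b+1[/SUP]) is the standard GIMPS heuristic, and the timings are made up for illustration.

```python
def tf_pays_off(tf_hours, ll_hours, bit):
    """Is TF from bit level `bit` to `bit`+1 worth the GPU time?

    Heuristic: a Mersenne number has a factor in [2^bit, 2^(bit+1))
    with probability about 1/bit.  Finding one eliminates both the
    first-time LL test and its double-check, saving ~2 * ll_hours.
    """
    expected_saving = (1 / bit) * 2 * ll_hours
    return tf_hours < expected_saving

# Illustrative numbers: a 10-hour 77-bit TF run vs. a 400-hour LL test.
print(tf_pays_off(10.0, 400.0, 77))  # True: expected saving ~10.4 hours
```

When the expected saving drops below the TF cost, throughput is higher just running the LL test, which is exactly the "cross-over" the analysis pins down per card.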

PhilF 2019-11-03 00:03

If we take the future into consideration now, we should rename the effort to GPU92. :smile:

Uncwilly 2019-11-03 00:18

[QUOTE=PhilF;529504]If we take the future into consideration now, we should rename the effort to GPU92. :smile:[/QUOTE]
GPU95 sounds better to me.

petrw1 2019-11-03 00:45

[QUOTE=chalsall;529501]
A huge shout-out to James, and his [URL="https://www.mersenne.ca/cudalucas.php?model=745"]economic cross-over analysis[/URL] which shows exactly where it "makes sense" to TF to, vs. where there will be more throughput by just running a CUDA-based LL job.

Which brings something up... We're going to have to start planning on going up to 78 soon... :smile:[/QUOTE]

Your link is for the 2080Ti; still a top-end fringe card.
Shouldn't we be using a more mainstream (common) card like the 1080Ti?

chalsall 2019-11-03 01:02

[QUOTE=petrw1;529509]Your link is for the 2080Ti; still a top-end fringe card. Shouldn't we be using a more mainstream (common) card like the 1080Ti?[/QUOTE]

It depends on what is "in the fleet", and the total GHzD/D available. Even for a 1080 (CC 6.1) it makes sense to go to 78 at 115M or so.

Also, some just like finding factors. And so will do a "bit or two" past the "optimal cross-over" point.

axn 2019-11-03 04:12

[QUOTE=petrw1;529509]Your link is for the 2080Ti; still a top-end fringe card.
Shouldn't we be using a more mainstream (common) card like the 1080Ti?[/QUOTE]

You should be using an even more mainstream card like the [URL="https://www.mersenne.ca/cudalucas.php?model=751"]1660 Ti[/URL], which beats the crap out of the 1080 Ti in TF at a fraction of the cost and power consumption. Since it is the same family as the 2080 Ti, the conclusion remains unchanged.

storm5510 2019-11-03 14:00

[QUOTE=chalsall;529501]Yup. A 76 -> 77 bit run takes a bit over ten hours on a K80, and 3:40 on a P100.

But /someone/ has to do them (hint-hint... LG72D Luke)... :wink:

Just to reminisce a bit... When we all started this adventure way back in 2011, we had no idea where we should actually TF to. GPU72 was named as it was because it rhymed, not because we thought we would regularly reach that level.

A huge shout-out to James, and his [URL="https://www.mersenne.ca/cudalucas.php?model=745"]economic cross-over analysis[/URL] which shows exactly where it "makes sense" to TF to, vs. where there will be more throughput by just running a CUDA-based LL job.

Which brings something up... We're going to have to start planning on going up to 78 soon... :smile:[/QUOTE]

Going to 2[SUP]78[/SUP] may be a good idea, but it would not be practical, at least for me. Someone having a 16xx or 20xx series card would be better suited to do this work. Those of us with less powerful GPUs can still do the lower bit tests. As for me, 75 bits would be my absolute ceiling.

[U]Off-topic[/U]: My son is a big AMD fan. He recently built a new system. There was a compatibility issue with AMD video cards. He ended up with a GTX-1660. He doesn't understand it, nor does he like it. He'll go back to AMD sometime. I'll see if I can get my hands on his 1660 when he does.

chalsall 2019-11-03 14:34

[QUOTE=storm5510;529544]Someone having a 16xx or 20xx series card would be better suited to do this work. Those of us with less powerful GPUs can still do the lower bit tests. As for me, 75 bits would be my absolute ceiling.[/QUOTE]

And that is exactly why people are allowed to set their own "Pledge" level (and even range, if they so choose).

Whatever floats your boat! Every "bit" helps! :tu:

[QUOTE=storm5510;529544][U]Off-topic[/U]: My son is a big AMD fan. He recently built a new system. There was a compatibility issue with AMD video cards. He ended up with a GTX-1660. He doesn't understand it, nor does he like it. He'll go back to AMD sometime. I'll see if I can get my hands on his 1660 when he does.[/QUOTE]

Yeah; I can relate...

I always like to support the "underdog" wherever possible. A while ago I was building a new Linux workstation, and I bought a cheap AMD GPU card to drive a monitor. All kinds of compatibility issues -- and this was with CentOS, a pretty mainstream distribution.

At the end of the day, I took the card back, and instead got an NVidia card. And this wasn't even for compute; just to drive a display.

Sometimes you just have to choose the option that works, and get on with the job.

PhilF 2019-11-03 15:13

[QUOTE=storm5510;529544][U]Off-topic[/U]: My son is a big AMD fan. He recently built a new system. There was a compatibility issue with AMD video cards. He ended up with a GTX-1660. He doesn't understand it, nor does he like it. He'll go back to AMD sometime. I'll see if I can get my hands on his 1660 when he does.[/QUOTE]

I have been a computer professional since the 1970s, and AMD has had that problem [U]for this entire time[/U]. They simply cannot write drivers, period. I would never consciously use an AMD video card in my main system, whether it was running Linux or Windows.

Now, if someone just gave me a Radeon VII, I would use it to crunch numbers in a separate box, but that's about it.

Just my 2 cents worth.

chalsall 2019-11-03 17:21

[QUOTE=PhilF;529504]If we take the future into consideration now, we should rename the effort to GPU92. :smile:[/QUOTE]

LOL...

Domain registered... :wink:

