
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Lone Mersenne Hunters (https://www.mersenneforum.org/forumdisplay.php?f=12)
-   -   Found a factor? Turn yourself in to become famous! (https://www.mersenneforum.org/showthread.php?t=13977)

retina 2019-09-13 23:21

[QUOTE=lycorn;525782]What if you had found this one?
11510125[/QUOTE]I don't understand. :confused:

Jwb52z 2019-09-15 23:43

P-1 found a factor in stage #2, B1=885000, B2=18585000.
UID: Jwb52z/Clay, M96078113 has a factor: 40150180980878122799159 (P-1, B1=885000, B2=18585000)

75.088 bits.

Jwb52z 2019-09-17 18:28

P-1 found a factor in stage #1, B1=820000.
UID: Jwb52z/Clay, M94654033 has a factor: 7507521220789479758248769 (P-1, B1=820000)

82.635 bits.
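Reports like this are easy to double-check: any factor q of M_p (p prime) satisfies 2^p ≡ 1 (mod q) and q ≡ 1 (mod 2p), and the quoted bit size is just log2 of the factor. A quick Python sanity check using the exponent and factor from the report above:

```python
import math

# Exponent and factor from the report above
p = 94654033
q = 7507521220789479758248769

# q divides M_p = 2^p - 1  <=>  2^p ≡ 1 (mod q)
assert pow(2, p, q) == 1

# Every factor of M_p (p prime) has the form 2*k*p + 1
assert (q - 1) % (2 * p) == 0

# The quoted size is just log2 of the factor
print(f"{math.log2(q):.3f} bits")  # 82.635 bits
```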

mrh 2019-09-25 00:38

Found one for [M]7111127[/M] - [URL="https://www.mersenne.ca/exponent/7111127"]786612060695816024641268553407[/URL] 99.312 bits

Jwb52z 2019-09-29 04:28

P-1 found a factor in stage #1, B1=890000.
UID: Jwb52z/Clay, M96570209 has a factor: 24529069813789561422559319 (P-1, B1=890000)

84.343 bits.

mrh 2019-09-29 21:07

I think this is the largest I've found so far [M]3333397[/M] [URL="https://www.mersenne.ca/exponent/3333397"]5987402934250702953071699518972409[/URL] 112.206 bits

James Heinrich 2019-09-29 21:12

[QUOTE=mrh;526924]I think this is the largest I've found so far [M]3333397[/M] [URL="https://www.mersenne.ca/exponent/3333397"]5987402934250702953071699518972409[/URL] 112.206 bits[/QUOTE]It is the largest of the 5 you've found: [url]https://www.mersenne.ca/pm1user/19538[/url]

Maciej Kmieciak 2019-10-03 22:00

Hey, I read that GPUs are more efficient at higher bitlevels. So why is the GHz-days/day figure lower for 73-74 than for 71-73?

[CODE]
no factor for M909985499 from 2^71 to 2^72 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 1m 9.138s (656.78 GHz-days / day)

no factor for M909985499 from 2^72 to 2^73 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 2m 17.558s (660.21 GHz-days / day)

no factor for M909985499 from 2^73 to 2^74 [mfakto 0.14-Win cl_barrett15_82_gs_2]
tf(): total time spent: 5m 6.655s (592.31 GHz-days / day)

no factor for M909985451 from 2^71 to 2^72 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 1m 8.872s (659.32 GHz-days / day)

no factor for M909985451 from 2^72 to 2^73 [mfakto 0.14-Win cl_barrett15_73_gs_2]
tf(): total time spent: 2m 17.148s (662.19 GHz-days / day)

no factor for M909985451 from 2^73 to 2^74 [mfakto 0.14-Win cl_barrett15_82_gs_2]
tf(): total time spent: 5m 5.676s (594.21 GHz-days / day)
[/CODE]

nomead 2019-10-04 00:11

[QUOTE=Maciej Kmieciak;527251]Hey, I read that GPUs are more efficient at higher bitlevels. So why is the GHz-days/day figure lower for 73-74 than for 71-73?
[/QUOTE]
Short answer: For those longer factors, mfakto needs to use a different GPU kernel that is less efficient; in other words, it uses more instructions for the same operation.

Longer answer: Instead of division, mfakto (and mfaktc) use Barrett reduction, which essentially turns division into multiplication. Because the multipliers available in the GPU cores are relatively narrow, some tricks are needed to extend the precision. There are further optimization tricks that can be applied, but these have the side effect of eating into this extended precision, so each more optimized GPU kernel has a lower corresponding maximum bitlevel.
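For illustration, the core idea of Barrett reduction can be sketched in a few lines of Python. This is a generic single-precision version with illustrative names and values, not mfakto's actual kernel code, which splits everything across 15- or 32-bit limbs and is where the precision trade-off described above comes in:

```python
# A generic sketch of Barrett reduction; names and driver values below
# are illustrative, not taken from mfakto's source.

def barrett_reduce(x, n, k, mu):
    """Compute x mod n using only multiplication, shifts and subtraction,
    given a precomputed mu = floor(2^k / n). For x < n^2 and
    k = 2 * bitlength(n), the quotient estimate is off by at most 2."""
    q = (x * mu) >> k   # cheap estimate of x // n (never overshoots)
    r = x - q * n       # r >= 0 because q <= x // n
    while r >= n:       # at most a couple of correction subtractions
        r -= n
    return r

n = 99991                 # modulus (a trial-factor candidate, say)
k = 2 * n.bit_length()
mu = (1 << k) // n        # the one real division, done once per modulus
x = 987654321             # any value below n^2
assert barrett_reduce(x, n, k, mu) == x % n
```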

The GHz-day formula doesn't take these kernel changes into account, as it is supposed to reflect the time a CPU, not a GPU, takes to factor something to a given bitlevel.

Now, I'm not familiar with AMD or mfakto specifics: are the barrett15_* kernels really still more efficient than barrett32_* even on more modern cards?

LaurV 2019-10-04 03:41

[QUOTE=Maciej Kmieciak;527251]Hey, I read that GPUs are more efficient at higher bitlevels. So why is the GHz-days/day figure lower for 73-74 than for 71-73?[/QUOTE]
Short-short-shortest (to paraphrase the ante-poster) answer: because you need more time to count to 13 than you need to count to 11. And if you have a task that requires repeatedly counting to 13, then you will do less of these tasks per day, compared with a task that asks you to repeatedly count to 11.

Correct answer (which correctly implies that my previous answer, as well as both answers from the ante-poster, are wrong :razz:): Because the formula to calculate the credits is wrong, in the sense that it is only an approximation, based on empirical evidence, derived from the middle-age times when only CPUs could do TF. The GHzDays/Day (as a measuring unit) should be the amount of work that a single-core 32-bit CPU running at 1 GHz can do in one day. This is (approximately) how it was defined long ago. It should have nothing to do with the exponents, bitlevels, TF, LL, whatever. But the invention of multi-cores, 64-bit CPUs, GPUs, airplanes, the flying spaghetti monster, and other alien stuff that has invaded us lately heavily changed the odds. Anyhow, such a measurement, even if we could make it extremely accurate, would not be useful (think about it: if your card always shows 562.73 GHzD/D regardless of what you are doing with it, what would be the point?). The actual calculation could be altered by many things, including a "stimulus" for people to do a certain type of work (yes, you might get more credit doing this or that bitlevel, in this or that range, because that is what's most needed now - well, dream on! that would be ideal, wouldn't it?)

Of course, everybody can use his/her cards and electricity money to do whatever type of work suits them best.

James Heinrich 2019-10-04 08:38

GHz-days credit for TF is broadly based on bitlevel, derived from how well an Intel CPU of decades ago could process TF work using Prime95 ([url=https://www.mersenne.ca/throughput.php?cpu1=Intel%28R%29+Pentium%28R%29+III+processor%7C256%7C0&mhz1=600]example[/url]). The credit is scaled according to 3 ranges: up to 62-bit is "easy" and given 62.58% credit, 63-64 bit is "slightly easier" and given 95.15% credit, and >=65-bit gets 100% credit.

Since all the TF done now is >65-bit, the credit given is linear, but it is subject to architectural efficiencies of the GPU (or whatever device you're using) and of the software running the calculation. Broadly speaking, higher bit depths require more bits to be played with at once and therefore slow the calculations down somewhat, which is why mfakt[i]x[/i] will choose the smallest kernel that can process the current assignment, since that will be the fastest.
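That three-range scaling is simple enough to sketch directly. The cutoffs and percentages are taken from the description above; this is only the scaling step, not the full server-side credit formula:

```python
def tf_credit_scale(bitlevel):
    """Credit multiplier applied to TF work, per the three ranges
    described above (values from the post, not from Primenet source)."""
    if bitlevel <= 62:
        return 0.6258   # "easy" range
    elif bitlevel <= 64:
        return 0.9515   # "slightly easier" range
    else:
        return 1.0      # full credit; all current TF work lands here

assert tf_credit_scale(74) == 1.0
```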



For historical interest, for the old-timers, I found these old notes in the code:[code]CPU credit - background information:

In Primenet v4 we used a 90 MHz Pentium CPU as the benchmark machine
for calculating CPU credit. The official unit of measure became the
P-90 CPU year. In 2007, not many people own a plain Pentium CPU, so we
adopted a new benchmark machine - a single core of a 2.4 GHz Core 2 Duo.
Our official unit of measure became the C2GHD (Core 2 GHz Day). That is,
the amount of work produced by the single core of a hypothetical
1 GHz Core 2 Duo machine. A 2.4 GHz should be able to produce 4.8 C2GHD
per day.

To compare P-90 CPU years to C2GHDs, we need to factor in both
the raw speed improvements of modern chips and the architectural
improvements of modern chips. Examining prime95 version 24.14 benchmarks
for 640K to 2048K FFTs from a P100, PII-400, P4-2000, and a C2D-2400
and compensating for speed differences, we get the following architectural
multipliers:

One core of a C2D = 1.68 P4.
A P4 = 3.44 PIIs
A PII = 1.12 Pentium

Thus, a P-90 CPU year
= 365 days * 1 C2GHD * (90MHz / 1000MHz) / 1.68 / 3.44 / 1.12
= 5.075 C2GHDs[/code]
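The conversion arithmetic in those notes checks out; recomputing it from the numbers quoted above:

```python
# P-90 CPU year expressed in C2GHDs, per the historical notes above:
# 365 days * (90 MHz / 1000 MHz), divided by the three
# architectural multipliers (C2D/P4, P4/PII, PII/Pentium)
c2ghd_per_p90_year = 365 * (90 / 1000) / 1.68 / 3.44 / 1.12
print(f"{c2ghd_per_p90_year:.3f}")  # 5.075
```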

