2018-04-03, 03:44   #12
Mark Rose
Quote:
Originally Posted by airsquirrels View Post
Figure out the algorithm and we could do it!

Requirements:
Input: 32 byte header hash of proposed block, nonce (64 bit worker specific attempt)

Algorithm: ??? Test a range of bits on some candidates???

Output:
List of factors found +
32 byte hash whose 256 bit value is < target

How do you ensure each hash attempt is on previously unfactored candidates?

Perhaps each block includes a range of candidates from a pre-determined pool, iterating through the pool every block? The hash would have to be the mod sum of candidate attempts or some other not-so-fakeable value. Perhaps the nonce represents the low order bits and high order ranges are checked?
Each miner could keep a simple bitmap of which exponents are factored. We need to keep <40% of the exponents, so the first 2^32 exponents would require <1.7 GB to store. As everything below 1277 is factored, we could define the exponent range for TF Coin to be 2^10 to 2^42. Results for exponents 2^43 and above would be invalid. This caps the total number of coins to ever be mined.
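
A minimal sketch of such a bitmap, assuming a flat one-bit-per-exponent layout (512 MiB covers the first 2^32 exponents); restricting it to odd or prime exponents would shrink it further, and nothing here is tied to any real GIMPS data format:
Code:
class FactoredBitmap:
    """One bit per exponent; bit p is set once a factor of Mp is known."""
    def __init__(self, max_exponent: int = 2**32):
        self.bits = bytearray(max_exponent // 8)   # 512 MiB for 2^32 exponents

    def mark_factored(self, p: int) -> None:
        self.bits[p >> 3] |= 1 << (p & 7)

    def is_factored(self, p: int) -> bool:
        return bool(self.bits[p >> 3] & (1 << (p & 7)))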

Do we reward work done or factors found? I'm inclined to go with rewarding factors found on an exponent, since verifying a reported factor is trivial. I'm inclined not to track unsuccessful TF attempts: we can't easily verify the work was actually done. This means work will be repeated, which is what we want anyway to catch missed factors. Smart pools will obviously avoid duplicating work, but implement heuristics to detect underperforming clients.
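
For reference, the verification really is a one-liner; the function name and signature below are just an illustration:
Code:
def verifies(p: int, q: int) -> bool:
    """q divides Mp = 2^p - 1 exactly when 2^p = 1 (mod q)."""
    return q > 1 and pow(2, p, q) == 1

# Example: M11 = 2047 = 23 * 89
assert verifies(11, 23) and verifies(11, 89)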

Machines that find new factors in each round would be eligible to certify the block. They could reject factors for exponents already found.

The lower the exponent and the larger the prime factor, the higher the payout should be. I'm sure someone smarter than me knows the magic formula for how many candidates there are at a given bit depth for a given exponent. We could then multiply that formula by a function (perhaps a rational function?), such that finding factors for the highest exponents is almost worthless: this should focus the effort on low exponents.
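
For what it's worth, the usual rough estimate: any factor q of Mp has the form q = 2kp + 1 with q ≡ ±1 (mod 8), so about 2^(b-3)/p candidates sit in the bit level (2^(b-1), 2^b] before any small-prime sieving. A toy payout weight built on that estimate might look like the sketch below; the 1/log2(p) decay is purely illustrative, not a proposal:
Code:
from math import log2

def candidate_count(p: int, b: int) -> float:
    """Rough number of TF candidates for Mp in (2^(b-1), 2^b]:
    q = 2kp + 1 with q = +/-1 (mod 8), before further sieving."""
    return 2 ** (b - 3) / p

def payout_weight(p: int, b: int) -> float:
    """Illustrative only: scale with the work at that bit level and decay
    with the exponent so huge exponents are nearly worthless."""
    return candidate_count(p, b) / log2(p)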

I suppose if we wanted to make a dynamically adjusting target, we could restrict the largest eligible exponent for a round or something like that. I'm not sure about the best way to determine this cutoff though.

I'll have to study cryptocurrency algorithms more to comment further. I've probably missed critical things in my thoughts above.

2018-04-03, 04:06   #13
Mark Rose
Quote:
Originally Posted by GP2 View Post
That's only a problem for the currency itself. It would be a boon to the project's goal of finding factors with low power expenditure.
Bingo.

Quote:
In fact, why not make the whole thing a scam by design. It would hardly be the first cryptocurrency to fit that description. Deliberately make it ASIC-able, albeit in a non-obvious way. After the dust settles, we buy up the ASICs at fire-sale prices and use them for our own purposes to find factors. The ASIC designers might even put extra R&D effort into it because of the durable hobbyist aftermarket for their product even after the currency itself fails.
Why bother trying to obfuscate ASIC-ability? It's not really possible: all that's needed is wide binary division circuitry. Multiple groups could easily implement an ASIC.

2018-04-03, 11:07   #14
M344587487
Quote:
Originally Posted by airsquirrels View Post
...
Algorithm: ??? Test a range of bits on some candidates???
...
It would be up to the mining pool to divvy up work (probably) on a range basis; the chain itself should just specify which candidates it will accept factors for to create the next block. The only proof we have is "factor found", so all we can specify is which candidates we are accepting factors for.

Quote:
Originally Posted by airsquirrels View Post
...
How do you ensure each hash attempt is on previously unfactored candidates?
...
I think a 64-bit candidate space is too low, but I could be wrong. Basically, you'd make the candidate space large enough to beat the birthday paradox with a hefty stick, and choose the candidates randomly based on the hash of the previous block. 128-bit should do it, though 96-bit may work (but you need to consider that a block may be accepting many thousands of candidates, or orders of magnitude more depending on difficulty). You'd also bake in the candidates with known factors at time of launch (that are cheap to describe), skipping over those if they come up, and skipping any candidates factored on-chain.
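
A quick back-of-the-envelope for the birthday bound; the per-block candidate count and block interval below are made-up figures, just to show the order of magnitude:
Code:
space_bits = 128
candidates_per_block = 1 << 16          # assume ~65k candidates drawn per block
blocks_per_year = 365 * 24 * 6          # assume a block every ~10 minutes

draws = candidates_per_block * blocks_per_year
# P(any collision) ~= draws^2 / 2^(space_bits + 1) while draws << 2^space_bits
p_collision = draws ** 2 / 2 ** (space_bits + 1)
print(f"{p_collision:.1e}")             # ~2e-20 after a year: effectively never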

Quote:
Originally Posted by Dubslow View Post
The major problem with this is that it's ASIC-able. There will be too much centralization.
That is not really a problem; in fact, from the point of view of prime hunting it's great if specialised factoring hardware is created. Take Bitcoin: the ship has sailed on ASICs, but it still works; it just isn't mineable by consumer hardware. Wouldn't it be great if GPUs were no longer top dog for factoring? They could be used for LL and PRP instead ;)

BTW how ASIC-able are we talking? I've always thought GPUs are pretty well matched for factoring, but I suppose some sort of dedicated integer solution could blow them out of the water.

Quote:
Originally Posted by Mark Rose View Post
...
Do we reward work done or factors found? I'm inclined to go with rewarding factors found on an exponent, since verifying a reported factor is trivial. I'm inclined not to track unsuccessful TF attempts: we can't easily verify the work was actually done.
...
It has to be factors found, as "found a factor" is the only proof; the unsuccessful factoring is the work. An algorithm suitable for PoW has to be like finding a needle in a haystack (once you've found the needle, you can show everyone where it was without them having to search too). This is why we are talking about factoring and not LL or PRP: with those algorithms, the only way to know whether a result is correct is to repeat the steps they took. BTW, my earlier comment that "prime found" for LL is suitable as PoW was wrong for this reason.

Quote:
Originally Posted by Mark Rose View Post
...
Machines that find new factors in each round would be eligible to certify the block. They could reject factors for exponents already found.
...
The blockchain is self-certifying. When a block is broadcast to the network as potentially the next block, every node checks it, rejecting it as necessary. A node checks the validity of everything in the block and ensures it follows all the rules of the chain. One of those rules could be a baked-in list of known factored candidates (static at launch; it wouldn't be wise to update the list on the fly), and another could be rejecting any factors already on the chain.
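
Something like the sketch below is what every node would run; the block layout, field names and the two sets are placeholders, not a spec:
Code:
def validate_block(block: dict, known_factored: set, on_chain: set) -> bool:
    """Reject a block if any submitted (p, q) pair breaks the chain's rules."""
    for p, q in block["factors"]:
        if p in known_factored:           # baked-in list, static since launch
            return False
        if (p, q) in on_chain:            # this factor was already claimed on-chain
            return False
        if pow(2, p, q) != 1:             # q must actually divide Mp = 2^p - 1
            return False
    return True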

Quote:
Originally Posted by Mark Rose View Post
...
I suppose if we wanted to make a dynamically adjusting target, we could restrict the largest eligible exponent for a round or something like that. I'm not sure about the best way to determine this cutoff though.
...
It sounds like you're talking about difficulty; the concept is used in most (all?) PoW cryptocurrencies as a way to ensure new blocks are created at the same interval on average. Earlier I proposed that the difficulty be used to adjust how many candidates are valid to find a factor for (keeping the number of factors required to validate the block at 1), but now that I've had time to think about it, that goes against the spirit of the idea. Instead, the number of potential candidates should be fixed at some large number, and the difficulty is a measure of how many factors need to be found within that space. This has the negative trait that some found factors will never be submitted, because they were found as part of a block that never meets the difficulty requirement, but the positive is that we are not artificially restricting the candidate range, which would carry a far higher penalty of its own.
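
A hedged sketch of how the retarget could work, modelled on Bitcoin's; the window length, target interval and clamp are placeholders:
Code:
TARGET_BLOCK_SECONDS = 600
RETARGET_WINDOW = 144                    # look back over the last 144 blocks

def next_difficulty(current: float, window_timestamps: list) -> float:
    """Scale the required (weighted) factor count so blocks keep arriving on schedule."""
    actual = window_timestamps[-1] - window_timestamps[0]
    expected = TARGET_BLOCK_SECONDS * (len(window_timestamps) - 1)
    ratio = max(0.25, min(4.0, expected / actual))   # clamp swings, as Bitcoin does
    return current * ratio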

Quote:
Originally Posted by Mark Rose View Post
...
The lower the exponent and the larger the prime factor, the higher the payout should be. I'm sure someone smarter than me knows the magic formula for how many candidates there are at a given bit depth for a given exponent. We could then multiply that formula by a function (perhaps a rational function?), such that finding factors for the highest exponents is almost worthless: this should focus the effort on low exponents.
...
Varying the reward is not normally done on a blockchain, but there's a way to make it work with a fixed reward per block. Instead of the difficulty being a simple count of how many factors need to be found in a given space, your magic formula could be used to give more weight to factors that are harder to find. An ECM factor of Mp for a 30-bit p could exceed the difficulty requirement on its own, whereas you might need dozens of factors from the 60-bit p range (easily attained by TF at low bit levels) to exceed it.
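
As a toy illustration of that weighting, assuming the weight is simply the bit length of the factor over the bit length of the exponent (the real formula is exactly the open question here):
Code:
def factor_weight(p: int, q: int) -> float:
    """Harder finds score more: bigger factors and smaller exponents weigh heavier."""
    return q.bit_length() / p.bit_length()

def meets_difficulty(factors: list, difficulty: float) -> bool:
    """A block validates once the summed weights of its (p, q) pairs reach the target."""
    return sum(factor_weight(p, q) for p, q in factors) >= difficulty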

This is an interesting topic, neatly merging two of my interests. I might make a shopping list later of what I think a viable solution would be, but this post is a bit long and I'm hungry :P

2018-04-03, 15:36   #15
GP2
Quote:
Originally Posted by Mark Rose View Post
Why bother trying to obfuscate ASIC-ability? It's not really possible: all that's needed is wide binary division circuitry. Multiple groups could easily implement an ASIC.
Miners running GPU farms really hate the idea of ASIC designers muscling in on a currency. They want extreme measures to prevent that, such as algorithm modifications and hard forks or a switch to proof-of-stake.

But from our point of view, we only care about finding factors, not space heaters doing random proof-of-work.

So you'd be advertising yet another cryptocurrency, except you say it's obviously ASIC-able and we're ruling out ever doing any algorithm modifications? Hmm. I can't see mercenary GPU farmers getting on board. Are there enough math enthusiasts to achieve critical mass?

2018-04-03, 16:01   #16
Mark Rose
Quote:
Originally Posted by GP2 View Post
Miners running GPU farms really hate the idea of ASIC designers muscling in on a currency. They want extreme measures to prevent that, such as algorithm modifications and hard forks or a switch to proof-of-stake.
Indeed. I'm not happy about the ASIC Ethereum miners.

Quote:
So you'd be advertising yet another cryptocurrency, except you say it's obviously ASIC-able and we're ruling out ever doing any algorithm modifications? Hmm. I can't see mercenary GPU farmers getting on board. Are there enough math enthusiasts to achieve critical mass?
Maybe they would get on board for the initial rush? I'm not sure how we'd modify the algorithm, since TF is TF. Do you have any ideas?

2018-04-03, 17:20   #17
chalsall
Quote:
Originally Posted by Mark Rose View Post
Do you have any ideas?
A *very* interesting subject!

One of the big things I don't like about cryptocurrencies is that they are competitive rather than cooperative with their use of computational resources (and, thus, energy usage).

A few issues I can immediately think of for TF'ing Mersenne Prime candidates as a block-chain PoW:

1. There is nothing stopping someone with lots of ability from simply holding back new factors they've already found until new blocks need to be "solved".

1.1. The only way I can think of to get around this is to somehow have a completed block state what the next candidate(s) to TF are. Perhaps produce a list of at least a few thousand so that it is unlikely to be intractable, but at the same time not predictable.

2. Turn-around-time for each worker would have to be reduced significantly, since once a block has been completed each worker would have to start on the new challenge(s).

3. The temporal domain for the next block is somewhat undefined. Will people transferring "MFcoin" be comfortable not knowing how long the transfer will take?

4. A "double-spend" attack could easily be done with our current resources. Just today, for example, "The Judger" dumped 447K GHD/d worth of work (!) after only a temporal week.

Still, worth thinking about!

2018-04-03, 18:29   #18
M344587487

Spitballing how a TF PoW might work

Candidate space
  • Let p be a prime integer. Mp denotes a Mersenne number 2^p - 1 with prime exponent.
  • Let C be the set of the first 2^128 primes. These are the prime candidates.
  • Let K be the set of p in C that are known to have composite Mp.
  • Let K_n indicate the state of set K at block n.
  • K_0 is as up-to-date as possible at launch using external sources, while being as compact as possible. This is the only time external sources can be accounted for, short of hard forking (it's vital that every node maintain identical K_n).
  • K_n contains all of K_0, and any p shown to yield composite Mp on-chain (encoded in all blocks <n).
  • Let P_n be the set of p in C but not in K_n. This is the candidate space at block height n.
Note: It may be shown that updating P_n in this way is unwieldy and/or unnecessary. We may need to forget about updating K_n as we go and instead rely on the 128-bit space to minimise redoing work. A very compact K_0 representing the majority of composites found so far could still be used if deemed useful.

Limit allowable candidates to validate this block
  • Let M_n be the set of candidates a miner is allowed to find factors for to create the next block.
  • M_n is a subset of P_n with an arbitrary (fixed) number of members. Let's say 2^24 for now; in any case, something large enough to accommodate a variable difficulty while not so large that there is potential for abuse.
  • We sha256 hash the hash of the previous block together with the hash of all transactions in this block (which includes the transaction sending the reward to the miner's address). We use this sha256 value to derive the set M_n for this miner, using some deterministic algorithm to spit out a scattershot of candidates (or a scattershot of a range of candidates); see the sketch just after this list.
  • The consequence of doing it this way is that every miner will get a different set of candidates to search within for the next block, and it's not predictable what those candidates will be. Repeated work is minimised. Someone can't come along and steal a block by putting their own address in. The townsfolk rejoice
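
A minimal sketch of that derivation, assuming the candidates are indexed by a 128-bit value as above; the function name, the default set size and the exact inputs are assumptions, not part of the proposal:
Code:
import hashlib

def candidate_indices(prev_block_hash: bytes, tx_merkle_root: bytes, size: int = 2**24):
    """Deterministically scatter `size` candidate indices over the 128-bit space."""
    seed = hashlib.sha256(prev_block_hash + tx_merkle_root).digest()
    for i in range(size):
        h = hashlib.sha256(seed + i.to_bytes(8, "little")).digest()
        yield int.from_bytes(h[:16], "little")      # one 128-bit candidate index
Because the transaction hash commits to the miner's own reward address, every miner gets a different, unpredictable M_n, which is exactly the property wanted above.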

Difficulty
  • The difficulty is a 32-bit value denoting how much work needs to be done to validate the block. It is encoded in the blocks, and changes dynamically based on the previous however-many blocks. The aim is to average the time-between-blocks to a set value given that we are not in control of the hashrate.
  • The difficulty should be a representation of how many factors need to be found in M_n to validate the block. The factors could be weighted depending on how difficult they are to find (it could be as simple as bit-depth of the factor, of course only allowing the smaller of the two factors to be submitted).

Using the bitcoin block header as reference ( https://en.bitcoin.it/wiki/Block_hashing_algorithm ) , here is a potential block header:
Code:
hashPrevBlock - Hash of the previous block header
hashMerkleRoot - Hash of all the transactions in this block
Time - Current timestamp as seconds since 1970-01-01T00:00 UTC
Difficulty - As above
factorCount - Varint encoded ( https://en.bitcoin.it/wiki/Protocol_documentation#Variable_length_integer ) count of the factors found during PoW for this block
candidate/factor pair - A factorCount length list of candidate/factor pairs, using some variable length encoding (Varint is not sufficient)
If anyone wants to poke holes, please poke away.
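
A minimal serialisation sketch of that header, assuming Bitcoin's CompactSize rules for the varint and a naive length-prefixed encoding for the candidate/factor pairs (the header layout above already notes a plain varint is not sufficient for those):
Code:
import struct

def compact_size(n: int) -> bytes:
    """Bitcoin-style variable length integer."""
    if n < 0xfd:
        return n.to_bytes(1, "little")
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def encode_bigint(x: int) -> bytes:
    raw = x.to_bytes((x.bit_length() + 7) // 8 or 1, "little")
    return compact_size(len(raw)) + raw

def serialise_header(prev_hash: bytes, merkle_root: bytes, time: int,
                     difficulty: int, pairs: list) -> bytes:
    out = prev_hash + merkle_root
    out += struct.pack("<II", time, difficulty)
    out += compact_size(len(pairs))                 # factorCount
    for candidate, factor in pairs:
        out += encode_bigint(candidate) + encode_bigint(factor)
    return out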

Quote:
Originally Posted by GP2 View Post
Miners running GPU farms really hate the idea of ASIC designers muscling in on a currency. They want extreme measures to prevent that, such as algorithm modifications and hard forks or a switch to proof-of-stake.
...
Of course GPU miners are against ASICs, as they're competition (they would not want PoS either; hodlers would benefit most from that), and general consensus is against ASICs due to potential centralisation, but it's a hypothetical problem. An ASIC comes to town when it's profitable to do so; we're not even halfway to putting the kettle on to contemplate the vague idea of how a factoring PoW would potentially work yet.

Quote:
Originally Posted by GP2 View Post
...
But from our point of view, we only care about finding factors, not space heaters doing random proof-of-work.
...
For it to work we do need to care, otherwise we'd stick to pure computation and forget about trying to make a hybrid solution.

Quote:
Originally Posted by GP2 View Post
So you'd be advertising yet another cryptocurrency, except you say it's obviously ASIC-able and we're ruling out ever doing any algorithm modifications? Hmm. I can't see mercenary GPU farmers getting on board. Are there enough math enthusiasts to achieve critical mass?
I don't think it's as bleak as all that. By my fag-packet estimation, the equivalent of 6.5e6 high-end cards are mining ETH, 5e5 are mining XMR, and 7.3e5 are mining ZCash (the biggest coins on the biggest GPU algorithms AFAIK; ASICs are in there, but it still represents the majority of GPU mining). That's a big market, of which just a small percentage would be a big win in computational power. Margins have already dropped a lot since the latest peak (actually, I don't think it would take much more for a reset of sorts). Now may be as good a time as any to pitch a PoW that does something useful.

In the beginning it would just be enthusiasts and speculators mining it. Over time the hope would be that the coins gain value based on the work it takes to mine them (the work being useful may help with the perceived value); then it may be possible to break into the mainstream and have proper momentum/trading happen. It would take time to accumulate enough coins to enable any sort of trading anyway (you start at zero, after all), so in the beginning it'll always be a grassroots thing.

2018-04-03, 18:44   #19
airsquirrels

Quote:
Originally Posted by chalsall View Post
A *very* interesting subject!

1. There is nothing stopping someone with lots of ability to simply keep back from submitting new factors they've already found until new blocks are needed to be "solved".

1.1. The only way I can think of to get around this is to somehow have a completed block state what the next candidate(s) to TF are. Perhaps produce a list of at least a few thousand so that it is unlikely to be intractable, but at the same time not predictable.

2. Turn-around-time for each worker would have to be reduced significantly, since once a block has been completed each worker would have to start on the new challenge(s).

3. The temporal domain for the next block is somewhat undefined. Will people transferring "MFcoin" be comfortable not knowing how long the transfer will take?

4. A "double-spend" attack could easily be done with our current resources. Just today, for example, "The Judger" dumped 447K GHD/d worth of work (!) after only a temporal week.

Still, worth thinking about!

1 was my biggest concern. Regarding 3, at a large enough scale, don't statistics take care of this?

I don't think you can prevent pre-factoring; you just have to ensure that you would have to pre-factor so much ahead of time that you'd be better off just running the PoW algorithm in real time.

One way I could think of to prevent users from pre-factoring specific ranges is to make factoring only a portion of the proof of work, or to have the hash of the previous block (which must be part of the input) somehow restrict which factors are considered valid. I.e. 0x112233445566778899aabbccddeeff, aka ol' 100010010001000110011010001000101010101100110011101111000100010011001101010101011101111001100110111011110111011111111, is a bitmap of valid candidates for the lower 9 bits, with the last digit assumed to be 1 (no evens). You'd have a lot of overlap, but maybe that isn't a bad thing.



1. Get previous block header, and block target candidate range (and possibly target factor range?) that's pre-computed and known for each block number. Assigning ranges not currently factored ensures we're doing new work with this fun algorithm.

2. Sieve the candidates (within your assigned range) against both primes and the 256-bit header mask to get valid candidates.

3. Run the TF algorithm against candidates for your assigned starting range up as high as you can go in the block time.

4. Generate a solutions message, and use a traditional hashing algorithm to hash the previous block header along with your solutions. Submit both to the server.

5. The server rejects any duplicate solutions. Rather than competing on time, the nodes operate on a fixed block time, and the solution with the highest sum of log2(factors) is chosen to seal the block and get the reward.
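
As a concrete reference for step 3, a minimal (completely unoptimised) trial-factoring routine for a single exponent; real TF code such as mfaktc sieves the 2kp+1 candidates against small primes first, which this sketch skips:
Code:
def tf_smallest_factor(p: int, max_bits: int):
    """Any factor q of Mp has the form q = 2kp + 1 with q = +/-1 (mod 8)."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None                              # no factor below 2^max_bits
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                                 # q divides 2^p - 1
        k += 1

# Example: M11 = 2047 = 23 * 89; the factor 23 turns up at k = 1.
assert tf_smallest_factor(11, 20) == 23
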

2018-04-03, 19:26   #20
chalsall

Quote:
Originally Posted by airsquirrels View Post
1 was my biggest concern. Regarding 3, at a large enough scale, don't statistics take care of this?
Only if you have large enough participation.

Quote:
Originally Posted by airsquirrels View Post
I don't think you can prevent pre-factoring; you just have to ensure that you would have to pre-factor so much ahead of time that you'd be better off just running the PoW algorithm in real time.
Yeah.

One way to do that is to somehow make the TF result dependent upon previous inputs. Like the last hash, for example.

Those that are just into TF'ing will just TF, of course.

2018-04-03, 19:40   #21
science_man_88

Quote:
Originally Posted by M344587487 View Post
Let C be the set of the first 2^128 primes. These are the prime candidates.
What are all the primes up to : ~31622776601683793319988935444327185337195 ?


2018-04-03, 19:54   #22
VictordeHolland

Quote:
Originally Posted by M344587487 View Post
Candidate space
  • K_0 is as up-to-date as possible at launch using external sources, while being as compact as possible. This is the only time external sources can be accounted for, short of hard forking (it's vital that every node maintain identical K_n).
  • K_n contains all of K_0, and any p shown to yield composite Mp on-chain (encoded in all blocks <n).
Wouldn't this mean that if somebody wanted to start mining or open up a wallet, he/she would need to download the entire mersenne.org database + James's additional data up to 2^32 (http://www.mersenne.ca/status/tf/0/0/0/0) + part of the factordb for low Mersenne candidates? I don't know how much that is exactly (even with compression, or saved as some sort of bitmap?).