Thread: 2-d binary representation 2009-01-18, 13:08   #6
only_human

"Gang aft agley"
Sep 2002

2×1,877 Posts

Ok, now that this thread has sat here for a couple of days, here are some things that I am a bit more tentative about; they are really excursions in using 2D binary patterns as a starting point for thinking about things.

Bits of equal magnitude are found on the diagonals from bottom left to top right. In a 2D square shaped region, there is one more bit available to each higher magnitude until the diagonals start decreasing in length:
1·2^0
1·2^0 + 2·2^1 + 1·2^2
1·2^0 + 2·2^1 + 3·2^2 + 2·2^3 + 1·2^4
etc.
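The diagonal coefficients above are easy to check by brute force. A minimal sketch (the function name `square_diagonal_coefficients` is mine, not from the post): cell (i, j) of the n×n grid holds a bit of magnitude 2^(i+j), so the coefficient of 2^k is just the number of cells with i + j = k.

```python
def square_diagonal_coefficients(n):
    # Count grid points on each bottom-left-to-top-right diagonal
    # of an n-by-n square; cell (i, j) has magnitude 2^(i+j).
    coeffs = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            coeffs[i + j] += 1
    return coeffs

print(square_diagonal_coefficients(2))  # [1, 2, 1]
print(square_diagonal_coefficients(3))  # [1, 2, 3, 2, 1]
```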

On 3D representations, absent what happens at the boundary where the coefficients begin decreasing, the coefficients appear to be triangular numbers (the third entry in each row of Pascal's triangle). With a cube that has 4 bits per side, the maximum representable value is 15^3 = 3375, with this many bits per magnitude:
1·2^0 + 3·2^1 + 6·2^2 + 10·2^3 + 12·2^4 + 12·2^5 + 10·2^6 + 6·2^7 + 3·2^8 + 1·2^9
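The same brute-force check works in 3D (again, `cube_diagonal_coefficients` is my hypothetical name): count cells of the n×n×n cube on each plane i + j + k = m, then total up coefficient·2^m. For n = 4 the coefficients rise as triangular numbers before the boundary truncates them, and the total is 15^3.

```python
def cube_diagonal_coefficients(n):
    # Cell (i, j, k) of the n-by-n-by-n cube holds a bit of magnitude 2^(i+j+k).
    coeffs = [0] * (3 * n - 2)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                coeffs[i + j + k] += 1
    return coeffs

c = cube_diagonal_coefficients(4)
print(c)  # [1, 3, 6, 10, 12, 12, 10, 6, 3, 1]
total = sum(coef * 2**m for m, coef in enumerate(c))
print(total, total == (2**4 - 1)**3)  # 3375 True
```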

Looking at http://en.wikipedia.org/wiki/Multiplication_algorithm, these 2D representations resemble the section labeled "Lattice Multiplication", except that each grid point represents a 1-bit by 1-bit binary multiply, so no space for a carry is needed at that digit in the binary version of Napier's bones. Of course, plenty of carries are needed when adding up the bits along the diagonals.
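Here is a sketch of that binary lattice multiply (my own illustrative code, not from the linked article): every grid point is a single AND of one bit of each operand, placed on the diagonal matching its magnitude; carries appear only when the diagonals are summed at the end.

```python
def lattice_multiply(a, b):
    """Binary Napier's-bones style multiply: each grid point is a 1-bit AND,
    so no cell ever needs carry space; carries happen only in the final sum."""
    abits = [(a >> i) & 1 for i in range(a.bit_length() or 1)]
    bbits = [(b >> j) & 1 for j in range(b.bit_length() or 1)]
    diag = [0] * (len(abits) + len(bbits) - 1)
    for i, ab in enumerate(abits):
        for j, bb in enumerate(bbits):
            diag[i + j] += ab & bb          # 1-bit product per grid point
    # add up the bits along the diagonals, propagating carries
    result, carry = 0, 0
    for k, d in enumerate(diag):
        s = d + carry
        result |= (s & 1) << k
        carry = s >> 1
    return result | (carry << len(diag))

print(lattice_multiply(13, 11))  # 143
```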

Looking at a 2D rectangle and the above link for Karatsuba multiplication:
Quote:
 For systems that need to multiply numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, long multiplication is too slow. These systems may employ Karatsuba multiplication, which was discovered in 1960 (published in 1962). The heart of Karatsuba's method lies in the observation that two-digit multiplication can be done with only three rather than the four multiplications classically required. Suppose we want to multiply two 2-digit numbers: x1x2 · y1y2:
 1. compute x1 · y1, call the result A
 2. compute x2 · y2, call the result B
 3. compute (x1 + x2) · (y1 + y2), call the result C
 4. compute C − A − B, call the result K; this number is equal to x1 · y2 + x2 · y1
 5. compute A · 100 + K · 10 + B
Step 5 above was of course written for base ten. Looking at a 2D pattern that I have been calling canonical (the result of a long-form multiply in which none of the partial products are added, but are simply placed in their magnitude positions as described in my first message in the thread), we can break the 2D pattern into 4 regions. The top-left region, in isolation, corresponds to result A. The bottom-right region corresponds to result B. Region D is x1 · y2, and region E is x2 · y1; D + E = K. The shifting of step 5 has already been done by the placement of the regions on the 2D grid. None of the optimization of Karatsuba multiplication has been done; I am merely pointing out what each region corresponds to.

Code:
 * | x1 x2
y1 | A  E
y2 | D  B
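To make the region correspondence concrete, here is a one-level Karatsuba sketch in base 2^half_bits rather than base 10 (the function and its `half_bits` parameter are mine, for illustration): x1, y1 are the high halves, x2, y2 the low halves, and the final shifts play the role of step 5's placement on the grid.

```python
def karatsuba_regions(x, y, half_bits):
    """One level of Karatsuba in base 2**half_bits.
    A = x1*y1 (top-left region), B = x2*y2 (bottom-right),
    K = D + E = x1*y2 + x2*y1 (the two cross regions)."""
    x1, x2 = x >> half_bits, x & ((1 << half_bits) - 1)
    y1, y2 = y >> half_bits, y & ((1 << half_bits) - 1)
    A = x1 * y1
    B = x2 * y2
    C = (x1 + x2) * (y1 + y2)   # Karatsuba's single extra multiply
    K = C - A - B               # cross terms, without computing D and E separately
    # step 5, with shifts instead of powers of ten
    return (A << (2 * half_bits)) + (K << half_bits) + B

print(karatsuba_regions(219, 166, 4), 219 * 166)  # 36354 36354
```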
Another thing I am thinking: since no adds or carries are done at the point these 2D representations are plopped down on paper, maybe the representation is useful for thinking about coefficients of polynomial products instead of mere numerical products.
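That connection can be sketched directly (illustrative code, my naming): the carry-free diagonal sums are exactly the coefficients of the product of the two bit polynomials, and evaluating that polynomial at x = 2 recovers the numeric product.

```python
def poly_mul(a, b):
    # Multiply two coefficient lists without carries:
    # these are exactly the lattice's diagonal sums.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# bits of 5 (101) and 7 (111) as polynomial coefficients, lowest order first
p = poly_mul([1, 0, 1], [1, 1, 1])
print(p)                                        # [1, 1, 2, 1, 1]
print(sum(c * 2**k for k, c in enumerate(p)))   # 35, i.e. 5 * 7
```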

Last fiddled with by only_human on 2009-01-18 at 13:51 Reason: make more concise  