Convert Binary to BCD code
1. A 16-bit register in a computer contains 10110011 11000100. What do its contents represent if it contains
a) a 4-digit decimal number in the BCD (binary-coded decimal) code? Here's what I have: first I convert the binary to decimal and get 46020. Then I use the BCD table in the book to convert it from decimal to BCD. Answer = 0100 0110 0000 0001 0000. Please check if I did it wrong. Thanks a lot.
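For what it's worth, here is a quick Python sketch to sanity-check that digit-by-digit conversion, assuming the register really did hold a plain binary number (which, as discussed below, the question may not intend):

```python
value = 0b1011001111000100  # the 16-bit register contents, read as plain binary
print(value)  # 46020

# BCD encodes each decimal digit as its own 4-bit group.
bcd = ' '.join(format(int(d), '04b') for d in str(value))
print(bcd)  # 0100 0110 0000 0010 0000
```

Note that decimal 2 encodes as 0010 in BCD, so the third-from-last group can be double-checked this way.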
Does anyone still use BCD? Do they still teach this stuff?
My rusty recollections tell me you have an illegal BCD number as input. I suggest googling BCD or "binary coded decimal" and figuring it out. 
I'd say this sounds like homework, but I have no idea what class teaches BCD anymore.

Heh, the Intel x87 FPU supported loading and storing an 80-bit packed decimal format (10 bytes: 1 sign byte, plus 18 digits packed into the remaining 9 bytes), and consequently so does any modern x86 processor to this day. The internal register representation is the standard one, though. Well, standard Intel "extended-precision double".
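If anyone's curious, here is a rough Python sketch of that 10-byte layout as I recall the FBSTP format: 18 BCD digits packed two per byte, least-significant first, with the sign in the top bit of the last byte (the other bits of that byte left zero here):

```python
def x87_packed_bcd(n: int) -> bytes:
    """Encode an int the way x87 FBSTP stores it: 18 BCD digits
    packed low-to-high in bytes 0..8, sign bit in byte 9."""
    sign = 0x80 if n < 0 else 0x00
    digits = abs(n)
    out = bytearray(10)
    for i in range(9):
        # Two decimal digits per byte, lower digit in the low nibble.
        out[i] = (digits % 10) | ((digits // 10 % 10) << 4)
        digits //= 100
    out[9] = sign
    return bytes(out)

print(x87_packed_bcd(46020).hex())  # 20600400000000000000
print(x87_packed_bcd(-1).hex())     # 01000000000000000080
```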

[QUOTE=tinhnho]1. A 16-bit register in a computer contains 10110011 11000100. What do its contents represent if it contains
a) a 4-digit decimal number in the BCD (binary-coded decimal) code[/QUOTE]

0 (decimal) = 0000 (BCD)
1 (decimal) = 0001 (BCD)
2 (decimal) = 0010 (BCD)
3 (decimal) = 0011 (BCD)
4 (decimal) = 0100 (BCD)
5 (decimal) = 0101 (BCD)
6 (decimal) = 0110 (BCD)
7 (decimal) = 0111 (BCD)
8 (decimal) = 1000 (BCD)
9 (decimal) = 1001 (BCD)

The register content 1011 0011 1100 0100 is not itself a 4-digit BCD number because neither 1011 nor 1100 is a BCD digit.

BCD also interpreted the other six 4-bit sequences as plus or minus signs: 1011 and 1101 were minus signs, while 1010, 1100, 1110 and 1111 were plus signs. (Yes, two codes for the minus sign and four codes for the plus sign. Why not three codes for each sign? *sigh* It's a long story.)

So, 1011 0011 1100 0100 could be interpreted as BCD "-3+4", but that's not a 4-digit decimal number.

And yes, packed decimal format is derived from BCD codes, even the plus/minus signs (in IBM's packed decimal, anyway).
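To make that sign-code reading concrete, a small Python sketch that decodes each 4-bit group using the table above:

```python
# Digit/sign table from the post above: B and D nibbles are minus signs,
# A, C, E, F are plus signs; 0-9 are ordinary BCD digits.
SIGNS = {0b1010: '+', 0b1011: '-', 0b1100: '+',
         0b1101: '-', 0b1110: '+', 0b1111: '+'}

def decode_nibble(n):
    return SIGNS.get(n, str(n))

reg = [0b1011, 0b0011, 0b1100, 0b0100]  # the register, split into nibbles
print(''.join(decode_nibble(n) for n in reg))  # -3+4
```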
Cheesehead's answer is partly correct but incomplete (as he left out the last 6 "digits"). IBM's hexadecimal BCD code has 16 characters, 0-9 and A-F.
Here are the missing characters:

A (decimal) = 1010 (BCD)
B (decimal) = 1011 (BCD)
C (decimal) = 1100 (BCD)
D (decimal) = 1101 (BCD)
E (decimal) = 1110 (BCD)
F (decimal) = 1111 (BCD)

Consequently, 1011 0011 1100 0100 = B 3 C 4 and is considered by IBM mainframe programmers as a 4-digit hexadecimal number which can be added, subtracted, multiplied, etc. like any other number. This is where, I think, the expression "A + 1 = B" came from (e.g. 1010 + 0001 = 1011, or A + 1 = B).
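Reading the four nibbles as hex digits is indeed a one-liner (Python here just to illustrate; the nibble-to-hex-digit mapping is the same everywhere):

```python
reg = 0b1011001111000100
print(format(reg, 'X'))        # B3C4
print(format(0xA + 0x1, 'X'))  # B  ("A + 1 = B")
```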
[QUOTE=RMAC9.5]Cheesehead's answer is partly correct but incomplete (as he left out the last 6 "digits"). IBM's hexadecimal BCD code has 16 characters, 0-9 and A-F.
Here are the missing characters:

A (decimal) = 1010 (BCD)
B (decimal) = 1011 (BCD)
C (decimal) = 1100 (BCD)
D (decimal) = 1101 (BCD)
E (decimal) = 1110 (BCD)
F (decimal) = 1111 (BCD)

Consequently, 1011 0011 1100 0100 = B 3 C 4 and is considered by IBM mainframe programmers as a 4-digit hexadecimal number which can be added, subtracted, multiplied, etc. like any other number. This is where, I think, the expression "A + 1 = B" came from (e.g. 1010 + 0001 = 1011, or A + 1 = B).[/QUOTE]

Not disputing what you say, but why would there be binary coded [u]decimal[/u] to encode hexadecimal?
RMAC9.5,
I started programming computers in 1963. At that time (which was before IBM announced its System/360), many mainframes (IBM and others) used word lengths that were a multiple of 3 bits, the number system most commonly used (other than binary and decimal) was octal (base 8), and character codes were 6 bits long, not 8. Hexadecimal was [u]not[/u] commonly used by programmers (except possibly on a few obscure non-IBM systems) at that time. In its System/360 introductory documents, IBM had to include explanations of the hexadecimal system and how it was related to binary, octal, and decimal, because few programmers were familiar with it.

[QUOTE=RMAC9.5]Cheesehead's answer is partly correct but incomplete (as he left out the last 6 "digits").[/quote]
No, I left out no digits. As TravisT pointed out, the [b]D[/b] in BCD stands for [b]Decimal[/b], not "Hexadecimal".

[quote]IBM's hexadecimal BCD code has 16 characters, 0-9 and A-F.[/quote]
Correction: The hexadecimal (base-16) numbering system, which was not invented by IBM, has 16 digits. The hexadecimal digits corresponding to decimal values 10, 11, 12, 13, 14, and 15 are usually written as A, B, C, D, E, and F, respectively. In the Binary-Coded Decimal system, the four-bit binary values whose decimal equivalents are 10, 11, 12, 13, 14, and 15 are interpreted as sign (+ or -) codes, not numeric digits.

tinhnho quoted "1. A 16-bit register in a computer contains 10110011 11000100. What does its contents represent if it contains a) a 4-digit decimal number in the BCD (binary coded decimal) code". It is possible that the writer of that question made a mistake: the reference to BCD doesn't really match the register value presented. I've worded my previous responses under the assumption that the writer meant what s/he said, but that might not actually be true.

Also see [url="http://en.wikipedia.org/wiki/Binary-coded_decimal"]http://en.wikipedia.org/wiki/Binary-coded_decimal[/url]
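As a footnote to the base-system history, here is the same register value written out in the bases mentioned above (a quick Python illustration):

```python
n = 0b1011001111000100
# Octal was the working base on 3-bit-multiple word machines;
# hex became common with System/360's 8-bit bytes.
print(oct(n), hex(n), n)  # 0o131704 0xb3c4 46020
```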
Cheesehead, as I was composing my reply, a little voice inside my head kept whispering "you are describing EBCDIC (extended binary coded decimal interchange code) hexadecimal numbers; maybe they are different than BCD decimal numbers" and "Cheesehead sounds like he might have personal experience (i.e. knows what he is talking about)". Thanks for the history lesson; my IBM mainframe experience dates from 1975 on System/370 machines, and I have sometimes wondered what EBCDIC was extended from.
