COA Unit - II Notes
Unit-II
Data Representation: Signed number representation, fixed and floating point representations, Character
representation.
Computer Arithmetic: Integer addition and subtraction, Multiplication – shift and add, Booth multiplication,
Signed operand multiplication, Division, Floating point arithmetic.
-----------------------------------------------------------------------------------------------------------------------------------
Data Representation:
Number Systems
Human beings use the decimal (base 10) number system because we have 10 fingers (digits 0, 1, 2, up to 9).
Computers use binary (base 2) number system, as they are made from binary digital components (known as
transistors) operating in two states - on and off.
In computing, we also use hexadecimal (base 16) or octal (base 8) number systems, as a compact form for
representing binary numbers.
Decimal (Base 10) Number System
Decimal number system has ten symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, called digits.
It uses positional notation. That is, the least-significant digit (right-most digit) is of the order of 10^0 (units or
ones), the second right-most digit is of the order of 10^1 (tens), the third right-most digit is of the order
of 10^2 (hundreds), and so on, where ^ denotes exponent. For example,
735 = 700 + 30 + 5 = 7×10^2 + 3×10^1 + 5×10^0
Binary (Base 2) Number System
Binary number system has two symbols: 0 and 1, called bits. It is also a positional notation, for example,
10110B
= 10000B + 0000B + 100B + 10B + 0B = 1×2^4 + 0×2^3 + 1×2^2 + 1×2^1 + 0×2^0
A binary digit is called a bit. Eight bits are called a byte (why an 8-bit unit? Probably because 8 = 2^3).
Hexadecimal (Base 16) Number System
Hexadecimal number system uses 16 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F, called hex digits.
It is a positional notation, for example,
A3EH = A00H + 30H + EH = 10×16^2 + 3×16^1 + 14×16^0
We shall denote a hexadecimal number (in short, hex) with a suffix H. Some programming languages denote
hex numbers with prefix 0x or 0X (e.g., 0x1A3C5F), or prefix x with hex digits quoted (e.g., x'C3A4D98B').
Each hexadecimal digit is also called a hex digit. Most programming languages accept lowercase 'a' to 'f' as well
as uppercase 'A' to 'F'.
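These positional expansions can be checked quickly in Python, used here purely as a calculator (int(), bin() and hex() are standard built-ins):

# Checking the positional-notation examples above.
print(int("10110", 2))      # binary 10110B  -> 22
print(int("A3E", 16))       # hex A3EH       -> 2622 (= 10*16^2 + 3*16 + 14)
print(bin(22), hex(2622))   # back again: '0b10110', '0xa3e'
print(7*10**2 + 3*10 + 5)   # 735 from its decimal expansion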
n-bit Sign Integers in Sign-Magnitude Representation
In sign-magnitude representation, the most significant bit (msb) is the sign bit and the remaining n-1 bits hold the magnitude of the integer. This scheme has two drawbacks:
There are two representations (0000 0000B and 1000 0000B) for the number zero, which could lead to inefficiency and confusion.
Positive and negative integers need to be processed separately.
n-bit Sign Integers in 1's Complement Representation
In 1's complement representation:
Again, the most significant bit (msb) is the sign bit, with value of 0 representing positive integers and 1
representing negative integers.
The remaining n-1 bits represent the magnitude of the integer, as follows:
for positive integers, the absolute value of the integer is equal to "the magnitude of the (n-1)-bit binary pattern".
for negative integers, the absolute value of the integer is equal to "the magnitude of the complement (inverse) of
the (n-1)-bit binary pattern" (hence called 1's complement).
For example, consider the 8-bit pattern 1000 0001B: the sign bit is 1, so the integer is negative.
The absolute value is the complement of 000 0001B, i.e., 111 1110B = 126D.
Hence, the integer is -126D.
n-bit Sign Integers in 2's Complement Representation
In 2's complement representation:
Again, the most significant bit (msb) is the sign bit, with value of 0 representing positive integers and 1 representing negative integers.
The remaining n-1 bits represent the magnitude of the integer, as follows:
for positive integers, the absolute value of the integer is equal to "the magnitude of the (n-1)-bit binary pattern".
for negative integers, the absolute value of the integer is equal to "the magnitude of the complement of the (n-1)-bit binary pattern plus one" (hence called 2's complement).
For example, for the 8-bit pattern 1000 0001B: the absolute value is the complement of 000 0001B plus 1, i.e., 111 1110B + 1B = 127D.
Hence, the integer is -127D.
For the 8-bit pattern 1111 1111B: the absolute value is the complement of 111 1111B plus 1, i.e., 000 0000B + 1B = 1D.
Hence, the integer is -1D.
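As a quick check of the two complement schemes, the following minimal Python sketch interprets an 8-bit pattern under the 1's-complement and 2's-complement rules described above; the function names are illustrative only:

# Interpreting a bit pattern under 1's- and 2's-complement rules.
def ones_complement_value(bits: str) -> int:
    """Signed value of a bit string in 1's complement."""
    if bits[0] == '0':                       # positive: plain magnitude
        return int(bits, 2)
    inverted = ''.join('1' if b == '0' else '0' for b in bits[1:])
    return -int(inverted, 2)                 # negative: -(complement of magnitude)

def twos_complement_value(bits: str) -> int:
    """Signed value of a bit string in 2's complement."""
    n = len(bits)
    value = int(bits, 2)
    return value - (1 << n) if bits[0] == '1' else value

print(ones_complement_value("10000001"))   # -126
print(twos_complement_value("10000001"))   # -127
print(twos_complement_value("11111111"))   # -1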
Floating-Point Number Representation
• A floating-point number (or real number) can represent a very large (1.23×10^88) or a very small (1.23×10^-88) value.
• It could also represent a very large negative number (-1.23×10^88) and a very small negative number (-1.23×10^-88), as well as zero.
A floating-point number is typically expressed in scientific notation, with a fraction (F) and an exponent (E) of a certain radix (r), in the form of F×r^E.
Decimal numbers use radix of 10 (F×10^E); while binary numbers use radix of 2 (F×2^E).
For example, the number 55.66 can be represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on.
The fractional part can be normalized. In the normalized form, there is only a single non-zero digit before the
radix point.
For example, decimal number 123.4567 can be normalized as 1.234567×10^2;
binary number 1010.1011B can be normalized as 1.0101011B×2^3.
• IEEE-754 32-bit Single-Precision Floating-Point Numbers
• In 32-bit single-precision floating-point representation:
• The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
• The following 8 bits represent exponent (E).
• The remaining 23 bits represent the fraction (F).
Normalized Form
• Let's illustrate with an example, suppose that the 32-bit pattern is 1 1000 0001 011 0000 0000 0000
0000 0000, with:
• S=1
• E = 1000 0001
• F = 011 0000 0000 0000 0000 0000
• In the normalized form, the actual fraction is normalized with an implicit leading 1 in the form of 1.F. In
this example, the actual fraction is 1.011 0000 0000 0000 0000 0000 = 1 + 1×2^-2 + 1×2^-3 = 1.375D.
• The sign bit represents the sign of the number, with S=0 for positive and S=1 for negative number. In
this example with S=1, this is a negative number, i.e., -1.375D.
• In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This is because we
need to represent both positive and negative exponent.
• With an 8-bit E, ranging from 0 to 255, the excess-127 scheme could provide actual exponent of -127 to
128. In this example, E-127=129-127=2D.
• Hence, the number represented is -1.375×2^2=-5.5D.
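The decoding above can be verified with a short Python sketch using the standard struct module; the bit pattern is the one from the example, and the last line simply re-applies the formula (-1)^S × 1.F × 2^(E-127):

import struct

# Decode the example pattern 1 10000001 01100000000000000000000 (single precision).
bits = 0b1_10000001_01100000000000000000000
value = struct.unpack('>f', bits.to_bytes(4, 'big'))[0]
print(value)                                       # -5.5

# The same value reconstructed from the fields S, E, F:
S, E, F = 1, 0b10000001, 0b01100000000000000000000
print((-1)**S * (1 + F / 2**23) * 2**(E - 127))    # -5.5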
• IEEE-754 64-bit Double-Precision Floating-Point Numbers
• The representation scheme for 64-bit double-precision is similar to the 32-bit single-precision:
• The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative numbers.
• The following 11 bits represent the exponent (E), using an excess-1023 (bias-1023) scheme.
• The remaining 52 bits represent the fraction (F).
Character Representation:
In computer memory, characters are "encoded" (or "represented") using a chosen "character encoding scheme"
(aka "character set", "charset", "character map", or "code page").
For example, in ASCII (as well as Latin1, Unicode, and many other character sets):
code numbers 65D (41H) to 90D (5AH) represent 'A' to 'Z', respectively.
code numbers 97D (61H) to 122D (7AH) represent 'a' to 'z', respectively.
code numbers 48D (30H) to 57D (39H) represent '0' to '9', respectively.
It is important to note that the representation scheme must be known before a binary pattern can be interpreted.
E.g., the 8-bit pattern "0100 0010B" could represent anything under the sun, known only to the person who encoded it.
The most commonly-used character encoding schemes are: 7-bit ASCII (ISO/IEC 646) and 8-bit Latin-x (ISO/IEC 8859-x) for Western European characters, and Unicode (ISO/IEC 10646) for internationalization (i18n).
A 7-bit encoding scheme (such as ASCII) can represent 128 characters and symbols. An 8-bit character
encoding scheme (such as Latin-x) can represent 256 characters and symbols; whereas a 16-bit encoding
scheme (such as Unicode UCS-2) can represent 65,536 characters and symbols.
5.1 7-bit ASCII Code (aka US-ASCII, ISO/IEC 646, ITU-T T.50)
ASCII (American Standard Code for Information Interchange) is one of the earlier character coding
schemes.
ASCII was originally a 7-bit code. It has been extended to 8 bits to better utilize the 8-bit computer memory organization. (The 8th bit was originally used for parity checking in early computers.)
Code numbers 32D (20H) to 126D (7EH) are printable (displayable) characters, tabulated (by hexadecimal code number) as follows:
Hex 0 1 2 3 4 5 6 7 8 9 A B C D E F
2 SP ! " # $ % & ' ( ) * + , - . /
3 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
4 @ A B C D E F G H I J K L M N O
5 P Q R S T U V W X Y Z [ \ ] ^ _
6 ` a b c d e f g h i j k l m n o
7 p q r s t u v w x y z { | } ~
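The code numbers in the table can be confirmed with Python's built-in ord() and chr():

# ASCII code numbers, matching the table above.
print(ord('A'), hex(ord('A')))   # 65 0x41
print(ord('a'), hex(ord('a')))   # 97 0x61
print(ord('0'), hex(ord('0')))   # 48 0x30
print(chr(0x42))                 # 'B' -- the pattern 0100 0010B interpreted as ASCII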
Addition:
The binary number system uses only two digits, 0 and 1, which makes addition simple. There are four basic rules for binary addition, listed below.
0+0=0
0+1=1
1+0=1
1+1=10
The first three results fit in a single bit; only 1 + 1 produces a two-bit result (a sum bit of 0 and a carry of 1). Column-by-column binary addition is applied below in detail. Let us consider the addition of 11101 and 11011.
The sum is carried out by the following steps, starting from the right-most column:
1 + 1 = 10: sum bit 0 with a carry of 1.
1 (carry) + 0 + 1 = 10: sum bit 0 with a carry of 1.
1 (carry) + 1 + 0 = 10: sum bit 0 with a carry of 1.
1 (carry) + 1 + 1 = 11: sum bit 1 with a carry of 1.
1 (carry) + 1 + 1 = 11: sum bit 1 with a carry of 1, which becomes the final left-most bit.
Hence 11101 + 11011 = 111000 (29 + 27 = 56).
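The same column-by-column procedure, with an explicit carry bit, can be sketched in Python as follows (add_binary is an illustrative helper name, not a library function):

# Column-by-column binary addition, tracking the carry bit.
def add_binary(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):   # rightmost column first
        s = int(x) + int(y) + carry              # 0, 1, 2 or 3
        result.append(str(s % 2))                # sum bit of this column
        carry = s // 2                           # carry into the next column
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary("11101", "11011"))   # 111000 (29 + 27 = 56)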
Unsigned multiplication:
Hardware Implementation:
The following components are required for the hardware implementation of the multiplication algorithm:
1. Registers:
Two Registers B and Q are used to store multiplicand and multiplier respectively.
Register A is used to store partial product during multiplication.
Sequence Counter register (SC) is used to store number of bits in the multiplier.
2. Flip Flop:
To store the sign bits of the registers, we require three flip-flops (As, Bs and Qs).
Flip-flop E is used to store the carry bit generated during partial product addition.
3. Complement and Parallel adder:
This hardware unit is used in calculating the partial product, i.e., it performs the addition required.
Flowchart of Multiplication:
1. Initially multiplicand is stored in B register and multiplier is stored in Q register.
2. The signs of registers B (Bs) and Q (Qs) are compared using XOR functionality (i.e., if both signs are
alike, the output of the XOR operation is 0, otherwise 1) and the output is stored in As (the sign of the A register).
Note: Initially 0 is assigned to register A and E flip flop. Sequence counter is initialized with value n, n
is the number of bits in the Multiplier.
3. Now the least significant bit of the multiplier is checked. If it is 1, the content of register A is added to the
multiplicand (register B) and the result is stored in register A, with the carry bit in flip-flop E. The content of E A
Q is then shifted right by one position, i.e., the content of E is shifted into the most significant bit (MSB) of A and the
least significant bit of A is shifted into the most significant bit of Q.
4. If Qn = 0, only shift right operation on content of E A Q is performed in a similar fashion.
5. Content of Sequence counter is decremented by 1.
6. Check the content of the sequence counter (SC); if it is 0, end the process (the final product is present in
registers A and Q), else repeat from step 3. (A software sketch of this shift-and-add procedure is given after the example below.)
Example:
Multiplicand = 10111
Multiplier = 10011
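A minimal Python sketch of the shift-and-add procedure described above, applied to this example, is given below; the variable names A, Q, E and B mirror the register description, the magnitudes are treated as unsigned, and the function name is illustrative:

# Shift-and-add multiplication of unsigned magnitudes, mirroring the
# register description above: A (partial product), Q (multiplier),
# E (carry flip-flop), B (multiplicand), SC (sequence counter).
def shift_and_add_multiply(multiplicand: str, multiplier: str) -> str:
    n = len(multiplier)
    B = int(multiplicand, 2)
    A, Q, E = 0, int(multiplier, 2), 0
    for _ in range(n):                       # SC counts down from n to 0
        if Q & 1:                            # Qn = 1: add multiplicand to A
            A += B
            E = A >> n                       # carry out of the n-bit adder
            A &= (1 << n) - 1
        # shift E A Q right by one position
        Q = (Q >> 1) | ((A & 1) << (n - 1))  # lsb of A moves into msb of Q
        A = (A >> 1) | (E << (n - 1))        # E moves into msb of A
        E = 0
    return format((A << n) | Q, 'b')         # product is held in A and Q

print(shift_and_add_multiply("10111", "10011"))   # 110110101 (23 * 19 = 437)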
Booth's Multiplication Algorithm:
Booth's algorithm gives a procedure for multiplying binary integers in signed 2's complement representation in an
efficient way, i.e., fewer additions/subtractions are required. It operates on the fact that strings of 0's in the
multiplier require no addition but just shifting, and a string of 1's in the multiplier from bit weight 2^k down to weight
2^m can be treated as 2^(k+1) - 2^m.
Hardware Implementation of Booth's Algorithm – The hardware implementation of Booth's algorithm requires
the register configuration shown in the figure below.
Booth’s Hardware implementation:
We name the registers A, B and Q as AC, BR and QR respectively. Qn designates the least significant bit of the
multiplier in register QR. An extra flip-flop Qn+1 is appended to QR to facilitate a double bit inspection of the
multiplier. The flowchart for Booth's algorithm is shown below.
AC and the appended bit Qn+1 are initially cleared to 0 and the sequence counter SC is set to a number n equal to the
number of bits in the multiplier. The two bits of the multiplier in Qn and Qn+1 are inspected. If the two bits are
equal to 10, it means that the first 1 in a string of 1's has been encountered. This requires subtraction of the
multiplicand from the partial product in AC. If the two bits are equal to 01, it means that the first 0 in a string of
0's has been encountered. This requires the addition of the multiplicand to the partial product in AC.
When the two bits are equal, the partial product does not change. An overflow cannot occur because the
addition and subtraction of the multiplicand follow each other. As a consequence, the two numbers that are added
always have opposite signs, a condition that excludes an overflow. The next step is to shift right the partial
product and the multiplier (including Qn+1). This is an arithmetic shift right (ashr) operation which shifts AC and QR
to the right and leaves the sign bit in AC unchanged. The sequence counter is decremented and the
computational loop is repeated n times.
Example – A numerical example of Booth's algorithm is shown below for n = 4. It shows the step-by-step
multiplication of -5 and -7.
Product = AC QR = 0010 0011 = 35
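A minimal Python sketch of Booth's procedure, using the same register names (AC, QR, Qn+1, BR, SC) and reproducing the -5 × -7 example, is given below; it is an illustrative software model of the algorithm, not the hardware itself:

# Booth's multiplication of two n-bit 2's-complement integers.
def booth_multiply(multiplicand: int, multiplier: int, n: int) -> int:
    mask = (1 << n) - 1
    BR = multiplicand & mask                  # BR holds the multiplicand
    AC, QR, Qn1 = 0, multiplier & mask, 0     # AC and Qn+1 start at 0
    for _ in range(n):                        # SC counts down from n
        pair = (QR & 1, Qn1)
        if pair == (1, 0):                    # first 1 of a string: AC = AC - BR
            AC = (AC - BR) & mask
        elif pair == (0, 1):                  # first 0 after a string: AC = AC + BR
            AC = (AC + BR) & mask
        # arithmetic shift right of AC, QR, Qn+1 (sign bit of AC is kept)
        Qn1 = QR & 1
        QR = (QR >> 1) | ((AC & 1) << (n - 1))
        AC = (AC >> 1) | (AC & (1 << (n - 1)))
    product = (AC << n) | QR                  # 2n-bit product in AC,QR
    return product - (1 << 2 * n) if product >> (2 * n - 1) else product

print(booth_multiply(-5, -7, 4))   # 35  (0010 0011)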
Division algorithms:
Examples:
Remember to restore the value of A (by adding M back) if the most significant bit of A is 1 at the end of the process. After that, register Q contains the quotient, i.e., 3, and register A contains the remainder, 2.
Flow chart:
Example:
Dividend =11
Divisor =3
-M =11101
N M A Q Action
4 00011 00000 1011 Start
00001 011_ Left shift AQ
11110 011_ A=A-M
3 11110 0110 Q[0]=0
11100 110_ Left shift AQ
11111 110_ A=A+M
2 11111 1100 Q[0]=0
11111 100_ Left Shift AQ
00010 100_ A=A+M
1 00010 1001 Q[0]=1
00101 001_ Left Shift AQ
00010 001_ A=A-M
0 00010 0011 Q[0]=1
Quotient = 3 (Q)
Remainder = 2 (A)
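The trace above follows the non-restoring division procedure. A Python sketch of that procedure, with the same register widths as the example (n = 4 quotient bits, (n+1)-bit A and M), is shown below; the function name is illustrative:

# Non-restoring division following the A/Q/M trace above
# (dividend 11, divisor 3 -> quotient 3, remainder 2).
def non_restoring_divide(dividend: int, divisor: int, n: int):
    mask_a = (1 << (n + 1)) - 1                     # A and M are n+1 bits wide
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        negative = bool(A >> n)                     # sign bit of A before this step
        A = ((A << 1) | (Q >> (n - 1))) & mask_a    # left shift AQ
        Q = (Q << 1) & ((1 << n) - 1)
        A = (A + M) & mask_a if negative else (A - M) & mask_a
        if not (A >> n):                            # A non-negative: Q[0] = 1
            Q |= 1
    if A >> n:                                      # final restore if A ended negative
        A = (A + M) & mask_a
    return Q, A                                     # quotient in Q, remainder in A

print(non_restoring_divide(11, 3, 4))               # (3, 2)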
Floating Point Arithmetic:
Addition: To add two floating-point numbers, the exponents must first be made equal. We get the difference of the exponents to know how much shifting is required; for the exponent fields used in the example below, (10000010 – 01111110)2 = (4)10, so the mantissa of the smaller number is shifted right by 4 units before the mantissas are added.
Subtraction: Subtraction is similar to addition, with some differences: we subtract the mantissas instead of adding them, and the sign bit takes the sign of the greater number.
For example, consider evaluating 9.75 – 0.5625. First, we find the difference of the exponents to know how much shifting is required:
(10000010 – 01111110)2 = (4)10
Now, we shift the mantissa of the lesser number right by 4 units.
Mantissa of 0.5625 = 1.00100000000000000000000
(note that the 1 before the radix point is implicit in the 32-bit representation)
Shifting right by 4 units: 0.00010010000000000000000
Mantissa of 9.75 = 1.00111000000000000000000
Subtracting the aligned mantissas: 1.00111000... – 0.00010010... = 1.00100110...; keeping the exponent of the larger number (2^3), the result is 1.00100110B×2^3 = 1001.0011B = 9.1875D, i.e., 9.75 – 0.5625 = 9.1875.
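The steps above can be reproduced in Python by extracting the IEEE-754 fields with the standard struct module; fields() is an illustrative helper, and the arithmetic mirrors the shift and subtract on the aligned mantissas:

import struct

def fields(x: float):
    """Return (sign, biased exponent, 23-bit fraction) of x in single precision."""
    bits = int.from_bytes(struct.pack('>f', x), 'big')
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

s1, e1, f1 = fields(9.75)      # e1 = 130 = 10000010B, mantissa 1.00111000...
s2, e2, f2 = fields(0.5625)    # e2 = 126 = 01111110B, mantissa 1.00100000...
print(e1 - e2)                 # 4 -> shift the smaller mantissa right by 4 units

m1 = (1 << 23) | f1                   # implicit leading 1 made explicit
m2 = ((1 << 23) | f2) >> (e1 - e2)    # align the smaller operand
result = (m1 - m2) * 2.0 ** (e1 - 127 - 23)
print(result)                  # 9.1875 = 9.75 - 0.5625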