1. Representing fractions
Another section discussed how to represent whole numbers, both positive and negative, using the integer data type and two's complement.
Now the problem is: how do we represent a fractional number such as 2 1/2, or 2.5 in decimal?
The CPU still offers the same number of bits: 8, 16, 32 and so on. But now some of those bits need to be used for the fractional part.
Each question mark below can be replaced with a binary digit, where some of the bit positions carry fractional weights. Like this:

    -8    4    2    1   1/2   1/4   1/8   1/16
     ?    ?    ?    ?    ?     ?     ?     ?

The leftmost weight in the top row is negative because two's complement is being used.
In this example, four bits are used for the sign and whole-number part (just like a two's complement integer), followed by four bits for fractions, whose weights halve each time. In effect, you can consider this number to have a binary point, like this:

    ? ? ? ? . ? ? ? ?
If we use this scheme, then 1111 1111 represents
-8 + 4 + 2 + 1 + 1/2 + 1/4 + 1/8 + 1/16 = -1/16, or -0.0625
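The weighted sum above can be checked with a short Python sketch. The helper `fixed_point_value` is illustrative, not from the text; it assumes a two's-complement bit pattern with a chosen number of fraction bits:

```python
def fixed_point_value(bits: str, frac_bits: int = 4) -> float:
    """Interpret a bit string as a two's-complement fixed-point number."""
    bits = bits.replace(" ", "")
    n = len(bits)
    raw = int(bits, 2)            # unsigned value of the whole pattern
    if bits[0] == "1":            # top bit carries a negative weight
        raw -= 1 << n             # apply the two's-complement correction
    return raw / (1 << frac_bits) # shift the binary point frac_bits places left

print(fixed_point_value("1111 1111"))  # -0.0625, i.e. -1/16
print(fixed_point_value("0111 1111"))  # 7.9375, i.e. +7 15/16
```

Dividing the raw integer by 2 to the power of the number of fraction bits is exactly what "placing a binary point" means: the same bit pattern is simply scaled down.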
The notional binary point in this scheme is fixed in place between the fourth and fifth bits. This is called a fixed point binary scheme. There is nothing special about that position: we could instead fix the binary point one place from the right-hand end, which gives only one bit for the fraction and a larger share for whole numbers.
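Moving the binary point trades fractional precision for whole-number range, as a small sketch shows. The `as_fixed` helper is hypothetical, reading the same 8-bit pattern with the point in different positions:

```python
def as_fixed(pattern: int, frac_bits: int) -> float:
    """Read an 8-bit two's-complement pattern with frac_bits fraction bits."""
    if pattern & 0x80:            # sign bit set: pattern is negative
        pattern -= 256
    return pattern / (1 << frac_bits)

pattern = 0b01100110              # one fixed bit pattern, two readings
print(as_fixed(pattern, 4))       # 4 fraction bits: 6.375
print(as_fixed(pattern, 1))       # 1 fraction bit:  51.0
```

The bits never change; only the agreed position of the point decides whether the pattern means 6.375 or 51.0.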
But notice the limited range a fixed point scheme provides. In the example above, the largest positive number is 0111 1111 (the -8 bit is not set):
4 + 2 + 1 + 1/2 + 1/4 + 1/8 + 1/16 = +7 15/16, or +7.9375
The most negative number is 1000 0000, which is just -8, and so the range for this scheme runs from -8 to +7.9375.
Not very wide at all compared to the signed integer range of -128 to +127.
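The trade-off can be tabulated directly. This sketch (the `eight_bit_range` helper is an assumption for illustration) computes the smallest and largest values of an 8-bit two's-complement format for a given number of fraction bits:

```python
def eight_bit_range(frac_bits: int):
    """Return (min, max) of an 8-bit two's-complement fixed-point format."""
    scale = 1 << frac_bits        # every value is divided by 2**frac_bits
    return -128 / scale, 127 / scale

print(eight_bit_range(4))  # (-8.0, 7.9375): four fraction bits
print(eight_bit_range(1))  # (-64.0, 63.5):  one fraction bit
print(eight_bit_range(0))  # (-128.0, 127.0): plain signed integer
```

All three formats hold exactly 256 distinct values; spending bits on fractions narrows the range in exchange for finer steps between neighbouring values.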
There is a better scheme in which the notional binary point is allowed to 'float' to suit the number being represented. This is called a floating point scheme.