Number Systems: How Does Counting in Binary Work?

In the previous part we answered the question: why do computers use binary numbers? This part aims to show how a computer counts in binary.
As we all know, the decimal (or denary) system of counting uses ten digits represented by the symbols 0 to 9. When we count up to a number larger than 9, we introduce another column of numbers to represent groups of 10. Then, when we reach 99, we introduce another column of numbers to represent 100s, and so we go on.
Why exactly do we use 10 digits? It may be because the typical person has 10 fingers, which form a handy 10-digit abacus conveniently carried everywhere. There's nothing particularly special about counting with 10 digits, though; we could use 20 digits or 5 digits and perform all of mathematics just the same. Using a system of 10s is simply a convenient convention for humans.
In the decimal system, the value of the columns increases in powers of 10 as you move to the left. The units column is the same as 10 to the power of zero (10^0) which is 1. The 10s column is the same as 10 to the power of 1 (10^1), which is 10. The hundreds column is 10 to the power of 2 (10^2), which is 100 and so on. The number 10 that we keep raising to a power is called the base of the counting system. The value of the column, that is 10s, 100s etc, is the place value.
To find out the value of a digit in any particular column, we take the digit and multiply it by the place value of the column. So if we have 8 in the 100s column, then we multiply 8 x 10^2 = 800.
If we wanted to count in a system based on 5s then we label the columns in powers of 5 instead of powers of 10. The units column is 5 to the power of 0 (which is 1). Then next we have 5 to the power of 1, which is 5, then after that we have 5 to the power of 2, which is 25, and then 5 to the power 3 which is 125 and so on.
A system of counting based on 10s means that the highest digit that can appear in a column is 9. If we count in a base of 5s then the highest digit that can appear in a column is 4.
In a system of counting based on 5s, to find out what a number means in our more conventional 10s based counting system, we take the digit in each column, multiply it by the place value and then sum the results of all of those multiplications together.
So how much is the number 4,321 in the 5s based counting system? Well that's:
(4 x 5^3) + (3 x 5^2) + (2 x 5^1) + (1 x 5^0)
That equals:
(4 x 125) + ( 3 x 25 ) + (2 x 5) + (1 x 1)
which equals:
( 500 ) + ( 75 ) + ( 10 ) + ( 1 )
So the answer is: 586.
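The column-by-column arithmetic above can be sketched in a few lines of Python. The helper name `base5_to_decimal` is made up for illustration; it simply repeats the place-value sum we just did by hand:

```python
# Convert a string of base-5 digits to its decimal value by
# multiplying each digit by the place value of its column.
def base5_to_decimal(digits):
    total = 0
    for column, digit in enumerate(reversed(digits)):
        total += int(digit) * 5 ** column  # place value is 5^column
    return total

print(base5_to_decimal("4321"))  # -> 586
```

Python's built-in `int("4321", 5)` performs the same conversion directly.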
A system of binary counting has a base of 2. In the base of 5 above, the only digits we could work with are those less than 5 i.e. the digits 0 - 4. With a base of 2, the only digits we use are those less than 2 i.e. we have only the digits 0 and 1 available. However, that's just fine, we can still count and do any calculations just the same as in any other base.
We label the columns with a place value exactly as previously, but using powers of 2: the units column is 2^0 (which is 1), then 2^1 (which is 2), then 2^2 (which is 4), then 2^3 (which is 8), and so on.
What does the binary number 1010 represent? Well, that's equivalent to:
(1 x 2^3) + ( 0 x 2^2 ) + ( 1 x 2^1 ) + (0 x 2^0)
Which is:
(1 x 8) + ( 0 x 4 ) + ( 1 x 2 ) + ( 0 x 1 )
which is:
( 8 ) + ( 0 ) + ( 2 ) + ( 0 )
Hence the final answer is ten.
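The same place-value sum works in Python for base 2 as well, and the built-in `int()` can check our result:

```python
# The place-value sum for binary: each column is a power of 2.
value = sum(int(bit) * 2 ** col
            for col, bit in enumerate(reversed("1010")))
print(value)  # -> 10

# Python's built-in int() can parse any base from 2 to 36 directly.
print(int("1010", 2))  # -> 10
```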
Bits and Words
In computer science, a single binary digit is known as a bit. This name is a portmanteau, which combines the first two letters of the word binary with the final letter of the word digit to form bit.
You may have heard the terms 8-bit, 16-bit, 32-bit etc. These are references to groups of binary digits. An 8-bit number means a binary number that is composed of no more than 8 binary digits. 16-bit refers to a binary number composed of no more than 16 binary digits and so on. An 8-bit grouping of binary digits can represent numbers between 0 and 255. A 16-bit grouping can handle numbers between 0 and 65,535. A 32-bit grouping can handle numbers between 0 and 4,294,967,295.
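The ranges above follow directly from the place values: an n-bit grouping with every column set to 1 holds 2^n - 1. A quick check in Python:

```python
# The largest unsigned value an n-bit grouping can hold is 2**n - 1,
# because all n columns are filled with the digit 1.
for bits in (8, 16, 32):
    print(bits, (1 << bits) - 1)  # 1 << n is the same as 2**n
# 8 255
# 16 65535
# 32 4294967295
```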
In modern times, it is common practice for a computer to deal with binary numbers in multiples of 8 bits. A grouping of 8 bits is also known as a byte. There is no specific reason why binary numbers must be dealt with in groups of 8 digits other than that this is now the convention virtually all computers use. Before the convention settled, historical machines commonly worked with other sizes that were not multiples of 8.
All computers have a maximum number of binary digits that they can work on at once, which is sometimes referred to as the word size. This is a limitation imposed by the electrical design of the machine. There are only so many logic circuits that can be crammed onto a chip and working with larger numbers of digits requires more logic circuits which takes up more space. As technology has improved, the number of binary digits that can be worked on in a single operation has increased.
In the early days of home computers an 8-bit word size was commonplace because that was all that could be fitted on a chip with the technology of the day. The majority of computers now work, for the most part, with 64-bit words; however, some operations may be performed with larger or smaller word sizes. The word size affects the performance of the machine. An 8-bit computer can represent numbers larger than 255, but it requires multiple operations to complete a calculation that a 64-bit machine may be able to do all at once.
Kilobytes and Megabytes
In scientific writing the prefixes kilo and mega are often used to mean thousand and million respectively. For example a kilogram is 1000 grams and a megawatt means one million watts. These prefixes are also used in computer science but in a slightly different way due to the prevalence of the binary counting system in this field.
The prefixes arise in the decimal counting system when we have reached a nice round number, usually a multiple of 1000 but a round number in decimal is not a round number in binary. For example 1000 in decimal converted to binary is 1111101000. One million in binary is 11110100001001000000. In binary these numbers do not end with all zeros as they do in decimal.
In computer science it is common practice to choose a round number for the kilo and mega prefixes that is somewhat near to the equivalent in decimal. For example 1024 in decimal produces a round number in binary of 100 00000000. The number 1024 is fairly close to 1000 in decimal.
For the mega prefix, we choose 1024 x 1024, which is 1,048,576. This produces a round number in binary: 10000 00000000 00000000.
Higher prefixes are also used: giga means a billion (1000 million) and tera means a trillion (1000 billion). To get the equivalent round numbers in binary we keep multiplying by 1024. So giga in binary means 1024 x 1024 x 1024 = 1,073,741,824 and tera means 1024 x 1024 x 1024 x 1024 = 1,099,511,627,776.
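Python makes it easy to verify these values and to see the "round number" effect: powers of 1024 end in long runs of zeros when written in base 2, just as powers of 1000 do in base 10:

```python
# A decimal round number is not round in binary...
print(bin(1000))  # -> 0b1111101000

# ...but powers of 1024 are: a 1 followed by a run of zeros.
for name, power in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
    value = 1024 ** power
    print(name, value, bin(value))
```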
There is disagreement between the scientific usage, where the prefixes mean strictly multiples of 1000 in decimal, and the commonplace computer science usage, where they mean multiples of 1024. To resolve this, a variation of the prefixes was introduced to distinguish them: kibi, mebi, gibi and tebi. These prefixes always mean the computer science version and are always multiples of 1024.
In practical everyday language kilo, mega, giga and tera are commonly used to mean multiples of 1024. The alternative prefixes are used only intermittently.
Note: some manufacturers deliberately use misleading terminology where it is to their commercial advantage. This practice is particularly prevalent in storage devices. Many drive vendors use the scientific version of tera to mean exactly one decimal trillion instead of the commonplace computer science variation meaning multiples of 1024. The reason is that 1,000,000,000,000 is about 9% smaller than 1,099,511,627,776, so the drive manufacturer gives the customer less storage space than they think they are getting!
Hexadecimal
So far we have considered number bases that require fewer than 10 digits, but what happens if a number system requires more than 10 digits? This implies we need symbols for digits higher than 9, which our usual numerals don't provide. There's no particular reason why you can't just make up your own new number symbols; however, the common convention is to use letters instead. Typically A means digit 10, B means digit 11, C means digit 12 and so on.
A number system that is used very commonly in computer science and requires more than 10 digits is base 16, which is also known as hexadecimal or hex for short.
The smaller the number base, the more digits are needed to represent a number. As binary is the smallest number base, numbers tend to have a lot of digits and are very tedious to type in. It is also difficult for people to spot errors in long binary numbers and to memorise them.
Hexadecimal is a popular alternative number base used for entering binary numbers into a computer. Far fewer base 16 digits are required to enter a number than are needed with binary digits, which makes numbers easier for a person to both type and read. The reason hexadecimal is used for this purpose instead of decimal is that the number of hexadecimal digits required to represent a number has a precise correspondence with the number of binary digits needed to represent the same number. There is always exactly one hexadecimal digit per four binary digits, whereas decimal numbers do not fall on such convenient boundaries. The conversion between a hex number and its binary equivalent is very simple whereas it's more complicated for decimal.
Consider the 8-bit binary number 1100 1110, which is CE in hexadecimal. In this example there are exactly two hex digits required to represent the 8 binary digits. The hex digit C (which is 12 in decimal) is 1100 in binary. The hex digit E (which means 14) is 1110 in binary.
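The one-hex-digit-per-four-bits correspondence is easy to demonstrate in Python, using the value CE mentioned above:

```python
# Each hex digit corresponds to exactly four binary digits.
n = 0b11001110           # 8-bit binary value
print(format(n, "02X"))  # -> CE       (two hex digits)
print(format(n, "08b"))  # -> 11001110 (eight binary digits)

# Converting hex to binary is just a per-digit lookup.
for digit in "CE":
    print(digit, format(int(digit, 16), "04b"))
# C 1100
# E 1110
```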
Another number base frequently used in computing is octal, which is base 8. Octal can sometimes be convenient because each octal digit always corresponds with exactly three binary digits. However, since it is now commonplace to work in 8-bit bytes, octal digits often straddle byte boundaries, because 8 does not divide exactly by 3. Hex, where each digit maps to 4 bits, always divides evenly into any combination of 8-bit bytes.
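A small Python sketch makes the contrast visible. The 16-bit value below is an arbitrary choice for illustration: its hex form splits cleanly into two digits per byte, while its octal digits do not line up with the bytes at all:

```python
# A 16-bit value shown in hex and octal. Hex gives exactly two
# digits per byte; octal digits straddle the byte boundary
# because 16 is not a multiple of 3.
n = 0xABCD
print(format(n, "04X"))  # -> ABCD   (AB is the high byte, CD the low)
print(format(n, "06o"))  # -> 125715 (no clean per-byte grouping)
```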
Image Credits: Abacus, by N509FZ, Creative Commons Attribution-Share Alike 4.0 International