What Is a Bit? What is a Byte?
Newbie Introduction to the Web
This is yet another attempt to explain what bits and bytes are, in principle.
Imagine a single wire, for example a copper wire. Information can travel through that wire in one of two states. The information, or let us call it data, can have only two states. When a high voltage (typically a few volts, around 2.6 in many systems - if you touch the wire, you will not feel a thing) is put on the line, in the vast majority of systems that is denoted by a 1 - one. When a low voltage (typically no more than 0.5 volt) is put on the line, that is denoted by a 0 - zero. So the maximum count is 2: a 0 and a 1. There are also systems in the industry that run on "negative logic," where high voltage is denoted by a zero and low voltage by a one.
This is the digital world, as opposed to the analog world. The binary system - notice the bi (two) in binary - is part of the digital world. Nature normally works in an analog fashion; digital is man-made. If the voltage on a line falls somewhere between the two required values, the result is unpredictable. In mathematics, a digital system is called a discrete-time system and an analog one a continuous-time system. Sunlight, for example, reaches us continuously; there is no ON and OFF. A one and a zero are sometimes denoted by YES and NO, or ON and OFF, respectively.
Imagine you have two wires. The lowest number is 00 and the highest is 11, so the count is 4. Four parallel wires give 0000 as the lowest count and 1111 as the highest, so the count runs from 0 to 15, a total of 16.
So if we write the binary numbers in a 4-bit system, they are 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111. The last number is 15, but the total count is 16, including the first number 0000. Creating a particular number is then just a matter of putting either high or low voltage on each line. Remember, binary places are always counted from right to left.
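The full 4-bit count above can be generated in a few lines; here is a quick sketch in Python (the language choice is mine, purely for illustration):

```python
# Print every 4-bit binary number, 0000 through 1111.
for n in range(16):
    print(format(n, "04b"))  # "04b" pads with zeros to 4 binary digits
```

Running this prints the same sixteen patterns listed above, 0000 first and 1111 last.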
So adding 1 to 0000 gives 0001, and adding 1 to 0111 gives 1000. In the binary system, 1 + 1 = 10, because 1 is the highest numeral; there is no numeral above 1. The 1 in 10 is the carry from the addition.
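You can check the carry behavior directly; this small Python snippet (again just a sketch) uses the 0b prefix, which is how Python writes binary literals:

```python
# 0111 + 1 carries all the way over to 1000.
result = 0b0111 + 1
print(format(result, "04b"))  # prints the sum in 4 binary digits: 1000
```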
In the case of 8 bits, the first number is 00000000 and the last is 11111111. If you do the count, the last number is 255, for a total of 256.
So, for one line, the maximum count is 2.
For 2 lines, the max. is 4.
For 4 lines, the max. is 16.
For 8 lines, the max. is 256.
For 16 lines, the max. is 65536.
And so on… See how the numbers multiply when you double the number of lines. When you double the lines from 4 to 8, and you know that the max. for 4 is 16, multiply 16 by itself (16 to the power of 2) to get the max. for 8: 256. Double the lines again, from 8 to 16, and 256×256 (256 to the power of 2) gives 65536.
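The general rule behind this table is that the maximum count for n lines is 2 to the power n, which a short Python check confirms:

```python
# Maximum count for n lines (bits) is 2 to the power n.
for n in [1, 2, 4, 8, 16]:
    print(n, "lines ->", 2 ** n)
```

This prints 2, 4, 16, 256, and 65536, matching the list above.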
Now let us say you go from 4 lines to 5 lines. In this case, the max. doubles. The max. for 4 lines is 16, so for 5 it is 16*2 = 32. For six, it is 64 (32*2), for 7 it is 128 (64*2), and for 8 it is again 256 (128*2). So each extra line doubles the previous maximum. Notice that in the binary system, shifting a 1 to the left also doubles the number. For example, in a 4-bit system, 0010 (2) becomes 0100 (4) when the 1 is shifted one binary place to the left.
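Python's left-shift operator (<<) demonstrates exactly the doubling just described; a minimal sketch:

```python
# Shifting left by one binary place doubles the value.
x = 0b0010                    # decimal 2
print(format(x << 1, "04b"))  # shifted one place left: 0100, decimal 4
```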
Now a line (which, remember, has 2 states) is known as a bit. So a bit can have two states, a 0 and a 1. When Intel came out with its first microprocessor, the 4004, in 1971, it was a 4-bit engine that could process 4 bits of data. Intel, or someone else, named a group of 4 bits a nibble.
A nibble is 4 bits, a byte is 8 bits, and a word (on those early machines) is 16 bits.
If you look at an ASCII code table, all letters, numerals, and common symbols can be denoted by 7 bits, for a total of 128 characters (0 to 127). Many tables also show an extended ASCII set that uses the 8th bit of the byte; it starts at 128 and ends at 255.
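Python's built-in ord and chr functions map between characters and their ASCII codes, so you can explore the table yourself; a quick illustrative look:

```python
# Every ASCII character fits in 7 bits (codes 0 through 127).
print(ord("A"))                  # the code for capital A: 65
print(chr(97))                   # the character for code 97: a
print(format(ord("A"), "07b"))   # 65 written in 7 binary digits: 1000001
```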
There are other systems, for example hexadecimal and octal. Hexadecimal, as I understand it, was popularized by IBM and used in their mainframe computers. The octal system was used by Digital Equipment Corporation (DEC) in their minicomputers, such as the PDP-11.
In the binary system, the count is from 0 to 1 for a total of 2.
In decimal, it is 0 to 9 for a total of 10.
In hexadecimal (hexa for six plus decimal for ten, making sixteen), it is 0 to 15 for a total of 16.
In the octal system, the count is from 0 to 7 for a total of 8.
We discussed binary and decimal. Now hexadecimal and octal systems.
The principle in all systems is the same; only the notation is a little different. The hexadecimal digits are 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F. Because there is no single digit for ten, it is denoted by A; eleven is B, twelve is C, thirteen is D, fourteen is E, and finally fifteen is F. Including 0, the total is sixteen.
Similarly, the highest numeral in the octal system is 7. So when you add 1 to 7, instead of becoming 8, in octal it becomes 10.
So the decimal number 255 is FF in hexadecimal, 377 in octal and 11111111 in binary.
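Python's built-in hex, oct, and bin functions perform these same conversions (the 0x, 0o, and 0b prefixes are just Python's way of marking the base); a sketch:

```python
# Decimal 255 in the three other bases discussed above.
n = 255
print(hex(n))  # 0xff       (FF in hexadecimal)
print(oct(n))  # 0o377      (377 in octal)
print(bin(n))  # 0b11111111 (11111111 in binary)
```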
Magic of Shifting
We are all familiar with the decimal count. The reason it is called decimal is that it has a total of 10 numerals, from 0 to 9, and then the numerals repeat forever. So decimal 10 is a repeat of the original 1 and 0, and 25 is actually a 2 and a 5. Shifting a number one decimal place to the left - that is, adding a 0 at the extreme right - makes it 10 times the original. For example, 10 shifted one decimal place to the left becomes 100, which shifted one more place becomes 1000, and so on. So in the decimal system, a shift one place to the left multiplies by 10, just as in the binary system, as stated above, a shift of the 1 to the left multiplies by 2.
On the other hand, shifting to the right divides the number by 2 in binary and by 10 in decimal.
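Both directions of shifting can be sketched in Python, where >> shifts right and << shifts left (the decimal case is simulated with ordinary division, since the shift operators are binary only):

```python
# Right shift halves in binary; dividing by 10 is the decimal equivalent.
x = 0b1000          # decimal 8
print(x >> 1)       # 4: one right shift halves it
print(x << 1)       # 16: one left shift doubles it
print(1000 // 10)   # 100: a decimal "right shift" divides by 10
```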
Approximation in the real world
For 8 bits, the max. count is 256. For 9 bits, it is double = 512 and for 10 bits it is 1024.
As we all know, 1K is a kilo, which is 1,000. 1024 is easy to remember, but it is still 1000 and 24, or 1K and 24. In everyday life, people usually drop the 24 and say just 1K. 2K is actually 2048, and 4096 is 4K. By the same token, the number 65536 we got above from 16 bits is rounded to 64K, and similarly 65536*2 = 131072 is rounded to 64K*2 = 128K. Next come 256K, 512K, 1024K, and so on.
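The exact powers of two behind these rounded names are easy to verify; a quick Python check:

```python
# The exact values behind the everyday "K" shorthand.
print(2 ** 10)   # 1024, called 1K
print(2 ** 11)   # 2048, called 2K
print(2 ** 12)   # 4096, called 4K
print(2 ** 16)   # 65536, called 64K
```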
Now 1024K is 1K*1K = 1,048,576, which is roughly 1 million, so it is called 1 megabyte = 1MB.
1000MB = 1 Giga Byte = 1GB
Personal computers come with speed in Giga Hertz and Giga Byte memory.
1000GB = 1 Tera Byte = 1TB
1000TB = 1 Peta Byte = 1PB
Above peta come exa, zetta, and yotta; look them up if you are curious.
The names and abbreviations for numbers of bytes are easily confused with the notations for bits. The abbreviations for numbers of bits use a lower-case “b” instead of an upper-case “B”. Since one byte is made up of eight bits, this difference can be significant. For example, if a broadband Internet connection is advertised with a download speed of 3.0Mbps, its speed is 3.0 megabits per second, or 0.375 megabytes per second (which would be abbreviated as 0.375MBps). Bits and bit rates (bits over time, as in bits per second [bps]) are most commonly used to describe connection speeds, so pay particular attention when comparing Internet connection providers and services.
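The bits-to-bytes conversion in that example works out as follows; the numbers are taken straight from the paragraph above:

```python
# Convert an advertised connection speed from megabits to megabytes per second.
mbps = 3.0        # advertised speed: 3.0 megabits per second (Mbps)
MBps = mbps / 8   # one byte is 8 bits
print(MBps)       # 0.375 megabytes per second (MBps)
```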