In this tutorial, we’ll study the difference between words and bytes.
The smallest unit of digital information is the binary bit. Modern processors, however, don't address individual bits directly: even bitwise logic operates on whole groups of bits at once. These groups are called bytes, and they are the smallest units of information that a processor can address in memory. Today a byte consists of 8 bits, but this hasn't always been the case.
One of the first encodings of characters into bits derived from the International Teleprinter Code of the 19th century. In that encoding, a byte comprises 5 bits and allows the mapping of all uppercase Latin letters plus some punctuation. Even though it was later dropped in favor of longer bytes, the 5-bit byte remained in use until the 1980s for encoding other character sets, in particular Arabic and Farsi.
The 5-bit byte was eventually replaced by the 6-bit byte. This is because, at the time, most machines used words whose length was a multiple of 6 bits; see the next section for clarification. The 6-bit encoding allowed the mapping of most punctuation but still left no room for lowercase letters.
The subsequent introduction of the ASCII character encoding, and its adoption as an international standard, led to the extension of a byte to 7 bits. A 7-bit byte could encode all uppercase and lowercase letters, punctuation, and some national variations of the letters of the English alphabet.
The byte comprises 8 bits today, though this is a relatively recent standardization. The extension from 7 to 8 bits originally derived from noise in the data transmission channels of early computers: the 8th bit was a parity bit used for error detection when transmitting a 7-bit ASCII character.
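The parity scheme above can be sketched in a few lines of Python. This is an illustrative even-parity example, not a reconstruction of any specific historical transmitter:

```python
def add_even_parity(ch: str) -> int:
    """Extend a 7-bit ASCII code with an even-parity bit in the top position.

    The parity bit is chosen so that the total number of 1-bits is even,
    letting the receiver detect any single-bit transmission error.
    """
    code = ord(ch)
    if code > 0x7F:
        raise ValueError("not a 7-bit ASCII character")
    ones = bin(code).count("1")
    parity = ones % 2          # 1 if the count of 1-bits is odd
    return (parity << 7) | code

# 'A' is 0b1000001 (two 1-bits, already even): the parity bit stays 0.
# 'C' is 0b1000011 (three 1-bits, odd): the parity bit becomes 1.
```

A receiver applies the same count: if the 8 received bits contain an odd number of 1s, at least one bit was corrupted in transit.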
As of today, the usage of 8-bit bytes derives from the corresponding international standard, which states that “the number of bits in a byte is usually 8”, though not necessarily. Modern architectures that use bytes of different sizes are, however, uncommon in practice.
3. Bytes, Processors, and Programming
A byte is also the smallest unit of information that can have an address in a computer's memory. While digital information uses bits as its fundamental unit, processors don't operate on isolated bits directly. Instead, they retrieve from memory the byte containing the bit they need, compute the required operation, and store the result back at the memory address of the original byte.
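This read-modify-write cycle can be mimicked in Python, using a `bytearray` as a stand-in for byte-addressable memory. The names and the 4-byte memory size are purely illustrative:

```python
# A bytearray acting as a stand-in for byte-addressable memory.
memory = bytearray([0b00000000] * 4)

def set_bit(mem: bytearray, bit_index: int) -> None:
    """Set a single bit by loading its byte, updating it, and storing it back."""
    address, offset = divmod(bit_index, 8)   # which byte, which bit within it
    byte = mem[address]                      # fetch the whole byte
    byte |= 1 << offset                      # operate on the target bit
    mem[address] = byte                      # store the result back

set_bit(memory, 10)    # bit 10 lives in byte 1, at bit offset 2
# memory[1] is now 0b00000100
```

Note that even though only one bit changes, the load and the store both move an entire byte: the bit itself never has its own address.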
The byte is also a primitive data type in programming. The byte as a data type has different definitions according to the specific programming languages that we use:
- A byte in Java is an 8-bit signed two's complement representation of an integer, with values between -128 and 127
- In Python, a byte is an integer in the interval [0, 255]
- In Scala, as in Java, a byte is an 8-bit signed two's complement representation of an integer; in contrast to Java, though, it isn't a primitive data type but rather a full-blown object
- In C and C++, the byte corresponds to an unsigned char
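The difference between the signed and unsigned conventions is easy to see from Python, whose `bytes` elements are unsigned integers in [0, 255]. The helper that emulates Java's signed interpretation is our own illustrative sketch:

```python
# Each element of a bytes object is an int in the interval [0, 255].
data = bytes([0, 127, 255])
assert all(0 <= b <= 255 for b in data)

# Values outside the interval are rejected at construction time.
try:
    bytes([256])
except ValueError as e:
    print("out of range:", e)

# Java's byte, by contrast, is signed: a Python int can emulate the
# wrap-around of 8-bit two's complement arithmetic explicitly.
def to_java_byte(value: int) -> int:
    """Reduce an int to the signed 8-bit range [-128, 127]."""
    value &= 0xFF
    return value - 256 if value & 0x80 else value
```

The same 8 bits thus read as 200 under Python's convention and as -56 under Java's: the bit pattern is identical, only the interpretation differs.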
A word, instead, is a unit for data processing that's specific to a given computer architecture. We saw how the size of a byte originates from the encoding of characters; the size of a word, instead, depends on the instruction set of a processor. The word is, usually, the sequence of bits that can be transferred in one operation from the working memory to a register of the processor.
The word is parametrized by its size or length. The size of a word varies according to the system architecture, with modern computers generally using 32-bit or 64-bit words, though other sizes are possible.
The early Z3 computer, for example, used 22-bit words. Computers for space exploration in the 1960s instead used 39-bit words, which consisted of three 13-bit syllables. Today, even supercomputers use 64-bit words, like most standard commercial computers.
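From Python, a practical way to probe the native word size of the machine is to measure the size of a C pointer, which occupies one machine word. This is a heuristic sketch, not a formal definition of word length:

```python
import struct
import sys

# A C pointer ("P") occupies one machine word, so its size in bits
# is a practical proxy for the native word length.
word_bits = struct.calcsize("P") * 8
print(f"native word size: {word_bits} bits")   # 64 on most modern machines

# On CPython, sys.maxsize equals 2**(word_bits - 1) - 1, which is
# consistent with a signed integer one word wide.
assert sys.maxsize == 2 ** (word_bits - 1) - 1
```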
5. Words and Operations
Words can also have different structures, in the sense that they comprise bits with different meanings. Typical operations on words, such as logical manipulation, floating-point arithmetic, or address arithmetic, each use their own word structure.
As an example, consider a structure for a 64-bit word used in floating-point arithmetic, in which dedicated bits indicate an exponent flag, the sign of the exponent, and the sign of the number.
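Modern machines follow the IEEE 754 layout for such words, which likewise reserves dedicated bit fields. The following sketch decodes the three fields of a 64-bit double; the bit positions are those fixed by the IEEE 754 standard:

```python
import struct

def float_fields(x: float) -> tuple:
    """Split a 64-bit IEEE 754 double into its sign, exponent, and fraction fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # the raw 64-bit word
    sign = bits >> 63                  # 1 bit: sign of the number
    exponent = (bits >> 52) & 0x7FF    # 11 bits: biased exponent
    fraction = bits & ((1 << 52) - 1)  # 52 bits: fractional part
    return sign, exponent, fraction

sign, exponent, fraction = float_fields(-1.0)
# -1.0 has sign 1, biased exponent 1023 (i.e. 2**0), and fraction 0
```

The same 64 bits would mean something entirely different if interpreted as an integer or a memory address, which is exactly the sense in which each kind of operation uses its own word structure.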
6. Conclusion

In this tutorial, we analyzed the characteristics of words and bytes and discussed their different relationships with memory and processors.