The 64-bit revolution

BUSTING THE JARGON
In this article, I’ll talk about the width of registers, the width of the data bus and the width of the address bus so, to start with, it’s important to understand what each of these terms means.
Registers are areas of memory inside a processor used by the arithmetic logic unit (ALU) when carrying out mathematical and logical operations. Assume you have values in two parts of the main memory and you also have software that needs to add these together, storing the result to a third location. Typically this would be done by loading a register with the value from the first memory location, adding the value from the second memory location to the value in the register, and then storing the resultant value in the register to a third memory location.
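To make that pattern concrete, here is a minimal C sketch of the load-add-store sequence described above. The variable names are invented for the example; the point is simply that the compiler turns the single addition into exactly that register traffic (load one value into a register, add the second, store the result).

    #include <stdio.h>

    int main(void)
    {
        int a = 1200;           /* value at the first memory location  */
        int b = 34;             /* value at the second memory location */
        int result;             /* the third memory location           */

        /* Conceptually: load a into a register, add b to the register,
           then store the register back out to result. */
        result = a + b;

        printf("%d\n", result); /* prints 1234 */
        return 0;
    }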

The magnitude of the values a processor can handle depends on the width of these registers. For example, for four, eight and 16 bits, the range of signed integer (whole number) values that can be accommodated are -8 to +7, -128 to +127, and -32,768 to +32,767 respectively. However, it would be wrong to think that a four-bit processor, for example, can’t work with values less than -8 or greater than +7. This is a limit to the values that can be worked on in a single operation, but the software can store larger numbers in multiple memory locations and operate on them using more than one instruction. The downside, though, is that issuing multiple instructions slows things down.
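As a rough illustration of that multi-instruction technique, the following C sketch adds two 16-bit numbers using nothing wider than eight-bit quantities, much as software on an eight-bit processor has to: the low bytes are added first and any carry is folded into the sum of the high bytes. The function name and test values are purely illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Add two 16-bit values using only 8-bit arithmetic: low bytes first,
       then high bytes plus any carry from the low-byte addition. */
    static uint16_t add16_via_8bit(uint16_t x, uint16_t y)
    {
        uint8_t x_lo = x & 0xFF, x_hi = x >> 8;
        uint8_t y_lo = y & 0xFF, y_hi = y >> 8;

        uint8_t lo    = (uint8_t)(x_lo + y_lo);
        uint8_t carry = (lo < x_lo) ? 1 : 0;      /* did the low byte overflow? */
        uint8_t hi    = (uint8_t)(x_hi + y_hi + carry);

        return (uint16_t)((hi << 8) | lo);
    }

    int main(void)
    {
        printf("%u\n", add16_via_8bit(300, 500)); /* prints 800 */
        return 0;
    }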

The data bus is the electronic interface that is used to transfer data between the processor and memory, and its width is the number of parallel connections. For example, an eight-bit data bus has eight parallel lines, so it can transfer eight-bit wide values each time a memory location is accessed. The address bus is the collection of electronic signals the processor uses to define which location in memory a value is to be read from or written to. As we’ll see, its internal (or logical) width can be different from its external (or physical) width. The width of the address bus dictates how much memory the processor can access. Address buses of 8, 16 and 32 bits permit 256 bytes, 64KB and 4GB to be addressed respectively. The amount of addressable memory increases rapidly with the width of the address bus; in fact, it doubles with each additional bit.
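The doubling rule is easy to verify. This short C snippet, purely for illustration, prints the number of bytes addressable by several of the bus widths mentioned in this article (2 raised to the power of the width).

    #include <stdio.h>

    int main(void)
    {
        /* Addressable memory is 2^width bytes, so it doubles with every
           extra address line. */
        int widths[] = { 8, 16, 20, 24, 32, 36 };

        for (int i = 0; i < 6; i++) {
            unsigned long long bytes = 1ULL << widths[i];
            printf("%2d-bit address bus: %llu bytes\n", widths[i], bytes);
        }
        return 0;
    }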

PROCESSOR DEVELOPMENT

A new headline bit number isn’t an everyday event in the world of personal computing. The move to 64 bits was the first such change since 16-bit processors gave way to the 32-bit version in 1985. To help you understand the 64-bit revolution, I’ll take a brief look at the history of processor development.

The very first microprocessor was the Intel 4004, which was launched in 1971. It was a four-bit processor, meaning that it had four-bit wide registers. In the history of personal computing, though, the 4004 was something of a blind alley as it was used mostly in equipment such as calculators and cash registers.

The first processors to make their way into home computers were eight-bit chips such as the Intel 8080 and 8085, the Motorola 6800 and 6809, the Zilog Z80 and the MOS Technology 6502. These appeared throughout the 1970s and were used in some of the first home computers such as the Tandy TRS-80, the BBC Micro and the ZX Spectrum. These eight-bit processors all had eight-bit registers. The move to 16-bit processors offered huge performance gains. Intel’s 8088, launched in 1979, made its way into the first IBM PC and personal computing as we know it was born. These processors had 16-bit registers. The transition to 32 bits, with the 80386, brought us 32-bit registers and, again, significant performance gains.

In the development from four-bit, through eight-bit and 16-bit, to 32-bit processors, in each case, that magic number referred to the width of the registers. Since loading values from memory into registers and storing values from those registers back into memory are two of the most common microprocessor operations, it might be reasonable to assume that the data bus would be the same width, but this hasn’t always been true. Certainly the 4004 had a four-bit data bus, eight-bit processors had eight-bit data buses, and the 8086 (Intel’s first foray into 16-bit computing) had a 16-bit data bus. However, the 8088 broke new ground in having an eight-bit data bus, at least externally. This was done to reduce the number of pins on the processor and allow PC designers to use simpler hardware on the motherboard, but it also had a downside.

Each time the processor issued an instruction to access a 16-bit value in memory, behind the scenes two eight-bit transfers were carried out. This had a negative impact on memory-intensive applications. A similar approach was adopted with the 32-bit 80386, which was available in two variants: the DX, which had a 32-bit external data bus, and the cut-down SX, with its 16-bit external data bus. Normality was restored with the 80486. This was available only with a 32-bit data bus, the same width as its registers, but with the Pentium the data bus grew to 64 bits, double the size of its registers, with a consequential increase in memory bandwidth.

FEEL THE WIDTH

Unlike the case with the data bus, there’s no good reason why there should be any fixed relationship between the width of the registers and the width of the address bus, except perhaps to simplify the design of the processor. With each new generation of processor, the address bus has tended to widen, allowing the chip to access more memory, but it has usually been a different width from the registers. In the case of the eight-bit processors, an eight-bit address bus would have allowed them to address only 256 bytes, which is meagre even by 1980s standards. Because of this, most eight-bit processors had 16-bit address buses, which meant they could access 64KB of memory. The 16-bit processors such as the 8088 and the 80286 had 20-bit and 24-bit address buses respectively (so could access 1MB and 16MB of RAM), and the various 32-bit processors had gradually widening address buses ranging from the 80386SX’s 24 bits to the Pentium 4’s 36 bits, which enabled it to access 64GB of RAM.

The 32-bit processors with 36-bit buses encountered a problem, though. The internal data paths were just 32 bits wide, so using the full 64GB of addressable memory required the introduction of a feature called Physical Address Extension (PAE) and specially written software to make use of it. Without it, 32-bit processors could address only 4GB of memory, and this is the limit that most PC users experienced.

DEFINING 64 BITS
Until the advent of 64-bit computing, the one thing we’ve been able to say about a processor is that the number of bits it has defines the width of its registers. According to this rule, a 64-bit processor ought to have 64-bit registers and so be able to operate on 64-bit wide values in a single operation. It’s somewhat surprising, therefore, that the hype surrounding this technology tells a different story.
What we tend to hear, often from so-called experts, is that the only real difference between a 32-bit and a 64-bit processor is that the latter can address more than 4GB of memory. So have the definitions changed? Does the 64-bit tag refer to something different from the four-, eight-, 16- and 32-bit descriptions that went before it? Aaron Coday, a member of Intel’s EMEA Visual Computing Enabling team in Munich, was asked what the company means when it says that its latest and greatest processors have a 64-bit architecture.
In the realm of today’s PC-oriented chips, he said, the term ‘64-bit’ means two things. First, they have a 64-bit address space, and second, they have a 64-bit native data size, which generally refers to integer data. Usually it means that most of the internal buses and internal registers are 64 bits wide as well. As Coday put it, “conceptually everything is 64-bit”.

It’s important to recognise, though, that implementations can vary between processors. The Pentium of the 1990s, for example, was a 32-bit processor, but it had a 64-bit data bus and this was hidden from the programmer. In the case of today’s 64-bit chips, the number of address bits brought out to pins will differ from one product to another. For reasons that will become clear once we do a bit of arithmetic, none has a 64-bit external address bus.
Since the address space is equal to two to the power of the number of bits, a full 64-bit address bus permits 16 exabytes of memory to be addressed. An exabyte is a thousand petabytes, a million terabytes or a billion gigabytes. At today’s prices this amount of memory would cost about £240 billion on wholesale markets, or almost nine times the amount the UK government used to bail out Northern Rock. In reality, most 64-bit processors from Intel and AMD currently have 36 external address bits so they are able to address 64GB, but this figure will probably increase in the future as memory prices fall.


THE 64-BIT BENEFITS
The 64-bit revolution allows processors to work with 64-bit rather than 32-bit values and it means they can address more memory, but what does this mean in practice? Can we use larger and more complicated software? Can we work with more data? Will our PCs be faster? Or will it provide us with some benefits that are not obvious from these technical facts and figures? These questions were put to Intel’s Aaron Coday.

First, we asked about the increase in the native data size, and hence the width of the registers, from 32 to 64 bits. On the face of it, this seems a questionable advantage. It’s easy to see that the 16-bit revolution would have brought benefits in this area since it allowed integer values greater than 127 to be worked with in a single operation. We can even appreciate that the move to 32 bits would also have been beneficial, since it broke the 32,767-integer barrier. But we’d have to question how much value 64-bit processors offer in being able to work with integers greater than the 2,147,483,647 available on 32-bit chips. After all, although high-performance scientific applications might need to work with super-large numbers, it seems unlikely that mainstream applications will often need to do so.

The key to understanding this is to think of the contents of the registers not necessarily as numbers but as chunks of data. As Coday explained, by reference to virus-scanning software, “with a 32-bit processor the software goes through memory 32 bits at a time, but if you go through it 64 bits at a time the pattern matching is much quicker.” He also said that the wider registers allow 64-bit processors to keep track of more objects. It’s not hard to conceive of instances in which software might need to count more than two billion items. When you play a game and load a level, the software has to distinguish many items, down to the individual triangles it uses to draw the scene. Having 64 bits means software can count them normally, without having to use special techniques.
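A hypothetical example in C may help. Real virus scanners are of course far more sophisticated, but the sketch below compares two buffers eight bytes at a time rather than byte by byte, which is the word-at-a-time principle Coday describes; the function and buffer contents are invented for the illustration.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Compare two buffers a machine word at a time instead of byte by byte.
       On a 64-bit processor each iteration inspects eight bytes at once. */
    static int buffers_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        size_t i = 0;

        for (; i + sizeof(uint64_t) <= n; i += sizeof(uint64_t)) {
            uint64_t wa, wb;
            memcpy(&wa, a + i, sizeof wa);   /* memcpy avoids alignment traps */
            memcpy(&wb, b + i, sizeof wb);
            if (wa != wb)
                return 0;
        }
        for (; i < n; i++)                   /* byte-wise tail */
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    int main(void)
    {
        unsigned char x[100], y[100];
        memset(x, 0xAB, sizeof x);
        memset(y, 0xAB, sizeof y);
        printf("%s\n", buffers_equal(x, y, sizeof x) ? "match" : "differ");
        return 0;
    }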
The second main advantage of 64-bit computing is the ability of the processor to access more memory. But how likely is it that a program will need to access more than 4GB? First, it’s important to clear up one possible misapprehension; namely that 4GB is available to applications in a 32-bit scenario. The fact is that hardware devices, most notably the graphics card, use memory address space. Windows and its drivers also reside in memory, with the result that no application will have anything close to 4GB at its disposal. So it’s more pertinent to ask how common it is for an application to need more than the three gigabytes or so that will be available in most 32-bit systems.

Undoubtedly software will grow more memory-hungry as programmers start to assume that the extra memory is available but, according to Aaron Coday, this can be to our advantage. He gave DVD-playing software as an example. If you can map the whole DVD image (which will usually exceed the memory capacity of a 32-bit system) into memory, it allows a technique called memory-mapped I/O (input/output) to be used. This makes it easier to program, so the software is likely to be more reliable and titles will come to market sooner.
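For the curious, here is a minimal sketch of the general technique using the POSIX mmap() call (Windows has its own file-mapping API, and real DVD-playing software is naturally far more involved). It simply maps a file named on the command line into the address space and reads its first byte; with 64-bit addressing there is room for a multi-gigabyte image.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <image-file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0 || st.st_size == 0) {
            fprintf(stderr, "cannot map an empty or unreadable file\n");
            return 1;
        }

        /* Map the whole file into the address space in one go. */
        unsigned char *image = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (image == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes; first byte is 0x%02x\n",
               (long long)st.st_size, image[0]);

        munmap(image, st.st_size);
        close(fd);
        return 0;
    }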
On the other hand, there are some areas of software design that can take advantage of the larger memory capacity to do things faster. Examples include cryptography, media applications and games.
Finally, we also need to consider the total memory requirement. If you have several applications running, even if none of them needs a lot of memory individually, their total memory requirement could be considerable. Certainly Windows can handle this, even if there’s not enough memory for all the programs, by paging them in and out of memory. In other words, inactive programs are written to temporary disk to free up RAM and brought back into memory later when needed. Since disk access is comparatively slow, switching between tasks is far faster if they can all reside in RAM.

GETTING PRACTICAL
It goes without saying that you need a 64-bit processor to join the 64-bit revolution, and the good news is that most processors sold today are indeed 64-bit. However, this is by no means an end to what you need, so let’s take a look at the other requirements.
We’ve seen that there are some potential benefits to be gained from 64-bit computing, even without more memory. For most users, though, such advantages will be minimal, so you should make sure you have enough memory to make use of a 64-bit processor’s extra address space. Exactly how much will depend on what applications you use, but 8GB is probably a good starting point. However, if you’re buying a PC with the intention of expanding the memory later when the need arises, bear in mind that some 64-bit motherboards, and hence some PCs, still only have a maximum capacity of 4GB. Such products are best avoided.
Moving from hardware to software, your next requirement is a 64-bit version of Windows. The following versions have a 64-bit variant available:

Windows XP Professional; Windows Vista Home Premium, Business, Enterprise and Ultimate; and Windows 7 Home Premium, Professional, Enterprise and Ultimate (but not Home Basic). In addition to the operating system itself, you also need 64-bit drivers for all your hardware. Even then, these resources will be seriously underutilised if you stick with 32-bit applications.
If you have a 64-bit processor, adequate memory, a 64-bit version of Windows and the necessary drivers, you’ll gain some benefit over the 32-bit alternatives. For example, we’ve already seen how using more memory than 4GB will allow for quicker task switching, since applications won’t be swapped in and out of memory. This applies irrespective of whether the applications are 32-bit (which will run fine on a 64-bit version of Windows) or 64-bit.
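Incidentally, a program can tell whether it was built as a 32-bit or a 64-bit binary simply by looking at the size of a pointer, as this trivial and purely illustrative C snippet shows (four bytes in a 32-bit build, eight in a 64-bit one).

    #include <stdio.h>

    int main(void)
    {
        /* Pointers are 4 bytes wide in a 32-bit build and 8 bytes in a
           64-bit one, so this reports 32 or 64 accordingly. */
        printf("This program was built as a %zu-bit binary\n",
               sizeof(void *) * 8);
        return 0;
    }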

Microsoft also told us that advanced security features are another reason to migrate to 64-bit but, to be pedantic, it seems likely that these features were introduced at the same time as 64-bit processors rather than being features that required a 64-bit architecture to operate.
If you really want to take advantage of the additional power available on a 64-bit system, you need to consider upgrading some of your applications to 64-bit versions. Even so, there’s no guarantee that you’ll benefit, since some types of application are more likely to experience a 64-bit boost than others. A large database, for example, will run faster because it can reside in memory and not on temporary disk space. Media-processing applications will also take advantage, as will games and software related to cryptography and security. It’s still early days, though, and the potential of 64-bit computing probably has yet to be fully appreciated by software developers. Watch this space.

INTO THE FUTURE
The advent of 64-bit processing represents the fifth generation in terms of the width of data pathways. But it’s interesting to note that the time between generations has increased vastly in recent years. It took Intel just a year to provide an 8-bit alternative to the 4-bit 4004, while the 16- and 32-bit milestones came a further seven and six years down the line. But we then had to wait for 18 years to take the final step to 64 bits. Even after such a long wait, the hype regarding this transition has been much less than that which accompanied the jump to 16 or 32 bits. This might just suggest that the law of diminishing returns is coming into play. If so, could it be that 64-bit computing will represent the pinnacle of processor development?

We asked Aaron Coday whether he thought we’d ever see an era of 128-bit computing. His response was predictable, given that so many people have had their fingers burned by high-tech crystal ball-gazing. “Can’t say I see anything that needs 128-bits,” he said, “but I’m not 100 per cent certain it will never happen.” What was particularly interesting, though, was his suggestion that some types of application can benefit from very wide registers even if the processor as a whole doesn’t need more than a 64-bit architecture. And what’s more, he wasn’t talking of a future development but of a feature already present in today’s chips.

Streaming SIMD Extensions (SSE) first made its appearance in the Pentium III in 1999. SIMD stands for Single Instruction Multiple Data. Now in its fourth iteration (SSE4), this permits several shorter values (for example, four 32-bit or two 64-bit numbers) to be packed together in a 128-bit register, thereby allowing them to be operated on using a single 128-bit instruction.
These super-wide instructions are generally considered to be of particular benefit in media and gaming applications, but they can also provide benefits to other processor-intensive tasks, such as financial and scientific applications.
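To give a flavour of what packed operation looks like to a programmer, here is a small C sketch using Intel’s SSE2 integer intrinsics (an earlier iteration than SSE4, but the packing idea is the same): four 32-bit integers sit in each 128-bit register and all four additions are carried out by a single instruction. The values are arbitrary.

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdio.h>

    int main(void)
    {
        /* Pack four 32-bit integers into each 128-bit register and add all
           four pairs with one SIMD instruction. */
        __m128i a   = _mm_set_epi32(4, 3, 2, 1);
        __m128i b   = _mm_set_epi32(40, 30, 20, 10);
        __m128i sum = _mm_add_epi32(a, b);

        int out[4];
        _mm_storeu_si128((__m128i *)out, sum);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
        return 0;
    }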

Something even more powerful is on the horizon, though. Advanced Vector Extensions (AVX) is the latest extension to the x86 architecture and will introduce a set of 256-bit registers and associated 256-bit instructions. It will appear in Intel’s forthcoming processor, codenamed Sandy Bridge, which is likely to appear in 2011. It will also be included in AMD’s Bulldozer processor, which is scheduled to hit the market in the same year. Welcome to the world of 256-bit processing.
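AVX follows the same pattern at twice the width. Assuming an AVX-capable processor and compiler (for example, GCC with its -mavx option), a sketch along these lines adds eight single-precision values per instruction using the published intrinsics; again, the values are arbitrary.

    #include <immintrin.h>   /* AVX intrinsics; needs AVX-capable hardware */
    #include <stdio.h>

    int main(void)
    {
        /* Eight single-precision floats packed into each 256-bit register,
           all eight additions issued as one instruction. */
        __m256 a   = _mm256_set1_ps(1.5f);
        __m256 b   = _mm256_set1_ps(2.5f);
        __m256 sum = _mm256_add_ps(a, b);

        float out[8];
        _mm256_storeu_ps(out, sum);
        for (int i = 0; i < 8; i++)
            printf("%.1f ", out[i]);     /* prints 4.0 eight times */
        printf("\n");
        return 0;
    }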

 
