The processor clock speed is only one of several factors that ultimately determine the productivity of a given computer system. The microarchitecture of the CPU itself, the number of instructions executed per clock cycle, the speed of the disk storage system, the design of the software in use, and so on all play a contributing role. But raising the clock speed of the CPU has traditionally been one sure way to get more work done in the same amount of time.
Listed in the specifications of almost every computer sold will be the clock speed of the Central Processing Unit. While often promoted as a selling point and an indication of computing power, clock speed comparisons are only valid and meaningful within a specific model group or microarchitecture. (The microarchitecture refers to the internal CPU components and the associated interconnections.) For example, prior to the demise of the Intel Netburst architecture, Pentium 4 processors were available with a clock speed of 3.8 GHz. At the time this was the highest processor speed available in a mass-produced desktop computer. What the specifications did not show was that an Athlon 64 FX series processor with a clock speed of 2.8 GHz was easily able to equal or exceed the amount of work done by the Pentium processor in an equivalent period of time. The difference lies in the microarchitectures of the two processors.
With Intel's shift from the Netburst architecture to the Core and Core 2 architectures came a significant move to lower clock speeds, dropping the fastest CPU clock from a previous high of 3.8 GHz for the Pentium 4 to 2.93 GHz (and recently 3.0 GHz) for the Core 2. Although the other processor manufacturers have not stepped backwards in CPU speed as Intel has, it is somewhat surprising that, given Intel's market share of CPUs sold during the period covered by our research ranged from 77 to 84%, the trend in average clock speeds continues upwards unabated:
[Chart: Average CPU speed by PC type]
The clock of a CPU is much like that of a quartz watch: a voltage applied to an oscillator crystal causes it to vibrate at a fixed frequency, producing a signal that alternates between high and low in a regular pattern. The clock frequency is a measure of how often the signal goes from low to high in a set interval, and the result can be pictured as the classic wave. The actual "work" done by the processor happens at specific points of the wave, and the average number of instructions completed in one complete cycle is known as instructions per cycle, or IPC.
It is apparent, then, that in order to increase the amount of work done within a particular timeframe there are basically two choices: increase the IPC or increase the speed at which the cycles occur. A simultaneous increase in both would be ideal. The Intel Netburst Pentium 4 was designed to increase the amount of work performed by continually raising the clock speed. As speeds approached the 4 GHz mark, however, it became increasingly difficult to keep energy usage and heat production at acceptable levels. Fortunately for Intel, the Core 2 architecture was in parallel development and was just coming to fruition as the era of Netburst drew to a close.
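The tradeoff described above can be sketched as simple arithmetic: rough throughput is clock frequency times IPC. The IPC figures below are assumptions chosen purely for illustration (real IPC varies widely by workload and was not published for either chip); the point is only that a lower clock with a higher IPC can come out ahead.

```python
def instructions_per_second(clock_hz: float, ipc: float) -> float:
    """Rough throughput estimate: clock frequency times instructions per cycle."""
    return clock_hz * ipc

# A high-clock, lower-IPC design vs. a lower-clock, higher-IPC design.
# Both IPC values are hypothetical, for illustration only.
high_clock_design = instructions_per_second(3.8e9, ipc=1.0)
high_ipc_design = instructions_per_second(2.8e9, ipc=1.5)

print(f"high clock: {high_clock_design:.2e} instructions/s")
print(f"high IPC:   {high_ipc_design:.2e} instructions/s")
# Here the lower-clocked design wins, because its higher IPC more than
# compensates for the slower clock.
```

Under these assumed numbers, the 2.8 GHz part completes 4.2 billion instructions per second against 3.8 billion for the 3.8 GHz part, which mirrors the Athlon-versus-Pentium comparison above.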
At about the same time that Intel opted for higher clock speeds, AMD elected to pursue higher IPC and reduced memory latencies to increase the amount of computing done in any given interval. This design proved to be more productive, used less energy, and produced less heat than the Netburst design, which depended on clock speed increases to pull up performance. And although the K8 design ultimately could not be clocked as high as a Pentium 4 due to its internal structure, that had little effect on its superior workload productivity.
With the introduction of the dual-core processor, yet another method for increasing productivity at the same or even a slower clock speed became available: parallel computing. Although its implementation depends primarily on software, the idea is to split the work between two processor cores, thereby reducing the time required to do it. For example, if it takes a single-core processor 90 seconds to perform a set number of specific calculations, then in theory, if the task is split between two cores running at the same clock speed as the single core, each would ideally finish its half of the calculations in 45 seconds.
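The split described above can be sketched in Python. This is a minimal illustration, not a benchmark: `sum_squares` is a made-up stand-in for a CPU-bound calculation, and two worker processes each take half of the range, just as the article's two cores each take half the work. (Processes are used rather than threads because CPython threads cannot run CPU-bound Python code in parallel.)

```python
from concurrent.futures import ProcessPoolExecutor

def sum_squares(start: int, stop: int) -> int:
    # Stand-in for a CPU-bound calculation.
    return sum(i * i for i in range(start, stop))

def run_split(n: int) -> int:
    # Ideal dual-core case: each worker takes half the range, so each
    # half can, in principle, finish in half the single-core time.
    with ProcessPoolExecutor(max_workers=2) as pool:
        return sum(pool.map(sum_squares, [0, n // 2], [n // 2, n]))

if __name__ == "__main__":
    n = 1_000_000
    # The split produces the same answer as doing it all on one core.
    assert run_split(n) == sum_squares(0, n)
```

In practice the ideal 2x speedup is rarely reached, since splitting and recombining the work carries overhead and not every task divides cleanly in half.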
As the architectures mature and more enhancements are implemented, clock speeds continue to rise as part of the performance increases. Although the gigahertz drag race of the previous designs is a thing of the past, new processors will continue to work faster, not just smarter.
Coming: What are the differences between an AMD and an Intel processor?