Clockwork Computers [Byte Size] | Nostalgia Nerd


Computers are a little like clockwork devices. A clock strikes a beat and a certain small
amount of work gets done. Just like the beginning piano player plays
to the beat of a metronome, computers run to the beat of an electronic clock. If you set the metronome too fast, the player
won’t have enough time to find the next piano key and the rendition will fall apart, or
at least sound pretty bad. Similarly, if you set the clock rate of a CPU
too high, it will malfunction and the system will crash. This won't necessarily damage the chip; it
just won't work. Part of a computer's design includes determining
the optimum clock rate. In some ways, the clock is like the coxswain
on a rowing team. He or she is the person who holds up the megaphone
and chants “Stroke, Stroke, Stroke!”. If the order is issued too quickly, the rowers
will get out of sync and the boat will slow down. In that sense, the coxswain is restricted
to dishing out commands no faster than the slowest rower can follow. In a PC, many chips work to the beat of the
computer's clock, and that means the clock can't run faster than the slowest components,
which tend to be the most complex ones. This is usually the CPU and memory, although
they can run on independent clocks. The negotiation between CPU and memory was
originally handled by a dedicated chip on the motherboard, commonly known as the Northbridge
or memory controller hub, and has most recently been integrated into the CPU itself. The clock rate is usually determined by the
frequency of an oscillator crystal, similar to the crystal in your watch. This produces a sine wave which is translated
by electronic circuitry into a square wave, and from there we can get binary signals. Either the wave is on, or off. After each clock pulse is sent scooting through
the circuitry in varying directions depending on the commands given, the signal lines inside
the CPU need time to settle into their new state… meaning a transition either from
0 to 1 or from 1 to 0. If the next clock pulse comes too quickly,
then the results will be out of sync and incorrect. It's this process of transitioning that produces
heat, and therefore the higher the clock rate, or number of transitions within
a second, the more heat is produced.
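
As a rough illustration of that sine-to-square conversion, here's a minimal Python sketch of what the clock circuitry does: it samples the oscillator's sine wave and thresholds it into a stream of on/off values. The 1kHz frequency and the sample count are purely illustrative.

import math

CLOCK_HZ = 1_000          # pretend oscillator frequency: 1kHz, purely illustrative
SAMPLES_PER_CYCLE = 8     # how finely we sample each cycle

def clock_signal(cycles):
    """Yield a square-wave clock: 1 while the sine is positive, 0 while it's negative."""
    for i in range(cycles * SAMPLES_PER_CYCLE):
        # sample mid-way between grid points so we never land exactly on a zero crossing
        t = (i + 0.5) / (CLOCK_HZ * SAMPLES_PER_CYCLE)
        sine = math.sin(2 * math.pi * CLOCK_HZ * t)   # raw oscillator output
        yield 1 if sine > 0 else 0                    # thresholded into a square wave

print(list(clock_signal(2)))
# [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0] - two clean on/off cycles
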
Originally CPU clocks were measured in hertz
and kHz, with 1kHz being a thousand ticks per second. Then in the 80s and 90s, CPU clocks generally
ticked at more than a million times per second, which equates to a clock speed of 1MHz or more. Early PCs and XTs used a 4.77MHz clock and
the AT originally harnessed a 6MHz clock. These processors didn’t yield a lot of heat
and generally didn't require heat sinks to dissipate the excess energy. Of course, in more recent times we're more
familiar with GHz, with a 1GHz clock ticking at 1,000 million times per second. That's some pretty mean progress, and also
a significant increase in heat energy.
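
To put those figures side by side, here's a quick back-of-the-envelope Python snippet using the clock speeds quoted in this piece. It shows how many ticks each one squeezes into a second, and how little time each tick leaves for the circuitry to settle.

# Ticks per second for some of the clock speeds quoted in this piece,
# and how long a single tick lasts at each speed.
examples = {
    "IBM PC/XT (4.77MHz)": 4.77e6,
    "IBM AT (6MHz)":       6e6,
    "1GHz CPU":            1e9,
    "3GHz CPU":            3e9,
}

for name, hz in examples.items():
    period_ns = 1e9 / hz   # one tick lasts 1/frequency seconds, shown here in nanoseconds
    print(f"{name}: {hz:,.0f} ticks per second, each lasting {period_ns:,.2f} ns")
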
For many years the maximum possible CPU speed
determined a lot about the rest of the computer. Usually a manufacturer would design a motherboard
to operate at the same speed as the CPU. When CPUs rose in speed from 5MHz to 8MHz,
motherboards followed suit. All of the chips on the board had to be 8MHz
chips, including the memory. This was not only expensive, but it was pretty
much impossible at the time to push them over 8MHz, so since 1984 motherboards have been designed
so that different parts can run at different speeds. This does mean that some speed was wasted,
but compromises must sometimes be made. But like processor speed, motherboard speeds
move with the times, and by 1989 motherboard speeds had been pushed to 33MHz. With Intel's 50MHz 486 processor on the scene,
a new strategy was required for the CPU to work with these still slower
boards. The original clock doubler was a special 486
that could plug straight into a 25MHz board and operate at that speed externally, but
internally run at 50MHz. This meant that numeric calculations and internal
data transfers were carried out at double speed, but external operations were queued up at
the 25MHz rate. A 66MHz 486 was developed to also plug straight
into the 33MHz boards. This trend continued as processors got faster
and faster than motherboards, through clock triplers and quadruplers, and today boards tend to run at
several hundred MHz, with that speed factored up to match the CPU speed. On boards that allow it, overclocking the
front side bus or its multiplication factor will therefore increase the speed of your
CPU. In modern multi-core processors, each core runs at its own clock rate.
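
That bus-and-multiplier relationship is simple enough to sketch in a few lines of Python. The first two figures are the 486 examples above; the last pairing is a made-up modern one, purely for illustration.

def cpu_clock(bus_mhz, multiplier):
    """Effective CPU clock: the external bus speed factored up by the internal multiplier."""
    return bus_mhz * multiplier

print(cpu_clock(25, 2))    # 50   -> the original clock-doubled 486 on a 25MHz board
print(cpu_clock(33, 2))    # 66   -> the 66MHz 486 on a 33MHz board
print(cpu_clock(100, 45))  # 4500 -> an illustrative 100MHz base clock with a 45x multiplier
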
So what then do these cycles of pulses mean? How does a cycle of electrons whizzing about
translate to what you see on your screen? Well, let's take a 1kHz clock rate. Now a CPU running at this speed can flick
each of its binary switches one thousand times every second. After receiving input that flicks these switches
into a certain configuration, the CPU will then produce a result which can be cached
and used on its next cycle. This continues, and once enough of these results
have been determined, we will have an instruction. It’s this instruction that might tell the
computer to place a pixel into memory. Cycle after cycle, instruction after instruction,
pixel by pixel, a picture is then pieced together in memory, before another instruction starts
the process of pushing that picture onto our screens.
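
To make that concrete, here's a toy Python calculation with some loudly made-up assumptions: the 1kHz clock from above, 4 cycles per instruction, and one instruction per pixel. It simply works out how long such a machine would need to piece together a small picture in memory.

CLOCK_HZ = 1_000            # the 1kHz example clock from above
CYCLES_PER_INSTRUCTION = 4  # assumption purely for illustration
WIDTH, HEIGHT = 40, 25      # a tiny hypothetical screen, one instruction per pixel

pixels = WIDTH * HEIGHT
cycles_needed = pixels * CYCLES_PER_INSTRUCTION
seconds = cycles_needed / CLOCK_HZ

print(f"{pixels} pixels x {CYCLES_PER_INSTRUCTION} cycles = {cycles_needed} cycles")
print(f"At {CLOCK_HZ} ticks per second that's {seconds:.0f} seconds for one picture")
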
Now the number of cycles required per instruction
depends on what program you're trying to execute or how the CPU is built, and it's more likely
that we would now be dealing with instructions per cycle, rather than cycles per instruction,
simply due to the vast amounts of data a CPU can now handle and compute within each single
cycle. A Core i7 can generally compute over 100,000
million instructions per second from a clock speed of just 3,000 million cycles per second
or 3GHz. This instruction rate is really what determines
how fast a given program will execute. The first electronic general-purpose computer,
the ENIAC, had a 100kHz clock rate and each instruction took 20 cycles, leading to an
instruction rate of 5kHz, or 5 thousand instructions per second.
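
The arithmetic behind those instruction rates is just the clock rate divided by the average cycles per instruction, or multiplied by the instructions per cycle. A quick Python sketch using the ENIAC figure above and the rough modern numbers quoted earlier:

def instructions_per_second(clock_hz, cycles_per_instruction):
    """Average instruction rate = clock rate / cycles needed per instruction."""
    return clock_hz / cycles_per_instruction

# ENIAC: 100kHz clock, 20 cycles per instruction -> 5,000 instructions per second
print(instructions_per_second(100e3, 20))

# The rough modern figure quoted above: ~100,000 MIPS from a 3GHz clock implies
# roughly 33 instructions per cycle, spread across the cores and execution units
print(100_000e6 / 3e9)
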
Of course, due to the varying ways processors
are built and handle data, and the fact that different tasks will massively impact how
many instructions are completed in a given time frame, it's far easier for us to compare
processor speed in cycles per second, rather than instructions per second, or millions
of instructions per second, known as MIPS, which is generally now just used for task
speed benchmarking.
