In the previous post, we built a circuit that remembers information using latches. But because the D latch follows input changes directly while the enable signal is on, problems arise in systems where multiple latches are connected.
Computers don’t process complex calculations all at once. Instead, they break them into multiple stages, storing each stage’s result in a latch before passing it to the next stage. For example, an addition result is stored in a latch, and that value is then received by the next latch for use in another operation.
But the D latch follows changes in the D input directly while the Enable (E) signal is on. If multiple latches are connected in sequence A → B → C in this state, what happens? The moment A’s output changes, B immediately picks up that value, and C follows right away. Each stage can’t independently hold its own value.
In the circuit below, three D latches are connected in series. Try toggling D while E=1. You’ll see all three latches’ Q values change simultaneously.
Data passes through all at once, making it impossible for each stage to independently hold its value.
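The ripple-through behavior can be sketched in a few lines of Python. This is a simplified software model, not the actual gate-level circuit: each latch is just "while E=1, Q follows D; while E=0, Q holds."

```python
class DLatch:
    """Level-sensitive D latch: while E=1, Q follows D; while E=0, Q holds."""
    def __init__(self):
        self.q = 0

    def update(self, d, e):
        if e:
            self.q = d
        return self.q

# Three latches in series, all sharing the same enable signal.
a, b, c = DLatch(), DLatch(), DLatch()

def step(d, e):
    # Each latch's input is the previous latch's output.
    qa = a.update(d, e)
    qb = b.update(qa, e)
    qc = c.update(qb, e)
    return qa, qb, qc

print(step(1, 1))  # (1, 1, 1) -- the new D ripples through all three at once
print(step(0, 1))  # (0, 0, 0) -- no stage holds its own value
```

Because all three latches are open at the same time, a single change at D reaches every stage in one step, which is exactly the problem described above.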
Therefore, we need to synchronize all circuits so they operate at the same timing. The clock is what handles this synchronization.
The clock is a periodic signal that repeats at a fixed interval inside the computer. It tells all circuits “act now” at the same beat, simultaneously.
What happens when we add a clock to the same pipeline?
Try the following sequence:
You’ve probably seen numbers like “3.2GHz” in computer specs. This refers to the clock frequency, meaning 3.2 billion beats per second. The higher this clock frequency, the more often the circuit can operate, enabling faster computation.
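To make the number concrete, here is the arithmetic: the period of one clock cycle is the reciprocal of the frequency, so a 3.2 GHz clock completes one beat in well under a nanosecond.

```python
# Clock frequency to clock period: 3.2 GHz = 3.2 billion cycles per second,
# so each cycle lasts 1 / 3.2e9 seconds.
freq_hz = 3.2e9
period_s = 1 / freq_hz
print(period_s * 1e9)  # period in nanoseconds: 0.3125
```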
So how do we implement memory that operates in sync with the clock? Let’s look at the D flip-flop.
The D flip-flop takes data input and stores it like a D latch, but it only accepts data at a specific moment (edge) of the clock. Depending on the design, that moment is either the rising edge (0 → 1) or the falling edge (1 → 0).
Because it only accepts data at that edge moment, the result is predictable and stable no matter how much the input fluctuates.
A rising-edge D flip-flop can be implemented using two D latches as follows. The front latch opens when CLK=0 to accept data, while the back latch opens when CLK=1 to pass the front latch’s value to the output. Because the two latches are never open at the same time, data passes through only at the moment CLK transitions from 0 to 1.
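The master-slave construction above can be modeled in Python. Again this is a behavioral sketch rather than the gate-level circuit: the front (master) latch is enabled when CLK=0, the back (slave) latch when CLK=1.

```python
class DLatch:
    """Level-sensitive D latch: while E=1, Q follows D; while E=0, Q holds."""
    def __init__(self):
        self.q = 0

    def update(self, d, e):
        if e:
            self.q = d
        return self.q

class DFlipFlop:
    """Rising-edge D flip-flop built from two D latches (master-slave)."""
    def __init__(self):
        self.master = DLatch()
        self.slave = DLatch()

    def update(self, d, clk):
        self.master.update(d, not clk)                # master open while CLK=0
        return self.slave.update(self.master.q, clk)  # slave open while CLK=1

ff = DFlipFlop()
ff.update(1, 0)      # CLK=0: master captures D=1, output still holds old value
q = ff.update(1, 1)  # rising edge: slave passes the captured 1 to the output
print(q)             # 1
ff.update(0, 1)      # D changes while CLK=1: master is closed, so Q stays 1
print(ff.slave.q)    # still 1
```

Because the two latches are enabled on opposite clock levels, the only moment data can travel from input to output is the 0 → 1 transition, which is what makes the behavior edge-triggered.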
Try the following sequence:
A single D flip-flop can store just one bit (0 or 1). But computers represent and process numbers in word units: groups of multiple bits, such as 8, 16, 32, or 64, handled all at once.
To store multiple bits of information simultaneously, multiple D flip-flops are connected in parallel to form a register. For example, an 8-bit register consists of 8 D flip-flops that store 8 bits simultaneously. Since all flip-flops share the same clock signal, 8 bits of data are stored and output at the same time.
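A register can be sketched the same way: n flip-flops in parallel, all driven by one clock. For brevity this sketch uses a simplified flip-flop that detects the rising edge itself, rather than the two-latch construction above.

```python
class DFlipFlop:
    """Simplified rising-edge D flip-flop: captures D when CLK goes 0 -> 1."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def update(self, d, clk):
        if clk and not self.prev_clk:  # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

class Register:
    """An n-bit register: n D flip-flops sharing a single clock signal."""
    def __init__(self, n=8):
        self.bits = [DFlipFlop() for _ in range(n)]

    def update(self, data, clk):
        # Every flip-flop sees the same clock, so all n bits latch together.
        return [ff.update(d, clk) for ff, d in zip(self.bits, data)]

reg = Register(8)
reg.update([1, 0, 1, 1, 0, 0, 1, 0], 0)        # clock low: nothing stored yet
out = reg.update([1, 0, 1, 1, 0, 0, 1, 0], 1)  # rising edge: all 8 bits latch
print(out)  # [1, 0, 1, 1, 0, 0, 1, 0]
```

The shared clock is the key point: no bit of the word can update ahead of the others, so the register always holds a consistent 8-bit value.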
Registers are located inside the CPU, serving as very fast temporary storage for critical information the CPU needs to process right away — such as values for the arithmetic unit to compute or addresses for the control unit to fetch the next instruction. Because they’re on the same chip as the CPU, access speed is extremely fast, but they’re also expensive and take up significant space.
We now know how to give a computer the ability to “remember” its computation results, not just perform addition.
With both computation and memory capabilities in hand, think about the computer you use every day — browsing the web, playing music, running games. It looks nothing like the circuits we’ve built. Can we call this a computer? In the next post, we’ll define what a computer is and see how all the circuits we’ve built so far are assembled into one.