This article explains which invention allowed computers to become smaller. Computers shrank steadily as they progressed from the first to the fifth generation. The short and sweet answer to the question "Which invention allowed computers to become smaller in size?" is that first transistors (the second generation) and then integrated circuits (the third generation) allowed computers to shrink. A modern integrated circuit can pack billions of transistors into a single package. I recommend you glance at our article on the invention of the computer before going through this.
In 1965, Gordon Moore posited that the number of transistors on a microchip would double roughly every two years. Commonly referred to as Moore's Law, this observation suggests that computers will keep becoming significantly faster, smaller, and more efficient over time. Size reduction has continued at roughly the pace Moore's Law predicts. However, I would say it was not any single invention that made computers smaller. Instead, steadily better manufacturing and engineering of the components used to build a computer has made them smaller. Better engineering makes miniaturization possible.
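For a concrete sense of what biennial doubling implies, here is a minimal Python sketch. The two-year doubling period comes straight from Moore's Law; the 1971 baseline of 2,300 transistors (the Intel 4004) is just an illustrative starting point, not a figure from this article.

```python
# Minimal sketch of Moore's Law: transistor counts double every two years.
# The 1971 baseline of 2,300 transistors (Intel 4004) is illustrative.

def moores_law(base_count: int, base_year: int, target_year: int) -> float:
    """Predicted transistor count at target_year, doubling every 2 years."""
    return base_count * 2 ** ((target_year - base_year) / 2)

# Example: one decade of doubling from the 1971 baseline.
print(f"{moores_law(2_300, 1971, 1981):,.0f}")  # 73,600
```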
Making a computer component smaller means you can fit more components in a given space. As a result, you find newer ways to do what you did before, often without needing as many additional parts.
For instance, look at TVs from the 1950s through the 1990s and compare them with today's. Take the back off and look at the components; they are not much different from the same types of components used in computers over the same period. Computers got smaller and more powerful as their parts got smaller, more powerful, and better engineered, and many of the same component types are still in use today.
In the late 1950s, computers got smaller because one of their main components, the valve (vacuum tube), was replaced by the much smaller transistor. Transistors also made computers far more reliable, which made them very attractive to industry and business.
The first generation of computers (1940-1956) used vacuum tubes. Early first-generation computers (1940s) used large octal-based tubes. The ENIAC computer (1946) had almost 18,000 tubes and took up a 1,800-square-foot room (167 square meters).
Later first-generation models built in the 1950s, like the IBM 701 series, used miniature tubes, typically only about 5,000 of them. Each tube was functionally equivalent to one or two transistors. The popular 6SN7 and miniature 12AU7 dual-triodes were often used to implement flip-flops. These computers still took up an entire room, but the peripherals (tape drives, etc.) accounted for much of the space.
UNIVAC I (1951)
Some first-generation mini/mainframe computers are ENIAC (Electronic Numerical Integrator and Computer), UNIVAC (Universal Automatic Computer), IBM 604, Mark I, and EDSAC (Electronic Delay Storage Automatic Calculator).
Second-generation computers (1956-1963) used transistors and were much smaller. Three scientists, William Shockley, John Bardeen, and Walter Brattain, invented the transistor in 1947. A transistor performs the same switching function as a vacuum tube, and it replaced the vacuum tubes in second-generation computers.
The CPUs of second-generation computers were the size of today's large refrigerators. Since transistors were faster, more reliable, and much cheaper than vacuum tubes, they reduced the size of computers while increasing speed and memory capacity. However, the core memory, the bulkiest component of second-generation computers, often came in separate refrigerator-sized boxes. Hence, transistors are part of our answer to the question of which invention allowed computers to become smaller.
Dual flip-flop module from a second-generation PDP-8 minicomputer (1965)
Second-generation computers include the IBM 7030, 7780, and 7090, the UNIVAC II, the General Electric GE 635, the NCR 300 series, and Control Data Corporation's CDC 1604.
Integrated circuits (ICs) are also known as semiconductor chips. Scientists developed the first IC chips in the late 1950s and early 1960s. Third-generation computers (1964-1971) used small- and medium-scale integrated circuits and were smaller still. These computers also had a keyboard and monitor, and later models of this generation began implementing memory with solid-state RAM chips instead of magnetic core.
The invention of the IC chip was a tremendous breakthrough in computer technology. ICs increased the power and decreased the cost of computers, since a single IC chip contains a large number of transistors. Moreover, these computers could run different application programs at the same time. Thus, computers consumed less electricity and became smaller, cheaper, and more reliable than second-generation machines. Hence, the invention of the IC is the most straightforward answer to the question of which invention allowed computers to become smaller in size.
Third-generation computers include the IBM System/360 and System/3, the Burroughs 6700, and Control Data Corporation's 3300 and 6600.
A microprocessor is a single chip that handles all of a computer's processing. Microprocessors are very small, very reliable, consume little power, and are affordable. Fourth-generation computers (1971-present) used, and still use, microprocessors. Scientists developed LSI (Large-Scale Integration) and VLSI (Very-Large-Scale Integration) chips with millions of transistors for this generation. These computers support modern programming languages such as C++, Visual Basic, Java, and Python for developing powerful software.
Some examples of microprocessors developed in the fourth generation are the Intel Pentium series, Dual Core, Core 2 Duo, Core i3, i5, and i7, and the AMD Athlon.
The 6502 CPU (1975) used in the Apple II had only 3,510 transistors, while the Intel Core i7 Haswell-E (2014), used in many desktop PCs, has 2.6 billion. That is a factor of about 740,741.
Moore's Law predicts this ratio to be 2^((2014 - 1975)/2) = 2^19.5 ≈ 741,455, remarkably close to the actual figure.
Extending that four more years, the Apple A12X Bionic (ARM64) processor released in 2018 has 10 billion transistors, 2,849,003 times as many as the 6502. Moore's Law predicts 2^((2018 - 1975)/2) = 2^21.5 ≈ 2,965,821, once again very close.
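To check the arithmetic, here is a short Python sketch reproducing both Moore's Law predictions and the actual transistor-count ratios. Every figure comes from the comparisons above.

```python
# Moore's Law estimates: doubling every two years from the 6502's 1975 debut.
predicted_2014 = 2 ** ((2014 - 1975) / 2)   # 6502 (1975) vs. Core i7 (2014)
predicted_2018 = 2 ** ((2018 - 1975) / 2)   # 6502 (1975) vs. A12X (2018)
print(f"{predicted_2014:,.0f}")             # 741,455
print(f"{predicted_2018:,.0f}")             # 2,965,821

# Actual ratios from the transistor counts in the text.
print(f"{2_600_000_000 / 3_510:,.0f}")      # 740,741
print(f"{10_000_000_000 / 3_510:,.0f}")     # 2,849,003
```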
Extrapolating from ENIAC's size, and accounting for the fact that 6,500 of its 18,000 tubes were dual-triodes (each equivalent to two transistors, for about 24,500 transistor-equivalents in total), a vacuum-tube equivalent of the Apple A12X Bionic processor would require a building of 10 billion / 24,500 x 1,800 square feet ≈ 735,000,000 square feet (~68,255,000 square meters), or 68.3 square kilometers (26.4 square miles).
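The same floor-space extrapolation can be spelled out step by step; every constant below comes from the paragraph above, and the square-meter conversion uses the standard factor of 1 sq ft ≈ 0.092903 sq m.

```python
# Floor-space extrapolation: how big a vacuum-tube A12X would be.
eniac_tubes = 18_000
dual_triodes = 6_500                    # each counts as two transistors
transistor_equiv = (eniac_tubes - dual_triodes) + 2 * dual_triodes  # 24,500
eniac_sq_ft = 1_800
a12x_transistors = 10_000_000_000

sq_ft = a12x_transistors / transistor_equiv * eniac_sq_ft
sq_m = sq_ft * 0.092903
print(f"{sq_ft:,.0f} sq ft")            # ~734,693,878
print(f"{sq_m:,.0f} sq m")              # ~68,255,265
print(f"{sq_m / 1e6:.1f} km^2")         # ~68.3
```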
Some examples of fourth-generation computers are the IBM ThinkPad, HP Pavilion, Dell Inspiron, and Apple’s MacBook Pro and MacBook Air.
The fifth generation has nothing to do with the size of the computer, so we do not need to discuss it to answer the question of which invention allowed computers to become smaller in size.
However, the fifth generation of computers aims to develop machines that can understand natural language and reason on their own. Thus, they are based on Artificial Intelligence (AI). Designing such systems and software is a big challenge for computer developers and programmers.
Examples of fifth-generation computers are robots and expert systems.