Which Innovation Helped to make Computers more Powerful?


Answer: Microchips

Microchips, or integrated circuits, are everywhere: in the living room TV (even if it is still a tube set), in the computer, in the cell phone. If it's electronic, you can be sure there's an integrated circuit inside. But where did they come from?

Chips are electronic components born from our society's relentless push for solutions that offer more performance and efficiency in data processing. In short, a microchip is a tiny collective of transistors and other components whose job is to switch electrical signals into binary data, that is, information.


Before the transistor appeared, a working computer filled an entire floor and weighed tons, and nobody counted on its useful life, because whatever the technology involved, it was measured in very short spans of time. To keep operating, a computer needed valves replaced constantly. They simply burned out.
Computers could use relays or valves, or both. The relay consisted of a magnetic element whose movement determined a binary value: 0 or 1, off or on. Something much like a plunger, driven by an electric pulse, closed or opened the circuit. Although more reliable than vacuum valves, relays were dramatically slower. Yes, if you were surprised that a computer once processed information with moving mechanical parts, you have grasped the idea behind the relay.

Valves were much faster than relays, up to a million times faster, but terribly prone to failure. They consisted of a vacuum chamber in which electrons flowed from a filament, and that filament was the heart of the breakdowns. Over time, the heat made it lose its effectiveness, just like the tungsten light bulbs that eventually burn out. It was the flow of electrons in the valve, which could be cut off or intensified, that closed or opened the circuit, determining the "on" and "off" positions of the binary system still present in technology today.
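To make the on/off idea concrete, here is a minimal sketch in Python (purely illustrative, not how any real relay or valve machine was wired) of how a row of open and closed circuits spells out a binary number:

```python
# Purely illustrative: each switch (relay or valve) is either open (0)
# or closed (1); reading a row of them yields a binary number.
switches = [True, False, True, True]  # closed, open, closed, closed

# Read the row as bits, most significant first: 1011 in binary.
value = 0
for closed in switches:
    value = (value << 1) | (1 if closed else 0)

print(value)  # 11, i.e. 0b1011
```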

The bigger and more powerful the computer, the more valves it had. Say a computer of the era occupied five floors, all of them with thousands of valves operating. In that scenario it is a fairly safe bet that, somewhere in the complex, at least one valve will burn out within a matter of minutes. Imagine the work of walking through every unit replacing the burned valves. All that to process tasks that the calculator built into your smartphone performs today with far more elegance.
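A back-of-the-envelope calculation shows why a burned valve somewhere was almost certain. The figures below are made up for illustration, except the valve count, which is roughly what ENIAC used:

```python
# Hypothetical figures: 18,000 valves (roughly ENIAC's count) and a
# one-in-a-million chance that any given valve burns out in any given minute.
n_valves = 18_000
p_fail_per_minute = 1e-6

# Chance that at least one valve fails within an hour:
# 1 minus the probability that every valve survives all 60 minutes.
p_survive_hour = (1 - p_fail_per_minute) ** (n_valves * 60)
p_at_least_one = 1 - p_survive_hour

print(f"{p_at_least_one:.1%}")  # about 66% chance of a failure every hour
```

Even with a one-in-a-million failure rate per valve per minute, a machine that size can expect a failure roughly every hour.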


What are microchips for?

The microchip grew in performance, uses, versions, component types and manufacturing processes, but it always sought to shrink in size and cost. That is Moore's Law: the number of transistors on a chip doubles (a 100% increase) roughly every 18 months at the same cost.
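A quick sketch of what that doubling implies, with Python as the calculator (the starting chip and time span are just example values):

```python
# Moore's Law as stated above: the transistor count doubles every 18 months.
def transistors(initial_count: int, months: int) -> int:
    """Projected transistor count after `months`, doubling every 18 months."""
    return int(initial_count * 2 ** (months / 18))

# Example: the Intel 4004 (1971) had 2,300 transistors; project 15 years
# (180 months) ahead.
print(transistors(2_300, 180))  # 2,355,200: about 2.4 million transistors
```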
The law still applies, but there are theoretical limits: it is estimated that silicon becomes unworkable as a microchip substrate below 14 nanometers (1 nanometer is one millionth of a millimeter). It is a theoretical limit that challenges the new Noyces to create solutions that abandon silicon wafers, or the technology may evolve to the point where silicon remains viable below 14 nm.