Moore's Law Takes A Quantum Leap

The past few days have seen the news littered with headlines about the 2017 version of Nokia’s 3310 launching in South Africa at a less-than-attractive introductory price. Yet its appearance in the headlines means a great deal of research and development went into ascertaining its viability as a premium product in a world dominated by 4G LTE and smartphones.

In the last decade, we’ve moved on from blocky cell phones with minuscule memory capacity and twelve keys you could use to compose ringtones and, occasionally, send text messages. Since then, our phones have seen cascading improvements in functionality, at an alarming pace and with ever-greater accessibility. Premium mobile devices now ship with at least 128GB of onboard memory, something that would’ve been unimaginable just five years ago. This incremental technological progress we’ve all been participating in for years hinges on one key trend called Moore’s Law.

In 1965, Gordon Moore, who would go on to co-found Intel, predicted that integrated circuits, or chips, were the path to cheaper electronics. Moore’s Law states:

The number of transistors doubles every two years while the cost halves.

Transistors are the tiny switches that control the flow of electric current inside an integrated circuit. Moore’s Law isn’t an exact law of physics, and it has been revised over time as companies found ways to make better chips for less, but it essentially describes how the power of integrated circuits keeps rising even as their cost keeps falling, giving us the tiny computers we carry around in our pockets. A single chip today can contain billions of transistors, each only 14 nanometers across, which is smaller than most human viruses.
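To see how quickly that doubling compounds, here is a minimal back-of-the-envelope sketch. It assumes the Intel 4004’s roughly 2,300 transistors in 1971 as a starting point and a strict two-year doubling period; real chips have not tracked the curve that neatly, but the order of magnitude is the point.

```python
# A rough Moore's Law projection: transistor count doubling every two years
# from the Intel 4004 (about 2,300 transistors in 1971).

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project the transistor count of a leading-edge chip in a given year."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011, 2017):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Run it and the count passes the billion mark by the 2010s, which is exactly the scale described above.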

Recent trends suggest this phenomenon is no longer keeping to the two-year cadence set for it, but in an attempt to reimagine the next age of computing and electronics, experts are currently exploring a few interesting options.

Quantum computing is one of those options, but another, arguably more interesting one is neuromorphic computing: computer chips modelled after our brains in the way they learn and remember at the same time, at incredibly fast speeds.

Your brain’s billions of neurons connect to one another through synapses, creating a neural network. Synaptic activity relies on ion channels, which control the flow of charged sodium and calcium ions to make your brain work properly. A neuromorphic chip tries to copy that model with a densely connected web of transistors that mimic the activity of ion channels. Each chip has a network of cores, with inputs and outputs wired to additional cores, all operating in conjunction with one another. This connectivity lets neuromorphic chips integrate memory, computation and communication in an entirely new computational design.
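As a rough illustration of that “memory and computation in the same place” idea, here is a toy leaky integrate-and-fire model in Python. It is not how any particular neuromorphic chip is actually programmed; the thresholds, leak factor and random synaptic weights are purely illustrative assumptions. What it shows is each unit keeping its own state (its memory) and doing its own computation, spiking only when incoming activity accumulates past a threshold.

```python
import random

class Neuron:
    """Toy leaky integrate-and-fire unit: its state and its compute live together."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # membrane potential, the unit's local "memory"
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each time step

    def step(self, weighted_input):
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # emit a spike
        return 0

# A small layer of input neurons wired to one output neuron by synaptic weights.
inputs = [Neuron() for _ in range(4)]
output = Neuron(threshold=2.0)
weights = [random.uniform(0.2, 0.8) for _ in inputs]

for t in range(10):
    spikes = [n.step(random.uniform(0.0, 0.6)) for n in inputs]           # sensory drive
    out_spike = output.step(sum(w * s for w, s in zip(weights, spikes)))  # weighted synapses
    print(f"t={t}: input spikes={spikes}, output spike={out_spike}")
```

Notice that nothing in the loop fetches the neurons’ state from a separate store: each unit carries its own, which is the property neuromorphic designs try to exploit in silicon.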

Today’s standard von Neumann architecture separates the processor from memory, so data must constantly move between them. Tasks are executed when the central processing unit runs commands that are fetched from memory. Neuromorphic chips combine the storage and processing aspects of computation, and further allow chips to communicate directly with one another. The hope is that these neuromorphic chips could transform computers from general-purpose calculators into machines that learn from experience and make decisions, leaping us into a future where computers wouldn’t just crunch data at breakneck speeds but could process sensory data as well.
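For contrast, the von Neumann pattern described above boils down to a fetch-and-execute loop, with every instruction and operand shuttled between a separate memory and the processor. The tiny instruction set below is invented purely for illustration:

```python
# Toy von Neumann machine: one shared memory, and a CPU loop that fetches
# each instruction and its data from that memory before doing any work.
memory = {
    0: ("LOAD", 100),   # load the value at address 100 into the accumulator
    1: ("ADD", 101),    # add the value at address 101
    2: ("STORE", 102),  # write the accumulator back to address 102
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,
}

pc, acc = 0, 0
while True:
    op, addr = memory[pc]       # fetch the instruction from memory
    if op == "LOAD":
        acc = memory[addr]      # data also travels from memory to the CPU...
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc      # ...and back again
    elif op == "HALT":
        break
    pc += 1

print(memory[102])  # 5: every step paid the cost of moving data across the divide
```

That constant back-and-forth is the bottleneck neuromorphic designs hope to remove by keeping storage and processing in the same place.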

Some future applications of neuromorphic chips might include sentient combat robots that decide how to act in the field, UAVs that adapt to changes in their environment, or your car taking itself to a drive-through car wash. This would usher in the possibility of a singularity, or even augmented physicality.