Back to Blog

Turning Computers Into Brains: IBM's TrueNorth

Written by Cory Healy on July 19, 2017

For computers to truly advance, innovation will have to come from an unlikely source of inspiration: the human brain.

By replicating how efficiently the central nervous system works, computing at large could benefit tremendously, and progress might even outpace the principle known as Moore’s Law.

Moore’s Law is the observation that the number of transistors in a dense integrated circuit doubles roughly every two years. With the advent of IBM’s TrueNorth chips, that observation could be rendered obsolete.

The current model of innovative computing is the “Convolutional Neural Network,” or CNN. Unlike humans, who can instantly recognize that an apple is an apple at a glance, this kind of computing relies on what I’d like to call ‘layers of recognition’—inputs pass through one layer of recognition after another, each layer triggering the next. For instance, a computer would first process that there are edges; then that the edges form a shape; then that the shape is a solid mass; then that it’s a type of food; then that it’s a fruit—and finally—that it’s an apple.
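
The lowest of those layers—spotting edges—can be sketched in a few lines. What follows is a toy illustration, not TrueNorth or any production CNN code: it convolves a tiny made-up grayscale image with a Sobel-style kernel, the kind of edge-detecting filter a CNN’s first layer typically learns. All the values here are illustrative.

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1) in pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy 4x5 "image": dark on the left, bright on the right (a vertical edge).
image = [
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
]

# Sobel-style vertical edge detector.
kernel = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]

edges = convolve2d(image, kernel)
print(edges)  # strong responses at the edge, zero in the flat region
```

The output lights up only where the dark and bright regions meet; a deeper layer would then combine such edge maps into shapes, and so on up the stack.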

This model of recognition is fine for many tasks, but for something like autonomous cars, computing decisions need to happen far faster. An autonomous car needs to identify what it sees immediately and react accordingly.

IBM's TrueNorth project aims to improve on conventional processing with its advances in "neuromorphic" chips, which are designed to compute the way a human central nervous system does: with networks of spiking neurons rather than a clocked sequence of instructions. The problem with these chips in their current form is that they push hardware to its limits.
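
The basic unit these chips are built around is a spiking neuron. Here is a minimal sketch of one common model, the leaky integrate-and-fire neuron—the leak rate, threshold, and input values are invented for illustration and are not TrueNorth’s actual parameters:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: each tick, the membrane potential
    decays by the leak factor and absorbs the incoming current; when it
    crosses the threshold, the neuron emits a spike and resets."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak, steady input: the neuron integrates charge over several ticks,
# fires, resets, and starts accumulating again.
print(simulate_lif([0.4] * 8))
```

Because neurons like this only do work when a spike arrives, a chip made of them can sit idle most of the time—which is where the efficiency of the nervous system comes from.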

Machine learning as we’ve seen it in neural networks is inching closer to the ideal, and we're here for it. Read more from Ars Technica on the history of CNNs. Want a head start on preparing for the neural networks of the future? Dive into our wide selection of coding courses.