September 23, 2016

Breaking News: Energy-Efficient Neuromorphic Classifiers

A paper entitled "Energy-Efficient Neuromorphic Classifiers" was published this week in Neural Computation by Daniel Martí, Mattia Rigotti, Mingoo Seok, and Stefano Fusi.

Here is the abstract:

Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

September 21, 2016

Deep learning inference possible in embedded systems thanks to TrueNorth

See the beautiful blog post by Caroline Vespi on our PNAS paper.

September 20, 2016

Breaking News: Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

Guest Blog by Steven K. Esser

We are excited to announce that our work demonstrating convolutional networks for energy-efficient neuromorphic hardware just appeared in the peer-reviewed Proceedings of the National Academy of Sciences (PNAS) of the United States of America. Here is an open-access link. In this work, we show near state-of-the-art accuracy on eight datasets while running at between 1,200 and 2,600 frames per second and consuming between 25 and 275 mW on the TrueNorth chip. Our approach adapts deep convolutional neural networks, today's state-of-the-art approach for machine perception, to perform classification tasks on neuromorphic hardware such as TrueNorth. This work overcomes the challenge of learning a network that uses low-precision synapses, spiking neurons, and limited fan-in, constraints that allow for energy-efficient neuromorphic hardware design (including TrueNorth) but that are not imposed in conventional deep learning.

Low-precision synapses make it possible to co-locate processing and memory in hardware, dramatically reducing the energy needed for data movement. Employing spiking neurons (which have only a binary state) further reduces data traffic and computation, leading to additional energy savings. However, contemporary convolutional networks use high-precision (at least 32-bit) neurons and synapses. This high-precision representation allows for very small changes to the neural network during learning, facilitating incremental improvement of the response to a given input during training. In this work, we introduce two constraints on the learning rule to bridge deep learning to energy-efficient neuromorphic hardware. First, we maintain a high-precision "shadow" synaptic weight during training, but map this weight to a trinary value (-1, 0, or 1) implementable in hardware, using the trinary value both for operation and, critically, to determine how the network should change during training itself. Second, we employ spiking neurons during both training and operation, but use the behavior of a high-precision neuron to predict how the spiking neuron should change as network weights change.
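The shadow-weight idea can be sketched in a few lines. This is a minimal illustration of the general technique, not the paper's exact learning rule; the function name `trinarize` and the 0.5 threshold are assumptions for the sketch:

```python
import numpy as np

def trinarize(shadow, threshold=0.5):
    """Map high-precision 'shadow' weights to trinary values {-1, 0, +1}
    implementable in hardware (illustrative threshold rule)."""
    w = np.zeros_like(shadow)
    w[shadow > threshold] = 1.0
    w[shadow < -threshold] = -1.0
    return w

# One training step, sketched: the forward pass (and hence the gradient)
# uses the trinary weights, but the update accumulates in the
# high-precision shadow copy, so many small updates can eventually
# flip a deployed weight between -1, 0, and +1.
rng = np.random.default_rng(0)
shadow = rng.normal(scale=0.8, size=5)   # high-precision training copy
deployed = trinarize(shadow)             # values in {-1, 0, +1}
grad = rng.normal(size=5)                # gradient w.r.t. trinary weights
shadow -= 0.1 * grad                     # update the shadow copy only
```

At deployment time only the trinary weights are written to the chip; the shadow copy exists solely during training.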

Limiting neuron fan-in (the number of inputs each neuron can receive) allows for a crossbar design in hardware that reduces spike traffic, again leading to lower energy consumption. However, in deep learning no explicit restriction is placed on inputs per neuron, which allows networks to scale while still remaining integrated. In this work, we enforce this connectivity constraint by partitioning convolutional filters into multiple groups. We maintain network integration by interspersing layers with large topographic coverage but small feature coverage with those with small topographic coverage but large feature coverage. Essentially, this allows the network to learn complicated spatial features, and then to learn useful mixtures of those features.
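The effect of grouping on fan-in is simple arithmetic, sketched below. The group counts are illustrative assumptions; the 256-input figure matches the 256 axons per TrueNorth core:

```python
def conv_fan_in(kernel_size, in_channels, groups):
    """Synaptic fan-in per output neuron of a grouped convolution:
    each neuron sees only in_channels // groups of the feature maps."""
    assert in_channels % groups == 0, "groups must divide in_channels"
    return kernel_size * kernel_size * (in_channels // groups)

# An ungrouped 3x3 convolution over 128 features needs fan-in 1152,
# far beyond a crossbar with 256 input lines.
assert conv_fan_in(kernel_size=3, in_channels=128, groups=1) == 1152
# Partitioning the filters into 8 groups bounds fan-in at 144.
assert conv_fan_in(kernel_size=3, in_channels=128, groups=8) == 144
```

Because each group sees only a slice of the features, alternating such layers with 1x1-style layers that mix many features over a small topographic window restores network-wide integration, as described above.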

Our results show that the operational and structural differences between neuromorphic computing and deep learning are not fundamental and, indeed, that near state-of-the-art performance can be achieved in a low-power system by merging the two. In a previous post, we mentioned one example of an application running convolutional networks on TrueNorth, trained with our method, to perform real-time gesture recognition.

The algorithm is now in the hands of over 130 users at over 40 institutions, and we are excited to discover what else might now be possible!

September 01, 2016

Career Opportunity: Brain-inspired Computing

Apply here.

August 23, 2016

Samsung turns IBM's brain-like chip into a digital eye

On the occasion of the 30th Anniversary of IBM Research - Almaden, Samsung, the Air Force Research Lab, and Lawrence Livermore National Lab presented their results using the TrueNorth ecosystem. Here is an article written by a reporter who was in the audience. A video of the session will be posted soon.

During the same event, Professor Tobi Delbruck presented the Misha Mahowald Prize to the IBM TrueNorth Team. See here.


August 12, 2016

TrueNorth Accepted into the Computer History Museum

On the occasion of the 30th Anniversary of IBM Research - Almaden, TrueNorth was accepted into the Computer History Museum. Here is the IBM press release.


August 03, 2016

Six TrueNorth Algorithms & Applications by IBM and Partners at WCCI

Guest Blog by Andrew Cassidy and Michael Debole

At the IEEE 2016 World Congress on Computational Intelligence (WCCI) in Vancouver, Canada, last week, six researchers presented their work on TrueNorth-based algorithms and applications. These papers, published in the proceedings of the International Joint Conference on Neural Networks (IJCNN 2016), represent early outcomes from university and government research collaborators, who were among the first adopters of the TrueNorth hardware and software ecosystem. These research partners were trained at the Brain-inspired Boot Camp last August and submitted their subsequent research for conference review in January 2016.

The six papers presented at the Special Session on Energy-Efficient Deep Neural Networks were:

  • LATTE: Low-power Audio Transform with TrueNorth Ecosystem. Wei-Yu Tsai, Davis Barch, Andrew Cassidy, Michael DeBole, Alexander Andreopoulos, Bryan Jackson, Myron Flickner, Dharmendra Modha, Jack Sampson and Vijaykrishnan Narayanan; The Pennsylvania State University, IBM Research - Almaden.
  • TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth. Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci and Guido Zarrella; ETH Zurich, UC San Diego, IBM Research - Almaden, UC Irvine, The MITRE Corporation.
  • Probabilistic Inference Using Stochastic Spiking Neural Networks on A Neurosynaptic Processor. Khadeer Ahmed, Amar Shrestha, Qinru Qiu and Qing Wu; Syracuse University, Air Force Research Laboratory.
  • Weighted Population Code for Low Power Neuromorphic Image Classification. Antonio Jimeno Yepes, Jianbin Tang, Shreya Saxena, Tobias Brosch and Arnon Amir; IBM Research - Australia, Massachusetts Institute of Technology, IBM Research - Almaden, Institute of Neural Information Processing - Ulm University.
  • Sparse Approximation on Energy Efficient Hardware. Kaitlin Fair and David Anderson; Georgia Institute of Technology.
  • A Low-Power Neurosynaptic Implementation of Local Binary Patterns for Texture Analysis. Alexander Andreopoulos, Rodrigo Alvarez-Icaza, Andrew Cassidy and Myron Flickner; IBM Research - Almaden.

WCCI 2016 logo

July 31, 2016

Misha Mahowald Prize


Here is a press release.

Press Release: Inaugural Misha Mahowald Prize for Neuromorphic Engineering won by IBM TrueNorth Project

The Misha Mahowald Prize recognizes outstanding achievement in the field of neuromorphic engineering. Neuromorphic engineering is defined as the construction of artificial computing systems which implement key computational principles found in natural nervous systems. Understanding how to build such systems may enable a new generation of intelligent devices, able to interact in real-time in uncertain real-world conditions under severe power constraints, as biological brains do.

Misha Mahowald, for whom the prize is named, was a charismatic, talented and influential pioneer of neuromorphic engineering whose creative life unfortunately ended prematurely. Nevertheless, her novel designs of brain-inspired CMOS VLSI circuits for vision and computation have continued to influence a generation of engineers.

For the inaugural 2016 prize, the independent jury led by Prof. Terrence Sejnowski of the Salk Institute evaluated 21 entries worldwide. They selected the TrueNorth project, led by Dr. Dharmendra S. Modha at IBM Research – Almaden in San Jose, California, as the winner for 2016:

“For the development of TrueNorth, a neuromorphic CMOS chip that simulates 1 million spiking neurons with connectivity and dynamics that can be flexibly programmed while consuming only 70 milliwatts. This scalable architecture sets a new standard and brings us closer to achieving the high levels of performance in brains.”

The TrueNorth architecture is a milestone in the development of neuromorphic processors because it achieves the combination of scale, ultra-low-power and high performance that has never before been demonstrated in a real neuromorphic system. It is the first neuromorphic system that can compete with conventional state-of-the-art von Neumann processors on real-world problems on an equal footing. In doing this, it opens the door to future orders-of-magnitude improvements in computing power that will no longer be possible using the von Neumann architecture as its inherent bottlenecks approach physical limits.

The prize and certificate will be presented at the 30th anniversary celebration of the IBM Almaden Research Center in San Jose on 11 August, 2016.

The Misha Mahowald Prize is sponsored and administered by iniLabs in Switzerland.

Demo at Conference on Computer Vision and Pattern Recognition

Guest Blog by Arnon Amir, Brian Taba, and Timothy Melano

The Conference on Computer Vision and Pattern Recognition (CVPR) is widely considered the preeminent conference for computer vision. This year the IBM Brain Inspired Computing team had the pleasure of demonstrating our latest technology at the CVPR 2016 Industry Expo, held in the air-conditioned conference halls of Caesars Palace, Las Vegas. The expo was co-located with the academic poster presentations, which created an excellent opportunity for us not only to meet very interesting academics, but also to see the latest demos from other amazing companies, both large and small.

Demo booth

We too were excited to demonstrate our new Runtime API for TrueNorth. To showcase it, we connected an event-based vision sensor, the DVS128 (made by iniLabs), over USB to our NS1e board.

Hardware flow

We used our Eedn framework to train a convolutional neural network on hand and arm gestures collected from our team, including air-drums and air-guitar! This Eedn network was used to configure the TrueNorth chip on the NS1e board. Overall, the system received asynchronous pixel events from the DVS128 sensor and passed them to TrueNorth. A new classification was produced every millisecond, i.e., 1,000 classifications per second.
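One common way to feed an asynchronous event stream into a frame-based classifier is to bin the events into fixed time windows. The sketch below is an illustrative reconstruction under that assumption; the function name, binary accumulation, and 1-ms window are ours, not the demo's actual pipeline:

```python
import numpy as np

def bin_events(timestamps_us, xs, ys, width=128, height=128, window_us=1000):
    """Accumulate asynchronous (t, x, y) events from a 128x128 DVS into
    binary frames, one frame per 1-ms window (illustrative sketch)."""
    t0 = timestamps_us.min()
    n_windows = int((timestamps_us.max() - t0) // window_us) + 1
    frames = np.zeros((n_windows, height, width), dtype=np.uint8)
    idx = (timestamps_us - t0) // window_us   # window index per event
    frames[idx, ys, xs] = 1                   # mark active pixels
    return frames

# Three events: two land in the first 1-ms window, one in the second.
ts = np.array([0, 500, 1500])   # microseconds
xs = np.array([1, 2, 3])
ys = np.array([0, 0, 0])
frames = bin_events(ts, xs, ys)  # shape (2, 128, 128)
```

Each 1-ms frame would then be presented to the classifier, matching the one-classification-per-millisecond rate described above.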

The reaction to the real-time gesture classifications was very positive and drew large crowds (and other hardware vendors ;). People were blown away by the fact that we were running a convnet in real time at 1,000 classifications per second while consuming only milliwatts of power. We invited anyone who was interested to come behind our table to play with the gesture recognition. With a little bit of adjustment, people were able to interact with TrueNorth and have their gestures recognized. To many in the audience, the entire concept of neuromorphic engineering was new. Their visit to our booth was a great opportunity to introduce them to the DVS128, a spiking sensor inspired by the human retina, and TrueNorth, a spiking neural network chip inspired by the human brain!

Gesture icons
A video can be seen here.

Previously, we demonstrated that TrueNorth can perform more than 1,000 classifications per second on benchmark datasets. The new Runtime API thus opens the interface to the NS1e board and the TrueNorth chip for many exciting real-time applications that process complex data at very fast rates while consuming very little power.

We give special thanks to our teammates David Berg, Carmelo di Nolfo and Michael Debole for leading efforts to develop the Runtime API, to Jeff Mckinstry for performing the Eedn training, to Guillaume Garreau for his help with data preparation, and to the entire Brain Inspired Computing team for volunteering to create the training data set!



Creative Commons License
This weblog is licensed under a Creative Commons License.
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.