Guest Post by Rodrigo Alvarez-Icaza, John Arthur, Andrew Cassidy, and Paul Merolla.
Each July for the last 20 years or so, a group of neuroscientists, engineers, and computer scientists has come together for a three-week neuromorphic engineering workshop in the scenic town of Telluride. Telluride is best known for its ski slopes, but in the summer it is the perfect place to hunker down and work on collaborative, hands-on neuromorphic projects. This year, four of us, IBM research scientists in the Brain-inspired Computing Group, brought IBM's latest-generation TrueNorth chip to Telluride with one goal in mind: enable workshop participants to use TrueNorth for their own projects. To our surprise and delight, although many of the participants had never seen or used TrueNorth before, they were all up and running in almost no time. Here is a quick rundown of what happened.
The Setup:
The bulk of the workshop took place at the Telluride elementary school, and in particular, in one of the classrooms. Shown below are the four of us, arriving at our new home for the next three weeks.
Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, and Andrew Cassidy (left to right) at the Telluride elementary school
Photo Credit: Tobi Delbruck
We brought a bunch of goodies to Telluride, including 10 of our latest mobile development boards, some of which are being unpacked by Rodrigo (top). Each board (bottom) has a TrueNorth chip (SyNAPSE), an FPGA, and a host of sensors and connectors. The basic setup is that participants log into these boards through our local servers and run their real-time spiking neural networks.
Rodrigo Alvarez-Icaza unpacking the boards
Close up of a TrueNorth mobile development board
Hands-on projects:
Our IBM group, along with Arindam Basu (NTU), ran a workgroup called Spike-Based Cognitive Computing, which we divided into sub-projects. The projects based on TrueNorth included:
- ATIS camera: MNIST classifier - Garrick Orchard (Singapore Institute for Neurotechnology) and Kate Fischl (JHU)
- Sparse representations for speech recognition (TIDIGITs) - Jie "Jack" Zhang (JHU) and Kaitlin Fair (Georgia Tech/AFRL)
- Word vector associative memory (semantic similarity) - Dan Mendat (JHU) and Guillaume Garreau (JHU)
- Word vector analogies - Dan Mendat (JHU)
- Word "happiness" score (regression) using word vectors - Peter Diehl (INI) and Bruno Pedroni (UCSD)
- Question (sentence) classification using recurrent NNs - Emre Neftci (UC Irvine), Peter Diehl (INI), and Bruno Pedroni (UCSD)
- FSMs and WTAs (for working memory, etc.) - Suraj Honnuraiah (Institute of Neuroinformatics)
- Sensors to TrueNorth:
  - DAVIS: Luca Longinotti (iniLabs)
  - Spiking cochlea: Shih-Chii Liu (Institute of Neuroinformatics)
  - Spiking sonar: Timmer Horiuchi (U. Maryland)
  - Spiking radar: Saeed Afshar (University of Western Sydney)
  - FPGA cochlea: Guillaume Garreau (JHU)
You can find more information on these projects on the Neuromorph site.
Here, we highlight one of these projects, which culminated in a real-time demo!
Real-time digit classification on TrueNorth with a spiking retinal camera front end:
In this project, the goal was to connect a spiking retinal camera (called the ATIS) to a TrueNorth chip to perform pattern classification. Using the ATIS as a front end for TrueNorth opens up the possibility of a fast, low-power object recognition system that operates using only spikes.
There are two main steps involved in realizing the real-time digit classification system. The first is to create and train the object recognition model that runs on TrueNorth. The second is to connect the ATIS to TrueNorth for real-time operation.
We made use of a publicly available spike-based conversion of the MNIST dataset, recorded with the ATIS sensor mounted on a pan-tilt unit while viewing MNIST digits on a computer monitor. Details of how the dataset was created, as well as the dataset itself, are available at:
http://www.garrickorchard.com/datasets/n-mnist
The continuous spike stream was converted to static images by accumulating spikes 10 ms at a time, creating static images for training. These static images were used to build a Lightning Memory-Mapped Database (LMDB), on which training was performed using the Caffe deep learning framework (modified to support TrueNorth). A simple one-layer neural network was used, with 100 neurons trained to respond to each of the 10 digits. The final output of the system is a histogram of the number of spikes emitted by the neurons representing each class; the class with the most spikes is deemed the most likely output.
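To make the frame construction and readout concrete, here is a minimal NumPy sketch. It is not the actual workshop code: the array layout, function names, and 28×28 frame size are illustrative assumptions, and the real pipeline fed the frames into Caffe rather than classifying them directly.

```python
import numpy as np

def events_to_frames(ts_us, xs, ys, window_us=10_000, size=28):
    """Accumulate a stream of ATIS events into fixed-duration frames.

    ts_us, xs, ys are parallel integer arrays: event timestamps in
    microseconds and pixel coordinates (assumed already cropped to the
    frame size). Every 10 ms window of events is summed into one
    size x size image, mirroring how static training images were built.
    """
    t0 = int(ts_us[0])
    bins = ((ts_us - t0) // window_us).astype(np.int64)
    frames = np.zeros((int(bins[-1]) + 1, size, size), dtype=np.float32)
    np.add.at(frames, (bins, ys, xs), 1.0)  # one count per event
    return frames

def classify_by_spike_count(output_spikes, neurons_per_class=100, n_classes=10):
    """Readout on TrueNorth's output: each digit owns a population of 100
    output neurons, and the predicted digit is the class whose population
    fired the most during the classification window.
    """
    idx = np.asarray(output_spikes, dtype=np.int64) // neurons_per_class
    counts = np.bincount(idx, minlength=n_classes)
    return int(np.argmax(counts)), counts
```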
Real-time results:
A laptop powers and interfaces to the ATIS sensor, which is mounted on a helmet worn by a user. This laptop performs simple noise filtering on the ATIS spikes and activity-based tracking of the MNIST digit on the screen. Spikes occurring within the tracked 28×28 pixel region of interest are remapped to the corresponding cores and axons on TrueNorth (sometimes multiple axons per spike). The laptop accumulates spikes until 130 are available for classification, at which point all 130 spikes are sent to TrueNorth over UDP.
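As a rough illustration of this host-side loop, here is a Python sketch. The event format, the track_roi tracker, the pixel_to_axons lookup table, the board address, and the UDP payload layout are all assumptions for the example rather than the actual interface used at the workshop.

```python
import socket
import numpy as np

TN_ADDR = ("192.168.1.42", 5000)      # hypothetical IP/port of the TrueNorth board
SPIKES_PER_CLASSIFICATION = 130
ROI = 28                              # side length of the tracked region of interest

def stream_to_truenorth(events, track_roi, pixel_to_axons):
    """Remap ROI spikes to (core, axon) targets and ship batches of 130 over UDP.

    events         -- iterable of (t, x, y, polarity) ATIS events (assumed format)
    track_roi      -- callable returning the current ROI top-left corner (assumed)
    pixel_to_axons -- dict from (row, col) within the ROI to a list of (core, axon)
                      targets derived from the trained model's input mapping;
                      a single pixel may fan out to several axons
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    batch = []
    for t, x, y, polarity in events:
        rx, ry = track_roi(t, x, y)
        col, row = x - rx, y - ry
        if 0 <= col < ROI and 0 <= row < ROI:
            batch.extend(pixel_to_axons.get((row, col), []))
        if len(batch) >= SPIKES_PER_CLASSIFICATION:
            chunk, batch = batch[:SPIKES_PER_CLASSIFICATION], batch[SPIKES_PER_CLASSIFICATION:]
            # Pack (core, axon) pairs as 16-bit integers; the real wire format
            # expected by the TrueNorth runtime is not shown here.
            sock.sendto(np.asarray(chunk, dtype=np.uint16).tobytes(), TN_ADDR)
```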
The trained neural network runs on TrueNorth, and its output spikes are sent to a second laptop over UDP; this second laptop visualizes the results.
On the spiking MNIST test set, we achieved 76-80% accuracy at 100 classifications/sec. The goal was not classification accuracy per se, but rather learning to create end-to-end demonstrations. The classification rate (100/sec) is limited by the fact that we use 10 ms of data per classification; TrueNorth itself is capable of performing 1,000 such classifications per second. In the real-time system, the classifier on TrueNorth uses only 4 cores (about 0.1% of the chip). Temporally, utilization of this 0.1% of the physical chip is below 10% (i.e., the 4 cores are idle more than 90% of the time) when performing 100 classifications per second.
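For the curious, these utilization figures follow from simple arithmetic; the sketch below uses TrueNorth's published parameters (4,096 cores, 1 ms tick) together with the 10 ms accumulation window described above.

```python
# Back-of-the-envelope numbers behind the figures in the text.
cores_used, total_cores = 4, 4096
window_ms, tick_ms = 10, 1

core_fraction = cores_used / total_cores   # ~0.001, i.e. ~0.1% of the chip
demo_rate = 1000 / window_ms               # 100 classifications/sec, limited by the data window
chip_rate = 1000 / tick_ms                 # up to ~1,000 classifications/sec at one per tick
print(f"{core_fraction:.2%} of cores; {demo_rate:.0f}/s demo vs {chip_rate:.0f}/s chip capability")
```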