
Dharmendra S. Modha

My Work and Thoughts.


Misha Mahowald Prize

July 31, 2016 By dmodha

Press Release: Inaugural Misha Mahowald Prize for Neuromorphic Engineering won by IBM TrueNorth Project

The Misha Mahowald Prize recognizes outstanding achievement in the field of neuromorphic engineering. Neuromorphic engineering is defined as the construction of artificial computing systems that implement key computational principles found in natural nervous systems. Understanding how to build such systems may enable a new generation of intelligent devices, able to interact in real time under uncertain real-world conditions and severe power constraints, as biological brains do.

Misha Mahowald, for whom the prize is named, was a charismatic, talented and influential pioneer of neuromorphic engineering whose creative life unfortunately ended prematurely. Nevertheless, her novel designs of brain-inspired CMOS VLSI circuits for vision and computation have continued to influence a generation of engineers.

For the inaugural 2016 prize, an independent jury led by Prof. Terrence Sejnowski of the Salk Institute evaluated 21 entries from around the world. It selected the TrueNorth project, led by Dr. Dharmendra S. Modha at IBM Research – Almaden in San Jose, California, as the winner for 2016, with the citation:

“For the development of TrueNorth, a neuromorphic CMOS chip that simulates 1 million spiking neurons with connectivity and dynamics that can be flexibly programmed while consuming only 70 milliwatts. This scalable architecture sets a new standard and brings us closer to achieving the high levels of performance in brains.”

The TrueNorth architecture is a milestone in the development of neuromorphic processors because it achieves the combination of scale, ultra-low-power and high performance that has never before been demonstrated in a real neuromorphic system. It is the first neuromorphic system that can compete with conventional state-of-the-art von Neumann processors on real-world problems on an equal footing. In doing this, it opens the door to future orders-of-magnitude improvements in computing power that will no longer be possible using the von Neumann architecture as its inherent bottlenecks approach physical limits.

The prize and certificate will be presented at the 30th anniversary celebration of the IBM Almaden Research Center in San Jose on 11 August 2016.

The Misha Mahowald Prize is sponsored and administered by iniLabs (www.inilabs.com) in Switzerland.

Filed Under: Accomplishments, Brain-inspired Computing, Press, Prizes

Demo at Conference on Computer Vision and Pattern Recognition

July 31, 2016 By dmodha

Guest Blog by Arnon Amir, Brian Taba, and Timothy Melano

The Conference on Computer Vision and Pattern Recognition (CVPR) is widely considered the preeminent conference for computer vision. This year the IBM Brain Inspired Computing team had the pleasure of demonstrating our latest technology at the CVPR 2016 Industry Expo, held in the air-conditioned conference halls of Caesars Palace, Las Vegas. The expo was co-located with the academic poster presentations, which created an excellent opportunity for us not only to meet very interesting academics, but also to see the latest demos from other amazing companies, both large and small.

Demo booth

We too were excited to demonstrate our new Runtime API for TrueNorth. To showcase it, we connected an event-based vision sensor, the DVS128 (made by iniLabs), over USB to our NS1e board.

Hardware flow

We used our Eedn framework to train a convolutional neural network on hand and arm gestures collected from our team, including air-drums and air-guitar! This Eedn network was used to configure the TrueNorth chip on the NS1e board. Overall, the system received asynchronous pixel events from the DVS128 sensor and passed them to TrueNorth. A new classification was produced every millisecond, i.e., 1,000 classifications per second.
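To make the data flow concrete, here is a minimal sketch of the event-binning step: asynchronous (timestamp, x, y, polarity) events, as produced by a DVS-style sensor, are grouped into 1 ms windows, and each window yields one classification. The function names and the stand-in classifier are illustrative assumptions, not the actual Runtime API.

```python
from collections import defaultdict

def bin_events(events, window_us=1000):
    """Group asynchronous (t_us, x, y, polarity) events into 1 ms windows."""
    windows = defaultdict(list)
    for t_us, x, y, pol in events:
        windows[t_us // window_us].append((x, y, pol))
    return dict(windows)

def classify(window):
    """Stand-in for the TrueNorth classifier: report event count per window."""
    return len(window)

# Three events spanning two 1 ms windows -> two classifications.
events = [(100, 5, 5, 1), (700, 6, 5, 0), (1500, 5, 6, 1)]
for idx, win in sorted(bin_events(events).items()):
    print(idx, classify(win))
```

In the real demo this binning happens in hardware and the classifier is the Eedn-configured TrueNorth chip; the sketch only shows why a 1 ms window implies 1,000 classifications per second.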

The reaction to the real-time gesture classifications was very positive and drew large crowds (and other hardware vendors ;). People were blown away by the fact that we were running a convnet in real time at 1,000 classifications per second while consuming only milliwatts of power. We invited anyone who was interested to come behind our table and play with the gesture recognition. With a little adjustment, people were able to interact with TrueNorth and have their gestures recognized. To many in the audience, the entire concept of neuromorphic engineering was new. Their visit to our booth was a great opportunity to introduce them to the DVS128, a spiking sensor inspired by the human retina, and TrueNorth, a spiking neural network chip inspired by the human brain!

Gesture icons
A video can be seen here.

Previously, we have demonstrated that TrueNorth can perform greater than 1000 classifications per second on benchmark datasets. Therefore, the new Runtime API opens the interface to the NS1e board and the TrueNorth chip for many exciting real-time applications, processing complex data at very fast rates, yet consuming very low power.

We give special thanks to our teammates David Berg, Carmelo di Nolfo and Michael Debole for leading efforts to develop the Runtime API, to Jeff Mckinstry for performing the Eedn training, to Guillaume Garreau for his help with data preparation, and to the entire Brain Inspired Computing team for volunteering to create the training data set!

Filed Under: Accomplishments, Brain-inspired Computing, Papers

Gearing Up for 2016 Telluride Neuromorphic Cognition Engineering Workshop

June 22, 2016 By dmodha

Guest Blog by Andrew Cassidy and Rodrigo Alvarez-Icaza

Gearing up. We are preparing for the 2016 Telluride Neuromorphic Cognition Engineering Workshop in the Colorado mountain town of Telluride. Beginning Sunday, June 26th, this annual workshop brings together nearly 100 researchers from around the world to investigate brain-inspired solutions to topics such as:

  • Decoding Multi-Modal Effects on Auditory Cognition
  • Spike-Based Cognition in Active Neuromorphic Systems
  • Neuromorphic Path Planning for Robots in a Disaster Response Scenario
  • Neuromorphic Tactile Sensing
  • Computational Neuroscience

IBM’s Brain-Inspired Computing Group is sending two researchers with an end-to-end hardware/software ecosystem for training neural networks to run, in real time, on the 4096-core TrueNorth neurosynaptic processor. The Eedn (energy-efficient deep neuromorphic network) training algorithm enables near state-of-the-art accuracy on a wide range of visual, auditory, and other sensory datasets. When run on TrueNorth, these networks consume between 25 and 275 mW, achieving >6,000 FPS/W.

We are bringing (Figures 1-3):

  • 16 NS1e boards (each with 1 TrueNorth neurosynaptic processor)
  • 1 server (with 4 Titan X GPUs) for training deep neuromorphic networks
  • and a bucket of cables.

Building on the successes of last year’s workshop, and leveraging the training material from Bootcamp, our goal is to enable workshop participants to train, build, and run their own networks. Combined with the real-time runtime infrastructure that connects input sensors and output actuators to the NS1e board, we have all of the tools in place to build low-power, end-to-end mobile and embedded systems that solve real-world cognitive problems.

NS1e

Figure 1. Sixteen NS1e Boards

Training Server

Figure 2. Training Server and Gear

Prep Station

Figure 3. Prep Station

Photo Credits: Rodrigo Alvarez-Icaza

Filed Under: Brain-inspired Computing, Collaborations

PREPRINT: Structured Convolution Matrices for Energy-efficient Deep learning

June 9, 2016 By dmodha

Guest Blog by Rathinakumar Appuswamy

To seek feedback from fellow scientists, my colleagues and I are very excited to share a preprint with the community.

Title: Structured Convolution Matrices for Energy-efficient Deep learning

Authors: Rathinakumar Appuswamy, Tapan Nayak, John Arthur, Steven Esser, Paul Merolla, Jeffrey Mckinstry, Timothy Melano, Myron Flickner, Dharmendra S. Modha 

Extended Abstract: We derive a relationship between network representation in energy-efficient neuromorphic architectures and block Toeplitz convolutional matrices. Inspired by this connection, we develop deep convolutional networks using a family of structured convolutional matrices and achieve state-of-the-art trade-off between energy efficiency and classification accuracy for well-known image recognition tasks. We also put forward a novel method to train binary convolutional networks by utilizing an existing connection between noisy rectified linear units and binary activations. We report a novel approach to train deep convolutional networks with structured kernels. Specifically, all the convolution kernels are generated by the commutative pairs of elements from the symmetric group S4. This particular structure is inspired by the TrueNorth architecture, and we use it to achieve an improved accuracy-versus-energy trade-off over what we had previously reported. Our work builds on the growing body of literature devoted to developing convolutional networks for low-precision hardware toward energy-efficient deep learning.
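The connection between convolutions and Toeplitz matrices that the abstract leans on can be seen in one dimension: sliding a kernel over a signal is exactly multiplication by a matrix whose diagonals are constant. This is a generic illustration of that standard fact, not the paper's S4-structured construction.

```python
import numpy as np

def conv_as_toeplitz(kernel, n):
    """Toeplitz matrix whose product with a length-n signal is a 'valid' convolution."""
    k = len(kernel)
    rows = n - k + 1
    T = np.zeros((rows, n))
    for i in range(rows):
        T[i, i:i + k] = kernel[::-1]  # reversed kernel: true convolution, not correlation
    return T

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([1.0, -1.0])
T = conv_as_toeplitz(h, len(x))

# Matrix product agrees with NumPy's convolution.
assert np.allclose(T @ x, np.convolve(x, h, mode="valid"))
```

Because the matrix is determined by its diagonals, storing or generating it structurally (as the paper does with group-generated kernels) is far cheaper than storing a dense matrix, which is the crux of the energy-efficiency argument.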

Link: http://arxiv.org/abs/1606.02407

Filed Under: Accomplishments, Brain-inspired Computing, Papers

PREPRINT: Deep neural networks are robust to weight binarization and other non-linear distortions

June 8, 2016 By dmodha

Guest Blog by Paul A. Merolla

To seek feedback from fellow scientists, my colleagues and I are very excited to share a preprint with the community.

Title: Deep neural networks are robust to weight binarization and other non-linear distortions

Authors: Paul A. Merolla, Rathinakumar Appuswamy, John V. Arthur, Steve K. Esser, Dharmendra S. Modha 

Abstract: Recent results show that deep neural networks achieve excellent performance even when, during training, weights are quantized and projected to a binary representation. Here, we show that this is just the tip of the iceberg: these same networks, during testing, also exhibit a remarkable robustness to distortions beyond quantization, including additive and multiplicative noise, and a class of non-linear projections where binarization is just a special case. To quantify this robustness, we show that one such network achieves 11% test error on CIFAR-10 even with 0.68 effective bits per weight. Furthermore, we find that a common training heuristic, namely projecting quantized weights during backpropagation, can be altered (or even removed) and networks still achieve a base level of robustness during testing. Specifically, training with weight projections other than quantization also works, as does simply clipping the weights, neither of which has been reported before. We confirm our results for the CIFAR-10 and ImageNet datasets. Finally, drawing from these ideas, we propose a stochastic projection rule that leads to a new state-of-the-art network with 7.64% test error on CIFAR-10 using no data augmentation.
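For readers unfamiliar with weight binarization, here is a minimal sketch of the kind of projection the abstract discusses: full-precision weights are mapped to two values before the forward pass. Scaling by the mean absolute value is a common convention and is my assumption here, not necessarily the paper's exact rule.

```python
import numpy as np

def binarize(weights):
    """Project real-valued weights to {-a, +a}, where a is the mean |w|."""
    a = np.abs(weights).mean()
    return a * np.sign(weights)

w = np.array([0.3, -0.7, 0.1, -0.5])
print(binarize(w))  # every entry becomes +/-0.4, the mean absolute weight
```

The paper's point is that networks tolerate not just this projection but a whole family of non-linear distortions of the weights, of which binarization is one special case.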

Link: http://arxiv.org/abs/1606.01981

Filed Under: Accomplishments, Brain-inspired Computing, Papers

