
Dharmendra S. Modha

My Work and Thoughts.


Breaking News: Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

September 20, 2016 By dmodha

Guest Blog by Steven K. Esser

We are excited to announce that our work demonstrating convolutional networks for energy-efficient neuromorphic hardware has just appeared in the peer-reviewed Proceedings of the National Academy of Sciences (PNAS) of the United States of America. Here is an open-access link. In this work, we show near state-of-the-art accuracy on 8 datasets while running at between 1200 and 2600 frames per second and consuming between 25 and 275 mW on the TrueNorth chip. Our approach adapts deep convolutional neural networks, today’s state-of-the-art approach for machine perception, to perform classification tasks on neuromorphic hardware such as TrueNorth. This work overcomes the challenge of learning a network that uses low-precision synapses, spiking neurons, and limited fan-in: constraints that allow for energy-efficient neuromorphic hardware design (including TrueNorth) but that are not imposed in conventional deep learning.

Low-precision synapses make it possible to co-locate processing and memory in hardware, dramatically reducing the energy needed for data movement. Employing spiking neurons (which have only a binary state) further reduces data traffic and computation, leading to additional energy savings. However, contemporary convolutional networks use high-precision (at least 32-bit) neurons and synapses. This high-precision representation allows for very small changes to the network during learning, facilitating incremental improvement of the response to a given input during training. In this work, we introduce two constraints into the learning rule to bridge deep learning to energy-efficient neuromorphic hardware. First, we maintain a high-precision “shadow” synaptic weight during training, but map it to a trinary value (-1, 0, or 1) implementable in hardware, used both during operation and, critically, to determine how the network should change during training itself. Second, we employ spiking neurons during both training and operation, but use the behavior of a high-precision neuron to predict how the spiking neuron should change as network weights change.
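
As a rough illustration of these two training constraints, here is a minimal NumPy sketch (not the paper’s implementation): high-precision shadow weights are trinarized for the forward pass, spiking neurons emit binary values, and gradients flow through the smooth firing probability rather than the non-differentiable spikes. The trinarization threshold, squared-error loss, and weight clipping are illustrative assumptions.

    import numpy as np

    def trinarize(shadow_w, threshold=0.5):
        """Map high-precision 'shadow' weights to trinary values {-1, 0, +1}.
        The 0.5 threshold is an illustrative choice, not the paper's value."""
        return np.where(shadow_w > threshold, 1.0,
                        np.where(shadow_w < -threshold, -1.0, 0.0))

    def train_step(x, target, shadow_w, lr, rng):
        """One illustrative SGD step under both hardware constraints."""
        w_t = trinarize(shadow_w)              # forward pass uses trinary weights
        pre = x @ w_t
        p = 1.0 / (1.0 + np.exp(-pre))         # high-precision firing probability
        spikes = (rng.random(p.shape) < p).astype(float)  # binary spikes (operation)
        # Backward pass: differentiate through the smooth probability p rather
        # than the non-differentiable spikes, and apply the update to the
        # high-precision shadow weights rather than the trinary ones.
        err = (p - target) * p * (1.0 - p)     # d(0.5*(p - target)^2)/d(pre)
        shadow_w -= lr * (x.T @ err)
        np.clip(shadow_w, -1.0, 1.0, out=shadow_w)  # keep shadow weights bounded
        return shadow_w, spikes

    rng = np.random.default_rng(0)
    x = rng.random((32, 64))                   # batch of 32, 64 inputs
    target = rng.random((32, 10))              # 10 output neurons
    shadow_w = rng.uniform(-1, 1, (64, 10))
    shadow_w, spikes = train_step(x, target, shadow_w, lr=0.1, rng=rng)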

Limiting neuron fan-in (the number of inputs each neuron can receive) allows for a crossbar design in hardware that reduces spike traffic, again leading to lower energy consumption. However, deep learning places no explicit restriction on inputs per neuron, which allows networks to scale while remaining integrated. In this work, we enforce this connectivity constraint by partitioning convolutional filters into multiple groups. We maintain network integration by interspersing layers with large topographic coverage but small feature coverage with layers with small topographic coverage but large feature coverage. Essentially, this allows the network to learn complicated spatial features, and then to learn useful mixtures of those features.
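
To make the fan-in arithmetic concrete, here is a small Python sketch; the channel counts, kernel sizes, and group counts are assumptions chosen for illustration, not the configurations used in the paper.

    def grouped_conv_fanin(in_channels, kernel_size, groups):
        """Fan-in per output neuron of a grouped convolution.
        Partitioning the input channels into `groups` divides each neuron's
        fan-in by that factor, letting the layer fit a fixed-size hardware
        crossbar (a TrueNorth core provides up to 256 inputs per neuron)."""
        return (in_channels // groups) * kernel_size ** 2

    # Alternating pattern: a 3x3 layer sees a larger image patch but, split
    # into many groups, mixes few features; the following 1x1 layer sees a
    # single location but mixes many features, restoring network integration.
    assert grouped_conv_fanin(in_channels=128, kernel_size=3, groups=8) == 144  # <= 256
    assert grouped_conv_fanin(in_channels=128, kernel_size=1, groups=1) == 128  # <= 256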

Our results show that the operational and structural differences between neuromorphic computing and deep learning are not fundamental and, indeed, that near state-of-the-art performance can be achieved in a low-power system by merging the two. In a previous post, we mentioned one example: an application, trained with our method, that runs convolutional networks on TrueNorth to perform real-time gesture recognition.

The algorithm is now in the hands of over 130 users at over 40 institutions, and we are excited to discover what else might now be possible!

Filed Under: Accomplishments, Brain-inspired Computing, Papers

Career Opportunity: Brain-inspired Computing

September 1, 2016 By dmodha

Apply here.

Filed Under: Brain-inspired Computing

Samsung turns IBM’s brain-like chip into a digital eye

August 23, 2016 By dmodha

On the occasion of the 30th Anniversary of IBM Research – Almaden, Samsung, the Air Force Research Lab, and Lawrence Livermore National Lab presented their results using the TrueNorth ecosystem. Here is an article written by a reporter who was in the audience. A video of the session will be posted soon.

During the same event, Professor Tobi Delbruck presented the Misha Mahowald Prize to the IBM TrueNorth Team. See here.


Filed Under: Brain-inspired Computing, Collaborations

TrueNorth Accepted into the Computer History Museum

August 12, 2016 By dmodha

On the occasion of the 30th Anniversary of IBM Research – Almaden, TrueNorth was accepted into the Computer History Museum. Here is the IBM press release.


Filed Under: Accomplishments, Brain-inspired Computing, Press, Prizes

Six TrueNorth Algorithms & Applications by IBM and Partners at WCCI

August 3, 2016 By dmodha

Guest Blog by Andrew Cassidy and Michael Debole

At the IEEE 2016 World Congress on Computational Intelligence (WCCI) in Vancouver, Canada, last week, six researchers presented their research on TrueNorth-based algorithms and applications. These papers, published in the proceedings of the International Joint Conference on Neural Networks (IJCNN 2016), represent early outcomes from university and government research collaborators, who were among the first adopters of the TrueNorth hardware and software ecosystem. These research partners were trained at the Brain-inspired Boot Camp in August 2015 and submitted their subsequent research for conference review in January 2016.

The six papers presented at the Special Session on Energy-Efficient Deep Neural Networks were:

  • LATTE: Low-power Audio Transform with TrueNorth Ecosystem.
    Wei-Yu Tsai, Davis Barch, Andrew Cassidy, Michael DeBole, Alexander Andreopoulos, Bryan Jackson, Myron Flickner, Dharmendra Modha, Jack Sampson and Vijaykrishnan Narayanan;
    The Pennsylvania State University, IBM Research – Almaden.
  • TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth.
    Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci and Guido Zarrella;
    ETH Zurich, UC San Diego, IBM Research – Almaden, UC Irvine, The MITRE Corporation.
  • Probabilistic Inference Using Stochastic Spiking Neural Networks on A Neurosynaptic Processor.
    Khadeer Ahmed, Amar Shrestha, Qinru Qiu and Qing Wu;
    Syracuse University, Air Force Research Laboratory.
  • Weighted Population Code for Low Power Neuromorphic Image Classification.
    Antonio Jimeno Yepes, Jianbin Tang, Shreya Saxena, Tobias Brosch and Arnon Amir;
IBM Research – Australia, Massachusetts Institute of Technology, IBM Research – Almaden, Institute of Neural Information Processing – Ulm University.
  • Sparse Approximation on Energy Efficient Hardware.
    Kaitlin Fair and David Anderson;
    Georgia Institute of Technology.
  • A Low-Power Neurosynaptic Implementation of Local Binary Patterns for Texture Analysis.
    Alexander Andreopoulos, Rodrigo Alvarez-Icaza, Andrew Cassidy and Myron Flickner;
    IBM Research – Almaden.


Filed Under: Brain-inspired Computing, Collaborations
