
Dharmendra S. Modha

My Work and Thoughts.

NIPS 2015: Backpropagation for Energy-Efficient Neuromorphic Computing

December 3, 2015 By dmodha

Guest Post by Steven K. Esser, Rathinakumar Appuswamy, Paul A. Merolla, and John V. Arthur.

At the 2015 Neural Information Processing Systems (NIPS) conference, in a paper entitled Backpropagation for Energy-Efficient Neuromorphic Computing, we will be presenting our latest research on adapting machine learning techniques to train TrueNorth networks. In essence, this is our first step towards bringing together deep learning (for offline learning) and brain-inspired computing (for online delivery).

This work is driven by an interest in using neural networks in embedded systems to solve real-world problems. Such systems must satisfy both performance requirements, namely accuracy and generalizability, and platform requirements, such as a small footprint, low power consumption, and real-time operation. We have seen many recent examples demonstrating that machine learning can meet the performance needs, and others demonstrating that neuromorphic approaches such as TrueNorth are well suited to the platform needs.

An interesting challenge arises in trying to bring machine learning and neuromorphic hardware together. To achieve high efficiency, TrueNorth uses spiking neurons, discrete synapses, and constrained connectivity. However, backpropagation, the algorithm at the core of much of machine learning, uses continuous-output neurons and high-precision synapses, and typically operates with no limit on the number of inputs per neuron. How, then, can we build systems that take advantage of both the algorithmic insights from machine learning and the operational efficiency of neuromorphic hardware?

In our work, we demonstrate a learning rule and network topology that reconcile this apparent incompatibility by training in a continuous and differentiable probabilistic space that has a direct correspondence to spikes and discrete synaptic states in the hardware domain. Using this approach, we achieved near state-of-the-art performance on the MNIST handwritten digit dataset (99.42%), the best accuracy to date using spiking neurons and/or low-precision discrete synapses. We also demonstrated three orders of magnitude less energy per classification than the next-best low-power approach.
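
To make this concrete, here is a minimal NumPy sketch of the general technique: train with differentiable expected weights derived from synapse probabilities, then draw discrete synaptic states for deployment. It is an illustration only, not the published TrueNorth training code; the layer sizes, the sigmoid parameterization of connection probabilities, and the squared-error loss are assumptions made for brevity.

```python
# Minimal sketch (assumed details, not the published code): train in a continuous,
# differentiable space of synapse probabilities, then deploy discrete +1/-1 synapses.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 784, 256, 10            # illustrative layer sizes
theta1 = rng.normal(0, 0.1, (n_in, n_hidden))   # unconstrained trainable parameters
theta2 = rng.normal(0, 0.1, (n_hidden, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def expected_weights(theta):
    # Each synapse is a binary +1/-1 random variable with P(+1) = sigmoid(theta),
    # so its expected value, 2*sigmoid(theta) - 1, is smooth and differentiable.
    return 2.0 * sigmoid(theta) - 1.0

def forward(x):
    h = sigmoid(x @ expected_weights(theta1))   # smooth stand-in for spiking neurons
    y = sigmoid(h @ expected_weights(theta2))
    return h, y

def deploy_weights(theta):
    # Deployment: sample a discrete +1/-1 synaptic state for each connection.
    return np.where(rng.random(theta.shape) < sigmoid(theta), 1.0, -1.0)

# One illustrative backpropagation step on a random batch (stand-in for MNIST digits).
x = rng.random((32, n_in))
t = np.eye(n_out)[rng.integers(0, n_out, 32)]
h, y = forward(x)
dy = (y - t) * y * (1 - y)                              # squared-error output delta
dh = (dy @ expected_weights(theta2).T) * h * (1 - h)    # hidden-layer delta
s1, s2 = sigmoid(theta1), sigmoid(theta2)               # chain rule through 2*sigmoid(theta) - 1
theta2 -= 0.1 * (h.T @ dy) * 2 * s2 * (1 - s2)
theta1 -= 0.1 * (x.T @ dh) * 2 * s1 * (1 - s1)

# Discrete weights for a hardware-style deployment of the trained network.
w1_discrete, w2_discrete = deploy_weights(theta1), deploy_weights(theta2)
```

The point of the construction is that the forward and backward passes remain differentiable because they use expected weights, while the deployed network uses only the discrete synaptic states that low-precision hardware can store.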

[Figure: Accuracy and energy of networks trained using our approach running on the TrueNorth chip. Ensembles of multiple networks were tested, with ensemble size indicated next to each data point. It is possible to trade off accuracy versus energy.]

The software behind the published algorithm is already in the hands of nearly 100 developers, most of whom attended the August 2015 Boot Camp, and we expect a number of new results in 2016.

Looking to the future, we are working to expand the repertoire of machine learning approaches for training TrueNorth networks. We have exciting work brewing in our lab using TrueNorth with convolutional networks, and have achieved near state-of-the-art accuracy on a number of additional datasets.

The paper can be found at https://papers.nips.cc/paper/5862-backpropagation-for-energy-efficient-neuromorphic-computing

Stay tuned!

Filed Under: Accomplishments, Brain-inspired Computing, Papers

R&D 100 Award, Editor’s Choice

November 30, 2015 By dmodha

TrueNorth received a 2015 R&D 100 Award and was named Editor’s Choice in the IT/Electrical category.

Filed Under: Accomplishments, Brain-inspired Computing, Prizes

Digital India Dinner with Narendra Modi

September 26, 2015 By dmodha

On September 26, 2015, I attended the Digital India Dinner with the Prime Minister of India, the Honorable Narendra Modi. Amongst the attendees were Satya Nadella (CEO, Microsoft), Sundar Pichai (CEO, Google), John Chambers (CEO, Cisco), and Shantanu Narayen (CEO, Adobe).

[Photo: Narendra Modi.]

Filed Under: Interesting People, Leadership

IBM’s ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

August 17, 2015 By dmodha

See the WIRED article here.

Filed Under: Accomplishments, Brain-inspired Computing, Press

Exploring neuromorphic natural language processing with IBM’s TrueNorth

August 10, 2015 By dmodha

Guest post by Peter U. Diehl from ETH Zurich. Peter’s research is focused on bringing together the fields of neuromorphic computing, machine learning and computational neuroscience.

At the Telluride Neuromorphic Cognition Engineering Workshop 2015, a team from IBM Research (Rodrigo Alvarez-Icaza, John Arthur, Andrew Cassidy, and Paul Merolla) brought their newly developed low-power neuromorphic TrueNorth chip to introduce the platform to a broader research community. Among the other participants were Guido Zarella, principal research scientist at the MITRE Corporation and an expert in natural language processing (NLP); Bruno Pedroni, PhD student at UCSD and former intern at IBM Research Almaden; Emre Neftci, professor at UC Irvine and a pioneer in using deep learning with spiking neural networks; and myself. Together we pursued the ambitious goal of bringing deep-learning-based NLP to neuromorphic systems.

Driven by the ever-increasing amount of natural language text available on the world wide web and by the necessity to make sense of it, the field of NLP has shown dramatic progress in recent years. Simultaneously, the field of neuromorphic computing has started to emerge. Neuromorphic systems are modeled after the brain, which leads to hardware that consumes orders of magnitude less power than its conventional counterparts. However, such a new architecture requires new algorithms, since most existing ones are designed for von Neumann architectures and usually cannot be mapped directly.

At Telluride, the group mentioned above and I were eager to fill the algorithmic gap in NLP for neuromorphic computing by mapping existing state-of-the-art NLP systems designed for von Neumann architectures to TrueNorth. Achieving this goal enables a range of highly attractive technologies, such as high-quality analysis of user input on mobile devices with negligible battery drain, or data centers for understanding queries that consume orders of magnitude less power than conventional high-performance computers. During the workshop we focused on two tasks.

  • The first task is sentiment analysis on TrueNorth, that is, predicting the “happiness” associated with given words. Our system, called “TrueHappiness”, uses a fully-connected feedforward neural network that is trained using backpropagation and then converted to a TrueNorth-compatible network after training finishes (a minimal sketch of this train-then-convert flow follows the list).
  • The second task is question classification, where we identify what kind of answer the user is looking for in a given question. As with TrueHappiness, we start with deep learning techniques: we train a recurrent neural network using backpropagation and afterwards convert it to a spiking neural network suitable for TrueNorth.
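
To illustrate the first, feedforward case, here is a minimal train-then-convert sketch: backpropagation trains a small continuous “happiness” regressor, and its weights are then snapped onto a few discrete levels of the kind a neuromorphic substrate can store. The toy vocabulary, random word vectors, layer sizes, and quantization levels are illustrative assumptions; the actual systems’ word embeddings and TrueNorth conversion pipeline differ.

```python
# Illustrative train-then-convert sketch (not the Telluride code): backprop trains a
# tiny continuous "happiness" regressor on word vectors, then the weights are
# quantized to a small set of discrete values for a TrueNorth-style deployment.
import numpy as np

rng = np.random.default_rng(1)

vocab = {"joy": 0.9, "sunny": 0.8, "meh": 0.4, "gloom": 0.1}   # toy happiness labels
dim, hidden = 16, 8
emb = {w: rng.normal(0, 1, dim) for w in vocab}                # stand-in word vectors

W1 = rng.normal(0, 0.5, (dim, hidden))
W2 = rng.normal(0, 0.5, (hidden, 1))

def forward(x):
    h = np.maximum(0.0, x @ W1)          # ReLU hidden layer
    return h, float(h @ W2)

# Plain backpropagation on a squared-error loss.
for _ in range(500):
    for word, target in vocab.items():
        x = emb[word]
        h, y = forward(x)
        dy = y - target
        dh = dy * W2[:, 0] * (h > 0)     # backprop through the output layer
        W2 -= 0.05 * dy * h[:, None]
        W1 -= 0.05 * np.outer(x, dh)

# "Conversion": snap trained weights onto a few discrete synaptic levels,
# mimicking the limited precision available on neuromorphic hardware.
def quantize(W, levels=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    levels = np.asarray(levels)
    scale = max(np.max(np.abs(W)), 1e-12)
    idx = np.argmin(np.abs(W[..., None] / scale - levels), axis=-1)
    return levels[idx] * scale

W1q, W2q = quantize(W1), quantize(W2)
for word in vocab:
    _, y = forward(emb[word])
    hq = np.maximum(0.0, emb[word] @ W1q)
    print(word, round(y, 2), round(float(hq @ W2q), 2))
```

Comparing the continuous and quantized predictions word by word gives a quick sense of how much accuracy the conversion step costs.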

For both tasks we managed to implement end-to-end systems in which the user can type words that appear in Wikipedia; these words are then used for sentiment analysis or, in the case of a question, analyzed with regard to the desired content. Demos and details about the training and conversion of both systems will soon be available at peterudiehl.com.

Although the algorithms we designed are not yet viable for commercial-scale applications, because we are just getting started, they provide an important first step and a generally applicable framework for mapping traditional deep learning systems to neuromorphic platforms, thereby opening up neuromorphic computing and deep learning for entirely new applications. This is also the vision at the IBM TrueNorth Boot Camp at IBM Research in San Jose, where I am at the moment. Together with over 60 other participants, we are diving into TrueNorth programming, creating new neuromorphic algorithms, and mapping existing von Neumann algorithms to TrueNorth to advance the state of the art in low-power computing.

Filed Under: Brain-inspired Computing, Collaborations
