
Dharmendra S. Modha

My Work and Thoughts.

Deep learning inference possible in embedded systems thanks to TrueNorth

September 21, 2016 By dmodha

See the beautiful blog post by Caroline Vespi on our PNAS paper “Convolutional networks for fast, energy-efficient neuromorphic computing”.

Filed Under: Accomplishments, Brain-inspired Computing, Press

Breaking News: Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

September 20, 2016 By dmodha

Guest Blog by Steven K. Esser

We are excited to announce that our work demonstrating convolutional networks for energy-efficient neuromorphic hardware just appeared in the peer-reviewed Proceedings of the National Academy of Sciences (PNAS) of the United States of America. Here is an open-access link. In this work, we show near state-of-the-art accuracy on 8 datasets while running on the TrueNorth chip at between 1,200 and 2,600 frames per second and consuming between 25 and 275 mW. Our approach adapts deep convolutional neural networks, today’s state-of-the-art approach for machine perception, to perform classification tasks on neuromorphic hardware such as TrueNorth. This work overcomes the challenge of learning a network that uses low-precision synapses, spiking neurons, and limited fan-in: constraints that allow for energy-efficient neuromorphic hardware design (including TrueNorth) but that are not imposed in conventional deep learning.

Low-precision synapses make it possible to co-locate processing and memory in hardware, dramatically reducing the energy needed for data movement. Employing spiking neurons (which have only a binary state) further reduces data traffic and computation, leading to additional energy savings. However, contemporary convolutional networks use high-precision (at least 32-bit) neurons and synapses. This high-precision representation allows for very small changes to the neural network during learning, facilitating incremental improvement of the response to a given input during training. In this work, we introduce two constraints to the learning rule to bridge deep learning to energy-efficient neuromorphic hardware. First, we maintain a high-precision “shadow” synaptic weight during training, but map this weight to a trinary value (-1, 0, or 1) implementable in hardware, both for operation and, critically, to determine how the network should change during training itself. Second, we employ spiking neurons during both training and operation, but use the behavior of a high-precision neuron to predict how the spiking neuron should change as network weights change.
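These two constraints can be sketched in a few lines of NumPy. This is a minimal illustration, not our actual training code: the linear layer, the squared-error loss, the 0.5 trinarization threshold, and all function names are assumptions made for the example.

```python
import numpy as np

def trinarize(shadow_w, threshold=0.5):
    """Map high-precision "shadow" weights to trinary values {-1, 0, 1}.
    Small-magnitude weights become 0; the rest keep only their sign."""
    w = np.zeros_like(shadow_w)
    w[shadow_w > threshold] = 1.0
    w[shadow_w < -threshold] = -1.0
    return w

def spike(potential, threshold=1.0):
    """Binary spiking neuron: emits 1 when the potential crosses threshold.
    During training, a high-precision neuron's response stands in for this
    non-differentiable step when computing weight updates."""
    return (potential >= threshold).astype(float)

def train_step(shadow_w, x, target, lr=0.01):
    """One illustrative update: the trinary weights are used in the forward
    pass, but the gradient is applied to the high-precision shadow copy."""
    w = trinarize(shadow_w)
    y = x @ w                      # forward pass with hardware-style weights
    grad = x.T @ (y - target)      # gradient of a squared-error loss w.r.t. w
    return shadow_w - lr * grad    # update the shadow weights, not w itself
```

After training, only the trinary weights are deployed to hardware; the shadow copy exists solely so that many small gradient steps can accumulate into an eventual flip of a trinary value.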

Limiting neuron fan-in (the number of inputs each neuron can receive) allows for a crossbar design in hardware that reduces spike traffic, again leading to lower energy consumption. However, deep learning places no explicit restriction on the number of inputs per neuron, which allows networks to scale while still remaining integrated. In this work, we enforce this connectivity constraint by partitioning convolutional filters into multiple groups. We maintain network integration by interspersing layers with large topographic coverage but small feature coverage with layers with small topographic coverage but large feature coverage. Essentially, this allows the network to learn complicated spatial features, and then to learn useful mixtures of those features.
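The grouping idea can be sketched as a fan-in-limited layer in NumPy, assuming a simple fully connected projection rather than an actual convolution; the shapes, group count, and function name are illustrative only.

```python
import numpy as np

def grouped_projection(x, group_weights):
    """Fan-in-limited layer: input features are split into len(group_weights)
    partitions, and each output block connects only to its own partition, so
    each neuron's fan-in is n_features // groups rather than n_features."""
    groups = len(group_weights)
    in_per_group = x.shape[1] // groups
    outs = []
    for g, w in enumerate(group_weights):
        xg = x[:, g * in_per_group:(g + 1) * in_per_group]
        outs.append(xg @ w)        # each neuron sees only its group's inputs
    return np.concatenate(outs, axis=1)
```

Alternating such layers so that one layer's groups span many spatial locations and the next layer's groups span many features keeps the network integrated despite the per-neuron fan-in cap.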

Our results show that the operational and structural differences between neuromorphic computing and deep learning are not fundamental and, indeed, that near state-of-the-art performance can be achieved in a low-power system by merging the two. In a previous post, we mentioned one example of an application running convolutional networks on TrueNorth, trained with our method, to perform real-time gesture recognition.

The algorithm is now in the hands of over 130 users at over 40 institutions, and we are excited to discover what else might now be possible!

Filed Under: Accomplishments, Brain-inspired Computing, Papers

Career Opportunity: Brain-inspired Computing

September 1, 2016 By dmodha

Apply here.

Filed Under: Brain-inspired Computing

Samsung turns IBM’s brain-like chip into a digital eye

August 23, 2016 By dmodha

On the occasion of the 30th Anniversary of IBM Research – Almaden, Samsung, Air Force Research Lab, and Lawrence Livermore National Lab presented their results using the TrueNorth ecosystem. Here is an article written by a reporter who was in the audience. A video of the session will be posted soon.

During the same event, Professor Tobi Delbruck presented the Misha Mahowald Prize to the IBM TrueNorth Team. See here.

Misha Mahowald Prize

Filed Under: Brain-inspired Computing, Collaborations

TrueNorth Accepted into the Computer History Museum

August 12, 2016 By dmodha

On the occasion of the 30th Anniversary of IBM Research – Almaden, TrueNorth was accepted into the Computer History Museum. Here is the IBM press release.

CHM

Filed Under: Accomplishments, Brain-inspired Computing, Press, Prizes
