
Dharmendra S. Modha

My Work and Thoughts.


2016 Scientist of the Year By R&D Magazine

November 8, 2016 By dmodha

[Cover of R&D Magazine]

This is the 51st anniversary of the award. Previous awardees include John Bardeen, Nobel laureate; Tim Berners-Lee, director of the World Wide Web Consortium; Steve Chu, Nobel laureate; Ralph Gomory, former IBM SVP of Science & Technology; Leroy Hood, human genome pioneer; Bill Joy, co-founder of Sun Microsystems; Kary Mullis, Nobel laureate; Calvin Quate, co-inventor of atomic force microscopy; Justin Rattner, former director of Intel Labs; J. Craig Venter, human genome pioneer; and Stephen Wolfram, founder & CEO of Wolfram Research.

First and foremost, I am most grateful to my colleagues, whose collaboration and dedication have been the key to our shared success. I am grateful to IBM Research, IBM, the U.S. Department of Defense (Defense Advanced Research Projects Agency, Air Force Research Lab, Army Research Lab), and the U.S. Department of Energy (Lawrence Livermore National Lab and Lawrence Berkeley National Lab) for their support of this work over the last 12 years. I am grateful to the 150+ researchers at 40+ universities who are experimenting with the TrueNorth ecosystem. I am grateful to my mentors. I am grateful to my alma maters, IIT Bombay and UCSD, and to my teachers at both schools. I am grateful to my family and friends for their love and support.

The cover photo is by Hita Bambhania-Modha. Her professional Facebook page, https://www.facebook.com/hita.life/, has just launched. And don't miss her website, http://www.hita.life.

Filed Under: Accomplishments, Brain-inspired Computing, Prizes

Breaking News: Energy-Efficient Neuromorphic Classifiers

September 23, 2016 By dmodha

A paper entitled “Energy-Efficient Neuromorphic Classifiers” was published this week in Neural Computation by Daniel Martí, Mattia Rigotti, Mingoo Seok, and Stefano Fusi.

Here is the abstract:

Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is extremely low, comparable to that of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obscuring a direct comparison of their energy consumption with that of conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.
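To see why integration time trades off against accuracy, consider a minimal NumPy sketch under assumptions of our own (stochastic output units with fixed per-step spike probabilities; not the paper's model): the class whose output unit fires fastest wins, and counting spikes for longer yields a better estimate of which unit that is.

```python
# A minimal sketch (not the paper's code) of the integration-time vs.
# accuracy trade-off: two output units fire stochastically, the correct
# one slightly faster, and the class with more spikes after T steps wins.
import numpy as np

rng = np.random.default_rng(0)

def classify_by_spike_count(rates, T):
    """Count spikes per unit over T steps; argmax with random tie-breaking."""
    counts = (rng.random((T, len(rates))) < rates).sum(axis=0)
    return np.argmax(counts + 1e-6 * rng.random(len(rates)))

rates = np.array([0.55, 0.45])  # per-step spike probabilities; unit 0 is correct
for T in (1, 10, 100, 1000):
    acc = np.mean([classify_by_spike_count(rates, T) == 0 for _ in range(2000)])
    print(f"T={T:4d} steps -> accuracy {acc:.2f}")
```

Accuracy rises from near chance at one time step toward 1.0 at a thousand, while energy grows with the number of steps, which is the deployment flexibility the abstract describes.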

Filed Under: Brain-inspired Computing, Collaborations

Deep learning inference possible in embedded systems thanks to TrueNorth

September 21, 2016 By dmodha

See the beautiful blog post by Caroline Vespi about our PNAS paper “Convolutional networks for fast, energy-efficient neuromorphic computing”.

Filed Under: Accomplishments, Brain-inspired Computing, Press

Breaking News: Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

September 20, 2016 By dmodha

Guest Blog by Steven K. Esser

We are excited to announce that our work demonstrating convolutional networks for energy-efficient neuromorphic hardware has just appeared in the peer-reviewed Proceedings of the National Academy of Sciences (PNAS) of the United States of America. Here is an open-access link. In this work, we show near state-of-the-art accuracy on 8 datasets while running at between 1,200 and 2,600 frames per second and consuming between 25 and 275 mW on the TrueNorth chip. Our approach adapts deep convolutional neural networks, today's state-of-the-art approach for machine perception, to perform classification tasks on neuromorphic hardware such as TrueNorth. This work overcomes the challenge of learning a network that uses low precision synapses, spiking neurons, and limited fan-in: constraints that allow for energy-efficient neuromorphic hardware design (including TrueNorth) but that are not imposed in conventional deep learning.

Low precision synapses make it possible to co-locate processing and memory in hardware, dramatically reducing the energy needed for data movement. Employing spiking neurons (which have only a binary state) further reduces data traffic and computation, leading to additional energy savings. However, contemporary convolutional networks use high precision (at least 32-bit) neurons and synapses. This high precision representation allows for very small changes to the neural network during learning, facilitating incremental improvement of the response to a given input during training. In this work, we introduce two constraints into the learning rule to bridge deep learning to energy-efficient neuromorphic hardware. First, we maintain a high precision “shadow” synaptic weight during training, but map this weight to a trinary value (-1, 0, or 1) implementable in hardware, and use that trinary value for operation as well as, critically, to determine how the network should change during training itself. Second, we employ spiking neurons during both training and operation, but use the behavior of a high precision neuron to predict how the spiking neuron should change as network weights change.
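As a rough illustration of the first constraint, here is a minimal NumPy sketch under assumptions of our own (the variable names, threshold, and toy gradient below are hypothetical, not the paper's implementation): the forward pass uses the trinary projection, while gradient updates accumulate in the high precision shadow weight.

```python
# A minimal sketch of training with "shadow" weights: the hardware sees
# only trinary values, but small updates accumulate in high precision.
import numpy as np

def trinarize(w_shadow, threshold=0.5):
    """Map each shadow weight to -1, 0, or +1 for hardware deployment."""
    return np.sign(w_shadow) * (np.abs(w_shadow) > threshold)

rng = np.random.default_rng(0)
w_shadow = rng.normal(0.0, 1.0, size=(4, 3))  # high precision, training only

x = rng.normal(size=(8, 4))                   # a batch of inputs
w_hw = trinarize(w_shadow)                    # what the chip would use
y = x @ w_hw                                  # forward pass with trinary weights

# Backward pass: the gradient is computed through the trinary weights (in
# the spirit of straight-through estimators), but the update is applied to
# the shadow, so tiny changes can accumulate across steps and eventually
# flip a weight between -1, 0, and +1.
grad_y = rng.normal(size=y.shape)             # stand-in for a loss gradient
grad_w = x.T @ grad_y
w_shadow -= 0.01 * grad_w                     # update shadow, then re-trinarize
```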

Limiting neuron fan-in (the number of inputs each neuron can receive) allows for a crossbar design in hardware that reduces spike traffic, again leading to lower energy consumption. However, in deep learning no explicit restriction is placed on the number of inputs per neuron, which allows networks to scale while still remaining integrated. In this work, we enforce this connectivity constraint by partitioning convolutional filters into multiple groups. We maintain network integration by interspersing layers with large topographic coverage but small feature coverage with layers with small topographic coverage but large feature coverage. Essentially, this allows the network to learn complicated spatial features, and then to learn useful mixtures of those features.
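To make the fan-in arithmetic concrete, here is a tiny sketch with hypothetical layer sizes (not the paper's network) showing how partitioning filters into groups caps each neuron's inputs:

```python
# Fan-in of a grouped convolution: each output neuron sees only the
# channels in its own group, times the spatial extent of its kernel.
def grouped_conv_fan_in(in_channels, kernel, groups):
    """Inputs per output neuron when filters are partitioned into groups."""
    return (in_channels // groups) * kernel * kernel

# Ungrouped: 128 channels * 3*3 kernel = 1152 inputs per neuron,
# far too many for a 256-input crossbar like TrueNorth's.
print(grouped_conv_fan_in(128, 3, 1))   # -> 1152
# With 8 groups, each neuron sees only 16 channels: 144 inputs, which fits.
print(grouped_conv_fan_in(128, 3, 8))   # -> 144
```

The interspersed layer types described above then re-mix information across groups, so the network stays integrated despite the cap.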

Our results show that the operational and structural differences between neuromorphic computing and deep learning are not fundamental and, indeed, that near state-of-the-art performance can be achieved in a low-power system by merging the two. In a previous post, we mentioned one example of an application running convolutional networks on TrueNorth, trained with our method, to perform real-time gesture recognition.

The algorithm is now in the hands of over 130 users at over 40 institutions, and we are excited to discover what else might now be possible!

Filed Under: Accomplishments, Brain-inspired Computing, Papers

Career Opportunity: Brain-inspired Computing

September 1, 2016 By dmodha

Apply here.

Filed Under: Brain-inspired Computing

