
Dharmendra S. Modha

My Work and Thoughts.


Archives for 2015

Telluride Neuromorphic Cognition Engineering Workshop

August 1, 2015 By dmodha

Guest Post by Rodrigo Alvarez-Icaza, John Arthur, Andrew Cassidy, and Paul Merolla.

Each July for the last 20 years or so, a group of neuroscientists, engineers, and computer scientists has come together for a three-week neuromorphic engineering workshop in the scenic town of Telluride. Telluride is best known for its ski slopes, but in the summer it is the perfect place to hunker down and work on collaborative, hands-on neuromorphic projects. This year, four of us who are IBM research scientists in the Brain-inspired Computing Group brought IBM’s latest-generation TrueNorth chip to Telluride with one goal in mind: enable workshop participants to use TrueNorth for their own projects. To our surprise and delight, although many of the participants had never actually seen or used TrueNorth before, they were all up and running in almost no time. Here is a quick rundown of what happened.

Telluride Town Center

The Setup:

The bulk of the workshop took place at the Telluride elementary school, in particular in one of the classrooms. Shown below are the four of us, arriving at our new home for the next three weeks.

Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, and Andrew Cassidy (left to right) at the Telluride elementary school
Photo Credit: Tobi Delbruck

We brought a bunch of goodies to Telluride, including 10 of our latest mobile development boards, some of which are being unpacked by Rodrigo (top). Each board (bottom) has a TrueNorth chip (SyNAPSE), an FPGA, and a host of sensors and connectors. The basic setup is that participants log into these boards through our local servers and run their real-time spiking neural networks.

Rodrigo Alvarez-Icaza unpacking the boards

Close up of a TrueNorth mobile development board

Hands on projects:

Our IBM group, along with Arindam Basu (NTU), ran a workgroup called Spike-Based Cognitive Computing. Within this group, we divided into sub-projects. The projects based on TrueNorth included:

  • ATIS camera: MNIST classifier
     Garrick Orchard (Singapore Institute for Neurotechnology) and Kate Fischl (JHU)

  • Sparse Representations for speech recognition (TIDIGITs)
     Jie “Jack” Zhang (JHU) and Kaitlin Fair (Georgia Tech/AFRL)

  • Word vector associative memory (semantic similarity)
     Dan Mendat (JHU) and Guillaume Garreau (JHU)

  • Word vector analogies
     Dan Mendat (JHU)

  • Word “happiness” score (regression) using word vectors
     Peter Diehl (INI) and Bruno Pedroni (UCSD)

  • Question (sentence) classification using Recurrent NNs
     Emre Neftci (UC Irvine), Peter Diehl (INI) and Bruno Pedroni (UCSD)

  • FSMs and WTAs (for working memory, etc.)
     Suraj Honnuraiah (Institute of Neuroinformatics)

  • Sensors to TrueNorth:
     DAVIS: Luca Longinotti (iniLabs)
     spiking cochlea: Shih-Chii Liu (Institute of Neuroinformatics)
     spiking sonar: Timmer Horiuchi (U. Maryland)
     spiking radar: Saeed Afshar (University of Western Sydney)
     FPGA cochlea: Guillaume Garreau (JHU)

You can find more information on these projects on the Neuromorph site.
Here, we highlight one of the projects, which culminated with a real time demo!

Real-time digit classification on TrueNorth with a spiking retinal camera front end:

In this project, the goal was to connect a spiking retinal camera (called the ATIS) to a TrueNorth chip to perform pattern classification. Using the ATIS as a front end for TrueNorth opens up the possibility of a fast, low-power object recognition system that operates entirely on spikes.

There are two main steps involved in realizing the real-time digit classification system. The first is creating and training the object recognition model to run on TrueNorth. The second is connecting the ATIS to TrueNorth to achieve real-time operation.


We made use of a publicly available spike-based conversion of the MNIST dataset, which was recorded with the ATIS sensor mounted on a pan-tilt unit while viewing MNIST digits on a computer monitor. Details of the dataset’s creation, as well as a download of the dataset itself, are available at:

 http://www.garrickorchard.com/datasets/n-mnist


Video 1


Video 2

The continuous spike stream was converted to static images by accumulating spikes for 10 ms at a time, creating one static image per window for training. These static images were used to build a Lightning Memory-Mapped Database (LMDB), on which training was performed using the Caffe deep learning framework (modified to support TrueNorth). A simple one-layer neural network was used, with 100 neurons trained to respond to each of the 10 digits. The final output of the system is a histogram of the number of spikes emitted by the neurons representing each class; the class with the most spikes is deemed the most likely output.
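As a rough sketch of the preprocessing and readout just described, the following bins a stream of events into 10 ms frames and then picks the class whose output neurons spiked most. The `(t_ms, x, y)` event format, function names, and shapes are illustrative assumptions, not the actual workshop code.

```python
import numpy as np

def events_to_frames(events, window_ms=10.0, height=28, width=28):
    """Bin time-sorted (t_ms, x, y) spike events into fixed-duration binary frames."""
    if len(events) == 0:
        return []
    frames = []
    t0 = events[0][0]
    frame = np.zeros((height, width), dtype=np.uint8)
    for t, x, y in events:
        if t - t0 >= window_ms:            # window elapsed: emit frame, start the next
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.uint8)
            t0 += window_ms * ((t - t0) // window_ms)
        frame[y, x] = 1                    # mark pixel as active within this window
    frames.append(frame)
    return frames

def classify_by_spike_count(output_spikes, num_classes=10, neurons_per_class=100):
    """Histogram output spikes per class (100 neurons per digit); most spikes wins."""
    counts = np.zeros(num_classes, dtype=int)
    for neuron_id in output_spikes:
        counts[neuron_id // neurons_per_class] += 1
    return int(np.argmax(counts)), counts
```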

Real time results:

A laptop powers and interfaces with the ATIS sensor, which is mounted on a helmet worn by a user. This laptop performs simple noise filtering on ATIS spikes and activity-based tracking of the MNIST digit on a screen. Spikes occurring within the tracked 28×28-pixel region of interest are remapped to target the corresponding cores and axons on TrueNorth (sometimes multiple axons per spike). The laptop accumulates spikes until 130 are available for classification, at which point all 130 spikes are communicated to TrueNorth over UDP.
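The host-side batching described above might look roughly like the following sketch. The packet layout, port, and pixel-to-axon mapping here are assumptions for illustration only, not TrueNorth’s actual wire protocol or network layout.

```python
import socket
import struct

BATCH_SIZE = 130  # spikes accumulated before one classification, as described above

def remap_pixel_to_axon(x, y, width=28):
    """Toy remap from a 28x28 region-of-interest pixel to a (core, axon) pair.
    Assumes 256 axons per core; the real mapping depends on the trained network."""
    flat = y * width + x
    return flat // 256, flat % 256

class SpikeBatcher:
    """Collect remapped spikes and ship each batch of 130 as one UDP datagram."""

    def __init__(self, addr=("127.0.0.1", 9999)):  # hypothetical address/port
        self.addr = addr
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.pending = []

    def add_spike(self, x, y):
        self.pending.append(remap_pixel_to_axon(x, y))
        if len(self.pending) >= BATCH_SIZE:        # enough spikes: send for classification
            self.flush()

    def flush(self):
        # Pack each (core, axon) as two network-order unsigned shorts.
        payload = b"".join(struct.pack("!HH", c, a) for c, a in self.pending)
        self.sock.sendto(payload, self.addr)
        self.pending.clear()
```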

The trained neural network runs on TrueNorth and output spikes are communicated to a second laptop using UDP. This second laptop performs visualization of the results.


Video 3

On the spiking MNIST test set, we achieved 76%–80% accuracy at 100 classifications/sec. The goal was not classification accuracy per se, but rather to learn to create end-to-end demonstrations. The classification rate (100/sec) is limited by the fact that we use 10 ms of data for each classification; TrueNorth itself is capable of performing 1000 such classifications per second. In the real-time system, the classifier on TrueNorth uses only 4 cores (0.1% of the chip). Temporally, utilization of this 0.1% of the physical chip is below 10% (i.e., the 4 cores are idle more than 90% of the time) when performing 100 classifications per second.
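The arithmetic behind these figures can be checked directly, assuming TrueNorth’s published 4096 cores and 1 ms tick:

```python
TOTAL_CORES = 4096      # TrueNorth's published core count
CORES_USED = 4          # classifier footprint reported above
WINDOW_MS = 10          # data accumulated per classification
TICK_MS = 1             # one classification per tick at maximum rate

core_fraction = CORES_USED / TOTAL_CORES   # ~0.1% of the chip
rate_per_sec = 1000 / WINDOW_MS            # 100 classifications/sec
max_rate_per_sec = 1000 / TICK_MS          # 1000 classifications/sec ceiling

print(f"{core_fraction:.2%} of cores, "
      f"{rate_per_sec:.0f}/sec (chip max {max_rate_per_sec:.0f}/sec)")
```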

Filed Under: Brain-inspired Computing, Collaborations

Education Session for US Senate & House

July 16, 2015 By dmodha

Senate and House

Filed Under: Accomplishments, Brain-inspired Computing, Leadership, Presentations

Energy-efficient neuromorphic classifiers

July 8, 2015 By dmodha

Professor Stefano Fusi of the Center for Theoretical Neuroscience at Columbia University (who was part of the IBM team for DARPA SyNAPSE in Phases 0, 1, and 2) has released a very interesting pre-print entitled “Energy-efficient neuromorphic classifiers”. Here is the abstract (highlights are mine):

Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumptions, comparable to those of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering elusive a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. These circuits emulate enough neurons to compete with state-of-the-art classifiers. We also show that the energy consumption of the IBM chip is typically 2 or more orders of magnitude lower than that of conventional digital machines when implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and it has significant advantages over conventional digital devices when energy consumption is considered.

Filed Under: Brain-inspired Computing, Collaborations

World Economic Forum: Top 10 Emerging Technologies of 2015

April 30, 2015 By dmodha

World Economic Forum named “Neuromorphic technology” as one of “Top 10 Emerging Technologies of 2015” and specifically cited IBM’s TrueNorth Chip (see page 12 of the report).

Filed Under: Accomplishments, Brain-inspired Computing, Prizes

Cognitive Systems Colloquium: Videos

April 16, 2015 By dmodha

Guest Post by Ben G. Shaw, Organizing Chair of Cognitive Systems Colloquium.

This is continued from the previous post dated November 12, 2014.

To highlight the transformative potential of IBM’s Neurosynaptic System and its impact on computation in the Cognitive Era, IBM Research hosted nearly 200 eminent thinkers and pioneers in the field of brain-inspired computing at the IBM Research – Almaden Cognitive Systems Colloquium. The program featured over a dozen outstanding speakers and distinguished panelists, with attendees drawn from government, industry, academia, research, and the venture community, including thought leaders and potential early adopters.

Recurring Themes of the Day:

  • The Brain: how advances in understanding nature’s most efficient and powerful computational substrate are revealing new paradigms for computing
  • Technology: as von Neumann computation comes up against fundamental limitations that are bringing Moore’s law to an end, how new approaches can revolutionize important classes of computation
  • Applications: how efficient, embedded neural computation may benefit individuals, businesses and society by making objects, environments and systems more aware and responsive
  • Ecosystems: how new technologies and offerings will gain breadth, depth and momentum to transform industries from robotics to healthcare, agriculture to mobile devices, transportation to public safety.

SyNAPSE Deep Dive:

In addition to reviewing the state of knowledge in the field of brain-inspired computing and a forward-looking panel discussion, participants took a concentrated “Deep Dive” into the recently announced IBM Neurosynaptic System including the 1-million neuron TrueNorth chip, architecture, development boards, programming paradigm, applications, education and ecosystem. Inspired by the brain, TrueNorth is an architecture and a substrate for non-von Neumann, event-driven, multi-modal, real-time spatio-temporal pattern recognition, sensory processing and integrated sensor-actuator systems. TrueNorth’s extreme power efficiency and inherent scalability will revolutionize applications in mobile and embedded systems, at the same time allowing neural algorithms to achieve previously unattainable scales, running quickly, efficiently and natively in hardware.

  • Brain-inspired Computing: A Decade-Long Journey (20:27)
  • Part I: The Need for a New Architecture, TrueNorth & Compass, Transduction, Live demos (25:45)
  • Part II: Architecture, Neuron, Training for TrueNorth, MNIST Example (24:10)
  • Part III: Corelet Development, Corelet Programming, Hardware Placement (18:27)
  • Part IV: Mobile Deployment, Scale Deployment, SyNAPSE University (14:49)

Distinguished Speakers and Panelists:

  • From BrainScales to the Human Brain Project: Neuromorphic Computing Coming of Age (24:33)
    Karlheinz Meier, Professor & Co-Director, Human Brain Project, University of Heidelberg
  • Brain-inspired Computing: A Decade-Long Journey (20:27)
    Dharmendra Modha, IBM Fellow and Principal Investigator, IBM SyNAPSE Program
  • Cell Type and Computation (17:11)
    Michael Hawrylycz, Investigator, Allen Institute for Brain Science
  • Synesthesia’s Challenge to Brain-Inspired Computing (17:24)
    Richard Cytowic, Author of Synesthesia: A Union of the Senses
  • Visual Cortex in Silicon (26:30)
    Vijaykrishnan Narayanan, Professor, Penn State University and PI, NSF Expeditions in Computing
  • Silicon Retinas (17:18)
    Tobi Delbruck, Co-Founder, INILabs, and Professor, ETH Zurich
  • Asynchronous Circuits (21:54)
    Rajit Manohar, Professor, Cornell Tech
  • A Quest for Visual Intelligence (23:10)
    Fei-Fei Li, Professor and Director, AI Lab, Stanford University
  • Panel: Brain, Computers, Society, Future
    • Andreas Andreou, Professor, Johns Hopkins University
    • Gary Marcus, Professor, NYU
    • Horst Simon, Deputy Laboratory Director, Lawrence Berkeley National Laboratory
    • Jayashree Subrahmonia, Vice President for Products, IBM Watson Group
    • Jim Spohrer, IBM Director of Global University Programs (Moderator)
    • Mark Anderson, CEO, Strategic News Service and Chair, Future in Review Conference
    • Miyoung Chun, Executive Vice President for Science Programs, The Kavli Foundation

Audience:

The audience included luminaries such as Turing Award winner Ivan Sutherland and von Neumann Theory Prize winner Nimrod Megiddo. Four IBM Fellows were in attendance (Ronald Fagin, C. Mohan, Hamid Pirahesh, Stuart Parkin), as were prominent founders and visionaries in the field of brain-inspired computing, including Warren Hunt (UT Austin), Tim Lance (NYSERNet), Einar Gall (Neurosciences Institute), Gert Cauwenberghs (UCSD), Ken Kreutz-Delgado (UCSD), and Jeff Krichmar (UC Irvine).

Filed Under: Brain-inspired Computing, Collaborations

