Dharmendra S. Modha

My Work and Thoughts.

Binding Sparse Spatiotemporal Patterns in Spiking Computation

July 16, 2010 By dmodha

Today, at the International Joint Conference on Neural Networks, Steven Esser (of IBM Research – Almaden) presented research developing an auto-associative memory network using spiking neurons. Auto-associative memory involves storing a pattern in a way such that presenting part of the pattern can trigger recall of the complete pattern. Such a mechanism is at work in the way we’re able to recall the appearance of many objects, such as a person’s face, even when much of it is hidden by another object. Performing auto-associative memory with spiking neurons poses particular challenges, as the communication between such neurons is often spread out across time and mixed with other signals, rather than being delivered in a clean "snapshot". However, spiking neurons are particularly appealing to work with, as spiking communication is at the heart of the extremely powerful, yet tremendously efficient, processing of the mammalian brain. Our work is further aimed at learning patterns in an unsupervised and noise-tolerant fashion: detecting patterns that occur repeatedly without the use of explicit “pattern present” signals, removing noise that the patterns may be embedded in, and storing those patterns.

The key to the function of this network is a design that includes two reciprocally connected layers of neurons and a learning rule that modifies the connections between the two layers to allow for the formation of auto-associative memories. Input to the network enters through the first layer, while neurons in the second layer gradually learn to respond selectively to patterns in the first layer that occur repeatedly. As the second layer neurons learn to respond selectively to the appearance of a complete or partial version of specific patterns, they simultaneously learn to drive the appearance of the same pattern whenever they activate. Thus, the appearance of a portion of a learned pattern in the first layer will activate a selective neuron in the second layer, which will in turn drive recall of the "missing piece" of the pattern in the first layer. In contrast to previous work using spiking neurons to perform auto-associative memory, this system does not rely on the formation of slowly building attractors, but rather is able to use single spikes to produce very rapid pattern detection and recall.
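To make the reciprocal two-layer idea concrete, here is a minimal sketch of partial-pattern completion through a second layer. All sizes, thresholds, and the storage rule are illustrative simplifications, not the paper's actual model: in particular, each stored pattern is hand-assigned to one second-layer unit here, whereas the work described above learns that selectivity in an unsupervised way from repeated, noisy, temporally spread presentations.

```python
import numpy as np

N1 = 12  # first-layer (input) size, illustrative

# Three binary patterns to store (rows), with non-overlapping active bits.
patterns = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
], dtype=float)
N2 = len(patterns)  # one second-layer unit per stored pattern

# Hebbian-style storage: unit j's upward weights match pattern j, and its
# downward (feedback) weights reproduce that same pattern in the first layer.
W_up = patterns.copy()       # layer 1 -> layer 2
W_down = patterns.T.copy()   # layer 2 -> layer 1

def recall(partial, theta=0.5):
    """Present a partial pattern: the most strongly driven second-layer unit
    emits a single spike, and its feedback weights drive recall of the
    complete pattern in the first layer."""
    drive = W_up @ partial
    s2 = np.zeros(N2)
    s2[np.argmax(drive)] = 1.0                 # winner-take-all spike
    return (W_down @ s2 >= theta).astype(float)

# Occlude half of pattern 0 and recover the missing piece.
partial = patterns[0].copy()
partial[2:4] = 0                               # hide two of its four active bits
completed = recall(partial)
print(completed)                               # full pattern 0 is restored
```

Note the contrast with attractor-based recall: completion here takes a single feedforward/feedback pass (one spike per selective unit), rather than iterating until the network settles.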

The paper is here.

Filed Under: Accomplishments, Brain-inspired Computing, Papers
