
Dharmendra S. Modha

My Work and Thoughts.


Binding Sparse Spatiotemporal Patterns in Spiking Computation

July 16, 2010 By dmodha

Today, at the International Joint Conference on Neural Networks, Steven Esser (of IBM Research – Almaden) presented research developing an auto-associative memory network using spiking neurons. Auto-associative memory involves storing a pattern such that presenting part of the pattern can trigger recall of the complete pattern. Such a mechanism is at work in the way we're able to recall the appearance of many objects, such as a person's face, even when much of it is hidden by another object. Performing auto-associative memory with spiking neurons poses particular challenges, as communication between such neurons is often spread out across time and mixed with other signals, rather than being delivered in a clean "snapshot". However, spiking neurons are particularly appealing to work with because spiking communication is at the heart of the extremely powerful, yet tremendously efficient, processing of the mammalian brain. Our work further aims to learn patterns in an unsupervised and noise-tolerant fashion: detecting patterns that occur repeatedly without the use of explicit "pattern present" signals, removing noise in which the patterns may be embedded, and storing those patterns.

The key to the function of this network is a design that includes two reciprocally connected layers of neurons and a learning rule that modifies the connections between the two layers to allow for the formation of auto-associative memories. Input to the network enters through the first layer, while neurons in the second layer gradually learn to respond selectively to patterns in the first layer that occur repeatedly. As the second-layer neurons learn to respond selectively to the appearance of a complete or partial version of specific patterns, they simultaneously learn to drive the appearance of the same pattern whenever they activate. Thus, the appearance of a portion of a learned pattern in the first layer will activate a selective neuron in the second layer, which will in turn drive recall of the "missing piece" of the pattern in the first layer. In contrast to previous work using spiking neurons to perform auto-associative memory, this system does not rely on the formation of slowly building attractors, but rather is able to use single spikes to produce very rapid pattern detection and recall.
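To make the idea of pattern completion concrete, here is a minimal sketch of classic Hopfield-style auto-associative recall. This is not the spiking two-layer network of the paper; it is a hypothetical rate-based analogue, using Hebbian outer-product storage, meant only to illustrate how a partial cue can drive recall of a complete stored pattern.

```python
import numpy as np

# Two binary (+1/-1) patterns to store; 8 units for illustration.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])
n = patterns.shape[1]

# Hebbian storage: weights accumulate the outer products of stored patterns.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=10):
    """Iteratively update unit states until the network settles."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties toward +1
    return s

# Present a partial cue: first half of pattern 0, second half unknown (zero).
cue = patterns[0].copy()
cue[4:] = 0
completed = recall(cue)
print(completed)  # recovers the full stored pattern 0
```

The dynamics converge in a single update here because the cue overlaps only with pattern 0; the "missing piece" (the zeroed second half) is filled in by the stored correlations, which is the essence of auto-associative recall.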

The paper is here.

Filed Under: Accomplishments, Brain-inspired Computing, Papers

Talk at IEEE Solid State Circuits Society & IEEE Computer Society, Santa Clara Valley

June 19, 2010 By dmodha

On June 17, 2010, I gave an Invited Talk at the IEEE Solid State Circuits Society & IEEE Computer Society, Santa Clara Valley. To my great surprise and delight, Professor Bernard Widrow attended the talk.

Filed Under: Accomplishments, Brain-inspired Computing, Presentations

Efficient Physical Embedding of Topologically Complex Information Processing Networks in Brains and Computer Circuits

April 30, 2010 By dmodha

Danielle S. Bassett, Daniel L. Greenfield, Andreas Meyer-Lindenberg, Daniel R. Weinberger, Simon W. Moore, and Edward T. Bullmore published an article last week in PLoS Computational Biology:

ABSTRACT: Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent’s rule, which is related to the dimensionality of the circuit’s interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent’s rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
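Rent's rule relates the number of external connections T of a circuit partition to the number of processing elements B it contains, via a power law T = k·B^p, where p is the Rent exponent. As a hedged illustration (synthetic data, not from the paper), the exponent can be estimated by a straight-line fit in log-log space:

```python
import numpy as np

# Synthetic partition data obeying T = k * B^p exactly, with k = 3, p = 0.75.
# Real estimates would come from recursive partitioning of a measured network.
blocks = np.array([4, 16, 64, 256, 1024])      # elements per partition (B)
terminals = 3.0 * blocks ** 0.75               # external connections (T)

# Rent's rule is linear in log-log space: log T = log k + p * log B.
p, log_k = np.polyfit(np.log(blocks), np.log(terminals), 1)
print(round(p, 3))       # Rent exponent ~ 0.75
print(round(np.exp(log_k), 3))  # Rent coefficient ~ 3.0
```

A higher Rent exponent indicates a higher-dimensional interconnect topology; the paper's point is that brain networks exhibit exponents implying a topological dimensionality above the 2 or 3 Euclidean dimensions of their physical embedding.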

Filed Under: Brain-inspired Computing

The Brain’s Router: A Cortical Network Model of Serial Processing in the Primate Brain

April 30, 2010 By dmodha

Ariel Zylberberg, Diego Fernández Slezak, Pieter R. Roelfsema, Stanislas Dehaene, and Mariano Sigman published a wonderful article in PLoS Computational Biology yesterday.

ABSTRACT: The human brain efficiently solves certain operations such as object recognition and categorization through a massively parallel network of dedicated processors. However, human cognition also relies on the ability to perform an arbitrarily large set of tasks by flexibly recombining different processors into a novel chain. This flexibility comes at the cost of a severe slowing down and a seriality of operations (100–500 ms per step). A limit on parallel processing is demonstrated in experimental setups such as the psychological refractory period (PRP) and the attentional blink (AB) in which the processing of an element either significantly delays (PRP) or impedes conscious access (AB) of a second, rapidly presented element. Here we present a spiking-neuron implementation of a cognitive architecture where a large number of local parallel processors assemble together to produce goal-driven behavior. The precise mapping of incoming sensory stimuli onto motor representations relies on a “router” network capable of flexibly interconnecting processors and rapidly changing its configuration from one task to another. Simulations show that, when presented with dual-task stimuli, the network exhibits parallel processing at peripheral sensory levels, a memory buffer capable of keeping the result of sensory processing on hold, and a slow serial performance at the router stage, resulting in a performance bottleneck. The network captures the detailed dynamics of human behavior during dual-task-performance, including both mean RTs and RT distributions, and establishes concrete predictions on neuronal dynamics during dual-task experiments in humans and non-human primates.

Filed Under: Brain-inspired Computing

Plenary Talk at “Toward a Science of Consciousness”

April 14, 2010 By dmodha

Just delivered a Plenary Talk at the "Toward a Science of Consciousness" conference in Tucson, AZ.

Filed Under: Accomplishments, Brain-inspired Computing, Presentations


