
Dharmendra S. Modha

My Work and Thoughts.


Archives for 2010

Breaking News: Long-Distance Wiring Diagram of the Monkey Brain

July 27, 2010 By dmodha

Macaque Long-Distance Brain Network

Today, July 27, the Proceedings of the National Academy of Sciences of the USA (PNAS) published the paper “Network architecture of the long-distance pathways in the macaque brain” by Raghavendra Singh (IBM Research – India) and me. The main paper is 6 pages long and the appendix is 57 pages.

Figure 1 of the main paper: Visualized below is the long-distance network of the macaque brain, spanning the cortex, thalamus, and basal ganglia, that captures many aspects of the cumulative contribution of a whole community of neuroanatomists over the last half-century. The network has 383 brain regions and 6,602 long-distance connections. A high-resolution version of the figure is here.

We have uncovered and mapped the most comprehensive long-distance network of the macaque monkey brain yet assembled, which is essential for understanding the brain’s behavior, complexity, dynamics, and computation. We can now gain unprecedented insight into how information travels and is processed across the brain.

The brain network is organized and can be studied at multiple scales, connecting single neurons, small populations of neurons, brain regions, larger brain structures and functional subsystems together. Short-distance gray matter connections constitute “local roads” within a brain region and its sub-structures. We have focused on the long-distance brain connections that travel through the brain’s white matter, which are like the “interstate highways” between far-flung brain regions.

Building on the publicly available database called Collation of Connectivity data on the Macaque brain (CoCoMac), which compiles anatomical tracing data from over 400 scientific reports published over the last half-century, we have studied four times the number of brain regions and compiled nearly three times the number of connections of the largest previous endeavor. Our data may open up entirely new ways of analyzing, understanding, and, eventually, imitating the network architecture of the brain, which according to Marian C. Diamond and Arnold B. Scheibel is “the most complex mass of protoplasm on earth, perhaps even in our galaxy.”

Key findings:

“Network”:
Our network consists of 383 hierarchically organized regions spanning cortex, thalamus, and basal ganglia; models the presence of 6,602 directed long-distance connections; is nearly three times larger than any previously derived brain network; and contains subnetworks corresponding to classic corticocortical, corticosubcortical, and subcortico-subcortical fiber systems.
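
For network scientists who want to pull such a network into their favorite toolkit, here is a minimal sketch using networkx. The file name and two-column edge-list format are assumptions for illustration, not the actual distribution format of our data.

```python
# Hypothetical sketch: loading the long-distance network for analysis.
# The file name and "source target" edge-list format are assumptions.
import networkx as nx

G = nx.read_edgelist("macaque_long_distance.txt",
                     create_using=nx.DiGraph, nodetype=str)
print(G.number_of_nodes(), "regions,", G.number_of_edges(), "connections")
# For the full network, one would expect 383 regions and 6,602 connections.
```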

“Core sub-network”:
The brain network contains a tightly integrated core that might be at the heart of higher cognition, perhaps even consciousness. Strikingly, this discovery aligns with three decades of functional imaging studies that exhibit a “task-positive” network implicated in goal-directed performance and a “task-negative” network activated when the brain is withdrawn and at wakeful rest.
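
The paper derives the core with its own procedure; as a rough illustration of the general idea, here is a sketch that extracts a tightly connected core from toy data using k-core decomposition, a standard network-theoretic stand-in rather than our exact method.

```python
# Illustrative sketch: k-core decomposition as a proxy for finding a
# tightly integrated core. The regions and edges below are toy data.
import networkx as nx

G = nx.DiGraph([
    ("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"),
    ("A", "C"), ("C", "A"),            # a densely interconnected trio
    ("D", "A"), ("E", "C"),            # peripheral regions
])

# k-core is defined on undirected degree, so collapse direction first.
core = nx.k_core(G.to_undirected(), k=2)
print(sorted(core.nodes()))            # -> ['A', 'B', 'C']
```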

“Topological Centrality”:
Much as search engines use network theory to uncover highly ranked pages on the world-wide web, we ranked brain regions and found evidence that the prefrontal cortex is a topologically central part of the brain that might act as an integrator and distributor of information.
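
To give a flavor of this kind of ranking, here is a minimal sketch that applies PageRank, the textbook search-engine ranking algorithm, to a toy directed graph. The region names and edges are placeholders, and the ranking method in the paper may differ in detail.

```python
# Illustrative sketch: ranking brain regions by topological centrality
# with PageRank. The regions and connections below are placeholders.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("PFC", "parietal"), ("parietal", "PFC"),
    ("PFC", "temporal"), ("temporal", "PFC"),
    ("V1", "parietal"),  ("V1", "temporal"),
])

# PageRank assigns high scores to regions that receive connections
# from other highly ranked regions.
ranks = nx.pagerank(G, alpha=0.85)
for region, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{region}: {score:.3f}")
```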

“Degree distribution”:
The degree distribution is a key signature that provides clues about the possible evolution of the network. The brain network does not appear to be “scale-free” like the world-wide web and social networks, which are logical and can grow without constraint, but instead appears to be “exponential” like air-traffic networks, which are physical and must satisfy resource constraints. This finding will help us design the routing architecture for a network of cognitive-computing chips.
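
As a sketch of how the two regimes are distinguished: an exponential degree distribution falls on a straight line on a semi-log plot, while a scale-free (power-law) distribution falls on a straight line on a log-log plot. The degrees below are simulated toy data; a rigorous analysis would use formal model-selection tests.

```python
# Illustrative sketch: comparing exponential vs. power-law fits of a
# degree distribution. The degree sequence is simulated, not our data.
import numpy as np

rng = np.random.default_rng(0)
degrees = rng.exponential(scale=17.0, size=383).astype(int) + 1

values, counts = np.unique(degrees, return_counts=True)
p = counts / counts.sum()

expo_slope, _ = np.polyfit(values, np.log(p), 1)           # semi-log fit
power_slope, _ = np.polyfit(np.log(values), np.log(p), 1)  # log-log fit

print("exponential decay rate:", -expo_slope)
print("power-law exponent:", -power_slope)
```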

What was previously scattered across 410 papers, and limited to neuroanatomists specializing in the wetware of the experimental animal, is now unified and accessible to network scientists who can unleash their algorithmic software toolkits. The network opens the door to the large-scale network-theoretic analysis that has been so successful in understanding the internet, metabolic networks, protein-interaction networks, various social networks, and in searching the world-wide web. The network will be an indispensable foundation for clinical, systems, cognitive, and computational neurosciences as well as cognitive computing.

To paraphrase Lincoln, the data is of the monkey, by the people, and for the people. We have elected to share it with the entire community.

Figure S6 of the appendix: Visualized below is a hierarchical macaque brain map consisting of 383 regions of the brain (Br), which is divided into the cortex (Cx), diencephalon (DiE), and basal ganglia (BG). Further, the cortex is divided into the temporal lobe (TL#2), frontal lobe (FL#2), parietal lobe (Pl#6), occipital lobe (OC#2), insula (Insula), and cingulate cortex (CgG#2), and so on. Each brain region is represented by its acronym or abbreviation enclosed in a small colored rectangle. The brain regions in the three outermost circles are leaves that cannot be further subdivided. A color wheel is used for better discrimination among brain regions; for the leaf brain regions in the two outermost circles, the color wheel is rotated by 120 and 240 degrees, respectively. A high-resolution version of the figure is here.

Figure 2 of the main paper: Visualized below is the innermost core, a central sub-network that is far more tightly integrated than the overall network. Information likely spreads more swiftly within the innermost core than through the overall network; the overall network communicates with itself mainly through the innermost core; and the innermost core contains major components of the task-positive and task-negative networks derived via functional imaging research. A high-resolution version of the figure is here. One of our mentors, Professor Kenneth Kreutz-Delgado, upon seeing this picture, dubbed it “The Mandala of the Mind.”

We were inspired by Poincaré, who said: “The scientist … studies nature because he takes pleasure in it; and he takes pleasure in it because it is beautiful. If nature were not beautiful, it would not be worth knowing, and life would not be worth living.” In creating the network visualization, we strove to express the truth in artistic terms and to seek beauty in science.

For the technically interested reader, here is a detailed PowerPoint slide show with voice narration (60 slides, ~52 minutes, ~50 MB). It may take some time to download given its size.

Core Subnetwork: The Mandala of the Mind

Filed Under: Accomplishments, Brain-inspired Computing, Papers, Press

IBM’s Pat Goldberg Memorial Best Paper Award

July 20, 2010 By dmodha

Rajagopal Ananthanarayanan, Steven Esser, Horst Simon, and I authored the paper "The Cat is Out of the Bag" last year at Supercomputing 2009, where it won ACM’s Gordon Bell Prize. We just found out that this paper is one of the recipients of IBM’s Pat Goldberg Memorial Best Paper Award. Over 145 papers in computer science, electrical engineering, and mathematical sciences published in refereed conference proceedings and journals in 2009 were submitted by IBM Research authors worldwide for consideration.

Citation: The work reported in this paper is truly multidisciplinary, combining neuroscience (physiology and anatomy of the brain) with computer science (parallelism, scaling, performance evaluation). The paper won the prestigious 2009 ACM Gordon Bell Prize. It is a "tour de force" in supercomputing, in the sense that it uses very large configurations of Blue Gene/P (36k nodes and 144 TB of memory) to their fullest. The computations are CPU intensive, memory intensive, communication intensive, I/O intensive, and prone to load imbalance. Several challenges had to be addressed before this application could run at the scale and performance reported. The performance evaluation is very detailed, reporting scaling results for all modes of execution; both weak and strong scaling are reported. In addition to measuring execution time, more detailed measurements with performance counters were also performed. The scientific result is unprecedented: the authors were the first to simulate a cortex at the scale of a cat cortex, more than an order of magnitude larger than any previously reported cortical simulation. The collaboration between a group of scientists with widely different backgrounds makes this work particularly attractive.

Filed Under: Accomplishments, Prizes

Binding Sparse Spatiotemporal Patterns in Spiking Computation

July 16, 2010 By dmodha

Today, at the International Joint Conference on Neural Networks, Steven Esser (of IBM Research – Almaden) presented our research on an auto-associative memory network built from spiking neurons.  Auto-associative memory involves storing a pattern such that presenting part of the pattern can trigger recall of the complete pattern.  Such a mechanism is at work in the way we are able to recall the appearance of many objects, such as a person’s face, even when much of it is hidden by another object.  Performing auto-associative memory with spiking neurons poses particular challenges, as the communication between such neurons is often spread out across time and mixed with other signals, rather than being delivered in a clean "snapshot".  However, spiking neurons are particularly appealing to work with, as spiking communication is at the heart of the extremely powerful yet tremendously efficient processing of the mammalian brain.  Our work is further aimed at learning patterns in an unsupervised and noise-tolerant fashion: detecting patterns that occur repeatedly without the use of explicit "pattern present" signals, removing the noise that the patterns may be embedded in, and storing those patterns.

The key to the function of this network is a design that includes two reciprocally connected layers of neurons and a learning rule that modifies the connections between the two layers to allow for the formation of auto-associative memories.  Input to the network enters through the first layer, while neurons in the second layer gradually learn to respond selectively to patterns in the first layer that occur repeatedly.  As the second-layer neurons learn to respond selectively to the appearance of a complete or partial version of specific patterns, they simultaneously learn to drive the appearance of the same pattern whenever they activate.  Thus, the appearance of a portion of a learned pattern in the first layer will activate a selective neuron in the second layer, which will in turn drive recall of the "missing piece" of the pattern in the first layer.  In contrast to previous work using spiking neurons to perform auto-associative memory, this system does not rely on the formation of slowly building attractors, but rather is able to use single spikes to produce very rapid pattern detection and recall.
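
For readers who want to see the store-and-recall structure in miniature, here is a simplified, non-spiking analogue. It hard-wires one second-layer unit per stored pattern, whereas our network learns its selective units online from noisy spiking input, so this illustrates only the two-layer recall principle, not the model in the paper.

```python
# Simplified, non-spiking analogue of two-layer auto-associative recall.
# Second-layer units detect stored patterns and drive their completion.
import numpy as np

patterns = np.array([               # binary patterns over 8 input units
    [1, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 1, 1],
], dtype=float)

W_up = patterns     # layer-1 -> layer-2: each unit matches one pattern
W_down = patterns   # layer-2 -> layer-1: each unit drives its pattern

def recall(partial, threshold=2.0):
    """Complete a partial binary pattern with one up-down pass."""
    drive = W_up @ partial                     # evidence per pattern
    winners = (drive >= threshold).astype(float)
    return np.clip(partial + winners @ W_down, 0.0, 1.0)

cue = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)  # fragment of pattern 0
print(recall(cue))    # -> [1. 1. 1. 0. 0. 0. 0. 0.]
```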

The paper is here.

Filed Under: Accomplishments, Brain-inspired Computing, Papers

Talk at IEEE Solid State Circuits Society & IEEE Computer Society, Santa Clara Valley

June 19, 2010 By dmodha

On June 17, 2010, I gave an invited talk at the IEEE Solid State Circuits Society & IEEE Computer Society, Santa Clara Valley. To my great surprise and delight, Professor Bernard Widrow attended the talk; he is shown in the photograph below.

Filed Under: Accomplishments, Brain-inspired Computing, Presentations

Efficient Physical Embedding of Topologically Complex Information Processing Networks in Brains and Computer Circuits

April 30, 2010 By dmodha

Danielle S. Bassett, Daniel L. Greenfield, Andreas Meyer-Lindenberg, Daniel R. Weinberger, Simon W. Moore, Edward T. Bullmore published an article last week in PLoS Computational Biology:

ABSTRACT: Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent’s rule, which is related to the dimensionality of the circuit’s interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent’s rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
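
Rent’s rule states that a partition containing N processing elements has on the order of E = k · N^p boundary connections, so the exponent p can be estimated by a straight-line fit in log-log coordinates. Here is an illustrative sketch over made-up partition data.

```python
# Illustrative sketch: estimating a Rent exponent p from E = k * N**p.
# The (N, E) pairs are made-up stand-ins for real partitioning data.
import numpy as np

N = np.array([4, 8, 16, 32, 64, 128])   # elements inside each partition
E = np.array([5, 8, 13, 21, 34, 55])    # connections crossing its boundary

p, log_k = np.polyfit(np.log(N), np.log(E), 1)  # log E = p*log N + log k
print(f"Rent exponent p = {p:.2f}, prefactor k = {np.exp(log_k):.2f}")
```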

Filed Under: Brain-inspired Computing
