
Dharmendra S. Modha

My Work and Thoughts.


Kwabena Boahen: Neurogrid–Emulating a million neurons in the cortex

March 28, 2008 By dmodha

Today, I had the tremendous good fortune to host Professor Kwabena Boahen from Stanford University for a widely attended colloquium talk at Almaden. Professor Boahen is a brilliant scientist and bioengineer who seeks to emulate the brain in hardware, and a protégé of Professor Carver Mead of Caltech.

My favorite papers among his many wonderful articles are "Neuronal Ion-Channel Dynamics in Silicon" and the now-classic "Point-to-Point Connectivity Between Neuromorphic Chips Using Address-Events."
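The core idea of the address-event paper can be sketched in a few lines: each spike is transmitted over a shared digital bus as the address of the neuron that fired, and a routing table realizes arbitrary synaptic fan-out. The sketch below is a toy illustration of that idea, not the actual chip protocol; all names and the routing-table scheme are my own.

```python
# Toy sketch of address-event representation (AER): spikes travel as
# (timestamp, neuron-address) events on a shared bus, and a routing
# table delivers each event to its synaptic targets.

def encode_spikes(spike_trains):
    """Merge per-neuron spike times into one time-sorted event stream."""
    events = [(t, addr) for addr, times in spike_trains.items() for t in times]
    return sorted(events)

def route_events(events, fanout_table):
    """Deliver each event to every target address in the routing table."""
    delivered = {}
    for t, addr in events:
        for target in fanout_table.get(addr, []):
            delivered.setdefault(target, []).append(t)
    return delivered

# Neuron 0 spikes at t=1.0 and t=3.0 ms; neuron 1 spikes at t=2.0 ms.
events = encode_spikes({0: [1.0, 3.0], 1: [2.0]})
# Neuron 0 projects to targets 7 and 9; neuron 1 projects to target 9.
targets = route_events(events, {0: [7, 9], 1: [9]})
```

The point of the scheme is that connectivity lives in the routing table rather than in dedicated wires, so synaptic patterns can be reprogrammed without refabricating hardware.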

ABSTRACT

The digital technique used to simulate neural activity has not changed since Hodgkin and Huxley pioneered ion-channel modeling in the 1950s. Since then, progress has come incrementally from computer performance doubling every eighteen months (Moore's Law); with that doubling now plateauing, real-time cortex-scale simulations lie outside the reach of even the fastest supercomputers for the foreseeable future. With recent advances in neural recording and imaging techniques, our ability to characterize the brain's structure and function now outstrips our ability to simulate its behavior. Fortuitously, the analog technique developed by neuromorphic engineers over the past two decades has matured, with the recently developed ability to program various types of ion channels as well as arbitrary patterns of synaptic connections.

Exploiting the analog technique, Neurogrid will help neuroscientists vet various hypotheses by performing simulations large enough to include interactions between multiple cortical areas yet detailed enough to account for what is known about brain function and neuronal structure. While neuronal-level mechanisms have been linked to network-level functions through computational modeling (e.g., generation of brain rhythms), scaling these models up to the area- and system-levels (where cognition emerges) has proved difficult. In the visual system alone, there are three dozen cortical areas, each with its own representation of the visual scene. It is not understood how conflicting information in these areas is reconciled.

When it is completed this year, Neurogrid will emulate (i.e., simulate in real time) a million neurons in the cortex, rivaling the performance of 20–200 IBM Blue Gene racks on this particular task at under a thousandth the cost.
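To make the abstract's cost argument concrete: the "digital technique" it refers to is time-stepped numerical integration of the Hodgkin–Huxley equations, which every simulated neuron must pay for at every time step. A minimal forward-Euler sketch, using the standard textbook squid-axon parameters (purely illustrative; not Neurogrid's method, which replaces this arithmetic with analog circuit dynamics):

```python
import math

def hodgkin_huxley(i_ext=10.0, t_stop=50.0, dt=0.01):
    """Forward-Euler integration of the classic Hodgkin-Huxley equations.
    Units: mV, ms, uA/cm^2; standard squid-axon parameters."""
    c, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    # Voltage-dependent opening/closing rates for the m, h, n gates.
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * math.exp(-(v + 65.0) / 80.0)
    v = -65.0
    m = a_m(v) / (a_m(v) + b_m(v))   # gating variables at rest
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))
    trace = []
    for _ in range(int(t_stop / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c            # membrane equation
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

v_trace = hodgkin_huxley()   # membrane voltage spikes under constant drive
```

Multiplying this per-neuron, per-step work by a million neurons at sub-millisecond resolution is what puts real-time digital simulation in supercomputer territory, and why the analog approach is attractive.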

BIOGRAPHY

Professor Kwabena Boahen joined Stanford's Bioengineering Department as Associate Professor in December 2005. From 1997 to 2005 he was on the faculty of the University of Pennsylvania, Philadelphia, PA. He is a bioengineer who is using silicon integrated circuits to emulate the way neurons compute, linking the seemingly disparate fields of electronics and computer science with neurobiology and medicine. His interest in neural networks developed soon after he left his native Ghana to pursue undergraduate studies in Electrical and Computer Engineering at Johns Hopkins University, Baltimore, in 1985. He went on to earn a doctorate in Computation and Neural Systems at the California Institute of Technology in 1997. His lab is currently developing Neurogrid, a specialized hardware platform that will enable the cortex's inner workings to be simulated in detail, something outside the realm of even the fastest supercomputers. Professor Boahen's numerous contributions to the field of neuromorphic engineering include a silicon retina that could be used to give the blind sight and a self-organizing chip that emulates the way the juvenile brain wires itself up. His scholarship is widely recognized, with over sixty publications to his name, including a cover story in the May 2005 issue of Scientific American. He has received several distinguished honors, including a Fellowship from the Packard Foundation in 1999, a CAREER award from the National Science Foundation in 2001, a Young Investigator Award from the Office of Naval Research in 2002, and the National Institutes of Health Director's Pioneer Award in 2006. The professor is an avid cyclist.

Filed Under: Brain-inspired Computing, Interesting People

Sir Arthur C Clarke: 1917-2008

March 19, 2008 By dmodha

After a prolific and esteemed career, Sir Arthur C Clarke has passed away in Sri Lanka.

See the NY Times article here.

Filed Under: Interesting People

Brain map project set to revolutionise neuroscience

March 16, 2008 By dmodha

The Allen Institute for Brain Science in Seattle, Washington, US, is today launching a four-year, $55-million effort to build a three-dimensional map documenting the levels of activity of some 20,000 different genes across the human brain.

“The Human Genome Project was the ‘what’, and our project is the ‘where’,” says Allan Jones, the institute’s chief scientific officer.

Please see here for the complete press article.

Filed Under: Brain-inspired Computing

“Reverse-engineer the brain” — Grand Challenge by NAE

February 18, 2008 By dmodha

On February 15, the National Academy of Engineering (NAE) unveiled 14 grand challenges for the 21st century.

One of the challenges is "Reverse-engineer the brain," with the subtext that "The intersection of engineering and neuroscience promises great advances in health care, manufacturing, and communication." Here is the complete description:

For decades, some of engineering’s best minds have focused their thinking skills on how to create thinking machines — computers capable of emulating human intelligence.

Why should you reverse-engineer the brain?

While some thinking machines have mastered specific narrow skills (playing chess, for instance), general-purpose artificial intelligence (AI) has remained elusive.

Part of the problem, some experts now believe, is that artificial brains have been designed without much attention to real ones. Pioneers of artificial intelligence approached thinking the way that aeronautical engineers approached flying: without learning much from birds. It has turned out, though, that the secrets of how living brains work may offer the best guide to engineering the artificial variety. Discovering those secrets by reverse-engineering the brain promises enormous opportunities for reproducing intelligence the way assembly lines spit out cars or computers.

Figuring out how the brain works will offer rewards beyond building smarter computers. Advances gained from studying the brain may in return pay dividends for the brain itself. Understanding its methods will enable engineers to simulate its activities, leading to deeper insights about how and why the brain works and fails. Such simulations will offer more precise methods for testing potential biotechnology solutions to brain disorders, such as drugs or neural implants. Neurological disorders may someday be circumvented by technological innovations that allow wiring of new materials into our bodies to do the jobs of lost or damaged nerve cells. Implanted electronic devices could help victims of dementia to remember, blind people to see, and crippled people to walk.

Sophisticated computer simulations could also be used in many other applications. Simulating the interactions of proteins in cells would be a novel way of designing and testing drugs, for instance. And simulation capacity will be helpful beyond biology, perhaps in forecasting the impact of earthquakes in ways that would help guide evacuation and recovery plans.

Much of this power to simulate reality effectively will come from increased computing capability rooted in the reverse-engineering of the brain. Learning from how the brain itself learns, researchers will likely improve knowledge of how to design computing devices that process multiple streams of information in parallel, rather than the one-step-at-a-time approach of the basic PC. Another feature of real brains is the vast connectivity of nerve cells, the biological equivalent of computer signaling switches. While nerve cells typically form tens of thousands of connections with their neighbors, traditional computer switches typically possess only two or three. AI systems attempting to replicate human abilities, such as vision, are now being developed with more, and more complex, connections.

What are the applications for this information?

Already, some applications using artificial intelligence have benefited from simulations based on brain reverse-engineering. Examples include AI algorithms used in speech recognition and in machine vision systems in automated factories. More advanced AI software should in the future be able to guide devices that can enter the body to perform medical diagnoses and treatments.

Of potentially even greater impact on human health and well-being is the use of new AI insights for repairing broken brains.  Damage from injury or disease to the hippocampus, a brain structure important for learning and memory, can disrupt the proper electrical signaling between nerve cells that is needed for forming and recalling memories. With knowledge of the proper signaling patterns in healthy brains, engineers have begun to design computer chips that mimic the brain’s own communication skills. Such chips could be useful in cases where healthy brain tissue is starved for information because of the barrier imposed by damaged tissue. In principle, signals from the healthy tissue could be recorded by an implantable chip, which would then generate new signals to bypass the damage. Such an electronic alternate signaling route could help restore normal memory skills to an impaired brain that otherwise could not form them.

“Neural prostheses” have already been put to use in the form of cochlear implants to treat hearing loss and stimulating electrodes to treat Parkinson’s disease. Progress has also been made in developing “artificial retinas,” light-sensitive chips that could help restore vision.

Even more ambitious programs are underway for systems to control artificial limbs. Engineers envision computerized implants capable of receiving the signals from thousands of the brain’s nerve cells and then wirelessly transmitting that information to an interface device that would decode the brain’s intentions. The interface could then send signals to an artificial limb, or even directly to nerves and muscles, giving directions for implementing the desired movements.

Other research has explored, with some success, implants that could literally read the thoughts of immobilized patients and signal an external computer, giving people unable to speak or even move a way to communicate with the outside world.

What is needed to reverse-engineer the brain?

The progress so far is impressive. But to fully realize the brain’s potential to teach us how to make machines learn and think, further advances are needed in the technology for understanding the brain in the first place. Modern noninvasive methods for simultaneously measuring the activity of many brain cells have provided a major boost in that direction, but details of the brain’s secret communication code remain to be deciphered. Nerve cells communicate by firing electrical pulses that release small molecules called neurotransmitters, chemical messengers that hop from one nerve cell to a neighbor, inducing the neighbor to fire a signal of its own (or, in some cases, inhibiting the neighbor from sending signals). Because each nerve cell receives messages from tens of thousands of others, and circuits of nerve cells link up in complex networks, it is extremely difficult to completely trace the signaling pathways.

Furthermore, the code itself is complex — nerve cells fire at different rates, depending on the sum of incoming messages. Sometimes the signaling is generated in rapid-fire bursts; sometimes it is more leisurely. And much of mental function seems based on the firing of multiple nerve cells around the brain in synchrony. Teasing out and analyzing all the complexities of nerve cell signals, their dynamics, pathways, and feedback loops, presents a major challenge.
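The rate-coding idea in the paragraph above — that a nerve cell fires at a rate depending on the sum of its incoming messages — is often illustrated with the leaky integrate-and-fire model. A toy sketch, with all parameter values illustrative rather than physiological:

```python
def lif_firing_rate(i_in, t_stop=1000.0, dt=0.1, tau=20.0,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron: integrate a constant summed
    input (arbitrary units) for t_stop ms and return the rate in Hz."""
    v, spikes = v_rest, 0
    for _ in range(int(t_stop / dt)):
        v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration
        if v >= v_thresh:                        # threshold crossing: spike
            v = v_reset
            spikes += 1
    return spikes * 1000.0 / t_stop              # ms -> Hz

# Stronger summed input drives a higher firing rate.
rates = [lif_firing_rate(i) for i in (16.0, 24.0, 40.0)]
```

This captures only the simplest part of the neural code; the bursts, synchrony, and feedback loops described above are exactly what such a reduced model leaves out.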

Today’s computers have electronic logic gates that are either on or off, but if engineers could replicate neurons’ ability to assume various levels of excitation, they could create much more powerful computing machines. Success toward fully understanding brain activity will, in any case, open new avenues for deeper understanding of the basis for intelligence and even consciousness, no doubt providing engineers with insight into even grander accomplishments for enhancing the joy of living.

References

Berger, T.W., et al. “Restoring Lost Cognitive Function,” IEEE Engineering in Medicine and Biology Magazine (September/October 2005), pp. 30–44.

Griffith, A. “Chipping In,” Scientific American (February 2007), pp. 18–20.

Handelman, S. “The Memory Hacker,” Popular Science (2005).

Hapgood, F. “Reverse-Engineering the Brain,” Technology Review (July 11, 2006).

Lebedev, M.A. and M.A.L. Nicolelis. “Brain-Machine Interfaces: Past, Present, and Future,” Trends in Neurosciences 29 (September 2006), pp. 536–546.

Filed Under: Brain-inspired Computing

The Footprints of God

January 26, 2008 By dmodha

I had enormous fun reading Greg Iles’s wonderful novel The Footprints of God. If you love supercomputers, brain simulations, high-resolution MRI, artificial intelligence, disembodied cognition, the National Security Agency, etc., then you will enjoy this sci-fi techno-thriller. The book is wonderfully researched and masterfully penned.

Filed Under: Brain-inspired Computing

