
Dharmendra S. Modha

My Work and Thoughts.

“Reverse-engineer the brain” — Grand Challenge by NAE

February 18, 2008 By dmodha

On February 15, the National Academy of Engineering (NAE) unveiled 14 grand challenges for the 21st century.

One of the challenges is "Reverse-engineer the brain," with the subtext that "The intersection of engineering and neuroscience promises great advances in health care, manufacturing, and communication." Here is the complete description:

For decades, some of engineering’s best minds have focused their thinking skills on how to create thinking machines — computers capable of emulating human intelligence.

Why should you reverse-engineer the brain?

While some of these thinking machines have mastered specific narrow skills — playing chess, for instance — general-purpose artificial intelligence (AI) has remained elusive.

Part of the problem, some experts now believe, is that artificial brains have been designed without much attention to real ones. Pioneers of artificial intelligence approached thinking the way that aeronautical engineers approached flying without much learning from birds. It has turned out, though, that the secrets about how living brains work may offer the best guide to engineering the artificial variety. Discovering those secrets by reverse-engineering the brain promises enormous opportunities for reproducing intelligence the way assembly lines spit out cars or computers.

Figuring out how the brain works will offer rewards beyond building smarter computers. Advances gained from studying the brain may in return pay dividends for the brain itself. Understanding its methods will enable engineers to simulate its activities, leading to deeper insights about how and why the brain works and fails. Such simulations will offer more precise methods for testing potential biotechnology solutions to brain disorders, such as drugs or neural implants. Neurological disorders may someday be circumvented by technological innovations that allow wiring of new materials into our bodies to do the jobs of lost or damaged nerve cells. Implanted electronic devices could help victims of dementia to remember, blind people to see, and crippled people to walk.

Sophisticated computer simulations could also be used in many other applications. Simulating the interactions of proteins in cells would be a novel way of designing and testing drugs, for instance. And simulation capacity will be helpful beyond biology, perhaps in forecasting the impact of earthquakes in ways that would help guide evacuation and recovery plans.

Much of this power to simulate reality effectively will come from increased computing capability rooted in the reverse-engineering of the brain. Learning from how the brain itself learns, researchers will likely improve knowledge of how to design computing devices that process multiple streams of information in parallel, rather than the one-step-at-a-time approach of the basic PC. Another feature of real brains is the vast connectivity of nerve cells, the biological equivalent of computer signaling switches. While nerve cells typically form tens of thousands of connections with their neighbors, traditional computer switches typically possess only two or three. AI systems attempting to replicate human abilities, such as vision, are now being developed with more, and more complex, connections.
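
To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the NAE text) of the same workload handled one step at a time versus as several independent streams in flight at once:

```python
# A toy contrast between one-step-at-a-time processing and many streams
# handled concurrently. (CPython threads illustrate the structure of the
# parallel version; true hardware parallelism would need more than this.)
from concurrent.futures import ThreadPoolExecutor

def process_stream(stream):
    """Stand-in for a feature detector working on one input stream."""
    return sum(x * x for x in stream)

streams = [[i, i + 1, i + 2] for i in range(8)]  # eight independent input streams

# Sequential, PC-style: each stream waits for the previous one to finish.
serial_results = [process_stream(s) for s in streams]

# Parallel, brain-style: all streams are dispatched at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel_results = list(pool.map(process_stream, streams))

assert serial_results == parallel_results  # same answers, different structure
```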

What are the applications for this information?

Already, some applications using artificial intelligence have benefited from simulations based on brain reverse-engineering. Examples include AI algorithms used in speech recognition and in machine vision systems in automated factories. More advanced AI software should in the future be able to guide devices that can enter the body to perform medical diagnoses and treatments.

Of potentially even greater impact on human health and well-being is the use of new AI insights for repairing broken brains.  Damage from injury or disease to the hippocampus, a brain structure important for learning and memory, can disrupt the proper electrical signaling between nerve cells that is needed for forming and recalling memories. With knowledge of the proper signaling patterns in healthy brains, engineers have begun to design computer chips that mimic the brain’s own communication skills. Such chips could be useful in cases where healthy brain tissue is starved for information because of the barrier imposed by damaged tissue. In principle, signals from the healthy tissue could be recorded by an implantable chip, which would then generate new signals to bypass the damage. Such an electronic alternate signaling route could help restore normal memory skills to an impaired brain that otherwise could not form them.
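
The bypass loop described above can be summarized in a short Python sketch. Every function name here (read_healthy_tissue, decode_pattern, stimulate_downstream) is a hypothetical placeholder for illustration; no real implant API is implied:

```python
# Hypothetical sketch of the record-decode-stimulate loop. None of these
# functions correspond to a real neural-implant API.
import random

def read_healthy_tissue():
    # Placeholder: sample a spike pattern from tissue upstream of the damage.
    return [random.random() > 0.8 for _ in range(32)]

def decode_pattern(spikes):
    # Placeholder: infer the signal the downstream tissue would have received
    # if the pathway were intact.
    return [1.0 if s else 0.0 for s in spikes]

def stimulate_downstream(signal):
    # Placeholder: re-emit the regenerated signal past the damaged region.
    print("stimulating", int(sum(signal)), "channels")

# One pass of the bypass loop: record upstream, regenerate, deliver downstream.
stimulate_downstream(decode_pattern(read_healthy_tissue()))
```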

“Neural prostheses” have already been put to use in the form of cochlear implants to treat hearing loss and stimulating electrodes to treat Parkinson’s disease. Progress has also been made in developing “artificial retinas,” light-sensitive chips that could help restore vision.

Even more ambitious programs are underway for systems to control artificial limbs. Engineers envision computerized implants capable of receiving the signals from thousands of the brain’s nerve cells and then wirelessly transmitting that information to an interface device that would decode the brain’s intentions. The interface could then send signals to an artificial limb, or even directly to nerves and muscles, giving directions for implementing the desired movements.
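
As a rough illustration of the decoding step, here is a toy linear decoder that maps recorded firing rates to an intended 2-D movement. The synthetic data and least-squares calibration are my own assumptions, not the algorithm of any deployed system:

```python
# Toy brain-machine-interface decoder: firing rates from many recorded
# neurons are mapped to an intended 2-D velocity by a linear readout.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100
true_weights = rng.normal(size=(n_neurons, 2))  # each neuron "votes" for a direction

def record_firing_rates(intended_velocity):
    # Toy model: rates are a noisy linear function of the intended movement.
    return true_weights @ intended_velocity + rng.normal(scale=0.5, size=n_neurons)

# Calibrate a decoder from example (rates, intention) pairs via least squares.
intentions = rng.normal(size=(200, 2))
rates = np.stack([record_firing_rates(v) for v in intentions])
decoder, *_ = np.linalg.lstsq(rates, intentions, rcond=None)

# Decode a new intention from recorded rates; this is what the interface
# would forward to an artificial limb.
intended = np.array([1.0, -0.5])
decoded = record_firing_rates(intended) @ decoder
print("intended:", intended, "decoded:", decoded.round(2))
```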

Other research has explored, with some success, implants that could literally read the thoughts of immobilized patients and signal an external computer, giving people unable to speak or even move a way to communicate with the outside world.

What is needed to reverse-engineer the brain?

The progress so far is impressive. But to fully realize the brain’s potential to teach us how to make machines learn and think, further advances are needed in the technology for understanding the brain in the first place. Modern noninvasive methods for simultaneously measuring the activity of many brain cells have provided a major boost in that direction, but details of the brain’s secret communication code remain to be deciphered. Nerve cells communicate by firing electrical pulses that release small molecules called neurotransmitters, chemical messengers that hop from one nerve cell to a neighbor, inducing the neighbor to fire a signal of its own (or, in some cases, inhibiting the neighbor from sending signals). Because each nerve cell receives messages from tens of thousands of others, and circuits of nerve cells link up in complex networks, it is extremely difficult to completely trace the signaling pathways.

Furthermore, the code itself is complex — nerve cells fire at different rates, depending on the sum of incoming messages. Sometimes the signaling is generated in rapid-fire bursts; sometimes it is more leisurely. And much of mental function seems based on the firing of multiple nerve cells around the brain in synchrony. Teasing out and analyzing all the complexities of nerve cell signals, their dynamics, pathways, and feedback loops, presents a major challenge.
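
A leaky integrate-and-fire unit is the standard minimal sketch of this behavior: the unit sums its inputs, leaks, and fires when a threshold is crossed, so its firing rate rises with the net drive. The parameters below are illustrative, not fit to biology:

```python
# Minimal leaky integrate-and-fire neuron: firing rate depends on the
# sum of incoming (excitatory minus inhibitory) messages.
def firing_rate(net_input, threshold=1.0, tau=20.0, dt=1.0, steps=1000):
    """Simulate a leaky integrate-and-fire unit; return spikes per step."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt / tau * (net_input - v)   # leaky integration of inputs
        if v >= threshold:                # threshold crossing: emit a spike
            spikes += 1
            v = 0.0                       # reset after firing
    return spikes / steps

# Weak drive never reaches threshold; stronger drive fires faster.
for drive in [0.5, 1.2, 2.0, 4.0]:
    print(f"net input {drive:.1f} -> rate {firing_rate(drive):.3f}")
```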

Today’s computers have electronic logic gates that are either on or off, but if engineers could replicate neurons’ ability to assume various levels of excitation, they could create much more powerful computing machines. Success toward fully understanding brain activity will, in any case, open new avenues for deeper understanding of the basis for intelligence and even consciousness, no doubt providing engineers with insight into even grander accomplishments for enhancing the joy of living.
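
A short comparison makes that contrast concrete: a gate's output is all-or-none, while a neuron-like unit can occupy a continuum of excitation levels (modeled here, as one common assumption, with a sigmoid):

```python
# Binary logic gate vs. graded neuron-like unit over the same inputs.
import numpy as np

inputs = np.linspace(-3, 3, 7)
gate = (inputs > 0).astype(float)        # on/off, like an electronic logic gate
graded = 1.0 / (1.0 + np.exp(-inputs))   # smooth excitation level, like a neuron

for x, g, n in zip(inputs, gate, graded):
    print(f"input {x:+.1f}: gate {g:.0f}, graded unit {n:.2f}")
```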

References

Berger, T. W., et al. "Restoring Lost Cognitive Function," IEEE Engineering in Medicine and Biology Magazine (September/October 2005), pp. 30-44.

Griffith, A. "Chipping In," Scientific American (February 2007), pp. 18-20.

Handelman, S. "The Memory Hacker," Popular Science (2005).

Hapgood, F. "Reverse-Engineering the Brain," Technology Review (July 11, 2006).

Lebedev, M. A. and M. A. L. Nicolelis. "Brain-machine interfaces: Past, present, and future," Trends in Neurosciences 29 (September 2006), pp. 536-546.

Filed Under: Brain-inspired Computing

The Footprints of God

January 26, 2008 By dmodha

I had enormous fun reading Greg Iles’s wonderful novel The Footprints of God. If you love supercomputers, brain simulations, high-resolution MRI, artificial intelligence, disembodied cognition, the National Security Agency, etc., then you will enjoy this sci-fi techno-thriller. The book is wonderfully researched and masterfully penned.

Filed Under: Brain-inspired Computing

“Why is Real-World Visual Object Recognition Hard?”

January 25, 2008 By dmodha

In a study published in PLoS Computational Biology, the authors address a key question: "Why is Real-World Visual Object Recognition Hard?"

Abstract: Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain’s anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, "natural" images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled "natural" images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist’s "null" model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a "simpler" recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.

Reference: Pinto N, Cox DD, DiCarlo JJ (2008) Why is real-world visual object recognition hard? PLoS Comput Biol 4(1): e27. doi:10.1371/journal.pcbi.0040027
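
For readers curious what a "V1-like" front end looks like in code, here is a generic sketch: a bank of oriented Gabor filters applied to an image patch. This captures only the broad idea; the paper's actual model, normalization, and evaluation protocol are more involved:

```python
# Generic V1-like front end: oriented Gabor filters as simple-cell models.
import numpy as np

def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Return a size x size Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.cos(2 * np.pi * xr / wavelength)          # oriented grating
    return envelope * carrier

rng = np.random.default_rng(0)
patch = rng.random((15, 15))  # stand-in for a grayscale image patch

# V1-like feature vector: response of each oriented filter to the patch.
orientations = np.linspace(0, np.pi, 4, endpoint=False)
features = [float(np.sum(patch * gabor(theta=t))) for t in orientations]
print("oriented responses:", np.round(features, 3))
```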

Filed Under: Brain-inspired Computing

E-noses Could Make Diseases Something to Sniff at

January 15, 2008 By dmodha

A very interesting article in Scientific American discusses how olfactory sensors may revolutionize medicine. A snippet is below.

Engineers are developing electronic versions of the human nose that will allow doctors, ever in search of less-invasive techniques, to tap into what the nose knows about the human body.

"The sense of smell has been used as a medical diagnostic tool for thousands of years," says Bill Hanson, an anesthesiologist and critical care specialist at the University of Pennsylvania in Philadelphia, who has studied whether odor can be used to diagnose an ailment. "Both diseases and bacteria that cause diseases have individual and unique odors. You can walk into a patient’s room and know immediately in some cases that the patient has such and such bacteria just because of the odor."

Filed Under: Brain-inspired Computing

Learning in Networks: from Spiking Neural Nets to Graphs

January 10, 2008 By dmodha

Yesterday, I attended an interesting talk by Victor Miagkikh as part of ACM’s SF Bay Area Data Mining Special Interest Group at the beautiful campus of SAP Labs.

Abstract:

Hebbian learning is a well-known principle of unsupervised learning in networks: if two events happen "close in time," then the strength of the connection between the network nodes producing those events increases. Is this a complete set of learning axioms? Given a reinforcement signal (reward) for a sequence of actions, we can add another axiom: "reward controls plasticity." Thus, we get a reinforcement learning algorithm that could be used for training spiking neural networks (SNNs). The author will demonstrate the utility of this algorithm on a maze learning problem. Can these learning principles be applied not only to neural but also to other kinds of networks? Yes; in fact, we will see their application to economic influence networks for portfolio optimization. Then, if time allows, we will consider another application: social networks for a movie recommendation engine, and other causality-inducing principles instead of "close in time." By the end of the talk, the author hopes that the audience will agree that the "reward controls plasticity" principle is a vital learning axiom.
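
As a sketch of the "reward controls plasticity" axiom, here is a toy reward-modulated Hebbian rule: the usual pre x post coactivity term is gated by a scalar reward. The task and constants are my own illustration, not the speaker's implementation:

```python
# Toy reward-modulated Hebbian learning: strengthen pre/post coactivity
# when the outcome is rewarded, weaken it when punished.
import numpy as np

rng = np.random.default_rng(0)
n_in, lr = 4, 0.1
w = rng.normal(scale=0.1, size=n_in)      # initial synaptic weights
target = np.array([1.0, 1.0, 0.0, 0.0])   # inputs the unit should fire on

for _ in range(500):
    pre = rng.integers(0, 2, size=n_in).astype(float)  # random input spikes
    drive = w @ pre + rng.normal(scale=0.2)            # exploration noise
    post = 1.0 if drive > 0 else 0.0                   # crude spike / no spike
    desired = 1.0 if pre @ target >= 2 else 0.0        # both target inputs active?
    reward = 1.0 if post == desired else -1.0          # scalar reward signal
    w += lr * reward * pre * post                      # Hebbian term gated by reward

print("learned weights:", w.round(2))  # weights on target inputs should grow
```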

Filed Under: Brain-inspired Computing
