IBM has officially announced that our proposal “Cognitive Computing via Synaptronics and Supercomputing (C2S2)” won the first phase of DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative. Please see DARPA’s BAA here.
Some Snippets from the BAA:
“Proposed research should investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.”
“Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. As compared to biological systems for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.”
“The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications—but useful and practical implementations do not yet exist.”
What does the DARPA Award mean?
DARPA has provided mission, money, mandate, meaning, motivation, and metrics that are indispensable to such an ambitious undertaking and to bringing a wide-ranging, interdisciplinary group of researchers together.
DARPA has a rich history of initiating and supporting profound technological breakthroughs, including the internet; please see here. We are profoundly grateful to DARPA and to the program manager, Dr. Todd Hylton, who created and manages the SyNAPSE program.
Who are IBM’s collaborators?
I am proud to serve as the Principal Investigator for a truly star-studded cast that is comprehensive, creative, and committed!
IBM Almaden Research Center (Dr. Stuart Parkin, Dr. Bulent Kurdi, Dr. J. Campbell Scott, Dr. Paul Maglio, Dr. Simone Raoux, Dr. Rajagopal Ananthanarayanan, Dr. Raghavendra Singh)
IBM T. J. Watson Research Center (Dr. Chung Lam and Dr. Bipin Rajendran)
Stanford University (Professors Kwabena Boahen, H. Philip Wong, Brian Wandell)
University of Wisconsin-Madison (Professor Giulio Tononi)
Cornell University (Professor Rajit Manohar)
Columbia University Medical Center (Professor Stefano Fusi)
University of California-Merced (Professor Christopher Kello)
And, many other researchers, post-docs, and students.
What are the short-term, medium-term, and long-term goals?
Please see the metrics in the BAA here. In the next 9 months, we will focus on demonstrating nano-scale, low power synapse-like devices and on beginning to uncover the functional microcircuits of the brain.
Why the focus on synapses?
Synapses are junctions between neurons. In mouse and rat brains, there are roughly 10,000 times more synapses than neurons. The strength (efficacy) of a synapse changes as the animal interacts with its environment, a property called plasticity, and these synaptic junctions are hypothesized to encode our individual experience. The computation, communication, memory, power, and space requirements for representing the brain in software or hardware seem to scale with the number of synapses. Thus, the brain is much less a neural network and more correctly a synaptic network.
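The scaling argument can be made concrete with a back-of-envelope calculation. The neuron count and per-element byte costs below are illustrative assumptions for the sake of the sketch, not figures from the C2S2 project:

```python
# Back-of-envelope: why synapse count, not neuron count, dominates
# resource requirements. All figures are illustrative assumptions.

NEURONS = 55_000_000          # rough rat-cortex scale (assumption)
SYNAPSES_PER_NEURON = 10_000  # the ratio quoted above
BYTES_PER_SYNAPSE = 16        # hypothetical: weight plus plasticity state
BYTES_PER_NEURON = 64         # hypothetical: membrane state and parameters

synapses = NEURONS * SYNAPSES_PER_NEURON
synapse_mem = synapses * BYTES_PER_SYNAPSE
neuron_mem = NEURONS * BYTES_PER_NEURON

print(f"synapses:       {synapses:.2e}")
print(f"synapse memory: {synapse_mem / 2**40:.1f} TiB")
print(f"neuron memory:  {neuron_mem / 2**30:.2f} GiB")
print(f"synapse/neuron memory ratio: {synapse_mem / neuron_mem:.0f}x")
```

Under these assumptions the synaptic state outweighs the neuronal state by a factor of a few thousand, which is why synapses set the budget for memory, communication, and power.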
Who else is funded?
For publicly available information, please see here.
What is Cognitive Computing?
There is no definition or specification of the human mind. But we understand it as a collection of processes of sensation, perception, action, cognition, emotion, and interaction. Yet the mind seems to integrate sight, hearing, touch, taste, and smell effortlessly into a coherent whole, and to act in a context-dependent way in a changing, uncertain environment. The mind effortlessly creates categories of time, space, and objects, and interrelationships between them.
The mind arises from the wetware of the brain. Thus, it would seem that reverse engineering the computational function of the brain is perhaps the cheapest and quickest way to engineer computers that mimic the robustness and versatility of the mind.
So cognitive computing is the quest to engineer mind-like intelligent business machines by reverse-engineering the computational function of the brain.
What is the difference between Cognitive Computing and AI?
The field of artificial intelligence research has focused on individual aspects of engineering intelligent machines, proceeding in a problem-first, algorithm-later fashion. Even AI pioneers John Anderson and Allen Newell have argued for the need for a single unified theory of cognition.
Cognitive computing, in contrast, seeks to engineer holistic intelligent machines that neatly tie all of the pieces together. It seeks to uncover the core micro- and macro-circuits of the brain underlying a wide variety of abilities, and thus aims to proceed in an algorithm-first, problems-later fashion.
I believe that spiking computation is a key to achieving this vision. For a previous discussion of neural code, please see here.
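To illustrate what "spiking computation" means at its simplest, here is a textbook leaky integrate-and-fire neuron sketched in Python. This is a generic pedagogical model, not the neuron model used in our simulations, and all parameter values are arbitrary:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a textbook sketch of
# spiking computation, not the model used in the C2S2 project.

def lif_run(inputs, tau=20.0, threshold=1.0, dt=1.0):
    """Integrate input current; emit a spike (True) when the membrane
    potential crosses threshold, then reset to zero."""
    v = 0.0
    spikes = []
    for i_t in inputs:
        v += dt * (-v / tau + i_t)  # leak toward rest, add input
        if v >= threshold:
            spikes.append(True)
            v = 0.0                 # reset after the spike
        else:
            spikes.append(False)
    return spikes

# A constant drive makes the neuron fire periodically.
out = lif_run([0.06] * 100)
print(sum(out), "spikes in 100 steps")
```

The essential point is that information is carried in the timing of discrete events, not in continuously communicated values; between spikes, a neuron costs nothing to "transmit."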
Why is the time ripe now?
In my opinion, there are three reasons why the time is now ripe to begin drawing inspiration from the structure, dynamics, function, and behavior of the brain for developing novel computing architectures and cognitive systems. First, neuroscience has matured, and enough quantitative data is available for formulating hypotheses of brain function and dynamics. Second, supercomputing is now ready to undertake extremely large-scale simulations. Third, nanotechnology is evolving to the point where we may be able to represent the essential computational function of synapses and neurons in hardware that rivals the brain's power and space efficiency.
Taking an engineering mindset, the time is now right to begin building cognitive computing chips. The very process of building will bring us face-to-face with the unknowns, and with that, the potential for forward progress.
What is von Neumann Architecture? Why is brain different?
The key element of the von Neumann architecture is the separation of memory and computation. This leads to the von Neumann bottleneck, a term coined by John Backus (an IBMer) in his 1977 Turing Award lecture:
“Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.”
In contrast, in the brain, memory and computation are distributed throughout the fabric, and there is no von Neumann bottleneck. It is an entirely different paradigm for computing, and we, as a civilization, have not yet tapped into this secret of mother nature.
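Backus's "word-at-a-time" critique can be seen in miniature even at the programming-language level. The two computations below are equivalent, but the explicit loop spells out exactly the kind of per-word addressing traffic the quote describes (a toy illustration, not a model of any particular hardware):

```python
# Backus's "word-at-a-time thinking" in miniature: both forms compute
# the same sum, but the loop spells out the per-word traffic (fetch an
# address, fetch a word, accumulate) that the von Neumann channel carries.
data = list(range(1_000_000))

total = 0
for i in range(len(data)):  # word-at-a-time: one address, one word per step
    total += data[i]

# The "larger conceptual unit": one reduction over the whole collection,
# with the looping and addressing pushed below the programming model.
print(total == sum(data))
```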
Can you tell us about your rat-scale simulation?
Please see the actual paper here and detailed discussion here. Please note – it is not a rat brain!
What happens if you succeed?
If we succeed, we will be able to give birth to novel cognitive systems, computing architectures, programming paradigms, numerous practical applications, and perhaps entirely new industries. IBM's press release captures it nicely: “The end goal: ubiquitously deployed computers imbued with a new intelligence that can integrate information from a variety of sensors and sources, deal with ambiguity, respond in a context-dependent way, learn over time and carry out pattern recognition to solve difficult problems based on perception, action and cognition in complex, real-world environments.”
While the possibilities are exciting, we are just beginning. The path is long, arduous, and uncertain, and achieving the end goal will require crossing many hurdles and numerous technical breakthroughs. Many challenges will become apparent only as we build. Stay tuned, and help us get there by joining the field: a number of exciting inventions and discoveries await the interested and prepared mind.
The end goal is inevitable, but the path to it is unpredictable.
How do you feel?
I feel a profound sense of gratitude to DARPA for its vision and for choosing our proposal, to IBM for its depth, breadth, and innovative culture, and to my fellow team members for their collaboration. I also feel a deep sense of optimism about our ability to innovate. At the same time, I feel a palpable sense of responsibility to seize this opportunity and make meaningful forward progress given the obvious constraints of time and money.
Personal Cognitive Computing Milestones:
In September 2005, a proposal to organize the 2006 Almaden Institute around Cognitive Computing was selected by Dr. Mark Dean, who then headed IBM’s Almaden Research Center, and by four department heads (Dr. Dilip Kandlur, Dr. Laura Haas, Dr. Gian-Luca Bona, and Dr. Jim Spohrer).
On March 31, 2006, a grand-challenge proposal to start a project on Cognitive Computing was funded by IBM Research.
On May 10-11, 2006, a very successful Almaden Institute on Cognitive Computing took place. Please see here and here.
On August 16, 2006, the Cognitive Computing group was organized, and I became its manager.
On April 27, 2007, BBC News reported my group’s work on a half-mouse-scale simulation in near real-time.
On May 2-3, 2007, I co-chaired Cognitive Computing 2008 at UC Berkeley.
On May 20-21, 2007, I spoke at the Decade of the Mind Symposium organized by Jim Olds and James Albus.
On July 17, 2007, PC Magazine carried a cover story on Cognitive Computing.
In September 2007, I co-authored a letter in Science calling for a Decade of the Mind Initiative, along with truly distinguished colleagues who also spoke at the Decade of the Mind Symposium.
On November 10-16, 2007, we presented our work on the rat-scale simulation at the Supercomputing 2007 conference. See here.