I had enormous fun reading Greg Iles's wonderful novel The Footprints of God. If you love supercomputers, brain simulations, high-resolution MRI, artificial intelligence, disembodied cognition, the National Security Agency, and the like, then you will enjoy this sci-fi techno-thriller. The book is wonderfully researched and masterfully penned.
Archives for January 2008
In a study published in PLoS Computational Biology, the authors address a key question: "Why is Real-World Visual Object Recognition Hard?"
Abstract: Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain's anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, "natural" images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled "natural" images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist's "null" model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a "simpler" recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.
Reference: Pinto N, Cox DD, DiCarlo JJ (2008) Why is real-world visual object recognition hard? PLoS Comput Biol 4(1): e27. doi:10.1371/journal.pcbi.0040027
A very interesting article in Scientific American discusses how olfactory sensors may revolutionize medicine. A snippet is below.
Engineers are developing electronic versions of the human nose that will allow doctors, ever in search of less-invasive techniques, to tap into what the nose knows about the human body.
"The sense of smell has been used as a medical diagnostic tool for thousands of years," says Bill Hanson, an anesthesiologist and critical care specialist at the University of Pennsylvania in Philadelphia, who has studied whether odor can be used to diagnose an ailment. "Both diseases and bacteria that cause diseases have individual and unique odors. You can walk into a patient’s room and know immediately in some cases that the patient has such and such bacteria just because of the odor."
Yesterday, I attended an interesting talk by Victor Miagkikh as part of ACM’s SF Bay Area Data Mining Special Interest Group at the beautiful campus of SAP Labs.
Hebbian learning is a well-known principle of unsupervised learning in networks: if two events happen "close in time," then the strength of the connection between the network nodes producing those events increases. Is this a complete set of learning axioms? Given a reinforcement signal (reward) for a sequence of actions, we can add another axiom: "reward controls plasticity." Thus, we get a reinforcement learning algorithm that can be used for training spiking neural networks (SNNs). The author will demonstrate the utility of this algorithm on a maze learning problem. Can these learning principles be applied not only to neural networks, but also to other kinds of networks? Yes; in fact, we will see their application to economic influence networks for portfolio optimization. Then, if time allows, we will consider another application—social networks for a movie recommendation engine—and other causality-inducing principles besides "close in time." By the end of the talk, the author hopes the audience will agree that the "reward controls plasticity" principle is a vital learning axiom.
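The two axioms above can be sketched in a few lines. This is a minimal illustrative toy, not the speaker's actual algorithm: the function names, learning rate, and scalar reward signal are my assumptions, and real reward-modulated spiking-network rules typically add eligibility traces and spike timing.

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Plain Hebbian rule: strengthen w when pre- and post-synaptic
    activity coincide (both nonzero)."""
    return w + lr * pre * post

def reward_modulated_update(w, pre, post, reward, lr=0.1):
    """'Reward controls plasticity': the same Hebbian correlation term,
    but scaled (gated) by a scalar reward signal."""
    return w + lr * reward * pre * post

w = 0.5
# Coincident activity under positive reward strengthens the connection...
w = reward_modulated_update(w, pre=1, post=1, reward=+1.0)
# ...while the same coincidence under negative reward weakens it,
# letting reward steer which correlations get reinforced.
w = reward_modulated_update(w, pre=1, post=1, reward=-1.0)
```

With reward fixed at +1 the rule reduces to plain Hebbian learning, which is what makes "reward controls plasticity" feel like a natural extra axiom rather than a replacement.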
In an article entitled "Learning in and from Brain-Based Devices," published in Science (vol. 318, 16 Nov 2007), Dr. Gerald Edelman, Nobel laureate and Director of The Neurosciences Institute, provides a wonderful perspective on brain-based devices.
Abstract: Biologically based mobile devices have been constructed that differ from robots based on artificial intelligence. These brain-based devices (BBDs) contain simulated brains that autonomously categorize signals from the environment without a priori instruction. Two such BBDs, Darwin VII and Darwin X, are described here. Darwin VII recognizes objects and links categories to behavior through instrumental conditioning. Darwin X puts together the "what," "when," and "where" from cues in the environment into an episodic memory that allows it to find a desired target. Although these BBDs are designed to provide insights into how the brain works, their principles may find uses in building hybrid machines. These machines would combine the learning ability of BBDs with explicitly programmed control systems.