
Dharmendra S. Modha

My Work and Thoughts.

Archives for 2008

SfN 2008: Nov 15-19 in Washington, DC

August 4, 2008 By dmodha

SfN is the major annual neuroscience conference, typically attended by ~30,000 scientists.

The Cognitive Computing Group at Almaden has 2 talks and 3 posters at the conference. Congratulations to my colleagues!

Presentations/Talks
“Interfacing auditory input and output spike streams in a large-scale cortical simulator”, Shyamal Chandra, Rajagopal Ananthanarayanan, Tom Zimmerman, Dharmendra S. Modha

“Imaging the spatio-temporal dynamics of large-scale cortical simulations”, Rajagopal Ananthanarayanan, Shyamal Chandra, Dharmendra S. Modha, Raghavendra Singh

Posters
“Quantifying Synchrony in Large-scale Cortical Simulations”, Anthony Ndirango, Rajagopal Ananthanarayanan, Dharmendra S. Modha

“Understanding the topological properties of the white matter pathways in the Macaque brain”, Dharmendra S. Modha and Raghav Singh

"FascTrack: Optimal estimation of long range white matter fascicle networks using diffusion tensor imaging", Anthony J. Sherbondy, Robert F. Dougherty, Brian A. Wandell, Dharmendra S. Modha

Filed Under: Brain-inspired Computing

Intel: Human and computer intelligence will merge in 40 years

July 25, 2008 By dmodha

"At Intel Corp., just passing its 40th anniversary and with myriad chips in its historical roster, a top company exec looks 40 years into the future to a time when human intelligence and machine intelligence have begun to merge."

See full story here.

Filed Under: Brain-inspired Computing

iCub

July 18, 2008 By dmodha

"The iCub is an artificial toddler [robot] with senses, 53 degrees of freedom, and a modular software structure designed to allow the work of different research teams to be combined."

"This open-source robot is designed to allow academics to concentrate on implementing their theories about learning and interaction without having to focus on designing and building hardware, and is part of the general trend towards open source in the field."

You can see a wonderful article by Sunny Bains in EE Times.

Filed Under: Brain-inspired Computing

Vivienne Ming

July 2, 2008 By dmodha

Today, we had quite an interesting talk from Dr. Vivienne Ming.

Title: Sparse codes for natural sounds

Abstract: The auditory neural code must serve a wide range of tasks that require great sensitivity in time and frequency and be effective over the diverse array of sounds present in natural acoustic environments. It has been suggested (Barlow, 1961; Atick, 1992; Simoncelli & Olshausen, 2001; Laughlin & Sejnowski, 2003) that sensory systems might have evolved highly efficient coding strategies to maximize the information conveyed to the brain while minimizing the required energy and neural resources. In this talk, I will show that, for natural sounds, the complete acoustic waveform can be represented efficiently with a nonlinear model based on a population spike code. In this model, idealized spikes encode the precise temporal positions and magnitudes of underlying acoustic features. We find that when the features are optimized for coding either natural sounds or speech, they show striking similarities to time-domain cochlear filter estimates, have a frequency-bandwidth dependence similar to that of auditory nerve fibers, and yield significantly greater coding efficiency than conventional signal representations. These results indicate that the auditory code might approach an information theoretic optimum and that the acoustic structure of speech might be adapted to the coding capacity of the mammalian auditory system.
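
The model described in the abstract is easiest to picture as a sparse, greedy decomposition of the waveform into time-shifted acoustic kernels. Below is a minimal illustrative sketch of that idea, assuming a simple matching-pursuit scheme over a small bank of gammatone-like kernels; the kernel shapes, parameters, and function names are mine for illustration, not the speaker's actual model.

```python
# Minimal sketch of a spike-code decomposition of a waveform:
# greedy matching pursuit that represents a signal as a list of
# (time, kernel, amplitude) "spikes". Kernels and parameters are illustrative.
import numpy as np

def gammatone(fs, f0, duration=0.02, order=4, bandwidth=100.0):
    """Gammatone-like kernel: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f0*t), unit norm."""
    t = np.arange(int(duration * fs)) / fs
    g = t ** (order - 1) * np.exp(-2 * np.pi * bandwidth * t) * np.cos(2 * np.pi * f0 * t)
    return g / np.linalg.norm(g)

def spike_code(signal, kernels, n_spikes=100):
    """Greedy matching pursuit: at each step pick the kernel and time shift with
    the largest correlation, record it as a spike, and subtract it from the residual."""
    residual = signal.astype(float).copy()
    spikes = []  # each entry: (sample index, kernel index, amplitude)
    for _ in range(n_spikes):
        best = None
        for k, kern in enumerate(kernels):
            corr = np.correlate(residual, kern, mode="valid")
            i = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[i]) > abs(best[2]):
                best = (i, k, corr[i])
        t, k, amp = best
        residual[t:t + len(kernels[k])] -= amp * kernels[k]
        spikes.append((t, k, amp))
    return spikes, residual

if __name__ == "__main__":
    fs = 16000
    kernels = [gammatone(fs, f0) for f0 in (300.0, 800.0, 2000.0)]
    t = np.arange(fs) / fs
    toy = 0.5 * np.sin(2 * np.pi * 800.0 * t)  # stand-in for a natural sound
    spikes, residual = spike_code(toy, kernels, n_spikes=50)
    print(f"{len(spikes)} spikes, residual energy {np.sum(residual**2):.3f}")
```

The list of (time, kernel, amplitude) triples plays the role of the "idealized spikes" in the abstract: each one marks when an acoustic feature occurs and how strongly.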

Bio: Vivienne Ming received her B.S. (2000) in Cognitive Neuroscience from UC San Diego, developing face and expression recognition systems in the Machine Perception Lab. She earned her M.A. (2003) and Ph.D. (2006) in Psychology from Carnegie Mellon University along with a doctoral training degree in computational neuroscience from the Center for the Neural Basis of Cognition. Her dissertation, Efficient auditory coding, combined computational and behavioral approaches to study the perception of natural sounds, including speech. Since 2006, she has worked jointly as a junior fellow and post-doctoral researcher at the Redwood Center for Theoretical Neuroscience at UC Berkeley and MBC/Mind, Brain & Cognition at Stanford University, developing statistical models for auditory scene analysis.

Filed Under: Interesting People

PetaVision Synthetic Cognition Project

June 16, 2008 By dmodha

"Less than a week after Los Alamos National Laboratory’s Roadrunner supercomputer began operating at world-record petaflop/s data-processing speeds, Los Alamos researchers are already using the computer to mimic extremely complex neurological processes.

"Late last week and early this week while verifying Roadrunner’s performance, Los Alamos and IBM researchers used three different computational codes to test the machine. Among those codes was one dubbed “PetaVision” by its developers and the research team using it.

"PetaVision models the human visual system—mimicking more than 1 billion visual neurons and trillions of synapses.

"On Saturday, Los Alamos researchers used PetaVision to model more than a billion visual neurons surpassing the scale of 1 quadrillion computations a second (a petaflop/s). On Monday scientists used PetaVision to reach a new computing performance record of 1.144 petaflop/s. The achievement throws open the door to eventually achieving human-like cognitive performance in electronic computers.

"Based on the results of PetaVision’s inaugural trials, Los Alamos researchers believe they can study in real time the entire human visual cortex—arguably a human being’s most important sensory apparatus."

For more details, see the press release from LANL.

Filed Under: Brain-inspired Computing
