
Dharmendra S. Modha

My Work and Thoughts.


iCub

July 18, 2008 By dmodha

"The iCub is an artificial toddler [robot] with senses, 53 degrees of freedom, and a modular software structure designed to allow the work of different research teams to be combined."

"This open-source robot is designed to allow academics to concentrate on implementing their theories about learning and interaction without having to focus on designing and building hardware, and is part of the general trend towards open source in the field."

You can see a wonderful article by Sunny Bains in EE Times.

Filed Under: Brain-inspired Computing

Vivienne Ming

July 2, 2008 By dmodha

Today, we had quite an interesting talk from Dr. Vivienne Ming.

Title: Sparse codes for natural sounds

Abstract: The auditory neural code must serve a wide range of tasks that require great sensitivity in time and frequency and be effective over the diverse array of sounds present in natural acoustic environments. It has been suggested (Barlow, 1961; Atick, 1992; Simoncelli & Olshausen, 2001; Laughlin & Sejnowski, 2003) that sensory systems might have evolved highly efficient coding strategies to maximize the information conveyed to the brain while minimizing the required energy and neural resources. In this talk, I will show that, for natural sounds, the complete acoustic waveform can be represented efficiently with a nonlinear model based on a population spike code. In this model, idealized spikes encode the precise temporal positions and magnitudes of underlying acoustic features. We find that when the features are optimized for coding either natural sounds or speech, they show striking similarities to time-domain cochlear filter estimates, have a frequency-bandwidth dependence similar to that of auditory nerve fibers, and yield significantly greater coding efficiency than conventional signal representations. These results indicate that the auditory code might approach an information theoretic optimum and that the acoustic structure of speech might be adapted to the coding capacity of the mammalian auditory system.
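One standard way to realize the kind of spike code the abstract describes is greedy matching pursuit with time-shiftable kernels: each "spike" records which acoustic feature fired, where in time, and how strongly. The sketch below is a toy illustration only — the kernel shape, frequencies, and test signal are invented here, not taken from the talk:

```python
import numpy as np

def damped_kernel(freq, length=64, rate=1000.0):
    """Crude damped sinusoid standing in for a cochlear-like filter."""
    t = np.arange(length) / rate
    k = t**3 * np.exp(-2 * np.pi * 50 * t) * np.cos(2 * np.pi * freq * t)
    return k / np.linalg.norm(k)

kernels = [damped_kernel(f) for f in (60.0, 120.0, 240.0)]

def encode(signal, kernels, n_spikes=4):
    """Greedily place the kernel/shift with the largest projection.
    Each spike is (kernel index, time position, magnitude)."""
    residual = signal.astype(float).copy()
    spikes = []
    for _ in range(n_spikes):
        best = None
        for ki, k in enumerate(kernels):
            corr = np.correlate(residual, k, mode="valid")
            tau = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[tau]) > abs(best[2]):
                best = (ki, tau, corr[tau])
        ki, tau, amp = best
        residual[tau:tau + len(kernels[ki])] -= amp * kernels[ki]
        spikes.append(best)
    return spikes, residual

# Build a signal from two known kernel placements, then re-encode it.
signal = np.zeros(512)
signal[100:164] += 1.5 * kernels[0]
signal[300:364] += 0.8 * kernels[2]
spikes, residual = encode(signal, kernels)
print("spikes:", [(ki, tau, round(a, 3)) for ki, tau, a in spikes])
print("residual norm: %.2e" % np.linalg.norm(residual))
```

Because the two planted components do not overlap, the encoder recovers both placements exactly and the residual drops to numerical noise — the "significantly greater coding efficiency" claim is about sparse codes like this needing far fewer coefficients than a fixed-block transform.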

Bio: Vivienne Ming received her B.S. (2000) in Cognitive Neuroscience from UC San Diego, developing face and expression recognition systems in the Machine Perception Lab. She earned her M.A. (2003) and Ph.D. (2006) in Psychology from Carnegie Mellon University along with a doctoral training degree in computational neuroscience from the Center for the Neural Basis of Cognition. Her dissertation, Efficient auditory coding, combined computational and behavioral approaches to study the perception of natural sounds, including speech. Since 2006, she has worked jointly as a junior fellow and post-doctoral researcher at the Redwood Center for Theoretical Neuroscience at UC Berkeley and MBC/Mind, Brain & Cognition at Stanford University developing statistical models for auditory scene analysis.

Filed Under: Interesting People

PetaVision Synthetic Cognition Project

June 16, 2008 By dmodha

"Less than a week after Los Alamos National Laboratory’s Roadrunner supercomputer began operating at world-record petaflop/s data-processing speeds, Los Alamos researchers are already using the computer to mimic extremely complex neurological processes.

"Late last week and early this week while verifying Roadrunner’s performance, Los Alamos and IBM researchers used three different computational codes to test the machine. Among those codes was one dubbed “PetaVision” by its developers and the research team using it.

"PetaVision models the human visual system—mimicking more than 1 billion visual neurons and trillions of synapses.

"On Saturday, Los Alamos researchers used PetaVision to model more than a billion visual neurons surpassing the scale of 1 quadrillion computations a second (a petaflop/s). On Monday scientists used PetaVision to reach a new computing performance record of 1.144 petaflop/s. The achievement throws open the door to eventually achieving human-like cognitive performance in electronic computers.

"Based on the results of PetaVision’s inaugural trials, Los Alamos researchers believe they can study in real time the entire human visual cortex—arguably a human being’s most important sensory apparatus."

For more details, see the press release from LANL.
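As a sanity check on the scale quoted above, here is a back-of-envelope sketch. The synapse count per neuron and the per-synapse update rate are assumptions chosen for illustration, not figures from the press release:

```python
# Rough scale estimate for a billion-neuron visual model.
neurons = 1e9                # "more than 1 billion visual neurons"
synapses_per_neuron = 1e4    # assumed; yields "trillions of synapses"
synapses = neurons * synapses_per_neuron
ops_per_synapse_per_s = 100  # assumed update rate (ops/synapse/s)
total_ops = synapses * ops_per_synapse_per_s
print(f"{synapses:.0e} synapses, {total_ops:.0e} ops/s")
```

With these assumptions the model lands at 10^15 operations per second — exactly the petaflop/s regime Roadrunner reached, which is why a machine of this class is the natural fit for cortex-scale simulation.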

Filed Under: Brain-inspired Computing

Rajit Manohar

June 11, 2008 By dmodha

Today, I had an opportunity to host Professor Rajit Manohar from Cornell University. He gave us an amazing talk.

Title: Ultra Low Power Asynchronous VLSI

Abstract: We present the design of SNAP, an ultra-low-power asynchronous processor optimized for embedded sensing applications. The circuit style used by SNAP has been optimized for both area and energy to enable the development of a small, long-lifetime sensor node. The asynchronous nature of the processor enables efficient transitions from idle to active and back to idle. We present measured performance and energy results for our design. In a 0.18 µm process, typical monitoring tasks can be performed within a power budget of 0.6 µW.

We will also provide a brief introduction of asynchronous design methodologies, and their relation to concurrent program development.
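To put the 0.6 µW budget in perspective, here is a rough battery-lifetime estimate. The coin-cell capacity is an assumption (a CR2032-class cell), and self-discharge and sensor/radio loads are ignored, so this is an upper bound, not a figure from the talk:

```python
# Idealized lifetime of a sensor node at SNAP's quoted power budget.
power_w = 0.6e-6          # 0.6 µW average draw, from the abstract
battery_wh = 0.225 * 3.0  # assumed: 225 mAh coin cell at 3 V
hours = battery_wh / power_w
years = hours / (24 * 365)
print(f"~{years:.0f} years")
```

The point of the estimate is that at this power level the processor is no longer the limiting factor — battery self-discharge and the rest of the node's electronics dominate lifetime.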

Biography: Ph.D. in Computer Science, Caltech (1998); leader in asynchronous VLSI design; inventor of GHz-speed FPGA technology and ultra-low-power processors; ~10 issued patents and >50 published papers; MIT Technology Review TR35 awardee; founder and Chief Technology Officer, Achronix Semiconductor Corp.

Filed Under: Interesting People

Petaflop!

June 9, 2008 By dmodha

"The IBM machine, codenamed Roadrunner, has been shown to run at "petaflop speeds", the equivalent of one thousand trillion calculations per second.

The benchmark means the computer is twice as nimble as the current world’s fastest machine, also built by IBM."

For details, please see the original article.

Filed Under: Brain-inspired Computing

