Wednesday, March 12, 2008

Do You See What I See?

http://www.sciam.com/article.cfm?id=translating-images-from-brain-waves

March 6, 2008
Do You See What I See? Translating Images out of Brain Waves
Visual decoder allows researchers to translate brain wave activity into images
By Nikhil Swaminathan

File this under futuristic (and perhaps a little scary): In a step toward one day deciphering visions and dreams, new research unveils an algorithm that can translate the activity in the minds of humans into the images they have seen.

Scientists from the University of California, Berkeley, report in Nature today that they have developed a method capable of decoding the patterns in visual areas of the brain to determine what someone has seen. Needless to say, the potential implications for society are sweeping.

"This general visual decoder would have great scientific and practical use," the researchers say. "We could use the decoder to investigate differences in perception across people, to study covert mental processes such as attention, and perhaps even to access the visual content of purely mental phenomena such as dreams and imagery."

The scientists say that previous attempts to extract "mental content from brain activity" allowed them to decode only a finite number of patterns. Researchers would feed an image to an individual (or ask the person to think about an object) one at a time and then look for a corresponding brain activity pattern. "You would need to know [beforehand], for each thought you want to read out, what kind of pattern of activity goes with it," says John-Dylan Haynes, a professor at the Bernstein Center for Computational Neuroscience Berlin and the Max Planck Institute for Human Cognitive and Brain Sciences who was not affiliated with the new work.

"The advance brought forward here," he continues, "is that they have set up a mathematical model that captures the properties of the visual part of the brain," which can then be applied to previously unseen objects.

Researchers used functional magnetic resonance imaging (fMRI) to record activity in the visual cortices of a pair of volunteers (two of the study's co-authors) while they viewed a series of images. They examined the brain by dividing the visual regions into voxels (volumetric units, or 3-D pixels) and noting the part of the picture to which each section responded. For instance, one voxel might respond in a certain pattern to, say, colors in the upper left-hand corner of the photo, whereas another voxel would be set off by something in a different portion of the picture.
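In outline, this amounts to fitting an "encoding model" that predicts each voxel's response from features of the image; the study built its model on Gabor wavelet features of the photographs. The sketch below is a simplified stand-in that fits an independent linear model per voxel with ridge regression; the function names, array shapes and regularization value are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def fit_voxel_encoding_model(features, responses, ridge=1.0):
        """Fit one linear model per voxel: response ~ features @ weights.

        features  : (n_train, n_feat) feature vector for each training photo
        responses : (n_train, n_vox)  fMRI response of each voxel to each photo
        Returns a (n_feat, n_vox) weight matrix, fit by ridge regression.
        """
        n_feat = features.shape[1]
        # Closed-form ridge solution: W = (X'X + lambda*I)^-1 X'Y
        gram = features.T @ features + ridge * np.eye(n_feat)
        return np.linalg.solve(gram, features.T @ responses)

    def predict_responses(features, weights):
        """Predict the voxel activity pattern an image should evoke."""
        return features @ weights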

Haynes says the team could "go back and infer what the image was that a person was seeing" by monitoring the activity in each brain section and deciphering what sort of information would most likely be found in the corresponding part of the visual field, or photograph.
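Identification then reduces to a matching step: predict the activity pattern each candidate image should evoke, then pick the candidate whose prediction best agrees with the measured pattern. Here is a minimal sketch under the same assumptions as above, using Pearson correlation as the match score (a plausible choice, not necessarily the paper's exact metric):

    import numpy as np

    def identify_image(measured, candidate_features, weights):
        """Return the index of the candidate image whose predicted voxel
        pattern best matches the measured fMRI pattern.

        measured           : (n_vox,)            observed voxel activity
        candidate_features : (n_images, n_feat)  features of each candidate
        weights            : (n_feat, n_vox)     fitted encoding weights
        """
        predicted = candidate_features @ weights   # (n_images, n_vox)
        # Score each candidate by Pearson correlation with the measurement.
        pz = predicted - predicted.mean(axis=1, keepdims=True)
        pz /= pz.std(axis=1, keepdims=True)
        mz = (measured - measured.mean()) / measured.std()
        scores = pz @ mz / measured.size
        return int(np.argmax(scores))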

When the volunteers viewed a new set of 120 images—depicting everything from people to houses to animals to fruit and other objects—the computer program correctly identified what they were looking at up to 92 percent of the time; when the image pool was upped to 1,000, the algorithm was successful 80 percent of the time. Naturally, its accuracy decreased as the number of possible pictures grew, but even at a quantity 100 times greater than the number of images indexed on the Internet by Google, according to the scientists, the model would still succeed more than 10 percent of the time. (For comparison, random guessing among just 120 images would succeed only about 0.8 percent of the time.)

"This indicates," the researchers wrote, "that fMRI signals contain a considerable amount of stimulus information and that this information can be successfully decoded in practice."

Haynes says the method is limited to deciphering information that can be mapped out in space, such as sensory inputs (where a sound is coming from) or motor function (what action one's arm has performed). The challenge, he says, is that it cannot "be easily applied to cases where you don't have a clear mathematical model," such as memories, intentions and emotions. "High-level thoughts would be a bit tricky to get a hold of without such a mathematical model," he adds.

So, you can keep that tinfoil helmet in your closet for now. These algorithms still can't read our innermost thoughts—at least not yet.
