In the final scene of Inception, we are faced with the question, “Is this real life, or is it all a dream?” Inception, like countless other Hollywood movies, suggests a scientific ability to tap into dreams, memories, and other manifestations of what we can see in our mind’s eye. While we associate this with science fiction, Berkeley scientists may have found a way to tap into our dreams, or have at least taken the first step toward understanding our visual functioning. These scientists have obtained a quantitative insight into the brain activity that facilitates dynamic processes such as dreaming, perception, hallucinations, and visual imagery. Using functional magnetic resonance imaging (fMRI) and computerized models of brain activity evoked by visual patterns, they have brought the Hollywood trope of tapping into someone’s mind a step closer to reality. With technology like this, the scientists hope to communicate with stroke victims and even the comatose, opening up a myriad of medical possibilities.
Shinji Nishimoto and his fellow researchers faced a fundamental obstacle: we capture and process several images per second and piece them together into a fluid visual stream, but fMRI measures blood oxygen level-dependent (BOLD) signals, which change over the course of seconds, far more slowly than the neural activity they reflect. As a result, earlier approaches could only model brain activity in response to still images. To capture brain signals evoked by watching videos, the researchers developed two key innovations: a new motion-energy encoding model that could be used with fMRI, and a way to train the program to associate brain signals with certain images, much as a child learns to associate the letter “A” with an apple. The motion-energy encoding model interprets the responses of the separate nerve cells stimulated by the video in order to recover bits of information that are then used to piece together the images being viewed by the subject.
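The core idea behind a motion-energy model can be illustrated with a toy sketch. This is a drastic simplification, not the study’s actual model: the real encoding model uses large banks of spatiotemporal Gabor filters fit to each voxel, whereas the code below applies a single quadrature pair of spatial Gabor filters at one position and computes a phase-invariant “energy” per frame. All names and parameters here are illustrative.

```python
import numpy as np

def gabor_pair(size=16, freq=0.25, theta=0.0):
    """Quadrature pair of spatial Gabor filters (even and odd phase).

    A Gabor filter is a sinusoidal grating under a Gaussian envelope;
    the even/odd pair lets us measure response strength regardless of
    the stimulus phase.
    """
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))
    even = envelope * np.cos(2 * np.pi * freq * xr)
    odd = envelope * np.sin(2 * np.pi * freq * xr)
    return even, odd

def motion_energy(video, even, odd):
    """Phase-invariant energy per frame: sum of squared quadrature responses.

    `video` is a (time, H, W) array. A real motion-energy model would also
    filter across time to capture direction of motion; this toy version
    only measures spatial pattern energy frame by frame.
    """
    responses = []
    for frame in video:
        r_even = np.sum(frame * even)   # filter response at one position
        r_odd = np.sum(frame * odd)
        responses.append(r_even**2 + r_odd**2)
    return np.array(responses)

# Toy example: a random 10-frame "video"
rng = np.random.default_rng(0)
video = rng.standard_normal((10, 16, 16))
even, odd = gabor_pair()
energy = motion_energy(video, even, odd)
print(energy.shape)  # (10,)
```

In the actual study, thousands of such filter outputs (across positions, orientations, frequencies, and temporal offsets) are regressed against each voxel’s BOLD signal, so that the model can predict brain activity for any new movie.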
The second innovation involves over a million sample movie clips embedded into a decoding program called the Bayesian decoding framework. Essentially, the subjects—Nishimoto and two other research team members—spent hours in an MRI scanner watching YouTube clips while fMRI measured the blood flow through their visual cortex. The fMRI fed their brain activity into a computer program that scanned through the library of pre-loaded movie clips and pieced together the clips it judged closest to what the subjects had seen, recreating a fuzzy approximation of what the subjects had watched. The scientists even figured out how to reconstruct color, though the reconstructions are blurrier than the originals.
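The identification step at the heart of this decoding can be sketched in miniature. Assuming the encoding model can predict a BOLD response for each candidate clip, the decoder simply asks which candidate’s predicted response best matches the observed one; under Gaussian noise and a flat prior, picking the highest correlation approximates the Bayesian (maximum a posteriori) choice. The numbers and setup below are purely illustrative, not the study’s data or code.

```python
import numpy as np

def identify_clip(observed, predicted):
    """Return the index of the candidate clip whose predicted BOLD
    response best matches the observed response.

    Both signals are z-scored, so the score is the Pearson correlation;
    the highest-scoring candidate is the decoder's best guess.
    """
    obs = (observed - observed.mean()) / observed.std()
    scores = []
    for pred in predicted:
        p = (pred - pred.mean()) / pred.std()
        scores.append(np.mean(obs * p))   # Pearson correlation
    return int(np.argmax(scores))

# Toy demo: 5 candidate clips, 20 voxels; the observed signal is a
# noisy copy of candidate 3's predicted response.
rng = np.random.default_rng(1)
predicted = rng.standard_normal((5, 20))
observed = predicted[3] + 0.1 * rng.standard_normal(20)
print(identify_clip(observed, predicted))  # -> 3
```

The actual framework works with millions of candidate clips and blends the best matches into a single reconstruction, which is why the recovered movies look like blurred composites rather than any one clip.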
The technology is astonishingly accurate: using the motion-energy encoding model, the decoder correctly identified the viewed clip from its BOLD signal 95% of the time. Even when the candidate set included 1,000,000 movie clips chosen at random from the internet, identification accuracy exceeded 75% for all three test subjects.
The researchers say that the technique, which reconstructs what we view by measuring our brain activity, could be the gateway to seeing into the minds of those who cannot communicate verbally. The method the scientists would use is not explicitly outlined in the study, but the study asserts that, with further development of the technique, comatose patients, stroke victims, and people with neurodegenerative diseases should theoretically be able to have the images they see in their minds recreated. The models of dynamic mental events may also serve as tools for psychiatric diagnosis. The technology is expected to be a stepping-stone to brain-machine interfaces that would allow people with paralysis or cerebral palsy to guide computers with their minds. Professor Jack Gallant, a UC Berkeley neuroscientist and co-author of the study, acknowledges the breakthrough his researchers have made, revealing that they “are opening a window into the movies in our minds.”
Although this may sound like the plot of a horror movie about mad scientists, the researchers’ goals seem entirely rational. However, as the technology becomes more advanced, there is no doubt that brain-probing and mind-reading will raise many moral questions and remain a hot topic for bioethicists.
To see a video of the reconstructions, visit: http://www.youtube.com/watch?v=KMA23JJ1M1o&feature=player_embedded
Citation: Nishimoto et al., Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Current Biology (2011), doi:10.1016/j.cub.2011.08.031