Preview: fMRI-Based Visual Stimulus Reconstruction

10 12 2008

I’m going to try a new format for getting brand-new articles up on the site quickly. Often I want to post something but don’t have the time to read the paper carefully and write a quality writeup, which creates a lot of posting inhibition. Rather than just sit on a paper, I’ll now post it along with the link and the abstract. Then, if and when I find the time, I’ll post an update and go more in depth. Here’s the first preview!

Please see the updated post on this paper.

Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders

Yoichi Miyawaki, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, Hiroki C. Tanabe, Norihiro Sadato and Yukiyasu Kamitani

Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 × 10-patch images (2^100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.
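To make the core idea in the abstract concrete, here is a minimal sketch (my own, not from the paper) of reconstruction as a weighted combination of multiscale local image bases. I'm assuming simple linear decoders and rectangular bases at 1×1, 1×2, 2×1, and 2×2 scales over the 10×10 patch grid; the decoder weights and per-basis combination weights are taken as already trained, and all function names are mine.

```python
import numpy as np

def make_bases(size=10, scales=((1, 1), (1, 2), (2, 1), (2, 2))):
    """Enumerate rectangular local image bases at several scales.

    Each basis is a binary mask covering one rectangular region of the
    size x size patch grid.
    """
    bases = []
    for h, w in scales:
        for i in range(size - h + 1):
            for j in range(size - w + 1):
                b = np.zeros((size, size))
                b[i:i + h, j:j + w] = 1.0
                bases.append(b)
    return np.stack(bases)  # shape: (n_bases, size, size)

def reconstruct(fmri_activity, decoders, bases, combination_weights):
    """Decode each basis's contrast from one fMRI pattern, then combine.

    fmri_activity:       (n_voxels,) activity pattern for a single trial/volume
    decoders:            (n_bases, n_voxels) linear decoder weights (assumed
                         pre-trained, one decoder per local basis)
    combination_weights: (n_bases,) weights for combining bases (assumed
                         pre-learned)
    """
    # Each row of `decoders` independently predicts one basis's contrast.
    contrasts = decoders @ fmri_activity
    weighted = (combination_weights[:, None, None]
                * contrasts[:, None, None] * bases)
    # Normalize each pixel by the total weight of bases covering it,
    # so overlapping scales average rather than pile up.
    coverage = (combination_weights[:, None, None] * bases).sum(axis=0)
    return weighted.sum(axis=0) / np.maximum(coverage, 1e-9)
```

The point of the multiscale combination is that overlapping bases at different scales vote on each pixel, so errors in any single local decoder get averaged out.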