BrainStorm 1 : The Calcium Memory Sensor

9 01 2009

As mentioned in the previous post, this is the first installment of BrainStorm, a section for ideas I have under development but don’t have the time to physically work on. It will contain organically developed ideas, organized by project. Reader feedback is encouraged.

How can we identify the group of neurons that encode a particular thought?  

I don’t want to simply see correlations between the activity of a few scattered neurons and a given thought; I want to identify the entire neuronal ensemble. Which neurons are active at a precise moment in a task? How are they wired together? Which are the drivers of activity?

Existing technology is inadequate to identify the entire neural ensemble that encodes a thought. Immediate early gene expression patterns have not been shown to correlate precisely with brain activity, and their temporal resolution is on the order of minutes. Genetically encoded calcium indicators (GECIs) have the necessary temporal and spatial resolution, but their response is nearly as fleeting as a thought, making it impossible to simultaneously record from networks of thousands of possible participants with current microscopy techniques.

In BrainStorm 1, I will outline a technology, photoswitchable genetically encoded calcium memory sensors, that can identify all the neurons in a large network that are active during user-specified, arbitrarily brief or long time periods. I will propose four potential strategies for constructing these sensors, and detail practical considerations for sensor design, screening, and application.
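To make the goal concrete, here is a toy simulation of what such a sensor would report. This is purely my own illustration with made-up numbers, not any of the proposed constructs: a neuron becomes permanently tagged only if its calcium is high while the user-applied gating light is on.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_steps = 1000, 500

# User-specified recording window: gating light on for steps 200-259.
gate = np.zeros(n_steps, dtype=bool)
gate[200:260] = True

# Each neuron fires (calcium transient) with its own per-step probability.
rates = rng.uniform(0.001, 0.05, n_neurons)
calcium = rng.random((n_neurons, n_steps)) < rates[:, None]

# The sensor latches permanently on the first coincidence of light AND calcium.
tagged = (calcium & gate[None, :]).any(axis=1)
print(f"{tagged.sum()} of {n_neurons} neurons tagged during the window")
```

The key property the toy captures is the AND-gate: activity outside the illumination window produces no label, and neither does illuminating silent neurons, so the final fluorescence is a readout of exactly the ensemble active during the chosen period.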





The Journal of Visualized Experiments

21 12 2008

For technically demanding protocols in neuroscience (or any other science) research, a printed protocol is often insufficient to capture all the essentials of a method. There are usually numerous ‘tricks’ or subtleties one must pay attention to that never make it onto the printed page, or that lack a vivid description when they do. Many techniques require the novice to be taught by a more experienced colleague. Unfortunately, it is not always easy to find someone skilled to train under. Labs that pioneer a technique have only limited time and resources available to train outside scientists. How can advanced scientific skills be distributed more broadly and efficiently? A good place to start is the Journal of Visualized Experiments (JoVE). It’s a YouTube for science protocols.

So that's how you do it!

JoVE is a growing collection of video protocols that walk a researcher through a procedure, letting one actually see the steps performed rather than just imagining what performing the protocol might be like. Want to know how to glue a live fruit fly to a stick? Just watch the video! Wonder how to load calcium dyes onto the cortex of a mouse? Just watch the video! This looks to be a fantastic resource for people who are learning a technique, who want to see other possible ways to do a procedure, or who are simply curious about what a neuroscientist actually does at work.

I should make one for glutamate imaging!





Updated: fMRI Based Visual Stimulus Reconstruction

11 12 2008

A simple view of what the brain does is acquire input, process it, then produce output. One strategy for understanding what processing takes place is to record the patterns of brain activity while showing many patterns of input, then see if you can use the information gained to predict a novel input, given the pattern of brain activity. The canonical example of this approach is visual input reconstruction based on recorded spike trains in the visual system of the blowfly.

The blowfly is a relatively simple system (though quite efficient) with a tiny brain. Could a similar approach work in humans?  Although we can’t drop electrodes into the visual cortex (usually), we can put people in fMRI scanners to visualize the pattern of blood oxygenation, which is correlated with neural activity.

In Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders, Miyawaki et al. demonstrate visual input prediction from fMRI responses. Using 3×3×3 mm voxels, the group measured activity across early visual cortex (V1–V4) while presenting numerous 10×10 binary visual patterns. From the responses to hundreds of these test patterns, they learned how voxel activity correlated with local image elements at 1×1, 1×2, 2×1, and 2×2 patch scales. They then displayed novel visual input and used a linear combination of the decoded local image elements to predict the input from the brain activity alone. It is noteworthy that only several hundred training images were required before visual input prediction was possible.
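Here is a minimal numpy sketch of that decode-and-combine idea. Everything in it is synthetic and assumed: random ‘voxel’ responses are generated through an arbitrary linear mapping, the decoders are plain ridge regressions, and a simple average over overlapping patches stands in for the learned combination weights the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_voxels, side = 400, 800, 10

# Synthetic stand-in for the experiment: random 10x10 binary stimuli
# driving "voxels" through an unknown linear mapping plus noise.
stims = rng.integers(0, 2, size=(n_train, side, side)).astype(float)
W_true = rng.normal(size=(side * side, n_voxels))
X = stims.reshape(n_train, -1) @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))

def anchors(h, w):
    """All top-left corners for an h x w patch inside the 10x10 grid."""
    return [(r, c) for r in range(side - h + 1) for c in range(side - w + 1)]

# One ridge-regularized linear decoder per local image element, at each
# of the four scales used in the paper.
scales = [(1, 1), (1, 2), (2, 1), (2, 2)]
lam = 10.0
G = np.linalg.inv(X.T @ X + lam * np.eye(n_voxels)) @ X.T  # shared ridge factor
decoders = {}
for h, w in scales:
    for r, c in anchors(h, w):
        target = stims[:, r:r + h, c:c + w].mean(axis=(1, 2))  # patch contrast
        decoders[(h, w, r, c)] = G @ target

def reconstruct(x):
    """Combine decoded patch contrasts into a 10x10 image estimate."""
    img = np.zeros((side, side))
    cover = np.zeros((side, side))
    for (h, w, r, c), wt in decoders.items():
        img[r:r + h, c:c + w] += x @ wt
        cover[r:r + h, c:c + w] += 1.0
    return img / cover  # naive average where the paper learns weights

# Predict a novel stimulus never seen during training.
novel = rng.integers(0, 2, size=(side, side)).astype(float)
x_new = novel.reshape(-1) @ W_true + 0.5 * rng.normal(size=n_voxels)
print(np.abs(reconstruct(x_new) - novel).mean())  # mean reconstruction error
```

Even this crude version conveys why several hundred training images can suffice: each local decoder only has to learn the mapping for one small patch, not the full 2^100-state image space.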

Predicted visual input from fMRI activity in V1 and V2

Note that a retinotopic map, where the relative spatial position of visual input is reflected in the activity across the visual cortex, is not strictly required for this technique to work. What is required is that the response of each local element is consistent across similar patterns of input in the element’s receptive field. Furthermore, the spatial scale of pattern representation in early processing regions of human visual cortex is broad enough to be picked up by the fMRI scanner.
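A tiny self-contained demo of that point (again with made-up numbers): scramble away any spatial ordering of the measurement channels, and a linear decoder still recovers a local element’s contrast, because only response consistency matters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 300, 50

# One local element's contrast on each trial, reported by voxels through
# an arbitrary mixing vector -- no spatial arrangement whatsoever.
contrast = rng.random(n_trials)
mixing = rng.normal(size=n_voxels)
V = np.outer(contrast, mixing) + 0.1 * rng.normal(size=(n_trials, n_voxels))

# A least-squares linear decoder recovers the contrast regardless.
w, *_ = np.linalg.lstsq(V, contrast, rcond=None)
print("correlation:", np.corrcoef(V @ w, contrast)[0, 1])  # close to 1
```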

It would be interesting to see how much higher visual resolution could be predicted with an fMRI approach. Could this approach be adapted to predict input from the responses of cells with more complex receptive fields in higher cortical areas? Or, are those cells too intermingled with neighbors with vastly different response properties to be separable by fMRI?  Higher areas are vital for our own brains to rapidly perceive the contours of complex images. I’d also like to see how well non-contiguous images are predicted.

Cellular-resolution calcium imaging with bulk-loaded dyes has been used to map fine-grained receptive field structure in the visual and somatosensory cortices of lower animals. Is input prediction possible from these recordings? Is the input training set too limited? Could more complex input be predicted using fewer cells with complex receptive fields from higher visual areas (V2 and above)?





Preview : fMRI Based Visual Stimulus Reconstruction

10 12 2008

I’m going to try a new format for getting brand-new articles up on the site quickly. Often I want to post something but don’t have the time to read the paper carefully and then create a quality writeup, which creates a lot of posting inhibition. Rather than just sit on the paper, I’ll now post the paper, the link, and the abstract. Then, if and when I find the time, I’ll post an update and go more in depth. Here’s the first preview!

Please see the updated post on this paper.

Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders

Yoichi Miyawaki, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, Hiroki C. Tanabe, Norihiro Sadato, and Yukiyasu Kamitani

Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10×10-patch images (2^100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.