High-resolution deep-brain two-photon imaging

14 07 2009

Mark Schnitzer, who recently became an HHMI Investigator, has a new paper out on improved optics for his group’s miniature probe microscopes.  Mark has been pioneering these tiny probes for optical imaging in deep brain structures, and is one of the only games in town if you want to look deeper than a millimeter into the brain without sucking off all the ‘irrelevant mush’ between your microscope and the part of the brain you are interested in.  A beer-lubricated former Bell Labs employee (not Karel) years ago confided to me that he wasn’t too impressed with these microprobe systems, because “it’s just a GRIN lens on a stick”, ignoring the painstaking engineering and miniaturization of the motor, supports and light paths.  Still, the limitations were quite significant.  In this new Nature Methods communication, In vivo fluorescence imaging with high-resolution microlenses, Barretto et al. note that “The best Rayleigh resolution values achieved by two-photon fluorescence imaging with GRIN lenses are ~1.6 µm lateral and ~12 µm axial, yielding highly elongated point spread functions that impede acquisition of high-quality, three-dimensional image stacks.”  That elongated point spread function really blurs the image and dramatically reduces the power and brightness of the two-photon imaging mode.  I assume this is why most of the data I’ve seen Mark present was with one-photon illumination.  Now, the authors have addressed this shortcoming.
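To get a feel for why low-NA GRIN optics give such stretched focal volumes, here is a quick back-of-the-envelope Python sketch using generic textbook diffraction formulas (Rayleigh criterion laterally, paraxial depth of focus axially). The NA and wavelength values are my own illustrative guesses, not numbers from the paper:

```python
import math

# Generic textbook diffraction-limit estimates, not figures from the
# paper.  NA values below are purely illustrative.
def lateral_resolution_um(wavelength_um, na):
    return 0.61 * wavelength_um / na          # Rayleigh criterion

def axial_extent_um(wavelength_um, na, n=1.33):
    return 2.0 * n * wavelength_um / na ** 2  # paraxial depth of focus

for na in (0.45, 0.80):  # hypothetical low-NA GRIN vs. high-NA composite lens
    lat = lateral_resolution_um(0.92, na)     # ~920 nm Ti:sapphire excitation
    ax = axial_extent_um(0.92, na)
    print(f"NA {na:.2f}: lateral ~{lat:.2f} um, axial ~{ax:.1f} um, "
          f"elongation ~{ax / lat:.1f}x")
```

With the low-NA numbers, the axial extent comes out roughly an order of magnitude larger than the lateral resolution, in the same ballpark as the ~1.6 µm / ~12 µm figures quoted above; pushing the NA up is what collapses that elongation.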

 

Design of the lens system. The resolution is comparable to a standard 40x microscope objective.

 

 

They coupled a plano-convex lens to a custom-fabricated GRIN lens whose refractive properties were designed to compensate for the spherical aberration of the plano-convex lens. This system achieved near diffraction-limited resolution in both the lateral and axial dimensions.  Rather than the dim blur of previous iterations, the new system clearly resolves synaptic spines on the dendrites of fluorescent neurons buried deep in the hippocampus of live mice. The example system in the paper has a 1 mm diameter, while the previous systems were as narrow as 0.3 mm, so there is still plenty of room for further miniaturization, although I’m not sure how that would affect the light-gathering capacity of the lens. Importantly, Mark’s systems are finally being commercialized, so a much larger scientific population can start to benefit from the technology soon.

 

A hippocampal neuron (e) in a live mouse visualized with the new system (c) vs. the old system (d)





Symposium: A Revolution in Fluorescence Imaging

11 02 2009


This coming Tuesday and Wednesday (Feb 17th & 18th) at UCSD, there will be a symposium honoring Roger Tsien, featuring presentations from 32 former and current members of the Tsien Lab. The topics are quite diverse, concentrated on genetically-encoded indicators, but also featuring fluorescent cell-penetrating peptides for cancer therapy, photophore ligases for imaging synaptic development, and even a radical new design for the internal combustion engine.

The quality of speakers and subjects looks to be outstanding.  Here is a complete schedule.  You may notice that at 11:15 AM on Tuesday in Price Center East Ballroom, I will be presenting recent progress we have made in the development of genetically-encoded calcium indicators and their application to in vivo imaging.  Don’t miss that one!  🙂  Roger’s talk, which will assuredly be equal parts absorbing, humorous, and illuminating, is at 4pm Wednesday in the Price Center Theater.

If you live in Southern California and are interested in imaging technology, there isn’t a better place to be than this symposium.  If you can’t make it, Brain Windows will have a full write-up following the event.

Here is the unofficial schedule.

Tuesday February 17th – Price Center East Ballroom

9:00-9:05 Varda Levram-Ellisman

Opening

9:05-9:15 Palmer Taylor

Designing the next generation of genetically encoded sensors

9:15-9:30 Roger Heim

FRET for compound screening at Aurora/Vertex

9:30-9:45 Amy Palmer

Designing and using genetically encoded sensors: Lessons I learned from Roger

9:45-10:00 Robert Campbell

Beyond brightness: colony screens for fluorescent protein photostability and biosensor FRET changes

10:00-10:15 Colette Dooley

GFP sensors for reactive oxygen species: Tying up loose ends and looking forward.

10:15-10:30 Peter Wang

Fluorescent Proteins and FRET biosensors for visualizing cell motility and mechanotransduction

Fluorescent proteins in neuroscience

11:00-11:15 Brian Bacskai

Aberrant calcium homeostasis in the Alzheimer mouse brain

11:15-11:30 Andrew Hires

Watching a mouse think: Novel fluorescent genetically-encoded calcium indicators applied to in vivo brain imaging

11:30-11:45 Alice Ting

Imaging synapse development with engineered photophore ligases

11:45-12:00 Rex Kerr

3D calcium imaging in C. elegans

Clinical applications

12:00-12:15 Todd Aguilera

Activatable Cell Penetrating Peptides for use in clinical contrast agent and therapeutic development

12:15-12:30 Quyen Nguyen

Surgery with Molecular Fluorescence Imaging Guidance

Fluorescent probes (Chemistry)

1:30-1:45 Tito Gonzalez

Voltage-Sensitive FRET Probes & Applications

1:45-2:00 Paul Negulescu

From watching ions to moving them

2:00-2:15 Timothy Dore

Roger-Inspired Photochemistry: Releasing Biological Effectors with 2PE

2:15-2:30 Joe Kao

Electron Paramagnetic Resonance Imaging in Living Animals

2:30-2:45 Brent Martin

Chemical probes for studying protein acylation

2:45-3:00 Jianghong Rao

Non-GFP based probes for imaging of the hydrolytic enzyme activity

Cellular research with and without Fluorescent probes

3:15-3:30 Carsten Schultz

Cell membrane repair visualized by GFP fusion proteins

3:30-3:45 David Green

Transcriptomes and Systems Biology: application to early mammalian embryogenesis

3:45-4:00 Clotilde Randriamampita

Paradoxical aspects of T cell activation revealed with fluorescent proteins

4:15-4:30 Wen-Hong Li

Studying dynamic cell-cell communication in vivo by Trojan-LAMP

4:30-4:45 Martin Poenie

Aim and Shoot: Two roles for dynein in T cell effector function

4:45-5:00 Gregor Zlokarnik

From bla to blah, blah in 20 years

5:00-5:15 James Sharp

President, Zeiss MicroImaging GmbH

Wednesday February 18th – Leichtag 107

Cellular research with and without fluorescent proteins

9:00-9:15 David Zacharias

Fluorescent Proteins, Palmitoylation and Cancer: two out of three ain’t bad

9:15-9:30 Jin Zhang

Visualization of Cell Signaling Dynamics: A Tale of MAPK

9:30-9:45 Paul Sammak

Nuclear organization and movement in pluripotent stem cells measured by Histone GFP H2B

Branching out

9:45-10:00 Yong Yao

NIH Toolbox Program

10:00-10:15 Oded Tour

The Tour Engine – A novel Internal Combustion Engine with the potential to boost efficiency and cut emissions

Into the future

10:45-11:00 Xiaokun Shu

Visible and infrared fluorescent proteins: photophysics and engineering

11:00-11:15 Michael Lin

Engineering fluorescent proteins for visualizing newly synthesized proteins and improving FRET-based biosensors

11:15-11:30 Jeremy Babendure

Training our next generation of Fluorescent Protein Enthusiasts

Main Event – Price Center Theater

4:00-5:00 Roger Tsien

Chancellor’s Invitational Lecture: 2008 Nobel Prize in Chemistry






BrainStorm 1: The Calcium Memory Sensor

9 01 2009

As mentioned in the previous post, this is the first installment of BrainStorm, a section for ideas I have under development but don’t have the time to physically work on.  This section will contain organically developed ideas, organized by project.  Reader feedback is encouraged.

How can we identify the group of neurons that encode a particular thought?  

I don’t want to simply see correlations between the activity of a few scattered neurons and a given thought, but to identify the entire neuronal ensemble.  Which neurons are active at a precise moment in a task?  How are they wired together? Which are the drivers of activity?

Existing technology is inadequate to identify the entire neural ensemble that encodes a thought. Immediate early gene expression patterns have not been shown to be precisely correlated with brain activity, and have a temporal resolution on the order of minutes. Genetically encoded calcium indicators (GECIs) have the necessary temporal and spatial resolution, but their response is nearly as fleeting as a thought, making it impossible to simultaneously record from networks of thousands of possible participants with current microscopy techniques.

In BrainStorm 1, I will outline a technology, photoswitchable genetically-encoded calcium memory sensors, that can identify all the neurons in a large network that are active during user-specified time periods, arbitrarily brief or long.  I will propose four potential strategies for constructing these sensors, and detail practical considerations for sensor design, screening and application.
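To make the core logic concrete before diving into sensor designs, here is a toy numerical sketch of how such a memory sensor would behave: an AND gate between a user-controlled gating light and neural activity, with conversion to a permanent marked state. This is my own illustration, not one of the four strategies; every name and parameter is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of a photoswitchable calcium memory sensor: a fluorophore
# permanently converts to a "marked" state only when the gating light
# is on AND local calcium is elevated.  All parameters illustrative.
n_neurons = 1000
t_steps   = 500
p_active  = 0.02   # chance a neuron is active in a given time step
k_convert = 0.2    # per-step conversion probability when light AND
                   # calcium coincide

active = rng.random((t_steps, n_neurons)) < p_active  # activity raster
light = np.zeros(t_steps, dtype=bool)
light[200:260] = True                # user-specified marking window

marked = np.zeros(n_neurons, dtype=bool)
for t in range(t_steps):
    if light[t]:
        marked |= active[t] & (rng.random(n_neurons) < k_convert)

# Neurons active inside the window end up persistently labeled, so the
# whole ensemble can be read out later with slow high-resolution imaging.
in_window = active[200:260].any(axis=0)
print(f"{marked.sum()} neurons marked; "
      f"all were active in the window: {bool((marked <= in_window).all())}")
```

The key property the toy captures is that the mark persists after the window closes: the fleeting calcium signal is converted into a stable label that can be imaged at leisure across thousands of cells.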





Updated: fMRI Based Visual Stimulus Reconstruction

11 12 2008

A simple view of what the brain does is acquire input, process it, then produce output. One strategy for understanding what processing takes place is to record the patterns of brain activity while showing many patterns of input, then see if you can use the information gained to predict a novel input, given the pattern of brain activity. The canonical example of this approach is visual input reconstruction based on recorded spike trains in the visual system of the blowfly.

The blowfly is a relatively simple system (though quite efficient) with a tiny brain. Could a similar approach work in humans?  Although we can’t drop electrodes into the visual cortex (usually), we can put people in fMRI scanners to visualize the pattern of blood oxygenation, which is correlated with neural activity.

In Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders, Miyawaki et al. demonstrate visual input prediction from fMRI responses. Using 3 mm cubic voxels, the group measured the activity level across early visual cortex (V1-V4) for numerous 10×10 binary patterns of visual stimuli. They correlated voxel activity with the stimulus at 1×1, 1×2, 2×1 and 2×2 patch scales across hundreds of visual test patterns, so that the activity represented local image elements. Then they displayed novel visual input and used a linear combination of the local image element responses to predict the visual input from the brain activity alone. It is noteworthy that only several hundred training images were required before visual input prediction was possible.
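The flavor of the decoding step fits in a few lines of numpy. This is a deliberately simplified sketch on simulated data: the real study trains separate local decoders at multiple scales and combines them with learned weights, whereas this toy uses a single ridge-regression decoder, and all sizes and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stimulus reconstruction: learn a linear map from simulated
# "voxel" activity back to a 10x10 binary image.  Sizes illustrative.
n_pix, n_vox, n_train = 100, 300, 440

stims = (rng.random((n_train, n_pix)) < 0.5).astype(float)
W_true = rng.normal(size=(n_pix, n_vox))   # hidden stimulus->voxel map
voxels = stims @ W_true + 0.5 * rng.normal(size=(n_train, n_vox))

# Ridge regression: decoder D maps voxel patterns back to pixels.
lam = 10.0
G = voxels.T @ voxels + lam * np.eye(n_vox)
D = np.linalg.solve(G, voxels.T @ stims)   # shape (n_vox, n_pix)

# Reconstruct a held-out stimulus from its (noisy) voxel response.
test = (rng.random(n_pix) < 0.5).astype(float)
resp = test @ W_true + 0.5 * rng.normal(size=n_vox)
recon = (resp @ D) > 0.5
print(f"pixels correct: {(recon == test).mean():.0%}")
```

Even this crude version recovers the held-out pattern once the training set is a few times larger than the voxel count, which is consistent with the paper's observation that several hundred random training images suffice.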

Predicted visual input from fMRI activity in V1 and V2

Note that a retinotopic map, where the relative spatial position of visual input is reflected in the activity across the visual cortex, is not strictly required for this technique to work. What is required is that the response of each local element is consistent across similar patterns of input in the element’s receptive field. Furthermore, the spatial scale of pattern representation in early processing regions of human visual cortex is broad enough to be picked up by the fMRI scanner.

It would be interesting to see how much higher a visual resolution could be predicted with an fMRI approach. Could this approach be adapted to predict input from the responses of cells with more complex receptive fields in higher cortical areas? Or are those cells too intermingled with neighbors with vastly different response properties to be separable by fMRI?  Higher areas are vital for our own brains to rapidly perceive the contours of complex images. I’d also like to see how well non-contiguous images are predicted.

Cellular resolution calcium imaging with bulk-loaded dyes has been used to map fine-grained detail of receptive fields in the visual and somatosensory cortices of lower animals. Is input prediction possible from these recordings? Is the input training set too limited? Could more complex input be perceived using a smaller number of complex cells from higher visual areas (V2 and above)?





Preview : fMRI Based Visual Stimulus Reconstruction

10 12 2008

I’m going to try a new format for getting brand-new articles up on the site quickly. Often, I want to post something but don’t have the time to read the paper carefully and then create a quality writeup. This provides a lot of posting inhibition. Rather than just sit on the paper, I’ll now post the paper, the link, and the abstract.  Then, if and when I find the time, I’ll post an update and go more in depth.  Here’s the first preview!

Please see the updated post on this paper.

Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders

Yoichi Miyawaki, Hajime Uchida, Okito Yamashita, Masa-aki Sato, Yusuke Morito, Hiroki C. Tanabe, Norihiro Sadato and Yukiyasu Kamitani

Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 × 10-patch images (2^100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.