Raw Data : Vesicular Release from Astrocytes, SynaptopHluorange

15 11 2008

When I was working on my Ph.D. thesis, I was trying to find some biological question to definitively answer with GluSnFR, my glutamate-sensitive fluorescent reporter. One possibility was the study of glutamate release from astrocytes. Around that time, 2003/2004, there was increasing evidence that glutamate was not just scavenged by astrocytes, but was also released from astrocytic vesicles in response to calcium elevations within the cell. Existing methods for measuring this release were somewhat crude, so it seemed a great test system for GluSnFR.

Unfortunately, since there seemed to be no specialized areas on the astrocyte where the vesicles fused, and the release rate was relatively slow, we were unable to detect glutamate release with GluSnFR. I thought this might be a problem of not knowing when and where to look. So my collaborator, Yongling Zhu, and I expressed pHluorins fused to VAMP or to synaptophysin in astrocyte cultures. When we looked at them under the microscope, they just looked green, no action…

But then we left the excitation light on for a few minutes. I happened to look back into the scope after they had been bathing in bright blue light and was astonished. I could directly see, by eye, spontaneous bursts of fluorescence across the cells. It was absolutely magnificent. The long application of light had bleached all of the bright, surface-expressed pHluorins, but the pH-quenched pHluorins in the vesicles were resistant to bleaching. On this dimmer background, the fusion events were plain as day.

Unfortunately, the green color overlapped with the emission of GluSnFR, so we couldn’t use it as a spatiotemporal marker of when and where to look for glutamate release. We tried using some pH-sensitive precursors to mOrange and mOrange2, developed by Nathan Shaner, but these seemed to block the release events. Since then, others have shown the functional relevance of glutamate release from astrocytes, and I turned the focus of GluSnFR measurements to synaptic spillover. This was one of the projects that was tantalizingly close, but got away. This movie of VAMP-pHluorin is almost five years old now, but it still looks cool… Enjoy!

If you are curious, this is what the Synaptophysin-mOrange looked like when we expressed it in hippocampal neuron cultures. Ammonium chloride caused a massive fluorescence increase by alkalizing the synaptic vesicles. Unfortunately, we were never able to see release via electrical stimulation. Details are in my thesis. Maybe someone else wants to give it a shot?





SLICK labeling and new FPs

1 07 2008

There is a nice writeup of the single-neuron labeling with inducible Cre-mediated knockout (SLICK) paper from Guoping Feng’s lab over at the Alzheimer Research Forum. The method simultaneously knocks out a gene in a small number of cells, while highlighting the knocked-out cells with a cytosolic fluorescent protein. In a comment at the Schizophrenia Research Forum, Joseph Gogos points out a similar technique his lab published last year in Current Biology.

Also in the writeup is coverage of the new fluorescent protein variants from the Tsien Lab. These include mOrange2, made by Nathan Shaner, which is a much more photostable version of mOrange. This should immediately replace mOrange in most constructs. Also of note is TagRFP-T, from Michael Lin and his trusty undergraduate assistant Michael McKeown. TagRFP-T is an extremely photostable derivative of the Evrogen protein TagRFP, and was discovered by screening TagRFP mutants in bacterial colonies on a solar simulator. Toxicity in sensitive cells (in vivo neurons) hasn’t been fully determined yet, but in vitro these new FPs all look great. Now I wish they would make a super-bleach-resistant Citrine for my FRET constructs.





3D and Multicolor Superresolution Imaging

19 02 2008

Progress in superresolution imaging is still moving very quickly. Here are two more great papers in the field.

First, Huang et al. from Xiaowei Zhuang’s group published a Science paper that moves superresolution imaging into three dimensions. Previously, STORM and PALM techniques were most useful for thin sections where the z-axis depth is well-constrained. Breaking the diffraction limit in the z-dimension was thought to possibly require recording from multiple angles, standing-wave TIRF or optical lattice microscopy. Instead, the authors simply inserted a weak cylindrical lens between the imaging lens and the objective. This distorted the shape of the point spread function in the x- and y-dimensions, dependent on the z-axis distance from the focal plane. By examining the shape of each photoactivated molecule’s ‘photon cloud’, they were able to unambiguously assign a z-axis depth. This was a simple and clever way to map a third dimension of information on top of the two they were recording.
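To make the astigmatism idea concrete, here is a toy Python sketch of how a z-coordinate can be read out from the fitted x- and y-widths of a single molecule’s image. The defocusing model and all numerical values are hypothetical stand-ins, not the paper’s actual calibration; only the general scheme (oppositely offset focal planes for x and y, lookup against calibration curves) follows the approach described above.

```python
import numpy as np

def psf_width(z, w0, z_focus, d):
    # Simple defocusing model: width grows with distance from that
    # axis's focal plane. All parameters here are illustrative only.
    return w0 * np.sqrt(1.0 + ((z - z_focus) / d) ** 2)

def calibration(z):
    # The cylindrical lens offsets the x and y focal planes in opposite
    # directions, so the PSF is elongated in one axis above focus and
    # in the other below (hypothetical numbers, in nm).
    wx = psf_width(z, w0=300.0, z_focus=+200.0, d=400.0)
    wy = psf_width(z, w0=300.0, z_focus=-200.0, d=400.0)
    return wx, wy

def localize_z(wx_meas, wy_meas):
    # Assign z by finding the calibration point closest to the measured
    # (wx, wy) pair; distance taken in sqrt-width space.
    z_grid = np.linspace(-500.0, 500.0, 2001)
    wx_cal, wy_cal = calibration(z_grid)
    dist = (np.sqrt(wx_meas) - np.sqrt(wx_cal)) ** 2 \
         + (np.sqrt(wy_meas) - np.sqrt(wy_cal)) ** 2
    return z_grid[np.argmin(dist)]

# A simulated molecule sitting 150 nm above the focal plane:
wx, wy = calibration(150.0)
z_hat = localize_z(wx, wy)  # recovers ~150 nm
```

Because the two widths cross at different z-positions, the (wx, wy) pair is unambiguous within the working window, which is why a single 2D image suffices for the third dimension.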


Due to increasing point spread widths at greater depth, the localization accuracy decreases with distance from the focal plane. Therefore, they only examined structures within a 500nm window around the focal depth. Z-scanning the focal plane could increase the depth range, though this might waste signal by photobleaching out-of-focus fluorophores. However, this is less of a concern for STORM than for PALM, as the cyanine dyes used for STORM can be cycled on many times, while the EosFP used in PALM permanently bleaches. Of course, if a dye molecule moves position between on-cycles, this will degrade the effective resolution of the STORM approach.

PALM proponents also have a new paper out. Shroff et al. from Eric Betzig’s group show an alternative method of dual-color superresolution imaging. They co-express genes labeled with photoactivatable tandem-dimer EosFP and with reversibly photoswitchable Dronpa or PS-CFP. The EosFP-tagged molecules are first photoactivated (405nm illumination), localized (561nm) and bleached. This process photoactivates a significant population of the Dronpa or PS-CFP molecules. After all the EosFP has been bleached, the activated second label is switched back to the dark state (Dronpa) or photobleached (PS-CFP) with 488nm light. The remaining second label can then be specifically photoactivated, localized and bleached.


A major advantage of this dual-color PALM technique over Zhuang’s or Hell’s two-color photoswitching approaches is that all the fluorescent reagents are genetically encoded rather than antibody-labeled. This permits more precise localization of the label to the target of interest. It also allows greater label packing density and milder fixation. A disadvantage is that genetic overexpression could cause mislocalization of the target, or artificial aggregation due to residual dimerization tendencies of the fluorescent tags. However, unnatural aggregation can also be induced with antibody labeling. Perhaps adaptation of Don Arnold’s FP-tagged intrabodies could address this concern.





Pulse shaping for 2-photon signal enhancement

18 02 2008

Gains in the signal-to-noise ratios of organic dyes and genetically encoded indicators often come in modest steps, following screening of large numbers of compounds or clones. Improvements are usually specific to individual chromophores, leading to the pigeonholing of development efforts onto a small handful of indicators that have already undergone systematic optimization (i.e. cameleons, G-CaMP and troponin-based GECIs). Indicator photobleaching imposes strict limits on the amount of information that can be extracted by optical indicators. Improvement of specific indicators and their constituents is a worthy and necessary goal, but more generalizable improvements can be made by changing the nature of the illumination source. A series of papers from a variety of groups has shown that careful manipulation of the structure of pulsed laser illumination can produce dramatic improvements in signal/noise and photobleaching during non-linear (two-photon) imaging. This is generalizable to numerous optical indicators. Reduction of photobleaching and photoinduced tissue damage will be essential for continuous optical monitoring of sparse neural activity.

My first encounter with these techniques came in 2003, during a lab presentation by Atsushi Miyawaki, who showed intriguing results with two-photon illumination of GFP. Kawano et al. shined ultra-short (28 femtosecond) pulses from a Ti-Sapphire laser on a plate of immobilized GFP. Due to the uncertainty principle, these ultra-short pulse durations cause a broad spectral spread (~100nm) in the laser pulse. They then actively modulated the phase of different frequency bands of the pulse. The interesting part is that they coupled this modulation to a feedback genetic algorithm that sought to increase the ratio of the GFP fluorescence to the intensity of the laser input. Over several hundred iterations of modulation, the system learned how to dramatically increase the output fluorescence over input power by tuning the phase of the frequency components. Using these optimally shaped pulses, they reduced the photobleaching rate by a factor of four! This was an impressive result, but it is unclear how useful the technique would be for live samples, with their heterogeneous aqueous environments. The tuning parameters might be less stable across a non-uniform sample.
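For intuition, here is a minimal Python sketch of the closed-loop idea: a toy evolutionary search that tunes a vector of per-band phases to maximize a simulated fluorescence/power ratio. The objective function is a made-up stand-in (it peaks when the bands are phase-aligned); in the real experiment, the feedback signal is the measured GFP fluorescence itself and a pulse shaper applies the phases physically.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BANDS = 16  # number of frequency bands whose phases we control

def fluorescence_ratio(phases):
    # Toy stand-in for measured fluorescence / input power: maximal (1.0)
    # when all spectral components are phase-aligned.
    field = np.exp(1j * phases).sum()
    return (abs(field) / N_BANDS) ** 2

def optimize(generations=300, pop_size=20, sigma0=0.3):
    # Simple evolutionary loop: score, select an elite, mutate.
    pop = rng.uniform(0.0, 2 * np.pi, size=(pop_size, N_BANDS))
    for g in range(generations):
        scores = np.array([fluorescence_ratio(p) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]  # keep best quarter
        sigma = sigma0 * 0.99 ** g                        # anneal mutation size
        parents = elite[rng.integers(len(elite), size=pop_size)]
        pop = parents + rng.normal(0.0, sigma, parents.shape)
        pop[0] = elite[-1]                                # elitism: best survives
    scores = np.array([fluorescence_ratio(p) for p in pop])
    return scores.max()

best = optimize()  # climbs well above the ~0.06 expected of random phases
```

The elitism line is what makes the loop monotonic, mirroring the way the experimental feedback never has to give back ground it has gained.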


The above paper raises intriguing questions on the nature of two-photon excited states and photobleaching. GFP and other fluorescent proteins have multiple bleaching modes, some permanent, some dark or UV-reversible. A clearer understanding of the photochemistry of bleaching could lead to improved illumination pulse designs that keep the chromophore away from these undesired states.

Early last year, Stefan Hell’s group demonstrated a dramatic reduction in one- and two-photon photobleaching by avoiding recurrent excitation of the GFP chromophore when it was in a dark absorbing state. Standard two-photon imaging procedure is to illuminate with a Ti-Sapphire laser pulsed at 80MHz, with an interpulse gap of 12.5ns. This gap is five times longer than the 2.4ns fluorescence lifetime of EGFP, giving the chromophore plenty of time to emit a photon and decay from the excited singlet S1 state. But is the singlet state the precursor to most photobleaching? Donnert et al. varied the pulse rate from 40 to 0.5MHz and discovered that photobleaching was dramatically reduced at the lower pulse rates, especially below 1MHz (1us interpulse interval). Under one-photon illumination, the total photons extracted from GFP before bleaching increased 20-fold, while those from the rhodamine dye Atto532 increased 8-fold. This suggests that the primary precursor to photobleaching is not the S1 state, but rather photon absorption during a dark triplet state, T1, which has a relatively long lifetime of ~1us. Don’t illuminate during this state, and prevent most photobleaching! Under two-photon illumination (800nm), even greater reductions in photobleaching, of 25- and 20-fold respectively, took place. This is particularly important because the much higher illumination power used in 2p excitation normally causes a dramatic, non-linear increase in the rate of photobleaching over 1p imaging in the region of focus.
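A back-of-the-envelope model makes the triplet argument concrete. Suppose each excitation sends the chromophore into T1 with some probability, and bleaching only occurs if the next pulse arrives before T1 has decayed. The rate constants below are made-up illustrative values; only the ~1us triplet lifetime comes from the discussion above.

```python
import math

TAU_T = 1e-6       # T1 triplet lifetime, ~1 us (from the text)
P_ISC = 0.01       # assumed probability of crossing into T1 per excitation
P_BLEACH_T = 0.05  # assumed bleach probability when a pulse hits a T1 molecule

def bleach_per_excitation(pulse_rate_hz):
    dt = 1.0 / pulse_rate_hz                 # interpulse interval
    p_still_in_t1 = math.exp(-dt / TAU_T)    # T1 not yet decayed at next pulse
    return P_ISC * p_still_in_t1 * P_BLEACH_T

fast = bleach_per_excitation(80e6)   # conventional 80 MHz train (12.5 ns gaps)
slow = bleach_per_excitation(0.5e6)  # 0.5 MHz train (2 us gaps)
# fast / slow ≈ 7.3: slowing the train lets T1 drain before the next pulse
```

Even in this crude model, the fold-reduction depends only on the interpulse intervals relative to the triplet lifetime, which is why dropping below ~1MHz is where the payoff appears.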


What is special about the T1 triplet state that makes it more prone to causing photobleaching? Is it simply that the longer lifetime gives a greater opportunity for an additional photon to hit, jumping to T2 and inducing a photochemical breakdown of the chromophore or the surrounding residues? Does T1 have a broader range of vibrational energies that can more easily engage the variety of photobleaching reactions than S1? How does the photobleaching rate of S2 compare to T2?

Despite the impressive reduction in photobleaching, and hence the greater S/N for a given bleach rate, Hell’s approach has a major drawback. Slowing the pulse train down ~100-fold also slows acquisition down ~100-fold. Therefore, this technique is most useful for fixed specimens; real-time, high-resolution imaging of dynamic processes would be seriously degraded. There are work-arounds, such as wide-field pulsed illumination, rapid laser sweeping or multipoint parallel illumination, but these require additional technical development to make them feasible for the average, or even well-above-average, investigator.

Is there any related solution that can be easily applied to imaging live, dynamic cells? In this month’s Nature Methods, Ji et al. from Eric Betzig’s group present an ‘exciting’ approach. They note that with conventional two-photon illumination, most of the available laser power is wasted, intentionally blocked to reduce photodamage. As alluded to above, photodamage increases non-linearly with illumination intensity, making two-photon illumination methods particularly harmful. The authors demonstrate that this damage increases in proportion to intensity to the ~2.4 power. To attenuate this effect, they used a series of mirrors to split the single ultra-short, intense pulse (140 femtosecond) in half, and half again and again and again… They end up with 128 pulses of 1/128th the full intensity, nearly evenly spaced every 37 picoseconds. This entire pulse group has a duration of 12ns, and with an 80MHz pulse cycle (12.5ns inter-pulse interval) it illuminates the chromophore with a steady stream of relatively dim pulsed light.
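The payoff of splitting can be seen with some quick arithmetic. Two-photon signal scales as intensity squared, while damage scales as intensity to the ~2.4 power (the exponent from the paper); splitting a pulse N ways and then boosting intensity to recover the original signal leaves a net damage reduction. The sqrt(N) boost below is my own illustrative bookkeeping, not a protocol from the paper.

```python
N = 128      # number of sub-pulses after splitting
ALPHA = 2.4  # photodamage exponent reported in the paper

# Naive split: each sub-pulse carries 1/N the intensity.
signal = N * (1.0 / N) ** 2        # two-photon signal drops to 1/N
damage = N * (1.0 / N) ** ALPHA    # damage drops to 1/N**1.4

# Boost intensity by sqrt(N) to restore the original signal level:
boost = N ** 0.5
signal_matched = N * (boost / N) ** 2      # back to 1.0 (unsplit level)
damage_matched = N * (boost / N) ** ALPHA  # = N**-0.2 ≈ 0.38, a ~2.6x reduction
```

Any damage exponent above 2 yields a net win in this bookkeeping; the closer the exponent is to 2, the more splitting is needed for a given benefit.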


This split-pulse illumination dramatically enhances acquisition speed and signal/noise. Images acquired at a rate of 0.4us/pixel with splitting look far clearer than those acquired at 25.6us/pixel without splitting. Photobleaching is reduced by a factor of four, and acute photodamage is also reduced. Additional splitting may be possible and could further improve the photobleaching attenuation. Importantly, they demonstrate this technique with GFP in fixed brain slices and in live worms, and image dynamic responses with a calcium dye in living hippocampal slices. This technique appears to let you eat your cake and have it too. The implementation of the pulse splitting is modular, appears relatively simple for those with customizable two-photon instruments, and works with existing Ti-Sapphire lasers. I anticipate rapid adoption by serious imaging labs.

Each of the above advances attacks the problem of indicator photobleaching by a different approach, and each focuses on a different aspect of the photochemistry. Theoretically, one could even combine all three for maximum photon collection efficiency before photobleaching, though this would also require combining the drawbacks of each. Photobleaching will continue to be a major concern in the imaging of dynamic processes, particularly when the signal is not synchronized with the onset of image acquisition. These techniques show substantial progress towards alleviating this concern, and I’m heartened to see a number of excellent labs are focusing so much energy on it.

A final question: How will each of these techniques affect acceptor photobleaching (I’m looking at you, Citrine and Venus) in FRET imaging experiments? Do the same processes apply when the excitation is coming from a FRET donor?