GCaMP6 plasmids at Addgene

8 11 2012

GCaMP6 variants are on Addgene. Three flavors, tuned for fast kinetics or big signals. They give bigger responses than OGB-1; some are MUCH bigger. The responses to drifting gratings in visual cortex are spectacular. Sorry, no pics for now. Hopefully the reviewers will be nice so we can all read about it soon. There is still work to be done to get true 1 AP resolution when simultaneously imaging large populations of neurons, but for single-neuron imaging in vivo, these have 1 AP resolution. If you have been waiting for the GCaMPs that will blow your expectations away, these are them.

From the SfN abstract:

Using structure-guided mutagenesis and high-throughput screening, we increased the fluorescence change in response to single action potentials (APs) by >10-fold compared to GCaMP3. We also accelerated the kinetics by ~2-fold. These new GECIs reliably report single APs in single trials in vivo with near 100% accuracy. In the mouse visual cortex, we detected ~5-fold more visually responsive neurons. The sensitivity, dynamic range and speed of the new GECIs exceed those of the synthetic indicator OGB-1. The improved sensitivity further facilitated reliable measurement of synaptic calcium signals in the dendrites of pyramidal cells and parvalbumin (PV)-positive interneurons in vivo. Hot spots of orientation-selective domains can be resolved both in single pyramidal cell spines and small segments of PV cell dendrites. These improved GECIs will permit a more complete description of neuronal circuit function and enable long-term functional imaging of single synapses.





Three ways of looking at touch coding

20 09 2012

At SfN, a block of three posters by me, Simon Peron, and Daniel O’Connor will showcase three ways to approach the problem of touch coding.

My work on whisker force measurements, and single-cell and silicon-probe-based cortical recordings during active object localization:

Program#/Poster#: 677.18/KK18
Presentation Title: Encoding whisking-related variables in the mouse barrel cortex during object localization
Location: Hall F-J
Presentation time: Tuesday, Oct 16, 2012, 2:00 PM – 3:00 PM
Authors: *S. A. HIRES, D. O’CONNOR, D. GUTNISKY, K. SVOBODA;
Janelia Farm Res. Campus, ASHBURN, VA

Simon Peron’s work on recording a complete representation of touch using in vivo imaging with new GCaMP variants during a similar behavior:

Program#/Poster#: 677.12/KK12
Presentation Title: Towards imaging complete representations of whisker touch in the mouse barrel cortex
Location: Hall F-J
Presentation time: Tuesday, Oct 16, 2012, 4:00 PM – 5:00 PM
Authors: *S. P. PERON1, V. IYER2, Z. GUO2, T.-W. CHEN2, D. KIM2, D. HUBER3, K. SVOBODA2;

Daniel O’Connor’s work on constructing synthetic perception of touch and object localization via cortical cell-type-specific optogenetic stimulation during behavior:

Program#/Poster#: 677.06/KK6
Presentation Title: Neural coding for object location revealed using synthetic touch
Location: Hall F-J
Presentation time: Tuesday, Oct 16, 2012, 2:00 PM – 3:00 PM
Authors: *D. H. O’CONNOR1, S. A. HIRES1, Z. GUO1, Q.-Q. SUN2, D. HUBER1, K. SVOBODA1;

This is a must-see session for anyone interested in touch coding, the whisker system, in vivo cortical imaging, or synthetic perception via optogenetics.

I hope to see you there.





Journal Club : Classic Single Unit Physiology in Barrel Cortex

29 04 2011

This one is for the aficionados. Here is a little review of four classic single-unit physiology papers investigating the response properties and information flow from whisker through thalamus and into cortex. It’s quite interesting to compare these data, taken from sedated or anesthetized rats, to my own recordings in awake, behaving animals. That’s a story for another time and publication venue though 🙂





Rapid warping of two-photon illumination wavefronts

16 02 2011

A short paper in Optics Express looks interesting. In A high speed wavefront determination method based on spatial frequency modulations for focusing light through random scattering media, Meng Cui presents a method for rapidly determining the optimal wavefront to ‘cancel out’ the scattering when 785 nm light passes through turbid media. In his example, a glass diffuser was used, but the clear goal of this work is to replace the glass with a brain.

To understand why this is so important for in vivo two-photon imaging, let’s review how two-photon imaging works. Light from a laser is focused to a point and swept across the field in a raster. The resulting fluorescence is of a different wavelength and can thus be filtered out from the excitation light. For each voxel, all the fluorescence that re-enters the objective is collected, regardless of its source. The total amount of fluorescence collected at that timepoint in the sweep is assigned as the brightness of that voxel. Since the user knows where the laser was being aimed, scattering of the fluorescence emission may reduce the brightness but will not blur the image. However, scattering of the excitation light can dramatically reduce the excitation at the target voxel while increasing the off-target excitation of its neighbors. This causes a rapid increase in background fluorescence and blur at increasing brain depth.
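That asymmetry (emission scattering only dims, excitation scattering blurs) can be illustrated with a toy simulation. This is my own sketch, not from the paper; the `acquire` function and its numbers are invented for illustration:

```python
import numpy as np

# Toy model of raster-scanned two-photon acquisition. All names and
# numbers here are invented for illustration.
n = 32
sample = np.zeros((n, n))
sample[10:14, 10:14] = 1.0            # a small fluorescent structure

def acquire(excitation_spread):
    """Scan a toy laser over the sample; pool ALL collected fluorescence
    into the voxel the laser was aimed at."""
    image = np.zeros((n, n))
    for y in range(n):
        for x in range(n):
            # Excitation profile: ballistic (a single point) when
            # excitation_spread == 0, smeared over neighbors otherwise.
            y0, y1 = max(0, y - excitation_spread), min(n, y + excitation_spread + 1)
            x0, x1 = max(0, x - excitation_spread), min(n, x + excitation_spread + 1)
            area = (y1 - y0) * (x1 - x0)
            # Emission scattering could only rescale this pooled sum (a dimmer
            # image); it cannot move signal between voxels, so it never blurs.
            image[y, x] = sample[y0:y1, x0:x1].sum() / area
    return image

sharp = acquire(excitation_spread=0)   # faithful reproduction of the sample
hazy = acquire(excitation_spread=3)    # off-target excitation: background + blur
```

With ballistic excitation the image reproduces the sample exactly; spreading the excitation lights up scan positions whose neighborhoods merely overlap the structure, which appears as background and blur, just as at depth in the brain.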

The vasculature was labeled by injecting fluorescein dextran into the circulatory stream. The light source was a regenerative amplifier. “0 µm” corresponds to the top of the brain. Left, XZ projection. Right, examples of XY projections. Note the increase in background fluorescence deeper than 600 µm in the brain due to out-of-focus 2PE. (Theer et al., 2003)

Previous work has shown that one can use adaptive optics to adjust the phase of the wavefront of the excitation light to correct for this scattering. However, determination of the optimal wavefront for a field of view took minutes, which could be problematic for imaging in an awake animal: any changes in the precise position of the brain might change the optimal wavefront. Ideally, one would want a system that could optimize the wavefront every second, or even before every frame of acquisition (typically 4-8 Hz in a raster-scan in vivo experiment).

Scattering in the brain warps two-photon excitation light, but adaptive optics can correct this.

I’ll let Meng Cui explain the technique in his own words:

Elastic scattering is the dominant factor limiting the optical imaging depth in tissues. Take gray matter as an example, at 800 nm the scattering coefficient is 77/cm and the absorption coefficient is 0.2/cm. If there is a way to suppress scattering, the optical imaging depth could be greatly improved. Despite the apparent randomness, scattering is a deterministic process. A properly engineered wave can propagate inside scattering media and form a focus, a well understood phenomenon in the time reversal and optical phase conjugation (OPC) studies…
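To put those coefficients in perspective, here is a quick back-of-the-envelope calculation (my own, using the gray-matter numbers quoted above):

```python
import math

# Numbers quoted in the paper for gray matter at 800 nm.
mu_s = 77.0   # scattering coefficient, 1/cm
mu_a = 0.2    # absorption coefficient, 1/cm

# Mean free path between scattering events, in micrometers:
ls_um = 1e4 / mu_s   # roughly 130 um

# Beer-Lambert: fraction of photons still ballistic (neither scattered nor
# absorbed) at depth z -- these are the only photons that focus cleanly.
ballistic = {z_um: math.exp(-(mu_s + mu_a) * z_um * 1e-4)
             for z_um in (100, 300, 600)}
for z_um, frac in ballistic.items():
    print(f"{z_um} um: {frac:.3f}")
```

A photon scatters on average every ~130 µm, so by 600 µm depth only about 1% of the excitation light is still ballistic, which is why image quality degrades so quickly with depth.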

For applications on biological tissues, acquisition time on the order of one millisecond (ms) per degree of freedom is desired. Deformable mirrors can provide a high modulation speed. However the degrees of freedom are rather limited. A phase-only SLM can provide about one million degrees of freedom at a much lower modulation speed. In this work, I present a novel method, capable of providing as many degrees of freedom as a SLM with a data acquisition time of one ms per degree of freedom. The method was employed to focus light through a random scattering medium with a 400 ms total data acquisition time, ~three orders of magnitude faster than the previous report [25].

The essence of a COAT system is to phase modulate different input spatial modes while detecting the output signal from the target. To greatly improve the operation speed, the experiment requires a device that can provide fast phase modulation and can access a large number of spatial modes very quickly. To meet these two requirements, a pair of scanning Galvanometer mirrors was used to quickly visit different modes in the spatial frequency domain or k space, and a frequency shifted reference beam was provided for a heterodyne detection. The wavefront profile was first determined in k space and then transformed to the spatial domain. The spatial phase profile was displayed on a SLM to focus light onto the target. In such a design, the number of degrees of freedom is limited by the number of pixels on the SLM and the experiment speed is determined by the scanning mirror speed…

Compared to existing techniques, the reported method can provide both a high operation speed and a large number of degrees of freedom. In the current design, the operation speed is limited by the scanning mirror speed and the maximum number of degrees of freedom is limited by the SLM pixel number. In this demonstration, 400 spatial modes in k space were visited and the determined phase profile was displayed on the SLM. Depending on the scattering property of the media, more (up to 1920 x 1080) or less number of degrees of freedom can be used to optimize the focus quality and the operation speed.

Using a stepwise position scanning, the method achieves an operation speed of one ms (400 μs transition time + 600 μs recording time) per spatial mode, ~three orders of magnitude faster than the previous report. Using a continuous position scanning and a faster position scanner such as resonant scanning mirrors, polygon mirror scanners, or acousto-optic deflectors, the operation speed can be potentially increased by at least one order of magnitude. It is anticipated that the reported technique will find a broad range of applications in biomedical deep tissue imaging.
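For intuition about how phase-modulating input modes while watching the target signal lets you undo the scrambling, here is a toy numpy sketch of sequential phase-stepping wavefront optimization. This is my own simplification of the general idea (closer to the classic one-mode-at-a-time COAT approach than to Cui’s fast parallel k-space heterodyne scheme), and every name in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes = 64                  # toy input spatial modes (Cui measured 400 k-space modes)
scatter_phase = rng.uniform(0, 2 * np.pi, n_modes)   # random phase screen: the "medium"
amps = np.full(n_modes, 1.0 / np.sqrt(n_modes))

def focus_intensity(correction):
    # Field at the target = coherent sum over all modes after the medium
    # plus the candidate correction displayed on the modulator.
    field = np.sum(amps * np.exp(1j * (scatter_phase + correction)))
    return np.abs(field) ** 2

# Sequentially phase-step each mode and keep the phase that maximizes the
# signal from the target -- the core idea behind COAT-style focusing.
steps = np.linspace(0, 2 * np.pi, 16, endpoint=False)
correction = np.zeros(n_modes)
for k in range(n_modes):
    trial = correction.copy()
    best, best_val = correction[k], focus_intensity(correction)
    for psi in steps:
        trial[k] = psi
        val = focus_intensity(trial)
        if val > best_val:
            best, best_val = psi, val
    correction[k] = best

before = focus_intensity(np.zeros(n_modes))
after = focus_intensity(correction)
print(f"focus enhancement: {after / before:.1f}x")
```

The phase step that maximizes the target signal is exactly the one that cancels that mode’s scattering phase, so after one pass the modes interfere constructively at the focus. The speed problem Cui attacks is that visiting modes one at a time with a slow modulator takes minutes; his k-space scanning plus heterodyne detection gets the per-mode measurement down to a millisecond.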





Quick Picks : Brainbow flies

8 02 2011

Nature Methods published two papers which extend Brainbow-like techniques of stochastic multicolored neuronal labeling into fruit flies. Nature’s summary explains the two methods.

dBrainbow expression examples


The first technique, called dBrainbow, was developed by Julie Simpson, a neuroscientist at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Virginia, and her colleagues. This method uses enzymes called recombinases to randomly delete some of the colour-producing genes from the string, leaving different genes next to the promoter regions in different cells. Individual cells are therefore uniquely coloured and so can be easily distinguished…

dBrainbow genetic scheme

The second technique, called Flybow, was developed by Salecker and her colleagues. They used an enzyme that ‘flips’ pairs of colour-producing genes on the string, leaving different genes next to the promoter region. The ‘flipping’ enzyme is also a recombinase, and so after being inverted, some of the colour-producing genes are randomly deleted. This ensures that all the different genes on the string can potentially end up next to the promoter, and be displayed by individual modified neurons. Flybow uses a single string of four colours: red, green, blue and yellow.

Flybow genetic scheme

These techniques will find use in building the structural and functional connectome of the fly.






UPDATE: DIADEM Final Results

15 09 2010

The DIADEM automated neuronal reconstruction contest has finished. Accurate, fast, high-resolution automated neuron reconstruction is vitally important for cracking the mystery of how neural circuits compute. Even with perfect knowledge of the firing patterns of every cell in a circuit, our understanding of how those patterns are produced and how information is processed would be quite limited. True understanding requires knowledge of the precise wiring diagram. This prize is a good first step towards bringing this tricky problem to the attention of the world’s best computer scientists.

$75,000 in prize money was to go to the group that could produce high-quality reconstructions of neuronal structures at least 20x faster than by-hand reconstruction. In the finals, the fastest speed achieved was 10x the by-hand method. Some groups were hindered by slight variances in the source data formatting, which normally isn’t a big deal, unless you only have 20 minutes to produce as much reconstruction as possible…

Since no group was able to clear the 20x bar, but substantial progress was made, the money was distributed among the finalists.

Badrinath Roysam Team, $25,000
“for the better overall generality of their program in producing robust reconstructions by integration of human and machine interactions.”

Armen Stepanyants Team, $25,000
“for the better overall biological results in the spirit of pure automation.”

Eugene Myers Team, $15,000
“for the excellent quality and strength of their algorithm.”

German Gonzalez Team, $10,000
“for their deeper potential, more original approach, and ultimate scalability of their proposed solution.”

Deniz Erdogmus Team
“for elevating themselves above the current state of automated reconstructions…with a deep understanding of the technical and scientific problems.”

Congrats to the placing teams.





Software Update : Ephus, ScanImage & Neuroptikon

20 08 2010

Three excellent pieces of neuroscience software have recently been updated or freshly released. I have used two of them, Ephus and ScanImage, on a daily basis as primary data-collection tools. The third, Neuroptikon, is quite useful for post-hoc illustration of neural circuits.

Ephus is a modular MATLAB-based electrophysiology program that can control and record many channels of instruments and data simultaneously. Under the control of a sophisticated internal looper or an external trigger, you can initiate an ephys recording, trigger camera frames, adjust galvo positions, open or close shutters, and trigger optical stimulation, punishments, rewards, etc. It is a workhorse program for non-imaging-related in vitro and in vivo electrophysiology experiments. Ephus is named for the fabled baseball pitch, and is pronounced “EFF-ess”. As with the pitch, it may trick you at first, but eventually you’re sure to hit a home run. Of course, the name also evokes electrophysiology, which is the fundamental orientation of the project, be it optical or electrical.

Ephus 2.1.0 is a major release, and the only official version at this time. The software is fully described in a publication in Frontiers in Neuroscience. New features include unlimited recording time, with disk streaming, for applications such as EEG and long traces during in vivo behavior. A number of additional scripts for in-the-loop control have been added, and new configuration/start-up files have been created, with a template to help you get up and running quickly. This release also includes a number of bug fixes.

ScanImage is another MATLAB-based program, used for optical imaging and stimulation of neurons in vitro and in vivo. It finds much use as a control platform for two-photon imaging, glutamate uncaging, and laser-scanning photostimulation. An early incarnation is described in this paper by Pologruto et al. It provides a lot of power right out of the box (bidirectional scanning at 0.5 ms/line, etc.) and is easily extensible via custom user-function plugins.

Neuroptikon is a sophisticated network visualization tool. It can build Van Essen-style diagrams of any circuit you like, but it is much more than that. The direction of communication is animated, and subsets of regions and connections can be brought into focus, which greatly improves the clarity of the network diagram. Diagrams can be built in three dimensions, to preserve relative topography or functional grouping. Simple tasks have GUI-based control, while more complex tasks can use a scripting interface. This is great software for anyone who needs to visualize information flow in a complex network.

All three tools are released for free use under the HHMI/Janelia Farm open source license.

Download here:

Ephus 2.1.0

ScanImage 2.6.1

Neuroptikon 0.9.9