Compressed Sensing in Neuroscience

1 03 2010

Wired has a nice lay-person write-up of the rapidly developing field of compressed sensing. This technique allows accurate reconstruction of sparse signals from highly undersampled data. The field really took off in 2004, when Emmanuel J. Candès discovered that a tomography phantom image could be reconstructed exactly from data deemed insufficient by the Nyquist-Shannon criterion. It is probably the hottest topic in imaging theory today.

Modified Shepp-Logan phantom with enhanced contrast for visual perception.

According to the review Compressed Sensing MRI, its successful application requires three conditions to be met:

  • Transform Sparsity: The desired image must have a sparse representation in a known transform domain (i.e., it must be compressible by transform coding).
  • Incoherence of Undersampling Artifacts: The aliasing artifacts in a linear reconstruction caused by k-space undersampling must be incoherent (noise-like) in the sparsifying transform domain.
  • Nonlinear Reconstruction: The image must be reconstructed by a non-linear method which enforces both sparsity of the image representation and consistency of the reconstruction with the acquired samples.

These conditions are well met by MRI.  This decoding technique dramatically shortens the required sampling time in the MRI magnet, which reduces the impact of motion artifacts, the bane of high-resolution MRI.
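
To get a feel for what the nonlinear reconstruction step actually does, here is a minimal toy sketch in Python (my own illustration, not an MRI pipeline): a 1-D sparse signal is measured through a random Gaussian matrix, and iterative soft-thresholding alternates a data-consistency gradient step with a sparsity-enforcing shrinkage step. All of the sizes and the threshold are arbitrary choices for the example.

```python
import numpy as np

# Toy compressed sensing recovery by iterative soft-thresholding (ISTA):
# enforce sparsity while keeping the estimate consistent with the measurements.
rng = np.random.default_rng(0)

n, m, k = 256, 64, 8                            # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                                  # undersampled measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)                          # data consistency
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # sparsity (shrinkage)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```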

Unfortunately, I don’t think it is very applicable to situations where the signal-to-noise ratio of the underlying source is poor, like counting action potentials in shot-noise-limited in vivo calcium imaging. But its use is spreading into other related problems, such as mapping the functional connectivity of neural circuits.  Tao Hu and Mitya Chklovskii apply compressed sensing algorithms in Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME), from the latest Advances in Neural Information Processing Systems proceedings. They measure a post-synaptic neuron’s voltage while sequentially stimulating random subsets of multiple potentially pre-synaptic neurons. The sparseness of connectivity allows them to map the circuit much faster than stimulating one candidate neuron at a time.
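
To illustrate the flavor of that approach (a toy sketch of my own, not the authors’ RESCUME code), the mapping problem can be posed as sparse regression: each trial’s random stimulation pattern is a row of a design matrix, the post-synaptic responses are the observations, and an L1-penalized fit picks out the few nonzero synaptic weights. The numbers of neurons, trials, and the penalty are made up for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy sparse-regression version of circuit mapping: infer which of many
# potentially pre-synaptic neurons actually connect to one post-synaptic cell.
rng = np.random.default_rng(1)

n_pre, n_trials, n_connected = 200, 60, 5
w_true = np.zeros(n_pre)
w_true[rng.choice(n_pre, n_connected, replace=False)] = rng.uniform(0.5, 2.0, n_connected)

S = rng.binomial(1, 0.1, size=(n_trials, n_pre))        # stimulated subset on each trial
v = S @ w_true + 0.05 * rng.standard_normal(n_trials)   # post-synaptic response per trial

w_hat = Lasso(alpha=0.01, positive=True, max_iter=10000).fit(S, v).coef_
print("true connections:     ", np.flatnonzero(w_true))
print("recovered connections:", np.flatnonzero(w_hat > 0.1))
```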

UPDATE: If you want to get a better sense of the breadth and depth of the applications of compressed sensing, check out Igor Carron’s comprehensive site, Compressive Sensing: The Big Picture.





Monte Carlo Calcium Spike Detection

9 02 2010

I somehow missed that Josh Vogelstein’s method for action potential detection was published last summer. In Spike Inference from Calcium Imaging Using Sequential Monte Carlo Methods, the authors use a Monte Carlo approach to determine spike times from calcium imaging, with superior performance to other deconvolution methods.  It does a great job on simulated and in vitro data; I’d love to see its performance on real in vivo recordings.  If you are serious about calcium imaging, you should definitely get in touch with Josh and see what magic he can do with all that math.  You should also ask him about the benefits of linen pants vs. denim; he’s got strong opinions on that subject as well…
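
For a rough sense of what sequential Monte Carlo spike inference involves (this is a bare-bones bootstrap particle filter over an assumed linear calcium model, not the algorithm in the paper), the sketch below proposes spikes for each particle, propagates a decaying calcium state, weights particles by how well they explain the fluorescence, and resamples. The time constant, spike rate, and noise level are invented for the example.

```python
import numpy as np

# Bootstrap particle filter sketch: spikes are Bernoulli, calcium jumps on a
# spike and decays exponentially, fluorescence is calcium plus Gaussian noise.
rng = np.random.default_rng(2)
T, dt, tau, p_spike, sigma = 400, 0.01, 0.5, 0.02, 0.2

# simulate a "recorded" fluorescence trace
spikes = rng.binomial(1, p_spike, T)
ca = np.zeros(T)
for t in range(1, T):
    ca[t] = ca[t - 1] * (1 - dt / tau) + spikes[t]
F = ca + sigma * rng.standard_normal(T)

# filter: track calcium particles and the posterior spike probability per frame
n_particles = 500
c = np.zeros(n_particles)
spike_prob = np.zeros(T)
for t in range(1, T):
    s = rng.binomial(1, p_spike, n_particles)          # propose spikes
    c = c * (1 - dt / tau) + s                         # propagate calcium
    w = np.exp(-0.5 * ((F[t] - c) / sigma) ** 2)       # observation likelihood
    w /= w.sum()
    spike_prob[t] = np.dot(w, s)                       # inferred spike probability
    c = c[rng.choice(n_particles, n_particles, p=w)]   # resample particles

print("frames with inferred spike probability > 0.5:", np.flatnonzero(spike_prob > 0.5))
```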

Using only strongly saturating and very noisy in vitro fluorescence measurements to infer precise spike times in a “naturalistic” spike train recorded in vitro





Automated ROI analysis for calcium imaging

2 10 2009

One of the most time-consuming and frustrating tasks associated with fluorescence imaging in the brain is picking out your regions of interest.  Which pixels do you include as part of the cell, and which are part of the surrounding neuropil?  Often, the answer is not obvious, and even with painstaking selections you can make errors.  Eran Mukamel et al., from Mark Schnitzer’s lab, just published this Neurotechnique, Automated Analysis of Cellular Signals from Large-Scale Calcium Imaging Data, which aims to simplify and improve the results of ROI selection.

The authors used a multistage approach to identify and quantify the calcium-dependent fluorescence changes of imaged neurons. First, they used principal component analysis to identify which components of the image were likely related to the calcium signal and which were noise.  The sparse nature of the calcium response (calcium transients are brief and spatially confined) helped the separation from the noise. They threw the noise away.  Then they used independent component analysis to pick out which components of the calcium signal changed independently of the rest of the signal.  These likely represent individual cells. Using this output, they performed automatic segmentation of the image into numerous individual neurons or processes and measured the fluorescence change in those regions.  In simulated data, this approach gave superior signal fidelity compared to hand-drawn ROIs.  They also validated it with real in vivo calcium imaging.
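
For a concrete sense of the PCA/ICA workflow (a simplified sketch, not the paper’s spatio-temporal ICA; the function name, component count, and threshold are arbitrary), the movie is reshaped to frames × pixels, PCA discards the noise dimensions, ICA unmixes the retained components, and each component’s spatial map is thresholded into a candidate ROI.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def pca_ica_rois(movie, n_components=20, z_thresh=3.0):
    """Sketch of PCA/ICA cell sorting on a (frames, height, width) movie."""
    n_frames, height, width = movie.shape
    data = movie.reshape(n_frames, height * width).astype(float)

    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(data)           # (frames, k) temporal PCs, noise discarded

    ica = FastICA(n_components=n_components, max_iter=1000)
    traces = ica.fit_transform(scores)         # (frames, k) independent time courses
    spatial = ica.mixing_.T @ pca.components_  # (k, pixels) spatial maps of the components

    rois = []
    for comp in spatial:
        z = (comp - comp.mean()) / comp.std()  # z-score the spatial map
        mask = (np.abs(z) > z_thresh).reshape(height, width)
        if mask.any():                         # keep maps with above-threshold pixels
            rois.append(mask)
    return rois, traces
```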

 

Automated Cell Sorting Identifies Neuronal and Glial Ca2+ Dynamics from Large-Scale Two-Photon Imaging Data

 

Whether it’s neuronal imaging, high-speed motion tracking, or multielectrode recordings, tremendously large data sets are currently being generated in systems neuroscience. It is simply impossible for a single post-doc to crunch all of her data without major automated computational techniques.  In calcium imaging, the resources that have been poured into the development and release of powerful new tools require an equal effort on the data analysis end to maximize the value of this technique.  The automated algorithms presented in this paper look very promising, and we will definitely be checking them out in the near future.