One of the things I spend my time thinking about is the problem of shortening the time it takes to acquire nuclear magnetic resonance (NMR) data on biological macromolecules like proteins and nucleic acids. Below is an attempt to describe the problem non-technically, and to explain how we can speed up data acquisition by ‘randomly’ sampling a subset of the data normally required.
TL;DR Summary: Acquiring NMR data takes a long time, because traditionally a certain number of data points must be sampled to get good resolution. Can we skip some of these points? The answer is yes; the next question is which points?
The problem: NMR takes a long time.
NMR spectra have great power in describing macromolecules at the atomic level when collected in 2, 3 or even 4 dimensions. Each dimension represents a different kind of information (say, nucleus type, location in a repeating unit of the polymer, distance from another nucleus) – so multi-dimensional spectra are data-heavy but isolate specific atomic groups really well. The information in each dimension is frequency information – the resonance frequency of each atom in the molecule. The downside is that these spectra take a long time to acquire. In fact, to acquire 3- and 4-dimensional data, experiments are usually shortened by not acquiring the ideal number of samples. That is, most of the dimensions are truncated in time, which leads to poor frequency discrimination in those dimensions.
Now, each dimension beyond the first is acquired slowly, for a number of technical reasons I won't discuss here. However, let's say that in one of these dimensions we would ideally like to acquire N points, but we really only have time to acquire N/4 points. This means our frequency resolution will drop by a factor of 4. For further technical reasons, our frequency resolution is not just a function of N but also of the time delays between the sampled points. These time delays (we call them evolution delays) are actually very short; it's just that the time between when we can collect these points is long (blah blah blah – further technical reasons). This means we don't have to wait a long time to actually collect the Nth point above; it just takes a long time to get to N because we must collect points 1, 2, 3… N-2, N-1, N along the way. There is a window of opportunity here: if we can skip points and quickly get out to the Nth point, we can maintain the same high frequency resolution expected when collecting all N points.
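To make the resolution argument concrete, here is a minimal sketch (Python with NumPy; the signal, point count and evolution increment are made-up illustrative values, not from a real experiment) comparing the linewidth obtained from all N points of a signal against the first N/4 points of the same signal:

```python
import numpy as np

# Illustrative values only: one synthetic NMR-like signal (a decaying
# cosine at 200 Hz), an ideal point count N, and an evolution increment dt.
N, dt = 256, 1e-3                 # ideal number of points, increment (s)
t = np.arange(N) * dt
fid = np.cos(2 * np.pi * 200.0 * t) * np.exp(-t / 0.5)

M = 4096                          # zero-fill size: interpolates the spectrum
freqs = np.fft.rfftfreq(M, dt)

def fwhm(signal):
    """Crude full width at half maximum (Hz) of the tallest peak."""
    mag = np.abs(np.fft.rfft(signal, n=M))
    above = freqs[mag >= mag.max() / 2]
    return above.max() - above.min()

truncated = fid.copy()
truncated[N // 4:] = 0.0          # keep only the first N/4 points

print("FWHM with all N points:     %.1f Hz" % fwhm(fid))
print("FWHM with first N/4 points: %.1f Hz" % fwhm(truncated))
# The truncated signal stops at 1/4 of the maximum evolution time, so its
# peak comes out roughly 4x broader: resolution tracks how far out in time
# we sample, not simply how many points we collect.
```

The roughly fourfold broader peak from the truncated data reflects the shorter maximum evolution time – exactly the penalty that skipping points (while still reaching the Nth point) is meant to avoid.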
This can be done, and in NMR it is called non-uniform sampling. It is also a type of compressed sensing.
Compressed Sensing: Collecting only some of the data and skipping the rest.
Several techniques have been developed to allow collection of data out to distant points without having to collect all the points in between. Programmatically, it is fairly easy to get an NMR spectrometer to do this. The problem lies in processing the data into a spectrum that contains few, if any, significant artifacts. The regular FFT (Fast Fourier Transform), for example, can be used by simply setting the non-collected data points to zero; this, however, results in significant artifacts. The problem is how to reconstruct the missing data and minimize the artifacts. Compressed sensing (CS) is a theory that describes a way to do this. CS was originally developed for image processing, but it has been successfully applied to NMR data. Assuming the signals are sparse (true for NMR data) and the noise is not significant (mostly true for NMR data), compressed sensing algorithms can reconstruct the skipped data.
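As a rough illustration of the idea (a generic sketch, not any particular published NMR algorithm; all names and parameter values here are my own), one simple CS-style reconstruction is iterative soft thresholding: repeatedly Fourier transform the current estimate, shrink the small spectral values that are presumed to be sampling artifacts, transform back, and re-impose the points that were actually measured:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic complex FID with two decaying signals (illustrative values only).
N, dt = 256, 1e-3
t = np.arange(N) * dt
fid = (np.exp(2j * np.pi * 180 * t)
       + 0.5 * np.exp(2j * np.pi * -60 * t)) * np.exp(-t / 0.3)

# Non-uniform sampling: keep a random quarter of the points (plus point 0).
keep = np.zeros(N, dtype=bool)
keep[rng.choice(N, N // 4, replace=False)] = True
keep[0] = True

# Zero-filled data: an FFT of this shows the true peaks plus sampling artifacts.
measured = np.where(keep, fid, 0)

def ist(measured, keep, iters=200):
    """Iterative soft thresholding: assume the spectrum is sparse and
    reconstruct the skipped time-domain points."""
    est = measured.copy()
    for _ in range(iters):
        spec = np.fft.fft(est)
        lam = 0.1 * np.abs(spec).max()          # shrink small (artifact) values
        spec *= np.maximum(1 - lam / np.maximum(np.abs(spec), 1e-12), 0)
        est = np.fft.ifft(spec)
        est[keep] = measured[keep]              # stay consistent with the data
    return est

recon = ist(measured, keep)
print("relative error of reconstructed FID: %.3f"
      % (np.linalg.norm(recon - fid) / np.linalg.norm(fid)))
```

Real NUS processing packages use more refined variants (different thresholding schedules, gradient or convex solvers), but the core loop is the same: enforce sparsity in the frequency domain while staying consistent with the time-domain points that were actually collected.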
How do we decide which points to skip?
I will talk more about this in the next post… stay tuned.