Compressed Sensing (CS), as originally described, provides an appealing theoretical framework for recovering sparse or compressible signals from a limited number of samples (i.e., below the Nyquist rate), called compressive measurements.
Beyond the promises of this beautiful theory, it remains undeniable that, when it comes to dealing with real applications, a whole world of unexpected issues arises. Among these inevitable challenges, quantization has been the focus of much recent research. Quantization is a mapping from the continuous real axis to a finite discrete set of values that can be indexed and, for instance, represented with finite-length binary words. It is, so to speak, the translation of the continuous analog world into the discrete (finite-precision) digital world.
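As a concrete illustration of such a mapping, here is a minimal sketch of a uniform midpoint scalar quantizer (the bin width `delta` and the midpoint convention are assumptions for illustration, not a construction from the text):

```python
import numpy as np

def uniform_quantize(x, delta):
    """Map each real value to the midpoint of the uniform bin
    of width delta that contains it (midpoint convention assumed)."""
    return delta * (np.floor(x / delta) + 0.5)

x = np.array([0.13, -0.42, 1.07])
q = uniform_quantize(x, 0.25)
# the quantization error |x - q| is at most delta/2 entrywise,
# and each output value comes from a finite, indexable set
```

Each midpoint can then be indexed by its bin number and stored as a finite-length binary word, which is exactly the analog-to-digital translation described above.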
In the framework of compressed sensing, quantization is naturally applied to the compressive measurements so that they can be stored, transmitted, and processed afterward. This raises several concerns and possible trade-offs.
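The resulting measurement model can be sketched as follows; the Gaussian sensing matrix, the dimensions, and the uniform midpoint quantizer are illustrative assumptions (the standard CS setup), not choices prescribed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 256, 64, 5      # ambient dimension, measurements, sparsity
delta = 0.5               # assumed quantizer bin width

# An s-sparse signal and a Gaussian sensing matrix (standard CS setup)
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

y = Phi @ x                                # compressive measurements
y_q = delta * (np.floor(y / delta) + 0.5)  # quantized measurements (bin midpoints)
# |y - y_q| <= delta/2 entrywise: this is the quantization error that
# any reconstruction method must contend with
```

Shrinking `delta` reduces the quantization error but increases the number of bits per measurement, which is the basic trade-off at play.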
With this in mind, we tried to bridge two extreme cases that have already been extensively studied in the literature: 1-bit and high-resolution quantization. Our research is based on the idea that the reconstructed signal has to be consistent with the quantized measurements. This guiding principle led us to develop an inconsistency energy and a new variant of Iterative Hard Thresholding (IHT), called Quantized IHT (QIHT), which bridges BIHT (Binary IHT, i.e., for the 1-bit case) and IHT (for high resolutions, i.e., when the quantization error is small). Our numerical simulations show that at low resolutions (when the quantization error is far from negligible), QIHT outperforms both classical IHT and reconstruction via the well-known convex program Basis Pursuit DeNoising (BPDN).
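To make the consistency principle concrete, here is a simplified QIHT-style sketch. The actual QIHT descends an inconsistency energy; the variant below, which takes the IHT residual against the re-quantized current measurements (so the residual vanishes exactly when the iterate is consistent with the quantized data), as well as the step size and the uniform quantizer, are illustrative assumptions:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def qiht_sketch(y_q, Phi, s, quantizer, n_iter=100, mu=None):
    """Hypothetical QIHT-style iteration (an assumption, not the
    published algorithm): an IHT step whose residual compares the
    quantized data y_q with the re-quantized current measurements,
    pushing the iterate toward quantization consistency."""
    m, n = Phi.shape
    if mu is None:
        mu = 1.0 / m  # assumed step-size heuristic
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = y_q - quantizer(Phi @ x)  # zero iff consistent
        x = hard_threshold(x + mu * Phi.T @ residual, s)
    return x

# Minimal usage under the same assumed setup
rng = np.random.default_rng(1)
n, m, s, delta = 128, 64, 4, 0.5
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
quantizer = lambda z: delta * (np.floor(z / delta) + 0.5)
y_q = quantizer(Phi @ x_true)
x_hat = qiht_sketch(y_q, Phi, s, quantizer, n_iter=200)
```

Note how the two regimes connect: with a 1-bit quantizer (`np.sign`) the residual matches the BIHT update, while as `delta` shrinks, `quantizer(Phi @ x)` approaches `Phi @ x` and the iteration reduces to classical IHT.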
Last updated October 23, 2017, at 03:36 PM