IMSE Publications

Results found for:

Author: Marco Trevisi
Year: From 2002

Journal articles


Compressive Imaging using RIP-compliant CMOS Imager Architecture and Landweber Reconstruction
M. Trevisi, A. Akbari, M. Trocan, Á. Rodríguez-Vázquez and R. Carmona-Galán
Journal Paper · IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 2, pp. 387-399, 2020

In this paper we present a new image sensor architecture for fast and accurate compressive sensing (CS) of natural images. Measurement matrices usually employed in compressive sensing CMOS image sensors (CS-CIS) are recursive pseudo-random binary matrices. We have proved that the restricted isometry property (RIP) of these matrices is limited by a low sparsity constant. The quality of these matrices is also affected by the non-idealities of pseudo-random number generators (PRNG). To overcome these limitations, we propose a hardware-friendly pseudo-random ternary measurement matrix generated on-chip by means of class-III elementary cellular automata (ECA). These ECA present a chaotic behaviour that emulates random CS measurement matrices better than other PRNGs. We have combined this new architecture with a block-based CS smoothed-projected Landweber (BCS-SPL) reconstruction algorithm. By means of singular value decomposition (SVD) we have adapted this algorithm to perform fast and precise reconstruction while operating with binary and ternary matrices. Simulations are provided to qualify the approach.
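
As an illustration of the recovery side only, the following is a minimal NumPy sketch of thresholded Landweber iterations applied to samples taken with a ternary {-1, 0, +1} matrix. It is not the BCS-SPL algorithm of the paper: the block structure, the smoothing stage and the SVD-based adaptation are omitted, and the sizes, threshold and iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 128                                   # signal length, number of compressed samples

# Sparse test signal and a ternary {-1, 0, +1} matrix standing in for the ECA-generated one
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
Phi = rng.choice([-1.0, 0.0, 1.0], size=(m, n), p=[0.25, 0.5, 0.25])
y = Phi @ x                                       # compressed samples

# Thresholded Landweber iterations: x <- threshold(x + tau * Phi^T (y - Phi x)).
# The soft threshold is a crude stand-in for the smoothing/thresholding stages of BCS-SPL.
tau = 1.0 / np.linalg.norm(Phi, 2) ** 2           # step size small enough for convergence
thr = 0.01                                        # illustrative threshold value
x_hat = np.zeros(n)
for _ in range(300):
    z = x_hat + tau * Phi.T @ (y - Phi @ x_hat)
    x_hat = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

The thresholding step is what distinguishes this from a plain Landweber iteration, which by itself would converge towards a least-squares solution rather than a sparse one.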

Conferences


Adaptive Compressed Sensing Image Reconstruction Using Binary Measurement Matrices
A. Akbari, M. Trevisi and M. Trocan
Conference · IEEE International Conference on Electronics Circuits and Systems ICECS 2018

In this paper we propose a simple, adaptive and efficient Compressed Sensing (CS) recovery algorithm that is able to recover the original image from compressed samples with higher quality. First, salient regions of the image are detected using an initial reconstruction at the receiver side. The target acquisition subrate is then adaptively allocated among these regions, such that the new acquisition will favor the areas of interest. The experimental results show that the proposed recovery algorithm has better performance in terms of reconstruction quality when compared with existing fixed-rate reconstruction algorithms.
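
As a rough sketch of the rate-allocation idea only, not the authors' algorithm: block saliency is estimated from an initial reconstruction and the global subrate is redistributed so that salient blocks receive more measurements. The block size, target subrate, clipping limits and the variance-based saliency proxy are all assumptions introduced for illustration.

import numpy as np

def allocate_subrates(recon, block=16, target_subrate=0.25,
                      min_rate=0.05, max_rate=0.75):
    """Redistribute a global CS subrate among image blocks according to a simple
    saliency proxy (the local variance of an initial reconstruction)."""
    h, w = recon.shape
    cropped = recon[:h - h % block, :w - w % block]
    blocks = cropped.reshape(h // block, block, w // block, block)
    saliency = blocks.var(axis=(1, 3))                # one saliency value per block
    weights = saliency / saliency.sum()
    rates = target_subrate * weights * weights.size   # mean rate equals the target...
    return np.clip(rates, min_rate, max_rate)         # ...before clipping to sane limits

# Toy usage: a flat image with one "busy" region; that region gets a higher subrate
img = np.full((64, 64), 0.2)
img[16:32, 16:32] = np.random.default_rng(1).random((16, 16))
print(allocate_subrates(img).round(2))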

1D Cellular Automata for Pulse Width Modulated Compressive Sampling CMOS Image Sensors
M. Trevisi, R. Carmona-Galán and A. Rodríguez-Vázquez
Conference · International Workshop on Cellular Nanoscale Networks and their Applications CNNA 2018

Compressive sensing (CS) is an alternative to Nyquist-Shannon sampling when the signal to be acquired is known to be sparse or compressible in some domain. Since compressed samples are non-hierarchical packages of information, this acquisition technique can be employed to overcome channel losses and restricted data rates. The quality of the compressed samples that a sensor can deliver is affected by the measurement matrix used to collect them. Measurement matrices usually employed in CS image sensors are recursive random-like binary matrices obtained using pseudo-random number generators (PRNG). In this paper we analyse the performance of these PRNGs in order to understand how their non-idealities affect the quality of the compressed samples. We present the architecture of a CMOS image sensor that uses class-III elementary cellular automata (ECA) and pixel pulse-width modulation (PWM) to generate on-chip a measurement matrix and high-quality compressed samples.
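
For reference, a behavioural NumPy sketch of a class-III elementary cellular automaton is given below, using Rule 30 as a representative class-III rule. The register length, seeding, warm-up and the mapping of generations to measurement patterns are assumptions for illustration, not the circuit described in the paper.

import numpy as np

def eca_rows(rule=30, cells=64, steps=32, warmup=64):
    """Iterate a 1-D elementary cellular automaton (periodic boundaries) and
    return `steps` generations, each one usable as a row of a binary matrix."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.zeros(cells, dtype=np.uint8)
    state[cells // 2] = 1                         # classic single-seed initial state
    rows = []
    for step in range(warmup + steps):
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = table[(left << 2) | (state << 1) | right]   # 3-cell neighbourhood lookup
        if step >= warmup:                        # discard the early, mostly-empty transient
            rows.append(state.copy())
    return np.array(rows)

Phi_bin = eca_rows()                              # {0,1} measurement patterns
Phi_pm1 = 2.0 * Phi_bin - 1.0                     # remapped to {-1,+1}; the ternary matrix of
                                                  # the related work additionally keeps zeros
print(Phi_bin.shape, Phi_bin.mean())              # pattern size and fraction of ones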

Concurrent focal-plane generation of compressed samples from time-encoded pixel values
M. Trevisi, H.C. Bandala, J. Fernández-Berni, R. Carmona-Galán and A. Rodríguez-Vázquez
Conference · Design Automation and Test in Europe DATE 2018

Compressive sampling allows wrapping the relevant content of an image in a reduced set of data. It exploits the sparsity of natural images. This principle can be employed to deliver images over a network under a restricted data rate and still receive enough meaningful information. An efficient implementation of this principle lies in the generation of the compressed samples right at the imager. Otherwise, i.e., digitizing the complete image and then composing the compressed samples in the digital plane, the required memory and processing resources can seriously compromise the budget of an autonomous camera node. In this paper we present the design of a pixel architecture that encodes light intensity into time, followed by a global strategy to pseudo-randomly combine pixel values and generate, on-chip and on-line, the compressed samples.
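
The concurrent-generation principle can be sketched behaviourally as follows: each pixel stays high for a number of clock ticks proportional to its value (time encoding), and every sample counter adds, at each tick, the signed contributions of the pixels that are still high, so the final counts equal a pseudo-random combination of the pixel values. The sizes and the ±1 combination pattern below are assumptions; this is not the circuit presented in the paper.

import numpy as np

rng = np.random.default_rng(2)
n_pix, n_samp, n_ticks = 256, 64, 100

x = rng.uniform(0.0, 1.0, size=n_pix)              # normalised pixel values
width = np.round(x * n_ticks).astype(int)          # PWM: pulse width proportional to value
Phi = rng.choice([-1, 1], size=(n_samp, n_pix))    # pseudo-random ±1 combination pattern

# Concurrent generation: at every clock tick each counter adds the signed
# contributions of all pixels whose pulse is still high.
counters = np.zeros(n_samp, dtype=int)
for t in range(n_ticks):
    active = (width > t).astype(int)               # pixels still high at tick t
    counters += Phi @ active

print(np.allclose(counters, Phi @ width))          # counts equal Phi times the pulse widths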

Compressed Sampling CMOS Imager based on Asynchronous Random Pixel Contributions
M. Trevisi, H.C. Bandala-Hernandez, R. Carmona-Galán and A. Rodríguez-Vázquez
Conference · Workshop on the Architecture of Smart Cameras WASC 2017

Abstract not available

Compressive Image Sensor Architecture with On-Chip Measurement Matrix Generation
M. Trevisi, R. Carmona-Galán and A. Rodríguez-Vázquez
Conference · Conference on Ph.D. Research in Microelectronics and Electronics PRIME 2017

A CMOS image sensor architecture that uses a cellular automaton for the pseudo-random compressive sampling matrix generation is presented. The image sensor employs in-pixel pulse-frequency modulation and column-wise pulse counters to produce compressed samples. A common problem of compressive sampling applied to image sensors is that a full-frame compressive strategy is too large to be stored in an on-chip memory. Since this matrix has to be transmitted to or from the reconstruction system, its size would also prevent practical applications. A full-frame compressive strategy generated using a 1-D cellular automaton showing class-III behavior neither needs a storage memory nor needs to be continuously transmitted. In-pixel pulse-frequency modulation and up-down counters allow the generation of differential compressed samples directly in the digital domain, where it is easier to improve the required dynamic range. These solutions, combined, improve the accuracy of the compressed samples, thus improving the performance of any generic reconstruction algorithm.

A Compressive Domain Saliency-Based Adaptive Measurement Method for Image Recovery
H. Li, M. Trocan, R. Carmona-Galán and M. Trevisi
Conference · IEEE International Conference on Electronics Circuits and Systems ICECS 2016

This paper proposes an image compressed sensing method with an adaptive measurement matrix based on saliency detection in the compressive domain. The saliency mapping algorithm is simply the difference between adjacent compressive measurements of neighboring image patches. The subsampling-rate adjustment algorithm is a multi-group approximation based on sorting and clustering. Simulation experiments verify that the proposed method achieves higher reconstruction quality than classical methods with fixed measurement matrices.
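
A minimal sketch of the compressive-domain saliency map (differences between the measurement vectors of neighbouring blocks) is given below. The block grid, the number of measurements per block and the 4-neighbour averaging are assumptions, and the sorting-and-clustering subrate adjustment is not reproduced.

import numpy as np

def compressive_saliency(block_measurements):
    """Saliency per block from differences between the compressive measurements
    of neighbouring blocks (4-neighbourhood). `block_measurements` has shape
    (rows, cols, m): one m-length measurement vector per image block."""
    r, c, _ = block_measurements.shape
    sal = np.zeros((r, c))
    for i in range(r):
        for j in range(c):
            diffs = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < r and 0 <= nj < c:
                    diffs.append(np.linalg.norm(
                        block_measurements[i, j] - block_measurements[ni, nj]))
            sal[i, j] = np.mean(diffs)
    return sal

# Toy usage: an 8x8 grid of blocks, 32 measurements each; one block made different
rng = np.random.default_rng(3)
y = np.tile(rng.standard_normal(32), (8, 8, 1))
y[4, 4] += 5.0                                    # a "salient" block
print(np.unravel_index(compressive_saliency(y).argmax(), (8, 8)))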

Non-Recursive Method for Motion Detection from a Compressive-Sampled Video Stream
M. Trevisi, R. Carmona-Galán and A. Rodríguez-Vázquez
Conference · Workshop on the Architecture of Smart Cameras WASC 2016

This presentation introduces a non-recursive algorithm for motion detection directly from the analysis of compressed samples. The objective of this research is to create an algorithm able to detect, in real time, the presence of moving objects over a fixed background from a compressive-sampled greyscale video stream. Many difficulties arise using this type of algorithm because it violates the fundamental principles of compressive sensing reconstruction that lie beneath traditional recursive methods. Recursive reconstruction methods, even if accurate, need large amounts of time and resources because they aim to retrieve all of the information contained within a scene. Our method is based on two key considerations. The first is that the targeted information of a moving element, compared to a fixed background, is very small. The second is an appropriate choice of a sub-Gaussian compressive sampling strategy. Our aim is to reduce the focus of general reconstruction in order to retrieve only objects of interest. This has resulted in a lightweight, fast detection algorithm. It can be used to process compressed samples derived from a video stream at 100 fps in order to detect the presence of moving objects over a fixed background. We have compared the performance of our algorithm with a highly efficient reconstruction algorithm, NESTA, to understand which benefits and limitations we face when handling compressive-sampled information without resorting to standard techniques. While targeting specific information within the compressed samples, thus not following a conventional reconstruction technique, may deliver worse reconstruction errors, it also benefits from faster processing times (Fig. 1), opening the possibility of new applications of CS.
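
The non-recursive idea can be illustrated with a toy example: subtract the compressed-sample vectors of two consecutive frames, then apply a single backprojection plus a threshold, with no full reconstruction. The Bernoulli sub-Gaussian matrix, frame size and threshold rule below are assumptions, and this is not the algorithm evaluated against NESTA in the work.

import numpy as np

rng = np.random.default_rng(4)
n, m = 32 * 32, 400                                  # pixels per frame, samples per frame
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # Bernoulli (sub-Gaussian) sampling

background = rng.uniform(0.2, 0.8, size=n)
frame0, frame1 = background.copy(), background.copy()
frame1[500:510] += 1.0                               # a small object appears in frame 1

y0, y1 = Phi @ frame0, Phi @ frame1                  # compressed samples of consecutive frames

# Non-recursive detection: one backprojection of the sample difference plus a
# threshold; no full-image reconstruction is performed at any point.
proxy = Phi.T @ (y1 - y0)
detected = np.flatnonzero(np.abs(proxy) > 3 * proxy.std())
print(detected)                                      # mostly the indices 500..509 of the object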

Non-Recursive Method for Motion Detection from a Compressive-Sampled Video Stream
M. Trevisi, R. Carmona-Galán and A. Rodríguez-Vázquez
Conference · Conference on Ph.D. Research in Microelectronics and Electronics PRIME 2016

This paper introduces a non-recursive algorithm for motion detection directly from the analysis of compressed samples. The objective of this research is to create an algorithm able to detect, in real time, the presence of moving objects over a fixed background from a compressive-sampled greyscale video stream. Many difficulties arise using this type of algorithm because it violates the fundamental principles of compressive sensing reconstruction that lie beneath traditional recursive methods. Recursive reconstruction methods, even if accurate, need large amounts of time and resources because they aim to retrieve all of the information contained within a scene. Our method is based on two key considerations. The first is that the targeted information of a moving element, compared to a fixed background, is very small. The second is an appropriate choice of a sub-Gaussian compressive sampling strategy. Our aim is to reduce the focus of general reconstruction in order to retrieve only objects of interest. This algorithm can be used to process compressed samples derived from a video stream at 100 fps. This makes it possible to detect the presence of moving objects directly from compressed samples with limited resources.

On the Design of a Sparsifying Dictionary for Compressive Image Feature Extraction
M. Trevisi, R. Carmona-Galán, J. Fernández-Berni and Á. Rodríguez-Vázquez
Conference · IEEE International Conference on Electronics Circuits and Systems ICECS 2015

Although there are some works reported on feature extraction from compressed samples, none of them consider the implementation of the feature extractor as a part of the sensor itself. Our approach is to introduce a sparsifying dictionary, feasibly implementable at the focal plane, which describes the image in terms of features. This allows a standard reconstruction algorithm to directly recover the interesting image features, discarding the irrelevant information. In order to validate the approach, we have integrated a Harris-Stephens corner detector into the compressive sampling process. We have evaluated the accuracy of the reconstructed corners compared to applying the detector to a reconstructed image.

Hardware-oriented feature extraction based on compressive sensing
M. Trevisi, R. Carmona-Galán and Á. Rodríguez-Vázquez
Conference · International Conference on Distributed Smart Cameras ICDSC 2015

Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature can be represented by a matrix having the same size as the original image but having relevant values only in some specific points. We can consider these sets as being sparse. Under this premise many algorithms have been generated to extract features from compressive samples. None of them, though, is easily described in hardware. We try to bridge the gap between compressive sensing and hardware design by presenting a sparsifying dictionary that allows compressive sensing reconstruction algorithms to recover features. The idea is to use this work as a starting point for the design of a smart imager capable of compressive feature extraction. To prove this concept we have devised a simulation using Harris corner detection and applied a standard reconstruction method, the NESTA algorithm, to retrieve corners instead of a full image.

Compressive Feature Extraction
M. Trevisi, R. Carmona-Galán, J. Fernández-Berni and Á. Rodríguez-Vázquez
Conference · Workshop on the Architecture of Smart Cameras WASC 2015

Compressive sensing (CS) provides an alternative to Nyquist-Shannon sampling when the signal to acquire is known to be sparse or compressible. A sparse signal has a small number of nonzero components compared to its total length. This property can either exist in the sampling domain of the signal or with respect to some other basis. Representing a signal in a transform basis involves the choice of a dictionary, a set of elementary signals used to decompose the signal. When performing analysis of complex data, one of the major problems stems from the number of variables involved. Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature is often represented by a set of parameters to be evaluated. This set has relevant values only at the locations of said features. We can consider sets derived this way as being sparse. This sparked the idea of merging a feature extraction algorithm with compressive sensing theory. To do so we try to adapt one of the most practical CS reconstruction algorithms, the Nesterov algorithm applied to CS (NESTA), to extract the features of one of the simplest corner detection algorithms, the Harris and Stephens algorithm. Our aim is to compare the performance of the combined Harris-NESTA algorithm with the application of the Harris algorithm on a NESTA-reconstructed image, and to do so we have devised a test that takes into account four different parameters.
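
For reference, a compact NumPy/SciPy version of the Harris-Stephens corner response, R = det(M) - k·trace(M)^2 on the windowed structure tensor M, is shown below; a box window replaces the usual Gaussian weighting and the parameter values are illustrative. In the approach described above, this sparse response is what the sparsifying dictionary allows a reconstruction algorithm such as NESTA to recover directly.

import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, win=7):
    """Harris-Stephens corner response R = det(M) - k*trace(M)^2, where M is the
    structure tensor of the image gradients averaged over a local window."""
    gy, gx = np.gradient(img.astype(float))
    sxx = uniform_filter(gx * gx, size=win)       # box window instead of the usual Gaussian
    syy = uniform_filter(gy * gy, size=win)
    sxy = uniform_filter(gx * gy, size=win)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# Toy usage: a bright square on a dark background has corners at its vertices
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
R = harris_response(img)
print(np.unravel_index(R.argmax(), R.shape))      # lands near one of the square's corners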

Comparative Analysis of Compressive Sensing Strategies for Smart Compressive Image Sensors
M. Trevisi, R. Carmona-Galán, J. Fernández-Berni and Á. Rodríguez-Vázquez
Conference · Workshop on the Architecture of Smart Cameras WASC 2014

Compressive sensing (CS) first appeared eight years ago as a new kind of signal processing theory. Since then, only a few steps have been taken to turn this theory into feasible practice. We have identified two important gaps that stand behind the lag on practical compressive image sensors. The first one is that, technologically speaking, there is not yet any imager based on CS that is able to surpass, at least in resolution/reconstruction-speed ratio, the capabilities of current standard imagers. The second one is that none of the published reports on compressive image sensors mention how the compressive sensing strategy is passed from the sensor, which is delivering image samples, to the image reconstruction algorithm at the reception side. In order to capture compressed image samples, it is necessary that the imagers implement a compressive strategy, which has the form of a matrix that multiplies the original signal. There are two sets of methods primarily used nowadays to build a compressive strategy: the first is to pick each element from a random distribution, preferably Gaussian, and the second is to arrange in random order the rows of an incoherent orthobasis, preferably a Fourier matrix. The selection of the compressive sensing strategy has an impact on the physical implementation; for instance, it is easier to implement a binary mask on each pixel than to multiply its value by a real number. In order to analyze the effect of this selection on the reconstruction from the samples delivered by the compressive image sensor, we have simulated the process with a MATLAB test bench and compared the reconstruction times and the RMSE vs. the number of samples delivered for three different sensing strategies, using a 64×64 image of Lena as test image and a Total-Variation NESTA algorithm for reconstruction.
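
The two families of strategies mentioned above, plus the hardware-friendly binary alternative, can be sketched in a few lines. The sizes are illustrative and an orthonormal DCT stands in for the Fourier orthobasis to keep the example real-valued; the NESTA reconstruction and the RMSE comparison of the study are not reproduced here.

import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(5)
n, m = 32 * 32, 256                  # a 32x32 test image flattened, 256 compressed samples

# (a) dense i.i.d. Gaussian matrix: every element drawn from a normal distribution
phi_gauss = rng.standard_normal((m, n)) / np.sqrt(m)

# (b) m randomly chosen rows of an incoherent orthobasis (an orthonormal DCT is
#     used here instead of the Fourier matrix to keep the sketch real-valued)
ortho = dct(np.eye(n), norm='ortho')
phi_ortho = ortho[rng.choice(n, size=m, replace=False)]

# (c) binary ±1 pixel masks, the strategy that is easiest to implement on a pixel array
phi_binary = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

for name, phi in (("gaussian", phi_gauss), ("orthobasis rows", phi_ortho), ("binary", phi_binary)):
    print(name, phi.shape)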

Parallel Processing Architectures and Power Efficiency in Smart Camera Chips
R. Carmona-Galán, J. Fernández-Berni, M. Trevisi and A. Rodríguez-Vázquez
Conference · Workshop on the Architecture of Smart Cameras WASC 2014

Because of the massive amount of data, image and video processing represents a huge computational demand. Providing the necessary resources in embedded systems is not an easy task. Providing them on-chip needs a serious reconsideration of the processing architecture. In order to speed up processing, the number of processors/cores operating in parallel can be increased. This is an intuitive result, but there are two drawbacks. First, this speedup is limited by the degree to which the algorithm can be parallelized, which is known as Amdahl's law. Second, the more hardware operating at the same time, the higher the power consumption. However, image processing and, in general, the processing of visual information that keeps a retinotopic topology afford an inherent parallelism that can be exploited to a great extent. Furthermore, there is an additional and less intuitive result of parallelization, which is that using a larger number of processors renders a more efficient implementation in terms of GOPS/W. Evidence of this can be found in massively parallel array processors. Their distributed architecture is adapted to the nature of the visual stimulus to the point that the amount of energy dedicated to data transmission and memory operations is largely reduced. The result is a power-efficient implementation, perfectly suited for autonomous embedded vision systems working on a restricted power budget. This trend is observed in multi- and many-core processor chips, GPUs and different types of single-instruction multiple-data (SIMD) arrays containing from tens to hundreds of processing elements (PE). In the case of analog array processors, a similar relation is observed despite the fact that the disparity between design techniques, signal representation and the computation of the effective number of OPS advises against any sort of comparison.
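
As a reminder of the first drawback mentioned above, Amdahl's law bounds the achievable speedup by the parallelisable fraction p of the algorithm, speedup = 1 / ((1 - p) + p/N) for N processing elements. The 95% fraction in this tiny example is an arbitrary illustration.

# Amdahl's law: the speedup of a task of which a fraction p is parallelisable,
# executed on N processing elements. The 0.95 fraction below is illustrative only.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 4, 16, 64, 256):
    print(n, round(amdahl_speedup(0.95, n), 1))   # saturates near 1/(1-p) = 20x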

Books


No results

Book chapters


No results

Other publications


No results
