Tri-Operation Mode Vision Sensor

Surveillance networks composed of multiple vision sensors have traditionally been limited by the amount of data that the network nodes can process. Frame-based imagers generate vast amounts of data, which can become prohibitive when many surveillance cameras are distributed over a spatial region. These cameras also consume considerable power because they generate data continuously.

In this project, we designed a smart vision sensor capable of transmitting its information wirelessly. The sensor has three different operation modes (octopus, spatial contrast, and temporal contrast). It has event-based output and low power consumption. The sensor transmits data only when it detects relevant changes in the visual scene (i.e., movement), so bandwidth and power consumption are reduced significantly. With this kind of sensor, the number of cameras in a surveillance network could be increased without compromising the processing capacity of the network nodes.
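The change-driven principle behind the temporal-contrast mode can be sketched in software. This is an illustrative model only, not the chip's circuitry; the function name, the log-intensity approximation, and the threshold value are assumptions. A pixel produces an event only when its relative intensity change between frames exceeds a contrast threshold, so a static scene produces no output:

```python
import numpy as np

def temporal_contrast_events(prev_frame, curr_frame, threshold=0.15):
    """Emit (row, col, polarity) events where the relative intensity
    change between two frames exceeds the contrast threshold.
    Illustrative model; the threshold value is an assumption."""
    # Log-intensity difference approximates relative (contrast) change.
    diff = np.log1p(curr_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    events = []
    for r, c in zip(*np.nonzero(np.abs(diff) > threshold)):
        events.append((int(r), int(c), 1 if diff[r, c] > 0 else -1))
    return events

# A static 4x4 scene with one pixel that brightens: only that pixel
# generates an event (polarity +1 for an increase).
prev = np.full((4, 4), 100, dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 160
events = temporal_contrast_events(prev, curr)
```

With only one pixel changing, a single ON event is produced, which is why bandwidth scales with scene activity rather than frame rate.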

The sensor was designed at the E-lab at Yale University. We are obtaining promising preliminary experimental results. Its features include high dynamic range, low power consumption, high resolution (256x256 pixels), and an event counter that decides whether each frame has to be transmitted. It is a frame-based system with small pixels (10um x 10um) and event-based outputs. To transmit the information wirelessly, we use a UWB transmitter. A fixed pattern is transmitted at the end of each frame to help the UWB receiver recover the clock from the data and identify the beginning and end of each frame.
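The framing scheme can be sketched as follows. This is a simplified model under stated assumptions: the actual fixed pattern used by the chip is not given here, so `SYNC_WORD` is a placeholder, and a real receiver would correlate against the pattern in the bit stream rather than split on exact bytes. Each event is a 16-bit pixel address (256x256 = 65536 addresses), and the fixed pattern appended at the end of each frame lets the receiver delimit frames:

```python
# Assumed 16-bit sync pattern; the chip's actual fixed pattern is not
# specified in this description.
SYNC_WORD = b"\xAA\xF0"

def serialize_frame(event_addresses):
    """Pack a frame's event addresses as 16-bit words and append the
    fixed sync pattern so the receiver can delimit frames."""
    payload = bytearray()
    for addr in event_addresses:          # addr in [0, 65535] for 256x256 pixels
        payload += addr.to_bytes(2, "big")
    return bytes(payload) + SYNC_WORD

def split_frames(stream):
    """Receiver side (simplified): split the byte stream on the sync
    pattern to recover frame boundaries."""
    frames = stream.split(SYNC_WORD)
    return [f for f in frames if f]       # drop the empty trailing chunk
```

For example, two one-event frames concatenated on the wire are recovered as two separate payloads on the receiver side.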

[Figure: T-sensor system block diagram]

The system's main blocks are displayed above. On the left is the Row Control Circuit. This circuitry generates the token that selects each row, along with the control signals that reset the pixels and sense the output voltage. In the middle is the pixel matrix. Below is the Column Readout Circuitry, the most important part of the sensor: it computes the temporal and spatial contrast. Since the contrast computation is done off-pixel, the pixel size is reduced, achieving a good fill factor. On the right is the UWB transmitter, which generates UWB signals to transmit the events coming from the sensor. There is also a 14-bit counter that can be used to discard the transmission of a frame if the number of events within it is too low. It can also be used to disable the UWB block when the sensor's activity is low and there is no information to transmit.
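The counter-based transmission gating can be sketched as below. This is a behavioral sketch, not the chip's logic: the minimum-event threshold is an assumed parameter, and the saturating behavior of the counter is an assumption consistent with its 14-bit width.

```python
COUNTER_BITS = 14
COUNTER_MAX = (1 << COUNTER_BITS) - 1    # 16383, the 14-bit counter ceiling

def should_transmit(event_count, min_events=32):
    """Gate the UWB transmitter: skip frames whose event count is too
    low to justify the transmission power. min_events is an assumed
    threshold, not the chip's actual setting."""
    count = min(event_count, COUNTER_MAX)  # model a saturating 14-bit counter
    return count >= min_events
```

Frames from a nearly static scene fall below the threshold and are dropped, which is how the sensor keeps the UWB block idle when there is nothing worth transmitting.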

At the top, we show the dedicated custom C++ interface that we developed to debug the system and display real-time images. The image shown is from the sensor operating in intensity mode.

Main Project Publications

  1. Juan A. Leñero-Bardallo, Wei Tang, Dongsoo Kim, Joon Hyuk Park, and Eugenio Culurciello, "A Tri-mode Event-based Vision Sensor with an Embedded Wireless Transmitter", IEEE International Conference on Electronics, Circuits, and Systems (ICECS), Seville, Spain, 2012.

