Spanish National Research Council · University of Seville
All publications · Author: Camuñas-Mesa, Luis A. · Since 2002
Neuromorphic spiking neural networks and their memristor-CMOS hardware implementations
L.A. Camuñas-Mesa, B. Linares-Barranco and T. Serrano-Gotarredona
Journal Paper - Materials, vol. 12, no. 17, article number 2745, 2019
MDPI AG    DOI: 10.3390/ma12172745    ISSN: 1996-1944    » doi
[abstract]
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes an important proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology opens a very promising route to emulating the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.

Calibration of offset via bulk for low-power HfO2 based 1T1R memristive crossbar read-out system
C. Mohan, L.A. Camuñas-Mesa, E. Vianello, L. Perniola, C. Reita, J.M. de la Rosa, T. Serrano-Gotarredona and B. Linares-Barranco
Journal Paper - Microelectronic Engineering, vol. 198, pp 35-47, 2018
ELSEVIER    DOI: 10.1016/j.mee.2018.06.011    ISSN: 0167-9317    » doi
[abstract]
Neuromorphic RRAM circuits typically need currents of several mA when many binary memristive devices are activated at the same time. This is due to the low resistance state of these devices, which increases the power consumption and limits the scalability. To overcome this limitation, it is vital to investigate how to minimize the amplitude of the read-out inference pulses sent through the crossbar lines. However, the amplitude of such inference voltage pulses becomes limited by the offset voltage of the read-out circuits. This paper presents a three-stage calibration circuit to compensate for offset voltage in the wordlines of a memristor-array read-out system. The proposed calibration scheme is based on adjusting the bulk voltage of one of the input differential pair MOSFETs by means of a switchable cascade of resistor ladders. This makes it possible to obtain calibration voltage steps below 0.1 mV by cascading a small number of stages, with results limited only by mismatch, temperature, electrical noise and other fabrication non-idealities. The system is built using HfO2-based binary memristive synaptic devices on top of a 130-nm CMOS technology. Layout-extracted simulations considering technology corners, PVT variations and electrical noise are shown to validate the presented calibration scheme.
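As a back-of-the-envelope illustration of why cascading ladder stages shrinks the calibration step, the following Python sketch models each stage as dividing the previous stage's step by the ladder resolution. The tuning range and taps-per-stage values are hypothetical placeholders, not the circuit parameters reported in the paper.

# Illustrative model of a cascaded-ladder calibration step (assumed values).
def calibration_step(v_range, taps_per_stage, n_stages):
    """Smallest bulk-voltage increment reachable with n cascaded ladders."""
    step = v_range
    for _ in range(n_stages):
        step /= taps_per_stage
    return step

# Example: a 100 mV bulk-tuning range with 16-tap ladders (assumptions).
for n in (1, 2, 3):
    print(f"{n} stage(s): step = {calibration_step(0.1, 16, n) * 1e3:.4f} mV")
# Three stages give 0.1 V / 16^3, roughly 0.024 mV, i.e., well below 0.1 mV.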

Event-Driven Configurable Module with Refractory Mechanism for ConvNets on FPGA
L.A. Camuñas-Mesa, Y. Domínguez-Cordero, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - IEEE International Symposium on Circuits and Systems ISCAS 2018
[abstract]
The development of bio-inspired event-driven neuromorphic Dynamic Vision Sensors (DVS) provides a revolutionary way of capturing visual scenes by generating flows of events representing real-time visual information. Each pixel in a DVS operates autonomously and sends out an event (spike) whenever it senses a change of light greater than a preset threshold. Therefore, the DVS generates a continuous flow of events with a high temporal resolution (sub-microsecond) representing reality dynamically, without frames. Spiking Neural Networks (SNNs) process flows of events using different neuronal and synaptic models, performing tasks like object tracking or shape recognition.
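A minimal behavioral model of the DVS pixel principle described above makes the mechanism concrete; the contrast threshold and sample stream below are illustrative assumptions, not the sensor's actual parameters.

import math

def dvs_pixel(samples, threshold=0.15):
    """Emit (time, polarity) events when log-intensity moves by > threshold.

    samples: iterable of (time_us, intensity) pairs for one pixel.
    """
    ref = None  # last log-intensity at which an event was emitted
    for t, intensity in samples:
        l = math.log(intensity)
        if ref is None:
            ref = l
            continue
        while l - ref > threshold:   # brightness increased: ON event
            ref += threshold
            yield (t, +1)
        while ref - l > threshold:   # brightness decreased: OFF event
            ref -= threshold
            yield (t, -1)

print(list(dvs_pixel([(0, 1.0), (10, 1.5), (20, 0.8)])))
# ON events at t=10, OFF events at t=20; no events while light is constant.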

A Configurable Event-Driven Convolutional Node with Rate Saturation Mechanism for Modular ConvNet Systems Implementation
L.A. Camuñas-Mesa, Y.L. Domínguez-Cordero, A. Linares-Barranco, T. Serrano-Gotarredona and B. Linares-Barranco
Journal Paper - Frontiers in Neuroscience, vol. 12, Article 63, 2018
FRONTIERS RESEARCH FOUNDATION    DOI: 10.3389/fnins.2018.00063    ISSN: 1662-4548    » doi
[abstract]
Convolutional Neural Networks (ConvNets) are a particular type of neural network often used for many applications like image recognition, video analysis or natural language processing. They are inspired by the human brain, following a specific organization of the connectivity pattern between layers of neurons known as receptive field. These networks have been traditionally implemented in software, but they are becoming more computationally expensive as they scale up, having limitations for real-time processing of high-speed stimuli. On the other hand, hardware implementations show difficulties to be used for different applications, due to their reduced flexibility. In this paper, we propose a fully configurable event-driven convolutional node with rate saturation mechanism that can be used to implement arbitrary ConvNets on FPGAs. This node includes a convolutional processing unit and a routing element, which makes it possible to build large 2D arrays where any multilayer structure can be implemented. The rate saturation mechanism emulates the refractory behavior in biological neurons, guaranteeing a minimum separation in time between consecutive events. A 4-layer ConvNet with 22 convolutional nodes trained for poker card symbol recognition has been implemented in a Spartan6 FPGA. This network has been tested with a stimulus where 40 poker cards were observed by a Dynamic Vision Sensor (DVS) in 1 s. Different slow-down factors were applied to characterize the behavior of the system for high-speed processing. For slow stimulus play-back, a 96% recognition rate is obtained with a power consumption of 0.85 mW. At maximum play-back speed, a traffic control mechanism downsamples the input stimulus, obtaining a recognition rate above 63% when less than 20% of the input events are processed, demonstrating the robustness of the network.
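The rate-saturation idea lends itself to a compact behavioral sketch: after firing, a neuron stays silent for a refractory interval, which bounds its output event rate at 1/t_refrac. The threshold and refractory values below are placeholders, not the FPGA node's configuration.

class RefractoryNeuron:
    """Integrate-and-fire unit with a refractory (rate-saturation) period."""
    def __init__(self, threshold=100, t_refrac_us=50):
        self.v = 0
        self.threshold = threshold
        self.t_refrac_us = t_refrac_us
        self.last_spike_us = None

    def receive(self, t_us, weight):
        """Accumulate one weighted input event; return True if an output fires."""
        self.v += weight
        if self.v < self.threshold:
            return False
        if (self.last_spike_us is not None
                and t_us - self.last_spike_us < self.t_refrac_us):
            return False  # refractory: output suppressed, rate saturates
        self.v = 0
        self.last_spike_us = t_us
        return True

Whether the membrane keeps integrating or is clamped during the refractory interval is a design choice; here it keeps integrating and only the output is gated.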

Bulk-based DC offset calibration for Low-power Memristor Array Read-Out System
C. Mohan, L.A. Camuñas-Mesa, E. Vianello, L. Perniola, C. Reita, J.M. de la Rosa, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - Conference on Design of Circuits and Integrated Systems DCIS 2017
[abstract]
Memristors in neuromorphic circuits typically need to drive currents of many mA, because their Low Resistance State (LRS) is in the order of a few kΩ and many devices need to be activated simultaneously, which results in high power consumption. Reducing read-out pulse amplitudes below the typical 0.1 V is not trivial, as offset voltages of the read-out circuits start to affect the results. This paper presents a three-stage cascaded calibration to compensate for the resting offset voltage of crossbar lines generated in the amplifiers driving memristive devices in memristor array read-out systems. The proposed calibration technique is based on adjusting the bulk voltage of the input differential pairs by means of a switchable cascade of resistor ladders. As a result, the calibrated offset voltage can be further reduced with the number of stages in the cascade, leading to a calibration voltage step below 0.1 mV, limited in practice only by mismatch and electrical noise. The circuit has been designed in 130 nm CMOS technology, and its operation has been verified with oxide-based resistive memory (OxRAM) devices operated in binary mode to implement synapses in neuromorphic circuits. Layout-extracted simulations considering PVT variations are presented to validate the calibration technique.
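A quick worked number shows why simultaneously activated LRS devices push read currents into the mA range; the values below are illustrative assumptions consistent with the "few kΩ" figure above, not measurements from the paper.

v_read = 0.1      # read pulse amplitude [V], the 'typical 0.1 V' above
r_lrs = 5e3       # assumed low-resistance state [ohm], 'a few kOhm'
n_active = 256    # assumed number of simultaneously activated devices

i_total = n_active * v_read / r_lrs
print(f"total read current = {i_total * 1e3:.1f} mA")   # 5.1 mA
# Halving v_read halves this current, but only if amplifier offset voltages
# are calibrated to stay far below the reduced read amplitude.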

Event-Driven Stereo Visual Tracking Algorithm to Solve Object Occlusion
L.A. Camuñas-Mesa, T. Serrano-Gotarredona, S. Ieng, R. Benosman and B. Linares-Barranco
Journal Paper - IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 9, pp 4223-4237, 2017
IEEE    DOI: 10.1109/TNNLS.2017.2759326    ISSN: 2162-237X    » doi
[abstract]
Object tracking is a major problem for many computer vision applications, but it continues to be computationally expensive. The use of bio-inspired neuromorphic event-driven dynamic vision sensors (DVSs) has heralded new methods for vision processing, exploiting a reduced amount of data and very precise timing resolutions. Previous studies have shown these neural spiking sensors to be well suited to implementing single-sensor object tracking systems, although they experience difficulties when solving ambiguities caused by object occlusion. DVSs have also performed well in 3-D reconstruction, in which event matching techniques are applied in stereo setups. In this paper, we propose a new event-driven stereo object tracking algorithm that simultaneously integrates 3-D reconstruction and cluster tracking, introducing feedback information in both tasks to improve their respective performances. This algorithm, inspired by human vision, identifies objects and learns their position and size in order to solve ambiguities. This strategy has been validated in four different experiments where the 3-D positions of two objects were tracked in a stereo setup even when occlusion occurred. The objects studied in the experiments were: 1) two swinging pens, the distance between which during movement was measured with an error of less than 0.5%; 2) a pen and a box, to confirm the correctness of the results obtained with a more complex object; 3) two straws attached to a fan and rotating at 6 revolutions per second, to demonstrate the high-speed capabilities of this approach; and 4) two people walking in a real-world environment.
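The cluster-tracking half of the algorithm can be sketched in a few lines: each incoming event pulls the nearest cluster center toward it. This omits the 3-D reconstruction stage and the feedback between the two tasks that the paper introduces; the learning rate and gating radius are illustrative.

def track_clusters(events, clusters, alpha=0.05, radius=20.0):
    """events: iterable of (t, x, y); clusters: list of [cx, cy], mutated."""
    for t, x, y in events:
        best, best_d2 = None, radius * radius
        for c in clusters:                       # nearest cluster within radius
            d2 = (x - c[0]) ** 2 + (y - c[1]) ** 2
            if d2 < best_d2:
                best, best_d2 = c, d2
        if best is not None:                     # event-driven center update
            best[0] += alpha * (x - best[0])
            best[1] += alpha * (y - best[1])
    return clusters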

An address event representation-based processing system for a biped robot
U. Jaramillo-Avila, H. Rostro-Gonzalez, L.A. Camuñas-Mesa, R.J. Romero-Troncoso and B. Linares-Barranco
Journal Paper - International Journal of Advanced Robotic Systems, vol. 13, no. 1, 2016
SAGE    DOI: 10.5772/62321    ISSN: 1729-8806    » doi
[abstract]
In recent years, several important advances have been made in the fields of both biologically inspired sensory processing and locomotion systems, such as Address Event Representation-based cameras (or Dynamic Vision Sensors) and human-like robot locomotion, e.g., the walking of a biped robot. However, making these fields merge properly is not an easy task. In this regard, Neuromorphic Engineering is a fast-growing research field, the main goal of which is the biologically inspired design of hybrid hardware systems that mimic neural architectures and process information in the manner of the brain. However, few robotic applications exist to illustrate it. The main goal of this work is to demonstrate, by creating a closed-loop system using only bio-inspired techniques, how such applications can work properly. We present an algorithm using Spiking Neural Networks (SNN) for a biped robot equipped with a Dynamic Vision Sensor, which is designed to follow a line drawn on the floor. This is a commonly used method for demonstrating control techniques; such methods are fairly simple to implement without very sophisticated components, yet they can still serve as a good test in more elaborate circumstances. In addition, the proposed locomotion system is able to coordinately control the six DOFs of a biped robot while switching between basic forms of movement. The latter has been implemented as an FPGA-based neuromorphic system. Numerical tests and hardware validation are presented.

Event-driven sensing and processing for high-speed robotic vision
L.A. Camuñas-Mesa, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - IEEE Biomedical Circuits and Systems Conference BioCAS 2014
[abstract]
We present an overview of a new vision paradigm where sensors and processors use visual information that is not represented by sequences of frames. Event-driven vision is inherently frame-free, as in biological systems. We use an event-driven sensor chip (called Dynamic Vision Sensor or DVS) together with event-driven convolution module arrays implemented on high-end FPGAs. Experimental results demonstrate the application of this paradigm to implement Gabor filters and 3D stereo reconstruction systems. This architecture can be applied to real systems that need efficient, high-speed visual perception, like automatic vehicle driving, robotic applications in non-structured environments, or intelligent surveillance in security systems.
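The Gabor filters mentioned above are ordinary 2D kernels; in the event-driven setting each kernel is applied per event rather than per frame (an event-driven convolution loop itself is sketched under the multi-kernel convolution processor entry below). A minimal kernel-bank construction, with illustrative parameters:

import numpy as np

def gabor_kernel(size=7, theta=0.0, lam=4.0, sigma=2.0):
    """Real-valued Gabor patch at orientation theta (illustrative defaults)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# A bank of four orientation channels, as used for edge-orientation extraction:
bank = [gabor_kernel(theta=t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
print(bank[0].shape)   # (7, 7)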

Live demonstration: Event-driven sensing and processing for high-speed robotic vision
L.A. Camuñas-Mesa, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - IEEE Biomedical Circuits and Systems Conference BioCAS 2014
[abstract]
Fig. 1(a) shows the demo setup. Two DVS boards send events out through parallel buses to a merger board. This board merges all the event flow in one single AER bus, and sends it to a custom-made convolutional board, where a 2D grid array of convolution modules is implemented within a Spartan6 FPGA, as represented in Fig. 1(b) and (c). A USBAERmini2 board is used to timestamp the events coming out of the convolutional board and send them to a computer through a high-speed USB2.0 port. Finally, the output events are represented in the computer in real time using jAER software.

Enhanced event-based stereo vision with Gabor filters
L.A. Camuñas-Mesa, T. Serrano-Gotarredona, S.H. Ieng, R. Benosman and B. Linares-Barranco
Conference - Conference on Design of Circuits and Integrated Systems DCIS 2014
[abstract]
The recently developed Dynamic Vision Sensors (DVS) sense dynamic visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, applying the matching algorithm to the events generated by the Gabor filters and not to those produced by the DVS. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.

On the use of orientation filters for 3D reconstruction in event-driven stereo vision
L.A. Camuñas-Mesa, T. Serrano-Gotarredona, S.H. Ieng, R.B. Benosman and B. Linares-Barranco
Journal Paper - Frontiers in Neuroscience, vol. 8, article 48, 2014
FRONTIERS RESEARCH FOUNDATION    DOI: 10.3389/fnins.2014.00048    ISSN: 1662-4548    » doi
[abstract]
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
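The orientation constraint can be made concrete with a small matching sketch: left and right events are paired only when they are close in time, consistent with the epipolar line, and carry the same Gabor orientation label. The window sizes below are illustrative, not the settings used in the paper.

def match_events(left, right, dt_us=500, dy=2):
    """left/right: lists of (t, x, y, orientation_idx); returns matched pairs."""
    pairs = []
    for tl, xl, yl, ol in left:
        best = None
        for tr, xr, yr, o_r in right:
            if o_r != ol:                        # orientation must agree
                continue
            if abs(tr - tl) > dt_us or abs(yr - yl) > dy:
                continue                         # temporal / epipolar windows
            cost = abs(tr - tl)
            if best is None or cost < best[0]:
                best = (cost, (tl, xl, yl), (tr, xr, yr))
        if best is not None:
            pairs.append((best[1], best[2]))
    return pairs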

Event-driven stereo vision with orientation filters
L.A. Camuñas-Mesa, T. Serrano-Gotarredona, B. Linares-Barranco, S. Ieng and R. Benosman
Conference - IEEE International Symposium on Circuits and Systems ISCAS 2014
[abstract]
The recently developed Dynamic Vision Sensors (DVS) sense dynamic visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, applying the matching algorithm to the events generated by the Gabor filters and not to those produced by the DVS. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.

An event-driven multi-kernel convolution processor module for event-driven vision sensors
L. Camuñas-Mesa, C. Zamarreño-Ramos, A. Linares-Barranco, A.J. Acosta-Jiménez, T. Serrano-Gotarredona and B. Linares-Barranco
Journal Paper - IEEE Journal of Solid-State Circuits, vol. 47, no. 2, pp 504-517, 2012
IEEE    DOI: 10.1109/JSSC.2011.2167409    ISSN: 0018-9200    » doi
[abstract]
Event-driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) is not capturing a sequence of still frames, as in conventional video and computer vision systems. In event-driven sensors each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events representing reality dynamically and continuously, without being constrained to frames. In this paper we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many of them can be assembled to build modular and hierarchical Convolutional Neural Networks for robust shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability. That is, it selects the convolution kernel depending on the origin of each event. A proof-of-concept test prototype has been fabricated in a 0.35 µm CMOS process and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2000 revolutions per second, detect symbols on a 52-card deck when browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
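The multi-kernel mechanism can be illustrated with a short event-driven convolution sketch in which each input event carries a source tag that selects the kernel added around its address. Array size, threshold, and kernel contents are placeholders, not the chip's programmed values.

import numpy as np

def multi_kernel_convolution(events, kernels, shape=(32, 32), threshold=5.0):
    """events: (t, x, y, source_id); kernels: dict source_id -> 2-D array."""
    state, out = np.zeros(shape), []
    for t, x, y, src in events:
        k = kernels[src]          # kernel selected by the event's origin
        h = k.shape[0] // 2
        x0, y0 = max(x - h, 0), max(y - h, 0)
        x1, y1 = min(x + h + 1, shape[0]), min(y + h + 1, shape[1])
        state[x0:x1, y0:y1] += k[x0-x+h:x1-x+h, y0-y+h:y1-y+h]
        for fx, fy in np.argwhere(state >= threshold):
            out.append((t, int(fx), int(fy)))    # pixel fires and resets
            state[fx, fy] = 0.0
    return out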

On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex
C. Zamarreño-Ramos, L.A. Camuñas-Mesa, J.A. Pérez-Carrasco, T. Masquelier, T. Serrano-Gotarredona and B. Linares-Barranco
Journal Paper - Frontiers in Neuroscience, vol. 5, article 26, 2011
FRONTIERS RESEARCH FOUNDATION    DOI: 10.3389/fnins.2011.00026    ISSN: 1662-4548    » doi
[abstract]
In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, which has been discovered by neuromorphic engineers. Specifically, we link one type of memristor nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We concentrate on the type of memristors referred to as voltage- or flux-driven memristors, and base our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We show how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We show how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we discuss the limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive devices. All files used for the simulations are made available through the journal web site.
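The central mechanism, that the shapes of the pre- and post-synaptic spikes determine the STDP curve, can be sketched numerically: the memristor sees v_mem = v_post - v_pre, and its weight changes only while |v_mem| exceeds the device threshold. All waveforms and constants below are illustrative toy values, not the paper's macro-model.

import numpy as np

def spike(t, t0, a_plus=1.0, a_minus=0.5, tau=5.0, width=1.0):
    """Stylized action potential: short positive pulse, then negative tail."""
    dt = t - t0
    v = np.zeros_like(t)
    v[(dt >= 0) & (dt < width)] = a_plus
    tail = (dt >= width) & (dt < width + 4 * tau)
    v[tail] = -a_minus * np.exp(-(dt[tail] - width) / tau)
    return v

def weight_change(delta_t, vth=1.2, k=0.01):
    """Net dw for post-minus-pre spike offset delta_t (ms), thresholded drive."""
    t = np.linspace(-50.0, 50.0, 5001)
    v_mem = spike(t, delta_t) - spike(t, 0.0)        # post minus pre
    drive = np.where(np.abs(v_mem) > vth,
                     v_mem - np.sign(v_mem) * vth, 0.0)
    return k * drive.sum() * (t[1] - t[0])

for d in (-10.0, -5.0, -2.0, 2.0, 5.0, 10.0):
    print(f"dt = {d:+.0f} ms -> dw = {weight_change(d):+.5f}")
# Causal pairs (dt > 0) potentiate, anti-causal pairs depress, and the
# magnitude decays toward zero with |dt|: an STDP curve emerging purely
# from the spike shapes.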

AER convolution microchips for asynchronous neocortical processing of event-coded visual sensory information
L. Camuñas-Mesa
Thesis - Date of defense: 21/05/2010
UNIVERSIDAD DE SEVILLA, IMSE-CNM    » link
[abstract]
In recent years, the processing power of computing systems has grown to levels that until recently seemed unimaginable, making it possible to design artificial systems that carry out increasingly complex tasks. In the field of sensory perception, however, and in particular in applications such as recognizing objects from visual information as the human brain does, an important limitation becomes apparent: despite working with basic processing units that are much faster than neurons, artificial systems cannot perform this kind of task as quickly as biological systems. To overcome this limitation, event-based processing of visual information is proposed in place of the frame-based processing used by traditional systems. In this way, multilayer systems can be built that process event-coded visual information with a massively parallel structure, emulating the behavior of the cerebral cortex. Since the processing structure of the first layers of the cortex can be approximated by a system of convolutions, we propose the design of convolution microchips that can become the basic building block of such multilayer systems. This thesis therefore presents two different versions of fully digital convolution microchips based on the AER protocol for event-based visual processing systems. By interconnecting several of these chips in series and in parallel, complex multilayer systems can be built, following the typical structures of the paradigm known as Convolutional Neural Networks. The thesis describes in detail the architecture of each of the two proposed convolution chip versions, together with some of the experimental results obtained.

A 32x32 pixel convolution processor chip for address event vision sensors with 155 ns event latency and 20 Meps throughput
L. Camuñas-Mesa, A. Acosta-Jiménez, C. Zamarreño-Ramos, T. Serrano-Gotarredona and B. Linares-Barranco
Journal Paper - IEEE Transactions on Circuits and Systems I-Regular Papers, vol. 58, no. 4, pp 777-790, 2011
IEEE    DOI: 10.1109/TCSI.2010.2078851    ISSN: 1549-8328    » doi
[abstract]
This paper describes a convolution chip for event-driven vision sensing and processing systems. As opposed to conventional frame-constrained vision systems, in event-driven vision there is no need for frames. In frame-free event-based vision, information is represented by a continuous flow of self-timed asynchronous events. Such events can be processed on the fly by event-based convolution chips, providing at their output a continuous event flow representing the 2-D filtered version of the input flow. In this paper we present a 32 x 32 pixel 2-D convolution event processor whose kernel can have arbitrary shape and size up to 32 x 32. Arrays of such chips can be assembled to process larger pixel arrays. Event latency between input and output event flows can be as low as 155 ns. Input event throughput can reach 20 Meps (mega events per second), and output peak event rate can reach 45 Meps. The chip can be configured to discriminate between two simulated propeller-like shapes rotating simultaneously in the field of view at a speed as high as 9400 rps (revolutions per second). Achieving this with a frame-constrained system would require a sensing and processing capability of about 100 K frames per second. The prototype chip has been built in 0.35 µm CMOS technology, occupies 4.3 x 5.4 mm² and consumes a peak power of 200 mW at maximum kernel size and maximum input event rate.
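The frame-rate equivalence quoted at the end of the abstract follows from simple arithmetic; the samples-per-revolution figure below is an assumption used only to reproduce the order of magnitude.

rps = 9400               # propeller speed discriminated by the chip
samples_per_rev = 10     # assumed frames needed per revolution
frame_rate = rps * samples_per_rev
print(f"equivalent frame rate ~ {frame_rate / 1e3:.0f} kframes/s")  # ~94 k
# i.e., on the order of the 'about 100 K frames per second' stated above.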

On scalable spiking ConvNet hardware for cortex-like visual sensory processing systems
L. Camuñas-Mesa, J.A. Pérez-Carrasco, C. Zamarreño-Ramos, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - IEEE International Symposium on Circuits and Systems ISCAS 2010
[abstract]
This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using spiking neural network Address-Event-Representation (AER) technology, for sophisticated pattern and object recognition tasks operating at millisecond-scale delays. Although such hardware would require hundreds of individual convolutional modules and thus is not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good upscaling behavior and possibilities for being implemented efficiently with new nanoscale hybrid CMOS/non-CMOS technologies.

Neocortical frame-free vision sensing and processing through scalable spiking ConvNet hardware
L. Camuñas-Mesa, J.A. Pérez-Carrasco, C. Zamarreño-Ramos, T. Serrano-Gotarredona and B. Linares Barranco
Conference - IEEE World Congress on Computational Intelligence WCCI 2010
[abstract]
This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using spiking neural network Address-Event-Representation (AER) technology, for sophisticated pattern and object recognition tasks operating at millisecond-scale delays. Although such hardware would require hundreds of individual convolutional modules and thus is not yet available, we discuss methods and technologies for implementing it in the near future. In the meantime, we provide precise behavioral simulations of large-scale spiking AER convolutional hardware and evaluate its performance, using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few milliseconds from stimulus onset. ConvNets show good upscaling behaviour and possibilities for being implemented efficiently with new nanoscale hybrid CMOS/non-CMOS technologies.

Fast vision through frameless event-based sensing and convolutional processing: Application to texture recognition
J.A. Pérez-Carrasco, B. Acha, C. Serrano, L. Camuñas-Mesa, T. Serrano-Gotarredona and B. Linares-Barranco
Journal Paper - IEEE Transactions on Neural Networks, vol. 21, no. 4, pp 609-620, 2010
IEEE    DOI: 10.1109/TNN.2009.2039943    ISSN: 1045-9227    » doi
[abstract]
Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level event-based frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for sensing full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.

Improved AER convolution chip for vision processing with higher resolution and new functionalities
L.A. Camuñas-Mesa, A. Linares-Barranco, A. Acosta, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - Conference on Design of Circuits and Integrated Systems DCIS 2009
[abstract]
We present a new neuromorphic fully digital convolution microchip for Address Event Representation (AER) spike-based processing systems. This chip computes 2-D convolutions with a programmable kernel in real time. Previously, we designed and tested another convolution chip with a size of 32 x 32 pixels [1] and, based on the information obtained from that test, we have designed a new chip with higher resolution (64 x 64 pixels), improved behavior and new functionalities. This chip receives and generates data in AER format, which is an asynchronous protocol, implementing the convolution of the input images with a programmable kernel. The most important new functionality included in this chip is the multi-kernel capability, which allows us to program several kernels (up to 32) so that each input event is processed with the corresponding kernel, depending on the origin of the input event. The paper describes the architecture of the chip, with special emphasis on the new improvements.

CAVIAR: A 45k neuron, 5M synapse, 12G connects/s AER hardware sensory-processing-learning-actuating system for high-speed visual object recognition and tracking
R. Serrano-Gotarredona, M. Oster, P. Lichtsteiner, A. Linares-Barranco, R. Paz-Vicente, F. Gómez-Rodríguez, L. Camuñas-Mesa, R. Berner, M. Rivas-Pérez, T. Delbrueck, S.C. Liu, R. Douglas, P. Hafliger, G. Jiménez-Moreno, A. Civit-Ballcels, T. Serrano-Gotarredona, A.J. Acosta-Jiménez and B. Linares-Barranco
Journal Paper - IEEE Transactions on Neural Networks, vol. 20, no. 9, pp 1417-1438, 2009
IEEE    DOI: 10.1109/TNN.2009.2023653    ISSN: 1045-9227    » doi
[abstract]
This paper describes CAVIAR, a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system. CAVIAR uses the asynchronous address-event representation (AER) communication framework and was developed in the context of a European Union funded project. It has four custom mixed-signal AER chips, five custom digital AER interface components, 45k neurons (spiking cells), up to 5M synapses, performs 12G synaptic operations per second, and achieves millisecond object recognition and tracking latencies.

Fully digital AER convolution chip for vision processing
L. Camuñas-Mesa, A. Acosta-Jiménez, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - IEEE International Symposium on Circuits and Systems ISCAS 2008
[abstract]
We present a neuromorphic fully digital convolution microchip for Address Event Representation (AER) spike-based processing systems. This microchip computes 2-D convolutions with a programmable kernel in real time. It operates on a pixel array of size 32 x 32, and the kernel is programmable and can be of arbitrary shape and size up to 32 x 32 pixels. The chip receives and generates data in AER format, which is asynchronous and digital. The paper describes the architecture of the chip, the test setup, and experimental results obtained from a fabricated prototype.

Image Processing Architecture Based on a Fully Digital AER Convolution Chip
L.A. Camuñas-Mesa, A.J. Acosta-Jiménez, T. Serrano-Gotarredona, B. Linares-Barranco and R. Serrano-Gotarredona
Conference - Conference on Design of Circuits and Integrated Systems DCIS 2007
[abstract]
Abstract not available

The stochastic I-Pot: A circuit block for programming bias currents
R. Serrano-Gotarredona, L. Camuñas-Mesa, T. Serrano-Gotarredona, J.A. Leñero-Bardallo and B. Linares-Barranco
Journal Paper - IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 54, no. 9, pp 760-764, 2007
IEEE    DOI: 10.1109/TCSII.2007.900881    ISSN: 1549-7747    » doi
[abstract]
In this brief, we present the "Stochastic I-Pot". It is a circuit element that allows for digitally programming a precise bias current ranging over many decades, from pico-amperes up to hundreds of micro-amperes. I-Pot blocks can be chained within a chip to allow for any arbitrary number of programmable bias currents. The approach only requires three external chip pins, an external current-measuring instrument, and a computer. This way, once all internal I-Pots have been characterized, they can be programmed through a computer to provide any desired current bias value with very low error. The circuit block turns out to be very practical for experimenting with new circuits (especially when a large number of biases is required), testing wide ranges of biases, introducing means for current mismatch calibration, offset compensation, etc., using a reduced number of chip pins. We show experimental results of generating bias currents with errors of 0.38% (8 bits) for currents varying from 176 µA to 19.6 pA. Temperature effects are characterized.
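The programming principle can be sketched as a binary current splitter: each stage halves the remaining current, and the output bias is the sum of the branches enabled by a digital word, which is how a single block spans many decades. The reference current and stage count below are assumptions chosen only to land near the quoted range.

I_REF = 200e-6     # assumed top reference current [A]
N_STAGES = 24      # assumed number of binary splitter stages

def ipot_current(code):
    """Bias = sum of branch currents I_REF / 2^(k+1) for each set bit k."""
    return sum(I_REF / 2 ** (k + 1)
               for k in range(N_STAGES) if (code >> k) & 1)

print(f"{ipot_current(1 << 0) * 1e6:8.3f} uA")    # strongest branch: 100 uA
print(f"{ipot_current(1 << 23) * 1e12:8.3f} pA")  # weakest branch: ~11.9 pA
# Characterizing each branch once (through the three-pin interface) lets
# software pick the code whose measured current best approximates the target.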

A bio-inspired event-based real-time image processor
R. Serrano-Gotarredona, T. Serrano-Gotarredona, A.J. Acosta-Jiménez, B. Linares-Barranco and L.A. Camuñas-Mesa
Conference - IEEE RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics BioRob 2006
[abstract]
AER (Address Event Representation) is an emergent bio-inspired protocol intended for communication between chips containing many processing units, called neurons or pixels. It exploits the advantages of communicating the activation state of a neuron as pulses, as done in the human brain. The information is sent out sorted, beginning with the most relevant. This feature, together with the parallel processing of the information, allows very fast image processing. In this paper, we explain how AER is suitable for real-time image processing and, as an example, we present results from some AER-based convolution chips which are able to perform convolutions in real time.

On Fully Digital Address-Event-Representation Convolution Processing
L. Camuñas-Mesa, A.J. Acosta-Jimenez, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - Conference on Design of Circuits and Integrated Systems DCIS 2005
[abstract]
Abstract not available

A digital pixel cell for address event representation image convolution processing
L. Camuñas-Mesa, A. Acosta-Jiménez, T. Serrano-Gotarredona and B. Linares-Barranco
Conference - Conference on Bioengineered and Bioinspired Systems II, 2005
[abstract]
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate 'events' according to their information levels. Neurons with more information (activity, derivative of activities, contrast, motion, edges, ...) generate more events per unit time, and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, ... Also, there has been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital implementation reference against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.

On leakage current temperature characterization using sub-pico-ampere circuit techniques
B. Linares-Barranco, T. Serrano-Gotarredona, R. Serrano-Gotarredona and L.A. Camuñas
Conference - IEEE International Symposium on Circuits and Systems ISCAS 2004
[abstract]
Recently, a reliable circuit design technique for current-mode signal processing down to femto-amperes was reported [1]. The technique involves logarithmic current splitters for obtaining on-chip sub-pA currents and a special saw-tooth oscillator for current monitoring, while using "source voltage shifting". This way, sub-pA currents can be characterized without driving them off-chip, which would require expensive instrumentation with complicated low-leakage setups. In this paper we report on the characterization of the temperature dependence of leakage currents, exploiting these techniques. Currents as low as 0.3 fA have been characterized.
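The reason a saw-tooth oscillator can monitor such small currents on-chip is the capacitor relation I = C·dV/dT: the unknown current charges a known capacitor between two comparator levels, so the oscillation period encodes the current. A worked number with assumed capacitor and window values:

C = 1e-12       # assumed integration capacitor [F]
dV = 0.5        # assumed comparator window [V]
I = 0.3e-15     # 0.3 fA, the smallest current characterized above

T = C * dV / I
print(f"saw-tooth period ~ {T:.0f} s (~{T / 60:.0f} min)")   # ~1667 s
# Periods of minutes are easy to time precisely on-chip, which is what makes
# femtoampere-range characterization practical without off-chip picoammeters.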
