Editorial · Open access · Peer reviewed

Guest editorial: Special issue on computational image sensors and smart camera hardware

2018; Wiley; Volume: 46; Issue: 9; Language: English

10.1002/cta.2551

ISSN

1097-007X

Authors

Jorge Fernández‐Berni, Ricardo Carmona‐Galán, Gilles Sicard, Antoine Dupret

Topic(s)

Infrared Target Detection Methodologies

Abstract

Recent advances in both software and hardware technologies are enabling the emergence of vision as a key sensory modality in a variety of application scenarios. Concerning hardware, all of the components along the signal chain play a significant role when it comes to implementing smart vision-enabled systems. At the front end, new circuit structures for sensing, processing, and signal conditioning are adding functionalities to CMOS imagers beyond the mere generation of 2-D intensity maps. Moreover, the development of vertical integration technologies is facilitating monolithic realizations of visual sensors in which the incorporation of computational capabilities has no impact at all on image quality. Typically, the output of the front-end device in a smart camera will be a preprocessed flow of information ready for further efficient analysis. At this point, specific ICs known as vision processing units can be inserted to accelerate the processing flow according to the targeted application. On the other hand, reconfigurability is a valuable asset in the ever-changing field of vision. FPGAs leverage cutting-edge digital technologies to offer flexible hardware for the exploration of different memory arrangements, data flows, and processing parallelization. It is precisely for parallelization that GPUs constitute an interesting alternative in smart cameras when massive pixel-level operation is required, as is the case for state-of-the-art vision algorithms based on convolutional neural networks. At a higher level, DSPs and multicore CPUs make software development notably easier at the cost of losing hardware specificity. Overall, this special issue aims to cover some of the latest research in the vast ecosystem of hardware for artificial vision. All the accepted articles, briefly introduced below, have been reviewed by at least two experts in the field.

In “A 99.95% Linearity Readout Circuit with 72 dB Dynamic Range for Active Pixel Sensors,” Bruno de Sá et al1 report comprehensive simulations of readout circuitry providing extended dynamic range while preserving a key specification of imagers, ie, the fill factor. Another notable feature of the proposed circuitry is its high linearity.

Linearity in CMOS image sensors is also the focus of the paper “An Ultra-Linear CMOS Image Sensor for a High-Accuracy Imaging System” by Teymouri and Sobhi.2 The authors describe a column-parallel architecture for signal conditioning and analog-to-digital conversion with built-in correlated double sampling. The reported circuitry achieves remarkable linearity while keeping reasonable performance in terms of noise, dynamic range, and power consumption.

Ahlberg et al present an application-specific image sensor architecture in the article entitled “Simultaneous Sensing, Read-Out, and Classification on an Intensity-Ranking Image Sensor.”3 The authors exploit the concept of address-event representation to perform pixel intensity ranking that feeds subsequent image classification. Low computational load and reduced memory requirements are the main advantages of this approach over alternatives such as classification based on state-of-the-art convolutional neural networks.
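As a brief illustrative aside, the ranking principle behind this kind of readout can be mimicked in a few lines of code: pixels integrating a larger photocurrent cross a fixed threshold earlier, so the order in which their addresses appear on the event bus already encodes the intensity rank. The Python sketch below is only a toy model of that principle; the array size, threshold, and variable names are arbitrary assumptions and do not reproduce the circuit of Ahlberg et al.3

```python
# Toy simulation of intensity ranking through address-event readout.
# Illustrative only: parameters and names are assumptions, not the
# circuit described in the special issue.
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4
intensity = rng.uniform(0.05, 1.0, size=(H, W))    # normalized photocurrents

# Each pixel integrates its photocurrent and emits an event (its own
# address) upon crossing a fixed threshold; brighter pixels fire earlier.
threshold = 1.0
crossing_time = threshold / intensity               # time to first event per pixel

# The address-event stream is simply the pixel addresses sorted by firing time.
rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
addresses = np.stack([rows.ravel(), cols.ravel()], axis=1)
event_stream = [tuple(a) for a in addresses[np.argsort(crossing_time.ravel())]]

# The receiver recovers the full intensity ranking without ever reading
# absolute pixel values.
rank = {addr: i for i, addr in enumerate(event_stream)}
print("First events (brightest pixels):", event_stream[:5])
```

Brighter pixels appear first in the event stream, which is all a downstream classifier needs if it operates on ranks rather than absolute intensities.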
Still on the topic of address-event representation, Leñero-Bardallo et al survey applications of computational image sensors based on this processing paradigm. Their paper, entitled “Applications of Event-Based Image Sensors—Review and Analysis,”4 focuses on scenarios where this kind of sensor can outperform conventional frame-based visual systems, eg, sun tracking or flame monitoring.

Rounding off the sensor-level contributions, the article “In-Pixel Analog Memories for a Pixel-Based Background Subtraction Algorithm on CMOS Vision Sensors” by García-Lesta et al5 describes a focal-plane sensing-processing array to accelerate image segmentation. The authors propose a hardware-oriented implementation featuring less per-pixel memory than commonly required, which is crucial to keep the impact on image quality low.

Bonamy et al present a technique for better exploitation of multiprocessor architectures on FPGAs in the paper entitled “Energy Efficient Mapping on Manycore with Dynamic and Partial Reconfiguration: Application to a Smart Camera.”6 The authors apply the proposed methodology to achieve energy-efficient license plate recognition on two different platforms, namely, Virtex-6/MicroBlaze and Zynq-7000.

The article “An FPGA-Based Smart Camera for Accurate Chlorophyll Estimations” by Pérez-Patricio et al7 describes an embedded vision system tailored for a particular application: analysis of the reflectance/transmittance of tree leaves for precision agriculture. This smart camera can estimate chlorophyll content at about 200 fps with 97% accuracy, outperforming most previous approaches.

An FPGA is also at the core of the algorithm realization reported by Lapray et al in “An FPGA-Based Pipeline for Micropolarizer Array Imaging.”8 The authors propose a Stokes processing pipeline to deal with the nonconventional sensory information coming from a micropolarizer array. The resulting hardware, tested on commercial FPGAs, features low complexity and low latency.
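For readers unfamiliar with polarization imaging, the quantities such a pipeline manipulates can be summarized with the textbook estimation of the linear Stokes parameters from one 2 × 2 micropolarizer super-pixel (analyzers at 0°, 45°, 90°, and 135°). The Python sketch below is a generic reference computation under that standard layout; it is not the FPGA pipeline of Lapray et al,8 and the function and variable names are assumptions.

```python
# Textbook linear Stokes estimation for one 2x2 micropolarizer super-pixel
# (polarizer orientations 0°, 45°, 90°, 135°). Generic reference code,
# not the FPGA pipeline discussed in the special issue.
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    """Return (S0, S1, S2), degree and angle of linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)               # total intensity (averaged estimate)
    s1 = i0 - i90                                    # 0° vs 90° component
    s2 = i45 - i135                                  # 45° vs 135° component
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                  # angle of linear polarization (rad)
    return (s0, s1, s2), dolp, aolp

# Light fully polarized at 0° (Malus' law samples) gives DoLP ≈ 1 and AoLP ≈ 0.
print(stokes_from_superpixel(i0=1.0, i45=0.5, i90=0.0, i135=0.5))
```

The appeal for hardware realizations is that this per-super-pixel arithmetic reduces to a handful of additions, subtractions, one division, and one arctangent, which maps naturally onto a streaming pipeline.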
Rubio-Ibañez et al present an FPGA-based accelerator for SIFT, a classical vision algorithm for image matching and tracking. Their paper, entitled “An All-Hardware Implementation of the Subpixel Refinement Stage in SIFT Algorithm,”9 focuses on computing the subpixel location of features. The reported implementation, tested on a Xilinx Zynq-7020, takes into account the limited resources available in embedded devices; despite trading off some accuracy, its image-matching performance is notable.

Yet another article exploiting FPGAs for visual processing is “FPGA-SoC Implementation of an ICA-Based Background Subtraction Method” by Carrizosa-Corral et al.10 In this case, the authors distribute the computational load of a background subtraction algorithm between the reconfigurable fabric of an FPGA and the companion ARM processor. This joint realization boosts performance when compared with an implementation running exclusively on an embedded processor.

Vazquez-Cervantes et al address embedded character recognition in the article “Toward Implementation of Associative Model in Real Time for Character Recognition: A Hardware Architecture Proposal for Embedded Systems.”11 They describe not only an algorithmic approach for this task but also how to deal with its physical implementation on an FPGA.

In “Dataflow Management, Dynamic Load Balancing and Concurrent Processing for Real-Time Embedded Vision Applications Using Quasar,”12 B. Goossens explores runtime aspects of Quasar, a high-level programming language and development environment designed to make the most of the underlying heterogeneous hardware in a simple way. Specifically, this paper demonstrates that automatic parallelization and implicit concurrency detection can be achieved on multicore/GPU systems from high-level code.

Finally, Corpas et al present in “Acceleration and Energy Consumption Optimization in Cascading Classifiers for Face Detection on Low-Cost ARM big.LITTLE Asymmetric Architectures”13 a procedure to optimize face detection on multicore processors with a limited power budget. They report experimental results on two commercial embedded computers, ie, the Odroid XU4 and the Raspberry Pi 2B. These results demonstrate that throughput can be significantly improved by parallelizing key tasks along the processing pipeline (a toy sketch of such window-level parallelism closes this editorial).

The guest editors would like to express their deepest appreciation to all the authors who submitted papers to this special issue. We also want to thank all the reviewers for their valuable and timely feedback. Last but not least, we really appreciate the help and guidance provided by the IJCTA's Editor-in-Chief, Prof Ángel Rodríguez-Vázquez, and by the editorial staff in bringing this special issue together.
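As a closing illustrative aside, and referring back to the discussion of Corpas et al13 above, the sketch below shows in plain Python how a cascade of increasingly expensive tests can be distributed over image windows across several cores. It is a toy model under assumed stage tests, window size, and worker count; it is not the authors' implementation, nor is it tuned for big.LITTLE scheduling.

```python
# Toy cascade of early-rejection stages parallelized over image windows.
# Illustrative only: stage tests, window size, and worker count are
# assumptions, not the implementation evaluated in the special issue.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

WIN = 24  # side of a candidate window, in pixels (assumed)

def cascade(window):
    """Run cheap-to-expensive stages, rejecting as early as possible."""
    if window.mean() < 0.2:      # stage 1: cheap brightness test
        return False
    if window.std() < 0.05:      # stage 2: contrast/texture test
        return False
    # stage 3: edge-energy test standing in for the costlier stages
    return float(np.abs(np.diff(window, axis=0)).mean()) > 0.05

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.random((480, 640)).astype(np.float32)
    windows = [frame[y:y + WIN, x:x + WIN]
               for y in range(0, frame.shape[0] - WIN + 1, WIN)
               for x in range(0, frame.shape[1] - WIN + 1, WIN)]
    # Distribute the windows across cores; on a big.LITTLE processor the
    # OS scheduler decides which cores execute which worker processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        hits = sum(pool.map(cascade, windows, chunksize=64))
    print(f"{hits} of {len(windows)} candidate windows passed the cascade")
```

Because each window is rejected as early as possible, most of the per-window work stays cheap, which is what makes distributing the windows across cores pay off in throughput.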
