
18 May 2018

Accelerating Embedded Vision Integration with Xilinx SoCs and the reVISION Stack

FPGA design strategy is changing, especially in the sphere of embedded vision systems. New design solutions utilize software-based systems with integrated hardware acceleration, allowing faster development but also requiring new design methods and tools. Xilinx and Avnet have partnered to provide a comprehensive system environment to help designers keep up with these emerging trends.


SoCs with programmable logic are an essential element of real-time embedded vision systems. Designers can capitalize on the power and efficiency of Xilinx's Zynq UltraScale+ MPSoC devices to implement their designs using Avnet's Embedded Vision Kits and the Xilinx reVISION stack.
This ecosystem enables a straightforward approach to integrating deep-learning AI-based vision features and facilitates rapid development by decoupling software development from the first prototype hardware cycle.

Introducing computer vision to an embedded design can be a complex process. Invariably the hardware must be small, lightweight, low power, and low cost. Fast product development cycles make known-good solutions for the underlying functionality essential. Every minute spent on the lowest levels of the firmware is another minute not spent designing functionality that differentiates the product.
Xilinx provides a fully integrated solution that engineers can modify and build upon. Software engineers can get started on complex machine-learning-based image processing designs without writing a single line of HDL by using the SDSoC Development Environment, a Zynq UltraScale+ MPSoC development kit, and one of the many complete design examples available.
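
To illustrate what that workflow looks like, here is a minimal sketch of the kind of ordinary C++ a software engineer might hand to the SDSoC tools. The function name and image dimensions are hypothetical; the point is that the designer marks a plain C++ function for hardware, and the sds++ compiler synthesizes it into programmable logic and generates the data movers, with no hand-written HDL.

    #include <cstdint>

    #define PIXELS (1280 * 720)

    // Hypothetical per-pixel threshold stage written in ordinary C++. In
    // SDSoC the designer marks a function like this for hardware, and the
    // sds++ compiler synthesizes it into programmable logic and generates
    // the data movers -- no HDL is written by hand.
    void threshold_u8(const uint8_t in_pix[PIXELS], uint8_t out_pix[PIXELS],
                      uint8_t level)
    {
        for (int i = 0; i < PIXELS; ++i)
            out_pix[i] = (in_pix[i] > level) ? 255 : 0;  // binarize each pixel
    }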

Zynq UltraScale+ MPSoC

The rapid development of embedded vision products requires the use of an existing hardware platform with sufficient interfaces and onboard functionality to meet product requirements. The platform must also provide an easy-to-use, robust firmware and application development environment.

For this, Xilinx collaborates with Avnet. Avnet's years of vision-oriented development cycles have culminated in a complete system approach, including the Avnet Embedded Vision Kit with multiple SoC-based SoM options, video-specific carrier cards, and features like PoE. Designers can use Xilinx's development ecosystem to exploit programmable SoC device families like the Zynq UltraScale+ MPSoC, letting them focus on fine-tuning and customizing intellectual property rather than porting code.

Xilinx and Avnet stack concept: the two companies work together to complement embedded vision products with robust firmware and application development environments.

Today’s embedded vision products require single-device solutions powerful enough to meet real-time task deadlines and mission-critical safety specifications while staying within challenging power-efficiency budgets. Video and image processing typically requires sophisticated features like object detection and recognition, algorithmic decision-making, and motion path selection. The outputs of these processes must be deterministically bound to control decisions, status analysis, and human-machine interface notification. Without such determinism, safety and reliability are directly impacted. Devices like the Zynq UltraScale+ MPSoC feature four ARM Cortex-A53 CPUs that enable symmetric multiprocessing for image processing under application-rich operating systems like Linux.

The Zynq UltraScale+ MPSoC further integrates functionality critical to embedded vision products with two ARM Cortex-R5 real-time processors operating independently of the quad-core application processors and their operating system environment. This enables the implementation of lockstep monitoring and safety features that can continue operating in the event of a serious software system failure. A separate fault-tolerant platform management unit handles safety and power management functions, while a configuration and security unit manages device configuration and protects against security threats. Finally, a Mali-400 graphics processor provides built-in 2D and 3D rendering, allowing the platform to drive high-quality video display output.

Zynq UltraScale+ MPSoCs are not simply FPGAs anymore. Xilinx has recognized that a software-centric approach better meets the expectations of the embedded vision marketplace. FPGA design strategy has shifted away from proprietary hardware solutions that required considerable investment in HDL implementations to achieve real-time performance. Systems are now software solutions whose hardware acceleration is increasingly provided by tried-and-tested off-the-shelf IP, integrated into software applications through future-proof frameworks like OpenCL. The reVISION stack is Xilinx's way of putting this all together in a complete system environment.

The reVISION Stack

Embedded vision systems need guided machine learning and computer vision acceleration. Xilinx 'all-programmable' technology uses the software-defined embedded vision reVISION stack to realize machine learning, sensor fusion, and computer vision. The reVISION stack encompasses a software-defined environment built on industry-standard frameworks, enabling implementation of most of the popular neural networks used today, including AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Optimized reference models for these neural networks are available.

The reVISION stack also includes all the functional blocks required to build completely custom neural networks. Neural networks are typically layers of convolutional (filter) and non-linear (activation) processes that may interpolate (upsample) or decimate (downsample) information from the previous layer. The reVISION stack accommodates the most common layer types with hardware-optimized implementations of Conv, ReLU, Pooling, Dilated Conv, Deconv, FC, Detector & Classifier, and SoftMax.
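
As a point of reference for what two of those layer types compute, the sketch below is a plain software implementation of a single-channel 2D convolution followed by a ReLU activation; the hardware-optimized reVISION blocks implement the same arithmetic in programmable logic. Dimensions and types here are illustrative.

    #include <algorithm>
    #include <vector>

    // Single-channel 2D convolution (valid padding) followed by ReLU.
    // Input is h x w, kernel is ksz x ksz, output is (h-ksz+1) x (w-ksz+1).
    std::vector<float> conv2d_relu(const std::vector<float>& in, int h, int w,
                                   const std::vector<float>& k, int ksz)
    {
        const int oh = h - ksz + 1, ow = w - ksz + 1;
        std::vector<float> out(oh * ow);
        for (int y = 0; y < oh; ++y)
            for (int x = 0; x < ow; ++x) {
                float acc = 0.0f;
                for (int ky = 0; ky < ksz; ++ky)       // slide the kernel
                    for (int kx = 0; kx < ksz; ++kx)
                        acc += in[(y + ky) * w + (x + kx)] * k[ky * ksz + kx];
                out[y * ow + x] = std::max(acc, 0.0f); // ReLU: clamp negatives
            }
        return out;
    }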


Designers can choose from an array of image processing IP that seamlessly integrates with neural network capability under frameworks like Caffe and OpenVX. The result is high responsiveness and configurability, with access to a wide development community that continually adds to and updates the OpenCV libraries. With Xilinx OpenCV (xfOpenCV), the most critical acceleration functions oriented toward applications like drone control, autonomous driving, and machine learning are immediately available.
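
These hardware-optimized functions mirror familiar OpenCV calls. As an illustration, the pipeline below uses standard software OpenCV; xfOpenCV provides template-parameterized hardware equivalents of operations like these, but the exact template signatures vary by release, so only the cv:: form they mirror is shown here.

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Read a frame and run a basic edge-detection pipeline in software.
        cv::Mat frame = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
        if (frame.empty()) return 1;

        cv::Mat blurred, edges;
        cv::GaussianBlur(frame, blurred, cv::Size(5, 5), 1.5); // noise suppression
        cv::Canny(blurred, edges, 50, 150);                    // edge detection

        cv::imwrite("edges.png", edges);
        return 0;
    }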

Software developers can incorporate hardware accelerators like filters, image processing, and motion tracking with a few lines of well-documented code. Input data can easily be streamed in and out of these instantiations as simple objects referenced like function parameters. Direct streaming is a powerful way to optimize a system's use of memory: the compiler can link acceleration modules directly over internal bus structures with minimal memory overhead, avoiding external memory access. This reduces power consumption and significantly improves processing latency.
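
A hedged sketch of that chaining, using SDSoC conventions: the stage functions and dimensions below are hypothetical, and the code builds only under the SDSoC tools, but it shows how declaring sequential access patterns lets the compiler service chained accelerators with streaming data movers rather than round-trips through external DDR.

    #include <cstdint>
    #include "sds_lib.h"  // sds_alloc / sds_free from the SDSoC runtime

    #define PIXELS (640 * 480)

    // Declaring purely sequential access lets the sds++ compiler service
    // each argument with a streaming data mover instead of random-access DMA.
    #pragma SDS data access_pattern(in:SEQUENTIAL, out:SEQUENTIAL)
    void stage1_gain(const uint8_t in[PIXELS], uint8_t out[PIXELS])
    {
        for (int i = 0; i < PIXELS; ++i) {
            int v = (in[i] * 3) / 2;              // 1.5x gain...
            out[i] = v > 255 ? 255 : (uint8_t)v;  // ...saturated at 255
        }
    }

    #pragma SDS data access_pattern(in:SEQUENTIAL, out:SEQUENTIAL)
    void stage2_threshold(const uint8_t in[PIXELS], uint8_t out[PIXELS])
    {
        for (int i = 0; i < PIXELS; ++i)
            out[i] = in[i] > 128 ? 255 : 0;  // binarize
    }

    void process_frame(const uint8_t* frame_in, uint8_t* frame_out)
    {
        // Physically contiguous buffer the hardware data movers can reach.
        uint8_t* tmp = (uint8_t*)sds_alloc(PIXELS);
        stage1_gain(frame_in, tmp);        // accelerator calls look like plain
        stage2_threshold(tmp, frame_out);  // function calls from software
        sds_free(tmp);
    }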

Platform Resources

If you need a platform to get started with, Avnet's ZedBoard community site is an online resource containing related examples, information, and training for a number of ready-made Zynq SoC module (SoM) based kits. Designers can also use the reVISION stack with development platforms like the Zynq UltraScale+ MPSoC ZCU102 Evaluation Kit, using FMC- and USB-interfaced cameras, HDMI sources, and virtual video devices to both train and implement applications. The neural-network-based system is easily customized through software running on the ARM processor system without a time-consuming logic compilation. Many design examples incorporating both machine learning and vision are available to learn from, including motion detection, face tracking, thermal imaging, and robotics applications.

Embedded Vision in the World

Multi-camera vision applications are becoming increasingly common. This is especially the case in Advanced Driver Assistance Systems (ADAS), where a platform must deliver the processing power required for fast frame rates, high-performance signal processing, sophisticated sensor fusion, and dedicated neural network hardware acceleration. However, the problem extends beyond raw performance to relatively simple criteria that are frequently overlooked. It does not matter how powerful a device is if it does not meet the approval standards required for automotive applications.
Xilinx’s automotive-qualified XA Zynq UltraScale+ MPSoC family meets AEC-Q100 test specifications. This enables device use in harsh automotive environments that require higher temperature grades, high-visibility change management, and high-reliability manufacturing. Beyond the physical and environmental specifications, these devices incorporate a 'safety island' that enables real-time processing in mission-critical safety applications like ADAS, allowing device certification to the ISO 26262 ASIL-C standard.

As the previous example showed, it is important to factor in all the requirements essential to an embedded vision system before choosing a platform. The programmable logic available on Zynq UltraScale+ MPSoC devices enables solutions in systems where a CPU-only approach would be impractical or even dangerous. An example is industrial robotic motor control, which typically requires high-speed PID loops whose error calculations are based on real-world feedback, demanding high-speed sampling of analog signals.

The programmable logic fabric on Zynq UltraScale+ MPSoC devices works well in this role, reducing the need for interrupt-driven software drivers that can reduce system stability and degrade performance. Even if the control algorithm is simple, the real-time determinism required to maintain low jitter at high sample rates forces rapid task switching, wasting significant processing power on the task switching alone.
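
To make that concrete, the sketch below is a minimal PID update of the kind such a loop computes. Implemented in the fabric, it executes once per ADC sample with deterministic timing instead of competing with other tasks for CPU time; the gains and structure here are illustrative, not a Xilinx reference design.

    // Minimal PID update. In the fabric this runs once per ADC sample with
    // deterministic, cycle-accurate timing instead of competing with other
    // tasks for CPU time. Gains and structure here are illustrative.
    struct Pid {
        float kp, ki, kd;      // controller gains (hypothetical values)
        float integral = 0.0f; // accumulated error
        float prev_err = 0.0f; // error from the previous sample
    };

    float pid_step(Pid& c, float setpoint, float measured, float dt)
    {
        const float err = setpoint - measured;       // real-world feedback error
        c.integral += err * dt;                      // integral term accumulates
        const float deriv = (err - c.prev_err) / dt; // derivative of the error
        c.prev_err = err;
        return c.kp * err + c.ki * c.integral + c.kd * deriv;
    }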

Safety-critical embedded vision products like those in industrial robotics control require failsafe operation. The Zynq UltraScale+ MPSoC's integrated system monitor includes a multi-channel ADC along with on-chip sensors that track on-die operating conditions such as temperature and supply voltages. This enables fault conditions to be detected independently of the software domain, with status available through external communication ports such as an I2C interface and alarm outputs. The Zynq UltraScale+ MPSoC has an additional high-speed monitor capable of sampling at up to 1 MSPS, enabling extremely rapid response to fault conditions. Upon fault detection, the robotic control system can park itself in a safe state of operation, protecting both equipment and user.
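
A hedged sketch of what the software side of such supervision might look like: the helper functions and thresholds below are hypothetical stand-ins for the actual system-monitor access method (a Linux hwmon/IIO driver or bare-metal register reads) and for the real limits, which come from the device datasheet.

    #include <cstdio>

    // Hypothetical helpers: stand-ins for the real system-monitor readout
    // (a Linux hwmon/IIO driver or bare-metal register reads on the MPSoC).
    float read_die_temp_c() { return 45.0f; } // stub: replace with real readout
    float read_vccint_v()   { return 0.85f; } // stub: replace with real readout

    // Returns false when the die leaves its safe operating envelope so the
    // caller can drive the robot to its parked, failsafe state.
    bool check_operating_conditions()
    {
        const float temp = read_die_temp_c();
        const float vcc  = read_vccint_v();

        // Illustrative thresholds; real limits come from the device datasheet.
        if (temp > 100.0f || vcc < 0.80f || vcc > 0.90f) {
            std::printf("fault: temp=%.1f C, vccint=%.3f V\n", temp, vcc);
            return false;
        }
        return true;
    }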

Conclusion

Xilinx Zynq UltraScale+ MPSoCs are easy to use thanks to the comprehensive reVISION stack and flexible, vision-oriented hardware development kits. An MPSoC has a clear advantage over embedded CPUs because of its configurable programmable-logic hardware acceleration. The result is a fully integrated embedded vision development system built around a software-centric approach.
Xilinx has added functionality to support reconfiguration, reliability, monitoring, and safety, eliminating the need to bolt on additional supervisory hardware.

Existing examples enable designers with limited knowledge of FPGA logic design to get started. The use of the OpenVX, Caffe, OpenCL, and OpenCV standards, along with an operating system like Linux, opens system development to a large pool of third-party IP that accelerates development and future-proofs applications.


Implementing advanced vision features is possible with the Zynq UltraScale+ MPSoC and reVISION. Solutions from Xilinx and Avnet can help cut through the pain of complex system design and bring clarity to projects, whether it's an autonomous car, a medical imaging device, or the next-generation coffee-stirring, dishwashing robotic super drone. Resources are available to help you discover more about realizing embedded vision solutions and to read about other innovative successes, including robotics and autonomous driving.
