
12 Apr 2017

Cache Coherent Interconnect IP for Machine Learning SoCs

A new interconnect IP takes on processing and functional safety challenges in chipsets designed for ADAS and autonomous vehicle systems.

ArterisIP has announced the availability of the Ncore 2.0 Cache Coherent Interconnect IP that, according to the company, allows system-on-chip (SoC) designers to easily integrate custom processing elements using low-latency proxy caches or I/O caches.

In neural network-based machine learning chipsets, where workloads are typically partitioned onto different processing elements, low-latency proxy caches offer a more efficient way of communicating between different processing elements than fixed internal SRAMs or scratchpad memories.
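As a concrete (and deliberately simplified) illustration of that difference, the C sketch below contrasts the two hand-off styles. It is our own model, not ArterisIP code, and every name in it is hypothetical: an explicit copy into a consumer's scratchpad SRAM versus publishing a descriptor through a coherent proxy cache.

#include <stdint.h>
#include <string.h>

#define TILE_WORDS 256

/* Scratchpad style: software explicitly copies each tile into the
 * consumer's private SRAM and coordinates every transfer by hand. */
void handoff_scratchpad(const uint32_t *tile, volatile uint32_t *consumer_spm)
{
    memcpy((void *)consumer_spm, tile, TILE_WORDS * sizeof(uint32_t));
    /* ...followed by a doorbell write and a barrier, also by hand. */
}

/* Coherent style: with a proxy cache in front of the consumer, both
 * elements share one address space, so the producer just publishes a
 * descriptor and the interconnect keeps cached copies consistent
 * (memory-ordering details omitted in this sketch). */
typedef struct {
    const uint32_t *tile;  /* consumer reads through its proxy cache */
    volatile int ready;    /* simple flag; coherence hardware keeps it visible */
} tile_desc_t;

void handoff_coherent(tile_desc_t *desc, const uint32_t *tile)
{
    desc->tile = tile;
    desc->ready = 1;       /* no bulk copy: data moves on demand */
}

The point of the contrast is that the coherent path removes the software-managed bulk copy entirely; data migrates between processing elements only when it is actually read.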
Machine learning is becoming an intrinsic part of chip designs for advanced driver assistance systems (ADAS) and autonomous driving systems. And that inevitably calls for a new generation of chips that can handle high data bandwidth as well as ensure low latency.

The Design Challenges of Automotive Safety

Another stumbling block in the realization of autonomous vehicles is functional safety: the mechanisms that protect against random hardware faults and systematic faults also add complexity to automotive SoCs. Achieving it requires the entire supply chain to follow the same standard, ISO 26262, which mandates that designs detect and manage random hardware faults.
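One classic example of such a fault-detection mechanism is protecting stored data with parity, so that random bit flips are detected rather than silently consumed. The C model below is purely our illustration of the idea; real interconnect IP implements this kind of check in hardware.

#include <stdint.h>
#include <stdbool.h>

/* Compute even parity (XOR of all bits) over a 32-bit word. */
static uint32_t parity32(uint32_t w)
{
    w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
    return w & 1u;
}

typedef struct {
    uint32_t data;
    uint32_t parity;   /* stored alongside the data at write time */
} protected_word_t;

void protected_write(protected_word_t *p, uint32_t value)
{
    p->data = value;
    p->parity = parity32(value);
}

/* Returns false if a single-bit fault corrupted the stored word;
 * a real design would raise a safety interrupt at this point. */
bool protected_read(const protected_word_t *p, uint32_t *out)
{
    if (parity32(p->data) != p->parity)
        return false;  /* random hardware fault detected */
    *out = p->data;
    return true;
}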


Block diagram of the Ncore 2.0 cache coherent interconnect solution for machine learning SoCs. Image courtesy of ArterisIP.

ArterisIP's new interconnect solution addresses these design challenges by implementing low-latency proxy caches that efficiently integrate neural network processing elements. Moreover, Arteris Ncore 2.0, a scalable cache coherent interconnect, supports functional safety capabilities to help meet ISO 26262 requirements.

Ncore is a highly configurable cache coherent interconnect IP technology that ArterisIP introduced in 2016. According to Kurt Shuler, vice president of marketing at ArterisIP, the changes in the Ncore interconnect IP are based on feedback from chipmakers such as NXP, Toshiba, and ZTE.
"Automotive SoCs entail complex processing to meet functional safety requirements, and that demands a different architecture to implement interconnect in the SoC designs," Shuler said. "The new IP also protects the proxy caches so that SoC designers don't need to duplicate the entire system to meet functional safety requirements."

CPUs Plus Hardware Accelerators

Machine learning SoCs combine CPUs, also known as processing elements, with hardware accelerators. The embedded system slices up the machine learning tasks and passes them to the hardware accelerators, which provide high data throughput as well as low latency.
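As a rough illustration of that dispatch pattern, here is a self-contained C sketch. It is ours, not ArterisIP's: accel_submit and its ReLU stub are hypothetical stand-ins for a real accelerator driver.

#include <stddef.h>

#define NUM_ACCELERATORS 4

typedef struct {
    const float *input;   /* slice of the layer's input activations */
    float *output;        /* where the accelerator writes its results */
    size_t len;
} tile_job_t;

/* Stand-in for an accelerator driver call. A real driver would queue
 * the job on hardware; this stub just runs a ReLU on the CPU so the
 * sketch compiles and runs. */
static void accel_submit(int accel_id, const tile_job_t *job)
{
    (void)accel_id;
    for (size_t k = 0; k < job->len; k++)
        job->output[k] = job->input[k] > 0.0f ? job->input[k] : 0.0f;
}

/* Slice one layer's work into equal tiles, one per accelerator.
 * Assumes n is a multiple of NUM_ACCELERATORS for brevity. */
void run_layer(const float *in, float *out, size_t n)
{
    size_t tile = n / NUM_ACCELERATORS;
    for (int i = 0; i < NUM_ACCELERATORS; i++) {
        tile_job_t job = {
            .input  = in  + (size_t)i * tile,
            .output = out + (size_t)i * tile,
            .len    = tile,
        };
        accel_submit(i, &job);
    }
}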


Hardware accelerators facilitate near-real-time processing in automotive SoCs. Image courtesy of ArterisIP.

"ArterisIP’s new Ncore 2.0 interconnect IP helps integrate heterogeneous processor cores and hardware accelerators," said Mike Demler, senior analyst at the Linley Group. "That allows chip designers to implement cache-coherent machine-learning architectures for ADAS and autonomous vehicle applications."

The coherent memory cache technology used in the Ncore 2.0 interconnect IP consumes less area than a traditional last-level cache (LLC) and reduces latency by cutting DRAM traffic, which also lowers power consumption. That makes it far better suited to neural network-centric processing elements.
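A quick back-of-envelope model shows why cutting DRAM traffic helps latency. The figures below are our assumptions, not ArterisIP data: the larger the fraction of accesses the cache absorbs, the closer average latency gets to the cache's own hit latency.

#include <stdio.h>

int main(void)
{
    const double cache_ns = 5.0;    /* assumed cache hit latency      */
    const double dram_ns  = 100.0;  /* assumed DRAM access latency    */

    /* Average latency as a function of cache hit rate. */
    for (double hit = 0.0; hit <= 1.0; hit += 0.25) {
        double avg = hit * cache_ns + (1.0 - hit) * dram_ns;
        printf("hit rate %.2f -> avg latency %6.2f ns\n", hit, avg);
    }
    return 0;
}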

ArterisIP unveiled its new interconnect solution for automotive SoCs at the Linley Autonomous Hardware Conference held in Santa Clara, California on April 6, 2017. The company, previously known as Arteris, is now rebranding itself as ArterisIP.
