VeriSilicon NPU IP is Shipped in Over 100 Million AI-Enabled Chips Worldwide

Enabling efficient execution of applications ranging from AI voice, AI vision, and AI pixel to AIGC on embedded devices

VeriSilicon (688521.SH) today announced that it has reached a milestone: its Neural Network Processor (NPU) IP has been integrated into over 100 million AI-enabled chips across 10 major application sectors worldwide, including Internet of Things (IoT), wearables, smart TVs, smart home, security monitoring, servers, automotive electronics, smartphones, tablets, and smart healthcare. A global leader in embedded AI/NPU for the past seven years, VeriSilicon has seen its NPU IP successfully integrated into 128 AI SoCs supplied by 72 licensees across these market segments.

VeriSilicon’s NPU IP is a high-performance AI processor IP built on a low-power, programmable, and scalable architecture. It can be easily configured to meet licensees’ chip size and power budget requirements, making it a cost-effective neural network acceleration engine. The IP also ships with an extensive and mature Software Development Kit (SDK) that supports all major deep learning frameworks, ensuring fast product deployment in the market.
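
As a hedged illustration of what framework-level support typically looks like in practice, the sketch below exports a small PyTorch model to ONNX, one of the interchange formats named later in this release. The toy model and output file name are placeholders for illustration only; the subsequent ingestion step into VeriSilicon's SDK is not shown, since its API is not described in this release.

import torch
import torch.nn as nn

# Placeholder CNN standing in for a trained model (an assumption for illustration).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX with PyTorch's standard exporter; the resulting file is the
# kind of framework-level artifact an NPU toolchain would typically consume.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",              # hypothetical output path
    opset_version=13,
    input_names=["input"],
    output_names=["logits"],
)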

VeriSilicon’s latest VIP9000 series NPU IP offers scalable, high-performance processing for both Transformer and Convolutional Neural Network (CNN) workloads. Coupled with the Acuity toolkits, it supports all major frameworks, including PyTorch, ONNX, and TensorFlow. It also features 4-bit quantization and compression technologies to address bandwidth constraints, facilitating the deployment of AI-Generated Content (AIGC) and Large Language Model (LLM) algorithms such as Stable Diffusion and Llama 2 on embedded devices.
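
To make the bandwidth argument concrete, the sketch below applies a generic symmetric 4-bit weight quantization in Python with NumPy. It illustrates the general technique, not VeriSilicon's actual quantization or compression scheme, and the layer size is an arbitrary assumption.

import numpy as np

def quantize_4bit_symmetric(weights):
    # Symmetric per-tensor 4-bit quantization: map floats to integers in [-8, 7].
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Arbitrary example layer: 4096 x 4096 weights (an assumption for illustration).
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_4bit_symmetric(w)

fp16_bytes = w.size * 2      # 16-bit weights
int4_bytes = w.size // 2     # two 4-bit values packed per byte
print(f"FP16: {fp16_bytes / 1e6:.1f} MB, INT4: {int4_bytes / 1e6:.1f} MB "
      f"({fp16_bytes // int4_bytes}x less weight traffic)")
print(f"max abs reconstruction error: {np.abs(w - dequantize(q, scale)).max():.4f}")

Moving weights at 4 bits instead of 16 cuts weight memory traffic by roughly a factor of four, the kind of reduction that helps large models fit within embedded memory bandwidth budgets.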

By leveraging VeriSilicon’s FLEXA® technology, VIP9000 can be seamlessly integrated with VeriSilicon’s Image Signal Processor (ISP) and video encoders to form low-latency AI-ISP and AI-Video subsystems without requiring DDR memory. It can be further customized to balance cost and flexibility in the power- and space-constrained environments of deeply embedded applications.

“AI capability is now an essential part of every smart device across different applications, ranging from MCUs to high-performance application processors. Leveraging our expertise in Graphics Processing Units (GPUs), we have architected low-power, programmable, and scalable NPU IPs capable of handling various types of neural networks and computational tasks within the NPU itself. Such efficiency, combined with minimized data traffic, is a key enabler for embedded smart devices,” said Wei-Jin Dai, Executive VP and GM of IP Division at VeriSilicon. “With the rapid advancement of AI, we have achieved a level of human-like reasoning that paves the way for intelligent human assistants. VeriSilicon is leveraging our highly efficient AI computing capabilities and vast experience in deploying over 100 million AI-enabled chips to bring server-level AIGC capabilities to embedded devices.”

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP. For more information, please visit: www.verisilicon.com
