
A common misconception about artificial intelligence (AI) is that this up-and-coming technology belongs to data center and high-performance computing (HPC) applications. That is no longer true, says Tom Hackenberg, principal analyst for the Memory and Computing Group at Yole Group. He made the remark while commenting on STMicroelectronics’ new microcontroller, which embeds a neural processing unit (NPU) to support AI workloads at the tiny edge.
ST has launched its most powerful MCU to date to cater to a new range of embedded AI applications. “The explosion of AI-enabled devices is accelerating the inference shift from the cloud to the tiny edge,” said Remi El-Ouazzane, president of Microcontrollers, Digital ICs and RF Products Group (MDRF) at STMicroelectronics.
He added that inferring at the edge brings substantial benefits, including ultra-low latency for real-time applications, reduced data transmission, and enhanced privacy and security. Keeping data out of the cloud also cuts energy consumption, making edge inference the more sustainable option.
STM32N6, offered to selected customers since October 2023, is now shipping in high volumes. It integrates a proprietary NPU, the Neural-ART Accelerator, which delivers 600 times more machine-learning performance than today’s high-end STM32 MCUs. That enables the new MCU to run computer vision, audio processing, sound analysis and other algorithms that are currently beyond the capabilities of small embedded systems.
Figure 1 STM32N6 offers the benefits of an MPU-like experience in industrial and consumer applications while leveraging the advantages of an MCU. Source: STMicroelectronics
“Today’s IoT edge applications are hungry for the kind of analytics that AI can provide,” said Yole’s Hackenberg. “The STM32N6 is a great example of the new trend melding energy-efficient microcontroller workloads with the power of AI analytics to provide computer vision and mass sensor-driven performance capable of great savings in the total cost of ownership in modern equipment.”
Besides the AI accelerator, STM32N6 features an 800-MHz Arm Cortex-M55 core and 4.2 MB of RAM for real-time data processing and multitasking, providing sufficient compute to complement the AI acceleration. As a result, the MCU can run AI models that carry out tasks like segmentation, classification, and recognition. Moreover, an image signal processor (ISP) incorporated into the MCU handles direct signal processing, which allows engineers to use simple and affordable image sensors in their designs.
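To give a sense of what such an accelerator actually offloads: embedded inference typically runs on int8-quantized models, where the dominant operation is a matrix multiply-accumulate on 8-bit weights and activations with 32-bit accumulation. The sketch below (plain Python, purely illustrative — it is not ST’s API or toolchain, and the scales and values are made up) shows that core operation for a toy two-output layer:

```python
def quantize(values, scale):
    """Map float values to int8 using a per-tensor scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def qmatvec(weights, x, w_scale, x_scale, out_scale):
    """int8 matrix-vector product: int32 accumulation, then requantize to int8."""
    out = []
    for row in weights:
        acc = sum(w * xi for w, xi in zip(row, x))   # int32 accumulator
        real = acc * w_scale * x_scale               # back to the real-valued domain
        out.append(max(-128, min(127, round(real / out_scale))))
    return out

# Toy 2x3 layer with hypothetical scales
w_scale, x_scale, out_scale = 0.05, 0.1, 0.2
W = [quantize([0.5, -0.25, 1.0], w_scale),
     quantize([-1.0, 0.75, 0.1], w_scale)]
x = quantize([0.2, 0.4, -0.6], x_scale)
y = qmatvec(W, x, w_scale, x_scale, out_scale)
print(y)  # int8 outputs, ready to feed the next layer
```

An NPU hard-wires exactly this pattern at scale, which is why it can outperform the general-purpose core by orders of magnitude on the same model while drawing far less power.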
Design testimonials
Lenovo Research, which rigorously evaluated STM32N6 in its labs, acknowledges its neural processing performance and power efficiency claims. “It accelerates our research into ‘AI for All’ technologies at the edge,” said Seiichi Kawano, principal researcher at Lenovo Research. LG, currently incorporating AI features into smartphones, home appliances and televisions, has also recognized STM32N6’s AI performance for embedded systems.
Figure 2 Meta Bounds has employed the AI-enabled STM32N6 in its AR glasses. Source: STMicroelectronics
Then there is Meta Bounds, a Zhuhai, China-based developer of consumer-level augmented reality (AR) glasses. Its founding partner, Zhou Xing, acknowledges the vital role that STM32N6’s embedded AI accelerator, enhanced camera interfaces, and dedicated ISP played in the development of the company’s ultra-lightweight and compact form factor AI glasses.
Beyond these design testimonials, what’s important to note is the transition from MPUs to MCUs for embedded inference. That eliminates the cost of cloud computing and the related energy penalties, making AI a reality at the tiny edge.
Figure 3 The shift from MPU to MCU for AI applications saves cost and energy while lowering the barrier to entry for developers to take advantage of AI-accelerated performance in real-time operating systems (RTOSes). Source: STMicroelectronics
Take the case of Autotrak, a trucking company in South Africa. According to its engineering director, Gavin Leask, fast and efficient AI inference within the vehicle can give the driver a real-time audible warning to avert an imminent incident.
In use cases like this, AI-enabled MCUs can run computer vision, audio processing, sound analysis and more at a much lower cost and power draw than MPUs.
Related Content
- Getting a Grasp on AI at the Edge
- Implementing AI at the edge: How it works
- It’s All About Edge AI, But Where’s the Revenue?
- Edge AI accelerators are just sand without future-ready software
- Edge AI: The Future of Artificial Intelligence in embedded systems
The post Profile of an MCU promising AI at the tiny edge appeared first on EDN.