
Software engineers developing artificial intelligence (AI) models using standard frameworks such as Keras, PyTorch, and TensorFlow are usually not well-equipped to translate those models into silicon-based implementations. A new synthesis tool claims to solve this design conundrum, promising faster and more power-efficient execution than standard AI processors.
Most machine learning (ML) experts working on AI frameworks—Keras, PyTorch, and TensorFlow—are not comfortable with synthesizable C++, Verilog, or VHDL. As a result, there has been no easy path for ML experts to accelerate their applications in a right-sized ASIC or system-on-chip (SoC) implementation.
Enter hls4ml, an open-source initiative intended to help bridge this gap by generating C++ from a neural network described in AI frameworks such as Keras, PyTorch, and TensorFlow. The C++ can then be deployed for an FPGA, ASIC, or SoC implementation.
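To see what that translation amounts to, it helps to remember that a neural network layer is ultimately just arithmetic. Below is a minimal, hypothetical Python sketch (not hls4ml output, and the function name `dense_relu` is an illustration only) of a fully connected layer written as explicit loops; hls4ml emits analogous C++ loop nests, which an HLS tool can then unroll and pipeline into parallel hardware.

```python
# Hypothetical sketch of the computation behind one dense layer.
# hls4ml generates comparable C++ loop nests that HLS maps to hardware.

def dense_relu(inputs, weights, biases):
    """One fully connected layer followed by ReLU, as explicit loops."""
    outputs = []
    for j in range(len(biases)):
        acc = biases[j]
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]      # multiply-accumulate
        outputs.append(max(0.0, acc))     # ReLU activation
    return outputs

# Tiny 2-input, 2-output example
w = [[0.5, -1.0],
     [0.25, 2.0]]
b = [0.1, -0.2]
print(dense_relu([1.0, 2.0], w, b))  # -> [1.1, 2.8]
```

In hardware, each iteration of these loops can become a dedicated multiplier or be time-multiplexed onto shared ones, which is exactly where the latency and resource trade-offs discussed below come from.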
Siemens EDA partnered with Fermilab, a U.S. Department of Energy laboratory, and other leading contributors to hls4ml, coupling its Catapult software for high-level synthesis (HLS) with the open-source package. The outcome of this collaboration is Catapult AI NN, software for high-level synthesis of neural network accelerators on ASICs and SoCs.
Figure 1 Here is a typical workflow to translate an ML model into an FPGA or ASIC implementation using hls4ml, an open-source codesign workflow to empower ML designs. Source: CERN
Catapult AI NN extends the capabilities of hls4ml to ASIC and SoC design by offering a dedicated library of specialized C++ machine learning functions tailored to ASIC design. This allows designers to optimize power, performance, and area (PPA) by making latency and resource trade-offs across alternative implementations from the C++ code.
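One of the most consequential PPA trade-offs is numeric precision: shrinking fixed-point bit widths saves area and power at the cost of accuracy. The sketch below is a plain-Python illustration with a hypothetical helper name (`quantize`); actual HLS flows express this with fixed-point C++ datatypes such as `ac_fixed`.

```python
# Hypothetical illustration: effect of fractional bit width on
# fixed-point quantization error. Fewer bits -> smaller, cheaper
# hardware, but larger rounding error.

def quantize(value, frac_bits):
    """Round a value to a fixed-point grid with 2**-frac_bits resolution."""
    scale = 1 << frac_bits
    return round(value * scale) / scale

weight = 0.71875  # exactly representable with 5 fractional bits
for bits in (2, 4, 8):
    q = quantize(weight, bits)
    print(f"{bits} frac bits -> {q:.5f} (error {abs(weight - q):.5f})")
```

Sweeping bit widths like this per layer, and measuring the resulting accuracy loss against the synthesized area and latency, is the kind of exploration the C++-level flow is meant to make fast.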
Design engineers can also evaluate the impact of different neural net designs to determine the best neural network structure for their hardware. Catapult AI NN starts with a neural network description from an AI framework, converts it into C++ and synthesizes it into an RTL accelerator in Verilog or VHDL for implementation in silicon.
Figure 2 Catapult AI NN provides automation of Python-to-RTL for neural network (NN) hardware designs. Source: Siemens EDA
“The handoff process and manual conversion of a neural network model into a hardware implementation is very inefficient, time-consuming and error-prone, especially when it comes to creating and verifying variants of a hardware accelerator tailored to specific performance, power, and area,” said Mo Movahed, VP and GM for high-level design, verification and power at Siemens Digital Industries Software.
This new tool enables scientists and AI experts to leverage industry-standard AI frameworks for neural network model design and synthesize these models into hardware designs optimized for PPA. According to Movahed, this opens a whole new realm of possibilities for AI/ML software engineers.
“Catapult AI NN allows developers to automate and implement their neural network models for optimal PPA concurrently during the software development process,” he added. Panagiotis Spentzouris, associate lab director for emerging technologies at Fermilab, acknowledges the value proposition of this synthesis framework in AI designs.
“Catapult AI NN leverages the expertise of our scientists and AI experts without requiring them to become ASIC designers,” he said. That’s especially critical when AI/ML tasks migrate from the data center to edge applications spanning consumer appliances to medical devices. Here, right-sized AI hardware is crucial to minimize power consumption, lower cost, and maximize end-product differentiation.
Related Content
- Key technologies push AI to the edge
- C++ runs hardware/software designs
- Adapting the Microcontroller for AI in the Endpoint
- Edge AI accelerators are just sand without future-ready software
- Don’t Be Misled By “Accuracy” — 3 Insights For More Useful AI Models
The post Synthesis framework simplifies silicon implementation for AI models appeared first on EDN.