
Introduction: Engineered systems
Human inventions, or engineered systems, have long relied on fundamental discoveries in physics and mathematics, e.g., Maxwell’s equations, quantum mechanics, and information theory, applying these concepts to achieve a particular goal. However, engineered systems are rapidly growing in complexity and size, the behavior of their subcomponents is often nonlinear, and designing strictly from first principles becomes restrictive. MathWorks has steadily laid a foundation in modeling and simulation with MATLAB and Simulink for over four decades, and it now uses AI to assist designers with these complex, multivariate systems.
Houman Zarrinkoub, MathWorks principal product manager for wireless communications, discussed with EDN the growing role AI plays in the design of next-generation wireless systems.
MATLAB’s toolboxes for wireless design
“So you’re building a wireless system and, at a basic level, you have a transmission back and forth between, for example, a base station and a cell phone,” said Zarrinkoub. “This is known as a link.”
To begin, Zarrinkoub explains that, at a very basic level, engineers are building two subsystems (a transmitter and a receiver) that “talk” to each other over this link. There are the digital components that sample, quantize, and encode the data, and the RF components that generate the RF signal, upconvert, downconvert, mix, amplify, filter, and so on. MATLAB has an array of established toolboxes, such as the 5G Toolbox, LTE Toolbox, and Satellite Communications Toolbox, that already assist with the design, simulation, and verification of all types of wireless signals, from 5G NR and LTE to DVB-S2/S2X/RCS2 and GPS waveforms. This extends to subcomponents with tools including (but not limited to) the RF Toolbox, Antenna Toolbox, and Phased Array System Toolbox.
Now, with AI, two main design approaches are used, leveraging the Deep Learning Toolbox and the Reinforcement Learning Toolbox.
AI workflow
The workflow includes four basic steps that are further highlighted in Figure 1.
- Data generation
- AI training
- Integration, simulation, and testing
- Deployment and implementation
These basic steps are necessary for implementing any deep learning model in an application, but how does it assist with RF and wireless design?
Figure 1 MATLAB workflow for implementing AI in wireless system design. Source: MathWorks
Data generation: Making a representative dataset
It goes without saying that data generation is necessary to properly train the neural network. For wireless systems, data can either be captured from a real system with an antenna or generated synthetically on a computer.
The robustness of this data is critical. “The keyword is making a representative dataset. If we’re designing for a wireless system that’s operating at 5 GHz and we have data at 2.4 GHz, it’s useless.” To ensure the system is well designed, the data must be varied, covering signal behavior in both normal operating conditions and more extreme conditions. “You usually don’t have data for outliers that are 2 or 3 standard deviations from the mean, but if you don’t have this data your system will fail when things shift out of the comfort zone,” explains Zarrinkoub.
Zarrinkoub expands on this by saying designers should combine both worlds: a real, hardware-generated dataset as well as a synthetic dataset that includes some of those outliers. “With hardware, there are severe limitations where you don’t have time to create all that data. So, we have the Wireless Waveform Generator app that allows you to generate, verify, and analyze your synthetic data so that you can augment your dataset for training.” As shown in Figure 2, the app allows designers to select waveform types and introduce impairments for more realistic signal scenarios.
Figure 2 Wireless Waveform Generator app allows users to generate wireless signals and introduce impairments. Source: MathWorks
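As a rough illustration of the kind of synthetic, impaired data the app produces, a few lines of MATLAB with the Communications Toolbox can generate a modulated waveform and degrade it. The waveform type, frequency offset, and SNR below are illustrative choices, not values from MathWorks’ examples.

```matlab
% Minimal sketch: generate a channel-impaired QAM waveform for a training set
% (illustrative parameters; the Wireless Waveform Generator app automates this).
M = 16;                                     % 16-QAM
data = randi([0 M-1], 1024, 1);             % random symbols
txWave = qammod(data, M, 'UnitAveragePower', true);

% Impairments: carrier frequency offset plus additive noise
pfo = comm.PhaseFrequencyOffset('FrequencyOffset', 200, 'SampleRate', 1e6);
rxWave = awgn(pfo(txWave), 15, 'measured'); % 15 dB SNR, measured signal power
```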
Transfer learning: Signal discrimination
Then, AI training is performed either to train a model built from scratch or to adapt an established model (e.g., AlexNet, GoogLeNet) to a particular task; the latter is known as transfer learning. As shown in Figure 3, pretrained networks can be reused in a particular wireless application by adding new layers that allow the model to be fine-tuned to the specific dataset. “You turn the wireless signal, in a one-to-one manner, into an image,” said Zarrinkoub when discussing how this concept is used for wireless design.
Figure 3 Pretrained networks can be reused in a particular wireless application by adding new layers that allow the model to be more fine-tuned towards the specific dataset. Source: MathWorks
“Every wireless signal consists of IQ samples; we can transform them into an image by taking a spectrogram, which is a representation of the signal in time and frequency,” said Zarrinkoub. “We have applied this concept to wireless to discriminate between friend or foe, or between 5G and 4G signals.” Figure 4 shows the test of a trained system that used an established semantic segmentation network (e.g., ResNet-18, MobileNetv2, or ResNet-50). The test used over-the-air (OTA) signal captures with a software-defined radio (SDR). Zarrinkoub elaborated, “So you send a signal and it’s classified, and based on that classification, you have multiple binary decisions. For example, if it’s 4G, do this; if it’s 5G, do this; if it’s none of the above, do this. The system is optimized by the reliable classification of the type of signal the system is encountering.”
Figure 4 Labeled spectrogram output by a trained wireless system to discriminate between LTE and 5G signals. Source: MathWorks
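To make the spectrogram-plus-transfer-learning idea concrete, the sketch below converts IQ samples into a time-frequency image and fine-tunes a pretrained ResNet-18 for classification. It is a simplified, hedged example: the Figure 4 workflow uses semantic segmentation rather than whole-image classification, and the variables iqSamples, sampleRate, and trainingImages, along with the class count and training options, are assumptions for illustration.

```matlab
% Sketch: turn IQ samples into a time-frequency image, then fine-tune a
% pretrained network. iqSamples, sampleRate, and trainingImages are assumed
% to exist; layer names match the shipped ResNet-18 model.
[s, ~, ~] = spectrogram(iqSamples, 256, 128, 256, sampleRate);
img = imresize(rescale(abs(s)), [224 224]);   % scale to the network input size
img = cat(3, img, img, img);                  % replicate to three channels

net = resnet18;                               % pretrained ImageNet network
lgraph = layerGraph(net);
numClasses = 3;                               % e.g., LTE, 5G NR, other
lgraph = replaceLayer(lgraph, 'fc1000', ...
    fullyConnectedLayer(numClasses, 'Name', 'fc_signals'));
lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
    classificationLayer('Name', 'signal_output'));

opts = trainingOptions('adam', 'MaxEpochs', 10, 'MiniBatchSize', 32);
trainedNet = trainNetwork(trainingImages, lgraph, opts); % labeled image datastore
```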
Building deep learning models from scratch
Supervised learning: Modulation classification with built CNN
Modulation classification can also be accomplished with the Deep Learning Toolbox, where users generate synthetic, channel-impaired waveforms as a dataset. This dataset is used to train a convolutional neural network (CNN), which is then tested with OTA signals from hardware such as an SDR (Figure 5).
Figure 5 Output confusion matrix of a CNN trained to classify signals by modulation type with test data using SDR. Source: MathWorks
“With signal discrimination, you’re using more classical classification, so you don’t need to do a lot of work developing those trained networks. However, since modulation and encoding are not visible on the spectrogram, most people will then choose to develop their models from scratch,” said Zarrinkoub. “In this approach, designers will use MATLAB with Python and implement classical building blocks such as the rectified linear unit (ReLU) to build out the layers of their neural network.” He continues, “Ultimately a neural network is built on components; you either connect them in parallel or serially, and you have a network. Each network element has a gain, and training will adjust the gain of each network element until you converge on the right answer.” He mentions that, while a less direct path is taken to obtain the modulation type, systems that combine these approaches gain a much deeper understanding of the signals they encounter and make much more informed decisions.
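A minimal sketch of that layer-by-layer construction follows, assuming frames of raw I/Q samples as the input and an illustrative set of eight modulation classes; the filter sizes, layer counts, and the trainingData variable are assumptions, not MathWorks’ exact architecture.

```matlab
% Sketch: a small CNN built from classical blocks (conv + ReLU + pooling)
% for modulation classification; sizes and class count are illustrative.
layers = [
    imageInputLayer([1 1024 2])                % frame of I/Q samples, two channels
    convolution2dLayer([1 8], 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer                                  % rectified linear unit block
    maxPooling2dLayer([1 2], 'Stride', [1 2])
    convolution2dLayer([1 8], 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(8)                     % e.g., 8 modulation types
    softmaxLayer
    classificationLayer];

opts = trainingOptions('sgdm', 'MaxEpochs', 12, 'Shuffle', 'every-epoch');
modClassifier = trainNetwork(trainingData, layers, opts); % labeled training set
```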
Beam selection and DPD with NN
Using the same principles, neural networks (NNs) can be customized within the MATLAB environment to solve inherently nonlinear problems, such as applying digital predistortion (DPD) to offset the nonlinearities of power amplifiers (PAs). “DPD is a classical example of a nonlinear problem. In wireless communications, you send a signal, and the signal deteriorates in strength as it leaves the source. Now, you have to amplify the signal so that it can be received, but no amplifier is linear, or has constant gain across its bandwidth.” DPD attempts to deal with the inevitable signal distortions that occur when a PA operates in its compression region by observing the PA’s output signal and using it as feedback to alter the input signal so that the PA output is closer to ideal. “The problem is inherently nonlinear and many solutions have been proposed, but AI comes along and produces superior performance to other solutions for this amplification process,” said Zarrinkoub. The MATLAB approach trains a fully connected NN as the inverse of the PA and uses it for DPD (NN-DPD); the NN-DPD is then tested with a real PA and compared against a cross-term memory-polynomial DPD.
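As a hedged sketch of the NN-DPD idea (not MathWorks’ exact architecture), a small fully connected network can be trained to map features of the PA output back to the corresponding PA input, so it learns an approximate inverse of the amplifier. The feature count, layer widths, and the paOutputFeatures/paInputIQ variables are illustrative.

```matlab
% Sketch: a small fully connected network trained as the PA inverse (NN-DPD).
% Features are I/Q (and delayed, memory) terms of the PA output; targets are
% the I/Q samples of the PA input, so the network learns the predistortion map.
dpdLayers = [
    featureInputLayer(6)                       % e.g., I, Q, and delayed memory terms
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(2)                     % predistorted I and Q
    regressionLayer];

opts = trainingOptions('adam', 'MaxEpochs', 30, 'MiniBatchSize', 256);
nnDPD = trainNetwork(paOutputFeatures, paInputIQ, dpdLayers, opts); % assumed data
```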
Zarrinkoub goes on to describe another application for NN-based wireless design (Figure 6): “Deep learning also has a lot of applications in 5G and 6G, where it combines sensing and communications. We have a lot of deep learning examples where different algorithms are used to position and localize users so you can send data that is dedicated to the user.” The use case he mentioned in particular relates to integrated sensing and communication (ISAC). “When I was young and programming 2G and 3G systems, the philosophy of communication was that I would send the signal in all directions, and if your receiver got that information, good for it; it can now decode the transmission. If the receiver couldn’t do that, tough luck,” said Zarrinkoub. “With 5G and especially 6G, the expectations have risen; you have to have knowledge of where your users are and beamform towards them. If your beamwidth is too big, you lose energy. But if your beamwidth is too narrow and your users move their head, you miss them. So you have to constantly adapt.” In this solution, instead of using GPS signals, lidar, or roadside camera images, the base station essentially becomes the locator: it sends signals to locate users and, based upon the returned signal, directs communications toward them.
Figure 6 The training phase and testing phase of a beam management solution that uses the 3D coordinates of the receiver. Source: MathWorks
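One simple way to frame the beam-management task in Figure 6, sketched here under assumptions rather than taken from the example itself, is as a classification problem that maps the receiver’s 3D coordinates to the best beam in a fixed codebook; the codebook size and the rxPositions/bestBeamLabels variables are illustrative.

```matlab
% Sketch: classify the best beam from the receiver's 3D position,
% assuming a fixed codebook of candidate beams (codebook size illustrative).
numBeams = 16;
beamLayers = [
    featureInputLayer(3)                       % receiver x, y, z coordinates
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numBeams)
    softmaxLayer
    classificationLayer];

opts = trainingOptions('adam', 'MaxEpochs', 20);
beamSelector = trainNetwork(rxPositions, bestBeamLabels, beamLayers, opts);
```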
Unsupervised learning: The autoencoder path for FEC
Alternatively, engineers can follow the autoencoder path to build a system from the ground up. These deep learning networks consist of an encoder and a decoder and are trained to replicate their input data, which can be used, for instance, to remove noise or detect anomalies in signal data. The benefit of this approach is that it is unsupervised and does not require labeled input data for training.
“One of the major aspects of 5G and 6G is forward error correction (FEC) where, when I send something to you, whether it’s voice or video, and whether the channel is clean or noisy, the receiver should be able to handle it,” said Zarrinkoub. FEC is a technique that adds redundant data to a message to minimize the number of errors in the received information for a given channel (Figure 7). “With the wireless autoencoder, you can automatically add redundancy and redo modulation and channel coding based on estimations of the channel condition, all unsupervised.”
Figure 7 A wireless autoencoder system ultimately restricts the encoded symbols to an effective coding rate for the channel. Source: MathWorks
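A hedged sketch of the wireless autoencoder structure follows. The (7,4)-style dimensions are illustrative, and the channel (noise) layer that sits between the encoder and decoder halves during training is only indicated by a comment here, so this is a structural outline rather than a complete training script.

```matlab
% Sketch: an end-to-end wireless autoencoder, mapping k message bits to n
% channel uses and back; a channel/noise layer sits between the two halves
% during training (omitted here), so the network learns its own redundancy.
k = 4;  n = 7;                                 % (7,4)-style rate, illustrative
autoencLayers = [
    featureInputLayer(2^k)                     % one-hot encoded message
    fullyConnectedLayer(2^k)
    reluLayer
    fullyConnectedLayer(n)                     % encoder output: n channel uses
    % ... channel / AWGN layer inserted here during training ...
    fullyConnectedLayer(2^k)
    reluLayer
    fullyConnectedLayer(2^k)
    softmaxLayer
    classificationLayer];                      % decoder picks the sent message
```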
Reinforcement learning: Cybersecurity and cognitive radar
“With deep learning and machine learning, the process of giving inputs and receiving an output is all performed offline,” explained Zarrinkoub. “With deep learning, you’ve come up with a solution and you simply apply that solution in a real system.” Reinforcement learning, by contrast, must be applied to a real system from the start and updated continuously: “Give me the data and I will update that brain constantly.”
Customers in the defense industry leverage the Reinforcement Learning Toolbox to, for example, assess the vulnerabilities of their 5G systems and update their cybersecurity accordingly. “Based upon the vulnerability, they will devise techniques to overcome the accessibility of the unfriendly agent to the system.” Other applications include cognitive radar, where cognitive spectrum management (CSM) uses reinforcement learning to analyze patterns in the spectrum in real time and predict future spectrum usage based upon previous and real-time data.
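As a rough illustration of how the Reinforcement Learning Toolbox could be pointed at a spectrum-management problem (the environment, channel count, and training settings below are assumptions, not a MathWorks example), a DQN agent can be built directly from observation and action specifications:

```matlab
% Sketch: a DQN agent for cognitive spectrum management; the observation is
% the sensed occupancy of N channels and the action is the channel to use.
% "spectrumEnv" is an assumed environment (e.g., built with rlFunctionEnv).
N = 8;                                                   % illustrative channel count
obsInfo = rlNumericSpec([N 1], 'LowerLimit', 0, 'UpperLimit', 1);
actInfo = rlFiniteSetSpec(1:N);                          % pick one of N channels

agent = rlDQNAgent(obsInfo, actInfo);                    % default critic network
trainOpts = rlTrainingOptions('MaxEpisodes', 500, 'MaxStepsPerEpisode', 100);
trainStats = train(agent, spectrumEnv, trainOpts);       % spectrumEnv: assumed env
```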
Integration, simulation, and testing
As shown in many of these examples, the key to the third step in the workflow is to create a distinct dataset to test the effectiveness of the wireless system. “If you use the same dataset to train and test, you’re cheating! Of course it will match. You have to take a set that’s never been seen during training but is still viable and representative and use that for testing,” explains Zarrinkoub. “That way, there is confidence that the system can handle different environments with the training we did in the first step of data gathering and training.” The Wireless Waveform Generator app is meant to assist with both of these stages.
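In MATLAB terms, keeping the test set unseen can be as simple as splitting a labeled datastore before training; the folder name, split ratio, and the trainedNet variable below are illustrative.

```matlab
% Sketch: hold out data never seen in training for the test stage,
% assuming spectrogram images organized in labeled folders (path illustrative).
imds = imageDatastore('spectrograms', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
[trainSet, testSet] = splitEachLabel(imds, 0.8, 'randomized'); % 80/20 split

predictions = classify(trainedNet, testSet);      % trainedNet from the training step
accuracy = mean(predictions == testSet.Labels);   % fraction correctly classified
```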
Deployment and implementation
The MathWorks approach to deployment works with engineers at the language level, with a more vendor-agnostic approach to hardware. “We have a lot of products that turn popular languages into MATLAB code, to train and test the algorithm, and then turn that back into the code that will go into the hardware. For FPGAs and ASICs, for example, the language is Verilog or VHDL. We have a tool called HDL Coder that will take the MATLAB and Simulink model and turn that into low-level VHDL code to go into any hardware.”
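As a hedged sketch of that flow for a MATLAB function (the function name, argument sizes, and settings are illustrative), HDL code generation can be driven from the command line with a configuration object:

```matlab
% Sketch: generate HDL from a MATLAB function with HDL Coder.
% Assumes "myDetector.m" is a hardware-ready MATLAB function; names illustrative.
hdlcfg = coder.config('hdl');               % HDL code generation configuration
hdlcfg.TargetLanguage = 'VHDL';             % or 'Verilog'
codegen -config hdlcfg myDetector -args {complex(zeros(1024,1,'single'))}
```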
Addressing the downsides of AI with the digital twin
The natural conclusion of the interview was understanding the “catch” of using AI to improve wireless systems. “AI takes the input, trains the model, and produces an output. In that process, it merges all the system components into one. All those gains change together, so it becomes an opaque system and you lose insight into how the system is working,” said Zarrinkoub. While this process has considerable benefit, troubleshooting issues can be much more challenging than with solutions that follow the traditional, iterative approach, where isolating problems may be simpler. “At MathWorks, we are working on creating a digital twin of every engineered system, be it a car, an airplane, a spacecraft, or a base station.” Zarrinkoub describes this as striking a balance between the traditional engineered-system approach and an AI-based engineering solution: “Any engineer can compare their design to the all-encompassing digital twin and quickly identify where their problem is. That way, we have the optimization of AI, plus the explainability of model-based systems. You build a system completely in your computer before one molecule goes into the real world.”
Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for over 8 years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- Artificial intelligence for wireless networks
- The next embedded frontier: machine learning enabled MCUs
- Use digital predistortion with envelope tracking
- RF predistortion straightens out your signals
- Millimeter wave beamforming and antenna design