Student Lounge

Sharing technical and real-life examples of how students can use MATLAB and Simulink in their everyday projects #studentsuccess

Excellence in Innovation: Accelerate PLL Design with Deep Learning

Recently, Liping from the Student Programs team at MathWorks interviewed Lingfeng Lu and Jiangchuan Li about how they used deep learning to accelerate PLL (Phase-Locked Loop) design while participating in one of the MathWorks Excellence in Innovation Projects. You can download the code and data shared by the team on GitHub:
Lingfeng and Jiangchuan’s story started in 2021 when they were students at Shanghai Jiao Tong University in China. They took the school-enterprise cooperation course “Engineering Practice and Scientific Innovation”, which was taught by Prof. Yuhong Yang and supported by Dr. Yueyi Xu, a MathWorks engineer. They were required to complete a project for the course, so they decided to select one of the MathWorks Excellence in Innovation Projects and finish it within three months.
MathWorks Excellence in Innovation Projects offer students and researchers a range of cutting-edge project ideas. All the projects are designed by MathWorks engineers, who combine current industry needs with the latest technology trends. The topics cover areas including 5G, big data, Industry 4.0, artificial intelligence, autonomous driving, robotics, unmanned aerial vehicles (UAVs), computer vision, sustainable development, and renewable energy.
When checking the project list, Lingfeng and Jiangchuan were drawn to one titled Behavioral Modelling of Phase-Locked Loop using Deep Learning Techniques.

What is a Phase Locked Loop (PLL)?

As chip designs grow in complexity, exploring the design space quickly becomes ever more challenging. A PLL is often called the ‘heart’ of a chip: it uses the signal from an external oscillator as a reference and, via closed-loop control, generates a stable output clock, usually at a higher frequency. Designing a stable, robust PLL is as important to a chip as a healthy heart is to the human body.
Fig 1. PLL architecture.

Why did you select the project?

The practicality and novelty of this project attracted us! Behavior-level modeling of a PLL can save time and cost in the design process. Specifically, once the behavioral model of the PLL is established, we can obtain a PLL’s performance directly by feeding the device parameters into the model, without running many simulations or tests.

What problems did you solve in this project?

Data sets and models are two key factors for deep learning. In this project, the two main problems we faced were:
Problem 1: How to build a data set efficiently?
Problem 2: How to build an effective deep learning model?

Solution to Problem 1: How to build a data set efficiently?

In this project, no data set was available to us. Before we started, Mr. Pragati Tiwary, the MathWorks engineer who designed the problem, gave us an in-depth explanation of the problem statement. He pointed out that the N-division PLL reference model included in the Mixed-Signal Blockset™ offered a way to build a data set through simulation.
One of the reference models, shown below, consists of five modules: Phase Frequency Detector (PFD), charge pump, loop filter, Voltage Controlled Oscillator (VCO), and frequency divider. What we needed to do was repeatedly change the parameters of these five modules and test the PLL’s performance in terms of operating frequency, lock time, and phase noise.
Fig 2. PLL reference model.
Besides reference models, the Mixed-Signal Blockset™ provides a number of test benches that made our task easier. Leveraging the PLL Testbench, we could conveniently test the performance of the PLL model with various parameters and record the results.
In the beginning, to obtain each data point, we manually changed the model’s parameter settings, ran a simulation, and then manually recorded the output results. However, this way of collecting data proved very time-consuming.
At this point, Pragati patiently guided us on how to import data, run simulations, and export performance results automatically in batches. Please refer to this webpage for more information on programmatic model management in Simulink. With Pragati’s help, we changed the model parameters from constants to variables, then used a MATLAB program to set the parameter values of the Simulink PLL model, run the simulations, and collect the results automatically.
However, we then found that some parameters that define the model structure, such as the order of the loop filter, could not be modified simply by changing a variable’s value.
Just when we were at a loss, we were pleased to find that we could always assume a fourth-order loop filter architecture and set some capacitance and resistance values to 0 to obtain a lower-order one. For instance, setting R3 = R4 = 0 (ohm) and C3 = C4 = 0 (F) in the fourth-order architecture yields a second-order loop filter. In this way, we could rapidly scan different model configurations.
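The order-reduction trick above can be applied programmatically with set_param. The sketch below assumes hypothetical model, block, and parameter names; the actual names depend on how the reference model is configured.

```matlab
% Sketch (model path, block path, and parameter names are assumptions):
% reduce a fourth-order loop filter to second order by zeroing two RC sections.
mdl = 'PLL_model';                    % assumed model name
lf  = [mdl '/Loop Filter'];           % assumed block path
load_system(mdl);
set_param(lf, 'R3', '0', 'R4', '0');  % resistances set to 0 ohm
set_param(lf, 'C3', '0', 'C4', '0');  % capacitances set to 0 F
```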
Fig 3. Loop filter architectures.
We also hoped that the performance metrics could be recorded automatically by the program. However, we found that the required performance data could not be exported directly, so we had to export the intermediate outputs from the test bench and then compute the final metrics with MATLAB code.
Finally, we built a MATLAB program to simulate and test the PLL model automatically. In each round, the program:
  1. Generated random numbers within a certain range and then set the values of the model parameters as these random numbers.
  2. Ran simulations and tests of the Simulink model.
  3. Recorded the intermediate results sent back by Simulink.
  4. Calculated the final performance metrics based on the recorded intermediate results.
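The four steps above can be sketched as a MATLAB loop. This is a minimal illustration, not the team’s actual script: the model name, block paths, parameter names, logged signal, and the computeMetrics helper are all placeholders.

```matlab
% Sketch of the automated data-collection loop (all names are assumptions).
mdl = 'PLL_model';
load_system(mdl);
numRuns = 500;
results = zeros(numRuns, 3);           % e.g. operating frequency, lock time, phase noise
for k = 1:numRuns
    % 1. Draw random parameter values within chosen ranges
    Icp  = 1e-3 * (0.5 + rand);        % charge-pump current, A
    Kvco = 1e8  * (0.5 + rand);        % VCO sensitivity, Hz/V
    set_param([mdl '/Charge pump'], 'OutputCurrent', num2str(Icp));
    set_param([mdl '/VCO'],         'Sensitivity',   num2str(Kvco));
    % 2. Run the simulation
    out = sim(mdl);
    % 3. Record the intermediate results sent back by Simulink
    vctrl = out.yout{1}.Values;        % assumed logged control-voltage signal
    % 4. Compute the final performance metrics (hypothetical helper)
    results(k, :) = computeMetrics(vctrl, Icp, Kvco);
end
```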
Using a MATLAB program to collect data automatically improved the efficiency of building the data set significantly. Once we had a data set, the problem became how to build an effective deep learning model.

Solution to Problem 2: How to build an effective deep learning model?

Deep learning is generally used for feature extraction and regression (fitting). For example, convolutional neural networks have many convolutional and pooling layers for feature extraction.
Through experiments, we found that a two-layer feedforward neural network could already model the mapping between input parameters and output performance metrics well, so we used a simple feedforward network structure in our project.
MATLAB provides the Deep Learning Toolbox™, with which you can build a neural network from scratch or by modifying a reference model. The toolbox supports transfer learning with popular pre-trained models such as DarkNet-53, ResNet-50, NASNet, and SqueezeNet. Moreover, you can also import models from TensorFlow and Caffe into MATLAB.
What we used in this project is the Neural Net Fitting app included in the Deep Learning Toolbox™. We recommend this app because it provides a two-layer feedforward neural network with a configurable number of hidden neurons, as shown in the figure below.
Fig 4. Neural Net Fitting app.
In our network, the classical nonlinear sigmoid function was used as the activation of the hidden-layer neurons, while a linear output function was used in the output layer. We evaluated the network’s performance using the mean squared error (MSE) and regression analysis.
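The same two-layer fitting network can also be built in code with fitnet, which is what the Neural Net Fitting app generates under the hood (its default hidden activation is the sigmoid-type tansig, with a linear output layer). The data below is a random placeholder, not the team’s PLL data set.

```matlab
% Minimal fitnet sketch with placeholder data (10 input parameters,
% 3 performance metrics, 1000 samples).
X = rand(10, 1000);          % inputs: one column per sample
T = rand(3,  1000);          % targets: one column per sample
net = fitnet(20);            % 20 hidden neurons; tansig hidden, linear output
net = train(net, X, T);      % train with the default Levenberg-Marquardt solver
Y   = net(X);                % network predictions
err = perform(net, T, Y);    % mean squared error of the fit
```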
It must be mentioned that the model’s fit was poor at first, so we tried different methods, such as preprocessing the data, increasing the number of neurons, and adjusting the split among the training, validation, and test sets, and finally achieved a good result. The improvements we ultimately kept in our project were:
  1. Data preprocessing: for data spanning widely different orders of magnitude, we normalized the values with a logarithm function, so that the distribution of the output data became more uniform, which reduced the risk of underfitting or overfitting.
  2. Increasing the size of the test set: as usual, the data set in our project was divided into a training set, a validation set, and a test set. The training and validation sets were used during training, while the test set was reserved for the final evaluation. We found that a test set with at least 200 samples was necessary for a reliable assessment of the trained model.
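Both adjustments can be expressed in a few lines of MATLAB. The target values below are synthetic placeholders, and the split ratios are one illustrative choice that leaves more than 200 of 1000 samples for testing.

```matlab
% Sketch of the two adjustments with placeholder data.
% (a) Log-normalize a target that spans several orders of magnitude.
T    = 10.^(6 * rand(1, 1000));       % placeholder target spanning 1 to 1e6
Tlog = log10(T);                      % distribution is more uniform after the log
% (b) Enlarge the test split via the network's division parameters.
net = fitnet(20);
net.divideParam.trainRatio = 0.60;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.25;    % 250 of 1000 samples held out for testing
```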

Conclusion: Study hard + Try bravely = Success

Time flies. Lingfeng now works at Shanghai Mitsubishi Elevator Co., Ltd. (SMEC), and Jiangchuan is preparing for the postgraduate entrance examination.
They told us this cross-cultural experience was unforgettable. The project not only broadened their horizons but also gave them confidence in communicating with people across the world. Through it, they realized how important it is for students to leverage knowledge and innovation to solve real-world problems.
Finally, they would like to thank MathWorks for providing them with this opportunity, and to thank Prof. Yang, Mr. Tiwary, and Dr. Xu for their help and guidance!

