{"id":18554,"date":"2025-10-08T11:14:42","date_gmt":"2025-10-08T15:14:42","guid":{"rendered":"https:\/\/blogs.mathworks.com\/deep-learning\/?p=18554"},"modified":"2025-10-08T11:15:39","modified_gmt":"2025-10-08T15:15:39","slug":"tennis-analysis-with-ai-object-detection-for-ball-tracking","status":"publish","type":"post","link":"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/10\/08\/tennis-analysis-with-ai-object-detection-for-ball-tracking\/","title":{"rendered":"Tennis Analysis with AI: Object Detection for Ball Tracking"},"content":{"rendered":"<h6><\/h6>\r\n<em>This blog post is from <\/em><a href=\"https:\/\/www.linkedin.com\/in\/cory-hoi-5a3373235\/\"><em>Cory Hoi<\/em><\/a><em>, Engineer at MathWorks Engineering Development Group.<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\nIn <a href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/10\/01\/tennis-analysis-with-ai-interactive-ground-truth-labeling\/\">our previous blog post<\/a>, we used the Video Labeler app to segment both the tennis ball and the court, creating ground truth data for training a neural network. This ground truth forms the foundation for the next step: building and evaluating object detectors.\r\n<h6><\/h6>\r\nPretrained networks offer faster development, require less labeled data, and are well-suited for common tasks. However, they might not generalize well to highly specific applications. Custom networks, on the other hand, offer greater flexibility and control but typically require more effort to design, train, and validate. In this post, we\u2019ll explore both approaches\u2014building a detector from scratch and using a pretrained model\u2014while making use of the labeled ground truth data to improve performance through retraining.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Data Preprocessing<\/strong><\/p>\r\nThe first step is to organize the labeled data into a format suitable for training. 
Since the annotations were created using the Video Labeler app, MATLAB makes this process straightforward. We begin by loading the ground truth data and storing the image files using an <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/matlab.io.datastore.imagedatastore.html\">imageDatastore<\/a> object. This object efficiently manages large collections of images, supports batch processing, and integrates well with deep learning workflows such as training.\r\n<pre>load(\"trainingData\\tennisData\\gTruth.mat\")\r\n\r\ninputSize = [540 960 3];\r\n\r\n[imds, pxds] = pixelLabelTrainingData(gTruth,\"WriteLocation\",\"folder\\to\\write\\images\");\r\n\r\n% Store the pixel label mask files in an imageDatastore\r\nmasks = imageDatastore(pxds.Files);\r\n<\/pre>\r\n<h6><\/h6>\r\nWe split the dataset into training and validation sets using the partitionData function to help prevent <a href=\"https:\/\/www.mathworks.com\/discovery\/overfitting.html\">overfitting<\/a>. During training, the network updates its weights to minimize the loss on the training data. If stopping criteria are based on the training set, the model may overfit; that is, it performs well on seen data but poorly on unseen data. By using the validation set to define stopping criteria, we improve generalization and overall model performance.\r\n<h6><\/h6>\r\nThe <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/matlab.io.datastore.transform.html\">transform<\/a> function is used to preprocess the input images. 
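The helper function preProcessImages is not listed in this post. A minimal sketch, assuming it simply resizes each image or mask to the spatial size of the network input, might look like the following; the actual helper may also normalize pixel values or convert label masks.\r\n<pre>function out = preProcessImages(img, inputSize)\r\n% Hypothetical helper: resize the image (or mask) to the spatial\r\n% size expected by the network input layer.\r\nout = imresize(img, inputSize(1:2));\r\nend\r\n<\/pre>\r\n<h6><\/h6>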
After the data is processed, we combine the training and validation datastore objects by using the <a href=\"https:\/\/www.mathworks.com\/help\/matlab\/ref\/matlab.io.datastore.combine.html\">combine<\/a> function.\r\n<pre>[trainingImages, trainingMasks, validationImages, validationMasks] = partitionData(imds, masks);\r\n\r\nresizedTrainingImages = transform(trainingImages, @(x) preProcessImages(x, inputSize));\r\nresizedTrainingMasks = transform(trainingMasks, @(x) preProcessImages(x, inputSize));\r\n\r\nresizedValidationImages = transform(validationImages, @(x) preProcessImages(x, inputSize));\r\nresizedValidationMasks = transform(validationMasks, @(x) preProcessImages(x, inputSize));\r\n\r\ndsTraining = combine(resizedTrainingImages, resizedTrainingMasks);\r\ndsValidation = combine(resizedValidationImages, resizedValidationMasks);\r\n<\/pre>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Designing an Object Detector<\/strong><\/p>\r\nTo design a simple object detector from scratch, we use five main building blocks: an input layer, downsampling layers, bottleneck layers, upsampling layers, and an output layer.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1194\" height=\"507\" class=\"alignnone size-full wp-image-18566\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image1.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em>Neural network layout<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\nThe downsampling block reduces the spatial dimensions of the input features, retaining only larger-scale, higher-level information. This reduces the computational cost, because smaller feature maps mean fewer operations in subsequent layers. 
The downsampling block in our network consists of a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.convolution2dlayer.html\">convolutional layer<\/a>, a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.batchnormalizationlayer.html\">batch normalization layer<\/a>, a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.relulayer.html\">ReLU<\/a> activation function, and a <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/nnet.cnn.layer.maxpooling2dlayer.html\">2-D max pooling layer<\/a>.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"1216\" height=\"357\" class=\"alignnone size-full wp-image-18569\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image2.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em>Convolutional neural network structure<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\nThe convolutional layer applies learnable kernels to the input by sliding them over local regions of the image, performing element-wise multiplication and summation to generate feature maps that capture spatial patterns. The ReLU activation introduces non-linearity by zeroing out negative values, allowing the network to learn complex features. The max pooling layer reduces spatial dimensions by sliding a 2\u00d72 window over the feature map and keeping only the maximum value from each region, which decreases computation and adds robustness to small spatial shifts. 
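As a concrete illustration (this snippet is not from the original post), the maxpool function from Deep Learning Toolbox can be applied to a small matrix to see the effect of a 2\u00d72 window with stride 2:\r\n<pre>X = dlarray(single([1 3 2 4; 5 6 7 8; 3 2 1 0; 1 2 3 4]), \"SSCB\");\r\nY = maxpool(X, 2, Stride=2);  % 2x2 window, stride 2\r\nextractdata(Y)\r\n% Each output value is the maximum of one 2x2 block:\r\n%     6     8\r\n%     3     4\r\n<\/pre>\r\n<h6><\/h6>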
Stride and padding settings in both convolution and pooling control how the filters move and how edges are handled.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"444\" height=\"224\" class=\"alignnone size-full wp-image-18572\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image3.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em>Example of the max pooling layer<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\nIn the middle of the neural network are the bottleneck layers, which capture and compress the most relevant features from the downsampling block. This encoded information is then passed to the upsampling block through a sequence of transposed convolutional layers, which reconstruct the spatial dimensions. In this architecture, the number of transposed convolutional layers mirrors the number of convolutional layers in the downsampling block, ensuring that the output image matches the input size.\r\n<h6><\/h6>\r\nTo create the architecture for the described object detector, use the following code. Note that each named layer must have a unique name.\r\n<pre>layers = [\r\n    imageInputLayer(inputSize)\r\n\r\n    % Downsampling\r\n    convolution2dLayer(3, 16, 'Padding', 'same', 'Name', 'conv1')\r\n    batchNormalizationLayer\r\n    reluLayer('Name', 'relu1')\r\n    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'maxpool1')\r\n\r\n    convolution2dLayer(3, 32, 'Padding', 'same', 'Name', 'conv2')\r\n    batchNormalizationLayer\r\n    reluLayer('Name', 'relu2')\r\n    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'maxpool2')\r\n\r\n    % Bottleneck\r\n    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv_bottleneck')\r\n    batchNormalizationLayer\r\n    reluLayer('Name', 'relu_bottleneck')\r\n\r\n    % Upsample\r\n    transposedConv2dLayer(3, 32, 'Stride', 2, 'Cropping', 'same', 'Name', 'upsample1')\r\n    reluLayer('Name', 'relu3')\r\n\r\n    transposedConv2dLayer(3, 16, 'Stride', 2, 'Cropping', 'same', 'Name', 'upsample2')\r\n    reluLayer('Name', 'relu4')\r\n\r\n    % Mask output layer\r\n    
convolution2dLayer(1, 1, 'Padding', 'same', 'Name', 'conv3')\r\n    sigmoidLayer('Name', 'sigmoid')\r\n    ];\r\n<\/pre>\r\n<h6><\/h6>\r\nConstructing a neural network from scratch involves careful architecture design, layer configuration, and parameter initialization. Before proceeding with training, it\u2019s useful to compare this custom approach with leveraging a pretrained network, which can significantly reduce development time and improve performance on certain tasks.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Pretrained Object Detector<\/strong><\/p>\r\nInstead of creating a network from scratch, you can use a pretrained object detector. In MATLAB, there are many available options for <a href=\"https:\/\/www.mathworks.com\/solutions\/deep-learning\/models.html\">pretrained models<\/a> depending on your task. For this task, we will use the You Only Look Once X (YOLOX) object detector from the <a href=\"https:\/\/www.mathworks.com\/matlabcentral\/fileexchange\/116555-automated-visual-inspection-library-for-computer-vision-toolbox\">Automated Visual Inspection Library for Computer Vision Toolbox<\/a> support package.\r\n<h6><\/h6>\r\nThe YOLOX detector is a pretrained neural network that consists of three parts: the backbone, the neck, and the head.\r\n<h6><\/h6>\r\n<ul>\r\n \t<li>The backbone is a pretrained CNN, trained on the COCO data set. Its purpose is to extract features and compute feature maps from the input images. 
This is similar to the downsampling block of the detector we designed.<\/li>\r\n \t<li>The neck concatenates the feature maps from the backbone and feeds them as inputs into the head at three different scales.<\/li>\r\n \t<li>The head outputs classification scores, regression scores, and objectness scores.<\/li>\r\n<\/ul>\r\nLoad the pretrained YOLOX-large deep learning network by using the <a href=\"https:\/\/www.mathworks.com\/help\/vision\/ref\/yoloxobjectdetector.html\">yoloxObjectDetector<\/a> function. This network has the largest number of filters and convolutional layers, achieving the highest accuracy at the expense of computational cost and speed.\r\n<pre>networkName = \"large-coco\";\r\ndetector = yoloxObjectDetector(networkName,{'ball'},InputSize=inputSize);\r\n<\/pre>\r\n<h6><\/h6>\r\nThat\u2019s all it takes: just two lines of code to set up an accurate object detector. This object detector has been trained on the COCO data set, and it can recognize 80 object categories. However, tennis balls are not one of these categories. To enable the detector to recognize tennis balls, we will perform <a href=\"https:\/\/www.mathworks.com\/discovery\/transfer-learning.html\">transfer learning<\/a> by retraining the network using our own labeled ground truth data.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Training Object Detectors<\/strong><\/p>\r\nTo train each neural network, start by defining a set of <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/trainingoptions.html\">training options<\/a> that specify the optimization algorithm, learning rate, and stopping criteria. 
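The exact options used in this post are not shown; a representative configuration (illustrative values, assuming the dsValidation datastore created earlier) might look like this:\r\n<pre>options = trainingOptions(\"adam\", ...\r\n    InitialLearnRate=1e-3, ...\r\n    MaxEpochs=30, ...\r\n    MiniBatchSize=8, ...\r\n    ValidationData=dsValidation, ...\r\n    ValidationPatience=5, ...  stop early when validation loss stops improving\r\n    Plots=\"training-progress\");\r\n<\/pre>\r\n<h6><\/h6>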
Once the options are configured, you can train each network by calling the appropriate function.\r\n<h6><\/h6>\r\nTo train the network that we designed:\r\n<pre>net = trainnet(dsTraining, layers, 'binary-crossentropy',options);\r\n<\/pre>\r\n<h6><\/h6>\r\nTo retrain the YOLOX network:\r\n<pre>[trainedDetector, info] = trainYOLOXObjectDetector(dsTraining,detector,options);\r\n<\/pre>\r\n<h6><\/h6>\r\nDuring training, two sets of outputs are generated, allowing you to observe the training progress. The first is displayed in a plot, where the blue line represents the loss on the training data set and the orange line represents the loss on the validation data set.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"452\" height=\"214\" class=\"alignnone size-full wp-image-18575\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image4.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em>YOLOX training and validation loss vs iteration<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\nThe loss can also be displayed in the Command Window.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"604\" height=\"148\" class=\"alignnone size-full wp-image-18578\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image5.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em>YOLOX loss in the Command Window<\/em>\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Object Detection Performance<\/strong><\/p>\r\nAfter training the networks, you can evaluate their performance by reading an input frame, passing it through the network, and overlaying any detected predictions. 
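For the network we designed, which outputs a per-pixel score map rather than bounding boxes, the prediction could be visualized along these lines (a sketch; the testingImages datastore, the 0.5 threshold, and the frame already being resized to the network input size are assumptions):\r\n<pre>I = read(testingImages);\r\n\r\nscoreMap = minibatchpredict(net, single(I{1}));  % sigmoid scores in [0,1]\r\nmask = scoreMap > 0.5;                           % threshold into a binary mask\r\n\r\nimshow(labeloverlay(I{1}, mask))\r\n<\/pre>\r\n<h6><\/h6>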
To visualize predictions using the YOLOX network, use the following code.\r\n<pre>I = read(testingImages);\r\n\r\n[bboxes,scores,labels] = detect(trainedDetector,I{1},Threshold=0.25);\r\ndetectedImg = insertObjectAnnotation(I{1},\"Rectangle\",bboxes,labels);\r\n\r\nimshow(detectedImg)\r\n<\/pre>\r\n<h6><\/h6>\r\n<div class=\"row\"><div class=\"col-xs-12 containing-block\"><div class=\"bc-outer-container add_margin_20\"><videoplayer><div class=\"video-js-container\"><video data-video-id=\"6379621479112\" data-video-category=\"blog\" data-autostart=\"false\" data-account=\"62009828001\" data-omniture-account=\"mathwgbl\" data-player=\"rJ9XCz2Sx\" data-embed=\"default\" id=\"mathworks-brightcove-player\" class=\"video-js\" controls><\/video><script src=\"\/\/players.brightcove.net\/62009828001\/rJ9XCz2Sx_default\/index.min.js\"><\/script><script>if (typeof(playerLoaded) === 'undefined') {var playerLoaded = false;}(function isVideojsDefined() {if (typeof(videojs) !== 'undefined') {videojs(\"mathworks-brightcove-player\").on('loadedmetadata', function() {playerLoaded = true;});} else {setTimeout(isVideojsDefined, 10);}})();<\/script><\/div><\/videoplayer><\/div><\/div><\/div>\r\n<em>YoloxCoryNet predicting the tennis ball\u2019s bounding box<\/em>\r\n<h6><\/h6>\r\n<div class=\"row\"><div class=\"col-xs-12 containing-block\"><div class=\"bc-outer-container add_margin_20\"><videoplayer><div class=\"video-js-container\"><video data-video-id=\"6379618971112\" data-video-category=\"blog\" data-autostart=\"false\" data-account=\"62009828001\" data-omniture-account=\"mathwgbl\" data-player=\"rJ9XCz2Sx\" data-embed=\"default\" id=\"mathworks-brightcove-player\" class=\"video-js\" controls><\/video><script src=\"\/\/players.brightcove.net\/62009828001\/rJ9XCz2Sx_default\/index.min.js\"><\/script><script>if (typeof(playerLoaded) === 'undefined') {var playerLoaded = false;}(function isVideojsDefined() {if (typeof(videojs) !== 'undefined') 
{videojs(\"mathworks-brightcove-player\").on('loadedmetadata', function() {playerLoaded = true;});} else {setTimeout(isVideojsDefined, 10);}})();<\/script><\/div><\/videoplayer><\/div><\/div><\/div>\r\n<em>coryNet predicting the tennis ball\u2019s location<\/em>\r\n<h6><\/h6>\r\nOne decision we must make is the value of the detection threshold. Setting the threshold to a low number (less than 0.5) will detect objects with lower confidence scores. Setting this value higher will result in fewer detections, but they will be more precise and have a higher likelihood of being true positives.\r\n<h6><\/h6>\r\nTo evaluate the overall performance of each network, we need to determine how accurately each network is marking the ball. To do so, we can compare the pixels of the predicted tennis ball mask to the pixels of the ground truth mask. For example, see the following image.\r\n<h6><\/h6>\r\n<img decoding=\"async\" loading=\"lazy\" width=\"302\" height=\"200\" class=\"alignnone size-full wp-image-18584\" src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image6.png\" alt=\"\" \/>\r\n<h6><\/h6>\r\n<em>Ground truth (green) vs. coryNet prediction (red)<\/em>\r\n<h6><\/h6>\r\nIn the above image, the green marker shows the ground truth and the red marker shows coryNet\u2019s prediction. One method for calculating the total error is to take the absolute value of the difference between the two images. Looping through all images and averaging this value, the network we designed has an accuracy of 66%, while the YOLOX object detector achieves 87%.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Improving Ball Tracking<\/strong><\/p>\r\nIn the previous section, we trained a neural network to detect a tennis ball. 
While neither object detector achieved perfect accuracy, the main objective was to demonstrate the overall workflow rather than to develop a fully optimized model. That said, several potential areas for improvement remain unexplored. One of the key advantages of using MATLAB is the flexibility to go beyond object detection and integrate additional techniques for enhanced performance. We can combine the object detector with object tracking from <a href=\"https:\/\/www.mathworks.com\/products\/sensor-fusion-and-tracking.html\">Sensor Fusion and Tracking Toolbox<\/a>.\r\n<h6><\/h6>\r\nFunctions in the toolbox can easily be paired with our trained object detector. For a comprehensive tutorial, refer to the <a href=\"https:\/\/www.mathworks.com\/help\/fusion\/ug\/visual-tracking-of-occluded-and-unresolved-objects.html\">Visual Tracking of Occluded and Unresolved Objects<\/a> example. To use the object tracker with the object detector that we have trained, set detectorObjects to our trainedDetector.\r\n<h6><\/h6>\r\nWe can then call the function runTracker to set up the individual object tracks. In our case, there will be only a single track, since we are only interested in detecting the tennis ball. In this example, we create an object tracker by calling the function <a href=\"https:\/\/www.mathworks.com\/help\/fusion\/ref\/trackergnn-system-object.html\">trackerGNN<\/a>.\r\n<pre>tracker = trackerGNN(MaxNumSensors=1,MaxNumTracks=1);\r\n\r\ntracker.FilterInitializationFcn = @initcvkf;\r\ntracker.TrackLogic = \"History\";\r\ntracker.ConfirmationThreshold = [2 2];\r\ntracker.DeletionThreshold = [2 2];\r\n<\/pre>\r\n<h6><\/h6>\r\nThis is more powerful than using object detection alone, because we can set additional constraints for an object track. 
In the above code, the FilterInitializationFcn property is set to @initcvkf, meaning that the tracker uses a constant-velocity linear Kalman filter.\r\n<h6><\/h6>\r\nTo run the tracker, call the following function.\r\n<pre>frames = runTracker(vidReader, tracker, detectionHistory);\r\n<\/pre>\r\n<h6><\/h6>\r\n<div class=\"row\"><div class=\"col-xs-12 containing-block\"><div class=\"bc-outer-container add_margin_20\"><videoplayer><div class=\"video-js-container\"><video data-video-id=\"6379619963112\" data-video-category=\"blog\" data-autostart=\"false\" data-account=\"62009828001\" data-omniture-account=\"mathwgbl\" data-player=\"rJ9XCz2Sx\" data-embed=\"default\" id=\"mathworks-brightcove-player\" class=\"video-js\" controls><\/video><script src=\"\/\/players.brightcove.net\/62009828001\/rJ9XCz2Sx_default\/index.min.js\"><\/script><script>if (typeof(playerLoaded) === 'undefined') {var playerLoaded = false;}(function isVideojsDefined() {if (typeof(videojs) !== 'undefined') {videojs(\"mathworks-brightcove-player\").on('loadedmetadata', function() {playerLoaded = true;});} else {setTimeout(isVideojsDefined, 10);}})();<\/script><\/div><\/videoplayer><\/div><\/div><\/div>\r\n<em>Improving object detection with tracker<\/em>\r\n<h6><\/h6>\r\nThe tracker properties defined here use relatively strict thresholds for confirming and deleting tracks. While this can improve precision by reducing false positives, it also increases the risk of discarding valid tracks. Ideally, we want to establish a track and maintain it as long as it remains accurate. This becomes more challenging in cases like a tennis ball, where the trajectory is curved and fast-moving.\r\n<h6><\/h6>\r\n&nbsp;\r\n<h6><\/h6>\r\n<p style=\"font-size: 20px; color: #c04c0b;\"><strong>Discussion<\/strong><\/p>\r\nThis blog post only scratches the surface. MATLAB offers a wide range of capabilities for designing, training, and deploying neural networks. 
Whether you\u2019re interested in building custom models or working with pretrained detectors, this is a good starting point for deeper exploration. If you\u2019re looking to further improve the performance of the neural networks covered here, consider experimenting with the following:\r\n<h6><\/h6>\r\n<ol>\r\n \t<li>Use <a href=\"https:\/\/www.mathworks.com\/help\/deeplearning\/ref\/experimentmanager-app.html\">Experiment Manager<\/a> to systematically train and compare models under different conditions. Try varying the solver, learning rate, or mini-batch size to see which combination yields the best results.<\/li>\r\n \t<li>Modify the YOLOX starting network to evaluate how different backbone architectures affect detection accuracy and training speed.<\/li>\r\n \t<li>Retrain on a new dataset, perhaps for another sport, using the pretrained model. Assess how well the network generalizes and what adjustments may be needed.<\/li>\r\n<\/ol>\r\n<h6><\/h6>","protected":false},"excerpt":{"rendered":"<div class=\"overview-image\"><img src=\"https:\/\/blogs.mathworks.com\/deep-learning\/files\/2025\/09\/post2_image2.png\" class=\"img-responsive attachment-post-thumbnail size-post-thumbnail wp-post-image\" alt=\"\" decoding=\"async\" loading=\"lazy\" \/><\/div><p>\r\nThis blog post is from Cory Hoi, Engineer at MathWorks Engineering Development Group.\r\n\r\n&nbsp;\r\n\r\nIn our previous blog post, we used the Video Labeler app to segment both the tennis ball and the... 
<a class=\"read-more\" href=\"https:\/\/blogs.mathworks.com\/deep-learning\/2025\/10\/08\/tennis-analysis-with-ai-object-detection-for-ball-tracking\/\">read more >><\/a><\/p>","protected":false},"author":194,"featured_media":18569,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[36,9,5],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/18554"}],"collection":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/users\/194"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/comments?post=18554"}],"version-history":[{"count":23,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/18554\/revisions"}],"predecessor-version":[{"id":18737,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/posts\/18554\/revisions\/18737"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media\/18569"}],"wp:attachment":[{"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/media?parent=18554"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/categories?post=18554"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.mathworks.com\/deep-learning\/wp-json\/wp\/v2\/tags?post=18554"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}