NeurIPS 2024 Highlights
As NeurIPS 2024 comes to a close, I am thinking back to all the inspiring presentations and conversations with brilliant AI researchers. It was clear that scientists are committed to bringing the benefits of AI into real-world applications. From generative AI to physics-informed neural networks (PINNs), the AI community is exploring new methods and facing new challenges, such as noise, system integration, and safety requirements, that arise when AI leaves the desktop and is applied in practical scenarios.
This year, the MathWorks team showcased how MATLAB and Simulink continue to empower researchers and engineers in deep learning and AI. From streamlined workflows for deploying AI models on hardware to integrating pretrained models with simulation, it was exciting to discuss with the many attendees who stopped by our booth the tools that help create robust, explainable, and optimized models for AI-powered systems.
The MathWorks booth
From left to right: Jon Cherrie, David Ho, Philip Brown, Jianghao Wang, Wayne King, Sivylla Paraskevopoulou, Abhijit Bhattacharjee, Sarah Mohamed, Lucas García, Reece Teramoto, Jordan Olson, and Anoush Najarian.
In this blog post, I am going to provide the highlights of the presentations and workshops delivered by the MathWorks team.
EXPO Talk: AI Verification & Validation: Trends, Applications and Challenges
Lucas García kicked off the NeurIPS EXPO with a compelling talk on how AI-enabled engineered systems are being increasingly adopted in safety-critical industries like aerospace, automotive, and manufacturing, where ensuring reliability and safety is vital. He was joined by Darren Cofer, Principal Fellow at Collins Aerospace Applied Research & Technology Group, who dived deeper into the use cases and challenges in aviation. Verifying AI systems, especially those involving neural networks, presents challenges due to their data-driven, non-deterministic nature and opaque decision-making. This talk explored key verification methodologies, including abstract interpretation to mathematically analyze AI models, the use of constrained deep learning to embed safety requirements into model training, and runtime monitors for dynamic performance assessment. Notable contributions, such as the FoRMuLA report by Collins Aerospace and EASA, were highlighted to demonstrate the potential of formal methods in AI assurance processes. The discussion underscored the importance of developing robust verification techniques to ensure AI systems in safety-critical applications operate reliably and as intended.
Workshop: Hands-On AI for Everyone
The Hands-On AI for Everyone workshop series is designed to make AI creation, testing, and deployment accessible and engaging for diverse audiences with varying levels of experience with AI and programming. At NeurIPS, we presented the Farm-to-Plate AI workshop, which focused on applying AI in agriculture. The participants simulated drone flights with LiDAR to survey mango orchards, used object detection for fruit counting, and trained regression models to assess fruit ripeness from hyperspectral images—demonstrating AI’s positive impact on the food production chain.
Workshop: AI for Enhanced Spacecraft Orientation
This hands-on workshop focused on building AI and machine learning workflows for accurate spacecraft pose estimation, a critical component of successful space rendezvous missions. Using the state-of-the-art Speed-UE-Cube dataset, developed in collaboration with Stanford University’s Space Rendezvous Laboratory (SLAB), participants explored the complete process, from image preprocessing to deploying deep learning algorithms on hardware. The workshop exercises started with aircraft classification and progressed to an advanced domain-specific spacecraft pose estimation application.
Poster: Memory-Efficient On-Device Learning for TinyML Applications with Low-Rank Approximation and Quantization in MATLAB
This poster presented an innovative approach that enables on-device learning for deep neural networks (DNNs) on resource-constrained edge devices, addressing the challenges of high computational and memory demands. This work optimized quantized neural networks using Quantizer Scaling Emulation (QSE) to calibrate gradient scaling for various word lengths (4-bit to 16-bit) and applied Sparse Update Backpropagation with Singular Value Decomposition (SVD)-based low-rank approximation (SVD-LR) to reduce memory and computational requirements. Structural pruning was used to further compress AI models offline for deployment. Evaluations in MATLAB showed significant memory savings: for smaller models like handwritten digit recognition, a 10-fold reduction in memory (under 400 KB) was achieved with minimal accuracy loss; for larger models like command keyword spotting, SVD-LR achieved 60% memory savings while maintaining accuracy. This work demonstrated the feasibility of on-device learning for TinyML applications on low-power MCUs, paving the way for more efficient and privacy-preserving model adaptation on edge devices.
Looking Ahead for 2025
As we wrap up NeurIPS, 2024 is also drawing to a close. It has been an exciting year for AI, and I believe that 2025 will bring great advances for AI applications. I am looking forward to sharing these advances here at the MATLAB AI blog and next year at NeurIPS.