The claim and the technology

The technology behind the trend is the transformer model: a neural network architecture trained on large amounts of text to predict the next word in a sequence. Transformers have become very popular: these models can capture relationships and trends in text and other sequential data, and the end application can be anything from sentiment analysis to image captioning to object recognition. One architecture in the news is Google's Language Model for Dialogue Applications (LaMDA), which, according to Google's blog, can "engage in a free-flowing way about a seemingly endless number of topics." This is because the input data used to train the model was dialogue-based, and the model was trained to respond in a way that is "sensible and specific." You can explore and implement transformer models in MATLAB here: https://github.com/matlab-deep-learning/transformer-models with models such as BERT and GPT-2. It's important to keep in mind that a small number of researchers are focused on this aspect of AI, while a much larger community is focused on using transformers and other AI architectures to improve the systems we use every day. While transformers are a powerful architecture, they are one of many that can provide real results for a variety of applications in AI.
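The repository above provides MATLAB implementations; as a language-agnostic illustration of the core idea, here is a toy Python sketch of next-word prediction. It uses simple bigram counts as a stand-in for a learned model: real transformers learn attention over long contexts rather than counting adjacent word pairs, so treat this only as a minimal picture of "predict the next word from what came before."

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies: a crude stand-in for a trained language model."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often seen after `word` in training, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model learns from text",
]
counts = train_bigram(corpus)
print(predict_next(counts, "the"))  # "model" ("the" is followed by "model" twice, "next" once)
```

A transformer replaces the frequency table with learned weights and conditions on the entire preceding context, which is what lets a model like LaMDA produce dialogue that reads as "sensible and specific."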
Reactions to "the news"

Sorry to be a downer, but sentient AI doesn't exist. Here's one post I found that used GPT-3 to hold a conversation with some very… unique characters: https://www.aiweirdness.com/interview-with-a-squirrel/ In addition, some researchers would rather have people focus on the actual, real-life good and bad of AI right now. My recommendation, when faced with technology in the news, is to approach everything with a healthy sense of skepticism and to focus not on the outcome, but on how that work could relate to, or improve, the work you are already doing.
Why you can still be excited about AI

We don't need to sensationalize AI for the technology to be useful. True, AI might not walk among us, but it is solving real problems. Remove the hype from AI by being mindful of statements such as "AI can exceed human accuracy." Is this true? Maybe not. Either way, such claims distract from the real reasons to consider deep learning and machine learning techniques in your work.
What does this mean for engineers?

As always, let's bring this back to the engineer, with three things we can take away from this story.
- Focus on the tasks in which AI can (actually) help. Here are two examples of AI being used for real, practical applications:
  - Using AI simulations for computational fluid dynamics solvers: link to story
  - Using neural networks for diagnosis in medical imaging: link to story
- Focus on AI results in addition to accuracy. Keep fairness and bias in mind: a growing number of engineers and scientists use explainability techniques to make their models' behavior understandable. Explainability and interpretability both help ensure a model is not relying on implicit or explicit bias toward specific features in the data. Also, track your experiments so you can replicate results: I've mentioned Experiment Manager before, and being able to replicate and prove your results is essential to the success of an AI project.
- Be critical of hype
- Be wary of "super-human" results. Claiming that AI "exceeds human-level accuracy" may not be accurate, and if you are looking to use AI simply to reach super-human performance, you may be disappointed in the results. Be mindful of who is making these claims and bring it back to the problem at hand: what are you trying to accomplish, and how will AI help you?
- Be careful of futuristic promises. Statements such as "We're not there yet" promise a future world at which we will eventually arrive. The long-running debate over a future of sentient robots is best saved for science fiction; promises about future AI distract from the world we live in today, where AI can help solve current problems in many diverse applications.
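The explainability point above can be made concrete with a minimal, library-free sketch of permutation importance, one common model-agnostic technique: shuffle one feature's values across samples and measure how much accuracy drops. The toy model and dataset here are assumptions for illustration, not any specific MATLAB tooling; a large drop flags a feature the model leans on, which is exactly where hidden bias can creep in.

```python
import random

def model(x):
    # Hypothetical trained classifier (an assumption for illustration):
    # predicts 1 whenever feature 0 exceeds feature 1.
    return 1 if x[0] > x[1] else 0

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature, seed=0):
    """Accuracy drop after shuffling one feature's values across samples.
    A large drop means the model relies heavily on that feature."""
    rng = random.Random(seed)
    column = [x[feature] for x in data]
    rng.shuffle(column)
    permuted = [list(x) for x in data]
    for row, value in zip(permuted, column):
        row[feature] = value
    return accuracy(data, labels) - accuracy(permuted, labels)

# Toy dataset: the label depends only on features 0 and 1; feature 2 is constant.
data = [[2, 1, 5], [0, 1, 5], [3, 1, 5], [0, 2, 5], [4, 1, 5], [1, 3, 5]]
labels = [1, 0, 1, 0, 1, 0]
print(permutation_importance(data, labels, feature=0))  # >= 0: informative feature
print(permutation_importance(data, labels, feature=2))  # 0.0: shuffling a constant changes nothing
```

If a sensitive or irrelevant feature shows high importance, that is a signal to revisit the data and the model before trusting its accuracy number.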