NVIDIA has announced that its AI platform was able to train one of the most advanced AI language models, BERT, in just 53 minutes. The same platform was also able to complete BERT inference in just 2 milliseconds.
This is groundbreaking news for developers who use AI language models in large-scale applications. The breakthrough could pave the way for putting the power of sophisticated AI language models in the hands of millions of consumers worldwide.
Conversational AI services have been around for some years now, but they have struggled to operate with human-level comprehension because extremely large AI models could not be deployed in real time. NVIDIA has worked to alleviate this problem by adding optimizations to its AI platform, setting new speed records in both AI training and inference.
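At the core of BERT-style models is the Transformer's self-attention operation, which is the main computation such platforms must accelerate. The following is a minimal NumPy sketch of single-head scaled dot-product self-attention; the dimensions and weights are illustrative, not BERT's actual configuration, and this is not NVIDIA's optimized implementation.

```python
# Hedged sketch: scaled dot-product self-attention, the core
# operation of BERT-style models. Sizes here are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over a (seq_len, d_model) input."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v                        # (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16                      # toy sizes, not BERT's
x = rng.normal(size=(seq_len, d_model))
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (8, 16)
```

Real-time deployment of full-scale BERT involves hundreds of millions of parameters running computations like this across many layers, which is why hardware and software optimization matters for latency.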
“Large language models are revolutionizing AI for natural language,” said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA. “They are helping us solve exceptionally difficult language problems, bringing us closer to the goal of truly conversational AI.”
The breakthrough from NVIDIA will give conversational AI a big push, and in the near future it will allow organizations to create state-of-the-art services that assist customers in ways never before imagined.
© 2021 CIO Bulletin. All rights reserved.