Today, NVIDIA introduced NVIDIA Omniverse Avatar, a technology platform for generating interactive AI avatars.
Omniverse Avatar connects the company’s technologies in speech AI, computer vision, natural language understanding, recommendation engines, and simulation. Avatars created on the platform are interactive characters with 3D graphics that can see, speak, converse on a wide range of subjects, and understand naturally spoken intent.
Omniverse Avatar opens the door to AI assistants that are easily customizable for virtually any industry. These could help with the billions of daily customer-service interactions, such as restaurant orders, banking transactions, and making personal appointments and reservations, leading to greater business opportunities and improved productivity.
Omniverse Avatar is part of NVIDIA Omniverse™, a simulation and collaboration platform for 3D workflows that is currently in open beta with over 70,000 users.
In his keynote address at NVIDIA GTC, NVIDIA CEO Jensen Huang shared several examples of Omniverse Avatar: Project Tokkio for customer support, NVIDIA DRIVE Concierge for always-on, intelligent in-vehicle services, and Project Maxine for video conferencing.
In the first Project Tokkio demonstration, Huang showed colleagues holding a real-time conversation with a toy replica of himself, which discussed topics such as biology and climate science.
In another Project Tokkio example, a customer-service avatar in a restaurant kiosk was able to see, converse with, and understand two customers as they ordered veggie burgers, fries, and drinks. The demos were powered by NVIDIA AI software and Megatron 530B, currently the world’s largest customizable language model.
In a demonstration of DRIVE Concierge AI, a digital assistant on the car’s center dashboard screen helps the driver select the best driving mode to reach his destination on time, and then follows his request to set a reminder once the car’s range drops below 100 miles.
Huang also demonstrated Project Maxine’s ability to add state-of-the-art video and audio features to virtual-collaboration and content-creation applications. In the demo, an English-language speaker on a video call in a noisy café can be heard clearly without background noise. As she speaks, her words are transcribed and translated in real time into German, French, and Spanish, in the same voice.
Omniverse Avatar Key Elements
Omniverse Avatar uses elements of speech AI, computer vision, natural language understanding, recommendation engines, facial animation, and graphics, delivered through the following technologies:
• Its speech recognition is based on NVIDIA Riva, a software development kit that recognizes speech across multiple languages. Riva is also used to generate human-like speech responses using text-to-speech capabilities.
• Its natural language understanding is based on Megatron 530B, a large language model that can recognize, understand, and generate human language. Megatron 530B is a pretrained model that, with little or no additional training, can complete sentences, answer questions across a large range of subjects, summarize long and complex stories, translate into other languages, and handle many domains it was not trained specifically to do.
• Its recommendation engine is provided by NVIDIA Merlin, a framework that allows businesses to build deep learning recommender systems capable of handling large amounts of data.
• Its perception capabilities are enabled by NVIDIA Metropolis, a computer vision framework for video analytics.
• Its avatar animation is powered by NVIDIA Video2Face and Audio2Face, 2D and 3D AI-driven facial animation and rendering technologies.
• These technologies are composed into an application and processed in real time using the NVIDIA Unified Compute Framework. Packaged as scalable, customizable microservices, these skills can be securely deployed, managed, and orchestrated across multiple locations by NVIDIA Fleet Command.
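The composition described above can be pictured as a simple pipeline: speech recognition feeds a language model, a recommendation step shapes the reply, and the result drives speech synthesis and facial animation. The sketch below is purely illustrative, with placeholder functions standing in for the NVIDIA services named above; none of the function names are real APIs.

```python
# Illustrative avatar pipeline. Every function here is a toy placeholder
# standing in for a real NVIDIA service (Riva, Megatron 530B, Merlin,
# Audio2Face); no actual NVIDIA APIs are called.

def recognize_speech(audio: bytes) -> str:
    """Stands in for Riva speech recognition (audio -> text)."""
    return audio.decode("utf-8")  # toy: treat the bytes as a transcript

def understand_and_reply(text: str) -> str:
    """Stands in for Megatron 530B natural language understanding."""
    if "burger" in text:
        return "One veggie burger coming up."
    return "Could you repeat that?"

def recommend(reply: str) -> str:
    """Stands in for a Merlin-style recommendation step."""
    return reply + " Would you like fries with that?"

def synthesize_and_animate(reply: str) -> dict:
    """Stands in for Riva text-to-speech plus Audio2Face animation."""
    return {"speech": reply, "animation": f"<face driven by {reply!r}>"}

def avatar_pipeline(audio: bytes) -> dict:
    """Compose the stages, as the Unified Compute Framework would in real time."""
    text = recognize_speech(audio)
    reply = recommend(understand_and_reply(text))
    return synthesize_and_animate(reply)

result = avatar_pipeline(b"I'd like a veggie burger")
print(result["speech"])
# -> One veggie burger coming up. Would you like fries with that?
```

In a real deployment each stage would run as its own microservice, which is what allows Fleet Command to deploy and scale them independently.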
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market and redefined modern computer graphics, high-performance computing, and artificial intelligence. The company’s pioneering work in accelerated computing and AI is reshaping trillion-dollar industries such as transportation, healthcare, and manufacturing, and fueling the growth of many others.