AI Gets Bigger and Better: Microsoft’s AI-at-Scale Predictions

Two years ago, when Luis Vargas, Ph.D. of Microsoft AI addressed ODSC, he told the story of his daughter learning multiple languages as a small child and how that hidden complexity carries over to teaching machines to learn. AI development follows a path similar to that of the human brain, and that parallel inspires the parameters that form the neural connections of some of today’s most exciting AI deployments.

In the years since that first talk, Microsoft has built massive supercomputers capable of supporting truly complex artificial intelligence. This type of research drives AI innovation—people pushing the limits of what computers are capable of doing and how closely we can mimic the human mind when building our machines. In Vargas’s ODSC East 2022 keynote, “The Big Wave of AI at Scale,” he demonstrates the power of big AI and what Microsoft’s research and engineering means for other companies.

AI at Scale is a Strong Trend

Three key components are driving the push to build bigger, better AI: 

  • Self-supervised learning: AI learns contextual representations from the data itself
  • Transfer learning: Researchers can train AI on one task and then transfer that learning as the foundation for a variety of related tasks (similar to how humans learn in real life)
  • Transformer architecture: Researchers and developers can encode and decode contextual representations
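The first two ideas above can be sketched in miniature. The toy below (my own illustration, not Microsoft's stack or a real neural model) "pretrains" on raw, unlabeled text by learning word statistics (the self-supervised step), then a downstream task reuses that learned representation without seeing the pretraining data again (the transfer step):

```python
from collections import Counter

def pretrain(corpus):
    """Self-supervised step: learn word frequencies from unlabeled text.
    The resulting dict is our stand-in for a learned representation."""
    counts = Counter()
    for sentence in corpus:
        counts.update(sentence.lower().split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def transfer_score(representation, sentence):
    """Downstream task: reuse the pretrained representation to score
    how 'familiar' a new sentence is, with no task-specific training."""
    words = sentence.lower().split()
    scores = [representation.get(w, 0.0) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

corpus = ["the cat sat on the mat", "the dog chased the cat"]
rep = pretrain(corpus)
print(transfer_score(rep, "the cat"))       # familiar words -> positive score
print(transfer_score(rep, "quantum flux"))  # unseen words -> 0.0
```

Real systems learn dense vector representations with transformers rather than frequency tables, but the workflow is the same: an expensive self-supervised pretraining pass, then cheap reuse across many downstream tasks.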

These functions allow artificial intelligence to take on more complex tasks and process more data with fewer errors. This is exciting news for researchers who need to build large models but worry about the availability of processing power.

Microsoft continues to increase the parameters of its models. We’re now seeing a more comprehensive range of tasks in language, vision, and other disciplines. In other words, we’re integrating artificial intelligence into the very spaces previously occupied only by humans and creeping towards the day when our machines finally become as complex as our own brains.

This is an exciting step.

What does this look like in the real world? 

Two years ago, it looked like AI digesting large texts and abstracting the essential ideas to answer questions or summarize. It looked like synthesizing information and writing entirely new pieces that had never appeared before. It even looked like creating a fictional dialog between two famous people based on their past writing.

Today, artificial intelligence can solve riddles and explain the process behind its solution. Companies have rapidly adopted these language models in a variety of products that require high levels of language understanding from machines.

Microsoft deploys these models in its search engine, Bing, to power its search functions. Now, people can use the same capabilities to analyze Word documents and extract relevant information. This could have a significant impact on business applications where analyzing large texts causes operational bottlenecks.
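To make "extracting relevant information" concrete, here is a minimal classical sketch (not Bing's actual models): an extractive approach that ranks a document's sentences by the frequency of their content words and returns the top ones. The stopword list and scoring are illustrative assumptions.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "is", "of", "to", "and", "in", "it", "are"}

def top_sentences(text, k=1):
    """Return the k sentences whose content words are most frequent
    in the document overall -- a classic extractive heuristic."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return sorted(sentences, key=score, reverse=True)[:k]

doc = ("Transfer learning reuses one model for many tasks. "
       "Cats are nice. "
       "Large models make transfer learning practical for many tasks.")
print(top_sentences(doc, k=1))  # the off-topic sentence ranks lowest
```

Modern language models go far beyond word counts, but the input/output shape is the same: a long document in, the most relevant pieces out.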

Microsoft also moved these capabilities into email. Viva helps users stay on top of the most important aspects of their inbox, keeping vital information close at hand without scrolling through thousands of email threads. In Outlook, AI helps users better prepare for meetings by analyzing all available data from Mail, Teams, and other documents. These are just a few of the applications possible with a truly robust artificial intelligence model.

Other Companies Will Soon Deploy These Large Models at Scale

These examples have all been language-related. However, machines can also abstract this type of information gathering to other forms of data thanks to transfer learning. Microsoft demonstrated cross-modality tasks such as generating an image based on text input from the user or analyzing and answering questions about images.

The Microsoft AI tech stack includes API services, pretrained models, machine-learning acceleration software, and even infrastructure. Microsoft is making these services available on the web so that users can build their own functions.

In addition, users will be able to use as much processing power as they need to accomplish tasks. Microsoft AI is creating an ecosystem to support AI deployments and build new, innovative research tools for business users and organizations.

According to Vargas, we’re now in the fifth industrial revolution, a time when humans and machines are increasingly inseparable within online spaces and business applications. These innovations should foster a better, more balanced working relationship between intelligent machines and the humans they support. What we know of work is changing, and it’s a great time to be alive.

To see the entire keynote and discover more details about what Microsoft AI teams have been working on in the two years since the previous keynote, check out the ODSC link.

Elizabeth Wallace, ODSC

Elizabeth is a Nashville-based freelance writer with a soft spot for startups. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do. Connect with her on LinkedIn here: https://www.linkedin.com/in/elizabethawallace/