The last two years have clearly shown how Generative AI has the potential to revolutionize many industries and solve complex problems. Though there are great potential benefits, it is important to ensure this technology is developed and used responsibly. In her keynote speech at ODSC West, Sarah Bird, Global Lead for Responsible AI Engineering at Microsoft, discussed Microsoft’s journey in building and using generative AI responsibly. She is responsible for leading Microsoft’s efforts to develop and use AI responsibly and is a member of the board of directors of the Partnership on AI.
During her keynote, Sarah began by outlining the potential benefits of generative AI. She said that generative AI has the unique ability to create new products and services, improve existing ones, and solve complex problems that were previously thought impossible.
One example of this is how generative AI can be used to create new drugs through protein sequencing and new molecule discovery, as well as imaging pattern recognition. Another related to Microsoft's own Copilot and how it helps developers free themselves from repetitive programming tasks, in turn freeing them to focus on planning and exploration. But with all of these benefits comes a level of risk.
Sarah warned of the potential risks of generative AI. She said that generative AI can be used to create harmful content, such as deepfakes and hate speech, and to spread misinformation and propaganda. For example, generative AI can be used to create deepfakes of politicians that are difficult, if not impossible, to distinguish from legitimate sources. These deepfakes have already begun to cause issues, with phishing scams cropping up across the globe that use deepfake technology to trick vulnerable populations.
Sarah Bird then went on to explain Microsoft's approach to responsible AI, taking the audience through how Microsoft is committed to developing and using generative AI in a way that benefits society. To achieve this, Microsoft has developed a set of responsible AI principles. These principles include:
- AI should be used for good.
- AI should be accountable to people.
- AI should be built and tested for safety and security.
- AI should be fair and unbiased.
- AI should be transparent.
- AI should be privacy-preserving.
- AI should be aligned with societal values.
According to Sarah Bird, Microsoft is working to implement all of these principles in its products and services to ensure its AI-powered tools provide maximum net benefit with minimal risk. Part of the company's strategy is to help educate the public about the principles of responsible AI: the more users understand the importance of these principles, the greater the awareness of what AI should be and how the technology should be handled.
Bird concluded her speech by calling on the AI community to work together to create a future where AI benefits everyone. She said that we need to develop and use generative AI responsibly and that we need to educate the public about the potential risks and benefits of AI.
If you found this keynote interesting, then you shouldn’t miss the next ODSC conference, ODSC East! Be one of the first 50 attendees for East and save 75%!