Fiddler is hosting our third annual Explainable AI Summit on October 21st, bringing together industry leaders, researchers, and Responsible AI experts to discuss the future of Explainable AI. This year’s conference will be entirely virtual, and we’re using the opportunity to take the conference global, with speakers zooming in from around the world.
We’re not the only ones going virtual – 2020 has been a year of rapid adaptation and digital acceleration for enterprises, forcing businesses across industries to shift their business models, sales processes, and overarching strategies to meet the changing needs of their customers. In many cases, this has meant accelerating the adoption of AI projects. The use cases for AI continue to grow, from hiring decisions to brushing your teeth. According to the Gartner 2020 CIO Agenda Survey, leading organizations expect to double the number of AI projects in place within the next year, and over 40% of them plan to deploy AI solutions by the end of 2020. But as AI becomes more prevalent in everyday life, consumers are becoming increasingly discerning, demanding transparency into how and why businesses are using AI and how and why their algorithms are making decisions.
At Fiddler, we believe that the key to successfully deploying AI is visibility and transparency of AI systems. “In order to root out bias within models, you must first be able to understand the ‘how’ and ‘why’ behind problems to efficiently root cause issues,” says Fiddler’s CEO Krishna Gade. “When you know why your models are doing something, you have the power to make them better while also sharing this knowledge to empower your entire organization.”
This year’s conference is bringing together speakers from across industries and functions to cover topics that speak to this unique moment, such as key considerations for ethical and accountable AI deployment, the implications of AI on the financial services industry, and the past, present, and future states of Explainable AI.
Lofred Madzou, Artificial Intelligence Project Lead at WEF, who will be speaking on the Responsible AI panel, explains that building trustworthy AI is critical for the future of the industry. He says, “If we don’t manage to build trustworthy systems, in the long run, we’re going to limit the use of AI in high-stakes domains such as criminal justice, healthcare, banking or employment.” Joining him on the panel is AI Ethicist Merve Hickok, who explains that we must not remove the human element from responsibility in building AI systems. “I would like to rephrase [Responsible AI as] ‘responsible development and deployment of AI,’” she says. “It is important that we do not attribute traits like ‘responsible’ or ‘trustworthy’ to algorithms.”
In addition to panels with industry experts, the Summit will feature keynotes from Karen Hao, the Senior AI Reporter for MIT Technology Review, and Scott Belsky, Chief Product Officer and Executive Vice President of Adobe Creative Cloud. The day will close with a special screening of Shalini Kantayya’s documentary film Coded Bias, which premiered at the 2020 Sundance Film Festival. The film explores the fallout of MIT Media Lab researcher Joy Buolamwini’s startling discovery that facial recognition does not see dark-skinned faces and women accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
To learn more about the Explainable AI Summit and reserve your seat today, visit the event website.