15 Open Source Responsible AI Toolkits and Projects to Use Today

Responsible AI, ethical AI, trustworthy AI, and transparent AI are all important topics lately. As more and more companies come under fire for allowing bias in their models or being secretive about their AI, people are becoming increasingly aware of the dangers of AI black boxes. People wonder what tools are being used, where data was obtained, and what motives lie behind the algorithms. Luckily, there are a number of responsible AI toolkits, frameworks, and other open-source tools to help make your projects more transparent, trustworthy, and ethical.


Responsible AI Toolkits for AI Ethics & Privacy

Ethical AI highlights the importance of using AI for legitimate reasons while avoiding immoral uses. Organizations that have transparent ethical AI standards will follow strict guidelines, a thorough review process, and clear goals to ensure all standards are met.

TensorFlow Privacy

TensorFlow Privacy is a Python library that includes implementations of TensorFlow optimizers for training machine learning models with differential privacy.
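The core idea behind the library, differentially private SGD, can be illustrated without it: clip each per-example gradient to a fixed L2 norm, sum, and add calibrated Gaussian noise before applying the update. A conceptual numpy sketch (illustrative names, not the TensorFlow Privacy API):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, l2_norm_clip=1.0,
                noise_multiplier=1.1, learning_rate=0.1, rng=None):
    """One differentially private SGD step: clip each example's
    gradient, sum, add Gaussian noise, then average and apply."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is proportional to the clipping bound, so each
    # individual example's influence on the update is bounded.
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip,
                       size=total.shape)
    noisy_mean = (total + noise) / len(per_example_grads)
    return params - learning_rate * noisy_mean
```

The `noise_multiplier` controls the privacy/utility trade-off; the library additionally tracks the cumulative privacy budget, which this sketch does not.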

TensorFlow Federated

TFF has been developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally. 
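The FL loop itself is simple to illustrate without TFF: each client trains a copy of the global model on its private data, and the server averages only the returned weights, never seeing raw examples. A minimal numpy sketch of federated averaging for a toy linear model (illustrative names, not the TFF API):

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=5):
    """Each client fits a linear model y = w.x locally, starting
    from the shared global weights, on its own private data."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in client_data:
            grad = (w @ x - y) * x      # squared-error gradient
            w -= lr * grad
    return w

def federated_averaging(global_weights, clients):
    """One FedAvg round: clients train locally and only the updated
    weights (never the raw data) are averaged, weighted by data size."""
    updates = [local_update(global_weights, data) for data in clients]
    sizes = np.array([len(data) for data in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)
```

Running several rounds converges toward the weights a centralized model would learn, while each client's examples stay on-device.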


deon

deon is a command-line tool that allows you to easily add an ethics checklist to your data science projects. The goal of deon is to push that conversation forward and provide concrete, actionable reminders to the developers who influence how data science gets done.

AI Transparency & Bias 

Transparent AI allows people to look under the hood of AI models, so that a model can be properly explained and communicated. Similar to explainable AI, providing transparency into the motives, data, or intent behind a model takes the guesswork out of using it.

Model Card Toolkit

MCT streamlines and automates the generation of Model Cards [1], machine learning documents that provide context and transparency into a model’s development and performance.
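A model card is ultimately structured documentation, so even without the toolkit the essentials can be captured as data. A hand-rolled illustration (the field names below are made up for the example, not MCT's actual schema):

```python
import json

# A minimal, hand-rolled model card as structured data. The sections
# mirror the kind of context a model card provides: what the model is,
# what it is for, and how it performs.
model_card = {
    "model_details": {"name": "spam-classifier", "version": "1.0"},
    "considerations": {
        "intended_use": "Filtering promotional email",
        "limitations": "Trained on English-language text only",
    },
    "quantitative_analysis": {"accuracy": 0.94,
                              "eval_set": "held-out 20% split"},
}
print(json.dumps(model_card, indent=2))
```

MCT automates generating and rendering this kind of document from your training pipeline, rather than leaving it to hand maintenance.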

TensorFlow Model Remediation

TensorFlow Model Remediation is a library that provides solutions for machine learning practitioners working to create and train models in a way that reduces or eliminates user harm resulting from underlying performance biases.


AI Fairness

A core principle of ethical AI, fairness includes protecting individuals and groups from discrimination, bias, or mistreatment. Models need to be evaluated for fairness so that there’s no bias towards any groups, factors, or variables.

AI Fairness 360

The AI Fairness 360 toolkit from IBM is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.


Fairlearn

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system’s fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as metrics for model assessment.
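As an illustration of the kind of metric these toolkits provide, the demographic parity difference compares positive-prediction (selection) rates across sensitive groups. A pure-numpy sketch of the concept (see Fairlearn for the production implementation):

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Difference between the highest and lowest selection rate
    across sensitive groups; 0 means perfect parity. Conceptual
    sketch of the idea behind Fairlearn's metric of the same name."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)
```

A value near zero says the model selects members of each group at similar rates; a large value flags a disparity worth investigating.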

Responsible AI Toolbox

From Microsoft, the Responsible AI Toolbox is a suite of tools that provides a collection of model and data exploration and assessment user interfaces, enabling a better understanding of AI systems. It supports an approach to assessing, developing, and deploying AI systems in a safe, trustworthy, and ethical manner, and to taking responsible decisions and actions.

AI Explainability

Explainability, aka XAI, is a set of processes and methods that allows human users to better understand and trust the results and output created by machine learning algorithms.


DALEX

The moDel Agnostic Language for Exploration and eXplanation (DALEX) package X-rays any model, helping you explore and explain its behavior and understand how complex models work.
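One model-agnostic technique in this family is permutation importance: shuffle a feature and measure how much the model's error grows. A sketch assuming nothing about the model beyond a callable predict function (conceptual, not DALEX's actual API):

```python
import numpy as np

def permutation_importance(model, X, y, metric, rng=None):
    """Model-agnostic explanation: shuffle one feature at a time
    and measure how much the model's error increases. `model` is
    any callable mapping X to predictions; `metric` is a loss
    where higher means worse."""
    rng = rng or np.random.default_rng(0)
    baseline = metric(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])   # break feature j's link to y
        importances.append(metric(y, model(X_perm)) - baseline)
    return np.array(importances)
```

Features the model actually relies on show a large error increase when permuted; irrelevant features show roughly zero.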

TensorFlow Data Validation

TensorFlow Data Validation (TFDV) is a library for exploring and validating machine learning data. It is designed to be highly scalable and to work well with TensorFlow and TensorFlow Extended (TFX).
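The validation idea can be shown in miniature: describe what each column should look like, then flag records that deviate. A toy pure-Python version (TFDV itself computes statistics over full datasets and validates them against a protobuf schema; this sketch only conveys the concept, and the names are illustrative):

```python
def validate_batch(rows, schema):
    """Check a batch of records against an expected schema mapping
    column name -> (type, min, max); return (row, column, reason)
    anomaly tuples, in the spirit of TFDV's anomaly reports."""
    anomalies = []
    for i, row in enumerate(rows):
        for col, (typ, lo, hi) in schema.items():
            value = row.get(col)
            if not isinstance(value, typ):
                anomalies.append((i, col, "wrong type"))
            elif not (lo <= value <= hi):
                anomalies.append((i, col, "out of range"))
    return anomalies
```

Catching a stray string or out-of-range value before training is far cheaper than debugging the model it silently corrupts.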


XAI

XAI is a machine learning library designed with AI explainability at its core. XAI contains various tools that enable analysis and evaluation of data and models.

Adversarial Machine Learning & Trusted AI

Adversarial machine learning is a machine learning technique that attempts to exploit models by taking advantage of obtainable model information and using it to craft malicious attacks. Responsible AI toolkits can help prevent these attacks and safeguard systems should they occur.
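The canonical example of such an attack is the Fast Gradient Sign Method (FGSM): given the gradient of the loss with respect to the input, push every input feature a small step in the sign of that gradient. A numpy sketch of the perturbation itself:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.1):
    """FGSM: move each input feature by epsilon in the direction
    that increases the model's loss, producing an adversarial
    example that stays within an L-infinity ball of radius epsilon
    around the original input."""
    return x + epsilon * np.sign(grad_wrt_x)
```

The perturbation is often imperceptible to humans yet flips the model's prediction, which is why the libraries below exist both to generate such examples and to train models that resist them.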


Fawkes

Fawkes is an algorithm and software tool that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models from their publicly available photos. This involves subtly distorting, or cloaking, personal images so that they can’t be used effectively by malicious facial recognition models.


TextAttack

TextAttack is a Python framework for adversarial attacks, adversarial training, and data augmentation in NLP. It makes experimenting with the robustness of NLP models seamless, fast, and easy, and is also useful for general NLP model training.
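The augmentation side of such a framework can be sketched simply: swap words for near-synonyms to produce label-preserving variants of a training sentence, in the spirit of TextAttack's word-swap transformations (this is a toy sketch, not the TextAttack API):

```python
import random

def word_swap_augment(text, synonyms, n=1, rng=None):
    """Replace up to n words with synonyms from a user-supplied
    dictionary, producing a perturbed variant of the input text."""
    rng = rng or random.Random(0)
    words = text.split()
    candidates = [i for i, w in enumerate(words) if w in synonyms]
    for i in rng.sample(candidates, min(n, len(candidates))):
        words[i] = rng.choice(synonyms[words[i]])
    return " ".join(words)
```

A robust model should give the same answer on the original and the perturbed sentence; disagreements are exactly the weaknesses adversarial NLP tooling is designed to surface.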


AdverTorch

AdverTorch is a Python toolbox for adversarial robustness research, with its primary functionality implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.

Responsible AI Toolkits and More at ODSC East 2022

Responsible AI is now a cornerstone of ODSC events, including the upcoming ODSC East 2022 this April 19th-21st. The Responsible AI focus area will highlight everything from responsible AI toolkits to other open-source frameworks, tools, and case studies that can help you make sure your AI algorithms and projects are ethical, trustworthy, safe, and unbiased.

Currently, scheduled sessions include:

  • Deploying AI for Climate Adaptation: A Spotlight on Disaster Management
  • Open-source Best Practices in Responsible AI
  • Intro to Trustworthy AI
  • Data Science and Contextual Approaches to Palliative Care Need Prediction
  • Deep Learning Enables a New View in the Agriculture Industry
  • You Too Can Be a Cybersecurity Data Scientist!
  • …and more added each week!

To stay current with responsible AI toolkits and more, subscribe to our newsletter for more case studies, news, and tutorials. You can also register for ODSC East 2022 now to save 60% on all ticket types so you can learn more about responsible, ethical, and trustworthy AI.



ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.