Reverse Dependencies of fairlearn
The following projects have a declared dependency on fairlearn:
- aequitas — The bias and fairness audit toolkit.
- aia-fairness — Various attribute inference attacks tested against fairness-enforcing mechanisms
- aif360 — IBM AI Fairness 360
- aif360-fork2 — IBM AI Fairness 360
- AutoBrewML — A framework that greatly accelerates producing production-ready ML models with ease and efficiency.
- azureml-responsibleai — AzureML Responsible AI package
- cortex-cli — Nearly Human Cortex CLI for interacting with model functions.
- credoai-lens — Lens: comprehensive assessment framework for AI systems
- deeploy — The official Deeploy client for Python
- equal-odds — _PACKAGE UNDER CONSTRUCTION_
- equalityml — Algorithms for evaluating fairness metrics and mitigating unfairness in supervised machine learning
- equalityml-fork2 — Algorithms for evaluating fairness metrics and mitigating unfairness in supervised machine learning
- EthicML — A toolkit for understanding and researching algorithmic bias
- genda-lens — A package for quantifying bias in Danish language models.
- lale — Library for Semi-Automated Data Science
- lares — LARES: vaLidation, evAluation and REliability Solutions
- oracle-guardian-ai — Oracle Guardian AI Open Source Project
- pre-ai-python — Microsoft AI Python Package
- pureml-evaluate — no summary
- pureml-policy — no summary
- pycaret — PyCaret - An open source, low-code machine learning library in Python.
- pycaret-nightly — Nightly version of PyCaret - An open source, low-code machine learning library in Python.
- pycaret-ts-alpha — PyCaret - An open source, low-code machine learning library in Python.
- raiwidgets — Interactive visualizations to assess fairness, explain models, generate counterfactual examples, analyze causal effects and analyze errors in Machine Learning models.
- skops — A set of tools to push scikit-learn based models to and pull from Hugging Face Hub
- sliceguard — A library for detecting critical data slices in structured and unstructured data based on features, metadata and model predictions.
- torchmetrics — PyTorch native Metrics
- whyshift — A package of specified distribution shift patterns for the out-of-distribution generalization problem on tabular data, with integrated tools for diagnosing model performance.
- XplainML — XplainML is a comprehensive Python package designed for Explainable AI (XAI) and Responsible AI practices. It provides a suite of tools and algorithms to enhance the transparency, interpretability, and fairness of machine learning models.
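A "declared dependency" here means that fairlearn appears in a project's package metadata (its `Requires-Dist` entries). A minimal sketch of checking this for an installed package, using only the standard library's `importlib.metadata` (the `declares_fairlearn` helper name and its simple requirement parsing are illustrative assumptions, not part of any of the projects above):

```python
from importlib import metadata

def declares_fairlearn(package_name: str) -> bool:
    """Return True if the installed package lists fairlearn among its
    declared requirements (Requires-Dist metadata)."""
    try:
        # requires() returns a list of requirement strings, or None
        # if the package declares no dependencies.
        requires = metadata.requires(package_name) or []
    except metadata.PackageNotFoundError:
        # Package is not installed, so nothing to inspect.
        return False
    # Naive parsing for the sketch: take the part before any environment
    # marker (";") and check whether the requirement names fairlearn.
    return any(
        req.split(";")[0].strip().split(" ")[0].split("=")[0].rstrip("<>!~")
        .lower() == "fairlearn"
        for req in requires
    )
```

For example, with `raiwidgets` installed, `declares_fairlearn("raiwidgets")` would be expected to return True, while a package with no such requirement returns False.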