We build solutions, services, and products that help companies, developers, analysts, and policymakers audit machine learning models for problems such as bias, discrimination, and gaps in security and quality, and make informed, equitable decisions about developing and deploying predictive risk-assessment tools.

Why we created Biasfix

Predictive tools based on machine learning, AI, and data science are increasingly used for problems that can have a drastic impact on people’s lives, in policy areas such as criminal justice, education, public health, workforce development, and social services. Recent work has raised concerns about the risk of unintended bias in these models unfairly affecting individuals from certain groups. While many bias metrics and fairness definitions have been proposed, there is no consensus on which should be used in practice to evaluate and audit these systems. Further, there has been very little empirical work on applying and evaluating these measures on real-world problems, especially in public policy.

Biasfix can be used to audit the predictions of machine-learning-based risk assessment tools, to understand different types of bias, and to make informed decisions about developing and deploying such systems.

Different bias and fairness criteria need to be used for different types of interventions. Biasfix allows audits to be done across multiple metrics:

Equal Parity

Also known as Demographic or Statistical Parity
If you want each group represented equally among the selected set

Proportional Parity

Also known as Impact Parity or Minimizing Disparate Impact
If you want each group represented proportional to their representation in the overall population

False Positive Parity

Desirable when your interventions are punitive
If you want each group to have equal False Positive Rates

False Negative Parity

Desirable when your interventions are assistive/preventative
If you want each group to have equal False Negative Rates
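As a rough illustration of what these four criteria measure, the sketch below computes, for each group, its share of the selected set (equal parity), its share of the overall population (proportional parity), and its false positive and false negative rates. This is not the Biasfix API; the function and field names (`group`, `selected`, `outcome`) are made up for this example.

```python
def group_metrics(records):
    """Compute the per-group quantities behind the four parity criteria.

    records: list of dicts with illustrative keys 'group',
    'selected' (bool, was this person picked for intervention?) and
    'outcome' (bool, the true label).
    """
    stats = {}
    for r in records:
        s = stats.setdefault(r["group"], {"n": 0, "selected": 0,
                                          "fp": 0, "fn": 0,
                                          "pos": 0, "neg": 0})
        s["n"] += 1
        s["selected"] += r["selected"]
        s["pos"] += r["outcome"]
        s["neg"] += not r["outcome"]
        s["fp"] += r["selected"] and not r["outcome"]   # selected but negative
        s["fn"] += (not r["selected"]) and r["outcome"]  # missed positive

    total_selected = sum(s["selected"] for s in stats.values())
    total_n = sum(s["n"] for s in stats.values())
    report = {}
    for g, s in stats.items():
        report[g] = {
            # Equal parity: is each group's share of the selected set equal?
            "selected_share": s["selected"] / total_selected,
            # Proportional parity: compare selected_share to this share
            "population_share": s["n"] / total_n,
            # False positive parity: FPR = FP / actual negatives
            "fpr": s["fp"] / s["neg"] if s["neg"] else None,
            # False negative parity: FNR = FN / actual positives
            "fnr": s["fn"] / s["pos"] if s["pos"] else None,
        }
    return report
```

Equal parity holds when every group's `selected_share` is (roughly) the same; proportional parity when `selected_share` tracks `population_share`; the error-based criteria when `fpr` or `fnr` is (roughly) equal across groups.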

What do you need to use Biasfix?

You can audit your risk assessment system for two types of biases:

  1. Biased actions or interventions that are not allocated in a way that’s representative of the population.
  2. Biased outcomes through actions or interventions that are a result of your system being wrong about certain groups of people.

For both of those audits, you need the following data:

Data about the overall population considered for interventions, along with the protected attributes you want to audit for each individual (for example race, gender, age, or income).

The set of individuals in the above population that your risk assessment system recommended or selected for intervention or action. It’s important that this set come from assessments made after the system was built, not from the data the machine learning system was “trained” on. You can also audit the training set, but it’s critical to run the audit on the population going forward.

If you want to audit for biases due to disparate errors of your system, then you also need to collect (and provide) actual outcomes for the individuals who were selected and not selected. In order to collect this information, you may need to run a trial and/or hold out part of the data from the recent past when building your machine learning system.
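Concretely, the audit input can be as simple as one row per individual in the considered population. The column names below are illustrative, not a required Biasfix schema: `selected` marks whether the system recommended the person for intervention, and `outcome` (the observed result) is needed only for the error-based audits.

```python
import csv
import io

# Hypothetical audit input: one row per individual, with protected
# attributes, the system's selection decision, and the actual outcome.
sample = """entity_id,race,gender,age_band,selected,outcome
1,black,female,18-25,1,0
2,white,male,26-35,0,1
3,asian,female,36-45,1,1
"""

rows = list(csv.DictReader(io.StringIO(sample)))
```

For the first type of audit (biased allocation), the `selected` column and the protected attributes are enough; the `outcome` column is only required for the second type (biased errors).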

How can you use Biasfix?

Web Audit Tool

Try our Audit Tool to generate a Bias Report:
1. Upload Data (or use pre-loaded sample data)
2. Configure (bias metrics of interest and reference groups)
3. Generate the Bias Report

Python Library

Use our python code library to generate bias and fairness metrics on your data and predictions.
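The library’s exact API isn’t shown here, so the sketch below only illustrates the kind of computation it performs: expressing each group’s selection rate as a disparity ratio relative to a chosen reference group (the reference-group idea matches the configuration step of the web tool). The function name and arguments are made up for illustration.

```python
def disparity_ratios(selection_counts, group_sizes, reference_group):
    """Each group's selection rate divided by the reference group's rate.

    A ratio near 1.0 suggests proportional parity; values far from 1.0
    flag potential disparate impact (a common rule of thumb from
    employment law is the 0.8 'four-fifths' threshold).
    """
    rates = {g: selection_counts[g] / group_sizes[g] for g in group_sizes}
    ref_rate = rates[reference_group]
    return {g: rate / ref_rate for g, rate in rates.items()}
```

For example, if 50 of 100 people in the reference group are selected but only 20 of 100 in another group, that group’s disparity ratio is 0.4, well below the four-fifths threshold.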

Command Line Tool

Use our command line tool to generate a report using your own data and predictions.

What does Biasfix produce?


Measuring and mitigating bias in business algorithms

We can help your company by:

- Identifying bias
- Mitigating/eliminating bias
- Managing bias
- Monitoring bias
- Producing bias reports

Measuring and mitigating bias and discrimination (gender, ethnicity, and race)

We can identify and eliminate bias in:

- Facial recognition systems
- Terms in texts (articles, reports, papers)
- Databases
- Reports, documents, and spreadsheets

Inclusion/Diversity Certification: evaluating a company’s inclusiveness by analyzing its staff (gender, ethnicity, and race)

We can help your company identify bias in its staff by:

- Ranking your company against regional and global markets
- Producing reports with your company’s inclusion rate

Security, quality and governance in machine learning data models
We can help you analyze the levels of security, quality, and governance in your company’s machine learning and deep learning models.

The Team

Biasfix was created by professionals with long careers and deep experience in data analytics and software development. Our goal is to further the use of data science in policy research and practice. Our work includes educating current and future policymakers, doing data science projects with government, nonprofit, academic, and foundation partners, and developing new methods and open-source tools that support and extend the use of data science for public policy and social impact in a measurable, fair, and equitable manner.


Adolfo Eliazàt

Product Owner

Ana Wang

Data Scientist

Kevin Neubecker

Data Scientist

Josh Steiner

Interested in creating data-driven policies and systems that are fair and equitable?

Talk to us. We’re building a series of bias, fairness, and equity audit tools, trainings, and methodologies for governments, non-profits, and corporations.