1. Introduction

In this tutorial, we’ll review the differences between bias and error. First, we’ll define and describe the two terms. Next, we’ll review the different types of errors and biases in machine learning and how to resolve them. Finally, we’ll highlight the differences between the two.

2. What Is Bias?

In machine learning, a bias refers to a scenario where a system or model produces systematically prejudiced results.

To understand bias, we must look into its underlying causes. Bias can occur at any stage of the machine learning process. It may result from certain features or values in a dataset, or from the model’s configuration. For example, a dataset may be biased towards a certain group of people if it mostly contains information on that group.

A biased model results in inaccurate predictions and skewed outcomes. For instance, in our previous example of the dataset, any results produced from using that data will not be representative of the entire population.

2.1. Types of Bias

Different types of biases exist, occurring at different stages in the machine learning process. We’ll look at a few of these in this section.

Algorithmic bias occurs when the algorithm produces prejudiced results for the task at hand. For example, suppose we have a machine learning system for assigning scores to candidates based on tasks performed. If the underlying algorithm starts assigning lower scores to female candidates as compared to their male counterparts, we would call this algorithmic bias.

In contrast, sampling bias happens during data collection or data selection, when the data at hand is skewed towards a particular subset of the population. For example, in our score-assigning system, suppose we sampled more males than females for the training data. This would produce results that are prejudiced towards males.
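One simple way to spot this kind of sampling bias is to count how each group is represented in the training data. The sketch below uses plain Python and entirely hypothetical data; the 1.5 imbalance threshold is an illustrative choice, not a standard value:

```python
from collections import Counter

# Hypothetical training sample for the score-assigning example:
# each record carries the candidate's group label.
train_labels = ["male"] * 80 + ["female"] * 20

counts = Counter(train_labels)
total = sum(counts.values())
proportions = {group: count / total for group, count in counts.items()}

print(proportions)  # {'male': 0.8, 'female': 0.2}

# A large gap between group proportions signals possible sampling bias.
is_imbalanced = max(proportions.values()) / min(proportions.values()) > 1.5
print(is_imbalanced)  # True
```

If a group is heavily under-represented, we can collect more data for it or re-weight the samples before training.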

Confirmation bias occurs when we have preconceived assumptions about the task at hand. In our example of the score-assigning system, suppose we know from experience that males perform better than females on certain tasks. If the system outputs results that contradict what we know, we might intentionally or unintentionally tune the training process to get results that are consistent with our assumptions.

Other types of biases exist such as prejudice bias, aggregation bias, deployment bias and measurement bias.

2.2. How Do We Identify and Deal with Bias?

Depending on the task at hand, we can continually assess the outcomes of machine learning systems to identify whether there is some form of prejudice in the results. Alternatively, we can check whether the data is skewed by graphing its distribution, as below, before it is used for any machine learning task:

[Figure: histogram showing a skewed data distribution]

For non-numeric data, we can explore the data using exploratory data analysis. There are also tools that have been developed to detect and eliminate bias in machine learning models. Google’s What-If tool is an example of this: it is open-source software that allows users to probe their machine learning models, analyze their data and the outputs produced, and essentially identify any bias.
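Besides eyeballing a plot, we can quantify how skewed a numeric feature is. The sketch below computes the Fisher-Pearson coefficient of skewness with only the standard library; the feature values are hypothetical:

```python
import statistics

def skewness(values):
    """Fisher-Pearson coefficient of skewness: roughly 0 for a symmetric
    sample, positive when the right tail is longer, negative for the left."""
    n = len(values)
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return sum((x - mean) ** 3 for x in values) / (n * std ** 3)

# A hypothetical feature column: most values cluster low, a few are very high.
ages = [21, 22, 22, 23, 23, 24, 25, 26, 60, 65]
print(skewness(ages))  # positive, so the distribution is right-skewed
```

A strongly non-zero value is a hint to inspect the data further before training on it.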

IBM’s AI Fairness 360 is another tool that helps users to identify bias and additionally eliminate bias. It offers tools to analyze and identify bias in machine learning models using fairness metrics. Furthermore, users can eliminate bias in their models using the built-in algorithms provided by the application.

3. What Is an Error?

In general, an error refers to anything that is incorrect or inaccurate. In machine learning, errors give insight into how accurate predictions are; more precisely, an error is the difference between a prediction and the ground-truth value.

In machine learning, errors can arise randomly from the model itself or from the model’s configuration. These errors result in inaccurate predictions or outcomes from the machine learning model in use.

3.1. Types of Errors

The most general type of error in machine learning is prediction error. Prediction errors are the actual differences between what is predicted and the observed values. There are other specific types of errors that exist as well. We’ll discuss a few of these in this section.

Other common types of errors in machine learning are Type I and Type II errors. A Type I error, or false positive, occurs when a true null hypothesis is rejected. For example, suppose we have a system that predicts diseases in patients. If the system predicts that patient A has a disease when in fact they don’t, this is known as a Type I error.

A Type II error, or false negative, occurs when a false null hypothesis is accepted. In our disease-prediction example, suppose the system predicts that patient B doesn’t have a disease when in fact they do. This is what we call a Type II error.
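We can count both kinds of error directly by comparing predictions against the ground truth. The sketch below uses hypothetical labels for the disease-prediction example, with 1 meaning "has the disease":

```python
# Hypothetical disease-prediction results: 1 = has disease, 0 = healthy.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 1, 0, 1, 0, 0, 1, 0]

# Type I error (false positive): predicted 1 when the truth is 0.
type_1 = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
# Type II error (false negative): predicted 0 when the truth is 1.
type_2 = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

print(type_1, type_2)  # 1 1
```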

3.2. How Do We Identify and Deal with Errors?

To identify prediction errors, we usually use metrics such as the root mean squared error (RMSE). RMSE measures how far the predictions are from the actual values in the data:

\[ RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2} \]

where \(y_i\) is the observed value, \(\hat{y}_i\) is the predicted value, and \(n\) is the number of samples.
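The RMSE formula translates directly into a few lines of Python. The sketch below uses small made-up value lists just for illustration:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

actual = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]
print(rmse(actual, predicted))  # about 0.94
```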

False positive errors are usually determined by calculating the false positive rate, where FP is the number of false positive samples, and TN is the number of true negative samples:

\[ FPR = \frac{FP}{FP + TN} \]

Similarly, false negative errors are usually determined by calculating the false negative rate, where FN is the number of false negative samples, and TP is the number of true positive samples:

\[ FNR = \frac{FN}{FN + TP} \]

Most importantly, errors can often be reduced by adjusting the model’s configuration so that it produces more accurate results. This can be done through hyperparameter tuning methods.
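Hyperparameter tuning in its simplest form is a grid search: try each candidate value, measure the error on held-out data, and keep the best. The sketch below tunes the number of neighbours \(k\) for a tiny plain-Python k-NN regressor; the model choice and all data points are illustrative:

```python
import math
import statistics

def knn_predict(train, x, k):
    """Predict y for x as the mean target of the k nearest training points."""
    nearest = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    return statistics.fmean(y for _, y in nearest)

def validation_rmse(train, valid, k):
    """RMSE of k-NN predictions on a held-out validation set."""
    errors = [(y - knn_predict(train, x, k)) ** 2 for x, y in valid]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical noisy samples of y = 2x, split into train and validation sets.
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (5, 9.8), (6, 12.3)]
valid = [(1.5, 3.0), (3.5, 7.0), (5.5, 11.0)]

# Grid search: evaluate each candidate k and keep the one with lowest error.
best_k = min(range(1, 5), key=lambda k: validation_rmse(train, valid, k))
print(best_k)  # 2
```

Real projects would use richer search strategies (random search, Bayesian optimization) and cross-validation, but the principle is the same: pick the configuration that minimizes a validation error metric.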

4. Differences

There are subtle differences between biases and errors in machine learning. We sum them up as follows:

Bias                                                        | Error
------------------------------------------------------------|------------------------------------------------------------
Produces systematically prejudiced results                   | Produces inaccurate results
Caused by skewed data or the model’s configuration           | Caused randomly by the model or its configuration
Identified manually or with tools such as What-If and AI Fairness 360 | Identified through metrics such as RMSE, the false positive rate, and the false negative rate

5. Conclusion

In this tutorial, we’ve reviewed biases and errors. Bias produces systematically prejudiced results while errors produce inaccurate results. Errors are usually identified through calculations and metrics such as false positive rates, false negative rates and RMSE. Biases are identified manually or through available software programs such as the What-If tool and AI Fairness 360.
