
How to Handle Missing Data in Machine Learning (Part 1)

Introduction

In the world of data science, missing data is a common issue that can impact the performance of machine learning models. In data scientist job interviews, I often ask candidates how they would handle missing data in a machine learning project. Surprisingly, many struggle to choose the most appropriate method. This article explores various techniques to handle missing data and helps you understand their pros and cons.

Why Missing Data Matters

Missing data can introduce bias, reduce statistical power, and lead to inaccurate conclusions. It’s crucial to handle it properly to ensure the robustness of your models.

Common Methods to Handle Missing Data

1. Deletion Methods

Listwise Deletion

Listwise deletion, also known as complete case analysis, removes any row that contains at least one missing value.

  • Pros: Simple to implement.
  • Cons: Can lead to significant data loss, especially when missing values are scattered across many columns, since a whole row is dropped for a single missing entry.
# Drop every row that contains at least one missing value
df.dropna(inplace=True)

Pairwise Deletion

Pairwise deletion, on the other hand, uses all available data rather than discarding entire rows: each analysis is run on the cases that are complete for the variables it involves, preserving as much data as possible.

  • Pros: Retains more data compared to listwise deletion.
  • Cons:
    • Can complicate analysis and lead to inconsistencies.
    • The effective sample size varies depending on which variables are involved.
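A quick way to see pairwise deletion in practice: pandas applies it by default when computing correlations, so each entry of the correlation matrix is based only on the rows that are complete for that particular pair of columns (the toy DataFrame below is illustrative).

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [1.0, 2.0, np.nan, 4.0],
    'b': [10.0, np.nan, 30.0, 40.0],
    'c': [5.0, 6.0, 7.0, 8.0],
})

# Pairwise deletion: each correlation uses only the rows complete
# for that specific pair, so sample sizes differ across cells
print(df.corr())

# Listwise deletion for comparison: rows with any missing value
# are dropped before every correlation is computed
print(df.dropna().corr())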

2. Imputation Methods

Mean/Median Imputation

Mean or median imputation involves replacing missing values with the mean or median of the observed values for that variable.

  • Pros: Easy to implement and suitable for numerical data.
  • Cons: Can introduce bias and underestimates the variability of the data.
# Assign back rather than filling in place on a column selection
df['column'] = df['column'].fillna(df['column'].mean())

Mode Imputation

Mode imputation is used for categorical data: missing values are replaced with the most frequent value (the mode) of that variable.

  • Pros: Simple and effective for categorical data.
  • Cons: Can distort frequency distributions.
# mode() returns a Series, so take its first (most frequent) value
df['column'] = df['column'].fillna(df['column'].mode()[0])

K-Nearest Neighbours (KNN) Imputation

KNN imputation uses the values from the k-nearest neighbours (similar data points) to fill in missing values.

  • Pros: Can provide more accurate imputations by considering the similarity of samples.
  • Cons: Computationally intensive on large datasets; sensitive to outliers and to feature scaling.
import pandas as pd
from sklearn.impute import KNNImputer

# Fill each missing value from the 5 most similar rows
imputer = KNNImputer(n_neighbors=5)
# fit_transform returns a NumPy array, so rebuild the DataFrame
df_filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

Multiple Imputation

Multiple imputation creates several imputed datasets rather than a single one. You then fit your analysis on each dataset and pool the results, which accounts for the uncertainty introduced by the imputation itself.

  • Pros: Accounts for uncertainty, produces robust estimates.
  • Cons: More complex and time-consuming.
# The experimental import is required to enable IterativeImputer
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer

# Models each feature with missing values as a function of the others;
# by default this yields a single imputed dataset
imputer = IterativeImputer()
df_filled = imputer.fit_transform(df)

Advanced Imputation Techniques (e.g., MICE, Predictive Mean Matching)

Advanced imputation techniques, such as Multiple Imputation by Chained Equations (MICE) and Predictive Mean Matching, model the relationships between variables to produce more plausible imputations.

  • Pros: Produces more plausible values by modelling relationships between variables.
  • Cons: Requires more computational resources and expertise.
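scikit-learn does not expose MICE under that name, but its IterativeImputer is modelled on it. A rough sketch of MICE-style multiple imputation under that assumption: draw several completed datasets with sample_posterior=True, compute the statistic of interest on each, and pool the results (the helper name and column argument here are illustrative).

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer

def pooled_mean(df, column, n_imputations=5):
    estimates = []
    for seed in range(n_imputations):
        # sample_posterior=True draws imputed values from a predictive
        # distribution, so each run produces a different completed dataset
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
        estimates.append(completed[column].mean())
    # The spread across imputations reflects imputation uncertainty
    return np.mean(estimates), np.std(estimates)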

3. Model-Based Methods

Using Algorithms that Handle Missing Data

Some machine learning algorithms, such as XGBoost, handle missing data internally: at each tree split the algorithm learns a default direction to send missing values, so no separate imputation step is required.

  • Pros: Simplifies the process, leverages the algorithm’s capability.
  • Cons: Limited to specific algorithms.
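A minimal sketch with XGBoost, which accepts NaN directly in the feature matrix (the toy data below is illustrative):

import numpy as np
from xgboost import XGBClassifier

X = np.array([[1.0, np.nan],
              [2.0, 3.0],
              [np.nan, 4.0],
              [5.0, 6.0]])
y = np.array([0, 1, 0, 1])

# XGBoost routes NaNs down a learned default branch at each split,
# so X can be passed in without any imputation step
model = XGBClassifier(n_estimators=10)
model.fit(X, y)
print(model.predict(X))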

Choosing the Right Method

When deciding how to handle missing data, consider the type of data, the share of missing values, and the impact on your analysis. For instance, mean imputation may suffice for a small dataset with few missing values, while multiple imputation is better suited to larger datasets with more extensive missingness.
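Whichever method you lean towards, it is worth quantifying the missingness first; a quick pandas check:

# Percentage of missing values per column, highest first
print((df.isna().mean() * 100).round(1).sort_values(ascending=False))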

Curious to see these methods in action and compare their effectiveness? Check out Part 2 of this article. We’ll use the Titanic dataset to demonstrate and analyse different imputation techniques.
