Transforming Skewed Data for Machine Learning

Skewed data is common in data science; skew is the degree of distortion from a normal distribution. For example, below is a plot of the house prices from Kaggle's House Prices Competition; the distribution is right-skewed, meaning a minority of the values are very large.
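
A minimal sketch that reproduces a plot like this one, assuming the competition's training data has been saved locally as train.csv (the file path is an assumption; the column name comes from the Kaggle dataset):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the response variable (sale price) from the Kaggle training data
resp = pd.read_csv("train.csv")["SalePrice"]

# Histogram of sale prices; the long right tail indicates right skew
plt.hist(resp, bins=50)
plt.xlabel("SalePrice")
plt.ylabel("Count")
plt.show()
```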

Why do we care if the data is skewed? If the response variable is skewed, as in Kaggle's House Prices Competition, the model will be trained on a much larger number of moderately priced homes and will be less likely to successfully predict the price of the most expensive houses. The concept is the same as training a model on imbalanced categorical classes. If the values of a certain independent variable (feature) are skewed, then depending on the model, skewness may violate model assumptions (e.g. the normally distributed residuals assumed by linear regression) or may impair the interpretation of feature importance.

We can objectively determine whether a variable is skewed using the Shapiro-Wilk test. The null hypothesis for this test is that the data is a sample from a normal distribution, so a p-value less than 0.05 indicates a significant departure from normality. We'll apply the test to the response variable Sale Price above, labeled "resp", using Scipy.stats in Python.
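
A minimal sketch of that test, assuming resp is the sale price Series loaded above:

```python
from scipy import stats

# Null hypothesis: the data is a sample from a normal distribution
stat, p_value = stats.shapiro(resp)
print(f"W = {stat:.4f}, p-value = {p_value:.3g}")
```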

The p-value is, not surprisingly, less than 0.05, so we can conclude that the variable is skewed. A more convenient way of evaluating skewness is with pandas' ".skew" method, which calculates the Fisher-Pearson standardized moment coefficient for every numeric column in a dataframe. We can calculate it for all the features in Kaggle's Home Value dataset (labeled "df") simultaneously with the following code.
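
A sketch of that step, reading the full training data into df (the file path is an assumption, as above):

```python
import pandas as pd

# Fisher-Pearson skew for every numeric column, most skewed first
df = pd.read_csv("train.csv")
print(df.skew(numeric_only=True).sort_values(ascending=False))
```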


A few of the variables, like Pool Area, are highly right-skewed due to lots of zeros; this is okay. Some models, like decision trees, are fairly robust to skewed features.

We can address skewed variables by transforming them (i.e. applying the same function to each value). Common transformations include square root (sqrt(x)), logarithmic (log(x)), and reciprocal (1/x). We’ll apply each in Python to the right-skewed response variable Sale Price.

Square Root Transformation
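
A sketch of this transformation, applied to the resp Series loaded earlier:

```python
import numpy as np
import matplotlib.pyplot as plt

# Square root transform each sale price and inspect the new distribution
sqrt_resp = np.sqrt(resp)
print(f"Skew after sqrt: {sqrt_resp.skew():.2f}")

plt.hist(sqrt_resp, bins=50)
plt.xlabel("sqrt(SalePrice)")
plt.ylabel("Count")
plt.show()
```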

After transforming, the data is definitely less skewed, but there is still a long right tail.

Reciprocal Transformation
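
The same sketch with the reciprocal, again using resp from earlier:

```python
import matplotlib.pyplot as plt

# Reciprocal transform: replace each sale price x with 1/x
recip_resp = 1 / resp
print(f"Skew after reciprocal: {recip_resp.skew():.2f}")

plt.hist(recip_resp, bins=50)
plt.xlabel("1 / SalePrice")
plt.ylabel("Count")
plt.show()
```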

Still not great; the distribution above is not quite symmetrical.

Log Transformation
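
And the log version of the same sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

# Natural log transform each sale price
log_resp = np.log(resp)
print(f"Skew after log: {log_resp.skew():.2f}")

plt.hist(log_resp, bins=50)
plt.xlabel("log(SalePrice)")
plt.ylabel("Count")
plt.show()
```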

The log transformation seems to be the best, as the distribution of transformed sale prices is the most symmetrical.

Box-Cox Transformation

An alternative to manually trying a variety of transformations is the Box-Cox transformation. For each variable, the Box-Cox transformation estimates the value of lambda, from -5 to 5, that maximizes the normality of the data using the equation below.

$$
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda} & \text{if } \lambda \neq 0 \\
\log(y) & \text{if } \lambda = 0
\end{cases}
$$

For negative values of lambda, the transformation performs a variant of the reciprocal of the variable. At a lambda of zero, the variable is log transformed, and for positive lambda values, the variable is raised to the power of lambda. We can apply "boxcox" to all the skewed variables in the dataframe "df" using Scipy.stats.
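
A sketch of this step, assuming df is the training dataframe from above with missing values already handled; the 0.75 skew threshold and the shift applied to non-positive columns are illustrative choices, not part of the original article:

```python
from scipy import stats

# Columns whose absolute skew exceeds an (arbitrary) 0.75 threshold
skew_vals = df.skew(numeric_only=True)
skewed_cols = skew_vals[skew_vals.abs() > 0.75].index

for col in skewed_cols:
    # boxcox requires strictly positive input, so shift any column
    # containing zeros (e.g. Pool Area) or negative values
    shift = 1 - df[col].min() if df[col].min() <= 0 else 0
    # With the default lmbda=None, boxcox also returns the estimated lambda
    df[col], best_lambda = stats.boxcox(df[col] + shift)
```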

Skewness is reduced quite a bit! The Box-Cox transformation is not a panacea for skew, however; some variables cannot be transformed to be normally distributed.

Transforming skewed data is one critical step during the data cleaning process. See this article to learn about dealing with imbalanced categorical classes.


Interested in learning more about machine learning? Check out these Ai+ training sessions:

Machine Learning Foundations: Linear Algebra

This first installment in the Machine Learning Foundations series covers the topic at the heart of most machine learning approaches. Through a combination of theory and interactive examples, you'll develop an understanding of how linear algebra is used to solve for unknown values in high-dimensional spaces, thereby enabling machines to recognize patterns and make predictions.

Supervised Machine Learning Series

Data Annotation at Scale: Active and Semi-Supervised Learning in Python

Explaining and Interpreting Gradient Boosting Models in Machine Learning

ODSC West 2020: Intelligibility Throughout the Machine Learning Lifecycle

Continuously Deployed Machine Learning 

Nathaniel Jermain

Nathaniel builds and implements predictive models for a fish research lab at the University of Southern Mississippi. His work informs the management of marine resources in applications across the United States. Connect with Nathaniel on LinkedIn: linkedin.com/in/njermain/
