
The holiday shopping season is in full swing! The economy is relatively strong compared to a few years back, so retail sales are likely to be strong, especially for Amazon. Other retailers like Target and Wal-Mart are also running impressive Black Friday and holiday sales to attract customers. However, Amazon has consistently shown it can outwit these retail giants with a greater selection, better customer service, and sophisticated pricing. In the last post I showed you how impressive its growth since 1996 has been. In fact, Amazon's 2015 second quarter revenue is more than 800 times its 1997 Q2 revenue!

In this post I will show you how to take the web scraped data from the previous post and create a time series. In case you missed it, check out the last post to get the data. I will show you how to decompose Amazon's quarterly revenue and then make a simple forecast for the Q4 holiday sales season.

Recall that the all.df data frame was organized with 4 columns. It contains 80 rows representing quarterly revenue starting in 1996. As a refresher, calling the head function shows the first few rows of the data, as in the table below.

head(all.df)

[table id=21 /]

The first 6 rows of all.df show the raw and cleaned Amazon quarterly revenue.

When you are starting out with forecasting I suggest the forecast package. It contains many standard yet accurate forecasting methods. I will show you how to use two methods to understand a time series. After loading the package, change the initial NA values to 0 using is.na in the second line of code below. Of course you could handle NAs differently, but these occur early in the time series so I simply switched them to zero.

library(forecast)
all.df[is.na(all.df)] <- 0

After changing the NA values to 0 you can convert the revenue column to a time series object. The time series object captures not only the revenue values but also the meta-information associated with them. In all.df the meta-information is the periodicity. The repeating pattern of Amazon's revenue needs to be captured as a time series so the forecast package can work its magic.

Using the ts function, pass in the numeric revenue vector called revenue. Within the function specify the frequency. Since our data is quarterly, frequency=4. If your data is daily change this parameter to 365, and use 52 for weekly data. Make sure this input matches the inherent periodicity of your data! The last parameter, start=1996, simply tells the ts function where the series begins.

data.ts<-ts(all.df$revenue, frequency=4, start=1996)

I always examine the time series object in the console to make sure it is organized the way I expected. I have been known to make mistakes with my frequency and start inputs! Call the data.ts object in the console. The output below shows Amazon's quarterly revenue is now organized from a linear vector into rows representing years and columns representing quarters.
data.ts


Amazon’s quarterly revenue represented as a time series object with annual rows and quarterly columns.

Time Series Decomposition

Within the stats package there is a function called decompose. This function deconstructs a time series object into three parts: the trend, the seasonality, and the random remainder of the original time series.
components<-decompose(data.ts)

Calculating Trend

First, using a moving average, the function calculates the trend of the time series. This is the overall upward, downward or stationary relationship the revenue has with time. Given Amazon's growth and success, I expect the trend to be moving upward in an exponential fashion. Plus, in the previous post I observed exponential growth when reviewing only Q2 revenues.

Calculating Seasonality

Next, decompose uses averages to understand the seasonality. In this case, the average of all Q1 values (minus the trend) is calculated. This process is repeated for Q2 and so on. Once there are four quarterly averages, the values are centered. The seasonality represents the repeating pattern within the time series. For instance, the seasonality values may capture the fact that every Q4 Amazon's sales jump by $1B+ compared to Q3. In the last post, visually examining the line chart revealed a repeating peak. So I would expect the seasonality in this decomposition to be strong and look like a saw tooth. Every year we should expect a Q4 peak, with a comparative Q1 reduction. Keep in mind the periodicity may impact the seasonality, so be sure to understand whether your data is in weeks or months, not just quarters.
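
If you are curious how decompose arrives at those quarterly averages, the minimal sketch below walks through the same idea by hand. It assumes the data.ts object from above and is purely illustrative; decompose does all of this for you.

# Centered moving average captures the trend for quarterly (frequency = 4) data
trend_ma <- stats::filter(data.ts, filter = c(0.5, 1, 1, 1, 0.5) / 4, sides = 2)
# Remove the trend, then average the leftovers by quarter
detrended <- data.ts - trend_ma
quarter_avg <- tapply(detrended, cycle(data.ts), mean, na.rm = TRUE)
# Center the averages so the seasonal indices sum to zero
seasonal_idx <- quarter_avg - mean(quarter_avg)
seasonal_idx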

Accounting for “random”

The leftover values not accounted for by either the trend or the seasonality are the error terms. The error terms are called "random" in this method. However, the values may not be true random noise. A forecaster could further model the time series to account for events like significant competitor sales or snow storms forcing more shoppers online instead of into brick and mortar stores.

Putting it all together

In this basic example I am using an additive model. An additive model assumes the differences between periods are the same once trend is accounted for. So the difference between Q1 and Q2 is roughly the same each year. The starting points for Q1 and Q2 in subsequent years change because of trend, but the impact of the holiday shopping season is the same each Q4.

A decomposed additive model uses the simple equation below. The quarterly revenue at time period "t" is the sum of the trend at time t, the seasonality at time t, and the error at time t.

Y[t] = T[t] + S[t] + e[t]

To make the equation concrete, at time period t10, which is Q2 1998, the value is made up of the trend moving average of $129,140,000, plus the seasonal Q2 impact of -$836,072,105, plus the leftover random value of $822,912,105. Adding it all up, Q2 1998 is $115,980,000.
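
If you want to check that identity yourself, the snippet below pulls Q2 1998 out of each component and adds them back together. It assumes the components object created earlier and should reproduce the original value.

# Extract Q2 1998 from each component and sum them
q <- c(1998, 2)
window(components$trend, start = q, end = q) +
  window(components$seasonal, start = q, end = q) +
  window(components$random, start = q, end = q)
# Compare to the original series
window(data.ts, start = q, end = q)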

Once the previous code decomposes the time series, you can reference each individual component with the $ operator.

components$seasonal
components$trend
components$random

I like to visually examine data as much as possible. For me it is the best way to draw insights and make conclusions. Luckily it is easy to plot the components by calling either autoplot or plot on the components object.

plot(components)


Amazon’s quarterly revenue shows strong upward trend & seasonality from the holiday shopping season.

As expected there is a clear upward trend in the data. Additionally, I expected to see strong seasonality represented in a repeating pattern. This is characterized by the "saw tooth" in the seasonal section of the above plot. Interestingly there is still some repeating pattern in the random section. I suspect the decomposition struggled with the even larger Q4 spikes starting around 2010. Before 2010 the "random" values work to diminish the Q4 peak. As I examine the plot above I come away thinking that Amazon is growing exponentially and that the Q4 peaks are becoming more pronounced.

With time series decomposition it is easy to remove the seasonality effects in the data. To remove the effect of seasonality from Amazon's quarterly revenue simply subtract it from the original time series object. In this case subtract components$seasonal from data.ts. The resulting plot leaves behind the trend with the random, unexplained variance.

seasonal_adjustment<-data.ts-components$seasonal
plot(seasonal_adjustment, main="Seasonal Impact removed")


Amazon’s quarterly revenue without seasonality.

Holt-Winters Forecasting

One of my favorite forecasting methods is Holt-Winters. When I worked in call center analytics this simple method did a great job forecasting daily arriving contacts. I like it because it is quick, simple, and oftentimes "gets the job done." Keep in mind it's definitely not the only forecasting method, and you should always explore multiple methods, balancing speed and accuracy. However, HW will always have a sweet spot in my forecasting toolset because it was one of the first methods I learned and I have seen its practical use!

What is Holt-Winters?

Holt-Winters is a form of exponential smoothing. According to one of my favorite books, Business Forecasting with ForecastX by Barry Keating, "with exponential smoothing the forecast value at any time is a weighted average of all the available values; the weights decline geometrically as you go back in time." That is to say, the impact of a period diminishes the farther back in time it is. As a common sense example, if you were to forecast tomorrow's oil price, yesterday's price is likely more indicative and should carry a higher weight in the weighted average than an oil price from 10 years ago. Within exponential smoothing there is a parameter "alpha" which represents the level smoothing parameter.
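
To see what "decline geometrically" means in practice, here is a tiny illustration. The alpha value is purely hypothetical; with a level smoothing parameter alpha, the observation k periods back receives a weight of alpha * (1 - alpha)^k.

# Hypothetical level smoothing parameter
alpha <- 0.3
# Weights for the most recent 8 periods (k = 0 is the newest observation)
k <- 0:7
weights <- alpha * (1 - alpha)^k
round(weights, 3)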

However, if your data has trend and seasonality you have to extend the smoothing model. The Amazon data contains both, so you can't just do simple exponential smoothing. A Holt-Winters model has the "alpha" level smoothing constant from exponential smoothing but also adds "beta" for trend and "gamma" for seasonality.

The "alpha", "beta" and "gamma" values range between 0 and 1. Values near 0 mean the older observations carry more weight. Conversely, values near 1 mean the most recent observations carry the most weight for the level, trend and seasonality.

In the next section I will show you how to create a HW forecast.

Making a Holt-Winters Forecast

To make a HW forecast, fit a model by calling HoltWinters on the time series object.

fit <- HoltWinters(data.ts)

Once you have the fit model object you can examine the alpha, beta, and gamma values. Remember, numbers closer to 1 mean the most recent quarters carry the largest weight. Calling fit in your console will print the following information. The most recent quarter has a huge impact on the model, which is supported by gamma = 1. The trend smoothing parameter "beta" still weights towards more recent quarters, while alpha leans more towards older quarters.

Smoothing parameters:
alpha: 0.2916944
beta : 0.5926455
gamma: 1
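
If you prefer to grab these values programmatically instead of reading the print out, they are stored on the fitted model object.

# Smoothing parameters stored on the HoltWinters model object
fit$alpha   # level
fit$beta    # trend
fit$gamma   # seasonality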

Next the print out will show the coefficients. The most interesting components here are the seasonal adjustments, which demonstrate how the model interprets the seasonality. The HW forecast is affected by each quarter's impact, shown in the following table.

[table id=22 /]
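
The same coefficients can be pulled straight from the model object if you want to reuse them elsewhere.

# Level, trend, and seasonal coefficients from the fitted model
fit$coefficients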

Using plot on the model object will show the original time series and the fitted values with a red line.

plot(fit)


The HW fit in red is extremely close to the actual values.

Although it's great to look back and say the model fits well, it is more important to actually perform a forecast. To do so, use forecast with the model object called fit. The second parameter, h=20, tells the forecast function to provide values 20 periods ahead. Since I am working with quarters, this means 5 years past the last period.
amzn.forecast<-forecast(fit, h=20)

I can now visualize the forecasted values using plot again. In the illustration the blue line represents the exact forecasted point values. The forecast function also produces 80 and 95 percent confidence intervals. As you review the visual you can see the widening grey confidence intervals. In forecasting, the farther out you predict, the less certainty you have in your outcome. This is common sense but still worthwhile to remember.
plot(amzn.forecast)


The HW forecasts for the next 20 quarters.

Calling the forecast object in the console prints the forecasts along with the 80 and 95 percent confidence intervals. These values represent the widening grey intervals around the forecasted values. A section of the output is provided below.

[table id=23 /]
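
For reference, the numbers in that table live on the forecast object itself, so you can pull them out for further analysis.

# Point forecasts and intervals stored on the forecast object
amzn.forecast$mean    # point forecasts for the next 20 quarters
amzn.forecast$lower   # lower bounds of the 80 and 95 percent intervals
amzn.forecast$upper   # upper bounds of the 80 and 95 percent intervals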

Conclusion

When I look back on this time series I have a nostalgic connection to it. I worked at Amazon for 3 hard but exciting years, and I know these numbers mean more than dollars. The values represent billions of packages, millions of phone calls, millions of delighted customers, and tens of thousands of hard working employees around the world. I can only imagine the planning and chaotic pressure the leadership team put into this holiday shopping season. I busted my hump and worked 18 hour days through peak…and my 2010 peak was meager compared to this year's. I leave you with this fact to understand Amazon's forecasted growth from my 2010 days.
Amazon's 2010 annual revenue was $34B, compared to the Holt-Winters fourth quarter forecast of $41B!



©ODSC 2016

Edward Kwartler


Working at Liberty Mutual, I shape the organization's strategy and vision concerning next-generation vehicles. This includes today's advanced ADAS features and the self-driving cars of the (near) future. I get to work with exciting startups, MIT labs, government officials, automotive leaders and various data scientists to understand how Liberty can thrive in this rapidly changing environment. Plus I get to internally incubate ideas and foster an entrepreneurial ethos! Specialties: Data Science, Text Mining, IT service management, Process improvement and project management, business analytics
