I live in Utah, an extremely dry state. Like much of the western United States, Utah is experiencing water stress from increasing demand, episodes of drought, and conflict over water rights. At the same time, Utahns use a lot of water per capita compared to residents of other states. According to the United States Geological Survey, in 2014 people in Utah used more water per person than in any other state, and in surrounding years Utah's per capita water use has consistently ranked near the top nationally. Let's explore water consumption in Salt Lake City, the largest city in Utah. A first step to any water solution in Utah is to better understand who is using water, when, and for what.
Salt Lake City makes water consumption data publicly available at the census tract and block level, with information on the type of water user (single residence, apartment, hospital, business, etc.) and the amount of water used from 2000 into 2015. The data at the census tract level is available here via Utah's Open Data Catalog and can be accessed via the Socrata Open Data API.
After loading the data, let's do a bit of cleaning up. There are just a few rows with non-year values in the year column, and a few NA values in the water consumption column. Then, let's adjust the data types.
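The original code isn't shown here, but a minimal sketch of this step might look like the following, assuming the RSocrata package, a hypothetical endpoint URL, and column names (`year`, `month`, `type`, `consumption`) that may differ in the real data set:

```r
library(RSocrata)  # client for the Socrata Open Data API
library(dplyr)

# Hypothetical endpoint; the real resource ID comes from Utah's Open Data Catalog
water_raw <- read.socrata("https://opendata.utah.gov/resource/xxxx-xxxx.json")

water <- water_raw %>%
    filter(grepl("^\\d{4}$", year),   # drop the few rows with non-year values
           !is.na(consumption)) %>%   # drop the few NA consumption values
    mutate(year  = as.integer(year),
           month = as.integer(month),
           consumption = as.numeric(consumption))
```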
How much data do we have now?
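Assuming the cleaned data frame is called `water`, a quick check:

```r
# number of observations remaining after cleaning
nrow(water)
```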
So after cleaning, this data set includes 99,379 observations of water consumption in Salt Lake City.
Water Use by Type
Let’s group these observations by month, year, and type of user; the types include categories like single residence, park, business, etc. Then let’s sum up all water consumption within these groups so that we can see the distribution of aggregated monthly water consumption across the types of water users.
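A sketch of this grouping with dplyr, assuming the column names used above:

```r
library(dplyr)

# total water use for each user type in each month of each year
monthly_use <- water %>%
    group_by(year, month, type) %>%
    summarise(total = sum(consumption), .groups = "drop")
```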
Let’s see what these distributions look like.
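One way to plot these distributions, assuming the grouped data frame `monthly_use` from the previous step:

```r
library(ggplot2)

ggplot(monthly_use, aes(x = type, y = total)) +
    geom_boxplot() +
    labs(x = "Type of water user",
         y = "Monthly water consumption (100 cubic ft)")
```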
This box plot shows that single residence and business users consume the most water each month in Salt Lake City. There are some very high outliers; it turns out these points all come from 2014, a drought year for Utah.
Now let’s see how water consumption has changed over time in Salt Lake City. If we group the observations by date and type, we can make a streamgraph to see how water consumption (in units of 100 cubic ft) has varied with time since 2000.
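A streamgraph can be built with the streamgraph htmlwidget package; this sketch assumes the `water` data frame from above and constructs a date column from the year and month:

```r
library(dplyr)
library(streamgraph)

water %>%
    mutate(date = as.Date(paste(year, month, "01", sep = "-"))) %>%
    group_by(date, type) %>%
    summarise(total = sum(consumption), .groups = "drop") %>%
    streamgraph("type", "total", "date")
```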
View interactive version here
The first thing I'm sure we all notice is the obvious annual pattern in water consumption. Also notice the unusual water consumption in 2014, a drought year here in Utah. How much does the distribution of water use change over the course of the year? The variation across user types is large enough that it's hard to see unless we plot this on a log scale.
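A sketch of that plot, assuming the grouped data frame `monthly_use` from above:

```r
library(ggplot2)

# distribution of monthly totals by calendar month, on a log scale
ggplot(monthly_use, aes(x = factor(month), y = total)) +
    geom_boxplot() +
    scale_y_log10() +
    labs(x = "Month", y = "Monthly water consumption (100 cubic ft)")
```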
The highest rate of water use occurs in August and the lowest in March; the increase from March to August is about a factor of 4, matching what we can read off the streamgraph. What are the residents and businesses of Salt Lake City doing with all that water during the warm months?
Time Series Decomposition
We can think about these water use data as a time series. Let's add up the water use for all the types of users in all the census tracts and find the total water use in Salt Lake City for each month included in this data set. We can then change this to a time series object with the `ts` function.
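A sketch of that step, assuming the `water` data frame from the cleaning step:

```r
library(dplyr)

total_use <- water %>%
    group_by(year, month) %>%
    summarise(total = sum(consumption), .groups = "drop") %>%
    arrange(year, month)

# monthly time series starting January 2000
water_ts <- ts(total_use$total, start = c(2000, 1), frequency = 12)
```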
There is, as we saw, a strong seasonal component to the water use, so why not do a seasonal decomposition? The `stl` function decomposes a time series into three components: a varying seasonal component, an underlying trend component, and a leftover irregular (remainder) component.
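Assuming the time series object `water_ts` from above:

```r
# s.window = "periodic" assumes the seasonal pattern repeats identically each year
water_stl <- stl(water_ts, s.window = "periodic")
plot(water_stl)
```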
The trend component increases into 2013 and 2014; we can see the effect of drought there as water use increases. Also notice the scale on the y-axis for the remainder component and how large the remainder component is for the last years in this data set.
Party Like It’s 2013
Let's pretend that it is the beginning of 2013 and we would like to use the water use data we have to predict water use in the future. Then let's check how well that prediction matches the actual water use in 2013 – 2015. We can subset the time series with the `window` function and fit the data with an ARIMA model. I am new to using ARIMA models, but the idea is that they use differencing and autoregression (modeling a variable in terms of its own past values) to fit the time series.
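A sketch of holding out the later data and fitting the model, assuming `water_ts` from above; `auto.arima` from the forecast package is used here to choose the model order, which is an assumption about the original approach:

```r
library(forecast)

# train on data through 2012, then forecast three years ahead
train <- window(water_ts, end = c(2012, 12))
fit <- auto.arima(train)
water_forecast <- forecast(fit, h = 36, level = 95)
```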
Let's now go back to the data we held back, from 2013 on, and see how well it agrees with the prediction from the ARIMA model. First, let's do some data wrangling to make the plot because, as far as I can tell, we can't use `ggfortify` to plot both a time series and a forecast at the same time. Definitely let me know if I am wrong!
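One way to wrangle the forecast and the held-out observations into a single plot, assuming `water_ts` and the `water_forecast` object from above:

```r
library(ggplot2)

actual <- window(water_ts, start = c(2013, 1))
n <- length(actual)

plot_df <- data.frame(
    date     = as.numeric(time(actual)),
    actual   = as.numeric(actual),
    forecast = as.numeric(water_forecast$mean)[1:n],
    lower    = as.numeric(water_forecast$lower)[1:n],
    upper    = as.numeric(water_forecast$upper)[1:n]
)

ggplot(plot_df, aes(date)) +
    geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.3) +  # 95% band
    geom_line(aes(y = forecast), linetype = "dashed") +
    geom_point(aes(y = actual))
```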
Most of the real data points for 2013 and later do fall within the 95% confidence bands of the prediction, but certainly not all of them. Let’s calculate how many of the monthly totals for 2013 and later are within the 95% confidence bands.
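Assuming the `actual` series and `water_forecast` object from above, that proportion can be computed as:

```r
n <- length(actual)
within_band <- actual >= as.numeric(water_forecast$lower)[1:n] &
               actual <= as.numeric(water_forecast$upper)[1:n]
mean(within_band)  # proportion of monthly totals inside the 95% band
```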
About 80% of the water use totals are within the 95% confidence bands of the prediction, which is not awful but not super great. The effects of drought have reduced the accuracy of the model’s prediction. An unusual circumstance like significant drought reduces our ability to reliably model future water use based on past water use. This is perhaps not a shocking revelation, but it’s a good reminder to check model assumptions and to ask whether the distribution underlying the data used to make a model is a good one for making a prediction.
Understanding water use is important in western states like mine; this past summer, there was a kerfuffle in our state government arguing over exactly how well we even know where Utah's water is being used and for what. I certainly found this analysis interesting, and I hope to do a little more soon with the spatial information in this data set. The R Markdown file used to make this blog post is available here. I am very happy to hear feedback and other perspectives!
I am a data scientist and analyst with a background in physics and astronomy; I worked in academia and ed tech before moving into data science. My experience in the physical sciences, programming, and education has given me a solid foundation for applying statistical models to complicated problems and for communicating findings to decision makers. Analyzing, understanding, and communicating about data makes me happy, and I am passionate about finding insights in data and building data products to meet the needs of an organization. I work effectively in both independent and collaborative environments, I learn new skills and subjects quickly, and I have proven writing and speaking abilities.