Tutorial - Time Series Analysis With Pandas - Dataquest
In this tutorial, we will learn about the powerful time series tools in the pandas
library. And we’ll learn to make cool charts like this!
Originally developed for financial time series such as daily stock market prices,
the robust and flexible data structures in pandas can be applied to time series
data in any domain, including business, science, engineering, public health, and
many others. With these tools you can easily organize, transform, analyze, and
visualize your data at any level of granularity — examining details during specific
time periods of interest, and zooming out to explore variations on different time
scales, such as monthly or annual aggregations, recurring patterns, and long-
term trends.
In the broadest definition, a time series is any data set where the values are
measured at different points in time. Many time series are uniformly spaced at a
specific frequency, for example, hourly weather measurements, daily counts of
web site visits, or monthly sales totals. Time series can also be irregularly spaced
and sporadic, for example, timestamped data in a computer system’s event log
or a history of 911 emergency calls. Pandas time series tools apply equally well
to either type of time series.
This tutorial will focus mainly on the data wrangling and visualization aspects of
time series analysis. Working with a time series of energy data, we’ll see how
techniques such as time-based indexing, resampling, and rolling windows can
help us explore variations in electricity demand and renewable energy supply
over time. We’ll be covering the following topics:

* Time-based indexing
* Seasonality
* Frequencies
* Resampling
* Rolling windows
* Trends
We’ll be using Python 3.6, pandas, matplotlib, and seaborn. To get the most out
of this tutorial, you’ll want to be familiar with the basics of pandas and
matplotlib.
Not quite there yet? Build your foundational Python skills with our Python for
Data Science: Fundamentals and Intermediate courses.
We’ll be working with a daily time series of Open Power System Data (OPSD) for
Germany, which has been rapidly expanding its renewable energy production in
recent years. The data set includes country-wide totals of
electricity consumption, wind power production, and solar power production for
2006-2017. You can download the data here.
import pandas as pd
pd.to_datetime('2018-01-15 3:45pm')
Timestamp('2018-01-15 15:45:00')
pd.to_datetime('7/8/1952')
Timestamp('1952-07-08 00:00:00')
As we can see, to_datetime() automatically infers a date/time format based
on the input. In the example above, the ambiguous date '7/8/1952' is
assumed to be month/day/year and is interpreted as July 8, 1952. Alternatively,
we can use the dayfirst parameter to tell pandas to interpret the date as
August 7, 1952.
pd.to_datetime('7/8/1952', dayfirst=True)
Timestamp('1952-08-07 00:00:00')
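If we pass a list or array of strings, to_datetime() returns a DatetimeIndex, pandas’ core data structure for a sequence of date/times. A minimal illustration (the sample dates here are our own):

pd.to_datetime(['2018-01-05', '7/8/1952', 'Oct 10, 1995'])

DatetimeIndex(['2018-01-05', '1952-07-08', '1995-10-10'], dtype='datetime64[ns]', freq=None)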
In the DatetimeIndex above, the data type datetime64[ns] indicates that the
underlying data is stored as 64-bit integers, in units of nanoseconds (ns). This
data structure allows pandas to compactly store large sequences of date/time
values and efficiently perform vectorized operations using NumPy datetime64
arrays.
If we’re dealing with a sequence of strings all in the same date/time format, we
can explicitly specify it with the format parameter. For very large data sets, this
can greatly speed up the performance of to_datetime() compared to the
default behavior, where the format is inferred separately for each individual
string. Any of the format codes from the strftime() and strptime()
functions in Python’s built-in datetime module can be used. The example below
uses the format codes %m (numeric month), %d (day of month), and %y (2-digit
year) to specify the format.
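For instance (a small sketch with an assumed sample date):

pd.to_datetime('2/25/10', format='%m/%d/%y')

Timestamp('2010-02-25 00:00:00')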
In addition to Timestamp objects representing individual points in time and
DatetimeIndex objects representing sequences of them, pandas also includes
data structures representing durations (e.g.,
125 seconds) and periods (e.g., the month of November 2018). For more about
these data structures, there is a nice summary here. In this tutorial we will use
DatetimeIndexes, the most common data structure for pandas time series.
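For instance, durations are represented by Timedelta and periods by Period (a brief sketch, not used further in this tutorial):

pd.Timedelta('125 seconds')
Timedelta('0 days 00:02:05')

pd.Period('2018-11', freq='M')
Period('2018-11', 'M')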
Let’s use read_csv() to load the data file into a DataFrame, and check its dimensions with the shape attribute.
opsd_daily = pd.read_csv('opsd_germany_daily.csv')
opsd_daily.shape
(4383, 5)
The DataFrame has 4383 rows, covering the period from January 1, 2006
through December 31, 2017. To see what the data looks like, let’s use the
head() and tail() methods to display the first three and last three rows.
opsd_daily.head(3)
opsd_daily.tail(3)
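The Date column is read in as strings by default, so let’s convert it to datetime64 and then check the column data types (a sketch of the assumed intermediate step; the listing below shows the result):

opsd_daily['Date'] = pd.to_datetime(opsd_daily['Date'])
opsd_daily.dtypes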
Date datetime64[ns]
Consumption float64
Wind float64
Solar float64
Wind+Solar float64
dtype: object
Now that the Date column is the correct data type, let’s set it as the
DataFrame’s index.
opsd_daily = opsd_daily.set_index('Date')
opsd_daily.head(3)
opsd_daily.index
Alternatively, we can consolidate the above steps into a single line, using the
index_col and parse_dates parameters of the read_csv() function. This is
often a useful shortcut.
opsd_daily = pd.read_csv('opsd_germany_daily.csv', index_col=0, parse_dates=True)
Now that our DataFrame’s index is a DatetimeIndex, we can use all of pandas’
powerful time-based indexing to wrangle and analyze our data, as we shall see
in the following sections.
Another useful aspect of the DatetimeIndex is that the individual date/time
components are all available as attributes such as year , month , day , and so
on. Let’s add a few more columns to opsd_daily , containing the year, month,
and weekday name.
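A sketch of how these columns can be added, using the index’s year and month attributes and the day_name() method, and then displaying a random sample of five rows (the random_state seed is an arbitrary choice of ours):

# Add columns with year, month, and weekday name
opsd_daily['Year'] = opsd_daily.index.year
opsd_daily['Month'] = opsd_daily.index.month
opsd_daily['Weekday Name'] = opsd_daily.index.day_name()
# Display a random sampling of 5 rows
opsd_daily.sample(5, random_state=0)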
Date        Consumption     Wind    Solar  Wind+Solar  Year  Month  Weekday Name
2008-08-23     1152.011      NaN      NaN         NaN  2008      8      Saturday
2013-08-08     1291.984   79.666   93.371     173.037  2013      8      Thursday
2009-08-27     1281.057      NaN      NaN         NaN  2009      8      Thursday
2015-10-02     1391.050   81.229  160.641     241.870  2015     10        Friday
2009-06-02     1201.522      NaN      NaN         NaN  2009      6       Tuesday
Time-based indexing
One of the most powerful and convenient features of pandas time series is
time-based indexing — using dates and times to intuitively organize and access
our data. With time-based indexing, we can use date/time formatted strings to
select data in our DataFrame with the loc accessor. The indexing works similar
to standard label-based indexing with loc , but with a few additional features.
For example, we can select data for a single day using a string such as '2017-08-10'.
opsd_daily.loc['2017-08-10']
Consumption 1351.49
Wind 100.274
Solar 71.16
Wind+Solar 171.434
Year 2017
Month 8
Weekday Name Thursday
Name: 2017-08-10 00:00:00, dtype: object
We can also select a slice of days. As with regular label-based indexing with loc, the slice includes both endpoints.

opsd_daily.loc['2014-01-20':'2014-01-22']
Date        Consumption    Wind   Solar  Wind+Solar  Year  Month  Weekday Name
2014-01-20     1590.687  78.647   6.371      85.018  2014      1        Monday
2014-01-21     1624.806  15.643   5.835      21.478  2014      1       Tuesday
2014-01-22     1625.155  60.259  11.992      72.251  2014      1     Wednesday
Another handy feature of pandas time series is partial-string indexing, where we can select all dates/times that partially match a given string. For example, we can select the entire month of February 2012:

opsd_daily.loc['2012-02']
Date        Consumption     Wind   Solar  Wind+Solar  Year  Month  Weekday Name
2012-02-01     1511.866  199.607  43.502     243.109  2012      2     Wednesday
2012-02-02     1563.407   73.469  44.675     118.144  2012      2      Thursday
2012-02-03     1563.631   36.352  46.510      82.862  2012      2        Friday
2012-02-04     1372.614   20.551  45.225      65.776  2012      2      Saturday
2012-02-05     1279.432   55.522  54.572     110.094  2012      2        Sunday
2012-02-06     1574.766   34.896  55.389      90.285  2012      2        Monday
2012-02-07     1615.078  100.312  19.867     120.179  2012      2       Tuesday
2012-02-08     1613.774   93.763  36.930     130.693  2012      2     Wednesday
2012-02-09     1591.532  132.219  19.042     151.261  2012      2      Thursday
2012-02-10     1581.287   52.122  34.873      86.995  2012      2        Friday
2012-02-11     1377.404   32.375  44.629      77.004  2012      2      Saturday
2012-02-12     1264.254   62.659  45.176     107.835  2012      2        Sunday
2012-02-13     1561.987   25.984  11.287      37.271  2012      2        Monday
2012-02-14     1550.366  146.495   9.610     156.105  2012      2       Tuesday
2012-02-15     1476.037  413.367  18.877     432.244  2012      2     Wednesday
2012-02-16     1504.119  130.247  38.176     168.423  2012      2      Thursday
2012-02-17     1438.857  196.515  17.328     213.843  2012      2        Friday
2012-02-18     1236.069  237.889  26.248     264.137  2012      2      Saturday
2012-02-19     1107.431  272.655  30.382     303.037  2012      2        Sunday
2012-02-20     1401.873  160.315  53.794     214.109  2012      2        Monday
2012-02-21     1434.533  281.909  57.984     339.893  2012      2       Tuesday
2012-02-22     1453.507  287.635  74.904     362.539  2012      2     Wednesday
2012-02-23     1427.402  353.510  18.927     372.437  2012      2      Thursday
2012-02-24     1373.800  382.777  29.281     412.058  2012      2        Friday
2012-02-25     1133.184  302.102  42.667     344.769  2012      2      Saturday
2012-02-26     1086.743   95.234  37.214     132.448  2012      2        Sunday
2012-02-27     1436.095   86.956  43.099     130.055  2012      2        Monday
2012-02-28     1408.211  231.923  16.190     248.113  2012      2       Tuesday
2012-02-29     1434.062   77.024  30.360     107.384  2012      2     Wednesday
We’ll use seaborn styling for our plots, and let’s adjust the default figure size to
an appropriate shape for time series plots.
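A minimal setup sketch, assuming standard imports (matplotlib.dates is included here because later cells use it as mdates):

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
# Use seaborn style defaults and set the default figure size
sns.set(rc={'figure.figsize': (11, 4)})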
Let’s create a line plot of the full time series of Germany’s daily electricity
consumption, using the DataFrame’s plot() method.
opsd_daily['Consumption'].plot(linewidth=0.5);
We can see that the plot() method has chosen pretty good tick locations
(every two years) and labels (the years) for the x-axis, which is helpful. However,
with so many data points, the line plot is crowded and hard to read. Let’s plot
the data as dots instead, and also look at the Solar and Wind time series.
cols_plot = ['Consumption', 'Solar', 'Wind']
axes = opsd_daily[cols_plot].plot(marker='.', alpha=0.5, linestyle='None',
                                  figsize=(11, 9), subplots=True)
for ax in axes:
    ax.set_ylabel('Daily Totals (GWh)')
All three time series clearly exhibit periodicity, often referred to as seasonality in time series analysis, in which a pattern repeats again and again at regular time intervals. The Consumption, Solar, and Wind time series oscillate
between high and low values on a yearly time scale, corresponding with the
seasonal changes in weather over the year. However, seasonality in general
does not have to correspond with the meteorological seasons. For example,
retail sales data often exhibits yearly seasonality with increased sales in
November and December, leading up to the holidays.
Seasonality can also occur on other time scales. The plot above suggests there
may be some weekly seasonality in Germany’s electricity consumption,
corresponding with weekdays and weekends. Let’s plot the time series in a
single year to investigate further.
ax = opsd_daily.loc['2017', 'Consumption'].plot()
ax.set_ylabel('Daily Consumption (GWh)');
Now we can clearly see the weekly oscillations. Another interesting feature that
becomes apparent at this level of granularity is the drastic decrease in electricity
consumption in early January and late December, during the holidays.
Let’s zoom in further and look at just January and February.

ax = opsd_daily.loc['2017-01':'2017-02', 'Consumption'].plot(marker='o', linestyle='-')
ax.set_ylabel('Daily Consumption (GWh)');
To better visualize the weekly seasonality, we can add vertical gridlines on a weekly time scale. Let’s customize the x-axis ticks with the matplotlib.dates module (imported above as mdates).
fig, ax = plt.subplots()
ax.plot(opsd_daily.loc['2017-01':'2017-02', 'Consumption'], marker='o', linestyle='-')
ax.set_ylabel('Daily Consumption (GWh)')
ax.set_title('Jan-Feb 2017 Electricity Consumption')
# Set x-axis major ticks to weekly interval, on Mondays
ax.xaxis.set_major_locator(mdates.WeekdayLocator(byweekday=mdates.MONDAY))
# Format x-tick labels as 3-letter month name and day number
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %d'));
Now we have vertical gridlines and nicely formatted tick labels on each Monday,
so we can easily tell which days are weekdays and weekends.
There are many other ways to visualize time series, depending on what patterns
you’re trying to explore — scatter plots, heatmaps, histograms, and so on. We’ll
see other visualization examples in the following sections, including
visualizations of time series data that has been transformed in some way, such
as aggregated or smoothed data.
Seasonality
Next, let’s further explore the seasonality of our data with box plots, using
seaborn’s boxplot() function to group the data by different time periods and
display the distributions for each group. We’ll first group the data by month, to
visualize yearly seasonality.
fig, axes = plt.subplots(3, 1, figsize=(11, 10), sharex=True)
for name, ax in zip(['Consumption', 'Solar', 'Wind'], axes):
    sns.boxplot(data=opsd_daily, x='Month', y=name, ax=ax)
    ax.set_ylabel('GWh')
    ax.set_title(name)
    # Remove the automatic x-axis label from all but the bottom subplot
    if ax != axes[-1]:
        ax.set_xlabel('')
These box plots confirm the yearly seasonality that we saw in earlier plots and
provide some additional insights:
* Although electricity consumption is generally higher in winter and lower in
summer, the median and lower two quartiles are lower in December and
January compared to November and February, likely due to businesses being
closed over the holidays. We saw this in the time series for the year 2017, and
the box plot confirms that this is a consistent pattern throughout the years.
* While solar and wind power production both exhibit a yearly seasonality, the
wind power distributions have many more outliers, reflecting the effects of
occasional extreme wind speeds associated with storms and other transient
weather conditions.
Next, let’s group the electricity consumption time series by day of the week, to explore weekly seasonality.
sns.boxplot(data=opsd_daily, x='Weekday Name', y='Consumption');
Time series with strong seasonality can often be well represented with models
that decompose the signal into seasonality and a long-term trend, and these
models can be used to forecast future values of the time series. A simple
example of such a model is classical seasonal decomposition, as demonstrated
in this tutorial. A more sophisticated example is Facebook’s Prophet model,
which uses curve fitting to decompose the time series, taking into account
seasonality on multiple time scales, holiday effects, abrupt changepoints, and
long-term trends, as demonstrated in this tutorial.
Frequencies
When the data points of a time series are uniformly spaced in time (e.g., hourly,
daily, monthly, etc.), the time series can be associated with a frequency in
pandas. For example, let’s use the date_range() function to create a sequence
of uniformly spaced dates from 1998-03-10 through 1998-03-15 at daily
frequency.
pd.date_range('1998-03-10', '1998-03-15', freq='D')
DatetimeIndex(['1998-03-10', '1998-03-11', '1998-03-12', '1998-03-13',
'1998-03-14', '1998-03-15'],
dtype='datetime64[ns]', freq='D')
As another example, let’s create a date range at hourly frequency, specifying the
start date and number of periods, instead of the start date and end date.
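For example (the start date here is our own):

pd.date_range('2004-09-20', periods=8, freq='H')

DatetimeIndex(['2004-09-20 00:00:00', '2004-09-20 01:00:00',
               '2004-09-20 02:00:00', '2004-09-20 03:00:00',
               '2004-09-20 04:00:00', '2004-09-20 05:00:00',
               '2004-09-20 06:00:00', '2004-09-20 07:00:00'],
              dtype='datetime64[ns]', freq='H')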
Now let’s take another look at the DatetimeIndex of our opsd_daily time
series.
opsd_daily.index
We can see that it has no frequency (freq=None). This makes sense, since the index was created from a sequence of dates in our CSV file, without explicitly specifying any frequency for the time series.
If we know that our data should be at a specific frequency, we can use the
DataFrame’s asfreq() method to assign a frequency. If any date/times are
missing in the data, new rows will be added for those date/times, which are
either empty ( NaN ), or filled according to a specified data filling method such as
forward filling or interpolation.
To see how this works, let’s create a new DataFrame which contains only the
Consumption data for Feb 3, 6, and 8, 2013.
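A sketch of that selection (the variable name consum_sample is ours):

times_sample = pd.to_datetime(['2013-02-03', '2013-02-06', '2013-02-08'])
# Select the specified dates and just the Consumption column
consum_sample = opsd_daily.loc[times_sample, ['Consumption']].copy()
consum_sample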
Consumption
2013-02-03 1109.639
2013-02-06 1451.449
2013-02-08 1433.098
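Now we convert to daily frequency with asfreq(), adding a forward-filled column for comparison (a sketch building on consum_sample above):

consum_freq = consum_sample.asfreq('D')
consum_freq['Consumption - Forward Fill'] = consum_sample.asfreq('D', method='ffill')
consum_freq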
In the resulting DataFrame, the Consumption column contains the original data, with NaN for the dates that were missing. In the Consumption - Forward Fill column, the missings have been forward filled, meaning that the last value repeats through the missing rows until the next non-missing value occurs.
If you’re doing any time series analysis which requires uniformly spaced data
without any missings, you’ll want to use asfreq() to convert your time series
to the specified frequency and fill any missings with an appropriate method.
Resampling
It is often useful to resample our time series data to a lower or higher
frequency. Resampling to a lower frequency (downsampling) usually involves an
aggregation operation — for example, computing monthly sales totals from
daily data. The daily OPSD data we’re working with in this tutorial was
downsampled from the original hourly time series. Resampling to a higher
frequency (upsampling) is less common and often involves interpolation or
other data filling method — for example, interpolating hourly weather data to
10 minute intervals for input to a scientific model.
We will focus here on downsampling, exploring how it can help us analyze our
OPSD data on various time scales. We use the DataFrame’s resample()
method, which splits the DatetimeIndex into time bins and groups the data by
time bin. The resample() method returns a Resampler object, similar to a
pandas GroupBy object. We can then apply an aggregation method such as
mean() , median() , sum() , etc., to the data group for each time bin.
For example, let’s resample the data to a weekly mean time series.
# Specify the data columns we want to include (i.e. exclude Year, Month, Weekday Name)
data_columns = ['Consumption', 'Wind', 'Solar', 'Wind+Solar']
# Resample to weekly frequency, aggregating with mean
opsd_weekly_mean = opsd_daily[data_columns].resample('W').mean()
opsd_weekly_mean.head(3)
The row labelled 2006-01-08, for example, contains the mean data for the 2006-01-08 through 2006-01-14 time bin, and so on. By default, each row of the downsampled time series is labelled with the left edge of the time bin.
By construction, our weekly time series has 1/7 as many data points as the daily
time series. We can confirm this by comparing the number of rows of the two
DataFrames.
print(opsd_daily.shape[0])
print(opsd_weekly_mean.shape[0])
4383
627
Let’s plot the daily and weekly Solar time series together over a single six-
month period to compare them.
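One way to do this (a sketch; the start and end bounds for the six-month window are our own choice):

# Start and end of the date range to extract
start, end = '2017-01', '2017-06'
# Plot daily and weekly resampled time series together
fig, ax = plt.subplots()
ax.plot(opsd_daily.loc[start:end, 'Solar'], marker='.', linestyle='-', linewidth=0.5, label='Daily')
ax.plot(opsd_weekly_mean.loc[start:end, 'Solar'], marker='o', markersize=8, linestyle='-', label='Weekly Mean Resample')
ax.set_ylabel('Solar Production (GWh)')
ax.legend();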
We can see that the weekly mean time series is smoother than the daily time
series because higher frequency variability has been averaged out in the
resampling.
Now let’s resample the data to monthly frequency, aggregating with sum totals
instead of the mean. Unlike aggregating with mean() , which sets the output to
NaN for any period with all missing data, the default behavior of sum() will
return output of 0 as the sum of missing data. We use the min_count
parameter to change this behavior.
# Compute the monthly sums, setting the value to NaN for any month which has
# fewer than 28 days of data
opsd_monthly = opsd_daily[data_columns].resample('M').sum(min_count=28)
opsd_monthly.head(3)
You might notice that the monthly resampled data is labelled with the end of
each month (the right bin edge), whereas the weekly resampled data is labelled
with the left bin edge. By default, resampled data is labelled with the right bin
edge for monthly, quarterly, and annual frequencies, and with the left bin edge
for all other frequencies. This behavior and various other options can be
adjusted using the parameters listed in the resample() documentation.
Now let’s explore the monthly time series by plotting the electricity consumption
as a line plot, and the wind and solar power production together as a stacked
area plot.
fig, ax = plt.subplots()
ax.plot(opsd_monthly['Consumption'], color='black', label='Consumption')
opsd_monthly[['Wind', 'Solar']].plot.area(ax=ax, linewidth=0)
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.legend()
ax.set_ylabel('Monthly Total (GWh)');
At this monthly time scale, we can clearly see the yearly seasonality in each time
series, and it is also evident that electricity consumption has been fairly stable
over time, while wind power production has been growing steadily, with wind +
solar power comprising an increasing share of the electricity consumed.
Let’s explore this further by resampling to annual frequency and computing the
ratio of Wind+Solar to Consumption for each year.
# Compute the annual sums, setting the value to NaN for any year which has
# fewer than 360 days of data
opsd_annual = opsd_daily[data_columns].resample('A').sum(min_count=360)
# The default index of the resampled DataFrame is the last day of each year
# ('2006-12-31', '2007-12-31', etc.), so to make life easier, set the index
# to the year component
opsd_annual = opsd_annual.set_index(opsd_annual.index.year)
opsd_annual.index.name = 'Year'
# Compute the ratio of Wind+Solar to Consumption
opsd_annual['Wind+Solar/Consumption'] = opsd_annual['Wind+Solar'] / opsd_annual['Consumption']
opsd_annual.tail(3)
Finally, let’s plot the wind + solar share of annual electricity consumption as a
bar chart.
# Plot from 2012 onwards, because there is no solar production data in
# earlier years
ax = opsd_annual.loc[2012:, 'Wind+Solar/Consumption'].plot.bar(color='C0')
ax.set_ylabel('Fraction')
ax.set_ylim(0, 0.3)
ax.set_title('Wind + Solar Share of Annual Electricity Consumption')
plt.xticks(rotation=0);
Rolling windows
Rolling window operations are another important transformation for time
series data. Similar to downsampling, rolling windows split the data into time
windows, and the data in each window is aggregated with a function such as
mean() , median() , sum() , etc. However, unlike downsampling, where the
time bins do not overlap and the output is at a lower frequency than the input,
rolling windows overlap and “roll” along at the same frequency as the data, so
the transformed time series is at the same frequency as the original time series.
By default, all data points within a window are equally weighted in the
aggregation, but this can be changed by specifying window types such as
Gaussian, triangular, and others. We’ll stick with the standard equally weighted
window here.
Let’s use the rolling() method to compute the 7-day rolling mean of our daily
data. We use the center=True argument to label each window at its midpoint,
so the rolling windows are:
2006-01-01 to 2006-01-07 — labelled as 2006-01-04
2006-01-02 to 2006-01-08 — labelled as 2006-01-05
2006-01-03 to 2006-01-09 — labelled as 2006-01-06
and so on…
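A sketch of the computation (opsd_7d is used in the plots that follow):

opsd_7d = opsd_daily[data_columns].rolling(7, center=True).mean()
opsd_7d.head(10)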
We can see that the first non-missing rolling mean value is on 2006-01-04 ,
because this is the midpoint of the first rolling window.
To visualize the differences between rolling mean and resampling, let’s update
our earlier plot of January-June 2017 solar power production to include the 7-
day rolling mean along with the weekly mean resampled time series and the
original daily data.
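A sketch of the updated plot, reusing the start/end window, opsd_weekly_mean, and opsd_7d from above:

fig, ax = plt.subplots()
ax.plot(opsd_daily.loc[start:end, 'Solar'], marker='.', linestyle='-', linewidth=0.5, label='Daily')
ax.plot(opsd_weekly_mean.loc[start:end, 'Solar'], marker='o', markersize=8, linestyle='-', label='Weekly Mean Resample')
ax.plot(opsd_7d.loc[start:end, 'Solar'], marker='.', linestyle='-', label='7-d Rolling Mean')
ax.set_ylabel('Solar Production (GWh)')
ax.legend();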
We can see that data points in the rolling mean time series have the same
spacing as the daily data, but the curve is smoother because higher frequency
variability has been averaged out. In the rolling mean time series, the peaks and
troughs tend to align closely with the peaks and troughs of the daily time series.
In contrast, the peaks and troughs in the weekly resampled time series are less
closely aligned with the daily time series, since the resampled time series is at a
coarser granularity.
Trends
Time series data often exhibit some slow, gradual variability in addition to
higher frequency variability such as seasonality and noise. An easy way to
visualize these trends is with rolling means at different time scales.
We’ve already computed 7-day rolling means, so now let’s compute the 365-day
rolling mean of our OPSD data.
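A sketch of the computation (the min_periods value is an assumption of ours, to tolerate a few isolated missing days in the wind and solar data):

opsd_365d = opsd_daily[data_columns].rolling(window=365, center=True, min_periods=360).mean()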
Let’s plot the 7-day and 365-day rolling mean electricity consumption, along with
the daily time series.
# Plot daily, 7-day rolling mean, and 365-day rolling mean time series
fig, ax = plt.subplots()
ax.plot(opsd_daily['Consumption'], marker='.', markersize=2, color='0.6',
        linestyle='None', label='Daily')
ax.plot(opsd_7d['Consumption'], linewidth=2, label='7-d Rolling Mean')
ax.plot(opsd_365d['Consumption'], color='0.2', linewidth=3,
label='Trend (365-d Rolling Mean)')
# Set x-ticks to yearly interval and add legend and labels
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.legend()
ax.set_xlabel('Year')
ax.set_ylabel('Consumption (GWh)')
ax.set_title('Trends in Electricity Consumption');
We can see that the 7-day rolling mean has smoothed out all the weekly
seasonality, while preserving the yearly seasonality. The 7-day rolling mean
reveals that while electricity consumption is typically higher in winter and lower
in summer, there is a dramatic decrease for a few weeks every winter at the end
of December and beginning of January, during the holidays.
Looking at the 365-day rolling mean time series, we can see that the long-term
trend in electricity consumption is pretty flat, with a couple of periods of
anomalously low consumption around 2009 and 2012-2013.
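To see those production trends, we can plot the 365-day rolling means of the wind and solar time series as well (a sketch):

fig, ax = plt.subplots()
for nm in ['Wind', 'Solar', 'Wind+Solar']:
    ax.plot(opsd_365d[nm], label=nm)
ax.set_ylabel('Production (GWh)')
ax.legend()
ax.set_title('Trends in Electricity Production (365-d Rolling Means)');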
We can see a small increasing trend in solar power production and a large
increasing trend in wind power production, as Germany continues to expand its
capacity in those sectors.
Other potentially useful topics we haven’t covered include time zone handling
and time shifts. If you’d like to learn more about working with time series data in
pandas, you can check out this section of the Python Data Science Handbook,
this blog post, and of course the official documentation. If you’re interested in
forecasting and machine learning with time series data, we’ll be covering those
topics in a future blog post, so stay tuned!