As discussed in the first chapter, serial correlation is both one of the most challenging and one of the most valuable aspects of time series. Autocovariance allows us to exploit serial correlation to address questions such as:
Is a high market close on one day indicative of a lower close the following day?
How long does a heat wave continue to skew temperatures above the norm?
Given higher sunspot activity this year, in what future year(s) should we expect a repeat of this activity?
To give a concrete example, if we wish to know how strongly today's highest temperature influences tomorrow's, we may take the covariance of the series (T_0, T_1, \dots, T_{n-1}) and (T_1, T_2, \dots, T_n), i.e.

$$\operatorname{cov}\bigl((T_0,\dots,T_{n-1}),\,(T_1,\dots,T_n)\bigr) = \frac{1}{n}\sum_{t=0}^{n-1}\bigl(T_t - \bar{T}\bigr)\bigl(T_{t+1} - \bar{T}'\bigr), \tag{1}$$

where $\bar{T}$ and $\bar{T}'$ denote the means of the first and second series, respectively.
A high positive value for Eq. (1) tells us that higher (lower) temperatures today are indicative of higher (lower) temperatures tomorrow. While implausible for weather, negative correlation is common in many time series, in which case a higher (lower) value at one time step is indicative of a lower (higher) value at the next.
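As a quick numeric sketch of this lag-one covariance, consider a synthetic daily-high temperature series; the seasonal-plus-noise model below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily high temperatures: a smooth seasonal cycle plus
# noise, so consecutive days tend to move together.
days = np.arange(366)
temps = 20 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0.0, 2.0, 366)

# Covariance of (T_0, ..., T_{n-1}) with (T_1, ..., T_n), as in Eq. (1):
lag1_cov = np.cov(temps[:-1], temps[1:])[0, 1]
print(lag1_cov)  # positive: warm days tend to be followed by warm days
```

Because the slow seasonal signal dominates the noise, the lag-one covariance comes out strongly positive for this series.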
We can naturally extend Eq. (1) to larger time lags to ask how long an anomalous temperature continues to skew the daily high. We will see in subsequent chapters that it is very common for time series to obey an exponentially decaying autocovariance.
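A minimal sample-autocovariance estimator makes the lag-h extension concrete. The AR(1) series below is an assumed example, chosen because its autocovariance decays exponentially, as previewed above:

```python
import numpy as np

def sample_autocov(x, h):
    """Sample autocovariance at lag h (1/n normalization, one shared mean)."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    return np.sum((x[:n - h] - xbar) * (x[h:] - xbar)) / n

# Illustrative AR(1) series x_t = phi * x_{t-1} + w_t, whose theoretical
# autocovariance decays geometrically: gamma(h) is proportional to phi**h.
rng = np.random.default_rng(0)
phi, n = 0.8, 50_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

gammas = [sample_autocov(x, h) for h in range(5)]
# Consecutive ratios gamma(h+1)/gamma(h) should hover near phi = 0.8.
print([g2 / g1 for g1, g2 in zip(gammas, gammas[1:])])
```

At lag 0 the estimator recovers the sample variance, and the near-constant ratio between successive lags is the exponential decay in action.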
Following the notation used in sources such as Shumway & Stoffer (2025) and Brockwell & Davis (2016), we will use the notation γ_x(s,t) to denote the autocovariance at times s and t for time series x:

$$\gamma_x(s,t) = \operatorname{E}\bigl[(x_s - \mu_s)(x_t - \mu_t)\bigr],$$

where $\mu_s$ and $\mu_t$ are the means of the series at times s and t.
In general, we will drop the x subscript and write γ(s,t) when it is obvious which time series we are referring to. For an arbitrarily large (or infinite) observation window, $\mu_s = \mu_t \equiv \mu$, as both series share almost all the same observations, leading to the simplification

$$\gamma(s,t) = \operatorname{E}\bigl[(x_s - \mu)(x_t - \mu)\bigr].$$
For a random walk with drift, $x_t = \delta t + \sum_{i=1}^{t} w_i$, the mean is $\mu_t = \delta t$, so the deterministic drift cancels inside the expectation:

$$\gamma(s,t) = \operatorname{cov}\Bigl(\sum_{i=1}^{s} w_i,\; \sum_{j=1}^{t} w_j\Bigr) = \sum_{i=1}^{s}\sum_{j=1}^{t}\operatorname{cov}(w_i, w_j);$$

thus the autocovariance of a random walk is independent of drift terms. To simplify notation, we will therefore only explicitly address a random walk without drift.
By the assumption of independent $w_t$'s, the only non-zero terms are those with $i = j$ (i.e. the variances). Assuming without loss of generality that $t \le s$, we have

$$\gamma(s,t) = \sum_{i=1}^{t} \operatorname{var}(w_i) = t\,\sigma_w^2 = \min(s,t)\,\sigma_w^2.$$
Figure 1 displays how random walks spread out over time. Note that the spread more closely follows a square-root than a linear curve: the variance $t\sigma_w^2$ grows linearly, so the standard deviation grows as $\sqrt{t}$. This provides a concrete example of why standard deviation, which is on the same scale as the observations themselves, is often favored over variance in analysis.
Figure 1: 1,000 simulated random walks with $\sigma_w^2 = 1$ and $\delta = 0$.
Shumway, R. H., & Stoffer, D. S. (2025). Time Series Analysis and Its Applications: With R Examples. Springer Texts in Statistics. Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-70584-7
Brockwell, P. J., & Davis, R. A. (2016). Introduction to Time Series and Forecasting. Springer Texts in Statistics. Springer International Publishing. https://doi.org/10.1007/978-3-319-29854-2