Strict vs. Weak Stationarity
Now that we are familiar with the autocovariance, we can address stationarity, one of the most important concepts in time series analysis. Loosely, a stationary time series is one that exhibits no trend or seasonal variation. More concretely, there are two distinct definitions of stationarity:
Strict stationarity is defined as a time series for which the probabilistic behavior of any subset of observations is unchanged by a shift in time, i.e.

$$P(x_{t_1} \le c_1, \ldots, x_{t_k} \le c_k) = P(x_{t_1 + h} \le c_1, \ldots, x_{t_k + h} \le c_k)$$

for all $k$, all time points $t_1, \ldots, t_k$, all constants $c_1, \ldots, c_k$, and all shifts $h$.
Weak stationarity is defined as a time series for which the following hold:
i. The mean is finite and constant across all times $t$: $E[x_t] = \mu$.
ii. The variance is finite and constant across all times $t$: $\operatorname{Var}(x_t) = \sigma^2 < \infty$.
iii. The autocovariance is a function of the lag alone and not of absolute time: $\gamma(t + h, t)$ does not depend on $t$.
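As an illustrative check (a sketch in Python; the Gaussian white-noise series, the two-segment split, and the `sample_autocov` helper are my own choices, not from the text), conditions (i)–(iii) can be probed empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=10_000)  # Gaussian white noise

# (i) and (ii): the mean and variance of disjoint segments should agree
# (up to sampling error), since both are constant for white noise.
first, second = x[:5_000], x[5_000:]
print(first.mean(), second.mean())  # both near 0
print(first.var(), second.var())    # both near 1

# (iii): the sample autocovariance at a fixed lag h, computed over
# different stretches of the series, should also agree.
def sample_autocov(x, h):
    n, xbar = len(x), x.mean()
    return np.sum((x[h:] - xbar) * (x[: n - h] - xbar)) / n

print(sample_autocov(first, 1), sample_autocov(second, 1))  # both near 0
```

Agreement between segments is of course only evidence, not proof, of stationarity; we are checking sample statistics against the population-level conditions above.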
An example of a strictly stationary series would be pure white noise. Stock returns (not prices) are also reasonably approximated as strictly stationary. Weakly stationary processes are far more prevalent; in coming chapters we will encounter numerous examples from fields such as medicine and econometrics.
Strict to Weak Stationarity
It should be obvious that establishing weak stationarity does not establish strict stationarity[1]. Interestingly, the converse also does not necessarily hold. Consider the white noise process in which each observation is an independent draw from the standard Cauchy distribution:

$$x_t \sim \text{Cauchy}(0, 1).$$
$x_t$ is strictly stationary, as every point is drawn from the same probability distribution as every other point. However, $x_t$ is not weakly stationary, as the Cauchy distribution lacks a finite variance (indeed, it lacks even a finite mean). That said, any strictly stationary process with finite variance is also weakly stationary.
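The lack of a finite variance shows up clearly in simulation. In this sketch (the sample sizes and the side-by-side Gaussian comparison are my own choices), the sample variance of Gaussian white noise settles near its true value as the sample grows, while that of Cauchy white noise never stabilizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# As n grows, the Gaussian sample variance converges to 1, while the
# Cauchy sample variance is dominated by extreme draws and keeps jumping.
for n in (10**2, 10**4, 10**6):
    gaussian = rng.normal(size=n)
    cauchy = rng.standard_cauchy(size=n)
    print(n, gaussian.var(), cauchy.var())
```

Rerunning with different seeds makes the point even more starkly: the Cauchy column changes by orders of magnitude from run to run.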
Advantages of Weak Stationarity
In practice, we generally do not use strict stationarity for two major reasons:
Establishing strict stationarity requires knowing the underlying probability distribution (or at least having a confident guess at it). In contrast, weak stationarity is established solely via observables in the form of the mean, variance, and autocovariance.
Strict stationarity is too strong a requirement for most purposes. We will see in coming chapters that methods requiring stationarity can be applied to weakly stationary time series.
Autocovariance of a Stationary Process
As discussed above, the autocovariance of a stationary process depends exclusively on the lag $h$. Consequently, when dealing with stationary processes we will simply use the notation

$$\gamma(h) \equiv \gamma(t + h, t),$$
where $\gamma(0)$ is the variance, $\gamma(1)$ is the autocovariance at lag 1, and so on. Note that

$$\gamma(h) = \gamma(-h);$$

consequently, we will only calculate $\gamma(h)$ for $h \ge 0$.
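The lag-indexed notation translates directly into code. This sketch (the simulated series and the `sample_autocov` name are my own) computes the sample analogue $\hat\gamma(h)$, using the symmetry above to handle negative lags:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2_000)  # white noise stand-in for a stationary series

def sample_autocov(x, h):
    """Sample autocovariance at lag h; gamma(h) = gamma(-h) lets us use |h|."""
    n, h, xbar = len(x), abs(h), x.mean()
    return np.sum((x[h:] - xbar) * (x[: n - h] - xbar)) / n

# gamma-hat(0) recovers the (biased) sample variance; for white noise,
# every lag h > 0 should be near zero.
for h in range(4):
    print(h, sample_autocov(x, h))
```

Note the divisor is the full series length $n$ rather than $n - h$; this biased version is the conventional choice and will matter in the positive-semidefiniteness discussion below.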
Autocovariance of a Stationary Process is Positive Semidefinite
We are now in a position to prove a property of autocovariance that will be useful in future chapters. The autocovariance matrix $\Gamma_n$ is defined as

$$\Gamma_n = \begin{bmatrix} \gamma(0) & \gamma(1) & \cdots & \gamma(n-1) \\ \gamma(1) & \gamma(0) & \cdots & \gamma(n-2) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma(n-1) & \gamma(n-2) & \cdots & \gamma(0) \end{bmatrix},$$

i.e. $(\Gamma_n)_{ij} = \gamma(i - j)$.
$\Gamma_n$ for a stationary process, like standard covariance matrices, is positive semidefinite. A positive semidefinite matrix is defined as a symmetric matrix[2] $M$ with the property that for any vector $a$,

$$a^\top M a \ge 0.$$
To demonstrate that $\Gamma_n$ is positive semidefinite, let

$$y = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = a^\top x,$$

where $x = (x_1, \ldots, x_n)^\top$ collects $n$ consecutive observations of the process. As variance cannot be negative, we have

$$\operatorname{Var}(y) \ge 0,$$

or, using the formula for the variance of a sum,

$$\operatorname{Var}(y) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \operatorname{Cov}(x_i, x_j) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \gamma(i - j) = a^\top \Gamma_n a \ge 0.$$

As the above holds for any possible vector $a$, we conclude that $\Gamma_n$ must be positive semidefinite.
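The property can be verified numerically. In this sketch (the simulated series, matrix size, and variable names are my own choices), we build $\Gamma_n$ from sample autocovariances and check the quadratic form for many random vectors $a$, as well as the equivalent eigenvalue criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000)
n = 5  # size of the autocovariance matrix Gamma_n

# Biased sample autocovariances gamma-hat(0), ..., gamma-hat(n - 1); the
# 1/len(x) divisor (rather than 1/(len(x) - h)) is what guarantees the
# resulting matrix is positive semidefinite.
xbar = x.mean()
gamma = np.array([np.sum((x[h:] - xbar) * (x[: len(x) - h] - xbar)) / len(x)
                  for h in range(n)])

# Toeplitz matrix with entries Gamma[i, j] = gamma(|i - j|)
Gamma = gamma[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])]

# a' Gamma a >= 0 for arbitrary vectors a ...
quad_forms = [a @ Gamma @ a for a in rng.normal(size=(1_000, n))]
print(min(quad_forms))

# ... equivalently, every eigenvalue is non-negative (up to round-off).
print(np.linalg.eigvalsh(Gamma))
```

The eigenvalue check is usually the more convenient one in practice, since testing finitely many random vectors can never rule out a direction with a negative quadratic form.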
[1] It can be proved that if the process generating $x_t$ is a Gaussian process, then weak stationarity does imply strict stationarity. This is not especially useful in practice, as it requires us to know $x_t$'s hidden generating process.
[2] The definition of positive semidefiniteness can be extended to complex vectors and matrices, but that will not be necessary for this book.