Time Series White Noise

Last edited: 2023-01-08 16:06:09

Time series sample data always contains noise, but there are different types of noise. In this post we will go through regular white noise and iid noise, and how to identify them in your residuals after, for example, fitting an ARMA or GARCH model. Before defining white noise we first need to define weak stationarity.

Definition of Weak Stationarity

A time series $X$ is weakly stationary if

  • $\mathbb{E}[X_t] = \mu = \text{const}$ for all $t \in \mathbb{Z}$
  • $\text{Var}[X_t] = \sigma^2 = \text{const} < \infty$ for all $t \in \mathbb{Z}$
  • $\text{Cov}[X_r, X_s] = \text{Cov}[X_{r+h}, X_{s+h}]$ for all $r, s, h \in \mathbb{Z}$

Definition of White Noise

A time series $X$ is said to be white noise if

  • $X$ is weakly stationary
  • $\gamma(h) = \text{Cov}[X_t, X_{t+h}] = 0$ for all $h \neq 0$

Definition of IID Noise

What is iid noise? A time series $X$ is said to be iid noise if it meets the conditions for regular white noise above and, in addition, $X_r$ and $X_s$ ($r \neq s$) are independent of each other and identically distributed for all $r, s \in \mathbb{Z}$. That is why it is called iid noise, where iid stands for independent and identically distributed.
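To make the distinction concrete, here is a minimal sketch in Python (assuming NumPy; the ARCH(1) parameters are illustrative and not taken from the post) of two series that are both white noise, but of which only the first is iid noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# iid noise: independent, identically distributed Gaussian draws
iid_noise = rng.standard_normal(n)

# ARCH(1): x_t = sigma_t * z_t with sigma_t^2 = a0 + a1 * x_{t-1}^2.
# Its values are uncorrelated (white noise) but not independent,
# since large values tend to be followed by large values.
a0, a1 = 0.2, 0.7  # illustrative parameters
z = rng.standard_normal(n)
arch = np.zeros(n)
for t in range(1, n):
    sigma_t = np.sqrt(a0 + a1 * arch[t - 1] ** 2)
    arch[t] = sigma_t * z[t]

# Both series have (approximately) zero sample autocorrelation at nonzero
# lags, but the squared ARCH series is autocorrelated, so it is not iid.
```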

Before going through the tricks for identifying the two different kinds of noise, we need to go through the definitions of the autocorrelation function (ACF) and the partial autocorrelation function (PACF).

The Autocorrelation Function (ACF)

The autocorrelation function for the time series $X$ is defined as:

$$\rho_X(h) = \frac{\gamma(h)}{\gamma(0)} = \frac{\text{Cov}[X_t, X_{t+h}]}{\text{Var}[X_t]}.$$
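In practice one works with the sample counterpart of this definition. The following is a small sketch (assuming NumPy; `sample_acf` is a hypothetical helper, not code from the post) of how the sample autocorrelations are computed:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation rho_hat(h) = gamma_hat(h) / gamma_hat(0)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    gamma0 = np.dot(xc, xc) / n  # sample variance gamma_hat(0)
    acf = [1.0]
    for h in range(1, max_lag + 1):
        gamma_h = np.dot(xc[: n - h], xc[h:]) / n  # sample autocovariance at lag h
        acf.append(gamma_h / gamma0)
    return np.array(acf)
```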

The Partial Autocorrelation Function (PACF)

The partial autocorrelation function $\alpha(h)$ for the time series $X$ is defined by $\alpha(0) = 1$ and $\alpha(h) = \phi_{hh}$ for $h \geq 1$, where $\phi_{hh}$ is the last component of

$$\phi_h = \left([\gamma(i-j)]_{i,j=1}^{h}\right)^{-1} [\gamma(1), \ldots, \gamma(h)]'.$$
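This definition translates directly into code: build the matrix $[\gamma(i-j)]_{i,j=1}^{h}$, solve the linear system, and take the last component of the solution. A sketch using sample autocovariances (assuming NumPy; `sample_acvf` and `sample_pacf` are hypothetical helpers, not code from the post):

```python
import numpy as np

def sample_acvf(x, max_lag):
    """Sample autocovariances gamma_hat(0), ..., gamma_hat(max_lag)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.dot(xc[: n - h], xc[h:]) / n for h in range(max_lag + 1)])

def sample_pacf(x, max_lag):
    """Sample PACF alpha_hat(h) for h = 0, ..., max_lag."""
    gamma = sample_acvf(x, max_lag)
    pacf = [1.0]
    for h in range(1, max_lag + 1):
        # Toeplitz matrix [gamma(i - j)]_{i,j=1}^h and right-hand side (gamma(1), ..., gamma(h))'
        Gamma_h = np.array([[gamma[abs(i - j)] for j in range(h)] for i in range(h)])
        phi_h = np.linalg.solve(Gamma_h, gamma[1 : h + 1])
        pacf.append(phi_h[-1])  # last component phi_hh
    return np.array(pacf)
```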

How to Identify White Noise

The first simple thing you can do to see if your data is just white noise is to check whether it looks like it has no structure. The second is to look at the sample ACF and PACF plots of the data. Let's look at the definition of white noise and the ACF. We know that $\gamma(h) = 0$ for $h \geq 1$ for white noise, which means $\rho_X(h) = 0$ for $h \geq 1$. So all lags greater than zero should lie within the red striped confidence band ($\pm 1.96/\sqrt{n}$, where $n$ is the sample size) of the ACF in the plot below. The following sample ACF plot therefore indicates white noise:

Sample ACF of the sample data

The same can be said for the PACF. If $X$ is white noise, then the vector $[\gamma(1), \ldots, \gamma(h)]'$ is just filled with zeros and thus $\alpha(h) = 0$ for $h \geq 1$.
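As a sketch of this visual check (assuming statsmodels and matplotlib are installed), `plot_acf` and `plot_pacf` draw a confidence band (approximately $\pm 1.96/\sqrt{n}$ under the white noise null), so for simulated white noise essentially all lags greater than zero should stay inside it:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

rng = np.random.default_rng(1)
x = rng.standard_normal(500)  # simulated white noise

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
plot_acf(x, lags=30, ax=axes[0])   # lags > 0 should stay inside the band
plot_pacf(x, lags=30, ax=axes[1])
plt.show()
```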

How to Identify IID Noise

To identify iid noise we will also have to look at the sample ACF and PACF of the data squared, $X^2$. Let's start with the ACF. We have:

$$\gamma_{X^2}(h) = \text{Cov}[X^2_t, X^2_{t+h}] = \mathbb{E}\left[(X_t^2 - \mathbb{E}[X_t^2])(X_{t+h}^2 - \mathbb{E}[X_{t+h}^2])\right],$$

and now we use the property of iid noise that $X_t$ and $X_{t+h}$ are independent of each other (and hence so are $X_t^2$ and $X_{t+h}^2$), so the expectation of the product factors into a product of expectations:

$$\mathbb{E}\big[X_t^2 - \mathbb{E}[X_t^2]\big]\, \mathbb{E}\big[X_{t+h}^2 - \mathbb{E}[X_{t+h}^2]\big] = \big(\mathbb{E}[X_t^2] - \mathbb{E}\big[\mathbb{E}[X_t^2]\big]\big) \big(\mathbb{E}[X_{t+h}^2] - \mathbb{E}\big[\mathbb{E}[X_{t+h}^2]\big]\big).$$

Lastly we use the general property that $\mathbb{E}[\mathbb{E}[Y]] = \mathbb{E}[Y]$ (the expectation of a constant is that constant), so we get:

$$\big(\mathbb{E}[X_t^2] - \mathbb{E}[X_t^2]\big) \big(\mathbb{E}[X_{t+h}^2] - \mathbb{E}[X_{t+h}^2]\big) = 0.$$

So the sample ACF of $X^2$ should be within the confidence band for all lags greater than zero. For the PACF we can see that the vector $[\gamma_{X^2}(1), \ldots, \gamma_{X^2}(h)]'$ is zero for every $h$. Therefore the PACF should also be within the confidence band for lags greater than zero if $X$ is iid noise, which we can see in the following plot:

Sample PACF of the sample data squared

So to conclude: for iid noise, the sample ACF and PACF of both $X$ and $X^2$ should lie within the confidence band for all lags greater than zero.
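A sketch of this squared-series check (assuming statsmodels, matplotlib and NumPy; the ARCH(1) parameters are illustrative, not from the post) compares the squared series of iid noise with the squared series of an ARCH(1) process, which is white noise but not iid:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

rng = np.random.default_rng(2)
n = 1000
iid_noise = rng.standard_normal(n)

# ARCH(1): uncorrelated but not independent
z = rng.standard_normal(n)
arch = np.zeros(n)
for t in range(1, n):
    arch[t] = np.sqrt(0.2 + 0.7 * arch[t - 1] ** 2) * z[t]

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
plot_acf(iid_noise ** 2, lags=30, ax=axes[0, 0], title="ACF of iid noise squared")
plot_pacf(iid_noise ** 2, lags=30, ax=axes[0, 1], title="PACF of iid noise squared")
plot_acf(arch ** 2, lags=30, ax=axes[1, 0], title="ACF of ARCH(1) squared")
plot_pacf(arch ** 2, lags=30, ax=axes[1, 1], title="PACF of ARCH(1) squared")
plt.tight_layout()
plt.show()  # the bottom row shows clear spikes outside the band
```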

Ljung-Box Test for IID Noise

Lastly, one can do a Ljung-Box test to determine whether the data is independently distributed. The test is defined by the following hypotheses:

$$\begin{aligned} & H_0 : X \sim \text{IID}(\mu, \sigma^2) \\ & H_1 : X \not\sim \text{IID}(\mu, \sigma^2) \end{aligned}$$

The Ljung-Box test statistic is:

$$\lambda = n (n + 2) \sum_{i=1}^h \frac{\hat{\rho}(i)^2}{n-i},$$

where $\hat{\rho}(i)$ is the sample ACF at lag $i$, $n$ is the sample size and $h$ is the number of lags being tested. The null hypothesis $H_0$ is rejected at the $\alpha$-level, $\alpha \in (0,1)$, if $\lambda > \chi_{1-\alpha, h}^2$, where $\chi_{1-\alpha, h}^2$ denotes the $(1-\alpha)$-quantile of the $\chi^2$-distribution with $h$ degrees of freedom.
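In Python the test is available in statsmodels as `acorr_ljungbox`. A minimal sketch (the choice of 20 lags is an assumption, not from the post):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
x = rng.standard_normal(500)  # simulated iid noise

# Test statistic lambda and chi-squared p-value at h = 20 lags;
# reject H_0 at level alpha if the p-value is below alpha.
print(acorr_ljungbox(x, lags=[20]))

# Applying the same test to the squared series can reveal dependence
# (e.g. ARCH/GARCH effects) that the test on x itself misses.
print(acorr_ljungbox(x ** 2, lags=[20]))
```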
