Econometric Analysis of Financial and Economic Time Series: Volume 20 Part 1

Table of contents (19 chapters)

Volume 20 of Advances in Econometrics is dedicated to Rob Engle and Sir Clive Granger, winners of the 2003 Nobel Prize in Economics, for their many valuable contributions to the econometrics profession. The Royal Swedish Academy of Sciences cited Rob “for methods of analyzing economic time series with time-varying volatility (ARCH),” while Clive was cited “for methods of analyzing economic time series with common trends (cointegration).” Of course, these citations are meant for public consumption, but we specialists in time-series analysis know their contributions go far beyond these brief descriptions. Consider some of Rob's other contributions to our literature: Aggregation of Time Series, Band Spectrum Regression, Dynamic Factor Models, Exogeneity, Forecasting in the Presence of Cointegration, Seasonal Cointegration, Common Features, ARCH-M, Multivariate GARCH, Analysis of High Frequency Data, and CAViaR. Some of Sir Clive's additional contributions include Spectral Analysis of Economic Time Series, Bilinear Time Series Models, Combination Forecasting, Spurious Regression, Forecasting Transformed Time Series, Causality, Aggregation of Time Series, Long Memory, Extreme Bounds, Multi-Cointegration, and Non-linear Cointegration. No doubt, their Nobel Prizes are richly deserved. And the 48 authors of the two parts of this volume think likewise. They have authored some very fine papers that contribute nicely to the same literature that Rob's and Clive's research helped build.

The editors are pleased to offer the following papers to the reader in recognition and appreciation of the contributions to our literature made by Robert Engle and Sir Clive Granger, winners of the 2003 Nobel Prize in Economics. Please see the dedication page of this volume. This part of Volume 20 of Advances in Econometrics focuses on volatility models. The contributions cover a variety of topics and are organized into three broad categories to aid the reader. The first five papers focus broadly on multivariate generalized autoregressive conditional heteroskedasticity (GARCH) models. The first four of these propose new models that enhance existing ones, while the fifth proposes a test for multivariate GARCH in models with non-stationary variables. The next three papers examine topics related to high-frequency data. The first of these compares sampling frequencies and window lengths that are asymptotically equivalent in mean square error (MSE), while the other two consider the problem of estimating volatility in the presence of microstructure noise. The last five papers are contributions relevant primarily to univariate volatility models. Of course, we are also pleased to include Rob's and Clive's remarks on their careers and their views on innovation in econometric theory and practice, given at the third annual Advances in Econometrics Conference held at Louisiana State University, Baton Rouge, on November 5–7, 2004.

The Nobel Prize is given for good ideas, very good ideas. These ideas often shape the direction of research for an academic discipline, and they are often accompanied by a great deal of work by many researchers.

In 1956, I was searching for a Ph.D. topic and I selected time series analysis as being an area that was not very developed and was potentially interesting. I have never regretted that choice. Occasionally, I have tried to develop other interests but after a couple of years away I would always return to time series topics where I am more comfortable.

Existing multivariate generalized autoregressive conditional heteroskedasticity (GARCH) models either impose strong restrictions on the parameters or do not guarantee a well-defined (positive-definite) covariance matrix. I discuss the main multivariate GARCH models and focus on the BEKK model, for which it is shown that the covariance and correlation are not adequately specified under certain conditions. This implies that any analysis of the persistence and the asymmetry of the correlation is potentially inaccurate. I therefore propose a new Flexible Dynamic Correlation (FDC) model that parameterizes the conditional correlation directly and eliminates various shortcomings. Most importantly, the number of exogenous variables in the correlation equation can be flexibly augmented without risking an indefinite covariance matrix. Empirical results for daily and monthly returns of four international stock market indices reveal that correlations exhibit different degrees of persistence and different asymmetric reactions to shocks than variances do. In addition, I find that correlations do not always increase with jointly negative shocks, implying a justification for international portfolio diversification.
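
To fix ideas, the sketch below shows how a correlation that is parameterized directly, rather than implied by a covariance recursion, can absorb exogenous regressors without endangering positive definiteness. The GARCH-like recursion and the tanh link used here are illustrative assumptions, not the chapter's exact FDC specification.

```python
import numpy as np

def fdc_correlation_sketch(z1, z2, x=None, omega=0.01, a=0.05, b=0.90, g=0.10):
    """Hypothetical directly-parameterized dynamic correlation.

    The latent state q_t follows a GARCH-like recursion driven by the
    product of the standardized shocks z1, z2 and an optional exogenous
    series x; tanh maps q_t into (-1, 1), so the implied 2x2 covariance
    matrix stays positive definite no matter how many regressors enter q_t.
    """
    T = len(z1)
    q = np.zeros(T)
    rho = np.zeros(T)
    for t in range(1, T):
        extra = g * x[t - 1] if x is not None else 0.0
        q[t] = omega + a * z1[t - 1] * z2[t - 1] + b * q[t - 1] + extra
        rho[t] = np.tanh(q[t])   # bounded correlation by construction
    return rho
```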

Empirical research on European stock markets has shown that they behave differently according to the performance of the leading financial market, identified as the US market. A positive sign is viewed as good news in international financial markets; a negative sign, conversely, means bad news. As a result, we assume that European stock market returns are affected by both endogenous and exogenous shocks. The former arise in the market itself, while the latter come from the US market, because of its highly influential role in the world. Under standard assumptions, the distribution of the European market index returns, conditional on the sign of the one-day-lagged US return, is skew-normal. The resulting model is denoted Skew-GARCH. We study the properties of this new model and illustrate its application with time-series data from three European financial markets.
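
As a rough illustration of the conditioning idea (with invented shape and scale values, not the chapter's estimates), one can simulate returns whose conditional law flips between right- and left-skewed depending on the sign of the lagged US return:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
us_return = rng.normal(scale=0.01, size=1000)   # stand-in for lagged US returns
shape = np.where(us_return > 0, 2.0, -2.0)      # assumed skewness: +2 after good news, -2 after bad
eu_return = stats.skewnorm.rvs(a=shape, scale=0.01, random_state=rng)

# Sample skewness of returns following good vs. bad US days:
print(stats.skew(eu_return[us_return > 0]), stats.skew(eu_return[us_return <= 0]))
```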

In this paper we develop a new semi-parametric model for conditional correlations, which combines parametric univariate generalized autoregressive conditional heteroskedasticity (GARCH) specifications for the individual conditional volatilities with nonparametric kernel regression for the conditional correlations. This approach avoids not only the proliferation of parameters as the number of assets becomes large, which typically happens in conventional multivariate conditional volatility models, but also the rigid structure imposed by more parsimonious models, such as the dynamic conditional correlation model. An empirical application to the 30 Dow Jones stocks demonstrates that the model is able to capture interesting asymmetries in correlations and that it is competitive with standard parametric models in terms of constructing minimum-variance portfolios and minimum-tracking-error portfolios.
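
A minimal sketch of the nonparametric step, assuming GARCH-standardized residuals u1, u2 and a Gaussian kernel; the chapter's choices of conditioning variable, kernel, and bandwidth may well differ:

```python
import numpy as np

def kernel_conditional_correlation(u1, u2, cond, grid, h=0.5):
    """Nadaraya-Watson estimate of corr(u1, u2) given a conditioning
    variable `cond`, evaluated at each point of `grid`.  u1 and u2 are
    the GARCH-standardized residuals of the two assets."""
    rho = np.empty(len(grid))
    for i, c in enumerate(grid):
        w = np.exp(-0.5 * ((cond - c) / h) ** 2)          # Gaussian kernel weights
        m1, m2 = np.average(u1, weights=w), np.average(u2, weights=w)
        cov = np.average((u1 - m1) * (u2 - m2), weights=w)
        v1 = np.average((u1 - m1) ** 2, weights=w)
        v2 = np.average((u2 - m2) ** 2, weights=w)
        rho[i] = cov / np.sqrt(v1 * v2)
    return rho
```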

A new multivariate heavy-tailed distribution is proposed as an extension of the univariate distribution of Politis (2004). The properties of the new distribution are discussed, as well as its effectiveness in modeling ARCH/GARCH residuals. A practical procedure for multi-parameter numerical maximum likelihood is also given, and a real data example is worked out.

Macroeconomic or financial data are often modelled with cointegration and GARCH (generalized autoregressive conditional heteroskedasticity). Notable examples include studies of price discovery, in which stock prices of the same underlying asset are cointegrated and exhibit multivariate GARCH. It was not until recently that Li, Ling, and Wong (2001, Biometrika, 88, 1135–1152) formally derived the asymptotic distribution of the estimators of the error-correction model (ECM) parameters in the presence of conditional heteroskedasticity. As far as the ECM parameters are concerned, the efficiency gain may be huge even when the deflated error is symmetrically distributed. Taking into consideration the different rates of convergence, this paper first shows that the standard distribution applies to a portmanteau test, even when the conditional mean is an ECM. Under the usual null of no multivariate GARCH, the performance of this test in finite samples is examined through Monte Carlo experiments. We then apply the test for GARCH to the yearly and quarterly (extended) Nelson–Plosser data, embedded in some prototype multivariate models. We also apply the test to the intra-daily HSI (Hang Seng Index) and its derivatives, with the spread as the ECT (error-correction term). The empirical results throw doubt on the efficiency of the usual estimation of the ECM parameters and, more importantly, on the validity of the significance tests of an ECM.
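
The flavor of the portmanteau diagnostic can be conveyed with a univariate Ljung–Box statistic applied to squared residuals; the chapter's test is the multivariate analogue applied to ECM residuals, but the chi-square limiting null ("the standard distribution") is the same idea:

```python
import numpy as np
from scipy import stats

def portmanteau_arch_test(resid, lags=10):
    """Ljung-Box statistic computed on squared residuals.  Under the
    null of no ARCH it is asymptotically chi-square with `lags` degrees
    of freedom."""
    z = resid ** 2 - np.mean(resid ** 2)
    T = len(z)
    denom = np.sum(z ** 2)
    Q = 0.0
    for k in range(1, lags + 1):
        r_k = np.sum(z[k:] * z[:-k]) / denom   # lag-k autocorrelation of resid^2
        Q += r_k ** 2 / (T - k)
    Q *= T * (T + 2)
    return Q, stats.chi2.sf(Q, lags)
```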

Despite the difference in information sets, we are able to compare the asymptotic distribution of volatility estimators involving data sampled at different frequencies. To do so, we propose extensions of the continuous record asymptotic analysis for rolling sample variance estimators developed by Foster and Nelson (1996, Econometrica, 64, 139–174). We focus on traditional historical volatility filters involving monthly, daily and intradaily observations. Theoretical results are complemented with Monte Carlo simulations in order to assess the validity of the asymptotics for sample sizes and filters encountered in empirical studies.
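
As a concrete point of reference, a flat-weight rolling sample variance of the Foster–Nelson type is just a moving average of squared returns; the chapter's question is which (sampling frequency, window length) pairs make such filters asymptotically MSE-equivalent. The window lengths mentioned below are illustrative only:

```python
import numpy as np

def rolling_variance(returns, window):
    """Flat-weight rolling sample variance (zero-mean convention): the
    average of the last `window` squared returns at each date."""
    r = np.asarray(returns, dtype=float)
    out = np.full(len(r), np.nan)
    for t in range(window, len(r) + 1):
        out[t - 1] = np.mean(r[t - window:t] ** 2)
    return out

# e.g. a 22-day window on daily returns vs. a 390-observation window on
# 1-minute returns -- the kind of pairing whose asymptotic behavior the
# chapter compares.
```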

In this chapter, we aim to measure the actual volatility within a model-based framework using high-frequency data. In the empirical finance literature, it is widely discussed that tick-by-tick prices are subject to market micro-structure effects such as bid-ask bounces and trade information. These market micro-structure effects become more and more apparent as prices or returns are sampled at smaller and smaller time intervals. An increasingly popular measure for the variability of spot prices on a particular day is realised volatility that is typically defined as the sum of squared intra-daily log-returns. Recent theoretical results have shown that realised volatility is a consistent estimator of actual volatility, but when it is subject to micro-structure noise and the sampling frequency increases, the estimator diverges. Parametric and nonparametric methods can be adopted to account for the micro-structure bias. Here, we measure actual volatility using a model that takes account of micro-structure noise together with intra-daily volatility patterns and stochastic volatility. The coefficients of this model are estimated by maximum likelihood methods that are based on importance sampling techniques. It is shown that such Monte Carlo techniques can be employed successfully for our purposes in a feasible way. As far as we know, this is a first attempt to model the basic components of the mean and variance of high-frequency prices simultaneously. An illustration is given for three months of tick-by-tick transaction prices of the IBM stock traded at the New York Stock Exchange.
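
The divergence under noise is easy to reproduce in simulation. The sketch below (with arbitrary variance values, not estimates from the IBM data) computes realised volatility at several sampling intervals; finer sampling inflates the estimate by roughly twice the number of returns times the noise variance:

```python
import numpy as np

def realized_volatility(log_prices, step):
    """Sum of squared log-returns sampled every `step` ticks."""
    p = log_prices[::step]
    return np.sum(np.diff(p) ** 2)

rng = np.random.default_rng(0)
n = 23_400                                        # one tick per second over 6.5 hours
true_iv = 0.02 ** 2                               # assumed daily integrated variance
efficient = np.cumsum(rng.normal(0.0, np.sqrt(true_iv / n), n))
observed = efficient + rng.normal(0.0, 5e-4, n)   # i.i.d. micro-structure noise

for step in (300, 60, 5, 1):                      # 5-min, 1-min, 5-sec, 1-sec sampling
    print(step, realized_volatility(observed, step))
# RV is near true_iv at sparse sampling but blows up as step -> 1, since
# noise adds roughly 2 * (n/step) * var(noise) to the estimate.
```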

Microstructure noise contaminates high-frequency estimates of asset price volatility. Recent work has determined a preferred sampling frequency under the assumption that the properties of noise are constant. Given the sampling frequency, the high-frequency observations are given equal weight. While convenient, constant weights are not necessarily efficient. We use the Kalman filter to derive more efficient weights, for any given sampling frequency. We demonstrate the efficacy of the procedure through an extensive simulation exercise, showing that our filter compares favorably to more traditional methods.
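
A scalar Kalman filter for the canonical "efficient price plus noise" state-space model shows where non-equal weights come from: the gain, not a flat weight, determines how much each new tick moves the price estimate. The state-space form below is a standard sketch, not necessarily the chapter's exact specification:

```python
import numpy as np

def kalman_price_filter(y, q, r):
    """Scalar Kalman filter for y_t = m_t + e_t, m_t = m_{t-1} + u_t,
    with var(u_t) = q (efficient-price innovations) and var(e_t) = r
    (micro-structure noise).  The gain K is the data-driven weight on
    each new tick, replacing the equal weights of fixed-frequency
    estimators."""
    m, P = y[0], r                       # crude initialization (assumed)
    m_filt = np.empty(len(y))
    m_filt[0] = m
    for t in range(1, len(y)):
        P_pred = P + q                   # predict step
        K = P_pred / (P_pred + r)        # Kalman gain
        m = m + K * (y[t] - m)           # update with the new observation
        P = (1.0 - K) * P_pred
        m_filt[t] = m
    return m_filt
```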

It is shown in Chou (2005, Journal of Money, Credit and Banking, 37, 561–582) that the range can be used as a measure of volatility and that the conditional autoregressive range (CARR) model performs better than generalized autoregressive conditional heteroskedasticity (GARCH) in forecasting the volatility of the S&P 500 stock index. In this paper, we allow separate dynamic structures for the upward and downward ranges of asset prices to account for asymmetric behaviors in the financial market. The types of asymmetry include trending behavior, weekday seasonality, interaction of the first two conditional moments via leverage effects, risk premiums, and volatility feedbacks. The return from the open to the maximum of the period is used as the measure of the upward range, and the downward range is defined likewise. We use quasi-maximum likelihood estimation (QMLE) for parameter estimation. Empirical results using S&P 500 data at daily and weekly frequencies provide consistent evidence supporting asymmetry in the US stock market over the period 1962/01/01–2000/08/25. The asymmetric range model also provides sharper volatility forecasts than the symmetric range model.
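
The CARR recursion itself is one line; a sketch follows, with placeholder parameters standing in for QMLE estimates. Running it separately on the upward range (open-to-high) and the downward range (open-to-low) series gives the asymmetric version studied here:

```python
import numpy as np

def carr_filter(R, omega, alpha, beta):
    """Conditional mean range lam_t = omega + alpha*R_{t-1} + beta*lam_{t-1},
    the CARR analogue of a GARCH(1,1) variance recursion.  Apply it
    separately to the upward and downward range series to obtain the
    asymmetric model; (omega, alpha, beta) are placeholders for QMLE fits."""
    lam = np.empty(len(R))
    lam[0] = np.mean(R)                  # initialize at the sample mean range
    for t in range(1, len(R)):
        lam[t] = omega + alpha * R[t - 1] + beta * lam[t - 1]
    return lam
```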

In this paper, we consider the estimation of volatility parameters in the context of a linear regression where the disturbances follow a stochastic volatility (SV) model of order one with Gaussian log-volatility. The linear regression represents the conditional mean of the process and may have a fairly general form, including for example finite-order autoregressions. We provide a computationally simple two-step estimator available in closed form. Under general regularity conditions, we show that this two-step estimator is asymptotically normal. We study its statistical properties by simulation, compare it with alternative generalized method-of-moments (GMM) estimators, and present an application to the S&P composite index.

This paper proposes the Student's t Dynamic Linear Regression (St-DLR) model as an alternative to the various extensions and modifications of the ARCH-type volatility model. The St-DLR differs from these volatility models in that it can incorporate exogenous variables into the conditional variance in a natural way. Moreover, it also addresses the following issues: (i) apparent long memory of the conditional variance, (ii) the distributional assumption on the error, (iii) existence of higher moments, and (iv) coefficient positivity restrictions. The model is illustrated using Dow Jones data and the three-month T-bill rate. The empirical results seem promising, as the contemporaneous variable appears to account for a large portion of the volatility.

We develop a theoretical model to compare forecast uncertainty estimated from time-series models with that available from survey density forecasts. The sum of the average variance of the individual densities and the disagreement is shown to approximate the predictive uncertainty from well-specified time-series models when the variance of the aggregate shocks is relatively small compared with that of the idiosyncratic shocks. Due to grouping-error problems and compositional heterogeneity in the panel, individual densities are used to estimate aggregate forecast uncertainty. During periods of regime change and structural break, ARCH estimates tend to diverge from survey measures.
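
Numerically, the approximation is just a variance decomposition across forecasters. The toy numbers below are invented purely for illustration:

```python
import numpy as np

# Invented numbers for four survey respondents:
means = np.array([2.0, 2.5, 1.8, 2.2])           # individual density means
variances = np.array([0.40, 0.50, 0.30, 0.45])   # individual density variances

avg_variance = variances.mean()    # average individual uncertainty
disagreement = means.var()         # cross-forecaster disagreement
print(avg_variance + disagreement) # approximate aggregate forecast uncertainty
```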

A univariate GARCH(p,q) process is quickly transformed to a univariate autoregressive moving-average process in squares of an underlying variable. For positive integer m, eigenvalue restrictions have been proposed as necessary and sufficient restrictions for existence of a unique mth moment of the output of a univariate GARCH process or, equivalently, the 2mth moment of the underlying variable. However, proofs in the literature that an eigenvalue restriction is necessary and sufficient for existence of unique 4th or higher even moments of the underlying variable, are either incorrect, incomplete, or unnecessarily long. Thus, the paper contains a short and general proof that an eigenvalue restriction is necessary and sufficient for existence of a unique 4th moment of the underlying variable of a univariate GARCH process. The paper also derives an expression for computing the 4th moment in terms of the GARCH parameters, which immediately implies a necessary and sufficient inequality restriction for existence of the 4th moment. Because the inequality restriction is easily computed in a finite number of basic arithmetic operations on the GARCH parameters and does not require computing eigenvalues, it provides an easy means for computing “by hand” the 4th moment and for checking its existence for low-dimensional GARCH processes. Finally, the paper illustrates the computations with some GARCH(1,1) processes reported in the literature.
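
For the Gaussian GARCH(1,1) case, the inequality restriction and the moment expression specialize to well-known closed forms, which the sketch below implements; the chapter's restriction covers general GARCH(p,q), and the parameter values in the example are illustrative:

```python
def garch11_fourth_moment(omega, alpha, beta):
    """4th moment of y_t for a Gaussian GARCH(1,1): it exists iff
    3*alpha^2 + 2*alpha*beta + beta^2 < 1, in which case
    E[y^4] = kurtosis * E[y^2]^2 with the closed forms below."""
    s = alpha + beta
    if s >= 1.0:
        raise ValueError("2nd moment does not exist (alpha + beta >= 1)")
    if 3.0 * alpha**2 + 2.0 * alpha * beta + beta**2 >= 1.0:
        raise ValueError("4th moment does not exist")
    m2 = omega / (1.0 - s)                                    # E[y_t^2]
    kurt = 3.0 * (1.0 - s**2) / (1.0 - s**2 - 2.0 * alpha**2)
    return kurt * m2**2                                       # E[y_t^4]

# Illustrative parameters (alpha=0.05, beta=0.90 satisfy the inequality):
print(garch11_fourth_moment(omega=1e-6, alpha=0.05, beta=0.90))
```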

DOI: 10.1016/S0731-9053(2006)20_Part_1
Book series: Advances in Econometrics
Series copyright holder: Emerald Publishing Limited
ISBN: 978-0-76231-274-0
eISBN: 978-1-84950-389-1
Book series ISSN: 0731-9053