30th Anniversary Edition: Volume 30


On March 24, 2012, we had one of the greatest surprises of our lives. We walked into the Lod Cook Conference Center on the LSU campus to find scores of people, including our friend, mentor, and major professor Stanley R. Johnson, and many old colleagues and students. This event, in honor of the 30th volume of Advances in Econometrics, was engineered by our friend Dek Terrell, with the aid of his spouse, Dannielle Lewis, our spouses Nancy Fomby and Melissa Waters, Julianna Richard of LSU, Daniel Millimet of SMU, and our former Emerald Press editor Emma Whitfield. True to our nature, we were oblivious to the nine months of planning and work that had gone into preparing the event. These cunning and diabolical people, obviously skilled in hiding the truth and keeping poker faces through it all, had organized a surprise conference! Not a birthday party, not a dinner, but a weekend conference with participants from all over the world! We are amazed and honored.

The collection of chapters in this 30th volume of Advances in Econometrics provides a well-deserved tribute to Thomas B. Fomby and R. Carter Hill, who have served as editors of the Advances in Econometrics series for 25 and 21 years, respectively. Volume 30 contains a more varied collection of chapters than previous volumes, in essence mirroring the wide variety of econometric topics covered by the series over 30 years. Volume 30 starts with a chapter discussing the history of this series over the last 30 years. The next five chapters can be broadly categorized as focusing on model specification and testing. Following this section are three contributions that examine instrumental variables models in quite different settings. The next four chapters focus on applied macroeconomics topics. The final chapter offers a practical guide to conducting Monte Carlo simulations.

Advances in Econometrics is a series of research annuals first published in 1982 by JAI Press. In this chapter, we present a brief history of the series over its first 30 years. We describe key events in the history of the series and give information about the key contributors: editors, editorial board members, Advances in Econometrics Fellows, and authors who have contributed to its great success.

A Monte Carlo experiment is used to examine the size and power properties of alternative Bayesian tests for unit roots. Four different prior distributions for the root that is potentially unity – a uniform prior and priors attributable to Jeffreys, Lubrano, and Berger and Yang – are used in conjunction with two testing procedures: a credible interval test and a Bayes factor test. Two extensions are also considered: a test based on model averaging with different priors and a test with a hierarchical prior for a hyperparameter. The tests are applied to both trending and non-trending series. Our results favor the use of a prior suggested by Lubrano. Outcomes from applying the tests to some Australian macroeconomic time series are presented.
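
As a rough illustration of the mechanics (not the chapter's own procedure or priors), the sketch below runs a small Monte Carlo for a credible-interval unit root test under a flat prior, where the posterior for the autoregressive root is approximated by a normal centered at the OLS estimate. The sample size, replication count, and normal approximation are all assumptions for illustration.

```python
# Minimal sketch: Bayesian credible-interval unit root test under a flat
# prior, with a Monte Carlo estimate of size (rho = 1) and power (rho = 0.9).
# Under a flat prior, the posterior for rho in y_t = rho*y_{t-1} + e_t is
# approximately N(rho_ols, s^2 / sum(y_{t-1}^2)).
import numpy as np

rng = np.random.default_rng(0)

def credible_interval_rejects(y, z=1.959963984540054):
    """Reject the unit root if rho = 1 lies outside the 95% posterior interval."""
    ylag, ycur = y[:-1], y[1:]
    rho_hat = (ylag @ ycur) / (ylag @ ylag)
    resid = ycur - rho_hat * ylag
    s2 = resid @ resid / (len(ycur) - 1)
    sd = np.sqrt(s2 / (ylag @ ylag))
    return not (rho_hat - z * sd <= 1.0 <= rho_hat + z * sd)

def simulate(rho, T=200):
    y = np.zeros(T)
    e = rng.standard_normal(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    return y

for rho in (1.0, 0.9):
    rej = np.mean([credible_interval_rejects(simulate(rho)) for _ in range(2000)])
    print(f"rho = {rho}: rejection frequency = {rej:.3f}")
```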

In this chapter we demonstrate the construction of inverse test confidence intervals for the turning-points in estimated nonlinear relationships using the marginal, or first-derivative, function. First, we outline the inverse test confidence interval approach. Then we examine the relationship between the inverse test intervals and traditional Wald-based confidence intervals for the turning-points of a cubic, a quartic, and fractional polynomials estimated via regression analysis. We show that confidence interval plots of the marginal function can be used to estimate confidence intervals for the turning-points that are equivalent to the inverse test. We also provide a method for interpreting confidence intervals for the second-derivative function to draw inferences about the characteristics of the turning-point.

This method is applied to the examination of the turning-points found when estimating a quartic and a fractional polynomial from data used for the estimation of an Environmental Kuznets Curve. The Stata do files used to generate these examples are listed in Appendix A along with the data.
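
The chapter's own implementation is in Stata; as a hedged Python analogue, the sketch below fits a cubic by OLS on simulated data, computes the marginal (first-derivative) function with delta-method standard errors on a grid, and reports the x-ranges where zero is inside the pointwise interval for the derivative, which is the inverse-test interval for a turning point. The data-generating process and grid are illustrative assumptions.

```python
# Sketch of an inverse-test confidence interval for turning points via the
# marginal function of a fitted cubic. Simulated data stand in for the
# chapter's Environmental Kuznets Curve application.
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 10, n)
y = 1 + 2.5 * x - 0.6 * x**2 + 0.035 * x**3 + rng.standard_normal(n)

X = np.column_stack([np.ones(n), x, x**2, x**3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
V = s2 * np.linalg.inv(X.T @ X)                 # OLS covariance matrix

grid = np.linspace(x.min(), x.max(), 1000)
G = np.column_stack([np.zeros_like(grid), np.ones_like(grid),
                     2 * grid, 3 * grid**2])    # gradient of dy/dx wrt beta
m = G @ beta                                    # estimated marginal function
se = np.sqrt(np.einsum("ij,jk,ik->i", G, V, G))
inside = np.abs(m / se) <= 1.96                 # H0: m(x) = 0 not rejected

# report contiguous runs of grid points in the acceptance region
idx = np.flatnonzero(inside)
for r in np.split(idx, np.where(np.diff(idx) > 1)[0] + 1):
    if r.size:
        print(f"turning-point interval: [{grid[r[0]]:.3f}, {grid[r[-1]]:.3f}]")
```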

We analyze Lagrange Multiplier (LM) tests for a shift in trend of a univariate time series at an unknown date. We focus on the class of LM statistics based on nonparametric kernel estimates of the long run variance. Extending earlier work for models with nontrending data, we develop a fixed-b asymptotic theory for the statistics. The fixed-b theory suggests that, for a given statistic, kernel, and significance level, there usually exists a bandwidth such that the fixed-b asymptotic critical value is the same for both I(0) and I(1) errors. These “robust” bandwidths are calculated using simulation methods for a selection of well-known kernels. We find that when the robust bandwidth is used, the supremum statistic configured with either the Bartlett or Daniell kernel gives LM tests with good power. When testing for a slope change, we obtain the surprising finding that less trimming of potential shift dates leads to higher power, which contrasts with the usual relationship between trimming and power. Finite sample simulations indicate that the robust LM statistics have stable size and good power.
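
A core ingredient of these statistics is the kernel long-run variance estimator with the bandwidth held at a fixed fraction b of the sample size. The sketch below implements that ingredient only, for the Bartlett kernel; the bandwidth fraction and MA(1) test case are assumptions for illustration, not the chapter's tabulated robust values.

```python
# Minimal sketch: Bartlett-kernel long-run variance with fixed-b bandwidth
# M = floor(b*T), the device underlying the fixed-b asymptotics.
import numpy as np

def bartlett_lrv(u, b=0.1):
    """Long-run variance of u using Bartlett weights and bandwidth b*T."""
    u = np.asarray(u) - np.mean(u)
    T = len(u)
    M = max(1, int(np.floor(b * T)))
    lrv = u @ u / T                       # lag-0 autocovariance
    for j in range(1, M):
        w = 1.0 - j / M                   # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j] / T)
    return lrv

rng = np.random.default_rng(2)
e = rng.standard_normal(500)
u = np.convolve(e, [1, 0.5])[:500]        # MA(1) errors: true LRV = (1+0.5)^2
print("estimated LRV:", round(bartlett_lrv(u, b=0.1), 3), "(true value 2.25)")
```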

In this chapter we provide analytical and Monte Carlo evidence that Chow and Predictive tests can be consistent against alternatives that allow structural change to occur at either end of the sample. Attention is restricted to linear regression models that may have a break in the intercept. The results are based on a novel reparameterization of the actual and potential break point locations. Standard methods parameterize both of these locations as fixed fractions of the sample size. We parameterize these locations as more general integer-valued functions. Power at the ends of the sample is evaluated by letting both locations, as a percentage of the sample size, converge to 0 or 1. We find that for a potential break point function, the tests are consistent against alternatives that converge to 0 or 1 at sufficiently slow rates and are inconsistent against alternatives that converge sufficiently quickly. Monte Carlo evidence supports the theory though large samples are sometimes needed for reasonable power.
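
To fix ideas, the sketch below computes a standard Chow F-test for an intercept break at a single known candidate date near the end of the sample; the chapter's asymptotic analysis of break locations converging to the sample endpoints is not reproduced, and the data-generating process and break date here are assumptions.

```python
# Sketch: Chow test for an intercept break at a given date, comparing the
# restricted regression (constant intercept) against one with a post-break
# intercept dummy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T, k_break = 200, 190                     # break near the end of the sample
x = rng.standard_normal(T)
y = 1.0 + 0.5 * x + rng.standard_normal(T)
y[k_break:] += 2.0                        # intercept shift in the last 10 obs

d = (np.arange(T) >= k_break).astype(float)
X_r = np.column_stack([np.ones(T), x])    # restricted: no break
X_u = np.column_stack([np.ones(T), x, d]) # unrestricted: break dummy

ssr = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
F = (ssr(X_r) - ssr(X_u)) / (ssr(X_u) / (T - X_u.shape[1]))
p = 1 - stats.f.cdf(F, 1, T - X_u.shape[1])
print(f"Chow F = {F:.2f}, p-value = {p:.4f}")
```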

We examine the Stein-rule shrinkage estimator for possible improvements in estimation and forecasting when there are many predictors in a linear time series model. We consider the Stein-rule estimator of Hill and Judge (1987) that shrinks the unrestricted unbiased ordinary least squares (OLS) estimator toward a restricted biased principal component (PC) estimator. Since the Stein-rule estimator combines the OLS and PC estimators, it is a model-averaging estimator and produces a combined forecast. The conditions under which the improvement can be achieved depend on several unknown parameters that determine the degree of the Stein-rule shrinkage. We conduct Monte Carlo simulations to examine these parameter regions. The overall picture that emerges is that the Stein-rule shrinkage estimator can dominate both OLS and principal components estimators within an intermediate range of the signal-to-noise ratio. If the signal-to-noise ratio is low, the PC estimator is superior. If the signal-to-noise ratio is high, the OLS estimator is superior. In out-of-sample forecasting with AR(1) predictors, the Stein-rule shrinkage estimator can dominate both OLS and PC estimators when the predictors exhibit low persistence.
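
As a hedged sketch of the combination idea, the code below shrinks OLS toward a principal-components estimator using a Stein-type weight based on a scaled distance between the two estimators. The shrinkage constant and weight rule are illustrative assumptions, not the exact Hill and Judge (1987) formula.

```python
# Sketch: Stein-rule shrinkage of OLS toward a principal-components (PC)
# estimator in a many-predictor regression.
import numpy as np

rng = np.random.default_rng(4)
n, p, k = 200, 10, 3                       # n obs, p predictors, k components
X = rng.standard_normal((n, p)) @ np.diag(np.linspace(2, 0.2, p))
beta = np.zeros(p); beta[:3] = 1.0
y = X @ beta + rng.standard_normal(n)

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# PC estimator: regress on the first k principal components, map back
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F = X @ Vt[:k].T                           # component scores
g, *_ = np.linalg.lstsq(F, y, rcond=None)
b_pc = Vt[:k].T @ g

# Stein-type combination: move from PC toward OLS by a weight 1 - a/u
resid = y - X @ b_ols
s2 = resid @ resid / (n - p)
diff = b_ols - b_pc
u = diff @ (X.T @ X) @ diff / s2           # scaled OLS-PC distance
a = p - k - 2                              # illustrative shrinkage constant
w = max(0.0, 1.0 - a / u)                  # truncated at zero
b_stein = b_pc + w * diff
print("weight on the OLS direction:", round(w, 3))
```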

This chapter studies the asymptotic properties of within-groups k-class estimators in a panel data model with weak instruments. Weak instruments are characterized by the coefficients of the instruments in the reduced form equation shrinking to zero at a rate proportional to (nT)^(−δ), where n is the dimension of the cross-section and T is the dimension of the time series. Joint limits as (n, T) → ∞ show that the within-groups k-class estimator is consistent if 0 ≤ δ < 1/2 and inconsistent if 1/2 ≤ δ < ∞.
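
The sketch below sets up this weak-instrument design on simulated data: reduced-form coefficients proportional to (nT)^(−δ), within-groups demeaning, then just-identified 2SLS (the k-class member with k = 1). The dimensions, δ value, and error correlation are assumptions for illustration.

```python
# Sketch: within-groups 2SLS under instrument strength shrinking with (nT).
import numpy as np

rng = np.random.default_rng(5)
n, T, delta = 200, 20, 0.25                # delta < 1/2: the consistent region
pi = 1.0 * (n * T) ** (-delta)             # reduced-form coefficient

alpha = rng.standard_normal(n)             # individual effects
z = rng.standard_normal((n, T))
v = rng.standard_normal((n, T))
u = 0.8 * v + rng.standard_normal((n, T))  # endogeneity via corr(u, v)
x = pi * z + v + alpha[:, None]
y = 1.0 * x + u + alpha[:, None]           # true slope = 1

within = lambda a: (a - a.mean(axis=1, keepdims=True)).ravel()
xw, yw, zw = within(x), within(y), within(z)
beta_2sls = (zw @ yw) / (zw @ xw)          # just-identified within 2SLS
print("within-groups 2SLS estimate of beta (true 1.0):", round(beta_2sls, 3))
```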

In the context of competing IV econometric models and estimators, we demonstrate a semiparametric Stein-like estimator (SSLE) that, under quadratic loss, has superior risk performance. The method eliminates the need to pretest for covariate endogeneity and thereby makes a pretest-based choice between IV and non-IV estimators unnecessary. A sampling study illustrates finite sample performance over a range of sampling designs, including performance relative to pretest estimators. An important applied problem from the literature is analyzed to indicate possible applied implications and the relation of the SSLE to other modern IV estimators.
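
As a rough sketch of the idea of combining rather than pretesting, the code below blends OLS and IV estimates of a single coefficient with a weight driven by a Hausman-type discrepancy statistic: when the estimators disagree sharply, more weight goes to IV. The combination rule and constant are illustrative assumptions; the chapter's semiparametric SSLE is more general.

```python
# Sketch: Stein-like combination of OLS and IV via a Hausman-type statistic.
import numpy as np

rng = np.random.default_rng(6)
n = 500
z = rng.standard_normal(n)
v = rng.standard_normal(n)
x = 0.5 * z + v
u = 0.6 * v + rng.standard_normal(n)       # endogeneity through v
y = 1.0 * x + u                            # true slope = 1

b_ols = (x @ y) / (x @ x)
b_iv = (z @ y) / (z @ x)

# Hausman-type statistic for the OLS-IV discrepancy
s2 = np.mean((y - x * b_iv) ** 2)
var_iv = s2 * (z @ z) / (z @ x) ** 2
var_ols = s2 / (x @ x)
H = (b_iv - b_ols) ** 2 / max(var_iv - var_ols, 1e-12)

a = 1.0                                    # illustrative shrinkage constant
w = min(1.0, a / H)                        # weight on the efficient OLS
b_ssle = w * b_ols + (1 - w) * b_iv
print(f"OLS {b_ols:.3f}, IV {b_iv:.3f}, Stein-like combination {b_ssle:.3f}")
```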

Most spatial econometrics work focuses on spatial dependence in the regressand or disturbances. However, LeSage and Pace (2009) as well as Pace and LeSage (2009) showed that the bias in β from applying OLS to a regressand generated from a spatial autoregressive process was exacerbated by spatial dependence in the regressor. Also, the marginal likelihood function or restricted maximum likelihood (REML) function includes a determinant term involving the regressors. Therefore, high dependence in the regressor may affect the likelihood through this term. In addition, Bowden and Turkington (1984) showed that regressor temporal autocorrelation had a non-monotonic effect on instrumental variable estimators.

We provide empirical evidence that many common economic variables used as regressors (e.g., income, race, and employment) exhibit high levels of spatial dependence. Based on this observation, we conduct a Monte Carlo study of maximum likelihood (ML), REML and two instrumental variable specifications for spatial autoregressive (SAR) and spatial Durbin models (SDM) in the presence of spatially correlated regressors.

Findings indicate that as spatial dependence in the regressor rises, REML outperforms ML and the performance of the instrumental variable methods suffers. The combination of correlated regressors and the SDM specification provides a challenging environment for instrumental variable techniques.

We also examine estimates of the marginal effects and show that these behave better than the estimates of the underlying model parameters from which they are constructed. Suggestions for improving the design of Monte Carlo experiments are provided.
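
The sketch below reproduces only the core ingredient of such an experiment: a spatially dependent regressor and a SAR regressand generated on a row-normalized contiguity matrix, with a Monte Carlo loop showing the OLS bias in β. The weight matrix (neighbors on a line), parameter values, and replication count are assumptions for illustration; the ML, REML, and IV estimators compared in the chapter are not implemented here.

```python
# Sketch: OLS bias in a SAR model with a spatially correlated regressor.
import numpy as np

rng = np.random.default_rng(7)
n, rho_x, lam, beta = 400, 0.8, 0.6, 1.0

# row-normalized weight matrix: neighbors are adjacent units on a line
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

A_x = np.linalg.inv(np.eye(n) - rho_x * W)   # spatial filter for the regressor
A_y = np.linalg.inv(np.eye(n) - lam * W)     # spatial filter for the SAR process

est = []
for _ in range(200):
    x = A_x @ rng.standard_normal(n)         # spatially dependent regressor
    y = A_y @ (beta * x + rng.standard_normal(n))
    est.append((x @ y) / (x @ x))            # OLS slope (no intercept)
print(f"mean OLS estimate of beta (true {beta}): {np.mean(est):.3f}")
```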

In this chapter, using a combination of long-run and sign restrictions to identify aggregate monetary and productivity factors, I find that the monetary factor is responsible for long swings in nominal variables but has little effect on fluctuations in output, real wage, or labor input growth. The productivity factor, in addition to increasing output growth and real wage growth in the short and long run, also increases labor input and decreases prices, though its quantitative effect on labor input is relatively small. These results are robust to the number of factors included in the model, to alternative priors about the short-run effects of the monetary factor, and to the inclusion of oil prices. Oil prices, in fact, appear to be largely driven by the other aggregate factors.

In this chapter, we examine the relationship between the cyclical components of output, the price level, and the inflation rate. During the post-war period, there is a negative correlation between output and the price level and a positive correlation between output and the inflation rate. A phase shift in the cyclical component between output and the price level can account for these two facts. The phase shift is consistent with movements in the price level Granger-causing movements in output. In addition, we consider time-varying correlations between the two pairs of series. Spectral analysis suggests that the price level and output have different wavelengths, but the difference is not statistically significant.
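
As a hedged illustration of detecting such a phase shift, the sketch below computes lead-lag correlations between simulated cyclical components of output and the price level and locates the lag at which the absolute correlation peaks. The simulated series, crude moving-average detrending, and lag window are assumptions standing in for the chapter's data and filters.

```python
# Sketch: lead-lag correlations to detect a phase shift between the
# cyclical components of output and the price level.
import numpy as np

rng = np.random.default_rng(8)
T, shift = 400, 6
c = np.cumsum(rng.standard_normal(T + shift))          # common cyclical driver
cycle = c - np.convolve(c, np.ones(21) / 21, "same")   # crude detrending
output = cycle[:T] + 0.3 * rng.standard_normal(T)
price = -cycle[shift:] + 0.3 * rng.standard_normal(T)  # countercyclical, leads

def corr_at_lag(a, b, k):
    """corr(a_t, b_{t-k}); positive k means b leads a by k periods."""
    if k > 0:
        return np.corrcoef(a[k:], b[:-k])[0, 1]
    if k < 0:
        return np.corrcoef(a[:k], b[-k:])[0, 1]
    return np.corrcoef(a, b)[0, 1]

lags = list(range(-12, 13))
cc = [corr_at_lag(output, price, k) for k in lags]
k_star = lags[int(np.argmax(np.abs(cc)))]
print(f"peak |correlation| at lag {k_star}: a positive lag with a negative "
      f"sign is consistent with a countercyclical, leading price level")
```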

The causal relationship between money and income (output) has been an important topic and has been extensively studied. However, those empirical studies focus almost entirely on Granger-causality in the conditional mean. Compared to the conditional mean, conditional quantiles give a broader picture of an economy in various scenarios. In this chapter, we explore whether forecasting conditional quantiles of output growth can be improved using money growth information. We compare the check loss values of quantile forecasts of output growth with and without using past information on money growth, and assess the statistical significance of the loss differentials. Using U.S. monthly series of real personal income or industrial production for income and output, and M1 or M2 for money, we find that out-of-sample quantile forecasting for output growth is significantly improved by accounting for past money growth information, particularly in the tails of the conditional distribution of output growth. On the other hand, money–income Granger-causality in the conditional mean is quite weak and unstable. These empirical findings have not previously been observed in the money–income literature. The new results have an important implication for monetary policy: they imply that the effectiveness of monetary policy has been underestimated by testing Granger-causality only in the conditional mean. Money Granger-causes income more strongly than previously known, and information on money growth can (and should) be utilized more fully in implementing monetary policy.
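
A minimal sketch of the check-loss comparison appears below: quantile forecasts of output growth from models with and without lagged money growth, evaluated out of sample with the pinball (check) loss. Simulated data replace the U.S. series, money is made to move the scale of output growth (so the effect shows up in the tails rather than the mean), and a single sample split replaces the chapter's forecasting scheme and significance tests.

```python
# Sketch: out-of-sample check-loss comparison of quantile forecasts with
# and without lagged money growth, using statsmodels' QuantReg.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(9)
T = 600
money = rng.standard_normal(T)
eps = rng.standard_normal(T)
output = np.zeros(T)
for t in range(1, T):
    # lagged money moves the scale (hence the tails) of output growth
    output[t] = 0.2 + 0.3 * output[t - 1] + np.exp(0.3 * money[t - 1]) * eps[t]

def check_loss(y, pred, tau):
    e = y - pred
    return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

tau, split = 0.1, 300
y = output[1:]                                   # one-step-ahead target
X0 = sm.add_constant(output[:-1])                # own lag only
X1 = sm.add_constant(np.column_stack([output[:-1], money[:-1]]))

for name, X in (("without money", X0), ("with money", X1)):
    fit = QuantReg(y[:split], X[:split]).fit(q=tau)
    pred = X[split:] @ fit.params
    print(f"{name}: out-of-sample check loss = "
          f"{check_loss(y[split:], pred, tau):.4f}")
```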

We propose tail-dependence dynamics for the symmetrized Joe–Clayton copula that depend on an exponentially weighted moving average (EWMA) of the absolute difference in probability integral transforms. Using these dynamics, time-varying tail dependence between bank and insurance equity prices is assessed in a parametric copula, generalized autoregressive conditional heteroscedastic framework. The results suggest a relatively long lag and support the EWMA lag structure as an effective estimation vehicle. Tail dependence is often found to rise during periods of market stress.
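
The sketch below isolates the proposed forcing variable only: the tail-dependence parameter is a logistic transform of an EWMA of |u_t − v_t|, the absolute difference of the two probability integral transforms, so that closer comovement implies higher dependence. The coefficients, the EWMA decay, and the simulated PIT series are assumptions; the full model embeds this dynamic in a symmetrized Joe–Clayton copula with GARCH margins, which is not reproduced here.

```python
# Sketch: EWMA-driven tail-dependence path from probability integral
# transforms (PITs) u, v in (0, 1).
import numpy as np

def ewma_tail_dependence(u, v, omega=-1.0, alpha=-4.0, lam=0.94):
    """Logistic(omega + alpha * EWMA of |u - v|); alpha < 0 so that small
    |u - v| (strong comovement) maps to high tail dependence."""
    logistic = lambda x: 1.0 / (1.0 + np.exp(-x))
    m = np.abs(u[0] - v[0])
    tau = np.empty(len(u))
    for t in range(len(u)):
        if t > 0:
            m = lam * m + (1 - lam) * np.abs(u[t] - v[t])   # EWMA update
        tau[t] = logistic(omega + alpha * m)
    return tau

rng = np.random.default_rng(10)
u = rng.uniform(size=500)
v = np.clip(u + 0.3 * rng.standard_normal(500), 1e-6, 1 - 1e-6)  # comoving PITs
tau = ewma_tail_dependence(u, v)
print(f"tail dependence path: mean {tau.mean():.3f}, "
      f"min {tau.min():.3f}, max {tau.max():.3f}")
```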

Monte Carlo simulations are a very powerful way to demonstrate the basic sampling properties of various statistics in econometrics. The commercial software package Stata makes these methods accessible to a wide audience of students and practitioners. The purpose of this chapter is to present a self-contained primer for conducting Monte Carlo exercises as part of an introductory econometrics course. More experienced econometricians who are new to Stata may find this useful as well. Many examples are given that can be used as templates for various exercises. Examples include linear regression, confidence intervals, the size and power of t-tests, lagged dependent variable models, heteroskedastic and autocorrelated regression models, instrumental variables estimators, binary choice, censored regression, and nonlinear regression models. Stata do-files for all examples are available from the authors' website http://learneconometrics.com/pdf/MCstata/.
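
For readers outside Stata, the sketch below is a Python analogue of the kind of exercise the chapter implements in Stata: a Monte Carlo on the size of the 5% t-test for a regression slope. The sample size and replication count are arbitrary choices; under a correctly specified model, the rejection frequency should be close to 0.05.

```python
# Sketch: Monte Carlo estimate of the empirical size of the 5% t-test
# for a regression slope whose true value is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, reps, reject = 50, 5000, 0

for _ in range(reps):
    x = rng.standard_normal(n)
    y = 1.0 + 0.0 * x + rng.standard_normal(n)   # true slope is zero
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    reject += abs(b[1] / se) > stats.t.ppf(0.975, n - 2)

print(f"empirical size of the 5% t-test: {reject / reps:.3f}")
```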

DOI: 10.1108/S0731-9053(2012)30
Publication date: 2012
Book series: Advances in Econometrics
Editors: Dek Terrell and Daniel Millimet
Series copyright holder: Emerald Publishing Limited
ISBN: 978-1-78190-309-4
eISBN: 978-1-78190-310-0
Book series ISSN: 0731-9053