Essays in Honor of Jerry Hausman: Volume 29


Table of contents

(25 chapters)

I would like to thank Carter Hill and other people at LSU who helped organize a very enjoyable conference on the Hausman Specification Test in February 2012. Many of the chapters in this volume were presented at the conference. I was pleased to be around many friends at the conference, and I found the chapters very interesting. I especially appreciate the chapter by Professor Hector Zapata and Ms. Cristina Caminita, which considers the diffusion of my econometric ideas. In particular, I did not know that these techniques were so widely used in other disciplines. I found their approach very innovative and very interesting.

We are pleased to introduce Advances in Econometrics Volume 29: Essays in Honor of Jerry Hausman. This volume contains research papers on the theory and practice of econometrics that are linked to, related to, or inspired by the work of Jerry Hausman. We have divided the contributions into three sections: Estimation, Panel Data, and Specification Testing. A visit to Professor Hausman's web page (http://economics.mit.edu/faculty/hausman) will show that he has published extensively in all three areas. His remarkable influence is outlined in “The Diffusion of Hausman's Econometric Ideas” by Zapata and Caminita. Their paper is presented first, before the sections, as it examines the diffusion of Jerry Hausman's econometric ideas using citation counts, citing authors, and source journals of his most referenced citers.

This paper examines the diffusion of Jerry Hausman's econometric ideas using citation counts, citing authors, and source journals of his most referenced citers. Bibliographic information and citation counts of references to econometrics papers were retrieved from Thomson Reuters Web of Science and analyzed to determine the various ways in which Hausman's ideas have spread in econometrics and related disciplines. Econometric growth analysis (Gompertz and logistic functions) is used to measure the diffusion of his contributions. This analysis reveals that the diffusion of Hausman's ideas has been pervasive over time and across disciplines. For example, his seminal 1978 paper continues to be heavily cited, with total cites still growing roughly exponentially, mainly in econometrics but also in fields such as administrative management, human resources, and psychology. Some of the more recent papers have a growth pattern that resembles that of the 1978 paper. This leads us to conclude that Hausman's econometric contributions will continue to diffuse in years to come. It was also found that five journals have published the bulk of the top cited papers that list Hausman as a reference, namely, Econometrica, Journal of Econometrics, Review of Economic Studies, Academy of Management Journal, and the Journal of Economic Literature. “Specification tests in econometrics” is Hausman's dominant contribution in this citation analysis. We found no previous research on the econometric modeling of citation counts as done in this paper. Thus, we expect to stimulate methodological improvements in future work.
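
The chapter's exact growth-curve specifications are not reproduced in this abstract; as a point of reference, the standard Gompertz and logistic diffusion curves, typically fitted to cumulative citation counts, take the form

C(t) = K \exp\{-b e^{-ct}\} \quad \text{(Gompertz)}, \qquad C(t) = \frac{K}{1 + b e^{-ct}} \quad \text{(logistic)},

where C(t) is the cumulative citation count at time t, K is the saturation level, and b and c govern the location and speed of the diffusion process.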

This chapter shows how a weighted average of a forward and reverse Jackknife IV estimator (JIVE) yields estimators that are robust to heteroskedasticity and many instruments. These estimators, called HFUL (heteroskedasticity-robust Fuller) and HLIM (heteroskedasticity-robust limited information maximum likelihood, LIML), were introduced by Hausman, Newey, Woutersen, Chao, and Swanson (2012), but without derivation. Combining consistent estimators is a theme that is associated with Jerry Hausman and, therefore, we present this derivation in this volume. Additionally, in order to further understand and interpret HFUL and HLIM in the context of jackknife-type variance ratio estimators, we show that a new variant of HLIM, under specific grouped-data settings with dummy instruments, simplifies to the Bekker and van der Ploeg (2005) MM (method of moments) estimator.
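
For readers less familiar with jackknife instrumental variables, the basic idea in one common form (with regressor vectors X_i, instrument matrix Z, and projection matrix P = Z(Z'Z)^{-1}Z') is to delete the own-observation terms from the 2SLS quadratic forms:

\hat{\beta}_{\text{JIVE}} = \Big(\sum_{i \neq j} X_i P_{ij} X_j'\Big)^{-1} \sum_{i \neq j} X_i P_{ij} y_j .

Deleting the i = j terms removes the own-observation bias that makes 2SLS inconsistent under many-instrument asymptotics; HLIM and HFUL are built from similar own-observation-deleted quadratic forms, so this display is only a stylized reference point, not the chapter's derivation.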

In the context of competing theoretical economic–econometric models and corresponding estimators, we demonstrate a semiparametric combining estimator that, under quadratic loss, has superior risk performance. The method eliminates the need for pretesting to decide between members of the relevant family of econometric models and demonstrates, under quadratic loss, the nonoptimality of the conventional pretest estimator. First-order asymptotic properties of the combined estimator are demonstrated. A sampling study is used to illustrate finite sample performance over a range of econometric model sampling designs that includes performance relative to a Hausman-type model selection pretest estimator. An important empirical problem from the causal effects literature is analyzed to indicate the applicability and econometric implications of the methodology. This combining estimation and inference framework can be extended to a range of models and corresponding estimators. The combining estimator is novel in that it directly provides minimum quadratic loss solutions.
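
A stylized version of the combining idea (using notation not taken from the chapter): given two competing estimators \hat{\beta}_1 and \hat{\beta}_2 of the same parameter, form the convex combination

\hat{\beta}(\alpha) = \alpha \hat{\beta}_1 + (1 - \alpha)\hat{\beta}_2, \qquad \alpha \in [0, 1],

and choose \alpha to minimize an estimate of the quadratic risk E[(\hat{\beta}(\alpha) - \beta)'(\hat{\beta}(\alpha) - \beta)], rather than discretely selecting one of the two estimators by a pretest.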

In a recent paper, Hausman, Newey, Woutersen, Chao, and Swanson (2012) propose a new estimator, HFUL (heteroskedasticity-robust Fuller), for the linear model with endogeneity. This estimator is consistent and asymptotically normally distributed under both many-instrument and many-weak-instrument asymptotics. Moreover, this estimator has moments, just like the estimator of Fuller (1977). The purpose of this note is to discuss at greater length the existence-of-moments result given in Hausman et al. (2012). In particular, we intend to answer the following questions: Why does LIML not have moments? Why does the Fuller modification lead to estimators with moments? Is normality required for the Fuller estimator to have moments? Why do we need a condition such as Hausman et al. (2012), Assumption 9? Why do we have the adjustment formula?
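
As background, in standard k-class notation (not specific to this note): with P_Z = Z(Z'Z)^{-1}Z' and M_Z = I - P_Z, a k-class estimator is

\hat{\beta}(k) = \big[X'(I - k M_Z)X\big]^{-1} X'(I - k M_Z)y .

LIML corresponds to k = \hat{\kappa}, the smallest root of a determinantal equation in the data, while Fuller (1977) uses k = \hat{\kappa} - C/(n - K) for a positive constant C, with K the number of instruments; it is this downward adjustment of \hat{\kappa} that underlies the existence-of-moments results discussed in the note.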

Principal component (PC) techniques are commonly used to improve the small sample properties of the linear instrumental variables (IV) estimator. Carrasco (2012) argues that PC-type methods provide a natural ranking of instruments with which to reduce the size of the instrument set. This chapter shows how reducing the size of the instrument set based on PC methods can lead to poor small sample properties of IV estimators. A new approach to ordering instruments, termed ‘normalized principal components’ (NPCs), is introduced to overcome this problem. A simulation study shows the favourable small sample properties of IV estimators when NPC methods, rather than PC methods, are used to reduce the size of the instrument set. Using NPC we provide evidence that the IV setup in Angrist and Krueger (1992) may not suffer from the weak instruments problem.
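
For context only, the following sketches the standard PC reduction of the instrument set that the chapter critiques (it does not implement the proposed NPC ordering, whose normalization is not reproduced here). It is a minimal numpy sketch assuming hypothetical arrays y, X (regressors including the endogenous ones), and Z (candidate excluded instruments, no constant column).

```python
import numpy as np

def pc_instruments(Z, k):
    """First k principal components of the standardized candidate instruments Z."""
    Zs = (Z - Z.mean(axis=0)) / Z.std(axis=0)          # standardize each instrument
    _, _, Vt = np.linalg.svd(Zs, full_matrices=False)  # SVD gives the PC directions
    return Zs @ Vt[:k].T                               # scores on the leading k components

def tsls(y, X, W):
    """Two-stage least squares of y on X using the instrument matrix W."""
    Pw = W @ np.linalg.solve(W.T @ W, W.T)             # projection onto the instrument space
    return np.linalg.solve(X.T @ Pw @ X, X.T @ Pw @ y)

# usage (hypothetical data): beta_hat = tsls(y, X, pc_instruments(Z, k=3))
```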

In this chapter we perform a Monte Carlo simulation study of the errors-in-variables model examined in Ramsey, Gallegati, Gallegati, and Semmler (2010) by using a wavelet multiresolution approximation approach. Unlike previous studies applying wavelets to the errors-in-variables problem, we use a sequence of multiresolution approximations of the variable measured with error, ranging from finer to coarser scales. Our results indicate that multiscale approximations to the variable observed with error based on the coarser scales provide an unbiased, asymptotically efficient estimator that also possesses good finite sample properties.
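
A minimal sketch of the general idea, assuming the PyWavelets package and a hypothetical mismeasured regressor x_obs; the wavelet filter, number of scales, and estimator details used in the chapter are not reproduced here.

```python
import numpy as np
import pywt

def coarse_approximation(x, wavelet="db4", level=4):
    """Multiresolution approximation of x that keeps only the coarse (smooth) part."""
    coeffs = pywt.wavedec(x, wavelet, level=level)        # [cA_level, cD_level, ..., cD_1]
    coeffs[1:] = [np.zeros_like(d) for d in coeffs[1:]]   # discard all detail (fine-scale) coefficients
    return pywt.waverec(coeffs, wavelet)[: len(x)]        # reconstruct the smooth approximation

# usage (hypothetical data): regress y on the coarse approximation rather than the noisy x_obs,
# e.g. slope = np.polyfit(coarse_approximation(x_obs), y, 1)[0]
```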

This chapter suggests a robust Hausman and Taylor (1981), hereafter HT, estimator that deals with the possible presence of outliers. This entails two modifications of the classical HT estimator. The first modification uses the Bramati and Croux (2007) robust Within MS estimator instead of the Within estimator in the first stage of the HT estimator. The second modification uses the robust Wagenvoort and Waldmann (2002) two-stage generalized MS estimator instead of the 2SLS estimator in the second step of the HT estimator. Monte Carlo simulations show that, in the presence of vertical outliers or bad leverage points, the robust HT estimator performs much better in terms of MSE than its classical Hausman–Taylor counterpart. We illustrate this robust version of the HT estimator with an empirical application.

Purpose – This chapter considers a Hausman and Taylor (1981) panel data model that exhibits a Cliff and Ord (1973) spatial error structure.

Methodology/approach – We analyze the small sample properties of a generalized moments estimation approach for that model. This spatial Hausman–Taylor estimator allows the time-varying and time-invariant variables to be correlated with the individual effects. For this model, the spatial fixed effects estimator is known to be consistent, but its disadvantage is that it wipes out the effects of time-invariant variables, which are important for most empirical studies.

Findings – Monte Carlo results show that the spatial Hausman–Taylor estimator performs well in small samples.

This paper studies the estimation of quantile regression panel duration models. We allow for the possibility of endogenous covariates and correlated individual effects in the quantile regression models. We propose a quantile regression approach for panel duration models under conditionally independent censoring. The procedure involves minimizing ℓ1 convex objective functions and is motivated by a martingale property associated with survival data in models with endogenous covariates. We carry out a series of Monte Carlo simulations to investigate the small sample performance of the proposed approach in comparison with other existing methods. An empirical application of the method to the analysis of the effect of unemployment insurance on unemployment duration illustrates the approach.
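
The ℓ1 convex objective referred to is, in stylized form and suppressing the chapter's censoring and endogeneity adjustments, the usual quantile regression check-function problem

\min_{\beta,\,\alpha_1,\dots,\alpha_n} \; \sum_{i=1}^{n} \sum_{t=1}^{T_i} \rho_\tau\big(y_{it} - x_{it}'\beta - \alpha_i\big), \qquad \rho_\tau(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big),

which is convex in the parameters and can be solved by linear programming.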

The objective of this study is to investigate the simultaneity between farm couples’ decisions on labor allocation and production efficiency. Using an unbalanced panel data set of Norwegian farm households (1989–2008), we estimate off-farm labor supply of married farm couples and farm efficiency in a three-equation system of jointly determined endogenous variables. We address the issue of latent heterogeneity between households. We solve the problem by two-stage OLS and GLS estimation where state dependence is accounted for in the reduced form equations. We compare the results against simpler model specifications where we suppress censoring of off-farm labor hours and endogeneity of regressors, respectively. In the reduced form specification, a considerable number of parameters are statistically significant. A Davidson–MacKinnon test of exogeneity confirms that both the operator's and the spouse's off-farm labor supply should be treated as endogenous in estimating farming efficiency. The parameter estimates seem robust across model specifications. Off-farm labor supply of farm operators and spouses is jointly determined. Off-farm work by farm operators and spouses positively affects farming efficiency. Farming efficiency increases with operator's age, farm size, agricultural subsidies, and the share of current investment in total farm capital stock.

Debt burdens have risen for US households over the last several decades. As a result, several studies have investigated potential ethnic and gender differences in these debt burdens, along with the risks they pose. However, such estimations can be biased without correctly controlling for individual unobserved heterogeneity, and standard methods to deal with this, such as fixed effects, remove any time-invariant variables from the analysis. In this paper, I use the Hausman–Taylor (HT) estimator to estimate the relationship between these time-invariant demographics and debt burdens, allowing for potential correlation between some variables and the unobserved heterogeneity. I also consider some guidelines in determining the appropriateness of the HT estimation, both in terms of exogeneity assumptions as well as potential problems due to weak instruments. Using data from the National Longitudinal Survey of Youth 1979, the resulting estimates differ substantially from those of a typical random effects GLS estimator. In particular, the HT results find that after controlling for other variables, women are more likely to take on debt, especially nonhousing debt, but those who do take on debt tend to take on a lower amount than their male counterparts. No differences are found for black or Hispanic individuals with regard to the amount of debt, though black individuals are found to be slightly less likely to have debt.

Recently the world economy was confronted with the worst financial crisis since the Great Depression. This unprecedented crisis, which started in mid-2007, had a huge impact on the European government bond market. But what are the main drivers of this “perfect storm” that has also affected the EU government bond market since 2009? To answer this question, we propose an empirical study of the determinants of the sovereign bond spreads of EU countries with respect to Germany during the period 2003–2010. Technically, we address two main questions. First, we ask what share of the change in sovereign bond spreads is explained by changes in fundamentals, liquidity, and market risks. Second, we distinguish between EU member states within and outside the Euro area and ask whether the long-term determinants of spreads affect EU members uniformly. To these ends, we employ panel data techniques in a regression model where spreads to Germany (which carries virtually no default risk) are explained by a set of traditional variables and a number of policy variables. Results reveal that large fiscal deficits and public debt, as well as political risks and, to a lesser extent, liquidity, are likely to put substantial upward pressure on sovereign bond yields in many advanced European economies.

We provide straightforward new nonparametric methods for testing conditional independence using local polynomial quantile regression, allowing weakly dependent data. Inspired by Hausman's (1978) specification testing ideas, our methods essentially compare two collections of estimators that converge to the same limits under correct specification (conditional independence) and that diverge under the alternative. To establish the properties of our estimators, we generalize the existing nonparametric quantile literature not only by allowing for dependent heterogeneous data but also by establishing a weak consistency rate for the local Bahadur representation that is uniform in both the conditioning variables and the quantile index. We also show that, despite our nonparametric approach, our tests can detect local alternatives to conditional independence that decay to zero at the parametric rate. Our approach gives the first nonparametric tests for time-series conditional independence that can detect local alternatives at the parametric rate. Monte Carlo simulations suggest that our tests perform well in finite samples. We apply our test to test for a key identifying assumption in the literature on nonparametric, nonseparable models by studying the returns to schooling.

In this paper, we follow the same logic as in Hausman (1978) to create a testing procedure that checks for the presence of outliers by comparing a regression estimator that is robust to outliers (S-estimator), with another that is more efficient but affected by them. Some simulations are presented to illustrate the good behavior of the test for both its size and its power.
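
The statistic follows the familiar Hausman contrast form. Writing \hat{\beta}_S for the robust S-estimator and \hat{\beta}_E for the efficient but outlier-sensitive estimator, a statistic of the type

H = (\hat{\beta}_S - \hat{\beta}_E)' \big[\widehat{\mathrm{Var}}(\hat{\beta}_S) - \widehat{\mathrm{Var}}(\hat{\beta}_E)\big]^{-1} (\hat{\beta}_S - \hat{\beta}_E)

is asymptotically chi-squared under the null of no outliers, when both estimators are consistent and the efficient one attains the smaller variance; the chapter's exact variance estimator is not reproduced here.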

This chapter proposes a simple, fairly general, test for global identification of unconditional moment restrictions implied by point-identified conditional moment restrictions. The test is a Hausman-type test based on the Hausdorff distance between an estimator that is consistent even under global identification failure of the unconditional moment restrictions, and an estimator of the identified set of the unconditional moment restrictions. The proposed test has a χ2 limiting distribution and is also able to detect weak identification. Some Monte Carlo experiments show that the proposed test has competitive finite sample properties even for moderate sample sizes.

Hausman (1978) represented a tectonic shift in inference related to the specification of econometric models. The seminal insight that one could compare two models which were both consistent under the null spawned a test which was both simple and powerful. The so-called ‘Hausman test’ has been applied and extended theoretically in a variety of econometric domains. This paper discusses the basic Hausman test and its development within econometric panel data settings since its publication. We focus on the construction of the Hausman test in a variety of panel data settings, and in particular, the recent adaptation of the Hausman test to semiparametric and nonparametric panel data models. We present simulation experiments which show the value of the Hausman test in a nonparametric setting, focusing primarily on the consequences of parametric model misspecification for the Hausman test procedure. A formal application of the Hausman test is also given focusing on testing between fixed and random effects within a panel data model of gasoline demand.
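
For the classical parametric fixed- versus random-effects comparison mentioned at the end (not the semiparametric or nonparametric extensions the chapter develops), a minimal sketch of the contrast statistic, assuming coefficient vectors and covariance matrices for the time-varying regressors have already been estimated:

```python
import numpy as np
from scipy import stats

def hausman_fe_re(b_fe, b_re, V_fe, V_re):
    """Classical Hausman contrast of fixed-effects vs. random-effects estimates
    (coefficients on time-varying regressors only)."""
    d = b_fe - b_re                           # contrast between the two estimators
    V = V_fe - V_re                           # variance of the contrast under the null (RE efficient)
    stat = float(d @ np.linalg.solve(V, d))   # quadratic form d' V^{-1} d
    df = d.size
    return stat, df, 1.0 - stats.chi2.cdf(stat, df)

# usage (hypothetical estimates): stat, df, pval = hausman_fe_re(b_fe, b_re, V_fe, V_re)
# a small p-value favors the fixed-effects specification
```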

The Hausman test is used in applied economic work as a test of misspecification. It is most commonly thought of as a test of whether one or more explanatory variables in a regression model are endogenous. The usual Hausman contrast test requires one estimator to be efficient under the null hypothesis. If the data are heteroskedastic, the least squares estimator is no longer efficient, and the standard form of the test does not apply. One option is to estimate the covariance matrix of the difference of the contrasted estimators directly, as suggested by Hahn, Ham, and Moon (2011). Other options for carrying out a Hausman-like test in this case include estimating an artificial regression and using robust standard errors. Alternatively, we might seek additional power by estimating the artificial regression using feasible generalized least squares. Finally, we might stack the moment conditions leading to the two estimators and estimate the resulting system by GMM. We examine these options in a Monte Carlo experiment. We conclude that the test based on the procedure of Hahn, Ham, and Moon has good properties. The generalized least squares-based tests have higher size-corrected power when heteroskedasticity is detected in the Durbin–Wu–Hausman (DWH) regression and the heteroskedasticity is associated with a strong external IV. We do not consider the properties of the implied pretest estimator.
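
Of the options listed, the artificial (Durbin–Wu–Hausman-style) regression with robust standard errors is straightforward to sketch; the following assumes statsmodels and hypothetical arrays y, x_exog (exogenous regressors), x_endog (the suspect regressor), and z (external instruments), and is not the chapter's Monte Carlo design.

```python
import numpy as np
import statsmodels.api as sm

# First stage: regress the suspect regressor on all exogenous variables and instruments.
Z = sm.add_constant(np.column_stack([x_exog, z]))
v_hat = x_endog - sm.OLS(x_endog, Z).fit().fittedvalues   # first-stage residuals

# Artificial regression: add the first-stage residuals to the structural equation and use
# heteroskedasticity-robust (HC1) standard errors; a significant coefficient on v_hat
# signals endogeneity of x_endog.
W = sm.add_constant(np.column_stack([x_exog, x_endog, v_hat]))
res = sm.OLS(y, W).fit(cov_type="HC1")
print(res.summary())   # inspect the t-statistic on the last column (the v_hat term)
```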

In this chapter we investigate the finite sample properties of a Hausman test for the spatial error model (SEM) proposed by Pace and LeSage (2008). In particular, we demonstrate that the power of their test could be very low against a natural alternative like the spatial autoregressive (SAR) model.

DOI: 10.1108/S0731-9053(2012)29
Publication date: 2012
Book series: Advances in Econometrics
Editors: Badi H. Baltagi, R. Carter Hill, Whitney K. Newey, Halbert L. White
Series copyright holder: Emerald Publishing Limited
ISBN: 978-1-78190-307-0
eISBN: 978-1-78190-308-7
Book series ISSN: 0731-9053