Essays in Honor of Subal Kumbhakar: Volume 46

Table of contents (16 chapters)
Abstract

This chapter revisits the Hausman (1978) test for panel data. It emphasizes that the Hausman test is a general specification test: rejection of the null signals misspecification and is not, as is common in practice, an endorsement of the fixed effects estimator. Non-rejection of the null provides support for the random effects estimator, which is efficient under the null. The chapter offers practical tips on what to do when the null is rejected, including checking for endogeneity of the regressors and misspecified dynamics, and applying a nonparametric Hausman test; see Amini, Delgado, Henderson, and Parmeter (2012, chapter 16). Alternatively, for the fixed effects die-hards, the chapter suggests testing the fixed effects restrictions before adopting this estimator. The chapter also recommends a pretest estimator based on an additional Hausman test, this one comparing the Hausman–Taylor estimator with the fixed effects estimator.
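
As a point of reference, the statistic contrasts the two estimators through the quadratic form H = (b_FE - b_RE)' [Var(b_FE) - Var(b_RE)]^{-1} (b_FE - b_RE), compared to a chi-squared critical value. Below is a minimal sketch of that computation, assuming fitted results from the `linearmodels` package; the helper name is ours, and the sketch implements the textbook contrast, not the chapter's nonparametric or Hausman–Taylor variants.

```python
# Sketch: classical Hausman (1978) contrast between FE and RE estimates,
# assuming fe_res / re_res are fitted linearmodels PanelOLS / RandomEffects
# results. Rejection signals misspecification in general, not an automatic
# endorsement of fixed effects.
import numpy as np
from scipy import stats

def hausman(fe_res, re_res):
    common = fe_res.params.index.intersection(re_res.params.index)
    q = (fe_res.params[common] - re_res.params[common]).to_numpy()
    V = (fe_res.cov.loc[common, common] - re_res.cov.loc[common, common]).to_numpy()
    H = float(q @ np.linalg.inv(V) @ q)
    df = len(common)
    return H, df, stats.chi2.sf(H, df)  # (statistic, dof, p-value)
```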

Abstract

The authors examine whether applicants and recipients of federal disability insurance (DI) inflate their self-assessed health (SAH) problems relative to others. To do this, the authors employ a technique that uses anchoring vignettes. This approach allows them to examine how various cohorts of the population interpret survey questions associated with subjective self-assessments of health. The results of the analysis suggest that DI participants do inflate the severity of a given health problem, though only to a small but statistically significant degree. This tendency to exaggerate the severity of disability problems is much more apparent among those with more education (especially those with a college degree). In contrast, racial minorities tend to assign lower severity ratings to a given disability vignette than their white peers.

Abstract

Homelessness has many causes and is heavily stigmatized in the United States, leading to much misunderstanding of what drives it and of which policy solutions may ameliorate it. The problem is worsening and now affects many communities far removed from the West Coast cities the authors examine in this study. This analysis examines the socioeconomic variables influencing homelessness on the West Coast in recent years. The authors utilize a panel fixed effects model that explicitly includes measures of healthcare access and availability to account for the additional health risks faced by individuals who lack shelter. They also estimate a spatial error model (SEM) to better understand the impacts that systemic shocks, such as the COVID-19 pandemic, have on a variety of factors that directly influence productivity and other measures of welfare, such as income inequality, housing supply, healthcare investment, and homelessness.
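
For concreteness, a spatial error model can be estimated by maximum likelihood with PySAL's `spreg`; the sketch below uses synthetic data and hypothetical variable names, and is not the chapter's specification or data.

```python
# Sketch: maximum likelihood spatial error model (SEM) on synthetic data,
# using PySAL's spreg/libpysal; names and data are illustrative only.
import numpy as np
import libpysal
from spreg import ML_Error

w = libpysal.weights.lat2W(10, 10)       # 10x10 lattice as stand-in geography
w.transform = "r"                        # row-standardize the weights
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # e.g., rent, inequality, health access
y = (X @ [0.5, 0.3, -0.4] + rng.normal(size=100)).reshape(-1, 1)

sem = ML_Error(y, X, w=w, name_y="homeless_rate",
               name_x=["median_rent", "income_ineq", "health_access"])
print(sem.summary)                       # lambda: spatial error dependence
```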

Abstract

Classical unit root tests are known to suffer from potentially crippling size distortions, and a range of procedures have been proposed to attenuate this problem, including the use of bootstrap procedures. It is also known that the estimating equation’s functional form can affect the outcome of the test, and various model selection procedures have been proposed to overcome this limitation. In this chapter, the authors adopt a model averaging procedure to deal with model uncertainty at the testing stage. In addition, the authors leverage an automatic model-free dependent bootstrap procedure where the null is imposed by simple differencing (the block length is automatically determined using recent developments for bootstrapping dependent processes). Monte Carlo simulations indicate that this approach exhibits the lowest size distortions among its peers in settings that confound existing approaches, while it has superior power relative to those peers whose size distortions do not preclude their general use. The proposed approach is fully automatic, and there are no nuisance parameters that have to be set by the user, which ought to appeal to practitioners.
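
A minimal sketch of the general recipe, assuming the `arch` package (whose `optimal_block_length` implements an automatic Politis-White block-length choice): impose the null by differencing, resample the differences with a stationary block bootstrap, re-integrate, and recompute the test statistic. The chapter's model-averaging layer and exact bootstrap scheme are not reproduced here.

```python
# Sketch: dependent-bootstrap unit root test with the null imposed by
# simple differencing and an automatically chosen block length.
import numpy as np
from arch.unitroot import ADF
from arch.bootstrap import StationaryBootstrap, optimal_block_length

def bootstrap_adf_pvalue(y, reps=499, seed=0):
    stat = ADF(y).stat                        # observed test statistic
    dy = np.diff(y)                           # differencing imposes the unit root null
    block = optimal_block_length(dy)["stationary"].iloc[0]
    bs = StationaryBootstrap(block, dy, seed=seed)
    null_stats = []
    for (resampled,), _ in bs.bootstrap(reps):
        y_star = np.cumsum(resampled)         # re-integrate to a unit root process
        null_stats.append(ADF(y_star).stat)
    return np.mean(np.array(null_stats) <= stat)  # left-tail bootstrap p-value
```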

Abstract

Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
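
For orientation, the heterogeneous autoregressive (HAR) regression projects next-period volatility on daily, weekly, and monthly averages of past volatility. The sketch below builds the HAR design and weights nested candidate models by jackknife (leave-one-out) cross-validation, one standard least squares averaging criterion that remains valid under heteroskedasticity; the chapter's H-MAHAR weighting may differ, and all names and data are illustrative.

```python
# Sketch: HAR design matrix plus least squares model averaging with
# jackknife (leave-one-out) weights over nested HAR candidates.
import numpy as np
from scipy.optimize import minimize

def har_design(rv, horizons=(1, 5, 22)):
    """Regressors: intercept plus rv averaged over each lookback horizon."""
    m = max(horizons)
    X = np.column_stack(
        [np.ones(len(rv) - m)]
        + [[rv[t - h:t].mean() for t in range(m, len(rv))] for h in horizons])
    return X, rv[m:]

def averaging_weights(X, y, models):
    """models: lists of column indices defining candidate HAR submodels."""
    E = []
    for cols in models:
        Xm = X[:, cols]
        H = Xm @ np.linalg.pinv(Xm.T @ Xm) @ Xm.T
        E.append((y - H @ y) / (1.0 - np.diag(H)))   # leave-one-out residuals
    E = np.column_stack(E)
    k = E.shape[1]
    res = minimize(lambda w: ((E @ w) ** 2).sum(), np.full(k, 1 / k),
                   bounds=[(0, 1)] * k,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1},))
    return res.x

# Example: average over nested HAR(1), HAR(1,5), HAR(1,5,22) candidates
rv = np.abs(np.random.default_rng(0).standard_normal(500))  # stand-in volatility
X, y = har_design(rv)
w = averaging_weights(X, y, models=[[0, 1], [0, 1, 2], [0, 1, 2, 3]])
```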

Abstract

The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. They then use the pilot estimates to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimator that knows the other components of the varying coefficients. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate its applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions of the efficiency and error terms.
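
To illustrate the smoothing step alone (leaving out the series pilot stage and the fixed-effect-removing weights), a local linear estimate of a single varying coefficient beta(z) in y = x*beta(z) + e can be sketched as follows; the function name and Gaussian kernel choice are ours.

```python
# Sketch: pointwise local linear estimation of a varying coefficient.
import numpy as np

def local_linear_beta(y, x, z, z0, h):
    """Local linear estimate of beta(z0) with a Gaussian kernel."""
    k = np.exp(-0.5 * ((z - z0) / h) ** 2)     # kernel weights around z0
    X = np.column_stack([x, x * (z - z0)])     # local linear basis
    W = np.diag(k)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return coef[0]                             # level term estimates beta(z0)
```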

Abstract

In this chapter, we consider the possibility that a firm may use costly resources to improve its technical efficiency. Results from static analyses imply that technical efficiency is determined by the configuration of factor prices. A dynamic model of the firm is developed under the assumption that managerial skill contributes to technical efficiency. Dynamic analysis shows that the firm can never be technically efficient if it maximizes profits; the steady state is always inefficient, though locally stable. In terms of empirical analysis, we show how likelihood-based methods can be used to uncover, in a semi-nonparametric manner, important features of the inefficiency–management relationship using a flexible functional form that accounts for the endogeneity of inputs in a production function. Managerial compensation can also be identified and estimated using the new techniques. The new empirical methodology is applied to a data set on managerial practices of manufacturing firms in the UK, the US, France, and Germany previously analyzed by Bloom and Van Reenen (2007).

Abstract

There is growing empirical evidence that firm heterogeneity is technologically non-neutral. This chapter extends the Gandhi, Navarro, and Rivers (2020) proxy variable framework for structurally identifying production functions to the more general case in which latent firm productivity is multi-dimensional, with both factor-neutral and (biased) factor-augmenting components. Unlike alternative methodologies, the proposed model can be identified under weaker data requirements, notably without relying on the typically unavailable cross-sectional variation in input prices for instrumentation. When markets are perfectly competitive, point identification is achieved by leveraging the information contained in static optimality conditions, effectively adopting a system-of-equations approach. It is also shown how one can partially identify the non-neutral production technology in the traditional proxy variable framework when firms have market power.

Abstract

This chapter provides an empirical assessment of the effects of infrastructure provision on structural change and aggregate productivity using industry-level data for a set of developed and developing countries over 1995–2010. A distinctive feature of the empirical strategy is that it allows the measurement of the resource reallocation directly attributable to infrastructure provision. To achieve this, a two-level top-down decomposition of aggregate productivity that combines and extends several strands of the literature is proposed. The empirical application reveals significant production losses attributable to misallocation of inputs across firms, especially among African countries. The results also show that infrastructure provision has stimulated aggregate total factor productivity growth through both within- and between-industry productivity gains.
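
For a sense of the kind of object involved (the chapter's own two-level decomposition is richer), a familiar single-level split of share-weighted aggregate productivity into an unweighted mean and a reallocation (covariance) term is

$$
\Phi_t \;=\; \sum_i s_{it}\,\varphi_{it}
\;=\; \bar{\varphi}_t \;+\; \sum_i \bigl(s_{it}-\bar{s}_t\bigr)\bigl(\varphi_{it}-\bar{\varphi}_t\bigr),
$$

where the s_it are resource shares and the phi_it are unit-level productivities; the covariance term captures how much activity is allocated toward more productive units, which is the margin through which infrastructure-driven reallocation would show up.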

Abstract

The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first is a panel data model in which composed errors from past and future time periods contain information about contemporaneous technical inefficiency. In the second, the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
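
For reference, in the benchmark normal-half-normal model with composed error epsilon_i = v_i - u_i, the JLMS predictor conditions only on the contemporaneous epsilon_i:

$$
E[u_i \mid \varepsilon_i] \;=\; \mu_{*i} + \sigma_* \,\frac{\phi(\mu_{*i}/\sigma_*)}{\Phi(\mu_{*i}/\sigma_*)},
\qquad
\mu_{*i} = -\frac{\sigma_u^2 \varepsilon_i}{\sigma^2},\quad
\sigma_*^2 = \frac{\sigma_u^2 \sigma_v^2}{\sigma^2},\quad
\sigma^2 = \sigma_u^2 + \sigma_v^2.
$$

The chapter's question is how much this prediction improves when the conditioning set is enlarged, for example to the panel history of composed errors or to the input ratio equations.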

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, avoiding the non-negligible estimation bias that the transformation can induce. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both the stochastic noise and the inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends, via a positive parametric function, on determinants that can be chosen from a wide range of candidates. As a result, technical efficiency is constructed without relying on an assumed distribution of the composite error. The model provides flexible structures for both the production frontier and the inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves series-based nonlinear least squares estimation for the unknown parameters and kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries over 1970–2019.
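
Schematically (with a two-input CES and notation that is ours, not necessarily the chapter's), the frontier in levels takes the form

$$
y_{it} \;=\; A\!\left(z_{it}'\gamma\right)
\left[\delta\,K_{it}^{-\rho} + (1-\delta)\,L_{it}^{-\rho}\right]^{-\nu/\rho}
\exp\!\left(v_{it} - u_{it}\right),
$$

where A(.) is the unknown technology function of the single index z'gamma in environment variables, and only E[u_it | determinants] is parameterized by a positive function rather than by a full distribution.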

Abstract

Estimation of (in)efficiency has become a popular practice, with applications in virtually every sector of the economy over the last few decades. Many different models have been deployed for such endeavors, with Stochastic Frontier Analysis (SFA) models dominating the econometric literature. Among the most popular variants of SFA are Aigner, Lovell, and Schmidt (1977), which launched the literature, and Kumbhakar, Ghosh, and McGuckin (1991), which pioneered the branch that models the (in)efficiency term via so-called environmental variables, or determinants of inefficiency. Focusing on these two prominent approaches within SFA, the goal of this chapter is to understand the production inefficiency of public hospitals in Queensland. In doing so, a recognized yet often overlooked phenomenon emerges: dramatically different results (and consequently very different policy implications) can be derived from different models, even within one paradigm of SFA models. This emphasizes the importance of exploring many alternative models, and of scrutinizing their assumptions, before drawing policy implications, especially when such implications may substantially affect people's lives, as is the case in the hospital sector.
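
Schematically (our notation), the two benchmarks share the composed-error structure and differ in how the inefficiency term is modeled:

$$
y_i = x_i'\beta + v_i - u_i,\quad v_i \sim N(0,\sigma_v^2);\qquad
\text{ALS77: } u_i \sim N^{+}\!\left(0,\sigma_u^2\right);\qquad
\text{KGM91: } u_i \sim N^{+}\!\left(z_i'\delta,\sigma_u^2\right),
$$

so that in the latter the determinants z_i (the environmental variables) shift the mean of the pre-truncation inefficiency distribution.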

Abstract

The standard method to estimate a stochastic frontier (SF) model is the maximum likelihood (ML) approach, with distributional assumptions of a symmetric two-sided stochastic error v and a one-sided inefficiency random component u. When v or u has a nonstandard distribution, such as when v follows a generalized t distribution or u has a χ² distribution, the likelihood function can be complicated or intractable. This chapter introduces indirect inference to estimate SF models, in which only least squares estimation is used. There is no need to derive the density or likelihood function, so it is easier to handle models with complicated distributions in practice. The author examines the finite-sample performance of the proposed estimator and compares it with the standard ML estimator as well as the maximum simulated likelihood (MSL) estimator using Monte Carlo simulations. The indirect inference estimator is found to perform quite well in finite samples.
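
A minimal sketch of the generic indirect inference recipe under illustrative distributional choices (v drawn from a t(5), u from a chi-squared(1)); the auxiliary statistics and optimizer here are our assumptions, not the chapter's exact binding function.

```python
# Sketch: indirect inference for a stochastic frontier y = X b + v - u.
# Match auxiliary OLS statistics between real and simulated data, using
# common random numbers held fixed across objective evaluations.
import numpy as np
from scipy.optimize import minimize

def aux_stats(y, X):
    """Auxiliary model: OLS slopes plus residual variance and skewness."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    return np.concatenate([b, [e.var(), ((e - e.mean()) ** 3).mean()]])

def indirect_inference(y, X, theta0, S=20, seed=0):
    target = aux_stats(y, X)
    rng = np.random.default_rng(seed)
    tv = rng.standard_t(5, size=(S, len(X)))    # noise draws, fixed once
    cu = rng.chisquare(1, size=(S, len(X)))     # inefficiency draws, fixed once

    def distance(theta):
        beta, sv, su = theta[:-2], abs(theta[-2]), abs(theta[-1])
        sims = [aux_stats(X @ beta + sv * tv[s] - su * cu[s], X)
                for s in range(S)]
        return ((np.mean(sims, axis=0) - target) ** 2).sum()

    return minimize(distance, theta0, method="Nelder-Mead").x
```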

Abstract

The author develops a bilateral Nash bargaining model under value uncertainty and private/asymmetric information, combining ideas from axiomatic and strategic bargaining theory. The solution to the model leads organically to a two-tier stochastic frontier (2TSF) setup with intra-error dependence. The author presents two different statistical specifications to estimate the model: one that accounts for regressor endogeneity using copulas, the other able to separately identify bargaining power and private information effects at the individual level. An empirical application using a matched employer–employee data set (MEEDS) from Zambia and a second using a similar data set from Ghana showcase the applied potential of the approach.
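
Schematically (our notation), the two-tier setup for, say, a wage equation reads

$$
w_i = x_i'\beta + v_i + u_i - d_i, \qquad u_i \ge 0,\; d_i \ge 0,
$$

where u_i and d_i are the two one-sided components capturing the surplus extracted by each side of the bargain, and the intra-error dependence referenced above allows u_i and d_i to be correlated.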

DOI
10.1108/S0731-9053202446
Publication date
2024-04-05
Book series
Advances in Econometrics
Editors
Series copyright holder
Emerald Publishing Limited
ISBN
978-1-83797-874-8
eISBN
978-1-83797-873-1
Book series ISSN
0731-9053