Maximum Simulated Likelihood Methods and Applications: Volume 26

Table of contents (15 chapters)

Simulation-based methods and simulation-assisted estimators have greatly increased the reach of empirical applications in econometrics. The received literature includes a thick layer of theoretical studies, including landmark works by Gourieroux and Monfort (1996), McFadden and Ruud (1994), and Train (2003), and hundreds of applications. An early and still influential application of the method is Berry, Levinsohn, and Pakes's (1995; BLP) study of the U.S. automobile market, in which a market equilibrium model is cleared of latent heterogeneity by integrating the heterogeneity out of the moments in a GMM setting. BLP's methodology is a baseline technique for studying market equilibrium in empirical industrial organization. Contemporary applications involving multilayered models of heterogeneity in individual behavior, such as Riphahn, Wambach, and Million's (2003) study of moral hazard in health insurance, are also common. Computation of multivariate probabilities by simulation is now a standard technique in estimating discrete choice models. The mixed logit model for modeling preferences (McFadden & Train, 2000) is now the leading edge of research in multinomial choice modeling. Finally, perhaps the most prominent application in the entire arena of simulation-based estimation is the current generation of Bayesian econometrics based on Markov chain Monte Carlo (MCMC) methods. In this area, heretofore intractable posterior means are routinely computed with the assistance of simulation and the Gibbs sampler.
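
As a concrete illustration of the simulation idea running through the volume, the following minimal Python sketch approximates a mixed logit choice probability by averaging conditional logit probabilities over draws of a single random coefficient. It illustrates the general technique only, not code from any chapter; the function names and the one-attribute setup are hypothetical.

```python
import numpy as np

def simulated_mixed_logit_prob(X, chosen, beta_mean, beta_sd, n_draws=500, seed=0):
    """Simulate P(chosen | X) in a mixed logit with one normally
    distributed random coefficient: average the conditional logit
    probability over draws of beta ~ N(beta_mean, beta_sd**2)."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(beta_mean, beta_sd, size=n_draws)  # R draws of beta
    probs = np.empty(n_draws)
    for r, beta in enumerate(draws):
        util = beta * X                    # utilities for each alternative
        expu = np.exp(util - util.max())   # numerically stabilized exponentials
        probs[r] = expu[chosen] / expu.sum()
    return probs.mean()                    # simulated choice probability

# Example: 3 alternatives described by a scalar attribute, alternative 1 chosen
X = np.array([1.0, 2.0, 0.5])
print(simulated_mixed_logit_prob(X, chosen=1, beta_mean=0.8, beta_sd=0.5))
```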

A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
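
Although the chapter's adaptive sampler is not reproduced here, the building block it refines is standard: a Gibbs sampler that draws from a multivariate normal truncated to a rectangle by cycling through univariate truncated normal full conditionals. The sketch below, with hypothetical names, shows that textbook version.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, lower, upper, n_iter=2000, seed=0):
    """Textbook Gibbs sampler for N(mu, Sigma) truncated to the box
    [lower, upper]: each coordinate is drawn from its univariate
    truncated normal full conditional given the others."""
    rng = np.random.default_rng(seed)
    mu, lower, upper = map(np.asarray, (mu, lower, upper))
    d = len(mu)
    P = np.linalg.inv(Sigma)                     # precision matrix
    x = np.clip(mu.astype(float), lower, upper)  # feasible starting point
    out = np.empty((n_iter, d))
    for it in range(n_iter):
        for j in range(d):
            others = [k for k in range(d) if k != j]
            # Conditional moments of x_j given x_{-j}, from the precision matrix
            cond_sd = 1.0 / np.sqrt(P[j, j])
            cond_mu = mu[j] - (P[j, others] @ (x[others] - mu[others])) / P[j, j]
            a = (lower[j] - cond_mu) / cond_sd
            b = (upper[j] - cond_mu) / cond_sd
            x[j] = truncnorm.rvs(a, b, loc=cond_mu, scale=cond_sd, random_state=rng)
        out[it] = x
    return out

# Example: draws from a trivariate normal restricted to the positive orthant
Sigma = np.array([[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]])
draws = gibbs_tmvn(np.zeros(3), Sigma, np.zeros(3), np.full(3, np.inf))
```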

In empirical research, panel (and multinomial) probit models are leading examples for the use of maximum simulated likelihood estimators. The Geweke–Hajivassiliou–Keane (GHK) simulator is the most widely used technique for this type of problem. This chapter suggests an algorithm that is based on GHK but uses an adaptive version of sparse-grids integration (SGI) instead of simulation. It is adaptive in the sense that it uses an automated change of variables to make the integration problem numerically better behaved, along the lines of efficient importance sampling (EIS) and adaptive univariate quadrature. The resulting integral is approximated using SGI, which generalizes Gaussian quadrature so that the computational cost does not grow exponentially with the number of dimensions. Monte Carlo experiments show impressive performance compared to the original GHK algorithm, especially in difficult cases such as models with high intertemporal correlations.
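
For reference, a minimal (non-adaptive) version of the GHK simulator that the chapter takes as its starting point might look as follows; the recursive structure is standard, though the function and variable names here are our own.

```python
import numpy as np
from scipy.stats import norm

def ghk_prob(Sigma, lower, upper, n_draws=1000, seed=0):
    """GHK simulator for P(lower < z < upper), z ~ N(0, Sigma).
    Uses the Cholesky factor to draw truncated components recursively
    and averages the product of conditional interval probabilities."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    d = len(lower)
    prob = np.ones(n_draws)
    eta = np.zeros((n_draws, d))
    for j in range(d):
        mean = eta[:, :j] @ L[j, :j]             # conditional shift from earlier draws
        a = norm.cdf((lower[j] - mean) / L[j, j])
        b = norm.cdf((upper[j] - mean) / L[j, j])
        prob *= (b - a)                          # conditional interval probability
        u = rng.uniform(size=n_draws)
        eta[:, j] = norm.ppf(a + u * (b - a))    # truncated draw via inverse CDF
    return prob.mean()

# Example: P(z1 > 0, z2 > 0) for a bivariate normal with correlation 0.8
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
print(ghk_prob(Sigma, lower=np.array([0.0, 0.0]), upper=np.array([np.inf, np.inf])))
```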

This chapter compares the performance of the maximum simulated likelihood (MSL) approach with that of the composite marginal likelihood (CML) approach in multivariate ordered-response situations. The ability of the two approaches to recover model parameters in simulated data sets is examined, as are the efficiency of the estimated parameters and the computational cost. Overall, the simulation results demonstrate the ability of the CML approach to recover the parameters very well in a 5–6 dimensional ordered-response choice model context. In addition, the CML approach recovers parameters as well as the MSL approach does in the simulation contexts used in this study, while doing so at a substantially reduced computational cost. Further, any reduction in the efficiency of the CML approach relative to the MSL approach ranges from nonexistent to small. Taken together with its conceptual and implementation simplicity, the CML approach appears promising for estimating not only the multivariate ordered-response model considered here but also other analytically intractable econometric models.
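
The pairwise CML idea is easy to state in code: replace the full d-dimensional outcome probability with a sum of log bivariate outcome probabilities over all pairs of responses. The sketch below assumes known cutpoints and latent correlation matrix R for a single observation; it illustrates the generic estimator, with hypothetical names, not the chapter's exact implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def rect_prob(low, upp, corr):
    """P(low < (z1, z2) < upp) for a bivariate standard normal with
    correlation corr, via CDF inclusion-exclusion."""
    cov = np.array([[1.0, corr], [corr, 1.0]])
    F = lambda x: mvn.cdf(x, mean=np.zeros(2), cov=cov)
    return (F([upp[0], upp[1]]) - F([low[0], upp[1]])
            - F([upp[0], low[1]]) + F([low[0], low[1]]))

def pairwise_cml(y, cuts, R):
    """Pairwise composite log likelihood for one observation of a
    multivariate ordered probit: sum of log bivariate outcome
    probabilities over all variable pairs (latent scale N(0, R))."""
    d = len(y)
    ll = 0.0
    for i in range(d):
        for j in range(i + 1, d):
            low = [cuts[i][y[i]], cuts[j][y[j]]]
            upp = [cuts[i][y[i] + 1], cuts[j][y[j] + 1]]
            ll += np.log(rect_prob(low, upp, R[i, j]))
    return ll

# Example: 3 ordered outcomes with 3 categories each; the outer cutpoints
# are large finite stand-ins for -inf/+inf
cuts = [np.array([-10.0, -0.5, 0.5, 10.0])] * 3
R = np.array([[1.0, 0.4, 0.3], [0.4, 1.0, 0.5], [0.3, 0.5, 1.0]])
print(pairwise_cml(y=[0, 1, 2], cuts=cuts, R=R))
```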

In this chapter we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence of random parameters. We study the Lagrange multiplier (LM), likelihood ratio (LR), and Wald tests, using the conditional logit model as the restricted model. The LM test is the fastest of the three to implement, since it requires only the restricted (conditional logit) estimates. However, the LM-based pretest estimator has poor risk properties: the ratio of its root mean squared error (RMSE) to the RMSE of the RPL estimator diverges from one as the standard deviation of the parameter distribution increases. The LR and Wald tests exhibit the properties of consistent tests, with power approaching one as the specification error increases, so that the corresponding pretest estimator is consistent. We explore the power of the three tests for random parameters by calculating the empirical percentile values, size, and rejection rates of the test statistics. We find that the power of the LR and Wald tests decreases as the mean of the coefficient distribution increases. The LM test has the weakest power for detecting the presence of random coefficients in the RPL model.
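
A minimal sketch of the LR-based pretest decision follows: fit both models, compute the LR statistic, and report the RPL estimates only when the restriction is rejected. It uses standard chi-square critical values, abstracting from the boundary-parameter issue that arises when testing whether standard deviations are zero; all names are hypothetical.

```python
from scipy.stats import chi2

def lr_pretest(loglike_restricted, loglike_unrestricted, n_restrictions, alpha=0.05):
    """LR pretest for random parameters: reject the conditional logit
    restriction (all random-parameter standard deviations zero) if
    2*(llu - llr) exceeds the chi-square critical value. The pretest
    estimator then reports the RPL estimates; otherwise it reports the
    conditional logit estimates."""
    lr_stat = 2.0 * (loglike_unrestricted - loglike_restricted)
    critical = chi2.ppf(1.0 - alpha, df=n_restrictions)
    return lr_stat, lr_stat > critical   # (statistic, use RPL?)

# Example: fitted log likelihoods from the two models, 3 tested std. deviations
stat, use_rpl = lr_pretest(-1502.3, -1488.7, n_restrictions=3)
```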

In this chapter we develop and implement a method for maximum simulated likelihood estimation of the continuous-time stochastic volatility model with constant elasticity of volatility. The approach requires observations on neither option prices nor volatility. To integrate latent volatility out of the joint density of return and volatility, a modified efficient importance sampling technique is used after the continuous-time model is approximated using the Euler–Maruyama scheme. Monte Carlo studies show that the method works well, and the empirical applications illustrate the usefulness of the method. The empirical results provide strong evidence against the Heston model.
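
To fix ideas, the following sketch simulates a CEV-type stochastic volatility model with the Euler–Maruyama scheme. The parameterization (mean-reverting variance with elasticity parameter rho_elas, which nests Heston at 0.5) is a common illustrative choice and is not necessarily the chapter's exact specification.

```python
import numpy as np

def euler_sim_cev_sv(mu, kappa, theta, gamma, rho_elas, v0, n_steps, dt, seed=0):
    """Euler-Maruyama simulation of a CEV stochastic volatility model
    (hypothetical parameterization, for illustration):
        d log S = (mu - V/2) dt + sqrt(V) dW1
        dV      = kappa (theta - V) dt + gamma * V**rho_elas dW2
    Returns log returns and the latent variance path."""
    rng = np.random.default_rng(seed)
    v = np.empty(n_steps + 1); v[0] = v0
    r = np.empty(n_steps)
    for t in range(n_steps):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)  # independent increments
        r[t] = (mu - 0.5 * v[t]) * dt + np.sqrt(v[t]) * dw1
        v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt
                       + gamma * v[t] ** rho_elas * dw2, 1e-8)  # keep variance positive
    return r, v

# One year of daily data; rho_elas = 0.5 reproduces the Heston dynamics
returns, variance = euler_sim_cev_sv(0.05, 2.0, 0.04, 0.3, 0.5, 0.04, 250, 1.0 / 250)
```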

This chapter uses a dynamic structural model of household choices about savings, consumption, fertility, and education spending to perform policy experiments examining the impact of tax-free education savings accounts on parental contributions toward education and the resulting increase in children's educational attainment. The model is estimated via maximum simulated likelihood using data from the National Longitudinal Survey of Young Women. Unlike many similarly estimated dynamic choice models, the estimation procedure incorporates the probability distribution function of a continuous choice variable. The results indicate that the accounts increase the amount of parental support, the percentage of parents contributing, and educational attainment. The policy impact compares favorably to that of other policies, such as universal grants and general tax credits, for which the model gives results in line with those from other investigations.

This chapter deals with the estimation of the effect of exchange rate flexibility on financial account openness. The purpose of our analysis is twofold: on the one hand, we quantify the differences in the estimated parameters when exchange rate flexibility is treated as an exogenous rather than an endogenous regressor; on the other hand, we identify how two different degrees of exchange rate flexibility (intermediate vs. floating regimes) affect the propensity to open the financial account. We argue that the simultaneous determination of exchange rate and financial account policies must be acknowledged in order to obtain reliable estimates of their interaction and determinants. Using a panel data set of advanced countries and emerging markets, a trivariate probit model is estimated via a maximum simulated likelihood approach. In line with the monetary policy trilemma, our results show that countries switching from an intermediate regime to a floating arrangement are more likely to remove capital controls. In addition, the estimated coefficients differ markedly when exchange rate flexibility is treated as exogenous rather than endogenous.

This chapter proposes M-estimators of a fractional response model with an endogenous count variable in the presence of time-constant unobserved heterogeneity. To address the endogeneity of the right-hand-side count variable, I use instrumental variables in a two-step estimation procedure. Two estimation methods are employed: quasi-maximum likelihood (QML) and nonlinear least squares (NLS). Using these methods, I estimate average partial effects, which are shown to be comparable across linear and nonlinear models. Monte Carlo simulations verify that the QML and NLS estimators perform better than other standard estimators. For illustration, the estimators are applied to a model of female labor supply with an endogenous number of children. The results show that the marginal reduction in women's weekly working hours diminishes with each additional child. In addition, the effect of the number of children on the fraction of hours a woman spends working per week is statistically significant, and stronger than the estimates from all the other linear and nonlinear models considered in the chapter.
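
The second-step objective in this kind of approach is typically a Bernoulli quasi-log-likelihood for a fractional probit, in the spirit of Papke and Wooldridge's fractional response QML. A minimal sketch with hypothetical names and simulated data follows; in a two-step control-function setting, the regressor matrix would also include first-stage residuals for the endogenous count.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fractional_probit_qml(X, y):
    """Bernoulli quasi-maximum likelihood for a fractional probit,
    E[y | X] = Phi(X @ b), with y in [0, 1]. In a control-function
    two-step, X would include first-stage residuals from a model
    for the endogenous count variable."""
    def negloglik(b):
        p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)  # bounded fitted means
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    res = minimize(negloglik, x0=np.zeros(X.shape[1]), method="BFGS")
    return res.x

# Simulated illustration: fraction of weekly hours worked
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = np.clip(norm.cdf(X @ np.array([0.2, -0.5])) + rng.normal(0, 0.1, 500), 0, 1)
print(fractional_probit_qml(X, y))
```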

In this chapter, we utilize the residual concept of productivity measures, defined in the context of a normal-gamma stochastic frontier production model with heterogeneity, to differentiate productivity from inefficiency measures. In particular, three alternative two-way random effects panel estimators of the normal-gamma stochastic frontier model are proposed using maximum simulated likelihood techniques. For the three alternative panel estimators, we use a generalized least squares procedure, estimating the variance components in the first stage and using the estimated variance–covariance matrix to transform the data. Empirical estimates indicate differences in the parameter coefficients of the gamma distribution, the production function, and the heterogeneity function variables between the pooled estimator and the two alternative panel estimators. The difference between the pooled and panel models suggests the need to account for spatial, temporal, and within-residual variation, as in the Swamy–Arora estimator, and for within-residual variation, as in the Amemiya estimator, within a panel framework. Finally, results from this study indicate that short- and long-run variations in financial exposure (solvency, liquidity, and efficiency) play an important role in explaining the variance of inefficiency and productivity.

Using a Bayesian approach, we compare the forecasting performance of five classes of models: ARCH, GARCH, SV, SV-STAR, and MSSV, using daily Tehran Stock Exchange (TSE) data. The parameters of the models are estimated by Markov chain Monte Carlo (MCMC) methods. The results show that the models in the fourth and fifth classes (SV-STAR and MSSV) perform better than the models in the other classes.
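
As a generic illustration of the estimation machinery, a minimal random-walk Metropolis–Hastings sampler is sketched below; the SV-type models require model-specific MCMC schemes in practice, so this is only a stand-in with hypothetical names.

```python
import numpy as np

def random_walk_mh(log_post, theta0, step, n_iter=5000, seed=0):
    """Generic random-walk Metropolis-Hastings sampler: propose
    theta' = theta + step * eps with Gaussian eps and accept with
    probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for it in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance step
            theta, lp = prop, lp_prop
        chain[it] = theta
    return chain

# Example: sample from a standard normal "posterior"
chain = random_walk_mh(lambda th: -0.5 * np.sum(th**2), theta0=[0.0], step=1.0)
```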

DOI: 10.1108/S0731-9053(2010)26
Publication date: 2010
Book series: Advances in Econometrics
Editors: William Greene, R. Carter Hill
Series copyright holder: Emerald Publishing Limited
ISBN: 978-0-85724-149-8
eISBN: 978-0-85724-150-4
Book series ISSN: 0731-9053