Bayesian Model Comparison: Volume 34


Table of contents

(17 chapters)
Abstract

Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and that works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
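
To make the flavor of such a simulator concrete, here is a minimal, generic sequential Monte Carlo sketch in Python using likelihood tempering. It is not the algorithm of the chapter, only an illustration of the interface the abstract describes, in which the user supplies nothing beyond prior draws and evaluations of the log prior and log likelihood; the arguments `sample_prior`, `log_prior`, and `log_lik` are hypothetical placeholders, and the log marginal likelihood accumulates as a by-product of the reweighting steps.

```python
import numpy as np

def smc_posterior(sample_prior, log_prior, log_lik, n_particles=4096, n_steps=50, seed=0):
    """Toy sequential Monte Carlo posterior simulator via likelihood tempering.

    sample_prior(n, rng) -> (n, dim) array of prior draws (hypothetical interface);
    log_prior(theta), log_lik(theta) -> log densities for a single draw.
    Returns posterior particles and a log marginal likelihood estimate.
    """
    rng = np.random.default_rng(seed)
    theta = sample_prior(n_particles, rng)
    loglik = np.array([log_lik(t) for t in theta])
    logpri = np.array([log_prior(t) for t in theta])
    log_ml = 0.0
    temps = np.linspace(0.0, 1.0, n_steps + 1)          # tempering schedule 0 -> 1

    for lam_prev, lam in zip(temps[:-1], temps[1:]):
        # 1. reweight: incremental importance weights for the new temperature
        log_w = (lam - lam_prev) * loglik
        c = log_w.max()
        log_ml += c + np.log(np.mean(np.exp(log_w - c)))  # marginal likelihood increment
        # 2. resample particles in proportion to the incremental weights
        w = np.exp(log_w - c); w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        theta, loglik, logpri = theta[idx], loglik[idx], logpri[idx]
        # 3. mutate: one random-walk Metropolis step targeting prior * likelihood^lam
        #    (fixed step size here; a practical implementation would adapt it)
        prop = theta + 0.1 * rng.standard_normal(theta.shape)
        loglik_p = np.array([log_lik(t) for t in prop])
        logpri_p = np.array([log_prior(t) for t in prop])
        acc = np.log(rng.uniform(size=n_particles)) < \
            (lam * loglik_p + logpri_p) - (lam * loglik + logpri)
        theta[acc], loglik[acc], logpri[acc] = prop[acc], loglik_p[acc], logpri_p[acc]
    return theta, log_ml
```

In a massively parallel setting the per-particle likelihood evaluations and Metropolis moves are the embarrassingly parallel operations that map naturally onto many cores or GPU threads.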

Abstract

This paper investigates the usefulness of switching Gaussian state space models as a tool for implementing dynamic model selection (DMS) or averaging (DMA) in time-varying parameter regression models. DMS methods allow for model switching, where a different model can be chosen at each point in time; thus, they allow the explanatory variables in the time-varying parameter regression model to change over time. DMA carries out model averaging in a time-varying manner. We compare our exact method for implementing DMA/DMS to a popular existing procedure which relies on the use of forgetting factor approximations. In an inflation forecasting application, we use DMS to select different predictors over time and find strong evidence of model switching. We also compare different ways of implementing DMA/DMS and find that forgetting factor approaches and approaches based on the switching Gaussian state space model lead to similar results.
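
For reference, the forgetting-factor approximation commonly used for DMA/DMS, and the benchmark against which the exact switching state space treatment is compared, propagates model probabilities recursively. With K candidate models, one-step-ahead predictive densities p_k(y_t | y^{t-1}), and forgetting factor α, a standard formulation is

```latex
\pi_{t\mid t-1,k} = \frac{\pi_{t-1\mid t-1,k}^{\alpha}}{\sum_{\ell=1}^{K}\pi_{t-1\mid t-1,\ell}^{\alpha}},
\qquad
\pi_{t\mid t,k} = \frac{\pi_{t\mid t-1,k}\,p_k(y_t\mid y^{t-1})}{\sum_{\ell=1}^{K}\pi_{t\mid t-1,\ell}\,p_\ell(y_t\mid y^{t-1})},
\qquad 0 < \alpha \le 1.
```

DMA forecasts by averaging model-specific predictions with weights π_{t|t-1,k}, while DMS uses only the model with the highest weight at each point in time.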

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities, using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world with perfectly flexible exchange rates. We then use posterior model probabilities, in controlled experiments that employ simulated data, to evaluate the weight of evidence in support of such a model when it is estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and to the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model over the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
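
The posterior model probabilities referred to here are the standard objects of Bayesian model comparison: for models M_i with prior probabilities P(M_i),

```latex
P(M_i \mid y) = \frac{p(y \mid M_i)\,P(M_i)}{\sum_j p(y \mid M_j)\,P(M_j)},
\qquad
\frac{P(M_1 \mid y)}{P(M_2 \mid y)}
= \frac{p(y \mid M_1)}{p(y \mid M_2)} \times \frac{P(M_1)}{P(M_2)},
\qquad
p(y \mid M_i) = \int p(y \mid \theta_i, M_i)\,p(\theta_i \mid M_i)\,d\theta_i.
```

Because the marginal likelihood p(y | M_i) integrates the likelihood over the entire prior, richly parameterized models pay an automatic Occam penalty; this is the usual mechanism behind the penalty for overfitting noted above.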

Abstract

The latest financial crisis has stressed the need to understand the world financial system as a network of interconnected institutions, where financial linkages play a fundamental role in the spread of systemic risks. In this paper we propose to enrich the topological perspective of network models with a more structured statistical framework, that of Bayesian Gaussian graphical models. From a statistical viewpoint, we propose a new class of hierarchical Bayesian graphical models that can split correlations between institutions into country-specific and idiosyncratic ones, in a way that parallels the decomposition of returns in the well-known Capital Asset Pricing Model. From a financial economics viewpoint, we suggest a way to model systemic risk that can explicitly take into account frictions between different financial markets, particularly suited to studying the ongoing banking union process in Europe. From a computational viewpoint, we develop a novel Markov chain Monte Carlo algorithm based on Bayes factor thresholding.
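
As background for readers less familiar with the framework: in a Gaussian graphical model the network structure is encoded in the zero pattern of the precision matrix, so that for a vector of institutions' returns y,

```latex
y \sim \mathcal{N}_p(0,\Sigma), \qquad \Omega = \Sigma^{-1}, \qquad
\omega_{ij} = 0 \;\Longleftrightarrow\; y_i \perp\!\!\!\perp y_j \mid y_{-(i,j)}
\;\Longleftrightarrow\; (i,j) \notin E(G).
```

The hierarchical class proposed in the chapter layers country-specific and idiosyncratic components on top of this basic structure, in the spirit of the CAPM-style decomposition mentioned above.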

Abstract

The BEKK GARCH class of models presents a popular set of tools for applied analysis of dynamic conditional covariances. Within this class the analyst faces a range of model choices that trade off flexibility against parameter parsimony. In the most flexible, unrestricted BEKK, the parameter dimensionality increases quickly with the number of variables. Covariance targeting decreases model dimensionality but induces a set of nonlinear constraints on the underlying parameter space that are difficult to implement. Recently, the rotated BEKK (RBEKK) has been proposed, whereby a targeted BEKK model is applied after the spectral decomposition of the conditional covariance matrix. An easily estimable RBEKK implies a full, albeit constrained, BEKK for the unrotated returns. However, the degree of the implied restrictiveness is currently unknown. In this paper, we suggest a Bayesian approach to estimation of the BEKK model with targeting, based on Constrained Hamiltonian Monte Carlo (CHMC). We take advantage of suitable parallelization of the problem within CHMC, utilizing the newly available computing power of multi-core CPUs and Graphical Processing Units (GPUs), which enables us to deal effectively with the inherent nonlinear constraints posed by covariance targeting in relatively high dimensions. Using parallel CHMC, we compare the predictive ability of the targeted BEKK and the RBEKK in an application concerning a multivariate dynamic volatility analysis of a Dow Jones Industrial returns portfolio. Although the RBEKK does improve over a diagonal BEKK restriction, it is clearly dominated by the full targeted BEKK model.
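
For concreteness, in the standard BEKK(1,1) specification the conditional covariance H_t of returns ε_t evolves as below; covariance targeting replaces the free intercept with the value implied by a preliminary (e.g., sample) estimate Σ̄ of the unconditional covariance, and the resulting positive semi-definiteness requirement is the nonlinear constraint that the constrained sampler has to respect:

```latex
H_t = CC' + A\,\varepsilon_{t-1}\varepsilon_{t-1}'\,A' + B\,H_{t-1}\,B',
\qquad
CC' = \bar\Sigma - A\bar\Sigma A' - B\bar\Sigma B' \succeq 0.
```

The RBEKK instead applies an easily estimated, targeted BEKK to suitably rotated returns, which, as noted above, implies a constrained BEKK for the unrotated returns.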

Abstract

In this paper, I propose an algorithm combining adaptive sampling and Reversible Jump MCMC to deal with the problem of variable selection in time-varying linear models. These types of models arise naturally in financial applications, as illustrated by a motivating example. The methodology proposed here, dubbed adaptive reversible jump variable selection, differs from typical approaches by avoiding estimation of the factors and the difficulties stemming from the presence of the documented single-factor bias. In several simulated examples, the algorithm is shown to select the appropriate variables from among a large set of candidates.
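
One common way to write the variable-selection problem in this setting (a generic sketch, not necessarily the exact specification used in the chapter) is a time-varying parameter regression with binary inclusion indicators,

```latex
y_t = \sum_{j=1}^{p} \gamma_j\, x_{j,t}\, \beta_{j,t} + \varepsilon_t,
\qquad
\beta_{j,t} = \beta_{j,t-1} + \eta_{j,t},
\qquad
\gamma_j \in \{0,1\}.
```

Reversible jump moves that flip a γ_j change the dimension of the parameter vector; an adaptive scheme can then tune the proposal distribution for these trans-dimensional moves as the chain runs.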

Abstract

An important but often overlooked obstacle in multivariate discrete data models is the specification of endogenous covariates. Endogeneity can be modeled as latent or observed, representing competing hypotheses about the outcomes being considered. However, little attention has been paid to determining which specification is best supported by the data. This paper highlights the use of existing Bayesian model comparison techniques to investigate the proper specification for endogenous covariates and to understand the nature of endogeneity. Consideration of both observed and latent modeling approaches is emphasized in two empirical applications. The first application examines linkages for banking contagion and the second evaluates the impact of education on socioeconomic outcomes.

Abstract

This paper examines variable selection among various factors related to motor vehicle fatality rates using a rich set of panel data. Four Bayesian methods are used: Extreme Bounds Analysis (EBA), Stochastic Search Variable Selection (SSVS), Bayesian Model Averaging (BMA), and Bayesian Additive Regression Trees (BART). The first three of these employ parameter estimation; the last, BART, involves no parameter estimation but nonetheless also has implications for variable selection. The variables examined in the models include traditional motor vehicle and socioeconomic factors along with important policy-related variables. Policy recommendations are suggested with respect to cell phone use, modernization of the fleet, alcohol use, and diminishing suicidal behavior.
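
To fix ideas, SSVS in its canonical George and McCulloch form places a spike-and-slab mixture prior on each candidate coefficient,

```latex
\beta_j \mid \gamma_j \sim (1-\gamma_j)\,\mathcal{N}(0,\tau_j^2) + \gamma_j\,\mathcal{N}(0,c_j^2\tau_j^2),
\qquad
\gamma_j \sim \mathrm{Bernoulli}(p_j),
```

with τ_j small (the spike) and c_j τ_j large (the slab), so that posterior inclusion probabilities P(γ_j = 1 | data) provide the variable-selection evidence. BMA similarly weights candidate models by their posterior probabilities, while BART, despite estimating no regression coefficients, signals which variables matter through how often they are used in the trees' splits.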

Abstract

We put forward the idea that, for model selection, intrinsic priors are becoming the center of a dominant cluster of methodologies for objective Bayesian model selection.

The intrinsic method and its applications have been developed over the last two decades and have stimulated closely related methods. The intrinsic methodology can be thought of as the long-sought approach to objective Bayesian model selection and hypothesis testing.

In this paper we review the foundations of the intrinsic priors, their general properties, and some of their applications.
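
As a brief reminder of the construction the review builds on, stated here in its arithmetic form (readers should consult the chapter for the precise development): starting from default, typically improper, priors, the arithmetic intrinsic Bayes factor corrects the default Bayes factor on the full data x by averaging over minimal training samples x(ℓ),

```latex
B^{AI}_{21}(x) = B^{N}_{21}(x)\,\cdot\,\frac{1}{L}\sum_{\ell=1}^{L} B^{N}_{12}\bigl(x(\ell)\bigr),
```

and intrinsic priors are the priors whose ordinary Bayes factors reproduce this quantity asymptotically, thereby turning the training-sample device into a genuine prior specification.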

Abstract

Structural models of demand founded on the classic work of Berry, Levinsohn, and Pakes (1995) link variation in aggregate market shares for a product to the influence of product attributes on heterogeneous consumer tastes. We consider implementing these models in settings with complicated products where consumer preferences for product attributes are sparse, that is, where only a small proportion of a high-dimensional set of product characteristics influences consumer tastes. We propose a multistep estimator to efficiently perform uniform inference. Our estimator employs a penalized pre-estimation model specification stage to consistently estimate nonlinear features of the BLP model. We then perform selection via a Triple-LASSO over explanatory controls, treatment selection controls, and instruments. After selecting variables, we use an unpenalized GMM estimator for inference. Monte Carlo simulations verify the performance of these estimators.
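
As a rough schematic of the "select with a penalty, then estimate without one" logic (in the spirit of post-double-selection; this is not the authors' BLP estimator, and all names below are placeholders), one might select controls that predict the outcome, controls that predict the endogenous regressor, and instruments that predict the endogenous regressor, and then run an unpenalized instrumental-variables step on the union of selected variables:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def _keep(coef, tol=1e-8):
    """Indices of variables with non-zero Lasso coefficients."""
    return np.flatnonzero(np.abs(coef) > tol)

def triple_lasso_iv(y, d, Z, X, cv=5):
    """Schematic selection-then-unpenalized-estimation workflow (placeholder names).

    y : outcome, d : endogenous regressor, Z : candidate instruments,
    X : high-dimensional controls. Returns 2SLS estimates for
    (intercept, d, selected controls).
    """
    kx_y = _keep(LassoCV(cv=cv).fit(X, y).coef_)     # controls predicting the outcome
    kx_d = _keep(LassoCV(cv=cv).fit(X, d).coef_)     # controls predicting the treatment
    kz = _keep(LassoCV(cv=cv).fit(Z, d).coef_)       # instruments predicting the treatment

    Xs = X[:, sorted(set(kx_y) | set(kx_d))]         # union of selected controls
    Zs = Z[:, kz]
    ones = np.ones((len(y), 1))
    W = np.hstack([ones, d.reshape(-1, 1), Xs])      # structural regressors
    Iv = np.hstack([ones, Zs, Xs])                   # instruments plus exogenous controls

    # unpenalized 2SLS step on the selected variables
    What = Iv @ np.linalg.solve(Iv.T @ Iv, Iv.T @ W)  # first-stage fitted values
    beta = np.linalg.solve(What.T @ W, What.T @ y)
    return beta
```

The point of the final, unpenalized step is to support valid inference after high-dimensional selection, which is what the multistep construction described in the abstract is designed to deliver.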

Abstract

Copula modeling enables the analysis of multivariate count data, an analysis that has previously required imposing potentially undesirable correlation restrictions or limiting attention to models with only a few outcomes. This article presents a method for analyzing correlated counts that is appealing because it retains well-known marginal distributions for each response while simultaneously allowing for flexible correlations among the outcomes. The proposed framework extends the applicability of the method to settings with high-dimensional outcomes and provides an efficient simulation method to generate the correlation matrix in a single step. Another open problem that is tackled is that of model comparison; in particular, the article presents techniques for estimating marginal likelihoods and Bayes factors in copula models. The methodology is implemented in a study of the joint behavior of four categories of US technology patents. The results reveal that patent counts exhibit high levels of correlation among categories and that joint modeling is crucial for eliciting the interactions among these variables.
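
The key object here is the copula representation of the joint distribution. With a Gaussian copula, for example, arbitrary count marginals F_1, …, F_d are tied together by a correlation matrix R,

```latex
P(Y_1 \le y_1,\ldots,Y_d \le y_d)
= \Phi_R\!\bigl(\Phi^{-1}(F_1(y_1)),\ldots,\Phi^{-1}(F_d(y_d))\bigr),
```

so each margin can keep a familiar count form (Poisson, negative binomial, and so on) while R captures the dependence across outcomes; it is this correlation matrix that the article's single-step simulation method generates.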

DOI
10.1108/S0731-9053201434
Publication date
2014-11-19
Book series
Advances in Econometrics
Series copyright holder
Emerald Publishing Limited
Book series ISSN
0731-9053