Dynamic Factor Models: Volume 35


Table of contents

(23 chapters)

Part I: Methodology

Abstract

The Factor-augmented Error-Correction Model (FECM) generalizes the factor-augmented VAR (FAVAR) and the Error-Correction Model (ECM), combining error-correction, cointegration and dynamic factor models. It uses a larger set of variables compared to the ECM and incorporates the long-run information lacking from the FAVAR because of the latter’s specification in differences. In this paper, we review the specification and estimation of the FECM, and illustrate its use for forecasting and structural analysis by means of empirical applications based on Euro Area and US data.

Abstract

This paper is concerned with estimation of the parameters of a high-frequency VAR model using mixed-frequency data, both for the stock and for the flow case. Extended Yule–Walker estimators and (Gaussian) maximum likelihood type estimators based on the EM algorithm are considered. Properties of these estimators are derived, partly analytically and partly by simulations. Finally, the loss of information due to mixed-frequency data compared to the high-frequency situation, as well as the gain of information from using mixed-frequency data relative to low-frequency data, are discussed.

Abstract

Recent U.S. Treasury yields have been constrained to some extent by the zero lower bound (ZLB) on nominal interest rates. Therefore, we compare the performance of a standard affine Gaussian dynamic term structure model (DTSM), which ignores the ZLB, to a shadow-rate DTSM, which respects the ZLB. Near the ZLB, we find notable declines in the forecast accuracy of the standard model, while the shadow-rate model forecasts well. However, 10-year yield term premiums are broadly similar across the two models. Finally, in applying the shadow-rate model, we find no gain from estimating a slightly positive lower bound on U.S. yields.

Abstract

The implied volatility surface is the collection of volatilities implied by option contracts for different strike prices and times to maturity. We study factor models to capture the dynamics of this three-dimensional implied volatility surface. Three model types are considered to examine desirable features for representing the surface and its dynamics: a general dynamic factor model, restricted factor models designed to capture the key features of the surface along the moneyness and maturity dimensions, and intermediate spline-based methods. Key findings are that: (i) the restricted and spline-based models are both rejected against the general dynamic factor model, (ii) the factors driving the surface are highly persistent, and (iii) for the restricted models option Δ is preferred over the more commonly used strike relative to spot price as the measure of moneyness.

Part II: Factor Structure and Specification

Abstract

This paper compares alternative estimation procedures for multi-level factor models which imply blocks of zero restrictions on the associated matrix of factor loadings. We suggest a sequential least squares algorithm for minimizing the total sum of squared residuals and a two-step approach based on canonical correlations, both of which are much simpler and faster than the Bayesian approaches previously employed in the literature. An additional advantage is that our approaches can be used to estimate more complex multi-level factor structures where the number of levels is greater than two. Monte Carlo simulations suggest that the estimators perform well in typical sample sizes encountered in the factor analysis of macroeconomic data sets. We apply the methodologies to study international comovements of business and financial cycles.
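The two-step idea — estimate the pervasive factors first, then block-specific factors from the residuals — can be sketched in a few lines. The following is our own minimal NumPy illustration on simulated data, not the chapter's sequential least squares or canonical-correlations algorithm; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_blocks, n_per_block = 200, 3, 10
N = n_blocks * n_per_block

# Simulate a two-level structure: one global factor plus one factor per block.
g = rng.standard_normal(T)                      # global factor
f = rng.standard_normal((T, n_blocks))          # block-level factors
X = np.empty((T, N))
for b in range(n_blocks):
    cols = slice(b * n_per_block, (b + 1) * n_per_block)
    lam_g = rng.standard_normal(n_per_block)    # global loadings
    lam_b = rng.standard_normal(n_per_block)    # block loadings
    X[:, cols] = (np.outer(g, lam_g) + np.outer(f[:, b], lam_b)
                  + 0.5 * rng.standard_normal((T, n_per_block)))

def pc_factor(Y, k=1):
    """First k principal-component factors of Y (columns demeaned)."""
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    return U[:, :k] * s[:k]

# Step 1: estimate the global factor from the full panel.
g_hat = pc_factor(X).ravel()
# Step 2: estimate each block factor from that block's residuals
# after projecting out the estimated global factor.
coef = np.linalg.lstsq(g_hat[:, None], X, rcond=None)[0].ravel()
resid = X - np.outer(g_hat, coef)
f_hat = np.column_stack([
    pc_factor(resid[:, b * n_per_block:(b + 1) * n_per_block]).ravel()
    for b in range(n_blocks)
])

print(abs(np.corrcoef(g, g_hat)[0, 1]))
```

With pervasive loadings, the first principal component of the full panel is dominated by the global factor, and the within-block principal components of the residuals pick up the second level; an estimate of this kind could serve as a starting value for iterative procedures such as the sequential least squares algorithm the chapter proposes.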

Abstract

We generalise the spectral EM algorithm for dynamic factor models in Fiorentini, Galesi, and Sentana (2014) to bifactor models with pervasive global factors complemented by regional ones. We exploit the sparsity of the loading matrices so that researchers can estimate those models by maximum likelihood with many series from multiple regions. We also derive convenient expressions for the spectral scores and information matrix, which allow us to switch to the scoring algorithm near the optimum. We explore the ability of a model with a global factor and three regional ones to capture inflation dynamics across 25 European countries over 1999–2014.

Abstract

Previous studies have shown that the effectiveness of monetary policy depends, to a large extent, on the market expectations of its future actions. This paper proposes an econometric framework to address the effect of the current state of the economy on monetary policy expectations. Specifically, we study the effect of contractionary (or expansionary) demand (or supply) shocks hitting the euro area countries on the expectations of the ECB's monetary policy in two stages. In the first stage, we construct indexes of real activity and inflation dynamics for each country, based on soft and hard indicators. In the second stage, we use those indexes to provide assessments on the type of aggregate shock hitting each country and assess its effect on monetary policy expectations at different horizons. Our results indicate that expectations are responsive to aggregate contractionary shocks, but not to expansionary shocks. In particular, contractionary demand shocks have a negative effect on short-term monetary policy expectations, while contractionary supply shocks have a negative effect on medium- and long-term expectations. Moreover, shocks to different economies do not have significantly different effects on expectations, although some differences across countries arise.

Abstract

We propose a novel dynamic factor model to characterise comovements between returns on securities from different asset classes and different countries. We apply a global-class-country latent factor model and allow time-varying loadings. We are able to separate contagion (driven by asset exposures) and excess interdependence (driven by factor volatilities). Using data from 1999 to 2012, we find evidence of contagion from the US stock market during the 2007–2009 financial crisis, and of excess interdependence during the European debt crisis from May 2010 onwards. Neither contagion nor excess interdependence is found when the average measure of model-implied comovements is used.

Abstract

We compare methods to measure comovement in business cycle data using multi-level dynamic factor models. To do so, we employ a Monte Carlo procedure to evaluate model performance for different specifications of factor models across three different estimation procedures. We consider three general factor model specifications used in applied work. The first is a single-factor model, the second a two-level factor model, and the third a three-level factor model. Our estimation procedures are the Bayesian approach of Otrok and Whiteman (1998), the Bayesian state-space approach of Kim and Nelson (1998) and a frequentist principal components approach. The latter serves as a benchmark to measure any potential gains from the more computationally intensive Bayesian procedures. We then apply the three methods to a novel dataset on house prices in advanced and emerging markets from Cesa-Bianchi, Cespedes, and Rebucci (2015) and interpret the empirical results in light of the Monte Carlo results.

Abstract

In the context of Dynamic Factor Models, we compare point and interval estimates of the underlying unobserved factors extracted using small- and big-data procedures. Our paper differs from previous works in the related literature in several ways. First, we focus on factor extraction rather than on prediction of a given variable in the system. Second, the comparisons are carried out by applying the procedures considered to the same data. Third, we are interested not only in point estimates but also in confidence intervals for the factors. Based on a simulated system and the macroeconomic data set popularized by Stock and Watson (2012), we show that, for a given procedure, factor estimates based on different cross-sectional dimensions are highly correlated. On the other hand, given the cross-sectional dimension, the maximum likelihood Kalman filter and smoother factor estimates are highly correlated with those obtained using hybrid procedures. The principal components (PC) estimates are somewhat less correlated. Finally, the PC intervals based on asymptotic approximations are unrealistically tiny.
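The finding that, for a given procedure, factor estimates from different cross-sectional dimensions are highly correlated is easy to reproduce in a toy setting. The sketch below (our own illustration on simulated data, not the chapter's data or procedures) extracts the first principal component from a one-factor panel and from a small subpanel of it:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 150, 100
f = rng.standard_normal(T)                    # single common factor
lam = rng.standard_normal(N)                  # loadings
X = np.outer(f, lam) + rng.standard_normal((T, N))

def pc1(Y):
    """First principal-component factor of Y (columns demeaned)."""
    Yc = Y - Y.mean(axis=0)
    U, s, _ = np.linalg.svd(Yc, full_matrices=False)
    return U[:, 0] * s[0]

f_big = pc1(X)            # factor extracted from the full panel (N = 100)
f_small = pc1(X[:, :25])  # factor extracted from a small cross-section (N = 25)

print(round(abs(np.corrcoef(f_big, f_small)[0, 1]), 2))
```

Because the idiosyncratic noise averages out quickly across the cross-section, even a modest subpanel yields a factor estimate that is nearly perfectly correlated (up to sign) with the big-data estimate.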

Part III: Instability

Abstract

This paper shows that the parsimoniously time-varying methodology of Callot and Kristensen (2015) can be applied to factor models. We apply this method to study macroeconomic instability in the United States from 1959:1 to 2006:4 with a particular focus on the Great Moderation. Models with parsimoniously time-varying parameters are models with an unknown number of break points at unknown locations. The parameters are assumed to follow a random walk with a positive probability that an increment is exactly equal to zero so that the parameters do not vary at every point in time. The vector of increments, which is high dimensional by construction and sparse by assumption, is estimated using the Lasso. We apply this method to the estimation of static factor models and factor-augmented autoregressions using a set of 190 quarterly observations of 144 US macroeconomic series from Stock and Watson (2009). We find that the parameters of both models exhibit a higher degree of instability in the period from 1970:1 to 1984:4 relative to the following 15 years. In our setting the Great Moderation appears as the gradual ending of a period of high structural instability that took place in the 1970s and early 1980s.
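The random-walk-with-sparse-increments idea can be cast as a Lasso problem: writing the parameter path as a cumulative sum of increments, beta_t = d_1 + … + d_t, turns a time-varying regression into a linear model with a lower-triangular design in which most coefficients are exactly zero. Below is our own minimal illustration with a plain ISTA (proximal gradient) solver on simulated data — not the authors' estimator; for simplicity the initial level is penalised along with the increments, and the penalty level is chosen ad hoc rather than by any data-driven rule.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120
x = rng.standard_normal(T)
# True parameter path: constant, with a single break at t = 60.
beta = np.where(np.arange(T) < 60, 1.0, 2.5)
y = x * beta + 0.3 * rng.standard_normal(T)

# Reparameterise in increments d: beta_t = cumsum(d)_t, so the design is
# Z[t, s] = x_t for s <= t and 0 otherwise (lower triangular, scaled by x).
Z = np.tril(np.ones((T, T))) * x[:, None]

def lasso_ista(Z, y, lam, n_iter=20000):
    """Plain ISTA for 0.5*||y - Z d||^2 + lam*||d||_1."""
    d = np.zeros(Z.shape[1])
    L = np.linalg.norm(Z, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Z.T @ (Z @ d - y)
        d = d - grad / L
        d = np.sign(d) * np.maximum(np.abs(d) - lam / L, 0.0)  # soft threshold
    return d

d_hat = lasso_ista(Z, y, lam=5.0)          # sparse estimated increments
beta_hat = np.cumsum(d_hat)                # implied parameter path

print(int((np.abs(d_hat) > 1e-6).sum()))  # number of nonzero increments
```

The L1 penalty sets most increments to exactly zero, so the estimated path is piecewise constant with a small number of breaks at estimated locations, which is the mechanism the chapter exploits to date episodes of instability.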

Abstract

Several official institutions (NBER, OECD, CEPR, and others) provide business cycle chronologies with lags ranging from three months to several years. In this paper, we propose a Markov-switching dynamic factor model that allows for a more timely estimation of turning points. We apply one-step and two-step estimation approaches to French data and compare their performance. One-step maximum likelihood estimation is confined to relatively small data sets, whereas the two-step approach that uses principal components can accommodate much bigger information sets. We find that both methods give qualitatively similar results and agree with the OECD dating of recessions on a sample of monthly data covering the period 1993–2014. The two-step method is more precise in determining the beginnings and ends of recessions as given by the OECD. Both methods indicate additional downturns in the French economy that were too short to enter the OECD chronology.

Abstract

We analyze the interaction among the common and country-specific components for the inflation rates in 12 euro area countries through a factor model with time-varying parameters. The variation of the model parameters is driven by the score of the predictive likelihood, so that, conditionally on past data, the model is Gaussian and the likelihood function can be evaluated using the Kalman filter. The empirical analysis uncovers significant variation over time in the model parameters. We find that, over an extended time period, inflation persistence has fallen and the importance of common shocks has increased relative to that of idiosyncratic disturbances. According to the model, the fall in inflation observed since the sovereign debt crisis is broadly a common phenomenon, since no significant cross-country inflation differentials have emerged. Stressed countries, however, have been hit by unusually large shocks.

Part IV: Nowcasting and Forecasting

Abstract

We develop a framework for measuring and monitoring business cycles in real time. Following a long tradition in macroeconometrics, inference is based on a variety of indicators of economic activity, treated as imperfect measures of an underlying index of business cycle conditions. We extend existing approaches by allowing for heterogeneous lead–lag patterns of the various indicators along the business cycles. The framework is well suited for high-frequency monitoring of current economic conditions in real time – nowcasting – since inference can be conducted in the presence of mixed-frequency data and irregular patterns of data availability. Our assessment of the underlying index of business cycle conditions is accurate and more timely than popular alternatives, including the Chicago Fed National Activity Index (CFNAI). A formal real-time forecasting evaluation shows that the framework produces well-calibrated probability nowcasts that resemble the consensus assessment of the Survey of Professional Forecasters.

Abstract

We address the problem of selecting the common factors that are relevant for forecasting macroeconomic variables. In economic forecasting using diffusion indexes, the factors are ordered, according to their importance, in terms of relative variability, and are the same for each variable to predict; that is, the process of selecting the factors is not supervised by the predictand. We propose a simple and operational supervised method, based on selecting the factors on the basis of their significance in the regression of the predictand on the predictors. Given a potentially large number of predictors, we consider linear transformations obtained by principal components analysis. The orthogonality of the components implies that the standard t-statistics for the inclusion of a particular component are independent, and thus applying a selection procedure that takes into account the multiplicity of the hypothesis tests is both correct and computationally feasible. We focus on three main multiple testing procedures: Holm's sequential method, controlling the familywise error rate; the Benjamini–Hochberg method, controlling the false discovery rate; and a procedure for incorporating prior information on the ordering of the components, based on weighting the p-values according to the eigenvalues associated with the components. We compare the empirical performance of these methods with the classical diffusion index (DI) approach proposed by Stock and Watson, conducting a pseudo-real-time forecasting exercise that assesses the predictions of eight macroeconomic variables using factors extracted from a U.S. dataset of 121 quarterly time series. The overall conclusion is that nature is tricky, but essentially benign: the information that is relevant for prediction is effectively condensed by the first few factors. However, variable selection, which excludes some of the low-order principal components, can lead to a sizable improvement in forecasting in specific cases. Only in one instance, real personal income, were we able to detect a significant contribution from high-order components.
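Of the multiple testing procedures mentioned above, the Benjamini–Hochberg step-up rule is the easiest to sketch. The following is our own illustration on simulated data (names and dimensions are hypothetical, and a normal approximation replaces the exact t p-values): regress the predictand on each orthogonal principal component, collect the p-values, and keep the components passing the step-up thresholds.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
T, N = 200, 40
X = rng.standard_normal((T, N))             # panel of predictors
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
F = U * s                                   # principal components (orthogonal)
# Predictand loads on the first two components plus noise.
y = 1.0 * F[:, 0] + 0.8 * F[:, 1] + rng.standard_normal(T)

# Orthogonality means each t-statistic comes from a univariate regression.
k = 10                                      # candidate low-order components
pvals = []
for j in range(k):
    fj = F[:, j]
    b = fj @ y / (fj @ fj)
    resid = y - b * fj
    se = np.sqrt(resid @ resid / (T - 2)) / np.sqrt(fj @ fj)
    t = b / se
    # Two-sided p-value, normal approximation (T is large).
    pvals.append(2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))))
pvals = np.array(pvals)

# Benjamini–Hochberg step-up procedure at false discovery rate q:
# reject the ordered hypotheses up to the largest rank i with p_(i) <= q*i/k.
q = 0.05
order = np.argsort(pvals)
thresh = q * np.arange(1, k + 1) / k
passed = pvals[order] <= thresh
n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0
selected = sorted(int(j) for j in order[:n_sig])

print(selected)
```

In this toy setting the two components the predictand actually loads on survive the step-up thresholds, while the remaining components are screened out — the supervised selection the chapter advocates, as opposed to keeping the first few components unconditionally.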

Abstract

Forecasts from dynamic factor models potentially benefit from refining the data set by eliminating uninformative series. This paper proposes to use prediction weights as provided by the factor model itself for this purpose. Monte Carlo simulations and an empirical application to short-term forecasts of euro area, German, and French GDP growth from unbalanced monthly data suggest that both prediction weights and least angle regressions result in improved nowcasts. Overall, prediction weights provide yet more robust results.

DOI
10.1108/S0731-9053201635
Publication date
2016-01-06
Book series
Advances in Econometrics
Series copyright holder
Emerald Publishing Limited
ISBN
978-1-78560-353-2
eISBN
978-1-78560-352-5
Book series ISSN
0731-9053