Advances in Econometrics: Volume 16

Table of contents

(14 chapters)

In this paper, we develop four models to investigate the role of intentions (stated and true) and explanatory variables in forecasting purchases, based on the social psychology view that true intentions determine purchase behavior. We find that a weighted average of stated intentions, together with the complementary FED variables, is a powerful indicator of future purchase behavior. For intention survey designers, these results imply that a conversion scale is needed to translate stated intentions into true intentions, and that intention questions would yield more useful information if they were formulated in terms of probabilities rather than yes/no answers.

This paper proposes a methodology for incorporating psychometric data, such as stated preferences and subjective ratings of service attributes, into econometric discrete choice models of consumer behavior. An econometric formulation of the general framework of the methodology is presented, followed by two practical submodels. The first submodel combines revealed preference (RP) and stated preference (SP) data to estimate discrete choice models. The second combines a linear structural equation model with a discrete choice model to incorporate latent attributes into the choice model, using attitudinal data as their indicators. Empirical case studies of travel mode choice demonstrate the effectiveness and practicality of the methodology.

Empirical studies in marketing have typically characterized a household's purchase incidence decision, i.e. the household's decision of whether or not to buy a product on a given shopping visit, as being independent of the household's purchase incidence decisions in other product categories. These decisions, however, tend to be related, both because product categories serve as complements (e.g. bacon and eggs) or substitutes (e.g. colas and orange juices) in addressing the household's consumption needs, and because product categories vie with each other for the household's limited shopping budget. Existing empirical studies have either ignored such inter-relationships altogether or have accounted for them in a limited way by modeling household purchases in pairs of complementary product categories. Given the recent availability of IRI market basket data, which track purchases of panelists in several product categories over time, and the new computational Bayesian methods developed in Albert and Chib (1993) and Chib and Greenberg (1998), estimating high-dimensional multi-category models is now possible. This paper exploits these developments to fit an appropriate panel data multivariate probit model to household-level contemporaneous purchases in twelve product categories, with the descriptive goal of isolating correlations among the various product categories within the household's shopping basket. We provide an empirical scheme to endogenously determine the degree of complementarity and substitutability among product categories within a household's shopping basket, with full details of the methodology. Our main findings are that existing purchase incidence models underestimate the magnitude of cross-category correlations and overestimate the effectiveness of the marketing mix, and that ignoring unobserved heterogeneity across households overestimates cross-category correlations and underestimates the effectiveness of the marketing mix.
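The core mechanism behind such a model, correlated latent utilities driving contemporaneous purchase incidence across categories, can be sketched in a few lines. This is a hedged illustration, not the paper's estimator: the two-category setup, the intercepts, and the correlation value are all invented for the example.

```python
# Minimal sketch of multivariate-probit-style purchase incidence:
# each category is bought when its latent utility is positive, and the
# latent errors are correlated across categories within a shopping trip.
import math
import random

def simulate_purchases(n, rho, beta0=(-0.2, -0.2), seed=0):
    """Draw latent utilities z_k = beta0_k + eps_k with corr(eps_1, eps_2) = rho;
    category k is purchased on a trip when z_k > 0."""
    rng = random.Random(seed)
    baskets = []
    for _ in range(n):
        e1 = rng.gauss(0, 1)
        # Cholesky-style construction of a correlated second error term
        e2 = rho * e1 + math.sqrt(1 - rho**2) * rng.gauss(0, 1)
        baskets.append((beta0[0] + e1 > 0, beta0[1] + e2 > 0))
    return baskets

baskets = simulate_purchases(20000, rho=0.6)
p_joint = sum(b1 and b2 for b1, b2 in baskets) / len(baskets)
p1 = sum(b1 for b1, _ in baskets) / len(baskets)
p2 = sum(b2 for _, b2 in baskets) / len(baskets)
# With rho > 0 the two categories co-occur in the basket more often
# than independence of the two incidence decisions would imply.
print(p_joint > p1 * p2)
```

Models that assume independent incidence decisions effectively force `rho = 0`, which is the sense in which they understate cross-category correlation.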

The authors review current developments in experimental design for conjoint analysis and discrete choice models, emphasizing the issue of design efficiency. Drawing on recently developed optimal paired-comparison designs, they provide theoretical as well as empirical evidence that established design strategies can be improved with respect to design efficiency.

One of the most important issues facing a firm involved in direct marketing is the selection of addresses from a mailing list. When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually, these parameters are unknown and have to be estimated. These estimates are used to rank the potential addressees and to select the best targets.

Several methods for this selection process have been proposed in the recent literature. All of these methods treat the estimation and selection steps separately. Since estimation uncertainty is neglected, these methods lead to a suboptimal decision rule and hence to suboptimal profits. We derive an optimal Bayes decision rule that follows from the firm's profit function and explicitly takes estimation uncertainty into account. We show that the integral resulting from the Bayes decision rule can either be approximated through a normal posterior or be evaluated numerically by a Laplace approximation or by Markov chain Monte Carlo integration. An empirical example shows that higher profits indeed result.
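The gist of such a decision rule can be illustrated with a small sketch: integrate the profit function over posterior draws of each addressee's response probability, rather than plugging in a point estimate. The margin, mailing cost, and posterior draws below are invented stand-ins, not the output of any real estimation step.

```python
# Hedged sketch of profit-based target selection under parameter uncertainty:
# mail an address only when its *posterior expected* profit is positive.
import random

def expected_profit(p_draws, margin=50.0, cost=0.8):
    """Monte Carlo estimate of E[margin * p - cost] over posterior draws
    of an addressee's response probability p."""
    return sum(margin * p - cost for p in p_draws) / len(p_draws)

def select(addressees, margin=50.0, cost=0.8):
    """Keep exactly those addressees whose posterior expected profit is positive."""
    return [name for name, draws in addressees
            if expected_profit(draws, margin, cost) > 0]

rng = random.Random(1)
# Two hypothetical addressees with simulated posterior draws of p:
# "A" centered near 3% response, "B" near 1%.
addressees = [
    ("A", [min(max(rng.gauss(0.03, 0.01), 0.0), 1.0) for _ in range(500)]),
    ("B", [min(max(rng.gauss(0.01, 0.005), 0.0), 1.0) for _ in range(500)]),
]
print(select(addressees))
```

A plug-in rule uses a single point estimate of `p` instead of the draws; averaging the profit over the posterior is what "explicitly takes estimation uncertainty into account" amounts to here.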

In this paper we consider a recently developed nonparametric econometric method that is ideally suited to a wide range of marketing applications. We demonstrate the usefulness of this method via an application to direct marketing using data obtained from the Direct Marketing Association. Using independent hold-out data, the benchmark parametric model (logit) correctly predicts 8% of purchases by those who actually make a purchase, while the nonparametric method correctly predicts 39%. A variety of competing estimators are considered, the next best being semiparametric index and neural network models, both of which turn in 36% correct prediction rates.

Marketing researchers may be confronted with biases when estimating response coefficients of multiplicative promotion models from linearly aggregated data. This paper demonstrates how to recover the parameters obtained with data that are aggregated in a way compatible with such models. It provides evidence that the geometric means of sales and of prices across stores can be predicted with accuracy from their arithmetic means and standard deviations. Employing these predictions in a market-level model yields parameter estimates that are consistent with those obtained with the actual geometric means and fairly close to coefficients derived at the individual store level.
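One simple way such a prediction can work, assuming cross-store prices are roughly lognormal, is the identity that the geometric mean equals m / sqrt(1 + (s/m)^2), where m and s are the arithmetic mean and standard deviation. A hedged numerical check with simulated (not actual) store prices:

```python
# Illustrative check (assumes lognormality; not the paper's exact procedure):
# approximate a geometric mean from the arithmetic mean and standard deviation.
import math
import random
import statistics

rng = random.Random(0)
# Simulated cross-store prices, roughly lognormal by construction.
prices = [math.exp(rng.gauss(0.5, 0.2)) for _ in range(5000)]

m = statistics.mean(prices)
s = statistics.stdev(prices)
geo_exact = math.exp(statistics.mean(math.log(p) for p in prices))
geo_approx = m / math.sqrt(1 + (s / m) ** 2)

# The two values should agree closely for approximately lognormal data.
print(geo_exact, geo_approx)
```

The point of the abstract is that only `m` and `s` need to survive aggregation for the multiplicative model's preferred input, the geometric mean, to be recovered.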

Market structure analysis continues to be a topic of considerable interest to marketing researchers. One of the most common representations of the manner in which brands compete in a market is via market maps that show the relative locations of brands in multi-attribute space. In this paper, we use logit brand choice models to estimate a heterogeneous demand system capable of identifying such brand maps. Unlike the previous literature, we use only aggregate store-level data to obtain these maps. Additionally, by recognizing that consumer preferences are heterogeneous both within a store's market area and across store market areas, the approach allows us to identify store-specific brand maps. The methodology also accounts for endogeneity in prices due to possible correlation between prices and unobserved factors, such as shelf space and shelf location, that affect brand sales. We provide an empirical application of our methodology to store-level data from a retail chain for the laundry detergents product category. From a manager's perspective, our model enables micromarketing strategies in which promotions are targeted to those stores in which a brand has the most favorable, differentiated position.

Market share attraction models are useful tools for analyzing competitive structures. The models can be used to infer cross-effects of marketing-mix variables, and own effects can also be adequately estimated while conditioning on competitive reactions. Important features of attraction models are that they incorporate the constraints that market shares sum to unity and that the market shares of individual brands lie between 0 and 1. Beyond analyzing competitive structures, attraction models are also often considered for forecasting market shares.

The econometric analysis of the market share attraction model has not received much attention. Topics such as specification, diagnostics, estimation, and forecasting have not been thoroughly discussed in the academic marketing literature. In this chapter we go through a range of these topics and, along the way, indicate that there are ample opportunities to improve upon present-day practice. We also discuss an alternative approach to the log-centering method of linearizing the attraction model. This approach leads to easier inference and interpretation of the model.
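For intuition, the log-centering transformation mentioned above is easy to state: with attractions A_i and shares s_i = A_i / Σ_j A_j, dividing each share by the geometric mean share and taking logs removes the common denominator, leaving an expression linear in log A_i. A minimal sketch with invented attraction values:

```python
# Sketch of log-centering for an attraction model: log(s_i / s~) equals
# log(A_i) minus the mean of the log attractions, where s~ is the
# geometric mean share. Attraction values here are illustrative.
import math

def shares(attractions):
    """Attraction-model shares: s_i = A_i / sum_j A_j."""
    total = sum(attractions)
    return [a / total for a in attractions]

def log_center(values):
    """Subtract the mean of the logs, i.e. divide by the geometric mean."""
    logs = [math.log(v) for v in values]
    mean_log = sum(logs) / len(logs)
    return [l - mean_log for l in logs]

A = [2.0, 1.0, 0.5]   # hypothetical brand attractions
s = shares(A)

# Shares sum to one, and log-centering the shares gives the same vector
# as log-centering the attractions: the shared denominator cancels,
# which is what makes the transformed model linear in the parameters.
print(round(sum(s), 10))  # 1.0
print(all(abs(a - b) < 1e-12 for a, b in zip(log_center(s), log_center(A))))
```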

Economic theory provides a great deal of information about demand models. Specifically, theory dictates many relationships that expenditure and price elasticities should satisfy. Unfortunately, analysts cannot be certain whether these relationships hold exactly. Many analysts perform hypothesis tests to determine whether the theory is correct. If the theory is accepted, the relationships are assumed to hold exactly; if it is rejected, they are ignored. In this paper we outline a hierarchical Bayesian formulation that allows us to treat the theoretical restrictions as holding stochastically, or approximately. Our estimates are shrunk towards those implied by economic theory. This technique can incorporate the information that a theory is approximately right even when exact hypothesis tests would reject the theory and discard all information from it. We illustrate our model with an application to a store-level system of demand equations using supermarket scanner data.
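The shrinkage idea can be illustrated in the simplest possible case: a single unrestricted elasticity estimate combined with a single stochastic restriction by precision weighting. This is an illustrative calculation with made-up numbers, not the chapter's full hierarchical model.

```python
# Illustrative precision-weighted shrinkage toward a theory-implied value.
# All numbers are invented; a tight theory prior (small theory_sd) pulls the
# estimate toward the restriction, a loose one leaves it nearly untouched.

def shrink(estimate, se, theory_value, theory_sd):
    """Combine a data estimate (standard error se) with a stochastic
    restriction (theory_value, theory_sd) by inverse-variance weighting."""
    w_data = 1.0 / se**2
    w_theory = 1.0 / theory_sd**2
    return (w_data * estimate + w_theory * theory_value) / (w_data + w_theory)

# Hypothetical own-price elasticity: data say -1.8 (se 0.4),
# theory says about -1.0 (believed to hold to within sd 0.2).
print(round(shrink(-1.8, se=0.4, theory_value=-1.0, theory_sd=0.2), 3))
```

Classical testing forces an all-or-nothing choice between the endpoints (-1.8 if the restriction is rejected, -1.0 if imposed), whereas the stochastic restriction lands in between.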

Although opinions among time series econometricians vary concerning whether the variables in linear regression models need to be stationary, the majority view is that stationary variables are desirable, if not required, because of the dangers of “spurious regression” (Enders, 1985). Trending but independent variables will likely be significantly correlated when combined in a regression analysis. The issue of “spurious regression” and the appropriate manner of including explanatory variables in nonlinear models has not been extensively examined. In this study we examine the issues of model discrimination and “spurious regression” between two nonlinear diffusion models. We use the Generalized Bass Model (GBM) proposed by Bass, Krishnan and Jain (1994), where explanatory variables are included as percentage changes and as logarithms, in comparison with the Cox (1972) proportional hazards model with non-stationary variables included as levels. We use simulations to analyze estimation properties and model discrimination issues for the two models.

Historically, standard regression has been used to assess performance in marketing, especially of salespeople and retail outlets. A model of performance is estimated using ordinary least squares, the residuals are computed, and the decision-making units, say store managers, are ranked in order of the residuals. The problem is that the regression-line approach characterizes average performance; the focus should be on best performance. Frontier analysis, especially stochastic frontier analysis (SFA), is a way to benchmark such best performance. Deterministic frontier analysis is also discussed in passing. The distinction between conventional ordinary least squares analysis and frontier analysis is especially marked when heteroscedasticity is present. Most of the focus of benchmarking has been on identifying the best-performing units. The real insight, though, comes from explaining the benchmark gap. Stochastic frontier analysis can, and should, model both phenomena simultaneously.
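The conventional benchmark that the chapter argues against is easy to reproduce: fit an OLS performance model and rank units by residual. A small sketch with invented store data (frontier analysis would instead benchmark each unit against best, not average, performance):

```python
# Sketch of the OLS-residual ranking described above, with invented data.
# Units above the fitted line (positive residuals) are deemed to outperform.

def ols_fit(xs, ys):
    """Simple-regression OLS: return intercept a and slope b of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

stores = ["S1", "S2", "S3", "S4"]        # hypothetical decision-making units
effort = [1.0, 2.0, 3.0, 4.0]            # e.g. a staffing or input measure
sales  = [2.1, 3.9, 6.5, 7.9]            # observed performance

a, b = ols_fit(effort, sales)
residuals = {s: y - (a + b * x) for s, x, y in zip(stores, effort, sales)}
ranking = sorted(stores, key=lambda s: residuals[s], reverse=True)
print(ranking[0])  # best performer relative to the *average* line
```

The OLS line passes through the middle of the data, so the ranking measures distance from average practice; a frontier model would shift the benchmark to the envelope of best observed performance.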

DOI
10.1016/S0731-9053(2002)16
Publication date
Book series
Advances in Econometrics
Series copyright holder
Emerald Publishing Limited
ISBN
978-0-76230-857-6
eISBN
978-1-84950-142-2
Book series ISSN
0731-9053