Advances in Business and Management Forecasting: Volume 5

Table of contents

(18 chapters)

This paper presents the results of a study comparing the ability of neural network models and multiple discriminant analysis (MDA) models to predict bond rating changes, and examines whether segmentation by investment grade improves classification. Data were collected on more than 900 bonds whose Standard & Poor's rating changed during the period 1997 to 2002. This dataset was matched with corresponding firms that had the same initial bond rating but whose rating did not change. The correspondence was based on the firms being in the same industry, holding the same rating at the time of the change (within a one-month window), and being of approximately the same asset size (within 20%). This relatively stringent set of criteria reduced the dataset to 282 pairs of companies. A neural network model and a multiple discriminant analysis model were used to predict both the occurrence of a bond rating change and the general direction of movement from one bond rating to another. The predictive variables were financial ratios and their rates of change. In almost all cases, particularly for the larger sample studies, the neural network models were better predictors than the multiple discriminant models. The paper reviews in detail the performance of the respective models, their strengths and limitations – particularly with respect to underlying assumptions – and future research directions.
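
As a rough illustration of the model comparison described above, the following minimal Python sketch pits a small neural network against linear discriminant analysis (the classical MDA workhorse) on synthetic matched-pair data; the predictor columns and the injected signal are placeholders, not the chapter's actual financial ratios.

```python
# Sketch: compare a neural network with linear discriminant analysis for
# predicting whether a bond's rating changes. All data are synthetic
# placeholders for the chapter's financial ratios and rates of change.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pairs = 282
# Four hypothetical predictors: two financial ratios and their rates of change.
X = rng.normal(size=(2 * n_pairs, 4))
y = np.repeat([1, 0], n_pairs)           # 1 = rating changed, 0 = matched control
X[y == 1] += 0.5                          # inject a weak separating signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

mda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

print("MDA accuracy:", accuracy_score(y_te, mda.predict(X_te)))
print("NN accuracy: ", accuracy_score(y_te, nn.predict(X_te)))
```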

The effectiveness of corporate governance is a major factor in forecasting firm performance. We examine the relationships among cross-listing, corporate governance, and firm performance for a sample of Chinese cross-listed companies. We show that cross-listed firms display higher overall quality of corporate governance than non-cross-listed firms, and that better corporate governance in turn results in higher operating performance. Our results support the bonding hypothesis of cross-listing. We further illustrate that cross-listing status encapsulates the higher quality of corporate governance that leads to higher operating performance. When forecasting the performance of cross-listed companies, it is therefore important to recognize the substitution effect between cross-listing and corporate governance.

We propose a novel method of forecasting equity option spreads using the degree of multiple listing as a proxy for expectations of future spreads. Spreads are a transaction fee for traders. To determine the future spreads on options being considered for purchase, traders must take into account current market trends affecting spreads. One such trend is the continued decline in spreads due to the multiple listing of options. Options listed on four to six exchanges compete more intensely than those listed on fewer exchanges, so they may be expected to experience greater future declines in spreads. This study identifies the listing dates and number of listing exchanges for options listed on up to six exchanges as of May 2005. Listing criteria for multiple listing are examined using short- and long-term trading volumes, market capitalization, net income, and total assets; market capitalization, net income, and total assets prove to be significant determinants of multiple listing, while short- and long-term volumes are found to have no explanatory power. Ranges of listing criteria are specified so that traders may locate the options of their choice.
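
A minimal sketch of how such determinants of multiple listing might be examined, using a logistic regression on simulated firm characteristics; the variable names, data, and the data-generating rule are hypothetical assumptions, not the study's actual specification.

```python
# Sketch: model the probability that an option is multiply listed as a
# function of firm characteristics. All data are simulated; by construction
# only the size-related variables carry signal, mirroring the finding above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.lognormal(size=n),   # short-term trading volume (placeholder)
    rng.lognormal(size=n),   # long-term trading volume (placeholder)
    rng.lognormal(size=n),   # market capitalization
    rng.normal(size=n),      # net income
    rng.lognormal(size=n),   # total assets
])
# Synthetic rule: only size-related variables drive multiple listing.
logit = 0.8 * np.log(X[:, 2]) + 0.5 * X[:, 3] + 0.4 * np.log(X[:, 4]) - 1.0
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

model = sm.Logit(y.astype(float), sm.add_constant(X)).fit(disp=0)
print(model.summary(xname=["const", "vol_short", "vol_long",
                           "mkt_cap", "net_income", "assets"]))
```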

Regression analysis is a commonly applied technique for measuring relationships among, and predicting or forecasting the performance of, comparable units. A set of comparable units is a group of entities each performing roughly the same set of activities. In this chapter, we apply a modified version of our recently developed methodology to incorporate into the regression analysis a new variable that captures the unique weighting of each comparable unit. This new variable is the relative efficiency of each comparable unit, generated by a technique called data envelopment analysis (DEA). The results of applying this methodology with the DEA variable to a hospital labor dataset are presented.
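
A minimal sketch of the two-stage idea: compute input-oriented CCR efficiency scores by linear programming, then include them as an extra regressor. The hospital inputs, outputs, and response here are simulated placeholders, and the CCR envelopment form is a standard DEA model standing in for the chapter's specific methodology.

```python
# Sketch: CCR (input-oriented) DEA efficiency scores via scipy's LP solver,
# then used as an additional explanatory variable in an OLS regression.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n = 20                                   # comparable units (e.g., hospitals)
X_in = rng.uniform(1, 10, size=(n, 2))   # inputs (e.g., labor hours, beds)
Y_out = rng.uniform(1, 10, size=(n, 1))  # outputs (e.g., patient days)

def ccr_efficiency(k):
    """Envelopment form: min theta s.t. sum_j lam_j x_j <= theta*x_k,
    sum_j lam_j y_j >= y_k, lam >= 0."""
    c = np.concatenate([[1.0], np.zeros(n)])          # minimize theta
    A_in = np.hstack([-X_in[k][:, None], X_in.T])     # input constraints
    A_out = np.hstack([np.zeros((Y_out.shape[1], 1)), -Y_out.T])
    b = np.concatenate([np.zeros(X_in.shape[1]), -Y_out[k]])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

eff = np.array([ccr_efficiency(k) for k in range(n)])

# Second stage: efficiency enters the regression as a weighting variable.
target = rng.normal(size=n)              # placeholder response (e.g., labor cost)
A = np.column_stack([np.ones(n), X_in, eff])
beta, *_ = np.linalg.lstsq(A, target, rcond=None)
print("Efficiency scores:", np.round(eff, 3))
print("OLS coefficients (incl. DEA efficiency):", np.round(beta, 3))
```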

This chapter will present a goal programming model which simultaneously generates forecasts for the aggregate level and for lower echelons in a multilevel forecasting context. Data from an actual service firm will be used to illustrate and test the proposed model against a standard forecast technique based on the bottom-up/top-down approach.
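A minimal sketch of the goal programming idea: deviation variables pull reconciled item forecasts toward their lower-echelon values while a hard constraint enforces the aggregate total. The numbers and the equal deviation weights are illustrative assumptions, not the chapter's model or data.

```python
# Sketch: a tiny goal program reconciling item-level forecasts with an
# aggregate forecast, solved with scipy's LP solver.
import numpy as np
from scipy.optimize import linprog

item_fc = np.array([120.0, 80.0, 60.0])   # lower-echelon forecasts
agg_fc = 270.0                            # aggregate-level forecast
n = len(item_fc)

# Variables: x_1..x_n (reconciled forecasts), then d+_i and d-_i deviations.
# Goal constraints: x_i - d+_i + d-_i = item_fc[i]; hard: sum(x) = agg_fc.
c = np.concatenate([np.zeros(n), np.ones(2 * n)])       # min total deviation
A_eq = np.zeros((n + 1, 3 * n))
b_eq = np.zeros(n + 1)
for i in range(n):
    A_eq[i, i] = 1.0
    A_eq[i, n + i] = -1.0        # d+_i (overshoot)
    A_eq[i, 2 * n + i] = 1.0     # d-_i (undershoot)
    b_eq[i] = item_fc[i]
A_eq[n, :n] = 1.0
b_eq[n] = agg_fc

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (3 * n))
print("Reconciled forecasts:", np.round(res.x[:n], 2))  # sum equals 270
```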

To fully accommodate the correlations between semiconductor product demands and external information such as end-market trends or regional economic growth, a linear dynamic system is introduced in this chapter to improve forecasting performance in supply chain operations. In conjunction with generic Gaussian noise assumptions, the proposed state-space model leads to an expectation-maximization (EM) algorithm to estimate model parameters and predict production demands. Since the set of external indicators is of high dimensionality, principal component analysis (PCA) is applied to reduce the model order and the corresponding computational complexity without loss of substantial statistical information. An experimental study on real electronic products demonstrates that this forecasting methodology produces more accurate predictions than conventional approaches, thereby helping to improve production planning and the quality of semiconductor supply chain management.
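
A minimal sketch of the pipeline on simulated data: PCA compresses the external indicators, and the leading components feed a Gaussian state-space model. Parameters here are estimated by maximum likelihood via statsmodels rather than the chapter's EM algorithm, and the final exog row is reused as a placeholder for future indicator values.

```python
# Sketch: PCA-reduced external indicators as exogenous inputs to a Gaussian
# state-space (local linear trend) model of product demand. Simulated data.
import numpy as np
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(3)
T, p = 120, 15
indicators = rng.normal(size=(T, p)).cumsum(axis=0)   # external indicators
factors = PCA(n_components=3).fit_transform(indicators)

demand = (100 + 0.5 * np.arange(T)
          + factors @ np.array([2.0, -1.0, 0.5])
          + rng.normal(scale=3, size=T))

model = sm.tsa.UnobservedComponents(demand, level="local linear trend",
                                    exog=factors)
fit = model.fit(disp=0)
print(fit.summary().tables[1])
# One-step-ahead forecast, reusing the last factor row as a stand-in exog.
print("Forecast:", fit.forecast(steps=1, exog=factors[-1:]))
```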

When forecasting intermittent demand, the method derived by Croston (1972) is often cited. Previous research compared Croston's forecasting method favorably with simple exponential smoothing under the assumption that nonzero demand occurs as a Bernoulli process with a constant probability. In practice, however, the assumption of a constant probability for the occurrence of nonzero demand is often violated. This research investigates Croston's method under violation of that assumption. In a simulation study, forecasts derived using single exponential smoothing (SES) are compared with forecasts from a modification of Croston's method that uses double exponential smoothing to forecast the time between nonzero demands, assuming a normal distribution for demand size with different standard deviation levels. This methodology may be applicable to forecasting intermittent demand at the beginning or end of a product's life cycle.
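
For reference, a minimal sketch of classic Croston's method: nonzero demand sizes and inter-demand intervals are smoothed separately and the forecast is their ratio. The smoothing constant and the simulated demand process are illustrative assumptions, and the double-smoothing modification studied in the chapter is not shown.

```python
# Sketch of Croston's (1972) method for intermittent demand, on simulated data.
import numpy as np

def croston(demand, alpha=0.1):
    """Return per-period demand-rate forecasts for an intermittent series."""
    z = None      # smoothed nonzero demand size
    p = None      # smoothed interval between nonzero demands
    q = 1         # periods since last nonzero demand
    fc = np.full(len(demand) + 1, np.nan)
    for t, d in enumerate(demand):
        if d > 0:
            if z is None:
                z, p = d, q            # initialize at first nonzero demand
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
        fc[t + 1] = z / p if z is not None else np.nan
    return fc

rng = np.random.default_rng(4)
occurs = rng.uniform(size=60) < 0.3          # Bernoulli demand occurrence
sizes = rng.normal(20, 5, size=60).clip(min=1)
demand = np.where(occurs, sizes, 0.0)
print("Final Croston demand-rate forecast:", round(croston(demand)[-1], 2))
```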

Many forecasting methodologies used in the new product development process are superficial techniques that either fail to incorporate the voice of the consumer or only touch on superficial consumer attitudes while completely ignoring the affectively laden hedonic aspects of consumption. This chapter demonstrates how a relatively new qualitative methodology, the Zaltman Metaphor Elicitation Technique (ZMET), can provide managers with insight into the critical psychosocial and emotional landscape which frames how consumers react to a new offering. These insights can be leveraged at any stage of the new product development process to forecast and fine-tune deep consumer resonance with a product offering.

Production operations managers have long been concerned about new product development and the life cycle of these products. Because many products do not sell at constant levels throughout their lives, product life cycles (PLCs) must be considered when developing sales forecasts. Innovation diffusion models have successfully been employed to investigate the rate at which goods and/or services pass through the PLC. This research investigates innovation diffusion models and their relation to the PLC. The model is developed and then tested using modem sales from June 1994 to May 2006.
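A minimal sketch of fitting an innovation diffusion model of the kind discussed: the standard Bass cumulative-adoption curve is estimated by nonlinear least squares on simulated data, with the modem sales and the chapter's exact specification not reproduced here.

```python
# Sketch: fit a Bass diffusion model to simulated cumulative adoption data.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Bass model cumulative adopters m*F(t), with innovation p, imitation q."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(1, 61, dtype=float)                     # months
true = bass_cumulative(t, m=5000, p=0.01, q=0.25)
rng = np.random.default_rng(5)
observed = true + rng.normal(scale=50, size=t.size)   # noisy cumulative sales

(m, p, q), _ = curve_fit(bass_cumulative, t, observed,
                         p0=[4000, 0.02, 0.2], maxfev=10000)
print(f"market potential m={m:.0f}, innovation p={p:.4f}, imitation q={q:.4f}")
peak = np.log(q / p) / (p + q)          # closed-form time of peak adoption
print(f"implied PLC sales peak at t={peak:.1f} months")
```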

The elements involved in selecting a good statistical forecasting case are set forth. These include realism, managerial decision making possibilities, provision of extensive data, allowances for discussion of data incompleteness and possible additional data needs, and possibilities for utilizing a variety of forecasting methods. A case describing forecasting contributions to the United Way of America, used in graduate and undergraduate statistics classes, is presented. Finally, a list of grading criteria with suggested weights and issues to be covered is presented.

In this chapter, we describe how time series analysis can often provide better insight than prior-year data for predicting the total impact of an atypical event, including (1) taking into account other atypical events, (2) determining whether the impact lasted longer than one season, and (3) adjusting for any performance/metric "rebounding" in subsequent seasons. We demonstrate the use of time series analysis by estimating the impact of the 9/11 terror attacks on the Hawaiian tourism industry. Terror attacks, in addition to the potential loss of life and property, can induce a post-event fear factor that results in decreased revenue and profitability for businesses and their respective industries, insurers, and tax-receiving governments.
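
A minimal sketch of one common way to quantify such an event effect: an intervention dummy with geometric decay inside a seasonal ARIMA regression. The tourism series, event date, decay rate, and model orders are all simulated assumptions, not the chapter's estimates.

```python
# Sketch: intervention analysis of a one-time event on a seasonal series,
# using a decaying step dummy in a regression-with-SARIMA-errors model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T, event = 96, 60                                 # monthly data, event month
season = 10 * np.sin(2 * np.pi * np.arange(T) / 12)
decay = np.where(np.arange(T) >= event,
                 0.85 ** (np.arange(T) - event), 0.0)   # fading impact
visitors = 500 + season - 120 * decay + rng.normal(scale=8, size=T)

model = sm.tsa.SARIMAX(visitors, exog=decay, order=(1, 0, 0),
                       seasonal_order=(1, 0, 0, 12))
fit = model.fit(disp=0)
print("Estimated event impact coefficient:", round(fit.params[0], 1))
print("Implied total lost visitors:", round(fit.params[0] * decay.sum(), 0))
```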

This chapter shows how the forecasting and planning functions in a supply chain can be organized to yield optimal forecasts for the entire chain. We achieve this result by replacing the process of generating forecasts with that of making optimal coordinated supply chain decisions. The ideal performance for a supply chain is to have the flows of materials perfectly synchronized with the demand rate for the finished product that the chain produces. When this synchronization is achieved, we have a pure "demand pull" supply chain. The ideal is difficult to achieve because forecasting and decision making in supply chains are typically decentralized, leaving forecasting and planning uncoordinated. Creating a competitive advantage for the finished product requires achieving the ideal; failing to achieve it leads to uncoordinated forecasts and decisions that trigger unintended inventory buildup, lost sales, bullwhip effects, slow response, and high costs.

This chapter shows how (1) the ideal synchronous supply chain flows can be achieved using temporal linear programs, and (2) each individual member company can then be guided in developing an optimal operations plan for executing its part of the overall supply chain plan. Together, these two steps allow the entire supply chain to achieve the ideal flow rates.
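
A minimal sketch of a temporal linear program in this spirit: a single-product, multi-period flow-balance model that synchronizes production with demand at minimum production-plus-holding cost. The demands, costs, and capacity are illustrative assumptions, not the chapters' supply chain data.

```python
# Sketch: a temporal LP synchronizing production with finished-product demand.
import numpy as np
from scipy.optimize import linprog

demand = np.array([40.0, 55.0, 70.0, 50.0])      # finished-product demand
T = len(demand)
prod_cost, hold_cost, capacity = 2.0, 0.5, 60.0

# Variables: production p_1..p_T, then ending inventory i_1..i_T.
c = np.concatenate([np.full(T, prod_cost), np.full(T, hold_cost)])
A_eq = np.zeros((T, 2 * T))
b_eq = demand.copy()
for t in range(T):
    A_eq[t, t] = 1.0           # p_t
    A_eq[t, T + t] = -1.0      # -i_t
    if t > 0:
        A_eq[t, T + t - 1] = 1.0   # flow balance: i_{t-1} + p_t - i_t = d_t

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, capacity)] * T + [(0, None)] * T)
print("Production plan:   ", np.round(res.x[:T], 1))
print("Ending inventories:", np.round(res.x[T:], 1))
```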

Assume that we generate forecasts from a model y = cx + d + ε, where c and d are placement parameters estimated from observations on x and y, and ε is the residual.

If the residual ε is observed to be symmetric about the mode, it is usually assumed to be distributed by the Gaussian family of functions. If ε is skewed to the left or to the right of the mode, it cannot be assumed to be normally distributed. A family of functions then has to be found which correctly represents the observed skew values of ε. The analyst has to search for a family on a case-by-case basis, trying one family of functions first, then another, until one is found which fits the observed non-symmetric ε-values correctly. This chapter aims to eliminate this time-consuming estimation process. The chapter introduces a family of functions capable of taking any skew or symmetric locus by varying its placement parameters. The family will simplify the effort to correctly measure the densities of ε because the estimation problem is reduced to fitting only one function to the data, whether it is symmetric or skewed.
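
A minimal sketch of the one-family idea, using the skew-normal as a stand-in for the family introduced in the chapter: a single parametric family covers both symmetric and skewed residuals. The regression and the residuals are simulated.

```python
# Sketch: fit one flexible family to regression residuals, symmetric or skewed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, size=300)
y = 2.0 * x + 1.0 + stats.skewnorm.rvs(a=4, scale=2, size=300, random_state=8)

c, d = np.polyfit(x, y, 1)                 # estimate y = c*x + d
resid = y - (c * x + d)                    # observed residuals (epsilon)

a, loc, scale = stats.skewnorm.fit(resid)  # shape a = 0 recovers the Gaussian
print(f"slope c={c:.2f}, intercept d={d:.2f}")
print(f"fitted skew-normal: shape={a:.2f}, loc={loc:.2f}, scale={scale:.2f}")
```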

This is a study of forecasting models that aggregate monthly time series into bimonthly and quarterly models, using the 1,428 seasonal monthly series of the M3 competition of Makridakis and Hibon (2000). These aggregating models are used to answer the question of whether aggregation of monthly time series significantly improves forecast accuracy. Through aggregation, the forecast mean absolute deviations (MADs) and mean absolute percent errors (MAPEs) were found to be statistically significantly lower at the 0.001 level of significance. In addition, the ratio of the forecast MAD to the best forecast model MAD was reduced from 1.066 to 1.0584. While these appear to be modest improvements, a reduction in the MAD affects a forecasting horizon of 18 months for 1,428 time series; thus the absolute deviations of 25,704 forecasts (i.e., 18 × 1,428 series) were reduced. Similar improvements were found for the symmetric MAPE.
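
A minimal sketch of the mechanics only: aggregating a monthly series to quarters, forecasting at each level with a seasonal naive method, and computing MAD and MAPE over an 18-month horizon. The series, the naive forecaster, and the direct comparison of error levels are illustrative assumptions, not the chapter's M3 methodology.

```python
# Sketch: monthly vs. quarterly-aggregated forecasting with MAD/MAPE measures.
import numpy as np

rng = np.random.default_rng(9)
months = 72
series = (100 + 10 * np.sin(2 * np.pi * np.arange(months) / 12)
          + rng.normal(scale=5, size=months))

train, test = series[:-18], series[-18:]              # 18-month horizon

monthly_fc = np.tile(train[-12:], 2)[:18]             # seasonal naive, monthly
quarterly = series.reshape(-1, 3).sum(axis=1)         # aggregate to quarters
q_train, q_test = quarterly[:-6], quarterly[-6:]
quarterly_fc = np.tile(q_train[-4:], 2)[:6]           # seasonal naive, quarterly

mad = lambda a, f: np.mean(np.abs(a - f))
mape = lambda a, f: np.mean(np.abs((a - f) / a)) * 100
print(f"monthly   MAD={mad(test, monthly_fc):.2f}  MAPE={mape(test, monthly_fc):.2f}%")
print(f"quarterly MAD={mad(q_test, quarterly_fc):.2f}  MAPE={mape(q_test, quarterly_fc):.2f}%")
```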

Many economic and business problems require a set of random variates from the posterior density of the unknown parameters. Such a set of random variates can be used to numerically integrate many forms of functions. Since a closed form of the posterior density of time series models is usually not known, it is not easy to generate such a set directly. As a sampling scheme based on probabilities proportional to size over the sample space, the sampling importance resampling (SIR) method can be applied to generate a set of random variates from the posterior density. The application of SIR to a signal extraction model in time series analysis is illustrated, and, given a set of random variates, the procedures to compute the Monte Carlo estimator of the components of the signal extraction model are discussed. The procedures are illustrated with simulated data.
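
A minimal sketch of SIR itself: draw from a tractable proposal, weight by the ratio of the target posterior to the proposal density, then resample with probability proportional to weight. The toy one-parameter posterior here is an assumption standing in for the signal extraction model's posterior.

```python
# Sketch of sampling importance resampling (SIR) for posterior random variates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

def log_posterior(theta):
    """Toy unnormalized posterior, standing in for the intractable target."""
    return stats.norm.logpdf(theta, loc=1.0, scale=0.5)

proposal = stats.t(df=3, loc=0.0, scale=2.0)        # heavy-tailed proposal
draws = proposal.rvs(size=20000, random_state=rng)

log_w = log_posterior(draws) - proposal.logpdf(draws)
w = np.exp(log_w - log_w.max())                     # stabilize before normalizing
w /= w.sum()

# Resample proportional to weight: approximate draws from the posterior.
resampled = rng.choice(draws, size=2000, replace=True, p=w)
print("Monte Carlo posterior mean:", round(resampled.mean(), 3))
print("Monte Carlo posterior std: ", round(resampled.std(), 3))
```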

In this chapter, we analyze donor behavior based on general segmentation bases. In particular, we study the behavior of individual donors supporting higher education. There has been very little research to date that discriminates the behavior of individual donors on the basis of their donation levels; the existing literature is limited to a general treatment of donor behavior using one of the available classical statistical discriminant techniques.

We investigate individual donor behavior using both classical statistical techniques and a mathematical programming formulation. The study entails classifying individual donors based on their donation levels, the response variable, using individuals' income levels, savings, and age as predictor variables. For this study, we use the characteristics of a real dataset to simulate multiple datasets of donors and their characteristics. The results of a simulation experiment show that the weighted linear programming model consistently outperforms standard statistical approaches in attaining lower apparent error rates (APERs) for 100 replications in each of the three correlation cases.
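
A minimal sketch of the LP classification idea: a minimize-sum-of-deviations linear program (an unweighted stand-in for the chapter's weighted formulation) compared with LDA on apparent error rate. The donor groups and the predictor distributions are simulated assumptions.

```python
# Sketch: LP discriminant (minimize sum of deviations) vs. LDA, by APER.
import numpy as np
from scipy.optimize import linprog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(11)
n = 100      # donors per group; predictors: income, savings, age (simulated)
high = rng.multivariate_normal([70, 30, 55], np.eye(3) * 25, size=n)
low = rng.multivariate_normal([55, 15, 40], np.eye(3) * 25, size=n)
X = np.vstack([high, low])
y = np.repeat([1, 0], n)
p, m = X.shape[1], len(X)

# Variables: weights w_1..w_p, cutoff b, then one deviation d_i >= 0 per donor.
c = np.concatenate([np.zeros(p + 1), np.ones(m)])   # minimize total deviation
A_ub = np.zeros((m, p + 1 + m))
for i in range(m):
    s = -1.0 if y[i] == 1 else 1.0    # high donors: w.x - b >= -d_i
    A_ub[i, :p] = s * X[i]            # low donors:  w.x - b <=  d_i
    A_ub[i, p] = -s
    A_ub[i, p + 1 + i] = -1.0
A_eq = np.zeros((1, p + 1 + m))       # normalization ruling out trivial w = 0
A_eq[0, :p] = high.mean(axis=0) - low.mean(axis=0)
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * (p + 1) + [(0, None)] * m)
w, b = res.x[:p], res.x[p]

lp_pred = (X @ w >= b).astype(int)
lda_pred = LinearDiscriminantAnalysis().fit(X, y).predict(X)
print("LP  APER:", np.mean(lp_pred != y))
print("LDA APER:", np.mean(lda_pred != y))
```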

DOI
10.1016/S1477-4070(2008)5
Publication date
2008-04-30
Book series
Advances in Business and Management Forecasting
Editors
Series copyright holder
Emerald Publishing Limited
ISBN
978-0-7623-1478-2
eISBN
978-0-85724-787-2
Book series ISSN
1477-4070