Applications of Artificial Intelligence in Finance and Economics: Volume 19


Table of contents

(13 chapters)

Artificial intelligence is a consortium of data-driven methodologies that includes artificial neural networks, genetic algorithms, fuzzy logic, probabilistic belief networks and machine learning as its components. We have witnessed the phenomenal impact of this data-driven consortium of methodologies in many areas of study, the economic and financial fields being no exception. In particular, this volume of collected works gives examples of its impact on the fields of economics and finance. The volume is the result of a selection of high-quality papers presented at a special session entitled “Applications of Artificial Intelligence in Economics and Finance” at the “2003 International Conference on Artificial Intelligence” (IC-AI ’03), held at the Monte Carlo Resort, Las Vegas, NV, USA, June 23–26, 2003. The special session, organised by Jane Binner, Graham Kendall and Shu-Heng Chen, was intended to draw attention to the tremendous diversity and richness of the applications of artificial intelligence to problems in economics and finance. This volume should appeal to economists interested in adopting an interdisciplinary approach to the study of economic problems, computer scientists looking for potential applications of artificial intelligence, and practitioners looking for new perspectives on how to build models for everyday operations.

In this study, the performance of ordinal GA-based trading strategies is evaluated under six classes of time series model, namely, the linear ARMA model, the bilinear model, the ARCH model, the GARCH model, the threshold model and the chaotic model. The performance criteria employed are the winning probability, accumulated returns, Sharpe ratio and luck coefficient. Asymptotic test statistics for these criteria are derived. The hypothesis that the GA is superior to a benchmark, say buy-and-hold, can then be tested using Monte Carlo simulation. From this rigorously established evaluation process, we find that simple genetic algorithms can work very well in linear stochastic environments and in nonlinear deterministic (chaotic) environments. However, they may perform much worse in purely nonlinear stochastic cases. These results shed light on the superior performance of the GA when it is applied to two tick-by-tick time series of foreign exchange rates: EUR/USD and USD/JPY.
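
As an illustration of the evaluation criteria named above, the following Python sketch computes accumulated returns, a per-period Sharpe ratio and a Monte Carlo winning probability of a strategy against a buy-and-hold benchmark. The function names and the simulated return paths are hypothetical stand-ins for the GA strategy and the benchmark; the chapter's asymptotic test statistics and luck coefficient are not reproduced here.

```python
import numpy as np

def evaluate_strategy(returns, risk_free=0.0):
    """Accumulated return and per-period Sharpe ratio of one return path."""
    acc_return = np.prod(1.0 + returns) - 1.0
    excess = returns - risk_free
    sharpe = excess.mean() / excess.std(ddof=1)
    return acc_return, sharpe

def winning_probability(strategy_paths, benchmark_paths):
    """Fraction of Monte Carlo trials in which the strategy beats the benchmark."""
    wins = sum(
        evaluate_strategy(s)[0] > evaluate_strategy(b)[0]
        for s, b in zip(strategy_paths, benchmark_paths)
    )
    return wins / len(strategy_paths)

# Hypothetical usage: each row is one simulated path of per-period returns.
rng = np.random.default_rng(0)
strat = rng.normal(0.001, 0.01, size=(500, 250))   # stand-in for GA strategy returns
bench = rng.normal(0.0005, 0.01, size=(500, 250))  # stand-in for buy-and-hold returns
print(f"winning probability: {winning_probability(strat, bench):.2f}")
```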

We model international short-term capital flows by identifying technical trading rules in short-term capital markets using Genetic Programming (GP). The simulation results suggest that the international short-term markets were quite efficient during the period 1997–2002, with most GP-generated trading strategies recommending buy-and-hold on one or two assets. The out-of-sample performance of the GP trading strategies varies from year to year. However, many of the strategies are able to forecast downturns in the Taiwan stock market and avoid making futile investments. Investigation of Automatically Defined Functions shows that they confer neither an advantage nor a disadvantage on the GP results.
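
The trading rules GP evolves are program trees built from technical indicators. The hand-written moving-average crossover rule below is only a stand-in for that kind of output, with invented window lengths and a simulated price path; the evolutionary search itself and the Automatically Defined Functions are not shown.

```python
import numpy as np

def moving_average(prices, window):
    return np.convolve(prices, np.ones(window) / window, mode="valid")

def crossover_rule(prices, short=5, long=20):
    """Hold the asset whenever the short moving average exceeds the long one."""
    short_ma = moving_average(prices, short)[-(len(prices) - long + 1):]
    long_ma = moving_average(prices, long)
    return short_ma > long_ma          # True = hold asset, False = hold cash

rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, 300))   # simulated price path
signals = crossover_rule(prices)
print(f"days in market: {signals.sum()} of {signals.size}")
```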

The purpose of this study is to compare the forecasting performance of two non-linear models, a regime-switching vector autoregressive model (RS-VAR) and a recurrent neural network (RNN), with that of a linear benchmark VAR model. Our specific forecasting experiment is U.K. inflation, and we utilize monthly data from 1969 to 2003. The RS-VAR and the RNN perform approximately on par over both monthly and annual forecast horizons. Both non-linear models perform significantly better than the VAR model.
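
For readers unfamiliar with the linear benchmark, the sketch below fits a vector autoregression with statsmodels and produces 12-month-ahead forecasts. The synthetic data, variable names and fixed lag order are assumptions made purely for illustration; the RS-VAR and RNN competitors are not shown.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR   # linear benchmark only

# Hypothetical monthly series standing in for inflation and activity indicators.
rng = np.random.default_rng(2)
data = pd.DataFrame(rng.normal(size=(420, 3)),
                    columns=["inflation", "unemployment", "interest_rate"])

train, test = data.iloc[:-12], data.iloc[-12:]
fit = VAR(train).fit(2)                                # fixed lag order for the sketch
forecast = fit.forecast(train.values[-fit.k_ar:], steps=12)
rmse = np.sqrt(np.mean((forecast[:, 0] - test["inflation"].values) ** 2))
print(f"12-step-ahead inflation RMSE on synthetic data: {rmse:.3f}")
```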

Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high-frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is then undertaken by the ANNs to forecast daily excess stock returns. No ANN architectures were able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high-frequency data.
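
A minimal sketch of the idea of letting a GA search over network architectures, assuming scikit-learn's MLPRegressor as the network and synthetic features in place of high-frequency returns; the encoding, population size and mutation scheme are invented for illustration and are far simpler than the chapter's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 8))                      # stand-in for lagged return features
y = X @ rng.normal(size=8) + rng.normal(0, 0.5, 600)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def random_architecture():
    """A chromosome: the number of hidden layers and units per layer."""
    return tuple(int(rng.integers(2, 32)) for _ in range(rng.integers(1, 3)))

def fitness(arch):
    """Validation MSE of an MLP with the encoded architecture (lower is better)."""
    net = MLPRegressor(hidden_layer_sizes=arch, max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return np.mean((net.predict(X_val) - y_val) ** 2)

# One generation of a toy GA: evaluate, keep the best half, mutate the survivors.
population = [random_architecture() for _ in range(8)]
scored = sorted(population, key=fitness)
survivors = scored[:4]
offspring = [tuple(max(2, u + int(rng.integers(-4, 5))) for u in a) for a in survivors]
print("best architecture so far:", scored[0])
```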

This work applies state-of-the-art artificial intelligence forecasting methods to provide new evidence of the comparative performance of statistically weighted Divisia indices vis-à-vis their simple sum counterparts in a simple inflation forecasting experiment. We develop a new approach that uses co-evolution (using neural networks and evolutionary strategies) as a predictive tool. This approach is simple to implement yet produces results that outperform stand-alone neural network predictions. Results suggest that superior tracking of inflation is possible for models that employ a Divisia M2 measure of money that has been adjusted to incorporate a learning mechanism to allow individuals to gradually alter their perceptions of the increased productivity of money. Divisia measures of money outperform their simple sum counterparts as macroeconomic indicators.
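
As a rough sketch of combining neural networks with evolutionary strategies, the code below uses a simple (mu, lambda) evolution strategy to search the weights of a small feed-forward network mapping lagged money-growth features to inflation. The data, network size and ES parameters are hypothetical, and the chapter's full co-evolutionary scheme and Divisia adjustments are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4))                      # hypothetical lagged money-growth features
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.1, 200)

H = 6                                              # hidden units

def unpack(w):
    """Split a flat weight vector into the two layers of the network."""
    return w[: 4 * H].reshape(4, H), w[4 * H:]

def mse(w):
    W1, W2 = unpack(w)
    return np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

# (mu, lambda) evolution strategy over the network weights.
mu, lam, sigma = 5, 30, 0.1
parents = rng.normal(0, 0.5, size=(mu, 4 * H + H))
for _ in range(100):
    offspring = parents[rng.integers(mu, size=lam)] + sigma * rng.normal(size=(lam, 4 * H + H))
    parents = offspring[np.argsort([mse(w) for w in offspring])[:mu]]
print(f"best in-sample MSE after 100 generations: {mse(parents[0]):.4f}")
```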

This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according to a battery of parametric and non-parametric test statistics that measure their performance in one- and four-step-ahead forecasts of quarterly data. Using genetic-neural fuzzy systems, we find the computational approach superior to some degree and show how to combine both techniques successfully.

Given the recent explosion of interest in streaming data and online algorithms, clustering of time series subsequences has received much attention. In this work we make a surprising claim. Clustering of time series subsequences is completely meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work.
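
The effect is easy to reproduce. The sketch below is a hypothetical, minimal re-creation (not the chapter's experiments): it clusters sliding-window subsequences of two very different series with k-means. Plotting the resulting cluster centres shows similarly smooth shapes for both inputs, illustrating that the extracted clusters reveal little about the underlying data.

```python
import numpy as np
from sklearn.cluster import KMeans

def subsequence_cluster_centers(series, window=64, k=3, seed=0):
    """k-means centres of all z-normalised sliding-window subsequences."""
    subs = np.lib.stride_tricks.sliding_window_view(series, window).astype(float)
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (subs.std(axis=1, keepdims=True) + 1e-9)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(subs).cluster_centers_

rng = np.random.default_rng(5)
walk = np.cumsum(rng.normal(size=2000))            # random-walk "price" series
noise = rng.normal(size=2000)                      # pure white noise

centers_walk = subsequence_cluster_centers(walk)
centers_noise = subsequence_cluster_centers(noise)
# Plot centers_walk and centers_noise side by side to see the similarity.
print(centers_walk.shape, centers_noise.shape)
```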

This study applies VAR and ANN techniques to make ex-post forecasts of U.S. oil price movements. The VAR-based forecast uses three endogenous variables: lagged oil price, lagged oil supply and lagged energy consumption. However, the VAR model suggests that oil supply and energy consumption have only limited impacts on oil price movements. The forecast of the genetic algorithm-based ANN model is made using oil supply, energy consumption, and money supply (M1). Root mean squared error and mean absolute error are used as the evaluation criteria. Our analysis suggests that the GA-based ANN (BPN-GA) model noticeably outperforms the VAR model.
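
The two evaluation criteria are standard; a small sketch follows, with invented forecast numbers used purely for illustration.

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error."""
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

def mae(actual, predicted):
    """Mean absolute error."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

# Hypothetical oil-price forecasts from two competing models (illustrative only).
actual = np.array([28.1, 29.4, 30.2, 27.8, 31.0])
var_forecast = np.array([27.0, 30.1, 29.0, 28.9, 29.5])
bpn_ga_forecast = np.array([28.4, 29.1, 30.5, 27.5, 30.6])
for name, pred in [("VAR", var_forecast), ("BPN-GA", bpn_ga_forecast)]:
    print(f"{name}: RMSE={rmse(actual, pred):.2f}, MAE={mae(actual, pred):.2f}")
```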

Divisia component data are used in the training of an Aggregate Feedforward Neural Network (AFFNN), a general-purpose connectionist system designed to assist with data mining activities. The neural network is able to learn the money-price relationship, defined as the relationship between the rate of growth of the money supply and inflation. The learned relationships are expressed as an automatically generated series of human-readable and machine-executable rules, which are shown to describe inflation meaningfully and accurately in terms of the original values of the Divisia component dataset.

In this paper we present, by means of an application to the problem of house price forecasting, an approach to attribute selection and dependence modelling that utilises the Gamma Test (GT), a non-linear analysis algorithm described herein. The GT is employed in a two-stage process: first, the GT drives a Genetic Algorithm (GA) to select a useful subset of features from a large dataset that we develop from eight economic statistical series of historical measures that may impact upon house price movement. We then generate a predictive model utilising an Artificial Neural Network (ANN) trained to the Mean Squared Error (MSE) estimated by the GT, which accurately forecasts changes in the House Price Index (HPI). We present background to the problem domain and demonstrate, based on the results of this methodology, that the GT was of great utility in facilitating a GA-based approach to extracting a sound predictive model from a large number of inputs in a data-point-sparse real-world application.
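
For readers unfamiliar with the Gamma Test, the sketch below implements its core idea in Python under the usual formulation: for each input point, the squared distances to its k nearest neighbours (delta) are regressed against half the corresponding squared output differences (gamma), and the regression intercept estimates the output noise variance. The dataset is synthetic, and the GA feature-selection and ANN training stages of the chapter's two-stage process are not shown.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def gamma_test(X, y, p=10):
    """Gamma Test noise-variance estimate: intercept of the regression of
    gamma(k) on delta(k) over the k = 1..p nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=p + 1).fit(X)
    dist, idx = nn.kneighbors(X)                      # column 0 is the point itself
    deltas = np.mean(dist[:, 1:] ** 2, axis=0)        # delta(k)
    gammas = np.mean((y[idx[:, 1:]] - y[:, None]) ** 2, axis=0) / 2.0   # gamma(k)
    slope, intercept = np.polyfit(deltas, gammas, 1)
    return intercept                                   # estimated noise variance

# Synthetic check: a smooth function plus noise of known variance 0.04.
rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(1000, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.2, 1000)
print(f"Gamma statistic: {gamma_test(X, y):.3f} (true noise variance 0.04)")
```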

DOI: 10.1016/S0731-9053(2004)19
Publication date: 2004
Book series: Advances in Econometrics
Editors: Jane M. Binner, Graham Kendall, Shu-Heng Chen
Series copyright holder: Emerald Publishing Limited
ISBN: 978-0-76231-150-7
eISBN: 978-1-84950-303-7
Book series ISSN: 0731-9053