Research in Law and Economics: Volume 23

Table of contents

(18 chapters)

These chapters were produced with support from the Daniel J. Evans School of Public Affairs at the University of Washington, the Law School at the University of Washington, the Philosophy Department of the University of Washington, and The John D. and Catherine T. MacArthur Foundation. We thank them for their support.

In the earlier part of the twentieth century, Congress used cost–benefit analysis (CBA), also called benefit–cost analysis, as a vehicle to curtail its wasteful spending, directing the Army Corps of Engineers to examine Congressional projects using CBA. Theodore Porter here examines the rise of CBA in historical context and finds that the Corps was highly successful in reducing wasteful spending. Regardless of the present-day effectiveness of the Corps, CBA currently provides valuable service. To appreciate this, one need look no further than the effect that Arnold Harberger's work and students have had in less developed countries, and at the several hundred useful evaluations of social programs produced over the last several years. Finally, criticisms of Ackerman and Heinzerling notwithstanding, one can look at many of the analyses of environmental programs.

Government agencies have endeavored, with limited success, to improve the methodological consistency of regulatory benefit–cost analysis (BCA). This paper recommends that an independent cohort of economists, policy analysts, and legal scholars take on that task. Independently established “best practices” would have four positive effects: (1) they would render BCAs more regular in form and format and, thus, more readily assessable and replicable by social scientists; (2) improved consistency might marginally reduce political opposition to BCA as a policy tool; (3) politically motivated, inter-agency methodological disputes might be avoided; and (4) an independent set of “best practices” would provide a sound, independent basis for judicial review of agency BCAs.

This article posits that more rigorous enforcement of the constitutional non-delegation doctrine would prevent many of the problems that have been identified with benefit–cost analysis. In particular, rigorous application would prevent administrative agencies from using benefit–cost analysis as a screen for policy decisions the agency otherwise wishes to make. Though the US courts might have some difficulty enforcing this notion, it is possible to do, and it would greatly help the benefit–cost process by relegating it to its proper place in an administrative system.

Benefit–cost analysis took root in the U.S. at the federal level in the 1930s with the use of the method by the Army Corps of Engineers. It now is used widely by government agencies and research organizations. The practice has long been controversial, and it remains so. Some critics find the weaknesses of benefit–cost analysis to be so severe as to warrant abandoning its practice.

Benefit–cost and cost-effectiveness analyses form the core of the microeconomic performance measures by which to evaluate federal programs and policies. The current uses of such measures in the federal government are summarized, and opportunities for their expanded application, sources of generally accepted analytical principles, and ways to improve the consistency and credibility of such measures are discussed.

Elegant multi-market models and intricate discounting methods are difficult, and at times impossible, to use in the real world because the necessary data simply are not available. While there is no perfect substitute for adequate data, there are good ones capable of improving policy decisions. This paper describes one such substitute by way of an example: the designation of critical habitat under the Endangered Species Act for West Coast salmon and steelhead. The example shows how a cost-effectiveness approach can mitigate (to some extent) the effects of poor data on the monetary benefits of regulatory actions.
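The cost-effectiveness logic the chapter describes can be sketched in a few lines: when monetary benefits cannot be estimated, candidate areas can still be ranked by cost per unit of a non-monetized conservation score and designated in that order until a budget is exhausted. The unit names, costs, and scores below are invented for illustration and are not taken from the chapter.

```python
# Hypothetical cost-effectiveness screen: rank candidate habitat units by
# economic cost per point of (non-monetized) biological score, then
# designate the most cost-effective units first. All data are invented.

units = [
    # (name, economic cost in $M, biological score in arbitrary units)
    ("Watershed A", 12.0, 80),
    ("Watershed B", 4.0, 50),
    ("Watershed C", 9.0, 30),
    ("Watershed D", 2.0, 10),
]

def rank_by_cost_effectiveness(units):
    """Sort units by cost per point of biological score (ascending)."""
    return sorted(units, key=lambda u: u[1] / u[2])

def designate(units, budget):
    """Greedily designate the most cost-effective units within a budget."""
    chosen, spent = [], 0.0
    for name, cost, score in rank_by_cost_effectiveness(units):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

chosen, spent = designate(units, budget=15.0)
print(chosen, spent)  # ['Watershed B', 'Watershed D', 'Watershed C'] 15.0
```

Note that the greedy rule skips Watershed A (cheap per point, but too costly to fit the remaining budget), which is the kind of trade-off a cost-effectiveness screen makes visible even without benefit estimates.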

Following the World Trade Organization (WTO) ruling favoring Brazil over U.S. cotton growers, the debate continues over the impact of U.S. farm policy. For U.S. cotton policy, the price impact depends on several factors, including the extent to which the policy is decoupled from production. The impact on world cotton prices under decoupling (where the loan rate drives the supply response) is much smaller than under coupling (where the target price drives producers' production decisions). The welfare impacts also differ sharply: the welfare cost of U.S. cotton policy is much lower under a decoupled program.

The lens used by the courts and much of the antitrust literature on predatory selling and/or buying is based on partial equilibrium methodology. We demonstrate that such methodology is unreliable for assessments of predatory monopoly or monopsony conduct. In contrast to the typical two-stage dynamic analysis involving a predation period followed by a recoupment period, we advance a general equilibrium analysis that demonstrates the critical role of related industries and markets. Substitutability versus complementarity of both inputs and outputs is critical. With either monopolistic or monopsonistic market power (but not both), neither predatory overselling nor predatory overbuying is profitably sustainable. Two-stage predation/recoupment is profitable only with irreversibility in production and cost functions, unlike typical estimated forms from the production economics literature. However, when the market structure admits both monopolistic and monopsonistic behavior, predatory overbuying can be profitably sustainable while overselling cannot. Useful distinctions are drawn between contract and non-contract input markets.

The application of benefit–cost analysis principles by the Federal Aviation Administration (FAA) to a major infrastructure investment proposal – the expansion of Chicago O’Hare International Airport – is analyzed. The City of Chicago is proposing a major physical expansion of O’Hare Airport, which is but one of the alternative solutions to the high level of passenger delays currently experienced. The FAA must approve benefit–cost analyses done by the City in order for the project to be eligible for federal funding. In the course of this process, the City has prepared two alternative benefit–cost studies of the proposed expansion. The analytic framework and empirical approach of both analyses are described, the results summarized, and the methods and estimates critiqued. It is concluded that neither study provides an estimate of net national benefits that meets minimal accepted professional standards. Finally, an overall assessment of the federal government process in considering and approving benefit–cost studies is provided, and suggestions for improving this process are offered.

This paper modifies the “standard” methodology for calculating the economic opportunity cost of foreign exchange (EOCFX), so as to incorporate into its calculation the distortions involved in the act of “sourcing” in the capital market the funds that will be spent by the project. Once we take these “sourcing” distortions into account, we are logically forced to pursue two parallel calculations. The first, EOCFX, traces the results of sourcing money in the capital market and spending it on tradables. The second, the shadow price of nontradables outlays (SPNTO), traces the results of sourcing money in the capital market and spending it on nontradables. Supporting arguments and illustrative calculations are presented in the paper.

The Kaldor–Hicks (KH) criterion has long been the standard for benefit–cost analyses, but it has also been widely criticized as ignoring equity and, arguably, moral sentiments in general. We suggest the use of an aggregate measure (KHM) instead of KH, where M stands for moral sentiments. KHM simply adds to the traditional KH criterion the requirement that all goods for which there is a willingness to pay or accept count as economic goods. This addition, however, runs up against objections to counting moral sentiments in general and non-paternalistic altruism in particular. We show these concerns are unwarranted and suggest that the KHM criterion is superior to KH because it provides better information.
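The arithmetic of the KHM proposal described above can be illustrated with a toy calculation: KHM extends the KH net-benefit test by also counting willingness to pay arising from moral sentiments. All figures below are hypothetical and serve only to show how the two criteria can diverge; they do not come from the chapter.

```python
# Toy comparison of the Kaldor-Hicks (KH) test with the KHM variant
# described above. KHM adds willingness to pay (WTP) for moral sentiments
# (e.g. non-paternalistic altruism) to the conventional benefit tally.
# All dollar figures are invented for illustration.

direct_wtp = 100.0  # WTP for the project's direct benefits ($M)
moral_wtp = 15.0    # WTP arising from moral sentiments ($M)
cost = 110.0        # project cost ($M)

kh_net = direct_wtp - cost               # conventional KH net benefit
khm_net = direct_wtp + moral_wtp - cost  # KHM net benefit

print(kh_net, khm_net)  # -10.0 5.0
```

In this contrived case the project fails the KH test but passes KHM, which is the informational difference the abstract claims makes KHM the superior criterion.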

This paper demonstrates the importance of general equilibrium (GE) feedback effects inside and outside markets for the measurement of the efficiency costs of taxes in a distorted economy. Our specific focus is on the changes in environmental amenities that can result from pollution externalities generated from production activities. Even when amenities are under three percent of virtual income, the error in the GE approximations of the welfare effects of new taxes with pre-existing distortions can increase threefold. The nature of the link between the source of the external effects influencing amenities and the changes in amenity services can alter the conclusions one would make about the merits of an intervention based on benefit–cost analyses.

One of the most contentious issues concerning benefit–cost analyses of environmental and other regulatory programs has been the valuation of reductions in mortality risks. The conceptual basis for most valuation exercises has been the value of a statistical life (VSL). However, despite decades of both theoretical and empirical research on the meaning and measurement of the VSL concept, there is no consensus concerning the validity of the results it produces in actual applications. In this paper, we review the development and application of the VSL approach and then propose what we believe to be a better way to value changes in mortality hazard.

An experiment is undertaken to assess how the level of information provided to survey groups affects the decisions they make. In this experiment, a group of experts is surveyed first to determine both the forms and levels of information important to them regarding an obscure environmental resource (remote mountain lakes), as well as their ranking of particular examples of these resources according to their own criteria. Three different groups of respondents are then given different levels of this information to assess how their willingness to pay (WTP) for the resources responds to varying levels of information, and how their rankings of the different goods alter with the information provided. The study reports evidence that increased levels of information generally produce significant quantitative changes in aggregate WTP (the enhancement effect), as well as a credible impact on the ranking of the various goods. On closer examination, much of the enhancement effect appears to be attributable to the changes in ranking, and to changes in the WTP for a single lake at each level of information. In addition, the rankings do not respond in any consistent or coherent fashion during the experiment until the information provided is complete, including a ranking of subjectively reported importance by the expert group, at which point the survey group converges upon the expert group's rankings. In sum, the experiment generates evidence that is consistent both with the anticipated effects of increased levels of information and with the communication of information-embedded preferences of the expert group. It may not be possible to communicate expert-provided information to survey groups without simultaneously communicating the experts' preferences.

Cost–benefit analysis in its modern form grew up within mid-twentieth-century public agencies such as the Army Corps of Engineers. It was at first a very practical program of economic quantification, practiced by engineers before it drew in economists, and its history is as much a story of bureaucratic technologies as of applied social science. It has aimed throughout at a kind of public rationality, but in a particular, highly impersonal form. The ideal of standardized rules of calculation is adapted to the constrained political situations which generated the demand for this kind of economic analysis.

As commonly pointed out in most instructional and operational manuals, and the benefit–cost and valuation texts on which they are largely based, there is general agreement among economic analysts that the economic values of gains and losses are correctly assessed by two different measures. The value of a gain is appropriately measured by the maximum sum people are willing to pay for it (the so-called WTP measure) – the amount that would leave them indifferent between paying to obtain the improvement and refusing the exchange. The value of a loss is accurately measured by the minimum compensation people demand to accept it (the so-called willingness-to-accept, or WTA, measure) – the sum that would leave them indifferent between being paid to bear the impairment and remaining whole without it.
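The indifference conditions defining WTP and WTA above can be made concrete with a small numerical sketch. Assuming a hypothetical utility function u(y, q) = log(y) + q over income y and environmental quality q (this functional form is an illustrative assumption, not taken from the chapter), WTP for a gain dq solves u(y − WTP, q + dq) = u(y, q), and WTA for the corresponding loss solves u(y + WTA, q − dq) = u(y, q).

```python
# Numerical illustration of the WTP and WTA indifference conditions.
# Hypothetical utility: u(y, q) = log(y) + q, with income y and quality q.
# WTP for a gain dq:  u(y - WTP, q + dq) = u(y, q)
# WTA for a loss dq:  u(y + WTA, q - dq) = u(y, q)

import math

y, dq = 100.0, 0.1  # invented income level and quality change

# log(y - WTP) + dq = log(y)  =>  WTP = y * (1 - e^(-dq))
wtp = y * (1 - math.exp(-dq))
# log(y + WTA) - dq = log(y)  =>  WTA = y * (e^(dq) - 1)
wta = y * (math.exp(dq) - 1)

print(round(wtp, 3), round(wta, 3))  # 9.516 10.517
```

Even in this simple setup WTA exceeds WTP for the same change, consistent with the passage's point that gains and losses are correctly assessed by two different measures.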

DOI
10.1016/S0193-5895(2007)23
Publication date
Book series
Research in Law and Economics
Editor
Series copyright holder
Emerald Publishing Limited
ISBN
978-0-7623-1363-1
eISBN
978-1-84950-455-3
Book series ISSN
0193-5895