Area of Research: The Scientific Enterprise
Simulating the Academy: Toward Understanding Colleges and Universities as Dynamic Systems
See the game that resulted from this research – Virtual-U
Colleges and universities are complex and arcane enterprises. They create and archive fundamental and pragmatic knowledge. They educate our young in preparation for adult life and society’s various endeavors. They interpret and critique culture and influence our world views. We expect these institutions to be all things to all people: generator of inventions for industry, spur for regional economic development, surrogate home for the young, guarantor of good jobs and high incomes, professional developer of those in mid-career, entertainer on Saturday afternoons, equalizer of social opportunity, and political refuge. As important as colleges and universities are to us, however, they are not well understood at a systems level even by those who live and work inside them.
This lack of understanding mattered less when the academy was held in high esteem and resources flowed to it at rates sufficient to maintain internal stability. But times have changed. The gleam of the ivory tower has dulled. A growing number of critics now believe that while educational services are central to America’s successful future, existing colleges and universities are failing to adequately manage their affairs, adapt to changing student needs, and exploit technological possibilities. Internal strife, from heightened competition for scarce resources among a heterogeneous mix of campus constituencies, makes governance increasingly difficult. Leaders both inside and outside the academy need tools that provide greater shared insight and understanding of our institutions of higher learning as dynamic systems. This paper proposes one such tool.
The Need
It is hard to overstate higher education’s importance in America. It is a huge and influential enterprise. Roughly half of all young people enter a higher education institution. About 15 million students are currently enrolled. Faculty number about 900,000. In 1995, spending totaled close to $180 billion.
As a result of mounting difficulties in raising revenues, rising expectations for the role of universities in social and economic development, technologies that extend the ways in which education can be delivered, and shifts in student demographics and graduate labor markets, many academic leaders are seeking to move beyond incremental change and embark on more fundamental restructuring. Debate has in fact spilled beyond the borders of academe into the contents of best-selling books, lawsuits, and other areas on such issues as:
- the fundamental roles of the university (mission, vision)
- the relative emphasis on research, quality of teaching
- the need, quality, and character of basic research
- the appropriateness of applied research on campus
- faculty responsibility and behavior
- (excessive) management and (lack of) leadership
- costs, especially tuition and overhead rates
- the rationale for and length of time to the doctoral degree
- employment terms for the academic workforce
- curriculum content and knowledge structure
- elitism, social stratification, and diversity
- academic standards (e.g., admission policies, attrition rates, grade inflation, sports)
- the rise of foreign student population and links to foreign firms
The desirable response from higher education seems clear enough. From community colleges to research universities, institutions should raise productivity, modernize administrative and support services, and improve accountability while preserving autonomy. Upon identifying priorities, they should recast incentives and allocate resources accordingly. But how to go about achieving this response is less clear. A major obstacle is that key stakeholders appear not to recognize or accept facts about how colleges and universities work. They do not view the institution as a system or internalize the linkages between cause and effect. The resulting gaps in knowledge and credibility form major barriers to experimentation and reform. For example, seemingly logical proposals to close marginal departments and redeploy their faculty are vigorously fought. Conversely, incentive programs for early retirement are readily accepted even though they may generate unintended and undesired consequences (e.g., those who leave may be among the institution’s more productive faculty since they are more likely to have compelling career alternatives).
A large share of the problem owes to the fact that universities are complex, both to understand and to manage. Considering their functions and interactions with government, industry, and society in general, we can hardly be surprised. Universities are systems with many independent parts and interactive processes. Outcomes frequently depend on powerful but obscure second-order effects. One example is how the expansion of graduate education created a market for low-cost provision of labor for research and teaching, which now strongly influences the size of graduate programs and admissions, including the admissions of foreign students.
Some of the complexity in managing the system stems from lack of agreement or clarity among the various stakeholders about purposes, measures of performance, and productivity. Furthermore, the professional workforce and relatively flat organizational structure limit the exercise of direct management control, leaving institutional leaders to reconcile conflicting objectives as each stakeholder presses his or her own agenda. Stakeholders often attend only to their own values and needs, not stopping to see their institution in broad perspective. The decision-making process becomes volatile when emotionally charged issues such as tenure, academic freedom, and diversity are perceived to be at stake. Choices ultimately made may not be congruent with the institution’s long-run interest.
Most academic leaders come to their jobs lacking deep experience about economic and management matters. Intelligence and motivation can offset their inexperience to some extent, but organizational complexity limits the offset. The difficulty of achieving a comprehensive view also applies to trustees, faculty, staff, students, and others involved in university decision-making. Trustees are typically grounded in a business or professional field but often lack recent first-hand exposure to the higher education environment. Faculty rarely view their institution holistically, and the same is true for students, alumni, and other stakeholders.
Higher education needs innovative devices that help institutional leaders focus their thoughts, and then communicate with stakeholders without appearing manipulative or quickly raising defense mechanisms. Traditional devices such as conferences, commissions, and editorials help, but people rarely internalize complex scenarios by passively receiving information. A program to understand the college/university as a (complex) system, synthesized in a leadership strategy simulation game, can provide people at several levels with an opportunity to deepen their understanding of how colleges and universities work, motivating and engaging them without imposing the difficulties and risks that come with real life.
People concerned with higher education need to understand the decision-making process of the major actors–administration, faculty, students, and other internal and external stakeholders–and how these processes interact. Modeling the behavior of various subsystems within the university, their interactions, and the influence of external forces upon them can contribute to such an understanding. The parts of the system to be analyzed and modeled in detail will depend on what are considered the most important issues. Since controllability of the entire system is of paramount concern, priority would attach to building a model of the behavior of the entire complex system–yielding an intriguing, though necessarily rough, view of the whole.
A Leadership Strategy Game: “SimU”
The development of a college/university simulator, or “SimU,” would draw on three streams of activity: (a) management games now widely used in management education (e.g., MIT and Carnegie Mellon distribute corporate management games popular in business schools); (b) special-purpose simulations developed to meet educational objectives in enterprises of various kinds (e.g., military battlefield simulations, nuclear power plant operations simulations); and (c) more purposely entertaining simulation games developed for a broader commercial market. Games such as SimEarth and SimCity (both developed by Maxis Software, the latter with sales of 2.5 million units) have proved to enlighten as well as amuse. Although formal evaluation of their effectiveness is hard to obtain, their enthusiastic adoption in school and university courses suggests their educational value. More explicitly serious simulations in this genre such as SimHealth (developed by Thinking Tools, Inc., to explore the reorganization of the US health care system implied by the Clinton health care reform proposals) have also had reasonable commercial success (sales of tens of thousands of units).
The games use continuous computation and constantly changing color graphics as well as sound to sustain user interest. Individual players “play against the model.” SimCity, for example, confronts a single player with zoning, infrastructure, transport, security, and fiscal issues played out over a sweep of time sufficient for long-term effects to become apparent. Efforts at developing multiuser online simulation games (e.g., the Internet-based “President ‘96” and “Reinventing America,” developed by Crossover Technologies under grants from the Markle Foundation to simulate the political and policymaking processes) are also now attracting considerable interest.
The authors believe that existing research and data are sufficient to build a simulation, both educational and entertaining, that will allow users to grapple with issues such as:
- strategic positioning of the institution
- academic performance and faculty morale
- administrative and support service performance and staff morale
- incentives and rewards
- goals and perceptions of students, parents, donors, research sponsors, community members, employers, and government
- comparative performance with respect to similar institutions
- tuition, financial aid, and overhead rates
- financial performance, including capital assets and liabilities (e.g., endowment, physical assets, and deferred liabilities)
The target market for such a simulation would be, broadly, anyone with an interest in how colleges and universities work as systems and, more specifically:
- higher education administrators
- faculty, especially those in leadership roles (e.g., department chairs)
- trustees
- education analysts, writers, and policymakers
- students of higher education
- alumni and the interested public
How SimU Might Work
One of the biggest challenges in building the simulation will be to develop a successful user interface. It should be highly graphical and easy to understand and use. It should draw users to a depth sufficient for meaningful learning while at the same time maintaining interest, pace, and playability. Users report playing SimCity many times to experience its wide variety of different scenarios, exogenous events, and patterns of consequences. SimU should elicit a similar degree of interest.
SimU might open by inviting the user (you) to choose among institutional types and control. Do you want to lead a private research university, a public comprehensive, a private liberal arts college? In what year would you like to begin play? What would you like to name it? You might choose to “grow your own” generic institution or load one of a handful of pre-scripted scenarios that present a specific institution in the throes of a specific dilemma based on actual case studies. The scenarios would define “victory goals” that you must achieve to “win.” (In regular game mode, you will be free to define what success means yourself as you hone your own goals over time.) Versions of the game might be developed that would allow tailoring to match more closely your specific real-life institution (e.g., through the loading of custom data sets).
Play might open with a panoramic view of the campus: a map with icons representing various organizational units and functions that then segues to a close-up of “Old Main,” your administrative headquarters. Double-clicking on it reveals the interior of an office (yours) complete with desk, file drawers, computer (for e-mail and information display), perhaps a door to a conference room in which meetings could be conducted and a window overlooking the campus that reveals significant changes in various aspects of the campus environment (e.g., dilapidated buildings if maintenance is deferred for too long, fewer students milling around if enrollment declines substantially, hostile faculty if you have not recently appeased them). You, by the way, are the president/senior administrator of this institution and have been blessed (burdened?) with an uncannily high degree of omnipotence.
Clickable icons to the side of the screen could represent schools and departments, offices for managing various functions (e.g., admissions, fund raising, buildings and grounds department), athletic facilities, dormitories, etc. Clicking on an icon would provide information about and/or encounters with the people or activity–analogous to searching out reports and management by walking around. Figure 1 provides a sampling of the kinds of activities and reports that might be included in the simulation.
Figure 1. Sample Activities, Decisions, and Reports

Operating Units
- Academic departments
- Student services and student life
- Admissions and financial aid
- Institutional advancement
- Alumni relations and public affairs
- Libraries
- Information technology support
- Intercollegiate athletics
- Finance and administration
- Plant operations and maintenance
- Dormitories and food service

Financial Decisions & Reports
- Operating and capital budgets
- Tuition rate and financial aid policy
- Research overhead rate
- Sources and uses of operating funds
- Faculty and staff salaries
- Faculty early retirement buyouts
- Endowment asset allocation and investment return
- Debt issuance and retirement
- Balance sheet
- Operating surplus/deficit

Other Actions & Reports
- Admissions selectivity and yield
- Enrollment by degree level and major
- Attainment rates & times to degree
- Course offerings
- Teaching method mix
- Course availability
- Class size distributions
- Teaching loads
- Sponsored research volume
- Publications record
- Faculty awards, prizes, etc.
- Popular prestige ratings
- Academic prestige ratings
- Faculty age distributions
- Faculty hiring & retention
- Staff additions and layoffs
- Staff turnover rates
At this point you might proceed in one of two ways. You might provide a set of presidential goals: in effect, a “platform” that calls out the priority you attach to various stakeholders and outcomes. Or, you might begin by playing with the goals programmed into your chosen scenario. These goals would influence certain aspects of simulated behavior. Moreover, they would provide the institutional performance benchmarks needed to define what it means to “win” the game. Alternatively, you might choose to simply explore the simulated world. Rather than trying to “win,” you would be occupied with observing the intuitive and sometimes surprisingly counterintuitive consequences of various inputs you and others (driven by the underlying game engine) make over time.
Semester by semester, time passes as you observe (and seek to modify) outcomes like faculty gains and losses, shifts in applicant pool and graduation rates, growth or decline in externally sponsored research, crumbling buildings and infrastructure, and accreting or eroding financial health. Conditions permitting, you might raise or borrow money and construct new facilities. You may at any time review data in your office or by walking around, call meetings, or change certain policies. (Nothing will happen at the interface while you are engaged in one of these activities, but computations will continue in the background.)
You might visit a particular department and perhaps try to influence faculty behavior: e.g., numbers and types of courses, teaching loads, submission of proposals for sponsored research, involvement with students outside of class. Such efforts might or might not be successful, depending on the institution’s incentive-reward environment (which would stem in part from your own prior decisions) and other circumstances. Even if successful, they might exact a price in faculty morale–however, the opportunity to exert influence would allow you to seek changes at the academic working level that might otherwise appear out of reach.
Three kinds of pre-programmed events punctuate the passage of time:
- Scheduled events marking milestones or providing periodic information for which no response is required: e.g., quarterly and fiscal-year financial reports, key athletic outcomes, admission of the next freshman class, commencement. Simulated time continues.
- Scheduled events for which a response is required: e.g., submission of the annual tuition recommendation, the operating budget, and the capital budget; and the Board’s annual presidential performance evaluation and your acceptance or disagreement with it. Simulated time halts while you prepare the budget or react to the performance evaluation.
- Unscheduled events arising exogenously or because of some condition within the simulation: e.g., a stock-market crash, a dean or professor pleads a case or airs a grievance, a Faculty Senate action or student protest, a fire or a safety problem. Simulated time may continue or halt depending on the event.
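The three event types above amount to a simple event loop over simulated semesters. The sketch below is illustrative only; the class names, the semester-based clock, the sample endowment figure, and the sample events are our own assumptions, not part of any actual SimU implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Event:
    name: str
    semester: Optional[int]   # scheduled semester, or None for unscheduled events
    halts_time: bool          # True: simulated time pauses until the player responds
    trigger: Optional[Callable[["State"], bool]] = None  # firing condition, if unscheduled

@dataclass
class State:
    semester: int = 0
    endowment: float = 500.0  # $M; an arbitrary illustrative figure
    log: List[Tuple[int, str]] = field(default_factory=list)

def run(state: State, events: List[Event], n_semesters: int) -> State:
    """Advance simulated time semester by semester, firing events as they come due."""
    for _ in range(n_semesters):
        state.semester += 1
        for ev in events:
            due = (ev.semester == state.semester or
                   (ev.semester is None and ev.trigger is not None and ev.trigger(state)))
            if due:
                # A real interface would pause here when ev.halts_time is True;
                # this sketch only records what fired and when.
                state.log.append((state.semester, ev.name))
    return state
```

A scheduled budget submission and a condition-triggered shock would then be declared as two `Event` instances differing only in whether `semester` or `trigger` is set.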
By combining these events with the user-initiated ones, the SimU program should be able to provide a simulation that is sufficiently rich to realistically represent the essentials of university leadership and capture user interest. Most of the databases needed to specify the model already exist, and a growing number of research findings are available. Indeed, pulling together the information needed to build the model will be a valuable exercise in its own right. A companion handbook and strategy guide could provide background, help focus play, and draw out lessons contained in the simulation.
SimU Actors
We have already described how you (as institutional leader) might interact with the SimU simulation. But in a university, the administration’s word is not exactly law. Much of what happens in SimU would result from the actions of various constituencies–simulated actors and stakeholders that operate inside or outside the university. Your actions would influence constituency behavior, but not control it.
A list of potential constituencies follows. Some are depicted as individuals while others represent aggregations of individuals. Some would appear in the simulations for all institutional types; others would apply to one or two types only. At this point we do not know how practical it will be to include all the following constituencies in the game’s initial version.
Internal constituencies
- The governing board might provide financial oversight and offer evaluations of presidential performance.
- School deans might be simulated as independent agents who have their own objectives and set policy.
- Faculty in each department might be simulated as a set of cohorts with age-rank characteristics and probabilities for promotion, departure, and retirement.
- Department chairs might decide about course offerings, teaching loads, and research emphasis–in effect, representing the aggregate view of departmental faculty.
- The faculty senate might represent the view of the faculty taken as a whole.
- Students might be simulated in terms of admission cohorts, degree levels, and majors, each with course-taking, graduation rate, satisfaction, and similar characteristics.
- The student senate might represent aggregate student views.
- Non-academic operating units–e.g., support services, administration, operations and maintenance–might be described by production functions relating the quantity and quality of outputs to budget allocations (see Figure 1 for a more complete list).
- The non-faculty workforce (staff) might be portrayed as a small number of groups–e.g., professional and administrative, clerical, operations and maintenance–whose numbers would grow or ebb according to budget allocations; staff morale and efficiency might depend on workload in relation to numbers, and on compensation level.
- Prospective students might be simulated in terms of application and matriculation rates by market segment; “market research” data might be used to convey attitudes and predict behavior.
- Research sponsors might be simulated on a discipline-by-discipline basis, with each discipline characterized by the level of total funding and the intensity of competition.
- Alumni and potential donors might be simulated according to their interest in one or another department or in the whole institution; gift-giving might depend on department/institution performance and prestige.
- The media might be simulated as a single constituency, with media actions being illustrated with newspaper clippings or television stories.
- Public opinion also might be simulated as a single constituency; public opinion might drive regulatory decisions, and influence state funding decisions in game sessions dealing with public institutions.
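As one illustration of how a constituency could be modeled, the faculty-cohort idea above might be sketched as an expected-value update of headcounts across ranks. The rank names, rates, and deterministic (non-stochastic) form are all illustrative assumptions:

```python
# Hypothetical ranks, junior to senior; all names and rates are illustrative.
RANKS = ["assistant", "associate", "full"]

def step_cohorts(cohorts, p_promote, p_depart, p_retire, new_hires=0.0):
    """One-year expected-value update of faculty headcounts by rank.

    cohorts: dict rank -> headcount; p_*: dict rank -> annual rate.
    Fractional results represent expected (average) headcounts.
    """
    nxt = {r: 0.0 for r in RANKS}
    for i, rank in enumerate(RANKS):
        n = cohorts[rank]
        leave = (p_promote.get(rank, 0.0) + p_depart.get(rank, 0.0) +
                 p_retire.get(rank, 0.0))
        nxt[rank] += n * (1.0 - leave)          # those who remain in rank
        if rank != RANKS[-1]:
            nxt[RANKS[i + 1]] += n * p_promote.get(rank, 0.0)  # promotions move up one rank
    nxt[RANKS[0]] += new_hires                  # hiring enters at the junior rank
    return nxt
```

A fuller model would draw promotions and departures stochastically and track age within rank; the expected-value form keeps the flow bookkeeping (hires in, departures and retirements out, promotions between ranks) visible.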
Exogenous factors also might affect SimU’s fortunes. Economy-wide inflation and family income growth might drive up costs and mediate the effect of tuition increases on admissions yield and public opinion. Demographics might affect student demand. Governmental funding decisions might drive sponsored research and, for public institutions, state appropriations. A natural disaster might disrupt campus operations. Technological change might restructure cost functions and engender new competition that challenges market shares in education or research. Presidential actions throughout the simulation would determine how well the institution weathers the storms and captures the opportunities.
Issues to be Addressed
SimU would address at least four kinds of issues. The interaction of player decisions with data and response functions built into the model would determine how a college or university evolves and whether the president’s goals are achieved. Gaining insights about these issues and learning to analyze them in systems terms would constitute one of SimU’s most important benefits. The issues include:
- Capital investment vs. spending for current operations (spending vs. saving): policies governing financial capital (endowment, reserves), physical capital (facilities, equipment), and spending for operations. Most institutions bias decisions toward spending for current purposes, especially salaries. The simulation would address the consequences of such imbalances.
- Operating budget allocations: decisions to spend more on one field or activity than another; determination of cross-subsidies between fields and activities. Spending on certain fields may be seen as more or less consistent with the school’s mission, and fields will vary in their ability to generate enrollments and sponsored research dollars. Spending on academic support services may improve educational and research quality and competitiveness, institutional-support investments (e.g. G&A and O&M) may improve infrastructure and efficiency, institutional-advancement investments may increase giving levels, and so on.
- Transactions with customers and stakeholders: student applications, admissions, and yields; sponsored research funding; gift acquisition; and, for public institutions, state appropriations. Outcomes may be affected by quality and prestige, net prices (e.g., tuition minus average financial aid, the effective research overhead rate), and “marketing” expenditures (e.g., for admissions and institutional advancement), as well as uncontrollable factors.
- Academic department actions, which produce the institution’s instruction and research outputs. The range of simulated action might include: the profile of courses as represented by teaching method mix (lectures, seminars, labs), course level, and degree of specialization; faculty teaching loads; and the degree of emphasis placed on research. Considerable attention would be placed on departmental actions because such actions constitute the central focus of academic production.
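The conservation constraint implicit in the operating-budget issue above (money allocated to one activity is, by construction, unavailable to any other) and the diminishing returns of support spending might be sketched as follows. The share values, the power-law response, and its elasticity parameter are purely illustrative assumptions:

```python
def allocate(total, shares):
    """Split a fixed operating budget across activities.

    Conservation: the shares must sum to 1, so funds given to one
    activity cannot simultaneously be given to another.
    """
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "allocations must exhaust the budget"
    return {activity: total * s for activity, s in shares.items()}

def quality_response(spending, baseline, elasticity=0.3):
    """Concave (diminishing-returns) quality index for a support service.

    Doubling spending raises quality by a factor of 2**elasticity, not 2;
    the power-law form and the elasticity value are illustrative.
    """
    return (spending / baseline) ** elasticity
```

Cross-subsidies then fall out naturally: raising one activity's share forces another's down, and the concave response means the quality gained is smaller than the quality forgone once an activity is already well funded.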
Success Criteria and Performance Measures
Both game designers and players will have to address basic questions dealing with the university’s or college’s fundamental mission. Should the mission stress the preservation and transmission of knowledge (teaching) or the generation of knowledge (research)? Should the mission cater to the few or virtually everybody (the elite or the mainstream)? Should the institution focus its mission or should it try to serve a broad set of constituencies?
The SimU simulation would be rich enough to permit a large number of performance measures to be reported, but it would not dictate what players should pay attention to. Indeed, much of the data available as a byproduct of the behavioral simulations would not be displayed unless the player searches it out by clicking on the appropriate icons. While certain success criteria would be defined–either by the player or as part of the chosen scenario–the player will retain a great deal of latitude.
Much of SimU’s value will come from discussions about values, performance measures, and the functions programmed into the game. These discussions would be stimulated but not brought to closure by the software and supporting data. The players themselves would have to supply the missing pieces, but the game would supply two crucial elements.
- First, the game would provide a specific set of stimuli for discussion–a context within which to explore one’s own values and understandings and, depending on the circumstances, to compare them to those of one’s colleagues.
- Second, the game would enforce the disciplines of conservation and causality. Money allocated to one priority is not available for another. Actions and failures to act have consequences that must be considered when trying to satisfy one or another constituency. All constituencies cannot be satisfied to the full extent of their desires, especially when exogenous forces infringe on the institution’s market power or freedom of action.
These are important lessons in their own right, and their application in the context of discussions about values would add yet another important benefit. Without consideration of conservation and causality, discussions about values become unbounded, and the university is urged once again to be all things to all people.
Conclusion
The motivation for using SimU is to understand better how a university works. What performance measures should be considered? How do decisions made by the administration, the faculty, and other agents affect the performance measures? Why can’t the university simultaneously maximize the agendas of all its stakeholders? Some participants will challenge the theories used in the simulation, but the very act of challenging requires the formulation of an alternative hypothesis–which can be analyzed and compared with assumptions and data used in the model. SimU also should be fun to play, since learning depends on engagement and engagement will be stimulated and sustained if the activity is intrinsically interesting.
Faculty, staff, students, and trustees must develop more coherent and realistic perspectives about their institutions. Working with a simulation game can build experience and broaden perspectives. Gaming can help all stakeholders understand issues at the level of the institution–and from viewpoints of other stakeholders–and see the issues through less parochial eyes. Even experienced managers find that playing a sophisticated game expands their horizons and motivates broader discussion of management issues.
Ultimately, we would hope that development of SimU would bring three benefits:
- New knowledge: advances in fundamental understanding of how a university functions will come from facing for the first time the challenge of modeling the whole of a university.
- Education of a broad group of stakeholders: given a reasonably sound simulator, a large number of stakeholders, numbering in the tens of thousands, may significantly enhance their understanding of the university as a system by “playing the game.”
- Development of new management tools for universities: while SimU would be generic, it could prove the concept of university simulators and stimulate the subsequent development of more detailed, realistic simulators appropriate for specific institutions or classes of institutions.
The authors understand the difficulty of considering the university as a complex system. But because complexity lies at the heart of the university’s current problems, we feel it is important to address the issue head-on. Even the limited models that are practical using today’s knowledge can begin to capture the dynamics and the interactions of the parts. At a minimum, they can help organize the data that will be needed to simulate a university in finer grain, and they will lead to better definition of parameters, variables, and outcomes. But the real payoff–achievable, we believe, with today’s technology–will be to move higher education’s many constituencies toward more shared understanding of how the academy works.
Notes
1 This paper was motivated in part by a session on simulation and gaming, led by John Hiles of Thinking Tools, Inc., at last fall’s Stanford Forum for Higher Education Futures (Annapolis, November 1995).
Verification of International Environmental Agreements
This paper was first published in 1992 in the Annual Review of Energy and Environment, https://www.annualreviews.org. The paper posted here was scanned and re-typeset in HTML. Every effort was made to minimize errors. Please email us at phe@mail.rockefeller.edu if you would like a hard copy of the original paper.
Abbreviations used: BWU, blue whale unit; CEMS, continuous emissions monitoring systems; CFCs, chlorofluorocarbons; CFE, Treaty on Conventional Armed Forces in Europe; CITES, Convention on International Trade in Endangered Species; CTB, comprehensive test ban; EC, European Community; ECE, United Nations Economic Commission for Europe; EEZ, Exclusive Economic Zone; EMEP, Cooperative Programme for Monitoring and Evaluation of the Long-Range Transmission of Air Pollutants in Europe; GAO, General Accounting Office (U.S. Congress); IAEA, International Atomic Energy Agency; ICES, International Council for the Exploration of the Seas; IMCO, Inter-governmental Maritime Consultative Organization (IMO after 1981); IMO, International Maritime Organization (IMCO before 1981); INF, Intermediate Nuclear Forces Treaty; IOS, International Observer System; IUCN, International Union for the Conservation of Nature (recently renamed the World Conservation Union); IWC, International Whaling Commission; LRTAP, Convention on Long Range Transboundary Air Pollution; LTB(T), Limited Test Ban (Treaty); MARPOL, Convention for the Prevention of Pollution from Ships; MSY, maximum sustainable yield; NAAQS, National Ambient Air Quality Standards (U.S.); NGOs, nongovernmental organizations; NEAFC, Northeast Atlantic Fisheries Commission; NPT, Nuclear Non-proliferation Treaty; NTM, national technical means; OECD, Organization for Economic Cooperation and Development; OSHA, Occupational Safety and Health Administration (U.S.); OSI, on-site inspection; OSIA, On-site Inspection Agency (U.S.); SALT, Strategic Arms Limitation Talks; TAC, total allowable catch; UNEP, United Nations Environment Programme.
INTRODUCTION
Problems and opportunities frequently cross national borders. Informal and formal international arrangements–loosely termed "regimes," defined here as systems of rule or government that have widespread influence–exist for the collective management of such transboundary issues. Regimes are pervasive; their number and extent have grown markedly in the 20th century, especially since the Second World War.
Students of the international system study the conditions under which regimes are formed and the factors that contribute to their success. These include the distribution of power among states, the nature of the issue, its linkages to other issues, the roles and functions of international organizations, the processes of bargaining and rule-making, and the influence of domestic politics (1-3). Scholars also theorize how regimes are maintained and changed (4-6).
In the past two decades students of international cooperation have increasingly applied their tools to issues of the environment and natural resources (7-9). A few studies have critically assessed international cooperation for transboundary environmental protection and drawn tentative conclusions on factors that lead to effective international regimes (8, 10-12). Studies of local management of common natural resources also yield relevant lessons for international environmental cooperation (13).
For several reasons, assessing the effectiveness of international environmental agreements requires study of how compliance is verified. International agreements that are verifiable are more likely to succeed in both negotiation and implementation. The process of verification builds confidence in existing formal and informal agreements, thus improving the prospects for future cooperation and compliance. Verification activities produce information that can provide the technical basis for future agreements and shared understanding. Such information also can provide the basis for sanctions, which depend upon timely, legitimate, and accurate information. Information from verification activities helps to assess how effectively a regime has met its goals and whether changes in the regime are needed to improve effectiveness. By increasing transparency–the extent to which behavior and violations are visible to others–verification can help build norms that influence behavior and contribute to regime effectiveness.
These propositions have been examined extensively for arms control (e.g. Refs. 14-16), but less so for other issues, including protection and management of the natural environment. This paper is a review of the functions, concepts, and theories related to verification of international environmental agreements. Other useful reviews that have come to our attention are Fischer's study of the verification provisions in 13 international environmental treaties, particularly as they relate to a global warming convention (17, 18), and the U.S. General Accounting Office's (GAO) evaluation of reporting and monitoring under 8 major international environmental treaties (19).
MOTIVATION AND OUTLINE
This review is designed to address the question of whether verification is a topic deserving more social concern and research. Our approach is organized around four smaller questions. First, based on existing international environmental regimes, how is verification conducted and what are the relevant concepts? Second, how is verification conducted under domestic environmental law? Domestic experience is important both because there is extensive study of how domestic compliance with pollution laws is verified and because international agreements are typically implemented by domestic institutions. Third, can major social science perspectives explain the demand for and character of verification that is observed in existing regimes? And do those perspectives explain the differences between arms control verification and environmental verification? Fourth, what do the answers to these questions suggest for prospective regimes, such as those to control global climate change, preserve biodiversity, and limit deforestation?
The paper addresses these questions seriatim. To illustrate the arguments, we first describe nine international environmental regimes. For each we provide a summary of the problem, a synopsis of the main legal agreements and approach to solving the problem, and an assessment (where possible) of compliance with the agreement(s). Second, we describe the functions and concepts related to verification of international environmental agreements. Third, we review domestic experience with compliance and enforcement of environmental laws, primarily in the United States, and offer some comparisons of that setting with the international one. Fourth, we employ several theoretical perspectives to explore the patterns of verification observed in the nine cases and to explain the differences between environmental and arms control verification. In conclusion we apply some of these findings to prospective agreements. For the reader unaware of the related arms control literature, a brief review is provided in an appendix.
DEFINITIONS
We distinguish five terms. Monitoring is the process of acquiring the information used to facilitate decision-making and implementation of the agreement. Compliance is the adherence to some formal or informal commitment. Verification is the process of determining whether or not a party is in compliance. Enforcement is the suite of sanctions and incentives used to induce compliance. ("Verification regime" has been used to mean all of the above, especially in the arms control literature; we avoid it because of its imprecision.) Implementation is the process of putting in place laws, activities, and institutions to meet the obligations of an agreement. This paper focuses on monitoring, compliance, and verification, though enforcement and implementation are mentioned.
There are two caveats. The discussion relies heavily on US scholarship, especially in the domestic context. And the literature reviewed bears mostly indirectly on the verification of international environmental agreements; little has been written directly on the topic.
INTERNATIONAL ENVIRONMENTAL PROTECTION: NINE CASES
More than 100 formal international agreements to protect the environment exist (20); of these, most are in force. To illustrate how verification is practiced in these cases, we survey nine regimes for international environmental protection, some of them encompassing more than one formal agreement (Table 1). These cut across four types of environmental protection: atmospheric, oceanic, management of natural resources, and preservation of natural resources. Both global and regional agreements are represented.
Atmospheric Cases
LIMITED NUCLEAR TEST BAN Many reinforcing events in the mid-1950s led to concern about radioactive fallout from atmospheric testing of nuclear weapons. The public feared the health effects of fallout; radioactive elements were, for example, measurable in milk. The test ban also became a cause of the nuclear disarmament movement (and still is). Though primarily an arms control issue, the case is included here because of the role that health effects played in forcing the agreement.
In 1958 a US-USSR-UK Conference of Experts proposed an international monitoring system for verification of a comprehensive test ban (space, underwater, atmospheric, and underground). The issue preventing agreement was the detectability of underground explosions, since detection in the atmosphere, underwater, and in outer space was relatively easy. Through the early 1960s the Conference of Experts met and negotiated the terms of a verification system, presenting proposals with different degrees of cost and intrusiveness and responding to challenges that the verification systems they designed could be evaded. In addition to direct negotiations, both the United States and the Soviet Union attempted to sway world opinion through a series of short-lived unilateral test bans. The Cuban Missile Crisis (1962) focused attention on arms control, as did continued fears of health effects from large atmospheric nuclear tests (21).
Table 1. Summary of the nine cases (scanned JPEG)
In the early 1960s two proposals existed: one for a comprehensive test ban (CTB) and the other for a limited test ban (LTB) to ban tests everywhere except underground. A Limited Test Ban Treaty (LTBT) resulted when the United States and the Soviet Union could not agree on an acceptable number of annual on-site inspections for verifying compliance with a CTB. Compliance with the LTB has been perfect; both sides easily moved their weapons development programs underground. There have been infractions due to venting–the accidental escape of radioactive gases from underground tests–but both sides see these as minor issues. By all measurements, ambient concentrations of radioactive elements from weapons testing have declined markedly since the LTB went into effect.
ACID RAIN IN EUROPE From the late 1960s the Scandinavian countries have claimed that the acidity of their rain was increasing, that it was caused by European emissions upwind, and that the acidity was damaging Scandinavian lakes (49). Beginning in 1972 the Organization for Economic Cooperation and Development (OECD) conducted a study of long-range transport of air pollutants to assess such claims. That program was given independent status in 1978 as the Cooperative Programme for Monitoring and Evaluation of the Long-Range Transmission of Air Pollutants in Europe (EMEP). EMEP now consists of a network to monitor the chemical composition of rain (including acidity) and three international centers to analyze that and other data (24).
In parallel, at the level of high politics and quite disconnected from the OECD/EMEP activities, the Soviet Union pushed at the 1975 Helsinki Conference on Security and Cooperation in Europe for some forum to continue the east-west dialogue begun during the detente of the early 1970s. The topic chosen was the environment, and the U.N. Economic Commission for Europe (ECE) was chosen as the forum for negotiation because its membership included all relevant parties (including the United States and Canada) and it had the needed organizational infrastructure for negotiating a treaty. The negotiations' first formal product was the Convention on Long-Range Transboundary Air Pollution (LRTAP), signed in 1979 (26). Almost all states in Europe have joined LRTAP. The main achievement has been to strengthen understanding of the links between acid-causing emissions, long-range transport, and damage to health, property, and ecosystems. Few parties accepted these arguments in the 1960s and 1970s when they were first made by the Scandinavians; now, all do (24).
Three protocols to LRTAP form the substance of the agreement. The first (1984) funds the EMEP monitoring network, thus formally bringing it (and its scientific products) into the LRTAP process. The second (1985) calls for a 30% reduction in emissions of sulfur dioxide, the leading cause of acidification; not all countries have joined the sulfur protocol. A third protocol, on emissions of nitrogen oxides (NOx), was signed in 1988, also without full participation. A fourth protocol, on volatile organic compounds (which are precursors to the formation of tropospheric ozone, a health hazard), was signed in 1991 but is not yet in force. In parallel with LRTAP activities, the European Community (EC) has issued directives to control some sources of acid-causing pollutants within EC countries (25; 27, parts III and IV).
Compliance with LRTAP and its protocols has been quite high, at least among industrialized countries; many countries that signed the sulfur protocol have substantially overcomplied, suggesting states would have made these reductions on their own. Indeed, the downward trend in sulfur emissions began in the early 1980s, before the sulfur protocol was negotiated. The pattern of signing the protocol only if the state was going to make the cuts anyway is evident in the NOx protocol as well (25). On the surface, this suggests that the LRTAP convention and its protocols have not been effective in gaining emissions control beyond what would have happened anyway; however, the treaties may have helped to deal with free rider problems and probably provided a helpful public forum within which environmental nongovernmental organizations (NGOs) pressured governments to impose stricter emissions controls (24).
STRATOSPHERIC OZONE DEPLETION Concern that chlorofluorocarbons (CFCs) might deplete the ozone layer, causing skin cancer and other health and ecological effects, dates to 1974. Understanding of the problem changed significantly with detection of the Antarctic ozone “hole” in 1985 and subsequent studies to explain it. Despite these major changes, the hypothesized link between CFCs (and other halocarbons) and ozone depletion has been substantially confirmed (50).
In the 1970s the United States, Canada, Norway, and Sweden acted unilaterally to control some uses of CFCs. International efforts included monitoring, research, and assessment programs beginning in the mid-1970s. The Vienna Convention (1985) established a framework for subsequent protocols; the Montreal Protocol (1987), negotiated and signed shortly after the ozone hole was detected, committed signatories to cut the planned use of offending chemicals by half. Amendments and adjustments to that protocol, signed in 1990, call for a ban of ozone-depleting substances (with a few exceptions) by 2000, with an additional decade for developing countries (28, 29). Negotiations are under way to advance that schedule in light of recent scientific evidence showing observed ozone depletion at faster rates than previously predicted.
It is too early to assess compliance and effectiveness of the Montreal Protocol. However, many industrialized countries may overcomply because the transition to CFC alternatives is proving easier and less expensive than originally feared. Evidence of ozone depletion, support from most major CFC manufacturers for stricter regulation, and persistent pressure by environmental NGOs have already contributed to swifter and more stringent domestic regulation in industrialized and some developing countries.
Oceanic Cases
OIL POLLUTION AT SEA Although accidental oil spills have commanded more public attention, "normal" operational discharges of oil into the sea, primarily from washing tanks and discharging ballast water, are the largest source of human-caused marine oil pollution. Attempts to manage oil pollution date back to the 1920s, but had little effect until the combination of the environmental movement and several salient accidental spills–e.g. the Torrey Canyon spill (1967) and the Santa Barbara blowout (1969)–highlighted the need for domestic and international action.
International efforts to control operational and accidental oil pollution have centered on the Intergovernmental Maritime Consultative Organization (IMCO), formed in 1958 (in 1981 "Consultative" was dropped, "Intergovernmental" became "International," and IMCO became IMO). Through the late 1960s IMCO served as consultant on uniform international safety standards, some of which also helped to reduce oil pollution. Following the damage from the 1967 Torrey Canyon accident, IMCO member states clarified the rights of coastal states to be compensated for accidental oil discharges. Subsequently, the 1973 Convention for the Prevention of Pollution from Ships (MARPOL), which employs an IMO body as its secretariat, set standards for operational discharges as well as for various measures designed to reduce accidental discharges. The original MARPOL never entered into force because of disputes over other provisions regarding transport of hazardous chemicals, but a modification in 1978 made the agreement more acceptable by separating and stretching out regulations on oil, hazardous substances, and other topics. Together these are known as MARPOL 73/78. Approximately 60 countries belong to MARPOL in some form.
IMO serves as a negotiating forum to amend and adjust safety and pollution standards; thus MARPOL 73/78 and related regulations are not static. IMO and MARPOL regulations take two forms, both implemented domestically. Operational regulations set guidelines for the conduct of tankers, for example by restricting the areas and rate at which oily ballast water is discharged into the ocean. Technological regulations prescribe equipment and designs that must be present on tankers of different sizes. Data on compliance with either form of regulation are not collected. Compliance with operational regulations can be assessed only by examining the self-reported records of ship captains; given the conflict of interest and general lack of independent monitoring, compliance may be far from perfect. Compliance with some technological regulations is nearly perfect, probably because the ease of detecting noncompliance and the cost of retrofitting are both high (R. Mitchell, personal communication; 33). In practice, as is frequently the case, a few large countries and firms are more active in the setting of standards than the whole; these heavily influence the pace and direction of the international process of setting and enforcing common standards.
MEDITERRANEAN POLLUTION By the early 1970s, pollution of the Mediterranean, especially near industrial centers, had visibly increased, as had highly publicized egregious cases. The international response was a comprehensive plan to study and reduce Mediterranean pollution, treating the sea as a single ecosystem rather than working through a series of piecemeal agreements. Negotiated with strong leadership from the United Nations Environment Programme (UNEP), the 1975 Action Plan (Med Plan) set forth the comprehensive approach (34). The legal instruments began the following year with the 1976 Barcelona Convention and two protocols calling for prevention (and, for some substances and cases, banning) of marine dumping and cooperation to reduce oil pollution. UNEP subsequently made the Med Plan a model for integrated pollution control in other regional seas (51); however, in most other applications the Med Plan model has, for a variety of reasons, not worked well (36). A notable case where the Med Plan model has not been used is the North Sea; although initially ineffective, there are recent signs the North Sea regime is becoming more effective (10, 52-54).
The main feature of the Med Plan is its system of coordinated monitoring and research (Med Pol), which has improved general understanding of the problem and has transferred knowledge, skills, and technology to developing countries in the Mediterranean. Some argue that these scientific activities have built networks of concerned researchers that, in turn, have effectively pressured governments to take substantive measures to reduce Mediterranean pollution (35). The most important substantive agreement is the land-sources protocol (1980), because such sources are, by far, the most important contributors to Mediterranean marine pollution. Although that protocol entered into force in 1983, it is too early to determine how effective it has been or the general level of compliance. Implementation depends on standards still to be developed by the Med Plan's scientific research programs. An additional protocol on specially protected areas was signed in 1982 and entered into force in 1986.
There has been a great deal of activity, for example, the construction of sewage treatment plants, suggesting compliance and effectiveness. But it is unclear how much can be assigned to the Med Plan process and how much to domestic actions that would have proceeded anyway.
Management and Preservation of Natural Resources
Management characterizes the main objective of many fisheries agreements, of which we consider one, the North Sea herring. Preservation characterizes the protection of endangered species and the Antarctic. The whaling agreement began as a management issue and has gradually shifted to preservation.
We do not consider the several agreements on transport and disposal of hazardous waste, although they are related to preservation of natural resources. These include the 1989 Basel Convention on Transboundary Movements of Hazardous Wastes and Their Disposal (55) and the 1972 London Dumping Convention on disposal of wastes at sea (56, 57).
WHALING From the end of the 19th century through the middle 1960s the annual harvest of whales grew dramatically, peaking in the 1930s and again in the 1950s; consequently, the population of blue whales, for example, dropped from a quarter million to the tens of thousands. In the 1940s, overwhaling in traditional areas of the North Atlantic and Pacific, coupled with technological improvements, pushed the industry to the Antarctic, which rapidly became the largest source of whales. Overwhaling has long been evident, but the several pre-World War II attempts to manage the population failed (37). Using the many existing and previous agreements to manage fish and seal populations as a guide, 1 in 1946 the whaling nations established an International Whaling Commission (IWC) as a negotiating forum for management of whale stocks. The IWC meets annually to discuss the state of stocks, to set quotas and other regulations, and to review how well the past season's quotas and regulations were obeyed. Its Scientific Committee has warned, fairly accurately, of overwhaling problems; through the middle 1960s those warnings were only partially heeded in the quota- and regulation-setting process (i.e. the quotas were set too high; 38). The Scientific Committee sponsors some research of its own but also relies heavily on outside sources, for example national reports on the annual whale catch and the International Council for the Exploration of the Sea (ICES; see the discussion of the North Sea herring fishery below).
The original rationale for the IWC was to maximize the economic benefit of whaling by reducing overfishing and, eventually, increasing total catch. In the early 1970s that rationale changed towards preservation of whales; at the 1972 U.N. Conference on the Human Environment (Stockholm), the preservationist ethic was reflected, for example, in a "whale parade" and a call, led by the US delegation, for a 10-year moratorium on whaling. Domestic pressure in many European nations and the United States to stop whaling was also strong. From that time, annual meetings have reflected the shift away from economic management towards preservation (37, 38). There were also changes in membership as nonwhaling nations joined the IWC in the late 1970s and early 1980s to form a voting bloc; with this new membership the IWC approved a moratorium, beginning in 1986, that continues to the present. Whereas through the 1960s a major problem had been that quotas were set in excess of the Scientific Committee's recommendations, the moratorium set quotas below what was probably justified by the IWC's scientific assessment (41a). Some whaling nations (Japan, Norway, and the Soviet Union) entered objections to the ban, while others (e.g. Iceland) shifted to "scientific" whaling; through both these mechanisms, some whaling continues, and the IWC has no formal authority to prevent such whaling. Through public opinion, NGOs continue to press for a stop to all forms of whaling; some countries have assisted these efforts with threats of retaliation against whaling nations (58a).
Overall, compliance with IWC quotas seems to have been high. 2 The IWC meetings regularly address enforcement and compliance; national reports indicate that the number of infractions was perhaps one to two percent of the total catch (37). Not all nations submitted reports, and there have been numerous third-party reports and indirect evidence (e.g. anomalously low populations of certain whales) of noncompliance, including a dozen notable cases. In 1955 Norway first proposed an International Observer System (IOS) of independent observers to be stationed on whaling ships and factories to verify compliance. It was not until 1972 that IOS was put into action, and since then compliance has probably gone up (37). However, there are indications that compliance was already rising as the whaling fleets of persistent violators were purchased by the major whaling states.
Some claim that because the moratorium fails IWC’s original goal of commercial management of whaling, IWC effectiveness is low (10). Others suggest that because whaling has declined markedly in the past two decades, in part because of IWC decisions, the whaling regime has been effective (11). Future effectiveness is unclear because Iceland, a major potential whaling nation, has announced it will withdraw from the IWC.
ANTARCTIC TREATY SYSTEM Systematic exploration and territorial claims on Antarctica extend back to the turn of the century. After World War II those claims expanded and threatened to militarize the continent. Antarctic research figured prominently in the 1957/58 International Geophysical Year (IGY), the highly successful 18-month internationally coordinated scientific probing of the Earth. The 1959 Antarctic Treaty, negotiated with US and USSR leadership, calls for the continued absence of military activities, the suspension of all territorial claims, and the coordination of "peaceful" scientific research on the continent. Membership in the treaty has remained small, a few dozen countries, because a prerequisite is serious interest in Antarctic research, typically demonstrated by maintenance of a year-round scientific base. In addition to the 1959 treaty, the parties have negotiated agreements to conserve seals (1972) and Antarctic marine living resources (1980), especially the rich fisheries (59, 60). The suite of treaties is known as the Antarctic Treaty System (ATS). A 30th anniversary review of the ATS produced a ban, signed in 1991, on mineral exploration for at least 50 years.
Parties to the treaty meet every two years to make decisions and interpret the provisions of the treaty; thus the ATS evolves over time (42, 44). Because the Antarctic Treaty manages both the continent and its surrounding oceans, it overlaps with efforts in other areas, for example the Law of the Sea, the whaling regime, and agreements controlling transport and dumping in the ocean (e.g. the 1989 Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, which, among other controls, prohibits disposal of hazardous waste south of 60°S latitude).
The verification provisions of the ATS are unique in allowing anytime/anywhere inspection, including overflight, by any of the parties, and requiring advance notice of all expeditions. In practice, only the United States has regularly conducted such inspections, and only to underscore the international status of the continent (42) and to establish the precedent of intrusive inspections, which the Soviet Union had not accepted in the 1960s when the United States first conducted its Antarctic inspections. Although it is difficult to assess, compliance seems perfect, except that the treaty calls for coordination of scientific research, which seems the exception rather than the rule. The Scientific Committee on Antarctic Research (SCAR) of the International Council of Scientific Unions (ICSU) helps integrate scientific research programs, but final authority for essentially all Antarctic research rests with national governments, who provide the funding, as is normal and was the case even for the IGY.
ENDANGERED SPECIES As with many issues of environmental preservation, extinction of species became an important issue with the 1960s environmental movement. Domestically many countries passed laws to protect species, primarily popular land mammals, and their habitats. The 1972 Stockholm conference reinforced these concerns at the international level. The main international legal instrument to control extinctions has been the 1973 Convention on International Trade in Endangered Species (CITES), negotiated with US leadership and pressure from environmental groups. We focus on CITES, although controlling loss of species involves other agreements, including whaling and others (39).
Although the CITES goal is to preserve species, the mechanism is limited to controlling international trade in those species. CITES distinguishes among species according to their risk of extinction by listing species in two appendices: the first, of endangered species, for which commercial trade is essentially banned; and the second, of threatened species, for which commercial trade is closely controlled. Because decisions on listing are made by majority voting of the parties, there is also a third appendix in which a country can unilaterally place a species to notify the international community that the country considers that species to be in need of international controls. The competence with which the trade restrictions are implemented varies widely by country and species.
The International Union for the Conservation of Nature (IUCN), 3 a quasi-governmental organization, has adopted endangered species as one of its issues and, since the 1960s, has been the leading international authority on the status of various species, publicizing its findings through its annual “red book.” In an unusual arrangement, IUCN also provides secretariat services to CITES on contract from UNEP; in that capacity IUCN performs and contracts a limited amount of research, data collection, and technical assistance related to formulating and implementing CITES regulations (45).
Losses of biodiversity surely continue, though the magnitude and distribution of species loss are uncertain. The most important levers on species decline are domestic actions to preserve species and their habitats, which are outside the realm of CITES. Thus, the regime is unable to stop extinctions directly. Parties to the Convention are required to send annual reports, including trade records, to the secretariat but assessing compliance requires some estimate of how many international shipments circumvent the system, which appears impossible to determine. Some reports suggest that even in the United States, which has among the strictest domestic implementation of CITES, compliance is low. Both because CITES is implemented poorly in many countries and because the agreement controls only international trade, its effectiveness in stopping extinctions is probably low (46). However, for many species and in many countries, there is evidence of more stringent local regulations than would be the case if CITES were not in existence.
FISHERIES MANAGEMENT The management of fisheries for maximum sustainable yield (MSY) is the apotheosis of international management of natural resources. There have been many fisheries agreements, but most have been ineffective in stopping overfishing, although it appears that effectiveness has improved since the 1970s in many cases (61). We focus on the North Sea herring fishery because it has received the most attention and may be the single most important of the fishery arrangements.
Until the middle 1970s the herring was the most abundant of all the North Sea fishes, but extensive overfishing caused yields to drop until 1977, when the fishery was closed for five years to allow recovery. The fishery has evoked a variety of institutional responses. From the late 1950s it was controlled by the Northeast Atlantic Fisheries Convention and its Commission (NEAFC), but they acted only as advisory bodies and had little practical impact on overfishing (48). The extension of exclusive economic zones (EEZs)–the area in which a nation has exclusive control over economic activity–to 200 miles in the 1970s effectively divided the North Sea among Norway and the EC member states, at which point control passed from the NEAFC to more flexible bilateral negotiations between Norway and the EC (47). Negotiations have remained cumbersome because of disputes within the EC, which was both negotiating a common fisheries policy and expanding in membership at the time the EEZs were being extended outward (62).
Since 1974 the principle of total allowable catch (TAC), a quantity based on assessments of MSY and the current status of the fishery, has been accepted as the means of controlling the fishery. Before the ban, the agreed quotas markedly exceeded the TAC; furthermore, compliance even with those agreed quotas has been low. It has not been difficult to detect noncompliance since statistics on the catch have been collected and disseminated since early in the century by ICES, an organization explicitly established to improve the data on fisheries (37).
It appears that little has changed as a result of the ban. Overall, compliance with the ban, at least initially, may have been high and, generally, the stocks have recovered, though not to levels that allow MSY. In the last years of the ban there may have been considerable fishing in banned areas, but reported as catches from unbanned areas of the North Sea. In the period since the ban effectively ended in 1982, agreed quotas have exceeded recommended TACs, and disputes over dividing the quotas have resulted in fishing at levels even above the agreed quotas. An accepted formula for distributing the quotas may help reduce these controversies (47a).
FUNCTIONS AND CONCEPTS
Here we describe issues that arise when comparing agreements and illustrate them with examples that extend the brief description of each agreement already provided. The discussion is divided into the two main functions: monitoring and verification.
Monitoring
“Monitoring” here means the process of acquiring information used to facilitate decision-making and implementation of an agreement. Three types of information are collected: first, about offending behaviors that lead to the problem, for example the catching of fish; second, about the problem itself, for example trends in the stocks of fish; third, about responses to the problem, for example to what degree particular governments enforce fishing quotas. These different types of monitoring are used to different degrees in each of the cases. We illustrate by discussing five dimensions of the process of monitoring. The three-by-five matrix is shown in Figure 1; the discussion below fills in the boxes by moving left to right, top to bottom.
MEASURABILITY The extent to which offending behavior can be measured affects the agreements that are negotiated and the extent to which they are implemented. The whaling and fisheries agreements have logically attempted to set quotas of allowable annual catch because such data were easily collected and comparison with some standard was relatively straightforward. In the oil pollution case, the contrast between operational and technological standards further illustrates the point: technological standards are easy to monitor, for example by demonstrating the presence of a particular device onboard the ship; operational requirements are difficult to monitor because they require observing the ship in diverse settings and over extended periods. To improve measurability (and increase stringency), IMCO changed the definition of an illegal oil discharge from 100 parts per million to the “clean ballast” standard (30). Under the new definition, noncompliance with “clean ballast” could be shown by aerial photograph, rather than in situ measurement. In practice this has proved complicated because additional in situ data are needed to demonstrate that an oil slick was the fault of a particular ship.
Regarding monitoring of the problem itself, lack of measurability is pervasive. Statistics on fish populations are notoriously inaccurate; the same is true for whales, though to a lesser degree because they live on the surface and are large. Improving the capacity to measure the relevant environmental parameters has been an explicit goal of both the Med Plan (through Med Pol; 35) and LRTAP (through EMEP; 24). In both cases the approach has been twofold: to fill gaps in the scientific research programs necessary for conducting the measurements; and to adopt uniform monitoring practices so that data and results are comparable.
Figure 1. Types and dimensions of monitoring
Measurability of responses to these problems is occasionally an issue; though most international environmental agreements do not formally require monitoring of how the actions called for are implemented domestically, frequently the parties are required to self-report on the process of implementation. The issue does arise at the periodic meetings of the parties, usually in the context of debates over compliance. The question is rarely one of monitoring whether or not the agreement has been implemented but, rather, whether implementation has been sufficient.
DIRECT AND INDIRECT INDICATORS Problems with direct measurability lead to the use of indirect indicators. In the case of monitoring behavior, most agreements to control atmospheric emissions make extensive use of indirect indicators because the technology for measuring gaseous emissions accurately is expensive, especially for diffuse nonpoint sources. Sulfur dioxide emissions in Europe, which are used to assess LRTAP compliance, are computed from the sulfur content of feedstock coals and unburned ash, except in cases where emissions-monitoring devices are installed in the stacks and thus emissions can be monitored quasi-continuously. The Montreal Protocol controls “consumption” of CFCs, which is defined in the Protocol as: production + imports – exports. The goal of the Protocol is to control atmospheric release of CFCs, but that would have been too complicated to measure in practice, so consumption was agreed upon as a reasonable indirect measure. Indirect data on polluting behavior, for example, emissions of acid-causing substances, can also be gained by working backwards. With the EMEP monitoring network, data on emissions from other countries, data on air currents, and numerical models, it is possible to deduce the gross emissions from a particular country. EMEP’s capabilities are unusual for international environmental regimes (23). There are several cases in the IWC history when inconsistency between data on whale stocks and self-reported data on whale catches produced suspicions of noncompliance, for example, anomalously low data on humpback populations. In the ozone case, it may be possible to determine gross compliance of large producers and consumers of CFCs with the Montreal Protocol from atmospheric monitoring, data on other countries’ emissions, and atmospheric models.
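The Protocol's indirect measure reduces to one line of arithmetic. A minimal sketch of the calculation follows; the function name and the national figures are invented purely for illustration.

```python
def cfc_consumption(production: float, imports: float, exports: float) -> float:
    """Montreal Protocol 'consumption': the indirect proxy for atmospheric
    release of CFCs, which cannot practically be measured directly.
    All three inputs come from national self-reported data."""
    return production + imports - exports

# Hypothetical self-reported figures for one country (tonnes):
print(cfc_consumption(production=10_000, imports=2_500, exports=4_000))  # 8500
```

Because every term is self-reported, the accuracy of the controlled quantity is only as good as the national reports behind it, which is the point made below about self-reporting.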
Indirect measures are also frequently used to monitor a problem. Oil pollution catastrophes–used as an implicit measure by the public–have been instrumental in pushing adoption of IMCO/IMO and MARPOL regulations. Similarly, residents of Mediterranean states easily detect dead fish and smelly water. Visible dieback of German forests served as an indicator that helped convince that country to push for controls on emissions of acid-causing substances. Because data on fish stocks are poor, the catch of fish is frequently used as an indirect indicator of the stock: the declining herring catch helped to force the United Kingdom to close the fishery in 1977; the disastrous catch of the 1964-1965 whaling season helped to galvanize whaling nations to seek more rational management of the resource.
For long-term problems, indirect indicators of the problem may be all that is available, and extensive use of models, simulation, and forecasts may be needed to identify needed policy changes in a timely fashion. The London amendments and adjustments to the Montreal Protocol are partly based on computer models of the future problem, because it is impossible to measure such a problem directly until well after the needed actions must be taken.
Regarding direct and indirect indicators of implementation, IMO provides an example. MARPOL requires that members report all infractions and enforcement of the MARPOL regulations. As secretariat, IMO reports the number of infractions, fines, and other sanctions; these are, at best, only indirect indicators of compliance and implementation. The same has been true in the whaling agreement, except that since 1972 there have also been the IOS reports, which are a direct measure of whether selected ships and factories obey the IWC regulations.
SELF-REPORTING The most extensive source of monitoring information for all these agreements is self-reporting. The Montreal Protocol is entirely dependent upon national reports of production, imports, and exports of ozone-depleting substances. Five years after the Protocol was signed, these basic data are still missing for some countries. Much is dependent upon these data; for example, the classification as a developing country–and thus eligibility for a 10-year delay in compliance with the Protocol–is computed from self-reported data.4 Both the herring fishery and whaling cases show a different form of self-reporting: in those cases, the industry has provided the most useful data sets. The Bureau of International Whaling Statistics, established by the industry and the Norwegian government in the 1920s, provides the essential data on commercial whaling. The International Council for the Exploration of the Sea, using industry reports of annual catch, provides the data for the history of the herring fishery. In all these cases, it is unclear to what degree self-reported data are accurate.
National reporting is also a central component of monitoring the problem. Typically the secretariat for international agreements is small and has neither the funding nor capacity to conduct its own research; the few exceptions include the IWC and the IUCN (for the whaling and CITES cases, respectively), which are able to support a very limited amount of research related to monitoring. Because of limited international research capacity, national research programs, often conducted apart from the international agreement to control the problem, are usually the most important source of information. Consequently, most international environmental agreements include an understanding that relevant national research results will be shared. Essentially every international environmental problem that has been “identified” by some scientific research program–the depletion of stratospheric ozone is the most notable–owes its origin to a few national research programs and free dissemination of the results.
Finally, national reporting of information about implementation is the norm in those cases where such data are required (17, 19). Whaling, CITES, LRTAP, and MARPOL all have mandatory self-reporting of issues related to domestic implementation of the international agreement, for example the number and amount of fines levied against violators. However, the quality of reports varies; for example, whaling reports have been notoriously late and incomplete, and similar experience exists with many other agreements (19). In addition to formal reporting, a number of agreements are characterized by a great many informal sources of reporting about government implementation. Nongovernmental organizations (NGOs) are playing a larger role in such reporting, at least in a few of these cases. At IWC and Montreal Protocol meetings NGO observers usually outnumber the member states, and they make available detailed critical analyses of national responses. The biennial statistical anthology World Resources, published by an NGO, the World Resources Institute, in cooperation with U.N. agencies, spotlights shortcomings of policy responses on a range of problems. Nonetheless, NGOs may be most effective by their direct communication with the public and creation of political pressure rather than through informing the formal processes of treaty negotiation.
INTRUSIVENESS In most cases where national reporting is the norm, intrusiveness is obviously low. However, there is varied experience with intrusive monitoring; carefully considering those cases is important because some observers claim that intrusive monitoring is a prerequisite for effective international governance.
From these nine cases there are two examples of intrusive monitoring of behavior. First, the IWC’s International Observer System (IOS), which requires whaling ships to allow impartial observers to monitor the killing of whales, was a response both to claims that the whaling ships and the nations whose flags they fly were inaccurately reporting data, and to claims that banned or more stringently controlled species were being killed and processed at sea, then mislabelled before the ship returned to port. The IOS seems to have rectified that problem, though it is unclear if high compliance on IOS-attended ships and factories is an accurate indicator of compliance at non-IOS facilities as well. IOS is not fully intrusive because it is based on bilateral exchanges of observers, and the observers tend to be exchanged between whaling nations and thus may be more lenient than would be the case if nonwhaling nations were extensively involved in the IOS. The second case of intrusive monitoring is the anytime/anywhere inspection system of the Antarctic Treaty. In both cases, compliance may have increased slightly as a result of having intrusive inspections available. Intrusiveness may serve goals other than higher compliance; as noted earlier, the United States conducts Antarctic inspections primarily to reaffirm the principle of nonownership of the continent and to establish a precedent for intrusive inspections.
In monitoring the problem, intrusiveness has not been a significant issue. Because of cost, little monitoring of the problem is sponsored directly by the international organization. In those cases in which international monitoring of the problem has taken place–EMEP, Med Pol, and to a much lesser degree IWC and CITES—the sanction of the international collaborative effort seems to reduce fears of intrusiveness. Furthermore, in most cases, the international monitoring is carried out by local officials. Yet, because of the large role of science in all cases, in some sense there is a lot of intrusive monitoring. International scientific research on environmental topics is highly intrusive by nature, because scientists and their instruments travel around the world, subject partly to governmental prerogative.
In the monitoring of policy responses, many transnational actors, notably NGOs, in effect act as intrusive monitors. In none of these cases is this function formally established in the international environmental agreement, but it is carried out nonetheless.
ORGANIZATION Finally, monitoring activities vary in the organizational arrangements for carrying them out. Regarding monitoring of behavior, where self-reporting is the norm, the suite of organizational arrangements is dependent upon the prerogatives of the state. One of the major obstacles in several cases, notably LRTAP, was the absence and/or incompatibility of national emissions statistics because of widely different domestic capacities to collect and report data needed for the international regime. None of the cases that uses national reporting has a perfect record; often countries do not report, falsify reports, or submit incomplete or poor-quality reports (19). Much of this stems from the lack of domestic organizational capacity to prepare such reports. Some such misbehavior is intentional; in the 1960s Panama did not submit whaling catch reports to the IWC, even though it could have, because the only Panamanian whaling ship was engaged in egregious violation of the quotas. National data collection and reporting are not the only source of information. The Bureau of Whaling Statistics and the ICES, as noted above, are primary data sources for the whaling and fishery agreements and are supported not only by member nations but also by industry. In only one case, the IWC’s International Observer System, was there a new organizational capacity explicitly established to assess the veracity of self-reporting, and in that case the program was very small and funded on a bilateral basis by the parties.
In almost every case, the organizational arrangements for monitoring the problem are informal and diverse. Insofar as scientific information is critical for such monitoring, the existing national scientific research programs–which are frequently not organized or funded for the explicit purpose of providing information to the regime–are the most important sources of information. Frequently the regime supports some applied monitoring research; for example, as secretariat for CITES, IUCN provides some grants for monitoring stocks of species; the IWC supports similar types of research. Yet this research remains highly limited, primarily because of cost and lack of resources. Funding in the examples just cited is on the order of tens of thousands of dollars annually. In a few cases international funding commitments have been greater, and organizations have been established to improve such monitoring. The EMEP program under LRTAP and the Med Pol program under the Med Plan are two cases in which the regime explicitly empowered the organization to provide the primary source of monitoring information on the problem.
Regarding the organizational aspects of monitoring policy responses, in none of these cases is an organization formally empowered to collect information on policies. In those few cases where there is some formal reporting of national policies (whaling, MARPOL, LRTAP), the process is through national self-reporting. In most cases the secretariat collates and assembles national reports but provides little or no analysis of how those reports individually or collectively contribute to the goals of the regime. Thus, the organizational arrangements for such reporting are left to the member nation’s prerogative. Indeed, the extent to which nations actually submit the required reports depends highly upon the domestic organizational and technical capacity to collect and publish the needed information (19).
Actual functions and influence of organizations differ from the formal arrangements within the regime. For example, though national reporting of policies is frequently not a formal part of international environmental regimes (and even when it is there is flagrant nonreporting), the information nonetheless makes its way into the debates and actions of the regime. Independent of whatever formal arrangements exist, nations monitor each other in their implementation of international commitments, and independent groups such as environmental NGOs frequently monitor everyone.5
Formally established organizations might have greater legitimacy with governments than informal networks, and legitimacy might lead to greater influence. However, the relationship between legitimacy and influence is far from clear. For example, EMEP’s legitimacy has been high, but so has the quality of its work; this combination makes EMEP results influential. IWC’s Scientific Committee has always been the most legitimate scientific body for the international whaling regime, but in the 1960s the quality of its work was low and its influence consequently diminished. IWC rectified that by establishing another small scientific advisory body, under the auspices of the Food and Agriculture Organization, whose work was influential because it was seen as unbiased, even though its legitimacy as an IWC body was lower than the formally established scientific commission. IUCN has long had considerable influence on the CITES process because of its “red books,” even though IUCN’s legitimacy has been problematic for some CITES members because it is not a strictly governmental organization. In the ozone case, it appears that scientific results were given greater legitimacy through an international scientific review process sponsored by the World Meteorological Organization and the United Nations Environment Programme, even though the bulk of the work had been done by scientists in a very few industrialized countries (50, 28).
Verification
Monitoring activities do not necessarily reveal when parties are in compliance. Verification, the process of determining whether a party is in compliance, varies across three dimensions: capability to verify, definition of compliance, and organizational arrangements.
CAPACITY TO VERIFY: NATURE OF THE STANDARD Many agreements are easy to verify, sometimes reflecting that the agreement was tailored to the prospects of verification. Easily verified agreements are characterized by a close match between the standard against which compliance is assessed and the information on behavior produced by monitoring. Fisheries agreements are of this kind because the regulations tend to be simple (e.g. a quota or a technological standard such as minimum mesh size) and there is a lot of self-reported data. When the standard is indeterminate, verifying compliance is more difficult. The population dynamics of fisheries are typically not well understood or documented. Thus, determining what the standard or quota should be is frequently difficult. The agreement to reduce land-based sources of Mediterranean pollution essentially calls for each country to do its best; thus there is no objective standard for determining compliance. Under CITES, there are only general standards against which it might be determined if local authorities have properly implemented the agreement; only in egregious cases is it clear that CITES obligations have been violated.
Improved capability to verify need not produce a more effective agreement. In the case of the herring fishery, compliance was low even though it was easy to determine noncompliance. With CITES, even though some forms of compliance are difficult to assess, many countries probably would not have joined the agreement if the standards had been more objective.
DEFINING COMPLIANCE: STRINGENCY OF THE STANDARD Although it is difficult to test the veracity of self-reported data, it seems that compliance with the nine agreements is fairly high. However, much of this may be an artifact of the standards. In the late 1950s, Norway and the Netherlands withdrew from the IWC in a dispute over quota-setting; they rejoined in the early 1960s when quotas were raised. Compliance remained high throughout the period; indeed, IWC quotas usually exceeded the actual catch. If Iceland leaves the IWC in the future, as seems likely, then compliance may remain high although significant whaling continues outside the regime. It appears that both LRTAP and the Montreal Protocol have similar levels of overcompliance, but the former has done less to control the environmental problem than the latter. Thus verifying compliance is not the same as determining whether or not a particular party or the agreement as a whole has been effective.
The process of distinguishing compliance from noncompliance depends not only on how stringent the standards are set but also on how the problem is defined. Through the 1960s the IWC thought of the whale problem largely in aggregate terms, and thus set quotas in blue whale units (BWUs)–catches of different whales were converted into a single number according to an index. Compliance largely depended upon whether a particular nation’s catch in BWUs exceeded the quota, also expressed in BWUs. The main effect of changing to New Management Procedures in the early 1970s was to abandon the BWU and, instead, set quotas for individual species and individual parts of the ocean. Increased sophistication of whaling standards better protected the whales, but also required new and more extensive monitoring information both on the behavior of whalers and on the nature of the overwhaling problem.
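The BWU aggregation can be made concrete with a short calculation. The sketch below uses the conversion ratios historically applied by the IWC (one blue whale equivalent to two fin, two and a half humpback, or six sei whales); the season's catch figures and the quota are hypothetical.

```python
# BWUs per animal, from the historical IWC conversion index
# (1 blue = 2 fin = 2.5 humpback = 6 sei). Catch figures are hypothetical.
BWU_PER_ANIMAL = {"blue": 1.0, "fin": 1 / 2, "humpback": 1 / 2.5, "sei": 1 / 6}

def catch_in_bwu(catch: dict) -> float:
    """Collapse a species-by-species catch into the single aggregate
    number against which a nation's quota compliance was judged."""
    return sum(n * BWU_PER_ANIMAL[species] for species, n in catch.items())

season = {"blue": 1_000, "fin": 4_000, "humpback": 500, "sei": 1_200}
total = catch_in_bwu(season)  # 1000 + 2000 + 200 + 200 = 3400 BWU
print(round(total), total <= 16_000)  # compliant against a hypothetical quota
```

Note that the same BWU total is consistent with very different species mixes–it could be met entirely with blue whales–which is precisely why the aggregate index masked species-level depletion and was abandoned in favor of species- and area-specific quotas.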
ORGANIZATION In most environmental cases there is a minimal role for the international organization in verification of compliance. Most agreements have secretariats and require some form of exchange of information such as national reports that can be used to assess compliance. Where the international organization sponsors some monitoring–LRTAP, the Med Plan, and to a lesser degree the IWC and CITES–there is some independent capacity to determine compliance. In practice, even when the international organization actively collects information on domestic implementation of the international agreement, it plays little formal role in explicitly identifying parties that are out of compliance. However, the process of collecting and disseminating the data probably makes it possible for other organizations, such as other signatories to the agreement or NGOs, to expose noncompliance. In some cases–notably IUCN under CITES, and UNEP under the Montreal Protocol and the Med Plan–the international organization has played an important informal role in identifying actual or potential noncompliance and exerting effective pressure.
Conclusions
Monitoring and verification have not been salient aspects of most international environmental issues. No large organizational infrastructures have been created at either the international or domestic levels to fulfill these functions. Most formal information collection under the regimes is self-reported by existing domestic organizations, although NGOs and other actors oversee and contribute to the effectiveness of the regimes to some extent. Thus, although compliance with the agreements seems to be high, the heavy reliance on national reports–which are incomplete, and may be inaccurate because of conflicts of interest–makes true assessment of compliance difficult. Moreover, levels of compliance depend critically on the nature and stringency of the standard. Thus it is important to consider not only compliance but also whether standards are set at appropriate levels. Because international organizations have neither the power nor the capacity to monitor and enforce standards, we tentatively suggest that the most effective standards are those that allow for unilateral action, whether by parties to the agreement or by other actors such as NGOs.
DOMESTIC EXPERIENCE
Many of the same issues and concepts arise in the domestic context, where verification and compliance have been analyzed more extensively (63, 64). Nearly all of the theoretical work on economically optimal systems of verification has been done with the domestic context in mind. Domestic cases may be easier to study because they lack the complication of inherently weak international decision-making and enforcement.
EMPIRICAL STUDIES In the United States, responsibility for environmental protection is divided between the federal government (primarily the Environmental Protection Agency, EPA) and state governments. For example, to control urban air pollution the EPA sets standards for allowable ambient concentrations of several pollutants (the National Ambient Air Quality Standards, NAAQS); the states and some localities are responsible for implementing regulations locally so that, by certain dates, emissions of pollutants are controlled and the NAAQS are met (65, 66). In addition, there are federal emissions standards for new pollution sources.6 Most enforcement (i.e. inspecting of sources and imposing of sanctions) is done by state authorities, but some 10% is done by EPA. The result is that the states and occasionally EPA monitor individual pollution sources for compliance, both the states and EPA monitor for compliance with ambient air concentrations, and EPA monitors the progress of states in implementing their air pollution control plans. The verification regime telescopes from individual sources up to the EPA.
Studies of state monitoring of individual pollution sources suggest that state authorities vary widely in competence but that generally their inspections of polluters are too infrequent and cursory (63, 68, 69). Harrington’s (70) study of New Mexico showed that state authorities adopt fairly effective rules of thumb–for example, to inspect large polluters and frequent violators more often–so data on inadequate inspection may understate the efficacy of the inspections that are performed. Other case studies find much the same. Inspections frequently consist of spot checks to gauge the consistency of self-reported data on emissions, thus encountering the obvious problems of veracity with such data. Technological innovation may soon improve the prospects for monitoring, since continuous emissions monitoring systems (CEMS) are being installed on sources, making it much easier (and less expensive for government) to gain a continuous, tamper-proof record of actual emissions to the environment.
Studies of enforcement find much the same. Regulators are usually unwilling to levy large fines or other sanctions because these lead to expensive legal challenges and delays; the courts have also assessed only modest sanctions (66). There is some evidence of a trend towards stiffer sanctions, including jail terms; since 1983, EPA referrals of cases for criminal prosecution have increased significantly (71). Studies of EPA monitoring of overall compliance with the NAAQS show marked improvement for most pollutants since 1970 (72). The record is mixed for more difficult pollutants, notably tropospheric ozone in growing population centers such as southern California. EPA also monitors state implementation plans and the progress of such plans in achieving compliance with the NAAQS. In cases of continuing noncompliance EPA can intervene to enforce the federal standards and, for example, limit the siting of new pollution sources. In practice, EPA engages in a continuous renegotiation with state and local authorities rather than exercising its full power and autonomy. Thus, as with the international case, the term “compliance” has many meanings and is a function of the standard-setting process.
There have been similar studies of verification and enforcement for other issues, for example hazardous waste (73) and water pollution (64, 74). For comparison, the air pollution case described above is situated between two endpoints. At one extreme is inspection and enforcement of workplace health and safety regulations by the Occupational Safety and Health Administration (OSHA), which is very infrequent–about one inspection per firm per century. Consequently, compliance and effectiveness of OSHA regulations may be much lower than if enforcement were higher (75). At the other extreme is EPA enforcement of water pollution regulations. This is regular–about once per year per firm–and thorough, and seems to increase compliance significantly and cost-effectively (64). The experience with enforcement of air pollution laws is closer to the successful enforcement of water pollution laws than the largely unsuccessful OSHA enforcement. Because of pervasive problems of measuring benefits of environmental regulations and enforcement, it is unclear what the optimal level of enforcement would be in these varied cases. Ostrom’s empirical study of management of local commons also finds that graduated enforcement, supported by monitoring of behavior and compliance, contributes to effective management of natural resources, although she is unable to assess the exact relationship between enforcement and effectiveness (13).
A commonly asserted difference between international and domestic pollution control is that the former faces problems of sovereignty and thus cannot be intrusive. Domestic cases have also had to confront intrusiveness because the Fourth Amendment to the US Constitution prohibits “unreasonable searches and seizures.” The courts have addressed this by reinterpreting the amendment so that it does not apply to neutral (i.e. unbiased) searches by administrative agencies, for example to enforce housing codes for the general good of the public (76, 77). This finding has been extended to include inspections for enforcement of air pollution laws (78), OSHA inspections, and many other similar cases.
THEORETICAL STUDIES In addition to empirical studies of domestic enforcement of pollution laws, there have been many theoretical contributions, largely by economists. Much of this can be traced to the work of Becker (79) and Stigler (80) on optimum enforcement of laws and the deterrent value of various sanctions such as fines and imprisonment. These have been extended to the case of environmental pollution by Downing & Watson (81) and Storey & McCabe (82). This research has become progressively more realistic to reflect the imperfect enforcement of pollution laws (83, 84) and the fact that pollution monitoring is stochastic (85). Synthesizing this literature, Russell et al (63, 86) have proposed an approach to enforcement such that the frequency of inspection would depend upon the number of alleged past violations. As noted above, regulators already adopt similar rules of thumb (70, 73); it is unclear to what extent the rules of thumb and the practice of enforcement deviate from the theory except for the general conclusion, already stated, that pollution laws are probably underenforced.
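The Russell et al. proposal–inspection frequency conditioned on a source's record of alleged past violations–can be sketched as a simple schedule. All parameter values below are invented for illustration; they are not taken from the cited studies.

```python
def inspection_probability(past_violations: int) -> float:
    """Per-period chance that a firm is inspected, rising with its record
    of alleged past violations and capped at a maximum rate. The base
    rate, step, and cap are hypothetical parameters, not empirical ones."""
    base, step, cap = 0.05, 0.15, 0.9
    return min(cap, base + step * past_violations)

# A firm with a clean record faces infrequent inspection; a repeat
# violator is inspected almost every period.
for violations in (0, 1, 3, 10):
    print(violations, inspection_probability(violations))
```

This mirrors the rules of thumb regulators already use in practice: enforcement effort is concentrated where the compliance history is worst, which stretches a fixed inspection budget further than uniform random inspection.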
This theoretical literature on domestic enforcement of environmental laws may be useful for designing better monitoring and enforcement at the international level. To date, there is little evidence that it has been applied in that context.
THEORETICAL PERSPECTIVES
How might theorists of international affairs explain the patterns of verification evident in international environmental agreements? From a survey of several promising fields, the answers are both brief and speculative, because only a handful of scholars have asked the question directly. To further illustrate the differences among the theoretical perspectives, we have explored how they explain the preoccupation with verification in the arms control cases but the relative lack of attention in the international environmental cases.
GAME THEORY International cooperation is inherently a process of interdependent decision-making among two or more actors: it is a “game” in the terminology of game theory (87). Economists and political scientists have made extensive use of game theory to describe the conditions under which cooperation can be achieved. The process of thinking systematically about the costs and benefits or “payoffs” from cooperation has proved helpful (88-90), but it must be remembered that game-theoretic analyses are abstract and thus unable to describe fully the processes of bargaining and cooperation.
Figure 2. Structure of payoffs using game theory
One explanation of the difference in demand for verification between arms control and environmental protection is the structure of the “game” in the two issue-areas (Figure 2). From the perspective of nation A, arms control agreements are typified by the extreme need to avoid the case where A complies but B breaks the agreement. The demand for verification is high in those cases because there is a premium on identifying when the opponent defects. In contrast, environmental agreements may be less sharply characterized by such a payoff structure and thus the demand for verification is lower (91). Also, the emphasis in arms control verification upon “timely notice” of a violation reflects that the benefits of defecting without detection can be rapidly realized, whereas for environmental problems, which may be more cumulative, it may take longer for changes in behavior (e.g. from cooperation to defection) to result in changes to the payoffs.
Thus, the theory seems to predict successfully the differences between the arms control and environmental cases. Now we explore how well game theory can predict the differences in demand for verification among the nine environmental cases. Our nine cases span two ideal types of cooperation: coordination and collaboration (88, 89). Coordination games are characterized by the need for cooperation but the relative indifference of the parties to the particular agreement that is reached. Setting of common international standards for shipping (including many oil pollution standards) are of this type: the parties most want to avoid the case where cooperation fails and they face different shipping standards in every port. Coordination games are self-enforcing because behavior is not conditional on that of other parties and thus the incentives to defect are very low; thus these games should be accompanied by a low demand for verification. The other type of game is collaboration, where cooperation can achieve some common interest but there are significant incentives to defect. Both games in Figure 2 are collaboration; the top game is the famous prisoners’ dilemma. Collaboration games are not self-enforcing; thus these games should be accompanied by a high demand for verification so that each party can have confidence the other is not cheating. Tougher collaboration (more incentives to defect) should be accompanied by greater demand for verification.
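The distinction between the two game types can be made concrete with a short sketch. The payoff values below are hypothetical illustrations (they are not the values in Figure 2); what matters is only the sign of a party's gain from unilateral defection, which is positive in a prisoners' dilemma and negative in a coordination game.

```python
# Each entry maps (A's move, B's move) to (payoff to A, payoff to B);
# "C" = comply/cooperate, "D" = defect. Values are illustrative only.

prisoners_dilemma = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

coordination = {  # e.g. common shipping standards: a common rule beats none
    ("C", "C"): (4, 4), ("C", "D"): (1, 1),
    ("D", "C"): (1, 1), ("D", "D"): (3, 3),
}

def gain_from_defection(game, other_move="C"):
    """A's payoff gain from defecting while the other party cooperates."""
    return game[("D", other_move)][0] - game[("C", other_move)][0]

print(gain_from_defection(prisoners_dilemma))  # positive: incentive to cheat
print(gain_from_defection(coordination))       # negative: self-enforcing
```

On this logic, demand for verification should track the sign and size of the defection gain: high for collaboration games, low for coordination games.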
These predictions are not met by the cases. Notably, the LTBT is largely a game of coordination because US, Soviet, and UK nuclear programs did not appreciably suffer by moving underground, and the common problem of radiation in the atmosphere could only be averted if all parties moved underground. Yet the collective spending on verification procedures for the LTBT is probably greater than the combined total for the other eight cases described in this paper, which reflects Cold War concern about Soviet cheating.
Rigorous testing of these predictions is difficult because the variable “demand for verification” and the payoffs of collaboration or coordination are difficult to define precisely. There are other complications as well. For example, fishery and whaling agreements had some built-in verification procedures before the agreement was first negotiated, such as the extensive self-reporting system provided by the ICES; thus marginal demand for verification in those cases might be depressed because much of the needed capacity already existed. Interestingly, the demand for verification in both the IWC and fisheries cases seems to be largely invariant with the level of compliance. Game theory would predict that as greater degrees of compliance are demanded and realized, the need for verification would increase because the risks of defection would increase as well.
Game-theoretic studies of international cooperation also underscore that games repeated over time lead to more successful cooperation than static games (92). This is true if compliance is transparent: willingness to collaborate more extensively and effectively will increase if the parties can be confident that all other parties have been adhering to past agreements. This suggests two related predictions: first, parties that want to improve cooperation over time will seek procedures for verification so that compliance is transparent. Second, in cases where compliance is transparent there should be an increase in confidence over time, accompanied by an increase in collaboration. Neither of these predictions is rigorously supported by the cases. In the case of the IWC’s international observer system, the original proposal was precisely to improve transparency of compliance. However, it took 18 years for IOS to be adopted; this suggests that the parties did not seek verification with much vigor. 7 Regarding the second prediction, there is not much evidence that, when IOS finally went into effect, it produced greater confidence and more extensive collaboration. The stringency of IWC regulations did increase from the early 1970s to the present, but not because of IOS. In the LRTAP case, transparency of compliance may have led parties not to join the substantive protocols, rather than to cooperate more extensively and risk noncompliance.
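The repeated-game intuition can be sketched in a few lines. This is an illustrative toy model, not an analysis from the literature cited above: a conditional cooperator (tit-for-tat style) sustains mutual cooperation when each party can observe the other's compliance, while unobservable compliance drives both toward the safe defect-defect outcome.

```python
# Iterated prisoners' dilemma under two monitoring regimes.
# Payoffs to one player: T=5 (defect vs cooperator), R=3 (mutual
# cooperation), P=1 (mutual defection), S=0 (cooperate vs defector).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(rounds: int, transparent: bool) -> int:
    """Total payoff to player A over `rounds` repetitions."""
    a_move = b_move = "C"  # both start by cooperating
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(a_move, b_move)]
        if transparent:
            # each side observes and mirrors the other's last move
            a_move, b_move = b_move, a_move
        else:
            # compliance unobservable: both fall back to defection
            a_move = b_move = "D"
    return total

print(play(10, transparent=True))   # cooperation is sustained
print(play(10, transparent=False))  # cooperation collapses after round one
```

The gap between the two totals is the payoff the parties forgo when compliance is opaque, which is the game-theoretic rationale for investing in verification.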
In sum, game theory would appear to offer general insights into the demand for verification, especially the difference between the arms control and environmental cases. But upon closer examination, game theory is insufficient to predict patterns of behavior in the environmental cases.
DOMESTIC POLITICS Negotiating international agreements is better understood as at least two interacting processes: one at the international level and the other among domestic actors (93). In the United States the domestic debate over arms control agreements was characterized by loud proclamations of distrust of Soviet intentions; critics demanded that arms control agreements have stringent provisions for verifying compliance. Because these critics have also had domestic political power, their concerns have been reflected in the formal international agreements. In contrast, the cries for verification of international environmental agreements have been few and soft. In many cases, the leaders of the environmental movement have sought world peace and trust; it is not surprising that verification has not been their major preoccupation. However, there are some cases where domestic interest groups have successfully enforced international agreements and norms, for example through boycotts. Domestic groups were able to add to the 1976 Magnuson fisheries act in the United States a provision requiring retaliation, in the form of denied fishing rights, against any other state that weakened the effectiveness of CITES (58).
The literature linking domestic politics to international negotiation might be usefully combined with studies of bureaucratic organization and procedures (e.g. 94). It may be that the important bureaucratic actors in the domestic formation of arms control policies–primarily the military–are “stamped” with an ethos of mistrust that leads the organization to demand strict verification. In contrast, the important bureaucratic actors in cases of environmental protection–for example, the Environmental Protection Agency–may be characterized by a different ethos, one that is less suspicious and more confident that compliance can be achieved without much attention to verification. This may explain the puzzle from the previous section: namely, why was there so much demand for verification of the LTBT when it is probably a self-enforcing agreement? The answer may be that because the LTBT is an arms control agreement, its verification procedures are shaped by the bureaucratic and interest groups that think all arms control agreements should be extensively verified.
REALISM Realist students of international affairs assume that the distribution of power among states determines their bargaining strength and international behavior. Realists who have studied international regimes doubt that the regimes affect the behavior of states much, because the underlying determinant of regime outcomes is state power. However, most realist students of international regimes accept that while economic and power relationships may be instrumental in the formation of a regime, once created the regime might exercise some independent leverage on behavior (4, 95). Because the most powerful states matter most, those states will undertake to verify and enforce these international agreements on their own, according to their own preferences, rather than entrusting the task to some international organization. There is much evidence that compliance in some cases–notably CITES (referred to above) and the IWC–has been substantially improved because of threats by the United States against noncompliant states (58).
POWER AND INTERDEPENDENCE Power has proved a difficult concept to apply to studies of international relations, and in matters of “low politics” such as harmonizing of tariffs it is not clear what utility military power has. Rather, different states and nonstate entities have different degrees of power, depending on the issue at hand. Australia, New Zealand, and France have played leading roles in renegotiating the Antarctic Treaty; the United States played a leading role in negotiating the Montreal Protocol. UNEP has played the leading role in developing measures to protect regional seas, and NGOs have considerable power to influence behavior and regime outcomes in some issues. Even entrepreneurial individuals have some power over the structure and effectiveness of international agreements; for example, the Executive Director of UNEP was instrumental in the Montreal Protocol negotiations and subsequent efforts to strengthen the Protocol (28).
So far little has been said about enforcement of international environmental agreements and its effect on the demand for verification. Because of growing interdependence of states and a sense of “community” among a relatively stable set of actors, there are strong incentives to comply with international agreements–even where it may not be in a state’s immediate interest to do so–because the negative consequences of noncompliance may be felt in other issues (96). Because issues are interlinked, states have a variety of mechanisms to enforce international regulations; for example, the United States made effective use of threats to deny Japanese access to fishing waters within the United States EEZ unless the Japanese withdrew their objection to the IWC’s whaling moratorium. Formal, dedicated verification and enforcement may not be needed where economic and political interdependencies can be used to ensure compliance through “diffuse reciprocity” extending over time and across other issues (97). Cases of “high” politics such as nuclear arms control, where territorial security is the issue, may be characterized by lower interdependence and thus lower assurance of compliance and, perhaps, greater need for verification.
SYMBOLIC POLITICS An alternative explanation is that verification tends to be low not because of an expectation that nations will comply but because of neglect. Governments may negotiate many of these agreements for symbolic reasons—for example, to demonstrate concern about the environment and placate environmentalists. Thus they are concerned primarily with the presence and image of the international agreement and do not actually seek a process for forging substantive cooperation. The demand for verification remains low because verification is not integral to the symbol. Demand also remains low because verification might reveal noncompliance.
INFORMAL ACTORS The practice of monitoring and verification is conducted through many channels, not just the states and organizations that are formally associated with an international agreement. For example, it is now commonplace to assert an important role for NGOs in implementing international agreements by collecting and publishing information related to compliance and by pressuring states to control pollution. In CITES, IUCN has partially filled this function; in the whaling and fishery agreements the partially nongovernmental ICES has contributed extensive amounts of information. At present, however, the roles and effectiveness of NGOs remain understudied at both the national and international level (98).
NORMS AND SOCIAL INSTITUTIONS The large number and increasing frequency of environmental agreements may reflect a long-term trend towards some form of world governance or even government. Perhaps such international governance is already evident in the various principles, norms, and expectations–some informal and others formally codified in international agreements–that are shared internationally. Scholars have long noted the power of norms in shaping behavior (99, 100), although it has proved difficult to track accurately when and how such norms develop. Nonetheless, high degrees of compliance that seem to be experienced in most international and domestic cases may reflect the operation of such norms, rather than the fear of formal enforcement. Individual compliance with laws may reflect the widespread belief that it is “right” to obey the law. Governmental compliance with international agreements may reflect the same principle operating on the international level. Governments tend to obey international agreements, choosing to change the expected norms rather than blatantly violating them (101). The effective operation of norms may reduce the need for explicit monitoring and verification. Within established communities norms may be more effective in shaping behavior; in addition, intrusive and cooperative monitoring may be easier and less costly. Clearly the operation of even well-established norms is not guaranteed. For example, the Iraqi invasion of Kuwait in 1991 violated the well-established principle of sovereignty.
Norms can be powerful; the environmental movement shaped a norm against whaling which, from the late 1960s to the 1980s, transformed the IWC from an organization that manages whale stocks to one that preserves them (37). In cases where norms effectively control behavior, little or no verification and enforcement may be needed. This may explain why states have devoted little attention to verification of these international agreements.
EMERGING ISSUES AND RESEARCH OPPORTUNITIES
ISSUES FOR NEW REGIMES Negotiations are under way to frame environmental regimes for global warming, tropical forests, and biodiversity. Based on this review, at least four issues are worth attention by practitioners and scholars addressing these problems. The first is availability of data. Analyses of global warming are based on country-by-country estimates of sources and sinks of greenhouse gases, not direct measurement; for many countries’ sources and sinks, the estimates are poor. The rate of tropical deforestation is uncertain. Biodiversity is marked by sparse data on both number of species and rate of loss. New regimes should be based upon data that are reasonably available or likely to be so in the near future. Perhaps it is possible to build incentives into regimes to improve data collection and dissemination and to counter false and incomplete self-reporting. Regimes calling for changes in behavior that are finer than the accuracy of data will not encourage compliance or permit verification.
The second issue is transparency and openness. Many of the successful regimes reviewed in this paper provide for clear presentation of data collected under the regime (transparency) and access to the negotiating process and information for a wide range of governmental and nongovernmental actors (openness). The environmental successes contrast with the arms control cases, which are marked by secrecy, obscurity, and limited participation. New environmental regimes may also benefit from transparency and openness.
A third issue is the balance between authority vested in domestic and international organizations. There is tension between the appeal of internationalizing environmental regulation and verification–for example, through creation of a global version of a national Environmental Protection Agency–and the reality that most functions of environmental management are carried out domestically, even when they form a critical component of an international agreement. Because monitoring and verification are intrusive, expensive, and must be responsive to local conditions, the balance favors domestic institutions. International organizations can contribute to verification, for example, through audit strategies such as the International Observer System and research and monitoring, but domestic organizations remain the mainstay of implementation. New regimes should be tailored to the reality of the domestic institutions upon which they depend.
The fourth issue is the division of roles between governmental and nongovernmental organizations. Domestically, NGOs have been important for setting environmental norms and pointing out noncompliance, a pattern likely to be extended. As in human rights, where organizations such as Amnesty International and Helsinki Watch have pressured governments to comply, we imagine that perhaps a “carbon watch” will play an important role in greenhouse verification. Such contributions of NGOs to effective international environmental regimes are enhanced by transparency and openness (101). It is also important to recognize that contributions of NGOs to international environmental policy are frequently dominated by concerns of industrialized countries, often have a narrow or “single-issue” focus, and are sometimes unresponsive to scientific evidence.
CONCLUDING THOUGHTS Because many environmental problems are the result of energy consumption, international organizational arrangements for energy issues must be kept in mind. Within the U.N. system there is a program for energy statistics, but it has little analytical capability and the data are frequently poor. The International Atomic Energy Agency addresses an important subset of energy issues, namely nuclear power. In OECD countries the International Energy Agency plays a coordinating role in energy markets. However, at the global level there is no organization particularly suited to address the pervasive link between energy and environment. Currently, UNEP de facto is the lead organization on these issues because of its role in environmental protection, but UNEP’s expertise is spread thin across many fields.
It is also important to consider how advances in science and technology can contribute to international environmental verification, especially in monitoring, organization, and dissemination of information. Regarding non-point sources, for example, new monitoring devices can allow verification of agreements that would otherwise be administratively infeasible. Information systems can allow worldwide transparency. The rapidity, extent, and cost of technological change and its effect on verification regimes are worth closer attention. Some technologies centrally controlled by a few countries, such as satellites, may assist global data collection and should be employed where appropriate, for example in the measurement of rates of change and extent of forest cover. Furthermore, the release of technical capabilities devoted to national security may greatly improve public knowledge about environmental changes ranging from deforestation to extent of snow cover and ice thickness.
Finally, study is needed to determine how market-based mechanisms to control environmental problems, currently in vogue, affect notions of compliance and verification. These mechanisms are largely dependent upon domestic institutions for implementation, and there is large variance across domestic systems, for example, in tax policies. International arrangements can help harmonize disparate domestic situations, but it is unclear how much harmonization is needed to accommodate international systems such as a global greenhouse tax or system of tradeable permits. Moreover, market-based mechanisms require changes in domestic institutions that make and implement rules, as well as new forms of monitoring, for example, tracking of permit trading that could markedly increase administrative burden (102).
A shift towards the market also implies a change in the definition of compliance. Existing environmental regulation is directed towards specific, predetermined firm responses to pollution abatement; compliance is determined by whether reality conforms to the standard. Where markets are employed, compliance is determined by whether emissions are covered by a tradeable emission permit and/or payment of an effluent fee. However, it is a priori impossible to determine the quantity and spatial distribution of emissions that will result. This uncertainty implies new strategies for detecting noncompliance and new challenges for public environmental management, which has been largely premised on a strong regulatory role for government institutions. A logical place for further study is the international and domestic verification regime needed for effective implementation of these market-based strategies.
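The bookkeeping that a tradeable-permit regime implies can be sketched briefly. Everything below is a hypothetical illustration (the class, source names, and quantities are invented): it shows only that under such a regime the registry must track holdings through trades, and that compliance reduces to whether measured emissions are covered by the permits currently held.

```python
# Minimal sketch of permit-registry bookkeeping under a tradeable-permit
# scheme. A source is compliant if its measured emissions do not exceed
# the permits it holds after all trades are recorded.
class PermitRegistry:
    def __init__(self, initial_holdings):
        self.holdings = dict(initial_holdings)  # source -> permitted tons

    def trade(self, seller, buyer, tons):
        """Record a permit trade; reject trades the seller cannot cover."""
        if self.holdings.get(seller, 0.0) < tons:
            raise ValueError("seller lacks permits to cover the trade")
        self.holdings[seller] -= tons
        self.holdings[buyer] = self.holdings.get(buyer, 0.0) + tons

    def compliant(self, source, measured_emissions):
        """Compliance = emissions covered by current permit holdings."""
        return measured_emissions <= self.holdings.get(source, 0.0)

reg = PermitRegistry({"plant_a": 150.0, "plant_b": 80.0})
reg.trade("plant_a", "plant_b", 30.0)
print(reg.compliant("plant_a", 120.0))  # covered after selling permits
print(reg.compliant("plant_b", 95.0))   # covered after buying permits
```

Even this toy version makes the administrative point in the text visible: every trade changes what counts as compliance for two sources at once, so the monitoring burden grows with trading activity.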
Although verification of environmental agreements lacks the urgency of verification in arms control, we conclude that greater attention to it is warranted. It may be a catalyst to better design of agreements and reporting of information and a stimulus to countries’ capacity to comply, as more environmental problems are addressed by international agreements. An enhanced statistical base will be needed to assess performance and compliance in meeting environmental goals. More attention to the improvement of national and international statistical systems for energy, forests, fisheries, toxics, and so forth may prove one of the greatest benefits of the development of international regimes.
APPENDIX: LESSONS FROM ARMS CONTROL VERIFICATION
Verification of arms control agreements is quite different in salience and procedures from verification of international environmental agreements. Arms control agreements address matters of “hard security,” and thus it is especially important to have timely detection of defections. Because arms control agreements predominantly control state activities rather than state subjects (people, corporations, etc), arms control verification intrudes less, politically and physically, on the liberties of state subjects than does verification of international environmental agreements; those liberties tend to be guaranteed by constitutions and norms of freedom.
Regardless of the differences, a comparison between arms control and environmental verification may be a useful exercise, if only because so much attention has been devoted to the arms control cases during the past three decades. In this appendix we briefly review the arms control verification literature and draw several lessons. Other types of comparisons would also be illuminating, for example between environmental and international criminal law enforcement.
Verification figures prominently in US-Soviet nuclear arms control (103). Also studied are the role of third countries, the role of international organizations, conventional arms control (104), prospective agreements to strengthen chemical and biological weapons, and the role of nuclear operations (105).
NUCLEAR ARMS CONTROL IN PRACTICE All major post-World War II arms control failures have in part been due to claims that the agreement could not be adequately verified: the 1946 Baruch Plan to transfer all nuclear weapons and materials to the United Nations partially foundered on the inability to detect clandestine nuclear weapons production without highly intrusive inspections; perennial proposals for a comprehensive nuclear test ban (see below) have partially failed because of disagreements over on-site inspections needed to distinguish between nuclear explosions and earthquakes; the United States failed to ratify the 1979 Strategic Arms Limitation Talks (SALT) II treaty in part because of fears the Soviets could cheat without being detected.
Verification is intertwined with assessments and fears of noncompliance. Claims and counter-claims of deceit and noncompliance periodically characterize east-west arms control. Fear of cheating produced a characteristic style, sought at least by US negotiators, of highly specific arms control agreements that reduce ambiguity and make it easier to detect compliance and noncompliance (106, 107).
The issue of on-site inspection (OSI) for verification is a perennial arms control issue because, in many cases, it is the best method for assessing compliance (108, 109). Because it is potentially intrusive, and therefore potentially useful for military and industrial espionage as well as arms control, OSI has proved difficult to employ. 8 Through the 1970s the intrusiveness of arms control verification was very low, with one exception (see below); rather, independent national means–called national technical means (NTM), a term formally introduced in the SALT I treaty–were the norm. In practice, NTM has never been formally defined, but it includes all forms of remote sensing whose platforms do not enter the other country’s territory (e.g. satellites but not aircraft; eavesdropping ships on the high seas but not in territorial waters). NTM is not fully independent: the SALT process put limits on the extent to which nations could interfere with each other’s NTM, for example by encrypting certain data during missile testing and thus reducing the capacity of NTM to detect violations (107). 9
Recently arms control verification has become more intrusive and less politicized (110), because of improved east-west relations. The 1987 Intermediate Nuclear Forces (INF) agreement and the 1990 Treaty on Conventional Armed Forces in Europe (CFE) provide for on-site inspectors (22, 111). The 1991 Strategic Arms Reduction Talks (START) agreement allows on-site inspections of nuclear missiles, including surprise inspections. The United States has established an On-Site Inspection Agency (OSIA) to conduct inspections and perform other functions under these and other existing and prospective arms control agreements. As an indicator of the salience of arms control verification, OSIA’s budget for implementing INF alone is $522 million (112, 113).
One arms control arrangement–the International Atomic Energy Agency’s (IAEA) nuclear materials accounting–has made longstanding use of on-site inspection. Established in 1957, IAEA was charged with inspecting civilian nuclear power plants to “safeguard” all nuclear materials in participating countries, confirming that they were not diverted from peaceful purposes. Under the 1968 nuclear nonproliferation treaty (NPT), IAEA safeguards have been extended to a larger group of nations and nuclear programs (114, 115, 116a). In practice, IAEA negotiates bilateral agreements with each country for each nuclear facility subject to safeguards; those contracts call for both regular and surprise short-notice (24-hour) inspections (117). IAEA safeguards are, by design, supposed to provide high confidence of timely detection of the diversion of any significant amount of nuclear materials away from peaceful uses. “Timely” and “significant” are defined by IAEA according to the material diverted.
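The accounting at the heart of safeguards can be sketched as a materials-balance check. The function names, quantities, and the default threshold below are illustrative assumptions for exposition; the underlying identity (material unaccounted for equals beginning inventory plus receipts minus shipments minus ending inventory) is standard accounting logic, and IAEA defines the actual significant quantities per material.

```python
# Illustrative materials-balance check of the kind safeguards rest on.
# MUF ("material unaccounted for") = beginning inventory + receipts
#                                    - shipments - ending inventory.
# The threshold here is an assumed stand-in for an IAEA significant
# quantity; real values vary by material.
def muf_kg(beginning, receipts, shipments, ending):
    """Material unaccounted for over one balance period, in kg."""
    return beginning + receipts - shipments - ending

def flag_diversion(beginning, receipts, shipments, ending, sq_kg=8.0):
    """True if unaccounted material reaches the assumed threshold."""
    return muf_kg(beginning, receipts, shipments, ending) >= sq_kg

print(flag_diversion(100.0, 20.0, 15.0, 103.0))  # small MUF, no flag
print(flag_diversion(100.0, 20.0, 15.0, 96.0))   # large MUF, flagged
```

In practice measurement error makes MUF a statistical quantity rather than an exact one, which is why inspection and instrument accuracy set the limits of what "timely detection" can mean.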
IAEA inspections are limited, however, to nuclear facilities described in the bilateral agreements. Inspectors are not free to wander the countryside. IAEA members thought to own or be developing nuclear weapons are doing so outside of the declared facilities rather than diverting materials from the IAEA-monitored fuel cycle. Discovery of a well-advanced Iraqi nuclear weapons program by U.N. inspectors after the most recent Persian Gulf war is widely seen as a failure of safeguards procedures (Iraq was a member of NPT), and has underscored that timely detection of clandestine nuclear programs will require more intrusive inspections. At present, it is unclear (a) whether and to what degree IAEA has authority for more intrusive “special inspections” or whether such authority might be vested in IAEA, (b) whether and how IAEA might employ national intelligence data in its efforts to detect clandestine nuclear programs, and (c) what might be done when such programs are detected (118).
Currently IAEA safeguards apply to approximately 1000 nuclear facilities; a budget of approximately $50 million per year supports several hundred field inspectors and activities related to safeguards. Because IAEA provides equal inspections to all states under NPT, the bulk of IAEA safeguards resources are spent inspecting facilities in industrialized countries, primarily France, Japan, and the United Kingdom. For comparison, the IAEA safeguards budget is approximately equal to the entire budget of the United Nations Environment Programme. The total IAEA budget is approximately $150 million and includes technical assistance, basic research, and other activities related to promotion of peaceful nuclear power.
LESSONS FROM PRACTICE AND THEORY First, verification can become a salient dimension of international cooperation, so much so that agreements that cannot be verified adequately are politically infeasible. Clearly much rests on the definition of “adequate.” Concern about Soviet noncompliance had been so great within the US government that, since 1984, by requirement of Congress, the US President has annually reported the status of Soviet compliance with arms control agreements (110).
Second, verification can be divisive. Within the United States, bitter disputes over verification, although a reflection of deeper ideological divisions, may have eroded the prospects for meaningful arms control in the late 1970s and early 1980s, especially because of debates over verifiability of the 1979 SALT II agreement. Disputes over which violations, if any, were significant led to escalating reciprocal charges of possible treaty “breakout,” all of which may have undermined support for international cooperation.
Third, verification is not an end in itself; rather, it should be seen as contributing to one’s overall goals, such as security (119). 10 Thus scholars have long distinguished between detecting important and unimportant violations of arms control agreements. Insofar as verification has contributed to increased confidence in east-west arms control–perhaps evidenced in the increasing stringency of arms control agreements and intrusiveness of verification–then it has probably enhanced the prospects for further arms control and security. Not all arms control contributes to increased security or lower military spending, but increased confidence in meaningful arms control in the past two decades is probably at least partially due to verification activities.
Fourth, the suite of technological and organizational arrangements for arms control verification has other purposes, for example, espionage. Attempts to explain the types of verification demanded in international agreements must consider the constraints and opportunities of these overlapping activities rather than just the narrower purposes of arms control verification (14, 120, 121).
Fifth, technological change and scientific research programs can enhance the verification process. Research to improve verification techniques can make possible certain types of agreements; for example, research programs undertaken by governments to improve the capacity to distinguish earthquakes from underground explosions helped the negotiation of a partial test ban (21). Similarly, technological change in the commercial sector may also offer opportunities for verification and related activities.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the assistance of Wolfgang Fischer and Juan C. Di Primio, and James Broadus, Antonia and Abram Chayes, and Eugene Skolnikoff.
ENDNOTES
1 The success of those agreements was mixed at best. A notable exception, the highly successful 1911 Fur Seal Agreement, is discussed by Lyster (39).
2 As implied above, through the 1960s high levels of compliance reflected that quotas were set so high that states had to make little or no effort to remain in compliance with them.
3 IUCN has recently changed its name to the World Conservation Union.
4 A developing country is defined as having consumption of CFCs below 0.3 kilograms per capita.
5 NGOs are becoming active in many issue-areas. Particular NGOs have adopted particular issues: for example, IUCN (which has both governmental and nongovernmental members) is active in CITES and Greenpeace is active in whaling. To understand better how and why a particular NGO captures a certain issue one would have to look more closely at the goals and processes within the NGO.
6 For a review of the recent changes to the federal clean air legislation see Ref. 67.
7 As Birnie (37) shows, the IOS was always mentioned at IWC meetings; however, none of the parties seems to have been extremely active in forcing the idea. IOS was also difficult to put into place because of rigidity in the Whaling Convention. Thus, the 18-year delay does not disprove the hypothesis here, though it does weaken it.
8 Additional difficulties in negotiating intrusive arms control verification procedures stem from differences in the degree of openness of societies. Calls for intrusiveness are often surrogates for larger political debates over openness. For example, the United States long pushed for intrusive arms control inspections in part to underscore the closed nature of Soviet society. That US position has become more cautious in its demands for OSI since approximately 1987 because Glasnost, among other achievements, produced greater Soviet willingness to allow intrusive inspections. Faced with negotiating the need for intrusive inspections as an issue in its own right rather than as a surrogate debate, the United States has become less insistent on OSI. Ironically, in some cases such as the chemical weapons treaty currently under negotiation, the United States is now actively opposing some forms of intrusive inspection.
9 The open skies proposals of the 1950s (which resurfaced in the 1980s) would have modified what is now known as NTM by allowing free overflight of enemy territory. This would be useful not only for arms control verification but for other activities that enhance security; for example, open skies would allow easier confirmation that an enemy was not mobilizing and thus decrease skittishness in a crisis. Satellite observation may reduce the need for open skies, but many of the security benefits of open skies remain relevant today.
10 Interestingly, there has been little assessment of the costs of verification and the marginal contribution of spending on verification and spending on other measures that might enhance security. One study of the costs of verification is (113).
LITERATURE CITED
1. Krasner, S. D., ed. 1983. International Regimes. Ithaca: Cornell Univ. Press
2. Haggard, S., Simmons, B. A. 1987. Theories of international regimes. Int. Organ. 41:491-517
3. Young, O. R. 1990. Global environmental change and international governance. Millennium: J. Int. Stud. 3:337-46
4. Keohane, R. O. 1984. After Hegemony: Cooperation and Discord in the World Political Economy. Princeton: Princeton Univ. Press
5. Haas, E. B. 1990. When Knowledge is Power: Three Models of Change in International Organizations. Berkeley: Univ. Calif. Press
6. Nye, J. S. 1987. Nuclear learning and U.S.-Soviet security regimes. Int. Organ. 41:371-402
7. Young, O. R. 1989. The politics of international regime formation: Managing natural resources and the environment. Int. Organ. 43:349-75
8. Kay, D. A., Jacobson, H. K., eds. 1983. Environmental Protection: The International Dimension. London: Allanheld, Osmun
9. Caldwell, L. K. 1984/1990. International Environmental Policy: Emergence and Dimensions. Durham, NC: Duke Univ. Press
10. Wettestad, J., Andresen, S. 1991. The Effectiveness of International Resource Cooperation: Some Preliminary Findings. R:007-1991. Lysaker, Norway: Fridtjof Nansens Inst.
11. Young, O. R. 1992. The effectiveness of international institutions: Hard cases and critical variables. In Governance without Government: Order and Change in World Politics, ed. J. N. Rosenau, E.-O. Czempiel. New York: Cambridge Univ. Press
12. Haas, P. M., Keohane, R. O., Levy, M. A. forthcoming. Institutions for the Earth: Sources of Effective International Environmental Protection. Cambridge: MIT Press
13. Ostrom, E. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. New York: Cambridge Univ. Press
14. Rowell, W. F. 1986. Arms Control Verification: A Guide to Policy Issues for the 1980s. Cambridge: Ballinger
15. Krepon, M., Umberger, M. 1988. Verification and Compliance: A Problem-Solving Approach. Cambridge: Ballinger
16. Chayes, A. H., Chayes, A. 1990. From law enforcement to dispute settlement: A new approach to arms control verification and compliance. Int. Security 14:147-64
17. Fischer, W. 1991. The Verification of International Conventions on Protection of the Environment and Common Resources: A comparative Analysis of the Instruments and Procedures for International Verification with the Example of Thirteen Conventions. Programmgruppe Technologiefolgenforschung, Forschungszentrum Julich
18. Fischer, W. 1991. The verification of a greenhouse gas convention—a new task for international politics? In Verification Report 1991: Yearbook on Arms Control and Environmental Agreements, ed. J. B. Poole, pp. 197- 206. New York: Apex
19. Gen. Account. Off., US Congress. 1992. International Environment: International Agreements Are Not Well Monitored. GAO/RCED-92-43
20. United Nations Environ. Prog. (UNEP). 1989. Register of International Treaties and Other Agreements in the Field of the Environment. UNEP/GC.15/Inf.2. Nairobi: UNEP
21. Jacobson, H. K., Stein, E. 1966. Diplomats, Scientists and Politicians: The United States and the Nuclear Test Ban Negotiations. Ann Arbor: Univ. Mich.
22. Arms Control and Disarmament Agency (ACDA), US Dept. State. 1990. Arms Control and Disarmament Agreements: Texts and Histories of the Negotiations. Washington, DC: ACDA
23. Sand, P. 1990. Regional approaches to transboundary air pollution. In Energy: Production, Consumption and Consequences , ed. J. L. Helm. Washington, DC: Natl. Acad. Press
24. Levy, M. forthcoming. European acid rain: The power of toteboard diplomacy. See Ref. 12
25. Lammers, J. G. 1988. The European approach to acid rain. In International Law and Pollution, ed. D. B. Magraw, pp. 265-309. Philadelphia: Univ. Penn.
26. Chossudovsky, E. M. 1988. “East-West” Diplomacy for Environment in the United Nations: The High-Level Meeting within the Framework of the ECE on the Protection of the Environment, A Case Study. United Nations Inst. Train. Res. [UNITAR]. New York: United Nations
27. Boehmer-Christiansen, S., Skea, J. 1991. Acid Politics: Environmental and Energy Policies in Britain and Germany. New York: Belhaven
28. Benedick, R. E. 1991. Ozone Diplomacy: New Directions in Safeguarding the Planet. Cambridge: Harvard Univ. Press
29. Parson, E. A. forthcoming. Stratospheric ozone and CFCs: The evolution and impact of international institutions. See Ref. 12.
30. M’gonigle, R. M., Zacher, M. W. 1979. Pollution, Politics and International Law: Tankers at Sea. Berkeley: Univ. Calif.
31. Sielen, A. B., McManus, R. J. 1983. IMCO and the Politics of Ship Pollution. See Ref. 8, pp. 140-83
32. Timagenis, G. J. 1980. International Control of Marine Pollution. Vols. 1, 2. Dobbs Ferry, NY: Oceana
33. Mitchell, R. forthcoming. Intentional oil pollution of the oceans: Crisis, public pressure and structural standards. See Ref. 12.
34. Boxer, B. 1983. The Mediterranean Sea: Preparing and implementing a regional action plan. See Ref. 8, pp. 267-309
35. Haas, P. M. 1990. Saving the Mediterranean: The Politics of International Environmental Cooperation. New York: Columbia Univ. Press
36. Haas, P. M. 1992. Save the seas: UNEP’s regional seas programme and the coordination of regional pollution control efforts. In Ocean Yearbook 9, ed. E. M. Borgese, N. Ginsburg, J. A. Morgan, pp. 188-211. Chicago: Univ. Chicago Press
37. Birnie, P. 1985. International Regulation of Whaling: From Conservation of Whaling to Conservation of Whales and Regulation of Whale-Watching. Vols. 1, 2. New York: Oceana
38. McHugh, J. L. 1974. The role and history of the International Whaling Commission. In The Whale Problem, ed. W. E. Schevill, pp. 305-35. Cambridge: Harvard Univ. Press
39. Lyster, S. 1985. International Wildlife Law. Cambridge: Grotius
40. Phillips, C. 1990. What the moratorium means in practice. Marine Policy 14:93-95
41. Andresen, S. 1989. Science and politics in the international management of whales. Marine Policy 13:99-117
41a. Andresen, S. 1989. See Ref. 41, pp. 109-11
42. Auburn, F. M. 1982. Antarctic Law and Politics. Bloomington: Indiana Univ. Press
43. Jorgensen-Dahl, A., Ostreng, W., eds. 1991. The Antarctic Treaty System in World Politics. London: Macmillan
44. Peterson, M. J. 1988. Managing the Frozen South: The Creation and Evolution of the Antarctic Treaty System. Berkeley: Univ. Calif.
45. Favre, D. S. 1989. International Trade in Endangered Species. Dordrecht: Martinus Nijhoff
46. Kosloff, L. H., Trexler, M. C. 1987. The convention on international trade in endangered species: No carrot, but where’s the stick? Environ. Law Rep. 17:10222-36
47. Coull, J. R. 1988. The North Sea herring fishery in the twentieth century. In Ocean Yearbook 7, ed. E. M. Borgese, N. Ginsburg, J. R. Morgan, pp. 115-31. Chicago: Univ. Chicago Press
47a. Coull, J. R. 1988. See Ref. 47, pp. 129-30
48. Underdal, A. 1980. The Politics of International Fisheries Management: The Case of the Northeast Atlantic. Oslo: Universitetsforlaget
49. Cowling, E. B. 1982. Acid precipitation in historical perspective. Environ. Sci. Technol. 16:110A-123A
50. World Meteorol. Organ. (WMO). 1990. Scientific Assessment of Stratospheric Ozone: 1989. 2 vols., Global Ozone Research and Monitoring Project Report 20
51. Boczek, B. A. 1986. The concept of regime and the protection of the marine environment. In Ocean Yearbook 6, ed. E. M. Borgese, N. Ginsburg, pp. 271-97. Chicago: Univ. Chicago Press
52. Saetevik, S. 1988. Environmental Cooperation Between the North Sea States: Success or Failure? New York: Bellhaven
53. Andresen, S. 1989. The environmental North Sea regime: A successful regional approach. In Ocean Yearbook 8, ed. E. M. Borgese, N. Ginsburg, J. R. Morgan, pp. 378-401. Chicago: Univ. Chicago Press
54. Freestone, D., Ijlstra, T., eds. 1991. The North Sea: Basic Legal Documents on Regional Environmental Cooperation. Dordrecht: Graham and Trotman/Martinus Nijhoff
55. Hilz, C., Ehrenfeld, J. R. 1991. Transboundary movements of hazardous wastes: A comparative analysis of the policy options to control the international waste trade. Int. Environ. Affairs 3:26-63
56. McManus, R. J. 1983. Ocean dumping: Standards in action. See Ref. 8, pp. 119-39
57. Bruce, M. 1986. The London dumping convention, 1972: The first decade and future. See Ref. 51, pp. 298-318
58. Birnie, P. 1985. The role of developing countries in nudging the International Whaling Commission from regulating whaling to encouraging nonconsumptive uses of whales. Ecol. Law Q. 12:937-75
58a. Birnie, P. 1985. See Ref. 58, pp. 946-50
59. McElroy, J. K. 1984. Antarctic fisheries: History and prospects. Marine Policy 8:239-58
60. Bardach, J. E. 1986. Fish far away: Comments on the Antarctic fisheries. See Ref. 51, pp. 38-54
61. Peterson, M. J. forthcoming. International fisheries management. See Ref. 12
62. Farwell, J., Elles, J. 1984. In Search of a Common Fisheries Policy. Brookfield, Vt: Gower
63. Russell, C. S., Harrington, W., Vaughan, W. J. 1986. Enforcing Pollution Control Laws. Washington, DC: Resour. for the Future
64. Magat, W. A., Viscusi, W. K. 1990. Effectiveness of the EPA’s regulatory enforcement: the case of industrial effluent standards. J. Law Econ. 33:331-60
65. Roberts, M. J., Farrell, S. O. 1978. The political economy of implementation: The Clean Air Act and stationary sources. In Approaches to Controlling Air Pollution, ed. A. F. Friedlander. Cambridge: MIT Press
66. Melnick, R. S. 1983. Regulation and the Courts: The Case of the Clean Air Act. Washington, DC: The Brookings Inst.
67. Ferrall, B. L. 1991. The Clean Air Act Amendments of 1990 and the use of market forces to control sulfur dioxide emissions. Harvard J. Regul. 28:235-52
68. Gen. Account. Off., US Congress. 1989. Air Pollution: National Air Monitoring Network is Inadequate. GAO/RCED-90-15
69. Gen. Account. Off., US Congress. 1990. Air Pollution: Improvements Needed in Detecting and Preventing Violations. GAO/RCED-90-155
70. Harrington, W. 1981. The Regulatory Approach to Air Quality Management: A case study of New Mexico. Research Paper R-25, Resour. for the Future, Washington, DC
71. Counc. Environ. Qual. (CEQ). 1990. Environmental Quality. Washington, DC: US Gov. Print. Off.
72. Environ. Protect. Agency (EPA, US). 1990. National Air Quality and Emissions Trends Reports. Off. Air Qual. Plann. Standards, EPA/450/4-90-002
73. Gen. Account. Off., US Congress. 1987. Hazardous Waste: Facility Inspections Are Not Thorough and Complete. GAO/RCED-88-20.
74. Gen. Account. Off., US Congress. 1990. Drinking Water: Compliance Problems Undermine EPA Program as New Challenges Emerge. GAO/RCED-90-127
75. Viscusi, W. K. 1986. The impact of occupational safety and health regulation, 1973-1983. Rand J. Econ. 17:567-80
76. Camara v. Municipal Court of the City and County of San Francisco. 1967. US Supreme Court 387:523-40
77. See v. City of Seattle. 1967. US Supreme Court 387:541-55
78. Air Pollution Variance Board of Colorado v. Western Alfalfa Corp. 1974. US Supreme Court 416:861-66
79. Becker, G. S. 1968. Crime and punishment: An economic approach. J. Polit. Econ. 76:169-217
80. Stigler, G. J. 1970. The optimum enforcement of laws. J. Polit. Econ. 78:526-36
81. Downing, P. B., Watson, W. D. 1974. The economics of enforcing air pollution controls. J. Environ. Econ. Manage. 1: 219-36
82. Storey, D. J., McCabe, P. J. 1980. The criminal waste discharger. Scottish J. Polit. Econ. 27:30-40
83. Harford, J. D. 1978. Firm behavior under imperfectly enforceable pollution standards and taxes. J. Environ. Econ. Manage. 5:26-43
84. Viscusi, W. K., Zeckhauser, R. J. 1979. Optimal standards with incomplete enforcement. Public Policy 27:437-56
85. Beavis, B., Walker, M. 1983. Random wastes, imperfect monitoring and environmental quality standards. J. Public Econ. 21:377-87
86. Russell, C. S. 1990. Monitoring and enforcement. In Public Policies for Environmental Protection, ed. P. R. Portney. Washington, DC: Resour. for the Future
87. Ordeshook, P. C. 1986. Game Theory and Political Theory: An Introduction. New York: Cambridge Univ. Press
88. Stein, A. 1983. Coordination and collaboration: regimes in an anarchic world. See Ref. 1, pp. 115-40
89. Snidal, D. 1985. Coordination versus Prisoners’ Dilemma: Implications for international cooperation and regimes. Am. Polit. Sci. Rev. 79:923-42
90. Oye, K. A. 1986. Explaining cooperation under anarchy: Hypotheses and strategies. In Cooperation Under Anarchy, ed. K. A. Oye, pp. 1-24. Princeton: Princeton Univ. Press
91. Efinger, M., Breitmeier, H. 1991. Verifying a convention on greenhouse gases: A game-theoretic approach. In A Regime to Control Greenhouse Gases: Issues of Verification, Monitoring, Institutions, ed. J. C. Di Primio, G. Stein, pp. 59-68. Proc. Workshop, Bad Neuenahr, June 12-14. Forschungszentrum Jülich, Programmgruppe Technologiefolgenforschung
92. Axelrod, R. 1984. The Evolution of Cooperation. New York: Basic Books
93. Putnam, R. D. 1988. Diplomacy and domestic politics: The logic of two-level games. Int. Organ. 42:427-60
94. Wilson, J. Q. 1990. Bureaucracy. New York: Basic Books
95. Krasner, S. D. 1983. Structural causes and regime consequences: regimes as intervening variables. See Ref. 1, pp. 1-21
96. Keohane, R. O., Nye, J. S. 1977/1989. Power and Interdependence. Glenview, Ill: Scott, Foresman. 2nd ed.
97. Keohane, R. O. 1986. Reciprocity in international relations. Int. Organ. 40:1-27
98. Carnegie Comm. Sci., Technol., Gov. 1992. Report of the Task Force on Nongovernmental Organizations. W. D. Carey and C. M. Mathias, chairs. New York: Carnegie Comm.
99. Kratochwil, F. V. 1989. Rules, Norms, and Decisions: On the Conditions of Practical and Legal Reasoning in International Relations and Domestic Affairs. New York: Cambridge Univ. Press
100. Nadelman, E. A. 1990. Global prohibition regimes: the evolution of norms in international society. Int. Organ. 44:479-526
101. Chayes, A., Chayes, A. H. 1991. Adjustment and compliance processes in international regulatory regimes. In Preserving the Global Environment: The Challenge of Shared Leadership, ed. J. T. Mathews, pp. 280-308. New York: Norton
102. Victor, D. G. 1991. Limits of market-based strategies to slow global warming: The case of tradeable permits. Policy Sci. 24:199-222
103. Crawford, A., MacKinnon, G., Hanson, L., Morris, E. 1987. Compendium of Arms Control Verification Proposals. Vols. 1-3. Operational Research and Analysis Establishment, Extramural paper no. 42. Ottawa, Canada: Dept. Natl. Defence
104. Kokoski, R., Koulik, S., eds. 1990. Verification of Conventional Arms Control in Europe: Technological Constraints and Opportunities. Boulder: Westview Press
105. May, M. M., Harvey, J. R. 1987. Nuclear operations and arms control. In Managing Nuclear Operations, ed. A. B. Carter, J. D. Steinbruner, C. A. Zraket, pp. 704-35. Washington, DC: The Brookings Inst.
106. Newhouse, J. 1973. Cold Dawn: The Story of SALT. New York: Holt, Rinehart and Winston
107. Talbott, S. 1979. Endgame: The Inside Story of SALT II. New York: Harper & Row
108. Dunn, L. A. with Gordon, A. E., eds. 1990. Arms Control Verification and the New Role of On-site Inspection. Lexington, Mass: Lexington Books
109. Graybeal, S. N., Krepon, M. 1988. On-site inspections. See Ref. 15, pp. 92-108
110. Lowenthal, M. M. 1991. The politics of verification: What’s new, what’s not. Washington Q. 14:119-31
111. Kunzendorff, V. 1989. Verification in Conventional Arms Control. Adelphi Papers 245. London: Brassey’s
112. Gen. Account. Off., US Congress. 1991. Arms Control: Intermediate-Range Nuclear Forces Treaty Implementation. GAO/NSIAD-91-262
113. Congressional Budget Office (CBO), United States Congress. 1990. U.S. Costs of Verification and Compliance under Pending Arms Treaties. Washington, DC: CBO
114. Scheinman, L. 1985. The Nonproliferation role of the International Atomic Energy Agency: A Critical Assessment. Washington, DC: Resour. for the Future
115. Scheinman, L. 1987. The International Atomic Energy Agency and World Nuclear Order. Washington, DC: Resour. for the Future
116. Schroeer, D. 1984. Science, Technology and the Nuclear Arms Race. New York: Wiley
116a. Schroeer, D. 1984. See Ref. 116, chapter 14
117. Fischer, D., Szasz, P. 1985. Safeguarding the Atom: A Critical Appraisal. Stockholm Int. Peace Res. Inst. London: Taylor & Francis
118. Pilat, J. F. 1992. Iraq and the future of nuclear nonproliferation: The roles of inspections and treaties. Science 255: 1224-29
119. Schelling, T. C., Halperin, M. H. 1962/1985. Strategy and Arms Control. Washington, DC: Pergamon-Brassey’s (reissue)
120. Tsipis, K., Hafemeister, D. W., Janeway, P., eds. 1986. Arms Control Verification: The Technologies That Make It Possible. Washington, DC: Pergamon-Brassey’s
121. Off. Technol. Assess. (OTA), US Congress. 1990. Verification Technologies: Measures for Monitoring Compliance with the START Treaty. Summary. Washington, DC: OTA
Five Worthy Ways to Spend Large Amounts of Money for Research on Environment and Resources
I envision a large, prosperous economy that treads lightly and emits little or nothing.
The first decade of my career I carried briefcases for William A. Nierenberg (NAE), Robert M. White (NAE), and other leaders in formulating such major research programs as the World Climate Program and the International Geosphere-Biosphere Program. An obscure fact is that in 1983 I was the scribe of Toward an International Geosphere-Biosphere Program: A Study of Global Change, the National Research Council (NRC) report that articulated and named the Global Change venture, one of the largest environmental programs of recent times. Working for the National Academies of Sciences and Engineering, I saw major efforts conceived, including the Human Genome Project, International Decade of Natural Disaster Reduction, and Superconducting SuperCollider. I learned what grand programs can and cannot do, how they are born, and what they cost. I learned that the sealing wax and string, the technical means to do research, open the new niches that we successfully explore.
Spurred by an invitation from the San Diego Science & Technology Council and hoping to rally my colleagues to glory on Earth, I here tell my top five Worthy Ways to spend large amounts of money for research on environment and resources. My top five span the oceans, land, human health, energy, and transport. All demand teams of engineers and scientists. Let’s
- count all the fish in the sea;
- verify that the extension of humans into the landscape has begun a Great Reversal and anticipate its extent and implications during the next century;
- assess national exposure of humans to bad things in the environment;
- build 5-gigawatt zero-emission power plants the size of an automobile; and
- get magnetically-levitated trains (Maglevs) shooting through evacuated tubes.
These Worthy Ways cohere in the vision of a large, prosperous economy that treads lightly and emits little or nothing.
1. Marine Census
In December 1998 I sailed for a week above the Arctic Circle in the Norwegian Sea, precisely counting herring in the dark. Over the decades of the Cold War, Norwegians honed their submarine acoustics, listening for Soviet vessels motoring out of Murmansk. This technology, integrated with others, makes possible the first-ever reliable worldwide Census of Marine Life. I prefer to say Census of the Fishes, conjuring beautiful images for Everyman. But humanity needs to understand the diversity, distribution, and abundance of squids, jellies, and turtles, too, and so, deferring to accurate colleagues, I call this first Worthy Way the Census of Marine Life. But let me make the case primarily for fishes.
Many of the world’s leading ichthyologists gathered at Scripps Institution of Oceanography in La Jolla, California, in March 1997 to consider what is known and knowable about the diversity of marine fishes.[1] The attendees reported how many species are known in each region of the world’s oceans and debated how many might remain undiscovered. Known marine species total about 15,000. The meeting concluded that about 5,000 remain undiscovered. I find this prospect of discovering 5,000 fishes a siren call, a call to voyages of discovery in little explored regions of the Indian Ocean, along the deeper reaches of reefs, and in the mid-waters and great depths of the open oceans. The adventures of discovery of Cook, Darwin, and the explorers of Linnaeus’s century are open to our generation, too.
The urgency of coping with changes in the abundance of fish amplifies the adventure of discovery. In August 1998 at the Oceanographic Institution in Woods Hole we advanced the concept of the Census at a workshop on the history of fished populations, some 100-200 of the 15-20 thousand species. From this history, the assembled experts estimated that fish biomass in intensively exploited fisheries is about one tenth its pre-exploitation level.[2] That is, the fish in seas where commercial fishermen do their best (or worst) to make a living now weigh only 10% of the fish they sought in those seas a few decades or a few hundred years ago.
Diverse observations support this estimate. For example, the diaries of early European settlers describe marvelous fish sizes and abundance off New England in the 1600s. From Scotland to Japan, commercial records document enormous catches with simple equipment over many centuries. Even now, when fishers discover and begin fishing new places, they record easy and abundant catches, for example, of orange roughy on Pacific seamounts. Scientific surveys of fish stocks also indicate fewer and fewer spawning fish (mothers) compared to recruits (their offspring). The ratio of spawners to recruits has fallen to 20% and even 5% of its level when surveys began. A great marine mystery is what has happened to the energy in the ecosystem formerly embodied in the commercial fish.
The two dramatic numbers of the “missing,” the 5,000 undiscovered fishes and the lost 90% of stocks, suggest the value of a much better and continuing description of life in the oceans. So, I propose a worldwide Census. The Census would describe and explain the diversity, distribution, and abundance of marine life, especially at the upper trophic levels. Preoccupied by possible climatic change and the reservoirs of carbon that influence it, we have tended to assess life in the oceans in gigatons of carbon, neglecting whether the gigatons are in plankton, anchovies, or swordfish. I care what forms the carbon takes.
Three questions encapsulate the purpose of the Census. What did live in the oceans? What does live in the oceans? What will live in the oceans? The three questions mean the program would have three components. The first, probably not large or expensive, would be paleo, reconstructing the history of marine animal populations since human predation became important, say, over the past 500 years.
The second, and expensive, part of the program would answer “What does live in the oceans?” with observations lasting a few years, perhaps around the year 2005. We would observe the many parts of the oceans where we have so far barely glimpsed the biology, for example, the open oceans and mid-waters, while strengthening efforts by national fisheries agencies that struggle with meager funds, personnel, and equipment to examine areas near shore where many species of commercial import concentrate.
As a maximalist, I hope to see integration and synchronization of technologies, platforms, and approaches. Acoustics are paramount, because every fish is a submarine, and acousticians can now interpret tiny noises 100 kilometers away. Optics also can detect much. For example, airborne lidars now range far, fast, and perhaps as deep as 50 meters; they can also detect inexpensively if flown on drones. And, least expensive of all, smart and hungry animals are themselves motivated samplers of their environments, and we know what they sampled if we tag them. The benefits of the technologies soar if integrated. For example, acoustics, optics, and molecular and chemical methods can combine to identify species reliably from afar.
Answering the third question, “What will live in the oceans?” requires the integration and formalization that we call models. So, the Census would also have a component to advance marine ecosystem and other models to use the new data to explain and predict changes in populations and relations among them.
A major outcome of the Census would be an on-line three-dimensional geographical information system which would enable researchers or resource managers anywhere to click on a volume of water and bring up data on living marine resources reported in that area. Additionally, the observational system put in place for scientific purposes could serve as the prototype for a continuing diagnostic system observing living marine resources. A proper worldwide Census might cost a total of $1 billion over ten years. Costly, complicated observational programs prudently begin with pilot projects, to test both techniques and political will.
Not only technology and stressed fisheries but also an international treaty to protect biodiversity make the time ripe for this Worthy Way. Biodiversity now finds itself with many signatories to its Convention, but uncharted national obligations and resources. Acousticians, marine engineers, marine ecologists, taxonomists, statisticians, and others should join their talents to make the Census of Marine Life happen. In fact, some of us, supported by the Alfred P. Sloan Foundation, are trying.[3]
2. The Great Reversal
Humanity’s primitive hunting of the oceans has damaged marine habitats and populations. Fortunately on the land where humanity stands, engineering and science have infused farming and logging, so initiating the Great Reversal. The Great Reversal refers to human contraction in Nature, after millennia of extension. My measure is area, square kilometers or hectares. Simple area is the best single measure of human disturbance of environment.[4]
People transform land by building, logging, and farming.[5] First, let me speak briefly about the spread of the built environment, of “cities” broadly speaking. This includes land not only for roads, shopping centers, and dwellings, but also for lawns, town gardens, and parks. In the USA the covered land per capita ranges from about 2,000 m2 in states where travel is fast, like Nebraska, to less than a third as much in slower, more urban New York. The 30 million Californians, who epitomize sprawl, in fact average 628 m2 of developed land each, about the same as New Yorkers.
The transport system and the number of people basically determine covered land. Greater wealth enables people to buy higher speed, and when transit quickens, cities spread. Average wealth and numbers will grow. So, cities will take more land.
What are the areas of land that may be built upon? The USA is a country with a fast growing population, expecting about another 100 million people over the next 75 years, when the world is likely to have about 10 billion. At the New York and California rate of 600 m2 each, the USA increase would consume only 6 million hectares, about the area of West Virginia or 15% of California. Globally, if everyone builds at the present California rate, 4 billion more people would cover about 240 million hectares, midway in size between Mexico and Argentina, 6 to 7 Californias.
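The arithmetic above is easy to check. A minimal sketch, using only the 600 m2 per-person rate and the population figures quoted in the text:

```python
# Back-of-envelope check of the land-consumption figures in the text.
# 1 hectare = 10,000 m2; the 600 m2/person rate is the New York and
# California figure quoted above.

M2_PER_HECTARE = 10_000

def developed_hectares(people: int, m2_per_person: float = 600.0) -> float:
    """Total developed land, in hectares, for a given population."""
    return people * m2_per_person / M2_PER_HECTARE

# USA: ~100 million additional people over the next 75 years
usa_increase = developed_hectares(100_000_000)      # 6 million hectares

# World: ~4 billion additional people building at the California rate
world_increase = developed_hectares(4_000_000_000)  # 240 million hectares

print(f"USA increase:   {usa_increase / 1e6:.0f} million ha")
print(f"World increase: {world_increase / 1e6:.0f} million ha")
```

The 6 million hectares indeed matches the area of West Virginia, and 240 million hectares falls between Mexico and Argentina, as the text says.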
By enduring crowding, urbanites spare land for nature. Enduring more crowding, they could spare more. Still, cities will take more land. Can changes in logging and farming offset the urban sprawl?
Forests are cut to clear land for farms and settlements and also for fuel, lumber, and pulp.[6] In America, from the time of European settlement until 1900 we chopped fervidly and made Paul Bunyan a hero. In the 20th century, however, America’s forested area has remained level, and since 1950 the volume of wood on American timberland has grown 30%. In the same interval, European forests have increased about 25% in volume. In the USA, the intensity of use of wood, defined as the wood product consumed per dollar of GDP, has declined about 2.5% annually since 1900. In 1998 an average American consumed half as much timber for all uses as a counterpart in 1900.
In the USA, the likely continuing fall in intensity of use of forest products should more than counter the effects of growing population and affluence, leading to an average annual decline of perhaps 0.5% in the amount of timber harvested for products. A conservative 1.0% annual improvement in forest growth would compound the benefits of steady or falling demand and could shrink the area affected by logging 1.5% annually. Compounded, the 1.5% would shrink the extent of logging by half in 50 years. If half of this amount occurs by leaving areas now cut uncut, the area spared is 50 million hectares, a third more than the area of California. Changing technology, taste, and economics create similar timberland patterns in numerous countries. Since 1990 forests have increased in 44 of 46 temperate countries, the exceptions being the Czech Republic and Azerbaijan.
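The compounding in the paragraph above is easy to verify; a back-of-envelope sketch using the stated rates:

```python
# Compound a -0.5%/yr demand decline with a +1.0%/yr yield improvement:
# the area affected by logging shrinks about 1.5%/yr, halving in ~50 years.
demand_growth = -0.005   # timber harvested for products, per year
yield_growth = 0.010     # forest growth per hectare, per year

# Area needed scales as demand divided by yield per hectare.
area_factor_per_year = (1 + demand_growth) / (1 + yield_growth)   # ~0.985

area_after_50yr = area_factor_per_year ** 50
print(f"Area after 50 years: {area_after_50yr:.2f} of today's")   # ~0.47, about half
```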
Rising productivity of well-managed forests should comfortably allow 20% or less of today’s forest area of about 3 billion hectares to supply world commercial wood demand in the middle of the 21st century.[7] Unmanaged forests now yield yearly an average of 1-2 cubic meters (m³) of commercially valuable species per hectare. Potential in secondary temperate forests ranges between 5 and 10 m³. Many commercial plantation forests now reliably produce more than 20 m³ per year, and experimental plots have yielded over 60 m³.
In poor regions of tropical countries such as Brazil, Indonesia, and Congo, the dominant force stressing forests remains the struggle to subsist. During the last couple of decades, the removal of tropical forests has been estimated at 1% per year. Until overcome by better livelihoods, cheap land, cheaper fuels, superior alternatives to wood in the marketplace, or taboos, the one-time conversion of forests into money, cropland, or fuel will continue. Nevertheless, the global expansion of forests and rising incomes are encouraging. Indeed, about 165 million hectares once used for crops and pasture have reverted to secondary forest in Latin America alone, an area more than ¾ the size of Mexico, 4 Californias, or 1 Alaska.[8]
This brings us to farms. For centuries, farmers expanded cropland faster than population grew, and thus cropland per person rose. Fifty years ago, farmers stopped plowing up more nature per capita, initiating the Great Reversal (Figure 1). Meanwhile, growth in calories in the world’s food supply has continued to outpace population, especially in poor countries. Per hectare, farmers have lifted world grain yields about 2 percent annually since 1960.
Frontiers for agricultural improvement remain wide open, as average practice moves steadily toward the present yield ceiling and the ceiling itself keeps rising. On the same area, the average world farmer consistently grows about 20% of the corn of the top Iowa farmer, and the average Iowa farmer advances in tandem about 30 years behind the yields of his or her most productive neighbor. While an average Iowa corn farmer now grows 8 tons per hectare, top producers grow more than 20 tons, compared with a world average for all crops of about 2. On one hectare the most productive farmers now produce a year’s worth of calories for 80 people, where their grandparents struggled to produce the calories for 3.
High and rising yields are today the fruit of precision agriculture. Technology and information help the grower use precise amounts of inputs (fertilizer, pesticides, seed, water) exactly where and when they are needed. Precision agriculture includes grid soil sampling, field mapping, variable rate application, and yield monitoring, all tied to global positioning. Precision agriculture is frugal with inputs, like the other forms of lean production that now lead world manufacturing.
If during the next 60 to 70 years the world farmer reaches the average yield of today’s USA corn grower (less than 40% of today’s ceiling), ten billion people eating on average as people now do will need only half of today’s cropland. The land spared exceeds Amazonia. This sparing will happen if farmers maintain the yearly 2% worldwide growth of grain yields achieved since 1960. In other words, if innovation and diffusion continue as usual, feeding people will not stress habitat for Nature. Even if the rate of improvement falls by half, an area the size of India, globally, will revert from agriculture to woodland or other uses. A meaty USA diet of 6,000 primary calories/day doubles the difficulty or halves the land spared.
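The cropland-sparing claim can also be checked directly. A sketch assuming 65 years of the 2%/yr yield growth and the rise from today’s 6 billion people to 10 billion, both figures from the text:

```python
# Check the cropland-sparing arithmetic: 2%/yr yield growth over ~65 years
# versus population rising from 6 to 10 billion.
years = 65
yield_multiplier = 1.02 ** years      # ~3.6x today's yields per hectare
population_factor = 10e9 / 6e9        # ~1.67x more mouths to feed

# Cropland needed scales with people and inversely with yield.
cropland_needed = population_factor / yield_multiplier
print(f"Cropland needed: {cropland_needed:.2f} of today's")  # ~0.46, about half
```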
In summary, globally, if an additional 4 billion people pave and otherwise develop land at the present rate of Californians, cities will consume about 240 million hectares. This area appears likely to be offset by land spared from logging in the USA and other countries that now reduce their cutting of forests. The likely added land spared from crops globally over the time it takes to reach 10 billion people suggests a net worldwide return to Nature of lands equal to India or more than 6 Californias.
On land as in the oceans, anecdotes, affection for Nature, and the plight of the poor farmer and logger will impel nations to spend and prohibit. The goal of my second Worthy Way, verifying and forecasting the probable extent of the Great Reversal, is first guiding and then strengthening these actions so they produce the hoped-for conservation and restoration unalloyed by the disillusionment of failure. The distribution of the lands spared will greatly affect the chances recreated for flora and fauna.
The research for the Great Reversal includes observations as well as experiments and analyses. In many parts of the world, routine aerial surveying of land use, confirmed by ground measurements, remains far from complete or usefully periodic. Geographers, foresters, agronomists, ecologists, agricultural and civil engineers, and technologists need to agree on definitions, protocols, and priorities for building the world land information system. The long-term behavior and potential of intensively managed forests exemplify the need for experiment and analysis.
International frameworks for studying the Great Reversal exist in the Global Change program and in joint efforts of the World Bank and World Wildlife Fund for forest conservation. These programs hunger for a feasible, attractive technical vision. Excluding costs for satellites, which I believe have anyway already contributed the answers they are likely to contribute to this question, my guess is that for about $100 million we could verify the Great Reversal and forecast its probable extent. The information would chart a new sound and grand strategy for conserving the landscape and the other animals with which we share it.
3. Human Exposure Assessment
My first two Ways to spend have been Worthy because they would deepen our understanding of sea and land and create the context for protecting other life while we feed ourselves. My third Worthy Way to spend concerns what we humans absorb from the environment. Recall our high fears and outlays for ionizing radiation, pesticides, and asbestos.
Like other animals, we take in water, food, air, and dust. Given our genes, we are what we eat in the broadest sense. Yet, little research chronicles actual human exposures. Exposure estimates often trace back to very indirect measures, such as chimney emissions. And our habits and habitats seem overlooked. Consider where Americans spend 24 hours (Figure 2). One wonders why so much exposure measurement and regulation have concentrated on traffic intersections when we are usually home sleeping. Moreover, exposures even to a single chemical may occur from contact with several media (air, water), via several pathways (hand-to-mouth transfers, food), and through several routes (inhalation, oral, dermal).
To gather information about the magnitude, extent, and causes of human exposures to specific pollutants and measure the total “dose” of selected pollutants that Americans receive, in 1994 the Environmental Protection Agency (EPA) launched a National Human Exposure Assessment Survey (NHEXAS).[9] Its ultimate goal is documenting the status and trends of national exposure to risky chemicals both to improve risk assessments and to evaluate whether risk management helps.
For pilot studies, EPA chose metals, volatile organic compounds, pesticides, and polynuclear aromatics because of their toxicity, prevalence in the environment, and relative risk to humans, at least as EPA and perhaps the public believe. I never forget Bruce Ames’ work showing that 99.99% of the pesticides we ingest are natural.[10] In any case, EPA’s chosen classes of compounds and the expected combination of chemicals, exposure media, and routes of exposure would demonstrate and challenge currently available analytical techniques.
Phase I, demonstration and scoping projects, may already be the most ambitious study of total human exposure to multiple chemicals on a community and regional scale. It has focused on exposure of people to environmental pollutants during their daily lives. Survey participants wore “personal exposure monitors” to sample their microenvironments. Meanwhile, NHEXAS researchers measured levels of chemicals to which participants were exposed in their air, foods, water and other beverages, and in the soil and dust around their homes. They also measured chemicals or their metabolites in blood and urine provided by participants. Finally, participants completed time-activity questionnaires and food diaries to help identify sources of exposure to chemicals and to characterize major activity patterns and conditions of the home environment. Several hundred Arizonans, several hundred Midwesterners, and 60 Marylanders participated. Sample collection began in 1995 and continued into early 1998. Publications are expected soon and databases in 2000.
The main purpose of the pilot study is to find the best way to conduct the full national human exposure assessment survey. Implementing representative monitoring projects to estimate the magnitude, duration, frequency, and the spatial and temporal distribution of human exposures for the USA will be a large task, involving chemists, biologists, statisticians, and survey researchers. I hope clever engineers can lighten, integrate, and automate the measurement and speed reporting.
I learned of NHEXAS while serving for three years on the executive committee of EPA’s Science Advisory Board. NHEXAS was an unpolished diamond in a lackluster research portfolio. Neither EPA’s leadership nor the Congress appreciated the Survey, so it has proceeded slowly and on minimal funding. I guess the cost to perform NHEXAS right might be $200 million over 6-7 years. I believe the USA should make a strong commitment to it, though not exactly as underway. It needs a less “toxic” bias. A national scientific conference to adjust and advance the concept might be timely.
The eventual outcomes of NHEXAS should include a comprehensive total human exposure database and models that accurately estimate and predict human exposures to environmental chemicals for both single and multiple pathways. The models would link environmental and biological data with information on human activity to estimate total human exposures to various chemicals and combinations and thus contribute to better risk assessments. We can establish proper baselines of normal range of exposure and identify groups likely to be more exposed.
We know surprisingly little about our exposures. For decades researchers have measured and tracked pollutants one at a time, often faddishly. This third Worthy Way can reduce the uncertainty about exposure and indeed make exposure a science. Understanding aggregate exposures, we may find surprisingly powerful levers to reduce ambient bads or increase goods.
4. ZEPPs
One way to finesse the question of exposure, whether for humans or green nature, is with industries that generate zero emissions. A growing gang of us has been promoting the concept of industrial ecology, in which waste tends toward zero, either because materials that would become waste never enter the system, or because one manufacturer’s wastes become food for another in a nutritious industrial food chain, or because the wastes are harmless. For this human, yours truly, I certainly want zero emissions of poisonous elements such as lead and cadmium.
For green nature exposed outdoors, however, the giga-emission is carbon, and I shall exemplify zero emission by tackling the giga-emission with my fourth Worthy Way to spend.
Today industries annually emit about 6 gigatons of carbon to the atmosphere, or a ton per each of the planet’s 6 billion people. The mounting worry is that these and more gigatons likely to be emitted will make a punishing climate for nature exposed outdoors.
Most of the carbon comes, of course, from fuel to energize our economies, and an increasing portion of the energy is in the form of electricity. Since Thomas Edison, the primary energy converted to electricity has grown in two sequential, long S-curves until it is now about 40% of all energy humanity uses. Although electricity consumption has recently leveled at the top of its second S-curve, I believe it will maintain an average 2-3% annual growth through the 21st century. In the information era, consumers will surely convert even more of their primary energy to electricity. And, after all, two billion people still have no electricity. A hundred years at 2-3% growth per year would raise the world average per capita electricity consumption of the 10 billion or so people in the year 2100 only to today’s average USA per capita consumption.
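The closing claim above can be sketched numerically. The growth rate and populations are from the text; the per-capita consumption figures (roughly 2,000 kWh/yr for the world and 12,000 kWh/yr for the USA in the late 1990s) are my own ballpark assumptions, not the author’s:

```python
# Sketch of the century-scale electricity arithmetic.
# ASSUMED (not from the text): world ~2,000 kWh/person-yr today,
# USA ~12,000 kWh/person-yr, as late-1990s ballpark figures.
growth = 0.025                        # midpoint of the 2-3%/yr range
total_factor = (1 + growth) ** 100    # ~11.8x total consumption by 2100

population_factor = 10e9 / 6e9        # 6 -> 10 billion people
per_capita_factor = total_factor / population_factor   # ~7x per person

world_per_capita_2100 = 2_000 * per_capita_factor
print(f"~{world_per_capita_2100:,.0f} kWh/person-yr in 2100")  # near today's USA level
```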
Remembering that my fourth Worthy Way was to eliminate carbon emission, I ask what fuel generates the electricity. The evolving shares of primary energy sources, with ever more hydrogen per carbon atom, gradually and desirably decarbonize the energy system from wood and hay to coal to oil to natural gas.[11] Nuclear, probably, or possibly some other non-carbon alternative will eventually close the hydrocarbon fuel era. In the interim, however, can we find technology consistent with the evolution of the energy system to economically and conveniently dispose of the carbon from making kilowatts? This is my fourth Worthy Way: finding a practical means to dispose of the carbon from generating electricity, consistent with the future context. The Way is what I and my associates call ZEPPs, Zero Emission Power Plants.
The first step on the road to ZEPPs is focusing on natural gas simply because it will be the dominant fuel, providing perhaps 70% of primary energy around the year 2030.[12] Although natural gas is far leaner in carbon than other fossil fuels, when natural gas does provide 70% of primary energy, CO2 emission from it will be about 75% of total CO2 emissions.
A criterion for ZEPPs is working on a big scale. A peak use of, say, 30 × 10¹² m³ of natural gas in 2060 corresponds to 2 to 3 times today’s carbon emission to dispose of annually. Even in 2020, we could already need to dispose of carbon from gas alone equal to half today’s emission from all fuel.
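A rough check of that scale is possible. The gas volume and the ~6 GtC/yr baseline are from the text; treating natural gas as methane with a density of ~0.72 kg/m³ at standard conditions and a carbon mass fraction of 12/16 is my own assumption:

```python
# Rough check: does 30 x 10^12 m^3/yr of natural gas imply 2-3 times
# today's ~6 GtC/yr emission?
# ASSUMED (not from the text): gas treated as methane (CH4),
# density ~0.72 kg/m^3 at standard conditions, carbon fraction 12/16 by mass.
gas_m3 = 30e12
kg_per_m3 = 0.72
carbon_fraction = 12 / 16

carbon_gt = gas_m3 * kg_per_m3 * carbon_fraction / 1e12   # kg -> gigatons
ratio = carbon_gt / 6                                     # versus today's ~6 GtC/yr
print(f"~{carbon_gt:.0f} GtC/yr, {ratio:.1f}x today's emission")
```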
Big total use means big individual ZEPPs because the size of generating plants grows even faster than use. Although the last wave of power station construction reached about 1.5 gigawatts (GW), growth of electricity use for the next 50 years can reasonably raise plant size to about 5 GW (Figure 3). For reference, the New York metropolitan area now draws above 12 GW on a peak summer day.
Plants grow because large is cheap if technology can cope. Crucial for controlling emission, one big plant emits no more than many small plants but emission from one is easier to collect. We cannot solve the carbon question if we need to collect emissions from millions of microturbines.
So far, I’ve specified my Way to spend as a search for big ZEPPs fueled by natural gas. But bigger ZEPPs mean transmitting immense power from larger and larger generators through a large steel shaft spinning at a speed such as 3,000 revolutions per minute (RPM).
The way around the limits of mechanical power transmission may be shrinking the machinery. Begin with a very high pressure CO2 gas turbine in which the fuel burns with oxygen. Needed pressures range from 40 to 1000 atm, at which the CO2 would recirculate as a liquid. The liquid combustion products would be bled out.
Fortunately for transmitting power, the very high pressures shrink the machinery in a revolutionary way and permit very fast RPMs for the turbine. The generator could then also turn very fast, operating at high frequency, with appropriate power electronics to slow the output to 50 or 60 cycles. People have seen the attraction of higher RPMs for a while. High RPM generators are included in the last version of a gas turbine of the High Temperature Reactor of the General Atomics corporation.
Materials issues lurk, and solutions are expensive to test. The envisioned operating temperature of 1500°C is what challenges engineers in aviation. Fortunately, the Japanese have recently reported a tough, thermally conductive ceramic strong up to 1600°C in air.[13] Problems of stress corrosion and cracking will arise.
Although combustion within CO2 does not appear a general problem, some problems may arise at the high temperatures and pressures. Also, no one has yet made burners for pressures as high as we consider. Power electronics to slow the cycles of the alternating current raises big questions. So far, the cost of power electronics exceeds the benefit. The largest systems for conversion between alternating and direct current are now 1.5 GW and can handle 50-60 cycles. Conversion costs are about $100 per kilowatt (kW), a big increment to the present $200 per kW for a simple turbine and $300-$400 for a combined cycle generator. Present limits of about 100 hertz are not fast enough to convert 30,000 RPM to 3,600 RPM. What we envision is beyond the state of the art, but power electronics is still young, meaning expensive and unreliable, and we are thinking of the year 2020 and beyond, when this Worthy Way could make it mature, cheap, and reliable. Already engineers consider post-silicon power electronics with diamond plasma switches.
The requisite oxygen for the ZEPP, say, 1,000 tons/hr for a 5 GW plant, also exceeds present capacity, about 250 tons/hr by cryoseparation, but could be supplied. Moreover, the cryogenic plant may introduce a further benefit. The power equipment suppliers tend to think of very large and slowly rotating machines for high unit power. The core of the problem is the mechanical resistance of materials. Here we might resort to superconductors, which become more attractive with a cryogenic plant nearby.
With a ZEPP fueled by natural gas transmitting immense power at 60 cycles, the next step is sequestering the waste carbon. Because of the high pressure, the waste carbon is, of course, already easily handled liquid carbon dioxide. In principle aquifers can store CO2 forever if their primary rocks are silicates, which with CO2 become stable carbonates and silica (SiO2). The process is the same as rocks weathering in air. The Dutch and Norwegians have done a lot on CO2 injection in aquifers, and the Norwegians have already started injecting.
Opportunity for storing CO2 will join access to customers and fuel in determining plant locations. Fortunately, access to fuel may become less restrictive. Most natural gas travels far through a few large pipelines, which makes these pipelines the logical sites for generators. The expanding demand will require a larger and wider network of pipelines, opening more sites for ZEPPs.
Another criterion is overall projected plant efficiency. Colleagues at Tokyo Electric Power calculate the efficiency of the envisioned ZEPP could be 70%.
In short, the fourth Worthy Way is a supercompact (1-2 m diameter), superpowerful (potentially 10 GW or double the expected maximum demand), superfast (30,000 RPM) turbine putting out electricity at 60 cycles plus CO2 that can be sequestered. ZEPPs the size of an automobile, attached to gas pipelines, might replace the fleet of carbon emitting non-nuclear monsters now cluttering our landscape.
We propose starting the introduction of ZEPPs in 2020, leading to a fleet of 500 5-GW ZEPPs by 2050. This does not seem an impossible feat for a world that built today’s worldwide fleet of some 430 nuclear power plants in about 30 years. Combined with the oceans safely absorbing 2-3 Gt C yearly, ZEPPs, together with another generation of nuclear power plants in various configurations, could halt the rise of atmospheric CO2 near the year 2050 at 450-500 ppm without sacrificing energy consumption.
Research on ZEPPs could occupy legions of academic researchers, and restore an authentic mission to the DOE’s National Laboratories, working on development in conjunction with companies such as General Electric, Air Products, and General Atomics. The fourth Worthy Way to spend merits tens of billions in R&D, because the plants will form a profitable industry worth much more to those who can capture the expertise to design, build, and operate ZEPPs. Like all my Worthy Ways, ZEPPs need champions.
To summarize, we have searched for technologies that handle the separation and sequestration of amounts of carbon matching future fuel use. Like the 747 jumbojets that carry about 80% of passenger kilometers, compact ultrapowerful ZEPPs could be the workhorses of the energy system in the middle of the next century.
5. Maglevs
Cutting emissions and the footprints of farming, logging, and power, we naturally also wonder about transport. Transport now covers Earth with asphalt ribbons and roars through the air leaving contrails that could prove harmful. With cars shifting to fuel cells fed with hydrogen over the next few decades, the air transport system and its jet fuel can become emissive enemy #1. Fortunately the time is right for innovation in mobility, my fifth Worthy Way.
Since 1880, including walking, USA per capita mobility has increased 2.7% per year, and French mobility about the same. Europeans currently travel about 35 km per day at a mean speed of about 35 km per hour, because people travel about 1 hour per day. Of this, Europeans fly only about 20 seconds, or 3 km, per day. A continuing rise in mobility of 2.7% per year means a doubling in 25 years, and the additional 35 km per day, if flown, adds about 3 minutes on a plane. Three minutes per day equal about one round-trip per month per passenger. Americans already fly 70 seconds daily, so 3 minutes certainly seems plausible for the average European a generation from now. The jetset in business and society already flies a yearly average of 30 minutes per day. However, for the European air system, the projected level requires a 14-fold increase in 25 years, or about 12% per year. The USA would need a 20-fold increase in 50 years. A single route that carries one million passengers per year per direction would require 60 take-offs and landings of jumbojets. The jumbos would need to take off like flocks of birds. Unlikely. We need a basic rethinking of planes and airport logistics.
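The growth rates above compound as follows; a short sketch using the text’s own figures:

```python
import math

# Check the mobility arithmetic: 2.7%/yr growth doubles mobility in ~25
# years, and the projected 14-fold rise in European flying over 25 years
# implies roughly 11-12% annual growth of the air system.
growth = 0.027
doubling_years = math.log(2) / math.log(1 + growth)
print(f"Doubling time: {doubling_years:.0f} years")    # ~26 years

air_factor = 14                                        # 14-fold in 25 years
annual_rate = air_factor ** (1 / 25) - 1
print(f"Required air growth: {annual_rate:.1%}/yr")    # ~11%, close to the 12% cited
```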
The history of transport can be seen as a striving to bring extra speed to the progressively expanding level of income within the fixed amount of time we are willing to expose ourselves to travel.[14] According to a rhythmic historical pattern (Figure 4), a new, fast transport mode should enter about 2000. The steam locomotive went commercial in 1824, gasoline engine in 1886, and jet in 1941. In fact, in 1991, the German Railway Central Office gave the magnetic levitation (maglev) system a certificate of operational readiness and a Hamburg-Berlin line is now under construction.[15],[16] The essence of the maglev is that magnets lift the vehicle off the track, thus eliminating friction, and that activation of a linear sequence of magnets propels the vehicle.
Maglevs have many advantages: not only high mean speed but acceleration, precision of control, and absence of noise and vibration. They can be fully passive to forces generated by electrical equipment and need no engine on board. Maglevs also provide the great opportunity for electricity to penetrate transport, the end-use sector from which it has been most successfully excluded.
The linear induction motors that propel maglevs can produce speeds in excess of 800 km per hour and, in low-pressure tunnels, thousands of km per hour. In fact, electromagnetic linear motors can exert a pull on a train independent of its speed, something a traditional electric or internal combustion engine cannot deliver. The new motors thus allow constant acceleration. Constant acceleration maglevs (CAMs) could accelerate for the first half of the ride and brake for the second, offering a very smooth ride at high accelerations.
High speed does entail problems: aerodynamic and acoustic as well as energetic. In tunnels, high speed requires large cross sections. The neat solution is partially evacuated tubes, which must be straight to accommodate high speeds. Low pressure means a partial vacuum comparable to an altitude of 15 thousand meters. Reduced air pressure helps because above about 100 km per hour the main energy expense in propelling a vehicle is air resistance. Low pressure directly reduces resistance and opens the door to high speed with limited energy consumption. Tunnels also solve the problem of landscape disturbance. CAMs operating in evacuated tubes are my fifth Worthy Way.
For a subsurface network of such maglevs, the cost of tunneling will dominate. The Swiss are actually considering a 700 km system.[17] For normal high-speed tunnels, the cross-section ratio of tunnel to train is about 10 to 1, to handle the shock wave. With a vacuum, however, even CAMs could operate in small tunnels fitting the size of the train. In either case the high fixed cost of the infrastructure will require the system to run where traffic is intense or where huge flows can be created, that is, on trunk lines. Because the vehicles will be quite small, they would run very often. In principle, they could fly almost head-to-tail, ten seconds apart.
Initially, maglevs will likely serve groups of airports, a few hundred passengers at a time, every few minutes. They might become profitable at present air tariffs at 50,000 passengers per day. In essence maglevs will be the choice for future Metros, at several scales: urban, possibly suburban, intercity, and continental.
The vision is small vehicles, rushing from point to point. Think of the smart optimizing elevators in new skyscrapers. Alternatively, the physical embodiment resembles, conceptually, that of particle accelerators, where “buckets” of potential fields carry bunches of charged particles. Maglevs may come to be seen as spin-offs of the physics of the 1970s and 1980s, as transistors are seen as realizations of the quantum mechanics of the 1920s and 1930s.
With maglevs, the issue is not the distance between stations, but waiting time and mode changes, which must be minimized. Stations need to be numerous and trips personalized, that is, zero stops or perhaps one.
Technically, among several competing designs the side-wall suspension system with null-flux centering, developed in the United States by the Foster-Miller company, seems especially attractive: simple, easy access for repair, and compact.[18] Critically, it allows vertical displacement and therefore switches with no moving parts. Vertical displacement can be precious for stations, where trains would pop up and line up, without pushing other trains around. It also permits a single network, with trains crossing above or below. Alternatively, a hub-and-spoke system might work. This design favors straight tubes and one change.
The suspension system evokes a comparison with air. Magnetic forces achieve low-cost hovering. Planes propel by pushing air back. Momentum corresponds to the speed of the air pushed back, that is, energy lost. Maglevs do not push air back, but in a sense push Earth, a large mass, which can provide momentum at negligible energy cost. The use of magnetic forces for both suspension and propulsion appears to create great potential for low travel-energy cost, conceptually reduced by 1-2 orders of magnitude with respect to energy consumption by airplanes with similar performance.
Because maglevs carry neither engines nor fuel, the vehicle can be light and the total payload mass high. Airplanes at takeoff, cars, and trains all now weigh about 1 ton per passenger transported. A horse was not much lighter. Thus, the cost of transport has mainly been the cost of moving the vehicle itself. Maglevs might weigh 200 kg per passenger.
At intercity and continental scale, maglevs could provide supersonic speeds where supersonic planes cannot fly. For example, a maglev could fuse all of mountainous Switzerland into one functional city in ways that planes never could, with 10 minute travel times between major present city pairs.
Traveling in a CAM for 20 minutes, enjoying accelerations like those of a sports car, a woman in Miami could go to work in Boston and return to cook dinner for her children in the evening. Bostonians could symmetrically savor Florida, daily. Marrakech and Paris would work, too. With appropriate interfaces, the new trains could carry hundreds of thousands of people per day, preserving cultural roots without impeding work and business in the most suitable places.
Seismic activity could be a catch. In areas of high seismic activity, such as California, safe tubes (like highways) might not be a simple matter to design and operate.
Although other catches surely will appear, maglevs should displace the competition. Intrinsically, in the CAM format they have higher speed and lower energy costs and could accommodate density much greater than air. They could open new passenger flows on a grand scale during the 21st century with zero emissions and minimal surface structures.
We need to prepare a transport system that can handle huge fluxes of traffic. A 2.7% per year growth in passenger kilometers traveled means not only doubling of mobility in 25 years but 16 times in a century, which is the rational time for conceiving a transport system. The infrastructures last for centuries. They take 50-100 years to build, in part because they also require complementary infrastructures. Moreover, the new systems take 100 years to penetrate fully at the level of the consumer. Railroads began in the 1820s and peaked with consumers in the 1920s.
It is time for my fifth Worthy Way: to conceive in detail maglevs for America, and to develop the required skills, such as tunnelling. Universities should be producing the needed engineers, operations researchers, and physicists, and government should partner with industry on the prototypes.
Like ZEPPs, maglevs will bring huge revenues to those who can design, build, and operate them, anywhere in the world.
Closing Remarks
A worldwide Census of Marine Life can reawaken the adventure of the Age of Discovery and teach us how to spare marine habitats. A study of the Great Reversal of human extension into the landscape can inspire us to lift yields and spare land for Nature. The National Human Exposure Assessment Survey can show what we absorb and how to spare exposures. ZEPPs can generate many gigawatts without harmful emissions, sparing the climate. And maglevs can multiply our mobility while sparing air and land. These Worthy Ways to spend on environment and resources cohere in the vision of a large prosperous human economy that treads lightly and emits little or nothing.
Research is a vision or dream in which we, like Leonardo da Vinci, simulate a machine first in our mind. Leonardo’s powers of visualization, one might say experiment, were so great that the machines work, even if the letting of contracts and construction is delayed 500 years. Building machines is often costly. Dreaming is cheap. Let us start now with these Five Worthy Ways to Spend, which can make dreams of improving the human condition and environment so irresistibly beautiful and true that societies, especially America, hasten to let the contracts and build the machines that can spare planet Earth, soon instead of after a delay of 500 years.
Acknowledgements: This essay was initially prepared as an address to the San Diego Science & Technology Council, La Jolla, California, 9 December 1998. Thanks to Edward Frieman and William A. Nierenberg (NAE) for hosting the visit. I am grateful to Cesare Marchetti, Perrin Meyer, and Paul Waggoner for helping develop these Worthy Ways over many years.
Figure Captions
Figure 1. The Great Reversal. After gradually increasing for centuries, the worldwide area of cropland per person began dropping steeply in about 1950, when yields per hectare began to climb. The diamond shows the area needed by the Iowa Master Corn Grower of 1998 to supply one person a year's worth of calories. The dotted line shows how sustaining the lifting of average yields 2%/yr extends the Reversal. Sources of data: FAO Yearbooks, Food and Agriculture Organization of the United Nations, various years; Wallace's Farmer, March 1999; J. F. Richards, "Land Transformation," in The Earth as Transformed by Human Action, B. L. Turner et al., eds., Cambridge University Press, Cambridge, England, 1990.
Figure 2. Percentage of time spent in major locations by Californians. Source: J. A. Wiley, J. P. Robinson, T. Piazza, K. Garrett, K. Cirksena, Y. T. Cheng, and G. Martin, Activity Patterns of California Residents, California Survey Research Center, U. of California, Berkeley, 1991.
Figure 3. The maximum size of power plants, USA. Each line represents an S-shaped (logistic) curve normalized to 100 percent, with estimates for the midpoint of the process and saturation level indicated. So, the pulse centered in 1929 quickly expanded power plants from a few tens of megawatts (MW) to about 340. After a period in which plant size stagnated, the pulse centered in 1965 quadrupled maximum plant size to almost 1400 MW. The patterns for the world and a dozen other countries we have analyzed closely resemble those for the USA. We project another spurt in plant size centered around the year 2015, quadrupling the maximum again, to more than 5 GW. F is fraction of the process completed. Source of data: World Electric Power Data CDROM UDI-2454, Utility Data Institute, Washington DC, https://www.udidata.com/
Figure 4. Smoothed historic rates of growth (solid lines) of the major components of the USA transport infrastructure and conjectures (dashed lines) based on constant dynamics. The years are the midpoints of the processes, and delta t is the time for the system to grow from 10% to 90% of its extent. The inset shows the actual growth, which eventually became negative for canals and rail as routes were closed. Source: Jesse H. Ausubel, C. Marchetti, and P.S. Meyer, Toward Green Mobility: The Evolution of Transport, European Review 6(2):137-156, 1998.
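The S-shaped pulses described in the captions for Figures 3 and 4 can be sketched with the standard three-parameter logistic; the parameter values below are illustrative only, echoing the 1929 pulse of Figure 3 rather than fitted to data:

```python
import math

def logistic(t, saturation, midpoint, delta_t):
    """Logistic pulse: `saturation` is the ceiling, `midpoint` the year of 50%
    completion, and `delta_t` the time to grow from 10% to 90% of the extent."""
    # ln(81) converts the 10%-to-90% duration into the usual rate parameter,
    # because the logistic moves from F=0.1 to F=0.9 over ln(81)/k years.
    k = math.log(81) / delta_t
    return saturation / (1 + math.exp(-k * (t - midpoint)))

# Hypothetical pulse: 340 MW ceiling, midpoint 1929, delta_t of 20 years.
for year in (1919, 1929, 1939):
    f = logistic(year, 340, 1929, 20) / 340  # F, fraction of process completed
    print(year, round(f, 2))
```

By construction, F equals 0.5 at the midpoint and 0.1 and 0.9 at half of delta t before and after it, matching the caption's definitions of midpoint and delta t.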
References
[1] William A. Nierenberg, The Diversity of Fishes: The Known and Unknown, Oceanography 12(3):6-7, 1999.
[2] John H. Steele and Mary Schumacher, On the History of Marine Fisheries, Oceanography 12(3):28-29, 1999.
[3] https://phe.rockefeller.edu/fish
[4] “Thus, in spite of all the interest in fragmented populations, the primary aim in conservation should be simply to preserve as much habitat as possible.” (p. 47) Ilkka Hanski, Metapopulation Dynamics, Nature 396:41-49, 1998.
[5] Paul E. Waggoner, Jesse H. Ausubel, Iddo K. Wernick, Lightening the Tread of Population on the Land: American Examples, Population and Development Review 22(3):531-545, 1996.
[6] Iddo K. Wernick, Paul E. Waggoner, and Jesse H. Ausubel, Searching for Leverage to Conserve Forests: The Industrial Ecology of Wood Products in the U.S., Journal of Industrial Ecology 1(3):125-145, 1997.
[7] Roger A. Sedjo and Daniel Botkin, Using Forest Plantations to Spare Natural Forests, Environment 39(10):14-20, 30, 1997.
[8] Joyotee Smith, Can Secondary Forests Mitigate Primary Forest Depletion? Implications from Small-Scale Farms in the Peruvian Amazon, International Center for Tropical Agriculture (e.smith@cgnet.com).
[9] Special Issue on NHEXAS, Journal of Exposure Analysis and Environmental Epidemiology 5(3): 1995.
[10] B. N. Ames, M. Profet, and L. S. Gold, Dietary Pesticides (99.99% All Natural), Proceedings of the National Academy of Sciences USA 87:7777-7781, 1990.
[11] Jesse H. Ausubel, Energy and Environment: The Light Path, Energy Systems and Policy 15:181-188, 1991.
[12] Jesse H. Ausubel, Arnulf Gruebler, and Nebojsa Nakicenovic, Carbon Dioxide Emissions in a Methane Economy, Climatic Change 12:245-263, 1988.
[13] Toshihiro Ishikawa et al., A Tough Thermally Conductive Silicon Carbide Composite with High Strength up to 1600°C in Air, Science 282:1295, 1998.
[14] Jesse H. Ausubel, C. Marchetti, and P.S. Meyer, Toward Green Mobility: The Evolution of Transport, European Review 6(2):137-156, 1998.
[15] MVP (Versuchs- und Planungsgesellschaft für Magnetbahnsysteme mbH), The Official Transrapid Homepage, URL https://www.mvp.de/, Munich, Germany, 1997.
[16] J. Mika, Transrapid Information Resources Homepage, URL https://transrapid.simplenet.com/, Germany, 1997.
[17] M. Jufer, Swissmetro: Scientific Activity of ETH Lausanne and Zurich, Main Study Interim Report July 1994-June 1996, ETH-Lausanne, Switzerland, 30 August 1996. URL https://sentenext1.epfl.ch/swissmetro.
[18] U.S. Department of Transportation, Compendium of Executive Summaries from the Maglev System Concept Definition Final Reports, DOT/FRA/NMI-93/02, pp. 49-81, March 1993. On-line at https://www.bts.gov/smart/cat/CES.html