On September 23, 2019 PHE Senior Research Associate Iddo Wernick delivered a talk on ‘Environmentalism and Populism‘ at a meeting of the Breakthrough Institute in Middleburg, Virginia.
My task is to reflect on scientific social responsibility with respect to international environmental conflicts. First, I will mention a few cases of historical or prospective conflict relating to environment and resources and then I will offer some tentative conclusions. Adding the notion of resources to that of the environment may unfairly expand and ease my task. After all, humans are territorial animals, and probably the historical majority of conflicts have occurred over land. I will set aside land quarrels, though they continue, for example, between Peru and Ecuador. My cases are cod fish, Antarctica, atmospheric weapons testing, acid deposition, fresh water, and climate change.
Because war is the least ambiguous and usually the most serious form of conflict, I will begin with a war. My war, the Cod War, is obscure and was not lethal for humans.
The Cod War(s)
Iceland’s only international disputes have concerned fishing rights.1 Collectively known as the Cod Wars, they involved British fishing vessels violating Iceland’s expanding self-declared territorial waters. In 1952 Iceland extended its offshore rights from three to four miles, and in 1958 to twelve. The first real Cod War ensued in 1959, when U.K. Royal Navy frigates steamed to protect, unsuccessfully, British trawlers from being evicted or arrested by Iceland’s tiny Coast Guard.
The dispute was temporarily resolved, only to be followed in 1972 by an extension of the Icelandic claim to 50 miles, which brought more ferocious clashes. British warships, equipped with sophisticated weapons, were several times larger than the Icelandic Coast Guard vessels, each with a single 57-mm gun. Unarmed fishing vessels commissioned by the Icelandic Coast Guard also patrolled during the disputes. The British warships rammed Icelandic Coast Guard vessels and shot over their bows, while the Icelanders used large clippers, like garden shears, to cut the nets of British fishing trawlers, causing the loss of both nets and catches. Iceland broke diplomatic relations with the United Kingdom for a short time, the first NATO rupture of this kind.
On one occasion, when a British frigate confronted an Icelandic gunboat on the high seas, the world’s press may have outnumbered naval personnel. Fortunately, the opposing captains sensed the occasion, and the ensuing exchange was not shots but Biblical quotations delivered broadside by loudspeaker. The journalists awarded victory in the Scriptural battle to the Icelanders. More formal arguments gradually swung international opinion to Iceland’s view, and the nations agreed to a truce in 1976, by which time both nations had announced a new 200-mile exclusive economic zone.
The limit that Iceland first established has since become the standard for international maritime legislation. Thus, the little country with no armed forces had sent its Coast Guard gunships to take on the mighty Royal Navy, and won. The National Museum in Reykjavik still proudly displays some of the clippers. The British suffered a total of more than $2 million in damage to their Navy frigates, and claimed the loss of more than 9,000 fisheries-related jobs. Since 1976, British fishing boats have respected the 200-mile limit, and no new violence has erupted between the two nations.
I offer this example not only as an appealing curiosity, but because it illustrates how conflict may flourish in the absence of reliable scientific information, in this case, with regard to the abundance and distribution of fish. A serious, admirable intergovernmental organization called the International Council for the Exploration of the Sea (ICES) formed almost 100 years ago to better the collection, analysis, and dissemination of data on fish catches. However, only very recently have techniques for assessing and modeling fish stocks become sufficiently reliable to contribute to dispute resolution. The Cod Wars were partly about territory, but they were also partly about poor information about North Sea fisheries, which contributed to their decline.
Canada and Spain came to the brink of serious conflict over the cod on the Grand Banks off Newfoundland in the early 1990s. Widespread scientific agreement on the fragility of the fish stocks in the region, including the opinion of both Canadian and Spanish experts, helped avoid lethal outcomes. A worldwide Census of Marine Life might lessen conflicts over commercial fisheries as well as promote biodiversity. Biodiversity now finds itself with an international Convention, but uncharted national obligations and resources.
Let me more briefly mention the other cases of environmental and resource conflicts.
Systematic exploration and territorial claims on Antarctica extend back to the turn of the century.2 After World War II these claims expanded and threatened to militarize the continent. Meteorology, oceanography, glaciology, and other kinds of environmental research in or near Antarctica figured prominently in the 1957–58 International Geophysical Year, the highly successful 18-month internationally coordinated scientific probing of Earth. The 1959 Antarctic Treaty, negotiated with U.S. and USSR leadership, calls for the continued absence of military activities and the suspension of all territorial claims. For Antarctica, scientific cooperation appears to have eased the way for political cooperation.
Atmospheric testing of nuclear weapons was a highly visible form of threat behavior during the Cold War. Many reinforcing events in the mid-1950s led to concern about radioactive fallout from the testing. The public most feared the health effects of fallout; radioactive elements were, for example, measurable in milk. The test ban soon became a cause of the nuclear disarmament movement (and still is).
Scientists in both the Soviet Union and the United States also feared test-induced climate changes, now little recalled in the ruckus over global warming.3 The succession of large nuclear yield tests that began in the late 1940s and ended, for the most part, in the early 1960s injected much NO2 into the stratosphere. The oxides of nitrogen are mainly produced in the fireball, with heating and cooling of the captured air. The largest annual yield of nuclear tests occurred in 1962, 108 megatons, including two explosions of 30 megatons. The largest yield was an explosion in 1961 of 58 megatons. About three-fourths of total yield in the peak years around 1960 exploded in the atmosphere. The bulk of these detonations was in the upper troposphere and stratosphere, but Starfish detonated a yield of 1.4 megatons in the thermosphere at an altitude of 400 km. Altitude matters greatly for NO2 production calculations. NO2 absorbs solar radiation, and its enhanced presence in the stratosphere for a period of two decades could have reduced the sunlight reaching the surface by a few percent. Climatologists, in fact, observed a temporary cooling trend in the Northern hemisphere, where nearly all atomic tests occurred.
Part of the task of making nuclear bombs is performing the calculations of atmospheric effects, so several environmental scientists worrying about the climatic and other effects on both sides had ample access to high-level officials in government and the military. This access, and related trust, probably helped expedite the 1963 Limited Test Ban.
From the late 1960s, the Scandinavian countries began claiming that the acidity of their rain was increasing and that it was caused by European, especially English, emissions upwind.4 The acidity allegedly damaged Scandinavian lakes and woods. Beginning in 1972, the Organization for Economic Cooperation and Development (OECD) conducted a study of long-range transport of air pollutants to assess such claims. Later the International Institute for Applied Systems Analysis (IIASA) would conduct sequel studies. Similar conflicts and joint study efforts arose between the United States and Canada in the late 1970s, and peaked, with harsh words but no violence, in the early 1980s.
Much blood has been spilled over water. Water resources can be military goals (seize the water), military targets (bomb a hydro plant, reservoir, canal, or irrigation channel), and military means (cause a flood), and the absence of water can precipitate conflict.5 The problem, as for acid rain, is often the discrepancy between the borders of nature and politics. Ninety-seven percent of Egypt’s surface water flow originates outside its borders. Per capita water availability in Jordan, according to the UN, is about one-quarter of the minimum requirement for an efficient, moderately industrialized nation.
I must here mention another vital fluid, oil, one of the reasons for the Gulf War in 1990–1991. Though I believe the main motive for Iraq in the war was to raise its place in the international pecking order, oil resources certainly pointed the way. The use of the oil-field fires as a weapon also makes the Gulf War interesting in our present context.
Global warming induced by greenhouse gases emitted by human activities seems just now to be emerging from the realm of hypotheticality. It could cause conflicts in at least two ways. Erratic, unfavorable weather and climate could raise pressures for migration, certainly an irritant for some receiving states, although usually a welcome escape for the movers themselves. In recent years refugees, the neediest subcategory of migrants, have numbered around 10 million annually. The bulk have been concentrated in a few countries, such as Afghanistan, Ethiopia, and Burundi. Political threats to well-being, violence, and economic suffering as well as droughts and floods produce refugees. Studies attribute rather few refugees directly and solely to environmental disasters and shortages of resources, but some scientists warn of the growing potential for these factors to add to the number of persons fleeing.6
The second way climatic change could cause conflict is through inequitable or apparently inequitable means to reduce carbon emissions, especially from coal and oil. Conflict might arise between the rich, developed countries of the so-called North and those of the poorer South. The South wants to increase its use of carbon and continue exporting it, while the North is ambivalent about curbing its appetite. The idea of “joint implementation,” basically financial transfers from the North to the South (and the former Soviet Union) for emission reduction in the South that might also lower globally the cost of emission restraints, developed under the auspices of the Intergovernmental Panel on Climate Change (IPCC), a body of several thousand technical experts. The idea has now moved into the political and diplomatic arena.
The IPCC originated in volunteer efforts under the auspices of the scientist-controlled Scientific Committee on Problems of the Environment (SCOPE) of the nongovernmental International Council of Scientific Unions (ICSU) to provide international equivalents of U.S. National Research Council studies on global warming. As these reports gained influence, and required reiteration and expansion and therefore more money, governments changed the mechanism from nongovernmental to intergovernmental. A ruckus occurred in late 1996, when a few scientists, dissatisfied with the IPCC’s reports, took note of government tinkering with the final version of a report submitted in Madrid. The apparent loss of substantive rationality was surely disturbing, as was the abandonment of correct procedure, but such is the occasional price for control.
From these environment- and resource-related cases, I reaffirm three familiar conclusions about conflict resolution, echoing the thinking of the late Kenneth Boulding.7
1. Taking national boundaries off political agendas is a step toward stable peace, and engineers and scientists can help make spatial boundaries much less crucial.
Consider state boundaries within a country such as the United States. Almost every economic activity can flourish almost anywhere: the level and composition of the GDP is not much different in dry Arizona, wet Oregon, and cold Minnesota, all of which have diverse environment and resources. The same is true for Finland, the Netherlands, and Australia. When information is the prime resource, we need not fight over minerals or land. Aquaculture is the long-run solution to the Cod Wars. Clever civil engineering can multiply the availability of water and lower demand for it. Ubiquitous, plentiful natural gas can shrink oil’s martial power. In an economic sense, geography hardly need matter any more.
On the other hand, it clearly helps when national boundaries coincide with cultural boundaries. Africa today is probably worse off in this regard than Europe was in 1913. In this sense, geography will continue to matter greatly.
2. A great problem in stable peace is fear of betrayal, and international cooperation between scientists may help reduce the payoffs of betrayal and strengthen taboos against it.
We can easily imagine sneaky, nasty behavior over fisheries, sulfur and carbon, water, and weapons testing. Joint analyses, symmetric information, and transparent reporting about national behavior are means to reduce it. Scientific responsibility here in part takes the form of increasing factual content, thus promoting substantive rationality, in management. Those favoring substantive rationality seek to infuse government, and often seek to have their efforts invited by government; they run the risk of capture. Substantive rationality begins with the idea of government’s gaining the advice of a few leading experts. As time passes, the bureaucracies that fund and manage the processes tend to increase their control and complicate matters with finer procedures. An example is the IPCC. Nevertheless, the IPCC and equivalent bilateral and multilateral mechanisms may forestall or lower conflict. A great difficulty is the paucity of scientists in the Southern countries.
3. Stable peace relies on national self-interest, and, while resources that cross boundaries may heighten conflict, the diffusion of pollutants on the wind, in rivers, and in the seas may evoke countervailing cooperation.
For the past fifty years, nuclear weapons have created a common global interest, namely the desire to do away with them. The common threat of nuclear destruction was valuable. With the waning of nuclear fear, degradation of the environment has emerged as a substitute threat. The most powerful realizations are global climatic change from greenhouse gases, the loss of biodiversity, and the depletion of the ozone layer. Conserving concern, science fuels individual and often collective moral fervor, usually in reference to potentially catastrophic, irreversible, and inequitable developments. The expressions of social responsibility are almost always scientist-initiated and tend to favor ends over means. Recall the 1961 story about nuclear madness by Leo Szilard, one of the catalysts for the atomic bomb and later a fervent campaigner for disarmament. Science, especially now environmental science, is The Voice of the Dolphins.8
1. Hannes Jonsson, Friends in Conflict (London: Hurst & Co., 1982); <http://gurukul.ucc.american.edu/ted/ICEFISH.htm>.
2. Shirley Oakes Butler, “Owning Antarctica: Cooperation and Jurisdiction at the South Pole,” Journal of International Affairs 31 (1977): 35–52.
3. Kirill Ya. Kondratyev, Climate Shocks: Natural and Anthropogenic (New York: Wiley, 1988).
4. Juan Carlos di Primio, “Data Quality and Compliance Control in the European Air Pollution Regime,” in The Implementation and Effectiveness of International Environmental Commitments: Theory and Practice, David G. Victor, Kal Raustiala, and Eugene B. Skolnikoff, eds., pp. 283–303 (Cambridge: MIT Press, 1998).
5. Peter H. Gleick, “Water and Conflict: Fresh Water Resources and International Security,” International Security 18, 1 (1993): 79–112.
6. Committee on Science, Engineering, and Public Policy, Policy Implications of Global Warming, pp. 620–628 (Washington, DC: National Academy Press, 1992).
7. Kenneth E. Boulding, Conflict and Defense: A General Theory (New York: Harper, 1962).
8. Leo Szilard and Barton J. Bernstein, The Voice of the Dolphins and Other Stories (Stanford: Stanford University Press, 1992).
The passage of time has connected the invention of the wheel with more than ten million miles of paved roads around the world today, the capture of fire with six billion tons of carbon going up in smoke annually. Must human ingenuity always slash and burn the environment? This essay and this volume suggest a more hopeful view. Indeed, the liberator of our title is human culture. Its most powerful tools are science and technology. These increasingly decouple our goods and services from demands on planetary resources.
Most observers emphatically designate the present as a period of intense environmental degradation. Surely, human numbers must weigh heavily, and they are highest now. Present world population stands at about 5.7 billion and each month increases by a number equivalent to the population of Sweden, Somalia, or New Jersey.
But for what period should we feel nostalgia? Has there been a golden age of the human environment? When was that age?
- In 1963, before the United States and Soviet Union signed the Limited Test Ban Treaty–after more than four hundred nuclear explosions in the atmosphere?
- In 1945, after much of the forest in Europe had been cut to provide fuel to survive World War II?
- In 1920, when coal provided three quarters of global energy, and choking smogs shrouded London and Pittsburgh?
- In 1870, when the Industrial Revolution boomed without filters in Silesia, Manchester, and Massachusetts?
- In 1859, before Edwin Drake first drew petroleum from an underground pool in Pennsylvania, when hunters slaughtered tens of thousands of whales for three million gallons of sperm oil to light American lamps?
- In the 1840s, when land-hungry farmers, spreading across North America, Australia, and Argentina, broke the plains and speedily shaved the native woods and grasses?
- In 1830, when cholera epidemics in many cities and towns literally decimated the populations that dumped their wastes in nearby waters?
- In 1700, when one hundred thousand mills interrupted the flow of every stream in France?
- In the late 1600s, when dense forests, once filled with a diversity of life, became seas of sugar cane in coastal Brazil and the Caribbean?
- In 1492, before Columbus stimulated reciprocal transatlantic invasions of flora and fauna? (The Old World had no maize, tomatoes, potatoes, green beans, groundnuts, sunflowers, cocoa, cotton, pineapple, vanilla, quinine, or rubber.)
- In the tenth century, before the invention of efficient chimneys, when people in cold climates centered their lives around a fireplace in the middle of a room with a roof louvered high to carry out the smoke–and much of the heat as well?
- In 55 b.c., when Julius Caesar invaded Britain and found less forest than exists today?
- In the centuries from Homer to Alexander, when the forests of the Eastern Mediterranean were cleared?
- Before the domestication of cows, sheep, pigs, and goats, when hunters caused a holocaust of wild creatures?
- In neolithic times, when building a house used up to thirteen tons of firewood to make the plaster for the walls and floor?
Environmental sins and suffering are not new.1 Humans have always exploited the territories within reach. The question is whether the technology that has extended our reach can now also liberate the environment from human impact–and perhaps even transform the environment for the better. My answer is that well-established trajectories, raising the efficiency with which people use energy, land, water, and materials, can cut pollution and leave much more soil unturned. What is more, present cultural conditions favor this movement.
Two central tendencies define the evolution of the energy system, as documented by Nebojsa Nakicenovic.2 One is that the energy system is freeing itself from carbon. The second is rising efficiency.
Carbon matters because it burns; combustion releases energy. But burnt carbon in local places can cause smog and in very large amounts can change the global climate. Raw carbon blackens miners’ lungs and escapes from containers to form spills and slicks. Carbon enters the energy economy in the hydrocarbon fuels, coal, oil, and gas, as well as wood. In fact, the truly desirable element in these fuels for energy generation is not their carbon (C) but their hydrogen (H). Wood weighs in heavily at ten effective Cs for each H. Coal approaches parity with one or two Cs per H, while oil improves to two H per C, and a molecule of natural gas (methane) is a carbon-trim CH4.
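The hydrogen-to-carbon ratios quoted above can be tabulated as a rough sketch of the decarbonization sequence. The numeric values below are the text's approximate figures (for coal, a midpoint of "one or two" carbons per hydrogen), not measured fuel chemistry:

```python
# Approximate hydrogen-to-carbon atomic ratios of the major fuels,
# following the text: wood ~10 C per H, coal ~1-2 C per H,
# oil ~2 H per C, methane (CH4) 4 H per C.
h_to_c = {
    "wood": 1 / 10,    # ten effective carbons per hydrogen
    "coal": 1 / 1.5,   # midpoint of one or two carbons per hydrogen
    "oil": 2.0,        # two hydrogens per carbon
    "methane": 4.0,    # CH4: four hydrogens per carbon
}

# Decarbonization appears as a monotone rise in H:C along the sequence.
sequence = ["wood", "coal", "oil", "methane"]
ratios = [h_to_c[fuel] for fuel in sequence]
assert ratios == sorted(ratios)

for fuel, r in zip(sequence, ratios):
    print(f"{fuel:8s} H:C = {r:.2f}")
```

The monotone ordering is the point: each dominant fuel in the historical succession carries more hydrogen per carbon than its predecessor.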
The historical record reveals that for two hundred years the world has progressively lightened its energy diet by favoring hydrogen atoms over carbon in our hydrocarbon stew. We can, in fact, measure this decarbonization in several different ways. As engineers, we can examine the changing ratio of the tons of carbon in the primary energy supply to the units of energy produced. From this perspective, the long-term, global rate of decarbonization is about 0.3 percent per year–gradual, to be sure, but enough to cut the ratio by 40 percent since 1860.
As economists, we can assess decarbonization as the diminishing requirement for carbon to produce a dollar’s worth of economic output in a range of countries. Several factors dispose nations toward convergent, clean energy development. One is the changing composition of economic activity away from primary industry and manufacturing to services. End users in office buildings and homes do not want smoking coals. America has pared the carbon intensity of its economy from about three kilograms of carbon per constant dollar of gross domestic product in 1800 to about three-tenths of a kilogram in 1990. The spectrum of national achievements also shows how far most of the world economy is from best practice. The present carbon intensity of the Chinese and Indian economies resembles that of America and Europe at the onset of industrialization in the nineteenth century.
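A back-of-the-envelope check shows what these round endpoints imply as an average annual rate of decline (the 1800 and 1990 figures are the text's; the compounding is the only assumption):

```python
# Implied average annual decline in US carbon intensity of GDP,
# from ~3 kg of carbon per constant dollar in 1800 to ~0.3 kg in 1990.
start_kg, end_kg = 3.0, 0.3
years = 1990 - 1800

annual_rate = 1 - (end_kg / start_kg) ** (1 / years)
print(f"average decline: {annual_rate:.2%} per year")  # roughly 1.2%/yr
```

A tenfold cut over 190 years thus corresponds to an average decline of a bit over one percent per year, sustained through the whole industrial era.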
Physical scientists can measure decarbonization in its elemental form, as the evolution of the atomic ratio of hydrogen to carbon in the world fuel mix. This analysis reveals the unrelenting though slow ascendance of hydrogen in the energy market. All the analyses imply that over the next century the human economy will squeeze most of the carbon out of its system and move, via natural gas, to a hydrogen economy.3 Hydrogen, fortunately, is the immaterial material. It can be manufactured from something abundant, namely water; it can substitute for most fuels; and its combustion to water vapor does not pollute.
Decarbonization began long before organized research and development in energy, and has continued with its growth. Many ways to continue along this trajectory have been documented. Still, the displacement of carbon remains the largest single environmental challenge facing the planet. Globally, people on average now use 1,000 kilograms of carbon per year compared, for example, to 120 kilograms of steel.
Part of economizing on carbon is economizing on energy more broadly. Efficiency has been gaining in the generation of energy, in its transmission and distribution, and in the innumerable devices that finally consume energy. In fact, the struggle to make the most of our fires dates back at least 750,000 years to the ancient hearths of the Escale cave near Marseilles. A good stove did not emerge until a.d. 1744. Benjamin Franklin’s invention proved to be a momentous event for the forests and wood piles of America. The Franklin stove greatly reduced the amount of fuel required. Its widespread diffusion took a hundred years, however, because the colonials were poor, development of manufactures sluggish, and iron scarce.4
As Arnulf Grübler explains, we often fail to appreciate the speed and rhythms of social clocks.5 Many technological processes require decades or longer to unfold, in part because they cluster in mutually supportive ways that define technological eras every fifty years or so. The good news is that in a few decades most of our devices and practices will change, and major systems can become pervasive in fifty to one hundred years. It is also good news that latecomers to technological bandwagons can learn from the costly experiments of pioneers and that no society need be excluded from the learning. Evolutionary improvement and imitation transform the economy. Two percent per year may sound slow to a politician or entrepreneur, but maintained for a century it is revolutionary.
In energy and other sectors, the efficiency gains may have become more regular as the processes of social learning, embodied in science and technology, have taken root. In the United States since about 1800, the production of a good or service has required 1 percent less energy on average than it did the previous year. Nevertheless, embracing the full chain from the primary energy generator to the final user of light or heat, the ratio of theoretical minimum energy consumption to actual energy consumption for essentially the same mix of goods and services is still probably less than 5 percent.6 No limit to increasing efficiency is near.
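The compounding claims in the last two paragraphs are easy to verify. The horizons below are illustrative choices (a round century at 2 percent; 1800 to 1990 at 1 percent, with 1990 an assumed endpoint), not figures stated in the text:

```python
# "Two percent per year... maintained for a century is revolutionary":
century_factor = 1.02 ** 100           # ~7-fold improvement

# One percent less energy per unit of output each year since about 1800,
# carried here to 1990 for illustration:
energy_factor = 0.99 ** (1990 - 1800)  # falls to roughly 15% of the 1800 level

print(f"{century_factor:.1f}x over a century; "
      f"energy per unit of output at {energy_factor:.0%} of its 1800 level")
```

Small steady rates, invisible year to year, multiply into order-of-magnitude changes over the spans on which technological eras unfold.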
But engineers are working hard and getting results, as Ausubel and Marchetti dramatize with a panorama of the past and future of electricity.7 In about 1700 the quest began to build efficient engines, at first with steam. Three hundred years have increased the efficiency of the devices from 1 to about 50 percent of their apparent limit. The technology of fuel cells may advance efficiency to 70 percent in another fifty years or so. While the struggle to improve generators spans centuries, lamps have brightened measurably with each decade. Edison’s first lamp in 1879 offered about fifteen times the efficiency of a paraffin candle. The first fluorescent lamp in 1942 bettered Edison’s by thirty times, and the gallium arsenide diode of the 1960s tripled the illumination efficiency of the fluorescent. Moreover, lamps are not the only means for illumination. The next century is likely to reveal quite new ways to see in the dark. For example, nightglasses, the mirror image of sunglasses, could make the night visible with a few milliwatts. We will speed efficiently to what we see. Using the same energy consumed by a present-day car, magnetically levitated trains in low-pressure tubes could carry a passenger several thousand kilometers per hour–connecting Boston and Washington in ten minutes.
Agriculture is by far the greatest transformer of the environment. Cities, paved roads, and the rest of the built environment cover less than 5 percent of the land in the forty-eight contiguous American states. Crops occupy about 20 percent of this land and pasture 25 percent. Crops cover 35 percent of France and 10 percent of China. Agriculture has consumed forests, drained wetlands, and voided habitats; the game is inherently to favor some plants and animals over others. Farms also feed us.
Yet since mid-century the amount of land used for agriculture globally has remained stable; and, as Paul Waggoner explains, the stage is set to reduce it.8 A shift away from eating meat to a vegetarian diet could roughly halve our need for land. More likely, diets will increase in meat and calories; under such conditions, the key will be the continuation of gains in yield resulting from a cluster of innovations including seeds, chemicals, and irrigation, joined through timely information flows and better-organized markets.
In fact, US wheat yields have tripled since 1940, and corn yields have quintupled. Despite these accomplishments, the potential to increase yields everywhere remains astonishing–even without invoking such new technologies as the genetic engineering of plants. The world on average grows only about half the corn per hectare of the average Iowa farmer, who in turn grows only about half the corn of the top Iowa farmer. Importantly, while all have risen steadily for decades, the ratio among these performers has not changed much. Even in Iowa the average performer lags more than thirty years behind the state of the art. While cautious habits and other factors properly moderate the pace of diffusion of innovations, the effects still accumulate dramatically. By raising wheat yields fivefold during the past four decades, Indian farmers have in practice spared for other purposes an area of cropland roughly equal to the area of the state of California.
What is a reasonable outlook for the land used to grow crops for ten billion people, a probable world population sixty or seventy years hence? Future calories consumed per person will likely range between the 3,000 per day of an ample vegetarian diet and the 6,000 that includes meat. If farmers fail to raise global average yields, people will have to reduce their portions to keep cropland to its current extent. If the farmers can lift the global average yield about 1.5 percent per year over the next six or seven decades to the level of today’s European wheat, ten billion people can enjoy a 6,000-calorie diet and still spare close to a quarter of the present 1.4 billion hectares of cropland. The quarter spared, fully 300 million hectares, would equal the area of India. Reaching the level of today’s average US corn grower would spare for ten billion people half of today’s cropland for nature, an area larger than the Amazon basin–even with the caloric intake of today’s American as the diet.
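The arithmetic of the yield scenario can be sketched. Below, 65 years stands in for the text's "six or seven decades," and the spared area is the text's own estimate:

```python
# Global average yield lifted 1.5 percent per year over ~65 years:
yield_multiplier = 1.015 ** 65  # roughly 2.6-fold

# The land the text estimates this would spare for nature:
cropland_ha = 1.4e9   # present global cropland, hectares
spared_ha = 300e6     # the text's figure, just under a quarter of cropland

print(f"yields x{yield_multiplier:.2f}, "
      f"sparing ~{spared_ha / cropland_ha:.0%} of today's cropland")
```

A 1.5 percent annual gain thus multiplies yields about 2.6-fold, enough on the text's accounting to feed ten billion people amply while releasing an India-sized area of cropland.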
The present realities of large amounts of land in Europe and North America reverting from farm to woodland, and high public subsidies to farmers, make the vision more immediate.9 Beyond a world of ten billion people, it is not crazy to think of further decoupling food from land. For more green occupations, today’s farmers might become tomorrow’s park rangers and ecosystem guardians. In any case, the rising yields, spatial contraction of agriculture, and sparing of land are a powerful antidote to the current losses of biodiversity and related environmental ills.
Watts and hectares are yielding more. What about water? Chauncey Starr points out that water is both our most valuable and most wasted resource.10 In the United States, total per capita water withdrawals quadrupled between 1900 and 1970. Consumptive use increased by one-third between just 1960 and the early 1970s, to about 450 gallons per day. However, since 1975, per capita water use has fallen appreciably, at an annual rate of 1.3 percent.11 Absolute US water withdrawals peaked about 1980.
Alert to technology as well as costs, industry leads the progress, though it consumes a small fraction of total water. Total industrial water withdrawals plateaued a decade earlier than total US withdrawals and have dropped by one-third, more steeply than the total. Notably, industrial withdrawals per unit of GNP have dropped steadily since 1940, from fourteen gallons per constant dollar to three gallons in 1990. Chemicals, paper, petroleum refining, steel, food processing, and other sectors have contributed to the steep dive.12 Not only intake but discharge per unit of production are perhaps one-fifth of what they were fifty years ago.
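The fall in industrial water intensity implies a strikingly steep annual rate. A quick check, using the text's two endpoints and assuming steady compounding:

```python
# From ~14 gallons per constant dollar of GNP in 1940 to ~3 gallons in 1990:
start_gal, end_gal = 14.0, 3.0
years = 1990 - 1940

annual_rate = 1 - (end_gal / start_gal) ** (1 / years)
print(f"industrial water intensity fell about {annual_rate:.1%} per year")
```

That is roughly three percent per year for half a century, more than twice the pace of decarbonization computed above for carbon intensity.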
Law and economics as well as technology have favored frugal water use. Legislation, such as the US Clean Water Act of 1972, encouraged the reduction of discharges, recycling, and conservation, as well as shifts in relative prices. Better management of demand reduced water use in the Boston area from 320 million gallons per day in 1978 to 240 million gallons in 1992.13
Despite such gains, the United States is a long way from exemplifying the most-efficient practice. Water withdrawals for all users in the industrialized countries span a tenfold range, with the United States and Canada at the highest end.14 Allowing for differences in major uses (irrigation, electrical cooling, industry, public water supply), large opportunities for reductions remain. In the late 1980s wastewaters still made up over 90 percent of measured US hazardous wastes. Importantly, as agriculture contracts spatially, its water demand will likewise tend to shrink.
In the long run, with much higher thermodynamic efficiency for all processes, removing impurities to recycle water will require small amounts of energy. Dialytic membranes open the way to such efficient purification systems. Because hydrogen will be, with electricity, the main energy carrier, its combustion may eventually provide another important source of water, perhaps 50 gallons per person per day at the level of final consumers, or about one-fourth the current withdrawal in water-prudent societies such as Denmark.
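The hydrogen figure admits a rough stoichiometric check (the heating value and molar masses below are assumed reference values, not from the text): burning one kilogram of hydrogen yields about nine kilograms of water, so fifty gallons per person per day implies on the order of twenty kilograms of hydrogen burned daily per person.

```python
# Back-of-envelope check of the hydrogen-economy water figure.
# Assumed values (not from the text): H2 lower heating value ~120 MJ/kg.
M_H2, M_H2O = 2.016, 18.015      # molar masses, g/mol
GAL_TO_KG = 3.785                # 1 US gallon of water weighs ~3.785 kg

water_per_kg_h2 = M_H2O / M_H2   # ~8.9 kg water per kg H2 burned

def h2_for_water(gallons_per_day: float) -> float:
    """kg of H2 that must be burned daily to yield this much water."""
    return gallons_per_day * GAL_TO_KG / water_per_kg_h2

h2 = h2_for_water(50)            # ~21 kg H2 per person per day
energy_gj = h2 * 120 / 1000      # ~2.5 GJ/day of delivered energy
print(round(water_per_kg_h2, 2), round(h2, 1), round(energy_gj, 2))
```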
We can reliably project decarbonization, food decoupled from acreage, and more efficient water use. What about an accompanying dematerialization? Wernick, Herman, Govind, and Ausubel define dematerialization primarily as the decline over time in the weight of materials used to meet a given economic function.15 This dematerialization too would spare the environment. Lower materials intensity of the economy could translate into preservation of landscapes and natural resources, less garbage to sequester, and less human exposure to hazardous materials.
In fact, the intensity of use of diverse primary materials has plummeted over the twentieth century. Lumber, steel, lead, and copper have lost relative importance, while plastics and aluminum have expanded. Many products–for example, cars, computers, and beverage cans–have become lighter and often smaller. Although the soaring numbers of products and objects, accelerated by economic growth, raised municipal waste in the United States annually by about 1.6 percent per person in the last couple of decades, trash per unit of GDP dematerialized slightly.
The logic of dematerialization is sound. Over time new materials replace old, and theoretically each replacement should improve material properties per unit of quantity, thus lowering the intensity of use. Furthermore, as countries develop, the intensity of use of a given material (or system) declines as each country arrives at a similar level of development. The new arrivals take advantage of learning curves throughout the economy.
But superior materials also tend to make markets grow and thus take a kind of revenge on efficiency, offsetting the environmental benefits of each leaner, lighter object by enabling swarms of them to crowd our shelves. And our shelves lengthen. In Austin, Texas, the residential floor area available per person almost doubled in the past forty-five years–unsurprising when we consider that five people resided in the average US home in 1890 and 2.6 do now.
So far, trends of dematerialization are equivocal. Yet, as Robert Frosch theorizes, the potential surely exists to develop superior industrial ecosystems that reduce the intensity of materials use in the economy, minimize wastes, and use persisting wastes nutritiously in new industrial food webs.16 Since 1990 recycling has accounted for over half the metals consumed in the United States, up from less than 30 percent in the mid 1960s.17 The trick is to make waste minimization a property of the industrial system even when it is not completely a property of an individual process, plant, or industry. Advancing information networks may help by offering cheap ways to link otherwise unconnected buyers and sellers to create new markets or waste exchanges.
Liberation from the Environment
I have focused primarily on trajectories, strategies, and technologies that lessen pollution and conserve landscape. It would hardly make sense to do so unless we wish to expand human notions of the rights of other species to prosper or at least compete. Klaus Michael Meyer-Abich explicitly argues that we must stand up for the “co-natural world,” with which humans share Earth.18 We must take seriously the Copernican insight about Earth’s position in the cosmos and not simply replace geocentrism with anthropocentrism. As advised by the great early nineteenth-century natural historian Alexander von Humboldt, we should participate in the whole as part of a part of a part of it, together with others. We may draw parallels between expanding notions of democracy and enfranchisement within human societies with respect to class, gender, and race, and our broadening view of the ethical standing of trees, owls, and mountains.
Yet the condition for our widespread willingness to take the Copernican turn is surely the successful protections we have achieved for our own health and safety. Recall how deaths from the human environment have changed during the last century or two.19
First, consider “aquatic killers” such as typhoid and cholera, the work of bacteria that thrive in water polluted by sewers. In 1861 Queen Victoria’s husband, Prince Albert, died of typhoid fever reportedly contracted from Windsor’s water. Indeed, until well into the nineteenth century, townsfolk drew their water from ponds, streams, cisterns, and wells. They threw wastewater from cleaning, cooking, and washing on the ground, into a gutter, or into a cesspool lined with broken stones. Human wastes went into privy vaults–shallow holes lined with brick or stone, close to home, sometimes in the cellar. In 1829, New Yorkers deposited daily about one hundred tons of excrement into the city’s soil.
Between 1850 and 1900 the share of the American population in towns grew from about 15 to about 40 percent. The number of cities with populations over fifty thousand grew from ten to more than fifty. Overflowing privies and cesspools filled alleys and yards with stagnant water and fecal wastes. The environment could not have been more propitious and convenient for typhoid, cholera, and other water-borne diseases. They reaped 11 percent of all American corpses in 1900.
But by 1900, towns were also building systems to treat their water and sewage. Financing and constructing such facilities took several decades. By 1940 the combination of water filtration, chlorination, and sewage treatment stopped most of the aquatic killers in the United States. Refrigeration in homes, shops, trucks, and railroad boxcars took care of much of the rest. Chlorofluorocarbons (CFCs), the agents in today’s thinning of the ozone layer, were introduced in the early 1930s as a safer and more effective substitute for ammonia in refrigerators; the ammonia devices tended to explode.
More killers have come by air, including tuberculosis (TB), diphtheria, influenza and pneumonia, measles, and whooping cough, as well as scarlet fever and other streptococcal diseases. In some years during the 1860s and 1870s, TB was responsible for 15 percent of all deaths in Massachusetts. Earlier in the nineteenth century, diphtheria epidemics accounted for 10 percent of all deaths in some regions of the United States. Influenza A is believed to have caused the Great Pandemic of 1918-1919, when flu claimed about a quarter of all corpses in the United States and probably more in Europe. (My own existence traces directly to this pandemic; my grandfather’s first wife and my grandmother’s first husband both died in the pandemic, leading to the union that produced my father.)
Collectively, the aerial killers accounted for almost 30 percent of all deaths in America in 1900. Their main allies were urban crowding and unfavorable living and working conditions. The aerial diseases began to weaken a decade later than the aquatics, and then weakened by a factor of seven over thirty years. Credit goes to improvements in the built environment: replacement of tenements and sweatshops with larger and better-ventilated homes and workplaces. Credit is also due to medical interventions. However, many of these, including vaccines and antibiotics, came well after the aerial invaders were already in retreat.
Formerly, most aerial attacks occurred in winter, when people crowded indoors; most aquatic kills occurred in summer, when organic material ferments speedily. Thus, mortality in cities such as Chicago used to peak in summer and winter. In America and other industrialized countries in temperate zones, the twentieth century has seen a dramatic flattening in the annual mortality curve as the human environment has come under control. In these countries, most of the faces of death are no longer seasonal.
Thus, when we speak of technological development and environmental change, it is well to remember first that our surroundings often were lethal. Where development has succeeded and peace holds, we have made the water fresher, the air cleaner, and our shelters more resistant to the violence of the elements. In the United States, perhaps 5 rather than 50 percent of deaths are now due to environmental hazards and factors, including environmentally-linked cancers. The largest global change is that humans–vulnerable, pathetic mammals when naked–have learned how to control their environment. Science and technology are our best strategies for control, and our success is why we now number nearly six billion.
But here is a catch for homo faber, the toolmaker. Our technology not only spares resources but also expands the human niche, within particular time frames. As Robert Kates explains, the intertwining of population, resources, and technology looks quite different depending on the time frame that one uses.20 From the greatest distance, human population appears to have surged three times. The first was associated with the invention of toolmaking itself, lasted about a million years, and saw human numbers rise to five million. The second surge swelled our population a hundredfold to about five hundred million over the next eight thousand years, following the domestication of plants and animals. Today we are midway into a third great population surge, which may level off at eleven billion or so three to four hundred years after the modern scientific and industrial revolution began.
But if one looks instead at the size of populations of regions over thousands of years, what goes up eventually comes down. In Egypt, Mesopotamia, the Central Basin of Mexico, and the Mayan lowlands, reconstructed population records show waves in which the population at least doubled over a previous base and then at least halved from that high point. Social learning works, but not forever. Societies flourish but they also forget and fail.
Shortening the time scale to recent centuries, we observe above all a systematic change in vital rates. Many countries have passed through the “demographic transition” from high death and birthrates to low death and birthrates. Technology certainly accounts for much of the increase in child survival and longevity, but no one can securely explain the changes in fertility, which ultimately determines the size of humanity. With respect to technology and fertility, the “pill” and its possible successors–while certainly more reliable–do not introduce an essential discontinuity in birth control. Many strategies against conception have always existed; parents have always essentially controlled family size. Though technology can ease implementation, population stabilization is a cultural choice.21 Fertility rates have been falling in most nations and are below levels needed to replace the current populations in Europe and Japan, which may implode. Perhaps the idea of the small family, which originated in France around the time of the Revolution, will become the norm after 250 years.
Still, recent population growth, which peaked globally at 2.1 percent per year around 1970, is unprecedented. The effect is that in the coming interval of a few decades human society will need to house, nurture, educate, and employ as many more people as already live on Earth. In the present era of lengthening lives and rising numbers, it appears, rather ironically, that our environmental achievement has been to liberate us from the environment.
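The scale of that claim is simple arithmetic: exponential growth at the peak rate of 2.1 percent per year doubles a population in about thirty-three years, which is why a few decades suffice to add as many people as already live on Earth. A minimal sketch:

```python
import math

def doubling_time(rate: float) -> float:
    """Years for exponential growth at `rate` per year to double."""
    return math.log(2) / rate

# Peak global population growth, ~2.1 percent per year around 1970:
t2 = doubling_time(0.021)
print(round(t2, 1))   # ~33 years
```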
In fact, high incomes, great longevity, and large population concentrations have been achieved in every class of environment on Earth. We manufacture computers in hot, dry Phoenix and cool, wet Portland. We perform heart surgery in humid Houston and snowy Cleveland. Year round we grow flowers in the Netherlands and vegetables in Belgium. The metro in Budapest runs regardless of the mud that slowed Hungarians for a thousand years. In Berlin and Bangkok we work in climate-controlled office buildings. We have insulated travel, communications, energy generation, food availability, and almost all major social functions from all but the most extreme environmental conditions of temperature and wind, light and dark, moisture, tides, and seasons.
The Japanese have even moved skiing and sand beaches indoors. In the world’s largest indoor ski center, Ski-Dome near Tokyo, the slope extends 490 meters by 100 meters, with a thrilling drop of 80 meters that satisfies the standards of the International Ski Federation for parallel slalom competition. On the southern island of Kyushu, Ocean-Dome encloses 12,000 square meters of sandy beach and an ocean six times the size of an Olympic pool, filled with 13,500 tons of unsalted, chlorinated water kept at a warm 28°C. A wave machine produces surf up to three-and-a-half meters high, enough for professional surfing. Palm trees and shipwrecks provide the context.
In fact, careful records of human time budgets show that not only New Yorkers and Indians but also Californians, reputed nature enthusiasts, average only about one-and-a-half hours per day outside.22 Fewer than 5 percent of the population of industrialized nations work outdoors. In developing countries, the number is plummeting and should be below 20 percent globally by 2050. As Lee Schipper shows, life-styles revolve around the household.23 The achievement of ten thousand years of human history is that we have again become cave dwellers–with electronic gadgets.
The Liberation of the Environment
For most of history thick forests and arid deserts, biting insects and snarling animals, ice, waves, and heat slowed or stopped humans. We built up our strength. We burned, cut, dammed, drained, channeled, trampled, paved, and killed. We secured food, water, energy, and shelter. We lost our fear of nature, especially in the aggressive West.
But we also secured a new insecurity. Although we have often cultivated the landscape with judgment and taste, we now recognize that we have transformed more than may be needed or prudent. Certainly, we would redo many episodes given the chance, particularly to protect precious habitats.
Some of our most arrogant behavior has been recent. Together the United States and the Soviet Union rocked Earth with close to two thousand nuclear blasts during the Cold War. The French, British, Chinese, and Indians also signaled their presence. The fifty-year bombing spree appears finally to be nearing an end.
Attitudes worldwide toward nature, and perhaps inseparably toward one another as humans, are changing. “Green” is the new religion. Jungles and forests, commonly domains of danger and depravity in popular children’s stories until a decade or two ago, are now friendly and romantic. The Amazon has been transformed into a magical place, sanctified by the ghost of Chico Mendes, the Brazilian rubber tapper. Environmental shrines, such as the Great Sarcophagus at Chernobyl, begin to fill the landscape. The characterization of animals, from wolves to whales, has changed. Neither the brothers Grimm nor Jack London could publish today without an uproar about the inhumanity of their ideas toward nature–and I would add, with regard to gender and race as well.
Although long in preparation, great cultural changes can sweep over us in decades once underway. Moreover, standing against them is hopeless when they come. Magyar nobles vigorously opposed the spread of Protestantism and in 1523 declared it punishable by death and by the confiscation of property; despite all the edicts, Protestantism took firm hold in Hungary. In the nineteenth century in Europe and America a rising moral feeling made human beings an illegitimate form of property. Within about fifty years most countries abolished slavery. Many countries vocally rejected women’s suffrage at the outset of the twentieth century. Now, politicians, though still mostly male, would not dream of mentioning the exclusion of women from full citizenship in most parts of the world.
The builders of the beautiful home of the US National Academy of Sciences in Washington, D.C., inscribed it with the epigraph, “To science, pilot of industry, conqueror of disease, multiplier of the harvest, explorer of the universe, revealer of nature’s laws, eternal guide to truth.” Finally, after a very long preparation, our science and technology are ready also to reconcile our economy and the environment, to effect the Copernican turn.24 In fact, long before environmental policy became conscious of itself, the system had set decarbonization in motion. A highly efficient hydrogen economy, landless agriculture, industrial ecosystems in which waste virtually disappears: over the coming century these can enable large, prosperous human populations to co-exist with the whales and the lions and the eagles and all that underlie them–if we are mentally prepared, which I believe we are.
We have liberated ourselves from the environment. Now it is time to liberate the environment itself.
I am grateful to Rudolf Czelnai, Cesare Marchetti, Perrin Meyer, and Iddo Wernick for assistance.
1See, for example, Jared M. Diamond, “Ecological Collapses of Ancient Civilizations: The Golden Age that Never Was,” Bulletin of the American Academy of Arts and Sciences XLVII (5) (1994): 37-59; “The Conquest of Nature, 1492-1992,” Report on the Americas 25 (2) (1991), North American Congress on Latin America (NACLA), New York, September 1991; Alexander Starbuck, History of the American Whale Fishery from its Earliest Inception to 1876, vol. 1 (New York: Argosy-Antiquarian, 1964); and B. L. Turner II, William C. Clark, Robert W. Kates, John F. Richards, Jessica T. Mathews, and William B. Meyer, The Earth as Transformed by Human Action (Cambridge and New York: Cambridge University Press, 1990).
9For discussion of the re-creation of the “Buffalo Commons” in the US Great Plains, proposed by geographers Deborah and Frank Popper, see Anne Matthews, Where the Buffalo Roam (New York: Grove Weidenfeld, 1992). For a net estimate of changes in land use from growth of cities as well as changes in farming and forestry in the United States over the next century, see Paul E. Waggoner, Jesse H. Ausubel, and Iddo K. Wernick, “Lightening the Tread of Population on the Land: American Examples,” Population and Development Review (forthcoming).
13Eugene Z. Stakhiv, “Managing Water Resources for Climate Change Adaptation,” in J. B. Smith, N. Bhatti, G. Menzhulin, R. Benioff, M. I. Budyko, M. Campos, B. Jallow, and F. Rijsberman, eds., Adapting to Climate Change: Assessment and Issues (New York: Springer-Verlag, Inc., 1996), 243-264.
18Klaus Michael Meyer-Abich, “Humans in Nature: Toward a Physiocentric Philosophy,” Dædalus 125 (3) (Summer 1996). See also Klaus Michael Meyer-Abich, Revolution for Nature: From the Environment to the Co-Natural World (Cambridge, U.K. and Denton, Tex.: White Horse and University of North Texas Press, 1993).
19Jesse H. Ausubel, Perrin Meyer, and Iddo K. Wernick, “Death and the Human Environment: America in the 20th Century,” working paper, Program for the Human Environment, The Rockefeller University, New York, 1995; and John B. McKinlay and Sonja M. McKinlay, “The Questionable Contribution of Medical Measures to the Decline of Mortality in the United States in the Twentieth Century,” Milbank Quarterly on Health and Society (Summer 1977): 405-428.
21Cesare Marchetti, Perrin Meyer, and Jesse H. Ausubel, “Human Population Dynamics Revisited with a Logistic Model: How Much Can Be Modeled and Predicted?” Technological Forecasting and Social Change 52 (1996): 1-30.
22Peggy L. Jenkins, Thomas J. Phillips, Elliot J. Mulberg, and Steve P. Hui, “Activity Patterns of Californians: Use of and Proximity to Indoor Pollutant Sources,” Atmospheric Environment 26A (12) (1992): 2141-2148.
Jesse H. Ausubel is Director of the Program for the Human Environment at The Rockefeller University.
The proletariat of American research, the graduate students and the postdocs, cry and whisper. Internet traffic even suggests they organize. At Yale, some struck. Meanwhile, William Massy of Stanford University and Charles Goldman of RAND Corp. present a fresh analysis to explain the doctoral system (W.F. Massy, C.A. Goldman, The Production and Utilization of Science and Engineering Doctorates in the United States, Stanford Institute for Higher Education Research, 1995), and the National Academy of Sciences (NAS) complex releases two major assessments of American graduate education and research (Reshaping the Graduate Education of Scientists and Engineers and Research-Doctorate Programs in the United States, NAS, Washington, D.C., 1995). The bottom line is that alma mater is doctoring too many children.
Malthus’s classic negative checks on population were famine, war, and ill health. Here I would like to provide a backdrop for considering more positive checks on the burgeoning number of Ph.D.’s, drawing in part on the facts and findings in the three 1995 studies. Five features dominate: expansion of degree-granting franchises; the forgotten origin of the expansion, a need for teachers; emergence of a research enterprise recruiting students to sustain itself; a star system for faculty, further tipping graduate schools toward research; and, finally, too many doctorates. My positive checks, like those of Malthus, will involve better understanding and purposeful action as well as moral restraint.
The number and size of universities granting doctorates have multiplied. Gaining status, the institutions awarding a Ph.D. in science and engineering (S&E) doubled from 1961 to 1991, reaching 299. Grantors of master’s degrees in S&E slightly more than doubled in the same period, reaching 442, and provide a ready pool to multiply the population of schools granting Ph.D.’s still more.
No convincing logic defines the optimal set of doctoral programs for America. However, absolute numbers now impress in almost every field. In each major subfield within biology, 100 to 200 schools now award Ph.D.’s. Circa 1990, 182 granted degrees in physics, 169 in mathematics, and 130 in civil engineering. Even in a sub-subfield such as biomedical engineering, 86 schools granted Ph.D.’s, and in the subfield of physics and biology called oceanography, 50 did so.
Enrollment multiplied as the franchises expanded. From 1967 to 1992, graduate students of all kinds increased by about half, twice the growth of the United States population. They multiplied from slightly less than a half-million to just over two-thirds of a million. The swelling number of schools increased the annual output of S&E Ph.D.’s from about 18,000 to 25,000 during the decade 1983-93.
If a franchise means spending $30 million or more of federal money annually for basic research, about 100 institutions have franchises. In 1970 only about 30 universities had large research programs. (The 100 produce about 90 percent of Ph.D.’s.)
From 1960 to now, major league baseball added more franchises, too, from 16 to 28. The New York Yankees could not maintain their dynasty in that expanding field. The 1995 NAS ranking of doctoral programs in dozens of fields showed predictably that the average rank of most universities declined with the expanding number of competitors, worsening morale and lengthening the climb to the top of the standings. Questions also arise about the qualifications of a larger absolute number of students and faculty.
The Forgotten Need For Teachers
In the 1950s, war veterans swelled the ranks of students. Recovering from the thin years of the Depression, colleges needed teachers quickly. Fresh Ph.D.’s staffed the rapidly expanding state universities and enlarging older institutions, too. Subsequently, democratization of educational opportunity and the baby boom sustained the college boom.
Secondarily, the government paid for training technical personnel to compete with the perceived scientific prowess of the Soviets. With fresh memories of the victories of science in World War II and ample tax revenues, the government paid for research campaigns, even a war on cancer. Such payments encouraged professors to spend more time on research and to cut their hours of contact with students from, say, nine to three per week, tripling, in this example, the need for teachers (or teaching assistants).
Notwithstanding the college boom, the fraction of Ph.D.’s employed in academe declined from about 55 percent in 1973 to about 45 percent in 1991. The fraction whose primary work is teaching dropped from 36 percent in 1972 to 23 percent in 1991. Meanwhile, the fraction no longer performing research, the presumed goal of a Ph.D., or whose work was unclear, doubled to about one-third of those surveyed. When the investment in a degree totals $250,000, one wonders whether doctoral training was a wise choice for these lost researchers, or for the nation.
By the 1980s, the demand for full-fledged teachers had slowed, a large cadre of principal investigators was in place, and the research enterprise needed skilled workers. Production of Ph.D.’s was driven no longer by the market for Ph.D.’s but by the research enterprise’s need for low-cost labor, called graduate students and postdocs. The enterprise perfumes this reality by praising the effectiveness of joint education and research. Of course, no oppressive conspiracy existed. Rather, individual faculty and funders have acted rationally in their self-interest, heedless until recently of possibly harmful collective effects.
Objective understanding of doctoral production and use demystifies many current features of the system. These include the lengthening time to get a degree and the growing number of foreign students. Doctoral students and postdocs substitute for faculty in research. They also unburden faculty, more in the humanities and social sciences, in undergraduate teaching and evaluation. Expanding graduate enrollments and postdoc ranks costs less than hiring new faculty. Moreover, faculty, especially young faculty, competing for promotion and eminence through research logically recruit yet more graduate students but lack an incentive to speed them to a degree.
Recruits to S&E face a dim future: six or seven years registered for a degree, eight or nine years from B.S. to Ph.D., then one or more postdocs, and thus no substantial income until past age 30. In the life sciences, for example, the Ph.D.’s age to a median of 33 years by the time they land their first permanent job.
American undergraduates with exceptional talent likely spy the opportunity costs posed by the long apprenticeship. Far superior incomes in other careers leave science attracting only those young Americans who hear a profound calling. In fact, the number of American male Ph.D.’s has shrunk for a quarter-century. Women and foreign students account for the growth. In many schools and fields, roughly half of graduate students and postdocs are foreign.
Foreign youth still know graduate training in America will propel them upward. Preferring to remain in the U.S., they may accept slow progression to the degree and a succession of low-paying postdocs. The practically infinite availability of young foreign talent could maintain the system as it exists, although politics, prosperity, and currencies cause fluctuations. Japan, Taiwan, Korea, and China send the most students. China, India, Malaysia, and Indonesia send particularly high fractions for engineering and science.
The Star System
Senior faculty have evolved a strategy of horizontally mobile stars, akin to “free agency” in baseball. The stars auction themselves to the highest bidder, driving up the cost of their services. Ratcheting up the top-most compensation packages, they restrict the dollars for expansion of the middle class of permanent faculty. The recent end to mandatory retirement at age 70 works in the same direction. As the middle class is restricted, the enterprise tilts from teaching toward the research that brightens the stars.
The stars’ ambitions and tastes require not more undergraduates but more workers. Thus, institutions offer or accommodate more graduate students and postdocs as part of their bid for a star, and also hire more cheap adjunct teaching faculty to moderate the wage bill. The number and years of the postdocs expanded most dramatically in biology, where the fraction of recent Ph.D.’s employed as postdocs one to four years after the American degree first climbed rapidly during the 1970s and now hovers around 40 percent. As almost all fields boarded the bandwagon, the number of S&E postdocs tripled from about 8,000 in 1975 to 24,000 in 1992. The stars are well served.
Too Many Ph.D.’s
At the bottom line, one finds the “natural production rate” of Ph.D.’s in the American system based on the population of professors in doctorate programs and the total fertility rate of each professor. Physicist David Goodstein of the California Institute of Technology puts that fertility rate at about 15 Ph.D.’s per professorial career in fields he knows, while I guess the rate necessary for breeding professors to replace the national population of S&E Ph.D.’s is about five per career. The present output exceeds the steady-state intake of faculty into U.S. schools by more than the demand from American industry and government and from abroad can absorb. Students stretch out their school years, partly because job prospects are poor and partly because funders and peers of the discipline favor money for students or recruits. The life of the postdoc provides a further way to stretch the years, but even postdoc numbers may be near saturation.
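Goodstein’s figure can be turned into a toy steady-state calculation (the two rates are from the text; the framing as a model is mine): if each professor produces about fifteen Ph.D.’s over a career and only about five per career are needed to replace the professoriate, then roughly two of every three graduates must find homes outside doctoral-granting academe, or go unabsorbed.

```python
def excess_fraction(phds_per_career: float, replacement_per_career: float) -> float:
    """Fraction of new Ph.D.'s not needed to replace their own professors."""
    return 1 - replacement_per_career / phds_per_career

# Figures from the text: ~15 Ph.D.'s produced per professorial career,
# ~5 per career needed to replace the S&E professoriate.
f = excess_fraction(15, 5)
print(round(f, 2))   # ~0.67: two of three graduates must go elsewhere
```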
Persuasive recent findings by Massy and Goldman, funded by the New York-based Alfred P. Sloan Foundation, suggest that Ph.D.’s in engineering, mathematics, and some sciences are currently overproduced by fully 25 percent.
An expansion of universities or research could temporarily absorb the excess doctorates, but within a few years, sponsoring more university research would worsen Ph.D. job prospects in S&E. Immediate gains from faculty expansion would give way to more oversupply as expanded doctoral programs produce yet more graduates.
Challenges And Opportunities
Universities must reconsider production of Ph.D.’s and the invisible hands of franchise expansion, recruiting to sustain the enterprise, and stars that propel it. We should seek positive checks on population rather than suffer the academic equivalents of famine, war, and ill health.
The prescription must produce research without producing the disillusioned. During a period when money from research remains steady or falls, some universities might well revisit an antiquated system of staffing that makes durable commitments to technicians and shelters faculty who do not hold the high expectations of fresh Ph.D.’s and postdocs. Universities could reward students who finish fast, and penalize faculty whose students loiter.
Valorizing the master’s degree in sciences would reduce exploitation. In engineering, the master’s is respected and lucrative, while in scientific fields it is a stigmatized consolation. Consider students who look forward to careers in business or secondary schools, which might be where the elusive third of the Ph.D.’s went. For them, instead of a protracted and disillusioning Ph.D., an intensive two years of science courses after a B.S. program might meet their needs while benefiting the nation and reflecting glory instead of disenchantment on the university.
Another positive prescription is to reduce the cost of research without relying on a youthful army of exploited inductees to minimize labor costs. The late Yale historian of science Derek de Solla Price resignedly conjectured that scientific results grow at the discouraging price of the cube root of the expense (Little Science, Big Science . . . and Beyond, Columbia University Press, 1986). Cannot science find routes to increase its productivity, as other service industries now aggressively do? Surely, for example, scientists in America should spend more time doing research and less time proposing and reviewing.
Affection for alma mater and recognition of the invisible hands driving her cause several of us to try seriously to create “SimU.” Opportunities come from understanding the university as a system, in particular how the actors make their decisions. In more and more useful ways, simulation games raise questions about how agents behave and how the parts of a system interact. Such tools now simulate oil refineries and factories, the oceans and the atmosphere.
Maxis Software Inc. of Orinda, Calif., has created educational and commercially successful games, engagingly called SimEarth and SimCity. Seeking a learning tool for the many people and organizations concerned with the problems and solutions discussed here, experts in universities and simulations are beginning to create a virtual alma mater of Malthusian forces, invisible hands, and stakeholders. It may help universities manage better. The proletariat who cry and whisper on the Internet deserve at least this much.
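The kind of feedback such a simulation would capture can be sketched in a few lines. The difference equation below is my own illustration, and every parameter in it is hypothetical: professors graduate students at some rate, faculty openings arise only as careers end, and the backlog of unplaced Ph.D.’s grows whenever production outruns openings.

```python
# Toy dynamic of Ph.D. production vs. faculty openings; all parameters
# are hypothetical illustrations, not figures from the studies cited.
def simulate(years: int = 20, faculty: float = 100,
             students_per_prof: float = 0.5,
             career_years: float = 30, time_to_degree: float = 7) -> float:
    """Accumulated surplus of Ph.D.'s over faculty openings after `years`."""
    backlog = 0.0
    for _ in range(years):
        new_phds = faculty * students_per_prof / time_to_degree  # graduates/yr
        openings = faculty / career_years                        # retirements/yr
        backlog += new_phds - openings
    return backlog

print(round(simulate(), 1))   # surplus grows steadily under these rates
```

Even this crude sketch shows the structural point: with a stable faculty, the backlog grows linearly, and only changes to the rates themselves, not more hiring of the same kind, arrest it.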
Jesse H. Ausubel is director of the Program for the Human Environment at Rockefeller University and a program officer for the Alfred P. Sloan Foundation in New York, where he leads the foundation’s program on “The University as a System and the System of Universities.” (The Scientist 10 (3): 11, February 5, 1996. Copyright © The Scientist, Inc.)