Reasons to Worry About the Human Environment

This article originally appeared in COSMOS, the journal of the Cosmos Club in Washington, D.C. It has been republished in the journal Technology and Society 21:217-231.

Thoughts are presented on climate and biodiversity, behavioral poisons, libido, depopulation, falling work, rejection of science, and the twilight of the West.

My subject is the insecurity many now experience in the West, by which I refer especially to Europe and its offshoots in the Americas and elsewhere. Deep and valid causes exist for insecurity:

I suspect unintentionally ingested poisons have made us violent and stupid. I fear intentionally ingested medications may be deadening libido and thus the creativity and edge of our societies. I observe the incipient implosion of the populations of most of the rich countries, sure to wreck our social security and further sap our vitality. I observe the falling portion of a lifetime that we spend employed for pay and wonder what will compensate for the eroding centrality of the work contract. I worry that we will reject science, the source of our paradoxical freedom and the most powerful means to achieve a better standard of living. And I anticipate, like Oswald Spengler in 1918, the decline of the West, the source of modern science and still its only secure home.1

I might mention other night thoughts, of financial and monetary crisis (which the economists and bankers arrogantly believe they have extirpated) or great wars or Africa’s hunger or Earth’s collision with an asteroid, but the night is only so long, and my six fears seem quite enough. More importantly, these fears cohere, culturally and historically.

First I will dwell briefly on two fears that I believe we need not harbor but that have occupied much of my work, namely climate change and loss of biodiversity. Conveying the reasons for my sanguine outlook justifies the apparent digression.

CLIMATE AND BIODIVERSITY

I do not fear climate change, because the human economy is losing its taste for carbon, the main prospective cause. The continuing, steady decarbonization of the energy system means that the concentrations of carbon dioxide will not rise much above 500 ppm, about 40 percent above today’s levels.2 Today’s levels are about 30 percent higher than 100 years ago. The century’s climate change is barely distinguishable in the statistics, notwithstanding detailed analysis by legions of climatologists. More importantly, the consequent impacts on our food production and other matters of concern are of no consequence or lost in the ruckus of history. To offer a flippant but telling example, Europeans, Americans, and Japanese ski much more now than we did in 1896, though it was colder then. I do not believe that a somewhat larger warming over the next 75 years will matter significantly for our diets, health, or incomes. Most of the economy has moved indoors, and much that has not will do so. Climate simply matters less and less.3

Though I do not fear the carbon emissions, I certainly do not defend them. The whole energy system leaks and squanders. Means abound to control it more tightly and efficiently. Engineers and entrepreneurs should get on with the business of concentrating the system safely and cleanly on gas and electricity, and introducing hydrogen and less wasteful devices for generation, distribution, and final use of energy. Researchers can ease the way with ex post and ex ante explanations.

Loss of biodiversity is also, I believe, an issue more of the past than the future. Humans have slashed and burned wildly for many centuries. Happily, observation of changing patterns of land use suggests that the worst is over in many countries, and probably globally. Cities will continue to expand where population grows or rising incomes enable each resident to sprawl further, that is, enjoy increased mobility. Fortunately, the total area cities may encompass is simply not that large. In rich, fast America, citizens take only about 600 to 1,000 square meters each.

Globally, the land used for crops is about ten times greater than that for cities. In the United States and many other countries, land cropped is shrinking, because higher yields mean plants need less area on which to grow. The trend will continue and spread to countries where yields have remained low. Similarly, the area of forests used for wood products will diminish, as the Green Revolution that transformed the grains penetrates the trees in conjunction with other means of achieving high–yield forestry. Paul Waggoner, Iddo Wernick, and I have calculated that in the United States over the next 75 years or so, while the population increases by about 100 million, the growing cities, contracting farms, and reduced need for acreage for wood products will release an area of land for other uses equal to 100 times the size of Yellowstone National Park, or an area equal to Bolivia or Nigeria.4

This study of land–sparing in the United States should be repeated for many, even all countries. Surely some will find a different, and saddening, bottom line. Nonetheless, globally the future of land belongs to Nature. Extinctions can become a thing of the past.

Incidentally, what works to save land and the animals feeding on it works in the water, too. Clever aquaculture can end the holocaust of the wild fishes. Aquaculture already approaches one–fifth of all ocean landings of fish and shellfish by weight.

The regrowth of forests brings its own set of problems. Formerly, Americans regularly burned woodland to encourage palatable grasses and legumes for livestock, to ease movement and, importantly, to keep down the snakes, chiggers, and ticks. Now the ticks are back with the woods and the deer, and with them Lyme disease and meningitis. We may regain some of the fear of forests and the animals that live in them evoked darkly in the German folk tales collected by the Brothers Grimm, published first in 1812–1815. The Grimm Brothers inspired or at least rationalized the work of the nineteenth–century deforesters. We cleared forests and drained swamps not only for land for crops and grazing but for health, safety, and mental security.

But the bugs of the woods are not among my main worries for humanity. Rather let me now explain the dangers I posed at the outset: behavioral poisons, loss of libido, depopulation, falling work, rejection of science, and, finally, the decline of the West.

BEHAVIORAL POISON

I worry that the Industrial Revolution, which has spared us from stoop labor and the hazards of the fields and brought us comfort, convenience, and mobility barely imaginable two centuries ago, has poisoned many among us. The annual worldwide production of most of the toxic heavy metals, including arsenic, cadmium, chromium, copper, lead, and mercury, has increased by 2–3 orders of magnitude over the past 1–2 centuries.5 Notwithstanding impressive gains in containing emissions, significant amounts continue to dissipate, creating chances for unintentional ingestion.6

Hippocrates recognized acute lead toxicity in ancient Greek miners. The sweetening and preserving of sour wines with lead–containing additives began with the Romans and caused severe colic, paralysis, and death until the practice was eliminated in the eighteenth century.7 The comparably acute effects of mercury and other metals have been well–documented. But the possibility, insufficiently researched, also exists that the low–level exposures to the heavy metals released by industry and the consumption of its products over the past two centuries have had serious behavioral effects.

Lead and other metallic elements profoundly alter the nervous system, and thus intelligence, memory, visual retention, and dexterity. Having no known biological role, lead only contaminates the human body. In 1983, a year for which global estimates happen to have been compiled, about 300,000 tons of lead may have been dispersed in the atmosphere and a million tons in the soil.8 At low levels, lead has been shown to cause a variety of learning disorders. Bone lead levels in fact have been convincingly linked to delinquent behaviors.9 Human skeletal lead burdens today show a 500–fold increase over the skeletons of ancient Peruvians who did not smelt.10

Cadmium, like lead, serves no biological function and is virtually absent from humans at birth. Contemporary American cadmium bone levels are about 50 times those found in the bones of Pecos Indians of the North American Southwest circa 1400 A.D.11 Groups of violent and nonviolent incarcerated male criminals differed significantly in cadmium as well as lead levels, measured in hair.12 Similarly, high cadmium and magnesium levels characterized disruptive recruits to the US Navy.13 “Manganese madness” is characterized by hallucinations, unusual behavior, emotional instability, and numerous neurological problems. Aluminum and thallium are neurotoxic to varying degrees; their symptoms include depression, difficulty in sleeping, irritability, impulsivity, and violence.

I believe we have yet to understand how the metallic traces humans bear may have affected history. Natural variation in exposure from diverse soils and other sources may explain some past and persisting “cultural” differences. The problem was surely very hard in the locales of miners, potters, glaziers, and smelters.14 Coming to the present, the rise of the metallic metabolism of the economy may help explain the observed patterns of chronic violence and, who knows, maybe even wars. Certainly other causes of dimness and violence exist, but a significant, and limitable, fraction may be attributable to this and other factors in the physical environment. We should find out. We could study the behavior of gas station attendants, auto mechanics, and other vulnerable populations.15 We might learn that improvements in air quality decrease aggressive behavior. New York City and many other parts of America have experienced dramatic drops in crime in the past few years. Police have quickly seized credit. Maybe they should share it with clean air legislation and the technologies that enable compliance.

In a more general way, the chance that small leaks from our industrial system have large behavioral consequences intensifies my interest in industrial ecology.16 Industrial ecology studies the networks of all industrial processes as they interact with each other and live off each other. Observing the totality of material relations among different industries, their products, and the environment, we find big chances for reducing wastes and drips. We will hold the grail of the materials and energy systems when no drips remain to be caught, when emissions and dissipation approach zero.

LIBIDO

The curious converse of my worry about the rise of aggression from unintentional ingestion of metals is the decline of libido from the intentional consumption of medication. Libido refers to desire or, more formally, the emotional or psychic energy derived from primitive biological urges. Recently, alarms have drawn attention to the possible role of endocrine disrupters in reproductive failure and developmental twists in humans and other animals.17 The alleged culprits include plastic wrap and pesticides. My concern is psychiatric drugging.

The “modern” era of psychiatric drug treatment began in the 1950s with the introduction of tranquilizers to control excitement, agitation, and aggressivity, mostly in hospital settings.18 The current wisdom of the doctors suggests that 5 percent to 10 percent of populations in countries such as the United States and France suffer depression. Though estimates of the affected fraction of the population are quite subjective, I accept the horribly painful and costly reality of mental illness as well as its physiological basis. Depression, for example, appears to be connected to the serotonin level in the brain, with which, incidentally, cadmium may interfere.19

Side effects of the tranquilizers and the subsequent first generation of antidepressants, the tricyclics, limited their diffusion. As for most innovations, bigger markets awaited better products. These came along in the 1980s in the form of Prozac (fluoxetine) and other drugs which selectively alter serotonin catabolism in the brain. America, always the innovator, leads in acceptance. Prescriptions for antidepressants from office–based psychiatric visits soared in the United States from about 2.5 million in 1980 to 4.7 million in 1989.20 Increases were particularly evident for male patients, young adult patients, and patients with neurotic disorders. The overall prevalence of antidepressant use in certain communities quadrupled in a recent 10–year span.21 By May 1995, 10.5 million Americans were reportedly taking Prozac, and perhaps 15–20 million worldwide.22 Many more are taking chemically similar Zoloft and Paxil, and several new antidepressants are pending approval by the US Food and Drug Administration. In France, in 1995 about 3 percent of the population used the eight main antidepressants.23 The global market for antidepressants is expected to reach more than $6 billion by 1998, having doubled in four years.24

For younger people, access to the pharmacopoeia has also broadened. The line between children with “normal” variations of temper, lively or spontaneous children who are sensitive to stimuli, and those who have a “disorder” has shifted. In part the reason may be that with more women working outside the home, younger children are required to adhere to a more organized and less flexible social structure, in school or around the home. Seeing fewer children, parents may also more quickly think a particular child extraordinary, for better or worse. Maybe the causes include lead, cadmium, and other elemental exposures. In any case, a massive increase has occurred in the United States of diagnoses of Attention Deficit–Hyperactivity Disorder (ADHD). The drug of choice in this case is Ritalin (methylphenidate), which often improves the focus and performance of those diagnosed with ADHD. In 1995, 2.6 million Americans were taking Ritalin, a five–fold increase from 1990.25 The vast majority of users are boys between 5 and 12, in fact more than 10 percent of boys in this age group.

My point is not to condemn the drugs. The drugs work. They are intended to change the individual’s mood, and they do. But social mood is also exceedingly important. As with the toxic heavy metals, I think we have yet to consider the collective effects of all these and other fresh chemicals that we are voluntarily ingesting. Yet we nod knowingly about the effects of alcoholism in many societies. We readily accept that crack cocaine caused an epidemic of violence in New York and other American cities in the mid–1980s and that its retreat contributed to falling crime.

Prozac is well–known to cause sexual dysfunction, along with general calming. Who needs endocrine disrupters? Maybe the attack on depression and hyperactivity is affecting aggression, violence, crime, and many other antisocial behaviors. But creativity in all its forms—economic, scientific, artistic—also often first appears as antisocial behavior. Maybe America and other nations are prescribing themselves a gradual but gigantic and deadly loss of libido. An ironic end to the Freudian century.

DEPOPULATION26

Long before Prozac and Ritalin, western nations began to raise fewer children. The French, during the Revolution of 1789, were the first moderns to reduce fertility. The reason was not the death of sex. Rather the Revolutionary government abolished primogeniture, the practice of bequeathing all property to the first son. The splitting of inheritance and wealth would mean downward social mobility. The lever left to the family was to reduce offspring, and parents used it. Two hundred years later, on average, a woman in Europe or Japan now bears only about 1.4 children over her fertile span. As is well known, to preserve a population, the rate should be around 2.1. The gap means that numerous national populations, and that of Europe as a whole, are imploding. To give an example, if Japanese women sustain their present fertility (1.4 in 1995), the population of Japan will fall from 125 million today to 55 million in 2100. This apparent success for family planning worries me.
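The arithmetic behind such projections can be sketched as a simple geometric series (a toy illustration, not the demographers' age-structured calculation; the ~30-year generation length is an assumption):

```python
# Sustained fertility below replacement shrinks each new generation by
# the ratio TFR / replacement. Rates and populations from the text;
# the ~30-year generation length is an assumed illustration.
TFR = 1.4          # Japanese total fertility rate, 1995
REPLACEMENT = 2.1  # fertility that holds a population steady

ratio = TFR / REPLACEMENT  # = 2/3: each generation is a third smaller

pop = 125.0  # millions, Japan today
for _ in range(3):  # roughly a century at ~30 years per generation
    pop *= ratio
print(round(pop, 1))  # bare cohort arithmetic gives about 37 million
```

The toy model overshoots the projected fall to 55 million because a real projection carries the momentum of the existing age structure; the direction and rough magnitude, however, are the same.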

First one needs to understand the reasons for family size. The logic tends to be economic, as in the French case. The main question is, are children a burden or an asset? Both, naturally, although the burden tends to fall on the female, while the asset accrues to the family as a whole.27 Historically, even in periods of high fertility, the wealthy have had few children. In a well–off family, in most circumstances, children are not assets. Growing and educating them at the appropriate standard costs a lot. They bring no income when they are young. Aging but still wealthy parents do not need them for support; care for the old people is left to third parties financed by the income or assets of the old people. If static property such as land forms the wealth, many children would inevitably split it. These reasons explain the pre–modern family size of the rich. Nowadays wealth at large links more to financial assets than to static property. Still, child costs remain the same.

In the absence of economic incentive for families, a second basic reproductive instinct remains, that of continuity. Adults beyond reproductive age who realize that there is nothing after them rage and despair. Their genes will disappear. They have traveled their mission without delivering their message.

Assuming the basic instinct for continuity is finally stronger than bare economic considerations, then every couple longs for a child. With child mortality at around 1 percent, one child should be enough. But here another argument, or instinct, comes in. The child should be male. If we put biological mechanisms in control, this request makes sense, as otherwise the Y chromosome would be lost. Perhaps the cultural practices favoring sons are an externalization of the basic instinct in folkloric disguise.

In any case, suppose couples reproduce starting with the idea of a boy. About 50 percent of them get one. The other half get a girl and a dilemma: what to do next? We may assume that the parents decide on a second try, the last, if unsuccessful. With this strategy in mind, and taking into account that about 15 percent of women never give birth for various reasons, the outcome is a fertility rate of about 1.3 per female, almost exactly the present fertility rate in European countries, including Germany, Spain, and Italy.28 If the reasoning is correct, the fertility rate is unlikely to change, because of a lack of driving forces in the short term. In fact, assuming immigration does not compensate, a population with a total fertility rate of 1.3 per female is unstable and converges to very small numbers in a few generations.
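The stopping rule just described can be checked with two lines of arithmetic (a sketch of the argument in the text, not a demographic model):

```python
# Every couple tries for a boy: half get one on the first birth and
# stop; the other half get a girl and make one final second try.
children_per_mother = 0.5 * 1 + 0.5 * 2   # = 1.5 children on average

# About 15 percent of women never give birth (figure from the text).
p_birth = 0.85

fertility = p_birth * children_per_mother  # averaged over all women
print(round(fertility, 2))  # close to the ~1.3 observed in Europe
```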

Though the implosion of our population might reopen niches for the rest of Nature, it promises severe stress for the surviving humans. Means for social security can be internal or external to the family but require children in both cases. Where means external to the family do not provide old age benefits, children are the only insurance for old age. With external mechanisms, as in the welfare state, the children in the system become a “common.” Children are economically decoupled from the family, but they are still coupled to society because, collectively, they must earn the pensions paid to old–timers. These commons can suffer a tragedy if everyone takes away and nobody restores the resource. In fact, with a plausible rate of long–run productivity growth (2 percent per year), most European countries and Japan currently do not have enough children for ensuring the pension system at levels their citizens expect. Moreover, chronic overcapacity may weaken the incentives to achieve productivity growth in these societies.

Perhaps the wreck of the pension system will set forces in action to restore higher fertility rates well ahead of the shrinking to zero of the total population. The publicity about endocrine disrupters may signal the need to try to reproduce. In fact, Sweden for a while somewhat countered the trend toward the lowering (to 1.3) of total fertility in Europe. Swedish fertility, after decreasing from 2.5 in 1964 to 1.6 in 1978, began rising in 1983, peaked at 2.13 in 1990, but has since fallen again below 2.0.29 After slumping to about 1.7 in 1976, the US total fertility rate has remained about 2.1 in the 1990s. The former Soviet Union may also provide a useful case study. Fluctuations and changes in financial systems wiped out Soviet savings and pensions, putting older persons, many of whom have had only one child, at great risk. We should watch to see whether the present Russian youngsters repeat their parents’ gamble. Understanding episodes of rising fertility and pro–natalist policies matters greatly for the West, Japan, and eventually a list of other nations. The alternative, immigration, has historically proven an irritating cure for depopulation.

As with the metals and the psychiatric drugs, we must also be alert for collective, and noneconomic, effects in the society of the small family. Increasing longevity and improving prostheses for older persons can preserve numbers and vitality for a while. But the vis vitalis, life’s vital force, is not evenly distributed by age, and indeed it may not be equal among children. Frank Sulloway has argued that birth order weighs exceedingly heavily in determining creative lives.30 First children conform to authority, while younger ones are born to rebel. Charles Darwin was the fifth of six children, and the youngest son of a youngest son going back four generations, while his mother’s father was the youngest of 13 children; Benjamin Franklin was the fifteenth of 17 siblings and the youngest son of a youngest son for five generations. If we wish to keep a low birth rate, perhaps we should at least concentrate the fertility in a few families to maintain some psychological diversity.

Such concerns are probably secondary to the strife over social insurance sure to accompany depopulation. Recall that the preceding transition, from the security of the large family, helped create the space for the Communist dream and its realizations. Indeed, the German Chancellor Bismarck adopted the first social security system in the 1880s to forestall the gains of the Socialists. As late as 1940 pensions covered only about 4 million American workers. Neither employers nor the state will welcome the intensification of the debate over social responsibility soon to come.

FALLING WORK

Scarce youth might cause a tight labor market and encourage older workers to remain longer in jobs, alleviating both youth unemployment and the social security problem. On the contrary, the present era is the worst for employment since the 1930s in the industrialized nations, with the significant exception of the United States.31 Probably 20 percent of those who want jobs lack them. Even in the United States, with low reported unemployment, workers retire ever earlier, now barely above 60. The labor force participation rate for American men aged 55–64 dropped from 90 percent in 1948 to 83 percent in 1970 to 65 percent in 1994.

One reason is surely cyclical. The world is at the low point in the roughly 55–year pulsations of the economy, the so–called Kondratieff waves.32 While traditional employers have shed many workers, the industries that will newly employ many more, basically connected with information handling, one might say the Internet for short, have started vigorously in only a few countries, particularly the United States. The Internet vastly reduces transaction costs, in money and time, and thus can coalesce innumerable new markets, for news about heavy metal rock and roll groups or for obscure types of scrap metal. The Web will make its Wirtschaftswunder, but not pervasively for another decade or two.

Nevertheless, the secular trend is toward less work.33 Since the mid–nineteenth century, on average people in the industrialized nations have been working significantly less while living longer. While the average career length has remained around 40 years, the total life–hours worked shrank for an average UK worker from about 125,000 hours in the 1850s to fewer than 70,000 hours in the 1980s. UK male workers dropped from 150,000 to 88,000 lifetime hours at paid work, while UK working women dropped from 63,000 to 40,000. Well–documented long–run reductions in annual per capita work time in many countries suggest the universality of the trend.34
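The pace of that decline can be expressed as a compound annual rate (a back-of-envelope check; the 130-year span between the 1850s and 1980s endpoints is an assumption):

```python
# Lifetime hours at paid work for an average UK worker fell from about
# 125,000 in the 1850s to fewer than 70,000 in the 1980s.
start_hours, end_hours = 125_000, 70_000
years = 130  # assumed span between the two endpoints

annual_factor = (end_hours / start_hours) ** (1 / years)
decline_pct = (1 - annual_factor) * 100
print(round(decline_pct, 2))  # just under half a percent per year
```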

The combination of reduced lifetime working hours and increased life expectancy has caused a huge shift in life experience. While in 1856 half of the disposable life–hours of workers were spent working, the portion has fallen to less than one–fifth today. If the trends continue, soon after the year 2000 half of the years of the average worker will occur before or after work. Even in the working half of an individual’s life, formal work will account for a decreasing fraction of time, one–third or less, and should leave more time for leisure and other activities, such as caring for a child (or two) and the home. If the long–term trends continue at their historic rates, the work week might average 27 hours by the year 2050.

The formalized work contract has historically been the central economic and social fact in industrialized countries. It has not only regulated the standard of living, but also served as the most important factor for social integration. The secular trend away from the formal work to other socially obligatory activities and to free time implies numerous social challenges. Societies must examine whether their employment, pension, education, and other policies reflect the dominance non–work and free time have obtained over work.

New organizational models of distributing employment should be possible and indeed are already evident. “Temporary” employment agencies have displaced the famous manufacturing corporations as the largest employers in the United States. Labor economist Ronald Kutscher has described the strategy of assembling a “just–in–time labor force,” analogous to the parts now kept flexibly available in “just–in–time inventory.”35 The projected 27–hour average work week may match temporary and contingent workers even better. In any case, labor market policies can enhance the distribution of work through the shortening of working time, more flexible working hours, and job sharing.

The fall of work changes pensions on a scale comparable to the demographic implosion. In the 1850s, a career for a UK male averaged about 47 years. Before education became mandatory, work began young, often around 10, and healthy men labored until they died. Indeed, at age 10 males expected only about 48 more years of life, so many did not experience the natural end of a working career or feel the need for a pension. Now US males on average seek about 10 years and females about 20 years of pension.

Superior management of the financial assets of pension systems may raise the rate of return on the funds invested, but the gain seems unlikely to overcome calculable deficits without also postponing the age of retirement and eligibility for pension. Scaling pensions to contributions and life left solves the problem, but politicians will not dare to implement it. Unfortunately, the present reality is that per capita work is disappearing, so people are retiring, and thus ceasing to contribute, ever younger.

In the age of work through which the industrialized nations appear to have passed, the corporation was the central and characteristic institution of society. The corporation in its several forms was one of the great organizational innovations of the nineteenth century, an enabling technology for the Industrial Revolution, shaping not only income and health but social status, security, architecture, and numerous other features of the human environment.36 Obviously, if work time shrinks, the times of life that are not part of the formal work contract expand. With formal work losing its traditional place, so probably does the corporation.

Dynamic technology, markets, and management compound the question of the social role of the corporation. In many sectors several generations of corporations come and go in an 80–year human lifetime or even in a 40–year work career.37 In the American model of easy corporate entry and exit, a large fraction of private sector workers are employed by corporations whose life expectancy is short, 10 years or less. Moreover, the globalization of industry appears to favor frequent relocation of sites of employment in the quest for competitive advantage. Perhaps universities and other long–lived organizations, including parts of government, can play a larger role in a world in which work offers short–lived identity and stability. The need is for institutions which impart continuity.

I have only hinted at the essential cause of falling work. At the outset of this essay I emphasized the inexorable role of technology in raising the productivity of energy and land. Technologies spare not only physical resources, they spare labor. In 1975, 12.5 hours of American labor were required to produce a metric ton of cold–rolled steel sheet; in 1995, 4.4 hours were required. In fact, I suspect the desire to work less while living longer drives the human social system. I am scared because the prospect of success may be near, that is, a prospect in which globally and on average the production of goods is largely a solved problem, but the distribution of wealth, security, and status is not.

REJECTION OF SCIENCE

A large population might work less and tread more lightly in our environment either by restraining its consumption or by getting much more clever at both production and consumption. I am pessimistic about the chances for managing our wants downward. Few rich choose to become poor and few rich feel rich enough, so we must revolutionize the economy. The way is to better everything we do by 2 percent per year faster than population change, to compound productivity gains broadly, year after year.

Knowledge is what now grows productivity, and science and engineering are the most powerful forms of knowledge. They demonstrate their effectiveness every moment. Wisely used, science and technology can liberate the environment, can spare the Earth.38 Carbon–free hydrogen energy, food and fiber decoupled from acreage, and closed–loop industrial ecosystems can assuage fears about changing climate, vanishing species, and poisoning metals.

The aggressive search for knowledge and its application is perhaps the most significant contribution of Western civilization. The game began centuries ago but has reached completely new levels in the past 50 years, above all in the United States. Many industries have systematized their search for better practice and have the gains to show. I have mentioned the case of steel. The hard search is costly and requires skillful organization. And, I would emphasize, courage and confidence and the tolerance that can accompany them. Science, the structured and sanctioned overthrowing of authority, is the purest form of continuous improvement.

I believe the greatest threat to future well–being is the rejection of science. Having come this far, the 5.8 billion cannot take the road back. The Islamic world held the cutting edge of science until past 1100 A.D. Then it rejected the windmill and, later and repeatedly, the printing press. Loss of economic and political leadership followed.39 The objects of science, the technology, can be taken without the values. The corsairs of the Barbary Coast that raided British vessels for 50 years or so were in fact piloted by renegade Britons and Icelanders; when the foreigners died, their knowledge of sailing was rejected and forgotten. Voltaire noted that after 60 years of Swiss watch exports, no one in the Middle East could make or repair a watch.

So cultures can and do reject science. Or be excluded from it. Women have been. Historian David Noble convincingly traces the exclusion to the clerical ascetic culture of the Latin church, which gave birth to modern science but only as a male vocation.40 The otherwise revolutionary Galileo, concentrating on his own calling and knowing the costs of raising and marrying daughters, contrived to place his two daughters, aged 11 and 12, in a convent in the year 1613. One, Livia, suffered a permanent breakdown. The other, Virginia, whom her father praised as “a woman of exquisite mind,” dreamed of fathoming the heavens. She never left the strict enclosure of San Matteo in Arcetri, which did not permit her scientific pursuits.

In 1950, in the United States, one woman and 416 men received doctoral degrees in engineering, while five women and 353 men did so in physics. Today in the United States about one in six doctoral students in engineering is a woman; the ratio is one in four in the physical sciences and mathematics. In most sectors, the feminization of work and power is now well underway. While women provided about 15 percent of career years in the United Kingdom in the 1850s, they currently provide well over 30 percent. Science now seems likely to suffer if women reject it or it rejects women.

We should, of course, recall that the same church culture that excluded women had, during the twelfth and thirteenth centuries, adopted an idea, Baconian natural theology, that changed the role of men (at least) from passive recipients of spiritual messages through natural phenomena to active seekers of an understanding of the divine nature as reflected in the pattern of creation. The Book of Nature joined the Book of Scripture as a way to the mind of God. This opened the door to objective examination of nature and to experimental science. Medieval historian Lynn White, Jr. conjectured that the timing of the emergence of science may have owed something to the process of defending the Christian position against the heresy of the Cathars, who upheld the existence of two gods, one good and one evil.41 The creation of the evil god was the visible universe.

The point is that past changes in science have related to changes in basic religious attitudes, in aesthetic perceptions, and in social relationships, as well as to economics and politics. Along with money, science must have a positive emotional context to thrive.42 As White observed, the modern outburst of scientific activity is not necessarily permanent.

THE TWILIGHT OF THE WEST

Why would science fail? Because of my final worry, the decline of the West, Europe in particular. As stated earlier, modern science is the greatest gift of Europe, rooted in particular forms of Christianity. Now I observe Europe full of self-doubt and self-hatred. Partly the mood properly reflects the lethal misadventures of this and other centuries. Still, the morbidity reaches its perigee in European as well as in American groups in the discussion of “sustainable development,” the environmental rubric for a confined future. It is hard to believe that at the start of the twentieth century Europe was spreading its messages confidently to every corner of the planet. I wonder whether the self-mortifying West (or North) will be rich or expansive enough to maintain science.

The United States and some of the other European offshoots, Canada and Australia, for example, still seem to be growing, at least demographically. The United States acquired the scientific lead from Europe in the 1930s and has carried it to new and glorious levels, especially in California, where inhibitions are famously few. Perhaps the former European colonies will maintain science for a century or two after Europe abandons it. Goa, in India, built baroque Portuguese cathedrals long after the metropole stopped. Latin America, which belatedly picked up the European form of the novel and made it flourish, magically, in the late twentieth century, might do the same for science.

Sadly, one can imagine a shrinking Europe, whose residences fill with immigrants from the Maghreb, who spread their culture, hostile to science. The 5,624 mills listed in England in the Domesday Book of 1086 exceeded the mills in the Ottoman Empire at its height.43 The Far East remains a question. A recent Nature article entitled “Can Confucius Excuse Poor Creativity?” listed factors that seriously undermine Korean creativity.44 Then Nature queried, “And those who have returned from the creative hot-houses in the West? As soon as they return, it is said with a rueful laugh, they become Koreans again.” Spengler perceptively characterized Western culture as “Faustian,” symbolized by pure and limitless space, limitless striving and aspiration, its architectural symbols the soaring vaults and spires of a Gothic cathedral.45

Spengler particularly contrasted the West’s Faustian culture with what he called the Magian, whose proponents dwell in a magical world of mysterious presences. Western culture had superseded the Magian around 1000 A.D., according to Spengler. For how long?

CONCLUSION

We have causes for insecurity. We may be stupefied, subdued, aged, underemployed, and ashamed. Under these conditions we will not reproduce ourselves or our culture, and then I believe the best chance for a better human environment for everyone is lost.

Civilizations have simply melted away because of poor reproductive rates of the dominant class. We should not forget that the European supremacy started with a reproductive stir during the last part of the first millennium and continued with ups and downs until the end of the nineteenth century. The question may be whether underneath the personal decision to procreate lies a subliminal social mood influencing the process, as endorphins do. The subliminal mood of Europe and its retinue could now be for a blackout after 1,000 years on stage.

Yet, my hope is that the West is merely in the trough of a Kondratieff wave, basking in the depression. After all, science is surely early in what it can achieve. A look at progress in mapping the genome proves it. We must create the wealth and continue lifting the inhibitions to enable science to flourish. Science in turn can provide the means for a pure, rich economy. We can sell science and its products lucratively and helpfully to the rest of the world. It frees us, though often to solve problems that did not exist without science. But to overcome our morbidity and restore our belief in infinity will require more than science, perhaps a new or revived religion.

Acknowledgments

I am grateful to Arnulf Gruebler, Cesare Marchetti, Perrin Meyer, Paul Waggoner, Iddo Wernick, and Doron Weber for working with me on the problems discussed here; none of them is responsible for the views expressed.

REFERENCES

1 – O Spengler, The Decline of the West, Oxford, New York, one volume edition, republished 1991. Spengler published the 1st volume of his Der Untergang des Abendlandes in 1918 and the 2nd in 1922.

2 – JH Ausubel, A Gruebler, and N Nakicenovic, Carbon Dioxide Emissions in a Methane Economy, Climatic Change 12(3):245–264, 1988.

3 – JH Ausubel, Does Climate Still Matter? Nature 350:649–652, 1991; JH Ausubel, The Liberation of the Environment, Daedalus 125(3):1–17, 1996.

4 – PA Waggoner, IK Wernick, JH Ausubel, Lightening the Tread of Population on the Land: American Examples, Population and Development Review 22(3):531–545, 1996.

5 – JM Pacyna, Atmospheric Trace Elements From Natural and Anthropogenic Sources, in Toxic Metals in the Atmosphere, JO Nriagu and C Davidson, eds., Wiley, New York, 1986.

6 – RU Ayres and LW Ayres, Industrial Ecology: Towards Closing the Materials Cycle, Edward Elgar, Cheltenham UK, 1996.

7 – J Eisinger, Lead and Wine: Eberhard Gockel and the Colica Pictonum, Medical History 26:279–302, 1982. Lead was also used in women’s cosmetics (Science 265:1655, 1994).

8 – JO Nriagu and JM Pacyna, Quantitative Assessment of Worldwide Contamination of Air, Water and Soils by Trace Metals, Nature 333:134–139, 1988.

9 – HL Needleman, JA Riess, MJ Tobin, GE Biesecker, and JB Greenhouse, Bone Lead Levels and Delinquent Behavior, Journal of the American Medical Association 275:363–369, 1996.

10 – JE Ericson, H Shirahata, and CC Patterson, Skeletal Concentrations of Lead in Ancient Peruvians, New England Journal of Medicine 300(17):946–951, 1979.

11 – JE Ericson, DR Smith, and AR Flegal, Skeletal Concentrations of Lead, Cadmium, Zinc, and Silver in Ancient North American Pecos Indians,
Environmental Health Perspectives 93:217–223, 1991.

12 – L Hastings, Neurotoxicology of Cadmium, in Handbook of Neurotoxicology, LW Chang and RS Dyer, eds., Marcel Dekker, New York, 1995, pp. 171–212; RO Pihl and F Ervin, Lead and Cadmium Levels in Violent Criminals, Psychological Reports 66:839–844, 1990.

13 – RE Struempler, GE Larson, and B Rimland, Hair Mineral Analysis and Disruptive Behavior in Clinically Normal Young Men, Journal of Learning Disabilities 18(10):609–612, 1985.

14 – Recall also that early paints, glasses, and stucco were often loaded with lead.

15 – Bootleggers, for example. Moonshine whiskey was often loaded with lead (and other metals), because car radiators served as condensers.

16 – RA Frosch, Toward the End of Waste: Reflections on a New Ecology of Industry, Daedalus 125(3):199–212, 1996.

17 – T Colborn, D Dumanoski, and JP Myers, Our Stolen Future: Are We Threatening our Fertility, Intelligence, and Survival?— A Scientific Detective Story, Dutton, New York, 1996.

18 – DH Jacobs, Psychiatric Drugging—40 Years of Pseudo–Science, Self–Interest, and Indifference to Harm, Journal of Mind and Behavior 16(4): 421–470, 1995.

19 – RB Rastogi, Z Merali, and RL Singhal, Cadmium Alters Behaviour and the Biosynthetic Capacity for Catecholamines and Serotonin in Neonatal Rat Brain, Journal of Neurochemistry 28:789–794, 1977.

20 – M Olfson, GL Klerman, Trends in the Prescription of Antidepressants by Office–Based Psychiatrists, American Journal of Psychiatry 150(4):571–577, 1993.

21 – A Hume et al., Is Antidepressant Use Changing? Prevalence and Clinical Correlates in Two New England Communities, Pharmacotherapy 15:78–84, 1995.

22 – D Pine, The Prozac Dilemma, Woman’s Day 58(6):70–74.

23 – F Rouillon, C Blachier, JP Dreyfus, M Bouhassira, and MP Allicar, Etude Pharmaco-Epidemiologique de la Consommation des Antidepresseurs en Population Générale, Encephale 22(1):39–48, 1996.

24 – H Critser, Oh, How Happy We Will Be, Harper’s 292(1753): 39–48, 1996.

25 – LH Diller, The Run on Ritalin: Attention Deficit Disorder and Stimulant Treatment in the 1990s, Hastings Center Report 26(2):12–18, 1996.

26 – This section draws heavily on C Marchetti, PS Meyer, and JH Ausubel, Human Population Dynamics Revisited with the Logistic Model: How Much Can Be Modeled and Predicted?, Technological Forecasting and Social Change 52:1–30, 1996.

27 – A mother’s work consists mainly of grooming, feeding, safeguarding, and so forth; one recent study showed that the average American woman at home spends less than ten minutes a day playing with or reading to her child (S Scarr, Mother Care, Other Care, Basic Books, New York, 1984, pp. 26–27).

28 – If 15 percent of females have no children, 43 percent have one child, and 42 percent have two, the total fertility rate equals 0.43 × 1 + 0.42 × 2 = 1.27.

29 – J-P Sardon, Fécondité Générale, L’Institut National d’Etudes Demographiques, Paris, 1994.

30 – F J Sulloway, Born to Rebel, Pantheon, New York, 1996.

31 – International Labor Organization (ILO), Yearbook of Labour Statistics, 55th Issue, ILO, Geneva; International Labor Organization, World Employment 1996/1997: National Policies in a Global Context, ILO, Geneva, 1996.

32 – C Marchetti, Fifty–Year Pulsation in Human Affairs: Analysis of Some Physical Indicators, Futures 17(3):376–388, 1986.

33 – This section draws on JH Ausubel and A Gruebler, Working Less and Living Longer: Long-Term Trends in Working Time and Time Budgets, Technological Forecasting and Social Change 50(3):195–213, 1995.

34 – Since the mid–1980s the decline in annual working hours for those who work full–time appears to have slowed down, even reversed in some countries (O Marchand, Une Comparaison Internationale de Temps de Travail, Futuribles 165–166(5–6):29–39, 1992). This factor may have accentuated the unemployment problems in several countries.

35 – R Kutscher, Growth of Service Employment in the United States in Technology in Services: Policies for Growth, Trade, and Employment, BR Guile and JB Quinn, eds., National Academy, Washington, DC, 1988, pp. 47–75.

36 – C Kaysen (ed.), The American Corporation Today, Oxford, New York, 1996.

37 – WL Crum, The Age Structure of the Corporate System, University of California, Berkeley, 1953.

38 – JH Ausubel, The Liberation of the Environment, Daedalus 125(3):1–17, 1996; JH Ausubel, Can Technology Spare the Earth?, American Scientist 84(2):166–178, 1996.

39 – B Lewis, The West and the Middle East, Foreign Affairs 76(1):114–130, 1997; B Lewis, Islam in History: Ideas, People, and Events in the Middle East, 2nd edition, Open Court, Chicago & La Salle IL, 1993.

40 – DF Noble, A World Without Women: The Christian Clerical Culture of Western Science, Knopf, New York, 1992.

41 – L White, Jr., Dynamo and Virgin Reconsidered: Essays in the Dynamism of Western Culture, MIT, Cambridge, 1968. See especially “The Context of Science,” pp. 95–106.

42 – For a congeries of present “anti-science” in the United States, see PR Gross, N Levitt, and MW Lewis, eds., The Flight from Science and Reason, Annals of the New York Academy of Sciences Vol. 775, New York, 1996. Also indicative of the present mood, writer John Horgan argues that researchers have reached the limits of knowledge in many disciplines in The End of Science (Addison Wesley, New York, 1996). TF Homer-Dixon even argues for the decumulation of the stock of knowledge in The Ingenuity Gap: Can Poor Countries Adapt to Resource Scarcity?, Population and Development Review 21(3):587–612, 1995.

43 – L White, Jr., Medieval Technology and Social Change, Oxford, New York, 1966, pp. 80–89; CP Issawi, The Middle East Economy: Decline and Recovery: Selected Essays, Markus Wiener Publishers, Princeton, NJ, 1995. The Nobel Prize-winning physicist Abdus Salam was a member of Pakistan’s Ahmadiya community, a persecuted religious minority declared “beyond the pale of Islam” by an international panel of Muslim jurists in 1974; Ahmadis believe that the second coming of Christ happened in India nearly a century ago (see Nature 384:296, 1996).

44 – Nature 384:197, 1996

45 – Contrast Spengler’s characterization with the innumerable publications of the past decades on “limits” and “scarcity.” These are well summarized in JE Cohen, How Many People Can the Earth Support?, Norton, New York, 1995.

Resources and Environment in the 21st Century: Seeing Past the Phantoms

1. Introduction

The marathon of economic development kicks up clouds of questions about resource and environmental stresses. In the end, they come down to two. Will there be enough resources to serve humanity? Will the environmental fallout from the use of resources harm other life and us?

The expected answer to each question is “yes.”

As I will report, my search of the book of history and my projections for the 21st century based on the historical patterns I find suggest that many of the usual specters of shortage and fallout are phantoms. Instead, I see a society learning to use resources efficiently and cleanly. Keys described along the way include market substitution, precision agriculture, dematerialization, decarbonization, and industrial ecology. Unfortunately, I will not leave you, the reader, without worries. But I hope to shift attention from the spellbinding phantoms to the real stresses that we should sweat to relieve. Let us not exhaust ourselves chasing phantoms when the race before us is long and hard anyway.

First, I offer a few words to describe the two multipliers of resource and environmental factors: population and GDP per capita. As I proceed, the divisors will also emerge.

Population: Humans now number about 5.8 billion. Annual growth peaked near 2% in 1970 and has dropped below 1.4%. More than 90% of growth is in the so-called developing countries. In fact, the populations of one set of countries are exploding while those of another are imploding. Fertility at or above the replacement rate of 2.1 children per woman, combined with increases in length of life, makes for explosions in countries such as India, Iran, and Nigeria. Fertility far below replacement and populations already near the present limit of longevity start slow implosions in countries such as Japan and Italy. For example, if Japanese women continue bearing about 1.4 children, the population of Japan will fall from about 125 million today to 55 million in the year 2100. US population seems likely to grow about 50% over this period, from about 270 million to about 400 million. Most demographers expect world population to reach 10 billion by 2100, a compound growth rate from today of about 0.5%/yr. However, because no one knows how many children parents will choose in the future, this number may be a way station upward, a plateau, or a peak before a fall.
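The quoted rate can be recovered with a short calculation. This is a sketch; the roughly 100-year horizon from the late 1990s is my assumption for the arithmetic, not a figure from the text:

```python
def implied_rate(start, end, years):
    """Compound annual growth rate carrying `start` to `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

# World population: about 5.8 billion now, 10 billion by 2100.
rate = implied_rate(5.8e9, 10e9, 100)
print(f"{rate:.2%}")  # about 0.5% per year
```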

GDP per capita: Over long periods, including the booms and busts no economy has learned to escape, GDP per capita grows slowly. For example, the annual rise of GDP per person in the US has averaged 1.7% since 1900 and 1.4% since 1985. Between 1965 and 1996 the GDP per Mexican rose 1.7% yearly, per Indian 2.4%, and per Japanese 3.7%. The present slump in Asia reminds us that economies rarely sustain high growth for more than two to three decades.

Combining annual growth of population and GDP per capita gives GDP growth, the total multiplier of resource and environmental matters. Since the start of the industrial revolution, GDP in the US and UK has risen about 2.3%/yr. During the long run ahead, global growth of between 2% and 3%/yr again appears likely. Unmodified by a divisor, compounding at 2.3% doubles a resource demand or an environmental impact in about 30 years, quadruples it in 60, and multiplies it eightfold in about a century. By this arithmetic, world GDP itself would rise from about $25 trillion to $200 trillion, giving 10 billion people $20,000 each in 2100.
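The compounding is easy to verify. The sketch below assumes the 2.3%/yr figure from the text and reads “about a century” as three doubling times, roughly 91 years:

```python
import math

RATE = 0.023  # historical US/UK GDP growth per year, from the text

# Doubling time under compound growth at 2.3%/yr.
doubling_time = math.log(2) / math.log(1 + RATE)  # about 30 years

factor_60 = (1 + RATE) ** 60   # about 4-fold in 60 years
factor_91 = (1 + RATE) ** 91   # about 8-fold in three doublings
gdp_2100 = 25e12 * factor_91   # about $200 trillion, from $25 trillion
```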

2. Will there be enough?

Energy: Globally over the past two centuries, a succession of primary sources has held the largest share of the energy market: first wood and hay (dominating until the 1870s), then coal (peaking in the 1920s), and oil (peaking in the 1970s). If the succession continues, natural gas will move ahead, peaking around 2030.

So far, humanity has burned about 180 billion tons coal equivalent (btce), a reference point for the other fuels. We have burned about 75% as much oil (130 btce), and 40% as much natural gas (75 btce). In a scenario of continuing market substitution, perhaps 100 btce more of coal will be used, 300 btce of oil, and 1000 btce or more of gas. This scenario implies a worldwide phase-out of coal during the next 50 years at 2.5%/yr, leaving huge reserves underground.
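The “perhaps 100 btce more” of coal is consistent with a continuous 2.5%/yr phase-out over 50 years. In the sketch below, the starting rate of roughly 3.3 btce of coal burned per year is my round number for the period, not a figure from the text:

```python
import math

def cumulative_use(rate0, decline, years):
    """Integral of rate0 * exp(-decline * t) for t from 0 to years."""
    return rate0 * (1.0 - math.exp(-decline * years)) / decline

# Coal use declining continuously at 2.5%/yr for 50 years.
print(cumulative_use(3.3, 0.025, 50))  # roughly 94 btce
```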

Proved oil reserves (which exceed 200 btce) and resources can meet the oil demand. Though the bulk of its use lies ahead, oil will fade from prominence. Beginning in a decade or so, fuel cells drawing on natural gas and, later, neat hydrogen will power the world’s vehicles. The mean secular decline in oil production may be 1.5%/yr.

Although the upper bound of estimates of technically recoverable natural gas resources is now about 700 btce, evidence is growing of widespread and more abundant methane than traditional geological theory holds. This evidence lifts estimates of gas steeply even in the US and keeps prices low. I match this optimism with a caution and a business opportunity. Expanding demand for gas, at perhaps 4%/yr, will require construction and operation of pipelines and other means for shipment with much larger and more extensive capacity than now.

Globally, more than one-third of primary fuels now becomes electricity, twice the share of 50 years ago. Still, two billion people lack electricity. The unique ability to run information machines, plus opportunities in transport now powered by oil, will combine with the billions yet to be wired to grow electric demand. Demand may conceivably grow from the present 1.5 terawatt-years (1 TW = 10⁹ kilowatts) to 30 TW-years or more in 2100. Ten billion people using today’s US electricity per capita would consume about 15 TW-years.
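The 15 TW-year figure follows from simple multiplication. The per-capita figure below, an average continuous draw of about 1.5 kilowatts per American, is my assumption chosen to match the era, not a number from the text:

```python
# 10 billion people at today's US per-capita electricity use.
per_capita_kw = 1.5                        # assumed average continuous draw, kW
people = 10e9
total_tw = per_capita_kw * people / 1e9    # convert kW to TW
print(total_tw)  # 15.0, i.e. about 15 TW-years of electricity each year
```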

Dividing the multipliers of population and GDP, efficiency in generation, transmission, distribution, and end-use will temper demand. Efficient gas turbines appear the generator of choice for a broad mid-range of additions to capacity. At prices up to about $4.50 per million BTU, twice the recent average price, gas should gradually drive coal from power stations. Approaching a $200 trillion economy, annual gas use could peak at 30 × 10¹² m³, an order of magnitude above oil’s peak use. Absolute gas use should peak about the year 2060.

Notwithstanding hopes, the so-called solar and renewable energies (wind, photovoltaics, biomass, et al.) still show promise only for small niches. Solar enthusiasts tend to stress that solar energy is free. All natural resources are in fact free. What costs is mobilizing the resources and making the products flow to the consumer in the proper form and amount. Here the solar and renewable proposals continue to bog down. Even with technical breakthroughs, these sources, and their hard counterpart, fusion, would take a long, long time to reach a consequential level.

Making gigawatts at each site, nuclear plants already add consequential amounts to the world’s energy supply and continue to offer advantages for large additions to capacity. Either once-through or recycled uranium can fill likely nuclear demand. A scenario of high demand invokes worries about uranium supplies and thus about recycling, which so-called breeder reactors can perform. While we are sheltered by the energy system’s main reliance on natural gas for the next 60-70 years, the growth of nuclear energy has time to resume. Generating large amounts of both electricity and hydrogen in the latter part of the 21st century, nuclear should become the primary source of energy.

Continuing substitutions of one source for another should amply supply 21st century energy markets. The rapid growth of gas supply, the gradual penetration of nuclear energy, and the inexorable drive toward efficiency relieve the stress.

Land and Food: For centuries, farmers expanded cropland faster than population grew, and thus cropland per person rose. Fifty years ago, farmers stopped plowing up more nature per capita (Figure 1). Meanwhile, growth in calories in the world’s food supply has continued to outpace population, especially in poor countries. Per hectare, farmers lifted world grain yields 2.1 percent annually between 1960 and 1996.

Frontiers for even more agricultural improvement remain open. On the same area, the average world farmer grows only about 20% of the corn or beans of the top Iowa farmer, and the average Iowa farmer lags more than 30 years behind the yields of his most productive neighbor. Top producers now grow more than 20 tons of corn per hectare compared with a world average for all crops of about 2.

Globally, the future lies with precision agriculture. Technology and information help the grower use precise amounts of inputs (fertilizer, pesticides, seed, water) exactly where they are needed. Precision agriculture includes grid soil sampling, field mapping, variable-rate application, and yield monitoring, all tied to global positioning.

If during the next 60 to 70 years the world farmer reaches the average yield of today’s US corn grower, ten billion people will need only half of today’s cropland while they eat today’s US calories. The land spared exceeds Amazonia. This sparing will happen if farmers maintain the yearly 2.1% worldwide growth of grain yields achieved between 1960 and 1996. In other words, if innovation and diffusion continue as usual, feeding people will not stress resources. Even if the rate of improvement falls to half, an area the size of India, globally, will revert from agriculture to woodland or other uses. A vegetarian diet of 3,000 calories/day halves the difficulty or doubles the land spared.
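The halving of cropland can be checked directly. The sketch assumes a 65-year horizon, my midpoint of the “60 to 70 years” in the text:

```python
yield_factor = 1.021 ** 65         # yields compounding at 2.1%/yr
population_factor = 10e9 / 5.8e9   # 10 billion people versus 5.8 billion today
land_needed = population_factor / yield_factor
print(round(land_needed, 2))  # about 0.45: roughly half of today's cropland
```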

Success at growing food on land spares animals in the seas, where fishers stress many wild stocks. So can farming parts of the ocean. Just as fertilizer grows more corn, adding nutrients to the barren tropical ocean might, for instance, multiply selected fish a thousand times.

Despite more calories on average, a billion people remain hungry. Their number could grow or shrink, depending above all on distribution of income. Meanwhile, the continuing evolution of efficient use can assure ample resources on average.

Forests: Forests are cut to clear land for farms and settlements and also for fuel, lumber, and pulp. In the rich countries, nevertheless, forests have re-grown in recent decades. Since 1950 the volume of wood on American timberland has grown 30%, while European forests have similarly increased in volume and extent. In the US, the intensity of use of wood, defined as the wood product consumed per dollar of GDP, has declined about 2.5% annually since 1900. In 1993 an average American consumed half as much timber for all uses as a counterpart did in 1900.

In the US, the likely continuing fall in the intensity of use of forest products should more than counter the effects of growing population and affluence, leading to an average annual decline of perhaps 0.5% in the amount of timber harvested for products. A conservative 1.0% annual improvement in forest growth would compound the benefit of steady or falling demand and could shrink the area affected by logging by 1.5% annually. Compounded, the 1.5% would shrink the extent of logging by half in 50 years. If one half of this amount occurs by leaving uncut areas now cut, the area spared is 50 million hectares, the size of Spain. A similar pattern is likely in Europe as well as in Argentina and numerous other countries.
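The 50-year halving follows from compounding the 1.5% annual shrinkage:

```python
# Area affected by logging, shrinking 1.5% per year for 50 years.
remaining = 0.985 ** 50
print(round(remaining, 2))  # 0.47: the extent of logging falls by about half
```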

In poor regions of tropical countries such as Brazil, Indonesia, and Zaire, the dominant force stressing forests remains the struggle to subsist. During the last couple of decades, the removal of tropical forests has been estimated at 1 percent per year. Until overcome by better livelihoods, cheap land, cheaper fuels, superior alternative wood products in the marketplace, taboos, or other factors, the one-time conversion of forests to money, cropland, or fuel will continue. Nevertheless, the global expansion of forests and rising incomes are encouraging.

Water: Globally abundant, fresh water is unevenly distributed. Although ten times present use falls on Earth, only about one-fifth of this is easily reached or used. Almost two-thirds of water goes for farming, one-quarter for industry, and less than one-tenth for cities. Hydroelectric generation currently uses (but does not consume) 2.6 times the average runoff in the conterminous US by running water through several turbines along a river. In the US both total and industrial withdrawals appear to have peaked in the 1970s (Figure 2).

Opportunities to increase efficiency abound. Crops take up only about one-third of what is applied. Shrinking the area of farmland would likely save much water, even if the fraction irrigated rises. Industries and cities can recycle wastewater, if energy is cheap and pollutant removal does not cost dear. History shows that when water becomes scarce, use becomes efficient.

The time to design and build water infrastructures stretches for decades. International and continental management remain novel. Climatic variability and change make the diversity of wide nets of water infrastructures and markets attractive, but they may be hard to arrange. Because a major cost is pumping, the key of cheap energy can relieve stresses.

Materials: During the past two decades metals, industrial minerals, and construction minerals, as well as energy, forest, and agricultural products, have not grown more costly despite the rise of the multipliers of population and GDP. An average American, for example, consumes about 50 kg per day of these materials. Fuel forms about 40% of the material inputs and most of the emissions (Figure 3). Although the mix differs, wealthy countries such as Japan, the Netherlands, and the US consume about equal kilos per person.

Because suppliers, such as miners, have vastly improved their efficiency, iron, copper, and other ore previously considered low grade still yield products and profits. Meanwhile, efficiency for the consumer in product design and changes in consumption patterns have favored dematerialization, less material per dollar of GDP to do a task. Lowering the material intensity of the economy could preserve landscape and natural resources, lessen garbage, and reduce hazardous exposures. Economical energy, efficiency, and clever upgrading of plentiful materials divide the demand that population and GDP multiply and make sustained or widespread shortages of materials unlikely.

3. Will the fallout harm?

Habitat: People transform land by building, logging, and farming. Globally, if an additional 4 billion people pave and otherwise develop land at the present rate of Californians (about 600 m2 each), cities will consume about 240 million hectares, midway in size between Mexico and Argentina. This area appears likely to be offset by land spared from logging in the US and other countries that now reduce their cutting of forests. The likely added land spared from crops globally over the time it takes to reach 10 billion people suggests a net worldwide return to Nature of lands equal to India (3 Nigerias or 6 Spains). The distribution of these lands will greatly affect the chances recreated for habitat for flora and fauna.
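The 240 million hectares can be recomputed in a couple of lines:

```python
# 4 billion additional people developing 600 square meters each.
hectares = 4e9 * 600 / 1e4   # 10,000 square meters per hectare
millions = hectares / 1e6
print(millions)  # 240.0 million hectares
```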

Climate: The 30% increase in greenhouse gas concentrations, particularly carbon dioxide (CO2), since the start of the industrial revolution may already have changed climate some, and more change seems likely. How much depends on the rate of decarbonization of the energy system. Decarbonization means we obtain more energy and income per carbon atom. The evolving shares of the leading primary energy sources (wood & hay, coal, oil, gas) mix hydrogen and carbon in increasing ratio and have in fact caused a gradual, global decarbonization of the energy system (Figure 4). Sources with no carbon at all, such as hydro, nuclear, and solar, decarbonize further.

Now, additions to atmospheric carbon dioxide come mostly from transport (oil) and power generation (coal), but they will come largely from gas when it dominates energy supply. Fuel cells, essentially continuous batteries fed by hydrogen extracted from natural gas or manufactured by nuclear plants, and other electromotive propulsion, promise to multiply automotive efficiencies soon. Fuel cells can produce zero pollutants and correspondingly reduce carbon emissions from transport.

To stabilize atmospheric CO2 at twice pre-industrial levels (550 ppm) probably requires limiting emissions from 1990 onward to about 900 billion tons of carbon. My decarbonization scenario emits about 500 billion tons and could stabilize atmospheric concentrations at about 450 ppm, one-quarter above present levels. Still lower levels could be achieved by sequestering carbon, for example, in aquifers whose primary rocks are silicates that form stable carbonates plus silica from CO2.
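The concentration benchmarks are mutually consistent. The pre-industrial (~275 ppm) and present (~360 ppm) levels below are my round numbers for the period, not figures from the text:

```python
pre_industrial = 275.0  # assumed CO2 concentration (ppm) before industrialization
present = 360.0         # assumed CO2 concentration (ppm) in the late 1990s
print(pre_industrial * 2.0)  # 550.0 ppm, the stabilization level discussed
print(present * 1.25)        # 450.0 ppm, one-quarter above present levels
```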

The consequences of climate change depend on the adaptivity of plants, animals, and social systems. Analysts have produced fragile and robust scenarios and everything in between. In general, the adaptable and richer are safer.

Acidity: The precipitation on forests and lakes of acid-producing residues, mainly in the form of industrially released sulfur dioxide (SO2) and nitrogen oxides (NOx), emerged in the 1970s as a major issue in North America and Europe, and to a lesser extent in East Asia. In the US, SO2 emissions come primarily from electric power plants and have dropped more than a third since 1970. NOx emissions, from automobiles as well as power plants, peaked about 1980 and have fluctuated since. Gas turbine power plants now replacing plants built before the 1972 Clean Air Act can drop NOx by 95%. Curiously, nitrate concentrations have not changed in several decades in the rain at the carefully monitored Hubbard Brook (New Hampshire, US). Decreased emissions of SO2 are evident in routine measurements of rainwater sulfates, but rainwater remains more acidic in the affected regions of the US than before widespread industrialization. Red spruce trees, among the vegetation apparently most susceptible to acid rain, show diminished growth, although the case for an acid cause remains weak.

Acid from afar is also deposited in Japan from Chinese and Korean emissions, but long-term records of the extent do not exist. Although emission, transport, and deposition of acid-causing pollutants must occur elsewhere, especially where fossil fuels are heavily used, sparse data and limited knowledge of regional weather cloud assessment. The numerous other natural and anthropogenic changes pressing upon ecosystems make it hard to attribute effects to acid rain. In regions such as southern Poland, the Czech Republic, and eastern Germany, stresses may have been severe, but coal use and raw industrial activity have dropped drastically. For the next few decades, coal-using regions such as China will be prone to acid deposition, but over the long run shifts to clean-burning natural gas and cars powered by fuel cells will largely eliminate the concern.

Increased ultraviolet: Depletion of stratospheric ozone, traceable to chemicals used primarily for refrigeration but also for fire retardation, aerosol propulsion, and cleaning, could increase ultraviolet light. The increase in turn could harm people and affect the productivity of ocean plankton and land plants. Production and use of the main culprits, the chlorofluorocarbons (CFCs), were concentrated in the industrialized countries. Production grew steadily in the early 1970s and leveled later in the decade, when the US and a few other industrial countries banned particular uses of CFCs.

International protocols on substances that deplete the ozone layer, signed in 1987 and amended in 1990 and 1992, phased out fifteen CFCs by 1996. By 1993 industries had completed the phase-out of halons, another class of ozone-depleting substances. Developing countries have a 10-year delay in implementation. The detection in the mid-1980s of a “hole” in the ozone layer each spring over Antarctica catalyzed the signing of the agreements. Evidence that less ozone has caused more ultraviolet radiation at the surface of Earth remains elusive. The decisions curtailing production of the suspect chemicals appear to be causing anthropogenic ozone destruction to peak around now, with recovery in prospect over a few decades.

Urban and indoor air: London invented smog, but Los Angeles (LA) brought it fame and remains the benchmark of urban air in industrialized nations. In 1997, thanks to cleaner cars and fuels and tougher regulations for power plants and refineries, the air in LA was the cleanest in 50 years. That year LA issued only one smog alert, and air in its basin exceeded federal limits on ozone in the lower atmosphere on 68 days, down from 90 in 1996. The main reason is new cars that emit 95% less than 1970 vehicles. The near elimination of leaded gasoline in the US has cut airborne lead twentyfold. Fine particles emitted by gasoline and diesel engines are, however, correlated with increased mortality.

In developing countries, large cities suffer acute and unabating air pollution. During the 1980s, the Chinese cities of Beijing and Shanghai exceeded World Health Organization (WHO) standards for particulate levels on 272 and 133 days per year, respectively. The average in New Delhi over the same period was 295 days. Since the mid-1970s, SO2 levels have exceeded the standard an average of 100 days per year in Teheran. In 1991 in Mexico City standards were violated on over 300 days. Burning wood and dung in Asian ovens raises indoor particulate concentrations to more than one hundred times the WHO standards, illustrating indoor air pollution. Over the long run, natural gas and nuclear energy, that is, decarbonization, will largely eliminate urban air problems.

Waters: Groundwaters, many rivers in both developing and industrialized regions, and inland water bodies, such as the Aral Sea in Central Asia, continue to experience contamination from both active and inactive industrial sites, imprudent irrigation, and diffuse pollution from urban, fertilizer and pesticide runoff. Some estuaries and seas now manifest incipient eutrophication or hypoxia: dissolved oxygen reduced by more nutrients, such as those found in fertilizers and sewage. The growth of some plants and the loss of oxygen choke off other aquatic life. Shrinking cropland is likely to lower use of nitrogen and other fertilizers. In fact, fertilizer use has been flat in the US since 1980 and has dropped 10% globally since the late 1980s.

Some waters have been reclaimed. For example, although much remains to be done, the average dissolved oxygen in the rivers of the OECD nations improved over the past twenty-five years. Stresses, however, will worsen along coastlines, where urbanization proceeds faster than improvement of wastewater infrastructure.

Since 1970 oil spills have fluctuated, with sporadic spikes such as that of the Exxon Valdez in Prince William Sound in 1989. Tanker accidents were fewer in the 1980s and 1990s than in the 1970s. Improved technical standards for transport over the last few decades probably caused the improvement. Although commanding less public attention than spills, “normal” discharges of oil into the sea from washing tanks and discharging ballast water form the largest source of marine oil pollution and remain hard to assess. Shifts to natural gas and hydrogen would obviate oil spills as well as oil runoff from roads on land.

Industrial and municipal wastes: Population and GDP growth multiply the volume of products and objects. Thus, total wastes have tended upward despite their decline per GDP. The sophisticated materials and applications valued in the “new materials society” yield alloys and composites that can be hard to separate and recycle. For many materials, recycling seems unable to supply more than about one-third of demand. Popular materials can be lighter but bulkier and more toxic. So far, higher disposal prices have called forth sites for landfills, but the question of who shall live near the dump always causes stress. The vision of an industrial ecology in which waste is radically reduced and emissions approach zero is beginning to be realized in cities such as Kalundborg, Denmark, where plants are co-located so that the “wastes” from one become useful inputs to another.

Radioactive and other hazardous materials: The storage and disposal of radioactive wastes have implications from global to local. They come from nuclear reactors making electricity and arsenals making bombs. The public worries about low- as well as high-level radioactive wastes. Although high-level volumes are smaller, they present harder problems.

The rise of nuclear electrification has raised the volume of spent fuel and other wastes, but it is still small. The waste volumes, in fact, are smaller than anticipated. In the US, for example, the current dumps for low-level radioactive waste will have excess capacity well into the next century, partly because fewer plants were built than projected, but mostly because recycling and compacting have shrunk the waste volume. In the US the annual volume of low-level waste shipped to disposal sites dropped from 3.8 million cubic feet in 1980 to 422,000 in 1996. The decline means that the three dump sites in Richland (Washington), Barnwell (South Carolina), and Clive (Utah) have 29 to 260 years of capacity at current disposal rates, which will probably drop even lower.

Wastes from weapons contribute much to total nuclear waste. In the US, the environmental problems of weapons operations are widely known and have prompted large appropriations to remedy them. In the former Soviet Union, the problems may be greater. Formal treaties have stopped earlier disposal practices, such as dumping at sea, and the scuttling of nuclear submarines has ceased. Regimes for transport and temporary storage of civil and defense nuclear wastes now function, although sites and designs for permanent disposal have yet to win reliable acceptance.

The prevalence of several other hazardous materials has diminished in recent decades. Strontium-90 has diminished worldwide since the 1960s when atmospheric testing of nuclear weapons was banned. In the United States, levels of PCBs (used as coolants in power transformers) and lead (used in various forms in gasoline, cables, pipes, paint, and industrial chemical processes) and other heavy metals have declined dramatically as harmful consequences have been identified and remedies implemented. Despite bans, the persistence of the durable PCBs has kept them a leading topic of toxicological research and environmental controversy. Previous disposal of these and other hazardous wastes has contaminated many locations around the world, and the catalogue of these sites has grown. In the United States, while documentation and remediation of previously contaminated sites goes forward, few new sites are polluted.

I have searched history and found that many of the specters of shortage and fallout are phantoms. Unfortunately, I must tell you that some other stresses are real.

4. Where are stresses likely?

In cities: In August 1997 the Chinese Ministry of Labor estimated that about 175 million surplus laborers linger in the countryside, representing 35% unemployment. Continuing rapid urbanization in China, parts of Africa, India, and other areas retaining large rural populations will press on water supply, waste treatment, energy, housing, and other infrastructures. Crowded, unsanitary, and unsafe conditions will bring deadly catastrophes, including epidemics, fires, and building collapses, already familiar in the overstuffed and underserved cities of today.

At cultural borders: Wealthy imploding regions such as Europe will draw into their empty rooms migrants from poorer exploding regions such as the Maghreb. Few countries have maintained peace when national and cultural boundaries do not coincide. Africa today is probably worse off in this regard than Europe in 1913.

Where systems leak: Best practice in industrial ecology offers zero emissions. In the world’s $200 trillion economy, however, worst practice will surely still lag behind best practice, some sectors will evolve along bad tracks, and errors that are small in relative terms can be huge in absolute ones. Normal as well as surprising failures in plant operations and product design, corruption, stupidity, and carelessness will continue. Plants will leak, pipelines and tanks will explode, cement will crack, and water, air, and soil will be contaminated.

In scaling up: Within fifty years more than 3 billion people need to hook to commercial energy for the first time. China and India need more than their current 1 phone line per hundred tongues. Without capital, domestic tranquility, zero-maintenance technologies, and new economies of scale, stresses will run high, mistakes will be made and need correction, and transitions to new ways will test patience. The question of who is the next to gain service always arises.

In critical infrastructures: We move ever further from the society in which a family could farm, fuel, and clothe itself. No family can make microprocessors or storage disks. A winter ice storm shows how developed societies increasingly depend on a few, linked infrastructures. In particular, because electricity permeates the web of social services, a breakdown, even for a few hours, can bring tragedy. Nor can Internet commerce flourish if the system is often down. To survive supply cartels, market fluctuations, war, and terrorism, the wide natural gas distribution network that I envision must be an engineering masterpiece. In general, designing, building, and operating ultra-reliable infrastructures for electricity, gas, information, water, and other critical goods loom large.

For energy, the diffusion among individual consumers of multipurpose minigenerators might defend against great breakdowns. In effect, we would delegate base load to the global gas and nuclear system while leaving peaking and standby to a new household appliance able to use both gas and electricity to produce heat, cold, and electricity on demand.

In the innovation system: Although advancing productivity 2%/yr in key sectors may not sound hard, maintaining the pace for a century or more has exhausted earlier societies and enterprises, as the costliness of complexity tends to grow. Even if the world can maintain the supply of innovation, many societies historically have chosen to block diffusion. The loudest sound in the world is that of a habit breaking. Societies may choose dirt and disease, because accepting the means to lessen environmental and resource stresses will break the old culture, the tested formula for survival.

5. Conclusions

Will there be enough? There should be, though products in demand such as natural gas will surely give market power to prepared suppliers. And those with low incomes will always tend to be weak in the marketplace, though if the poorest multiply their income eightfold over today, most should be safe from hunger.

Will the fallout harm? Cities will spread and the climate may warm a little, but on balance humans may tread more lightly in nature.

At the outset, we recognized the general multipliers, population and GDP. To answer the two basic questions, we now see we must address a third: “Must resource and environmental stresses intensify in unmodified lockstep with these general multipliers?” History suggests technology and science, which lift the productivity and efficiency of resource use, are powerful enough divisors to lessen net stresses.

As well as raising the divisors, people could choose to lessen the multipliers. However, habit favors multiplication, and so for the 21st century we should at least prepare for it.

A world of 10 billion people and $200 trillion will above all handle huge amounts of information, whether in its fields, factories, offices, or homes. Its greatest vulnerabilities may come from failures or rejections of the systems of control for the communication of information. Science is a uniquely effective system of communication for the control of complexity, and the ability to control complexity is effectively the central parameter of the dynamics of evolution. Having come far down the road with science and technology, perhaps we should feel greatest stress from knowing we must keep running.

Acknowledgements: I am grateful for many years of cooperative work with Arnulf Gruebler, Cesare Marchetti, Perrin Meyer, Nebojsa Nakicenovic, and Paul Waggoner and to Kurt Yeager and Chauncey Starr for asking the questions eliciting this paper.

Further reading:

The Environment for Future Business
Jesse H. Ausubel
Pollution Prevention Review 8(1):39-52, Winter 1998.

Technological Trajectories and the Human Environment
Jesse H. Ausubel and H. Dale Langford, eds.
National Academy, Washington DC, 1997.
Also appeared as special issue, “The Liberation of the Environment,” Daedalus 125(3), Summer 1996.

Elektron: Electrical Systems in Retrospect and Prospect
Jesse H. Ausubel and Cesare Marchetti, Daedalus 125(3):139-169, Summer 1996.

Toward Green Mobility: The Evolution of Transport
Jesse H. Ausubel, Cesare Marchetti, and Perrin S. Meyer
European Review 6(2):143-162, 1998.

The Environment Since 1970
Jesse H. Ausubel, David G. Victor, Iddo K. Wernick
Consequences: The Nature and Implications of Environmental Change 1(3):2-15, 1995.

Human Population Dynamics Revisited with the Logistic Model: How Much Can Be Modeled and Predicted?
Cesare Marchetti, Perrin S. Meyer, and Jesse H. Ausubel
Technological Forecasting and Social Change 52:1-30, 1996.

Lightening the Tread of Population on the Land: American Examples
Paul E. Waggoner, Jesse H. Ausubel, Iddo K. Wernick
Population and Development Review 22(3):531-545, 1996.

Searching for Leverage to Conserve Forests: The Industrial Ecology of Wood Products in the U.S.
Iddo K. Wernick, Paul E. Waggoner, and Jesse H. Ausubel
Journal of Industrial Ecology 1(3):125-145, 1997.

Jesse H. Ausubel is Director of the Program for the Human Environment at the Rockefeller University in New York City. Mr. Ausubel’s interests include environmental science and technology, industrial evolution, and the nature of the scientific enterprise. The main themes of the Rockefeller research program are industrial ecology (the study of the network of all industrial processes as they may interact with each other and live off each other, a field Mr. Ausubel helped originate) and the long-term interactions of technology and the environment.

Underlying the work are ongoing studies of the mathematics of growth and diffusion. From 1983-1988 Mr. Ausubel served as Director of Programs for the US National Academy of Engineering.

Figures

Figure 1. Land to Feed a Person for a Year, Global Average.


The blocks in Figure 1 show that global cropland per person expanded from before Malthus until the mid-1950s. Recently, however, cropland per person has shrunk. The star shows the land per person that would be required if global yields equaled the yield achieved by the winner of the Iowa Master Corn Growers Contest in 1996, 19.5 tons per hectare.

Source of Figure: P.E. Waggoner, “Food, Feed, and Land,” in Ethics of Consumption: the Good Life, Justice, and Global Stewardship, D.A. Crocker and T. Linden, eds., Rowman & Littlefield, Maryland, 1998. Sources of Data: J.F. Richards, “Land Transformation,” in The Earth as Transformed by Human Action, B.L. Turner et al., eds., Cambridge University, Cambridge, England, 1990; Food and Agriculture Organization of the United Nations, FAO Yearbook, vol. 45, FAO, Rome, tables 1 and 3; Rod Swoboda, “New Record Set in State Corn Contest,” Wallace’s Farmer, March 1997.

Figure 2. U.S. Total Water Use, Per Capita Per Day.


Source of figure: J. Ausubel, “Can Technology Spare the Earth,” American Scientist 84:166-178 (March-April 1996).

Sources of data: U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970, U.S. Government Printing Office, Washington D.C., 1975; U.S. Bureau of the Census, Statistical Abstract of the United States: 1992, 112th edition, U.S. Government Printing Office, Washington D.C., 1993.

Figure 3. Daily U.S. Per Capita Material Flows by Mass, Circa 1990.


All values are in kilograms. Materials here are classed as energy fuels (i.e., coal, oil, gas), construction minerals, industrial minerals, metals, forestry products, and agricultural products.

Source of figure: I.K. Wernick, “Consuming Materials: The American Way,” Technological Forecasting and Social Change 53:111-122, 1996.

Source of data: I.K. Wernick and J.H. Ausubel, “National Materials Flows and the Environment,” Annual Review of Energy and Environment 20:462-492, 1995.

Figure 4. Decarbonization or World Ascent of Hydrogen.


World primary energy sources have declined in carbon intensity since 1860. The evolution is seen in the ratio of hydrogen (H) to carbon (C) in the fuel mix, graphed on a logarithmic scale and analyzed as a logistic growth process. The data and logistic curve are plotted using a transform that renders the S-shaped logistic linear.
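The linearizing transform mentioned in the caption is the standard logistic (Fisher-Pry) transform: if F(t) follows a logistic with ceiling L, then log(F/(L-F)) is a straight line in t. A minimal sketch, with all parameters hypothetical rather than fitted to the figure's data:

```python
import math

# Hypothetical logistic parameters: ceiling L, rate k, midpoint year t0.
L, k, t0 = 1.0, 0.05, 1950.0

def logistic(t):
    """S-shaped logistic curve saturating at L."""
    return L / (1 + math.exp(-k * (t - t0)))

def transform(f):
    """log(F / (L - F)): linear in t when F(t) is logistic."""
    return math.log(f / (L - f))

# Transformed values advance by a constant step k per year,
# i.e. the S-curve plots as a straight line.
ys = [transform(logistic(t)) for t in (1900, 1901, 1902)]
steps = [b - a for a, b in zip(ys, ys[1:])]
print([round(s, 6) for s in steps])  # each step equals k = 0.05
```

Plotting such transformed data against time, as in the figure, makes departures from logistic growth visible as curvature.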

Source of figure: J. Ausubel, “Can Technology Spare the Earth,” American Scientist 84:166-178 (March-April 1996).

Resources are Elastic

With most animal populations, the niches that encase the populations are of constant size. Animal societies growing in a given niche have dynamics neatly fitted by equations with a constant limit or ceiling. In short, from a niche point of view, resources are the limits to numbers. But access to resources depends on technologies. When the animals can invent new technologies, such as when bacteria produce a new enzyme to dismantle a sleepy component of their broth, then we face a problem: new growth pulses suddenly pop up, growing from the prior one.

Homo faber, the toolmaker, keeps inventing all the time, so that our limits are fleeting. These moving edges confound forecasting the long-run size of humanity. Expansion of the niche, the accessing and redefinition of resources, keeps happening with humans.

One of the greatest technological shifts was the industrial revolution. If we take the “industrial revolution” as one huge innovation, we can reconceive the population history of England and other countries in two phases.

The early English, islanders conceptually similar to the bacteria in a petri dish, could not directly expand their territory to support more people. In fact, by Roman times the English had already cleared a large fraction of their land for crops and animal husbandry. English population shows a slow rise, leveling around 5 million people in the year 1650. Perhaps sensing their local limit, the English were actively colonizing abroad during the 17th and 18th centuries and exporting population. The Island population remained rather level until nearly 1800. But meanwhile, another pulse of 50 million had begun, bringing England to its current population. Faster and cheaper transport, new energy sources, and other features of the industrial revolution made it possible for more English to eat in the same dish.

The growth of human populations demonstrates the elasticity of the human niche, determined largely by technology. For homo faber, the limits to numbers keep shifting, in the English case by a factor of 10 in less than two centuries.

Now let me briefly scan two resources about which we worry, farmland and forests. Is farmland finite in any useful sense? For centuries, farmers expanded cropland faster than population grew, and thus cropland per person rose. When we needed more food, we ploughed more land, and fears about running out of arable land grew. But fifty years ago, farmers stopped plowing up more nature per capita. Meanwhile, growth in calories in the world’s food supply has continued to outpace population, especially in poor countries. Per hectare, farmers lifted world grain yields about 2 percent annually since 1960. Two percent sounds small but compounds to large effects: it doubles in 35 years and quadruples in 70.
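The compounding asserted above is easy to check numerically; a minimal sketch, using only the 2 percent annual yield growth stated in the text:

```python
# Compound growth: after n years at rate r, output is multiplied by (1 + r)**n.
def growth_factor(years, rate=0.02):
    """Cumulative multiplier after compounding `rate` for `years` years."""
    return (1 + rate) ** years

print(round(growth_factor(35), 2))  # 2.0: yields double in 35 years
print(round(growth_factor(70), 2))  # 4.0: and quadruple in 70
```

The familiar rule of 70, dividing 70 by the percentage growth rate to approximate the doubling time, gives the same answer.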

Vast frontiers for even more agricultural improvement remain open. On the same area, the average world farmer grows only about 20% of the corn or beans of the top Iowa farmer, and the average Iowa farmer lags more than 30 years behind the yields of his most productive neighbor. Top producers now grow more than 20 tons of corn per hectare compared with a world average for all crops of about 2. From one hectare, an American farmer in 1900 could provide calories or protein for a year for 3 people. In 1999 the top farmers can feed 80 people for a year from the same area. So farmland again abounds, disappointing sellers who get cheap prices per hectare almost everywhere.

Forests tell a similar tale. Forests are cut to clear land for farms and settlements and also for fuel, lumber, and pulp. In the rich countries, nevertheless, forests have re-grown in recent decades. Since 1950 the volume of wood on American timberland has grown 30%, while European forests have similarly increased in volume. In the US, the intensity of use of wood, defined as the wood product consumed per dollar of GDP, has declined about 2.5% annually since 1900. Today an average American consumes about half the timber for all uses as did a counterpart in 1900.

In the US, a likely continuing fall in the intensity of use of forest products should more than counter the effects of growing population and affluence, leading to an average annual decline in the amount of timber harvested for products. A conservative annual improvement in forest growth would compound the benefits of falling demand. Unmanaged forests now yield yearly an average of 1-2 cubic meters of commercially valuable species per hectare. The potential in secondary temperate forests ranges between 5 and 10 cubic meters. Many commercial plantation forests now reliably produce more than 20 cubic meters per year, and experimental plots have yielded over 60 cubic meters. Compounded, the rising tree growth and falling wood demand should shrink the extent of US logging by half in 50 years.
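The halving arithmetic can be sketched with illustrative numbers; both rates below are hypothetical, chosen only to show how falling demand compounds with rising yield, not taken from the text:

```python
# Hypothetical rates: timber volume demanded falls 0.5%/yr while yield
# per hectare rises 1.0%/yr, so the area logged shrinks by their ratio.
VOLUME_RATE = -0.005   # assumed annual change in volume demanded
YIELD_RATE = 0.010     # assumed annual change in yield per hectare

area = 1.0  # logged area, as a fraction of today's
for _ in range(50):
    area *= (1 + VOLUME_RATE) / (1 + YIELD_RATE)

print(round(area, 2))  # ~0.47: logging extent roughly halves in 50 years
```

Even these modest annual rates, sustained for half a century, suffice to halve the land under the axe.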

By the middle of the 21st century, the rising productivity of well-managed forests should comfortably allow 20% or less of today’s forest area of about 3 billion hectares to supply world commercial wood demand. In fact, 5% of world forests could suffice. Our vision of Earth’s surface in the year 2050 should include more forest cover, say 200 million hectares more than today, with most of the world’s forests reserved for Nature.

Knowledge, not more cropland or more timberland, is what now grows productivity, and science and engineering are the most powerful forms of knowledge. They demonstrate their effectiveness every moment. Wisely used, science and technology can liberate the environment, can spare the Earth. Food and fiber decoupled from acreage, as well as carbon-free hydrogen energy and closed-loop industrial ecosystems, can assuage fears about vanishing species, changing climate, and poisonous metals. And about finite resources. The greatest threat to future well-being is the rejection of science. Having come this far, the 6 billion cannot take the road back. Without science, the elastic band will snap back.

Exploring, inventive humanity exemplifies the lifting of carrying capacity. Through the invention and diffusion of technology, humans alter and expand their niche, redefine resources, and violate population forecasts. In the 1920s, the leading demographer Raymond Pearl estimated the globe could support two billion people, while today about six billion dwell here. Today, many Earth observers seem stuck in their mental petri dishes. The resources around us are elastic.

Dis the Threat Industry

The CIA for decades overstated the size of the Soviet economy and thus its threat to the USA. Worldwatchers have yearly forecast a food crisis from the exhaustion of soil or oil since the early 1970s. The Wall Street Journal editorial pages daily scare entrepreneurs with multiplying regulations stifling markets. What should we make of currently touted threats such as germ warfare, global warming, and a graying population?

I answer “Dis the Threat Industry,” using youth’s short form of “disrespect.” The Threat Industry has always done good business. Recall the 40-year career of the Old Testament prophet Jeremiah, beginning in 629 BCE. What is new is the scale, and the highly developed symbiosis with experts, including natural and social scientists.

For scale, consider the endeavor to find and predict “Global Change,” especially human-induced climate change. Each year the world spends $2 billion on it. At $100,000 per person-year 20,000 people are searching full-time. Searching for something that in any case fluctuates, such battalions cannot fail.

Indeed, they dare not. Funders look the fools if they expend for nothing. The sustainability of the endeavor, that is, the jobs of the managers and the searchers, depends on finding something. And the career of a searcher flourishes with a positive result. Journals publish few papers saying “I searched for years, spent much, and found nothing.” Critics of strong assertions of discoveries of global change are marginalized as “contrarians.” In general, smiling in the face of threats and naysaying make for a lonely, impoverished career.

Diminishing a problem unemploys not only experts and their publicists. Threats beget threat removal industries. Fears about asbestos created the asbestos removal industry, which in turn needed to feed fear of asbestos. Environmental protection agencies feared to reverse themselves, even as evidence for the removal programs itself got removed. So the game continues.

Indeed, threats find curiously cooperative ways to grow. The CIA overstatements boosted the USA military, whose growth in turn justified the Soviet military’s growth, which then further nourished the budgets of its USA counterpart. The Cold War shows how hard it is to break a threat cycle. Threats can make symbiotic enemies.

Of course, the USA and USSR truly did endanger one another. And I am not saying disbelieve global warming or ignore anthrax. I do say understand the biases inherent in assessments and forecasts. Germ warfare will sustain large military budgets.

We are accustomed to filtering the words of experts receiving rich fees from private companies. We need to become more sensitive to the bias of large chunks of academia funded to document threats by government and to the growing, vocal number living from other non-profit sources and means. Television evangelists weekly prophesy an upcoming Last Judgement and wrest checks. So do environmentalists. Our governments and tax-laws have created a flourishing business in threat legitimation. Today Jeremiah would lead a large institute.

So, follow the money and the public and peer approbation. Calibrate threats accordingly. The collapse of the USSR showed the Western Threat Industry overstated by about a factor of three. Dividing by three could prove a rule of thumb.

Fortunately, many threats have simply crumbled against time. The Threat Industry will not. Society appears subject to a Law of Conservation of Concern. Editors fill the front page of the newspaper every day. Moreover, threateners contend they must inflate their claims to compete.

And a few threats prove worse than assessed. Both supporters of slavery and abolitionists in the pre-Civil War USA underestimated its malignant, enduring legacy. We need to learn better to separate slavery from steam engines, whose dangers also made headlines in the 1840s and 1850s. And we need to separate the real, manageable problems of exploding engines from the apocrypha of witches and weak magnetic fields.

Today, I do fear that a graying population will wreck social security. But I treasure the contrarians who contest that view. Science, like democracy, can thrive only with a loyal, tough opposition. Dis the Threat Industry.

Technological Trajectories and the Human Environment

The book is based on a conference held at The Rockefeller University and sponsored by the Program for the Human Environment, the Electric Power Research Institute, and the National Academy of Engineering. It has been published by the National Academy Press. Their bookstore web site contains ordering information. The papers in the book also appeared in a special issue of the journal Daedalus.

The conference, titled “Technological Trajectories and the Human Environment”, brought together leading scientists in disciplines as diverse as philosophy and agricultural science to discuss “The Liberation of the Environment.”

Elektron: Electrical Systems in Retrospect and Prospect

And I saw something like the color of amber,
like the appearance of fire round about enclosing it;
from what appeared to be his loins upward,
and from what appeared to be his loins downward,
I saw what appeared to be fire,
and there was a brightness round about him.

Ezekiel 1:27 (circa 595 b.c.)

In the ancient world, electrum (Hebrew) or elektron (Greek) was the material amber. Amber, when rubbed and electrified, preferably with cat fur, moved and lifted dust specks and small objects. The Greeks first identified electricity by its godlike capacity for action at a distance. This capacity and its control have been and will continue to be the trump cards in the invention and diffusion of electric machinery.

While its power and magic are old, electricity as an applied technology is young, with a history of barely more than a century. Two thousand five hundred years passed between Ezekiel and Thomas Edison. Today the electrical system can place power in precise positions in space with an immense range of capacity, from nanowatts to gigawatts. This spatial fingering is made possible by electrical conductors that are immersed in insulating space or solids. The conductors, which are basically metals, are impenetrable to electric fields and can modify and draw them into long thin threads reaching an office, home, or the memory cell in a computer chip.

Electromagnetic waves, as well as wires, transport electrical energy into space. Microwave guides and optical fibers resemble wires fingering into space. Efficient interfaces between the two modes of transport have developed, greatly extending the panoply of gadgets that transform electricity into useful actions.

Electrical technology is one of the few technologies that emerged straight from science and organized research. The lexicon of electricity (ohms, amperes, galvanometers, hertz, volts) is a gallery of great scientists of the eighteenth and nineteenth centuries. Applications of electricity were the subject of the first systematic industrial research laboratory, established in 1876 by Edison in Menlo Park, New Jersey. There, Edison and his colleagues made the phonograph in 1877, a carbon-filament incandescent lamp in 1879, and myriad other inventions.

The earliest attempts to apply electricity came from laboratories studying electrostatic phenomena. Medicine, always curious to test new phenomena in the human body that promised healing or strength, led the way. Many claims sprang from the spark, shock, and sizzle of electrostatic phenomena. Eighteenth-century scientists reported that electric charges made plants grow faster and that electric eels cured gout. They sent electrical charges through chains of patients to conquer disease and, as among the clientele of Dr. James Graham’s fertility bed in London, to create life. C. J. M. Barbaroux, later a leader of the Girondist faction in the French Revolution, enthused in 1784:

O feu subtil, âme du monde,
Bienfaisante électricité
Tu remplis l’air, la terre, l’onde,
Le ciel et son immensité.1

Electricity brought to life the subject of Dr. Frankenstein’s experiments in Mary Shelley’s famous novel, published in 1818. An application of electricity also vitalized the ancient Egyptian in Edgar Allan Poe’s 1845 story “Some Words with a Mummy.”2 Upon awakening, the mummy observes to the Americans gathered round him, “I perceive you are yet in the infancy of Galvanism.” Later in the nineteenth century the Swedish playwright August Strindberg wrapped himself in currents to elevate his moods and even gave up writing to pursue electrical research until he badly burned his hands in an ill-planned experiment.

Popular imagery notwithstanding, the high-voltage, low-current electrostatic phenomena were at the core of electric research until only about 1800, when Alessandro Volta announced his invention of the battery. Volta introduced the more subtle low-voltage, high-current game of electrodynamics. Twenty-five years linked the flow of electric currents to the force of electric magnets. Another twenty-five years bound the two productively into the electric dynamo and motor.

Among the key figures in the electromechanical game was an American, Joseph Henry, who, with the Englishman Michael Faraday, contributed a series of discoveries leading to practical electric generators. Tracing a bright path back to Benjamin Franklin, electricity was one of the first fields of research in which the United States assumed a leading role, and one of the earliest technologies to diffuse in America. As we shall see, once the interface between mechanical and electrical power had been invented, the niche for expansion proved immense.3

Power for the Workshop

Since the Middle Ages, water wheels had provided the primary drive for grinding grain, fulling cloth, working metal, and sawing wood. But mechanical power drawn from water or wind did not permit action at a distance, except through even more mechanical devices. These could become sophisticated and baroque. For example, a cable system spread 1 megawatt of mechanical power from the falls of Schaffhausen, Switzerland, to the industrial barracks around them. The mechanically drawn San Francisco cable cars continue to delight visitors but only travel a distance of one or two kilometers.

Powered by water, workshops had to be riparian. “Zavod,” the Russian word for a plant, literally means “by the water.” Ultimately, steam detached power from place. Over a period of decades, steam engines overtook water wheels. In America, steam needed one hundred years to supersede water. Though we recall the nineteenth century as the age of steam, water did not yield first place until 1870. The primacy of steam in America would then last just fifty years (Figure 1).

Figure 1. Sources of Power for Mechanical Drives in the United States. Note: The upper panel shows the absolute horsepower delivered by each type and their sum. The lower panel shows the fraction (F) of the total horsepower provided by each type, according to a logistic substitution model. Data Source: Warren D. Devine, Jr., “From Shafts to Wires: Historical Perspective on Electrification,” Journal of Economic History 43 (1983): 347–372; Table 3, p. 351.
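The logistic substitution model named in the caption can be sketched in a few lines. The midpoint (steam overtaking water around 1870) comes from the text; the 60-year takeover time, and the function names, are illustrative assumptions, not Devine's fitted values:

```python
import math

def logistic_share(t, t0, dt):
    """Fraction F of the market held by a technology at time t.

    t0 is the midpoint (F = 0.5); dt is the 10%-to-90% takeover time,
    so the rate constant is k = ln(81) / dt.
    """
    k = math.log(81) / dt
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def fisher_pry_transform(F):
    """Linear transform log10(F / (1 - F)); a logistic plots as a straight line."""
    return math.log10(F / (1.0 - F))

# Illustrative parameters: steam at its midpoint in 1870,
# with an assumed 60-year takeover time.
share_1870 = logistic_share(1870, t0=1870, dt=60)   # 0.5 at the midpoint
share_1900 = logistic_share(1900, t0=1870, dt=60)   # 0.9, half a takeover later
```

In the linear transform, each S-shaped substitution plots as a straight line, which is how the lower panel of Figure 1 and several later figures present the data.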

At first, steam preserved the layout of the factory. It simply provided more flexible and dependable mechanical energy. The small early steam engines usually operated individual devices. A leap forward came with the advent of the single, efficient, central steam station to serve all the machinery inside a plant. Pulleys rotating above the heads of the workers provided power for their diverse machines via vibrating and clapping belts. But the network of beams, blocks, cords, and drums for transmitting the steam power to the machinery on the floor encumbered, endangered, and clamored.

The electric motor drive, which emerged around 1890, revolutionized the layout of the factory. The first era of electrical systems commenced. The steam engine now ran an electric generator that penetrated the factory with relatively inconspicuous copper wires carrying electricity, which in turn produced mechanical energy at the point of consumption with an electric motor. Here was the seed of modern manufacturing. The electric motor drive permitted the factory machines to be moved along the production sequence, rather than the reverse.

One might suppose that the superior electric transmission, with a generator at one end and motors at each machine, would quickly supplant the old mechanical system. In fact, as Figure 1 shows, the process required fifty years. Resistance was more mental than economic or technical. In 1905 the influential American historian and journalist Henry Adams chose the images of the Virgin and the dynamo around which to write his autobiography.4 The dynamo symbolized the dangerous, inhuman, and mindless acceleration of social change.

Power for the Region

By the time arcs and lamps emerged from Mr. Edison’s workshops, the generator could illuminate as well as grind, cut, and stamp. But the paradigm of the single generator for the single factory was soon superseded by the idea of a generator, or, better yet, a power plant, serving an entire community.

At first, electric companies were necessarily small. Technology for the transport of electricity particularly limited the scale of operations. The original Edison systems were based on low-voltage direct current (dc), which suffered drastic energy losses over distance. Each piece of territory thus required its own company, and founding a new company meant filling a piece of territory or market niche.

Consider eastern Pennsylvania, a coal-and-steel region where some of the earliest Edison utilities began (Figure 2). Entrepreneurs swarmed the area to spread the successful innovation. About 125 power-and-light companies were established between the middle 1880s and early 1920s, with 1897 being the year of peak corporate fertility. The rush to form companies was a cultural pulse, diffused by imitation.5

Figure 2. Founding and Consolidation of Electric Companies in the United States. Note: The main figure presents the two sets of data shown in the inset panel fitted to a linear transform of the logistic curve that normalizes each process to 100 percent, with estimates for the duration of the process, its midpoint, and saturation level indicated. Data Source: Pennsylvania Power and Light, Corporate History in nine volumes, Origin and Development of the Company, vol. 1, Allentown, Pa., 1940.

The evolution of technology to transport electricity, combined with the increase in the density of consumption (kW/km²), made higher transmission voltages economical and progressively coalesced companies. The key technology, first explored in the 1880s by the inventor Nikola Tesla, was alternating current (ac), which could be raised in voltage through transformers and then transmitted long distances with low losses. The merger wave crested in 1914. By 1940 the resulting process left only Pennsylvania Power and Light in operation.

When companies cover a geographical space, their natural tendency is to coalesce, like soap bubbles, especially if a technology permits the larger scale physically and encourages it economically. Several non-technical factors, including government and consumer fears about monopoly, can set limits on scale. Early in the century, Samuel Insull’s “electricity empire,” centered in Chicago, evoked public regulation, which became normal for the industry. Rapid growth and change usually elicit external regulation. Still, the systems grow in the long run, as we shall see.

In the provision of electric power, the overriding independent variable is spatial energy consumption. Its increase leads to higher-capacity transport lines using higher voltage, making it possible to transport energy over longer distances with generators having higher power. This “higher and higher” game led the United States from the 10-kilowatt generator of Edison to the 1-gigawatt generators of today, one hundred thousand times larger.6

In fact, the expansion divides into two eras, as we see in Figure 3, which shows the evolution of the maximum line capacity of the US electric system. For the line-capacity indicator, we take over time the square of the highest voltage that is operational. Although various factors lower actual line capacity in practice, this indicator provides a consistent measure of power capacity for analysis of long-term trends.7 The maximum line capacity grows in two waves, one centered in 1921 and the second fifty-four years later in 1975.

Figure 3. Capacity of Top US Power Lines. Note: The units are kV²/1,000, a rough measure of power capacity. This figure, as well as Figures 4, 6, and 8, shows a two-phase process analyzed as a “bi-logistic” normalized with a linear transform. In essence, one S-shaped growth curve surmounts another. The actual values are the sum of the two waves, once the second wave is underway. See Perrin S. Meyer, “Bi-logistic Growth,” Technological Forecasting and Social Change 47 (1994): 89–102. Data Source: Edison Electric Institute, Washington, D.C.
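The “bi-logistic” of the caption is simply the sum of two logistic waves. A minimal sketch follows; the midpoints 1921 and 1975 come from the text, but the ceilings, takeover times, and the wave labels `steinmetz` and `starr` are placeholders chosen for illustration, not fitted values:

```python
import math

def logistic(t, saturation, t0, dt):
    """One S-shaped growth wave: ceiling `saturation`, midpoint t0,
    and 10%-to-90% growth time dt."""
    k = math.log(81) / dt
    return saturation / (1.0 + math.exp(-k * (t - t0)))

def bilogistic(t, wave1, wave2):
    """Two-phase growth: one S-curve surmounting another."""
    return logistic(t, *wave1) + logistic(t, *wave2)

# Hypothetical parameters keyed to the midpoints in the text;
# the second wave exceeds the first by more than an order of magnitude.
steinmetz = (100.0, 1921, 30)    # (ceiling, midpoint, takeover time)
starr     = (1400.0, 1975, 30)

capacity_1921 = bilogistic(1921, steinmetz, starr)
```

Once the second wave is underway, observed values are the sum of both waves, which is why the overlap periods in the later figures are somewhat arbitrary to assign.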

We label the first wave “Steinmetz,” for Charles Proteus Steinmetz, the founding leader of the engineering department of the General Electric Company (GE) and a symbol of the fruitful interaction of mathematical physics and electrical technology.8 Following the pioneering work of Tesla, Steinmetz began investigating the problems of long-distance transmission and high-voltage discharges around 1905. The spectacular success of GE in subsequent decades testifies to the timeliness of Steinmetz’s innovations. New alternating-current systems and related gadgets made huge profits for GE and the other leading equipment supplier, Westinghouse, and incidentally killed many small-scale utilities, as in Pennsylvania.

The second pulse of growth in line voltage reaches a temporary ceiling at about 1.5 megavolts. Interestingly, the stretches of innovative activity, as measured by the interval to achieve 10 to 90 percent of the system development, cover only about half the time of electricity’s waves of growth. Two to three decades of rapid expansion are digested in a comparably long period of stability and consolidation, a frustrating cycle for engineers. Again the limit may not be technical or economic, but social. Society tailors the expanded system to fit its norms for safety and harmony. One constraint is available rights-of-way, which are very limited at present.

Because the area of the United States is constant and filled by the electrical network, total generating capacity approximates the spatial density of consumption. The growth in installed generating capacity also splits into two pulses, centered around 1923 and 1971 (Figure 4). At peak times operators experience the most rapid change and customers suspect the operators’ ability to handle it. During the second wave, annual growth in consumption peaked in the 1950s and 1960s at more than 10 percent per year for many US utilities. The system in the Northeast blacked out one day in November 1965, prompting regional power pooling arrangements. To address concerns about the reliability of the entire network, the industry consorted to form the Electric Power Research Institute, which opened its doors in 1973 under the leadership of Chauncey Starr, for whom we name electricity’s second wave.9

Figure 4. Installed Electric Generating Capacity in the United States. Data Source: US Bureau of the Census, Historical Statistics of the United States (Washington, D.C.: US Bureau of the Census, 1978); and US Bureau of the Census, Statistical Abstract of the United States (Washington, D.C.: US Bureau of the Census, 1978, 1981, 1984, 1986, 1989, 1991, 1992, 1994).

The current pulse of growth in US generating capacity reaches a ceiling around 765 gigawatts. The actual system growth has exceeded 90 percent of the niche, which in our view explains the recent slowdown in the building of power plants, nuclear or other, in the United States. The system anticipated the growth in demand that is tuned to economic development and technological diffusion, boxed into the long, roughly fifty-year economic cycles that have characterized the last two hundred years.10 At the end of the cycles, demand lags and overcapacity tends to appear.

Will the higher-and-higher game resume? In both line voltage and generating capacity, the growth in the second electrical wave exceeded the first by more than an order of magnitude. If the pattern repeats, the increase in electricity consumption will lead to ultra-high voltage lines (for example, ±2 megavolts) with higher capacity (for example, 5 or 10 gigawatts) and continental range. The great advantage of continental and intercontinental connections is that standby reserves and peak capacity can be globalized. The worldwide load would be smoothed over the complete and immanent solar cycle. Generators could also become very large, with according economies of scale.

If the system evolves to continental scale, the much-discussed superconductivity at room temperature might not revolutionize transmission after all. Energy lost in transport and distribution is a stable 10 percent, a huge amount in absolute terms, but too small to change the basic economics if 2-megavolt lines cover the continents. Superconductivity could, however, bring about a revolutionary drop in the size of machinery, thereby permitting the construction of units of larger capacity.

Continental scale surely means increased international trade in electricity. All territory looks the same to electricity. If available technology is employed, electricity will stream across borders despite the political barriers that typically impede the easy flow of goods and ideas. Today Europe exchanges electricity almost freely. Italy buys from France the equivalent production of six 1-gigawatt nuclear reactors either via direct high-voltage lines or through Switzerland. Electricity trade could form a significant component of international payments over the next fifty to one hundred years, requiring reorganization and joint international ownership of the generating capacity. Electricity trade between Canada and the northeastern United States already elicits attention.

Utilization and Capacity

The utilization factor of generation plants counts heavily in the economy of the system and indicates the quality of its organization. The US electric industry searched successfully between 1910 and 1940 for efficient organization, notwithstanding the Great Crash of 1929, as the average annual utilization climbed from two thousand to above four thousand hours, a utilization rate of about 50 percent (Figure 5). The rise owed to spatial integration and the reduction of reserves consequent to the introduction of high-capacity transport lines with increasing operating voltage as well as the coordination of network dispatch to use plants more effectively.

Figure 5. The Rate of Utilization of US Electric Generating Plants. Data Source: US Bureau of the Census, Historical Statistics of the United States (Washington, D.C.: US Bureau of the Census, 1978); and US Bureau of the Census, Statistical Abstract of the United States (Washington, D.C.: US Bureau of the Census, 1978, 1981, 1984, 1986, 1989, 1991, 1992, 1994).

Since 1940 the system appears to have fluctuated around a utilization rate of 50 percent. Generators with low capital cost and high variable cost combine with base-load plants with high capital cost and low variable cost to determine the current usage level. Although the utilization factor surely has a logical upper limit quite below 100 percent, even with high-voltage lines having continental reach, a 50-percent national average appears low, notwithstanding scorching August afternoons that demand extra peak capacity.
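The utilization figures quoted above follow from a simple ratio: annual hours of equivalent full-load operation divided by the 8,760 hours in a year. A small sketch of the arithmetic, using the round numbers from the text:

```python
HOURS_PER_YEAR = 8760  # 24 hours x 365 days

def utilization_rate(equivalent_full_load_hours):
    """Fraction of the year a plant effectively runs at full capacity."""
    return equivalent_full_load_hours / HOURS_PER_YEAR

def equivalent_hours(generation_mwh, capacity_mw):
    """Annual generation divided by nameplate capacity gives
    equivalent full-load hours."""
    return generation_mwh / capacity_mw

# The trajectory described in the text: roughly 2,000 hours around 1910,
# above 4,000 hours by 1940 (about 50 percent of the year).
rate_1910 = utilization_rate(2000)   # about 0.23
rate_1940 = utilization_rate(4380)   # exactly 0.5 at 4,380 hours
```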

Breaking the 50-percent barrier must be a top priority for the next era of the industry. Otherwise, immense capital sits on its hands. One attractive way to make electric capital work around the clock would be to use plants at night. The mismatched timing of energy supply and demand existed when water power dominated. Pricing, automation, and other factors might encourage many power-consuming activities, such as electric steel-making, to go on the night shift. Nuclear heat, generating electricity by day, could of course help to make hydrogen at night. The ability to store hydrogen would make the night shift productive.

The nearness of overcapacity in the electrical system also creates suspicion that forecasting within the sector has not been reliable. Analyses of projections of total electricity use made by the US Department of Energy and others fuel the suspicion. Reflecting a period when electricity consumption had doubled in spans of ten years, in 1978 federal officials projected an increase by 1990 from 2,124 terawatt hours to 4,142 terawatt hours.11 The actual level for 1990 was 2,807 terawatt hours.

Can we do better? Fitting the data for total utility electric use to our model with data through 1977 yields an estimated level of about 2,920 terawatt hours for the growth pulse now ending (Figure 6). Net generation in 1993 was 2,883 terawatt hours. Projecting electricity demand matters because it influences investments in capacity. Accurate projections might have lessened the pain for the utilities, which ordered and then canceled plants; the equipment suppliers, who lost the orders; and consumers, who ultimately pay for all the mistakes.

Figure 6. Total US Electric Consumption. Note: Here and in Figure 8 the empty circles indicate periods of overlap in the sequential growth waves. Assigning the exact values to each wave during the periods of overlap is somewhat arbitrary. Data Source: US Bureau of the Census, Historical Statistics of the United States (Washington, D.C.: US Bureau of the Census, 1978); and US Bureau of the Census, Statistical Abstract of the United States (Washington, D.C.: US Bureau of the Census, 1978, 1981, 1984, 1986, 1989, 1991, 1992, 1994).
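The arithmetic behind this comparison of forecasts is worth making explicit. The terawatt-hour figures are the ones quoted in the text; the function name is ours:

```python
def percent_error(projected, actual):
    """Signed forecast error as a percentage of the actual outcome."""
    return 100.0 * (projected - actual) / actual

# The 1978 federal projection for 1990 versus the actual outcome (TWh):
official_error = percent_error(4142, 2807)   # an overshoot near 48 percent

# The logistic-model estimate for the saturating pulse
# versus 1993 net generation (TWh):
model_error = percent_error(2920, 2883)      # an overshoot under 1.3 percent
```

The contrast, a roughly 48 percent overshoot against one near 1 percent, is the basis for the claim that fitting the growth pulse would have lessened the pain.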

Power for the Home

As suggested earlier, electricity is a spatial technology. Conquering a territory means connecting with potential users. We tend to think that almost everyone was connected soon after the first bulb was lit, but in fact the process extended gradually over fifty years and culminated even in the United States only in mid-century (Figure 7). Although slowed by the Great Depression, non-rural hookups reached 90 percent of the market by 1940. Rural areas joined the grid about one generation later than cities, reaching a midpoint of the process in 1943 versus 1920 for the townsfolk. This interval measures the clout of rural politicians, who secured subsidies for the costly extension of power lines to areas of low population density, as well as the conservatism of the countryside.

Figure 7. Percentage of US Homes with Electric Service. Data Source: US Bureau of the Census, Historical Statistics of the United States (Washington, D.C.: US Bureau of the Census, 1978).

The data further confirm that electricity’s first century has encompassed two eras. During the developmental spread of the system until about 1940, most electricity went for industry and light, substituting for other energy carriers in already existing market niches. In the second era, electricity powered new devices, many of which could not have performed without it, such as televisions and computers. Most of the new demand came in the residential and commercial sectors.

Average residential consumption has increased by a factor of ten since 1940 and appears in our analyses to saturate in the 1990s at about 10,000 kilowatt hours per year. One might say that the customer is the home, not the human. Home appliances have increased by the tens and hundreds of millions: refrigerators, video-cassette recorders, vacuum cleaners, toasters and ovens, clothes washers and dryers, dishwashers, air conditioners, space heaters, and, more recently, personal computers, printers, and fax machines.

We emphasize the residential because it is becoming the number-one consumer. Residential consumption has grown faster than other major sectors over the past decades and in 1993 overtook industrial consumption in the United States. The number of housing units has grown sevenfold in the United States since 1900, while the number of people has tripled, as residents per unit have declined and second homes increased.12 As the second wave of electrification reaches its culmination, the residential share appears destined to plateau at about 35 percent of the total use of electricity, more than twice its share of the first wave. In a third wave of electricity, residential consumption may grow only at the same rate as overall consumption, or, if life-styles continue to include more home space and reduced working time, at an even faster rate.13 Californians already spend more than 60 percent of all their time at home indoors.14 So do New Yorkers and Indians.

Cleaning the Human Environment

In the absence of electricity, we heat, light, and power our homes and workplaces with wood, coal, kerosene, oil, manufactured city gas, and lesser fuels. Electrification has thus meant a cleaner, safer, and healthier environment at the level of the end-user, once protections against shock and other hazards were properly wired into the system. Dangers associated with open fires and smoke diminished. Better-ventilated homes and workplaces lessened exposure to influenza, pneumonia, tuberculosis, diphtheria, measles, whooping cough, scarlet fever, and other airborne threats. Modern refrigeration in homes, shops, trucks, and railroad boxcars reduced the numerous waterborne gastrointestinal threats.

Environmentally, electricity concentrates pollution at a few points. At these points we can deal with the problems or not. The main question then becomes: What is the primary energy source for the generation? The most wanted environmental culprit is carbon, and so the main environmental challenge for electricity may be summarized by the measure of the carbon intensity of electricity production, for example, the ratio of carbon by weight to kilowatt hours generated.15 In the United States, this ratio fell by half between 1920 and 1940, from about 500 metric tons of carbon per gigawatt hour produced to about 250. Since the 1940s, the US ratio has fallen only to about 200 metric tons per gigawatt hour and has remained rather flat in recent decades because coal has gained markets in electric power plants, offsetting efficiency gains in the operations of the plants as well as gains in terms of reductions that oil and especially gas would have contributed. Many other countries have continued to create more watts with fewer carbon molecules. The powerful underlying evolution of the energy system from coal to oil to natural gas to nuclear or other carbon-free primary sources will bring reductions.16 The world appears a bit past the middle point of a decarbonization process that will take another 150 years for completion. The United States will not long remain apart from the global movement.
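The carbon-intensity measure defined above is a simple ratio, and a fleet average is just a generation-weighted mean. A sketch using the US figures from the text; the two-source fleet at the end is hypothetical, invented only to show the weighting:

```python
def carbon_intensity(carbon_tons, gwh_generated):
    """Metric tons of carbon emitted per gigawatt hour generated."""
    return carbon_tons / gwh_generated

# The US trajectory described in the text (tons of carbon per GWh):
intensity_1920 = 500.0
intensity_1940 = 250.0
halving = intensity_1940 / intensity_1920   # the ratio fell by half

def fleet_intensity(sources):
    """Generation-weighted average intensity for a mixed fleet.

    sources: list of (gwh_generated, tons_carbon_per_gwh) pairs.
    """
    total_gwh = sum(gwh for gwh, _ in sources)
    total_carbon = sum(gwh * intensity for gwh, intensity in sources)
    return total_carbon / total_gwh
```

The weighting shows why fuel mix dominates: adding carbon-free generation pulls the fleet average down even if each coal plant is unchanged.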

Electricity production was originally based on coal alone. At present, it is the only outlet for coal. Even steel-making, which historically consumed a substantial fraction of coal (sometimes more than 10 percent), abandoned coal, dropping demand. Coal will fight hard to keep its last customer. Interestingly, electricity was never linked to oil, one of the other major transforming technologies of the twentieth century. Electricity and oil may now begin to compete seriously for the transport market, as we discuss later. Natural gas is already penetrating the electrical system thanks to the great flexibility, low capital cost, quick starts, and efficiency of gas turbines. At present, electricity remains the only product of the nuclear system. Approaching an energy system with zero emissions, about which all environmentalists dream, will require nuclear to diversify into the hydrogen-making business. The team of electricity and hydrogen can eventually solve all the problems of pollution at the level of the end-user of energy.

Electrical systems can add visual pollution with their network of towers, wires, and poles. Militant Greens already dynamite pylons and will accept no new structures. New technologies can increase the capacity of the existing lines and diminish intrusions. In this regard, direct current, now ultra-high, may have a second life as a bulk carrier aided by relatively cheap electronics, such as thyristors, which are capable of transforming all types of units of electricity into all others. Burying power lines might beautify the landscape, as well as lessen fears about the health effects of electromagnetic fields.

Feeding the Electrical System

A growing share of primary fuels generates electricity; again, two waves are evident (Figure 8). At the world level, the first centered in 1920 and the second in 1970. The present wave is saturating at close to 40 percent. For the United States, the current wave appears to have saturated at about the same level.

Figure 8. Percentage of World Primary Fuels Converted to Electricity. Data Source: Nebojsa Nakicenovic, personal communication, 1995.

Is there a limit to the fraction of fuels feeding into the electrical system? Many energy buffs postulate a ceiling at around 50 percent. A third era of electrical growth does seem likely to occur. Electricity is more flexible and fungible than hydrocarbon fuels. The innumerable devices of the information revolution require electrical power. The transport sector, which has remained largely reliant on oil, could accept more electricity. But the drawbacks are the inefficiencies and the costs of the transformation.

Inefficiencies are eventually eaten up.17 A successful society is, after all, a learning system.18 In fact, perhaps the greatest contribution of the West during the past three hundred years has been the zeal with which it has systematized the learning process itself through the invention and fostering of modern science, institutions for retention and transmission of knowledge, and diffusion of research and development throughout the economic system. But learning may still go slowly when problems are hard.

The Six-Hundred-Year War for Efficiency

The degree of difficulty for society to learn about power and light shows quantitatively in the duration of the process improvements illustrated in Figure 9. Technologists fought for three hundred years to bring the efficiency of steam power generation from 1 percent in 1700 to about 50 percent of its apparent limit today. Electrical energy is glorified as the purest form of free energy. In fact, the heat value of other fuels when they burn also corresponds to free energy. Thus, the thermodynamic limit of electric generators is 100 percent. Of course, it can be very difficult to reduce losses in combustion. Still, we may muse that during the next three hundred years efficiency will go to 99 percent.19 This long trajectory suggests that the structure upstream for power generation does not leave much room for spectacular breakthroughs.

Figure 9. Improvement in the Efficiency of Motors and Lamps Analyzed as a Sigmoid (logistic) Growth Process. Note: Shown in a linear transform that normalizes the ceiling of each process to 100 percent. Main Sources of Data: for lamps, Encyclopaedia Britannica, 1964; for motors, Hans Thirring, Energy for Man (Bloomington, Ind.: Indiana University Press, 1958).

Still, 70-percent efficiency can be eyed as the next target, to be achieved over fifty years or so. Turbosteam plants with an efficiency of about 60 percent have been constructed. Although further gains in this regard appear limited, the massive diffusion of highly efficient turbine technology is sure to be a lucrative and influential feature of the next fifty years or so. Fuel cells, avoiding the free energy loss almost inevitable in the combustion process on which turbines rely, may well lead to even higher efficiencies. Electrochemistry promises such technology but mentally seems more or less still stuck in Edison’s time. Perhaps solid-state physics can produce the insights leading to the needed leap forward as specialists in this field become more interested in surfaces, where the breakthroughs need to occur.

At the 70-percent level of efficiency, an almost all-electric distribution of primary energy looks most appealing. The catch is the load curve, which seems likely to remain linked to our circadian rhythms. In Cole Porter’s song lyric, we hear “Night and day, you are the one”; but in energy systems night still dims demand and means expensive machinery remains idle. Even in cities famous for their nightlife, nocturnal energy demand is only one-third of the daytime requirement. The ratio of day to night activity does not seem to have changed much. The ancients actually spent considerable time awake at night, despite miserable illumination. The fine old word “elucubrate” means to work by the light of the midnight oil, according to the Oxford English Dictionary.

Even if most humans continue to sleep at night, we have pointed out earlier that their energy-consuming machines can work nocturnally. In fact, remote control and the shrinking work force required to operate heavy industry ease the problem. So, too, will linking parts of the globe in sun and shade, summer and winter.

Still, we should clearly look further for efficiency gains. Much large electrical machinery is already so efficient that little or no gain is to be expected there. But a discontinuous step could yet come in the progress of machinery. Superconductivity, when it permits high magnetic fields, can lead to compactly designed motors with broad applications and very low energy losses. The proliferation of numerous micro-machines will of course tend to raise electricity demand, partially offsetting the efficiency gains they offer. The miniaturization of circuits and other aspects of computing systems in the past two decades shows how powerfully reducing the size of objects can increase their applications and numbers.

The Splicer

In proposing a more general solution we need to introduce another consideration, namely, reliability. The main drawback of an electrical system is that it permeates the web of social services, so that a breakdown, even for a few hours, can bring tragedy. A defense against this vulnerability, as well as a means of addressing cyclical loads, could come with the diffusion of multipurpose minigenerators at the level of individual consumers. In effect, we would delegate base load to the global system, leaving peaking and standby to a new multipurpose household appliance. Multipurpose means the device could produce heat, electricity, and cold on demand.

Such combined thermal, electric, and cooling systems, which we will call “splicers,” are under development. Attempts so far, such as the FIAT TOTEM, have been unsuccessful, in part because the marketed models lack the basic characteristic of zero maintenance required by household gadgets. Still, the scheme is appealing, both functionally and economically. The Japanese are doing a sizable amount of research and development in what appears to be a promising direction: Stirling engines with free-floating pistons and a power output of a few kilowatts. The machines are maintenance-free, silent, and can compress fluids for the heating and cooling cycles on top of producing electricity with linear oscillating generators. The models described in the literature are powered by natural gas.

In conjunction with a clean gas distribution system, the penetration of the splicer as a home appliance over the next fifty years could revolutionize the organization of the electrical system. The central control could become the switchboard of millions of tiny generators of perhaps 5 kilowatts. Electric utilities might initially abhor the technology that brings such functional change, but already some plan to use it. One attraction is that the final user immediately pays the capital cost.

In any case, the breakthroughs may come instead on the side of the consumers. A number of well-known machines and appliances need technological rejuvenation, as efficiencies are systematically low. And new machines need to be invented. At a high level of abstraction, human needs are invariant: food, clothing, shelter, social rank, mobility, and communication (a form of mobility where symbols move instead of persons or objects). Let us guess the shape of the new machines in the areas of vision and warmth.

Efficient Vision

Illumination, the first brilliant success of electricity beyond powering the workshop, provides a good example. Breaking the rule of the night is an old magical dream. The traditional tools (oil lamps, torches, and candles) were based on a flame with relatively low temperature and small amounts of incandescent soot to emit the light. They performed the task poorly (see Figure 9).20 The typical power of a candle is 100 watts, but the light efficiency is less than 0.1 percent.
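The candle figure can be checked roughly with a few lines of arithmetic. The 100-watt flame comes from the text; the photometric values (a candle radiating about one candela, and the theoretical maximum luminous efficacy of 683 lumens per watt) are our assumptions, not figures from the text:

```python
import math

# Rough check on the candle figure, with assumed photometric values:
# a candle emits ~1 candela in all directions, i.e. about 4*pi ≈ 13 lumens,
# and the theoretical ceiling of luminous efficacy is 683 lm/W (at 555 nm).

candle_power_w = 100.0                 # chemical power of the flame (from the text)
candle_lumens = 4 * math.pi            # assumed ~1 cd radiated over the full sphere
max_efficacy_lm_per_w = 683.0          # photometric maximum

light_power_w = candle_lumens / max_efficacy_lm_per_w  # ~0.018 W of visible light
efficiency = light_power_w / candle_power_w
print(f"{efficiency:.3%}")             # well under the 0.1 percent ceiling
```

Under these assumptions the candle converts only about 0.02 percent of its power to visible light, comfortably below the text's 0.1 percent bound.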

Electricity fulfilled the dream, almost from the beginning, with arc lights, whose emitting source was solid carbon at temperatures of thousands of degrees centigrade.21 The light was as white as the sun, and efficiency reached about 10 percent. The technical jump was enormous. Theaters, malls, and monuments were lavishly illuminated. People were seduced by the magic. Amusement parks such as Luna Park and Dreamland at Coney Island in New York drew millions of paying visitors to admire the architectural sculptures of light.

Edison’s 1879 incandescent lamp was a trifle inferior to the arc in light quality and efficiency but was immensely more practical. Symbolically, in 1882 the New York Stock Exchange installed three large “electro-liers,” new chandeliers with sixty-six electric lamps each, above the main trading floor. The exhibition of the power to break the night came first and dramatically. Penetration of the technology came later and, as usual, slowly. US cities, as shown earlier, achieved full illumination only about 1940.

The period from 1940 to 1995 can be called a period of consolidated light. Lamps became brighter and efficiency rose. To the human eye, the quality of the light may actually have worsened with the spread of fluorescents. With laser light, which has terrible visual quality now, we may approach theoretical efficiency, though actual lasers remain inefficient. Will that be the light at the end of the tunnel?

To return to basics, we illuminate in order to see in the dark. Illumination has no value if nobody looks. Arriving in a town at night, we always see the roads brightly lit and empty, so we know of waste. The marvels of the 1980s, electronic sensors and computer chips, can already scan rooms and streets and switch the lights off if no one is present. The watt-watch can help, but we can go further.

Sophisticated weapons systems (those mounted in helicopters, for example) feel the thumb of the pilot, observe his eyes, and shoot where he looks. A camera-computer in a room can watch the eyes of people present and illuminate only what they watch. Phased arrays, familiar in sonars and radars and developed now for infrared emitters, are certainly transportable into the visible range and can create sets of beams that are each directed to a chosen point or following a calculated track. The apparatus might now look baroque, but with miniaturization it could be concealed in a disk hanging from the ceiling of a room. Such a gadget appears to be the supreme fulfillment, illuminating an object only if a human gazes upon it.

But recall again that the objective is not to illuminate but to see. We illuminate because the eye has a lower limit of light sensitivity and, in any case, operating near such a limit is unpleasant. The military has developed complicated gadgets by which scanty photons from a poorly illuminated target are multiplied electronically to produce an image of sufficient luminosity. The principle is good; the machine is primitive. If photons flowing in an energized medium (such as an excited laser crystal) multiplied in a cascade along the way while keeping frequency and direction, we would have invented nightglasses, the mirror image of sunglasses.22 We could throw away all sorts of illuminating devices. A few milliwatts of power would be enough to brighten the night.

Efficient Warmth

The largest part of energy consumed in the home is used for temperature control. Space heating accounts for 60 percent or more of total residential energy use in many developed countries. Heating a home is a notably inelegant process from a thermodynamic point of view. We use pure free energy (electricity or fossil fuels) to compensate for a flow of energy from inside to outside, a task whose efficiency, according to the Second Law of Thermodynamics, is about 3 percent if the difference in temperature is 10°C. Heat pumps solve the problem conceptually, but they see temperatures inside their heat exchangers and consequently overwork.23 Moreover, operating on electricity generated upstream, they already invite inefficiency into the endeavor.
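The 3-percent figure follows from the second-law bookkeeping spelled out in note 23. A minimal sketch, with illustrative temperatures of 293 K inside and 283 K outside:

```python
# Second-law (exergy) efficiency of home heating: the fraction of pure free
# energy ideally needed to offset heat leaking from inside (T1) to outside
# (T2), following the (T1 - T2)/T1 formula in the endnote. Temperatures in
# kelvin are illustrative choices for a 10-degree difference.

def second_law_heating_efficiency(t_inside_k: float, t_outside_k: float) -> float:
    """Ideal work fraction (T1 - T2) / T1 for heat flowing from T1 to T2."""
    return (t_inside_k - t_outside_k) / t_inside_k

eff = second_law_heating_efficiency(293.0, 283.0)
print(f"{eff:.1%}")  # about 3 percent, as stated in the text
```

Supplying the full heat flow with electricity or fuel thus spends roughly thirty times the free energy an ideal device would need.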

Consider a radically different proposal. Windows are the big leaks, even when the glazing is sophisticated and expensive. Why not use window panes as thermoelectric devices, not to carry heat uphill but to stop heat from sledding downhill, that is, as heat-flux stopping devices?

Thermoelectric generators are usually seen as machines to make electricity by using the principle of the thermocouple. However, the device is reversible: by passing electricity through the machine, heat can be moved uphill. Several decades ago refrigerators were proposed using this principle on the basis of its great simplicity, although efficiencies are low. The old scheme for refrigerators could be revised in view of new thermoelectric materials and given suitably competitive objectives.

The basic idea is that electrodes on the inner and outer surfaces of the windowpanes can be made of conductive, transparent glasses. Glass made of zinc oxide might be sufficiently conductive. Voltages across the glass would be very low: volts or fractions of volts. Holding a temperature differential with zero flux would be more efficient energetically than putting heat (electrically!) into a house to balance the outgoing flux.

Electric Motion

So far we have looked at examples where efficiency wins, and net demand for power grows, only if the human population and its use of devices increase faster than efficiency. Now let us look at one example where a large new market might emerge, matching the ultra-high voltage lines and continental connections.

Toward the end of the last century electric motors for vehicle engines attracted much inventive action. Edison and Ferdinand Porsche produced sophisticated prototypes. The idea flopped on the roads but succeeded on the rails. Electric trams clattered through American and European cities, helped create suburbs, and in some cases connected cities. After 1940 most of the system was rapidly dismantled, largely because the trams could not match buses and cars in flexibility or speed. The mean velocity of transport keeps increasing through the progressive substitution of old technologies with new, faster ones. For France, the increase in the average speed of all machine transport has been about 3 percent per year during the last two centuries. Urban and suburban railways have a mean speed of only about 25 kilometers per hour, including stops. Cars have a mean speed on short-distance trips of about 40 kilometers per hour. The latest in the series are airplanes, with a mean speed of 600 kilometers per hour. Airplanes will provide most of the increase in mean speed over the next fifty years.

Electric trains succeeded in Europe and Japan for the densely trafficked lines and still operate today. They have decent acceleration and speed compared with diesels. But most trains are not fast; the inclusive mean speed on intercity rail journeys is only about 60 kilometers per hour. The fastest trains, the French trains à grande vitesse (TGVs), are electric. The question for trains is how to compete with cars on one side and with airplanes on the other. Electricity probably cannot compete with hydrogen for propulsion of cars and other light vehicles.

The great market challenge for the current generation of fast trains, with top speeds of 400 kilometers per hour, is the short distances of less than 100 kilometers along which cars congest and airplanes cannot compete. Airports and airplanes in their present configuration are high-speed but low-flux machines. TGVs could prove extremely competitive in the intense shuffling of commuters and shoppers within these distances. A cursory review of Europe reveals about 5,000 kilometers of intercity links fitting the constraints of a 100-kilometer distance and high potential passenger flux.

Fast trains consume more or less the same amount of primary energy per seat-kilometer as a turboprop plane24 or a compact car. From the power point of view, a running TGV absorbs about 10 kilowatts per seat. The mean power demand of the proposed 5,000-kilometer system of TGV trains for commuters and shoppers would be around 6 gigawatts, with a peak of probably 10 gigawatts. If the concept is successful, this form of transport will be an important consumer of electricity, but it will take at least fifty years to become fully implemented.
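A back-of-envelope sketch shows how a figure like the 6 gigawatts could arise from the 10 kilowatts per seat. Only the per-seat power is from the text; the seats per train and the average number of trains under way are illustrative assumptions chosen to reproduce the quoted mean demand:

```python
# TGV system power, back of the envelope. KW_PER_SEAT is from the text;
# SEATS_PER_TRAIN and TRAINS_RUNNING_MEAN are assumed for illustration.

KW_PER_SEAT = 10.0          # power drawn by a running TGV, per seat (text)
SEATS_PER_TRAIN = 1_000     # assumption
TRAINS_RUNNING_MEAN = 600   # assumption: trains under way at an average moment

mean_demand_gw = KW_PER_SEAT * SEATS_PER_TRAIN * TRAINS_RUNNING_MEAN / 1e6
print(mean_demand_gw)       # 6.0, i.e. about 6 GW of mean demand
```

A peak with more trains running, or fuller acceleration phases, would push toward the 10-gigawatt figure.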

To go to very high passenger fluxes over longer distances, one would need aerial configurations of which even the most daring air-transport planners do not dream: flocks of airplanes of five thousand passengers each taking off and landing together like migrating birds.

For intense connections linking large cities with peak fluxes around ten thousand passengers per hour, a solution is emerging that matches system requirements: the magnetically levitated (maglev) train operating in a partially evacuated tube or tunnel. In fact, Swiss engineers have developed the concept of a vacuum version of maglevs in part to reduce drastically the tunnel boring expenses, which in Switzerland would account for at least 90 percent of the cost in a conventional layout.25 To handle the shock wave from a high-speed train, a tunnel normally needs a cross section about ten times that of the train. In addition to narrowing greatly the tunneling requirement, the partial vacuum greatly reduces friction, making speed cheap and thus expanding the operational range of the train.

When operated at constant acceleration (for example, 5 meters per second squared, or 0.5 g, about what one experiences in a Ferrari sports car), maglevs could link any pair of cities up to 2,000 kilometers apart in fewer than twenty minutes. Consequently, daily commuting and shopping become feasible. Such daily trips account for 90 percent of all travel and are controlled by the total human time budget for travel of about one hour per day. With fast, short trips cities can coalesce in functional clusters of continental size. City pairs spaced less than 500 kilometers or ten minutes apart by maglevs, such as Bonn-Berlin, Milan-Rome, Tokyo-Osaka, and New York-Washington, would especially benefit.
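The trip times follow from simple kinematics: accelerate at a constant rate for the first half of the run, decelerate for the second. A sketch, taking 0.5 g as 5 meters per second squared:

```python
import math

# Trip time under a constant-acceleration profile: accelerate at accel_ms2
# over the first half of the distance, then decelerate symmetrically.
# Each half satisfies d = a t^2 / 2, so t_half = sqrt(2 d / a).

def trip_time_minutes(distance_km: float, accel_ms2: float = 5.0) -> float:
    half_m = distance_km * 1000.0 / 2.0
    t_half_s = math.sqrt(2.0 * half_m / accel_ms2)
    return 2.0 * t_half_s / 60.0

print(round(trip_time_minutes(500), 1))    # ~10.5 min: the "ten minutes apart"
print(round(trip_time_minutes(2000), 1))   # ~21 min for the longest links
```

The 500-kilometer city pairs come out almost exactly ten minutes apart, and 2,000 kilometers takes on the order of twenty minutes, consistent with the figures in the text.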

Part of the energy consumption of vacuum maglevs overcomes residual friction; an economic balance must be struck between the friction losses and the pumping power to keep the vacuum. Part regenerates the electromagnetic system that pushes and pulls the trains.26 The power per passenger could roughly correspond to that of a large car, although these trains may travel at a mean speed of 3,000 kilometers per hour.
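The arithmetic behind these power figures is worked out in note 26 and can be reproduced in a few lines; the 200-tonne train, 0.5 g, 500-kilometer run, 10-percent loss, and one thousand seats are the note's own assumptions:

```python
# Re-derivation of the figures in note 26: energy circulating in a
# constant-acceleration maglev run and the resulting power per seat.

mass_kg = 200_000.0                   # 200 metric tons
accel_ms2 = 5.0                       # 0.5 g
distance_m = 500_000.0                # 500 km

force_n = mass_kg * accel_ms2         # 1,000 kN of accelerating force
energy_j = force_n * distance_m       # 5 x 10^11 J circulating in the system
energy_kwh = energy_j / 3.6e6         # ~139,000 kWh

loss_kwh = 0.10 * energy_kwh          # ~14,000 kWh dissipated per run
kwh_per_seat = loss_kwh / 1_000       # ~14 kWh per seat over 500 km
mean_kw_per_seat = kwh_per_seat / (10.0 / 60.0)  # spread over a 10-minute trip
print(round(mean_kw_per_seat))        # ~83 kW, matching the note's rounded 84
```

The ~83 kilowatts per passenger is indeed the power class of a large car engine, as the text observes.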

The great advantage of the constant acceleration configuration for maglevs is that the energy required for each length of track is constant and could be stored, perhaps magnetically, in the track itself. Power demand is proportional to train speed and moves into the gigawatt range in the central section; however, with local storage (a few kilowatt hours per meter) the external electric power networks would see only the need to make up losses. Even assuming 90-percent efficiency, these would not be negligible. One hundred trains per hour would demand 1 gigawatt for the single line on which they operated.27 The Swiss system has a final potential of five hundred trains per hour, which would require 5 gigawatts, about one-third of current installed Swiss generating capacity.

The first long-distance maglev will probably run in about five to ten years. Berlin-Hamburg is under construction. The penetration of the technology will be gradual, as major infrastructural technologies always are. In fact, the next fifty years will probably be used largely to establish the feasibility, chart the maglev map, and prepare for the big push in the second half of the twenty-first century. In the long run, maglevs may establish several thousand kilometers of lines and become one of the most important users of electricity. A maglev trip per day becomes a few thousand kilowatt hours per year per person. If India and Eastern China join life in this superfast lane, the picture of a globally integrated, high-capacity electrical system begins to cohere.

Conclusions

The long economic cycles that seem to affect all parts of social and economic life constitute a good frame of reference for the development of the electrical system in terms of technology, territorial penetration, birth and death of enterprises, and intensity of use. Our examples suggest this is true for the United States and globally.

Two waves of electrification have passed through our societies. In the first, the United States attained system saturation in the 1930s at about 1,000 kilowatt hours of annual consumption per residential customer, 200 terawatt hours of total annual use, 40 gigawatts of installed capacity, and 20 percent of primary fuels producing electricity. In the second wave, we have reached 10,000 kilowatt hours per residential customer, 3,000 terawatt hours of total use, 800 gigawatts of installed capacity, and about 40 percent of fuels producing electricity.

Because these patterns of temporal diffusion and growth are followed, it is possible to fit dynamic equations to the time series of facts and then compare them for consistency. This operation indicates that the 1990s are the season of saturation, which includes the experience of overcapacity or, alternately, underconsumption. Such phases are not uncommon for various branches of the industrial system, as managers tend to assume that growth characteristics of boom periods will extend into recessions, while consumers cut corners.

In the short term, total energy and electric energy consumption may continue to grow at a slower rate than overall economic activity. One interpretation is that during the expansion period of the long cycles the objective is growth, while during the recessive period the objective is to compete, shaving costs here and there and streamlining production. The savings include energy. Meeting goals pertaining to environmental quality and safety further tightens the system.

A new cycle formally beginning in 1995 started the game again, although the effects of the restart will not be particularly visible for a few years. Minima are flat. Looking at the cycles from a distance to grasp the general features, one sees the periods around their ends as revolutionary, that is, periods of reorganization: political, social, industrial, and institutional. We are evidently at this conjunction, and the electrical system will not escape it.

When the electrical system served the village, a complete vertical integration was inevitable. Regional coverage, the preferred scale of the past fifty years, also favored such integration. With the expansion to continental dimensions, a shift in responsibilities may make the system more efficient, agile, and manageable. The typical division is production, trunk-line transport, and retailing, with different organizations taking care of the pieces and the market joining them. Experiments along these lines now running in Great Britain, Australia, and other countries can serve as a test bed to develop the winning ideas.28

Apart from various economic advantages and organizational complications, the use of splicers on a large scale (untried to date) may bring an almost absolute resiliency, as every subset of the system may become self-sufficient, if temporarily. The electrical system should also become cleaner, as it intertwines more closely with natural gas and probably nuclear energy, thus furthering decarbonization. A sequence of technical barriers will appear, and thus the process of systematic research and innovation will continue to be needed; it will produce timely results.

In fact, our analyses suggest that rates of growth of technology tend to be self-consistent more than bound to population dynamics. Population, however, defines the size of the niche in the final instance. Thus a key question is, how long will it take to diffuse Western electric gadgetry to the 90 percent of the world that is not already imbued with it? The gadgetry keeps increasing. Followers keep following, if more closely. Based on historical experience, diffusion to distant corners requires fifty to one hundred years. Even within America or Europe, as we have seen, pervasive diffusion takes that long for major technologies. So most people may have to wait for most of the next century to experience nightglasses, splicers, and maglevs. These devices may be largely features of a fourth wave of electrification, while the spread of the profusion of information-handling devices dominates the third wave that is now beginning.

Considered over centuries and millennia, the electrical adventure is deeper than a quest for gadgets. In 1794 Volta demonstrated that the electric force observed by Luigi Galvani in twitching frog legs was not connected with living creatures, but could be obtained whenever two different metals are placed in a conducting fluid. Today we use electricity to dissolve the difference between inanimate and living objects and to control and inspire the inanimate with more delicacy than Dr. Frankenstein. Introducing electricity into production raised the rank of workers from sweating robots to robot controllers. The process can be generalized, with humanity, at leisure or at work, giving orders to its machines by voice or a wink of the eye.

This ancient aspiration for action at a distance and direct command over the inanimate will drive invention, innovation, and diffusion for hundreds of years more; we come full circle to the elektron of the ancient Hebrews and Greeks.

Acknowledgments

We thank Perrin Meyer for research assistance and figure preparation, as well as Arnulf Grübler, John Helm, Eduard Loeser, Nebojsa Nakicenovic, and Chauncey Starr.

Endnotes

1“Oh subtle fire, soul of the world, / beneficent electricity / You fill the air, the earth, the sea, / The sky and its immensity.” Quoted in Robert Darnton, Mesmerism and the End of the Enlightenment in France (Cambridge, Mass.: Harvard University Press, 1968), 29.

2Edgar Allan Poe, The Science Fiction of Edgar Allan Poe (New York: Penguin, 1976).

3For general histories of electrification, see Thomas P. Hughes, Networks of Power: Electrification in Western Society (Baltimore, Md.: Johns Hopkins University Press, 1983); Richard F. Hirsch, Technology and Transformation in the American Electric Utility Industry (Cambridge: Cambridge University Press, 1989); David Nye, Electrifying America (Cambridge, Mass.: MIT Press, 1990); Wolfgang Schivelbusch, Disenchanted Night: The Industrialization of Light in the Nineteenth Century (Berkeley, Calif.: University of California Press, 1988); and Sam H. Shurr, Calvin C. Burwell, Warren D. Devine, Jr., and Sidney Sonenblum, Electricity in the American Economy: Agent of Technological Progress (Westport, Conn.: Greenwood Press, 1990). For data and information on the early history of energy and electricity, see H. D. Schilling and R. Hildebrandt, Primärenergie-Elektrische Energie: Die Entwicklung des Verbrauchs an Primärenergieträgern und an elektrischer Energie in der Welt, in den USA und in Deutschland seit 1860 bzw. 1925 (Essen: Verlag Glückauf, 1977).

4Henry Adams, The Education of Henry Adams: An Autobiography (Boston, Mass.: Massachusetts Historical Society, 1918; reprinted Boston, Mass.: Houghton Mifflin, 1961).

5Such diffusive processes are well fit by the logistic equation, which represents simply and effectively the path of a population growing to a limit that is some function of the population itself. For discussion of applications of logistics, see Nebojsa Nakicenovic and Arnulf Grübler, eds., Diffusion of Technology and Social Behavior (Berlin: Springer, 1991). On the basic model, see S. Kingsland, “The Refractory Model: The Logistic Curve and the History of Population Ecology,” Quarterly Review of Biology 57 (1982): 29-52.

6A kilowatt (kW) is 1,000 watts; a megawatt (MW) is 1,000,000 W; a gigawatt (GW) is 1,000 MW; a terawatt (TW) is 1,000 GW. US generating capacity was 735 GW in 1990.

7Power is equal to V^2/R, where V is voltage and R is resistance.

8John Winthrop Hammond, Charles Proteus Steinmetz: A Biography (New York and London: The Century, 1924).

9Chauncey Starr, “A Personal History: Technology to Energy Strategy,” Annual Review of Energy and the Environment 29 (1995): 31-44.

10Cesare Marchetti, “Fifty-Year Pulsation in Human Affairs: Analysis of Some Physical Indicators,” Futures 17 (3) (1986): 376-388.

11For an analysis of electricity projections, see Charles R. Nelson, Stephen C. Peck, and Robert G. Uhler, “The NERC Fan in Retrospect and Prospect,” The Energy Journal 10 (2) (1989): 91-107.

12See Lee Schipper, “Life-Styles and the Environment: The Case of Energy,” Dædalus 125 (3) (Summer 1996).

13Jesse H. Ausubel and Arnulf Grübler, “Working Less and Living Longer: Long-Term Trends in Working Time and Time Budgets,” Technological Forecasting and Social Change 50 (3) (1995): 195-213.

14Peggy L. Jenkins, Thomas J. Phillips, Elliot J. Mulberg, and Steve P. Hui, “Activity Patterns of Californians: Use of and Proximity to Indoor Pollutant Sources,” Atmospheric Environment 26A (12) (1992): 2141-2148.

15Sulfur and other emissions from power plants also cause ills, but these have proven to be largely tractable. See Nebojsa Nakicenovic, “Freeing Energy from Carbon,” Dædalus 125 (3) (Summer 1996).

16Jesse H. Ausubel, “Energy and Environment: The Light Path,” Energy Systems and Policy 15 (3) (1991): 181-188.

17N. Nakicenovic, L. Bodda, A. Grübler, and P.-V. Gilli, Technological Progress, Structural Change and Efficient Energy Use: Trends Worldwide and in Austria, International Part (Laxenburg, Austria: International Institute for Applied Systems Analysis, 1990).

18Cesare Marchetti, “Society as a Learning System,” Technological Forecasting and Social Change 18 (1980): 267-282.

19While Carnot efficiency (now about 60 percent) limits heat cycles, fuel cells do not face such a limitation, as they are not based on heat cycles.

20Gaslight, with a mantle with rare-earth elements, was a superior source of bright light for a period.

21The plasma struck between the two carbon electrodes also emits.

22Sticking to monochromatic light, a ray proceeding in a resonantly excited medium stimulates emission and becomes amplified. Amplification is relatively small with present devices; hence the ray must travel up and down between mirrors. But no physical law limits amplification to such low levels. Semiconductor lasers, pumped by electric voltage, might hold the solution. In a second stage, they should also operate for a number of colors.

23The equivalent free energy of heat flowing out of a building is measured through the temperatures inside (T1) and outside (T2) in kelvin and is (T1-T2)/T1. In the case of a heat pump, due to temperature drops in the heat exchanger, it pumps heat from a temperature lower than T2 into a temperature higher than T1.

24For example, airplanes of the type ATR-42 or Dash.

25R. Nieth, W. Benoit, F. Descoeudres, M. Jufer, and F.-L. Perret, “Transport interrégional à grande vitesse - Le Projet SWISSMETRO,” École Polytechnique Fédérale de Lausanne, 1991.

26We can calculate the amount of energy circulating in the system for a maglev with constant acceleration operating over a distance of 500 kilometers. A train of 200 metric tons accelerating at 0.5 g has a pull force (drag) of 1,000 kilonewtons, which over a stretch of 500 kilometers corresponds to 5 × 10^11 joules, or approximately 140,000 kilowatt hours. A mean loss of 10 percent would require 14,000 kWh for one thousand seats, or 14 kWh per seat over 500 km. This would correspond to 84 kW per passenger at a typical trip time of 10 minutes (e.g., Bonn to Berlin).

27For example, fifty trains might operate in each direction, spaced one minute apart. They can start from different stations or lanes. One per minute would be the frequency in the neck of the tunnel.

28For discussions of models of organizational change in the electricity industry, see Judith B. Sack, research memorandum series on “Global Electricity Strategy” (New York: Morgan Stanley), especially “The Darwinian Theory of Distribution” (12 December 1995), and “An Interrelated World” (29 February 1996).


Jesse H. Ausubel is Director of the Program for the Human Environment at The Rockefeller University.

Cesare Marchetti is an Institute Scholar at the International Institute for Applied Systems Analysis in Laxenburg, Austria.