Decarbonization: The Next 100 Years

Jesse H. Ausubel

50th Anniversary Symposium of the Geology Foundation
Jackson School of Geosciences, U. of Texas
Austin, Texas
25 April 2003

Introduction

About 750,000 years ago some of our ancestors made a wood fire in a cave in the south of France near Marseilles.  From such early fires until about the year 1800 energy supply changed little.  The system relied on carbon, like a mesquite grill.

The most important and surprising fact to emerge from energy studies during the past two decades is that, for the last 200 years, the world has progressively pursued a path of decarbonization, a decreasing relative reliance on carbon [Figure 1].  Think of decarbonization as the course over time of the ratio of tons of carbon in the energy supply to the total energy supply, for example, tons of carbon per ton of oil equivalent, encompassing all energy supplies.

Alternatively, think hydrocarbons.  Both hydrogen and carbon burn to release heat, so we can consider decarbonization as the ratio of hydrogen to carbon in our bowl of energy chili.  When the energy system relied on hay and wood, it relied most heavily on carbon.  Wood is made largely of cellulose plus some lignin.  Heated cellulose leaves charcoal, almost pure carbon.  Lignin is a hydrocarbon with a complex benzenic structure.  Wood effectively burns about ten carbon atoms for each hydrogen atom.  Coal approaches parity, with one or two C’s per H, depending on the variety [Figure 2].  Oils are lighter yet, with, for example, two H’s per C in kerosene or jet fuel.  A molecule of methane, the typical natural gas, is a carbon-trim CH4.
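
To make the hierarchy concrete, here is a minimal sketch in Python of the arithmetic, using the approximate effective H:C ratios just quoted (wood about 1:10, coal about 1:1, oil about 2:1, methane 4:1); the fuel mix at the end is invented purely for illustration, not historical data.

```python
# Hydrogen share of combustible atoms for the fuels discussed above.
# Effective H:C ratios are the approximate values quoted in the text;
# the example mix below is hypothetical, for illustration only.

H_TO_C = {"wood": 0.1, "coal": 1.0, "oil": 2.0, "methane": 4.0}

def hydrogen_fraction(h_to_c):
    """Fraction of burned atoms that are hydrogen, given an H:C ratio."""
    return h_to_c / (h_to_c + 1.0)

for fuel, ratio in H_TO_C.items():
    print(f"{fuel:8s} hydrogen share of atoms: {hydrogen_fraction(ratio):.0%}")

# "Blender" view of a whole energy supply: weight each fuel by its
# (invented) share of carbon atoms and sum the hydrogen riding along.
mix = {"wood": 0.05, "coal": 0.25, "oil": 0.40, "methane": 0.30}
h_per_c = sum(share * H_TO_C[fuel] for fuel, share in mix.items())
print(f"blended H:C ratio of the mix: about {h_per_c:.1f} to 1")
```

On this bookkeeping, decarbonization is simply the blended ratio drifting upward, decade by decade, from wood’s 0.1 toward methane’s 4.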

Thus, the inverse of decarbonization is the ascendancy of hydrogen [Figure 3].  Think of hydrogen and carbon competing for market niche as did horses and automobiles or audio cassettes and compact discs, except the H/C competition extends over 300 years.  In 1800 carbon had 90% of the market.  In 1935 the elements tied.  With business as usual, hydrogen will garner 90% of the market around 2100.

Because carbon becomes soot or the feared greenhouse gas CO2, and hydrogen becomes only water when combusted, carbon appears a bad element, the black hat, and hydrogen a good one, the white hat.  So, decarbonization is not only a fact but a happy fact.

Let me explain the course of decarbonization.  Neither Thomas Jefferson nor Queen Victoria decreed it.  Why does it happen?  The driving force in evolution of the energy system is the increasing spatial density of energy consumption at the level of the end user. 

By 1800 or so, in England and other early loci of industry, high population density and the slow but steady increase in energy use per capita increased the density of energy consumption.  The British experience demonstrates that, when energy consumption per unit of area rises, the energy sources with higher economies of scale gain an advantage.

Wood and hay, the prevalent energy sources at the start of the 19th century, are bulky and awkward to transport and store.  Consider the outcome if every high-rise resident needed to keep both a cord of wood on her floor for heat and a pile of hay in the garage for the SUV.  Think of retailing these goods in the costly real estate of Dallas or New York.  Sales of fuel wood in cities now are, of course, limited to decorative logs providing emotional warmth.  Biomass gradually lost the competition with coal to fuel London and other multiplying and concentrating populations, even when wood was abundant.

Coal had a long run at the top of the energy heap.  It ruled notwithstanding its devastating effects on miners’ lungs and lives, the urban air, and the land from which it came; but about 1900, the advantages of an energy system of fluids rather than solids began to become evident.  On the privacy of its rails, a locomotive could pull a coal car of equal size to fuel it.  Coal-powered automobiles, however, never had much appeal. The weight and volume of the fuel were hard problems, especially for a highly distributed transport system.  Oil had a higher energy density than coal—and the advantage of flowing through pipelines and into tanks.  Systems of tubes and cans can deliver carefully regulated quantities of fuel from the scale of the engine of a motor car to that of the Alaska pipeline.  It is easy to understand why oil defeated coal by 1950 as the world’s leading energy source.

Yet, despite many improvements from wellhead to gasoline pump, distribution of oil is still clumsy.  Fundamentally, oil is stored in a system of metal cans of all sizes.  One famous can was the Exxon Valdez. Transfer between cans is imperfect, which brings out a fundamental point. The strongly preferred configuration for very dense spatial consumption of energy is a grid that can be fed and bled continuously at variable rates. There are two successful grids, gas and electricity.

Natural gas is distributed through an inconspicuous, pervasive, and efficient system of pipes.  Its capillaries reach right to the kitchen.  It provides an excellent hierarchy of storage, remaining safe in geological formations until shortly before use. Natural gas can be easily and highly purified, permitting complete combustion.

Electricity, which must be made from primary energy sources such as coal and gas, is both a substitute for these (as in space heating) and a unique way to power devices that exist only because electricity became widely available.  Electricity is an even cleaner energy carrier than natural gas and can be switched on and off with little effort and great effect.  Electricity, however, continues to suffer a disadvantage: it cannot be stored efficiently, as today’s meager batteries show.  Electrical losses also occur in transmission; with the present infrastructure, a distance of 100 km is normal for transmission, and about 1,000 km is the economic limit.  Moreover, because of its limited storage, electricity is not good for dispersed uses, such as cars.

Nevertheless, the share of primary energy used to make electricity has grown steadily in all countries over the past 75 years and now approaches 40%. The Internet economy demands further electrification, with perfect reliability.  Thus, the core energy game for the next 30 to 50 years is to expand and flawlessly operate the gas–electric system.

In contrast to what many believe, the stable dynamics of the energy system permit reliable forecasts.  Decarbonization essentially defines the future of energy supply.

Globally we are destined to use about 50-80 billion tons more coal.  This is about one-third of what humans have mined in all our earlier history, and about 30 years at present levels of production, so all the participants in the coal industry have a generation or so in which to remodel themselves.  We should squeeze the maximum electricity from the black rocks with the minimum fallout of nasties, but coal is not our primary concern because its use will fade anyway.  In fact, coal companies would do better to concentrate on extracting methane from coal seams and sinking CO2 there, staying in business without coal extraction.  Using CO2 to displace methane (CH4) adsorbed in coal beds provides a two-for-one bargain.  Tunneling, as we shall see, matters immensely for future human well-being, so the coal industry also has a valuable skill to sell.

If it is dusk for coal, it is mid-afternoon for oil, which already has lost in energy markets other than transport.  Globally, drivers and others will consume close to 300 billion tons more oil before the fleet runs entirely on H2 separated from methane or water.  This amount is almost double the petroleum that has so far been extracted, and about 50 years at present production, so oil companies can choose to play business as usual for a while.  But the entry under the car’s hood of fuel cells or other motors fueled by H2 dooms oil, over the decades required for the turnover of the fleet, and makes a huge niche for the easy ways to make the needed hydrogen fuel.

For gas, it is midmorning, and the next decades will bring enormous growth, matching rising estimates of the gas resource base, which have more than doubled over the past 20 years.  Preaching the advent of the Methane Age 20 years ago, I felt myself a daring prophet, but now this prophecy is like invoking the sunrise.  Between its uses to fuel turbines to make electric power and for fuel cells for transport, gas will dominate the primary energy picture for the next five or six decades.  I expect methane to provide perhaps 70% of primary energy soon after the year 2030 and to reach a peak absolute use in 2060 of about 30 × 10^12 m3, ten times present annual use.

Through fuel cells we will adopt gas in transport as well as for electric power.  Fuel cells, essentially continuous batteries, can be fed by hydrogen extracted from methane.  In replacing the internal combustion engine, they will multiply automotive efficiencies and slash pollutants.  Wood and coal fogged and blackened cities, and oil gave us brown clouds of smog; methane can complete the clearing of the skies of Houston and other cities in a world, soon to come, of one billion motor vehicles.  Governments will need to make it easier to build and access gas pipelines.  Attention must also be given to the safety and environmental aspects of gas use because pipelines and tanks can explode tragically.  Refiners need to shift their focus to transforming methane into hydrogen and CO2.

Very Large ZEPPs

Now let me introduce the first of two Texas-size ideas.  My first is zero emission electric power plants or ZEPPs, very large ZEPPs.  The emission of concern is, of course, carbon, feared because of climate change.  Although simply substituting methane for coal or oil reduces CO2 emissions by a third to a half, peak methane use would leave 2 to 3 times today’s carbon emission to dispose of annually.  Even in 2020, we could already need to dispose of carbon from natural gas alone equal to half today’s emission from all fuels, and later methane would cause about 75% of total CO2 emissions.  So, prevention of climate change must focus on methane.  Can we find technology consistent with the evolution of the energy system to dispose economically and conveniently of the carbon from making kilowatts?  The practical means to dispose of the carbon from generating electricity consistent with the future context is the very large ZEPP.  Let me try to leave ZEPPs indelibly in your minds.

The basic idea of the ZEPP is a gas power plant operating at very high temperatures and pressures, so we can bleed off the CO2 as a liquid and sequester it underground in porous formations like those that harbor oil.

A criterion for ZEPPs is working on a Texas scale.  One reason is the information economy.  Even with efficiency increasing, the information economy demands huge amounts of electricity.  Observe the recent rapid growth of demand in a college dormitory.  Chips could well go into 1000 objects per capita, or 10 trillion objects, as China and India log into the game.

Big total energy use means big individual ZEPPs because the size of generating plants grows even faster than use, though in spurts.  Plants grow because large is cheap, if technology can cope.   Although the last wave of power station construction reached about 1.5 gigawatts (GW), growth of electricity use for the next 50 years can reasonably raise plant size to about 5 GW.  For reference, my city, New York, now draws above 12 GW on a peak summer day.

Bigness is a plus for controlling emission.  Although one big plant emits no more than many small plants, emission from one is easier to collect.   Society cannot close the carbon cycle if we need to collect emissions from millions of microturbines.

Big ZEPPs mean transmitting immense mechanical power from larger and larger generators through a large steel axle rotating as fast as 3,000 revolutions per minute (RPM).  The way around the limits of mechanical power transmission may be shrinking the machinery.  Begin with a very high pressure CO2 gas turbine where fuel burns with oxygen.  The needed pressure ranges from 40 to 1000 atmospheres, where CO2 would be recirculated as a liquid.  The liquid combustion products would be bled out.

Fortunately for transmitting mechanical power, the high pressures shrink the machinery in a revolutionary way and so permit the turbine to rotate very fast.  The generator could then also turn very fast, operating at high frequency, with appropriate power electronics to slow the generated electricity to 60 cycles.

Our envisioned hot temperature of 1500 degrees C will probably require using new ceramics now being engineered for aviation.  Problems of stress corrosion and cracking will arise at the high temperatures and pressures and need to be solved.  Power electronics to slow the cycles of the alternating current also raises big questions.   What we envision is beyond the state of the art, but power electronics is still young, meaning expensive and unreliable, and we are thinking of the year 2020 and beyond.

The requisite oxygen for a 5 GW ZEPP also exceeds present capacity but could be made by cryoseparation.  Moreover, the cryogenic plant may introduce a further benefit.  Superconductors fit well with a cryogenic plant nearby.   Superconducting generators are a sweet idea.  Already today companies are selling small motors wound with high temperature superconducting wire that halve the size and weight of a conventional motor built with copper coils and also halve the electrical losses.  Colleagues at Tokyo Electric Power calculate the overall ZEPP plant efficiency could be 70%, well above the 50-55% peak performance of today.

With a ZEPP fueled by natural gas transmitting immense power at 60 cycles, the next step is sequestering the waste carbon.  At the high pressure, the waste carbon is, of course, already liquid carbon dioxide and thus easily handled.  Opportunity for storing CO2 will join access to customers and fuel in determining plant locations.  Because most natural gas travels far through a few large pipelines, these pipelines are the logical sites for ZEPPs.  The best way to sequester the emissions is in caverns underground, where coal, oil, and gas came from.  On a small scale, CO2 already profitably helps tertiary recovery of oil.  The challenge is large scale.  The present annual volume of CO2 from all sources is about 15 km3.  Of course, natural geological traps only occasionally contain hydrocarbons, so one can extend storage to the many traps, routinely found by prospectors, that lack oil and gas.  Aquifers in silicate beds could be used to move the waste CO2 to the silicates, where “weathering” would make carbonates and silica, an offset good for millions of years.

In short, the ZEPP vision is a supercompact, superpowerful, superfast turbine: 1-2 m diameter, potentially 10 GW or double the expected maximum demand, 30,000 RPM, putting out electricity at 60 cycles plus CO2 that can be sequestered.  ZEPPs the size of a locomotive or even an automobile, attached to gas pipelines, might replace the fleet of carbon-emitting antiques now cluttering our landscape.

I propose starting introduction of ZEPPs in 2020, leading to a fleet of 500 ZEPPs of 5 GW each by 2050.  This does not seem an impossible feat for a world that built today’s worldwide fleet of some 430 nuclear power plants in about 30 years.  ZEPPs, together with another generation of nuclear power plants in various configurations, can stop the CO2 increase in the atmosphere near the year 2050 in the range of 450-500 ppm, about one-quarter more than today, without sacrificing energy consumption.

ZEPPs merit tens of billions in R&D, because the plants will form a profitable industry worth much more to those who can capture the expertise to design, build, and operate them.  They offer the best chance for safe use of the immense wealth of hydrocarbons in America and its offshore exclusive economic zones.  Research on ZEPPs could occupy legions of academic researchers and restore an authentic mission to the Department of Energy’s National Laboratories, working on development in conjunction with private companies.  ZEPPs need champions, and I hope the U. of Texas will be one.  The Geology Foundation and other parts of UT should whip the imaginations of the geologists to discover methane and develop leak-proof CO2 sequestration industries, and of the petrochemists to make more efficient processes suitable for plants two orders of magnitude larger than present fertilizer plants.  Like the jumbo jets that carry the majority of passenger kilometers, compact ultra-powerful ZEPPs could be the workhorses of the energy system in the middle of this century.

The Continental SuperGrid

Still, energy’s history will not end with natural gas. The completion of decarbonization ultimately depends on the production and use of pure hydrogen, already popular as rocket fuel and in other high-performance market niches.  Environmentally, hydrogen is the immaterial material; its combustion yields only water vapor and energy.  The hydrogen, of course, must eventually come from splitting water—not from cooking a hydrocarbon source. The energy to make the hydrogen must also be carbon-free.  According to the historical trend in decarbonization, large-scale production of carbon-free hydrogen should begin about the year 2020.

Among the alternatives, including solar and photovoltaic routes, nuclear energy fits the context best.  I am old enough to have been impressed by schoolbooks of the 1960s that asserted that the splitting and fusing of atoms was a giant step, akin to harnessing fire and starting to farm.  We should persist in peacefully applying Albert Einstein’s revolutionary equations.  It seems reasonable that understanding how to use nuclear power, and its acceptance, will take a century and more.  Still, fission is a contrived and extravagant way to boil water if steam is required only about half of each day to make electricity.

Nuclear energy’s special potential is as an abundant source of electricity for electrolysis and high-temperature heat for water splitting while the cities sleep.  Nuclear plants could nightly make H2 on the scale needed to meet the demand of billions of consumers.  Windmills and other solar technologies cannot power modern people by the billions.  Reactors that produce hydrogen could be situated far from population concentrations and pipe their main product to consumers.

Here let me introduce a second Texas-size idea, the continental SuperGrid to deliver electricity and hydrogen in an integrated energy pipeline.  Specifically, the SuperGrid would use a high-capacity, superconducting power transmission cable cooled with liquid hydrogen produced by advanced nuclear plants.  The SuperGrid would serve as both a distribution and a storage system for hydrogen, with hydrogen ultimately used in fuel cell vehicles and generators or refreshed internal combustion engines.

By continental, I mean coast-to-coast, indeed all of North America, making one integrated market for electricity.  Continental SuperGrids should thrive on other continents, of course, but as an American I hope North America builds first and dominates the market for these systems, which in rough terms might cost $1 trillion, or $10 billion per year for 100 years.  The continental scale allows much greater efficiency in the electric power system, flattening the electricity load curve, which still follows the sun.  Superconductivity solves the problem of power line losses.  By high capacity, I mean 40-80 gigawatts (GW).

The fundamental design is for liquid hydrogen to be pumped through the center of an evacuated energy pipe, both to cool the surrounding superconducting cable and to serve as an interstate pipeline for the hydrogen-electricity energy economy [Figure 4].  The cable would carry direct current and might look either like a spine or a ring passing near many of North America’s large cities.  Power converters would connect the direct current SuperGrid at various points to existing, high-voltage alternating current transmission substations.

Initially some forty 100-km-long sections of the joint cable/pipeline might be joined by nuclear plants of a few GW supplying both electricity and hydrogen to the SuperGrid.  High-temperature, gas-cooled reactors promise a particularly high-efficiency and scalable route to combined power and hydrogen production.  Nuclear power fits with the SuperGrid because of its low fuel cost per kilowatt-hour and its operational reliability at a constant power level.  The hydrogen storage capacity of the SuperGrid, combined with fuel cells, may allow electricity networks to shift to a delivery system more like oil and gas, away from the present, costly, instant matching of supply to demand.

For safety, security, and aesthetics, let’s put the entire system, including cables and power plants, underground.  I mentioned earlier that tunneling has a future even if coal mining does not.  The decision to build underground critically determines the cost of the SuperGrid.  But the benefits include reduced vulnerability to attack by human or other nature, public acceptance through fewer right-of-way disputes, reduced surface congestion, and reduced exposure, real and perceived, to accidents and fallout.

An even more evolved concept for the underground corridors combines energy with transport.  Sharing the tunnels, magnetically levitated trains in low-pressure tubes would run on linear motors of superconducting magnets, speeding from Atlantic to Pacific in 1 hour.  I am speaking now of 100 years, but that is our time frame.  The maglevs would help spread the infrastructure cost over multiple uses.

As with ZEPPs, magic words for the SuperGrid are hydrogen, superconductivity, zero emissions, and small ecological footprint, to which we add energy storage, security, and reliability.

Conclusion

Evolution is a series of replacements.  Replacements also mark the evolution of the energy system.  Between about 1910 and 1930 cars replaced horses in the United States.  Earlier steam engines had replaced water wheels and later electric drives replaced steam engines.  These replacements required about 50 years in the marketplace.  It required about the same amount of time for railways to replace canals as the lead mode of transport and longer for roads to overtake railways and for air to overtake roads.

Decarbonization is a series of replacements.  Considering primary sources of energy, we find that coal replaced wood and hay, and oil in turn beat coal for the lead position in the world power game.  Now natural gas is preparing to overtake oil.  The so-called oil companies know it and invest accordingly.  We must favor natural gas strongly everywhere and prepare the way for hydrogen, which is a yet better gas.

Importantly, the superior technology or product each time fits a larger market.  Hydrogen and electricity can cleanly power a hundred megacities.

The global energy system has been evolving toward hydrogen but perhaps not fast enough, especially for those most anxious about climate change. With business as usual, the decarbonization of the energy system will require a century or more.  To assuage social anxiety about possible climate change, we should start building ZEPPs, which will pay anyway because of their efficiency.

When increasing spatial density of energy consumption drives the system, we must match it with economies of scale in production and distribution.  The coming world of ten billion people needs jumbo jets as the backbone of the energy system, not 2-seater Piper Cubs.  Of course, the little planes play crucial roles in the capillary ends of the system and in providing back-up and flexibility.  Most effort on the energy system over the last couple of decades has been retouching here and there.

Now is the time to think and act big again.  ZEPPs and the SuperGrid will bring riches to companies and nations and glory to engineers and scientists and the institutions that nurture them, such as the Geology Foundation and the Jackson School.  Let’s commit now to the Texas-size ideas that will complete the grand and worthy challenge of decarbonization.

Figure 1. Decarbonization or the changing carbon intensity of primary energy for the world. Carbon intensity is calculated as the ratio of the sum of the carbon content of all fuels to the sum of the energy content of all primary energy sources. Figure prepared by N. M. Victor, Program for the Human Environment, The Rockefeller University, 2003.
Figure 2. The atomic structure of typical molecules of coal, oil, and gas and ratio of hydrogen to carbon atoms.  Source: Jesse H. Ausubel, Mitigation and Adaptation for Climate Change: Answers and Questions,  pp. 557-584 in Costs, Impacts, and Benefits of CO2 Mitigation, Y. Kaya, N. Nakicenovic, W.D. Nordhaus, and F.L. Toth, eds., International Institute for Applied Systems Analysis, Laxenburg, 1993.
Figure 3. Competition between hydrogen and carbon in primary energy sources. The evolution is seen in the ratio of hydrogen (H) to carbon (C) in the world fuel mix, graphed on a logarithmic scale, analyzed as a logistic growth process and plotted in the linear transform of the logistic (S) curve. Progression of the ratio above natural gas (methane, CH4) requires production of large amounts of hydrogen fuel with non-fossil energy. Source: J. H. Ausubel, Can Technology Spare the Earth? American Scientist 84(2):166-178, 1996.
Figure 4.  Conceptual design for a hydrogen-electricity pipeline. Source: T. Moore, SuperGrid Concept Sparks Interest, EPRI Journal, November 2002, https://www.epri.com/journal/details.asp?doctype=features&id=511.

Acknowledgements: Thanks to Cesare Marchetti, Perrin Meyer, Chauncey Starr, and Paul Waggoner.

This talk draws from:

Report of the National Energy Supergrid Workshop, 6-8 November 2002, Palo Alto CA, Thomas Overbye and Chauncey Starr, convenors

SuperGrid Concept Sparks Interest, Taylor Moore, EPRI Journal, November 2002. https://www.epri.com/journal/details.asp?doctype=features&id=511

Some Ways to Lessen Worries about Climate Change Jesse H. Ausubel, Electricity Journal 14(1):24-33, 2001. https://phe.rockefeller.edu/Lessen_Worries/

Where is Energy Going? Jesse H. Ausubel, The Industrial Physicist 6(1): 16-19, 2000 (February). https://phe.rockefeller.edu/IndustrialPhysicistWhere/

Five Worthy Ways to Spend Large Amounts of Money for Research on Environment and Resources,  Jesse H. Ausubel, The Bridge 29(3):4-16, Fall 1999. https://phe.rockefeller.edu/five_worthy_ways/

Resources and Environment in the 21st Century: Seeing Past the Phantoms, Jesse H. Ausubel, World Energy Council Journal, pp. 8-16, July 1998. https://phe.rockefeller.edu/phantoms/

Jesse H. Ausubel is director of the Program for the Human Environment at The Rockefeller University in New York

Does Energy Policy Matter?

On 18 April Jesse gave a lunch seminar at the University Club in Washington DC titled “Does Energy Policy Matter?” Here is the edited transcript, including Q & A.

William O’Keefe: This Roundtable is timely.  I would like to say it has something to do with our foresight, but in honesty, it was luck.  Energy policy and long-term energy issues have once again been given a high priority because of world events.  Next week, the Senate is likely to pass energy legislation that will set a policy direction for the future.  We are fortunate to have one of the most informed people I know to discuss whether energy policy matters.  I might say as an aside that whether it does more good than harm is also an interesting question.

Jesse Ausubel is a recognized scholar on the environment, technology, and energy.  In addition to his position at The Rockefeller University, he has served with the Carnegie Commission on Science, Technology and Government, the National Academies, and the Sloan Foundation.  He has a long list of articles, books, and publications that go beyond being impressive.  I have known him for fifteen or more years and have always found his remarks and views insightful.  Unlike most people who talk about the future, he seems to know what he is talking about.  So with that, I present Jesse Ausubel.

Jesse Ausubel:  I’d like to begin with a little fun.  The subject is mating behavior in Philadelphia.  Is there anyone here from Philadelphia?  Now mating behavior is certainly a domain in which people consider themselves free, if not in India then at least in the USA.  So it may come as a shock how vulgar and simple are the ways of fate.

Consider relating the number of marriages to the distance in blocks between partners. The result is a so-called rank-size distribution, on a logarithmic scale.  The collector of oddball statistics G. K. Zipf graphed data for the first 5000 couples married in the year 1931 in his wonderful 1949 book, Human Behavior and the Principle of Least Effort.  About 350 couples lived 1 block apart, about 100 lived 5 blocks apart, and so on.  The couples line up perfectly, as if to begin a Minuet or perhaps a Highland Fling.  In mathematical terms, the probability of finding the proper partner was inversely proportional to the distance between them to the power of 0.8.  So much for Philadelphia freedom.
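
As a quick check on that exponent, the two counts quoted above suffice to pin down the power law; a minimal sketch in Python, using only the numbers in the preceding paragraph:

```python
import math

# Zipf's Philadelphia counts as quoted above: about 350 couples married
# at 1 block apart, about 100 at 5 blocks apart.
# Assume marriages N(d) = k * d**(-a); two points determine the exponent.
d1, n1 = 1, 350
d2, n2 = 5, 100

a = math.log(n1 / n2) / math.log(d2 / d1)
print(f"implied exponent: {a:.2f}")  # about 0.78, matching the quoted 0.8
```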

The subject of my talk today is not love but energy, and the title is “Does Energy Policy Matter?”  My essential point, foretold by the mating behavior of Philadelphians, is that strong constraints are at work that greatly reduce the freedom of strategists.  Nevertheless, there are some sensitive areas where efficient decisions can be taken.

Before turning to energy, let me introduce a couple of concepts about how systems grow and evolve.  The first concept is that systems grow to limits.  A classic case is the growth of a colony of bacteria in a dish.  They grow like an epidemic, in an s-shaped or logistic curve.  In one example, the midpoint of the process was 2.5 days, the time the process required to go from 10% to 90% of its completion was 2.2 days, and the saturation level was 50 square centimeters.  These same data can be normalized, that is, plotted against 100% of the expected outcome, in a manner that transforms the s-curve into a straight line.  Our analyses demonstrate that energy and other socio-technical systems often grow like bacteria.
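
For the quantitatively inclined, here is a minimal sketch in Python of the logistic and its linearizing transform, parameterized with the bacterial-colony numbers just cited (midpoint 2.5 days, 10%-to-90% time 2.2 days, saturation 50 square centimeters); the curve is the standard functional form, not a re-fit of the original data.

```python
import math

K = 50.0        # saturation, cm^2 (quoted above)
t_mid = 2.5     # midpoint, days (quoted above)
delta_t = 2.2   # time from 10% to 90% of saturation, days (quoted above)

# For a logistic K / (1 + exp(-r*(t - t_mid))), the 10%-to-90% time
# equals 2*ln(9)/r, so:
r = 2 * math.log(9) / delta_t

def logistic(t):
    return K / (1 + math.exp(-r * (t - t_mid)))

def linear_transform(t):
    """Fisher-Pry transform: ln(f/(K - f)) is a straight line in t."""
    f = logistic(t)
    return math.log(f / (K - f))

for t in [1.0, 2.5, 4.0]:
    print(f"t={t:.1f} d  area={logistic(t):5.1f} cm^2  "
          f"ln(f/(K-f))={linear_transform(t):+.2f}")
```

Plotted against time, ln(f/(K-f)) is a straight line, which is why the growth and substitution charts in this literature appear as straight rising and falling lines.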

The second concept is that systems grow by substitution, by mutation and selection, by evolution.  An innovation, a mutation, enters the picture and if fitter gains a share of the ecological niche or market.  Often the substitution process also follows the s-shaped or logistic curve.  A familiar example to all of us is recording media, where tapes overtook long-playing records, and in turn CDs replaced tapes. The MP3 system of downloading is now overtaking CDs.

Now to views of energy developed over the past 25 years with Cesare Marchetti, Nebojsa Nakicenovic, Arnulf Gruebler, Perrin Meyer, and others.  In our analysis, the master trend of energy evolution is decarbonization.  Molecules of the main so-called fossil fuels, coal, oil, and natural gas, each have a typical ratio of hydrogen to carbon atoms.  Coal’s H:C ratio is about 1 to 2, oil’s 2 to 1, and methane, CH4, is obviously 4 to 1.  Other elements, such as sulfur and mercury, of course contaminate the real resources, especially coal and oil.  Importantly, wood has an even more primitive H:C ratio, 1 to 10.  Given that hydrogen is much better stuff for burning than carbon, the hydrocarbons clearly form a hierarchy.

When we look at the historical growth of energy consumption over the past 150 years, we find it has grown in long waves of 50-60 years or so, each time built around the arrival of some new and more desirable form of energy.  The first pulse centered around coal and the second on oil.  A new pulse is just beginning, centered on gas, now almost everyone’s favorite fuel.

The explanation for this pattern of growth is simple.  The overall evolution of the system is driven by the increasing spatial density of energy consumption at the level of the end user, that is, the energy consumed per square meter, for example, in a city.  Ultimately, fuels must conform to what the end user will accept, and the constraints become more stringent as spatial density rises.  Rich, dense cities happily accept only electricity and natural gas (or, later, hydrogen).

Energy use will keep rising.  One reason is that computer chips could well go into 1000 objects per capita, or 10 trillion objects worldwide, as China and India log into the game.  By the way, some studies suggest the total energy system demand of a cell phone is not unlike that of a refrigerator, because the telecom system must flood the skies with waves and always be on.

Meeting more stringent demands brings us back to decarbonization.  Suppose we put all fuels people have used since 1860 in a blender each year, mixed them, and plotted the yearly ratio of hydrogen to carbon.  We find a monotonic rise of H.  According to our analysis, by 2020 the reference point for the world’s energy will be CH4, methane.  Beyond 2020 we need to begin introducing more H2 into the system to lift the system average above the norm of methane.  The obvious way is to have the nuclear power plants that generate electricity by day manufacture H2 at night.

We have reached an important place to ask: “Does energy policy matter?” Neither Queen Victoria nor Abraham Lincoln decreed a policy of decarbonization.  Yet, the System pursued it.  I was part of the research group that discovered decarbonization in the 1980s and invented the term.  Our societies had been pursuing it for 130+ years before anyone noticed.  Now presidents and energy ministers declaim decarbonization.  I conclude, in general, politicians legitimate what is happening anyway. But, a nation may be above or below the fated line, a costly variation to which I will return, where politics matters a lot.

Now consider the global trend of decarbonization plotted not as the rise of H but as the fall of C, measured as the C employed to produce each kilowatt-hour of energy or its equivalent.  In 1988 my colleagues Gruebler and Nakicenovic and I extrapolated the 130-year trend out to 2100.  A few years later we had occasion to compare the trend and extrapolation to three forecasts of H/C ratios used in the early reports of the Intergovernmental Panel on Climate Change (IPCC).  We laughed and laughed.  The reference line picked by the IPCC in 1990 did not continue the downward secular trend of decarbonization.  Instead the IPCC charted an absolutely flat line and labeled it “BAU.”  BAU means “Business as Usual.”  This is the energy system projection that pervaded the IPCC’s first report and of course had the effect, desired by many associated with the IPCC, of generating terrifying heat.  I called this forecast the Brezhnev Scenario, for technical stagnation.  Properly understood, business as usual, as you can now see from the past 140 years, is very dynamic and progressive, and eliminates carbon by 2100.

In 1992, spurred by criticism from some of us, the IPCC tried to rectify itself but could not finally accept the idea that ingenuity in the energy sector could continue.  Instead, decarbonization had to expire and keep the IPCC in business.  The most recent IPCC report used more than 40 energy scenarios, with decarbonization, or carbonization, sloping every which way and no probabilities attached.  Of course, none of these scenarios will actually influence the evolution of the system, but the spewing of scenarios by the working group of experts is amusing.  It is a confession that collectively they know nothing, that no science underlies their craft, and that politics strongly biases their projections.  Those whose ends it serves even managed to have a scenario included that causes a 5.8 degree C warming by 2100, which of course made the headlines and created the image of the most recent IPCC report.

Now let me return to more direct effects of politics, or of strategies and fate.  Consider the evolution of the USSR energy system fitted with the model of market substitution.  Several items merit comment.  Most obviously, the Russian Revolution and World War II literally drove Russians back into the woods to collect their fuel.  Yet these extreme shocks were later absolutely absorbed.  By 1950 one sees no visible effect on the energy system of World War I, the Bolshevik Revolution, the Great Depression, or the Great Patriotic War.  The system had arrived at its genetic destiny.  Along the way, the leaders of Russia and its adversaries had made its population miserable.  Yet they had no lasting effect on the USSR energy system.  Wood was disappearing right on schedule, coal peaking, oil growing, gas soaring, and nuclear penetrating.

Interestingly, the early oil bubble that Alfred Nobel created in Russia also could not be sustained.  In 1900 oil provided more than 10% of Russian primary energy.  It took until about 1950 before the infrastructure to use that fuel was in place.  So, both central planner Joseph Stalin and entrepreneur-capitalist Alfred Nobel had unsustainable energy strategies.

Let me remark in turn on the accomplishment of US political and industrial leaders.  Examining USA electricity generation, we find that in the early 1970s natural gas was poised to take off and become the lead fuel.  The great accomplishment of several members of the Nixon and Carter administrations, in cahoots with leaders of firms in the coal industry and their friends in the Congress, was to stymie the progress of gas.  We know that during the last decade all orders for new power plants were for gas, and that gas will become dominant in the next 10-20 years.  In the end, the system wins.  But the USA wasted 25 years, and incurred lots of unnecessary problems, environmental and other, by putting coal on life support.

At this time, I want to make another point about predictability of technological evolution.  Industries such as the chemical and airframe industries use learning curves giving the cost evolution of a manufacturing operation as a function of time or, much better, of the cumulative amount of the goods manufactured in that industry sector.

Another famous example of learning is the computer chip industry.  Consider the learning curves for dynamic random access memory chips (DRAMs).  Since the late 1980s, we find one generation after another of chip introduced at lower initial cost and the price for each chip falling over its lifetime.

Contrast the chipsters with the record of the coal industry since 1920.  Between 1920 and 1970 the coal industry did manage to extract more energy from each kilogram of coal burned, averaging a learning rate (LR) of 8-15%.

Not bad.  However, since 1970, the performance of the coal industry by this measure has worsened, with a learning rate of negative 4.2%.  Some blame the worsening on costs associated with emission control for sulfur and other pollutants.  Yet most industries have met environmental goals AND lowered costs as they gained experience.  Only protected industries can survive such stagnant performance.  So, energy policy can matter, for a while.
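
For reference, the bookkeeping behind such learning-rate (LR) figures follows the standard power-law learning curve; a minimal sketch in Python, with invented numbers rather than the coal or DRAM data:

```python
import math

def learning_rate(cost_old, cost_new, cum_old, cum_new):
    """Learning rate: fractional cost decline per doubling of cumulative
    output, from two (cumulative output, unit cost) observations.
    Assumes cost = c0 * cumulative**(-b), so LR = 1 - 2**(-b)."""
    b = math.log(cost_old / cost_new) / math.log(cum_new / cum_old)
    return 1 - 2 ** (-b)

# Invented illustration: unit cost falls from 100 to 80 while cumulative
# output quadruples, i.e., two doublings.
lr = learning_rate(100.0, 80.0, 1_000.0, 4_000.0)
print(f"learning rate: about {lr:.1%} per doubling")  # ~10.6%
```

A negative LR, as in coal since 1970, means unit costs rise as experience accumulates, the signature of a protected industry.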

So far, I have suggested good news, about the overall evolution of the energy system towards cleaner fuels that fit and fuel society’s growth.  But I have also suggested that the system has its own internal clock, its own rates of change and evolution.   Trying to go much faster than this clock can be as wasteful as trying to stop it.  Entrepreneurs know it is dangerous to be early as well as late.  In this regard, let me offer one more analytic approach, and apply it to the Kyoto protocol.

My colleague Paul Waggoner and I call the approach the ImPACT identity.  Impact equals people, P, times affluence or income, A, times consumer behavior, C, times technology, T.  In the case of CO2 emissions, carbon equals people times GDP/people times fuel/GDP times carbon/fuel . . . or carbon = carbon, an identity.
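
In growth rates the ImPACT identity becomes additive, which is what makes it useful for decomposition; a minimal sketch in Python, with invented placeholder rates rather than the historical series:

```python
# ImPACT: Carbon = P * (GDP/P) * (energy/GDP) * (carbon/energy),
# so in (small) annual growth rates: %carbon ~ %p + %a + %c + %t.
# The rates below are invented placeholders, for illustration only.

growth = {
    "p (population)":        0.015,
    "a (GDP per capita)":    0.020,
    "c (energy per GDP)":   -0.010,
    "t (carbon per energy)": -0.003,
}

for name, rate in growth.items():
    print(f"{name:23s} {rate:+.1%} per year")
print(f"{'carbon emissions':23s} {sum(growth.values()):+.1%} per year")
```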

Consider the world carbon emission performance for 1950-1990 and for 1991-1999 divided into the 4 forces, p, a, c, t.  In the first period, carbon emissions increased rather fast as population and income grew, though consumers’ declining preference for carbon and improving technology offset them somewhat.  In the 1990s, emissions grew slowly because globally population growth slowed, income growth slowed a lot, and favorable consumer behavior and efficiency, that is, technology, both accelerated.

Could analysis by ImPACT have provided some foresight about the targets for 2010 adopted with high aspirations in 1997 in Kyoto?  Comparison of the aspirations with recent American and French experience tests the prospects for what the diplomats signed in Kyoto.  The Kyoto negotiators could have had the performances of p, a, c, and t from 1980-1990 in mind.

During this decade in the USA and France population grew slowly but income rapidly.  Both nations also lowered their intensity of energy use (c), leaving per capita use nearly unchanged.  Efficiency (t), measured as carbon emission per energy, improved slightly in the USA and dramatically in France.  In the 1980s national emissions increased a slow 0.5 percent per year in the USA and fell 2.8 percent per year in France.

To meet its Kyoto target of 93 percent of 1990 emissions between 1997 and 2010, the USA would have needed an annual 1.4 percent decline of national emissions.  For France, a 0.8 percent rate of decline would be required to attain 92 percent of 1990 emissions for the same period.  To allocate this decline among the four forces, we estimated reasonable rates for p and a.  Comparing experience to t tests practicability.  To meet the 1997 Kyoto Protocol, the USA would nearly have had to match the French improvement of t, emission per energy, during the 1980s, and France would have had to continue its remarkable improvement, won largely by producers installing more than 40 GW of nuclear generating capacity.

Now consider the actual changes 1990-2000.  Moderate population growth continued, and income grew, especially in the USA.  Although the intensity of use (c) declined in the USA, c did not decline in France.  Efficiency measured as emission per energy improved in both nations, but more slowly than the French performance of the 1980s and, in both nations, more slowly than needed to hit their Kyoto targets.

By the end of 2001, negotiating sessions in Bonn and Marrakech had relaxed the original Kyoto targets.  Suppose the relaxed emissions target is 98 percent of the 1990 emissions in 2010.  But of course the interval to achieve the reduction had shrunk from 1997-2010 to 2001-2010.  Despite the relaxation, with reasonable values of population and income changes, even more rapid improvements in technology (t) during 2001-2010 are needed.  The USA would need to steepen its yearly decline of 1.8 percent in the 1990s to 4.3 percent, and France its 0.9 percent decline of the 1990s to 3.2 percent.
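
The arithmetic behind these required rates is simple compounding; a minimal sketch in Python, where the starting level relative to 1990 is an assumed illustrative value (the talk does not state the exact levels used):

```python
def required_annual_change(level_now, target, years):
    """Constant annual rate taking emissions from level_now to target,
    both expressed as fractions of the 1990 baseline."""
    return (target / level_now) ** (1.0 / years) - 1.0

# Assumed illustration: a country about 10% above its 1990 emissions in
# 1997, aiming for 93% of 1990 by 2010 (13 years), roughly the USA case.
rate = required_annual_change(1.10, 0.93, 13)
print(f"required change: about {rate:+.1%} per year")  # about -1.3%/yr
```

The result is close to the 1.4 percent per year cited above; shortening the interval to 2001-2010, as Bonn and Marrakech effectively did, steepens the required rate sharply even against a relaxed target.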

Whatever one believes about global climate change, Kyoto shows the incompetence and incomprehension of the diplomats who negotiated it, not only for the USA.  They could not add and subtract.  They did not understand rates of technical change.  No levers in Washington or other world capitals are powerful enough to achieve the Kyoto goals.

Returning to our theme of the misguided strategist, I was amused to observe that the USA delegation to Kyoto included “leadership” accountable for the attacks on gas and the favoring of synfuels in the late 1970s.  Perhaps Kyoto was an attempt at redemption.  Failing to understand the evolution of the energy system, politicians can waste money and create anxiety with constructs like the Synthetic Fuels Corporation and Kyoto.

Though natural gas and nuclear will eventually win and decarbonize, I do not propose to unemploy all politicians.  I will give one example of a potential big job for them.  Because we know what the rates of change are, we know that hundreds of gigatons more of carbon will pass through the energy system.  Here societies face a choice: whether to capture and sequester that carbon.  I do not believe the decisions about sequestration are built into the genetics of the energy system.  The amounts societies want to spend on sequestration depend on how risk-averse societies are.  Politicians should assess the risk aversion and act on it.

Let me now summarize.  Very stable trends, particularly that of primary energy substitution, appear finally to go unscathed through economic depressions, wars, and central planning.  The trends and associated rules put severe constraints on the playgrounds of the modelers, scenario makers, and strategists.  So, what can we do with the narrow channel left for “free” decisions? Understanding the trends and rules may lead humans to devise a more coherent, restricted, and useful set of possible courses than they have done in the past.  But the IPCC suggests the community of energy analysts is as bewildered as ever, and Kyoto suggests most politicians and diplomats are clueless about their power, or lack of it, over the evolution of the energy system.

Planning and R&D should essentially support the invariants in the system.  Fate can perhaps be influenced at the level of seeding, a fact well known to peasants for a few thousand years.  A new product appears to follow a fatal course after it has penetrated a few percent of the market. What one can hopefully do is try to preset the starting point and the slope on the basis of the effects one wants to reach.  More importantly, one can avoid the wild, painful excursions around the trend organized by Lenin & Stalin, or the USA coal interests.  In the case of the USA, the policy prescriptions are simply: favor gas, hydrogen, and nuclear.

Let me conclude on a philosophical note.  We feel a freedom of decision inside ourselves, which economists and politicians assume as sacred dogma, in the face of the obvious determinism of many global outcomes.

The situation fits the famous analogy between the somewhat free and unobservable behavior of single molecules and the beautifully clean pressure-volume relationship in a gas on a macroscopic scale.  The determinism and feeling of liberty may not be contradictory.  The crux lies in the properties of systems with a large number of degrees of freedom.  These systems tend to evolve globally through some kind of variational control which may be reduced to the existence of invariants, making the behavior of certain macroscopic variables appear deterministic.

So, in 1931 in Philadelphia each teenage male might dream of any girl in the world, but half the time he would marry the girl next door.

Thanks to Cesare Marchetti, Dionel Lopez, Chauncey Starr, Nadejda Makarova Victor, Paul Waggoner.

Questions  & Answers

(O’Keefe: In fairness to Stu Eizenstat, what he did at Kyoto, I believe, was to undo some of the damage by Al Gore by introducing unconstrained trading and sequestration to lower the cost from the unrealistic target that we accepted.  But he was around for the Synfuels Corporation.)

Question: Since we are on climate change, Jesse is, as always, polite and allows me to embarrass myself instead of doing it for me.  He will remember that I had something to do with that 1990 report of the IPCC.  I will not try to defend the indefensible, but I will just say that if you had watched that sausage being made and the utterly chaotic and highly political atmosphere in which these things were done, it comes as no surprise that that is not serious science by any stretch of the imagination.  So the results speak for themselves.  I am always interested when people talk about the hydrogen economy, because as you well know, hydrogen doesn’t come out of the ground or out of the air.  Presumably one expends energy in producing it.  Can you explain your assumptions?

Ausubel: Yes.  First, there is an existing hydrogen industry.  There are plants producing hydrogen around the world for aerospace and other applications.  Hydrogen is the obvious and necessary fuel for fuel cells, which again seem destined to take over from the internal combustion engine for powering vehicles and much transport.  The question is, where will the hydrogen come from?  There are two candidates, which I think compete in some ways but fit together very well in others.  One way is by steam reforming of natural gas and the other is by electrolysis or thermochemical processes using nuclear energy.  Natural gas is abundant and the processes are well understood, and I think in the near term, steam reforming of natural gas will be the preferred way to produce hydrogen for fuel cells.  Over the longer term, the production of hydrogen is what will improve the economics of nuclear power, much more than standardizing plants or building plants more quickly; it gives nuclear power plants something to do 24 hours a day.

One of the great problems that the electric power industry faces is that, notwithstanding the talk of the “24/7 society,” electric power demands remain quite asymmetrical and most electricity is used during the day.  So you have this immense capital that sits on its hands between about 9 o’clock at night and 6 or 7 o’clock in the morning.  Anything that can turn that capital into an asset is going to be incredibly valuable.  Just as in the hotel and airline industries, you’d rather operate at 90% capacity than 60% capacity.

I believe the production of hydrogen will bring the nuclear industry into a new scale of operation.  In the near term, steam reforming of natural gas will be the dominant source of hydrogen.  Of course, nuclear power plants as well as gas itself can provide the energy for the steam reforming.  An enormous amount of natural gas comes through a few large pipelines from Russia through Slovakia and a few other places.  These are attractive places to build large chemical complexes where, if you put in a few nuclear power plants and siphoned off some of the methane, you could then manufacture large amounts of hydrogen that could be distributed around the world.  Over the next 10-15 years, I will keep my eye on the few places where much gas transits and see whether these regions implement the next generation energy system.
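
For reference, the net stoichiometry of steam reforming followed by the water-gas shift is standard chemistry, CH4 + 2 H2O -> CO2 + 4 H2, and it fixes the ideal yields; a minimal sketch in Python (the per-kilogram figures are theoretical maxima, before process losses):

```python
# Net reaction for steam methane reforming plus water-gas shift:
#   CH4 + 2 H2O -> CO2 + 4 H2
M_CH4, M_CO2, M_H2 = 16.04, 44.01, 2.016  # molar masses, g/mol

mol_ch4 = 1000.0 / M_CH4              # moles of CH4 in 1 kg
kg_h2 = 4 * mol_ch4 * M_H2 / 1000.0   # 4 mol H2 per mol CH4
kg_co2 = mol_ch4 * M_CO2 / 1000.0     # 1 mol CO2 per mol CH4

print(f"1 kg CH4 -> {kg_h2:.2f} kg H2 and {kg_co2:.2f} kg CO2 (ideal)")
# about 0.50 kg H2 and 2.74 kg CO2 per kg of methane
```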

Question: I would certainly agree with you. As a physicist, I have been appalled at these IPCC reports over the past fifteen years and the predictions of global warming based on carbon dioxide and so on.  But one important factor is the intense ideological opposition to nuclear power.   When you go to a doctor, you hear about “magnetic resonance imaging,” but as everyone in physics knows, it is really nuclear magnetic resonance.  General Electric had to take out the word “nuclear” because of ideology.  One of the concepts they use is that you have to keep nuclear waste for 50,000 years.  I think that belief has to be destroyed.  The most intense radiation goes away in a few hundred years and in fact, we may find this radiation quite valuable for sterilizing food or doing other things.  The other thing is that methods will be developed to dispose of nuclear waste.  But how do you overcome this ideological opposition to fission nuclear power and what will eventually occur, controlled thermonuclear power?

Ausubel: You have to look at what people do rather than at what they say.  In the year 2001, nuclear plants had a record year in the United States: they made 23% of all electric power, the highest fraction it has ever been, and the largest amount in absolute terms.  The Seabrook plant just sold a couple of days ago for $837 million.  The nuclear power industry, in fact, is flourishing; the plants are printing money because they are almost all depreciated and there is plenty of demand for electricity.  So the industry is actually doing well, notwithstanding what you might think if you read the newspapers.

Question: But you can’t build another one.

Ausubel: Plants are being built around the world.  Vietnam, which has 80 million people, just announced they’re going nuclear.

Question: Not in the United States.

Ausubel: The US will build more nukes, too.  No new coal-fired power plants are being built either.  Big additions to capacity are just returning to fashion, with the shortages in California and other concerns about reliability.  Importantly, it will be under the wing of natural gas that nuclear grows again.  The big fact of the energy system over the next twenty years is the massive expansion of the gas system.  In part, people will be more comfortable with the return of growth of nuclear power as long as they feel and recognize that gas, a very attractive fuel in many ways, is taking the lead.  Even in Europe, look at what people do and not what they say.  The Italians say they haven’t built nuclear power plants but five of the French nuclear power plants essentially export electricity to Italy.  So it is not that the Italians don’t use nuclear power; they do.  But they were clever to get the French government to build the plants.  Keep your eye on what people do and pay much less attention to what they say.

Question:  I am waiting for the first nuclear power plant to be built in twenty-five years.

Ausubel:  It will come.

Question: I want to go back to hydrogen for a minute.  On one of your early slides, you had coal taking over, then oil, then gas, and you had a conceptual space for hydrogen.  I submit that it helps to talk about hydrogen not as a fuel at all but as a carrier, like electricity.  That’s what it really is.

Ausubel: Yes, that’s correct.  Hydrogen is an energy carrier and in that sense, the new primary energy source is nuclear, or it could be solar, it could be fusion.  Energy systems evolve in terms of product substitution both at the level of primary energy (generation) and at the level of the end user.  Ultimately it is the behavior of the end user that drives the system.  It is very important to recognize that.  Again, that’s the problem with coal or hay or whatever; if you tried to run Washington with hay, the problems of storage and transport just become immense.  If the product coming out of the burner tip in your kitchen is natural gas, it is much easier to begin with natural gas than to begin with coal and gasify it.  Over time the system drives in this direction: the end user wants electricity and hydrogen, not hay and charcoal and candles.

Question: I have a question about the fuel cells, because a lot of the discussion on the Hill this last year has been about the Bush energy plan and our increasing dependence on foreign oil, or just the use of oil and CAFE standards coming into the mix.  But I think that the concern about the increase in the consumption of oil can be countered if you talk about fuel cells and the combustion engine being replaced by fuel cells.  What time frame do you see for the development of fuel cells and automobiles that use fuel cells instead of oil?  The debate on the Arctic National Wildlife Refuge is up today and that has to do with reducing oil consumption.

Ausubel: The newspapers regularly run articles about the growth of the fuel cell industry.  Today an article says that Governor Engler of Michigan wants to build a new energy R&D park near Ann Arbor to promote fuel cells, fearing that Michigan will otherwise lose its edge in the automotive industry.  If you speak with Shell, Exxon/Mobil, General Motors, Ford, Daimler-Chrysler and their R&D labs, everybody just takes it as a fact that within eight, ten, fifteen years, the fuel cell will be the motor system of choice.  Some companies and countries are going to make an immense amount of money from the patents and manufacturing of fuel cells.  A Canadian company, Ballard in Vancouver, is generally regarded as the world leader, but it is early in the game.  Stimulating competition in fuel cell design and manufacturing is obviously a good idea.

On the energy side, one of the great disappointments to me about the energy plan is that it doesn’t separate natural gas from oil.  As an environmentalist, every time I hear “oil & gas” talked about like they’re Siamese twins, my heart just falls.  They are very different fuels.  I spend most of my time with Greens of various kinds and I think there are lots of Greens who would accept drilling for natural gas, whether off-shore or in upstate New York, if natural gas is the exclusive target, if it isn’t a cover for drilling for more oil and all the problems that come with oil.  One way in which the politicians could help, or could recognize reality and ratify and legitimize it, would be to have a natural gas policy that is really about natural gas and not the “oil & gas” policy, which is an increasingly uncomfortable hybrid.  We are going to use 200 million tons more of oil, and it is still going to be a big product for another thirty or forty years.  But oil is not a growth industry, whereas there’s enormous need for growth in gas, and room for growth, and it fits with fuel cells and other things.

There’s a lot to be done politically, I can say coming from the New York area.  The situation with siting and building pipelines is terrible.  Christie Whitman, now the head of EPA, actually opposed some of the gas pipelines that were supposed to bring gas across New Jersey into New York City.  There are problems getting gas through Connecticut and under Long Island Sound to Long Island.  The rights of way for pipelines are the sorts of things that the political system has to deal with, and should deal with.  Imagine if we had leadership in Congress or in the Executive Branch or in industry that would get up and say: “Gas is the way to go for the next few decades.”  I believe a lot of the Greens who have been fighting fossil fuels as one bundle are really fighting oil.  Many would support gas development.  We might be surprised how different the energy discussion would become if a Gas First policy were promoted.

Question: It is largely unknown to the rest of the world, but the aeronautics industry spent quite a bit of money investigating whether hydrogen could be used as a fuel in an airplane.  In the process, we got very close to some of the problems that hydrogen is going to face, which are really no different from what natural gas faces: safety in the home, for instance.  Can you have a hydrogen-powered house?  Most people don’t know that there are twelve hydrogen houses operating in the West somewhere that were built fifteen or twenty years ago.  It can be done; it is not hazardous.  The only real problem is that pictures of the Hindenburg burning up appear regularly on TV.  People don’t realize that there were ninety-five people on the Hindenburg when it crashed, with 200 to 250 people on the ground underneath it trying to get it down, and nobody ever mentions how many people lost their lives: thirty-five.  It took a long time to find a crash report on this.  Thirty-four of those people died because they jumped out before the durned thing got on the ground; only one person was burned by hydrogen.  He was in between the sacs, near the tail end where the fire started, and nobody knows, or will ever know, I guess, whether he was trying to sneak a smoke or whether he was deliberately doing something.  But that’s where the fire started, and he was the only person burned by hydrogen.  There were actually five bodies that were burned, but they were burned by fuel oil.

Now when it comes to fuel cells, Ballard, whom you mentioned, has done a beautiful job.  They now have, I think, eight or twelve buses operating on fuel cells in Chicago.  A reformer converts natural gas to hydrogen on board, so the fuel cell operates on hydrogen but the energy that goes into the bus is natural gas.  That conversion turns out to be pretty easy, and doing it right at the fuel cell is part of the efficiency.  And it is working fine.
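For readers who want the chemistry behind such a reformer: whether the Chicago buses use exactly this scheme is not specified here, but the textbook route is steam reforming of methane followed by the water-gas shift,

$$\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \qquad\qquad \mathrm{CO + H_2O \rightarrow CO_2 + H_2},$$

so that, net, $\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2}$: each molecule of natural gas yields four molecules of hydrogen for the cell.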

We did a safety study for the government, to see what would happen if a hydrogen-powered airplane crashed.  It turned out hydrogen was the safest fuel you could handle.  The question was: if everybody was in the airplane and the landing gear had folded and the fire had started, should the passengers evacuate the plane?  We found it was best to stay in the airplane.  Hydrogen burns extremely fast and its flame radiates very little heat, so the chances are pretty fair that you’re safer in the airplane than you are trying to get out.  The fire will probably be gone in a minute and a half.

Ausubel: There are transport and storage issues associated with hydrogen and plenty more R&D to do.  Your company and others have shown the way.  Trade-offs need to be understood between on-board reforming and external refineries.  Any system that might expand by a factor of ten or a hundred at the level of the end user is going to face new issues, including safety, as it scales up.  I don’t belittle the engineering challenges that will arise as the hydrogen system grows, but your basic point is absolutely right.  The basic features of hydrogen are attractive and there are no show-stoppers.

Question: In the automotive industry, there are people who have been working on this for a long time and haven’t been able to solve the problem of how to store the fuel.  The most successful automobile, other than the kind that reforms natural gas on board, actually runs on compressed natural gas.  Even so, it takes about three times the volume of a normal gasoline tank, and you are carrying the fuel around at 5,000 psi or something like that, which is a lot of pressure to be carrying around in an automobile.
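The factor of three follows from volumetric energy densities.  With round numbers for illustration (gasoline near 34 MJ per liter; methane near 50 MJ per kilogram, compressed to roughly 0.19 kg per liter at a few thousand psi), the required tank volume scales as

$$\frac{34\ \mathrm{MJ/L}}{50\ \mathrm{MJ/kg} \times 0.19\ \mathrm{kg/L}} = \frac{34}{9.5} \approx 3.6,$$

in the range the questioner cites, before allowing for any difference in engine efficiency.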

Ausubel: This may be a reason, as several people have suggested, to begin with buses and trucks, which are more predictable in their routes and for which weight may be less of an issue.  Part of what I was trying to show is that these are processes that operate over decades and centuries and yet are, in some way, inexorable.  Wise entrepreneurs and businessmen, as well as wise regulators, understand the rates at which the change can happen.

Question: Where did you find a wise regulator?  (Laughter)

Question: I have a question about the latest IPCC report, but first I would like to dispute your claim that many environmental groups will come on board if we can somehow talk about gas rather than “oil and gas.”  The NRDC’s energy plan makes it very clear (and they claim to be one of the more moderate groups because they have worked with some companies and claim to be for markets) that they see natural gas not as a short-term substitute in the transition from oil and coal to non-fossil fuels, but as a sort of stalking horse.  They say it right in their plan: “We are going to pretend that natural gas is the fuel of the future until we get rid of oil and coal, and then we will go after natural gas.”  So I do not believe that claim.  But my question is: have you looked at the forty-some scenarios of the IPCC report?  The scenario that produced the very high temperature increase, the 5.8 degrees, depends upon rapid world population growth, rapid attainment of US living standards all around the world, and continued very high energy use.  It would seem that that scenario is much less believable than the 1990 business-as-usual scenario.

Ausubel:  Most of the forty scenarios are unbelievable.  The fact that the authors as a group were unwilling to attach probabilities to them is important.  After all, weather forecasters nowadays are willing to say there’s a 40% chance of rain.  That the group could do no more than say that every one of these scenarios is equally valid was pathetic.  And there are internal contradictions in many of the scenarios.  Richer is cleaner, and the idea of a scenario in which a very wealthy society would not choose sanitation (industrial sanitation, whatever you want to call it) is preposterous.  When you get that rich, you can afford a hydrogen-powered home.  A lot of the scenarios are Brezhnev-ite.  We might think of some other name than Brezhnev-ite, but they are strange, incompatible combinations of things.

Question: It’s important to know where those supplementary scenarios came from.  It was the Clinton Administration that insisted on the high-end scenario, so you might call it the Clinton model.

Ausubel: There were people who knew what they were doing.  They knew that by including high-emission scenarios, a year later, when the whole set had been worked through the climate models, there would be headlines in the newspapers saying that a calamity is coming.

Question: I want to go back to natural gas again, because anyone who has watched the utility industry, as you obviously have these last few years, has seen that, with the rare exception of a coal plant or two, everything is now centered on natural gas.  I grew up on the Great Plains, and this reminds me of the year the price of hogs went up in Nebraska: all the farmers started raising hogs, and suddenly the next year they had a problem.  I wonder whether the supply of natural gas really is going to be able to support the scenario that you describe, and I am curious what assumptions are behind your apparent optimism.  There is clearly a lot of gas out there, but unless you do something with the clathrate deposits and so forth, it’s not so clear that you could support it.  What are your assumptions?

Ausubel: It depends on your time scale.  It’s a system, of course.  The geologists and resource economists, who are well known to some of the people in this room (for example, Bill Fisher at the University of Texas), say that the quantity of gas itself in North America, or even in the lower 48, is not the problem.  However, getting the rights of way, getting the pipelines in, and expanding the whole system is a big challenge.  You have to do things in the right order; otherwise you can lose a lot of money, go bankrupt, and fail for reasons that have nothing to do with resource abundance.  The key in this regard is to harness people’s economic self-interest.  A place like upstate New York, a relatively poor part of the United States that has lost a lot of industry, could be an important gas-producing region.  Somehow we have to convince the people of that region that it’s to their advantage, and that it’s an industry that can operate very cleanly and very safely.  It has to be done in a way that is genuinely safe and that people can be comfortable with.

Question:  My question, really, is about your confidence in the supply.

Ausubel:  My confidence is second-hand, but listening to the people who should know, I believe there’s plenty of natural gas.  And if you include Canada and the Gulf of Mexico and Mexico, there are enormous amounts.  If you go off-shore, gas is coming in now from Newfoundland.  Again this comes back to the importance of separating oil and gas policies, which I think the Canadians have understood better than we have.  Nova Scotia is almost certainly about to start extracting gas from the Scotian shelf.  We could do the same off North Carolina and other areas.  My reading of the community of which I am part is that there will be some objections, as this gentleman says, but I think that if one has a real, authentic, believable gas-first policy, and not just an excuse to go out and look for oil, there are regions of the West and East Coasts that could be opened up for gas exploration.

Question: Thomas Gold believes that if you go deep enough there are tremendous amounts, that gas does not come only from vegetation being compressed at the usual geological rate.

Ausubel:  In the long run, Gold’s theories are important.  For the next twenty or thirty years, they don’t matter.  Certainly his explanations are fascinating, but you should have him come and talk about it.  I love listening to him. He is one of the most interesting speakers in the world.

Question: In addition to all the problems that you have discussed with predicting what the temperature will be in the future, it is absolutely absurd to claim that in any model the temperature will go up by 5.8 degrees, or any other number you care to name.  The computer programs that predict that are trying to simulate an extremely non-linear, extremely complex system with all sorts of simplifications.  If you were doing computer modeling of any other non-linear system in physics with such simplifications and such a poor match to existing data, you would be laughed off the stage.  Yet people say they believe these models!  They are totally unbelievable.

Ausubel: I am only proximate to the climate-modeling world, so let me give you an indirect comment; I am sure you have had other speakers here who speak directly to the models.  I like to use a triad for thinking about problems of this kind: there is the known, there is the unknown, and there is the unknowable.  I think that in the case of climate predictions and climate modeling, the scientific community, in its use of terms like “uncertainty” and “predictability,” has gotten very mixed up about what is unknown but knowable versus what is fundamentally unknowable.  I tend to agree with you that knowing the future climate in detail, knowing what will actually be the climate of Washington, D.C. in 2050, is unknowable.  No amount of computational power (and, as you say, the more equations you add, the more “fudge factors” you have) can predict this.  There are a lot of confused and misleading statements in this regard, and a lot of what people claim to know is actually fundamentally unknowable.  At the same time, people are entitled to offer their best guess, and that’s what they do.  What I would do is try to make people put their money where their mouths are.  The IPCC doesn’t even put probabilities on its results.

I would like to set up a system of rewards and penalties, or incentives, such that it costs more money to publish things that you think are more unlikely or to which you will not attach probabilities.  If people say, “It’s my best guess, but there’s only a one in ten chance that it’s right,” somehow the reader, the consumer of the information, should learn that.  A lot of what consumers of information think is “known” is not really known, and researchers hide a lot of what is unknown and unknowable.
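One concrete way to build such incentives, sketched here only as an illustration, is a proper scoring rule, under which a forecaster’s expected penalty is minimized only by reporting honest probabilities.  A minimal example in Python using the Brier score (the ten-day rain figures below are invented for demonstration):

    # Brier score: a forecaster who states probability p for an event is
    # penalized (p - outcome)^2, where outcome is 1 if the event happened
    # and 0 otherwise.  Lower is better, and because the rule is "proper,"
    # exaggerating or hedging probabilities worsens the expected penalty.

    def brier_score(forecast_prob: float, outcome: int) -> float:
        """Squared-error penalty for one probabilistic forecast."""
        return (forecast_prob - outcome) ** 2

    # Ten days, four of which turn out rainy.  The honest forecaster who
    # says "40% chance of rain" beats the one who claims certainty.
    outcomes = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    honest = sum(brier_score(0.4, o) for o in outcomes) / len(outcomes)
    certain = sum(brier_score(1.0, o) for o in outcomes) / len(outcomes)
    print(honest, certain)  # 0.24 versus 0.60

A journal or funder could, in principle, track such scores over time, which is the spirit of the rewards-and-penalties remark above.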

Question: The people who produce these computer models say that they are right because they agree with each other.  The fact that they disagree with experience gets overlooked.

Ausubel: There’s a big dispute about the quality and performance of the climate models.  I am trying to approach the debate in a rather different way.  The important thing to me is whether the answers to the key questions we are asking of the atmospheric sciences and geosciences community are unknowable.  If they are, then we should say: all right, the real question is how risk-averse society today is to the consequences.

(Bill O’Keefe: I would like to add that Bob Sproull co-chaired a work group last year that produced a report on climate science and policy.  It makes a number of these points about the models and the projections for 2100.)

Question:  I just wanted to mention that one of the things you were talking about was the effect of new technology as we go up or down those curves.  What we are trying to do with ocean farming is to use the ocean to sequester very large amounts of carbon dioxide down beneath the thermocline, where, by the best numbers we have, it stays for about 16,000 years before coming up.  Since there is so much ocean, it appears possible with this technology to sequester all of the net increase in carbon dioxide from the fossil fuels we burn today.  Not that you’d ever want to do it, but it is a possibility that would be wonderful to have, to address people’s concerns about this problem.

Ausubel: As Mike Markels knows, I admire his efforts.  A lot of carbon fuel is going to be used in any scenario.  The question is how clumsy we are in using it.  I think this is a real social choice, and it’s exactly the kind of thing that societies should authentically debate; some may decide that ocean sequestration is a wise way to go.

Question: I wonder if you could explain how literally you really mean this inexorable substitution of hydrogen for gas, and what about the timing?  On your chart, it looks extremely symmetrical.  Suppose gas hydrates become technologically a great opportunity compared to storage of hydrogen.  Could you foresee the gas era lasting another hundred years?

Ausubel: I would say gas dominance will happen, and the timing is plus or minus 10 or 20%.  Energy systems can take 100 years to evolve, so we might be talking about a decade or two either way.  Basically the internal clock is set.  That said, in different parts of the world the rate of change may differ.  In North America and some other countries, the system may be in place earlier.  America had a high level of automobile ownership fifty years before Europe; the level of car ownership in Europe in 1960 was about what the US had reached by 1920.  So there is a lot of spatial heterogeneity.  Even with a global system like energy, you could have a situation where an older system persists in one region of the world, or one network, much longer than in another.  So looking at the global average may not always tell you what you want to know, especially as a businessman or politician.
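The “internal clock” refers to the logistic substitution curves shown earlier.  As a sketch of that arithmetic in Python (the parameters below are chosen to reproduce the talk’s round numbers of 10% hydrogen in 1800 and a tie in 1935; they are not fitted to actual data):

    # Fisher-Pry logistic substitution: a competitor's market share f(t)
    # satisfies log(f / (1 - f)) = k * (t - t0), a straight line in the
    # logit transform, so two anchor points fix the whole trajectory.
    import math

    t0 = 1935.0                          # crossover year: shares tied at 50%
    k = math.log(9.0) / (1935 - 1800)    # slope chosen so f(1800) = 10%

    def share(year: float) -> float:
        """Market share along the logistic path."""
        return 1.0 / (1.0 + math.exp(-k * (year - t0)))

    for year in (1800, 1935, 2000, 2100):
        print(year, f"{share(year):.0%}")
    # 1800 10%, 1935 50%, 2000 ~74%, 2100 ~94%.  A pure logistic through
    # these two anchors; the talk's curve puts hydrogen near 90% in 2100.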

Question: Hydride storage is the way to handle hydrogen in many cases; it is a known technology and we know how to do it.  But you go through the gaseous stage first, and a natural gas pipeline will handle gaseous hydrogen.

Question: There is one other area of the energy question, and that is subsidies for biofuels, the use of ethanol, and that sort of thing.  I assume you do not see that coming into the equation.

Ausubel: My research group has studied land use and forests and farms, and we’re habitat freaks.  Our view is that over the next 50 years, in the US and globally, hundreds of millions of hectares that are now actively logged and actively farmed should revert to nature.  The growth in farm productivity (yields) and the ability to grow trees faster with every year and every decade mean that the acreage needed to provide farm products and wood products is actually shrinking, after having expanded for thousands of years.  It has been shrinking since about the middle of the 20th century.  Our view is that technological progress in farming and forestry offers an enormous chance to liberate the environment and address a lot of the biodiversity questions through a return of wilderness.  To me, it seems tragic to go the biomass route.  It is so clumsy and so wasteful of land.  I am on the side of the birds and the trees.
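The land-sparing logic is simple compounding.  With invented round numbers for illustration: if yields grow at rate $g_y$ while demand grows at rate $g_d$, the land required follows

$$A(t) = A_0\,\Bigl(\frac{1+g_d}{1+g_y}\Bigr)^{t} \approx A_0\,e^{-(g_y-g_d)\,t},$$

so yields growing 2% per year against demand growing 1% per year shrink the needed acreage by a factor of $e^{-0.01\times 50}\approx 0.61$ over 50 years, releasing roughly 40% of today’s farmed land.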

William O’Keefe: Thank you very much, Jesse.

*  Edited informal remarks made at the George C. Marshall Institute Roundtable discussion April 18, 2002. Mr. Ausubel’s remarks have been slightly modified so that they make sense without the use of figures and tables, which are omitted here.
