Apollo Program vs Manhattan Project

Whenever the need to solve a complex scientific problem arises, calls often go out to start another Apollo program or Manhattan Project. It is constructive to compare those two programs and determine whether they really are a suitable model for today’s problems.

Cost

The media often cite the costs of these programs in nominal terms, that is, without accounting for inflation. That is a serious mistake, especially when attempting to compare them to a modern effort.

The Manhattan Project cost $1.9 billion in 1944 dollars. Adjusted for inflation, that is $27.5 billion in 2019 dollars. The average annual cost of the project was on par with annual spending on tobacco marketing. While most associate Los Alamos with the Manhattan Project, over 50% of the spending went to facilities at Oak Ridge, TN, including the gaseous diffusion plant used to enrich uranium.

The Apollo program cost $19.5 billion, which equates to $150 billion in 2019 dollars. It was considerably more expensive to put a human on the Moon than to build the atomic bomb. What both programs had in common is that spending spiked before their successful conclusions. Funding for the Manhattan Project peaked in 1944 and for the Apollo program in 1966. Spending surged to build the industrial plants at Oak Ridge and Hanford and to develop the Saturn V rocket. If a politician proposes a modern project of this nature without a front-loaded surge in spending, it’s not a serious proposal.

Source: https://fas.org/sgp/crs/misc/RL34645.pdf
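As a quick sanity check on those figures, converting nominal dollars to 2019 dollars is just multiplication by a cumulative inflation factor. A minimal sketch; the multipliers below are back-solved from the figures cited above, not taken from an official CPI table:

# Converting nominal program costs to 2019 dollars.
# The multipliers are implied by the figures above (assumptions,
# not official deflators; see the CRS report linked above).
def to_2019_dollars(nominal_billions, inflation_multiplier):
    return nominal_billions * inflation_multiplier

manhattan = to_2019_dollars(1.9, 14.5)   # ~$27.5 billion
apollo = to_2019_dollars(19.5, 7.7)      # ~$150 billion
print(f"Manhattan: ${manhattan:.1f}B, Apollo: ${apollo:.1f}B (2019 dollars)")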

A key difference between the two programs was that spending for the Manhattan Project was secret while the Apollo program’s was public. In his autobiography, Man of the House, Tip O’Neill relates John McCormack’s story of how then-Speaker Sam Rayburn arranged funding for the atomic bomb:

Einstein estimated the project would cost two billion dollars. Not surprisingly, the president was concerned about how to allocate that kind of money without alerting either the public or the press.

“Leave it to me,” said Sam Rayburn.

The next day, Sam called all the committee and subcommittee chairmen and told them to put an extra hundred million dollars in their budgets.

No questions were asked and no meetings were held while those funds were siphoned off to build the atomic bomb. In contrast, President Eisenhower mandated that NASA’s work and results be public, to differentiate the program from the highly secretive Soviet effort. Funding Apollo was often contentious, as it had to compete with other priorities such as the Vietnam War and the Great Society. Public approval for Apollo spending topped 50% only once, during the first Moon landing.

Sustainability

The Manhattan Project and Apollo Program had varying success in sustaining their missions. The key components of the Manhattan Project at Los Alamos and Oak Ridge remained in operation as national laboratories. No doubt the Soviet success in 1949 with their own atomic bomb was the driving force. Many would argue the Manhattan Project was too sustainable. The original program built four atomic bombs; by the 1960s, America had 30,000 nuclear warheads (the Soviets had 40,000 by the 1980s). Since then, a series of treaties has reduced both stockpiles to a few thousand warheads and ended atomic testing.

Apollo met a different fate. After the Moon landing was accomplished, President Nixon had no particular loyalty to the Kennedy-inspired program. Once a recession hit in 1971, the final three missions (Apollo 18-20) were cancelled. These were to be the major scientific phase of the program. Nixon directed NASA to work on the reusable Space Shuttle, thought to be a more economical means of space travel but, in reality, more costly than expendable rockets. NASA has continued a robust planetary and observatory program, but its human program has not left Earth orbit since 1972.

Mars mission profile proposed in 1969 by Wernher von Braun. Apollo funding had peaked three years prior and would never return to that level. By the mid-eighties, members of von Braun’s team, brought to America under Operation Paperclip, were under investigation for their V-2 work, especially the use of slave labor. Von Braun passed away in 1977. Credit: NASA.

Sustainability for both of these programs was dependent upon political viability. During the Cold War, America felt the need to maintain nuclear superiority over the Soviet Union. While Americans generally wanted to stay ahead of the Soviet space program, this did not necessarily translate into human space exploration. NASA has far exceeded any other space agency in terms of planetary exploration, astrophysics, and Earth science, though that gap is closing as nations such as China and India build their space programs.

Benefits

I’ll spare you the tales of NASA developing Velcro. Certainly private industry could have developed such a product. However, both programs contributed key innovations to American society.

As one might imagine, the Manhattan Project required solving complex mathematical problems. Given the urgency of the program, innovations were sought to speed up the work. John von Neumann expanded upon the IBM tabulating machines used at the project in developing the first modern computers. The Apollo program began the miniaturization of the computer. While these machines were rudimentary compared to today’s, modern high tech has its roots in these programs.

The Manhattan Project kick-started the field of nuclear medicine (used for imaging) and radiation treatments for cancer. The Apollo program contributed advancements in pacemakers, dialysis treatment, and the development of CAT scan imaging. Both projects required the development of high-speed, powerful photographic imaging to record the results of their work.

Nuclear bomb less than one millisecond after detonation. Credit: Lawrence Livermore National Laboratory.

Often overlooked, given the political nature of the Apollo program, are its scientific contributions. Prior to Apollo, there were three competing ideas of how the Moon was formed: capture (Earth’s gravity captured the Moon), accretion (Earth and Moon formed together), and fission (the Moon split off from Earth during formation). Apollo proved all three incorrect. The generally accepted theory, supported by evidence brought back by Apollo, is that the Moon formed in the aftermath of a Mars-sized body colliding with Earth. The key point here is that a scientific idea, no matter how impressive it may be, needs to be supported by evidence.

While spinoffs are secondary to the primary objective of these programs, as we can see, they often have powerful impacts on the economy and society in general.

Analogies

The most obvious analogy today would be addressing climate change. It’s not a perfect analogy; climate change is much larger and more international in scope, but there are some lessons to be drawn.

The urgency of climate change is similar to that of the Manhattan Project. If the Soviets had beaten the U.S. to the Moon, it would have been distressing but not an existential threat. Solving climate change, however, does not require secrecy, and any innovations on that front, as with NASA’s work, should be in the public domain. A large-scale program to combat climate change would entail the following:

An upfront surge in spending, since, as with both the Manhattan Project and Apollo, the time frame to solve the problem is exceedingly short.

A realization that such an effort will rely on a mixture of government, university, and private sector initiatives. The worst thing we could do is introduce ideology into the program, i.e., insist it must be an all-government or all-private-sector effort. All 3,000,000 parts of the Saturn V were designed and built by private contractors. DuPont produced plutonium and Kellex designed the uranium enrichment plants for the Manhattan Project.

What should the government do and what should be left to the private sector?

Historically, government has performed best at providing infrastructure the private sector can innovate upon. Infrastructure can take many forms, including transportation, research centers, and the internet (built on government-funded research at universities, with the World Wide Web originating at CERN). NASA, for one, provides intensive remote sensing of Earth to monitor the climate.

As challenging as the problem of climate change appears, it has one major advantage over the Manhattan Project and Apollo: there are market forces sustaining the advancements that reduce carbon emissions. The cost of renewable energy is now competitive with fossil fuels. Unlike space exploration, where Pan Am flights to the Moon were once envisioned, market forces now favor investment and research in renewable energy.

As hard as our current president might try, he’ll not be able to cancel the fight against climate change as Nixon cancelled Apollo.

But, and this is a big but, it will be difficult to provide an accurate cost estimate. Any program that relies on the invention of new technology to reach completion will have this problem. It’s not like repaving a road. Budget overruns of this nature often provoke political blowback. Here is where political leadership is required to keep the program moving forward.

If, as is often said, “History doesn’t repeat itself but it often rhymes”, taking the proper lessons from history along with some flexibility will enable us to solve today’s most urgent problems. Things looked bleak in 1941 and 1960, but a strong effort and resolve overcame the odds.

  • Image atop post – left: Trinity Test, credit: Department of Energy, right: launch of Apollo 11, credit: NASA.

The Little Ice Age & Global Warming

In some quarters of the media, global warming is presented as a natural rebound from an epoch known as the Little Ice Age. Is it possible the rise in global temperatures represents a natural recovery from a prior colder era? The best way to answer that is to understand what the Little Ice Age was and to determine whether natural forcings alone can explain the recent rise in global temperatures.

The Little Ice Age refers to the period from 1300 to 1850, when very cold winters and damp, rainy summers were frequent across Northern Europe and North America. That era was preceded by the Medieval Warm Period, from 950 to 1250, featuring generally warmer temperatures across Europe. Before we get into the temperature data, let’s take a look at the physical and cultural evidence for the Little Ice Age.

Retreat of the Forni Glacier from A-1890, B-1941, C-1997, and D-2007. Source: http://geomorphologie.revues.org/7882?lang=en

You can see the retreat of the glaciers in the Alps from the end of the Little Ice Age to the current day. In the Chamonix Valley of the French Alps, advancing glaciers during the Little Ice Age destroyed several villages. In 1645, the Bishop of Geneva performed an exorcism at the base of the glacier to halt its relentless advance. It didn’t work. Only the end of the Little Ice Age stopped the glacier’s advance in the 19th century.

The River Thames Frost Fairs

The River Thames in London froze over 23 times during the Little Ice Age, and five times the ice was thick enough for fairs to be held on the river. When the ice stopped shipping on the river, the fairs were held to supplement the incomes of people who relied on river commerce for a living. These events happened in 1684, 1716, 1740, 1789, and 1814. Since then, the river has not frozen solidly enough in the city for such an event to occur. An image of the final frost fair is below:

The Fair on the Thames, February 4th 1814, by Luke Clennell. Credit: Wiki Commons

The Year Without a Summer

The already cold climate of the era was exacerbated by the eruption of Mt. Tambora on April 10, 1815. If volcanic dust reaches the stratosphere, it can remain there for a period of 2-3 years, cooling global temperatures. The eruption of Mt. Tambora was the most powerful in recorded history. Its impact was felt across Europe and North America during the summer of 1816. From June 6-8 of that year, snow fell across New England and as far south as the Catskill Mountains. Accumulations reached 12-18 inches in Vermont. In Switzerland, a group of writers, stuck inside during the cold summer at Lake Geneva, decided to hold a contest to see who could write the most frightening story. One of the authors was Mary Shelley, and her effort from that summer is below:

First Edition cover for Mary Shelley’s Frankenstein. Credit: Wiki Commons

Let’s take a look at what the hard data says about the Little Ice Age. Below is a composite of several temperature reconstructions from the past 1,000 years in the Northern Hemisphere:

Credit: IPCC, 2007.

The range of uncertainty widens as we go back in time, since we are using proxies such as tree rings and ice cores rather than direct temperature measurements. However, even with that wider range of uncertainty, it can be seen that temperatures in the Northern Hemisphere were about 0.5 °C cooler than the 1961-90 baseline period. Was the Little Ice Age global in nature, or was it restricted to the Northern Hemisphere?

Recent research indicates that the hemispheres are not historically in sync when it comes to temperature trends. One key difference is that the Southern Hemisphere is more dominated by oceans than the Northern Hemisphere. The Southern Hemisphere did not experience warming during the northern Medieval Warm Period, though it did experience overall cooling between 1571 and 1722. More dramatically, the two hemispheres have been in sync since the warming trend began in 1850. This indicates the recent global warming trend is fundamentally different from prior climate changes.

The Census of Bethlehem by Pieter Bruegel the Elder. Painted in 1566, inspired by the harsh winter of 1565. Credit: Wiki Commons.

Keep in mind that we are dealing with global averages. Like a baseball team that hits .270 but has players hitting anywhere between .230 and .330, certain areas of the globe will be hotter or cooler than the overall average. During the 1600s, Europe was colder than North America, and the reverse was the case during the 1800s. At its worst, the regional drop in temperature during the Little Ice Age was on the order of 1-2 C (1.8 to 3.6 F). At first glance, that might not seem like much. We tend to think in terms of day-to-day weather, and there is not much difference between 0 and 2 C (32 and 35.6 F). But yearly averages are different from daily temperatures.

We’ll take New York City as an example. The hottest year on record is 2012, at 57.3 F. The average annual temperature is 55.1 F. If temperatures were to climb by 3 F, the average year in New York City would become hotter than the hottest year on record. Again, using the baseball example, a player’s single-game average fluctuates far more than his career batting average. You can think of daily weather as a game box score and climate as a career average. It is much more difficult to raise a career batting average. In the case of climate, it takes a pretty good run of hotter-than-normal years to raise the average 2-3 F.
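To make the box-score-versus-career-average point concrete, here is a toy simulation. The 55.1 F mean comes from above; the day-to-day spread is an illustrative assumption, not actual NYC data:

# Toy model: daily temperatures are noisy (the box score), the annual
# mean is stable (the career average). The 9 F daily spread is assumed
# for illustration only.
import random

random.seed(1)
days = [random.gauss(55.1, 9.0) for _ in range(365)]
print(f"daily extremes: {min(days):.1f} F to {max(days):.1f} F")
print(f"annual mean: {sum(days) / len(days):.1f} F")
# Single days swing by tens of degrees, yet the annual mean barely moves.
# Raising that mean by 3 F takes a shift of the entire distribution.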

Although the Northern Hemisphere was emerging from the Little Ice Age in the late 1800’s, cold winters were still frequent. This train was stuck in the snow in 1881, the same winter that served as the inspiration for Laura Ingalls Wilder’s The Long Winter, part of her Little House on the Prairie series. Credit: Minnesota Historical Society.

Let’s go back to the climate history. Global temperatures dipped about 0.5 C over a period of several centuries during the Little Ice Age. Since 1800, global temperatures have risen 1.0 C. This sharp increase gives the temperature graph its hockey stick look. The latest warming trend is more than just a return to the norm after the Little Ice Age. There are two other factors to consider as well: one is the increasing acidity of the oceans, the other the cooling of the upper atmosphere.

Carbon dioxide reacts with seawater to form carbonic acid. Since 1800, the acidity of the oceans has increased by 30%. A rise in global temperatures alone does not explain this, but an increase in atmospheric carbon dioxide delivered to the oceans via the carbon cycle does. As carbon dioxide in the atmosphere increases, it traps more heat near the surface, allowing less heat to escape into the upper atmosphere. The result is that the lower atmosphere gets warmer and the upper atmosphere gets cooler. The stratosphere has cooled 1 C since 1800. A natural rebound in global temperatures would warm both the lower and upper atmosphere; observations do not match this. Increased carbon dioxide in the atmosphere, however, does explain it.
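For reference, the reaction itself is simple:

CO2 + H2O -> H2CO3 (carbonic acid)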

The Little Ice Age looms large historically in that the colder climate played a role in many events leading to modern-day Europe and America. What caused the Little Ice Age? That is still a matter of debate. The Maunder Minimum, a sustained period of low solar activity from 1645 to 1715, is often cited as the culprit. However, solar output does not vary enough with solar activity to cause the entire dip in global temperatures during the Little Ice Age. As the old saying goes, correlation is not causation. That’s where the science gets tough: you need to build a model, based on the laws of physics, that explains causation. While the cause of the Little Ice Age is still undetermined, the origin of modern global warming is not. To deny that trend is caused by human carbon emissions, you have to explain not only the warming of the lower atmosphere, but also the cooling of the upper atmosphere and the increase in ocean acidity.

To date, no one has accomplished that.

*Image atop post is Hendrick Avercamp’s 1608 painting, Winter Landscape with Ice Skaters.  Credit:  Wiki Commons.

Carbon

Most are aware of the role carbon, specifically in carbon dioxide, plays in global warming. What is important is not to designate carbon as something inherently harmful. In fact, without carbon, life would not be possible. So let’s take a look at carbon and how it fits into the big picture on Earth.

Carbon is created by nuclear fusion in stars. When Sun-like stars become red giants, their cores fuse helium into carbon: two helium nuclei fuse into beryllium, which captures a third helium nucleus to form carbon. When a massive star goes supernova, the explosion disperses the matter created by that star into the universe, where it is recycled into new stars and planets. Remember the old song lyric, “We are stardust”? That is literally the case. The matter that makes up most of our bodies was produced in the fusion reactions of an ancient generation of stars.

So what is carbon? Let’s take a look at the image below:

Credit: Alejandro Portis/Wiki Commons.

First, note that the number of protons in the nucleus equals the number of electrons orbiting the nucleus. Protons have a positive charge and electrons have a negative charge; equal numbers of both mean the atom is electrically neutral. Also note that there are four electrons in the outermost shell, which can hold a total of eight. Thus, the carbon atom can form molecules by sharing its four outer electrons with other elements. Atoms like to have their outer shells filled, or, as many a high school chemistry teacher has said, are “happy” when those outer shells are filled.

Carbon Based Life

The study of organic chemistry is often treated as a course unto itself. What is important to understand is that life on Earth is carbon based. The bonds that a carbon atom can form with hydrogen, oxygen, and nitrogen atoms make it the backbone of the organic molecules that life consists of. Carbon atoms have the ability to form long, complex chains of molecules to create carbohydrates, lipids, proteins, and nucleic acids (such as DNA).

Nature likes to recycle. As noted above, carbon was formed in stars and recycled into new stars. Carbon is recycled on Earth as well. Ever hear the term fossil fuels? That is because the fuel we use is carbon based, and those carbon-based fuels are extracted from the Earth. How did they get there? From the dead remains of plant and animal life that existed on Earth millions of years ago.

Hydrocarbons

The fuels we use in our day-to-day lives are based on hydrocarbons. The term is derived from the molecular structure of these fuels, which are composed of carbon and hydrogen atoms. For example, natural gas is mostly methane, a simple hydrocarbon in which one carbon atom shares electrons with four hydrogen atoms; hence methane’s molecular formula is CH4. Gasoline, on the other hand, is formed from longer carbon-hydrogen chains such as C11H24 or C12H26. An example of some hydrocarbons is shown below:

Credit: United States Geological Survey.
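Those formulas are not arbitrary. Simple chain hydrocarbons (alkanes) follow the pattern CnH2n+2, and a few lines of code verify that CH4, C11H24, and C12H26 all fit it; a minimal sketch:

# Alkanes (single-bonded carbon chains) follow the formula C(n)H(2n+2).
def alkane_formula(n_carbons):
    c = "C" if n_carbons == 1 else f"C{n_carbons}"
    return f"{c}H{2 * n_carbons + 2}"

for n in (1, 11, 12):
    print(alkane_formula(n))   # CH4, C11H24, C12H26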

Why do hydrocarbons make an excellent fuel source? There are a multitude of reasons. Hydrocarbons produce a lot of energy and their combustion can be controlled. Economically, fossil fuels are easy to store and transport. That also makes gasoline difficult to replace: not only would new automobile engines need to be designed, but a new infrastructure would need to be built to replace the current refinery-pipeline-gas station system. While great strides are being made in alternative fuel sources, fossil fuels will be a significant player in the economy for the foreseeable future.

To see why this is a concern, we’ll take a look at a simplified version of the carbon cycle below.

Credit: U.S. Department of Energy Genomic Science Program/http://genomicscience.energy.gov

Note how the use of fossil fuels results in a net addition of 6 billion tons (Gt, gigatons) of carbon into the atmosphere per year. Carbon is recycled between the land, oceans, and atmosphere. Why do fossil fuels emit more carbon into the atmosphere than is absorbed back into the land and oceans? Because it takes millions of years to form fossil fuels but only a few months to extract and burn them. It’s the same as running more water into a bathtub than the drain can take away. So, what happens to that carbon when fossil fuels are burned and released into the atmosphere?
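The bathtub analogy can be written out as a toy model. The 6 GtC per year net flux comes from the diagram above; the starting atmospheric stock is an illustrative round number, not a measured value:

# Toy "bathtub" model: the atmosphere gains whatever the sinks cannot drain.
atmosphere_gtc = 750.0     # illustrative starting stock of carbon (GtC)
net_flux_per_year = 6.0    # net addition from fossil fuels (per the figure)

for year in range(10):
    atmosphere_gtc += net_flux_per_year
print(f"after 10 years: {atmosphere_gtc:.0f} GtC in the atmosphere")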

Carbon Dioxide

To understand how carbon dioxide is formed, let’s take a look at an oxygen atom below:

Credit: Greg Robson/Wiki Commons.

Note that oxygen has six electrons in an outer shell that can hold eight. Remember, the carbon atom has four spots in its outer shell to share. That being the case, two oxygen atoms will combine with a single carbon atom so that the outer shells of the oxygen atoms are completely filled with eight electrons and are “happy”.

Methane is the simplest of the hydrocarbon fuels. What happens when methane is burned for energy?

Oxygen serves as the oxidizer when methane is burned, as follows:

CH4 (methane) + 2O2 -> CO2 (carbon dioxide) + 2H2O + energy

Note that each side of the equation contains 1 carbon atom, 4 hydrogen atoms, and 4 oxygen atoms. When fossil fuels are burned for energy, carbon dioxide is released in the exhaust and into the atmosphere.
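You can check that bookkeeping mechanically. A small sketch that counts the atoms on each side of the equation:

# Sanity check that CH4 + 2 O2 -> CO2 + 2 H2O is balanced.
reactants = {"C": 1, "H": 4, "O": 2 * 2}          # one CH4 plus two O2
products = {"C": 1, "H": 2 * 2, "O": 2 + 2 * 1}   # one CO2 plus two H2O
assert reactants == products
print("balanced:", reactants)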

Greenhouse Gases

The composition of the Earth’s atmosphere is as follows:

Nitrogen: 78%

Oxygen: 21%

Argon: 0.9%

Carbon Dioxide: 0.04%

Methane: 0.00017%

How is it that trace gases such as carbon dioxide and methane play a dominant role in the greenhouse effect while nitrogen and oxygen do not? That is a matter of the molecular structure of each gas. Before we get into that, let’s take a look at the role greenhouse gases play in Earth’s ability to support life.

To appreciate greenhouse gases on Earth, we’ll take a look at a place without them: the Moon. The Moon is the same distance from the Sun as the Earth, so it provides a baseline to examine. Below is a comparison of the average temperature on the Moon and on Earth:

Moon: 0° F

Earth: 60° F

In other words, without greenhouse gases, the average temperature of the Earth would be the same as the Moon’s: 0° F. At that temperature, water on Earth would be frozen and human life would not exist. The point here is that greenhouse gases are not “bad”. In fact, we need those gases to survive. However, too much of a good thing can be a bad thing, and that includes greenhouse gases.
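That 0° F figure is not pulled out of thin air; it falls out of a standard radiative balance calculation for a greenhouse-free body at Earth’s distance from the Sun. A sketch using the usual textbook values:

# Effective (no-greenhouse) temperature: absorbed sunlight = emitted infrared,
# so S * (1 - a) / 4 = sigma * T^4. Values are standard textbook figures.
S = 1361.0       # solar constant at Earth's distance, W/m^2
a = 0.3          # average albedo (fraction of sunlight reflected)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T = (S * (1 - a) / (4 * sigma)) ** 0.25
print(f"{T:.0f} K = {(T - 273.15) * 9 / 5 + 32:.0f} F")   # ~255 K, about 0 F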

What makes a gas a greenhouse gas?

That question can be answered by looking at the molecular structure of the gases in Earth’s atmosphere. Some molecules, such as carbon dioxide, have bonds that can stretch and vibrate in ways that absorb infrared radiation, while others, such as nitrogen and oxygen, are symmetric two-atom molecules whose vibrations cannot. In addition, the molecules that do absorb are choosy about which frequencies they absorb at. To understand this better, take a look at the electromagnetic (EM) spectrum below:

Credit: NASA

Note that radio, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays are all forms of EM radiation. What differentiates the various types is wavelength: the shorter the wavelength, the more energy the radiation carries. That is why gamma rays are very damaging to life, and why we must be careful not to overexpose ourselves to X-rays and ultraviolet rays. Greenhouse gases only absorb radiation in the infrared range. What exactly is infrared radiation?
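The wavelength-energy relationship is the simple formula E = hc/wavelength. A quick sketch comparing a few of the bands named above (the wavelengths are representative values, assumed for illustration):

# Photon energy E = h * c / wavelength: shorter wavelength, more energy.
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s

for band, wavelength_m in [("radio", 1.0), ("infrared", 1e-5),
                           ("visible", 5e-7), ("x-ray", 1e-10)]:
    print(f"{band}: {h * c / wavelength_m:.2e} J per photon")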

As you can tell from the spectrum image above, our eyes can only detect a small part of the EM spectrum. Infrared radiation is one form that we cannot see but can feel as heat. The vibrational motions of atoms and molecules produce infrared radiation, and all objects radiate in the infrared. In fact, humans radiate most strongly in the infrared, as do the planets, including Earth. Night vision goggles are basically infrared sensors: detecting the heat from objects at night allows us to see them in the dark. Below is an image of a cat in infrared:

Credit: NASA/IPAC

Note the yellow areas on the infrared image; these are the warmest parts of the cat. The dark nose is the coolest.

As sunlight strikes the Earth’s surface, the ground warms and radiates the energy back into the atmosphere as heat or infrared radiation.

What happens when infrared radiation encounters a greenhouse gas? The gas molecule absorbs the infrared energy, setting its molecular bonds vibrating. The molecule then re-emits that energy as infrared radiation in a random direction, and surrounding greenhouse gas molecules repeat the process. This slows the escape of infrared radiation through the upper atmosphere and into space. In essence, increasing greenhouse gases is like throwing an extra blanket on the Earth.*

The impact of the greenhouse effect is twofold. First, it traps heat in the lower atmosphere, increasing global temperatures near the surface. Second, by preventing heat from escaping into the upper atmosphere, it cools the stratosphere. This provides us with a key diagnostic tool to test whether greenhouse gases are causing the increase in surface temperatures: if the increase originated from another forcing, such as solar irradiance, both the lower and upper atmosphere would become warmer. So how does the evidence look? The answer is below:

Credit: NASA Earth Observatory.

As the lower atmosphere has warmed, the upper atmosphere has cooled. A good portion of the upper atmospheric cooling is due to ozone loss: the less ozone there is, the less ultraviolet radiation is absorbed in the stratosphere. However, ozone loss is not enough to explain all of the stratospheric cooling; the rest is caused by the greenhouse effect. You’ll note the two short-term spikes in stratospheric temperatures around 1983 and 1992. These were generated by volcanic aerosols ejected into the upper atmosphere by two separate eruptions. The aerosols absorb sunlight and heat the stratosphere. However, the effect lasts on the order of 2-3 years and should not be confused with long-term trends.

Carbon Isotopes

All carbon atoms have six protons and six electrons. Where they differ is in the number of neutrons in the nucleus. Most carbon atoms have six neutrons, about 1% have seven, and roughly one in a trillion has eight. Plant life preferentially absorbs carbon dioxide containing the common, lighter six-neutron carbon. Since fossil fuels consist of the remnants of past life on Earth, burning them releases less of the heavier seven- and eight-neutron carbon than natural processes do. If the increase in atmospheric carbon dioxide is a result of burning fossil fuels, we would expect the atmosphere to show a higher proportion of the lighter six-neutron carbon. Indeed, the ratio of six-neutron to seven-neutron carbon atoms has increased since 1850 and is at its highest level in at least 10,000 years.
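A back-of-envelope mixing calculation shows how this works. Isotope ratios are usually expressed as delta-13C in parts per thousand; the values below are typical textbook figures and should be treated as assumptions for illustration:

# Two-source mixing sketch: fossil carbon is depleted in seven-neutron
# carbon-13, so adding it makes the atmosphere isotopically "lighter."
atmosphere_d13c = -6.5    # pre-industrial atmospheric delta-13C (assumed)
fossil_d13c = -28.0       # typical fossil fuel delta-13C (assumed)
fossil_fraction = 0.10    # suppose 10% of today's CO2 is fossil-derived

mixed = (1 - fossil_fraction) * atmosphere_d13c + fossil_fraction * fossil_d13c
print(f"mixed delta-13C: {mixed:.2f} per mil")   # more negative = lighter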

Thus, the theoretical model meets the data, meaning the best explanation is that climate change is caused by human-made greenhouse gases, especially carbon dioxide.

*Gavin Schmidt uses this analogy in his book Climate Change. As Schmidt notes, like most analogies, it is not perfect. Under a blanket, heat is generated by the person using it. In the atmosphere, the energy is received from the Sun above in the form of light, then transformed and radiated by the ground in the form of infrared radiation.

**Image atop post is a NASA computer model of the global distribution of carbon dioxide.  Credit:  NASA’s Goddard Space Flight Center/B. Putman

Beware of Outliers

As we digest the run-up to the 2016 presidential election, we can expect the candidates to present exaggerated claims to promote their agendas. Often, these claims are abetted by less-than-objective press outlets. Now, that’s not supposed to be the press corps’ job, obviously, but it is what it is. How do we discern fact from exaggeration? One way is to be on the lookout for the use of outliers to promote falsities. So what exactly is an outlier? Merriam-Webster defines it as follows:

A statistical observation that is markedly different in value from the others of the sample.

The Wolfram MathWorld website adds:

Usually, the presence of an outlier indicates some sort of problem. This can be a case which does not fit the model under study, or an error in measurement.

The simplest case of an outlier is a single data point that strays greatly from an overall trend. An example of this is the United States jobs report from September 1983.

Credit: Bureau of Labor Statistics

In September 1983, the Bureau of Labor Statistics announced a net gain of 1.1 million new jobs. As you can tell from the graph above, it is the only month since 1980 to gain over 1 million jobs. Why would we care about a jobs report from three decades ago? Because it is often used to promote the stimulative power of the Reagan tax cuts. When you see an outlier like this being used to support an argument, you should be wary. As it turns out, there is a simpler explanation that has nothing to do, pro or con, with Reagan’s economic policy. See the job loss immediately preceding September 1983? In August 1983, there was a net loss of 308,000 jobs, caused by a strike of 650,000 AT&T workers who returned to work the following month.

If you eliminate the statistical noise of the striking workers from both months, you have a gain of over 300,000 jobs in August 1983 and over 400,000 in September 1983. Those are still impressive numbers, with no need for an outlier to exaggerate them. However, it has to be noted that it was the monetary policy of Fed Chair Paul Volcker, rather than the fiscal policy of the Reagan administration, that was the main driver of the economy at the time. Volcker pushed the Fed Funds rate as high as 19% in 1981 to choke off inflation, causing the recession. When the Fed eased up on interest rates, the economy rebounded quickly, the normal response predicted by standard economic models. So we really can’t credit Reagan for the recovery, or blame him for the 1981-82 recession either. It’s highly suspect to use an outlier to support an argument; it’s even more suspect to assume a correlation.
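A simple screen would have flagged that September number automatically. Here is a sketch using a z-score on monthly job changes; apart from the -308 and 1,114 figures cited above, the numbers are invented for illustration:

# Flag outliers with a z-score. Values in thousands of jobs; only the
# August and September 1983 figures come from the text, the rest are made up.
data = [254, 311, 287, -308, 1114, 271, 330, 295]

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
std = variance ** 0.5
for x in data:
    z = (x - mean) / std
    if abs(z) > 2:
        print(f"{x}: z = {z:+.1f}  <- investigate before modeling with it")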

To present a proper argument, your data has to fit a model consistently. In this case, the argument is that tax cuts alone are the dominant driver of job creation in the economy. That argument is clearly falsified by the data above, as the 1993 tax increases were followed by a sustained period of job creation in the mid-to-late 1990s. And that is precisely why supporters of the tax-cuts-equal-job-creation argument have to rely on an outlier to make their case. It’s a false argument that relies on the fact that, unless one is a trained economist, one is not likely to know what occurred in a monthly jobs report over three decades ago. Clearly, a more sophisticated model with multiple inputs is required to predict an economy’s ability to create jobs.

When dealing with an outlier, you have to explore whether it is a measurement error, and if not, whether it can be accounted for within existing models. If it cannot, you’ll need to determine what modification is required to make your model explain it. In science, the classic case is the orbit of Mercury. Newton’s laws do not accurately predict this orbit: Mercury’s perihelion precesses at a rate 43 arcseconds per century greater than Newton’s laws predict. Precession of planetary orbits is caused by the gravitational influence of the other planets, and the orbital precession of every planet besides Mercury is correctly predicted by Newton’s laws. Explaining this outlier was a key problem for astronomers in the late 1800s.

At first, astronomers attempted to analyze this outlier within the confines of the Newtonian model. The most prominent of these solutions proposed that a planet whose orbit resided inside Mercury’s perturbed Mercury’s orbit in a manner that explained the extra precession. This proposed planet was dubbed Vulcan, after the Roman god of fire. Several attempts were made to observe the planet during solar eclipses and predicted transits of the Sun, with no success. In 1909, William W. Campbell of the Lick Observatory stated that no such planet existed and declared the matter closed. At the same time, Albert Einstein was working on a new model of gravity that would accurately predict the orbit of Mercury.

Vulcan’s Forge by Diego Velázquez, 1630. Apollo pays Vulcan a visit. Instead of having a real planet named after him, Vulcan settled for one of the most famous planets in science fiction.  Credit: Museo del Prado, Madrid.

The general theory of relativity describes the motion of matter in two areas where Newton’s model fails: near a large gravity well such as the Sun, and at velocities close to the speed of light. In all other cases, the solutions of Newton and Einstein match. Einstein understood that if his new theory could predict the orbit of Mercury, it would pass a key test. On November 18, 1915, Einstein presented his successful calculation of Mercury’s orbit to the Prussian Academy of Sciences. This outlier was finally understood, and a new theory of gravity was required to do it.
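Einstein’s correction is compact enough to check for yourself. The formula below is the standard general-relativistic perihelion advance per orbit, evaluated with textbook values for Mercury’s orbit:

# Relativistic perihelion advance per orbit:
#   delta_phi = 6 * pi * G * M / (a * (1 - e^2) * c^2)
import math

G = 6.674e-11      # gravitational constant, m^3/kg/s^2
M = 1.989e30       # mass of the Sun, kg
c = 2.998e8        # speed of light, m/s
a = 5.791e10       # Mercury's semi-major axis, m
e = 0.2056         # Mercury's orbital eccentricity
period_years = 0.2408

per_orbit = 6 * math.pi * G * M / (a * (1 - e ** 2) * c ** 2)   # radians
arcsec_per_century = per_orbit * (100 / period_years) * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.0f} arcseconds per century")   # ~43, as observed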

Nearly 100 years later, another outlier was discovered that could have challenged Einstein’s theory. Relativity puts a velocity limit on the universe at the speed of light. A measurement of a particle traveling faster than this would, as the orbit of Mercury did to Newton, require a modification to Einstein’s work. In 2011, a team of physicists announced they had recorded a neutrino with a velocity faster than the speed of light. The OPERA (Oscillation Project with Emulsion-tRacking Apparatus) team could not find any evidence of a measurement error. Understanding the ramifications of this conclusion, OPERA asked for outside help in verifying the result. As it turned out, a loose fiber optic cable caused a delay in firing the neutrinos, and this delay produced the measurement error. Once the cable was repaired, OPERA measured the neutrinos at their proper velocity, in accordance with Einstein’s theory.

While the OPERA situation was concluding, another outlier was gaining headlines: the increase in annual sea ice in Antarctica, seemingly contradicting climate scientists’ claim that global temperatures are on the rise. Is it possible to reconcile this observation within the confines of a model of global warming? What has to be understood is that this measurement is an outlier that cannot be extrapolated globally; it pertains only to the sea ice surrounding the Antarctic continent.

Glaciers on the land mass of Antarctica continue to recede, as do glaciers in mountain ranges across the globe and in the Arctic. Clearly something interesting is happening in Antarctica, but it is regional in nature and does not overturn current climate change models. At least, none of the arguments I’ve seen using this phenomenon to rebut global warming models have provided an alternative model that also explains why glaciers are receding on a global scale.

Outliers are found in business as well. Most notably, carelessly incorporating an outlier into a forecasting model as if it were a statistical average is dangerous. Let’s take a look at the history of housing prices.

Credit: St. Louis Federal Reserve.

In the period from 2004 to 2006, housing prices climbed over 25% per year. This was clearly a historic outlier, and yet many assumed it was the new normal and underwrote mortgages and derivative products accordingly. An example would be balloon mortgages, where it was assumed the homeowner could refinance the large balloon payment at the end of the note with equity newly acquired through rapid appreciation. Instead, the crash in property values left these homeowners owing more than the property was worth, causing high rates of default. Often, the use of outliers for business purposes is justified with slogans: this is a new era, the new prosperity. It turns out to be just another bubble. Slogans are never enough to justify using an outlier as an average in a model, and you should never be swayed by outside noise demanding you accept an outlier as the new normal. Intimidation in the workplace played no small role in the real estate bubble, and if you are a business major, you’ll need to prepare yourself for such a scenario.
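The danger is easy to see with compound growth. A sketch comparing the bubble-era 25% appreciation rate cited above with a modest long-run rate; the 4% rate and the starting price are illustrative assumptions:

# What a model assumes when it treats the 2004-06 outlier as the average.
price = 300_000            # illustrative starting home price
bubble_rate = 0.25         # the 2004-06 outlier, per the text
historical_rate = 0.04     # assumed long-run appreciation rate

for years in (3, 5):
    as_average = price * (1 + bubble_rate) ** years
    on_trend = price * (1 + historical_rate) ** years
    print(f"{years} years: ${as_average:,.0f} assumed vs ${on_trend:,.0f} on trend")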

If you are a student and have an outlier in your data set, what should you do? Ask your teachers, to start with. Outliers often have a very simple explanation, such as the 1983 jobs report, that will not interfere with the overall data set. Look at the long-range history of your data. In the case of economic bubbles, you will note a similar pattern, the “this time is different” syndrome, only to eventually find out this time was not different. More often than not, an outlier can be explained as an anomaly within a current working model. If that is not the case, you’ll need to build a new model that explains the data in a manner that predicts the outlier but also replicates the accurate predictions of the previous model. It’s a tall order, but that is how science progresses.

*Image on top of post is record Antarctic sea ice from 2014.  This is an outlier, as ice levels around the globe recede while temperatures warm.  Credit:  NASA’s Scientific Visualization Studio/Cindy Starr.

El Nino – The Long and Short of It

On Christmas Eve, 1777, the HMS Resolution landed on a small, isolated island 2,160 km south of Hawaii.  The captain of the HMS Resolution, James Cook, named the island for the day it was discovered.  Over a century earlier, 9,000 km across the Pacific, fishermen off the coast of present-day Peru had noticed a periodic warming of Pacific waters that coincided with a drop in the local fish population.  This event, which occurs every few years around Christmas Day, was named El Nino, the Christ child.  Today, the coral reefs off Christmas Island (aka Kiritimati Island) are a focal point for researchers who stand ready to measure potential coral bleaching as a consequence of this year’s El Nino.  This event influences weather far from Christmas Island and, as we’ll see, across the globe.

El Nino is a natural phenomenon that occurs periodically; it is not a recent climate development.  In fact, climatologists have studied clam fossils to trace El Nino events going back 10,000 years.  El Nino is caused by an oscillation of high and low pressure zones and ocean temperatures in the eastern equatorial Pacific.  For this reason, scientists formally call it the El Nino-Southern Oscillation, or ENSO.  While El Nino is associated with warmer ocean waters, its counterpart, La Nina (“little girl” in Spanish), is marked by colder than normal waters in the same region.  To understand one, you need to understand the other.  The image below shows the Pacific temperature variation between the two events.

Credit: NOAA

You’ll note that the temperature variation is not very great, just a few degrees Celsius.  However, given the size of the area over which the temperature anomaly occurs, this can have a dramatic effect on atmospheric circulation in the region.  Warm ocean waters transport heat into the atmosphere above.  Warm air rises, creating a low pressure area that tends to be unstable and produces precipitation.  Cooler ocean waters stabilize the air above; cold air tends to sink, resulting in a region of high pressure marked by low precipitation.  So, what causes this oscillation?  Let’s take a look at the global wind map below:

Credit: NASA/Caltech

For those of us who live in the northern mid-latitudes, such as the United States or Europe, winds prevail from the west.  However, in the equatorial tropics, where ENSO takes place, the trade winds prevail from the east.  These easterly trade winds push Pacific waters towards the west.  Normally the result is this:

Credit: NOAA

The easterly trade winds cause warmer water to pool up in the Western Pacific near Australia.  Heat from the ocean transfers to the atmosphere in the region, which in turn causes instability in the air.  As the air rises, it cools, releasing moisture in the form of rain.  This pattern causes the wet season in Australia from November to March.  Warm water also expands, which means sea level is higher in the Western Pacific than in the Eastern Pacific.  Meanwhile, water in the west sinks, cools, and is carried by the ocean circulation back to the coast of South America.  The upwelling of this cold water brings with it nutrients for fish to feed on.  A La Nina event is an amplification of these conditions.

An El Nino event is a flip-flop of this pattern.  During El Nino, the easterly trade winds die down, allowing warm water to migrate back towards the coast of Peru rather than pooling near Australia, resulting in the scenario below:

Note the differential in sea level is exaggerated.  Typically there is a 0.5 meter difference in sea level between the western and eastern ends of the Pacific.  Credit: NOAA

Throughout El Nino, the upwelling of colder, nutrient-rich water off the Peruvian coast weakens.  As their food supply drops, fish in the region migrate away.  This means lower catches for fishers, which is what was observed in the 1600s.  In the west, rain moves away from Australia, bringing drought conditions.  In the east, the warmer than normal waters produce flooding on the west coast of South America.  Globally, an El Nino can produce a spike in global temperatures.  Here is why:

The 1998 El Nino was among the strongest on record. Credit: NOAA

During El Nino years, warm Pacific waters are dispersed over a larger surface area.  Have you ever seen rain water pool up on a race track?  To dry the track faster, crews come out with blowers to disperse the water over a wider area, making it easier for the water to evaporate into the air.  El Nino essentially does the same thing: by dispersing warm water over a wider area, it allows ocean heat to transfer into the atmosphere at a greater rate.  The 1997-98 El Nino was the strongest on record, and that was reflected as a surge in global temperatures in 1998:

Credit: NOAA

The 1982-83 El Nino also brought about a rise in global temperatures.  Thus, ENSO introduces short-term noise into global temperature data.  Other climate factors do this as well.  For example, powerful volcanic eruptions can eject sulfur dioxide into the stratosphere, resulting in global cooling for a period of 2-3 years.  You can see this above, as the Mt. Pinatubo eruption in 1991 caused a brief drop in temperatures in the early 1990s.  To discern between short-term and long-term effects, trend lines are used, shown in blue above.

Note that, contrary to what you may hear in some quarters, global temperatures have continued to trend upwards since 1998.  Trend lines are typically regression calculations, which minimize the sum of the squared distances between the individual data points and the trend line.  Some charts draw a “trend line” starting at the 1998 spike and ending at a lower data point later on to “prove” temperatures have not risen since 1998.

That is a statistical no-no!

Doing so would flunk you out of introductory statistics, as it does not meet the requirements of a regression fit.  This is not the only area where the popular media confuse short-term and long-term trends.  You see it when economic data such as the monthly jobs report come out: not only do monthly figures contain a lot of short-term noise, but they are typically revised later.  Stock and commodity prices are also noisy, and media reports often hyperventilate over insignificant daily moves.  Students have complained to me that statistics is boring, but it can be one of the most useful, practical courses one can take.
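The difference between a proper regression and the 1998 cherry-pick is easy to demonstrate with synthetic data.  The sketch below fits an ordinary least-squares line to an invented warming series with an El Nino-style spike, then draws the misleading two-point line; all numbers are illustrative assumptions:

# Least-squares trend vs. a cherry-picked two-point "trend." Synthetic data.
import random

random.seed(0)
years = list(range(1980, 2016))
temps = [0.02 * (y - 1980) + random.gauss(0, 0.1) for y in years]
temps[18] += 0.4   # an El Nino-style spike in 1998

n = len(years)
mean_x = sum(years) / n
mean_y = sum(temps) / n
num = sum((x - mean_x) * (t - mean_y) for x, t in zip(years, temps))
den = sum((x - mean_x) ** 2 for x in years)
print(f"least-squares slope: {num / den:+.3f} per year")   # clearly upward

# The cherry-pick: a line from the 1998 spike to the final data point.
cherry = (temps[-1] - temps[18]) / (years[-1] - years[18])
print(f"spike-to-endpoint slope: {cherry:+.3f} per year")  # noise, not a trend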

Moving off my soapbox and back to El Nino: what else can we expect to encounter during an El Nino year?  The oscillation of high and low pressure zones in the Pacific can have a dramatic effect on the jet stream and storm tracks across the Americas.

Credit: NOAA

During the La Nina phase, high pressure in the North Pacific deflects the jet stream into Alaska, where it sweeps down across Canada into the northern United States, bringing polar air along with it.  During El Nino, low pressure in the Pacific allows the jet stream to drop southward, and the heat from El Nino strengthens it.  This creates a strong storm track that can bring significant rainfall and flooding across California and the South.  Polar air can be trapped north of the U.S., resulting in warm winters in the Midwest and Northeast.  The 1982 El Nino brought the warmest Christmas in Buffalo history, clocking in at 64 degrees.  The 1997 El Nino brought a year’s worth of rain, some 13 inches, to Los Angeles in February of 1998.  Early detection of El Nino can help in preparing for flooding in areas such as Southern California.

In South America, closer to El Nino itself, the effects are amplified.  During the 1997-98 El Nino, some areas of Peru received ten times their normal rainfall.  As a consequence, landslides claimed the lives of over 200 people in Peru and Ecuador.  On the other side of the Pacific, El Nino brings abnormally dry conditions.  During the same 1997-98 El Nino, drought conditions combined with slash-and-burn agriculture sparked wildfires in Indonesia that consumed 7 million hectares (17.3 million acres).  In 2015, wildfires in Indonesia have again heralded the onset of El Nino.  These fires are rich in carbon dioxide emissions, and it has been estimated that Indonesia has so far released as much carbon dioxide as Japan emits in an entire year.

ENSO oscillates between El Nino and La Nina phases. The strength of the 1982, 1997, and 2015 El Nino events is quite noticeable in this graph. Also note that the oscillation has tilted towards the El Nino phase since 1980. Credit: NOAA

How does the 2015 El Nino compare with the great El Nino of 1997?  The early returns suggest a comparable event.  In fact, the 2015 El Nino has already eclipsed the one-week record for Pacific warming set in 1997.  Climatologists use a three-month baseline to determine El Nino strength, and if 2015 does not match 1997 on that baseline, it will not lag very far behind.  That being the case, we can expect a general rerun of the events of 1997-98.  As the video below explains, there are always variations in each El Nino event, but we can make probabilistic predictions about what this winter holds.

Lying in the crosshairs of El Nino are the coral reefs off Christmas Island.  The surge in water temperature can trigger a bleaching of the coral reefs; during the 1997-98 El Nino, some 20% of the world’s corals were lost to bleaching.  Warm water causes corals to eject an algae called zooxanthellae, which lives with the coral and produces nutrients for it to consume.  The loss of these nutrients triggers the bleaching of the vibrant colors the corals are famous for.

Example of coral bleaching. Credit: NOAA

As the Pacific waters have reached 31 C (88 F), scientists stand ready not only to record the effects of this year’s El Nino but to use coral fossils to reconstruct El Nino’s history and project its future.  While coral bleaching has historically occurred on a periodic basis, the corals have typically been able to recover.  However, with ocean temperatures trending upward as a result of global warming, recovery from future bleaching events may be inhibited.  El Nino is part of a naturally occurring cycle; nonetheless, it will provide us with important information on what to expect from a long-term, non-cyclical warming globe.

As we proceed into 2016, El Nino will diminish and the ENSO cycle will eventually trend back towards La Nina.  Global temperatures, just as happened after the 1997-98 El Nino concluded, may subside a bit.  It will be important not to be fooled by the short-term noise: that drop in temperature will not represent a long-term shift, only a return to the trend line.  Afterwards, we should expect global temperatures to resume their rise.  Heat is energy, and as the global base temperature continues to climb, future El Nino events can be anticipated to be more powerful.  We will need to incorporate that, along with many other implications of climate change, into our long-term policy planning.

*Image on top of post is a comparison of sea height anomalies between the 1997 and 2015 El Nino events.  As water warms, it expands, causing sea levels to rise.  Credit:  NASA.