Leonard Bernstein and The Planets

From 1958 to 1972, Leonard Bernstein presented a series of educational programs on the nature of music dubbed Young People’s Concerts. The very last one, televised on March 26, 1972, was the very first one that I watched: a presentation of Gustav Holst’s The Planets. Most of the series is now available on YouTube, and among the programs are What is Orchestration, What is Classical Music, and What is a Melody? While I can appreciate music, the process of creating music always seemed a bit of a mystery to me. Bernstein is excellent at demystifying that process for this no longer quite so young person. It’s not an exaggeration to say Bernstein did for music what Carl Sagan or Neil deGrasse Tyson did for astronomy.

In 1967, Bernstein hosted a special called Inside Pop – The Rock Revolution. While he called rock 95% trash, Bernstein said the new music and its message should be listened to and taken seriously. By 1972, Bernstein seemed a bit more cynical, at least about the embrace of astrology over science that started during that era. Holst’s The Planets was based on astrology, and Bernstein took great pains to distinguish that from science. Any astronomy teacher who has received a paper titled Astrology 101 can relate. Nonetheless, we cannot control the beliefs a student brings into a class, but we can use those beliefs to bridge the gap to a scientific understanding of the universe.

Bernstein starts things off with a rousing version of Mars – Bringer of War. Mars was the Roman god of war, and the planet was given that designation as a result of its blood-red appearance. The reddish hue of the Martian surface can be seen with the naked eye when Mars approaches opposition. This occurs when Mars and the Sun are on opposite sides of Earth, and it is when Mars is closest to us. Opposition of Mars happens every 26 months; the next is July 27, 2018. These events also provide the optimal launch window to the red planet. Oxidation of iron in the Martian dust creates the red color, oxidation being a fancy word for rusting. The same process gives parts of Oklahoma their red soil.

Oxidation creates the red surface of Mars. Credit: NASA.

The most famous association of Mars with war was H.G. Wells’ War of the Worlds. We now know that intelligent life does not exist on Mars. As late as the 1950’s, it was still thought that vegetation could survive on Mars; the Mariner missions of the 1960’s disproved that. However, the space age has shown that oceans once existed on Mars and that the subsurface still holds quite a bit of water. It is possible for microbial life to thrive in the Martian subsurface. Perhaps ironically, it was Earth’s microbes that did in the invading Martians in War of the Worlds. That is food for thought at NASA’s Planetary Protection office, which is charged with preventing cross-contamination between Earth and Mars.

Bernstein concludes that Mars – Bringer of War is an ugly piece of music, and that is appropriate, as what is uglier than war? Unspoken was the Vietnam War still casting an ugly shadow over America in 1972. Five years later, John Williams would use this piece as an inspiration for his Star Wars score. From politics to pop culture, perhaps an indication that America was in the beginning stages of healing during that period.

Next up is Venus – Bringer of Peace. Bernstein notes Venus was actually the goddess of love, but astrologers use Venus to symbolize peace. Venus is the brightest of all the planets from our vantage point on Earth, yet it is anything but peaceful. The atmosphere is 96% carbon dioxide, and a runaway greenhouse effect heats the surface enough to melt lead. The atmosphere is so thick that surface pressure is 90 times greater than Earth’s. NASA has never tried to land on Venus, but the Soviet Venera program made 10 landings between 1970 and 1981. The landers lasted from 23 minutes to two hours before being overwhelmed by the harsh conditions.

A false color UV image that allows differentiation between different aspects of Venus’ atmosphere. Credit: JAXA / ISAS / DARTS / Damia Bouic

The brightness of Venus that seems so peaceful to us on Earth is caused by the reflection of light from sulfuric acid clouds.  Some 70% of sunlight that hits Venus is reflected back into space. This compares to 30% for Earth. As Venus occupies an orbit inside Earth’s, it does not appear to stray too far from the Sun, becoming visible just after sunset or just before sunrise.  This is even more so for Mercury.

Bernstein introduces Mercury – The Winged Messenger by noting how Holst employs double keys and rhythms, as Mercury is perceived as a double-dealing, tricky sort. It only takes Mercury 88 days to orbit the Sun, and as it oscillates from one side of the Sun to the other, it changes from morning object to evening object, hidden by the Sun in between, in less than two months. Mercury has some other tricks up its sleeve, such as ice in permanently shadowed polar craters. Mercury lacks an atmosphere, so heat is not distributed from sunlit to dark areas, allowing ice to form on the closest planet to the Sun.

Yellow indicates shadowed areas of polar regions on Mercury where water ice is present. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington/National Astronomy and Ionosphere Center, Arecibo Observatory

Then comes Jupiter – Bringer of Jollity, the most famous piece in the suite. While I do not think of Jupiter as jolly, it can be described as boisterous. Jupiter is a source of radio emissions that can be detected with ham radios on Earth. Jupiter’s intense magnetic field accelerates charged particles, creating the radio emissions. Jupiter’s moon Io is flexed by the giant planet’s gravity, making it the most volcanic body in the Solar System, so much so that its surface resembles a pizza. As Io ejects this material into space, it becomes ionized and is fed into Jupiter’s magnetic field, providing a source for the radio emissions.

The volcano world of Io. Credit: NASA/JPL/University of Arizona

Due to time constraints, Bernstein elected to skip the pieces on Saturn and Neptune, which he described as slow and ponderous. As this program was geared for children, I suspect even then these pieces would have had trouble keeping the attention of the audience. After Uranus – The Magician (no jokes made on the pronunciation; this was Lincoln Center), Bernstein wrapped things up with an improvised piece called Pluto – The Unpredictable. Holst composed The Planets before Pluto was discovered. And Pluto did turn out to be unpredictable, so much so that it is no longer considered a planet. Rather, it was the first Kuiper Belt object discovered; it was not until the 1990’s that others would be detected. So, no need to fret about missing Pluto in this musical set.

I don’t frown upon someone who has an emotional reaction when gazing at the night sky. We’re not Vulcans. The planets and stars inspire more than just science. They can inspire music and art, among other things including, shudder, astrology. As far as the latter goes, one hopes to transition a student from a belief in superstition to science, but be aware, that usually does not occur overnight. That aside, Holst’s The Planets still presents a nifty opportunity for an interdisciplinary take on the Solar System, as it did for me on that sunny, cold early Spring Sunday afternoon 46 years ago.

*Image atop post – Leonard Bernstein leads the New York Philharmonic in its rendition of  Jupiter – Bringer of Jollity.

Demo Lesson: Simple Circuits, Current, Voltage, and Resistance

As part of an interview process, I was recently asked to provide a demo lesson in physics. The class had just started its unit on electronics, so I decided to teach with an online interactive simple circuit to give a conceptual basis for current, voltage, and resistance. In my experience, these topics are often presented in abstract form right away, with students drawing circuit diagrams and cranking out solutions to equations without getting an intuitive sense of what these concepts are. This is exacerbated by the fact that while we can observe the end result of an electrical system, we cannot see the inner workings of one.

There are two analogies that can be used for an electrical circuit. One is a water system, the other is a roller coaster. I’ll go over both here. For the demo lesson, I used the roller coaster. The school was in New York City, and my thinking was the students would, for the most part, have experience riding a roller coaster. There are the Great Falls in Paterson, New Jersey, but most people I talked to in the region were not aware of them; I became aware of the falls while watching the movie Paterson. Had the lesson been in Buffalo where I live, I would have used the water system example, as Niagara Falls is such a prominent feature in the local geography.

Current defines the flow of electricity in a circuit in the direction of positive charge. It’s actually the flow of loose negatively charged electrons that creates a current, but this convention was defined before the nature of the atom was unveiled in the 20th Century. Electrical charge is conserved, that is, it cannot be created or destroyed. One unit of charge is a Coulomb (C), and a flow of 1 C/s is referred to as an amp. During my high school years, students would brag about how many amps their stereos had, which delighted our parents no end.

If a stream has a flow of 10 gallons per second, we could call that its current. If you are watching a roller coaster and observe 10 cars pass a point in one second, then 10 cars per second is its current. The same holds true for a circuit: a flow of 10 units of charge per second in a wire is 10 C/s, or 10 amps. A circuit has to complete a loop for current to flow. A switch in the on position completes a loop and allows a current to flow through the system. The off position breaks the loop. However, it takes more than a switch to create a current, and that’s where voltage comes in.

If an object is on the ground, it has zero potential energy.  If we lift the object above the ground it gains potential energy.  That potential energy is converted to kinetic energy if we release the object.  Go back to the roller coaster analogy.  How much potential energy do the cars have while level on the ground?  Zero. The coaster adds potential energy by lifting the cars up on a hill.  Coney Island’s Cyclone is 85 feet tall whereas modern coasters can be 200-300 feet tall.  The potential energy is converted to kinetic energy as you reach the top and begin to drop.  Batteries do the same by adding potential to a circuit.  This potential is measured in volts.
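The roller coaster analogy can be checked with a quick calculation. Below is a minimal Python sketch of the energy conversion, assuming a frictionless drop (the mass cancels out of mgh = ½mv², so it never appears); the heights are the ones mentioned above.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def drop_speed(height_m):
    """Speed at the bottom of a frictionless drop: mgh = (1/2)mv^2 -> v = sqrt(2gh)."""
    return math.sqrt(2 * g * height_m)

cyclone = drop_speed(85 * 0.3048)    # Coney Island Cyclone, 85 ft converted to meters
modern = drop_speed(300 * 0.3048)    # a 300-ft modern coaster

print(f"Cyclone: {cyclone:.1f} m/s ({cyclone * 2.237:.0f} mph)")        # ~22.5 m/s, ~50 mph
print(f"300-ft coaster: {modern:.1f} m/s ({modern * 2.237:.0f} mph)")   # ~42 m/s, ~95 mph
```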

The Comet from a 1950’s postcard. The first hill at 96 feet provided the potential energy for the ride. As the height decreases in the loop, potential energy decreases – same as voltage decreases in a circuit loop.

In the water analogy, think of a canal that is level. Current does not flow, and in fact, this causes canals to be stagnant and a health hazard. The canals of Amsterdam are flushed each morning for this reason. It is also why the Buffalo segment of the Erie Canal was filled in during the 1920’s. It is this segment that I-190 was built upon. What happens when you add a height difference? Think of Niagara Falls. It adds a current and potential energy, which is used to produce hydroelectric power. Water in the amount of 748,000 gallons per second drops 180 feet into 25 turbines, producing 2.6 gigawatts of power.
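A back-of-the-envelope check in Python, using the textbook hydraulic power formula P = ρgQh and ignoring turbine details, lands in the gigawatt range for the quoted flow; actual output depends on how much water is diverted and on equipment efficiency.

```python
rho = 1000.0                  # density of water, kg/m^3
g = 9.81                      # gravitational acceleration, m/s^2
Q = 748_000 * 3.785 / 1000    # 748,000 gal/s converted to m^3/s (~2,830)
h = 180 * 0.3048              # 180-ft drop converted to meters (~54.9)

power = rho * g * Q * h       # ideal hydraulic power, watts
print(f"{power / 1e9:.1f} GW")  # ~1.5 GW from these figures alone; rated capacity is higher
```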

Robert Moses Hydroelectric Plant. Water is diverted before the Falls and its potential energy is converted to kinetic energy and then converted to electric power. Credit: Gregory Pijanowski

The lines from a power plant can carry hundreds of thousands of volts. Transformers drop that to 120 volts before it enters a household. Voltage can also be thought of as pressure. Think of a pressure washer: higher pressure can deliver water farther. Likewise, higher voltage can send a spark farther. So while voltage and current are proportional to each other, they are not the same thing. You need voltage to start a current.

The final piece of the puzzle is resistance.  This is akin to friction on the roller coaster.  Without friction, a roller coaster would never stop but would travel in a continuous loop.  Friction between the cars and rails converts kinetic energy into heat and is dissipated into the surrounding air.  Hence, an engine has to push the coaster up the hill again to start another trip around the loop.  Resistance in a circuit does the same.  Energy in the circuit is converted by resistance in the wire and dissipated as heat.  This causes voltage to drop as current travels in the loop.  The battery serves the same purpose as the hill in the coaster. It adds voltage or potential to restart the current around the loop.

Superconductivity represents a state of zero resistance. This requires a very cold temperature. During the 1980’s, a ceramic material was discovered that raised the highest known temperature of superconductivity from 30 K to 92 K. The media at the time presented this as hope of building practical superconductive systems that would bring high efficiencies to electric generation. Since then, progress has been slow on this front, at least in terms of some expectations after that discovery. You can think of a superconductive circuit as a roller coaster that would not require energy to start each successive loop after the initial potential was added.

The PhET interactive allows the class to build their own circuits and analyze the relationship between current, voltage, and resistance. For the sake of the demo lesson, I used the Physics Classroom interactive, as it is a bit easier to get up and running given the limitations of a demo lesson. Over the long haul, the PhET interactive is more robust. Both will allow a student to adjust voltage and resistance to see how that affects the current in the circuit.

The key points for the class to learn are:

A circuit must be a closed loop from one terminal of the battery to the other for a current to flow.  A switch in the off position breaks the loop while the on position closes the loop. A car ignition key serves the same function.

A potential or voltage must be applied to the circuit to get the current flowing.  Otherwise, it would be like trying to ride a flat roller coaster.

Voltage or potential will drop as the current travels through the loop.  This is analogous to a roller coaster lowering in elevation (and potential energy) as it completes the ride, eventually to be grounded.

An increase in voltage will increase current, and an increase in resistance will decrease current. This is the basis for Ohm’s Law, or I = V/R, as the sketch below illustrates.
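A few lines of Python make the proportions in Ohm’s Law concrete. The battery and resistor values here are hypothetical, chosen only to show the relationships:

```python
def current_amps(voltage, resistance_ohms):
    """Ohm's law: I = V / R."""
    return voltage / resistance_ohms

print(current_amps(9, 3))    # a 9 V battery across 3 ohms drives 3.0 A
print(current_amps(18, 3))   # doubling the voltage doubles the current: 6.0 A
print(current_amps(9, 6))    # doubling the resistance halves it: 1.5 A
```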

Of all the concepts here, voltage or potential tends to be the most difficult. The roller coaster example is just one of several that can be used. I think it best for a teacher to be flexible and use whatever example is most effective for each student. Another example is the battery as a water pump. The pump applies pressure to the circuit and thus starts the current. A slingshot could be used as well. As a battery forces positive charge towards the positive terminal, the two like charges want to repel each other. Once the positive charge is released into the wire, it is as if the positive terminal slingshots that charge, inducing a current.

The key to the lesson is to enable students to visualize and obtain an intuitive grasp of the concepts of current, voltage, and resistance. Once that is accomplished, the class can move on to real circuits and will have a better understanding of what a voltmeter or ammeter is telling them, as well as what the variables in Ohm’s law signify.

Science in Extreme Conditions

“If it isn’t true in the extreme, it cannot be true in the mean.”

That, at least, was an argument I heard in an undergrad philosophy class. As we’ll learn, what happens in extreme environments is quite different from what happens within the narrow range of conditions the human body evolved in. The conditions we live in are not typical of the universe, which is mostly hostile to life. And just like the physical sciences, the social sciences can present some extreme conditions that provide counter-intuitive results.

I’ll start with absolute zero. At this temperature, all atomic motion ceases. On the Kelvin scale it is simply 0; on the more familiar scales it is -459.67 F or -273.15 C. You can’t actually reach absolute zero. Heat transfers from a warmer to a cooler object, so ambient heat will always try to warm an object that cold. However, you can get awfully close to absolute zero. In fact, we’ve gotten as close as a billionth of a degree above it. And this is close enough to see matter behave in strange ways.

At these temperatures, some fluids become superfluids.  That is, they have zero viscosity.    Liquid helium becomes a superfluid as it is cooled towards absolute zero and having zero viscosity means no frictional effects inside the fluid.  If you stirred a cup of superfluid liquid helium and let it sit for a million years, it would continue to stir throughout that time.  The complete lack of viscosity means a superfluid can flow through microscopic cracks in a glass (video below).  Good thing coffee isn’t a superfluid.

Is there an opposite of absolute zero, a maximum temperature? You’d have to take all the mass and energy (really, one and the same, remember Einstein’s mass-energy equivalence E = mc²) and compress it to the smallest volume possible. These were the conditions found just after the Big Bang formed the universe. The smallest distance we can model is the Planck length, equal to 1.62 × 10⁻³⁵ m. How small is this? A hydrogen atom is about 10 trillion trillion Planck lengths across. At any length smaller than this, general relativity, which describes gravity, breaks down and we are unable to model the universe.

What was the universe like when it was only a Planck length in radius?

For starters, it was very hot at 10³² K, and very young at 10⁻⁴³ seconds. This unit of time is referred to as the Planck time and is how long a photon of light takes to traverse a Planck length. At this point in the young universe, the four fundamental forces of nature, gravity, electromagnetism, and the weak and strong nuclear forces, were unified into a single force. By the time the universe was 10⁻¹⁰ seconds old, all four forces had branched apart. It would take another 380,000 years before the universe became cool enough to be transparent so light could travel unabated. Needless to say, the early universe was very different from the one we live in today.
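The Planck scales quoted above are not arbitrary; they fall out of three fundamental constants. A short Python sketch, assuming the standard definitions l_P = √(ħG/c³) and t_P = l_P/c:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
planck_time = planck_length / c

print(f"Planck length: {planck_length:.2e} m")   # ~1.62e-35 m
print(f"Planck time:   {planck_time:.2e} s")     # ~5.4e-44 s, order 10^-43
```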

How will the universe look at the opposite end of the time spectrum?

One possibility is a Big Rip.  Here, the universe expands to the point where even atomic particles, and time itself, are shredded apart.  In the current epoch, the universe is expanding, but the fundamental forces of nature are strong enough to hold atoms, planets, stars, and galaxies together.  Life obviously could not survive a Big Rip scenario unless, as Michio Kaku has postulated, we can find a way to migrate to another universe.  That would be many, many billions of years in the future and humanity would need a way to migrate to another star system before then.  It is not known with complete certainty how the universe will end.  For starters, a greater understanding of dark energy, the mysterious force that is accelerating the expansion of the universe, is required to ascertain that.

Other extremes that we do not experience directly, but whose effects we know, include relativity, where time slows as you approach the speed of light or venture near a large gravity well such as a black hole. In the quantum world, particles can pop in and out of existence unlike anything we experience in our daily lives. The key point is that as we approach extreme boundaries, we simply cannot extrapolate from what occurs away from those boundaries. Often what we find at the extreme ends of the spectrum is counter-intuitive.

One might ask if this is the case beyond the hard physical sciences.  Recent experience indicates that at least in economics, the answer is yes.

Hyperinflation rendered German marks so worthless they were used for wallpaper. Credit: Georg Pahl/German Federal Archives.

Under most scenarios, a growth in the currency base greater than the demand for currency will result in inflation. A massive increase in the currency base will end with hyperinflation. The classic case was post-World War I Germany. In the early 1920’s, to make payments on war reparations, Germany cranked up the printing press. In 1923, this was combined with a general strike, so you had a simultaneous increase in currency and decrease in available goods to buy. At one point, a dollar was worth 4.2 trillion marks. After the 2008 financial crisis, the Federal Reserve embarked on quantitative easing, which greatly expanded the United States currency base. Many predicted this expansion would result in inflation. It didn’t happen.

What gives?

In the aftermath of a banking crisis, demand for cash increases. If that demand is not met, spending falls, unemployment increases, and bank loan defaults increase, leading to bank failures and a further fall in the money supply. This was the feedback loop in play during 1932, which was a very deflationary environment. The expansion of the currency base simply offsets deflationary pressure rather than starting inflation. The extreme limit being faced here is the zero percent Fed Funds rate, which makes bonds and cash pretty much interchangeable.

Unlike the physical sciences, ideology can muddy the waters in economic thinking. However, the evidence is quite clear on this. The same phenomenon was observed in Sweden in the mid-1990’s and in Japan over the past decade. It also happened in the United States during the late 1930’s. In that case, Europeans shipped gold holdings to America in anticipation of war. During that era, central banks sterilized imported gold by selling securities to stabilize the currency base. Facing the deflation of the Great Depression, the U.S. Treasury opted not to sterilize the flood of gold from Europe. The result was that the currency base increased 366% but inflation only rose 27% (an average of 3% annually) from 1937-45.

The lesson here is, if you find yourself examining the most extreme conditions or up against a boundary, whether it is the speed of light, the infinite gravity of a black hole, the coldest temperature, or the lowest interest rate possible, it is not sufficient to extrapolate the mean into the extreme. You have to look into how these extreme environments alter the manner in which systems operate. In many cases, your intuition from living in conditions far from the extreme can lead you astray. However, if you let observations, rather than preconceptions, guide you, some interesting discoveries may be in store.

*Image atop post is the formation of a Bose-Einstein condensate as temperature approaches absolute zero.  Predicted by Satyendra Nath Bose and Albert Einstein in 1924, as temperatures approach absolute zero, many individual atoms begin to act as one giant atom.  Per the uncertainty principle, as an atom’s motion is specified as close to zero, our ability to specify a location of that atom is lost.  The atoms are smeared into similar probability waves that share identical quantum states (right).   Credit:  NASA/JPL-Caltech.

Fate of the Sun

About 4.6 billion years ago, a large molecular cloud gave birth to the Sun. The Sun contains 99.8% of the Solar System’s mass, the rest going to the planets, moons, comets, and asteroids. The Sun is now halfway through its expected lifetime. During the course of a human lifetime, the Sun does not change much. It rises and sets at the same times each year, its energy output does not vary much, and the average human will see about seven solar cycles. Over the course of billions of years, though, the Sun does and will continue to evolve. If the human race survives that long, that will have implications for its future.

The majority of the Sun’s life is spent on what astronomers call the main sequence. During this time, the Sun fuses hydrogen into helium; a fraction of the mass is converted into energy, providing the sustenance for life on Earth. The full reaction chain takes six hydrogen atoms, fuses them into one helium atom, and returns two leftover hydrogen atoms, so the net result converts four hydrogens into one helium. The process converts 0.71% of the original four hydrogen atoms’ mass into energy. Each second, the Sun transforms 4 million tons of mass into energy. If the Sun were the size of Earth, this would be the equivalent of converting 12 tons of mass into energy each second. You might worry that this would burn up the Sun in short order, but the Sun is very large; if you divide its mass by 4 million tons per second, it would use up its mass in about 5 × 10²⁰ seconds, or 1.6 × 10¹³ years. The Sun will not exist that long as there are other factors in play.
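That burn rate can be recovered from the Sun’s measured luminosity and E = mc². A minimal Python check, using standard values for the solar luminosity and mass:

```python
L_sun = 3.828e26    # solar luminosity, watts
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
YEAR = 3.156e7      # seconds per year

burn_rate = L_sun / c**2          # E = mc^2 rearranged: mass converted per second
naive_life = M_sun / burn_rate    # if the Sun could convert ALL of its mass

print(f"{burn_rate / 1e9:.1f} million tonnes/s")       # ~4.3
print(f"{naive_life / YEAR:.1e} years at that rate")    # ~1.5e13
```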

Fusion in the Sun only occurs in the core where temperatures reach 15 million degrees. Credit: NASA.

There are two major forces acting within the Sun. One is the force of gravity, as the Sun’s mass compresses its core. This compression heats the core to a temperature of 15 million K. A temperature of 12 million K is required to start nuclear fusion. Here you see the challenge of using fusion as an energy source on Earth. Hydrogen bombs use fusion to explode, but require a fission atomic bomb as a trigger to deliver the heat needed to start the fusion process. Controlled fusion would make for a great energy source on Earth, but it is problematic to create a temperature of 12 million K. Current research is looking into high energy lasers to heat hydrogen enough to commence controlled fusion.

Once fusion starts in the Sun’s core, this creates the second force in play, an outward pressure generated by heat.  This outward force perfectly balances the inward force of gravity preventing the Sun from collapsing upon itself.  This balancing act, referred to as hydrostatic equilibrium, is one of nature’s great regulators.  It is this balancing act that regulates short-term solar output so that it varies only a fraction of a percent.  This modulation of solar output provides a stable environment on Earth required for life.  However, over the course of a few billion years, it’s a different story.

As the Sun’s core converts hydrogen into helium, it becomes denser and hotter. This in turn gradually makes the Sun more luminous. The Sun is 30% more luminous today than it was 4 billion years ago. In about 1 billion years, the Sun will become hot enough to boil off the oceans on Earth. If humanity can survive its foibles over that time, it will need to move off the Earth to exist. Colonizing Mars within that time frame is certainly doable. What may not be doable is interstellar colonization by the time the Sun ends its main sequence stage. Just before that occurs, another event will impact the Sun.

In about 4 billion years, the Milky Way will collide with its neighbor, the Andromeda galaxy.  While galaxies frequently collide, stars do not.  If the Sun was the size of a grain of sand, the nearest star would be another grain of sand over four miles away.  What could happen is the Sun may be ejected from the Milky Way.  The result of this collision is that the two spiral galaxies will combine to form one giant elliptical galaxy in a process that will cover 2 billion years (video below).  It’s impossible to model whether or not the Sun will be part of this new galaxy, but either way, the Sun will become a red giant afterwards.

A star becomes a red giant when it runs out of hydrogen in its core. The rate of fusion slows down, causing gravity to compress the core. As a result, the shell of hydrogen outside the now helium core ignites. The hotter core creates an outward pressure, expanding the star greatly. When the Sun turns into a red giant in 5 billion years, Mercury, Venus, and possibly Earth will be incinerated. A red giant’s surface is much cooler than the Sun’s is today, but it is much more luminous. That may sound counter-intuitive, but think of it this way: one 100-watt light bulb is brighter than one 60-watt light bulb, but 100 60-watt light bulbs are brighter than one 100-watt light bulb. Besides temperature, stellar radius also factors into a star’s luminosity. The Sun still has a few more steps to complete in its life cycle.
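The light bulb analogy can be made quantitative with the Stefan-Boltzmann law, L ∝ R²T⁴. A quick Python sketch with illustrative red giant numbers (roughly 100 solar radii at 3,500 K; the exact values for the future Sun are not settled):

```python
T_sun = 5778      # the Sun's surface temperature today, K
T_giant = 3500    # assumed red giant surface temperature, K
R_ratio = 100     # assumed red giant radius, in solar radii

# Stefan-Boltzmann: luminosity scales as radius squared times temperature to the fourth
L_ratio = R_ratio**2 * (T_giant / T_sun)**4
print(f"~{L_ratio:.0f} times the Sun's luminosity")   # ~1300x, despite a cooler surface
```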

The red giant phase of the Sun will end in a helium flash.  This occurs when the core is compressed to a degenerate state where electrons are packed to the point where all possible states are occupied.  The compression heats the core to the required 100 million K to commence helium fusion into carbon.  This in turn breaks down the degenerate state of the core and the Sun will become a yellow giant.  The Sun is not large enough to fuse carbon.

However, the intense heat of helium fusion will generate even more outward pressure and expand the Sun’s radius even further, so that its outer shell becomes transparent and cool. So cool that elements such as carbon and silicon solidify into grains and are expelled by an intense solar wind. At this stage, the Sun will be a Mira variable for 10 million years. After this, the Sun will enter the final stages of its life as a white dwarf surrounded by a planetary nebula.

A white dwarf is the exposed core of a star. Composed of carbon and oxygen, it is not massive enough to fuse atoms. Its heat is akin to a car engine still being warm after it has been turned off. While an engine will cool off in a few hours, it will take trillions of years for a white dwarf to go completely dark. This is longer than the current age of the universe at 13.7 billion years. The planetary nebula’s life is much shorter.

Samples of planetary nebulae. Credit: NASA/HST.

The term planetary nebula is a holdover from the days when these nebulae resembled planets in telescopes. With the Hubble Space Telescope, we now know planetary nebulae can also take the shape of bipolar jets. How the Sun’s nebula will look, we do not know. We do know that the core will no longer be capable of holding on to its outer shell. The planetary nebula will disperse into interstellar space within about 10,000 years.

These gases will hold the remnants of not only the Sun, but the planets and the very atoms that make up our bodies. The Sun itself is a remnant of a prior star. We know this because trace amounts of metals exist in the Sun. These metals are produced by fusion or, if the star is large enough, a supernova explosion. Colliding galaxies compress interstellar gas, igniting star formation. As the Andromeda galaxy collides with the Milky Way, it is very possible that what used to make up the Sun will form a new star, with planets, and possibly plants, then animals, and finally, intelligent beings.

The cycle of life begins anew.

*Image atop post is from NASA’s Solar Dynamics Observatory.

The Little Ice Age & Global Warming

In some quarters of the media, global warming is presented as a natural rebound from an epoch known as the Little Ice Age. Is it possible the rise in global temperatures represents a natural recovery from a prior colder era? The best way to answer that is to understand what the Little Ice Age was and determine if natural forcings alone can explain the recent rise in global temperatures.

The Little Ice Age refers to the period from 1300-1850 when very cold winters and damp, rainy summers were frequent across Northern Europe and North America. That era was preceded by the Medieval Warm Period from 950-1250, featuring generally warmer temperatures across Europe. Before we get into the temperature data, let’s take a look at the physical and cultural evidence for the Little Ice Age.

Retreat of the Forni Glacier: A-1890, B-1941, C-1997, and D-2007. Source: http://geomorphologie.revues.org/7882?lang=en

You can see the retreat of the glaciers in the Alps at the end of the Little Ice Age to the current day. In the Chamonix Valley of the French Alps, advancing glaciers during the Little Ice Age destroyed several villages. In 1645, the Bishop of Geneva performed an exorcism at the base of the glacier to prevent its relentless advance. It didn’t work. Only the end of the Little Ice Age halted the glacier’s advance in the 19th century.

The River Thames Frost Fairs

The River Thames in London froze over 23 times during the Little Ice Age, and five of those times the ice was thick enough for fairs to be held on the river. When the ice stopped shipping on the river, the fairs were held to supplement the incomes of people who relied on river shipping for a living. These events happened in 1684, 1716, 1740, 1789, and 1814. Since then, the river has not frozen solidly enough in the city for such an activity to occur. An image of the final frost fair is below:

The Fair on the Thames, February 4th 1814, by Luke Clenell. Credit: Wiki Commons

The Year Without a Summer

The already cold climate of the era was exacerbated by the eruption of Mt. Tambora on April 10, 1815. If volcanic dust reaches the stratosphere, it can remain there for a period of 2-3 years, cooling global temperatures. The eruption of Mt. Tambora was the most powerful in 500,000 years. Its impact was felt across Europe and North America during the summer of 1816. From June 6-8 of that year, snow fell across New England and as far south as the Catskill Mountains. Accumulations reached 12-18 inches in Vermont. In Switzerland, a group of writers, stuck inside during the cold summer at Lake Geneva, decided to hold a contest to see who could write the most frightening story. One of the authors was Mary Shelley, and her effort from that summer is below:

First Edition cover for Mary Shelley’s Frankenstein. Credit: Wiki Commons

Let’s take a look at what the hard data says about the Little Ice Age. Below is a composite of several temperature reconstructions from the past 1,000 years in the Northern Hemisphere:

Credit: IPCC, 2007.

The range of uncertainty widens as we go back in time, as we are using proxies such as tree rings and ice cores rather than direct temperature measurements. However, even with the wider range of uncertainty, it can be seen that temperatures in the Northern Hemisphere were about 0.5 C cooler than the 1961-90 baseline period. Was the Little Ice Age global in nature, or was it restricted to the Northern Hemisphere?

Recent research indicates that the hemispheres are not historically in sync when it comes to temperature trends.  One key difference is that the Southern Hemisphere is more dominated by oceans than the Northern Hemisphere.  The Southern Hemisphere did not experience warming during the northern Medieval Warm Period.  The Southern Hemisphere did experience overall cooling between 1571 and 1722.  More dramatically, the Southern Hemisphere is in sync with the Northern Hemisphere since the warming trend began in 1850.  This indicates the recent global warming trend is fundamentally different than prior climate changes.

The Census of Bethlehem by Pieter Bruegel the Elder. Painted in 1566, inspired by the harsh winter of 1565. Credit: Wiki Commons.

Keep in mind that we are dealing with global averages. Like a baseball team that hits .270 but may have players hitting anywhere between .230 and .330, certain areas of the globe will be hotter or cooler than the overall average. During the 1600’s, Europe was colder than North America, and the reverse was the case during the 1800’s. At its worst, the regional drops in temperature during the Little Ice Age were on the order of 1-2 C (1.8 to 3.6 F). At first glance, that might not seem like much. We tend to think in terms of day-to-day weather, and there is not much difference between 0 and 2 C (32 and 35 F). But yearly averages are different than daily temperatures.

We’ll take New York City as an example. The hottest year on record is 2012 at 57.3 F. The average annual temperature is 55.1 F. If temperatures were to climb by 3 F, the average year in New York City would become hotter than the hottest year on record. Again, using the baseball example, a player’s game average fluctuates more than a career batting average. You can think of daily weather like a game box score, and climate as a career average. It’s much more difficult to raise a career batting average. In the case of climate, it takes a pretty good run of hotter than normal years to raise the average 2-3 F.
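The arithmetic behind the analogy is easy to check. A tiny Python sketch, assuming a 30-year averaging window (a standard climatological baseline) and the New York City figures above:

```python
years = 30               # assumed climatological averaging window
long_run_mean = 55.1     # NYC average annual temperature, F
record_year = 57.3       # hottest year on record (2012), F

# One record-hot year nudges a 30-year average by only a fraction of a degree.
shift = (record_year - long_run_mean) / years
print(f"{shift:.2f} F")  # ~0.07 F: one hot 'game' barely moves the 'career' average
```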

Although the Northern Hemisphere was emerging from the Little Ice Age in the late 1800’s, cold winters were still frequent. This train was stuck in the snow in 1881, the same winter that served as the inspiration for Laura Ingalls Wilder’s The Long Winter, part of her Little House on the Prairie series. Credit: Minnesota Historical Society.

Let’s go back to the climate history. Global temperatures dipped about 0.5 C over a period of several centuries during the Little Ice Age. Since 1800, global temperatures have risen 1.0 C. This sharp increase gives the temperature graph its hockey stick look. The latest warming trend is more than just a return to the norm from the Little Ice Age. There are two other factors to consider as well. One is the increasing acidity of the oceans; the other is the cooling of the upper atmosphere.

Carbon dioxide reacts with seawater to form carbonic acid. Since 1800, the acidity of the oceans has increased by 30%. A rise in global temperatures alone does not explain this, but an increase in atmospheric carbon dioxide delivered to the oceans via the carbon cycle does. As carbon dioxide in the atmosphere increases, it traps more heat near the surface. This allows less heat to escape into the upper atmosphere. The result is that the lower atmosphere gets warmer and the upper atmosphere gets cooler. The stratosphere has cooled 1 C since 1800. A natural rebound in global temperatures would warm both the lower and upper atmosphere; the observations do not match this. However, increased carbon dioxide in the atmosphere does explain it.

The Little Ice Age looms large historically in that the colder climate played a role in many events leading to modern-day Europe and America. What caused the Little Ice Age? That is still a matter of debate. The Maunder Minimum, a sustained period of low solar activity from 1645 to 1715, is often cited as the culprit. However, solar output does not vary enough with solar activity to cause the entire dip in global temperatures during the Little Ice Age. As the old saying goes, correlation is not causation. That’s where the science gets tough. You need to build a model based on the laws of physics explaining causation. While the cause of the Little Ice Age is still undetermined, the origin of modern global warming is not. To deny that the trend is caused by human carbon emissions, you have to explain not only the warming of the lower atmosphere, but the cooling of the upper atmosphere and the increase in ocean acidity.

To date, no one has accomplished that.

*Image atop post is Hendrick Avercamp’s 1608 painting, Winter Landscape with Ice Skaters.  Credit:  Wiki Commons.

Evidence for the Big Bang

“The evolution of the world can be compared to a display of fireworks that has just ended, some few red wisps, ashes, and smoke. Standing on a cooled cinder, we see the slow fading of the suns, and we try to recall the vanished brilliance of the origin of the worlds.” – Fr. Georges Lemaitre

Since the ancient astronomers, humans have wondered about the origins of the universe. For most of history, mythology filled the void in our knowledge. Then, with Isaac Newton, scientists began to assume the universe was infinite in both time and space. The concept of a universe that had a discrete origin was considered religious, not scientific. During the 20th century, dramatic advancements in both theory and observation provided a definitive explanation of how the universe originated and evolved. Most people I talk to, especially in America, are under the impression the Big Bang is just a theory without any evidence. Nothing could be further from the truth. In fact, every time you drink a glass of water, you are drinking the remnants of the Big Bang.

In 1916, Einstein published his general theory of relativity.  Rather than viewing gravity as an attractive force between bodies of mass, relativity describes gravity as mass bending the fabric of space-time.  Think of a flat trampoline with nothing on it.  If you roll a marble across the trampoline, it moves in a straight path.  Now put a bowling ball on the trampoline, the marble’s path is deflected by the bend in the trampoline.  This is analogous to the Sun bending space-time deflecting the paths of planets.  Once Einstein finished up on relativity, he endeavored to build models of the universe with his new theory.  These models produced one puzzling feature.

The equations describing the universe with relativity produced the term dr/dt. The radius of the universe could expand or contract as time progresses. If you introduced matter into the model, gravity would cause space-time, and the universe itself, to contract. That didn’t seem to reflect reality, and Einstein was still operating with the Newtonian notion of an infinite, unchanging universe. To check the contraction of the universe, Einstein added a cosmological constant to relativity to offset the force of gravity precisely. By doing this, Einstein missed out on one of the great predictions made by his theory.

Balancing forces are not unheard of in nature.  In a star like the Sun, the inward force of gravity is offset by the outward force of gas as it moves from high pressure in the core to lower pressure regions towards the surface.  This is referred to as hydrostatic equilibrium and prevents the Sun from collapsing upon itself via gravity to form a black hole.  Einstein’s cosmological constant served the same purpose by preventing the universe as a whole from collapsing into a black hole via gravity.  However, during the 1920’s, a Catholic priest who was also a mathematician and astrophysicist, would provide a radical new model to approach this problem.

Georges Lemaitre had a knack for being where the action was. As a Belgian artillery officer, Lemaitre witnessed the first German gas attack at Ypres in 1915. Lemaitre was spared as the wind swept the gas towards the French sector. After the war, Lemaitre would both enter the priesthood and receive PhD’s in mathematics and astronomy. The math background provided Lemaitre with the ability to study and work on relativity theory. The astronomy background put Lemaitre in contact with Arthur Eddington and Harlow Shapley, two of the most prominent astronomers of the time. This would give Lemaitre a key edge in understanding both current theory and observational evidence.

It’s hard to imagine, but less than 100 years ago it was thought the Milky Way was the whole of the universe. A new telescope, the 100-inch at Mt. Wilson, would provide the resolving power required to discern stars in other galaxies, previously thought to be spiral clouds within the Milky Way. One type of star, the Cepheid variable, whose period of brightness variation correlates with its luminosity, provided a standard candle to measure galactic distances. It was Edwin Hubble at Mt. Wilson who made this discovery. Besides greatly expanding the size of the known universe, Hubble’s work unveiled another key aspect of space.

Edwin Hubble, Albert Einstein, & Fr. Georges Lemaitre. Credit: American Institute of Physics.

When stars and galaxies recede from Earth, their wavelengths of light are stretched out and move towards the red end of the spectrum. This is akin to the sound of a car moving away from you: sound waves are stretched longer, resulting in a lower pitch. What Hubble’s work revealed was that galaxies were moving away from each other. Hubble was cautious in providing a rationale for this. However, Fr. Lemaitre had the answer. It wasn’t so much that the galaxies were moving away as that space was expanding between the galaxies, as allowed by relativity theory. Lemaitre also analyzed Hubble’s data to determine that the more distant a galaxy was, the greater its velocity moving away from us. Lemaitre published this result in an obscure Belgian journal. Hubble would independently publish the same result a few years later and received credit for what is now known as Hubble’s law. This law equates recessional velocity to a constant (also named after Hubble) times distance.

As space is expanding between each object and in each direction, the recession velocity is greater the more distant an object is. Credit: OpenStax Astronomy/CC 4.0.

It would require more resolving power to determine the final value of the Hubble constant.  In fact, it took Hubble’s namesake, the Hubble Space Telescope to pin down the value which also provides the age of the universe.

Hubble’s original plot of recession velocity vs distance. The more distant a galaxy, the faster it is moving away from us. This is indicative of an expanding universe. Credit: doi: 10.1073/pnas.15.3.168 PNAS March 15, 1929 vol. 15 no. 3 168-173
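Hubble’s law is simple enough to compute directly. A small Python sketch, assuming a round modern value of 70 km/s/Mpc for the Hubble constant:

```python
H0 = 70.0    # Hubble constant, km/s per megaparsec (assumed round value)

def recession_velocity(distance_mpc):
    """Hubble's law: v = H0 * d."""
    return H0 * distance_mpc

for d in (10, 100, 1000):   # distances in megaparsecs
    print(f"{d:>5} Mpc -> {recession_velocity(d):>8,.0f} km/s")
```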

In the meantime, the debate on the origin of the universe still needed to be settled.  Lemaitre favored a discrete beginning to the universe that evolved throughout its history.  Specifically, Lemaitre felt vacuum energy would cause the expansion of the universe to accelerate in time and thus, kept Einstein’s cosmological constant, albeit with a different value to speed up the expansion.  Einstein disagreed and thought the cosmological constant was no longer required.  By 1931, Einstein conceded the universe was expanding, but not accelerating as Lemaitre thought.  A decade later, the most serious challenge to the Big Bang theory emerged.

The label Big Bang was pinned on Lemaitre’s theory derisively by Fred Hoyle of Cambridge, who devised the Steady State theory.  This theory postulated an expanding universe, but the expansion was generated by the creation of new hydrogen.  Hoyle scored points with the discovery that stellar nucleosynthesis created the elements from carbon to iron via fusion processes.  Although Hoyle proved the Big Bang was not required to form these heavy elements, he still could not provide an answer to how hydrogen was created.  It would take some modifications to the Big Bang model to challenge Hoyle’s Steady State model.

Fred Hoyle and Fr. Georges Lemaitre. Credit: St. Edmond’s College/University of Cambridge.

During the 1940’s, George Gamow proposed a hot Big Bang as opposed to the cold Big Bang of Georges Lemaitre. In Gamow’s model, the temperature of the universe reaches 10³² K during the first second of existence. Gamow was utilizing advancements in quantum mechanics made after Lemaitre proposed his original Big Bang model. Gamow’s model had the advantage over Hoyle’s Steady State model as it could explain the creation of hydrogen and most of the helium in the universe. The hot Big Bang model had one additional advantage: it predicted the existence of a background microwave radiation emanating from all points in the sky and with a blackbody spectrum.

A blackbody is a theoretical construct. It is an opaque object that absorbs all radiation (hence, it is black) and re-emits it as thermal radiation. The key here is that to emit blackbody radiation, an object has to be dense and hot. A steady state universe would not emit blackbody radiation, whereas a big bang universe would in its early stages. During the first 380,000 years of its existence, a big bang universe would be small, hot, and opaque. By the time this radiation reached Earth some 13 billion years later, the expansion of the universe would stretch out the radiation wavelengths into the microwave range. This stretching would correspond to a cooling of the blackbody radiation to somewhere between 0 and 5 K, just a few degrees above absolute zero. Detection of this radiation, called the Cosmic Microwave Background (CMB), would resolve the Big Bang vs. Steady State debate.
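Wien’s displacement law connects a blackbody’s temperature to the wavelength where it shines brightest, which shows why a few-kelvin background lands in the microwave band. A minimal Python check:

```python
WIEN_B = 2.898e-3    # Wien's displacement constant, meter-kelvins

def peak_wavelength(temp_k):
    """Wien's law: the wavelength at which a blackbody at temp_k is brightest."""
    return WIEN_B / temp_k

print(f"{peak_wavelength(2.7) * 1e3:.2f} mm")   # CMB at 2.7 K: ~1.07 mm, microwaves
print(f"{peak_wavelength(5778) * 1e9:.0f} nm")  # the Sun at 5778 K: ~500 nm, visible light
```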

In 1964, Arno Penzias and Robert Wilson were using the 20-foot horn antenna at Bell Labs to detect extra-galactic radio sources. Regardless of where the antenna was pointed, they received noise corresponding to a temperature of 2.7 K. Cosmology was still a small, somewhat insular field separate from the rest of astronomy. Penzias and Wilson did not know how to interpret this noise and made several attempts to rid themselves of it, including two weeks spent cleaning pigeon droppings out of the horn antenna. Finally, they placed a call to Princeton University, where they reached Robert Dicke, who had been building his own horn antenna to detect the CMB. When the call ended, Dicke turned to his team and said:

“Boys, we’ve been scooped.”

Actually, the first detection of the CMB came in 1941 by Andrew McKellar, but the theory to explain what caused it was not in place and the finding went forgotten. Penzias and Wilson published their discovery simultaneously with a paper by Robert Dicke providing the theory: the noise was proof that the young universe was in a hot, dense state after its origin. Georges Lemaitre was told of the confirmation of the Big Bang a week before he passed away. Penzias and Wilson were awarded the Nobel in 1978. Back at Cambridge, Fred Hoyle refused to concede the Steady State theory was falsified until his death in 2001. Some believe this refusal, among other things, cost Hoyle the Nobel in 1983, when it was awarded for the discovery of nucleosynthesis. Hoyle was passed over in favor of his junior collaborator, Willy Fowler.

It would take 25 more years before another mystery of the CMB was solved. The noise received by Penzias and Wilson was uniform in all directions. For the stars and galaxies to form, some regions of the CMB had to be cooler than others. This was not a failure on Penzias and Wilson’s part; better equipment with higher resolution capabilities was required to detect the minute temperature differences. In 1989, the COBE probe, headed by George Smoot, set out to map these differences in the CMB. The mission produced the image below. The blue regions are 0.00003 K cooler than the red regions, just cold enough for the first stars and galaxies to form. This map is a representation of the universe when it was 380,000 years old.

The oval represents a 3-D spherical map of the universe projected into 2-D by cutting from one pole to the other. Credit: NASA

Could we peer even farther into the universe’s past?  Unfortunately, no.  The universe did not become transparent until it was 380,000 years old, when it cooled down sufficiently for light photons to pass unabated without colliding into densely packed particles.  It’s similar to seeing the surface of a cloud and not beyond.

The first nine minutes of the COBE probe produced a spectrum of the CMB.  The data was plotted against the predicted results of a blackbody spectrum.  The results are below:

Credit: Mather et al., Astrophys. J. Lett., 354:L37–L40, May 1990

The data points are a perfect match for the prediction.  In fact, the CMB represents the most perfect blackbody spectrum observed.  The universe was in a hot, dense state after its creation.

The late 1990’s would add another twist to the expansion of the universe. Two teams, one based in Berkeley and the other at Harvard, set out to measure the rate of expansion throughout the history of the universe. It was expected that over time, the inward pull of gravity would slow the rate of expansion. And this is what relativity would predict once the cosmological constant was pulled out, or technically speaking, set equal to zero. The two teams set about their task by measuring Type Ia supernovae.

Like Cepheid variables, Type Ia supernovae are standard candles. They result when a white dwarf siphons off enough mass from a neighboring star to reach a mass of 1.4 Suns. Once this happens, a supernova occurs, and as these explosions transpire at the same mass point, their luminosity can be used to calibrate distance and, in turn, the history of the universe. A galaxy 1 billion light years away takes 1 billion years for its light to reach Earth. A galaxy 8 billion light years away takes 8 billion years for its light to reach Earth. Peering farther into the universe allows us to peer farther back in time as well.
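The logic of a standard candle is just the inverse-square law: if you know how much light an object emits, how bright it appears tells you how far away it is. A Python sketch with hypothetical numbers (the luminosity and flux values below are illustrative, not measured):

```python
import math

def distance_from_flux(luminosity_watts, flux_w_per_m2):
    """Inverse-square law: F = L / (4*pi*d^2), solved for distance d."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

L = 1.0e36    # hypothetical standard-candle luminosity, watts
# A candle that appears four times fainter must be twice as far away.
print(distance_from_flux(L, 1e-9) / distance_from_flux(L, 4e-9))  # 2.0
```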

What the two teams found sent shock waves throughout the world of astronomy.

The first 10 billion years of the universe went as expected: gravity slowed the expansion. After that, the expansion accelerated. The unknown force pushing out the universe was dubbed dark energy. This puts the cosmological constant back into play. The key difference is that instead of operating from an incorrect assumption of a static universe, data can be used to find a correct value that models an increasingly expanding universe. Georges Lemaitre’s intuition on the cosmological constant had been correct. While the exact nature of dark energy needs to be worked out, it does appear to be a property of space itself. That is, the larger the universe is, the more dark energy exists to push it out at a faster rate.

Besides detecting the CMB, cosmologists spent the better part of the last century calculating the Hubble constant. The value of this constant determines the rate of expansion and provides us with the age of the universe. The original value derived by Hubble in the 1920’s gave an age for the universe that was younger than the age of the Earth. Obviously, this did not make sense and was one reason scientists were slow to accept the Big Bang theory. However, it was clear the universe was expanding, and what was needed was better technology affording more precise observations. By the time I was an undergrad in the 1980’s, the age of the universe was estimated at between 10 and 20 billion years.

When the Hubble Space Telescope was launched in 1990, a set of key projects were earmarked for priority during the early years of the mission.  Appropriately enough, pinning down the Hubble constant was one of these projects.  With its high resolution able to measure the red shift of distant quasars and Cepheid variables, the Hubble was able to pin down the age of the universe at 13.7 billion years.  This result has been confirmed by the subsequent WMAP and Planck missions.
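To first order, the age of the universe is just the reciprocal of the Hubble constant, the so-called Hubble time. A short Python sketch, assuming H0 = 70 km/s/Mpc and ignoring how the expansion rate has changed over cosmic history:

```python
KM_PER_MPC = 3.086e19   # kilometers in one megaparsec
YEAR = 3.156e7          # seconds in one year

def hubble_time_years(h0_km_s_mpc):
    """Naive age estimate: t = 1/H0 (treats the expansion rate as constant)."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC
    return 1 / h0_per_second / YEAR

print(f"{hubble_time_years(70) / 1e9:.1f} billion years")   # ~14.0, close to 13.7
```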

The story of the Big Bang is not complete. There is the lithium problem: the Big Bang model predicts three times the amount of lithium that is observed in the universe. And there is inflation. In the first moments of the universe’s existence, the universe was small enough for quantum fluctuations to expand it greatly. Multiple models exist explaining how this inflation occurred, and this needs to be resolved. Inflation would explain why the universe is observed to be flat rather than curved, and why one side of the universe is the same temperature as the other when the two sides are too far apart to have ever been in contact. An exponential expansion of the universe during its first moments of existence would solve that.

NASA’s WMAP mission measured the average angular distance between CMB fluctuations. One degree is flat, 0.5 degrees open, 1.5 degrees closed. The measurement came in flat at 1 degree consistent with inflation theory. Credit: NASA/WMAP

Then there is the theory of everything. In his 1966 PhD thesis, Stephen Hawking demonstrated that if you reverse time in the Big Bang model, akin to running a movie projector in reverse, the universe is reduced to a singularity at the beginning. A singularity has a radius of zero, an infinite gravity well, and infinite density. Once the universe has a radius less than 1.6 × 10⁻³⁵ meters, a quantum theory of gravity is required to describe it, as relativity does not work on this scale.

When discussing these problems with Big Bang skeptics, the tendency is for them to reply with a gotcha moment. However, this is just scientists being honest about their models. And if you think you have an alternative to the Big Bang, you’ll need to explain the CMB blackbody spectrum, which can only be produced by a universe in a hot, dense state. And you’ll need to explain the observed expansion of the universe. It’s not enough to point out the issues with a model; you’ll need to replicate what it gets right. While there are some kinks to work out, the Big Bang appears to be here to stay.

You don’t need access to an observatory or a NASA mission to experience the remnants of the Big Bang. Every glass of water you drink includes hydrogen created during the Big Bang. And if you tune your television to an empty channel, part of the static you see is noise from the CMB. The Big Bang theory and the observational evidence that backs it up constitute one of the landmark scientific achievements of the 20th century, and should be acknowledged as such.

*Image atop post is a timeline of the evolution of the universe.  Credit:  NASA/WMAP Science Team.

The March of Time

Time is a mystery to physicists.  The Newtonian notion of absolute time, that is, that all clocks run at the same rate, was demolished by Einstein’s relativity theory.  Yet even though we know clocks in different reference frames can run at different rates, we don’t know what exactly time is.  Does time run continuously or in discrete packets?  Can one travel backwards in time?  Why don’t the laws of physics tell us what time is?  An intriguing book, Now by Richard Muller, gives us some paths to seek these answers.

During the Victorian age, time was considered a constant for everyone.  A conductor’s watch on the way to London would run at the same rate as the station master’s clock.  Perhaps this is why Big Ben is the perfect symbol for that era.  Einstein proved this notion wrong.  The conductor’s pocket watch will run slower than the motionless station master’s clock.  Given the speeds we travel, the difference is too small to discern.  Einstein’s genius was to follow a theory to its natural conclusion and not allow false intuition to lead him astray.  Clocks run slower the faster you go.  Gravity also slows the rate of time.  Once you hit the speed of light, or enter the infinitely deep gravity well of a black hole, time stands still.
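To see why the conductor never notices, here is a minimal sketch of the time dilation factor in Python; the 200 km/h express train is purely an illustrative choice:

```python
import math

# Lorentz factor: moving clocks tick slow by 1/sqrt(1 - v^2/c^2).
c = 2.998e8  # speed of light, m/s

def lorentz_gamma(v_m_per_s):
    """Time dilation factor for a clock moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v_m_per_s / c) ** 2)

v_train = 200 / 3.6  # a 200 km/h express train, converted to m/s
slowdown = lorentz_gamma(v_train) - 1.0
print(f"Fractional slowdown: {slowdown:.1e}")  # ~1.7e-14, far too small to notice
```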

The question remained: why does time always flow forward?  The most common response has been to quote the 2nd law of thermodynamics, the only law in physics that suggests a flow of time.  It states that the entropy of the universe always increases with time.  More specifically, a closed system, one in which energy cannot enter or leave, must become more disordered over time.  An open system, like the Earth, which receives a constant stream of energy from the Sun, can experience a decrease in entropy and an increase in ordered states.  The universe, as far as we know, is a closed system.  This argument was first advanced by Arthur Eddington, but Richard Muller proceeds to poke some holes in it.

Eddington was one of the most accomplished astronomers of the early 20th Century.  One of the first to grasp Einstein’s relativity theory, Eddington led an expedition to the island of Príncipe off the west coast of Africa to observe the solar eclipse of 1919.  Eddington was able to measure the bending of starlight by the Sun as predicted by Einstein.  Once this result was reported by the media, Einstein became the most famous scientist in the world.  Eddington was also what we would call today a popularizer of science.  His hypothesis that entropy mandates the flow of time was published in his book The Nature of the Physical World, written in a manner for the general public to enjoy.

However, as Muller notes, clocks do not run slower in local regimes where entropy is decreasing.  A recent experiment verified that, at least on a quantum level, heat can flow from a colder to a warmer object, a violation of the 2nd law of thermodynamics, under which heat flows from hot to cold.  Some of the news articles on this experiment also claimed it had reversed the arrow of time.  Muller considers entropy and time to be two separate concepts.  Rather than rely on entropy, Muller’s hypothesis on time is tied to one of the questions my students ask me most often.

When going over the Big Bang, I am often asked what existed before then.  The answer is…nothing.  Space did not exist before the Big Bang, and neither did time.  Muller speculates that just as space is being created by the expansion of the universe, so is time.  It is this expansion that gives us the flow of time and a sense of now.  Unlike Eddington’s entropy argument, Muller provides a means of falsifying his theory.

The idea of falsifying a theory might seem odd, as we are taught in grade school that experiments prove a theory right.  That’s not quite correct.  As Richard Feynman would say, we don’t prove a theory correct, only that it is not wrong.  Newton’s law of gravity was not wrong for a couple of centuries; it predicts the motion of most celestial objects quite well.  By the late 1800’s, observations came in that Newton’s laws could not explain, specifically the precession of Mercury’s orbit.  Einstein’s relativity theory provided accurate predictions in two areas Newton could not: when an object is near a large gravity well like the Sun, and when an object moves at velocities near the speed of light.

When Einstein realized his theory predicted Mercury’s orbit correctly, he was so excited he suffered from heart palpitations.  For a hundred years, Einstein’s theory has been proven not wrong.  It may take a unification of quantum mechanics and relativity to change that.

Muller speculates that as dark energy accelerates the expansion of the universe, it must also accelerate the expansion of time.  That is, time runs faster now than in the past.  To detect this, we must look at galaxies at least 8 billion light years away and make highly precise measurements of their red shifts.  Any excess in the red shift not predicted by the expansion of space would be caused by the expansion of time.  At present, we do not have instruments capable of so precise a measurement.  It’s not unusual for theory to race ahead of experimental ability; after all, it took one hundred years to prove the gravitational waves predicted by Einstein actually exist.

Is there any way to falsify relativity or quantum mechanics? To date, both have held up to rigorous testing.  One possibility is the instantaneous collapse of the quantum probability curve upon observation.  In the Copenhagen interpretation of quantum mechanics, atomic particles exist in all possible states along a probability curve.  Once observed, the probability curve collapses instantaneously to an exact state.  As Muller notes, this is directly at odds with relativity, where nothing, not even information, can exceed the speed of light.  Perhaps this can provide a crack in the theory that leads to a unification of the physics of atoms and of large-scale objects.

Muller’s exploration of time delves into other topics, often Star Trek related. In the case of the transporter, Muller questions whether the person assembled at the other end is the same person or a duplicate, with the original destroyed.  I thought this was interesting as it follows the plot of James Blish’s one-off Star Trek novel Spock Must Die.  Published in 1970, its opening chapter involves a rec room conversation between Scotty and Dr. McCoy, where McCoy frets over the possibility he is no longer his original self.  That is, the transporter destroyed the original McCoy the first time he used it and constructed a replicate each time afterward.  Scotty is unfazed – “a difference that makes no difference is no difference.”

Would it make a difference?

As Muller notes, our bodies are mostly made of different atoms and cells than they were years ago, yet we maintain our sense of self.  The only thing that does change is the sense of now.  So, when I bought Spock Must Die in the mid-70’s, the body of atoms that searched through department and stationery store bookshelves was markedly different from the body of atoms that purchased Now online.  In that sense, I am a replicate of my childhood self.  Yet, throughout that whole time, my mind has maintained a continuous state of consciousness.

That brings me back to an argument made in a college philosophy class: if you take a boat and replace each plank of wood over time, is it still the original boat?  Boats do not experience the sensation of time; it takes a mind to do that.  The brain, in some regions, does replace neurons throughout life, and this may lead to memory loss.  Other regions appear not to replace their neurons.  This may explain our continuity of consciousness but, as many a journal article has ended, more research is required in this area.

In the same philosophy class, our professor discussed how we lack access to each other’s state of consciousness.  Unless we could perform a mind meld a la Mr. Spock, our life experience and sense of time is locked up within each individual.  So, is time a matter that can be solved by physics alone?  Or does it require an interdisciplinary approach?  My instinct is that time is a problem for physics to solve.  We require eyes to see light and a mind to interpret it, but the electromagnetic waves that make up light were explained by physics.  Until some evidence-based results come in, we’ll have to keep an open mind.  Many a time, instinct has led a scientist astray.  How will this story end?

I honestly don’t know.  Only time will tell.

*Image atop post is a Munich clock store.  Credit:  Gregory Pijanowski

The State of SETI

In 1960, astronomers at the National Radio Astronomy Observatory in Green Bank, West Virginia launched the first effort to discover extraterrestrials.  Led by Frank Drake, Project Ozma used a radio telescope to detect transmissions at a single frequency of 1420 MHz.  This is the frequency emitted by hydrogen clouds and the most common radio emission in the universe.  Drake was attempting to receive pulsed transmissions from intelligent life in the star systems Tau Ceti and Epsilon Eridani.  More than 50 years later, Frank Drake is still searching for life beyond Earth, and today’s efforts seek to increase the sky coverage and the bandwidth of frequencies searched.
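The 1420 MHz figure is simply hydrogen’s 21 cm emission wavelength expressed as a frequency. A one-line check in Python:

```python
# The hydrogen line: frequency = speed of light / wavelength.
c = 2.998e8            # speed of light, m/s
wavelength = 0.21106   # neutral hydrogen emission wavelength, m

print(f"{c / wavelength / 1e6:.0f} MHz")  # ~1420
```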

The first detection of life beyond Earth may not be intelligent life but microbial life in our own Solar System.  Where there is water, there may be life.  And beyond Earth, there is plenty of water to be found in the Solar System.  Mars once had oceans of water on the surface.  While the surface is now dry, the subsurface contains significant amounts of water, especially in the high latitudes.  The Jovian moons of Europa and Ganymede, along with Saturn’s moon Titan, each have subsurface oceans with more water volume than all of the Earth’s oceans.  Missions planned for the next decade may provide evidence for life in these locations.

Map of Martian subsurface water made by the Mars Odyssey Gamma Ray Spectrometer. Highest concentrations are the blue regions by the poles. Credit: NASA/JPL

NASA is developing robotic moles to dig over 600 feet under the Martian surface.  These moles will be attached to a tube that sends samples to the surface for examination.  The Europa Clipper will make repeated flybys of Europa to measure properties of the subsurface ocean such as salinity.  Eventually, plans are to land on Europa and sample the ocean directly, though that mission may be 10 to 20 years in the future.  A key factor in all of this is planetary protection: if life exists in these locations, missions to detect it must be sterilized to prevent contamination by Earth’s microbes.  To this end, NASA has an office dedicated to this purpose.

Although Europa is much smaller than Earth, its subsurface ocean is much deeper. Thus, Europa has more water than Earth as can be seen in this comparison. Credit: Kevin Hand (JPL/Caltech), Jack Cook (Woods Hole Oceanographic Institution), Howard Perlman (USGS)

Another possibility for life is the Saturn moon Enceladus, which ejects water vapor from its subsurface ocean into space.  NASA is currently devising instrumentation to study these water samples on a future mission.  This would allow for the detection of possible microbial life without the need to burrow into the subsurface, saving time and money.  Enceladus may be quite young, with some estimates putting it at only 100 million years old.  However, that may be enough time for life to have evolved there.

Water plumes near the South Pole of Enceladus. These plumes eject water hundreds of miles into space allowing for remote sampling. Credit: NASA/JPL/Space Science Institute

The environments in the Solar System beyond Earth are too harsh for plant life, but we can use what we know about Earth to detect plant life on exoplanets.  Earth’s early atmosphere was mostly carbon dioxide.  About 2.7 billion years ago, photosynthetic life, specifically cyanobacteria, began to convert carbon dioxide to oxygen via photosynthesis.  About 2.5 billion years ago, that oxygen began to take hold in the atmosphere.  Oxygen is very reactive, which is why it is so combustible, and it likes to combine with other elements; it would not persist in an atmosphere without photosynthesis to replenish it.  If we were able to detect oxygen in sizable quantities in an exoplanet atmosphere, that could be a tell-tale sign that plant life exists.  This is what is known as a biomarker.

Other biomarkers include methane, a by-product of living organisms.  Both oxygen and methane can appear in an atmosphere naturally; however, if they appear together, it would most likely be a sign of life.  The color of an exoplanet can also serve as a biomarker.  Astronomers at Cornell have cataloged 137 microorganisms and the colors their pigmentation reflects, to serve as a reference for what might be detected in an exoplanet’s light.  It is hoped the next generation of 30-40 meter ground telescopes to go online in the next decade, along with the James Webb Space Telescope, will be powerful enough to detect biomarkers.

Of course, most people want to discover more than plant and microbial life; we want to know if there are alien civilizations out there.  We typically associate those efforts with the movie Contact, and that’s accurate in the sense that we listen for radio transmissions from other civilizations.  Keep in mind, that will only help us discover civilizations at least as advanced as ours.  If an alien race had pointed their radio receivers at Earth from 800 to 1800 AD, they would not have heard a peep, as radio had not been invented yet.  Over the past decade, the search for extraterrestrial intelligence (SETI) has received a bump in funding and resources.

In 2001, Paul Allen began to fund the Allen Telescope Array.  Rather than scrounging for time on radio telescopes used for other research projects, this array of 42 radio dishes is dedicated solely to SETI.  Ultimately, the goal is to build 350 dishes and collaborate with similar arrays across the globe.  Breakthrough Listen, funded by Yuri Milner, will use the Green Bank Radio Telescope, the 64-meter Parkes Radio Telescope in Australia, and the Automated Planet Finder at Lick Observatory.  Rather than focus on a single target and frequency, both projects endeavor to survey a million stars across a wide band of frequencies.  Besides radio, Breakthrough Listen will also search for optical laser transmissions.

The Green Bank Radio Telescope had its funding pulled by the NSF in the aftermath of the financial crash of 2008. Private funding for SETI has helped ensure its future over the next decade. Credit: NRAO/AUI

During a recent lecture at Cornell, Frank Drake noted that it had been previously thought lasers could not be transmitted across interstellar distances.  New developments in laser technology have changed that.  High powered lasers created for controlled fusion research have the capability to reach other stars.  The thinking is advanced civilizations might use high powered lasers as a beacon to attempt to communicate across space.  The Automated Planet Finder will search for laser signals in this new frontier of SETI research.

When thinking of habitable planets, the Goldilocks Zone is what usually comes to mind.  This is the region around a star where liquid water can exist – neither too hot nor too cold.  However, other factors come into play in determining a planet’s suitability for life.  A magnetic field is required to shield the surface from cosmic rays.  An ozone layer is needed to absorb ultraviolet and x-ray radiation that would break apart organic compounds on the surface.  A planet’s axis must be moderately tilted, and its orbit not too elliptical, to avoid extreme seasons.  Also, the host star should be relatively quiet and not emit flares with excessive radiation.  While recent research indicates exoplanets in the Goldilocks Zone are common, they may not necessarily be able to support life.
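As a rough illustration of where the zone sits, the distance at which a planet receives Earth-like stellar flux scales as the square root of the star’s luminosity. A simple sketch, ignoring albedo, greenhouse warming, and all the other factors listed above:

```python
import math

# Distance at which a planet receives the same stellar flux Earth gets from
# the Sun. Flux falls off as 1/distance^2, so distance scales as sqrt(L).
# Crude: ignores albedo, greenhouse effects, tidal locking, and flares.
def earth_flux_distance_au(luminosity_in_suns):
    """Distance in AU where stellar flux equals Earth's insolation."""
    return math.sqrt(luminosity_in_suns)

for name, lum in [("Sun-like star", 1.0), ("red dwarf", 0.01), ("Sirius-like star", 25.0)]:
    print(f"{name}: ~{earth_flux_distance_au(lum):.2f} AU")
```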

Looking into the future, the Starshot initiative plans to send probes to our nearest interstellar neighbors to search for life.  When we think of interstellar voyages, we tend to think big, as in Star Trek‘s USS Enterprise, which was 947 feet long and held a crew of over 400.  Starshot takes the opposite approach.  Thinking small, the project aims to design a fleet of nano-sized spacecraft.  The thinking here is that the smaller the mass, the easier it is to accelerate to the velocities required to reach the stars.  Also, a fleet of these probes can withstand damage to a few along the way and still complete the mission.  New technology needs to be invented to make this a go, but $100 million in funding has started the ball rolling.

In 1961, Frank Drake devised an equation to estimate how many intelligent civilizations may exist in the Milky Way.  The final term of the equation estimates how long these civilizations last after emitting their first radio signals.  We won’t know the answer to that until we start making contact with alien civilizations.  Do advanced civilizations destroy themselves? Do natural events such as supervolcanoes disrupt intelligent life?  Finding the answers to these questions may help us survive on Earth.  It may be a bit of a long shot, but it is most certainly worth the effort.
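For the curious, here is the equation itself evaluated in Python. Every input below is a placeholder chosen purely for illustration; the actual values, especially the lifetime L, are unknown:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All values below are illustrative guesses; none are established.
R_star = 1.5   # star formation rate in the Milky Way, stars per year (assumed)
f_p = 1.0      # fraction of stars with planets (assumed)
n_e = 0.2      # habitable planets per planetary system (assumed)
f_l = 0.1      # fraction of habitable planets where life arises (assumed)
f_i = 0.1      # fraction of those that develop intelligence (assumed)
f_c = 0.5      # fraction of those that emit detectable signals (assumed)
L = 10_000     # years a civilization keeps transmitting (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Communicating civilizations in the galaxy: ~{N:.0f}")  # ~15 with these guesses
```

Swap in your own guesses and watch how wildly N swings; that sensitivity, more than any single answer, is the real lesson of the equation.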

*Image atop post is the Allen Telescope Array.  Credit:  Seth Shostak, SETI Institute.

Sunspots, Roswell, & Wright Field

On April 7, 1947, the largest sunspot in recorded history was observed.  Forty times the diameter of Earth, this burst of solar activity would later be connected with some odd happenings that year in Roswell, NM.  That’s a testament, as we’ll see, to humans’ ability to connect dots that really are not there.  Nonetheless, the event does offer an opportunity to explore solar physics along with history.

Sunspots were first observed by ancient Chinese astronomers around 800 BC.  The invention of the telescope accelerated the study of sunspots and Galileo spent several years observing them.  Sometimes it takes a few centuries of observations to discern a pattern and that was the case with sunspots.  In 1843, Samuel Schwabe discovered sunspots appear in roughly 11-year cycles.  One major exception to this was the Maunder Minimum from 1645 to 1715 when very few sunspots appeared at all.  It would not be until the early 20th Century and the work of George Ellery Hale that the physics of sunspots would be understood.

Animation of Galileo’s sunspot drawings from June 2 to July 8 in 1613. The Sun rotates once every 25 days. Credit: Rice University Galileo Project.

From 1897 to 1993, the world’s largest telescope was always one built by Hale.  These telescopes discovered that galaxies exist beyond the Milky Way, that the universe is expanding, and quasars, the most distant objects known.  Somewhat overshadowed by all this was Hale’s work in solar physics.  In 1908, Hale discovered sunspots are regions of intense magnetic activity.  The magnetic field acts as a bottleneck for convection to the solar surface.  As a result, sunspots are a few thousand degrees cooler than the surrounding region and consequently appear dark.  Hale would also discover that the polarity of sunspot magnetic fields flips after each 11-year cycle as part of an overall 22-year cycle.  It was the 150-foot solar tower at Mt. Wilson that imaged the great sunspot of 1947.
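The dimming is a direct consequence of the Stefan-Boltzmann law: radiated flux scales as the fourth power of temperature. A quick sketch with typical temperatures (the exact sunspot value varies from spot to spot):

```python
# Stefan-Boltzmann law: radiated flux is proportional to T^4, so a cooler
# sunspot is dimmer than the photosphere around it and appears dark.
T_photosphere = 5800.0  # typical solar photosphere temperature, K
T_sunspot = 4000.0      # rough sunspot umbra temperature, K (assumed)

flux_ratio = (T_sunspot / T_photosphere) ** 4
print(f"Sunspot flux: ~{flux_ratio:.0%} of the surrounding surface")  # ~23%
```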

Great sunspot of 1947. Earth and Jupiter added to image for scale. Credit: Mt. Wilson Observatory

Despite the darkness of sunspots, this type of solar activity does not significantly change the visible light radiation received on Earth.  However, high energy ultraviolet and x-ray radiation increases during times of intense solar activity.  This radiation is harmful to life, but thankfully an upper layer of the atmosphere called the thermosphere absorbs it.  This layer is where the International Space Station, Hubble Space Telescope, and many other satellites reside.  We think of this region as outer space because the atmosphere is so rarefied here.  Rarefied as it is, an increase in solar activity can expand the thermosphere and increase its density enough to drag orbiting objects to a lower altitude, or in the case of Skylab in 1979, back down to Earth.

Skylab was America’s first space station. This image was taken on February 8, 1974 as Skylab’s final crew departed. Credit: NASA

Some have claimed the massive solar activity of 1947 was responsible for an extraterrestrial space vehicle crash near Roswell, NM that year.  What crashed in New Mexico was earthly in origin, but solar activity would not bring down that type of craft at any rate.  Skylab was an abandoned space vehicle, and it took several years for the energized thermosphere to drag it down to Earth.  An advanced space vehicle with propulsion would simply compensate for any decay in its orbit with a minor burn; orbiting satellites perform this maneuver routinely.  So what happened in the New Mexico desert in 1947?

In mid-June, rancher W. W. Brazel found a crash site filled with debris he thought might have been part of a flying saucer.  By early July, Brazel notified a nearby Army Air Force base, and three men were sent to investigate.  Here’s where things got complicated.  The debris field contained items described as foil, balsa wood beams, and other parts held together with scotch tape.  There was also a black box that looked like some sort of radio transmitter.  Now, this obviously is not something built to withstand the rigors of interstellar travel.  However, the United States was in the midst of the great flying saucer wave of 1947, and a Roswell paper famously reported the military had captured the remains of a crashed saucer.

The term flying saucer had just been coined in late June of that year when pilot Kenneth Arnold reported seeing nine saucer-type objects near Mt. Rainier.  A wave of sightings followed over the summer.  The military quickly backtracked on its initial news release and stated it was a weather balloon that had crashed.  Brazel knew that wasn’t quite right: he had seen weather balloons before, and what he discovered this time did not look like his previous finds.  But annoyed by the publicity (even the Russians chimed in, joking that the flying saucer reports were the result of too much scotch whiskey), Brazel kept quiet, and the story of Roswell died down until the late 70’s, when new claims of a government coverup emerged.

Brazel was right, it was not a weather balloon, and there was a coverup by the government, just not the one usually promoted by those who make a living off this event.  I can recall a 1989 episode of Unsolved Mysteries sensationalizing the Roswell incident.  I happened to like Unsolved Mysteries quite a bit back in the day.  However, television is a business, and sensationalism sells.  In between the four or five legitimate mysteries presented each week, the show would delve once into the paranormal.  Even then, you had to discern what was real and what was fake.  Roswell was a legitimate mystery that would be resolved in the mid-90’s.  In retrospect, given that the Air Force’s Roswell Report ran to 994 pages, untangling Roswell was well beyond the scope of Unsolved Mysteries or ufologists.

Unlike most ufologists’ accounts of Roswell, the Air Force investigation interviewed firsthand sources.  Three key interviewees were Sheridan Cavitt, who recovered the original remains at Roswell with Jesse Marcel; Irving Newton, who inspected the debris in Fort Worth; and Albert Trakowski, who was director of Project Mogul.  Jesse Marcel died in 1986.  It was Marcel, from the military side, who was a key force in reviving the Roswell story in the late ’70’s.  Cavitt described Marcel as a good man who was prone to exaggeration.  Cavitt confirmed the original debris field was consistent with a balloon crash.  Newton confirmed the same and, in fact, broke out laughing at the thought the debris might come from an alien craft when he saw it in 1947.  It was Marcel again who pushed that idea in Fort Worth, noting what he thought was alien writing on the balsa wood beams.  The Roswell story lay dormant until 1978, when Marcel appeared in a National Enquirer article claiming he had recovered a flying saucer at Roswell in 1947.

From there, the Roswell myth picked up steam until it became a cottage industry unto itself.  The report’s interview with Trakowski is key.  It provided information on a project that was classified in 1947 and that, when declassified in 1994, solved the Roswell mystery.

At the dawn of the Atomic Age, the United States was researching methods to detect atomic bomb tests in the Soviet Union.  One such effort was Project Mogul.  This program designed high altitude balloons to detect sound waves from atomic explosions.  The balloons were unusual in design, consisting of balloon trains up to 600 feet long.  The balloons were made of polyethylene, and the trains included radar reflectors.  Brazel’s original find was consistent with this: there was no impact crater.  And the alien hieroglyphics?  Those were flower and geometric shaped figures on the tape used to seal the balloons’ radar targets.  The tape had been procured from a toy manufacturer when the targets were built during World War II; the wartime material shortage forced its use, which was often the source of jokes among the Project Mogul staff.

Schematic of a 600-foot Project Mogul balloon train. Credit: USAF.

The materials discovered fit the description of the Mogul balloons.  Brazel’s intuition was correct, it wasn’t a weather balloon, but given the then-classified nature of Project Mogul, the military could not disclose its true nature in 1947.

The debris was to be shipped to Wright Field (now Wright-Patterson) in Ohio with a stop in Fort Worth, where it was photographed.  The purpose of sending the debris to Wright was to properly identify it.  However, the debris never made it to Wright, as it was identified in Fort Worth as some sort of balloon rather than a flying saucer.  In fact, the entire contents were described as being able to fit in a car trunk.  Another aspect of the Roswell myth was a second crash where alien bodies were discovered and shipped to Wright.  No firsthand accounts of a second crash exist, and those who make this claim can’t even agree on its location.  Fact is, it never happened.  While the people at Wright Field were not examining alien bodies, they were at work making aviation history.

After World War II, Chuck Yeager was assigned to Wright Field, where the Army Air Force maintained its flight test center.  The Bell X-1 was built at Bell Aerospace’s plant in Niagara Falls, but many of its design features came from the engineers at Wright.  It was at Wright where the decision was made to model the Bell X-1 after a .50 caliber machine gun bullet.  When the push came to move the X-1 past the sound barrier, operations were transferred to Muroc Air Base in the Mojave Desert, and Yeager went supersonic on October 14, 1947.  There were a lot of interesting goings-on at Wright Field in 1947, just none of them involving extraterrestrials.

Bell X-1 during a test flight. Credit: NASA Langley Research Center

So why does the myth of Roswell endure, more than two decades after it was debunked?  As the poster from The X-Files says, “I want to believe.”  People naturally want to be in on the discovery of something as momentous as alien life.  Problem is, science requires evidence, and all the evidence points towards Project Mogul as the source of the Roswell crash.  That, and the Roswell UFO story is a livelihood for authors.  As I said before, sensationalism sells, and as much as people don’t want to give up on myths, they are even more stubborn about giving up a cash cow.

It’s unfortunate that the remains of the Project Mogul balloon crash were disposed of.  Wright-Patterson is now the home of the National Museum of the USAF, a fabulous collection of aviation history from the Wright brothers to the Space Age, and an exhibit of the crash remains from Roswell would have been a great addition.  Besides serving as an interesting historical artifact from the nascent Atomic Age, it would let visitors laugh just as Warrant Officer Irving Newton did in Fort Worth back in 1947, when told the debris was the flying saucer found in Roswell.

The Space Between Us

From our vantage point on Earth, we tend to think of our surroundings as the norm for the universe.  It is not so.  When we study astronomy, we focus on the planets, the Sun, stars, and galaxies.  These objects represent a small fraction of the universe.  If you could shrink the Sun to the size of a grain of sand, the nearest star would be another grain of sand over four miles away.  On this scale, light would travel at seven inches an hour.  What lies in all that space between the stars?  A cauldron of plasma, dust, gas, and magnetic fields in conditions we do not experience on Earth.  Some of the most important processes in the universe occur in these environments.
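Readers can check this scale model themselves. A short Python sketch, assuming a sand grain of roughly 0.2 mm, which approximately reproduces the figures above:

```python
# Sand-grain scale model of the solar neighborhood. A grain of about 0.2 mm
# roughly reproduces the figures in the text; results scale with grain size.
SUN_DIAMETER_M = 1.39e9    # actual diameter of the Sun
GRAIN_DIAMETER_M = 2.0e-4  # assumed sand grain diameter (0.2 mm)
PROXIMA_M = 4.0e16         # distance to the nearest star, about 4.2 light years
LIGHT_SPEED = 3.0e8        # m/s

scale = SUN_DIAMETER_M / GRAIN_DIAMETER_M  # shrink factor, roughly 7 trillion

star_miles = PROXIMA_M / scale / 1609.0
light_inches_per_hour = LIGHT_SPEED / scale * 3600.0 / 0.0254

print(f"Nearest star: ~{star_miles:.1f} miles away")                           # ~3.6
print(f"Scaled speed of light: ~{light_inches_per_hour:.0f} inches per hour")  # ~6
```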

Plasma is electrified gas.  In the Sun, or in any star, the heat of the core separates positively charged nuclei (ions) from negatively charged electrons.  These free-floating particles are then discharged into space via the solar wind.  Plasma does not occur naturally on the Earth’s surface, although it can be created for use in fluorescent lights and plasma TVs.  As plasma carries an electrical charge, its movement is determined by the ambient magnetic field: a charged particle travels along the path of a magnetic field line.  In turn, the solar wind drags the solar magnetic field towards the planets.  This is referred to as the Interplanetary Magnetic Field, or IMF.  As a result of the Sun’s rotation, the IMF is spiral shaped, much like water from a rotating sprinkler.  The IMF also undulates in a wave-type formation, as the image below indicates.

Parker Spiral. Credit: NASA/J. Jokipii, University of Arizona.
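The spiral’s winding angle follows from simple geometry: the solar wind carries field lines radially outward while the Sun rotates beneath them. A sketch with typical values (the 400 km/s wind speed is an assumed average):

```python
import math

# Parker spiral: the Sun rotates while the solar wind carries field lines
# radially outward, so the field winds into a spiral. The winding angle at
# distance r is atan(omega * r / v_wind).
OMEGA = 2.0 * math.pi / (25.4 * 86400.0)  # solar rotation rate, rad/s
AU = 1.496e11                             # Earth-Sun distance, m
V_WIND = 4.0e5                            # typical solar wind speed, m/s (assumed)

angle_deg = math.degrees(math.atan(OMEGA * AU / V_WIND))
print(f"Spiral angle at Earth's orbit: ~{angle_deg:.0f} degrees")  # ~47
```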

The journey of this plasma is fairly uneventful until it collides with a planet.  In the case of Earth, the IMF connects with the Earth’s magnetic field to transfer mass (the plasma) and its energy.  Once this plasma enters the Earth’s magnetic field, it follows the Earth’s magnetic field lines, eventually finding its way into the upper atmosphere near the magnetic poles.  Here, these highly energetic particles collide with oxygen and nitrogen atoms.  The kinetic energy of these collisions excites the atoms’ electrons to higher energy levels.  The electrons eventually fall back to their original energy levels and release the energy in the form of light, causing the aurora.  Without this protective shield, life could not exist on Earth’s surface.  That scenario is played out on Mars, which lacks a global magnetic field.

Most of the solar wind does not collide with planets.  What becomes of it?  Eventually, it hits the heliopause.  Here is where the solar wind meets the interstellar medium and no longer has the ability to push out any further.  Both Voyager I and II, launched in 1977 and still sending data back to Earth, have crossed the termination shock that precedes the heliopause; it is there that the solar wind slows from supersonic to subsonic speeds.  Voyager I appears to have crossed the heliopause into the interstellar medium in 2012, and hopefully Voyager II will follow before its last instruments are shut down in 2025.  What do we know about the interstellar medium?

Voyager’s golden record. The hydrogen electron spin state is depicted on the lower right. This provides a calibration for distance and time for any extraterrestrial life that might encounter Voyager. Credit: NASA/JPL

As hydrogen was created in the immediate aftermath of the Big Bang, it is the most common element in the universe.  Most interstellar space is not hot enough to ionize hydrogen.  Neutral hydrogen (HI) emits 21 cm radio waves: if the spins of a hydrogen atom’s electron and proton are parallel, the electron eventually flips its spin to be anti-parallel.  This drops the atom to a slightly lower energy level, emitting radio waves in the process.  Unlike light, radio transmissions penetrate dust clouds.  Think of it this way: if you have your radio on, the reception will be the same regardless of how dusty your room is.  This has allowed astronomers to complete comprehensive maps of galactic hydrogen gas clouds, which is crucial in mapping the Milky Way, as hydrogen’s radio transmissions give us a better look at our home galaxy.

In a spiral galaxy, hydrogen tends to be found in the arms, the areas where stars tend to be born.  The rotational velocity of the hydrogen in the arms, and its resultant red/blue shifts, allows us to differentiate it from intergalactic hydrogen.  The 21 cm radio emissions are so ubiquitous that it was decided to use them on the Voyager golden record as a calibration scale for potential extraterrestrials who might find the probe.  The thinking being, as this is the most common emission in the universe, any alien race would also know about it and could use it to decipher the time and distance measurements.

A 360 degree map of hydrogen in the Milky Way. The oval represents a sphere flattened by making a cut from one pole to the other. Bluish gas is approaching Earth while greenish gas is receding from Earth. The plane of the Milky Way runs across the center while the neighboring Magellanic Clouds are in the lower right. Credit: HI4PI: A full-sky HI survey based on EBHIS and GASS, Astronomy & Astrophysics.

When a star lies near a hydrogen cloud, it can heat the cloud to the point where it ionizes.  Charged particles emit radio waves when accelerated.  In an ionized hydrogen cloud, when a negatively charged electron passes near a positively charged proton, it accelerates and emits radio waves.  This is something akin to radio transmission towers on Earth, where electrons are accelerated up and down the tower, generating the radio transmission you receive at home.  Ionized hydrogen clouds are also hot enough to emit visible light.  By combining radio and visual observations, astronomers have been able to map out the spiral arms of the Milky Way.

Orion Nebula as captured by the Hubble Space Telescope. During winter and spring it is visible with binoculars.

Hydrogen is not the only element in interstellar space.  The second most abundant element is helium, also created in the throes of the Big Bang.  Beyond that, there are trace amounts of other elements, such as oxygen and carbon, generated by nuclear fusion in earlier generations of stars and released via planetary nebulae or supernova explosions.  While small in amount, these are large in importance.  This is especially true of the organic, carbon-based molecules in space.  It is these molecules that form the basis of life on Earth, and perhaps elsewhere.

Also occupying interstellar space are dust grains.  If a light wave is shorter than a dust grain, it is scattered in a random direction.  If a light wave is longer than a dust grain, it is not scattered and passes through unabated.  Thus, on Earth, short wavelength blue light is scattered by molecules and fine particles in the atmosphere, resulting in the blue sky.  Conversely, long wavelength red light passes through and creates the red sky at sunset.  The same processes are at work in space.  Dust grains scatter blue light, reddening celestial objects when viewed here on Earth.  In some cases, such as nebulae and the galactic center, dust can obscure our view entirely.  The answer is to view in wavelengths even longer than red light – infrared light.

Infrared light, which is basically heat, is not visible to the eye.  You cannot see body heat with your eyes, but you can view it with night vision goggles, which are infrared detectors.  At near-infrared wavelengths, located adjacent to the optical band on the electromagnetic spectrum, dust is transparent.  Far-infrared, with longer wavelengths closer to radio waves, can detect dust formations that radiate in these wavelengths.  The Earth itself radiates mostly in the infrared, and thus it is advantageous to have an infrared observatory in space, protected from interference from Earth.  The Spitzer Space Telescope does just that by observing in the infrared.
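Wien’s displacement law makes the point about Earth’s infrared glow concrete: a body’s thermal emission peaks at a wavelength inversely proportional to its temperature. A short sketch:

```python
# Wien's displacement law: peak emission wavelength = b / T.
WIEN_B = 2.898e-3  # Wien's constant, meter-kelvins

def peak_wavelength_microns(temperature_k):
    """Wavelength (microns) where a blackbody at this temperature peaks."""
    return WIEN_B / temperature_k * 1e6

print(f"Earth, 288 K: ~{peak_wavelength_microns(288):.0f} microns (infrared)")
print(f"Sun, 5800 K: ~{peak_wavelength_microns(5800):.2f} microns (visible light)")
```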

A composite image in both far and near infrared showing stars in the Milky Way core and the dust that normally obscures those stars. Credit: NASA/JPL-Caltech

Dust grains are important for life.  When dust grains begin to clump together around a protostar, it is the first step in planet building.  It has also been theorized that dust is how organic material was delivered to Earth to form the building blocks of life.  The early Earth would have been too hot for organics to survive on the surface.  The theory is that ultraviolet radiation broke apart dust grains, allowing them to recombine into organic compounds that were deposited on Earth via asteroids and comets.  The jury is still out on this, and while we cannot observe the formation of the Earth, we can observe the formation of other planetary systems via infrared and radio observatories.  The James Webb Space Telescope, to be launched in 2018, will observe in the infrared and should advance our understanding of these processes greatly.

The interstellar medium has about one atom per cubic centimeter.  The intergalactic medium has less than one atom per cubic meter.  It is also very hot, at 100,000 to 10,000,000 Kelvin.  This is not intuitive given the lack of obvious heat sources between galaxies; we know the temperature because the intergalactic medium emits high energy x-rays, indicative of hot objects.  The heat is generated by active galactic nuclei and the gravitational wells of galactic clusters.  Temperature is a measure of energy, which in turn is a measure of motion.  Since this space is so rarefied, it does not take a lot of push to move it to high velocities, as the sketch below shows.  And since it is so rarefied, this is where dark energy makes its full impact.
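Temperature translates directly into particle speed. A sketch of the typical thermal speed of a proton in a million-degree intergalactic medium, using basic kinetic theory:

```python
import math

# Kinetic theory: (3/2) k T = (1/2) m v^2, so v = sqrt(3 k T / m).
K_BOLTZMANN = 1.381e-23  # Boltzmann constant, J/K
M_PROTON = 1.673e-27     # proton mass, kg

def thermal_speed_km_s(temperature_k):
    """Typical (rms) speed of a proton at the given temperature."""
    return math.sqrt(3.0 * K_BOLTZMANN * temperature_k / M_PROTON) / 1000.0

print(f"Proton at 1,000,000 K: ~{thermal_speed_km_s(1e6):.0f} km/s")  # ~157
```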

The expansion of the universe slowed until about 5 billion years ago when dark energy became more dominant than gravity. Credit: Zosia Rostomian, Lawrence Berkeley National Laboratory, and Nic Ross, BOSS Lyman-alpha team, Berkeley Lab

Dark energy is the force that is speeding up the expansion of the universe.  Until about 5 billion years ago, gravity dominated the universe and was slowing down the expansion.  Since then, dark energy has dominated and has accelerated the expansion.  What is dark energy?  We don’t know.  Within the confines of galaxies, gravity still dominates, and we don’t feel the universal expansion on Earth.  However, these confines are but a small part of the volume of the universe.  The ultimate fate of the universe will be dictated by dark energy in the intergalactic void.  Some feel the universe will end in a Big Rip, where even subatomic particles are shredded apart by the continuing expansion.  Obviously, life could not exist in that state.  Worry not, this would be many billions of years in the future.  Nonetheless, our universe has a life cycle, and that underscores our need to understand the processes at work in seemingly, but not quite, empty space.

*Image atop post is Hubble wide field view of NGC 6791, an open star cluster that also has a couple of galaxies in the background.  Credit:  NASA, ESA, and L. Bedin (STScI)