Showtime for New Horizons

New Horizons has now entered its planned quiet mode in preparation for its big day on July 14th, when it makes its closest approach to Pluto at 7:49 AM EDT.  We will not hear back from the spacecraft until some 22 hours later, around 9 PM EDT tomorrow.  So what should we expect tomorrow and in the upcoming week?  Emily Lakdawalla from the Planetary Society has a very detailed rundown that you can read here.  Below is a general timeline of events.

Patience will be a virtue.  New Horizons was built before the age of social media and is not sending Instagram pics.  If all goes well, there will be a tremendous amount of data to send across the Solar System, and that will take time.  Keep in mind, even traveling at the speed of light, transmissions take about 4 1/2 hours to reach Earth from Pluto.
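If you want to sanity check that figure, the one-way light time is just distance divided by the speed of light.  Below is a minimal sketch, assuming an Earth-Pluto distance of about 31.9 AU, which is my ballpark figure for July 2015:

```python
# Light travel time from Pluto, assuming a geocentric distance of
# about 31.9 AU at the time of the flyby (a ballpark assumption).

AU_KM = 149_597_870.7        # kilometers per astronomical unit
C_KM_S = 299_792.458         # speed of light in km/s

distance_km = 31.9 * AU_KM
travel_time_hr = distance_km / C_KM_S / 3600
print(f"One-way light time: {travel_time_hr:.1f} hours")
# -> about 4.4 hours, matching the 4 1/2 hour figure above
```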

New Horizons must either collect data or send it back to Earth.  It cannot do both simultaneously.  Consequently, New Horizons will spend July 14th diligently gathering data as it makes its closest approach to Pluto.  And that is why New Horizons will be quiet for 22 hours until Tuesday night.  Images should start to come in on Wednesday, July 15th.  The mission timeline is as follows:

July 14th – 8:53 PM EDT, New Horizons scheduled to signal Earth the flyby was completed successfully.

July 15th – LORRI images (black & white) start to download along with data from the Alice, REX, and SWAP instruments (see below).

July 16th – first color images from the Ralph instrument package to arrive.

July 20th – the first wave of data will continue to download through this date.  The data package received from July 15-20 is high-priority science & public interest data and represents only a small percentage of the total New Horizons data.

September 14th to November 16th – After a quiet period of 8 weeks, New Horizons will download the compressed data set.

November 2015 to November 2016 – New Horizons downloads uncompressed data set.
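Why does the full download stretch into late 2016?  The bottleneck is the downlink rate.  Here is a rough sketch; both the total data volume and the average rate below are my ballpark assumptions, not official mission figures:

```python
# Rough estimate of the time needed to downlink the complete data set.
# Both numbers are ballpark assumptions, not official mission figures.

total_data_bits = 50e9     # assume ~50 gigabits of stored science data
downlink_bps = 2_000       # assume ~2 kbps average rate from Pluto's distance

days = total_data_bits / downlink_bps / 86_400
print(f"Approximate downlink time: {days:.0f} days of continuous transmission")
# -> roughly 290 days, which is why the uncompressed data set is not
#    expected in full until late 2016
```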

A few of the highlights:

NASA TV will begin flyby coverage at 7:30 AM EDT on July 14th.  You may watch online here.

New Horizons has an excellent Twitter feed.

New Horizons has two websites, one from NASA, and the other from the Johns Hopkins Applied Physics Laboratory, where mission operations are based.

The seven instruments at work are the following:

Long Range Reconnaissance Imager (LORRI) – a narrow angle telescopic camera that takes high-resolution black & white shots of Pluto.  NASA intends to release these images close to real time as they arrive.

Ralph – this is the instrument that takes color images of Pluto.  It will take longer for NASA to release these images due to processing time.  Among its many functions, this instrument will map the surface of Pluto with stereo images and search for organic compounds.

Alice – this is an ultraviolet spectrometer that will study the composition of Pluto’s atmosphere and attempt to detect an ionosphere (upper part of the atmosphere where particles are ionized by solar radiation).

Radio Science Experiment (REX) – will measure temperature and pressure in Pluto’s atmosphere.  This instrument will be used after the flyby, as it must be pointed in the direction of Earth while in use.

Solar Wind Around Pluto (SWAP) – will measure the loss of Pluto’s atmosphere as a result of its weak gravity field.  As the atmosphere escapes into space, it is ionized by solar radiation and carried away from Pluto by the solar wind.

Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI) – this instrument basically performs the same function as SWAP, but measures higher energy atmospheric particles escaping Pluto into space.  The combination of PEPSSI and SWAP will provide a comprehensive profile of Pluto’s atmospheric interaction with the solar wind.

Venetia Burney Student Dust Counter (SDC) – Venetia Burney was the eleven-year-old girl who named Pluto shortly after its discovery in 1930.  This instrument has been measuring dust grain properties throughout New Horizons’ voyage and will provide a dust profile of the Solar System.  This dust is the debris of collisions between various Solar System bodies.

After the Pluto flyby, the mission team will select a Kuiper Belt object to head to.  This flyby will take place in 2019.  The voyage continues.

*Image at top of post is the mission operations center that will receive the data from New Horizons.  Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

Why is Pluto Red?

The short answer to the question posed in the title of this post is that the surface of Pluto is covered by tholin (not to be confused with Star Trek’s Tholians).  The word was coined by Carl Sagan in 1979, derived from the Greek word tholos, meaning muddy, after the texture of the substance Sagan was able to produce in laboratory experiments.  Tholin is not produced naturally on Earth today, as atmospheric oxygen would break it apart soon after its formation.  However, tholin may have been produced early in Earth’s history, where it could have played a key role in the formation of life.

Tholin is found on some bodies located in the outer Solar System, notably Saturn’s moon Titan, Neptune’s moon Triton, and Pluto.  As tholin no longer occurs naturally on Earth, the study of these objects may allow us to acquire key information on the chemical processes that occurred early in our planet’s history.

The chemical process that produces tholin is fairly complex.  Ultraviolet light and energetic electrons break down methane (CH4) and nitrogen (N2) molecules in the atmosphere.  (Natural gas, which we use to heat our homes during the winter, is mostly methane.)  Through a series of chemical reactions, the fragments recombine into tholin, which falls onto the surface as a reddish, gooey substance.  Below is how tholin appears when produced in a laboratory:

Credit: Chao He, Xinting Yu, Sydney Riemer, and Sarah Hörst, Johns Hopkins University

The reddish tinge in Titan’s atmosphere is thought to be a result of the presence of tholin.  Below is an image of Titan taken by the Huygens probe as it descended towards its surface in 2005.  Huygens was originally part of the Cassini spacecraft, which has been exploring the Saturn system for 11 years now.  As a side note, this remains the most distant landing from Earth ever successfully attempted by any space mission.

Credit: NASA/JPL

So why is this important, and why did Carl Sagan spend several years researching this stuff?  When tholin sits on the surface of a body where water is also present, it dissolves and forms amino acids, the building blocks of life.

The atmosphere of the young Earth was nothing like the one we enjoy today.  A half billion years after Earth’s formation, the atmosphere was dominated by the outgassing of volcanic activity, and the Earth was covered with a blanket of carbon dioxide and methane.  At the same time, water began to collect on the surface.  These conditions would have been ripe for the formation of tholins on Earth and their eventual breakdown into amino acids in the oceans.  How exactly this could have led to life on Earth is not completely understood at this point.  And that is why astrobiologists are very keen to study places where tholins are naturally present.

To learn more about the search for life in space and how it was formed on Earth, NASA’s Astrobiology website is a good start.  At Cornell University, the recently formed Carl Sagan Institute’s purpose is to continue Sagan’s quest to find life beyond Earth.  Their website can be accessed here.

*Image of Pluto at top of post.  Credit: NASA/JHUAPL/SWRI

Sigh…No, We are Not Headed for a Little Ice Age

A recent report by Professor Valentina Zharkova does not predict a mini ice age, as has been publicized by the popular media.  In fact, nowhere in the RAS press release is that stated.  What Dr. Zharkova does predict is that the solar cycle will come to resemble the Maunder Minimum, a period when there was little solar activity.  During that same era, both Northern Europe and North America suffered the cold weather referred to as the Little Ice Age, which ran from roughly 1300 to 1850.

This prediction is the result of modelling solar activity with two solar dynamos rather than one.  Both dynamos have cycles of activity, and when the two cycles fall out of phase, they cancel each other out.  Zharkova’s model predicts the activity from the two dynamos will cancel between 2030 and 2040.  This is referred to as destructive interference, and an example can be seen below:

Credit: NASA
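To see the cancellation numerically, here is a minimal sketch: two equal-amplitude sine waves, offset by half a cycle, sum to zero everywhere.

```python
import numpy as np

# Two equal-amplitude waves, half a cycle (180 degrees) out of phase.
t = np.linspace(0, 4 * np.pi, 1000)
wave1 = np.sin(t)
wave2 = np.sin(t + np.pi)   # offset by half a cycle

combined = wave1 + wave2
print(f"Largest combined amplitude: {np.abs(combined).max():.2e}")
# -> effectively zero: complete destructive interference
```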

The Maunder Minimum was a period of little solar activity and as a result, very few sunspots were observed on the Sun’s surface:

Credit: NASA

Can the same Little Ice Age climate event be extrapolated if the Sun goes into a similar quiet period?  The answer is no.  Atmospheric conditions on Earth today are much different from those of the nascent Industrial Age.

Credit: NASA

Both carbon dioxide and methane are greenhouse gases.  That is, they trap heat at the Earth’s surface in the same manner a blanket traps body heat on your bed at night.  Since 1750, atmospheric carbon dioxide has increased by roughly 40% and methane by roughly 150%.  And the trend is to keep increasing.  For the sake of argument, if we were to cap carbon dioxide at today’s level of 400 parts per million, what would happen?  Global temperatures would still rise 0.8° C (1.4° F) over the next few decades as the oceans continue to release heat trapped during the prior warming phase.  How much would a return of the solar cycle to Maunder Minimum conditions reduce global temperatures?  The drop would be about 0.1° C globally.  That is not nearly enough to offset the effect of current greenhouse gas levels in the atmosphere, and not nearly enough to offset the most conservative expected increase of 1.0° C (1.8° F) over the next three decades.
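For the curious, here is how those percentages fall out of the concentration numbers.  The pre-industrial baselines below are commonly cited approximate values:

```python
# Percent increases in greenhouse gases since 1750, using commonly
# cited approximate baseline concentrations.

co2_1750, co2_2015 = 280, 400      # parts per million
ch4_1750, ch4_2015 = 720, 1800     # parts per billion

co2_rise = (co2_2015 / co2_1750 - 1) * 100
ch4_rise = (ch4_2015 / ch4_1750 - 1) * 100
print(f"CO2 up about {co2_rise:.0f}%, methane up about {ch4_rise:.0f}%")
# -> CO2 up ~43%, methane up ~150%
```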

This is very poor reporting of Valentina Zharkova’s work.  A return to Maunder Minimum conditions refers to sunspot levels; there is no reason to expect a return of Little Ice Age climate conditions.  The shame of it is, if Zharkova’s predictions hold up, they represent a great discovery in solar physics and will allow more accurate modelling of solar activity and space weather.  That is no small feat, as space weather can damage electrical systems both in space and on Earth.  Those are the true ramifications of this model.

*Image on top of post is a frost fair on the River Thames during the Little Ice Age.

New Horizons Updates

New Horizons is getting close enough to detect geological features on the surface of Pluto.  Below is an image released on July 10th.

Credit: NASA

Craters are an indication of the age of the surface of a planet (or in this case, a dwarf planet).  The more craters there are, the older the surface.  The fewer craters there are, the younger the surface, which indicates active geological processes at work.

Let’s take a look at a celestial body we are familiar with: the Moon.

Credit: NASA/Sean Smith

The bright, highly cratered areas are referred to as the highlands.  The surface age here ranges from 4 to 4.5 billion years old.  The darker, less cratered areas are referred to as mare regions.  These surfaces are about 3.5 billion years old.  Why are the mare regions younger than the highlands?  The mare regions were formed by large impact events that caused lava to flood these areas and eventually solidify into basaltic rocks.  This process wiped out the original cratered surface that existed before the impact events.

The Earth’s surface has very few craters due to a variety of geological processes.  This would include plate tectonics along with both wind and water erosion.  One crater that has been preserved, so far, is the Meteor Crater in Arizona.

Credit: USGS/D. Roddy

The dry climate (and lack of water erosion) has preserved this crater since the impact that created it 50,000 years ago.  Keep in mind, that is very young geologically speaking.  The Arizona Meteor Crater is only 0.0014% the age of the lunar mare regions.  One can visit the Meteor Crater and details on that are here.

The expectation is that Pluto will resemble Neptune’s satellite Triton, as both are about the same size.  Both are Kuiper Belt objects; Triton is one that was captured by Neptune’s large gravity well.  Voyager 2 visited Triton in 1989 and this is what it saw:

The surface of Triton is lightly cratered and is estimated to be about 10 million years old.  Still pretty young, as that is about 0.3% the age of the lunar mare regions.  Voyager detected geyser-like formations venting nitrogen gas onto the surface.  This type of cryovolcanic activity is suspected to be the responsible party for Triton’s young surface.
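Those age comparisons are simple ratios.  A quick sketch of the arithmetic:

```python
# Surface ages as a fraction of the lunar mare age (~3.5 billion years).

mare_age_yr = 3.5e9
surfaces = {
    "Arizona Meteor Crater": 50_000,   # years since impact
    "Triton surface": 10e6,            # estimated surface age
}

for name, age_yr in surfaces.items():
    print(f"{name}: {age_yr / mare_age_yr * 100:.4f}% of the mare age")
# -> Meteor Crater ~0.0014%, Triton ~0.29%, the figures quoted above
```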

How will Pluto compare?  We’ll know next week and in the following months as images continue to download from New Horizons.  Keep in mind, the fewer craters there are, the more active the surface is.

*Image on top of post is an artist’s conception of the New Horizons flyby.  Credit:  NASA.

Teaching About the Confederate Flag

The recent controversy concerning the display of the Confederate flag presents an excellent opportunity for teachers to employ constructivist learning techniques for students to understand the flag’s original intent.  Also, this can provide a good lesson in the value of examining original historical documents rather than relying on interpretations of those documents.  When I took American history in high school, back in the early 80’s, these documents were not readily available for inspection.  The internet now allows students to access these documents with little difficulty.

Four states, South Carolina, Georgia, Mississippi, and Texas, wrote formal declarations as to their causes for seceding from the Union.  Students can access these documents here.  The students can read the documents as a homework assignment in preparation for a follow-up discussion in class.

Themes the teacher can present in the discussion are these:

What was the major cause for Southern states to leave the Union?

Did the assigned documents list any secondary causes?

Did the assigned documents conflict with the students’ pre-existing notions as to why the South seceded from the Union?

Explain what the word seminal means.  Ask your students if the lesson helped them to understand why it is important to review and cite seminal sources, rather than solely rely on secondary sources, for academic work.

The National Archives has several photographs from the Civil War.  A picture of the Confederate flag (seen at the top of the post) flying over Fort Sumter can be accessed here.

Students should be asked: how does this flag differ from the one normally thought of as the Confederate flag?  Why does this flag have only 7 stars?  What is the significance of the date, April 14, 1861, and what happened at Fort Sumter that caused this event?  Why did Confederate battle flags evolve to look different from the one that flew over Fort Sumter?

Finally, the class can discuss how the flag is displayed today.  Does it match the original intent of the flag?  Discuss the difference between a hate group displaying the flag and a historical exhibit of the flag.  Ask your students if they think the individuals who display the flag as a personal statement have inspected the historical documents as the class just did.  If those individuals did read those documents, would it alter their perspective on displaying the Confederate flag?

Going into this exercise, students may have been taught versions of what caused the Civil War that conflict with the historical record.  And they may have learned these alternative versions from the people they trust the most in their lives – family and friends.  If that is the case, it will often take some time for a student to resolve this internal conflict.  In fact, the conflict may not be resolved until after the student has completed the course.  A teacher should be prepared for that.

And that might be the most difficult academic lesson to learn in life: always do your due diligence, no matter how much you trust a person.

Hubble’s Successor and the Man it’s Named After

In 2018, NASA is scheduled to launch the James Webb Space Telescope (JWST). The JWST is the successor to the Hubble Space Telescope. The Hubble, upgraded in 2009, is still producing high quality science. However, it has been in operation for 25 years, and like an old car, will begin to break down sooner or (hopefully) later. It is projected that Hubble will fall back to Earth sometime around 2024.

The JWST will be fundamentally different from the Hubble in three ways: its mirror, its location, and the part of the electromagnetic spectrum it observes.

The Hubble’s primary mirror is a single piece 2.4 meters (8 feet) in diameter. The mirror is made of ultra low expansion glass and weighs 2,400 lbs. This is pretty lightweight; a regular glass mirror the same size would weigh five times as much. The JWST primary mirror will consist of 18 segments with a total weight of 1,375 pounds, measuring 6.5 meters (21 feet) in diameter.

Why are the JWST mirrors so light?

The mirrors for the JWST, rather than being composed of glass, are made of beryllium. This substance (mined in Utah) has a long history of use in the space program, as it is very durable and heat resistant. In fact, the original Mercury program heat shields were made of beryllium. In space, weight is money. Currently, it costs about $10,000 to put a pound of payload into orbit. Since the JWST mirror has 7 times the area of the Hubble mirror, a lighter material had to be found.
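To see where that factor of 7 comes from, and just how much lighter beryllium is per unit area, treat both primary mirrors as simple circles (a simplification, since the JWST mirror is really 18 hexagonal segments):

```python
import math

# Compare the light-collecting areas of the two primary mirrors,
# treating both as simple circles (JWST's is really 18 hexagons).

hubble_d_m, hubble_lb = 2.4, 2400
jwst_d_m, jwst_lb = 6.5, 1375

hubble_area = math.pi * (hubble_d_m / 2) ** 2
jwst_area = math.pi * (jwst_d_m / 2) ** 2

print(f"Area ratio (JWST/Hubble): {jwst_area / hubble_area:.1f}")
# -> about 7.3, the "7 times the area" quoted above
print(f"Hubble: {hubble_lb / hubble_area:.0f} lb/m^2, "
      f"JWST: {jwst_lb / jwst_area:.0f} lb/m^2")
# -> beryllium cuts the weight per square meter by more than a factor of 10
```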

Beryllium itself is a dull gray color. The mirrors will be coated with gold to reflect the incoming light back to the secondary mirror, to be focused into the JWST instrument package.  The choice of gold was not for aesthetic purposes; rather, gold is an excellent reflector of infrared light, and that is key to the JWST mission.  The total amount of gold used is a little over 1 1/2 ounces, worth roughly $2,000, a minute fraction of the JWST’s $8.5 billion budget (about the same price tag as an aircraft carrier).

The final assembly of the primary mirror will take place at the Goddard Space Flight Center in Maryland. The contractor for the assembly is ITT Exelis, which was formerly a part of Kodak and is still based in Rochester, NY.

The JWST will launch in 2018 on an Ariane 5 rocket from the ESA launch facility in French Guiana. This is near Devil’s Island, the site of the former penal colony featured in the film Papillon. Its location near the equator provides a competitive advantage over the American launch site at Cape Canaveral. The closer to the equator, the greater the eastward push a rocket receives from the Earth’s rotation. In Florida, the Earth’s rotational speed is 915 mph; at French Guiana, it is 1,030 mph.  That extra 115 mph allows a launch vehicle to lift more payload into orbit.
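The eastward boost falls off as the cosine of latitude, so you can estimate both figures yourself.  The site latitudes below are approximate:

```python
import math

# Eastward speed of Earth's surface due to rotation, as a function
# of latitude. Site latitudes are approximate.

EQUATOR_CIRC_MI = 24_901       # Earth's equatorial circumference, miles
DAY_HR = 23.934                # sidereal rotation period, hours

def surface_speed_mph(lat_deg):
    # Surface speed falls off as the cosine of latitude.
    return EQUATOR_CIRC_MI / DAY_HR * math.cos(math.radians(lat_deg))

print(f"Cape Canaveral (~28.5 N): {surface_speed_mph(28.5):.0f} mph")
print(f"Kourou, French Guiana (~5.2 N): {surface_speed_mph(5.2):.0f} mph")
# -> close to the 915 and 1,030 mph figures quoted above
```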

Even if the shuttle program were still active, it could not have been used to lift the JWST into space as it was for the Hubble. The Hubble sits in orbit 350 miles above the Earth, which was the upper end of the shuttle’s range. The JWST will be placed 1,000,000 miles from Earth at a spot known as the L2 Lagrange point.  What is the L2 point?  Think of the launch of the JWST as a golfer’s drive.  The interplay between the Earth and Sun produces gravitational contours, as seen below:

Credit: NASA / WMAP Science Team

The gravitational contours are like the greens on a golf course.  The arrows are the direction gravity will pull an object.  The blue areas will cause the satellite to “roll away” from a Lagrange point.  Red arrows will cause the satellite to “roll towards” the desired destination.  Kind of like this shot from the 2012 Masters:

The L2 spot is not entirely stable.  If the JWST moves towards or away from the Earth, its operators will need to make slight adjustments to move it back towards the L2 spot.  Due to this placement, the JWST will not have the servicing missions the Hubble enjoyed. The specifications of the JWST must be made correctly here on Earth before launch.

Why does the JWST need to be so far away from Earth?

The answer lies in the part of the electromagnetic (EM) spectrum the telescope will observe.  Don’t be put off by the term electromagnetic; as we’ll see below, you are already familiar with most parts of the EM spectrum.

Credit: NASA

The word radiation tends to be associated with something harmful, and in some cases, it is.  However, radio and light waves are also forms of EM radiation.  What differentiates one form of radiation from another is its wavelength.  Cool objects emit mostly long wavelength, low energy radiation.  Hot objects emit short wavelength, high energy radiation.  The JWST will observe in the infrared.  And this is a result of the objects the JWST is designed to detect.

The JWST will search the most distant regions of the universe.  Due to the expansion of the universe, these objects are receding from us at such a rapid rate, their light is red-shifted all the way into the infrared.  Planets also emit mostly in the infrared as a consequence of their cool (relative to stars) temperatures.    The infrared detectors on the JWST will enable it to study objects in a manner that the Hubble could not.
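Two quick numbers make the point.  The first uses Wien’s law, which says an object’s peak emission wavelength scales inversely with its temperature; the second applies the redshift formula to a made-up early-universe galaxy:

```python
# 1) Wien's law: peak emission wavelength = b / T.
WIEN_B_UM_K = 2898.0    # Wien's displacement constant, micrometer-kelvins

for name, temp_k in [("Sun", 5800), ("room-temperature planet", 300)]:
    print(f"{name} ({temp_k} K) peaks near {WIEN_B_UM_K / temp_k:.1f} um")
# -> Sun ~0.5 um (visible light), planet ~9.7 um (infrared)

# 2) Redshift: observed wavelength = emitted wavelength * (1 + z).
lyman_alpha_um = 0.1216     # ultraviolet when emitted
z = 10                      # an illustrative early-universe redshift
print(f"Redshifted Lyman-alpha: {lyman_alpha_um * (1 + z):.2f} um")
# -> ~1.3 um, firmly in the infrared
```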

The L2 location allows the JWST to be shielded from the Earth, Moon, and Sun all at the same time.  This prevents those bright sources of EM radiation from blotting out the faint sources of infrared that the telescope is attempting to collect.

The video below from National Geographic provides a good synopsis of the JWST.

So, who was James Webb? And why did NASA name Hubble’s successor after him?

The short answer is that James Webb was NASA Administrator during the Apollo era. Given that Apollo may very well be NASA’s greatest accomplishment, that alone might be enough to warrant the honor. However, Webb’s guidance during NASA’s formative years was also instrumental in commencing the space agency’s planetary exploration program. To understand this, let’s take a look at John Kennedy’s famous “we choose to go to the Moon” speech at Rice University on September 12, 1962.

During that speech, President Kennedy not only provided the rationale for the Apollo program, but also stated the following:

“Within these last 19 months at least 45 satellites have circled the earth. Some 40 of them were made in the United States of America and they were far more sophisticated and supplied far more knowledge to the people of the world than those of the Soviet Union.

The Mariner spacecraft now on its way to Venus is the most intricate instrument in the history of space science. The accuracy of that shot is comparable to firing a missile from Cape Canaveral and dropping it in this stadium between the 40-yard lines.

Transit satellites are helping our ships at sea to steer a safer course. Tiros satellites have given us unprecedented warnings of hurricanes and storms, and will do the same for forest fires and icebergs.”

It has to be noted here that soaring rhetoric notwithstanding, Kennedy was not exactly a fan of spending money on space exploration. At least not to the extent the Apollo program demanded. Kennedy felt the political goal of beating the Soviet Union to the Moon trumped space sciences.  Nonetheless, you can see the origins of NASA’s planetary & Earth sciences programs along with applications such as GPS in Kennedy’s speech. So how does James Webb fit into all this?

When tapped for the job as NASA administrator, Webb was reluctant to take the position. Part of it was his background as Webb was a lawyer. He was also Director for the Bureau of the Budget and Under Secretary of State during the Truman Administration. Webb initially felt the job of NASA Administrator should go to someone with a science background. However, Vice President Lyndon Johnson, who was also head of the National Space Council, impressed upon Webb during his interview that policy and budgetary expertise was a greater requirement for the job.

That background paid off well when dealing with both Presidents Kennedy and Johnson. As NASA funding increased rapidly during the early 1960’s, there was great pressure to cut space sciences in favor of the Apollo program. Webb’s philosophy on that topic was this: “It’s too important. And so far as I’m concerned, I’m not going to run a program that’s just a one-shot program. If you want me to be the administrator, it’s going to be a balanced program that does the job for the country that I think has got to be done under the policies of the 1958 Act.”

The 1958 Act refers to the law that founded NASA and stipulated a broad range of space activities for the agency to pursue.  The law can be found here.

NASA’s percentage of total federal spending during the 1960’s is shown below:

Credit: Center for Lunar Science and Exploration

NASA has never obtained that level of funding since. Most of it was earmarked to develop and test the expensive Saturn V launch vehicle. And the President often applied pressure on Webb to scale back or delay NASA’s science program to meet Apollo’s goal of landing on the Moon before 1970. The video below is a recording of one such meeting between Kennedy and Webb.

Webb’s law background served him well in making the case for a balanced NASA agenda.  Despite pressure of the highest order, Webb was able to guide Apollo to a successful conclusion and build NASA’s science programs as well.  The latter included the Mariner program, which conducted flybys of Mercury, Venus, and Mars.  Mariner 9 mapped 70% of Mars’ surface, and Mariners 11 & 12 eventually became Voyagers 1 & 2, humanity’s first venture beyond the Solar System.

Quite a legacy for a non-science guy.

This also demonstrates you do not necessarily have to have a science/engineering background to work in the space program.  Take a gander at NASA’s or SpaceX’s career pages and you will find many jobs posted for backgrounds other than science.  As James Webb proved, it takes more than science to study the universe.

*Image at top of post is JWST mirror segment undergoing cryo testing.  Credit:  NASA.

Pluto & New Horizons

When I was in grade school, I designed a crewed mission to Pluto and dubbed it Hercules, obviously taking a cue from the then recent Apollo program. The ship itself was armed with laser banks. Not sure what exactly I was expecting to run into out there, perhaps just Cold War paranoia. The crew was also top heavy in security personnel. As anyone who watches Star Trek can tell you, just like pitchers in baseball, you can’t have enough redshirts on a space mission. The mission was planned to go in the year 2002.

It’s really funny to see the ideas one can conjure when you do not have to worry about budgets, research, and politics. The people at NASA who do have to worry about that stuff are unable to send humans that far, but they have pulled off a most excellent mission in New Horizons, which will fly by Pluto, making its closest approach on July 14th.  An added bonus: with the advent of social media, we will get to see the images from this mission almost in real time.

The mission was put together at a cost of $700 million.  That is the same amount spent in Colorado on marijuana during its first year of legalization.

This particular mission has a personal tie for me in that it was launched on January 19, 2006, the very same week I started teaching my first astronomy class. Next fall, in my 10th year of teaching, I will finally be able to discuss the New Horizons images of Pluto themselves rather than present them as something to look forward to. An animation of New Horizons’ voyage to Pluto is below. You’ll note that New Horizons performed a flyby of Jupiter to receive a gravity boost towards Pluto.

The gravity boost from Jupiter in 2007 shortened the journey to Pluto by three years. The flyby of Jupiter also provided a test run for New Horizons’ imaging equipment, and the results were impressive. The video below shows New Horizons’ look at the rotation of Jupiter.

New Horizons also took this shot of a volcanic plume on Io, which is the most volcanically active body in the Solar System. This activity is generated by gravitational flexing of Io as it is stretched back and forth by Jupiter, Europa, and Ganymede. This is similar to the heat caused by stretching a putty ball back and forth.

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

The image above of Io was taken from 1.5 million miles away.  While Pluto is only 60% the size of Io, New Horizons will approach much closer at 6,200 miles and should provide exceptional image quality.

A lot has happened to Pluto itself over the past decade. Not so much Pluto, but rather our perception of it. Of course, when New Horizons was first proposed in 2001, Pluto was still classified as a planet. It is now referred to as a dwarf planet. Over time, as memory of Pluto as a planet fades, I suspect this will eventually be changed to simply a Kuiper Belt object (KBO).

The reclassification was portrayed in the popular media as a demotion for Pluto. It really was not so much a demotion as it was an expansion of our understanding of the nature of Pluto and the Solar System.  For those of us who went to grade school before the reclassification, we were introduced to the planets with a diagram such as this:

Credit: Rice University

An updated version of this diagram looks like this:

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute/Alex Parker

The yellow line is the path of the New Horizons probe.  The yellow dots?  Those are Kuiper Belt objects, of which Pluto is one.

Pluto’s classification as a planet was shaky from the start.  Pluto’s orbit is inclined much more than the other eight planets and its composition is unlike the four gas giants which occupy the outer Solar System.  Questions about Pluto’s planet classification were raised only a few months after its discovery by Clyde Tombaugh of the Lowell Observatory, as this New York Times article from April of 1930 indicates.

However, Pluto’s size was thought to be much larger at the time than it actually is and that caused the planetary classification to stick.  Gerard Kuiper himself, as late as 1950, calculated Pluto to be about the same size as Earth.  In fact, it was this overestimate of Pluto’s size that caused Kuiper to predict the following year there would not be what we now call the Kuiper Belt.  It’s a bit ironic that the Kuiper Belt is named after the astronomer who predicted its non-existence, as Kuiper felt Pluto would have cleared out that region of the Solar System during its formation.

Nonetheless, Kuiper had a distinguished career that included the discovery of Titan’s atmosphere, carbon dioxide in Mars’ atmosphere, and the Uranus satellite Miranda.  Kuiper played a key role as mentor to Carl Sagan during the 1950’s as well.  Unlike most astronomers at the time, Kuiper felt there was an abundance of planets outside the Solar System.  In turn, this inspired Sagan to explore that along with the possibility of life beyond Earth.  This was mentioned prominently during Ann Druyan’s remarks at the recent inauguration of the Carl Sagan Institute at Cornell University.

The Kuiper Belt is a region of the Solar System past the orbit of Neptune that is thought to contain thousands of small celestial bodies composed of water ice, methane, and ammonia.  Short period comets originate from this region.  An excellent overview on the Kuiper Belt can be found here.

The first Kuiper Belt object discovered besides Pluto came in 1992.  Since then, some 1,300 Kuiper Belt objects have been observed.  This, along with more precise measurements of Pluto’s mass, which have come in at about 0.2% of Earth’s, has resulted in the reclassification of Pluto to its present dwarf-planet designation.  Pluto is simply too small to be considered a planet, and its placement in the Solar System puts it among the other Kuiper Belt objects.

Does this reclassification affect Clyde Tombaugh’s legacy as the discoverer of Pluto?  I think not.  Consider this: Tombaugh discovered a Kuiper Belt object 62 years before the next such object was observed.  Tombaugh also discovered Pluto six years before earning his bachelor’s degree at the University of Kansas.  Keep in mind, the discovery of Pluto would have been a suitable topic for a Ph.D. thesis.  Tombaugh’s legacy is quite safe.  In fact, a portion of Tombaugh’s ashes are aboard the New Horizons probe and will fly by Pluto along with the spacecraft.

The flyby of Pluto next July may very well represent a once-in-a-lifetime opportunity to observe Pluto this close.  No other missions are in the proposal stage at this time, and given the travel time to Pluto, it will be at the very least 15-20 years before another mission arrives in that part of the Solar System.  NASA has just released the first color image of Pluto and its moon Charon (below).  I consider myself very fortunate to be able to witness the culmination of this 15-year effort.

Pluto and Charon orbit a shared center of gravity. Credit: NASA

*Image on top of post is the Pluto discovery plates.  Credit:  Lowell Observatory Archives.


Minimum Wage & Unemployment: Confusing Micro and Macro

The recent movement to raise the minimum wage to $15.00/hr. has brought out the usual dire warnings that this will cause a significant increase in unemployment and lock out low wage workers from obtaining jobs. Intuitively, this seems to make sense. When one looks at this scenario from the viewpoint of a business owner, the common sense outcome is you would have to offset the increase in costs by reducing staff.

Classic demand and supply analysis of the labor market from Econ 101 would seem to confirm this as you can see below:

Wages set above equilibrium create a surplus of labor – unemployment. Image: Wiki Commons.

Yet, the evidence is clear that unemployment does not rise with an increase of the minimum wage. Why should this produce such a counter-intuitive result? The key to the answer lies in the fundamental differences between microeconomics (the study of individuals and firms) and macroeconomics (the study of the economy as a whole), as well as a more complex model of labor markets developed over the past few decades, referred to as efficiency wage theory.

First, let’s take a look at that classic Econ 101 model of labor markets, since this is how the issue is most often debated in the popular media and among the general public.

Models of micro units in the economy are open systems. Take an employer, for example: income flows into the employer from an outside entity (customers). Likewise, spending flows out to entities beyond the employer in the form of wages.  The argument against raising the minimum wage sees the cash flow out increasing without an increase in the cash flow in.  Hence, staff is reduced to offset this outflow.

An employer is an open system. The system is “permeable” as cash flows in and out of the system boundary.

The same does not hold for a national economy as a whole. Why? A macro unit, such as a nation, is a closed system. As Paul Krugman says, in this scenario everybody’s spending is someone else’s income. If spending drops overall in a macro unit, then income must necessarily drop as well.  This is the cause of business cycles.  An example of a closed system is below. The three major components of GDP are consumption, investment, and government spending. Unlike a household or business, there is no income flowing in from outside the system.  As we’ll see, modulating these business cycles is more important than minimum wage laws in reducing unemployment for low wage workers.

The boundary around a closed system is “impermeable”. Cash flows remain inside the system and do not leak out.

As noted earlier, the classic micro model would indicate that an increase in the minimum wage forces employers to reduce staff and also increases the available pool of labor, as higher wages induce more people to look for a job.  In this model, an increase in the minimum wage can represent a transfer of wealth from employers to employees, which is the real cause of the political friction on this issue.  Framed this way, it directly pits employees against employers.
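Here is the textbook model in miniature, with completely made-up coefficients, just to show where the predicted surplus comes from:

```python
# A toy version of the Econ 101 labor market with linear demand and
# supply. All coefficients are invented for illustration only.

def labor_demanded(wage):
    return max(0.0, 100 - 4 * wage)   # firms hire less as wages rise

def labor_supplied(wage):
    return max(0.0, 10 + 5 * wage)    # more people seek work as wages rise

# Equilibrium: 100 - 4w = 10 + 5w  ->  w* = 10
for wage in (10.0, 12.0):             # 12.0 acts as a wage floor above w*
    surplus = labor_supplied(wage) - labor_demanded(wage)
    print(f"wage {wage:.0f}: surplus of {surplus:.0f} workers")
# -> zero surplus at equilibrium; a positive surplus (unemployment,
#    in this model) once the floor is set above it
```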

Is this transfer of wealth fair?

In the late 1800’s, Alfred Marshall made the great defense of capitalism against the growing socialist movement. Marshall postulated that increased worker productivity would result in increased wages, and that this was the key to reducing the great poverty of the time. How to increase productivity? Marshall proposed an expansion of expenditures on public education. He also recognized the productivity gains acquired from spillover knowledge. That is, less experienced workers increase their productivity by working with more experienced workers.

The classical economic model derived by Marshall (and others) suggested that workers’ wages are commensurate with productivity in a free labor market. This model makes a few assumptions, among them:

Information is symmetric. That is, both employers and employees have the same knowledge of the existing labor market.

The labor market is competitive to the point where neither an individual employer nor employee can affect the wage rate.

The economy is always at full capacity.

To paraphrase Harry Callahan, a good economic model has got to know its limitations.

How close to reality are these assumptions? A good diagnostic is to compare productivity gains with labor costs (wages and benefits). If the model is correct, both these variables should match. What does the data show?

Below is a comparison between annual increases in worker productivity and real hourly compensation (which accounts for both wages and benefits) since 1979:

Data Source: St. Louis Federal Reserve

For the most part, compensation lags behind productivity.  Only periodically has compensation matched productivity, notably in the mid-1980’s and late 1990’s.  Hence, the economy usually operates at less than full capacity.  Since the late 1990’s, compensation has seriously lagged behind productivity.  This represents a transfer of wealth from labor to employers not predicted by classical micro models of the economy.  Consequently, the defense of free labor markets as a means to reduce poverty breaks down.  How to change that?

It helps to use real-world case studies rather than works of fiction.

One thing to avoid is fear of a supply shock from employers “going Galt” and downsizing or quitting their businesses in a snit over increasing wages.  During the period between 1947-79, while unions were at their peak, wages kept up with productivity gains.  The result was an expanding middle class, and business did just fine.  Employers might resent an increase in the minimum wage but, on an aggregate scale, will not have an incentive to downscale their businesses unless wage increases surpass productivity increases for a sustained period of time.  If that did not happen in the post-World War II period, it’s not likely to happen now.

One of the major critiques of the minimum wage law is that it locks entry level workers out of the job market by keeping wages above the market equilibrium level.  However, the main driver of unemployment for teenagers (by definition entry level employees), as with any section of the population, is the business cycle seen below:


The impact of minimum wage increases is dwarfed by the effect of the business cycle on unemployment.

Here is where macroeconomics comes into the picture.  Over the past 35 years, teenage unemployment topped 20% three times, all during recessions.  The Great Recession, created by the 2008 financial crisis, produced a teenage unemployment rate of over 25%.  The first step in creating job opportunities is to modulate the business cycle in a manner to avoid steep recessions.  A combination of New Deal banking regulations and appropriate monetary/fiscal policy was successful in this regard from 1947-1972.

It doesn’t make sense to oppose minimum wage laws as a means to decrease teenage unemployment if one is also opposed to employing monetary and fiscal policy to moderate business cycles.  The first priority in this direction is to regulate the financial sector so that the risk of banking crises is reduced.  It is financial crises that cause periods of severe unemployment lasting 3-5 years, sometimes longer.  Prior to World War II, financial panics induced multi-year depressions in 1857, 1873, 1893, and 1929.  The last recession was not an isolated event, but a natural consequence of an unregulated financial sector.  Younger workers are significantly at risk of long-term unemployment during these events.

An additional step is to index the minimum wage to the inflation rate.  The minimum wage topped out at $10.69/hr (in 2013 dollars) in 1968 and has steadily eroded since.  Overall unemployment was 3.6% that year, with a teenage unemployment rate of 11-12%, a further indication that the business cycle is a greater determinant of teenage unemployment than the minimum wage.  Indexing the minimum wage to productivity increases should also be considered.  Paying for production is a reasonable proposition.

The perfect labor market model as presented by the demand and supply graph at the top of this post is an abstract concept.  That model relies on assumptions that cannot be fully realized in the real world.  Think of it as the economic version of the Carnot engine, which represents the theoretical limit of engine efficiency.  You cannot build such an engine in the real world, as it relies on a cycle that does not lose heat to friction.  Likewise, it is impossible to build a perfect labor market in the real world, as efficiency is lost to frictions such as incomplete information and an economy operating at less than full employment.
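For reference, the Carnot limit is a one-line formula.  The reservoir temperatures below are arbitrary examples:

```python
# The Carnot limit: the best possible efficiency of a heat engine
# running between a hot and a cold reservoir (temperatures in kelvins).

def carnot_efficiency(t_cold_k, t_hot_k):
    return 1 - t_cold_k / t_hot_k

print(f"Maximum efficiency, 300 K to 900 K: {carnot_efficiency(300, 900):.0%}")
# -> 67%; real engines fall short of even this ceiling, just as real
#    labor markets fall short of the frictionless ideal
```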

For an economist to claim free labor markets are efficient to the point where labor receives rising compensation with rising productivity is the same as an engineer who claims to have built a 100% efficient engine.

We need to realize labor is not the equivalent of widgets.  Classic demand and supply curves oversimplify human behavior in the labor market.

However, a new, more complex theory of labor markets has emerged over the past few decades that merits real consideration, as its predictions coincide with real-world observations.  This is efficiency wage theory.  The theory predicts that employers have a menu of wages to pick from rather than a single market wage.  The tradeoffs involve low wages, low productivity, and high employee turnover vs. high wages, higher productivity, and low turnover.

Let’s take a look at how employers have responded to increases in the minimum wage. For starters, layoffs are not the primary, or even a significant, reaction. A variety of strategies are employed to offset the higher cost of labor. One is to train employees in a manner that boosts productivity. Another is to reduce costly employee turnover. Also, some employers elect to reduce profits to cover the cost.  These real responses to minimum wage increases are more reflective of the efficiency wage concept than of the classic single wage demand and supply model.

New entrants in the labor market need to acquire social connections to move into higher wage brackets, even among jobs at the same skill level.

A 1984 survey paper by current Fed chair Janet Yellen noted that efficiency wage theory predicts a two-tiered workforce.  One tier is a high wage workforce where jobs are obtained mostly via personal contacts.  Employers have a comfort level with employees they personally know and do not feel the need to go through expensive vetting and monitoring processes.  The other tier is a low wage, highly monitored workforce with a lot of turnover.  Sound familiar?  The latter seems to describe the low paid contract/temp workers who often perform the same functions as higher paid permanent employees at the same company.  Here again, the efficiency wage model seems to trump regular demand and supply.

It does appear to be time for policy makers to incorporate this more sophisticated model when addressing low wage workers and the chronically high unemployment rates in various demographic groups of the workforce.  In particular, there is a need to promote a way for low wage workers to make the leap into the high wage sector.  As noted in Yellen’s paper, ability and education are not enough; social connections play a key role in obtaining a high wage job.  This, combined with proper fiscal/monetary policy, offers the best hope for lifting individuals out of poverty.

It is often said that anyone who has taken Economics 101 will understand that raising the minimum wage causes unemployment to increase.  The efficiency wage model is a bit beyond Econ 101, probably at the level of an intermediate course.  And that’s perfectly fine.  If you are diagnosed with a serious illness, do you want to be treated by a doctor who went through med school, or by one working from Bio 101?  The same holds true in economics.

*Image on top is the Chicago Memorial Day Massacre of 1937.  Ten striking workers were killed, giving momentum to the Fair Labor Standards Act of 1938, which mandated the first federal minimum wage.  Photo:  U.S. National Archives and Records Administration.

Climate Change: Global vs. Local

As we conclude another month of unusual weather in my hometown (Buffalo, NY), I thought it would be a good time to take a look at how climate change can be measured locally.  Coming off the heels of the coldest month in Buffalo history in February, May featured an average temperature 5.7° F warmer than normal, including nine days over 80 degrees.  This month was on pace to be the 5th driest May in Buffalo history, but 2 1/2 inches of rain on May 31st more than doubled the precipitation experienced over the first thirty days of the month.

Although our lives revolve around daily fluctuations in weather, the best way to determine how global warming may influence local climate is to examine annual temperatures, as that measure smooths out noise and gives a good read on long-term trends.  The global temperature history is below:

One degree Celsius = 1.8 degrees Fahrenheit. Source: http://climate.nasa.gov/

A few key trends to notice: the post-World War II cooling was likely a result of the global economy recovering after the Great Depression.  An increase in atmospheric aerosols caused by industrial emissions blocked sunlight during that period, which helped to offset greenhouse gas heating.  Aerosols are particles suspended in the atmosphere, rather than a gas such as carbon dioxide.

Buffalo Bethlehem Steel plant, 1970’s. Photo: George Burns, courtesy EPA

That era ended with the environmental regulations put into effect during the 1970’s.  The drop in aerosols prompted global temperatures to rise after 1980.  A slight cooling trend took place in the early 1990’s, precipitated by the Mt. Pinatubo eruption in 1991.  Large volcanic eruptions can eject sulfuric aerosols into the stratosphere and cause global cooling on a scale of 2-3 years.  The most famous example of this was the Year Without a Summer in 1816, produced by the eruption of Mt. Tambora.  After that brief period of cooling, global temperatures began to march upwards again.  This period included the 1998 El Nino warming spike, and nine of the ten warmest years on record, which have occurred after 2000.

So how does Buffalo match up to all this?  Let’s take a look below:

Temperature in Fahrenheit. Data from http://www.weather.gov/buf/BUFtemp

We see a replication of the global progression of temperature change.  The post-World War II cooling can clearly be seen from the 1950’s through 1979.  The 1970’s saw the most infamous weather event in Buffalo history, the Blizzard of ’77.  Temperatures began to warm after 1980, just as happened globally.  The short-term cooling effect from Mt. Pinatubo is clearly observed in the early 90’s (the summer of 1992 was very cool and wet), along with the (then) record breaking warmth from the 1998 El Nino event.

1998 El Nino event.  Image:  NASA

Despite the noise from short-term climate forcings, the warming trend from 1980 to the present is plainly visible.  From 1940 to 1989, only two years clocked in with an average temperature over 50 degrees.  The period from 1990 to 2014 featured six such years, including 2012, which shattered the 1998 record by 1.2° F.  That might not seem like a lot, but the prior record was only 3 degrees above normal.  In other words, if climate models are correct and temperatures increase by more than 4 degrees, the average year will be hotter than the hottest year on record in Buffalo.

What I find really interesting is the greater variation of annual temperatures after 1990, as the yearly figures fluctuate widely around the moving average.  Also, even though the winter of 2014 was very cold in its own right, the temperature for the year was still higher than in many years during the 1970’s.  The trend over the past three decades has clearly been increasing temperatures.  Is this widening fluctuation in temperatures due to greenhouse gas induced climate change?  The answer is uncertain, and as many a science article has concluded, more research needs to be done in this area.
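For anyone who wants to reproduce that moving average on their own station data, the mechanics are a one-liner.  The temperatures below are invented placeholders, not Buffalo data:

```python
import numpy as np

# Smooth a series of annual mean temperatures with a simple moving
# average. The values here are invented placeholders.

annual_temps_f = np.array([47.1, 48.9, 46.5, 49.8, 47.7, 50.2, 48.4, 51.0])
window = 5

smoothed = np.convolve(annual_temps_f, np.ones(window) / window, mode="valid")
print(np.round(smoothed, 2))
# Each output point is the mean of five consecutive years, damping the
# year-to-year swings so the underlying trend stands out.
```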

What we do know is that we have to prepare to protect our regional economy against the influence of climate change.  Decreasing lake levels will make hydroelectric production more expensive and lower the cargo capacity of lake freighters.  General Mills, a major local employer (whose plant spreads the aroma of Cheerios downtown), has recognized the dangers climate change poses to its supply chain and has a formal policy to mitigate its role in releasing greenhouse gases.  The next decade will be key in slowing climate change and will require crucial policy choices at both the regional and national level.

General Mills (right) and renewable windmill power take over the old Bethlehem Steel site (upper right). Photo: Gregory Pijanowski 2014.

*Image on top of post is Buffalo from Sturgeon Point 15 miles south on Lake Erie.  Just wanted to demonstrate it does not snow here 12 months out of the year.

Transforming the Discussion on Race

As America has been having one of its periodic discussions about race these past few weeks, it is time to consider how educators can transform the conversation. When we talk about race, we are discussing skin color, which is determined by the amount of pigmentation in one’s skin; more specifically, by a polymer (a repeating chemical pattern that forms a very large molecule) referred to as melanin.  The first time I heard of melanin was not in a biology class, but from Richard Pryor during a Tonight Show appearance.  Pryor discussed all kinds of humorous situations the high melanin level in his skin had caused.  Not really funny; it was only Pryor’s extraordinary comedic talent that made it seem so.

I know I am not alone in lacking a formal education as to what causes human skin to appear in various colors.  And rest assured, there are plenty of disreputable sources of information in American society to fill the void.  If we are going to have a reasonable discussion about race, it is time we educate ourselves about what exactly race is.

The more melanin one has in their skin, the darker the skin will appear. This polymer also determines hair and eye color. Individuals with a high degree of melanin will have brown eyes; less melanin results in blue/green eyes. The same relationship holds with hair: the more melanin in one’s hair, the darker it is.

Melanin regulates skin color by the process of light absorption.  Light received by the skin is reflected back by a dermis layer below the melanin layer.  As the reflected light passes through melanin, it is absorbed.  The energy of the absorbed light triggers vibration in melanin molecules, and this vibrational energy is then converted to thermal energy and released as heat.  If the melanin level is low, little light is absorbed and skin appears white.  If the melanin level is high, more light is absorbed and skin appears darker.

Eye color is created in the same fashion.  Persons with low levels of melanin will have blue eyes.  The iris scatters light in the same manner water droplets do to create a rainbow; blue light is scattered the most, and that is the color reflected out of the eye.  Persons with high levels of melanin absorb most of that light and, consequently, have darker eyes.

Thus, when we classify human beings by skin color, it is the same as if we classified them by eye or hair color.

Melanin content regulates how skin responds to ultraviolet (UV) exposure, as well as vitamin D production in the body.  Melanin absorbs UV radiation, and having a lot of it protects against UV damage (sunburn), so dark skin is advantageous if you live near the equator.  People with high melanin content require longer exposure to sunlight to produce the necessary amount of vitamin D, while people with low amounts of melanin do not require as much.  Thus, having little melanin is advantageous the farther you live from the tropics.  Indeed, that is how evolution worked out, as this map of skin color distribution from 1500 AD (before modern transportation and mass migration, voluntary and otherwise) demonstrates.

Image: Wiki Commons

The human race originated in Africa. All humans can trace their maternal ancestral roots to a single woman who lived in Africa about 150,000 years ago. All humans can also trace their paternal ancestral roots to a single man who lived in Africa about 60,000 years ago. They were not Adam and Eve in the biblical sense; they were two of many humans alive back then. However, those two individuals are the only ones from their eras whose lineages have survived to the current day.

The DNA of all humans is 99.9% identical. The National Geographic Genographic Project can use these small variations to trace one’s ancestral migration route from Africa.

Below is my maternal migration route (my mother was Irish).  My maternal line migrated out of Africa into the Middle East about 60,000 years ago, moved into West Asia 55,000 years ago, made its way into Western Europe about 22,000 years ago, and finally into what we now know as Ireland around 10,000 years ago.

And here is my paternal migration route (my father is Polish).  My paternal line made a similar migration out of Africa but took a right turn into Central Asia 35,000 years ago, eventually settling into Eastern Europe sometime around 15,000 years ago.


As our ancestors migrated into colder climes, humans with genetic mutations conferring an evolutionary advantage in those climes tended to survive and reproduce over those without them. The end result: those whose ancestors migrated to Northern Europe have low levels of melanin and light skin and eye color. Light skin color is a relatively recent phenomenon, one that recent research indicates emerged about 8,000 years ago.

Note that the genetic mutation that caused lighter shades of skin occurred thousands of years after human migration into Europe.  That’s right: my European ancestors, along with yours if you are white, were black upon their arrival in Europe.  That notwithstanding, skin color is used to classify individuals and justify the cruelest behaviors targeting those deemed to have an improper melanin level in their skin.

How to change that?  I have no illusions that change will come overnight.  It will most probably occur like water wearing away at rock: over time, consistent pressure wears the rock down.  As educators, we must insist that any discussion involving race acknowledge precisely what the true difference between the races is: a minute, skin-deep layer of a polymer called melanin.  And we must insist that those with racist attitudes describe in scientifically rigorous detail how that polymer contributes to the characteristics of the race being disparaged or exalted.

In a way, we need to disrupt the discussion on race in the same manner hi-tech start-ups disrupt existing business models.  Why should we discuss race based on an antiquated social construct used to justify slavery?  Would we consider discussing astronomy as if the Copernican revolution never took place?  Certainly we would regard anyone who did as a crank.  Any call for a discussion on race must be framed in its proper context, or that discussion will be pointless.

Think of a world where people were subjected to slavery due to their eye color, or denied access to education and jobs due to their hair color. Can you imagine a world where Martin Luther King Jr. would have to say, “I look to a day when people will not be judged by the color of their eyes, but by the content of their character”? It sounds like some sort of bizarro world, or a Twilight Zone episode. However, that is the world we live in. If you were to judge someone by eye color the same way some will judge a person by their skin color…it’s the same damn thing.