(Slight) Changes in Latitude

At first glance, Buffalo and New York City would appear to be as different as two cities can be. However, over the past two centuries the two have been connected by the Erie Canal, the Empire State Express that linked Buffalo's Central Terminal with Grand Central Terminal, and the New York State Thruway. Infrastructure joining two cities moves not only people and goods, but ideas. During the late 1800s, Buffalo was a proving ground for many innovative architects who transferred their ideas to the big city. A two-block area in downtown Buffalo has very significant architectural ties to New York City.

Ellicott Square Building, Photo: Gregory Pijanowski

Above is the Ellicott Square Building. You may recognize it as the Ellicott Hotel from the movie The Natural. Built in 1896, it was the largest office building in the world at the time. In its basement was the Vitascope Theater, possibly the first movie theater in the United States.

Advertisement for Vitascope Theater.  Ten cents in 1897 is $2.75 in 2016 dollars. November 7, 1897. Credit: Wiki Commons.

On the marble floor of the Ellicott Square Building are several swastikas. Before Nazi Germany co-opted it, the swastika symbolized good fortune, and it is still used for that purpose in India and Indonesia. The architect of the Ellicott Square Building was Daniel Burnham, who six years later designed this building:

Flatiron Building, 1990, Photo: Gregory Pijanowski

That, of course, is the classic Flatiron Building. The shapes of both buildings were determined by the street layout. While most of Manhattan is laid out as a grid, the Flatiron Building sits where Broadway cuts diagonally across 5th Avenue, necessitating its distinctive shape. Daniel Burnham's architecture firm survives today as Graham, Anderson, Probst and White in Chicago.

Next door to the Ellicott Square Building is M&T Plaza:

M&T Plaza, Photo: Gregory Pijanowski

Do the exterior steel tubing and resulting narrow windows look familiar? The buildings below had the same type of framework:

World Trade Center, 2001. Credit: Jeff Mock, Wiki Commons.

Both M&T Plaza and the World Trade Center were designed by Minoru Yamasaki during the mid-1960s. In each building, the exterior steel columns were intended to carry the load of the building's weight, eliminating the need for interior columns and maximizing floor space. A century before M&T Plaza was built, Abraham Lincoln's funeral train stopped in Buffalo and his body lay in state on the site as some 100,000 mourners filed by to pay their respects.

April 27, 1865 – Lincoln’s funeral cortege. Credit: Buffalo & Erie County Public Library.

Minoru Yamasaki passed away in 1986, and his firm Yamasaki & Associates ceased operations in 2010, a victim of the Great Recession. Of course, we are no longer able to appreciate the World Trade Center in person, but its architect's legacy lives on in Buffalo, which set the stage for his most prominent work.

The 2nd Amendment in the Classroom

The aftermath of another American mass shooting, this time in Orlando, means the gun control debate, along with the interpretation of the 2nd Amendment, is again front and center in the media. How should this be handled in the classroom? The best bet is to allow your students to construct an interpretation by going back to the historical roots of the 2nd Amendment.

Before that is done, I would recommend students be skeptical of initial reports regarding the motives behind a mass shooting. Amid the confusion, the rush to get the story first, and now the need for everyone to get their hot takes in on social media, an awful lot of misinformation gets flung around in the first few days after such an incident. As documented in Dave Cullen's book Columbine, the initial reports that the shooters were part of the goth clique known as the Trench Coat Mafia turned out to be false. In fact, most of the Trench Coat Mafia had graduated the prior year. However, in a classic case of circular reporting, an erroneous statement by a student was repeated throughout the day of the shooting by several media outlets. The truth often takes days, weeks, months, sometimes years to come to light.

That being said, it should be stressed to students that they should build their own interpretations of the 2nd Amendment and not rely on someone else to do it for them. The full amendment has to be analyzed by the class:

“A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”

An excellent start to this exercise is to have the class read the 29th Federalist Paper by Alexander Hamilton and the 46th Federalist Paper by James Madison. These papers, written four years before the Bill of Rights was ratified, form the foundation of this amendment. Before the class embarks on this endeavor, it's a good idea for the students to discuss their current preconceptions of the 2nd Amendment to provide a baseline for how their understanding progresses throughout the lesson.

Title page for Federalist Papers. Credit: Library of Congress.

After the students have read the papers, a class discussion should ensue. I like to compare this to my three stints on jury duty. Some on the jury always wanted to vote right away. It's been my experience that discussing the case first would bring to my attention angles I had not considered. That is likely to be the case here as well, since this is the students' first attempt at reading these documents.

The follow-up discussion should address the following themes:

Do the Federalist Papers address individual self-defense, or do they argue for the right of the states and/or the federal government to form militias?

What was the importance of militias at the time the Constitution was drafted? Do those reasons apply today?

In the era of industrialized warfare, could an armed militia protect the public from a tyrannical government given the asymmetry in firepower?  Examine some recent case studies such as the Soviet Union, Syria, and North Korea.

Are the popular arguments, pro or con, for gun control covered in any sort of context in the Federalist Papers?  Are media commentators knowledgeable on the topic?

And finally, ask your students how the assignment has changed their perceptions of the 2nd Amendment.

Having students construct their understanding of the 2nd Amendment does not mean whatever comes to their minds is to be taken as fact. Their statements should undergo critical review by the rest of the class. The class has to understand that such criticism is not personal, but a quality-control measure on their understanding of the context of the Federalist Papers. As a teacher, you must make clear that any criticism should be based on what was read in the assignment and cannot devolve into ad hominem attacks. The use of such attacks is an admission that the student has lost the argument and did not absorb the reading well enough to participate in the discussion properly.

The teacher, in a sense, acts as a referee during the discussion. The classroom is not intended to be an ideological bubble; students will get plenty of opportunities to experience that in today's society. A conflict of thought and ideas is healthy in the classroom. The teacher should ensure the students' arguments exhibit a solid understanding of the Federalist Papers and are not cherry-picking the readings or taking them out of context. Unlike social media, a student's place in the discussion is earned with reading comprehension and a critical understanding of the material. The loudest voice should not win in the classroom.

History has a certain advantage in that original documents can be understood at the high school level, as opposed to science, where journal articles usually require advanced training to grasp. The internet makes the Federalist Papers easily available to every student, which was not the case when I was in high school. For a controversial topic such as gun control, constructionist learning techniques allow students to build their own understanding rather than rely on an authority figure to do it for them. That is a skill set that will serve your students well in the future.

*Image on top of post is an engraving of the Battle of Lexington.  Credit:  John H. Daniels & Son/Library of Congress.

Planets and Dwarf Planets – What’s in a Name?

The term dwarf planet was introduced in 2006, the same year I started teaching astronomy. It has been a source of confusion for students ever since. The confusion lies not only with the re-designation of Pluto, but also with the fact that an asteroid is considered a dwarf planet while the other four objects so designated lie in the Kuiper Belt outside the orbit of Neptune. The term dwarf planet was something of a compromise for those reluctant to modify Pluto's status as a planet. As the living memory of Pluto as a planet fades, I suspect the term dwarf planet will fade as well. So how should we categorize the objects that lie in our Solar System?

Rather than looking at the shape and size of these objects, I prefer to examine how they were created in the solar nebula that formed the Solar System. When we divide the solar nebula into concentric rings, each section formed a distinct type of object. The solar nebula concept was first hypothesized by Immanuel Kant in Universal Natural History and Theory of the Heavens, published in 1755. As is often the case in astronomy, it took a while for the ability to verify the theory to emerge. In this case, the process spanned more than two centuries.

Part of the holdup was determining whether the Sun, planets, and asteroids are all the same age, as they must be if they formed together in the solar nebula. A model of stellar evolution had to be created, and that required Einstein's relativity theory to explain nuclear fusion. This model puts the Sun's age at 4.6 billion years. The age of the Earth also had to be determined, and given the amount of erosion that takes place on the surface, finding rocks from Earth's early days is difficult. Nonetheless, radiometric (uranium-lead) dating has put the age of zircon crystals found in Australia at 4.4 billion years. This result also matches up with the age of the oldest Moon rocks brought back by the Apollo program. Also during the 1970s, Russian physicist Victor Safronov formulated a modern solar nebula theory against which the evidence could be compared.

The Solar System began when a rotating interstellar cloud containing gas and dust grains began to collapse under its own gravity. The rotation of the cloud caused this collapse to form a disk. The bulk of the matter remained in the center of the solar nebula, and this is where the Sun materialized. Today, the Sun contains 99.8% of the Solar System's mass. There is no difficulty categorizing the Sun as a star, since nuclear fusion occurs in its core. The difficulty comes in the regions of the solar nebula outside the Sun.

The first concentric ring around the Sun is where the inner, rocky planets are located: Mercury, Venus, Earth, and Mars. In this zone, as the Solar System was forming, the Sun kept temperatures warm enough to prevent hydrogen and helium from condensing. As these two elements comprised 98% of the solar nebula, only trace amounts of heavier elements were left to construct planets. This explains the smaller size of the rocky planets. However, they were still large enough to become spherical in shape: the gravity of each planet pulled equally inward from all sides, overcoming the internal mechanical strength of its constituent material and forming a sphere.

At the outer edge of this zone, beyond the orbit of Mars, lies the asteroid belt. During the epoch of the solar nebula, there was enough material here to form a planet. However, Jupiter's gravitational influence caused enough disruption to keep this material from coalescing into a single planet. Today, there is not enough matter here to form a body with the mass of the Moon. Nonetheless, one asteroid, Ceres, was large enough to become spherical in shape. As such, it was designated a dwarf planet in 2006. When discovered in 1801, it was classified as a planet and remained so until the mid-1800s. Here you can see the ephemeral nature of these categories. Ceres was in fact the first object discovered in a belt consisting of over a million asteroids. Once it was understood that Ceres was simply the most visible of a large number of asteroids, its classification was changed. In this, Ceres is very similar to Pluto, but their points of origin make their physical makeup very different.

Ceres, Credit: NASA

Between the orbits of Mars and Jupiter lies what is called the frost line. Beyond the frost line, both heavy elements and hydrogen compounds such as water, methane, and ammonia condense (convert directly from gas to solid). As the hydrogen compound ice particles and rocky material stuck to each other, they eventually grew large enough to gravitationally attract the surrounding hydrogen and helium gas. In this region, the hydrogen and helium were colder than inside the frost line. Colder gas particles move more slowly than hot ones, and this enabled planets outside the frost line to trap huge amounts of hydrogen and helium. Consequently, being outside the frost line allowed the outer planets to grow significantly larger than the inner planets. Thus, the gas giant planets Jupiter, Saturn, Uranus, and Neptune bear little resemblance to Mercury, Venus, Earth, and Mars, as you can see below.
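The capture argument can be made concrete by comparing a gas particle's typical thermal speed with a planet's escape velocity. The short sketch below is only a rough illustration: the formation-region temperatures are assumed round numbers, the masses and radii are present-day values, and the factor-of-six retention criterion is a common rule of thumb rather than a detailed model.

import math

k_B  = 1.38e-23       # Boltzmann constant, J/K
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H2 = 2 * 1.67e-27   # mass of a hydrogen molecule, kg

def thermal_speed(temp_K, particle_mass):
    """Typical (rms) speed of a gas particle at temperature temp_K."""
    return math.sqrt(3 * k_B * temp_K / particle_mass)

def escape_velocity(mass_kg, radius_m):
    """Escape velocity from the surface of a body."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Present-day masses and radii with assumed formation-region temperatures
bodies = {
    "Earth (inside frost line)":    {"mass": 5.97e24, "radius": 6.37e6, "temp": 500},
    "Jupiter (outside frost line)": {"mass": 1.90e27, "radius": 7.15e7, "temp": 150},
}

for name, b in bodies.items():
    v_th  = thermal_speed(b["temp"], m_H2)
    v_esc = escape_velocity(b["mass"], b["radius"])
    # Rule of thumb: a body holds onto a gas for billions of years only if its
    # escape velocity is roughly six times the gas's thermal speed.
    print(f"{name}: H2 thermal speed {v_th/1e3:.1f} km/s, "
          f"escape velocity {v_esc/1e3:.1f} km/s, retains H2: {v_esc > 6 * v_th}")

With these assumed numbers, Earth falls short of the retention criterion for hydrogen while Jupiter clears it easily, which is the point of the frost line argument.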

Terrestrial planets are small rocky bodies with thin atmospheres, while gas giants have small icy/rocky cores surrounded by large amounts of hydrogen and helium gas. Distance between planets not to scale. Credit: Wiki Commons.

The third concentric ring in the Solar System, beyond the orbit of Neptune, is the Kuiper Belt, of which Pluto is a member. Short-period comets such as Halley's are also thought to originate from the Kuiper Belt. These objects differ from the asteroid belt in that they are more icy than rocky in nature. This makes sense, as the Kuiper Belt lies beyond the frost line where hydrogen compounds can condense. Pluto was the first Kuiper Belt object to be discovered, in 1930. It stood alone until 1992, when the second Kuiper Belt object was found. While Pluto is highly reflective, most Kuiper Belt objects are dark, in fact darker than coal. That, along with their small size, makes them difficult to detect. To date, more than 1,000 Kuiper Belt objects have been discovered, giving the Solar System a third zone of objects orbiting the Sun.

Kuiper Belt objects in orange, outer planetary orbits in green. Credit: The Johns Hopkins University Applied Physics Laboratory

Looking at the above image, it is tempting to think the Kuiper Belt objects formed beyond the orbit of Neptune. However, the origins of the Kuiper Belt are still a matter of debate among astronomers. As these are icy bodies, they originated beyond the frost line, but precisely where is uncertain. One theory, called the Nice model, postulates these objects are leftover remnants from the region where the gas giant planets formed and were pushed outward when Neptune's orbit migrated beyond Uranus's. This model explains Kuiper Belt objects with highly elliptical orbits, but not those with circular orbits. As it is estimated that some 200,000 objects exist in the Kuiper Belt, there is quite a bit of discovery and mapping left to do to pin down their origins.

Beyond the Kuiper Belt is the Oort Cloud, where long-period comets (those with orbits lasting thousands of years) are thought to originate. The Oort Cloud consists of trillions of icy bodies ranging from 1 to 20 km across and extends about one light year from the Sun. To date, astronomers have not directly detected the Oort Cloud, but we have observed long-period comets traveling through the Solar System at all angles, indicating an origin in a spherical cloud. As with the Kuiper Belt, models indicate these icy objects formed beyond the frost line near the gas giant planets and were ejected to their current location by the gravity of those planets.

Oort cloud relative to the planets. Credit: ESO

If the solar nebula existed 4.6 billion years ago, how can we prove this theory is correct? We cannot observe the formation of our own Solar System, but we can observe, thanks to Hubble and the next-generation ALMA radio telescope array, stellar and planetary systems forming around other stars. Below is perhaps the most iconic image taken by Hubble, the Pillars of Creation in the Eagle Nebula, located 7,000 light years from Earth. This is a large interstellar gas cloud (the column on the left is four light years long) acting as a nursery for new stars. In fact, ultraviolet radiation from newly born stars eats away at the dust cloud, which gives it its shape.

Credit: NASA, ESA, STScI, J. Hester and P. Scowen (Arizona State University)

Below is an image of a spinning protoplanetary disk in the Orion Nebula, 1,500 light years from Earth. The spinning motion has flattened the dust cloud into a disk shape. The disk contains dust grains that will clump together to form planets, asteroids, and small icy bodies such as Kuiper Belt objects.

A Protoplanetary Disk Silhouetted Against the Orion Nebula
Credit: NASA, J. Bally (University of Colorado) and H. Throop (SWRI)

The next image shows planet creation in progress around HL Tauri, 450 light years from Earth. Taken by the ALMA radio array in Chile, the image shows gaps in a protoplanetary disk cleared of dust by planets forming in the rings. This is direct evidence of star and planet formation that matches the solar nebula theory of how our Solar System formed.

Credit: ALMA (ESO/NAOJ/NRAO)

Getting back to the first point, when thinking about how to categorize Solar System objects, it is best to consider how those objects formed. The term dwarf planet covers objects that originated both inside and outside the frost line in the solar nebula and is of little use here. And, as I mentioned before, it most likely will be discarded as the collective memory of Pluto as a planet fades. It is a transition term, much like minor planet, which was applied to Ceres between its time as a planet and as an asteroid. As such, I do not consider it a good point of emphasis when learning about the Solar System. Instead, I would summarize as follows (a short classification sketch follows the summary):

Objects formed inside the frost line: rocky planets with thin atmospheres and rocky asteroids. Some of these asteroids are large enough to become spherical in shape, but most, such as Eros below, are not.

Credit: NASA/JHUAPL

Objects formed outside the frost line: gas giant planets with small icy-rocky cores and massive atmospheres, and small icy bodies such as Kuiper Belt and Oort Cloud objects. Some, like Pluto, are large enough to form a spherical shape, but most are not.

The Solar System is not static: after these objects formed and the solar nebula was dissipated by the young Sun's solar wind, gravitational perturbations caused many of them to migrate. In our Solar System, the orbital resonance of Jupiter and Saturn caused Neptune to migrate outward, taking the Kuiper Belt objects with it. Around other stars, however, Jupiter-sized gas giants have migrated inward to occupy orbits closer to their host star than Mercury is to the Sun. Over the course of 4.6 billion years, the Solar System has been a dynamic place.
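Here is the classification sketch promised above. The frost-line distance and the formation distances are rough, illustrative values (and, as just discussed, several of these objects likely migrated after forming), so treat this as a classroom toy rather than a catalog.

FROST_LINE_AU = 3.0   # approximate location between Mars and Jupiter (assumed round number)

# (object, rough formation distance from the Sun in AU) -- illustrative values only
objects = [
    ("Earth", 1.0), ("Mars", 1.5), ("Ceres", 2.8),
    ("Jupiter", 5.2), ("Neptune", 20.0), ("Pluto", 25.0),
]

def formation_zone(distance_au):
    """Group an object by where it formed relative to the frost line."""
    if distance_au < FROST_LINE_AU:
        return "rocky (formed inside the frost line)"
    return "icy or gas-rich (formed outside the frost line)"

for name, distance in objects:
    print(f"{name:8s} -> {formation_zone(distance)}")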

Our lifetimes are very small compared to the cosmic time scale, and thus we tend to think of the Solar System as a static system. Nonetheless, we do see migrations of objects whenever a comet pays us a visit from the outer reaches of the Solar System. Classifying objects in the Solar System by their composition allows you to understand how the Solar System formed and what paths those objects took to reach their current locations. And that is more important than worrying about whether a celestial body is a planet or a dwarf planet.

*Image on top of post is sunset over the mountains of Pluto, taken 15 minutes after New Horizons' closest approach. Credits: NASA/JHUAPL/SwRI.

Beware of Outliers

As we digest the run-up to the 2016 presidential election, it can be expected that the candidates will present exaggerated claims to promote their agendas. Often, these claims are abetted by less than objective press outlets. That's not supposed to be the press corps' job, obviously, but it is what it is. How do we discern fact from exaggeration? One way is to be on the lookout for the use of outliers to promote falsehoods. So what exactly is an outlier? Merriam-Webster defines it as follows:

A statistical observation that is markedly different in value from the others of the sample.

The Wolfram MathWorld website adds:

Usually, the presence of an outlier indicates some sort of problem. This can be a case which does not fit the model under study, or an error in measurement.

The simplest case of an outlier is a single data point that strays greatly from an overall trend. An example of this is the United States jobs report from September 1983.

Credit: Bureau of Labor Statistics

In September 1983, the Bureau of Labor Statistics announced a net gain of 1.1 million new jobs. As you can tell from the graph above, it is the only month since 1980 with a gain of more than 1 million jobs. And why would we care about a jobs report from three decades ago? It is often used to promote the stimulative effect of the Reagan tax cuts. When you see an outlier like this being used to support an argument, you should be wary. As it turns out, there is a simpler explanation that has nothing to do, pro or con, with Reagan's economic policy. See the job loss immediately preceding September 1983? In August 1983, there was a net loss of 308,000 jobs. This was caused by a strike of 650,000 AT&T workers, who returned to work the following month.

If you eliminate the statistical noise of the striking workers from both months, you have a gain of over 300,000 jobs in August 1983 and over 400,000 in September 1983. Those are still impressive numbers and require no outlier to exaggerate them. However, it has to be noted that the monetary policy of Fed Chair Paul Volcker, rather than the fiscal policy of the Reagan administration, was the main driver of the economy then. Volcker pushed the Fed Funds rate as high as 19% in 1981 to choke off inflation, causing the recession. When the Fed eased up on interest rates, the economy rebounded quickly, as predicted by standard economic models. So we really can't credit Reagan for the recovery, or blame him for the 1981-82 recession, either. It's highly suspect to use an outlier to support an argument; it's even more suspect to assume the correlation implies causation.
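The adjustment described above is simple arithmetic, and laying it out makes the point clear. The figures below are the rounded numbers quoted in this post, not a re-analysis of BLS data.

# Net monthly payroll changes as quoted above, in thousands of jobs
aug_1983_reported = -308
sep_1983_reported = 1100
strikers = 650     # AT&T workers off payrolls in August, back in September

aug_adjusted = aug_1983_reported + strikers   # about +342
sep_adjusted = sep_1983_reported - strikers   # about +450

print(f"August 1983 adjusted:    {aug_adjusted:+,}k jobs")
print(f"September 1983 adjusted: {sep_adjusted:+,}k jobs")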

To present a proper argument, your data has to fit a model consistently. In this case, the argument is that tax cuts alone are the dominant driver of job creation in the economy. That argument is clearly falsified by the data above, as the 1993 tax increases were followed by a sustained period of job creation in the mid-to-late 1990s. And that is precisely why supporters of the tax-cuts-equal-job-creation argument have to rely on an outlier to make their case. It's a false argument that relies on the fact that, unless you are a trained economist, you are not likely to be aware of what occurred in a monthly jobs report over three decades ago. Clearly, a more sophisticated model with multiple inputs is required to predict an economy's ability to create jobs.

When dealing with an outlier, you have to explore whether it is a measurement error, and if not, whether it can be accounted for with existing models. If it cannot, you'll need to determine what type of modification is required to make your model explain it. In science, the classic case is the orbit of Mercury. Newton's Laws do not accurately predict this orbit: Mercury's perihelion precesses at a rate 43 arc seconds per century greater than Newton's Laws predict. Precession of planetary orbits is caused by the gravitational influence of the other planets, and the orbital precession of the planets besides Mercury is correctly predicted by Newton's Laws. Explaining this outlier was a key problem for astronomers in the late 1800s.
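The size of this outlier can be checked with the standard general-relativistic formula for the extra perihelion advance per orbit, 6πGM/(c²a(1 - e²)). The sketch below plugs in Mercury's orbital elements using standard textbook values.

import math

G      = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_sun  = 1.989e30    # solar mass, kg
c      = 2.998e8     # speed of light, m/s
a      = 5.79e10     # Mercury's semi-major axis, m
e      = 0.2056      # Mercury's orbital eccentricity
period = 87.97       # Mercury's orbital period, days

# General-relativistic perihelion advance per orbit, in radians
dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))

arcsec_per_orbit = math.degrees(dphi) * 3600
orbits_per_century = 36525 / period
print(f"{arcsec_per_orbit * orbits_per_century:.1f} arcseconds per century")  # roughly 43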

At first, astronomers attempted to analyze this outlier within the confines of the Newtonian model. The most prominent of these solutions was the proposal that a planet whose orbit resided inside Mercury's perturbed the orbit of Mercury in a manner that explained the extra precession. This proposed planet was dubbed Vulcan, after the Roman god of fire. Several attempts were made to observe it during solar eclipses and predicted transits of the Sun, with no success. In 1909, William W. Campbell of the Lick Observatory stated that no such planet existed and declared the matter closed. At the same time, Albert Einstein was working on a new model of gravity that would accurately predict the orbit of Mercury.

Vulcan’s Forge by Diego Velázquez, 1630. Apollo pays Vulcan a visit. Instead of having a real planet named after him, Vulcan settled for one of the most famous planets in science fiction.  Credit: Museo del Prado, Madrid.

The general theory of relativity describes the motion of matter in two regimes that Newton could not: near a large gravity well such as the Sun, and at velocities close to the speed of light. In all other cases, the solutions of Newton and Einstein closely match. Einstein understood that if his new theory could predict the orbit of Mercury, it would pass a key test. On November 18, 1915, Einstein presented his successful calculation of Mercury's orbit to the Prussian Academy of Sciences. The outlier was finally understood, and a new theory of gravity was required to do it. Nearly 100 years later, another outlier was discovered that could have challenged Einstein's theory.

Relativity puts a velocity limit on the universe at the speed of light. A measurement of a particle traveling faster than this would, as the orbit of Mercury did to Newton, require a modification to Einstein's work. In 2011, a team of physicists announced they had recorded neutrinos with a velocity faster than the speed of light. The OPERA (Oscillation Project with Emulsion-tRacking Apparatus) team could not find any evidence of a measurement error. Understanding the ramifications of this conclusion, OPERA asked for outside help in verifying the result. As it turned out, a loose fiber optic cable in the experiment's timing system introduced a delay that resulted in the measurement error. Once the cable was repaired, OPERA measured the neutrinos at their proper velocity, in accordance with Einstein's theory.

While the OPERA situation was concluding, another outlier was beginning to gain headlines: the increase in annual sea ice around Antarctica, seemingly contradicting the claim by climate scientists that global temperatures are on the rise. Is it possible to reconcile this observation within the confines of a model of global warming? What has to be understood is that this measurement is an outlier that cannot be extrapolated globally. It pertains only to sea ice surrounding the Antarctic continent.

Glaciers on the land mass of Antarctica continue to recede, as do glaciers in mountain ranges across the globe and ice in the Arctic. Clearly something interesting is happening in Antarctica, but it is regional in nature and does not overturn current climate change models. At least, none of the arguments I've seen using this phenomenon to rebut global warming models have provided an alternative model that also explains why glaciers are receding on a global scale.

Outliers are found in business as well. Most notably, carelessly taking an outlier and incorporating it as a statistical average in a forecasting model is dangerous. Let's take a look at the history of housing prices.

Credit: St. Louis Federal Reserve.

In the period from 2004-06, housing prices climbed over 25% per year. This was clearly a historic outlier, and yet many assumed it was the new normal and underwrote mortgages and derivative products as such. An example would be balloon mortgages, where it was assumed the homeowner could refinance the large balloon payment at the end of the note with equity newly acquired through rapid appreciation. Instead, the crash in property values left these homeowners owing more than the property was worth, causing high rates of default. Often, the use of outliers for business purposes is justified with slogans such as "this is a new era" or "the new prosperity." It turns out to be just another bubble. Slogans are never enough to justify using an outlier as an average in a model, and you should never be swayed by outside noise demanding you accept an outlier as the new normal. Intimidation in the workplace played no small role in the real estate bubble, and if you are a business major, you'll need to prepare yourself for such a scenario.
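The danger of treating the 2004-06 pace as a long-run average is easy to see with compound growth. The sketch below uses a hypothetical $200,000 house and compares the 25%-per-year assumption against a more typical long-run appreciation rate, assumed here to be 4% per year.

def project(price, annual_rate, years):
    """Project a price forward assuming a constant annual growth rate."""
    return [price * (1 + annual_rate) ** year for year in range(years + 1)]

start_price = 200_000                        # hypothetical home price
bubble  = project(start_price, 0.25, 5)      # the 2004-06 pace treated as "normal"
typical = project(start_price, 0.04, 5)      # an assumed long-run appreciation rate

for year, (b, t) in enumerate(zip(bubble, typical)):
    print(f"Year {year}: bubble assumption ${b:,.0f}   vs   typical ${t:,.0f}")

After five years the two assumptions differ by a factor of roughly two and a half, which is the gap a balloon-mortgage borrower was counting on as equity.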

If you are a student and have an outlier in your data set, what should you do? Ask your teacher, to start with. Often outliers have a very simple explanation, such as the 1983 jobs report, that will not interfere with the overall data set. Look at the long-range history of your data. In the case of economic bubbles, you will note a similar pattern, the "this time is different" syndrome, only to eventually find out this time was not different. More often than not, an outlier can be explained as an anomaly within a current working model. And if that is not the case, you'll need to build a new model that predicts the outlier but also replicates the accurate predictions of the previous model. It's a tall order, but that is how science progresses.

*Image on top of post is record Antarctic sea ice from 2014.  This is an outlier as ice levels around the globe recede as temperatures warm.  Credit:  NASA’s Scientific Visualization Studio/Cindy Starr.

Vollmond, High Tides and Lunacy

When teaching astronomy to non-science majors, I try to make connections with students' fields of study or personal interests. Sometimes this is not difficult. For example, I can discuss NASA budgets and cost estimating with business majors. For art majors, the deep red sunsets that followed the Krakatoa eruption of 1883 found their way into many paintings of the era. The most notable example is The Scream, painted by Edvard Munch in 1893.

The Scream by Edvard Munch. National Gallery, Oslo, Norway. Aerosols injected into the atmosphere by powerful volcanic eruptions can cause very deep red skies at sunset.

A while back, I talked with someone whose career was in the performing arts, specifically dance. I was stumped at the time to think of a possible tie-in between astronomy and dance. The closest analogy I could come up with was the classic case of a figure skater demonstrating the concept of angular momentum during a spin, as below.

Angular momentum is conserved; that is, it is not created or destroyed, although friction can gradually transfer it away from a spinning skater. Angular momentum (L) is defined as:

L = mrv

m = mass, r = radius, v = velocity

As angular momentum is conserved, the value of L is constant. In the case of the figure skater in the video, she reduces r by drawing her arms and legs in closer to her body. As the skater's radius decreases, her velocity must increase. Hence, the rate of spin increases as the radius decreases. You can try this at home even if you do not know how to skate: find a swivel chair and have a friend spin you around with your arms extended, then draw your arms in close to your body. You'll feel your spin rate increase, not as much as the skater's, but enough to notice.

The conservation of angular momentum has several applications in astronomy, in particular pulsars. Pulsars are the remnants of stars that went supernova. As the outer layers of the star are dispersed in the aftermath of a supernova, its inner core compresses, forming the pulsar. In a pulsar, the gravitational force is so great that electrons merge with protons to form neutrons. Consequently, pulsars are a sub-class of what are known as neutron stars. As the radius of the collapsing core shrinks, its spin rate greatly accelerates.
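Here is a minimal sketch of that spin-up using L = mrv. The skater's radii and the stellar-core figures are illustrative assumptions, not measured values; the point is only that the spin rate scales as the square of the radius ratio.

def spin_up(r_initial, r_final, spins_per_second):
    """
    With L = m*r*v conserved and mass fixed, v scales as r_initial/r_final,
    so the spin rate (proportional to v/r) scales as (r_initial/r_final)**2.
    """
    return spins_per_second * (r_initial / r_final) ** 2

# Figure skater: arms extended ~0.9 m, pulled in to ~0.3 m (assumed values)
print(f"Skater: {spin_up(0.9, 0.3, 0.5):.1f} spins per second")

# Collapsing stellar core: ~6,000 km shrinking to ~10 km,
# starting at one rotation every 3 hours (illustrative numbers only)
initial_rate = 1 / (3 * 3600)
print(f"Collapsed core: {spin_up(6.0e6, 1.0e4, initial_rate):.0f} spins per second")

With these made-up inputs the collapsed core comes out near the Crab pulsar's 30 spins per second quoted below, but that is a consequence of the chosen numbers, not a prediction.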

We can measure the spin rates of pulsars, as they emit radio waves in the same fashion a lighthouse emits a light beam. The most famous pulsar is located in the Crab Nebula, the remnant of a supernova observed by Chinese astronomers in 1054. This pulsar spins at a rate of 30 times per second. To put that in perspective, the skater in the video above is spinning about 5 times per second.

The pulsar in the Crab Nebula emits high energy x-rays. Red is lowest energy and blue highest energy x-rays. Credit: NASA/CXC/SAO

Is there any sort of analogy in the world of dance? Ballet dancers use the same method as figure skaters to increase their spin. However, as there is more friction from a wood floor than from ice, the effect is not as pronounced. Looking around, I took a different approach and found a connection, albeit an allegorical one, to the dance performance Vollmond.

Translated from German, Vollmond means full Moon. Choreographed by Pina Bausch, the performance centers on two themes addressed in my class. One is scientific: how the full Moon increases tides. The other is not so scientific: whether a full Moon affects human behavior.

During a full (and new) Moon, the difference between high and low tides is at its greatest. During these two phases, the Earth, Moon, and Sun are aligned with each other. At this time, the combined effect of the Sun's and Moon's gravity on the oceans is greatest, as can be seen below.

Full Moon Tides
Credit: Wikipedia

The gravity of the Sun amplifies the lunar tides. High and low tides each occur twice a day regardless of phase, but tides during the full and new phases are referred to as spring tides. This has nothing to do with the season of spring; in a way, it is during this time that the tides spring to life. When the Moon and Sun are at a right angle relative to Earth, the Sun's gravity partially offsets the Moon's and modulates the tides, so the difference between low and high tide is not as great as during a spring tide. These are referred to as neap tides. Local conditions can also amplify the tides. The most dramatic example is the Bay of Fundy, where high and low tide can differ by 56 feet.
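How much does the Sun actually contribute? Tidal (differential) acceleration scales roughly as 2GMR/d³, where M is the tide-raising body's mass, d its distance, and R the Earth's radius. The short sketch below uses standard mean values and shows the Moon's tidal pull is a bit more than twice the Sun's; during spring tides the two add, and during neap tides the Sun partially offsets the Moon.

G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
R_earth = 6.371e6     # Earth's radius, m

bodies = {
    "Moon": {"mass": 7.35e22,  "distance": 3.84e8},    # kg, mean distance in m
    "Sun":  {"mass": 1.989e30, "distance": 1.496e11},
}

def tidal_acceleration(mass, distance):
    """Approximate difference in pull between Earth's near side and its center."""
    return 2 * G * mass * R_earth / distance**3

tides = {name: tidal_acceleration(b["mass"], b["distance"]) for name, b in bodies.items()}
print(f"Moon/Sun tidal ratio: {tides['Moon'] / tides['Sun']:.1f}")   # a bit over 2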

Spring tides at the Bay of Fundy. Credit: Samuel Wantman/Wiki Commons.

So, if you live by the ocean, you'll associate high tides with a full (and new) Moon. How about the Great Lakes? Not so much. The lakes' greatest tide is only 5 cm, not enough to be noticed with the naked eye. The earth you stand on also feels the tidal pull of the Moon. Like the lakes, it is not noticeable at about 25 cm; as the landmarks rise up and down with the ground, your eye cannot detect ground tides. We can say, quite confidently, that the full Moon affects tidal motions. Can we say the same regarding human behavior?

The words lunar and lunatic have their roots in the Latin word luna. In ancient Rome, Luna was the goddess of the Moon. Lunatic means to be moonstruck. We are all familiar with the phrase, "It must be a full Moon," meaning that the full Moon provides an explanation for an increase in bizarre or criminal behavior. Does the empirical evidence support this? The short answer is no. Studies have indicated no change in criminal behavior during a full Moon, nor is there a scientific model to explain why such a change would happen. This highlights a key difference between science and mythology.

Statue of the Roman Goddess Luna. Credit: Wiki Commons

Whenever a student writes the phrase "I believe" in a science paper, I advise them to pause and ask themselves why they believe that. Science is not about beliefs, but about investigation of the nature and causes of the phenomena we observe around us. If you want to assert something as true in science, you need a model to explain why it happens, empirical evidence it actually does happen, and independent verification of the original results. Sometimes you might have a model you think is reasonable, but the empirical evidence does not back it up. One such case is in economics, where demand and supply curves indicate a minimum wage set above the market rate creates unemployment. The evidence does not support that, meaning a newer, more sophisticated model is required to explain what is happening.

The purpose of this exercise was to find ways to connect a student's personal interests to a scientific topic. If that can be accomplished, the chances of building the student's interest and motivation in the class increase. In this case, we can use two situations to discern between what passes for science and what does not. For the teacher, it provides the opportunity to explore areas that were previously unknown. I would never have learned of the Vollmond dance performance without attempting to match my specialty with the student's. It's a good experience to reach out of your comfort zone to find common ground with your students. Often, in the classroom, who is the teacher and who is the learner can be a fluid situation. You have to permit yourself to enter your students' world of interests. I have found that, as a teacher, this prevents my lessons from going stale over the years.

*Image on top of post is the full Moon, or Vollmond in German.  Credit:  Katsiaryna Naliuka/Wiki Commons.

What’s Your Sign?

That question, for those of us of a certain age, is most often associated with the garish '70s singles scene. For teachers, it represents part of the struggle to disabuse students new to astronomy of the notion that the locations of the stars and planets determine one's personal future. While a teacher might recoil in horror when an assignment is turned in with the heading Astrology 101, a student's interest in astrology can be used to teach certain astronomy concepts. It is somewhat similar to using science fiction in the classroom. After all, part of the purpose of education is to enable students to make conceptual leaps from myth to reality. And to do that, you have to meet the student on level ground.

Astrology is where many of us first learn of the constellations. There are 88 constellations that divide the celestial sphere in the same manner states divide a nation. However, astrology focuses on 12 zodiac constellations that lie along the ecliptic and form a background that the Sun and planets move through. The ecliptic is a narrow path in the sky that the Sun and planets travel from our perspective on Earth. As the Solar System formed 4.6 billion years ago, the solar nebula flattened, causing the planets and Sun to coalesce in the same disk. Conceptually, it can be difficult to imagine the Sun and planets moving along the same path in the sky, as we see them at different times (the Sun during the day and the planets at night). A total solar eclipse does allow us to visualize this.

This is a simulation of the total eclipse that will occur on April 8, 2024 in Buffalo. With the Sun's light blocked by the Moon, you can see how the planets (in this case Mercury, Venus, Mars, and Saturn) and the Sun move along the same path in the sky. This path is called the ecliptic. You'll also note two constellations in the ecliptic, Pisces and Aquarius, which correspond to astrological signs. This illustrates how the Sun lies in constellations just as the planets do. We typically do not get to visualize this, as the Sun's brightness does not allow us to see constellations during the day. The image below demonstrates how the zodiac constellations align with the Earth and Sun during the year.

Credit: David Darling

As seen from Earth, the Sun appears in the zodiac constellation on the far side of the Sun from Earth. Some caveats here: the month the Sun is located in a constellation will not match your astrological sign. Also, there is actually another constellation, Ophiuchus, that lies in the ecliptic, but the ancient Babylonian astrologers casually tossed that one out, as they were using a 12-month calendar. Noting that astrology has not kept up with the precession of the Earth's axis over the last few thousand years will hopefully be a first step in cracking whatever validity astrology may hold for a student. Note in the eclipse image above that the Sun resides in Pisces, but your astrological sign is Aries if you were born on April 8th. Planetarium software such as Starry Night will allow the class to view the changing zodiac throughout time.
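If students want to check the Sun's actual constellation for a given date, the astropy library can do it in a few lines. This is a sketch assuming astropy is installed; the dates are examples, and get_sun and get_constellation come from astropy.coordinates.

from astropy.time import Time
from astropy.coordinates import get_sun, get_constellation

# The Sun's apparent position on a date, then the IAU constellation it falls in
for date in ["2024-04-08", "2016-12-21"]:
    sun = get_sun(Time(date))
    print(date, "->", get_constellation(sun))
# April 8 should report Pisces, even though the astrological sign for that date is Aries.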

Retrograde Motion

Often referred to in astrology, retrograde motion is the apparent "backwards" motion of a planet along the ecliptic. Before the Copernican revolution, which put the Sun instead of the Earth at the center of the Solar System, retrograde motion confounded astronomers. Let's take a look at an example: Mars during 2016. The image below tracks Mars' motion along the ecliptic from the beginning of 2016 to the end of September 2016.

The retrograde, or backwards, motion of Mars occurs from April 17th to June 30th. What's happening here? Earth and Mars are approaching opposition, an event that will take place on May 22, 2016. At that time, Mars and the Sun are on opposite sides of the Earth, which means that on that date Mars will rise in the east as the Sun sets in the west. As this is also close to Mars' nearest approach to Earth, Mars is at its brightest. Opposition is the optimal time to observe a planet. In the case of Mars, an impending opposition also represents a launch window for space agencies to send missions there; if the window is missed, a mission must wait another 26 months until the next opposition. So how does all this result in retrograde motion? This is when Earth "passes" Mars, like a race car on the inside track passing a car on the outside. The resulting retrograde effect is visualized below:

Credit: NASA

Point d is when opposition occurs and is the midway point of the retrograde motion.
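Retrograde motion falls out of even a toy model of two circular, coplanar orbits. The sketch below uses arbitrary starting positions (Mars begins on the far side of the Sun), so it is not tied to the real 2016 ephemeris; it simply flags the days when Mars' apparent ecliptic longitude decreases.

import numpy as np

days = np.arange(0, 780)                        # roughly one Earth-Mars synodic period
earth_angle = 2 * np.pi * days / 365.25         # heliocentric angles, circular orbits
mars_angle  = np.pi + 2 * np.pi * days / 687.0  # Mars starts opposite Earth's side of the Sun

earth = np.column_stack([1.000 * np.cos(earth_angle), 1.000 * np.sin(earth_angle)])
mars  = np.column_stack([1.524 * np.cos(mars_angle),  1.524 * np.sin(mars_angle)])

# Apparent (geocentric) ecliptic longitude of Mars as seen from Earth
rel = mars - earth
longitude = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))

retrograde_days = np.flatnonzero(np.diff(longitude) < 0)
print(f"Retrograde motion lasts about {retrograde_days.size} days, "
      f"centered near day {int(retrograde_days.mean())} (close to opposition).")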

Conjunctions

As the planets all orbit the Sun in roughly the same plane, they sometimes align along the same line of sight to form a conjunction. One such example will occur in the early morning hours of October 28th, when Venus, Mars, and Jupiter will all be within a few degrees of each other. The scene will look like this:

These conjunctions serve as excellent teaching opportunities, as they allow students to locate several planets at once quite easily. Below is an inner Solar System view of the event. Jupiter is not in the image but would be aligned right behind Mars if visible in this view:

From a planetary science perspective, conjunctions really do not have much to offer, but for the rest of us they can provide a really neat nighttime spectacle. These events are an excellent way to introduce the planets to those new to astronomy.

Zodiacal Light

I've never heard the zodiacal light mentioned in an astrological context, but while we are learning about the zodiac, now is as good a time as any to familiarize ourselves with it. Besides the Sun, planets, and asteroids, the plane of the ecliptic is occupied by cosmic dust, the remnants of comet tails and asteroid collisions. Best seen from dark sky locations, the zodiacal light is visible in the east just before sunrise or in the west just after sunset. This faint glow, which follows the ecliptic in the sky, is usually most observable in the spring or fall, when the ecliptic has a steeper pitch relative to the horizon. Because it is fainter than the Milky Way, most urban dwellers do not get the opportunity to see it. However, if you find yourself away from the city lights, this is what you can expect to see:

Zodiacal light from VLT in Chile Credit: ESO/Yuri Beletsky

Sagittarius

The Sun enters Sagittarius in mid-December and exits in mid-January. During the summer months, the classic teapot of Sagittarius is prominent in the night sky along with the Milky Way. In fact, the center of the Milky Way lies in Sagittarius. What this means is that each December, both the center of the Milky Way and the Sun are in Sagittarius. You might recall that back in 2012 this alignment was, according to some less than reliable sources, going to result in the end of the world. Needless to say, as an annual event, it would have destroyed the Earth a whole lot sooner than 2012. That is the power of a liberal arts education; it clears a lot of silly stuff out of the way.

December 21, 2015 – The Sun and Milky Way center both reside in Sagittarius. Daylight was shut off on Starry Night software so both the Sun and constellation can be seen at the same time.

Speaking of which, although Sagittarian Jim Morrison gets it "right" about astrology here, you really should not need a celebrity to tell you what is useful information and what is not. And the audience is clearly going along with whatever Morrison tells them here. A liberal arts education provides a basic framework of knowledge so you can make up your own mind and not be concerned with whether the results look cool.

*Image on top of post is the painting of the 12 constellations of the Zodiac on the ceiling of the Grand Central Terminal in New York City.  Infamously, the designer goofed and placed the constellations backwards.  Photo:  Gregory Pijanowski.

Antimatter – Fact and Fiction

To clear things up from the start: antimatter is real. Many people I talk to (including numerous teachers) assume antimatter is a mythical construct from science fiction such as Star Trek. The confusion seems to lie in the fact that, unless you work at a particle collider, there is usually no interaction with antimatter of any sort in daily life. Relativity has everyday applications, most famously in nuclear power (and weaponry, which hopefully we'll never have to experience). Quantum mechanics is responsible for the transistor and laser technology. While most people do not understand the intricacies of relativity or quantum theory, they most certainly understand both are very real. Antimatter has very few practical applications and thus is mostly heard about in a fictional context.

The story of antimatter began with Paul Dirac's attempt in the late 1920s to produce an equation that would describe the properties of an electron traveling near the speed of light. This was groundbreaking work, as it required merging quantum mechanics (which describes the properties of atomic particles) with relativity (which describes the properties of matter traveling at near light speed). The end result was an equation whose solutions describe an electron with a negative charge and one with a positive charge. The actual equation is beyond the scope of this post, but I'll use an analogy we encounter in beginning algebra. Take the following equation:

(x + 1)(x – 1) = 0

Since one of the terms in parentheses must equal zero to produce the proper result, we arrive at two different solutions, x = 1 and x = -1. If we are modeling a situation with a zero bound, that is, a physical constraint that cannot drop below zero, we would typically disregard the x = -1 solution. An example of this might be in economics, where the price of a consumer good cannot drop below $0.00. Dirac faced this same dilemma. Electrons have a negative charge, and the initial temptation would be to disregard the positive-charge result. However, Dirac had a different take on this development. In 1928, Dirac published The Quantum Theory of the Electron, and in follow-up work in 1931 he postulated the existence of an antielectron. This particle would have the same properties as an electron except for having a positive, rather than negative, charge.

Paul Dirac. Credit: Wiki Commons

This bold prediction was verified in 1932 by Carl Anderson, when he detected particles with the same mass as an electron but with a positive charge. How does one measure something like this? Anderson was observing cosmic rays in a cloud chamber. A magnetic field bends oppositely charged particles in opposite directions, and this movement can be photographed as the particles leave trails in the vapor-filled cloud chamber. One year later, Dirac would be awarded the Nobel Prize, and Anderson followed suit in 1936. Anderson's paper dubbed the positively charged electron the positron.

First recorded positron. An electron would have spiraled in the opposite direction. Credit: Carl Anderson, DOI: http://dx.doi.org/10.1103/PhysRev.43.491

During the 1950s, the antiproton and antineutron were discovered. These particles have significantly more mass than positrons and thus were more difficult to produce. Protons have a positive charge and antiprotons have a negative charge. Neutrons are electrically neutral; however, the quarks inside them carry charges, and an antineutron is made of three antiquarks whose charges are opposite those of the three quarks that make up a neutron. So, now that we have established these things are real, how can we use this in an educational setting?

If the student learned about antimatter from science fiction, the first task should be to discern how antimatter is presented in the reading or movie versus how it exists in reality. The most prominent example in popular culture is the use of matter-antimatter propulsion in Star Trek to power starships on interstellar explorations. The overall premise is not bad, in that when matter and antimatter collide, they annihilate each other completely into pure energy. Unlike fission and fusion reactions, which convert a small fraction of the available mass into energy, a matter-antimatter reaction is 100% efficient. So, what stops us from using this as a potential source of energy?

The primary challenges are the cost, the quantities involved, and the storage of antimatter. The cost to produce 10 milligrams of positrons is about $250 million. To put that in perspective, it would take roughly 1,100 kilograms (a bit over a ton) of antimatter to produce all the energy consumed by the United States in one year. At the quoted production cost, that would run about $2.75 x 10¹⁶, or roughly 1,500 times the GDP of the United States. Obviously, not an equitable trade-off. When producing antihydrogen, the best we can currently do is make it on the scale of a few dozen atoms and store it for about 1,000 seconds. The storage problem comes from the fact that antimatter is annihilated as soon as it comes into contact with matter. And that last fact would make it seem obvious why there is not a lot of antimatter in the universe. However, there is one little problem.
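The back-of-the-envelope figures above follow directly from E = mc². The sketch below assumes a U.S. annual energy consumption of roughly 1.0 x 10²⁰ J (a round figure) and the quoted $250 million per 10 milligrams production cost.

c = 3.0e8                      # speed of light, m/s
us_energy_per_year = 1.0e20    # joules, assumed round figure for annual U.S. consumption

# Mass equivalent of a year's energy use: E = m*c**2  ->  m = E / c**2
mass_kg = us_energy_per_year / c**2
print(f"Mass equivalent: {mass_kg:,.0f} kg")              # roughly 1,100 kg

# Production cost at the quoted $250 million per 10 milligrams (1e-5 kg)
cost_per_kg = 250e6 / 1e-5
print(f"Production cost: ${mass_kg * cost_per_kg:.2e}")   # a few times 10^16 dollars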

Models of the Big Bang predict that equal amounts of matter and antimatter were produced when the universe formed, because the laws of physics state matter and antimatter should form in pairs. If that had happened, all the matter and antimatter would have mutually destroyed each other and we would be left with a universe containing no matter, only energy. That, of course, is not the case. Very early in the universe's existence, an asymmetry between matter and antimatter arose: for every billion antimatter particles, an extra matter particle existed. That one extra particle per billion is responsible for all the matter, including our bodies, in the universe. How that extra particle of matter was produced remains a mystery for cosmologists to solve.

So what happened to all the antimatter created during the Big Bang? It was destroyed as it collided with matter, and the result can be observed as the cosmic microwave background radiation (CMB). Originally released as high energy gamma rays, the CMB has been redshifted all the way to the microwave and radio end of the spectrum by the time it reaches Earth. If our eyes could detect radio waves, instead of seeing stars in a dark sky at night, we would see a glow everywhere we looked. That is because we are embedded within the remnants of the Big Bang.

CMB as imaged by the Planck mission. Cutting a sphere from pole to pole produces an oval when laid flat. This is a map of the entire celestial sphere and demonstrates the CMB is ubiquitous. Credit: ESA and the Planck Collaboration.

The discovery of the CMB in the 1960s was a crucial piece of evidence in favor of the Big Bang theory over its then competitor, the Steady State theory. The CMB has what is known as a black body spectrum, the kind emitted by matter in a hot, dense state. The early universe, with massive matter/antimatter annihilation, would have been in such a state. And this leads us back to the original question: does antimatter exist today, and can it have any practical purposes?

In its natural state, antimatter exists mostly in the form of cosmic rays. The term ray is a bit of a misnomer: cosmic rays consist of atomic nuclei, mostly protons (hydrogen nuclei), traveling near the speed of light. Some cosmic rays are produced by the Sun, but most originate outside the Solar System. Their exact source is a mystery to astronomers, although supernovae and jets emanating from black holes are suspected to be the primary culprits. Trace amounts of positrons have been detected in cosmic rays in space before they reach the Earth's atmosphere. When cosmic rays collide with atmospheric molecules, the protons are shattered into various sub-atomic particles. When such a shower reached Carl Anderson's cloud chamber, one of the particles it produced was the positron that became the first observed antimatter particle in 1932.

Today, colliders such as those at CERN in Switzerland produce antimatter in the same manner: protons are accelerated to very high speeds and sent crashing into a metal target, and the impact breaks the protons apart into various sub-atomic particles. Positrons can also be created via radioactive decay, and this process provides one of the few practical applications of antimatter. Positron Emission Tomography (PET) scans operate on this principle. Radioactive material is produced by a cyclotron at the hospital site and injected into the patient. As positrons are released, they collide with electrons in the body, mutually annihilating and ejecting gamma rays in opposite directions. The scanner then detects the gamma rays to produce images of areas of the body that x-rays cannot.

What most people really want to know about antimatter is: can it be used to produce warp drive propulsion? Even if the production and storage problems were solved, it must be understood that the gamma rays produced by matter-antimatter collisions fly off in random directions. That is, the reaction acts more like a bomb than a rocket. And even a conventional antimatter rocket could not propel a spacecraft past the speed of light. As a rocket accelerates closer to the speed of light, according to relativity theory, its mass approaches infinity. No matter how much antimatter you could annihilate, you could not produce enough energy to break the light barrier.

Original 11 foot model of U.S.S. Enterprise. Model was used to represent a starship 947 feet long and 190,000 tons. The fictional Enterprise is almost as long as the Eiffel Tower is high and three times heavier than an aircraft carrier. Credit: Mark Avino, National Air and Space Museum, Smithsonian Institution

The concept of warp drive operates on a different principle. Warp drive does not push a rocket through space, but rather compresses space in front of the spacecraft to, in effect, shorten the distance between two points. The space is then expanded behind the spacecraft. Is this possible? Right now, no. The space-time fabric is not very pliable. Estimates of the mass-energy required to propel a starship the size of Star Trek's U.S.S. Enterprise range from all the mass in the universe down to a mass the size of Jupiter. Radical breakthroughs in both physics and engineering are required to make warp drive possible.

A unification of quantum mechanics and relativity might provide a pathway to warp drive. The operative word here is might. We simply do not know what such an advancement would bring. When Max Planck discovered in 1901 that energy is transmitted in discrete packets rather than a continuous stream, he had no idea that development would lead to such applications as lasers, digital cameras, LED lights, cell phones, and modern computers. And so it is today; we can only speculate what such theoretical advancements might bring for future applications to harness the great potential energy of antimatter. Perhaps we'll have the good fortune to see our current students working to solve those problems.

*Image on top of post shows bubble chamber tracks from the first antihydrogen atoms produced at CERN in 1995.  Credit:  CERN

Elementary Einstein

While I was in grade school, a teacher wrote the equation E = mc² on the board and flatly stated, “less than ten people in the world understand this equation.”  In retrospect, that seems an odd statement to make about a rather simple algebraic equation.  However, it does speak to the mystique relativity holds even among the educated public.  Nonetheless, this classic equation, which expresses the equivalence of matter and energy, is perhaps the easiest aspect of relativity theory to understand.

Relativity typically deals with phenomena we do not experience in our day-to-day lives.  In the case of special relativity, most of its esoteric quality concerns objects approaching the speed of light, the highest velocity possible.  As an object approaches this upper bound, its clock runs slower compared to stationary observers and its mass approaches infinity.  The fastest most of us ever travel is in a jet airliner at about 700 mph.  While that seems fast, it is only about 0.000001 of the speed of light, much too slow for relativistic effects to be noticed.  Thus, relativity feels strongly counter-intuitive to us.
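If you want to put a number on just how negligible these effects are at airliner speeds, here is a quick Python estimate of my own.  The 700 mph speed comes from the paragraph above; the six-hour flight time is an assumed example.

```python
# My own estimate of relativistic effects at airliner speed.  The 700 mph figure
# is from the text above; the six-hour flight is an assumed example.

import math

C = 299_792_458.0            # speed of light, m/s
V = 700 * 0.44704            # 700 mph in m/s (about 313 m/s)

v_over_c = V / C
gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)

flight_seconds = 6 * 3600                          # assumed six-hour flight
clock_offset_s = flight_seconds * (1.0 - 1.0 / gamma)

print(f"v/c          = {v_over_c:.1e}")                          # about 1e-6
print(f"clock offset = {clock_offset_s * 1e9:.0f} nanoseconds")  # roughly 12 ns
# A passenger's watch lags a ground clock by about a dozen nanoseconds,
# far too small to ever notice.
```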

That alone does not explain relativity’s fearsome reputation as expressed by my teacher some forty years ago.  Some of that reputation can be attributed to how the media reported the experimental confirmation of general relativity after the eclipse of 1919.  General relativity provides a more comprehensive theory of gravity than Newton’s laws.  During the eclipse, astronomers were able to measure the Sun’s gravity bending starlight, something not predicted by Newton but predicted by general relativity.  The New York Times reported that:

“When he (Einstein) offered his last important work to the publishers he warned them that there were not more than twelve persons in the whole world who would understand it.”

That was referring to general relativity, which is very complex mathematically and was only four years old in 1919.  It is understandable that those not trained in modern physics conflate special and general relativity.  Add to that the fact that the equation E = mc² is most famously associated with Einstein, and you get the perception that it cannot be understood unless you are a physicist.  As we will see below, that perception is most assuredly false.

Let’s start with a hypothetical situation where mass can be completely converted to energy.  A science fiction example is the transporter in Star Trek, which converts a person to energy, transmits that energy to another location, then reconverts it back into matter in the form of that person.  How much energy is present during the transmission stage?  Einstein’s famous equation gives us the answer.

Let’s say Mr. Spock weighs about 200 pounds.  Converted to kilograms, that comes out to roughly 90 kg.  The speed of light is 3.0 × 10⁸ m/s.  The mass-energy equation gives us:

E = (90 kg)(3.0 × 10⁸ m/s)²

E = 8.1 × 10¹⁸ kg·m²/s²

The unit kg·m²/s² is a unit of energy called the joule (J).  So as Mr. Spock beams down to the planet surface, his body is converted to 8.1 × 10¹⁸ J of energy.  Exactly how much energy is that?  Well, the average amount of energy consumed in the United States each month is about 8.33 × 10¹⁸ J.  That’s right: if you converted your body to energy, it would provide nearly enough to power the United States for an entire month.  As you can see, a small amount of matter has a whole lot of energy contained within it.
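Here is the same Mr. Spock calculation worked in a few lines of Python, handy if students want to plug in their own mass.  The 90 kg figure, the rounded speed of light, and the 8.33 × 10¹⁸ J monthly U.S. energy figure are the values used above.

```python
# The Mr. Spock example above, worked in code.  The 90 kg mass, the rounded
# speed of light, and the monthly U.S. energy figure are the values quoted in the text.

MASS_KG = 90.0                   # Mr. Spock, roughly 200 pounds
C = 3.0e8                        # speed of light in m/s (rounded, as above)
US_MONTHLY_ENERGY_J = 8.33e18    # average U.S. energy use per month (from the text)

energy_j = MASS_KG * C**2        # E = m c^2
print(f"Energy released: {energy_j:.2e} J")              # 8.10e+18 J
print(f"Share of a month of U.S. energy use: {energy_j / US_MONTHLY_ENERGY_J:.0%}")
# About 97 percent, i.e. nearly a full month of U.S. energy consumption.
```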

However, most nuclear fission and fusion processes convert only a small fraction of matter to energy.  For example, let’s take a look at the fusion process that powers the Sun.  It’s a three-step process in which four hydrogen nuclei are fused into a single helium nucleus.  The four hydrogen atoms have four protons in their nuclei, whereas the final helium atom has two protons and two neutrons in its nucleus.  Along the way, two of the protons convert into neutrons, each conversion releasing a positron and a neutrino, and the finished helium nucleus ends up with slightly less mass than the four hydrogen nuclei that went into it.  In the solar fusion cycle, that lost mass is converted to energy.

The mass of four hydrogen atoms is 6.693 × 10⁻²⁷ kg and the mass of the final helium atom is 6.645 × 10⁻²⁷ kg, a difference of 0.048 × 10⁻²⁷ kg.  How much energy is that?  Using the famous Einstein equation:

E = (0.048 × 10⁻²⁷ kg)(3.0 × 10⁸ m/s)²

E = 4.3 × 10⁻¹² J

By itself, that might seem like a tiny amount of energy.  However, the Sun converts some four million tons of mass into energy every second, for a total output of about 4 × 10²⁶ watts (one watt = one J/s).  Worry not: although average-sized for a star, the Sun is still pretty big.  In fact, it contains over 99% of the mass of the Solar System.  The Sun will burn up less than 1% of its mass during its lifetime before becoming a planetary nebula some five billion years from now.
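For anyone who wants to check these solar numbers, here is a short Python sketch of my own using only the figures quoted above (the two masses and the 4 × 10²⁶ watt output).  It recovers both the energy per fusion reaction and the roughly four million tons of mass the Sun converts each second.

```python
# My own check of the solar fusion arithmetic above, using only figures from the text.

MASS_FOUR_HYDROGEN_KG = 6.693e-27   # four hydrogen nuclei
MASS_HELIUM_KG = 6.645e-27          # one helium nucleus
C = 3.0e8                           # speed of light in m/s
SOLAR_OUTPUT_W = 4e26               # quoted solar power output (1 W = 1 J/s)

# Energy released each time four hydrogen nuclei fuse into one helium nucleus.
energy_per_fusion_j = (MASS_FOUR_HYDROGEN_KG - MASS_HELIUM_KG) * C**2
print(f"Energy per fusion: {energy_per_fusion_j:.1e} J")      # about 4.3e-12 J

# Running E = m c^2 in reverse gives the mass the Sun must convert each second.
mass_converted_kg_per_s = SOLAR_OUTPUT_W / C**2
print(f"Mass converted: {mass_converted_kg_per_s / 1000:.1e} metric tons per second")
# About 4.4e+06, i.e. the "four million tons per second" quoted above.
```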

Albert Einstein, 1904.

Einstein published this equation in 1905, in what would later be called his Annus Mirabilis (Miracle Year).  During that year, Einstein published four groundbreaking papers along with his doctoral dissertation.  These papers described the photoelectric effect (how light acts as a particle as well as a wave, a key foundation of quantum mechanics), Brownian motion (the jittering of particles suspended in a fluid, caused by collisions with molecules, which helped establish atoms as the building blocks of matter), special relativity, and finally, the mass-energy equivalence.  Ironically, it was the photoelectric effect and not relativity that was cited when Einstein was awarded the Nobel Prize in 1921.

Information traveled a lot slower back then, and the fame that awaited Einstein was still more than ten years away.  The major news stories that year were the conclusion of the war between Russia and Japan and the start of Theodore Roosevelt’s second term as president.  The New York Times did not mention Einstein at all in 1905.  Even in 1919, when Einstein became a famous public figure, some were mystified at the attention.  The astronomer W.J.S. Lockyer stated that Einstein’s ideas “do not personally concern ordinary human beings; only astronomers are affected.”  As we now know, the public was ahead of the curve in discerning the importance of Einstein’s work.

And that interest remains today.  Yet there is very little opportunity for students to take a formal course in relativity (or quantum mechanics) unless they are college science majors.  Does the mathematics make relativity prohibitive for non-science majors?  It shouldn’t.  A graduate-level course in electromagnetism contains very complex, higher-order mathematics, yet that does not stop us from presenting the concepts of magnetic fields and electrical circuits in grade school.  As educators, we should strive to do the same for relativity.  And I can’t think of a better place to start than that famous equation, E = mc².

*Photo on top of post is sunset at Sturgeon Point, 20 miles south of Buffalo.  The photons recorded in this image were produced via nuclear fusion reactions in the Sun’s core that occurred 1 million years ago, when only 18,500 humans lived on Earth.  Once the photons were released at the Sun’s surface, they took only an additional eight minutes to end their journey on Earth in my camera.  Photo:  Gregory Pijanowski

Pluto and Earth

The first thought I had watching the press conference on the initial images from the New Horizons flyby of Pluto was how much more accessible these events are to the public than in the days of Voyager.  During the 1980’s, unless you had a NASA press pass, you did not get to watch mission updates live.  There were no Twitter feeds to tell you the moment telemetry was received, no websites where you could review the images at your leisure.  And you had to wait at least a year, maybe more, for astronomy textbooks to be updated.  What you got was a short segment on the nightly news such as this:

One of my favorite teaching techniques is to compare the surface features of planets to things we are familiar with here on Earth to give it proper perspective.  And that seems to me to be a good place to start with the first images released today.

Let’s begin with the mountains located near the now famous heart-shaped region of Pluto.

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

This image was taken while New Horizons was 77,000 km from Pluto.  That is roughly six times farther away than the closest approach (about 12,500 km) and gives a good idea of what to look forward to as more images are released.

The tallest of these mountains rise about 11,000 feet (3,500 m).  How does that compare to Earth?  They are less than half as tall as Mt. Everest, which clocks in at 29,029 feet.  Still, these are pretty impressive mountains considering how small Pluto is.  Their height is similar to that of Mt. Hood in Oregon.

Image: Wiki Commons
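As a quick classroom exercise, the height comparisons above can be verified in a couple of lines of Python.  The 11,000-foot and 29,029-foot figures come from the text; the elevation I use for Mt. Hood, about 11,240 feet, is my own added figure from memory.

```python
# Quick check of the height comparisons above.  The Pluto and Everest figures come
# from the text; Mt. Hood's elevation (about 11,240 feet) is my own added number.

pluto_peaks_ft = 11_000
everest_ft = 29_029
mt_hood_ft = 11_240   # approximate summit elevation of Mt. Hood, Oregon

print(f"Pluto peaks vs. Everest: {pluto_peaks_ft / everest_ft:.0%}")   # about 38%, under half
print(f"Pluto peaks vs. Mt. Hood: {pluto_peaks_ft / mt_hood_ft:.0%}")  # about 98%, nearly the same
```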

The first age estimate for these mountains is about 100 million years.  That sounds pretty old; in fact, dinosaurs were roaming the Earth when these mountains formed.  In geological terms, though, this is pretty young, only about 2% of the age of the Solar System (4.5 billion years).  How do we know these mountains are young?  By the lack of craters in the region: the fewer craters there are, the younger a surface is.  These mountains are younger than the Alps, which are 300 million years old, and older than the Himalayan Mountains, which formed as the Indian sub-continent plowed into Asia 25 million years ago.

Mountains on Earth are the result of plate tectonics.  At this very early juncture, planetary scientists have their work cut out for them, as none of the current models can account for such mountain formation on an icy outer Solar System body in the absence of tidal flexing.  It is thought that the mountains are regions of water-ice bedrock poking through the methane ice surface; methane ice is too weak to build mountainous structures.

Below is Pluto’s largest moon Charon:

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

The outstanding feature here is the large canyon in the upper right corner, which is 4 to 6 miles (7 to 9 km) deep.  For comparison, the Grand Canyon’s greatest depth is a little over a mile.  This chasm is closer in scale to the deepest reach of the Pacific Ocean, the Mariana Trench, which lies about 6.8 miles below sea level.  It’s interesting to consider that more humans have walked on the surface of the Moon (12) than have reached the bottom of the Mariana Trench (3).  To be fair, no nation has ever decided to spend $150 billion (2015 dollars) and employ 400,000 people to reach the Mariana Trench, as the United States did during the Apollo program.

This image maps methane on the surface of Pluto.

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

The New Horizons press release describes the greenish area at Pluto’s North Pole as methane ice diluted in nitrogen ice.  Does that sound odd?  We typically see neither of these substances in solid form on Earth.  Methane and nitrogen are known as volatiles, meaning they take gaseous form at room temperature.  As you may have surmised, Pluto is not at room temperature.  The freezing point of methane on Earth is -295.6° F (-182° C).  The freezing point of nitrogen is even lower, at -346° F (-210° C).  These figures shift a bit on Pluto, since its atmospheric pressure does not match that of Earth.  The surface temperature of Pluto ranges from about -387° to -369° F (-233° to -223° C).  Yeah, the outer reaches of the Solar System are pretty chilly.
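Since keeping Fahrenheit and Celsius straight trips up a lot of students, here is a tiny Python helper of my own (not part of the press release) that converts the Celsius values above and confirms the Fahrenheit figures.

```python
# A small Fahrenheit/Celsius checker (my own helper, not part of the press release)
# for the temperatures quoted above.

def celsius_to_fahrenheit(celsius):
    """Standard conversion: F = C * 9/5 + 32."""
    return celsius * 9.0 / 5.0 + 32.0

temperatures_c = [
    ("Methane freezing point", -182),
    ("Nitrogen freezing point", -210),
    ("Pluto surface, cold end", -233),
    ("Pluto surface, warm end", -223),
]

for label, c in temperatures_c:
    print(f"{label}: {c} C = {celsius_to_fahrenheit(c):.1f} F")
# Output: -295.6, -346.0, -387.4, and -369.4 F, matching the figures above.
```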

In our day-to-day lives, you may be familiar with methane as the main component of natural gas.  You may have learned about it first as a source of middle school humor.  While methane is a gas on Earth, the Saturn moon Titan is cold enough for it to be a liquid.  Below is an image of methane lakes on Titan.  Instead of rain made of water, Titan gets methane rain.  Earth and Titan are the only bodies in the Solar System known to have stable liquid lakes on their surfaces.

Credit: NASA/JPL-Caltech/ASI/USGS

Neptune has trace amounts of methane in its atmosphere.  Methane has the property of absorbing red light and scattering blue light.  The result is the rich blue hue of Neptune as first seen in the 1989 Voyager flyby:

Credit: NASA

Methane also absorbs infrared light at certain wavelengths, and the methane map of Pluto was produced by measuring that infrared absorption from surface methane.  When methane absorbs infrared light at these wavelengths, the energy is converted into vibrational motion in the molecular bonds.  Once the molecule settles down, the energy is released back out as infrared light.  We cannot see infrared light, but we feel it as heat.  In the atmosphere, some of this heat is directed back towards the Earth, warming the surface.  In other words, methane is a greenhouse gas, like carbon dioxide and water vapor.

And for that, we should be grateful.  Without greenhouse gases, the Earth would be about 60° F colder (like the Moon), and human life would not be possible.  However, you can have too much of a good thing.  As rising temperatures in the Arctic warm the permafrost, methane that has been locked up for thousands of years as frozen, undecomposed plant matter could be released into the atmosphere.  When you consider that the Arctic region has been the most affected by rising global temperatures, you can understand why climate scientists are concerned about this scenario.

On Friday, New Horizons should be releasing the first color images from the flyby.  Should be quite an interesting week.

*Image on top shows the part of Pluto’s heart-shaped region where the mountain close-up was taken.  Credit:  NASA

Teaching About the Confederate Flag

The recent controversy concerning the display of the Confederate flag presents an excellent opportunity for teachers to employ constructivist learning techniques for students to understand the flag’s original intent.  Also, this can provide a good lesson in the value of examining original historical documents rather than relying on interpretations of those documents.  When I took American history in high school, back in the early 80’s, these documents were not readily available for inspection.  The internet now allows students to access these documents with little difficulty.

Four states, South Carolina, Georgia, Mississippi, and Texas, wrote formal declarations of the causes of their secession from the Union.  Students can access these documents here.  The students can read the documents as a homework assignment in preparation for a follow-up discussion in class.

Themes the teacher can present in the discussion are these:

What was the major cause for Southern states to leave the Union?

Did the assigned documents list any secondary causes?

Did the documents conflict with the students’ pre-existing notions as to why the South seceded from the Union?

Explain what the word seminal means.  Ask your students if the lesson helped them to understand why it is important to review and cite seminal sources, rather than solely rely on secondary sources, for academic work.

The National Archives has several photographs from the Civil War.  A picture of the Confederate flag (seen at the top of the post) flying over Fort Sumter can be accessed here.

Students should be asked, how does this flag differ from the one normally associated as the Confederate flag?  Why does this flag only have 7 stars?  What is the significance of the date, April 14, 1861, and what happened at Fort Sumter that caused this event?  Why did Confederate battle flags evolve to look differently than the one that flew over Fort Sumter?

Finally, the class can discuss how the flag is displayed today.  Does it match the original intent of the flag?  Discuss the difference between a hate group displaying the flag and a historical exhibit of the flag.  Ask your students if they think the individuals who display the flag as a personal statement have inspected the historical documents as the class just did.  If those individuals did read those documents, would it alter their perspective on displaying the Confederate flag?

Going into this exercise, students may have been taught versions of what caused the Civil War that conflict with the historical record.  And they may have learned these alternative versions from the people they trust the most in their lives – family and friends.  If that is the case, it will often take some time for a student to resolve this internal conflict.  In fact, the conflict may not be resolved until well after the student has completed the course.  A teacher should be prepared for that.

And that might be the most difficult academic lesson to learn in life, always do your due diligence, no matter how much you trust a person.