Mars – From War of the Worlds to The Martian

“No one would have believed in the last years of the nineteenth century that this world was being watched keenly and closely by intelligences greater than man’s…”

So began H.G. Wells’ classic 1898 novel War of the Worlds.  Wells, of course, was describing a vision of Mars occupied by an advanced race.  That stands in stark contrast to the movie The Martian, which focuses on the isolation of an astronaut stranded on the red planet.  In a sense, that movie completes a transformation of the public’s perception of Mars underway since the Mariner 4 mission transmitted pictures of the Martian surface fifty years ago.  And while astronomy and the space age have played a key role in that transformation, it was also astronomers who had provided the earlier impression that Mars might be inhabited.

Prior to the 1990s, no planets were known to exist outside our Solar System.  There was a sense that such planets did exist, of course; science fiction like Star Trek is proof of that.  Giordano Bruno postulated as far back as the late 1500s that, “Innumerable suns exist; innumerable earths revolve around these suns in a manner similar to the way the seven planets revolve around our sun. Living beings inhabit these worlds.”  That, along with a lot of other things, did not endear Bruno to the Catholic Church, and he was burned at the stake for his troubles in 1600.  Nonetheless, without concrete observational proof of these planets, Mars seemed the best-known candidate for life beyond Earth.

In 1698, Christiaan Huygens published Cosmotheoros, which speculated about life not only on Mars but on the other planets of the Solar System as well.  Of Mars Huygens wrote, “But the inhabitants…our Earth must appear to them almost as Venus doth to us, and by the help of a telescope will be found to have its wane, increase, and full, like the Moon.”  Huygens was the first to discern that Saturn has rings, and he discovered Saturn’s moon Titan.  In 2005, ESA landed a probe on Titan named in Huygens’ honor.  It remains the most distant landing attempted in space.  While life on Mars was pure speculation on Huygens’ part, he was an accomplished astronomer.  And as we can tell by the rover Curiosity image below, his description of what Earth looks like from Mars is close to the mark.

Credit: NASA/JPL-Caltech/MSSS/TAMU

In 1784, William Herschel published On the Remarkable Appearances at the Polar Regions on the Planet Mars.  Like Huygens, Herschel ranks as one of the great observational astronomers, with the discovery of Uranus among his many accomplishments.  And like Huygens, Herschel also speculated on the possibility of life on Mars, stating, “And the planet (Mars) has a considerable but moderate atmosphere, so that its inhabitants probably enjoy a situation in many respects similar to our own.”  Both Huygens and Herschel set the stage for the boldest claim by an astronomer regarding life on Mars.

Percival Lowell was a contemporary of H.G. Wells.  Born in 1855, Lowell was a successful businessman with an interest in astronomy.  This interest intensified in the 1890s, when Lowell read Giovanni Schiaparelli’s published maps of Mars, which showed channels across the surface.  Schiaparelli was Italian, and English versions of his work translated the Italian word for channels, canali, into canals.  As Mars headed towards opposition (its closest approach to Earth) in 1894, Lowell set off to Arizona to make observations.  Perhaps with a strong preconception, or too much desire to make a groundbreaking discovery, Lowell published this drawing of Mars from his telescope.

Credit: Wiki Commons

Lowell speculated that intelligent life on Mars had built a series of canals to draw water from the polar ice caps to the mid-latitudes for irrigation.  Lowell’s work was rejected by other astronomers who also observed Mars during opposition but did not note canals.  Had Lowell been trained as a scientist, the lack of replication might have given him pause.  However, trained as a businessman, Lowell marketed his case directly to the public: at first through articles written for magazines such as the Atlantic Monthly, then through a series of books, defending the canal theory until his death in 1916*.  Though rebuffed by astronomers, Lowell’s work on Mars provided a framework for popular culture during the next half century.

Against this backdrop, Wells published War of the Worlds four years after Lowell’s first observation of Mars.  Often lost in the subsequent radio and movie versions was Wells’ original intent to critique British colonialism, in particular the concept of Social Darwinism.  This concept held that stronger nations are morally justified in subjugating weaker societies in a survival-of-the-fittest competition for resources.  Wells’ point was, if that is the case, how could Britain complain if a stronger race colonized it?  In America, of course, it is the Orson Welles 1938 radio broadcast version of the story that is best known.

The legendary broadcast was made so by media reports of panic induced by the realistic reporting of a Martian invasion.  However, the extent of the panic, if any existed at all, has been disputed.  From Wells’ work on, Martians became a cottage industry in both print and film.

And that cottage industry was all over the map.  From classics such as Ray Bradbury’s The Martian Chronicles and Robert Heinlein’s Red Planet to horrendous efforts such as the movie Santa Claus Conquers the Martians, intelligent life from Mars was a staple of popular culture.  Remarkably, astronomers were publishing papers as late as the 1950s suggesting that vegetation might exist on Mars.  Gerard Kuiper published a paper in the Astrophysical Journal in 1956 discussing the possibility of greenish moss on Mars during the spring and summer seasons (to be fair, Kuiper also postulated inorganic causes).  William Sinton published an article in 1958 suggesting spectroscopic evidence of vegetation on Mars.  The concept of life on Mars would take a sobering turn in 1965.

Mariner 4 was launched on November 28, 1964 and began its seven-month journey to fly by Mars.  This mission would be the first to bring close-up images of another planet back to Earth.  Prior to Mariner 4, astronomers had to rely on observatories that lacked the digital CCD and adaptive optics technologies available today.  Below are images of Mars taken with the 100-inch telescope at Mt. Wilson in 1956.

Credit: The Carnegie Institution for Science

What NASA got back from Mariner 4 in July, 1965 were images such as this:

Credit: NASA

The barren, cratered surface of Mars came as a disappointment.  Mariner 4 also measured a very thin atmosphere and the lack of a magnetic field.  As such, Mars does not have an ozone layer to protect organic compounds on the surface from ultraviolet radiation.  Without a magnetic field, the surface of Mars is also bombarded by a toxic stew of cosmic rays.  Quite simply, Mars is not capable of supporting life on a surface constantly exposed to harmful radiation from space.  However, future missions made it clear that Mars is an interesting planet in an altogether different way.  Much like the planet presented in The Martian.

In 1971, Mariner 9 became the first spacecraft to orbit another planet.  As a result, this mission was able to provide a comprehensive map of the Martian surface.  Imaging was delayed for two months by a massive dust storm, but once it commenced, planetary scientists were delighted.  Among the findings were the largest canyon and volcanic features in the Solar System, later named Valles Marineris and Olympus Mons.  Most importantly, Mariner 9 imaged ancient dry riverbeds and channels.  Water did once flow on the surface of Mars, albeit billions of years ago.  The success of Mariner 9 provided the impetus for Vikings 1 & 2, which landed on Mars in 1976 and gave us our first look at the surface.  This is how the landing was covered by ABC, including an interview with Carl Sagan.

Viking searched for life on Mars and found none at the landing zones.  There followed a 20-year lull in Mars exploration until 1997, when Pathfinder landed on Mars.  Tagging along for the ride was the Sojourner rover, the first of the Mars rovers, named after the 19th-century abolitionist Sojourner Truth.  By 1997, the public had more access to NASA missions, specifically the mission website that provided updates and images.  The original website is still online and can be accessed here.

By this time, it was problematic to present a story with Martians that had serious social commentary à la War of the Worlds.  The notion of an advanced race on Mars could no longer be taken seriously and was reduced to efforts such as the 1996 comedy Mars Attacks!  During the course of the 20th century, the public perception of Mars went from a planet that might host an advanced race, to a planet that might have vegetation, to a planet that, while geologically interesting, was devoid of life.  Conflict is the centerpiece of drama, and without the possibility of life on Mars, the traditional source of conflict had been removed.

Between Pathfinder’s landing on Mars in 1997 and its use as a plot device in The Martian in 2015, there have been several orbiter, lander, and rover missions to Mars.  Mars Odyssey has been in orbit since 2001, and the rover Opportunity has been exploring the surface since 2004.  NASA’s Mars Exploration website has images and video from all its active Mars missions.  Among the rover images are dust devils, which were a feature of the landscape in The Martian.

The results of these missions were used quite effectively to provide a reasonably accurate take on what living on Mars would look like in the movie.  Without an alien race to provide drama, the central conflict is the harshness of space itself.  The challenges of human travel to Mars include the limited availability of launch windows (once every 26 months, as Mars approaches opposition), protection from cosmic rays, landing significant tonnage on Mars with very little atmosphere to provide braking, physical deterioration caused by Mars’ low gravity (about 38% of Earth’s), and utilizing recently discovered water resources below the surface.  The last point also underscores the need to determine if microbial life exists in the subsurface of Mars where water still exists.  Can we avoid contaminating Mars with microbial life from Earth, and vice versa?  NASA has an Office of Planetary Protection dedicated to that last issue.  Ironically, it was exposure to Earth’s microbes that did in the invading Martians at the conclusion of H. G. Wells’ The War of the Worlds.
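
As a quick check on that 26-month figure: the launch-window cadence is just the Earth-Mars synodic period, which follows directly from the two orbital periods. A minimal sketch in Python:

    # The ~26-month launch window cadence is the Earth-Mars synodic period.
    EARTH_YEAR_DAYS = 365.25
    MARS_YEAR_DAYS = 687.0

    # 1/S = 1/P_inner - 1/P_outer for two roughly circular orbits.
    synodic_days = 1.0 / (1.0 / EARTH_YEAR_DAYS - 1.0 / MARS_YEAR_DAYS)
    print(f"Synodic period: {synodic_days:.0f} days (~{synodic_days / 30.44:.1f} months)")
    # -> Synodic period: 780 days (~25.6 months)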

The Martian signifies that Hollywood has caught up with science in presenting dramatic stories of Solar System exploration without intelligent life from Mars.  The other side of the human-versus-space conflict is the fact that while we may send a handful of astronauts to Mars in the next few decades, the vast majority of humanity will remain on Earth.  There will not be a mass migration to Mars if we foul things up on our home planet.  If space exploration is to help solve the challenges we face on Earth while we go to Mars, the key may be finding the right mix of international competition and international cooperation.  We can only hope that mix can be found in reality as readily as it is in the movies.

*Percival Lowell’s true legacy to astronomy was founding the Lowell Observatory in Arizona, where Pluto was discovered.  In 2015, its 4.3-meter telescope became fully operational.  You can check that out on the Lowell Observatory website.

**Image on top of post is Mars Pathfinder landing site in 1997, to be visited by Mark Watney in the future.  Credit:  NASA/JPL

Antimatter – Fact and Fiction

To clear things up right away: antimatter is real.  Many people I talk to (including numerous teachers) assume that antimatter is a mythical construct from science fiction such as Star Trek.  This confusion seems to lie in the fact that, unless you work at a particle collider, there is usually no interaction with antimatter of any sort in daily life.  Relativity has everyday applications, most famously in nuclear power (and weaponry, which hopefully we’ll never have to experience).  Quantum mechanics is responsible for the transistor and laser technology.  While most people do not understand the intricacies of relativity or quantum theory, they most certainly understand both are very real.  Antimatter has very few practical applications and thus is mostly heard about in a fictional context.

The story of antimatter began with the attempt by Paul Dirac in the late 1920s to produce an equation that would describe the properties of an electron traveling near the speed of light.  This was groundbreaking work, as it required merging quantum mechanics (which describes the properties of atomic particles) with special relativity (which describes the properties of matter traveling at near light speed).  The end result was an equation with two solutions: an electron with a negative charge and one with a positive charge.  The actual equation is beyond the scope of this post, but I’ll use an analogy we encounter in beginning algebra.  Take the following equation:

(x + 1)(x – 1) = 0

Since one of the terms in parentheses must equal zero to produce the proper result, we arrive at two different solutions, x = 1 and x = -1.  If we are modeling a situation with a zero bound, that is, a physical constraint that cannot drop below zero, we would typically disregard the x = -1 solution.  An example might be in economics, where the price of a consumer good cannot drop below $0.00.  Dirac faced this same dilemma.  Electrons have a negative charge, and the initial temptation would be to disregard the positive-charge result.  However, Dirac had a different take on this development.  In 1928, Dirac published The quantum theory of the electron.  In this paper, Dirac postulated the existence of an antielectron.  This particle would have the same properties as an electron except for having a positive, rather than negative, charge.
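
For the curious, you can watch both roots fall out of the factored equation with a couple of lines of Python (this uses the third-party sympy library):

    from sympy import symbols, solve

    x = symbols('x')
    # Solve the factored quadratic from the text; both roots are valid.
    print(solve((x + 1) * (x - 1), x))  # -> [-1, 1]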

Paul Dirac. Credit: Wiki Commons

This bold prediction was verified four years later by Carl Anderson, who detected particles with the same mass as an electron but with a positive charge.  How does one measure something like this?  Anderson was observing cosmic rays in a cloud chamber.  A magnetic field will move oppositely charged particles in opposite directions.  This movement can be photographed as the particles leave trails in the cloud chamber filled with water vapor.  One year later, Dirac would be awarded the Nobel Prize, and Anderson followed suit in 1936.  Anderson’s paper dubbed the positively charged electron a positron.

First recorded positron. An electron would have spiraled in the opposite direction. Credit: Carl Anderson, DOI: http://dx.doi.org/10.1103/PhysRev.43.491

During the 1950s, the antiproton and antineutron would be discovered.  These particles have significantly more mass than positrons and thus were more difficult to produce.  Protons have a positive charge and antiprotons have a negative charge.  Neutrons are electrically neutral; however, their constituent particles have charges: an antineutron is made of three antiquarks whose charges are opposite those of the three quarks that make up a neutron.  So, now that we have established these things are real, how can we use this in an educational setting?

If the student learned about antimatter from science fiction, the first task should be to discern how antimatter is presented in the reading or movie versus how it exists in reality.  The most prominent example in popular culture is the use of matter-antimatter propulsion in Star Trek to power starships on interstellar explorations.  The overall premise is not bad: when matter and antimatter collide, they annihilate each other completely into pure energy.  Unlike fission and fusion reactions, which convert a small fraction of the available mass into energy, a matter-antimatter reaction is 100% efficient.  So, what stops us from using this as a potential source of energy?

The primary challenges of antimatter production are the cost, quantity, and storage of antimatter.  The cost to produce 10 milligrams of positrons is about $250 million.  To put that in perspective, it would require about 1,100 kilograms (just over a ton) of antimatter to produce all the energy consumed by the United States in one year.  That would cost roughly $2.75 x 10¹⁶ to produce, or about 1,500 times the GDP of the United States.  Obviously, not an equitable trade-off.  When producing antihydrogen, currently the best we can do is produce it on the scale of a few dozen atoms and store it for about 1,000 seconds.  The storage problem comes from the fact that antimatter is annihilated as soon as it comes into contact with matter.  And that last fact would make it seem obvious as to why there is not a lot of antimatter in the universe.  However, there is one little problem.
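
Here is that arithmetic laid out in Python so the assumptions are explicit; the $250 million per 10 mg production cost, the roughly 10²⁰ J of annual U.S. energy use, and a U.S. GDP of about $18 trillion are the rough figures behind the numbers above:

    # Rough cost of powering the U.S. for a year with antimatter.
    C = 3.0e8                      # speed of light, m/s
    us_energy_per_year_j = 1.0e20  # approximate annual U.S. energy use

    antimatter_kg = us_energy_per_year_j / C**2
    cost_per_kg = 250e6 / 10e-6    # $250 million per 10 mg, in dollars per kg
    total_cost = antimatter_kg * cost_per_kg

    print(f"Antimatter needed: {antimatter_kg:,.0f} kg")        # ~1,100 kg
    print(f"Production cost: ${total_cost:.2e}")                # ~$2.8e16
    print(f"Multiples of U.S. GDP: {total_cost / 18e12:,.0f}")  # ~1,500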

Models of the Big Bang predict equal amounts of matter and antimatter to have been produced when the universe was formed, since the laws of physics state matter and antimatter should form in pairs.  If that had happened, all the matter and antimatter would have mutually destroyed each other and we would be left in a universe with no matter, only energy.  That, of course, is not the case.  Very early in the universe’s existence, an asymmetry between matter and antimatter occurred: for every billion antimatter particles, an extra matter particle existed.  And that extra particle per billion is responsible for all the matter, including our bodies, in the universe.  How that extra particle of matter was produced remains a mystery for cosmologists to solve.

So what happened to all the antimatter that was created during the Big Bang?  It was destroyed as it collided with matter, and the energy released ended up in the radiation bath we now observe as the cosmic microwave background (CMB).  Originally released as high-energy radiation, the CMB has been redshifted all the way to the radio end of the spectrum by the time it reaches Earth.  If our eyes could detect radio waves, instead of seeing stars in a dark sky at night, we would see a glow everywhere we looked.  That is because we are embedded within the remnants of the Big Bang.

CMB as imaged by the Planck mission. Cutting a sphere from pole to pole produces an oval when laid flat. This is a map of the entire celestial sphere and demonstrates the CMB is ubiquitous. Credit: ESA and the Planck Collaboration.

The discovery of the CMB in the 1960s was a crucial piece of evidence in favor of the Big Bang theory over its then competitor, the Steady State theory.  The CMB has what is known as a blackbody spectrum, the kind emitted by matter in a hot, dense state.  The early universe, with its massive matter/antimatter annihilation, was in exactly such a state.  And this leads us back to the original question: does antimatter exist today, and can it have any practical purposes?

In its natural state, antimatter exists mostly in the form of cosmic rays.  The term ray is a bit of a misnomer: cosmic rays consist of atomic nuclei, mostly protons (hydrogen nuclei), traveling near the speed of light.  Some cosmic rays are produced by the Sun, but most originate outside the Solar System.  Their exact source is a mystery to astronomers, although supernovae and jets emanating from black holes are suspected to be the primary culprits.  Trace amounts of positrons have been detected in cosmic rays in space before they reach the Earth’s atmosphere.  When cosmic rays collide with atmospheric molecules, the protons are shattered into various subatomic particles.  When a cosmic ray collided with Carl Anderson’s cloud chamber, one of the particles produced was the positron that became the first observed antimatter particle in 1932.

Today, colliders such as those at CERN in Switzerland produce antimatter in the same manner.  Protons are accelerated to very high speeds and sent colliding into a metal barrier.  The impact breaks apart the proton into various subatomic particles.  Positrons can also be created via radioactive decay, and this process provides one of the few practical applications of antimatter.  Positron Emission Tomography (PET) scans operate on this principle.  Radioactive material is produced by a cyclotron on the hospital site and injected into the patient.  As positrons are released, they collide with matter in the body; the two mutually annihilate and eject gamma rays in opposite directions.  The scanner then detects the gamma rays to produce images of areas of the body that x-rays cannot.

What most people really want to know about antimatter is: can it be used to produce warp drive propulsion?  Even if the production and storage problems were solved, the gamma rays produced by matter-antimatter collisions fly off in random directions; an annihilation acts more like a bomb than a rocket.  And even a conventional antimatter rocket could not propel a spacecraft past the speed of light.  As a rocket accelerates closer to the speed of light, according to relativity theory, its mass approaches infinity.  No matter how much antimatter you could annihilate, you could not produce enough energy to break the light barrier.  Warp drive, however, would not act like a conventional rocket at all.

Original 11 foot model of U.S.S. Enterprise. Model was used to represent a starship 947 feet long and 190,000 tons. The fictional Enterprise is almost as long as the Eiffel Tower is high and three times heavier than an aircraft carrier. Credit: Mark Avino, National Air and Space Museum, Smithsonian Institution

The concept of warp drive operates on a different principle.  Warp drive does not push a rocket through space, but rather compresses space in front of the spacecraft to, in effect, shorten the distance between two points.  The space is then expanded behind the spacecraft.  Is this possible?  Right now, no.  The space-time fabric is not very pliable.  Estimates of the amount of mass/energy required to propel a starship the size of Star Trek’s U.S.S. Enterprise this way range from a mass the size of Jupiter to all the mass in the universe.  Radical breakthroughs in both physics and engineering are required to make warp drive possible.

A unification of quantum mechanics and relativity might provide a pathway to warp drive.  The operative word here is might.  We simply do not know what such an advancement would bring.  When Max Planck discovered in 1901 that energy was transmitted in discrete packets rather than a continuous stream, he had no idea that development would lead to such applications as lasers, digital cameras, LED lights, cell phones, and modern computers.  And so it is today: we can only speculate what such theoretical advancements might bring for future applications to harness the great potential energy of antimatter.  Perhaps we’ll have the good fortune to see our current students working to solve those problems.

*Image on top of post shows bubble chamber tracks from the first antihydrogen atoms produced at CERN in 1995.  Credit:  CERN

Elementary Einstein

While I was in grade school, a teacher wrote the equation E = mc² on the board and flatly stated, “less than ten people in the world understand this equation.”  In retrospect, that really seems an odd statement to make about a rather simple algebraic equation.  However, it did speak to the mystique relativity holds among even the educated public.  Nonetheless, this classic equation, which demonstrates the equivalence between matter and energy, is perhaps the easiest aspect of relativity theory to understand.

Relativity typically deals with phenomena that we do not experience in our day-to-day lives.  In the case of special relativity, most of its esoteric quality concerns objects as they approach the speed of light, the highest velocity possible.  As an object approaches this upper bound, its clock runs slower compared to stationary observers and its mass approaches infinity.  The fastest most of us ever travel is aboard a jet airliner at about 700 mph.  While that seems fast, it is only about one millionth the speed of light, much too slow for relativistic effects to be noticed.  Thus, relativity has a strongly counterintuitive feel for us.
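
To make “much too slow” concrete, here is a minimal sketch of the Lorentz factor at airliner speed; the deviation from 1 works out to roughly half a picosecond of time dilation per second:

    import math

    # Lorentz factor at airliner speed: indistinguishable from 1.
    C = 3.0e8                 # speed of light, m/s
    v = 700 * 0.44704         # 700 mph in m/s (~313 m/s)

    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    print(f"v/c       = {v / C:.2e}")      # ~1.0e-06
    print(f"gamma - 1 = {gamma - 1:.2e}")  # ~5.4e-13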

That alone does not explain relativity’s fearsome reputation as expressed by my teacher some forty years ago.  Some of that reputation can be attributed to how the media reported the experimental confirmation of general relativity after the eclipse of 1919.  General relativity provides a more comprehensive theory of gravity than Newton’s laws.  During the eclipse, astronomers were able to measure the Sun’s gravity bending starlight, an effect not predicted by Newton but predicted by general relativity.  The New York Times reported that:

“When he (Einstein) offered his last important work to the publishers he warned them that there were not more than twelve persons in the whole world who would understand it.”

That quote referred to general relativity, which is very complex mathematically and was only four years old in 1919.  It is understandable for those not trained in modern physics to conflate special and general relativity.  Add to that the fact that the equation E = mc² is most famously associated with Einstein, and you get the perception that it could not be understood unless you were a physicist.  As we will see below, that perception is most assuredly false.

Let’s start with a hypothetical situation where mass can be completely converted to energy.  A science fiction example of this is the transporter in Star Trek, which converts a person to energy, transmits that energy to another location, then reconverts the energy back into matter in the form of that person.  How much energy is present during the transmission stage?  Einstein’s famous equation gives us the answer.

Let’s say Mr. Spock weighs about 200 pounds.  Converted to kilograms, that comes out to 90 kg.  The speed of light is 3.0 x 10⁸ m/s.  The mass-energy equation gives us:

E = (90 kg)(3.0 x 10⁸ m/s)²

E = 8.1 x 10¹⁸ kg*m²/s²

The unit kg*m²/s² is a unit of energy called the joule (J).  So as Mr. Spock is beaming down to the planet surface, his body is converted to 8.1 x 10¹⁸ J of energy.  Exactly how much energy is that?  Well, the average amount of energy consumed in the United States each month is 8.33 x 10¹⁸ J.  That’s right: if you converted your body to energy, it would provide almost enough to power the United States for an entire month.  As you can see, a small amount of matter has a whole lot of energy contained within it.
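
The worked example above takes only a few lines of Python; the monthly U.S. energy consumption figure is the one quoted in the text:

    # E = mc^2 for a 90 kg Mr. Spock.
    C = 3.0e8        # speed of light, m/s
    mass_kg = 90.0

    energy_j = mass_kg * C**2
    print(f"E = {energy_j:.2e} J")  # 8.10e+18 J

    us_energy_per_month_j = 8.33e18
    print(f"Months of U.S. energy use: {energy_j / us_energy_per_month_j:.2f}")  # 0.97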

However, most nuclear fission and fusion processes convert only a small fraction of the available matter to energy.  For example, let’s take a look at the fusion process that powers the Sun.  It’s a three-step process in which four hydrogen nuclei are fused to form a single helium nucleus.  The four hydrogen atoms have four protons in their nuclei, whereas the final helium atom has two protons and two neutrons in its nucleus (along the way, two protons become neutrons, each releasing a positron and a neutrino).  The final helium nucleus has slightly less mass than the four hydrogen nuclei that formed it, and in the solar fusion cycle this mass difference is converted to energy.

The mass of four hydrogen atoms is 6.693 x 10⁻²⁷ kg and the mass of the final helium atom is 6.645 x 10⁻²⁷ kg, the difference between the two being 0.048 x 10⁻²⁷ kg.  How much energy is that?  Using the famous Einstein equation:

E = (0.048 x 10⁻²⁷ kg)(3 x 10⁸ m/s)²

E = 4.3 x 10⁻¹² J

By itself, that might seem like a small amount of energy.  However, the Sun converts some four million tons of mass into energy each second for a total output of 4 x 10²⁶ watts (one watt = one J/s).  Worry not: although average-sized for a star, the Sun is still pretty big.  In fact, it constitutes over 99% of the mass of the Solar System.  The Sun will burn up less than 1% of its mass during its lifetime before becoming a planetary nebula some five billion years from now.
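
The four-million-tons-per-second figure is easy to check from the luminosity alone; a quick sketch:

    # Mass-to-energy conversion rate of the Sun from its luminosity.
    C = 3.0e8                    # speed of light, m/s
    solar_luminosity_w = 4.0e26  # J/s, the figure quoted above

    mass_rate_kg_per_s = solar_luminosity_w / C**2
    print(f"Mass converted: {mass_rate_kg_per_s:.1e} kg/s")
    # -> ~4.4e9 kg/s, i.e. roughly four million metric tons per second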

Albert Einstein, 1904.

Einstein published this equation in 1905, in what would later be called his Annus Mirabilis (Miracle Year).  During this year, Einstein published four groundbreaking papers along with his doctoral dissertation.  These papers described the photoelectric effect (how light acts as a particle as well as a wave, a key foundation of quantum mechanics), Brownian motion (heat in a fluid is caused by atomic vibrations, which helped establish atoms as the building blocks of matter), special relativity, and finally, the mass-energy equivalence.  Ironically, it was the photoelectric effect and not relativity that was cited when Einstein was awarded the Nobel Prize in 1921.

Information traveled a lot slower back then, and the fame that awaited Einstein was more than ten years away.  The major news stories that year were the conclusion of the war between Russia and Japan and the inauguration of Theodore Roosevelt to another term as president.  The New York Times did not mention Einstein at all in 1905.  Even in 1919, when Einstein became a famous public figure, some were mystified at the attention.  The astronomer W.J.S. Lockyer stated that Einstein’s ideas “do not personally concern ordinary human beings; only astronomers are affected.”  As we now know, the public was ahead of the curve in discerning the importance of Einstein’s work.

And that interest remains today.  Yet there is very little opportunity for students to take a formal course in relativity (or quantum mechanics) unless they are college science majors.  Does the mathematics of relativity make it prohibitive for non-science majors to study?  It shouldn’t.  A graduate-level course in electromagnetism contains higher-order mathematics that is very complex.  Yet that does not stop us from presenting the concepts of magnetic fields and electrical circuits in grade school.  As educators, we should strive to do the same for relativity.  And I can’t think of a better place to start than that famous equation E = mc².

*Photo on top of post is sunset at Sturgeon Point, 20 miles south of Buffalo.  The light photons recorded in this image were produced via a nuclear fusion reaction in the Sun’s core that occurred 1 million years ago, when only 18,500 humans lived on Earth.  Once the photons were released at the Sun’s surface, it took only an additional eight minutes to end their journey on Earth in my camera.  Photo:  Gregory Pijanowski

William Herschel, A Man for All Seasons

Located about 100 miles west of London, the city of Bath is known for the ancient Roman baths that attract 1 million visitors each year.  One half mile west of the baths is the Herschel Museum of Astronomy, the 18th-century residence of William Herschel.  As an observational astronomer, William Herschel tends to get overlooked in favor of the great theorists such as Isaac Newton.  Nonetheless, the work Herschel did in Bath greatly expanded our knowledge of the universe and remains topical in astronomy research.

In contemporary parlance, Herschel was a career changer.  Originally a musician by trade, Herschel took an interest in astronomy in 1773 at the age of 35.  Herschel was a self-made man.  He had no formal training in astronomy and taught himself the art of telescope making.  What piqued Herschel’s interest in astronomy was a book on musical mathematics called Harmonics by Robert Smith.  Herschel enjoyed the book so much he sought out other books by Smith and found one titled Opticks.  This book, along with Astronomy by James Ferguson, formed the basis of Herschel’s training in the field.  Herschel remained a music teacher during the day and an astronomer at night.  In his endeavors he was joined by his sister, Caroline Herschel, who became his lifelong assistant.

Above:  Herschel’s Symphony No. 8 in C minor by London Mozart Players.  Written in 1761, it is one of 24 symphonies composed by William Herschel.

Herschel was unable to buy a telescope suitable for his ambitions.  As a result, along with his sister Caroline, he took up the task of making his own telescopes.  Astronomers today do not need to do this, obviously, but it is similar to the way many astronomers write their own computer code for their work.  This type of specialized software is not available at a store in your local shopping mall.  Over his lifetime, Herschel would grind and polish hundreds of mirrors, some of which he sold to help fund his work.

Herschel’s primary goal was quite formidable: to conduct an all-sky survey.  Motorized drives to track objects as they move across the night sky did not exist in Herschel’s day, so he would observe at a fixed angle on the meridian and log objects as they crossed the field of view.  The next evening, Herschel would lower or raise the telescope to a different angle for complete coverage of the night sky.  This effort resulted in the publication of the Catalogue of Nebulae and Clusters of Stars (CN) in 1786, the forerunner of the New General Catalogue (NGC).  Along the way, Herschel would make quite a few interesting discoveries.

On the night of March 13, 1781, from his residence in Bath, Herschel observed through his 6-inch telescope what he thought was a comet.  Herschel noted:

“On Tuesday, the 13th of March, 1781, between ten and eleven in the evening, while I was examining the small stars in the neighborhood of H Geminorum, I perceived one that appeared visibly larger than the rest: being struck with its uncommon magnitude, I compared it to H Geminorum and the small star in the quartile between Auriga and Gemini, and finding it so much larger than either of them, suspected it to be a comet.”

Measurements of the orbit of this object revealed it to be not a comet, but a planet, the first planet discovered since the ancient astronomers categorized the five naked eye planets of Mercury, Venus, Mars, Jupiter, and Saturn.  Below is an image of how the night sky appeared in Bath as Herschel made his first observation of this planet.

Herschel wanted to call this planet Georgium Sidus (The Georgian Star) to honor King George III.  Others sought a less English-centric name.  Uranus was proposed since, in Greek mythology, Uranus is the father of Saturn.  It was not until 1850 that the planet was officially designated as Uranus.  As Uranus is twice as far from the Sun (1,783,939,400 miles or 2,870,972,200 km) as Saturn, this discovery doubled the size of the known Solar System.  It takes 84 years for Uranus to orbit the Sun; thus, Uranus has made only 2.8 revolutions around the Sun since its discovery.  In 1986, Voyager II became the only spacecraft to date to pay a visit to Uranus.  A view of Uranus from Voyager II is below:

Uranus in January 1986. The image on the right is false color to enhance color differentials. The South Pole (red) is darker than the equatorial regions. Credit: NASA/JPL.

Uranus’ South Pole was facing Voyager II, as the planet is inclined 98 degrees compared to Earth’s 23.5-degree axial tilt.  If Earth had the same axial tilt as Uranus, the Northern Hemisphere would face the Sun in June while the entire Southern Hemisphere would be in darkness.  The situation would be reversed in December.  When Voyager II flew past Uranus, the Northern Hemisphere was shrouded in darkness.  If NASA’s plans to send an orbiter around Uranus come to fruition in the 2030s, the Northern Hemisphere would then be visible.

This discovery was a game changer for Herschel.  King George III, as the Revolutionary War raged in the American colonies, provided Herschel with a salary to pursue astronomy on a full-time basis.  This would launch Herschel on a decade of discovery.

In 1784, Herschel published On the Remarkable Appearances at the Polar Regions on the Planet Mars.  This paper presented the results of observations of Mars taken from 1777 to 1783.  A few of Herschel’s drawings of Mars are below:

Credit: Royal Astronomical Society

Among the conclusions Herschel came to from these observations are:

The axial tilt of Mars is 28° 42′, reasonably close to the now-established value of 25 degrees.

The length of the Martian day is 24 hours, 39 minutes, and 21 seconds.  This measurement was off by only about 2 minutes.

The luminous areas at the polar regions were ice caps, which, like Earth’s, would vary in size on a seasonal basis.  Today, we know the northern ice cap has a permanent layer of water ice.  The southern ice cap has a permanent top layer of 8 meters of carbon dioxide ice and a much larger layer of water ice below.  The seasonal variations of the ice caps are due to the freezing and sublimation of carbon dioxide ice.

Herschel concluded his paper by stating, “And the planet has a considerable but moderate atmosphere, so that its inhabitants probably enjoy a situation in many respects similar to our own.”  Ok, this one didn’t quite pan out, as we now know Mars’ mostly carbon dioxide atmosphere is much thinner than Earth’s and life does not exist on the surface.  However, Mars’ atmosphere in its ancient past must have been warmer and more substantial for water to have been present on the surface, for which the evidence is now pretty conclusive.  The search for life in Mars’ past, and for microbial life in the Martian subsurface, which still holds water, is a major component of NASA’s Mars Exploration Program.

While several rovers and orbiters have provided thousands of high-resolution images of Mars, Earth-bound telescopes still acquire key data on Mars’ past and present.


Herschel would also discover two moons each of Saturn and Uranus.

The Uranus moons were discovered on the same day in 1787 and were named Titania and Oberon.  Both moons were imaged by Voyager II on its flyby of Uranus.  Titania features fault valleys as long as 1,500 km, and Oberon has a mountain 4 miles high.

Titania, taken by Voyager II from 369,000 km (229,000 miles). Credit: NASA/JPL

Two years later, Herschel would discover the Saturn moons Mimas and Enceladus.  Both these moons have been imaged by the Cassini orbiter mission.  Mimas features a large impact crater that has given it the nickname “Death Star”.

Mimas, whose crater gives it a resemblance to the Star Wars Death Star. Credit: NASA/JPL/SSI

The crater has been named in Herschel’s honor.  The crater itself is 140 km (88 miles) wide, and its outer walls are 5 km high with a central peak 6 km high.  An impact just a bit larger would most likely have destroyed Mimas.  As interesting as this is, it is Enceladus that has proven to be one of the biggest surprises of the Cassini mission.

Only 500 km wide, Enceladus is very bright, reflecting almost 100% of the sunlight it receives.  Thought to be too small for geologic activity, Enceladus provided an unexpected finding when Cassini imaged geysers spraying ice and water vapor into space.  Further gravity analysis indicates an ocean 10 km deep underneath an ice shell 30-40 km thick.  Recently, it has been determined the geysers are more akin to the curtain eruptions seen in volcanic activity in Hawaii and Iceland.  Still, this water is thought to be at least 194 degrees Fahrenheit at the ocean floor, the heat generated by tidal flexing from Saturn’s gravity.  Where there is heat and water, there may be life.  Cassini has flown through the geysers, but its instrument package was not specifically designed for this task.  As such, Enceladus is a priority for NASA exploration in the next decade.  Unlike the subsurface ocean of Europa, the ocean of Enceladus could be sampled without having to bore down through several kilometers of ice.

Plumes of water ice emanating from the south pole of Enceladus. Credit: NASA/JPL/Space Science Institute.

As impressive as Herschel’s Solar System discoveries were, the task of completing an all-sky survey meant he studied deep space objects more than planets and their satellites.  Herschel would discover numerous nebulae and binary stars that, prior to his telescopes, were not resolvable.  By 1785, with the salary granted by King George III, Herschel had moved from Bath to the London area and was using a 19-inch aperture telescope to map the Milky Way.  The results were published as On the Construction of the Heavens.

Credit: Royal Astronomical Society.

The bright spot in the center is the Sun.  Herschel was operating under the handicap of observing in visible light only, which is extinguished by the interstellar medium.  This gave the illusion that the Sun was located in the center of the Milky Way, as the interstellar medium dimmed optical light in all directions equally.  It is like trying to map trees in a foggy forest: there may be more trees in one direction than another, but the fog cuts down your vision at equal depths in all directions.  In fact, it was not until the 1920s, when Harlow Shapley determined the Sun was located in a spiral arm of the Milky Way and not at its center, that this problem was resolved.  For astronomers to obtain a comprehensive view of the universe, the entire electromagnetic spectrum had to be employed.  And it was Herschel who provided the first step in that direction.

In 1800, Herschel was measuring the temperatures of the different colors of sunlight separated by a prism.  As Herschel took temperatures from the violet end of the spectrum to the red, he discovered an increase in temperature as the thermometer was moved towards the red.  Finally, the thermometer was placed just beyond the red light, and the temperature increased even more.  It was apparent the Sun was emitting some form of radiation beyond the red end of the visible spectrum.  More experiments revealed this invisible radiation had the same properties as visible light: it could be reflected and refracted.  Herschel published this result in the paper titled Experiments on the Refrangibility of the Invisible Rays of the Sun.  Herschel referred to this radiation as calorific (heat) rays; today we call it infrared light.

Credit: NASA

Optical light is just a small part of the electromagnetic spectrum.  Among the other parts we are unable to detect with our eyes, we can detect radio waves with radio receivers, ultraviolet waves with our skin when we get a sunburn, and x-rays with film when we go to the doctor.  Those forms of radiation differ from visible light only in the size of their respective wavelengths and, consequently, their energy.  Infrared is used for remote control and night vision technology.  Most of the heat we feel in our day-to-day activities is the result of infrared light, and our bodies emit infrared radiation in the form of body heat, which is what night vision sensors detect.

Cat in infrared. The eyes appear warmer than the body because the cat’s fur traps heat, preventing it from escaping into the surrounding air where the infrared camera would detect it. Credit: NASA/IPAC

Planets radiate mostly in the infrared, as do cool galactic gas clouds.  Certain wavelengths of infrared radiation have the ability to pass through dust clouds.  Thus, infrared observations can peer into dusty regions of space and see what lies behind the shroud of dust.  As a result, infrared astronomy is used for planetary observations, to detect protostars inside nebulae, and to peer at the galactic center behind the wall of interstellar dust.  In other words, the form of radiation Herschel discovered is now used to better understand the very objects Herschel observed.

The video below is a montage of 2.5 million images of the Milky Way taken by the Spitzer Infrared Space Telescope.  As certain wavelengths of infrared are not absorbed by the interstellar medium as optical light is, the Spitzer images provide us with the true shape of our home galaxy including the central bulge that contains a massive black hole.

The Spitzer GLIMPSE360 website has an interactive feature where you can explore different regions of the Milky Way or select objects to view.  The Milky Way is not the only region that can be explored in infrared.  In 2014, the Keck Observatory imaged Uranus in the infrared.

Images of Uranus, such as the ones taken by Voyager, tend to reveal a featureless planetary disk.  However, the Keck infrared images revealed storm activity to an extent not seen before on Uranus.  This might be indicative of an internal heat source not previously thought to exist on the planet.  Astronomers will need to revise current theories on the interior of Uranus as a result of this work.

Left: Uranus at 1.6 microns. White spots are storms below the upper cloud layer. Right: Uranus at 2.2 microns. White spots are storm activity just below the tropopause. Uranus’ ring system is visible in this image. Credit: Imke de Pater (UC Berkeley) & W. M. Keck Observatory images.

As one would expect, many honors have been bestowed upon the Herschel name, including the 3.5-meter infrared Herschel Space Observatory and the 4.2-meter William Herschel Telescope in the Canary Islands.  However, the highest honor we can pay William Herschel is the continued exploration of the celestial bodies he discovered, using the infrared radiation that he also discovered.

*Image on top of post, Sir William Herschel, by Lemuel Francis Abbott, oil on canvas, 1785, © National Portrait Gallery, London, Creative Commons License.

Mount Wilson – the Birthplace of Solar Physics

Perched 5,710 feet above the Los Angeles Basin in the San Gabriel Mountains, Mt. Wilson Observatory is noted for the groundbreaking work of Edwin Hubble during the 1920s.  In that decade, Hubble discovered galaxies beyond the Milky Way and the expansion of the universe at the observatory’s 100-inch telescope, then the world’s largest.  Located a few hundred feet from the famous telescope lie three solar telescopes whose observations provided the groundwork for our current understanding of the Sun.  This story did not begin in the warm climes of Southern California, but in the Upper Midwest at Yerkes Observatory, 90 miles northwest of Chicago.

The first director of Yerkes Observatory was George Ellery Hale.  The observatory, established in the 1890s, is dubbed the birthplace of modern astrophysics.  Hale was the guiding force behind the building of the observatory and wanted to move astronomy from the study of the positions of celestial bodies in the night sky to the physics behind those objects.  Hale had an intense interest in the study of the Sun and set out to build a solar telescope on the grounds at Yerkes.  The result was the Snow Solar Telescope, built in 1903.  The name of the telescope is not derived from Wisconsin winters, but from Helen Snow of Chicago, who anted up $10,000 ($258,000 in 2014 dollars) to build it.  However, poor optical quality necessitated a move of the Snow from Wisconsin to California.

Driving down a highway on a hot summer day, you have probably seen heat waves rising from the ground and distorting your vision.  This effect is magnified if you attempt to take a picture through a telephoto lens.  The Snow Solar Telescope design had a movable mirror (coelostat) reflect the Sun’s image to a 30-inch mirror, which in turn reflected the light 60 feet to a 24-inch mirror that projected the final 6-inch image of the Sun.  Heat waves from the ground interfered with the image quality as the light traveled its 60-foot path horizontally to its final destination.  Hale thought relocating the Snow to an area with thinner air would reduce the heat interference problem.

As a result, the Snow was dismantled and transported to Mt. Wilson in California in 1904.  One does not normally associate the Los Angeles basin with good optics, but the summit of Mt. Wilson lies above the atmospheric inversion layer that traps the infamous Los Angeles smog like a lid on a pot.  This, combined with the thinner air of the higher altitude, improved the image quality of the Snow.  Hale set out to study sunspots, which would provide the first significant scientific finding from Mt. Wilson.

The Sun on July 28, 1906. Earth superimposed for scale. Credit: Mt. Wilson Observatory.

The oldest known observations of sunspots date back to 800 B.C., from ancient Chinese and Korean astronomers.  Historical recordings of sunspot numbers date back to the 1600s and constitute one of the longest ongoing scientific programs of observation.  At the dawn of the 1900s, the nature of these spots on the Sun’s surface was not known.  Among the competing theories at the time were sunspots as debris clouds from solar tornadoes, as areas hotter than the surrounding surface, and, one of the most colorful ideas, as holes in a shroud around the Sun that hid a solid surface underneath.  The Snow Solar Telescope would begin the process of clarifying the nature of sunspots.

The Snow was equipped with a high-resolution spectrograph.  With this, Hale was able to record and compare spectral lines from regions of the Sun’s surface with and without sunspots.  These spectral lines were in turn compared to spectra produced in a laboratory under different temperature regimes.  In the cooler regime, many spectral lines were strengthened and a few were weakened.  The spectra obtained from sunspots correlated with the laboratory spectra obtained in the cooler regime.  Hence, sunspots are regions on the solar surface that are cooler, and thus darker, than the surrounding area.  The question remained: why were these regions cooler?  To answer this would require better solar images than the Snow could provide.

As George Ellery Hale was wont to do, he built a bigger and better telescope.  Despite the thinner air at Mt. Wilson, heat interference still proved to be an issue with the Snow.  To solve this, Hale built a telescope with a vertical, rather than horizontal, design.  The new 60-foot solar tower was completed in 1908.  In his observations of sunspots, Hale was reminded how their structures were similar to the classic iron-filings magnetic field experiments.  Based on this hunch, Hale set out to detect the presence of Zeeman lines in sunspot spectra.

60-foot Solar Tower (left) next to Snow Solar Telescope (right). Credit: Gregory Pijanowski

The black lines seen in spectra are absorption lines.  Different elements absorb light at different wavelengths, and this is how astronomers can figure out what stars, including the Sun, are made of.  If an atom absorbs light with the same energy as the difference between two electron orbital levels, the light energy is converted into energy that moves an electron to a higher orbit.  The result is that the absorbed light creates a black line in a spectrum.  The presence of a magnetic field creates additional electron energy levels.  As a consequence, a single absorption line can split into several absorption lines, as can be seen below:

Credit: Astrophysics and Space Research Group, The University of Birmingham.

Using the new 60-foot solar tower, Hale was able to detect the presence of Zeeman lines in the spectra of sunspots.  In fact, the magnetic fields in sunspots are several thousand times stronger than Earth’s magnetic field.  The intense magnetic fields in these areas of the Sun push plasma convection to areas outside the sunspot regions.  As it is this convection that transports heat to the solar surface, the magnetic blockage of this convection causes sunspots to be cooler by about 2,000 degrees Celsius than the surrounding region.
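
To get a feel for what Hale was measuring, here is a back-of-the-envelope estimate of the simple (normal) Zeeman splitting for a sunspot-strength field; the ~0.3 tesla field and the 617.3 nm line are assumed, illustrative values, not figures from Hale’s papers:

    import math

    # Normal Zeeman splitting for a sunspot-strength magnetic field.
    E_CHARGE = 1.602e-19    # electron charge, C
    M_ELECTRON = 9.109e-31  # electron mass, kg
    C = 3.0e8               # speed of light, m/s

    B = 0.3                 # tesla, ~3,000 gauss (Earth's field is ~5e-5 T)
    wavelength = 617.3e-9   # m, a visible line

    # Frequency shift: delta_nu = eB / (4*pi*m_e), converted to a
    # wavelength shift via delta_lambda = lambda^2 * delta_nu / c.
    delta_nu = E_CHARGE * B / (4 * math.pi * M_ELECTRON)
    delta_lambda = wavelength**2 * delta_nu / C
    print(f"Line splitting: {delta_lambda * 1e12:.1f} pm")  # ~5.3 pm

A shift of a few picometers is tiny, but well within reach of a high-resolution spectrograph, which is why sunspot fields were detectable a century ago.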

Hale published this result in 1908, six years after Zeeman won the Nobel Prize for his discovery of the effect.  This was the first time a magnetic field had been detected beyond Earth.  Hale would be nominated for a Nobel as a result of this discovery, but ultimately did not receive one.  Health issues eventually forced Hale away from Mt. Wilson, but not before he built what would stand as the largest solar observatory from 1912 to 1962.

Mt. Wilson 150-foot Solar Tower. Photo: Gregory Pijanowski

The 150-foot solar tower would be Hale’s last major contribution to solar astronomy.  The 150-foot vertical focal length produces a 17-inch image of the Sun at its base.  It was with this facility that Hale was able to determine the magnetic polarity of sunspots and the 22-year solar cycle.  The 11-year solar cycle had long been known and pertains to sunspot numbers only.  It usually takes 11 years (sometimes longer, sometimes shorter) for the solar cycle to go from one maximum to the next.

Magnetic fields are dipoles; that is, a magnetic field has a north and a south pole.  Sunspots occur in pairs, with one being the north pole and the other the south pole, albeit at times a single spot in a pair will break up into several spots with the same polarity.  Hale discovered that sunspot pairs exhibit the opposite order of polarity in each solar hemisphere.  The polarities then reverse at the end of each 11-year cycle.  Consequently, a Hale 22-year solar cycle looks like this:

     Cycle (11 years)    Northern Hemisphere    Southern Hemisphere
            1                     N-S                    S-N
            2                     S-N                    N-S

The most recent polarity reversal was marked on January 4, 2008, when the first sunspot of the new cycle appeared with reversed polarity.  This event heralded the arrival of the current solar cycle.

By the mid-1920s, Hale spent most of his time at his private solar observatory at his residence in Pasadena.  He passed away in 1938 as work was ongoing on the 200-inch telescope at Mt. Palomar Observatory.  A new generation of solar astronomers would carry on his legacy at Mt. Wilson.

In 1957, Horace Babcock installed the first magnetograph in the 150-foot tower.  Rather than just studying the strong magnetic fields of sunspots, the magnetograph was sensitive enough to map the magnetic field across the entire solar surface.  Essentially, the magnetograph maps the Zeeman effect across the entire solar disk.  Astronomers would take the work at the 150-foot tower a step further in the 1960s by using it to study the Sun’s interior in a field called helioseismology.

In 1962, Robert Leighton discovered oscillations across the entire solar surface occurring in 5-minute cycles.  The theoretical modeling of these oscillations was refined by Roger Ulrich, who also kept the 150-foot Solar Tower in operation after the Carnegie Institution pulled its financial support in 1984.  These oscillations are caused by acoustic waves trapped inside the Sun.  Measurements of these waves allowed for modelling of the solar interior.  One of the findings is the amount of hydrogen converted to helium in the solar core via fusion reactions.  This finding verified current models of solar evolution.  In other words, we know the Sun will be around for another 5 billion years or so.

The mapping of the solar magnetic field and helioseismology form a key part of the mission of NASA’s current Solar Dynamics Observatory, as explained in the video below:

During the late 1990s, I had the opportunity to go inside all three of the Mt. Wilson solar telescopes as a student in the observatory’s CUREA program.  The Snow was used primarily, and I’ll never forget cleaning off the direct current switches, seemingly straight out of Frankenstein’s laboratory.  I also had encounters with both tarantulas and rattlesnakes.  In between those adventures, I got to study the Sun’s spectrum (just as Hale did 90 years earlier), image the Moon at night, and gaze out over the cliff toward Pasadena and the Rose Bowl.  The 60-foot solar tower had AC/DC blasting in the observation room, while the 150-foot tower had a visitor’s book signed by both Albert Einstein and Stephen Hawking.

Looking down the ladder on the 150-foot Solar Tower. Credit: Gregory Pijanowski

Since then, there have been two recessions and a major financial crash.  The result has been funding cutbacks and a need for the solar towers to reduce staff, as many businesses have.  The Snow is still used by CUREA students every summer.  The 60-foot solar tower is run by USC.  The 150-foot tower had its funding cut and is run on a volunteer basis.  The historic magnetograph made its last observation in 2013.  It could be much worse: in 2009, a forest fire came within a few hundred yards of the observatory, which was saved by the efforts of several hundred firefighters.

Below is a video on the current effort to keep the 150-foot solar tower’s record of observations unbroken:

The observatory continues to reinvent itself.  The Pavilion, closed in the 1990s, is now home to the popular Cosmic Cafe.  The grounds, once open to the public only on weekends, are now open daily.  Public viewing is now offered on both the 60 and 100-inch telescopes.  The CHARA interferometer, whose construction had just started when I was there, is producing scientific results.  How do the solar towers fit in?  The 150-foot tower has been an important link in the continuous record of sunspot observations since the early 1600s; in fact, its records compose 25% of that history.  And so far, it has been able to continue to do so.  I truly hope that chain is not broken.

*Image on top of post is the 60 and 150-foot towers keeping their vigil on the Sun.  Photo:  Gregory Pijanowski

To Catch a Star

Prior to the New Horizons flyby this month, the best image we had of Pluto came from the Hubble Space Telescope:

Credit: NASA, ESA, and M. Buie (Southwest Research Institute)

This struck me as being very similar to our best image of a stellar surface besides the Sun.  Very few stars have had their surfaces resolved.  Most stars appear as points of light in even the largest of telescopes.  Below is an image of Betelgeuse taken by the Hubble:

Credit: Andrea Dupree (Harvard-Smithsonian CfA), Ronald Gilliland (STScI), NASA and ESA

So the question to ask is, how can we obtain high resolution images of stellar disks as we did with Pluto?  Barring radical advancements in spacecraft propulsion, sending a mission to a star is not feasible.  The nearest star, Proxima Centauri, is 4.24 light years away from us.  The New Horizons probe, traveling at 35,000 mph (56,000 km/hr), would take roughly 80,000 years to reach that destination.  Obviously, the solution is constrained to Earth-based or near-Earth technology.  We need to build telescopes with higher resolution.
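That figure is easy to verify with a quick back-of-the-envelope calculation (rounded constants, nothing more):

```python
# Trip time to Proxima Centauri at New Horizons' cruise speed.
LIGHT_YEAR_KM = 9.461e12             # kilometers in one light year
distance_km = 4.24 * LIGHT_YEAR_KM   # distance to Proxima Centauri
speed_kmh = 56_000                   # New Horizons cruise speed

hours = distance_km / speed_kmh
years = hours / (24 * 365.25)
print(f"{years:,.0f} years")         # ~82,000 years
```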

The theoretical resolution capability of a telescope is measured by the following equation:

R = 1.22(λ/d)

Where R is the resolution in radians, λ is the wavelength of the observed light, and d is the diameter of the telescope’s primary mirror.  The smaller the value of R, the higher the resolution and the greater the ability of a telescope to resolve fine detail.  Thus, to improve resolution, we’re gonna need a bigger telescope.
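As a minimal sketch of that equation in action (diffraction limit only, ignoring the atmosphere; the apertures are the familiar published figures), here it is in Python for green light:

```python
import math

def resolution_arcsec(wavelength_m, diameter_m):
    """Rayleigh criterion R = 1.22 * wavelength / diameter, in arcseconds."""
    radians = 1.22 * wavelength_m / diameter_m
    return math.degrees(radians) * 3600

green = 550e-9  # green visible light, in meters
for name, d in [("Hubble, 2.4 m", 2.4), ("Keck, 10 m", 10.0),
                ("E-ELT, 39 m", 39.0)]:
    print(f"{name}: {resolution_arcsec(green, d):.4f} arcsec")
```

The 39-meter mirror comes out roughly 16 times sharper than Hubble’s 2.4-meter mirror, which squares with the E-ELT figure quoted later in this post.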

The other factor to consider is a telescope’s light gathering ability.  This is directly related to the area of the primary mirror.  If the mirror is circular in shape, this is described as:

A = πr²

Where A is area, π is the constant pi (about 3.14), and r is the radius of the mirror.  The larger A is, the greater a telescope’s ability to collect light.  Thus, a larger telescope can detect dimmer objects in the sky, and therefore objects farther away from Earth.  A mirror with a radius of 1 meter has an area of 3.14 square meters.  A mirror with a radius of 2 meters has an area of 12.56 square meters.  In other words, doubling the radius increases the area by four times.  Since an object’s brightness falls off with the square of its distance, quadrupling the collecting area doubles the reach: what a mirror with a radius of 1 meter (diameter of 2 meters) can see 10 light years away, a mirror with a radius of 2 meters (diameter of 4 meters) can see 20 light years away.
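A short sketch of that scaling, which is just the inverse-square reasoning above:

```python
import math

def area(radius_m):
    """Collecting area of a circular mirror."""
    return math.pi * radius_m**2

# Brightness falls off with distance squared, so the limiting distance
# scales with the square root of the collecting area.
ratio = area(2.0) / area(1.0)
print(ratio)             # 4.0 -- four times the light collected
print(math.sqrt(ratio))  # 2.0 -- twice the limiting distance
```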

To sum up the above: mirror diameter determines how fine a detail can be resolved, while collecting area determines how faint, and therefore how distant, a target can be imaged.

As you might gather from the above, the best stellar targets to resolve are stars that are very large (and hence, very bright) and relatively close.  And that is why Betelgeuse was the first star to be resolved, in 1995 by the Hubble.  Betelgeuse is a red supergiant whose diameter could contain the orbit of Mars, and it is fairly close at 642 light years.  However, observing time on the Hubble is scarce, as competition to use it is fierce.  Utilizing ground-based observatories can open up more opportunities for this line of research.

Besides the Hubble, another telescope to resolve a stellar disk is the CHARA array at Mt. Wilson.  CHARA relies on a technique referred to as interferometry: an array of telescopes can have the same resolving ability as a single telescope as wide as the greatest separation between them.  This depends on the light waves received by all the telescopes in the array being combined in phase.

Credit: Mt. Wilson Institute

Radio astronomy was the first field to use interferometry.  Radio waves are much longer than light waves; in fact, radio waves can be several kilometers long.  Hence, it was easier to sync radio waves together than optical light waves.  The Very Large Array (VLA) in New Mexico uses this technique.  If you’ve seen the movie Contact, you may be familiar with the VLA.  It is an array of 27 radio antennas that in its widest configuration can provide the same resolution as a single antenna 22 miles (36 kilometers) wide.

Credit: Wikipedia/Hajor

Both economics and physics play a role here.  It is less expensive and structurally easier to build an array of small antennas than a single dish 22 miles wide.  The tradeoff is gaining resolution at the cost of collecting ability: the VLA does not collect the same amount of radio waves as a single 22 mile wide dish would.  This plays a role in optical interferometry as well.

Optical interferometry presents a couple of challenges radio interferometry does not.  Visible light waves are disturbed by atmospheric turbulence; that is what causes the twinkling you see when you look at the stars at night.  The other is that visible light waves are much shorter than radio waves; radio waves are 1,000 to 1,000,000 times longer.  As a result, optical interferometry took longer than radio interferometry to develop.  Adaptive optics solved the turbulence problem, and computer-guided optical delay lines solved the problem of combining very short visible light waves.

Credit: MRO/NMT

In the above example, light from the star travels a longer path to telescope 1 than to telescope 2.  Consequently, the light from telescope 2 must travel an equally longer optical path before it is combined with the light from telescope 1, to ensure both are in phase.  If the optical path is not calibrated correctly, the waves will arrive out of phase, such as below:

Credit: Wiki Commons

The optical path is adjusted so that one wave travels an extra distance exactly matching the geometric delay.  Both waves are then in sync and combine with the peaks and valleys of the waves matched up to produce a sharp image.
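Here is a minimal sketch of that geometric delay for a simple two-telescope geometry (the names and numbers are illustrative, not from any real delay-line control software):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def geometric_delay(baseline_m, angle_deg):
    """Extra path to the far telescope for a star at the given angle from
    zenith, plus the time delay the optical path must compensate for."""
    extra_path = baseline_m * math.sin(math.radians(angle_deg))
    return extra_path, extra_path / C

path, delay = geometric_delay(330.0, 30.0)  # CHARA-scale baseline
print(f"{path:.1f} m extra path, {delay * 1e9:.1f} ns delay")  # 165 m, ~550 ns
```

The compensation has to be held to within a fraction of a wavelength of visible light, a few hundred nanometers, which is why the optical version of this technique is so much harder than the radio one.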

The CHARA interferometer currently has the longest baseline of any optical interferometer, with six 1-meter telescopes spread across an area 330 meters wide.  CHARA is located adjacent to the 100-inch telescope at Mt. Wilson where Edwin Hubble made his historic discovery of the expanding universe.
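Plugging that 330-meter baseline into the resolution equation from earlier shows what interferometry buys (an idealization; achievable figures vary with wavelength and technique):

```python
# Rayleigh criterion again, with the baseline standing in for mirror diameter.
wavelength = 550e-9                   # green light, meters
chara = 1.22 * wavelength / 330.0     # ~2.0e-9 radians
hubble = 1.22 * wavelength / 2.4      # ~2.8e-7 radians
print(f"CHARA resolves ~{hubble / chara:.0f}x finer detail than Hubble")
```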

In 2007, the first resolved image of the disk of a near Sun-like star was produced by the CHARA.  The star was Altair, located 17 light years from Earth.  The image is below:

Credit: CHARA/GSU

Altair differs from the Sun in that its rotation rate is much faster.  The Sun rotates once every 25 days; Altair rotates once every 9 hours.  Consequently, Altair is not spherical but bulges out at its equator.  The rapid spin also leaves Altair cooler at the equator, which thus appears darker than the rest of the surface.  This is the type of detail we can obtain when stars can be resolved as disks rather than points of light.

So what does the future hold?  Can future telescopes provide high resolution detail of stellar surfaces just as New Horizons provided for Pluto?  What should we hope to find if we can?  Stars, like planets, come in many sizes with differing physical characteristics.  The only star we can see with fine detail is the Sun.  The Solar Dynamics Observatory (SDO) images the Sun in both visible and ultraviolet light.  This video is an excellent overview of the solar surface detail provided by the SDO.

Given that attempts to resolve stars light years away from Earth are still in their nascent stage, it will be many years before we achieve that kind of quality imaging.  Nonetheless, there are promising developments that should improve our ability to acquire more detail of the features on stellar surfaces.

The first is a new generation of 30-40 meter telescopes set to go online in the 2020’s.  Combined with adaptive optics technology, they will have resolution on the order of 10-12 times that of the Hubble Space Telescope.  That, along with their immense light collecting ability, should increase the number of suitable stellar targets to resolve.

The second is the Magdalena Ridge Observatory Interferometer (MROI) in New Mexico.  The MROI will consist of an array of ten 1.4 meter telescopes with a maximum baseline of 1,115 feet (340 meters).  The anticipated resolution will be 100 times that of the Hubble.  No definite timeline has been given for completion of the MROI, as it is a matter of obtaining funding to continue construction.  However, it is partially complete and hopefully will be up and running within ten years.

Credit: MROI

The next logical step is to build an optical interferometer in space.  Since 2000, NASA has been working on preliminary plans for the Stellar Imager (SI) mission.  The SI would consist of an array of 20-30 one-meter mirrors with a baseline of 0.5 km (1,640 feet).  The array would be located at the Sun-Earth L2 Lagrange point, a gravitationally balanced region 1.5 million km (1 million miles) from Earth.  This is also where the James Webb Space Telescope will reside a few years from now.

Credit: NASA/GSFC

Why should we endeavor to resolve stellar surfaces?  This is key to understanding the magnetic activity of a star and the stellar wind that emanates from it towards orbiting planets.  These results can be integrated with the findings of exoplanet research.  Questions to be answered include: does the star generate space weather that could prevent life from forming on orbiting planets?  How strong a magnetic field does an orbiting planet require to protect life from local space weather conditions?  Does the star’s magnetic and irradiance cycle oscillate in a manner that might inhibit the formation of life in its planetary system?  Detailed information from main sequence stars older than the Sun could also give us a look at the future of the Sun and how its evolution will impact life on Earth.

The SI is currently projected to start sometime around 2025-30.  More than likely, it will be later than that, which is not unusual for cutting edge space missions.  The concept for an orbiting telescope was first proposed by Lyman Spitzer in 1946.  It took 44 more years to become reality, as technical, funding, and political hurdles had to be overcome, culminating with the launch of the Hubble Space Telescope in 1990.

I may not see the SI become reality in my lifetime, but it is important for my generation to lay the groundwork for our children to benefit from the results.  Astronomy, just like the building of the great cathedrals, can often be a multi-generational effort.

*Image on top of post shows stars resolved by CHARA.  These stars are rapid rotators whose swift spinning motion flattens out their shape at the equator.  The images are false color and are designed to demonstrate temperature gradients on the surface of each star.  The brighter the color, the hotter the star is at that region.  The arrow on top indicates the true color of each star.  Credit:  CHARA/MIRC

Astronomy – The Next Generation

Astronomy is perhaps the most ubiquitous of human endeavors. Regardless of time or location, people have studied the night sky for an understanding of the universe. Ancient sites used for astronomical purposes can be found on every inhabited continent. From the ancient Egyptians noting the simultaneous rising of Sirius and the Sun to predict seasonal flooding, to the Chinese observation of the supernova that produced the Crab Nebula on July 4, 1054, to Islamic astronomers’ high-precision measurements of celestial objects for timekeeping, astronomy has been a truly multicultural endeavor. The next generation of 30-meter telescopes promises to continue that heritage.

During the 20th century, large scale astronomy was dominated by the United States.  When the 200-inch Mt. Palomar telescope opened for business in 1948, the next largest telescope outside the U.S. was the 74-inch David Dunlap Observatory just north of Toronto.  The two World Wars created economic chaos across Europe and Asia; at the same time, the vision of George Ellery Hale enabled the U.S. to surge ahead of the world in building large telescopes.  These two factors conspired to tilt astronomy research towards the U.S. during this period.

From 1897 to 1993, the largest telescope in the world would be one designed by George Ellery Hale*. The first was the 40-inch refracting telescope at Yerkes Observatory, which remains the largest refractor in the world. The last was the 200-inch reflecting telescope at Mt. Palomar. In between was the 100-inch telescope at Mt. Wilson, where galaxies outside the Milky Way and the expansion of the universe were discovered. These telescopes stretched the limits of the single mirror design.  A new type of mirror was required to construct larger telescopes, and the segmented mirror was the answer.

The 10-meter Keck Observatory in Hawaii ushered in the era of the segmented mirror design. The next generation of 30-meter telescopes, due to commence operations in the next decade, will radically expand Earth-based observatory capabilities.  In fact, these giant mirrors, combined with adaptive optics technology to remove atmospheric turbulence from their imaging, promise to have resolution capabilities several times that of the Hubble Space Telescope.

The first question one might ask is: what is a segmented mirror? Let’s take a look at the image below:

Credit: Palomar Observatory/California Institute of Technology

This is the 200-inch (5 meter) Palomar mirror as it is removed to be aluminized. Its total area is about 20 square meters. Single mirrors larger than this are problematic as the weight requires a massive support structure. Another factor is economics.  When the Keck Observatory was in the planning stage during the late 1970’s, a 10-meter mirror was estimated to cost $1 Billion ($2.9 Billion in 2015 dollars).  Funding prospects for that amount were rather bleak.

Now take a look at the mirror below:

Credit: This image was created by Prof. Andrea Ghez and her research team at UCLA and are from data sets obtained with the W. M. Keck Telescopes.

This is the 10-meter Keck mirror. As you can see, rather than a single mirror like Palomar’s, it consists of 36 hexagonal segments 1.8 meters wide. Each segment is also very thin at 75 mm. The impact is two-fold: the total weight of the Keck mirror array is about the same as the single Palomar mirror, and the cost of the Keck Observatory was reduced to $270 million. The cost reduction enabled the Keck Observatory to obtain funding from the William M. Keck Foundation in 1985. When Keck I opened in 1993, it was the first non-Hale telescope to become the world’s largest in 96 years.

The video below has an inside look at the Keck and its accomplishments:

Since the Keck, the 10.4-meter Gran Telescopio Canarias has been built in the Canary Islands, as well as the 9.2-meter Southern African Large Telescope (SALT) in the Northern Cape region. The Gran Telescopio Canarias became the first world’s-largest telescope located outside the United States since the 1800’s. That telescope was funded mostly by the government of Spain, along with minor contributions from Mexico and the University of Florida. The SALT was funded by a partnership between South Africa, the United States, Germany, Poland, India, the United Kingdom, and New Zealand. The next generation of telescopes will continue the trend of international partnerships for funding.

The success of the segmented mirror has prompted the design of 25-39 meter telescopes. This is an expansion of telescope size unprecedented in modern times. Thirty years after the 100-inch Mt. Wilson telescope was built, the 200-inch Mt. Palomar telescope doubled the aperture of the world’s largest telescope. The planned 39-meter European Extremely Large Telescope (E-ELT) will nearly quadruple Keck’s mirror size in the same time frame.  Three telescopes of this class are set to commence operations in the next decade:

The Giant Magellan Telescope (GMT)

The design of this telescope is quite interesting, as it uses the upper end of monolithic mirror design as the basis for a segmented mirror.  The GMT will package seven 8.4 meter mirrors into an array 24.5 meters wide.  The telescope will be located at Las Campanas Peak in the Atacama Desert.  The total cost will be $1 billion, and funding is provided by the Carnegie Institution for Science (which also funded both Mt. Wilson and Mt. Palomar), the Smithsonian Institution, and several American, Australian, Korean, and Brazilian universities.  The GMT will begin observations in 2021 and will be fully operational in 2024.  The image below provides a good perspective on the size of the mirror array:

Credit: Giant Magellan Telescope – GMTO Corporation.

Thirty Meter Telescope (TMT)

This telescope will be located in Hawaii on the summit of Mauna Kea, where the Keck Observatory is located.  That is appropriate from an astronomy standpoint, as the TMT is the successor to Keck: its mirror will consist of 492 segments, each 1.44 meters wide.  The location has also become problematic as a result of protests by indigenous Hawaiians who consider the peak of Mauna Kea to be a sacred site.  The protests have stopped construction for now.  While it’s more than likely the 18-story-high TMT will eventually be built, it’s unknown what impact the current impasse will have on its planned completion in 2024.  The TMT is expected to cost $1.4 billion to build and is funded by a consortium including Caltech and partners from China, India, Japan, and Canada.

Credit: Courtesy TMT International Observatory

European Extremely Large Telescope (E-ELT)

The E-ELT will be the granddaddy of the next generation of telescopes.  Like the GMT, the E-ELT will be located in the Atacama Desert in Chile, an ideal site from which to observe the Milky Way.  Its mirror will be 39 meters wide and consist of 798 hexagonal segments, each 1.45 meters wide.  The cost of the E-ELT is expected to be 1.1 billion euros.  The telescope will be funded by the European Southern Observatory (ESO) with contributions from its 16 member nations, currently all European, with pending applications from Poland and Brazil.  First light is expected in 2026.  An artist’s conception of the E-ELT is below.

Credit: ESO

Why Chile?

The northern Chile desert provides the desired combination of dry air, high altitude, and lack of light pollution.  Currently, ESO operates the Very Large Telescope (VLT) in the Atacama Desert.  The VLT consists of four telescopes, each with an 8.2 meter mirror.  Also located there is the 66-antenna ALMA radio telescope array, which is run by a consortium consisting of the United States, Canada, Japan, ESO member states, and Chile.  The Astronomical Tourism website has a list of the remarkable number of observatories in Chile here.  Among the advantages of being in the Southern Hemisphere is that the Milky Way can arch directly overhead, making it a very easy target to observe.  By 2025, it is expected that half of the world’s observing power will be located in Chile.  Below is a video that demonstrates the awesome clarity of the Chilean skies.

Expected Performance

All three observatories will utilize adaptive optics systems, a means to eliminate the twinkle in stars caused by atmospheric turbulence during observations.  While the twinkling of stars can have great aesthetic value, it hampers the performance of a telescope.  Adaptive optics works by shooting a laser into the sky near the observation target.  This laser excites sodium atoms located about 60 miles above the Earth’s surface, and the excited atoms then release that energy as light, creating an artificial star in the sky.  The artificial star is used as a reference to measure atmospheric turbulence, which in turn is used to adjust a small deformable mirror in the instrument package of the telescope.  The deformable mirror removes most of the twinkling from the observed object before it is imaged.  The picture below shows the galactic center before and after the Keck Observatory’s adaptive optics are applied.

Credit: Keck Observatory and the UCLA Galactic Center Group
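To make the idea concrete, here is a deliberately minimal sketch of the simplest possible correction: removing overall image wander (tip/tilt).  Real adaptive optics systems measure and correct hundreds of wavefront modes at high speed with a deformable mirror; every name and number below is illustrative only, not any observatory’s actual software:

```python
import numpy as np

def centroid(image):
    """Brightness-weighted center of the guide-star image."""
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (ys * image).sum() / total, (xs * image).sum() / total

def tip_tilt_correction(image, reference=(32.0, 32.0)):
    """Command for the steering mirror: equal and opposite to the measured
    drift of the guide star from its reference position."""
    cy, cx = centroid(image)
    return reference[0] - cy, reference[1] - cx

# A fake 64x64 guide-star frame, displaced by atmospheric tilt
frame = np.zeros((64, 64))
frame[35, 30] = 1.0                  # star has wandered from (32, 32)
print(tip_tilt_correction(frame))    # (-3.0, 2.0) -> steer it back
```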

The large mirrors combined with adaptive optics are expected to give these telescopes resolution several times that of the Hubble Space Telescope; in fact, the E-ELT is expected to provide 15 times the resolving ability of the Hubble.  To put the E-ELT in proper perspective, this telescope will collect more light than all the existing 8-10 meter telescopes combined.  The science objectives of these telescopes range from peering into the farthest regions of the universe to study the first galaxies formed, to the detection of Earth-sized planets and the characterization of exoplanet atmospheres.  The latter could possibly provide evidence of bio-signatures.

The cost of this science is not cheap.  However, it is no more expensive than other large scale infrastructure projects.  For example, the total cost of the three new observatories combined ($3.5 billion) is about the same as the new Detroit-Windsor bridge to be built during the same period.  Nonetheless, the cost of large observatories is now on a scale where international partnerships must be used for funding. American society, competitive as it is, tends to fret when it comes to the possible loss of a dominant leadership position in any given field.  However, this recent development simply puts astronomy back in its natural state.  Rather than being an American endeavor, large scale astronomy research is now a global venture, just as it was during ancient times.  And that is exactly as it should be.

*There were other telescopes larger than Yerkes in 1897 such as the 72-inch Leviathan of Parsonstown, but those had fallen into disrepair and were no longer in use.

**Image on top of post is the E-ELT compared to the VLT and Statue of Liberty.  Credit:  ESO.

Pluto – Round Two

The images released today from New Horizons indicate the presence of carbon monoxide on the surface, possible wind erosion features, and the atmospheric loss of nitrogen.

In the heart-shaped region of Pluto (dubbed Tombaugh Regio for now), New Horizons mapped a region of solid carbon monoxide ice.  Right now, it cannot be determined how extensive the carbon monoxide is; it might be a sheen on the surface or it might be several meters deep.  We’ll find out more as the rest of the New Horizons data comes in.  A map of the carbon monoxide ice is below:

Credit: NASA/JHUAPL/SWRI

Carbon monoxide (CO) differs from carbon dioxide in that it has only one oxygen atom per molecule instead of two.  Unlike carbon dioxide, CO is not a greenhouse gas.  That aside, carbon monoxide is pretty nasty stuff to be around; if you live in a house that does not ventilate well, carbon monoxide poisoning is a serious threat.  On Earth, CO is emitted into the atmosphere by inefficient burning processes, including combustion engines and industrial emissions, along with the burning of forests.  Burning in the Amazon and in Africa releases large amounts of CO on a seasonal basis, as can be seen on NASA’s Earth Observatory time lapse map of CO.

Unlike Pluto, we do not experience CO as an ice on Earth.  The freezing point of CO is -337°F (-205°C), so one has to go out into the farthest regions of the Solar System to see it in that form.  Comets originate from that region and contain CO ice.  The gas in Halley’s Comet’s tail emanating from its solid nucleus during its last pass in 1986 was measured to be 10% CO.  Occasionally, Earth will pass through the tail of Halley’s Comet, such as on the night of May 18, 1910.  However, the material in the comet’s tail is much too tenuous to have any effect on life.  The New York Times report on the events of that night can be found here.

CO gas does exist beyond the Solar System in the plane of the Milky Way.  Galactic CO was mapped by the ESA Planck mission and the results are below.  Where there is CO, there is hydrogen gas in far more abundance.  CO radiates more readily than hydrogen and serves as a useful guide for mapping galactic gas clouds where star formation occurs.

Credits: ESA/Planck Collaboration

Also within Tombaugh Regio, this interesting image was released:

Credits: NASA/JHUAPL/SWRI

Which might remind you of what you see in your backyard after a dry spell:

Image: Wiki Commons

As mud dries, it contracts and begins to crack.  A similar process on a much larger scale may have caused the segmented formations on Pluto.  Another theorized process is convection below the surface, where subsurface heat causes the ground to bubble up.  Right now, the data is too fresh to know for sure which geologic process caused these formations.  As more and more data comes in (only 1 gigabyte of 50 has been received from New Horizons), scientists will get a better handle on what exactly is going on here.

Credit: NASA/APL/SwRI

The final discovery announced today was the atmospheric loss experienced by Pluto.  Atmospheric loss occurs when molecules attain escape velocity; the lighter the molecule, the easier it is for heat to accelerate it enough to escape into space.  Mercury has practically no atmosphere, as its closeness to the Sun imparts enough heat energy for any gas molecule on the surface to escape.  Both Venus and Mars lack the magnetic field Earth has, which allows the solar wind to directly interact with the atmosphere and drag it away, just as you see above with Pluto.  The video below describes the process on Venus:

Earth loses 50,000 tons of atmosphere a year.  Most of it is hydrogen and helium; as these are the two lightest elements, they most easily reach escape velocity and leave our planet.  Worry not: at that rate, the Sun will turn into a red giant and swallow the Earth five billion years from now, long before our atmosphere is lost.

Pluto is losing atmosphere at a rate of 500 tons an hour or over 4,000,000 tons a year.  Projected over the course of Pluto’s lifetime, that equates to over a thousand feet of nitrogen ice lost.

As mentioned before, Pluto is pretty cold.  How does the nitrogen in its atmosphere acquire enough energy to escape?  At the mission update, it was explained that the greenhouse gas methane may trap just enough heat to give nitrogen molecules a boost into space.  The other part of the equation is Pluto’s small mass, only 0.002 that of Earth.  This means Pluto’s escape velocity is about 1.2 km/s compared to Earth’s 11.2 km/s, so it is much easier for nitrogen to escape Pluto than it is to escape Earth.  Pluto also lacks a significant magnetic field, and direct contact with the solar wind accelerates atmospheric loss.
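Both of those escape velocities follow from the standard formula v = √(2GM/r).  A quick check, using published mass and radius figures (rounded):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity_kms(mass_kg, radius_m):
    """Escape velocity sqrt(2*G*M/r), converted to km/s."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1000

print(f"Earth: {escape_velocity_kms(5.972e24, 6.371e6):.1f} km/s")  # ~11.2
print(f"Pluto: {escape_velocity_kms(1.303e22, 1.188e6):.1f} km/s")  # ~1.2
```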

One of the most important aspects of studying astronomy is gaining a greater perspective on Earth.  Looking at the atmospheric loss on Pluto and other planets in the Solar System gives a greater appreciation of the role the magnetic field here on Earth plays in protecting life.  The Pluto flyby is a great adventure, but it also goes to show there is no place like home.

*Image on top of post is best Hubble image of Pluto vs New Horizons image. Credits: Hubble: NASA / ESA; New Horizons: NASA / JHU-APL / SWRI

Pluto and Earth

The first thought I had watching the press conference on the initial images from the New Horizons flyby of Pluto was how much more accessible these events are to the public than in the days of Voyager.  During the 1980’s, unless you had a NASA press pass, you did not get to watch mission updates live.  There were no Twitter feeds to tell you right away when telemetry was being received, no websites to go back and review the images at your leisure.  And you had to wait at least a year, maybe more, for astronomy textbooks to be updated.  What you got was short segments on the nightly news such as this:

One of my favorite teaching techniques is to compare the surface features of planets to things we are familiar with here on Earth to give it proper perspective.  And that seems to me to be a good place to start with the first images released today.

Let’s begin with the mountains located near the now famous heart-shaped region of Pluto.

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

This image was taken while New Horizons was 77,000 km away from Pluto.  That’s about six times farther away than the closest approach and gives a good idea of what to look forward to as more images are released.

The tallest of these mountains are about 11,000 feet (3,500 m) high.  How does this compare to Earth?  They are less than half as tall as Mt. Everest, which clocks in at 29,029 feet.  Still, these are pretty impressive mountains considering how small Pluto is; their height is similar to that of Mt. Hood in Oregon.

Image: Wiki Commons

The first age estimate of these mountains is about 100 million years.  That sounds pretty old; in fact, dinosaurs were roaming the Earth when these mountains formed.  In geological terms, though, this is pretty young, only 2% the age of the Solar System (4.5 billion years).  How do we know these mountains are young?  By the lack of craters in the region: the fewer craters there are, the younger a surface is.  These mountains are younger than the Alps, which are 300 million years old.  They are older than the Himalayan Mountains, which formed as the Indian subcontinent plowed into Asia 25 million years ago.

Mountains on Earth are the result of plate tectonics.  At this very early juncture, planetary scientists have their work cut out for them, as none of the current models can account for such mountain formation on an icy outer Solar System body in the absence of tidal flexing.  It is thought that the mountains are regions of water-ice bedrock poking through the methane ice surface, as methane ice is too weak to build mountainous structures.

Below is Pluto’s largest moon Charon:

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

The outstanding feature here is the large canyon in the upper right corner.  This canyon is 4 to 6 miles (7 to 9 km) deep; the Grand Canyon’s greatest depth is a little over a mile.  It is comparable to the deepest reaches of the Pacific Ocean, the Mariana Trench, which lies about 6.8 miles below sea level.  It’s interesting to consider that more humans have walked on the surface of the Moon (12) than have reached the bottom of the Mariana Trench (3).  To be fair, no nation has ever decided to spend $150 billion (2015 dollars) and employ 400,000 people to reach the Mariana Trench, as the United States did during the Apollo program.

This image maps methane on the surface of Pluto.

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

The New Horizons press release describes the greenish area of Pluto’s North Pole as methane ice diluted in nitrogen ice.  Does that sound odd?  Typically, we see neither of these substances in a solid state on Earth.  Methane and nitrogen are known as volatiles, which means they take gaseous form at room temperature.  As you may have surmised, Pluto is not at room temperature.  The freezing point of methane is -295.6°F (-182°C) on Earth; the freezing point of nitrogen is even lower at -346°F (-210°C).  These figures differ slightly on Pluto, as the atmospheric pressure does not match that of Earth.  The temperature of Pluto ranges from -387° to -369°F (-233° to -223°C).  Yeah, the outer reaches of the Solar System are pretty chilly.
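A quick sanity check on those Celsius-to-Fahrenheit conversions, using the standard formula F = C × 9/5 + 32:

```python
def c_to_f(celsius):
    """Standard Celsius to Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

for name, c in [("methane freezing point", -182),
                ("nitrogen freezing point", -210),
                ("Pluto, warm end", -223),
                ("Pluto, cold end", -233)]:
    print(f"{name}: {c} C = {c_to_f(c):.1f} F")
```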

In our day-to-day lives, you may be familiar with methane as the main component of natural gas.  You may have learned about it first as a source of middle school humor.  While methane is a gas on Earth, the Saturn moon Titan is cold enough for it to be a liquid.  Below is an image of methane lakes on Titan.  Instead of rain made of water, you could dance in the methane rain on Titan.  Earth and Titan are the only bodies in the Solar System to have stable liquid lakes on their surfaces.

Credit: NASA/JPL-Caltech/ASI/USGS

Neptune has trace amounts of methane in its atmosphere.  Methane has the property of absorbing red light and scattering blue light.  The result is the rich blue hue of Neptune as first seen in the 1989 Voyager flyby:

Credit: NASA

Methane also absorbs infrared light at certain wavelengths.  The methane profile image of Pluto was produced by measuring infrared absorption from surface methane.  When methane absorbs infrared light at these wavelengths, the infrared energy is converted into vibrational motion in the molecular bonds.  Once the molecule settles down, the energy is released back out as infrared light.  We cannot see infrared light, but we feel it as heat.  In the atmosphere, some of this heat is directed back towards the Earth, warming the surface.  In other words, methane is a greenhouse gas, like carbon dioxide and water vapor.

And for that, we should be grateful.  Without greenhouse gases, the Earth would be about 60°F colder (like the Moon), and human life would not be possible.  However, you can have too much of a good thing.  As temperatures rise in the Arctic and warm the permafrost, methane that has been locked up for thousands of years as frozen, undecomposed plant life could be released into the atmosphere.  When you consider that the Arctic region has been most affected by rising global temperatures, you can understand why climate scientists are concerned about this scenario.

On Friday, New Horizons should be releasing the first color images from the flyby.  It should be quite an interesting week.

*Image on top shows the part of Pluto’s heart region where the mountain closeup was taken.  Credit:  NASA

Success!

As expected, New Horizons ended its blackout period at 8:53 PM EDT tonight and verified the flyby of Pluto was a success.  All systems are nominal, which is to say running as they should.

If you were watching the flyby confirmation on NASA TV, the acronym MOM refers to the Mission Operations Manager.

NASA will have a presser from 3-4 PM EDT on NASA TV.  It is expected that some of the first images downloaded from the flyby event will be presented.  These will be black and white images from the LORRI camera.  Color images take a bit longer to process and should be available for release later this week.

Congratulations to the New Horizons team!  Very much looking forward to sharing the results with my students.

*Image on top of post is a false color representation of Pluto and its largest moon Charon.  This image was taken just prior to the blackout period during the flyby.  If you were flying along with New Horizons, this is not how you would see these two bodies; false color is put into the image to allow our eyes to more easily discern differing regions.  Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute