Climate Change: Global vs. Local

As we conclude another month of unusual weather in my hometown (Buffalo, NY), I thought it would be a good time to take a look at how climate change can be measured locally.  On the heels of the coldest month in Buffalo history this past February, May featured an average temperature 5.7 °F warmer than normal, including nine days over 80 degrees.  The month was on pace to be the 5th driest May in Buffalo history, but 2 1/2 inches of rain on May 31st more than doubled the precipitation that fell over the first thirty days of the month.

Although our lives revolve around daily fluctuations in weather, the best way to determine how global warming may influence local climate is to examine annual temperature, as that measure smooths out the noise and gives a clearer view of long-term trends.  The global temperature history is below:

One degree Celsius = 1.8 degrees Fahrenheit. Source: http://climate.nasa.gov/

There are a few key trends to notice.  The post-World War II cooling was likely a result of the global economy recovering after the Great Depression: an increase in atmospheric aerosols from industrial emissions blocked sunlight during that period, which helped to offset greenhouse gas heating.  Aerosols are particles suspended in the atmosphere, rather than a gas such as carbon dioxide.

Buffalo Bethlehem Steel plant 1970’s. Photo: George Burns, courtesy EPA

That era ended with the environmental regulations put into effect during the 1970’s.  The drop in aerosols allowed global temperatures to rise after 1980.  A slight cooling trend took place in the early 1990’s, precipitated by the Mt. Pinatubo eruption in 1991.  Large volcanic eruptions can eject sulfate aerosols into the stratosphere and cause global cooling on a scale of 2-3 years.  The most famous example of this was the Year Without a Summer in 1816, produced by the eruption of Mt. Tambora.  After that brief period of cooling, global temperatures began to march upwards again.  This period included the 1998 El Nino warming spike and nine of the ten warmest years on record, all of which occurred after 2000.

So how does Buffalo match up to all this?  Let’s take a look below:

Temperature in Fahrenheit. Data from http://www.weather.gov/buf/BUFtemp

The local record replicates the global progression of temperature change.  The post-World War II cooling can clearly be seen from the 1950’s through 1979.  The 1970’s saw the most infamous weather event in Buffalo history, the Blizzard of ’77.  Temperatures began to warm after 1980, just as they did globally.  The short-term cooling effect from Mt. Pinatubo is clearly observed in the early 90’s (the summer of 1992 was very cool and wet), along with the (then) record-breaking warmth from the 1998 El Nino event.

1998 El Nino event.  Image:  NASA

Despite the noise from short-term climate forcings, the warming trend from 1980 to the present is plainly visible.  From 1940 to 1989, only two years clocked in at over 50 degrees.  The period from 1990 to 2014 featured six such years, including 2012, which shattered the 1998 record by 1.2 °F.  That might not seem like a lot, but the prior record was only 3 degrees above normal.  In other words, if climate models are correct and temperatures increase by more than 4 degrees, the average year will be hotter than the hottest year currently on record in Buffalo.

What I found really interesting is the greater variation of annual temperatures after 1990, with the yearly figures fluctuating widely around the moving average.  Also, even though the winter of 2014 was very cold in its own right, the temperature for the year was still higher than in many years during the 1970’s.  The trend over the past three decades has clearly been one of increasing temperatures.  Is this widening fluctuation in temperatures due to greenhouse gas induced climate change?  The answer is uncertain and, as many a science article has concluded, more research needs to be done in this area.
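For readers who like to tinker, here is a minimal Python sketch of how an annual series can be smoothed with a moving average and how the spread around that average can be measured.  The numbers are invented for illustration; the actual Buffalo record is available at the weather.gov link above.

```python
# Sketch: smoothing an annual temperature series with a trailing moving average
# and measuring the year-to-year spread.  The data below are invented, not the
# real Buffalo record.
import random
import statistics

random.seed(1)
years = list(range(1940, 2015))
# Invented annual means (°F): a slight warming trend plus noise that widens after 1990.
annual = [47.5 + 0.02 * (y - 1940) + random.gauss(0, 1.0 if y < 1990 else 1.8) for y in years]

def trailing_window(values, i, window=10):
    """Return the last `window` values ending at index i."""
    return values[max(0, i + 1 - window):i + 1]

for i, y in enumerate(years):
    if y % 10 == 0 and i >= 9:
        window = trailing_window(annual, i)
        avg = statistics.mean(window)
        spread = statistics.stdev(window)
        print(f"{y}: 10-yr average {avg:.1f} °F, spread (std dev) {spread:.1f} °F")
```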

What we do know is that we have to prepare to protect our regional economy against the influence of climate change.  Falling lake levels would make hydroelectricity production more expensive and lower the cargo capacity of lake freighters.  General Mills, a major local employer (whose plant spreads the aroma of Cheerios downtown), has recognized the dangers of climate change to its supply chain and has a formal policy to mitigate its role in releasing greenhouse gases.  The next decade will be key in slowing climate change and will require crucial policy choices at both a regional and national level.

General Mills (right) and renewable windmill power takes over old Bethlehem Steel site (upper right). Photo: Gregory Pijanowski 2014.

*Image at top of post is Buffalo seen from Sturgeon Point, 15 miles to the south on Lake Erie.  Just wanted to demonstrate that it does not snow here 12 months out of the year.

Transforming the Discussion on Race

As America has been having one of its periodic discussions about race these past few weeks, it is time to consider how educators can help transform the conversation. When we talk about race, we are discussing skin color, which is determined by the amount of pigmentation in one’s skin; more specifically, by a polymer (a repeating chemical pattern that forms a very large molecule) called melanin.  The first time I heard of melanin was not in a biology class, but from Richard Pryor during a Tonight Show appearance.  Pryor discussed all kinds of humorous situations the high melanin level in his skin had caused.  None of it was really funny; only Pryor’s extraordinary comedic talent made it seem so.

I know I am not alone in lacking a formal education as to what causes human skin to appear in various colors.  And rest assured, there are plenty of disreputable sources of information in American society to fill the void.  If we are going to have a reasonable discussion about race, it is time we educate ourselves about what exactly race is.

The higher the amount of melanin in one’s skin, the darker the skin will appear. This polymer also determines hair and eye color. Individuals with a high degree of melanin will have brown eyes, while less melanin results in blue or green eyes. The same relationship holds for hair: the more melanin in one’s hair, the darker it is.

Melanin regulates skin color through light absorption.  Light received by the skin is reflected back by the dermis layer below the melanin layer.  As the reflected light passes through the melanin, some of it is absorbed.  The energy of the absorbed light sets the melanin molecules vibrating, and that vibrational energy is converted to thermal energy and released as heat.  If the melanin level is low, little light is absorbed and the skin appears white.  If the melanin level is high, more light is absorbed and the skin appears darker.
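For the technically inclined, here is a rough Python sketch of that idea using the Beer-Lambert law of light absorption.  The absorption coefficient and layer thickness are invented purely for illustration; they are not measured values for human skin.

```python
# Rough sketch of why more melanin means darker skin: light passing through a
# pigmented layer is attenuated exponentially (Beer-Lambert law).
# The coefficients below are invented for illustration, not measured values.
import math

def fraction_reflected(melanin_concentration, path_length_mm=0.1, absorptivity=20.0):
    """Fraction of incoming light that survives a round trip through the melanin layer."""
    # Light crosses the layer twice: in toward the dermis and back out after reflection.
    optical_depth = absorptivity * melanin_concentration * (2 * path_length_mm)
    return math.exp(-optical_depth)

for c in (0.05, 0.2, 0.5, 1.0):   # arbitrary relative melanin concentrations
    print(f"relative melanin {c:<4}: {fraction_reflected(c):.1%} of light reflected back")
```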

Eye color is created in much the same fashion.  Persons with low levels of melanin will have blue eyes.  The iris scatters light much as the atmosphere does to make the sky appear blue: blue light is scattered the most, and that is the color reflected out of the eye.  Persons with high levels of melanin absorb most of that light and, consequently, have darker eyes.

Thus, when we classify human beings by skin color, it is the same as if we classify by eye or hair color.

Melanin content regulates how skin responds to ultraviolet (UV) exposure, as well as vitamin D production in the body.  Melanin absorbs UV radiation, and having a lot of it protects against UV damage (sunburn), so dark skin is advantageous if you live around the equator.  People with high melanin content require longer exposure to sunlight to produce the necessary amount of vitamin D, while people with low amounts of melanin do not require as much.  Thus, having little melanin is advantageous the farther you live from the tropics.  Indeed, that is how evolution worked out, as this map of skin color distribution from 1500 AD (before modern transportation and mass migration, voluntary and otherwise) demonstrates.

Image: Wiki Commons

The human race originated in Africa. All humans can trace their maternal ancestral roots to a single woman who lived in Africa about 150,000 years ago. All humans can also trace their paternal ancestral roots to a single man who lived in Africa about 60,000 years ago. They were not Adam and Eve in the biblical sense; they were simply two of the many humans alive at the time. However, they are the only individuals from those eras whose maternal and paternal lines, respectively, have survived unbroken to the present day.

The DNA of all humans is 99.9% identical. The National Geographic Genographic Project can use these small variations to trace one’s ancestral migration route from Africa.

Below is my maternal migration route (my mother was Irish).  My maternal line migrated out of Africa into the Middle East about 60,000 years ago, moved into West Asia 55,000 years ago, made their way into Western Europe about 22,000 years ago, and finally into what we now know as Ireland around 10,000 years ago.

And here is my paternal migration route (my father is Polish).  My paternal line made a similar migration out of Africa but took a right turn into Central Asia 35,000 years ago, eventually settling in Eastern Europe sometime around 15,000 years ago.


As our ancestors migrated into colder climes, humans with genetic mutations that conferred an evolutionary advantage in those climes tended to survive and reproduce over those who lacked them. The end result: those whose ancestors migrated to Northern Europe have low levels of melanin and light skin and eye color. Light skin color is a relatively recent phenomenon, one that recent research indicates emerged only about 8,000 years ago.

Note that the genetic mutations that produced lighter shades of skin occurred thousands of years after humans migrated into Europe.  That’s right: my European ancestors, along with yours if you are white, were black upon their arrival in Europe.  That notwithstanding, skin color is used to classify individuals and to justify the cruelest behaviors targeting those deemed to have an improper melanin level in their skin.

How to change that?  I have no illusions that change will come overnight.  It will most probably occur like water wearing away rock: over time, consistent pressure wears the rock down.  As educators, we must insist that any discussion involving race acknowledge precisely what the true difference between the races is: a minute, skin-deep layer of a polymer called melanin.  And we must insist that those with racist attitudes describe in scientifically rigorous detail how that polymer contributes to the characteristics of the race being disparaged or exalted.

In a way, we need to disrupt the discussion on race in the same manner that high-tech start-ups disrupt existing business models.  Why should we discuss race based on an antiquated social construct used to justify slavery?  Would we consider discussing astronomy as if the Copernican revolution never took place?  Certainly we would regard those who did as cranks.  Any call for a discussion on race must be framed in its proper context, or that discussion will be pointless.

Think of a world where people were subjected to slavery due to their eye color, or denied access to education and jobs due to their hair color. Can you imagine a world where Martin Luther King Jr. would have to say, “I look to a day when people will not be judged by the color of their eyes, but by the content of their character”? It sounds like some sort of bizarro world, or a Twilight Zone episode. However, that is essentially the world we live in. Judging someone by skin color makes exactly as much sense as judging them by eye color…it’s the same damn thing.

UND Capstone Projects

My alma mater, the University of North Dakota Space Studies Department, has rolled out websites for its 2015 Master’s Capstone projects.  They are:

Vision of Venus:  A mission proposal for an orbiter and two balloon systems to explore Venus.  The goal of the mission is to search for organic chemicals in the upper atmosphere of Venus.  The proposed mission would launch in 2021.

Lunar Impact Crater Explorer:  A mission proposal to explore the South Pole-Aitken Basin, also to be launched in 2021.  This region of the Moon contains ice and is thus a key focus for lunar research.

Good luck to both teams with your presentations!

*Image above was taken during the Moon & Venus conjunction of March, 2012.

Farewell, Messenger

NASA’s Messenger mission is expected to end on April 30th at 3:30 PM EDT with a crash landing on Mercury.  Messenger has run out of fuel, and NASA is taking advantage of the spacecraft’s low final orbit to obtain very high-resolution images of Mercury’s surface. One of Messenger’s many remarkable discoveries was the confirmation that ice exists in the polar regions of Mercury, the planet closest to the Sun.

Messenger’s voyage to Mercury began on August 3, 2004. The trip took almost seven years, and Messenger finally achieved orbit on March 18, 2011. Why did the trip take so long? Even though Mercury is only, on average, 48 million miles from Earth, Messenger’s voyage covered 4.9 billion miles. The trajectory involved one flyby of Earth, two flybys of Venus, and finally, three flybys of Mercury itself before Messenger could be inserted into orbit. A video of Messenger’s journey to Mercury is below:

Messenger was the first spacecraft to reach Mercury since Mariner 10 did so in 1975. During this gap, tantalizing evidence emerged that ice might exist in the polar regions of Mercury. The Arecibo Radio Telescope (the same one Jodie Foster used in Contact and James Bond destroyed in GoldenEye) detected strong evidence that ice exists in the shadowed regions of polar craters on Mercury. That made the confirmation of ice deposits one of the prime objectives of the Messenger mission. So how does ice exist on the closest planet to the Sun? The answer lies in two facts: Mercury’s small axial tilt and its lack of an atmosphere.

The axial tilt of Mercury is 2.11 degrees compared to Earth’s axial tilt of 23.5 degrees. At the poles, this tilt equals the highest angle above the horizon the Sun reaches at the summer solstice. The image below shows noon on June 21st at the North Pole on Earth, complete with a faux landscape courtesy of Starry Night.

The Sun at noon on the June solstice at Earth’s North Pole.  Image: Starry Night.

Now let’s take a look at the North Pole of Mercury when the Sun is at its highest.

The Sun at its highest at Mercury’s North Pole.  Image: Starry Night.

Quite a difference: this is about the same altitude the Sun has roughly 15 minutes after sunrise at mid-latitudes on Earth. Think about how long the shadows are at that time. On Mercury, there are craters as deep as 1 km.  When the Sun stays that low in the sky and craters are that deep, there will be areas in those craters that never see sunlight.
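A quick back-of-the-envelope sketch (in Python, with illustrative numbers) shows how dramatically shadows lengthen as the Sun drops toward the horizon:

```python
# Sketch: length of the shadow cast by a crater rim when the Sun is very low.
# shadow_length = rim_height / tan(solar_elevation)
import math

def shadow_length_km(rim_height_km, solar_elevation_deg):
    return rim_height_km / math.tan(math.radians(solar_elevation_deg))

rim_height = 1.0  # km, roughly the crater depths mentioned above
for elevation in (23.5, 10.0, 2.0):   # degrees above the horizon
    print(f"Sun at {elevation:>4}°: shadow stretches ~{shadow_length_km(rim_height, elevation):,.1f} km")
```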

One might ask: with Mercury being so close to the Sun, shouldn’t it be hot enough to melt ice even where no sunlight reaches? However, Mercury has practically no atmosphere, and without an atmosphere there is no wind to transport heat from sunlit areas to dark areas. The gravity on Mercury is only 38% that of Earth.  As a consequence, the escape velocity of Mercury is 4.3 km/s compared to Earth’s 11.2 km/s. Here, Mercury’s closeness to the Sun plays a key role. The hotter the temperature, the faster atoms and molecules move. On Mercury, the hottest atmospheric gas molecules move faster than the escape velocity, causing atmospheric loss. On Earth, cooler temperatures and a higher escape velocity enable the planet to retain its atmosphere. Hence, over the course of time, Mercury loses whatever atmospheric gases it may acquire.
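Here is a minimal sketch of that reasoning in Python.  It compares the typical thermal speed of common gas molecules with each planet's escape velocity, using the rough rule of thumb that a gas is retained over geologic time only if the escape velocity is about six times the thermal speed.  The temperatures are approximate.

```python
# Sketch: why Mercury cannot hold onto an atmosphere.
# Compares the typical (rms) thermal speed of a gas with a planet's escape velocity.
# Rule of thumb: a gas leaks away over geologic time unless the escape velocity is
# roughly 6x or more the typical thermal speed of its molecules.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
AMU = 1.66053907e-27    # atomic mass unit, kg

def rms_speed(temperature_k, molecular_mass_amu):
    return math.sqrt(3 * K_B * temperature_k / (molecular_mass_amu * AMU))

cases = [
    # (planet, approximate dayside temperature in K, escape velocity in m/s)
    ("Mercury", 700, 4300),
    ("Earth",   288, 11200),
]
for planet, temp, v_esc in cases:
    for gas, mass in (("N2", 28), ("O2", 32), ("H2O", 18)):
        v_th = rms_speed(temp, mass)
        print(f"{planet:7} {gas:3}: thermal {v_th:4.0f} m/s, escape {v_esc} m/s, "
              f"ratio {v_esc / v_th:4.1f}")
```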

Mercury’s surface bears a resemblance to the Moon’s. The Moon also lacks an atmosphere and is unable to transfer heat from the daylight side to the dark side. On Earth, the atmosphere distributes heat across the surface, and as a result the difference between day and night is a matter of degrees rather than hundreds of degrees. On the Moon, at essentially the same distance from the Sun as Earth, the temperature can range from 250 °F on the dayside to -240 °F on the dark side. On Mercury, the difference is even more extreme: the range can be from 800 °F on the dayside to -280 °F on the dark side. A region in a crater that is permanently in shadow will never approach the sublimation point of ice (sublimation meaning changing directly from ice to water vapor; the atmospheric pressure on Mercury is too low for liquid water to exist).

Credit: NASA/UCLA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

Messenger’s detection of ice on Mercury is a classic case of matching a predicted result with observation. Messenger used a neutron spectrometer to measure the number of neutrons emitted from the surface of Mercury. These neutrons are knocked loose when high-energy cosmic rays strike the surface and eject them into space. Areas with ice absorb these neutrons, leading to a lower count in those regions. Scientists built a model predicting the neutron counts, matched it against the observations, and voila, as the video below illustrates, a match was found.

This is a map of Mercury’s North Pole released by the Messenger mission in 2012.  The red regions are areas in permanent shadow and the yellow indicates areas of ice.

Credits: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington/National Astronomy and Ionosphere Center, Arecibo Observatory

In 2014, Messenger obtained visual confirmation of the existence of ice in the Kandinsky crater, located near the North Pole of Mercury.  The image was taken by Messenger’s wide angle camera (WAC) with a 600 nanometer (orange) broadband filter.  Normally, this camera’s function was to image stars for calibration purposes, but it was also able to capture ice lying in the shadows of Mercury’s craters using scattered sunlight. This capped off a 25-year hunt for proof that ice exists on the closest planet to the Sun.

This image compares a view of the crater without the filter (left) and with the filter (right) that can detect detail in the dark recesses of the crater.

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

Even though Messenger is in its final days, it is still productive. As its orbit decays and the spacecraft gets closer and closer to the surface, Messenger is able to take some of the highest-resolution images ever of a planetary surface. Messenger was selected by NASA for its mission in 1999. The total cost over those 16 years has been about $450 million. That is about the same as it cost to launch one shuttle mission. Or in more Earthly terms, the same amount Texas A&M will spend to renovate its football stadium.  Messenger has sent back over 10 terabytes (or 10,000 gigabytes) of data and over 200,000 images that will keep planetary scientists busy for years to come.  At least until 2024, when ESA’s BepiColombo mission will reach Mercury.

Corning, Kodak, and Hubble

This being the 25th anniversary of the launch of the Hubble Space Telescope, it is an opportune time to take a look at the connections between Upstate New York and the Hubble. I will focus on two companies: Corning, whose technological innovations made the Hubble possible, and Kodak, whose efforts could have spared NASA the grief of Hubble’s original design flaw.  The story begins in the 1930’s, when the proposed 200-inch Mt. Palomar Observatory mirror required a low-expansion material to be feasible.

During the 1920’s, the largest telescope in the world was the 100-inch reflector at Mt. Wilson Observatory. It was in this decade that Edwin Hubble would make his historic observations at Mt. Wilson, which included the discovery of galaxies outside the Milky Way and the expansion of the universe. At the time, George Ellery Hale was in the planning stages of building the 200-inch telescope at Mt. Palomar. To understand the engineering problem at hand, consider the surface area of the Mt. Wilson mirror:

A = πr²

Which is (3.14)(50 inches)² = 7,850 square inches, or 54.5 square feet.

The surface area of the Mt. Palomar mirror would be:

(3.14)(100 inches)² = 31,400 square inches, or 218 square feet.

Despite the fact that the diameter of the mirror at Palomar would be only double that at Mt. Wilson, the surface area would be four times larger. The mirror at Mt. Wilson was cast at a glassworks in France and weighed 9,000 pounds (the Mt. Wilson mirror is green, just like a wine bottle). The greater surface area of the Mt. Palomar mirror demanded a material that did not expand or contract as much in response to temperature changes. Unless an alternative material could be found, this expansion and contraction would distort the optics to the point of making the primary mirror useless.
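The same arithmetic in a few lines of Python:

```python
# Mirror area scales with the square of the diameter: doubling the diameter
# quadruples the glass surface that has to hold its shape.
import math

def mirror_area_sq_ft(diameter_inches):
    radius = diameter_inches / 2
    return math.pi * radius ** 2 / 144   # 144 square inches per square foot

wilson = mirror_area_sq_ft(100)    # Mt. Wilson 100-inch
palomar = mirror_area_sq_ft(200)   # Mt. Palomar 200-inch
print(f"Mt. Wilson : {wilson:6.1f} sq ft")
print(f"Mt. Palomar: {palomar:6.1f} sq ft  ({palomar / wilson:.0f}x larger)")
```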

Enter Pyrex

The invention of Pyrex at the Corning Glass Works was the result of an effort to make lantern glass suitable for railroad watchmen. Traditional glass lanterns would shatter when the heat of the flame combined with cold winter weather. By 1915, Pyrex was being produced for its best-known application: kitchenware. The low thermal expansion of Pyrex made it an excellent material for cooking.

Image: Wiki Commons.

In 1932, George Hale was looking for a material to cast the 200-inch Palomar mirror. His first choice was fused quartz, but the efforts at General Electric to cast the mirror that way proved unsuccessful. After spending $600,000 ($10 million in 2015 dollars) on the failed quartz effort, Hale turned to the Corning Glass Works to try its Pyrex product for the Palomar mirror.

Besides kitchenware, you may be familiar with Pyrex from your high school chemistry lab. Pyrex is regular glass with boron oxide added to it. The combination produces borosilicate glass, which expands very little when exposed to heat. This is what keeps Pyrex kitchenware and lab equipment from breaking when heated rapidly. So why is this important for a telescope mirror, which never sees the kind of heating Pyrex gets in an oven or over a Bunsen burner?

The optical precision required for the Palomar mirror meant that the mirror could not deviate more than two-millionths of an inch from its prescribed shape. Needless to say, even the slightest amount of thermal expansion would have grievous effects on the optical quality of the images produced by the mirror. With this in mind, Corning went to work on producing the Palomar mirror blank with its Pyrex material.

It would take Corning two tries to build the mirror that would be used at Palomar. During the first attempt, pieces of the mold broke off inside the mirror due to the heat of the casting process. This flawed mirror is now on display at the Corning Museum of Glass (see video below). The second mirror was shipped to California in 1936 via a highly publicized cross-country train ride. There, over 10,000 pounds of glass was ground away in an extensive polishing process to bring the mirror to the shape required to produce high-quality images. World War II delayed this work for a few years, but the mirror was finally installed in the telescope in 1948.

Mt. Palomar would remain the world’s largest telescope until 1993, when the Keck Observatory in Hawaii surpassed it. During its run as the world’s largest telescope, astronomers at Palomar would refine the measurement of the expansion of the universe and discover quasars, among many other achievements.  At the same time, Corning’s experience with producing observatory mirror blanks would be called upon again to make another groundbreaking instrument of astronomy.

In 1946, Lyman Spitzer proposed an observatory be placed in orbit above the distorting effects of the Earth’s atmosphere. In 1962, at the dawn of the space age, the National Academy of Sciences recommended that Spitzer’s concept be adopted by NASA as a long-term goal of the space agency. In 1977, Congress approved funding for an orbiting space telescope. In 1978, Corning went to work to produce two mirror blanks for the Hubble.

The Hubble mirrors were not made from Pyrex. By the 1970’s, Corning had developed Ultra Low Expansion (ULE) glass. Rather than using boron oxide as Pyrex does, ULE is made from a blend of titanium dioxide and silica to give it a nearly zero expansion coefficient. Besides being used for telescope mirrors, ULE was also utilized for space shuttle windows, as it could resist expansion when frictional heat built up during re-entry into Earth’s atmosphere.  This ability to maintain its shape made ULE an excellent candidate for the space telescope mirror.

The Hubble mirror is kept at a constant 70 °F, and its shape deviates by no more than 1/800,000 of an inch. If the Hubble mirror were the diameter of the Earth, its highest “mountain” would only be six inches. The nearly expansionless ULE maintains this optical precision.  ULE is also very lightweight: despite being nearly the same size as the 100-inch Mt. Wilson mirror, the Hubble mirror is only about 20% of the weight.
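To get a feel for why the choice of glass matters, here is a small Python sketch comparing how much a 100-inch span of ordinary plate glass, Pyrex, and ULE would change over a modest temperature swing.  The expansion coefficients are approximate textbook values, and the comparison is illustrative only.

```python
# Sketch: linear thermal expansion (change = coefficient * length * delta_T) for a
# 100-inch piece of glass over a 5 °C swing.  Coefficients are approximate.
MATERIALS = {
    # coefficient of thermal expansion, per °C (approximate values)
    "plate (soda-lime) glass": 9.0e-6,
    "Pyrex (borosilicate)":    3.3e-6,
    "ULE (titania-silica)":    0.03e-6,
}

LENGTH_IN = 100.0   # inches, roughly the size of the Hubble or Mt. Wilson mirror
DELTA_T_C = 5.0     # a modest 5 °C temperature swing

for name, cte in MATERIALS.items():
    change = cte * LENGTH_IN * DELTA_T_C
    # Compare against the two-millionths-of-an-inch tolerance quoted for Palomar.
    print(f"{name:24}: {change:.2e} inches (~{change / 2e-6:.0f}x a 2-millionth-inch tolerance)")
```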

After the production of the two mirror blanks, one was shipped to the Perkin-Elmer Corporation and the other to Eastman Kodak for polishing. The blank sent to Perkin-Elmer was eventually used for the Hubble.  It was at this stage that the fateful flaw would be made in the Hubble mirror.

A Kodak Moment NASA Regrets Passing Up

NASA received two bids to polish the mirror blanks from Corning. The Kodak bid was for $105 million while the Perkin-Elmer bid was for $70 million ($300 million and $200 million in 2015 dollars, respectively). Naturally, the Perkin-Elmer bid was viewed as the more competitive, but it contained some troubling aspects. The Kodak bid was to polish two mirrors using different testing techniques; the testing mechanism used on each mirror would then be applied to the other to determine which was the better mirror and to serve as a quality control check. Perkin-Elmer relied on a single method to polish and test its mirror. It then subcontracted Kodak to polish the back-up mirror, albeit at NASA’s request.

Further complicating matters (wonderfully described in Robert Capers and Eric Lipton’s report) were the budgetary and time constraints the Perkin-Elmer employees were working under. Perkin-Elmer had deliberately low-balled its bid to win the contract, expecting Congress to approve more funding as the project progressed. However, the early 1980’s saw the deepest recession since World War II; unemployment climbed toward 10% and Congress was in no mood to allocate more money for mirror polishing.

Image: Wiki Commons

In a proverbial cruel twist of fate, the famous flaw in the Hubble mirror was the result of three washers (yes, the very same kind used in your kitchen faucet) used by Perkin-Elmer technicians to shim the optical testing device, referred to as a null corrector. A worn spot of paint caused a misalignment of the laser that calibrated the distance from the null corrector to the mirror.  The overworked and rushed Perkin-Elmer technicians failed to report the calibration error in order to meet the deadline for producing the mirror.  This was compounded by overconfidence in the null corrector device, as signs of a flaw in the mirror were ignored by Perkin-Elmer project management. In the end, the mirror was polished perfectly to the wrong prescription, because the null corrector sat 1.3 mm closer to the mirror than it should have been. The resulting error in the mirror itself was about 2 micrometers, roughly 1/50th the thickness of a piece of paper.

Consequently, the $1.5 billion Hubble was launched into orbit 25 years ago today with spherical aberration in its mirror, producing blurry images, all because of three washers worth about twenty cents.

And what happened to the Kodak mirror? It stayed here on Earth. Kodak was not able to use its cross-testing method, as it made only one mirror rather than two. However, Kodak had used more traditional, time-tested methods to grind its mirror and finished its work in 1980, well before the Perkin-Elmer mirror. When the Hubble mirror flaw was discovered shortly after launch, the Kodak mirror played a key role in the ensuing investigation. The final determination was that the Kodak mirror had been ground to the right specifications, and corrective measures would not have been required had it been placed in the Hubble. Since Kodak was subcontracted by Perkin-Elmer, it was the latter who had the final say over which mirror to use, and quite naturally, Perkin-Elmer decided to use its own mirror. The flaw was corrected in 1993 by the STS-61 mission, which replaced the original Wide Field and Planetary Camera (WFPC1) with another* (WFPC2) containing optics to counteract the spherical aberration in the Hubble mirror images.  The difference before and after is shown below:

M100 before and after Hubble repair mission. Credit: NASA

For the other instruments on the Hubble, the Corrective Optics Space Telescope Axial Replacement (COSTAR) was installed.  This was a set of mirrors acting as “glasses” to correct the spherical aberration for the Faint Object Camera, the Faint Object Spectrograph, and the High Resolution Spectrograph.

The Kodak mirror (below) now resides at the Smithsonian Air & Space Museum.

Courtesy National Air and Space Museum

* Both the WFPC2 and COSTAR that corrected the mirror flaw in the Hubble were removed in 2009 by the final Hubble shuttle servicing mission. The WFPC2 and COSTAR were also donated to the Smithsonian Air and Space Museum.

Image on top of post is the Hubble mirror.  Photo:  NASA/ESA

Regional Astronomy Events This Week

A couple of intriguing astronomy events in New York state this week.

Thursday, April 16 – Professor Lynne Hillenbrand discusses young stars across the electromagnetic spectrum at the Cornell Astronomy Colloquium, 4-5 PM in the Space Sciences Building.

April 18th & 19th – The Northeast Astronomy Forum at Rockland Community College in Suffern, NY.

Apollo 11

Each semester, during the Earth & Moon segment of my astronomy course, I like to show the class this video of the Apollo 11 liftoff. It gives the students, most of them too young to have witnessed it, an opportunity to see how the event was covered at the time. It also ties in well with the concepts learned in the prior module on Newton’s laws of motion. Many of those concepts apply to launches today, and this is a good opportunity to break down NASA jargon into comprehensible English.

Working the broadcast that day for CBS were Walter Cronkite and Wally Schirra, an astronaut who flew on three space missions, including Apollo 7.  The annotations below follow the video's timestamps rather than the launch clock.

0:12 These are the five first-stage F-1 engines, each capable of producing 1.5 million pounds of thrust for a total of 7.5 million pounds. The Saturn V weighed 5 million pounds at launch. Newton’s third law states that for every action there is an equal and opposite reaction. The excess 2.5 million pounds of thrust is what lifted the Saturn V off the pad. The engines were produced by Rocketdyne (dyne comes from the Greek word for power), which is now part of GenCorp Inc.  Now known as Aerojet Rocketdyne, the company has had some difficult times recently. The center engine is referred to as the inboard engine and the four outer engines as the outboard engines; the outer four gimbal to guide the rocket.
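A quick check of that arithmetic in Python, using the figures quoted above:

```python
# Sketch of the liftoff arithmetic: net thrust is what's left after the rocket's
# weight is subtracted, and that surplus sets the initial acceleration.
G = 32.2  # ft/s^2, standard gravity

thrust_lbf = 5 * 1_500_000      # five F-1 engines at 1.5 million pounds each
weight_lbf = 5_000_000          # Saturn V weight at launch

net_thrust = thrust_lbf - weight_lbf
accel = net_thrust / weight_lbf * G     # a = F_net / m, with m = weight / g

print(f"Net thrust at liftoff : {net_thrust:,} lbf")
print(f"Initial acceleration  : {accel:.1f} ft/s^2 ({accel / G:.2f} g)")
```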

0:21 The voice you hear is Jack King, then the Kennedy Space Center Chief of Public Information.  King passed away in June of 2015.

0:30 The steam you see coming off the Saturn V is boil-off from the cryogenically cooled liquid hydrogen and oxygen.  Hydrogen boils at -423 °F, only about 36 °F warmer than absolute zero.  Oxygen boils at -297 °F.  Venting the boiled-off hydrogen and oxygen prevents the fuel tanks from being deformed.  The black markings on the Saturn V are quarter marks, used to study the roll of the rocket during launch.

0:43 The fuel tanks are continually topped off until right before launch; as discussed above, the liquid hydrogen and oxygen boil at very low temperatures, and the boiled-off propellant needs to be replenished.

1:53 Walter Cronkite mentions the water deluge on the launch pad. This system could release 45,000 gallons per minute as a sound suppression measure to avoid acoustic damage to the Saturn V.  A nuclear weapon is the only human-made device louder than a Saturn V launch.

2:21 Ignition of the F-1 engines starts 8.9 seconds before launch. This is the amount of time it takes to build up the required thrust for lift-off.

2:32 Lift-off! The Saturn V is angled 1.25 degrees away from the launch tower to avoid contact.  Close-up videos of the launch (below) reveal large chunks of ice shaking off the rocket.  The Saturn V carried some 1,200 pounds of ice on its sides, formed by the very cold liquid hydrogen and oxygen in the fuel tanks.

2:41 Jack King announces the tower is cleared. At this point, control of the flight is transferred from the Kennedy Space Center in Florida to the Johnson Space Center in Houston.

2:43 Neil Armstrong announces the beginning of the roll and pitch program to send the Saturn V out over the Atlantic. This is the same direction the Earth rotates.  At the Cape, the Earth’s surface rotates at 914 mph (1,471 km/hr).  That is the velocity boost Apollo 11 receives from Earth’s rotation to help attain orbit, and it is why launches are made eastward.  The closer to the equator, the faster the Earth’s rotational speed. Launch facilities located near the equator, such as ESA’s Guiana Space Centre, have the competitive advantage of being able to lift more payload for the same amount of thrust.
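Here is a small Python sketch of that cosine-of-latitude effect, using approximate site latitudes:

```python
# Sketch: the eastward speed boost Earth's rotation gives a launch site,
# which falls off with the cosine of latitude.
import math

EQUATOR_CIRCUMFERENCE_MI = 24_901      # miles
SIDEREAL_DAY_HR = 23.934               # hours

def rotation_speed_mph(latitude_deg):
    return EQUATOR_CIRCUMFERENCE_MI / SIDEREAL_DAY_HR * math.cos(math.radians(latitude_deg))

for site, lat in (("Kennedy Space Center", 28.5), ("Guiana Space Centre", 5.2), ("Baikonur", 45.6)):
    print(f"{site:22}: {rotation_speed_mph(lat):6.0f} mph eastward")
```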

2:48 Walter Cronkite mentions the building is shaking, a common occurrence during Apollo launches.  The press was stationed three miles from the launch pad. One (of many) reasons the recent movie Apollo 18 was not realistic: lift-off would have set the entire Cape shaking, so it would not be possible to launch a Saturn V there in secret.

3:31 As Apollo 11 approaches the speed of sound, the pressure changes from the shock waves cause water vapor to condense in a visible cloud around the vehicle.

4:06 The region of maximum dynamic pressure, or Max Q, is when the combined stress of velocity and air density on the Saturn V is greatest. Although the velocity continues to increase, atmospheric density drops off rapidly after this point. This can be seen in the widening exhaust plume from the rocket as the surrounding air pressure rapidly declines.
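The idea behind Max Q can be sketched in a few lines of Python.  The ascent profile below is completely made up for illustration (it is not Saturn V telemetry); the point is only that dynamic pressure rises, peaks, and then falls as air density thins out faster than velocity grows.

```python
# Sketch: dynamic pressure q = 1/2 * rho * v^2.  Velocity keeps climbing during
# ascent, but air density falls off roughly exponentially with altitude, so q
# peaks ("Max Q") and then declines.  The ascent profile is invented.
import math

RHO_SEA_LEVEL = 1.225      # kg/m^3
SCALE_HEIGHT_M = 8_500     # approximate exponential scale height of the atmosphere

def dynamic_pressure(velocity_ms, altitude_m):
    rho = RHO_SEA_LEVEL * math.exp(-altitude_m / SCALE_HEIGHT_M)
    return 0.5 * rho * velocity_ms ** 2

# Toy profile: velocity and altitude both increase with time after liftoff.
for t in range(0, 160, 20):                  # seconds
    velocity = 12.0 * t                      # m/s, invented linear ramp
    altitude = 6.0 * t ** 2                  # m, invented quadratic climb
    q = dynamic_pressure(velocity, altitude)
    print(f"t={t:3d}s  v={velocity:5.0f} m/s  alt={altitude/1000:5.1f} km  q={q/1000:6.1f} kPa")
```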

5:12 Staging: the first stage is released and drops into the Atlantic Ocean, and the second stage ignites.  Apollo 11 is now at an altitude of about 42 miles, well above the stratosphere.

5:16 The second stage has five J-2 engines. Like the F-1, these were also made by Rocketdyne. At this point in the flight, raw thrust is less important, as the rocket is lighter and burn time takes precedence. Each J-2 produces 230,000 pounds of thrust, but the burn time is about 7 minutes.

5:43 “Skirt sep” refers to the separation of the interstage skirt between the first and second stages.

5:58 Mike Collins reports that visual is a go. He is referring to the command module’s protective launch cover being jettisoned along with the escape tower. At this point, the astronauts have a view out the window.

7:50 Here, many of my younger students express shock at the animation used in the coverage. It was a common device during the early years of the Space Age; cameras had not yet been miniaturized the way they are today, which is what later allowed for the on-board views famously seen during shuttle launches.

8:45 Fitted between the third stage and the service module was the IBM computer ring for the Saturn V. The ring was 3 feet high, 22 feet in diameter, and held 32 kB of memory, about half the size of a blank Word document.

IBM Saturn V Computer Ring – Courtesy: NASA

9:20 The water deluge on launch pad 39A, used to mitigate damage from the lift-off burn. After the Apollo program ended, this launch pad was converted for use during the Shuttle program.  Today, pad 39A is leased out to SpaceX for its future space operations.

10:38 This was a very troubled time for American passenger railroads, as alluded to by Walter Cronkite. Penn Central would file for bankruptcy less than a year later, prompting Congress to form Amtrak in 1971.

Two hours and fifty minutes after launch, Apollo 11 began trans-lunar injection and was on its way to the Moon.

*Top image launch of Apollo 11, July 16, 1969.  Photo:  NASA.

Magnetic Reconnection, Part Deux

Based on the feedback from my last post, I wanted to clarify a few things, mostly about the Interplanetary Magnetic Field (IMF) generated by the Sun. But first, the x, y, z axis scheme. Below is an image of a three-dimensional set of axes:

The orbits of the planets reside in the x-y plane, along with the Sun itself.  In astronomy, this is known as the plane of the ecliptic. As the solar wind spreads out through the Solar System, it takes the IMF field lines with it. Because the Sun rotates, just as the Earth does, the solar wind, and thus the IMF, traces out the pattern you see from a rotating water sprinkler when looking down on the x-y plane.

Courtesy: NASA

However, the IMF is not flat along the x-y plane.  In addition to the sprinkler-type formation seen above, it also has a wavy structure.  This is represented in the image below:

Courtesy: NASA

The z-axis is up and down.  When the IMF slopes upward, it is said to have a positive Bz value (B is the symbol physics uses for a magnetic field).  The Earth’s magnetic field flows from the geographic South Pole to the North Pole and likewise has a positive Bz value.  When two magnetic fields flow in the same direction, the probability of reconnection is low.  When the IMF slopes downward, it has a negative Bz value.  In that case, the probability of reconnection is high, which brings with it a high chance of auroral activity and magnetic storms.  This is why, if you visit a space weather website to check on possible aurora, you’ll want to be on the lookout for a negative Bz value.
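If you want to automate that check, a toy Python version might look like the following.  The -5 nT threshold is just a rough rule of thumb aurora watchers often use, and the sample readings are invented.

```python
# Sketch: the simple check an aurora watcher makes against real-time solar wind
# data.  A southward (negative) Bz means reconnection, and therefore aurora,
# is more likely.  The sample values below are invented for illustration.
def reconnection_favorable(bz_nanotesla, threshold=-5.0):
    """Crude flag: strongly negative Bz suggests favorable conditions."""
    return bz_nanotesla <= threshold

for bz in (6.2, 0.4, -3.1, -9.8):      # nT, invented sample readings
    status = "watch for aurora" if reconnection_favorable(bz) else "quiet"
    print(f"Bz = {bz:+5.1f} nT -> {status}")
```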

 

Magnetic Reconnection & Why it Matters to You

Recently, NASA launched the Magnetospheric Multiscale (MMS) mission to detect a phenomenon known as magnetic reconnection. Unless you have taken a course in space physics, chances are you have not run across the concept. However, magnetic reconnection plays a key role here on Earth as well as throughout the universe. To understand why NASA wants to learn more about reconnection, we’ll start off with a few basics.

All magnetic fields are dipoles; that is, they have two poles, north and south. The field lines run from the magnet’s north pole to its south pole. The Earth itself acts like a giant bar magnet, and the Earth’s magnetic north pole lies near the geographic South Pole and vice versa. Hence, geographically, the flow of Earth’s magnetic field runs from south to north, as seen below:

Courtesy: NASA

Three things to consider regarding electric and magnetic fields:

A changing magnetic field creates an electric field.

An electric current creates a magnetic field.

A particle with an electric charge will travel along the path of a magnetic field line.

The material in interplanetary space consists mostly of electrically charged particles, a state of matter called plasma. The intense heat of the Sun strips electrons (negative charge) away from the nucleus, which contains protons (positive charge). Thus, normally neutral atoms are broken apart and spread throughout the Solar System as an electrified gas (plasma) via the solar wind.

The Sun’s magnetic field is embedded in the solar wind and extends past the most distant planets. It is thus known as the Interplanetary Magnetic Field, or IMF. As this plasma travels out into the Solar System, some of it encounters the Earth’s magnetic field on the day side. This is one of two areas where reconnection occurs around Earth.

Reconnection occurs when the IMF field lines merge with the Earth’s magnetic field lines, transferring mass and energy from the solar wind into Earth’s magnetic field. The likelihood of reconnection is greatest when the IMF and Earth’s magnetic field lines flow opposite one another. Remember that Earth’s magnetic field flows from the geographic South Pole to the North Pole; thus, reconnection occurs most often when the IMF is flowing southward.

In physics, the letter B signifies a magnetic field. A three-dimensional coordinate system is labeled with the letters x, y, and z. The z-axis is up and down, with negative z values pointing southward. When the IMF has a strongly negative Bz, the IMF is flowing opposite the northward flow of the Earth’s magnetic field and the potential for reconnection is high.

So why does any of this matter to us? When reconnection occurs, the probability of magnetic storms on Earth is high. These storms create the aurora, their most aesthetically pleasing feature. The plasma from the IMF is transferred to the Earth’s magnetic field during reconnection and is swept by the solar wind over to the night side.  There, reconnection occurs a second time as the Earth’s magnetic field is stretched out by the solar wind; opposing field lines reconnect and unleash plasma at high velocity.  The process is diagrammed below:


Courtesy: NASA

The unleashed plasma then follows the Earth’s magnetic field lines into the polar regions. Unabated, the kinetic energy of these particles would be harmful to life. However, oxygen and nitrogen atoms in the upper atmosphere absorb the kinetic energy, lifting their electrons into higher energy orbits. When the electrons fall back to lower energy orbits, the absorbed energy is released as harmless light, producing the aurora over the polar regions.  The aurora as seen from the International Space Station is below:

The story does not end there. As mentioned above, a changing magnetic field produces electric currents. Reconnection causes disturbances in the Earth’s magnetic field that can induce electric currents in both orbiting satellites and ground based power grids. These currents can cause expensive damage to electrical systems. A better understanding of the process of reconnection can provide more accurate forecasts of magnetic storms. These forecasts can inform satellite operators when to go into protective shutdown mode to prevent damage and allow power grids to take preventative action against blackouts.

The potential for damage is not insignificant. The 1989 magnetic storm caused a blackout across Quebec for nine hours. During that event, auroras were seen as far south as the Gulf Coast. The Carrington event of 1859 was much stronger than the 1989 storm. The electrical currents induced by this magnetic storm were so strong that telegraphs were able to operate even when disconnected from their batteries. A similar event today would, of course, result in much more significant damage given how much more reliant we are on electronics than in 1859. This is the most pressing reason NASA wants to understand the nature of reconnection. There are many other astrophysical applications as well.

The MMS Senior Project Scientist Tom Moore describes how the mission will research magnetic reconnection below:

The MMS mission will cost $850 million.  To put that in perspective, it cost $1.6 billion to build MetLife Stadium and $1.3 billion for the luxury high rise at 432 Park Avenue.  Space is not cheap, but it is no more expensive than major construction projects back here on Earth.

Replicate, Replicate, Replicate

Typically, during grade school we are introduced to the scientific method with the following steps:

Hypothesis

Gather Data

Experiment – test hypothesis

Reject or accept hypothesis

However, there is one extra step that is very important: replication of the original research result. Replication never gets the headlines or wins a Nobel, but without it, science cannot advance. This is a vital, if often overlooked, aspect of the scientific process that guards against fraudulent work and erroneous findings. One example of the replication process nullifying an original research result is the vaccination/autism link. In the late 1990’s, a research paper was published indicating a link between vaccinations and autism. Those results could not be replicated independently and were later discovered to be fraudulent.

Another controversial science paper was published around the same time as the fraudulent vaccine/autism study: the hockey stick graph indicating a rapid rise in global surface temperatures during the 20th century. Unlike the vaccine/autism link, the hockey stick result was replicated independently. In addition, an investigation of fraud charges against the author, Michael Mann, cleared him of any wrongdoing. Hence, the vaccine/autism link was fraudulent science that is not valid, while the hockey stick graph is a valid result, confirmed independently and cleared of fraud charges.

In astronomy, the replication process recently nullified a discovery that attracted quite a bit of publicity. In 2014, a team of astronomers announced the detection of gravitational waves imprinted in the cosmic background radiation left over from the Big Bang. This had huge implications, as it would have confirmed the inflationary model of the Big Bang as well as the existence of gravitational waves predicted by Einstein in 1916. However, the replication process determined that the polarized-light signal the team detected was actually caused by dust in the Milky Way.

It happens; another case was the exoplanet Gliese 581g. When discovered, it was thought to be a habitable planet and garnered quite a bit of press coverage. However, subsequent observations determined that the signal was caused by activity of the star itself and not by an exoplanet at all. Nor is this restricted to the natural sciences. In economics, there was a notable failure to replicate a research result that had significant policy implications during the height of the worst economic downturn since the Great Depression.  As you can tell, a failure to replicate does not necessarily mean the original work was fraudulent; sometimes it is caused by a breakdown in methodology or a misunderstanding of cause and effect.

All this can be very confusing to students and can lead to disillusionment with science unless the process is understood properly. We cannot be experts at everything, so how do we know if a scientific report is trustworthy? How do we know if that latest nutrition study the press is hyping will pan out in the long run? I tell my students to see if the results have been verified independently. For example, the link between lung cancer and smoking has been replicated by many studies and thus can be trusted as quality information. That is the hallmark of good science.
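One very simple way to think about "verified independently" is to ask whether an independent study's result is statistically consistent with the original.  The toy Python sketch below uses invented effect-size confidence intervals and a crude overlap check (a heuristic, not a formal test):

```python
# Toy illustration of checking whether an independent study replicates an
# original result: do the two reported confidence intervals overlap?
# (Overlap is only a rough heuristic; the numbers are invented.)
def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

original    = (1.8, 4.2)    # invented effect-size confidence interval
replication = (0.9, 2.5)    # invented follow-up study
failed_rep  = (-0.4, 0.3)   # invented study finding no effect

print("replication consistent with original:", intervals_overlap(original, replication))
print("null result consistent with original:", intervals_overlap(original, failed_rep))
```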

The lesson here: never treat an initial finding (no matter how interesting) as a conclusive result. It is best to wait for replication of the original study to confirm those results.  Failure to do that with the autism/vaccine link has caused a significant increase in measles cases over the past year.  And as we have discovered, once a concept gets lodged in the public mindset, it can be very difficult to dislodge.

So, one more bit of advice on how to do good science: if we develop a “rooting” interest in a scientific result, as we do at a sporting event or in politics, we have already gone off the rails as far as the scientific method goes. Nature will not bend to our wishes. We can only employ the scientific method to understand it better.

*Image at top of post is a replication of Mann’s hockey stick graph, courtesy NOAA.