
55 Years Ago: President Nixon Establishes Space Task Group to Chart Post-Apollo Plans


NASA

In early 1969, the goal set by President John F. Kennedy to land a man on the Moon seemed within reach. A new president, Richard M. Nixon, now sat in the White House and needed to chart America’s course in space in the post-Apollo era. President Nixon directed his science advisor to evaluate proposals for America’s next steps in space. He also established a Space Task Group (STG), chaired by Vice President Spiro T. Agnew, to report back to him with its recommendations. The STG delivered its report to President Nixon on Sept. 15, 1969, but he declined to select any of the options it proposed. Instead, more than two years later, he directed NASA to build the space shuttle, just one element of the ambitious plans the STG had proposed.

Left: President John F. Kennedy announces his goal of a Moon landing during a Joint Session of Congress in May 1961. Right: President Kennedy reaffirms the goal during his address at Rice University in Houston in September 1962.

On May 25, 1961, President Kennedy, before a Joint Session of Congress, committed the United States to the goal of landing a man on the Moon and returning him safely to the Earth before the decade was out. President Kennedy reaffirmed the commitment during an address at Rice University in Houston in September 1962. Vice President Lyndon B. Johnson, who had played a key role in establishing NASA in 1958 and under Kennedy served as the Chair of the National Aeronautics and Space Council, worked with members of Congress to ensure adequate funding over the next several years to provide NASA with the resources to meet that goal. Following Kennedy’s assassination in November 1963, Johnson, now president, continued his strong support of the space program to ensure that his predecessor’s goal of a Moon landing could be achieved within the stipulated time frame. But with increasing competition for scarce federal resources from the conflict in Southeast Asia and from domestic programs, Johnson showed less interest in any space endeavors that might follow the Moon landing. The space agency’s annual budget peaked in 1966 and began a steady decline three years before Kennedy’s goal was met. From a budgetary standpoint, the prospects for a vibrant post-Apollo space program did not look rosy, the Apollo triumphs of 1968 and 1969 notwithstanding.

Left: President Richard M. Nixon, right, meets with his science advisor Lee DuBridge in the Oval Office – note the Apollo 8 Earthrise photo on the wall. Right: President Nixon, left, and Vice President Spiro T. Agnew, right, introduce Thomas O. Paine as the nominee to be NASA administrator on March 5, 1969.

On Feb. 4, just two weeks after taking office, President Nixon directed his science advisor, Lee A. DuBridge, to appoint an interagency committee to advise him on a post-Apollo space program. Nine days later, the President announced the formation of the STG to develop a strategy for America’s space program for the next decade. Vice President Agnew, as the Chair of the National Aeronautics and Space Council, led the group. Other members of the STG included NASA Acting Administrator Thomas O. Paine (the Senate confirmed him as administrator on March 20), the Secretary of Defense, and the Director of the Office of Science and Technology.

Left: Proposed lunar landing sites through Apollo 20, per NASA planning in August 1969. Right: Illustration of the Apollo Applications Program experimental space station.

At the time, the only approved human spaceflight programs were the lunar missions through Apollo 20 and the Apollo Applications Program (AAP), later renamed Skylab, which involved three flights to an experimental space station based on Apollo technology. Beyond a vague consensus that the United States human spaceflight program should continue, no approved projects existed to follow these missions when they ended around 1975.

Left: Concept of a fully reusable space shuttle system from early 1969. Middle: Illustration from early 1969 of low Earth orbit infrastructure, including a large space station supported by space shuttles. Right: Cover page of NASA’s report to the interagency Space Task Group.

Within NASA, given the intense focus on achieving the Moon landing within President Kennedy’s time frame, officials paid less attention to what would follow the Apollo Program and AAP. During a Jan. 27, 1969, meeting at NASA chaired by Paine, a general consensus emerged that the next step after the Moon landing should be the development of a 12-person Earth-orbiting space station by 1975, followed by an even larger outpost capable of housing up to 100 people “with a multiplicity of capabilities.” In June, with the goal of the Moon landing about to be realized, NASA’s internal planning added the development of a space shuttle by 1977 to support the space station and, optimistically, the development of a lunar base by 1976, among other ambitious endeavors, including the idea that the U.S. should begin preparing for a human mission to Mars as early as the 1980s. These proposals were presented to the STG for consideration in early July in a report titled “America’s Next Decade in Space.”

Left: The Space Task Group’s (STG) Report to President Nixon. Right: Meeting in the White House to present the STG Report to President Nixon. Image credit: courtesy Richard Nixon Presidential Library and Museum.

Still bathing in the afterglow of the successful Moon landing, the STG presented its 29-page report “The Post-Apollo Space Program: Directions for the Future” to President Nixon on Sept. 15, 1969, during a meeting in the White House Cabinet Room. In its Conclusions and Recommendations section, the report noted that the United States should pursue a balanced robotic and human space program but emphasized the importance of the latter, with a long-term goal of a human mission to Mars before the end of the 20th century. The report proposed that NASA develop new systems and technologies that emphasized commonality, reusability, and economy in its future programs. To accomplish these overall objectives, the report presented three options:

Option I – this option required more than a doubling of NASA’s budget by 1980 to enable a human Mars mission in the 1980s, the establishment of a lunar-orbiting space station, a 50-person Earth-orbiting space station, and a lunar base. A decision would be required by 1971 on development of an Earth-to-orbit transportation system to support the space station. A strong robotic scientific and exploration program would be maintained.

Option II – this option maintained NASA’s budget at then-current levels for a few years, then anticipated a gradual increase to support the parallel development of both an Earth-orbiting space station and an Earth-to-orbit transportation system, but deferred a Mars mission to about 1986. A strong robotic scientific and exploration program would be maintained, but smaller than in Option I.

Option III – essentially the same as Option II but deferred indefinitely the human Mars mission.

In separate letters, both Agnew and Paine recommended that President Nixon choose Option II.

Left: Illustration of a possible space shuttle orbiter from 1969. Right: Illustration of a possible 12-person space station from 1969.

The White House released the report to the public at a press conference on Sept. 17 with Vice President Agnew and Administrator Paine in attendance. Although he publicly supported a strong human spaceflight program, enjoyed the positive press he received when photographed with Apollo astronauts, and initially sounded positive about the STG options, President Nixon ultimately chose not to act on the report’s recommendations. Faced with the still ongoing conflict in Southeast Asia and domestic programs competing for scarce federal dollars, the fiscally conservative Nixon decided these plans were too grandiose and far too expensive. He also believed that NASA should be treated as one of America’s domestic programs, without the special status it had enjoyed during the 1960s, one of the lasting legacies of the Nixon space doctrine. Even some of the remaining planned Moon landing missions fell victim to the budgetary axe. On Jan. 4, 1970, NASA canceled Apollo 20 since it needed that mission’s Saturn V rocket to launch the Skylab experimental space station – NASA Administrator James E. Webb had shut down the Saturn V assembly line in 1968, and none remained beyond the original 15 built under contract. In September 1970, reductions in NASA’s budget forced the cancellation of two more Apollo missions, and for a time in 1971 President Nixon considered canceling two more. He relented, and they flew as the final two Apollo Moon landing missions in 1972.

Left: NASA Administrator James C. Fletcher, left, and President Richard M. Nixon announce the approval to proceed with space shuttle development in 1972. Right: First launch of the space shuttle in 1981.

More than two years after the STG submitted its report, in January 1972 President Nixon directed NASA Administrator James C. Fletcher to develop the Space Transportation System, the formal name for the space shuttle, the only element of the recommendations to survive the budgetary challenges. At that time, the first flight of the program was expected in 1979; in actuality, the first flight occurred two years later. It would be 12 years after Nixon’s shuttle decision before President Ronald W. Reagan approved the development of a space station, the second major component of the STG recommendation, and another 14 years after that before the first element of that program reached orbit. In those intervening years, the original American space station had been redesigned and evolved into the multinational partnership called the International Space Station.

The International Space Station as it appeared in 2021.


