3 min read

Preparations for Next Moonwalk Simulations Underway (and Underwater)

Artist's concept highlighting the novel approach proposed by the 2025 NIAC selection, Autonomous Tritium Micropowered Sensors.
NASA/Peter Cabauy

Peter Cabauy
City Labs, Inc.

The NIAC Phase I study confirmed the feasibility of nuclear-micropowered probes (NMPs) that use tritium betavoltaic power technology for autonomous exploration of the Moon's permanently shadowed regions (PSRs). This work advanced the concept from technology readiness level (TRL) 1 to TRL 2, validating theoretical models and feasibility assessments. Phase II will refine the technology, address outstanding challenges, and raise the TRL to 3, with a roadmap for further maturation toward TRL 4 and beyond in support of NASA's lunar and planetary exploration missions.

A key innovation is the tritium betavoltaic power source, which provides long-duration energy in extreme environments. The proposed 5 cm x 5 cm, gram-scale device supports lunar spectroscopy and other applications. In-situ analyses at the Moon's south pole are challenging because of extreme cold, limited solar power, and prolonged darkness. Tritium betavoltaics harvest energy from radioactive decay, enabling autonomous sensing in environments unsuitable for conventional photovoltaics and chemical batteries.

The proposal focuses on designing an ultrathin, lightweight tritium betavoltaic power source into an NMP that can integrate various scientific instruments. Tritium-powered NMPs support diverse applications, from planetary science to scouting missions for human exploration, and the approach enables large-scale deployment for high-resolution remote sensing. For instance, a distributed NMP array could map lunar water resources, aiding Artemis missions. Beyond the Moon, tritium-powered platforms enable a class of missions to Mars, Europa, Enceladus, and asteroids, where alternative power sources are impractical.

Phase II objectives focus on improving the energy conversion efficiency and resilience of tritium betavoltaic power sources, targeting 1-10 μW of continuous electrical power with higher thermal output. The project will optimize NMP integration with sensor platforms, enhancing power management, data transmission, and environmental survivability in PSR conditions. Environmental testing will assess survivability under lunar landing conditions, including decelerations of 27,000-270,000 g and interactions with lunar regolith. The goal is to advance the TRL from 2 to 3 by demonstrating proof-of-concept prototypes and preparing for TRL 4. Pathways for NASA mission integration will be explored, assessing scalability, applicability, and cost-effectiveness compared with alternative technologies.
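To put the 1-10 μW target and its thermal byproduct in perspective, here is a minimal back-of-envelope sketch in Python of tritium betavoltaic output as a function of loaded activity. The ~5.7 keV mean beta energy and the ~5% conversion efficiency are generic published values assumed for illustration, not parameters from the NIAC study.

```python
# Back-of-envelope estimate of tritium betavoltaic output (illustrative only).
# Mean beta energy and conversion efficiency are typical published values,
# not figures from the NIAC study.

EV_TO_J = 1.602e-19            # joules per electron-volt
CI_TO_BQ = 3.7e10              # decays per second in one curie
MEAN_BETA_ENERGY_EV = 5.7e3    # average tritium beta energy (~5.7 keV)

def betavoltaic_output(activity_ci: float, efficiency: float = 0.05):
    """Return (electrical_W, thermal_W) for a given tritium activity."""
    decay_power_w = activity_ci * CI_TO_BQ * MEAN_BETA_ENERGY_EV * EV_TO_J
    electrical_w = decay_power_w * efficiency
    thermal_w = decay_power_w - electrical_w   # the rest is deposited as heat
    return electrical_w, thermal_w

if __name__ == "__main__":
    for ci in (1, 3, 6):
        e_w, t_w = betavoltaic_output(ci)
        print(f"{ci} Ci: {e_w * 1e6:.1f} uW electrical, {t_w * 1e6:.1f} uW thermal")
```

Under these assumptions, roughly one to six curies of tritium spans the 1-10 μW electrical target, with the remaining decay energy available as heat, consistent with the dual power-and-thermal role described below.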

A key discovery in Phase I was the thermal-survivability benefit of the betavoltaic's tritium metal hydride, which generates enough heat to keep electronic components operational. This dual functionality, as both a power source and a thermal stabilizer, allows NMP components to function within temperature specifications, a breakthrough for autonomous sensing in extreme environments.

Beyond lunar applications, this technology could revolutionize planetary science, deep-space exploration, and terrestrial use cases. It could aid Mars missions, where dust storms and long nights challenge solar power, and Europa landers, which need persistent low-power operation. Earth-based applications such as biomedical implants and environmental monitoring could also benefit from the proposed advancements in betavoltaic energy storage and micro-scale sensors.

The Phase II study supports NASA's Artemis objectives by enabling sustainable lunar exploration through enhanced resource characterization and autonomous monitoring. Tritium-powered sensing has strategic value for PSR scouting, planetary-surface mapping, and deep-space monitoring. By positioning tritium betavoltaic NMPs as a power solution for extreme environments, this study lays the foundation for transitioning the technology from concept to implementation, advancing space exploration and scientific discovery.

2025 Selections

Details
Last Updated: May 27, 2025
Editor: Loura Hall


  • Similar Topics

    • By European Space Agency
      Two spacecraft flying as one – that is the goal of the European Space Agency's Proba-3 mission. Earlier this week, the eclipse-maker moved a step closer to achieving that goal, as both spacecraft aligned with the Sun and maintained their relative position for several hours without any control from the ground.
    • By NASA
      5 min read
      Artist's concept of drones flying in an urban environment near large city skyscrapers. NASA / Maria Werries
      Remotely piloted aircraft could transform the way we transport people and goods and provide our communities with better access to vital services, like medical supply deliveries and efficient transportation.
      NASA’s Pathfinding for Airspace with Autonomous Vehicles (PAAV) subproject is working with partners to safely integrate remote air cargo and air taxi aircraft into our national airspace alongside traditional crewed aircraft.
      These new types of vehicles could make air cargo deliveries and air travel more affordable and accessible to communities across the country.  
      The Need
      The United States' large air cargo fleet is expected to grow significantly through 2044 to meet cargo demand, according to the Federal Aviation Administration (FAA).
      However, pilot shortages, exacerbated by early retirements and crew reductions implemented during the coronavirus pandemic, continue to present a challenge to the air cargo industry.
      In the future, one pilot could potentially manage multiple aircraft remotely. This could help meet the rising demand for air cargo operations, mitigate pilot shortages and costs, and increase the number of daily air cargo deliveries.
      Additionally, remotely piloted air taxis could reduce travel time for passengers and alleviate traffic congestion because they could avoid crowded roads and highways.  
      Identifying the Technical Challenges 
      Commercial companies are investing in autonomous technologies to enable remote air cargo deliveries and air taxi operations.
      NASA is working with the industry along the way to identify the unique technical challenges that must be overcome to safely put these new types of aircraft into routine operation.  
      The agency has identified several challenges that need to be addressed for safe and scalable remote operations. Among these challenges are airspace integration, avoiding airborne and ground-based hazards, and resilient communication technologies. 
      The main difference between conventional crewed aircraft and remotely piloted aircraft is the location of the pilot. Remote pilots operate aircraft from a control station on the ground instead of the cockpit.
      This means remote pilots will need new automation and decision support systems for operating the aircraft, since they can't rely on their own eyes and the view from the cockpit. Because remote pilots are on the ground, they also need a reliable communications link that allows them to interact with the aircraft and maintain command and control.
      If the command-and-control capabilities are lost, an autonomous system would need to take over to make sure the uncrewed aircraft can fly and land safely, according to NASA researchers. Adequate software and procedures must be in place to safely manage off-nominal losses of the command-and-control capabilities.
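As a rough illustration of the lost-link handling described above, the sketch below maps the time since the last command-and-control message to an onboard contingency action. The states, timeout values, and actions are hypothetical examples, not NASA, PAAV, or FAA-defined procedures.

```python
# Illustrative lost-link contingency logic for a remotely piloted aircraft.
# States, timeout values, and actions are hypothetical, not an actual
# NASA/PAAV or FAA-defined procedure.
from enum import Enum, auto

class LinkState(Enum):
    NOMINAL = auto()
    DEGRADED = auto()   # brief dropout; hold current clearance
    LOST = auto()       # execute pre-filed lost-link contingency route

DEGRADED_AFTER_S = 5.0  # assumed dropout tolerance
LOST_AFTER_S = 30.0     # assumed threshold to declare lost link

def assess_link(seconds_since_last_message: float) -> LinkState:
    if seconds_since_last_message < DEGRADED_AFTER_S:
        return LinkState.NOMINAL
    if seconds_since_last_message < LOST_AFTER_S:
        return LinkState.DEGRADED
    return LinkState.LOST

def contingency_action(state: LinkState) -> str:
    # Map link state to an onboard contingency action.
    return {
        LinkState.NOMINAL: "follow remote pilot commands",
        LinkState.DEGRADED: "hold last clearance and attempt link re-acquisition",
        LinkState.LOST: "fly pre-filed contingency route to a divert airport and land",
    }[state]

if __name__ == "__main__":
    for gap_s in (1.0, 12.0, 45.0):
        state = assess_link(gap_s)
        print(f"{gap_s:5.1f} s since last C2 message -> {state.name}: {contingency_action(state)}")
```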
      Air Traffic Control may help keep the uncrewed aircraft’s path clear from some traffic during takeoff and landing, while onboard automation technologies would need to avoid all other traffic, fly the aircraft along a known path, and check to ensure the runway is clear to land.  
      A significant related challenge is that pilots are typically responsible for looking out the window for nearby aircraft and remaining well clear of them. Since the remote pilot is not in the aircraft, they will need an electronic detect and avoid system. 
      Detect and avoid systems rely on information, sensors, and algorithms to help the remotely piloted aircraft remain clear of other aircraft. Some detect and avoid configurations are expected to use ground surveillance systems for detecting nearby air traffic at lower altitudes.
      These systems could improve overall situational awareness of traffic near the airport by providing a more comprehensive picture of live traffic. 
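At the core of many detect-and-avoid algorithms is a closest-point-of-approach (CPA) calculation on surveillance tracks. The sketch below shows a minimal two-dimensional, constant-velocity version; the alerting horizon and "well clear" distance are illustrative assumptions, not values from any standard.

```python
# Simplified 2-D closest-point-of-approach (CPA) check, the geometric core of
# many detect-and-avoid algorithms. The constant-velocity model and the
# alerting thresholds are illustrative assumptions only.
import math

def cpa(own_pos, own_vel, intruder_pos, intruder_vel):
    """Return (time_to_cpa_s, separation_at_cpa_m) for straight-line tracks."""
    rx = intruder_pos[0] - own_pos[0]
    ry = intruder_pos[1] - own_pos[1]
    vx = intruder_vel[0] - own_vel[0]
    vy = intruder_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    t_cpa = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    separation = math.hypot(rx + vx * t_cpa, ry + vy * t_cpa)
    return t_cpa, separation

def needs_alert(t_cpa, separation, horizon_s=120.0, well_clear_m=2000.0):
    return t_cpa <= horizon_s and separation <= well_clear_m

if __name__ == "__main__":
    # Own aircraft heading east at 60 m/s; intruder 5 km to the northeast, converging.
    t, d = cpa((0.0, 0.0), (60.0, 0.0), (4000.0, 3000.0), (-40.0, -30.0))
    print(f"CPA in {t:.0f} s at {d:.0f} m separation -> alert: {needs_alert(t, d)}")
```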
      Additionally, automation and decision support tools could help remote pilots with other responsibilities that typically require pilot decisions from the cockpit, like integrating with traffic at non-towered airports.  
      Implementing Solutions 
      To address these challenges and others, NASA researchers are working with industry partners to research and test technologies, concepts, and airspace procedures that will enable remotely piloted operations.  
      For example, industry is developing automated taxi, takeoff, and landing capabilities to help integrate remotely piloted aircraft operating at busy airports.
      These technologies could enable aircraft to navigate and integrate with other airport traffic autonomously, following standard routes and air traffic control commands for safe sequencing and spacing between other aircraft. 
      Automated hazard detection would enable the aircraft to identify potential conflicts or hazards and take corrective actions without input from a remote pilot. This would ensure the aircraft safely navigates the airport environment even if the remote pilot is supervising multiple aircraft or their response is delayed. 
      NASA researchers are beginning to test emerging technologies for remotely piloted aircraft operations with commercial partners. The goal is to help mature technical standards and assist in the development of certification requirements and procedures required to integrate remotely piloted operations into the airspace.
      NASA aims to bridge technical and regulatory gaps through these industry partnerships involving research, testing, and development. Ultimately, NASA hopes to enable pilots to remotely fly multiple large aircraft to airports across the country at once, more efficiently transporting people and goods.
      This could enable carriers to meet rising air travel and transport demands in a safe, affordable, scalable way and expand access to new communities. 
      PAAV is a subproject under NASA’s Air Traffic Management Exploration project within the agency’s Aeronautics Research Mission Directorate.
    • By NASA
      NASA’s Ames Research Center in Silicon Valley invites media to learn more about Distributed Spacecraft Autonomy (DSA), a technology that allows individual spacecraft to make independent decisions while collaborating with each other to achieve common goals – without human input. The DSA team achieved multiple firsts during tests of such swarm technology as part of the agency’s project. 
      DSA develops software tools critical for future autonomous, distributed, and intelligent spacecraft that will need to interact with each other to achieve complex mission objectives. Testing onboard the agency’s Starling mission resulted in accomplishments including the first fully distributed autonomous operation of multiple spacecraft, the first use of space-to-space communications to autonomously share status information between multiple spacecraft, and more. 
      DSA’s accomplishments mark a significant milestone in advancing autonomous systems that will make new types of science and exploration possible. 
      Caleb Adams, DSA project manager, is available for interview on Wednesday, Feb. 5 and Thursday, Feb. 6. To request an interview, media can contact the Ames Office of Communications by email at arc-dl-newsroom@nasa.gov or by phone at 650-604-4789.  
      Learn more about NASA Ames’ world-class research and development in aeronautics, science, and exploration technology at: 
      https://www.nasa.gov/ames
      -end- 
      Tiffany Blake
      Ames Research Center, Silicon Valley 
      650-604-4789 
      tiffany.n.blake@nasa.gov  

      To receive local NASA Ames news, email local-reporters-request@lists.arc.nasa.gov with “subscribe” in the subject line. To unsubscribe, email the same address with “unsubscribe” in the subject line.  

    • By NASA
      This article is from the 2024 Technical Update

      Autonomous flight termination systems (AFTS) are progressively being employed onboard launch vehicles to replace the ground personnel and infrastructure needed to terminate flight or destruct the vehicle should an anomaly occur. This automation uses onboard real-time data and encoded logic to determine whether the flight should be self-terminated. For uncrewed launch vehicles, flight termination systems are required to protect the public and are governed by the United States Space Force (USSF). For crewed missions, NASA must augment range AFTS requirements for crew safety and certify each flight according to human-rating standards, which adds unique requirements for reusing software originally intended for uncrewed missions. This bulletin summarizes new information relating to AFTS to raise awareness of key distinctions, summarize considerations, and outline best practices for incorporating AFTS into human-rated systems.
      Key Distinctions – Crewed v. Uncrewed
      There are inherent behavioral differences between uncrewed and crewed AFTS related to design philosophy and fault tolerance. Uncrewed AFTS generally favor fault tolerance against failure-to-destruct over failing silent in the presence of faults. This tenet permeates the design, even down to the software unit level. Uncrewed AFTS become zero-fault-to-destruct tolerant to many unrecoverable AFTS errors, whereas general single-fault tolerance against vehicle destruct is required for crewed missions. Additionally, unique needs to delay destruction for crew escape, provide abort options and special rules, and assess human-in-the-loop insight, command, and/or override throughout a launch sequence must be considered; these needs introduce additional requirements and integration complexities.
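The toy sketch below caricatures that design-philosophy difference: an uncrewed-style policy that terminates flight on a single unrecoverable error versus a crewed-style policy that requires an independent confirming condition and preserves a crew-escape delay. The rule structure, names, and delay value are invented for illustration and are not drawn from the CASS or any actual flight software.

```python
# Caricature of the crewed vs. uncrewed AFTS fault-response difference.
# Rule names, structure, and the escape-delay value are illustrative
# assumptions, not CASS or range-safety logic.
from dataclasses import dataclass

@dataclass
class FaultStatus:
    unrecoverable_software_error: bool  # e.g., internal AFTS fault
    independent_rule_violation: bool    # e.g., flight-rule violation from a second source

def uncrewed_response(status: FaultStatus) -> str:
    # Uncrewed bias: favor terminating flight over failing silent, so a single
    # unrecoverable error is enough to destruct (zero-fault-to-destruct tolerant).
    if status.unrecoverable_software_error or status.independent_rule_violation:
        return "DESTRUCT"
    return "CONTINUE"

def crewed_response(status: FaultStatus, escape_delay_s: float = 3.0) -> str:
    # Crewed bias: single-fault tolerance against inadvertent destruct, plus a
    # delay so the crew-escape system can act before termination.
    if status.unrecoverable_software_error and status.independent_rule_violation:
        return f"ABORT/ESCAPE, then DESTRUCT after {escape_delay_s:.0f} s"
    if status.unrecoverable_software_error or status.independent_rule_violation:
        return "FAIL SILENT: alert crew and ground, continue monitoring"
    return "CONTINUE"

if __name__ == "__main__":
    single_fault = FaultStatus(unrecoverable_software_error=True,
                               independent_rule_violation=False)
    print("uncrewed:", uncrewed_response(single_fault))
    print("crewed:  ", crewed_response(single_fault))
```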

      AFTS Software Architecture Components and Best-Practice Use Guidelines
      A detailed study of the sole AFTS currently approved by the USSF, and utilized or planned for several launch vehicles, was conducted to understand its characteristics and any unique risks and mitigation techniques for effective human-rating reuse. While alternate software systems may be designed in the future, this summary focuses on an architecture employing the Core Autonomous Safety Software (CASS); considerations herein are intended to extrapolate to future systems. The AFTS software architecture consists of the CASS, a "Wrapper", and a Mission Data Load (MDL), each with its own key characteristics and use guidelines. A more comprehensive description of each component and recommendations for developmental use are found in Ref. 1.
      Best Practices Certifying AFTS Software
      Non-exhaustive guidelines for achieving human-rating certification of an AFTS are provided in the referenced technical update (Ref. 1).

      References
      1. NASA/TP-20240009981, Best Practices and Considerations for Using Autonomous Flight Termination Software in Crewed Launch Vehicles, https://ntrs.nasa.gov/citations/20240009981
      2. "Launch Safety," 14 C.F.R. § 417 (2024).
      3. NPR 8705.2C, Human-Rating Requirements for Space Systems, Jul 2017, nodis3.gsfc.nasa.gov/
      4. NPR 7150.2D, NASA Software Engineering Requirements, Mar 2022, nodis3.gsfc.nasa.gov/
      5. RCC 319-19, Flight Termination Systems Commonality Standard, White Sands, NM, June 2019.
      6. "Considerations for Software Fault Prevention and Tolerance," NESC Technical Bulletin No. 23-06, https://ntrs.nasa.gov/citations/20230013383
      7. "Safety Considerations when Repurposing Commercially Available Flight Termination Systems from Uncrewed to Crewed Launch Vehicles," NESC Technical Bulletin No. 23-02, https://ntrs.nasa.gov/citations/20230001890
    • By NASA
      9 min read
      Towards Autonomous Surface Missions on Ocean Worlds
      Artist's concept image of a spacecraft lander with a robot arm on the surface of Europa. Credits: NASA/JPL-Caltech
      Through advanced autonomy testbed programs, NASA is setting the groundwork for one of its top priorities: the search for signs of life and potentially habitable bodies in our solar system and beyond. The prime destinations for such exploration are bodies containing liquid water, such as Jupiter's moon Europa and Saturn's moon Enceladus. Initial missions to the surfaces of these "ocean worlds" will be robotic and will require a high degree of onboard autonomy due to long Earth-communication lags and blackouts, harsh surface environments, and limited battery life.
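For a sense of scale, the short sketch below converts approximate Earth-Jupiter and Earth-Saturn distance ranges into one-way light-time; the distances are generic orbital-geometry values, not mission-specific figures.

```python
# One-way light-time from Earth, illustrating why ocean-world landers need
# onboard autonomy. Distance ranges are approximate orbital geometry, not
# mission-specific figures.
AU_M = 1.496e11   # meters per astronomical unit
C_M_S = 2.998e8   # speed of light in m/s

def one_way_delay_min(distance_au: float) -> float:
    return distance_au * AU_M / C_M_S / 60.0

if __name__ == "__main__":
    for body, (near_au, far_au) in {
        "Jupiter (Europa)": (4.2, 6.5),
        "Saturn (Enceladus)": (8.0, 11.1),
    }.items():
        near, far = one_way_delay_min(near_au), one_way_delay_min(far_au)
        print(f"{body}: one-way delay {near:.0f}-{far:.0f} min, "
              f"round trip {2*near:.0f}-{2*far:.0f} min")
```

A command-response cycle of well over an hour rules out driving a lander interactively from Earth, hence the emphasis on onboard autonomy.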
      Technologies that can enable spacecraft autonomy generally fall under the umbrella of Artificial Intelligence (AI) and have been evolving rapidly in recent years. Many such technologies, including machine learning, causal reasoning, and generative AI, are being advanced at non-NASA institutions.  
      NASA started a program in 2018 to take advantage of these advancements to enable future icy world missions. It sponsored the development of the physical Ocean Worlds Lander Autonomy Testbed (OWLAT) at NASA’s Jet Propulsion Laboratory in Southern California and the virtual Ocean Worlds Autonomy Testbed for Exploration, Research, and Simulation (OceanWATERS) at NASA’s Ames Research Center in Silicon Valley, California.
      NASA solicited applications for its Autonomous Robotics Research for Ocean Worlds (ARROW) program in 2020, and for the Concepts for Ocean worlds Life Detection Technology (COLDTech) program in 2021. Six research teams, based at universities and companies throughout the United States, were chosen to develop and demonstrate autonomy solutions on OWLAT and OceanWATERS. These two- to three-year projects are now complete and have addressed a wide variety of autonomy challenges faced by potential ocean world surface missions.
      OWLAT
      OWLAT is designed to simulate a spacecraft lander with a robotic arm for science operations on an ocean world body. The overall OWLAT architecture including hardware and software components is shown in Figure 1. Each of the OWLAT components is detailed below.
      Figure 1. The software and hardware components of the Ocean Worlds Lander Autonomy Testbed and the relationships between them. NASA/JPL-Caltech
      The hardware version of OWLAT (shown in Figure 2) is designed to physically simulate the motions of a lander as operations are performed in a low-gravity environment using a six-degrees-of-freedom (DOF) Stewart platform. A seven-DOF robot arm is mounted on the lander to perform sampling and other science operations that interact with the environment. A camera mounted on a pan-and-tilt unit is used for perception. The testbed also has a suite of onboard force/torque sensors to measure motion and reaction forces as the lander interacts with the environment. Control algorithms implemented on the testbed enable it to exhibit dynamic behavior as if it were a lightweight arm on a lander operating in different gravitational environments.
      Figure 2. The Ocean Worlds Lander Autonomy Testbed. A scoop is mounted to the end of the testbed robot arm. NASA/JPL-Caltech
      The team also developed a set of tools and instruments (shown in Figure 3) to enable the performance of science operations using the testbed. These various tools can be mounted to the end of the robot arm via a quick-connect-disconnect mechanism. The testbed workspace where sampling and other science operations are conducted incorporates an environment designed to represent the scene and surface simulant material potentially found on ocean worlds.
      Figure 3. Tools and instruments designed to be used with the testbed. NASA/JPL-Caltech
      The software-only version of OWLAT models, visualizes, and provides telemetry from a high-fidelity dynamics simulator based on the Dynamics And Real-Time Simulation (DARTS) physics engine developed at JPL. It replicates the behavior of the physical testbed in response to commands and provides telemetry to the autonomy software. A visualization from the simulator is shown in Figure 4 (video).
      The autonomy software module shown at the top of Figure 1 interacts with the testbed through a Robot Operating System (ROS)-based interface to issue commands and receive telemetry. This interface is defined to be identical to the OceanWATERS interface. Commands received from the autonomy module are processed through the dispatcher/scheduler/controller module (blue box in Figure 1) and used to command either the physical hardware version of the testbed or the dynamics simulation (software version) of the testbed. Sensor information from the operation of either the software-only or physical testbed is reported back to the autonomy module using a defined telemetry interface. A safety and performance monitoring and evaluation software module (red box in Figure 1) ensures that the testbed is kept within its operating bounds. Any commands causing out-of-bounds behavior or anomalies are reported as faults to the autonomy software module.
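The sketch below mirrors that command/telemetry flow in schematic form: an autonomy command passes through a dispatcher to a backend (hardware or simulator), and a safety monitor flags out-of-bounds telemetry as faults. The class names, torque bound, and canned telemetry are invented for illustration; the real interface is ROS-based and is not reproduced here.

```python
# Schematic sketch of the OWLAT command/telemetry flow described above.
# All names and values are invented for illustration; the real interface is
# ROS-based and shared with OceanWATERS.
from dataclasses import dataclass, field

@dataclass
class Telemetry:
    arm_joint_torque_nm: float
    faults: list = field(default_factory=list)

class SafetyMonitor:
    """Stands in for the safety/performance monitor (red box in Figure 1)."""
    MAX_TORQUE_NM = 25.0  # assumed bound, not a real OWLAT limit

    def check(self, telemetry: Telemetry) -> Telemetry:
        if abs(telemetry.arm_joint_torque_nm) > self.MAX_TORQUE_NM:
            telemetry.faults.append("arm torque out of bounds")
        return telemetry

class Dispatcher:
    """Stands in for the dispatcher/scheduler/controller (blue box in Figure 1)."""
    def __init__(self, backend, monitor: SafetyMonitor):
        self.backend = backend   # physical testbed or DARTS-based simulator
        self.monitor = monitor

    def execute(self, command: str) -> Telemetry:
        raw = self.backend(command)
        return self.monitor.check(raw)

def fake_backend(command: str) -> Telemetry:
    # Stand-in for the hardware or simulator; returns canned telemetry.
    return Telemetry(arm_joint_torque_nm=30.0 if command == "scoop" else 5.0)

if __name__ == "__main__":
    dispatcher = Dispatcher(fake_backend, SafetyMonitor())
    for cmd in ("pan_camera", "scoop"):
        telemetry = dispatcher.execute(cmd)
        print(f"{cmd}: faults = {telemetry.faults or 'none'}")
```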
      Figure 5. Erica Tevere (at the operator's station) and Ashish Goel (at the robot arm) setting up the OWLAT testbed for use. NASA/JPL-Caltech
      OceanWATERS
      At the time of the OceanWATERS project’s inception, Jupiter’s moon Europa was planetary science’s first choice in searching for life. Based on ROS, OceanWATERS is a software tool that provides a visual and physical simulation of a robotic lander on the surface of Europa (see Figure 6). OceanWATERS realistically simulates Europa’s celestial sphere and sunlight, both direct and indirect. Because we don’t yet have detailed information about the surface of Europa, users can select from terrain models with a variety of surface and material properties. One of these models is a digital replication of a portion of the Atacama Desert in Chile, an area considered a potential Earth-analog for some extraterrestrial surfaces.
      Figure 6. Screenshot of OceanWATERS. NASA/JPL-Caltech
      JPL's Europa Lander Study of 2016, a guiding document for the development of OceanWATERS, describes a planetary lander whose purpose is collecting subsurface regolith/ice samples, analyzing them with onboard science instruments, and transmitting results of the analysis to Earth.
      The simulated lander in OceanWATERS has an antenna mast that pans and tilts; attached to it are stereo cameras and spotlights. It has a six-degree-of-freedom arm with two interchangeable end effectors: a grinder designed for digging trenches, and a scoop for collecting ground material. The lander is powered by a simulated non-rechargeable battery pack. Power consumption, the battery's state, and its remaining life are regularly predicted with the Generic Software Architecture for Prognostics (GSAP) tool.
      To simulate degraded or broken subsystems, a variety of faults (e.g., a frozen arm joint or an overheating battery) can be "injected" into the simulation by the user; some faults can also occur "naturally" as the simulation progresses, e.g., if components become over-stressed.
      All the operations and telemetry (data measurements) of the lander are accessible via an interface that external autonomy software modules can use to command the lander and understand its state. (OceanWATERS and OWLAT share a unified autonomy interface based on ROS.) The OceanWATERS package includes one basic autonomy module, a facility for executing plans (autonomy specifications) written in the PLan EXecution Interchange Language, or PLEXIL. PLEXIL and GSAP are both open-source software packages developed at Ames and available on GitHub, as is OceanWATERS.
      Mission operations that can be simulated by OceanWATERS include visually surveying the landing site, poking at the ground to determine its hardness, digging a trench, and scooping ground material that can be discarded or deposited in a sample collection bin. Communication with Earth, sample analysis, and other operations of a real lander mission are not presently modeled in OceanWATERS except for their estimated power consumption. Figure 7 is a video of OceanWATERS running a sample mission scenario using the Atacama-based terrain model.
      Figure 7. Screenshot of OceanWATERS lander on a terrain modeled from the Atacama Desert. A scoop operation has just been completed. NASA/JPL-Caltech
      Because of Earth's distance from the ocean worlds and the resulting communication lag, a planetary lander should be programmed with at least enough information to begin its mission. But there will be situation-specific challenges that will require onboard intelligence, such as deciding exactly where and how to collect samples, dealing with unexpected issues and hardware faults, and prioritizing operations based on remaining power.
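In the spirit of the PLEXIL plan execution and GSAP power prognostics described above, the toy executor below steps through a surface plan and stops when the predicted energy budget is insufficient. The operation names, energy costs, and margin are invented placeholders, not OceanWATERS or GSAP values.

```python
# Toy surface-mission executor in the spirit of the PLEXIL + GSAP workflow
# described above: run a plan, but stop when the predicted energy budget is
# insufficient. Operation names and costs are invented placeholders.

PLAN = [
    ("survey_landing_site", 5.0),      # assumed energy cost in watt-hours
    ("poke_ground_for_hardness", 2.0),
    ("dig_trench", 20.0),
    ("scoop_sample", 8.0),
    ("deposit_in_collection_bin", 1.0),
]

def predicted_remaining_wh(battery_wh: float, cost_wh: float, margin: float = 1.2) -> float:
    # Crude prognostic: apply a fixed margin to the nominal cost estimate.
    return battery_wh - margin * cost_wh

def execute_plan(battery_wh: float) -> None:
    for operation, cost_wh in PLAN:
        if predicted_remaining_wh(battery_wh, cost_wh) <= 0.0:
            print(f"stopping plan: not enough predicted energy for {operation}")
            return
        battery_wh -= cost_wh
        print(f"executed {operation}, {battery_wh:.1f} Wh remaining")

if __name__ == "__main__":
    execute_plan(battery_wh=30.0)
```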
      Results
      All six of the research teams funded by the ARROW and COLDTech programs used OceanWATERS to develop ocean world lander autonomy technology, and three of those teams also used OWLAT. The products of these efforts were published in technical papers and resulted in the development of software that may be used or adapted for actual ocean world lander missions in the future. The following tables summarize the ARROW and COLDTech efforts.
      ARROW Projects

      | Principal Investigator (PI) | PI Institution | Project | Testbed Used | Purpose of Project |
      | --- | --- | --- | --- | --- |
      | Jonathan Bohren | Honeybee Robotics | Stochastic PLEXIL (SPLEXIL) | OceanWATERS | Extended PLEXIL with stochastic decision-making capabilities by employing reinforcement learning techniques. |
      | Pooyan Jamshidi | University of South Carolina | Resource Adaptive Software Purpose-Built for Extraordinary Robotic Research Yields (RASPBERRY SI) | OceanWATERS & OWLAT | Developed software algorithms and tools for fault root cause identification, causal debugging, causal optimization, and causal-induced verification. |

      COLDTech Projects

      | Principal Investigator (PI) | PI Institution | Project | Testbed Used | Purpose of Project |
      | --- | --- | --- | --- | --- |
      | Eric Dixon | Lockheed Martin | Causal And Reinforcement Learning (CARL) for COLDTech | OceanWATERS | Integrated a model of JPL's mission-ready Cold Operable Lunar Deployable Arm (COLDarm) into OceanWATERS and applied image analysis, causal reasoning, and machine learning models to identify and mitigate the root causes of faults, such as ice buildup on the arm's end effector. |
      | Jay McMahon | University of Colorado | Robust Exploration with Autonomous Science On-board, Ranked Evaluation of Contingent Opportunities for Uninterrupted Remote Science Exploration (REASON-RECOURSE) | OceanWATERS | Applied automated planning with formal methods to maximize science return of the lander while minimizing communication with the ground team on Earth. |
      | Melkior Ornik | University of Illinois, Urbana-Champaign | aDaptive, ResIlient Learning-enabLed oceAn World AutonomY (DRILLAWAY) | OceanWATERS & OWLAT | Developed autonomous adaptation to novel terrains and selection of scooping actions based on available image data and limited experience, by transferring the scooping procedure learned on a low-fidelity testbed to the high-fidelity OWLAT testbed. |
      | Joel Burdick | Caltech | Robust, Explainable Autonomy for Scientific Icy Moon Operations (REASIMO) | OceanWATERS & OWLAT | Developed autonomous (1) detection and identification of off-nominal conditions and procedures for recovery from those conditions, and (2) sample site selection. |

      Acknowledgements: The portion of the research carried out at the Jet Propulsion Laboratory, California Institute of Technology, was performed under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The portion of the research carried out by employees of KBR Wyle Services LLC at NASA Ames Research Center was performed under a contract with the National Aeronautics and Space Administration (80ARC020D0010). Both were funded by the Planetary Science Division ARROW and COLDTech programs.
      Project Leads: Hari Nayar (NASA Jet Propulsion Laboratory, California Institute of Technology), K. Michael Dalal (KBR, Inc. at NASA Ames Research Center)
      Sponsoring Organizations: NASA SMD PESTO