NASA’s optical navigation technology could streamline planetary exploration

  • As astronauts and rovers explore uncharted worlds, developing new methods of navigating these celestial bodies is critical, since traditional navigation systems such as GPS are unavailable there.
  • Optical navigation based on data from cameras and other sensors can help spacecraft – and in some cases astronauts themselves – find their way in areas where orientation with the naked eye would be difficult.
  • Three NASA researchers are pushing optical navigation technology forward with breakthroughs in 3D environment modeling, photo-based navigation, and deep learning image analysis.

It’s easy to get lost in a dark, barren landscape like the lunar surface. Since there are few recognizable landmarks to guide you with the naked eye, astronauts and rovers must resort to other means to plot their course.

As NASA pursues its Moon to Mars missions, which include exploring the lunar surface and taking humanity's first steps on the Red Planet, finding new and efficient ways to navigate these uncharted areas is crucial. This is where optical navigation comes in – a technology that helps map new terrain using sensor data.

NASA’s Goddard Space Flight Center in Greenbelt, Maryland, is a leading developer of optical navigation technology. For example, GIANT (Goddard Image Analysis and Navigation Tool) helped the OSIRIS-REx mission safely collect samples from the asteroid Bennu by creating 3D maps of the surface and calculating accurate distances to targets.

Now three research teams at Goddard are pushing optical navigation technology even further.

Chris Gnam, an intern at NASA Goddard, is leading the development of a modeling engine called Vira, which already renders large 3D environments about 100 times faster than GIANT. These digital environments can be used to evaluate potential landing sites, simulate solar radiation, and more.

While consumer-grade graphics engines, such as those used for video game development, can render large environments quickly, most of them cannot provide the detail needed for scientific analysis. For scientists planning a planetary landing, every detail is critical.

“Vira combines the speed and efficiency of consumer graphical modelers with the scientific rigor of GIANT,” said Gnam. “This tool enables scientists to quickly model complex environments such as planetary surfaces.”

The Vira modeling engine is used to support the development of LuNaMaps (Lunar Navigation Maps). The goal of this project is to improve the quality of maps of the south polar region of the Moon, a key exploration objective of NASA’s Artemis missions.

Vira also uses ray tracing to model how light behaves in a simulated environment. While ray tracing is often used in video game development, Vira uses it to model solar radiation pressure, which refers to changes in a spacecraft’s momentum caused by sunlight.
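To make the ray-tracing connection concrete, the sketch below shows the standard flat-plate model of solar radiation pressure that an engine like Vira would evaluate for each sunlit surface element. This is a minimal Python illustration of the physics only, not Vira's actual code; the function name, the flat-plate model, and the reflectivity parameter are assumptions made for the example.

```python
# Illustrative sketch of the solar radiation pressure (SRP) physics a
# ray-tracing engine evaluates per surface element. NOT Vira's code;
# the flat-plate model and all names here are assumptions.
import numpy as np

SOLAR_FLUX_1AU = 1361.0   # W/m^2, solar irradiance at 1 AU
C = 299_792_458.0         # m/s, speed of light

def srp_force(area, normal, sun_dir, dist_au, reflectivity=0.3):
    """Force (N) from sunlight on one flat plate.

    area:         plate area in m^2
    normal:       unit normal vector of the plate
    sun_dir:      unit vector from spacecraft toward the Sun
    dist_au:      spacecraft-Sun distance in astronomical units
    reflectivity: fraction of photons specularly reflected
    """
    pressure = SOLAR_FLUX_1AU / (C * dist_au**2)  # N/m^2
    cos_theta = np.dot(normal, sun_dir)
    if cos_theta <= 0.0:
        # Plate faces away from the Sun; a ray tracer would also
        # discard plates shadowed by other geometry at this step.
        return np.zeros(3)
    # Absorbed photons push along -sun_dir; specularly reflected
    # photons push along -normal (standard flat-plate SRP model).
    return -pressure * area * cos_theta * (
        (1.0 - reflectivity) * sun_dir
        + 2.0 * reflectivity * cos_theta * normal
    )
```

Summing this force over every surface element the ray tracer finds to be sunlit gives the momentum change the article describes.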

Another team at Goddard is developing a tool that will enable navigation using images of the horizon. Andrew Liounis, product design lead for optical navigation, leads the team, working with NASA interns Andrew Tennenbaum and Will Driessen, and Alvin Yew, gas processing lead for NASA’s DAVINCI mission.

An astronaut or rover using this algorithm could take a picture of the horizon, which the program would compare to a map of the area being explored. The algorithm would then output the estimated location where the photo was taken.

From a single photo, the algorithm can locate the camera to within several hundred meters. The team is now working to show that, given two or more images, it can pinpoint a location to within tens of meters.

“We take the data points from the image and compare them to the data points on a map of the area,” Liounis explained. “It’s almost like how GPS uses triangulation, but instead of having multiple observers triangulating an object, there are multiple observations from a single observer, so we figure out where the lines of sight intersect.”
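Liounis's triangulation analogy maps onto a simple least-squares problem: each landmark recognized on the horizon fixes a line of sight, and the observer sits where those lines come closest to intersecting. The Python sketch below illustrates that geometry; it is not the team's algorithm, and the function names and example landmark coordinates are invented for illustration.

```python
# Hedged sketch of the "lines of sight intersect" idea: given map
# landmarks and the directions in which they appear from the camera,
# find the position most consistent with all sightlines. Illustrative
# least-squares solver, not the Goddard team's algorithm.
import numpy as np

def locate_observer(landmarks, directions):
    """Least-squares intersection point of several sightlines.

    landmarks:  (N, 3) known landmark positions from the map
    directions: (N, 3) unit vectors from observer toward each landmark
    Requires at least two non-parallel sightlines.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(landmarks, directions):
        # Projector onto the plane perpendicular to this sightline;
        # it measures the point's offset from the line through p.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Example with three hypothetical landmarks and exact bearings:
true_pos = np.array([100.0, 50.0, 0.0])
marks = np.array([[500., 400., 30.], [-200., 900., 10.], [800., -300., 5.]])
dirs = marks - true_pos
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(locate_observer(marks, dirs))  # recovers ~[100, 50, 0]
```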

This type of technology could be useful for lunar exploration, where GPS signals cannot be relied upon to determine location.

To automate optical navigation and visual perception processes, Goddard intern Timothy Chase is developing a programming tool called the GAVIN (Goddard AI Verification and Integration) Tool Suite.

This tool helps build deep learning models, a type of machine learning algorithm trained to process inputs in a way loosely inspired by the human brain. In addition to developing the tool itself, Chase and his team are using GAVIN to build a deep learning algorithm designed to identify craters in dimly lit regions of bodies like the Moon.
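The article doesn't describe GAVIN's internals, so the following is only a generic sketch of the kind of deep learning model such a tool helps build: a small convolutional network (here in PyTorch) that classifies image patches as crater or non-crater. Every name and layer choice is an assumption made for illustration.

```python
# Generic illustration of a crater-patch classifier; GAVIN's actual
# models are not public, and this architecture is assumed.
import torch
import torch.nn as nn

class CraterPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # scores: [no crater, crater]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CraterPatchClassifier()
patch = torch.randn(1, 1, 64, 64)  # one simulated low-light patch
logits = model(patch)              # higher second score = crater
```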

“As we develop GAVIN, we want to test it,” Chase explained. “This model, which detects craters on dimly lit bodies, will not only help us improve GAVIN, but it will also prove useful for missions like Artemis, which will mark the first time astronauts explore the south polar region of the Moon – a dark area with large craters.”

As NASA continues to explore previously uncharted areas of our solar system, technologies like these could help make planetary exploration at least a little easier. Whether by building detailed 3D maps of new worlds, navigating from photographs, or training deep learning algorithms, the work of these teams could one day make finding your way on another world as routine as it is on Earth.

By Matthew Kaufman
NASA’s Goddard Space Flight Center, Greenbelt, Maryland.
