3D Lidar for Autonomous Landing Site Selection (GSC-TOPS-353)

Aerospace
Ocellus 3D lidar, a compact altimetry and hazard avoidance system
Overview
Next-generation autonomous planetary exploration missions require advanced sensing capabilities for selecting safe landing areas for their vehicles. Current tools cannot enable vehicles operating beyond the range of terrestrial control to perform landing operations autonomously and safely. The Ocellus 3D lidar, developed by the NASA Goddard Space Flight Center, is a lightweight, small-footprint 3D lidar system for planetary and lunar exploration. The new 3D lidar can perform both altimetry (or range-finding) measurements from high altitudes and, at lower altitudes, terrain mapping and imaging. These measurements provide the data that autonomous systems need to select safe landing areas for planetary exploration vehicles. Developed to aid in the safe landing and navigation of the rotorcraft for the Dragonfly mission to explore Titan, the Ocellus 3D lidar may be used for a wide variety of altimetry and terrain mapping purposes, both in space and terrestrially.

The Technology
Aerial planetary exploration spacecraft require lightweight, compact, and low-power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria and can withstand harsh planetary environments. Further, the new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar), as shown in the figure below. The Ocellus 3D lidar rapidly scans a near-infrared laser across a planetary surface, receives the reflected signal, and translates it into a 3D point cloud. Using a laser source, fast-scanning MEMS (micro-electromechanical system)-based mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At altitudes of ~2 km, Ocellus acts as an altimeter; at altitudes below 200 m, it produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevation data are used by the spacecraft to assess safe landing sites. The Ocellus 3D lidar is applicable to planetary and lunar exploration by unmanned or crewed aerial vehicles and may be adapted to assist in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
Images showing the design and development of the Ocellus, based on the previous Kodiak 3D lidar system (MEB = main electronics box, FEB = front-end box). Source: https://ntrs.nasa.gov/api/citations/20230000798/downloads/UTA%20Feb%202023%20Troupaki%20STRIVES.pdf
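The onboard processing chain (scan angles plus range returns, converted to a point cloud, then binned into an elevation/hazard map) can be sketched in simplified form. This is an illustrative sketch, not the Ocellus flight software; the geometry convention, function names, and the 5 cm roughness tolerance are all assumptions:

```python
import numpy as np

def scan_to_points(ranges, az, el):
    """Convert range returns and MEMS mirror scan angles (radians) into a
    Cartesian point cloud, assuming a nadir-pointing sensor with z down-range."""
    x = ranges * np.sin(az) * np.cos(el)
    y = ranges * np.sin(el)
    z = ranges * np.cos(az) * np.cos(el)
    return np.column_stack([x, y, z])

def hazard_cells(points, cell=0.25, max_spread=0.05):
    """Bin points into ground cells and flag any cell whose height spread
    (a crude roughness proxy, in meters) exceeds the lander's tolerance."""
    keys = map(tuple, np.floor(points[:, :2] / cell).astype(int))
    cells = {}
    for k, z in zip(keys, points[:, 2]):
        cells.setdefault(k, []).append(z)
    return {k: (max(zs) - min(zs) > max_spread) for k, zs in cells.items()}
```

A landing-site selector would then prefer regions whose cells are all unflagged; real systems additionally check slope and sensor noise, which this sketch omits.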
Benefits
  • Dual-mode operation: high-altitude (~2 km) altimetry and lower-altitude (20 to 200 m) imaging and hazard detection.
  • Space-qualified components: the MEMS-based mirror assembly is space qualified.
  • Meets low SWaP requirements: the 3D lidar system will weigh 6 kg and require 70 W of power.
  • Robust against temperature variation: the tool can survive the significant temperature swings encountered during planetary exploration missions.
  • High radiation tolerance: Ocellus is resistant to radiation damage.

Applications
  • Aerospace: enhanced navigation and imaging for planetary and lunar exploration as well as for in-space servicing, assembly, and manufacturing.
  • Defense: improved autonomous aerial vehicular sensing and navigation.
  • Terrain mapping: the 3D lidar may be used in space and terrestrially for mapping terrain features at altitudes below 2 km.
Technology Details

Aerospace
GSC-TOPS-353
GSC-19042-1
Similar Results
3D Lidar for Improved Rover Traversal and Imagery
The SQRLi system comprises three major components: the laser assembly, the mirror assembly, and the electronics and data processing equipment (electronics assembly), as shown in the figure below. The three main systems work together to send and receive the lidar signal and then translate it into a 3D image for navigation and imaging purposes. The rover sensing instrument makes use of a unique fiber-optic laser assembly with high, adjustable output that increases the dynamic range (i.e., contrast) of the lidar system. The commercially available mirror setup used in the SQRLi is small, reliable, and has a wide aperture that improves the field-of-view of the lidar while maintaining a small instrument footprint. Lastly, the data processing is done by an in-house designed processor capable of translating the light signal into a high-resolution (sub-millimeter) 3D map. These components enable successful hazard detection and navigation in visibility-impaired environments. The SQRLi is applicable to planetary and lunar exploration by unmanned or crewed vehicles and may be adapted for in-space servicing, assembly, and manufacturing purposes. Beyond NASA missions, the new 3D lidar may be used for vehicular navigation in the automotive, defense, or commercial space sectors. The SQRLi is available for patent licensing.
Super Resolution 3D Flash LIDAR
This suite of technologies includes a method, algorithms, and computer processing techniques that provide image photometric correction and resolution enhancement at video rates (30 frames per second). This 3D (2D spatial and range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance the spatial resolution and the range and photometric accuracies. In other words, these technologies allow an elevation (3D) map of a targeted area (e.g., terrain) to be generated with much enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
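The frame-blending idea can be illustrated with a generic shift-and-add scheme, the classical multi-frame super-resolution baseline; this is a hedged sketch, not NASA's specific algorithm. It assumes each low-resolution frame is offset by a known sub-pixel shift, expressed in units of the high-resolution grid:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Blend low-res frames, each offset by a known sub-pixel shift
    (dy, dx) measured on the high-res grid, into one upsampled image."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Scatter this frame's samples onto their high-res positions.
        acc[dy::factor, dx::factor] += frame
        cnt[dy::factor, dx::factor] += 1
    # Average where samples landed; unobserved cells stay zero.
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```

With four quarter-pixel-shifted frames and factor=2 the high-resolution grid is fully populated, and additional frames average down noise, consistent with the text's observation that enhancement improves with the number of acquired frames.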
Non-Scanning 3D Imager
NASA Goddard Space Flight Center has developed a non-scanning, 3D imaging laser system that uses a simple lens system to simultaneously generate a one-dimensional or two-dimensional array of optical (light) spots to illuminate an object, surface, or image and generate a topographic profile. The system includes a microlens array configured in combination with a spherical lens to generate a uniform array for a two-dimensional detector, an optical receiver, and a pulsed laser as the transmitter light source. The pulsed laser light travels from the source to the object and back. A fraction of the light is imaged using the optical detector, and a threshold detector is used to determine the arrival time of the pulse at the detector (with picosecond to nanosecond precision). Distance information can be determined for each pixel in the array, which can then be displayed to form a three-dimensional image. Real-time three-dimensional images are produced with the system at television frame rates (30 frames per second) or higher. Alternate embodiments of this innovation include the use of a light-emitting diode in place of a pulsed laser, and/or a macrolens array in place of a microlens array.
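The per-pixel range computation is pulse time-of-flight: threshold the received waveform, record the arrival time, and convert the round-trip time to distance. A minimal sketch, assuming illustrative sampling-rate and threshold values (not the instrument's actual electronics):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def pulse_arrival_s(waveform, dt_s, threshold):
    """Threshold detector: time of the first sample exceeding the
    threshold, or None if the pulse is never detected."""
    above = np.flatnonzero(np.asarray(waveform) > threshold)
    return None if above.size == 0 else above[0] * dt_s

def tof_to_range_m(round_trip_s):
    """Round-trip time to one-way distance: d = c * t / 2."""
    return C * round_trip_s / 2.0
```

Each detector pixel yields its own arrival time, so a full frame of per-pixel ranges directly forms the three-dimensional image described above.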
eVTOL UAS with Lunar Lander Trajectory
This NASA-developed eVTOL UAS is a purpose-built, electric, reusable aircraft with rotor/propeller thrust only, designed to fly trajectories with high similarity to those flown by lunar landers. The vehicle has the unique capability to transition into wing-borne flight to simulate the cross-range, horizontal approaches of lunar landers. During the transition to wing-borne flight, the initial transition favors a traditional airplane configuration with the propellers in the front and smaller surfaces in the rear, allowing the vehicle to reach high speeds. However, after achieving wing-borne flight, the vehicle can transition to wing-borne flight in the opposite (canard) direction. During this mode of operation, the vehicle is controllable, and the propellers can be powered or unpowered. This NASA invention can also decelerate rapidly during the descent phase (again to simulate lunar lander trajectories). Such rapid deceleration is required to reduce vehicle velocity so that the propellers can be turned back on without stalling the blades or catching the propeller vortex. The UAS also has the option of using variable-pitch blades, which can contribute to the overall controllability of the aircraft and reduce the likelihood of stalling the blades during the deceleration phase. In addition to testing EDL sensors and precision landing payloads, NASA’s innovative eVTOL UAS could be used in applications where fast, precise, and stealthy delivery of payloads to specific ground locations is required, including military applications. This concept of operations could entail deploying the UAS from a larger aircraft.
FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm
Flashpose is a combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating point cloud rotation and translation. This procedure is repeated a number of times for a single image until a predefined mean-square-error metric is met; at that point the process repeats for a new image. The rotation and translation operations performed on the point cloud represent an estimate of relative attitude and position, otherwise known as pose. In addition to six-degree-of-freedom (6-DOF) pose estimation, Flashpose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information and analyzes characteristics of the histogram to produce the range and bearing estimate. It can be generated quickly and provides valuable information for seeding the Flashpose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
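The ICP loop described above can be sketched with a brute-force nearest-neighbor search and the standard SVD-based (Kabsch) pose solution. This is the textbook Besl-McKay scheme, not the Flashpose flight code, and it omits the range-image filtering and histogram seeding steps:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form SVD solution for the rotation R and translation t
    that best align matched point sets: dst ≈ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(cloud, model, iters=50, tol=1e-8):
    """Match each cloud point to its nearest model point, solve for the
    incremental pose, and repeat until the MSE metric converges."""
    src = cloud.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev = np.inf
    for _ in range(iters):
        d = ((src[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        mse = d.min(1).mean()
        if abs(prev - mse) < tol:
            break
        prev = mse
        R, t = best_fit_transform(src, model[d.argmin(1)])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total  # estimated pose: model ≈ R @ cloud + t
```

A production system would replace the O(n²) distance matrix with a k-d tree and seed the initial pose (as Flashpose does with its range histogram) so the nearest-neighbor matches start out mostly correct.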