Super Resolution 3D Flash LIDAR

Real-time algorithm producing 3D image frames of 1 million pixels or more
Overview
NASA Langley Research Center has developed 3D imaging technologies (Flash LIDAR) for real-time terrain mapping and synthetic vision-based navigation. To take advantage of the information inherent in a sequence of 3D images acquired at video rates, NASA Langley has also developed an embedded image processing algorithm that can simultaneously correct, enhance, and derive relative motion by processing this image sequence into a high-resolution 3D synthetic image. Traditional scanning LIDAR techniques generate an image frame by raster scanning the scene one laser pulse (one pixel) at a time, whereas Flash LIDAR acquires an image much like an ordinary camera, generating an entire frame with a single laser pulse. The benefits of the Flash LIDAR technique and the corresponding image-to-image processing enable autonomous vision-based guidance and control for robotic systems. The current algorithm offers up to eight times image resolution enhancement as well as a 6-degree-of-freedom state vector of motion in the image frame.

The Technology
This suite of technologies includes a method, algorithms, and computer processing techniques that provide image photometric correction and resolution enhancement at video rates (30 frames per second). This 3D (2D spatial and range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance the spatial resolution and the range and photometric accuracies. In other words, the technologies allow an elevation (3D) map of a targeted area (e.g., terrain) to be generated at greatly enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
NASA robotic vehicle prototype: original (left) and enhanced-resolution flash LIDAR image (right)
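The frame-blending step can be pictured as a shift-and-add accumulation onto a finer grid. The following Python sketch is illustrative only and is not the NASA algorithm: it assumes the sub-pixel offset of each frame relative to the first is already known (e.g., from the derived motion estimate), and the function and parameter names are hypothetical.

import numpy as np

def super_resolve(frames, offsets, scale=4):
    """frames: list of (H, W) range images; offsets: (dy, dx) sub-pixel shift of
    each frame relative to the first; scale: resolution enhancement factor."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))   # accumulated range values
    cnt = np.zeros_like(acc)                 # number of samples per high-res cell
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # Map each low-resolution sample to its location on the high-res grid.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    # Average overlapping samples; cells never observed stay zero (a real system
    # would interpolate or wait for more frames).
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Example: eight simulated 128x128 range frames with small known offsets.
rng = np.random.default_rng(0)
frames = [10.0 + 0.01 * rng.standard_normal((128, 128)) for _ in range(8)]
offsets = [(0.125 * i, 0.25 * i) for i in range(8)]
hi_res = super_resolve(frames, offsets, scale=4)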
Benefits
  • Improved spatial resolution of 3D flash LIDAR video images by up to a factor of eight
  • Provides platform relative position and attitude angles
  • Video-rate processing speeds (30 frames per second) and high-speed data rates

Applications
  • Autonomous rover and robot guidance and control
  • On-orbit inspection and servicing
  • Topographical/terrain mapping
  • Automotive collision avoidance, adaptive cruise control, situational awareness
  • Already licensed exclusively for space, air, land and sub-aquatic vehicle navigation.
Technology Details

Category: sensors
Reference Number: LAR-TOPS-168
NASA Case Numbers: LAR-17799-1, LAR-17894-1
U.S. Patents: 8,655,513; 8,494,687; 9,354,880
Similar Results
FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm
FlashPose is a combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS environment in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by iteratively rotating and translating the point cloud. This procedure is repeated for a single image until a predefined mean square error metric is met, at which point the process repeats for a new image. The rotation and translation operations performed on the point cloud represent an estimate of relative attitude and position, otherwise known as pose. In addition to 6-degree-of-freedom (DOF) pose estimation, FlashPose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information and analyzes characteristics of the histogram to produce the range and bearing estimate. It can be generated quickly and provides valuable information for seeding the FlashPose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
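The core ICP loop described above can be illustrated with a short Python sketch. FlashPose itself is implemented in C and VHDL, so this is only a conceptual stand-in; the SciPy k-d tree, helper names, and iteration/tolerance values are assumptions, not details from the source.

import numpy as np
from scipy.spatial import cKDTree

def icp(cloud, model, iters=50, tol=1e-6):
    """cloud: (N, 3) measured points; model: (M, 3) target model points at the
    origin of the Cartesian frame. Returns rotation R, translation t, and MSE."""
    tree = cKDTree(model)
    R, t = np.eye(3), np.zeros(3)
    src, prev_mse, mse = cloud.copy(), np.inf, np.inf
    for _ in range(iters):
        # 1. Correspondences: nearest model point for every cloud point.
        dist, idx = tree.query(src)
        matched = model[idx]
        mse = float(np.mean(dist ** 2))
        if abs(prev_mse - mse) < tol:          # predefined error metric met
            break
        prev_mse = mse
        # 2. Best-fit rigid transform between matched sets (Kabsch / SVD).
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s
        # 3. Apply the increment and accumulate the overall pose estimate.
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, mse                           # relative attitude and position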
3D Lidar for Autonomous Landing Site Selection
Aerial planetary exploration spacecraft require lightweight, compact, and low-power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria and can withstand harsh planetary environments. The new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar), as shown in the figure below. The Ocellus 3D lidar quickly scans a near-infrared laser across a planetary surface, receives the return signal, and translates it into a 3D point cloud. Using a laser source, fast-scanning MEMS (micro-electromechanical system)-based mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At ~2 km altitudes, Ocellus acts as an altimeter; at altitudes below 200 m, the tool produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevations are used by the spacecraft to assess safe landing sites. The Ocellus 3D lidar is applicable to planetary and lunar exploration by unmanned or crewed aerial vehicles and may be adapted for assisting in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
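As a rough illustration of the scan-to-map step only (not the Ocellus flight code; the angle conventions, cell size, and function names below are assumptions), a set of mirror angles and measured ranges can be converted to a point cloud and binned into a coarse elevation map:

import numpy as np

def scan_to_points(az, el, rng):
    """az, el: per-pulse scan azimuth/elevation angles (rad); rng: ranges (m).
    Assumes a downward-looking sensor with +z up in the map frame."""
    x = rng * np.cos(el) * np.cos(az)
    y = rng * np.cos(el) * np.sin(az)
    z = -rng * np.sin(el)
    return np.column_stack([x, y, z])

def elevation_map(points, cell=0.10):
    """Grid the cloud into square cells (m) and keep the highest point per cell."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    grid = np.full((ij[:, 0].max() + 1, ij[:, 1].max() + 1), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid   # downstream logic screens this map for safe landing sites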
Device prototype in use
Optical Head-Mounted Display System for Laser Safety Eyewear
The system combines laser goggles with an optical head-mounted display that shows a real-time video camera image of a laser beam. Users can visualize the laser beam while their eyes remain protected. The system also allows for numerous additional features in the optical head-mounted display, such as digital zoom, overlays of additional information such as power meter data, a Bluetooth wireless interface, and digital overlays of beam location, among others. The system is built on readily available components and can be used with existing laser eyewear. The software converts the color being observed to another color that transmits through the goggles. For example, if a red laser is being used and red-blocking glasses are worn, the software can convert red to blue, which is readily transmitted through the laser eyewear. Similarly, color video can be converted to black-and-white to transmit through the eyewear.
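The color-conversion idea can be sketched in a few lines of Python. This is illustrative only: the function names and channel conventions are assumptions, and the camera capture and head-mounted display output of the actual system are omitted.

import numpy as np

CH = {"red": 0, "green": 1, "blue": 2}

def remap_for_goggles(frame_rgb, blocked="red", transmitted="blue"):
    """frame_rgb: (H, W, 3) uint8 camera frame. Swap the channel the eyewear
    blocks with one it transmits so the beam remains visible through the goggles."""
    out = frame_rgb.copy()
    out[..., CH[transmitted]] = frame_rgb[..., CH[blocked]]
    out[..., CH[blocked]] = frame_rgb[..., CH[transmitted]]
    return out

def to_grayscale(frame_rgb):
    """Alternative mentioned above: black-and-white video transmits through
    colored laser eyewear regardless of the blocked wavelength."""
    weights = np.array([0.299, 0.587, 0.114])    # standard luminance weights
    return (frame_rgb @ weights).astype(np.uint8)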
3D Lidar for Improved Rover Traversal and Imagery
The SQRLi system is made up of three major components: the laser assembly, the mirror assembly, and the electronics and data processing equipment (electronics assembly), as shown in the figure below. The three subsystems work together to send and receive the lidar signal, then translate it into a 3D image for navigation and imaging purposes. The rover sensing instrument makes use of a unique fiber-optic laser assembly with high, adjustable output that increases the dynamic range (i.e., contrast) of the lidar system. The commercially available mirror setup used in the SQRLi is small, reliable, and has a wide aperture that improves the field of view of the lidar while maintaining a small instrument footprint. Lastly, the data processing is done by an in-house-designed processor capable of translating the light signal into a high-resolution (sub-millimeter) 3D map. These components enable the SQRLi to detect hazards and navigate in visibility-impaired environments. The SQRLi is applicable to planetary and lunar exploration by unmanned or crewed vehicles and may be adapted for in-space servicing, assembly, and manufacturing purposes. Beyond NASA missions, the new 3D lidar may be used for vehicular navigation in the automotive, defense, or commercial space sectors. The SQRLi is available for patent licensing.
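The source does not describe how SQRLi flags hazards in its 3D map, so the following is only a hedged sketch of one common approach: screening an elevation map for excessive local slope or roughness, with all thresholds and names assumed.

import numpy as np

def hazard_mask(elev, cell=0.01, max_slope_deg=20.0, max_roughness=0.05):
    """elev: (H, W) elevation map (m) with grid spacing `cell` (m). Returns a
    boolean mask marking cells considered hazardous for traversal."""
    # Local slope angle from the elevation gradients.
    gy, gx = np.gradient(elev, cell)
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Roughness: deviation of each cell from its 3x3 neighborhood mean.
    pad = np.pad(elev, 1, mode="edge")
    local_mean = sum(pad[i:i + elev.shape[0], j:j + elev.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    roughness = np.abs(elev - local_mean)
    return (slope_deg > max_slope_deg) | (roughness > max_roughness)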
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft’s video camera as the aircraft performs its descent. In this system, feature detection techniques such as Hough circles and Harris corner detection are used to detect which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to the known landmarks. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft’s position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft’s flight control system to facilitate aircraft landing.
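A rough Python sketch of this pipeline, using OpenCV stand-ins, is shown below. COPOSIT is not an OpenCV function, so cv2.solvePnP with a planar solver is substituted purely to illustrate the 2D-3D pose step; the landmark database format, matching logic, and thresholds are hypothetical, and the extended Kalman filter stage is omitted.

import cv2
import numpy as np

def estimate_pose(gray, landmark_db, K, dist=None):
    """gray: grayscale camera frame; landmark_db: dict of landmark id ->
    (world_xyz, expected_pixel); K: 3x3 camera intrinsics.
    Returns (rvec, tvec) of the camera, or None if no solution."""
    # 1. Detect candidate landmark features (circular markers assumed here).
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=40, minRadius=5, maxRadius=60)
    if circles is None:
        return None
    detections = circles[0, :, :2]                      # (N, 2) pixel centers
    # 2. Associate detections with stored landmarks (nearest expected pixel).
    world_pts, image_pts = [], []
    for world_xyz, expected_px in landmark_db.values():
        d = np.linalg.norm(detections - np.asarray(expected_px), axis=1)
        if d.min() < 25:                                # gating threshold (assumed)
            world_pts.append(world_xyz)
            image_pts.append(detections[d.argmin()])
    if len(world_pts) < 4:
        return None
    # 3. Camera pose from 2D-3D correspondences (landmarks assumed coplanar).
    ok, rvec, tvec = cv2.solvePnP(np.asarray(world_pts, np.float32),
                                  np.asarray(image_pts, np.float32),
                                  K, dist, flags=cv2.SOLVEPNP_IPPE)
    return (rvec, tvec) if ok else None

# Successive (rvec, tvec) estimates would then be fed to an extended Kalman
# filter to refine position, velocity, and orientation, as described above.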