Super Resolution 3D Flash LIDAR

Super Resolution 3D Flash LIDAR (LAR-TOPS-168)
Real-time algorithm producing 3D image frames of 1 megapixel or greater
Overview
NASA Langley Research Center has developed 3D imaging technologies (Flash LIDAR) for real-time terrain mapping and synthetic vision-based navigation. To take advantage of the information inherent in a sequence of 3D images acquired at video rates, NASA Langley has also developed an embedded image processing algorithm that simultaneously corrects and enhances each frame and derives relative motion by processing the image sequence into a high-resolution 3D synthetic image. Traditional scanning LIDAR techniques generate an image frame by raster scanning a scene one laser pulse per pixel at a time, whereas Flash LIDAR acquires an image much like an ordinary camera, capturing an entire frame with a single laser pulse. The benefits of the Flash LIDAR technique and the corresponding image-to-image processing enable autonomous vision-based guidance and control for robotic systems. The current algorithm offers up to eight times image resolution enhancement as well as a six-degree-of-freedom state vector of motion in the image frame.

The Technology
This suite of technologies includes a method, algorithms, and computer processing techniques that provide image photometric correction and resolution enhancement at video rates (30 frames per second). The 3D (2D spatial plus range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance spatial resolution and range and photometric accuracy. In other words, the technology generates an elevation (3D) map of a targeted area (e.g., terrain) at much enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
Original (left) and resolution-enhanced flash LIDAR image (right) of a NASA robotic vehicle prototype
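To illustrate the idea (a simplified sketch, not NASA's flight algorithm), the Python snippet below fuses a sequence of registered range frames onto a finer grid by shift-and-add. It assumes the sub-pixel inter-frame shifts are already known; in the actual system they would come from the derived 6-DOF motion estimate. The function names and nearest-neighbor binning strategy are illustrative assumptions.

    import numpy as np

    def fuse_frames(frames, shifts, scale=2):
        """Fuse low-resolution range frames onto a finer grid (shift-and-add).

        frames : list of (H, W) float arrays - registered range images
        shifts : list of (dy, dx) sub-pixel offsets of each frame, in
                 low-resolution pixels, relative to the first frame
        scale  : integer resolution-enhancement factor (illustrative)
        """
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))   # accumulated range values
        cnt = np.zeros_like(acc)                 # samples per high-res cell

        ys, xs = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, shifts):
            # Map each low-res sample to its nearest high-res cell.
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hy, hx), frame)
            np.add.at(cnt, (hy, hx), 1.0)

        cnt[cnt == 0] = np.nan                   # leave unobserved cells empty
        return acc / cnt

Because each additional frame deposits samples into previously empty high-resolution cells, the enhancement grows with the number of fused frames, consistent with the behavior described above.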
Benefits
  • Improves the spatial resolution of 3D flash LIDAR video images by up to a factor of eight
  • Provides platform-relative position and attitude angles
  • Processes at video rates (30 frames per second) with high-speed data throughput

Applications
  • Autonomous rover and robot guidance and control
  • On-orbit inspection and servicing
  • Topographical/terrain mapping
  • Automotive collision avoidance, adaptive cruise control, situational awareness
  • Already licensed exclusively for space, air, land, and sub-aquatic vehicle navigation
Technology Details

Category: sensors
Reference: LAR-TOPS-168
NASA Case Numbers: LAR-17799-1, LAR-17894-1
U.S. Patents: 8,655,513; 8,494,687; 9,354,880
Similar Results
Non-Scanning 3D Imager
NASA Goddard Space Flight Center has developed a non-scanning, 3D imaging laser system that uses a simple lens system to simultaneously generate a one-dimensional or two-dimensional array of optical (light) spots to illuminate an object, surface, or scene and generate a topographic profile. The system includes a microlens array configured in combination with a spherical lens to generate a uniform spot array for a two-dimensional detector, an optical receiver, and a pulsed laser as the transmitter light source. The laser pulse travels from the light source to the object and back. A fraction of the returned light is imaged onto the optical detector, and a threshold detector determines the time at which the pulse arrived at the detector (with picosecond to nanosecond precision). Distance information can be determined for each pixel in the array, which can then be displayed to form a three-dimensional image. The system produces real-time three-dimensional images at television frame rates (30 frames per second) or higher. Alternate embodiments of this innovation include the use of a light-emitting diode in place of a pulsed laser, and/or a macrolens array in place of a microlens array.
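The per-pixel distance computation is the standard time-of-flight relation: the pulse covers the sensor-to-target distance twice, so d = c·t/2. A minimal sketch (illustrative names; not the flight code):

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def arrival_times_to_range(t_arrival, t_emit=0.0):
        """Convert per-pixel pulse arrival times (s) to one-way distances (m).

        t_arrival : (H, W) array of threshold-detector timestamps
        t_emit    : laser pulse emission time
        The pulse travels out and back, so divide the round trip by two.
        """
        return C * (np.asarray(t_arrival) - t_emit) / 2.0

    # Example: a 50 ns round trip corresponds to ~7.5 m.
    print(arrival_times_to_range(np.array([[50e-9]])))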
FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm
FlashPose is a combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS environment in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating point cloud rotation and translation. This procedure is repeated for a single image until a predefined mean square error metric is met, at which point the process repeats for a new image. The rotation and translation operations performed on the point cloud represent an estimate of relative attitude and position, otherwise known as pose. In addition to 6-degree-of-freedom (DOF) pose estimation, FlashPose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information and analyzes characteristics of the histogram to produce the range and bearing estimate. It can be generated quickly and provides valuable information for seeding the FlashPose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
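A minimal point-to-point ICP sketch in Python conveys the core loop described above: match each cloud point to its closest model point, solve for the least-squares rigid transform, apply it, and stop once the mean-square-error metric is met. This is an illustration, not the FlashPose implementation (which is C software plus VHDL firmware); the SciPy KD-tree and all function names here are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst
        (the SVD/Kabsch solution used in classic point-to-point ICP)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # guard against reflections
        return R, cd - R @ cs

    def icp(cloud, model, max_iter=50, mse_tol=1e-6):
        """Estimate the pose (R, t) aligning a sensed point cloud to a
        target model at the origin, a la Besl & McKay."""
        tree = cKDTree(model)
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(max_iter):
            d, idx = tree.query(cloud)          # closest model point per sample
            R, t = best_rigid_transform(cloud, model[idx])
            cloud = cloud @ R.T + t             # apply incremental update
            R_total, t_total = R @ R_total, R @ t_total + t
            if np.mean(d ** 2) < mse_tol:       # stop once the MSE metric is met
                break
        return R_total, t_total                 # relative attitude and position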
3D Lidar for Autonomous Landing Site Selection
Aerial planetary exploration spacecraft require lightweight, compact, low-power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria and can withstand harsh planetary environments. Further, the new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar), as shown in the figure below. The Ocellus 3D lidar quickly scans a near-infrared laser across a planetary surface, receives the reflected signal, and translates it into a 3D point cloud. Using a laser source, fast-scanning MEMS (micro-electromechanical system)-based mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At ~2 km altitude, Ocellus acts as an altimeter; at altitudes below 200 m, the tool produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevations are used by the spacecraft to assess safe landing sites. The Ocellus 3D lidar is applicable to planetary and lunar exploration by uncrewed or crewed aerial vehicles and may be adapted for assisting in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
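As a rough sketch of the conversion from scanned returns to a terrain map (generic geometry for illustration; Ocellus's actual onboard processing is NASA-developed electronics, and all names and the grid strategy here are assumptions), each return's mirror angles and measured range can be projected to Cartesian points and binned into an elevation grid:

    import numpy as np

    def scan_to_elevation(az, el, rng, cell=0.1):
        """Bin a scanned lidar sweep into a simple elevation map.

        az, el : per-return scan angles (rad); rng : measured range (m)
        Sensor frame: x forward, z up; `cell` is grid spacing in meters.
        """
        x = rng * np.cos(el) * np.cos(az)
        y = rng * np.cos(el) * np.sin(az)
        z = rng * np.sin(el)

        i = np.floor(x / cell).astype(int); i -= i.min()
        j = np.floor(y / cell).astype(int); j -= j.min()
        grid = np.full((i.max() + 1, j.max() + 1), np.nan)
        for ii, jj, zz in zip(i, j, z):
            if np.isnan(grid[ii, jj]) or zz > grid[ii, jj]:
                grid[ii, jj] = zz           # keep the highest return per cell
        return grid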
Optical Head-Mounted Display System for Laser Safety Eyewear
The system combines laser goggles with an optical head-mounted display that shows a real-time video camera image of a laser beam, letting users visualize the beam while their eyes remain protected. The system also allows for numerous additional features in the optical head-mounted display, such as digital zoom, overlays of additional information such as power meter data, a Bluetooth wireless interface, and digital overlays of beam location. The system is built on readily available components and can be used with existing laser eyewear. The software converts the color being observed to another color that transmits through the goggles. For example, if a red laser is being used and red-blocking glasses are worn, the software can convert red to blue, which is readily transmitted through the laser eyewear. Similarly, color video can be converted to black-and-white to transmit through the eyewear.
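The color-substitution step can be sketched in a few lines of Python: shift energy from the blocked band into a band the eyewear transmits, or fall back to monochrome. The frame format and function names are assumptions for illustration, not the product's software.

    import numpy as np

    def remap_for_goggles(frame_rgb, blocked="red", substitute="blue"):
        """Shift the blocked laser color into a band the goggles transmit.

        frame_rgb : (H, W, 3) uint8 camera frame. With red-blocking
        goggles, displaying red energy as blue lets the wearer still
        see the beam through the eyewear.
        """
        ch = {"red": 0, "green": 1, "blue": 2}
        b, s = ch[blocked], ch[substitute]
        out = frame_rgb.copy()
        out[..., [b, s]] = frame_rgb[..., [s, b]]   # swap the two channels
        return out

    def to_monochrome(frame_rgb):
        """Alternative: black-and-white video transmits through most eyewear."""
        gray = frame_rgb.mean(axis=-1).astype(np.uint8)
        return np.stack([gray] * 3, axis=-1)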
3D Lidar for Improved Rover Traversal and Imagery
The SQRLi system is made up of three major components: the laser assembly, the mirror assembly, and the electronics and data processing equipment (electronics assembly), as shown in the figure below. The three subsystems work together to send and receive the lidar signal, then translate it into a 3D image for navigation and imaging purposes. The rover sensing instrument makes use of a unique fiber-optic laser assembly with high, adjustable output that increases the dynamic range (i.e., contrast) of the lidar system. The commercially available mirror setup used in the SQRLi is small, reliable, and has a wide aperture that improves the field of view of the lidar while maintaining a small instrument footprint. Lastly, the data processing is done by an in-house designed processor capable of translating the light signal into a high-resolution (sub-millimeter) 3D map. Together these components enable hazard detection and navigation in visibility-impaired environments. The SQRLi is applicable to planetary and lunar exploration by uncrewed or crewed vehicles and may be adapted for in-space servicing, assembly, and manufacturing purposes. Beyond NASA missions, the new 3D lidar may be used for vehicular navigation in the automotive, defense, or commercial space sectors. The SQRLi is available for patent licensing.
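As an illustration of how such a high-resolution elevation map supports hazard detection, the sketch below applies a generic slope-and-roughness test; this is not SQRLi's actual processing, and the thresholds and names are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def hazard_mask(elev, cell=0.001, max_slope_deg=20.0, max_rough=0.002):
        """Flag elevation-map cells too steep or too rough to traverse.

        elev : (H, W) elevation map in meters on a grid of `cell`-meter
        spacing (sub-millimeter cells, per the SQRLi map resolution).
        """
        gy, gx = np.gradient(elev, cell)              # local surface gradient
        slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
        rough = np.abs(elev - uniform_filter(elev, size=3))  # 3x3 deviation
        return (slope_deg > max_slope_deg) | (rough > max_rough)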