FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm

Provides a relative navigation capability to enable autonomous rendezvous and capture of non-cooperative space-borne targets.
Overview
NASA Goddard Space Flight Center has developed FlashPose, relative navigation measurement software and VHDL firmware for space flight missions requiring vehicle-relative and terrain-relative navigation and control. FlashPose processes real-time or recorded range and intensity images from 3D imaging sensors such as Lidars and compares them to known models of the target surfaces, outputting the position and orientation of the target relative to the sensor coordinate frame. FlashPose thus provides a relative navigation (pose estimation) capability that enables autonomous rendezvous and capture of non-cooperative space-borne targets. All algorithmic processing takes place in the software application, while custom FPGA firmware interfaces directly with the Ball Vision Navigation System (VNS) Lidar and provides imagery to the algorithm.

The Technology
FlashPose is a combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating the point cloud's rotation and translation. This procedure is repeated for a single image until a predefined mean square error metric is met, at which point the process repeats for a new image. The rotation and translation applied to the point cloud represent an estimate of relative attitude and position, otherwise known as pose. In addition to six-degree-of-freedom (6-DOF) pose estimation, FlashPose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that builds a configurable histogram of the range data and analyzes its characteristics to produce the range and bearing estimate. It can be generated quickly and provides valuable information for seeding the FlashPose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
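The ICP loop described above can be sketched in a few lines. This is an illustrative Python sketch, not the flight C implementation: it uses a brute-force nearest-neighbor search and an SVD-based (Kabsch) least-squares fit, and all function names and parameters here are invented for illustration.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(cloud, model, max_iter=50, tol=1e-6):
    """Iteratively align the sensor point cloud to the target model.
    Returns the accumulated rotation, translation, and final mean square error."""
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_mse, mse = np.inf, np.inf
    for _ in range(max_iter):
        # Nearest-neighbor correspondences (brute force, for clarity only)
        d2 = ((cloud[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(axis=1)]
        mse = d2.min(axis=1).mean()
        if abs(prev_mse - mse) < tol:   # predefined MSE convergence metric
            break
        prev_mse = mse
        R, t = best_fit_transform(cloud, matched)
        cloud = cloud @ R.T + t         # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, mse
```

The accumulated rotation and translation are the pose estimate; in the real system the histogram-based range and bearing estimate would seed the initial alignment.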
Benefits
  • Fuses space flight hardware and software to provide a real-time pose estimate for non-cooperative targets
  • Operates reliably in a space environment
  • Can be adapted to any physical object

Applications
  • Spacecraft servicing rendezvous and docking
  • Space debris removal
  • High accuracy real-time relative navigation
  • Remotely operated terrestrial vehicles
  • Machine vision
Technology Details

robotics automation and control
GSC-TOPS-102
GSC-16598-1
10024664
Similar Results
Super Resolution 3D Flash LIDAR
This suite of technologies includes a method, algorithms, and computer processing techniques to provide image photometric correction and resolution enhancement at video rates (30 frames per second). This 3D (2D spatial and range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance the spatial resolution and the range and photometric accuracies. In other words, the technologies allow an elevation (3D) map of a targeted area (e.g., terrain) to be generated with much-enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
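As a rough illustration of why accuracy grows with the number of acquired frames, the Python sketch below (hypothetical function name) averages registered, overlapping range frames on an upsampled grid. It shows only the noise-averaging component of multi-frame fusion; the actual technology also exploits sub-pixel spatial and range information in each frame for true resolution enhancement.

```python
import numpy as np

def fuse_frames(frames, upscale=2):
    """Fuse a sequence of registered, overlapping range frames:
    upsample each frame onto a finer grid (nearest-neighbor, for
    clarity), then average the stack to reduce range noise roughly
    as 1/sqrt(number of frames)."""
    stack = []
    for f in frames:
        up = np.repeat(np.repeat(f, upscale, axis=0), upscale, axis=1)
        stack.append(up)
    return np.mean(stack, axis=0)
```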
Photogrammetric Method for Calculating Relative Orientation
The NASA technology uses a photogrammetry algorithm to calculate the relative orientation between two rigid bodies. The software, written in LabVIEW and MATLAB, quantitatively analyzes the photogrammetric data collected from the camera system to determine the 6-DOF position and rotation of the observed object. The system comprises an arrangement of arbitrarily placed cameras, rigidly fixed on one body, and a collection of photogrammetric targets, rigidly fixed on the second body. The cameras can be either placed on rigidly fixed objects surrounding the second body (facing inwards), or can be placed on an object directed towards the surrounding environment (facing outwards). At any given point in time, the cameras must capture at least five non-collinear targets. The 6-DOF accuracy increases as additional cameras and targets are used. The equipment requirements include a set of heterogeneous cameras, a collection of photogrammetric targets, a data storage device, and a processing PC. Camera calibration and initial target measurements are required prior to image capture. A nonprovisional patent application on this technology has been filed.
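The requirement that each frame capture at least five non-collinear targets can be screened with a simple rank test on the target coordinates. The Python sketch below is a hypothetical pre-check, not part of the LabVIEW/MATLAB software: if the second singular value of the centered point matrix is (near) zero, the targets lie on a line and the 6-DOF solution is degenerate.

```python
import numpy as np

def targets_usable(points, min_targets=5, tol=1e-8):
    """Check a frame's photogrammetric targets: at least `min_targets`
    points, and not all collinear (centered point matrix has rank >= 2)."""
    pts = np.asarray(points, dtype=float)
    if pts.shape[0] < min_targets:
        return False
    centered = pts - pts.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    # Second singular value ~ 0 means all points lie on one line.
    return bool(s[1] > tol * max(s[0], 1.0))
```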
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. In this system, feature detection techniques such as Hough circle and Harris corner detection are used to detect which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to the known landmarks. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimation of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate aircraft landing.
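The final refinement stage can be illustrated with a toy filter. The Python sketch below is a constant-velocity Kalman filter fusing vision-derived position fixes; with this linear model the extended Kalman filter update reduces to the standard equations, and the class name and all tuning values are invented for illustration, not taken from VALS.

```python
import numpy as np

class PoseFilter:
    """Constant-velocity Kalman filter refining vision-based position fixes.
    State: [px, py, pz, vx, vy, vz]. A sketch, not the VALS implementation."""
    def __init__(self, dt, q=0.01, r=0.1):
        self.x = np.zeros(6)                      # state estimate
        self.P = np.eye(6) * 10.0                 # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt           # p <- p + v * dt
        self.Q = np.eye(6) * q                    # process noise
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measure position only
        self.R = np.eye(3) * r                    # measurement noise

    def step(self, z):
        # Predict with the constant-velocity motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the vision-derived position measurement z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3], self.x[3:]
```

Even with position-only measurements, the velocity states become observable through the motion model, which is what lets the filter smooth the raw COPOSIT fixes.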
https://ntrs.nasa.gov/api/citations/20230000798/downloads/UTA%20Feb%202023%20Troupaki%20STRIVES.pdf
3D Lidar for Autonomous Landing Site Selection
Aerial planetary exploration spacecraft require lightweight, compact, and low-power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria and can also withstand harsh planetary environments. Further, the new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar), as shown in the figure below. The Ocellus 3D lidar quickly scans a near-infrared laser across a planetary surface, receives that signal, and translates it into a 3D point cloud. Using a laser source, fast-scanning MEMS (micro-electromechanical system)-based mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At ~2 km altitude, Ocellus acts as an altimeter; at altitudes below 200 m, the tool produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevations are used by the spacecraft to assess safe landing sites. The Ocellus 3D lidar is applicable to planetary and lunar exploration by uncrewed or crewed aerial vehicles and may be adapted to assist in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
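The conversion of a lidar point cloud into onboard elevations can be sketched as a simple gridding step: bin the (x, y) coordinates into cells and keep one z value per cell. The Python sketch below is a hypothetical illustration of that step, not the Ocellus flight processing; cell size and the keep-highest-return policy are arbitrary choices.

```python
import numpy as np

def elevation_map(points, cell=0.1):
    """Bin a 3D point cloud (rows of x, y, z) into a regular grid,
    keeping the highest z return per cell; empty cells are NaN."""
    pts = np.asarray(points, dtype=float)
    ix = np.floor((pts[:, 0] - pts[:, 0].min()) / cell).astype(int)
    iy = np.floor((pts[:, 1] - pts[:, 1].min()) / cell).astype(int)
    grid = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    for i, j, z in zip(ix, iy, pts[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid
```

A landing-site assessor could then, for example, compute per-cell slope and roughness from such a grid to rank candidate sites.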
Unique Datapath Architecture Yields Real-Time Computing
The DLC platform is composed of three key components: a NASA-designed field programmable gate array (FPGA) board, a NASA-designed multiprocessor system-on-a-chip (MPSoC) board, and a proprietary datapath that links the boards to available inputs and outputs to enable high-bandwidth data collection and processing. The inertial measurement unit (IMU), camera, Navigation Doppler Lidar (NDL), and Hazard Detection Lidar (HDL) navigation sensors (depicted in the diagram below) are connected to the DLC's FPGA board. The datapath on this board consists of high-speed serial interfaces for each sensor, which accept the sensor data as input and convert the output to an AXI stream format. The sensor streams are multiplexed into a single AXI stream, which is then formatted for input to a XAUI high-speed serial interface. This interface sends the data to the MPSoC board, where it is converted back from the XAUI format to a combined AXI stream and demultiplexed into the individual sensor AXI streams. These AXI streams are then input to respective DMA interfaces that provide access to the DDRAM on the MPSoC board. This architecture enables real-time, high-bandwidth data collection and processing while preserving the MPSoC's full processing capability. This sensor datapath architecture may have other potential applications in aerospace and defense, transportation (e.g., autonomous driving), medical, research, and automation/control markets, where it could serve as a key component in a high-performance computing platform and/or critical embedded system for integrating, processing, and analyzing large volumes of data in real time.
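The multiplex/demultiplex pattern at the heart of this datapath can be illustrated in software. The Python sketch below tags each sensor packet with a stream ID, merges the per-sensor streams round-robin, and recovers them on the far side; the real design does this in FPGA logic on AXI streams, so the names and structure here are purely illustrative.

```python
from collections import defaultdict

def mux(streams):
    """Interleave per-sensor packet lists into one tagged stream
    (round-robin, mirroring the role of the AXI-stream multiplexer)."""
    out = []
    iters = {sid: iter(pkts) for sid, pkts in streams.items()}
    while iters:
        for sid in list(iters):            # copy keys: we delete while looping
            try:
                out.append((sid, next(iters[sid])))
            except StopIteration:
                del iters[sid]             # sensor stream exhausted
    return out

def demux(tagged):
    """Recover the individual sensor streams from the tagged stream."""
    streams = defaultdict(list)
    for sid, pkt in tagged:
        streams[sid].append(pkt)
    return dict(streams)
```

The key property, as in the hardware datapath, is that per-sensor packet order is preserved end to end even though all sensors share one physical link.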