FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm

Provides a relative navigation capability to enable autonomous rendezvous and capture of non-cooperative space-borne targets.
Overview
NASA Goddard Space Flight Center has developed FlashPose, relative navigation measurement software and VHDL firmware for space flight missions requiring vehicle-relative and terrain-relative navigation and control. FlashPose processes real-time or recorded range and intensity images from 3D imaging sensors such as lidars and compares them to known models of the target surfaces to output the position and orientation of the known target relative to the sensor coordinate frame. FlashPose provides a relative navigation (pose estimation) capability to enable autonomous rendezvous and capture of non-cooperative space-borne targets. All algorithmic processing takes place in the software application, while custom FPGA firmware interfaces directly with the Ball Vision Navigation System (VNS) Lidar and provides imagery to the algorithm.

The Technology
FlashPose is the combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating the point cloud's rotation and translation. This procedure is repeated for a single image until a predefined mean square error metric is met, at which point the process repeats for a new image. The rotation and translation applied to the point cloud represent an estimate of relative attitude and position, otherwise known as pose.

In addition to six-degree-of-freedom (6-DOF) pose estimation, FlashPose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information and analyzes characteristics of the histogram to produce the range and bearing estimate. The estimate can be generated quickly and provides valuable information for seeding the FlashPose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
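As a rough, non-flight illustration of these two steps, the C sketch below converts a range image to a Cartesian point cloud and derives a histogram-based range/bearing seed. The per-pixel angular camera model, image dimensions, histogram bin width, and range thresholds are all assumptions chosen for clarity, not details of FlashPose itself.

```c
/* Illustrative sketch (not flight code): convert a range image to a Cartesian
 * point cloud and derive a coarse range/bearing seed from a range histogram.
 * Assumed: a simple per-pixel azimuth/elevation camera model, a fixed image
 * size, and a 0.5 m histogram bin width. */
#include <math.h>
#include <stddef.h>

#define ROWS  128
#define COLS  128
#define NBINS 256
#define BIN_W 0.5f            /* meters per histogram bin (configurable) */

typedef struct { float x, y, z; } Point3;

/* Convert valid range pixels to points in the sensor frame. */
size_t range_to_cloud(const float range[ROWS][COLS],
                      float ifov,              /* per-pixel angle, radians */
                      float min_r, float max_r,
                      Point3 *cloud)           /* holds up to ROWS*COLS points */
{
    size_t n = 0;
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            float rho = range[r][c];
            if (rho < min_r || rho > max_r) continue;   /* filter/threshold */
            float az = (c - COLS / 2) * ifov;           /* azimuth   */
            float el = (r - ROWS / 2) * ifov;           /* elevation */
            cloud[n].x = rho * cosf(el) * sinf(az);
            cloud[n].y = rho * sinf(el);
            cloud[n].z = rho * cosf(el) * cosf(az);
            n++;
        }
    }
    return n;
}

/* Coarse range/bearing seed: histogram the ranges, take the modal bin as the
 * target range, and average the bearings of the points in that bin. */
void range_bearing_seed(const Point3 *cloud, size_t n,
                        float *range_out, float *az_out, float *el_out)
{
    int hist[NBINS] = {0};
    for (size_t i = 0; i < n; i++) {
        float rho = sqrtf(cloud[i].x*cloud[i].x + cloud[i].y*cloud[i].y +
                          cloud[i].z*cloud[i].z);
        int b = (int)(rho / BIN_W);
        if (b >= 0 && b < NBINS) hist[b]++;
    }
    int best = 0;
    for (int b = 1; b < NBINS; b++) if (hist[b] > hist[best]) best = b;

    double sr = 0.0, saz = 0.0, sel = 0.0; size_t cnt = 0;
    for (size_t i = 0; i < n; i++) {
        float rho = sqrtf(cloud[i].x*cloud[i].x + cloud[i].y*cloud[i].y +
                          cloud[i].z*cloud[i].z);
        if (rho <= 0.0f || (int)(rho / BIN_W) != best) continue;
        sr  += rho;
        saz += atan2f(cloud[i].x, cloud[i].z);
        sel += asinf(cloud[i].y / rho);
        cnt++;
    }
    if (cnt) {
        *range_out = (float)(sr  / cnt);
        *az_out    = (float)(saz / cnt);
        *el_out    = (float)(sel / cnt);
    }
}
```

In practice, such a seed would initialize the ICP iteration described above or an external pose algorithm or filter.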
Benefits
  • Fuses space flight hardware and software to provide a real-time pose estimate for non-cooperative targets
  • Operates reliably in a space environment
  • Can be adapted to any physical object

Applications
  • Spacecraft servicing rendezvous and docking
  • Space junk removal
  • High accuracy real-time relative navigation
  • Remotely operated terrestrial vehicles
  • Machine vision
Technology Details

Category: robotics automation and control
Reference Number: GSC-TOPS-102
Case Number: GSC-16598-1
Patent Number: 10,024,664
Similar Results
Super Resolution 3D Flash LIDAR
This suite of technologies includes a method, algorithms, and computer processing techniques to provide image photometric correction and resolution enhancement at video rates (30 frames per second). This 3D (2D spatial and range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance the spatial resolution and the range and photometric accuracies. In other words, the technology generates an elevation (3D) map of a targeted area (e.g., terrain) with greatly enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
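To make the frame-blending idea concrete, here is a minimal C sketch (not the NASA implementation) that accumulates co-registered low-resolution range frames onto a 2x finer grid and averages the samples per cell. The frame size, upsampling factor, and the assumption that sub-pixel shifts between frames are already known are simplifications.

```c
/* Minimal sketch of multi-frame range super-resolution: project each
 * co-registered low-resolution range frame onto a 2x finer grid using its
 * known sub-pixel shift, then average the accumulated samples per cell.
 * Estimating the shifts (registration) is assumed to be done elsewhere. */
#include <stddef.h>

#define LR 64            /* low-resolution frame size (assumed) */
#define UP 2             /* upsampling factor (assumed)         */
#define HR (LR * UP)

void accumulate_frame(const float frame[LR][LR],
                      float dx, float dy,            /* sub-pixel shift, LR pixels */
                      float sum[HR][HR], int count[HR][HR])
{
    for (int r = 0; r < LR; r++) {
        for (int c = 0; c < LR; c++) {
            int hr = (int)((r + dy) * UP + 0.5f);    /* nearest HR cell */
            int hc = (int)((c + dx) * UP + 0.5f);
            if (hr < 0 || hr >= HR || hc < 0 || hc >= HR) continue;
            sum[hr][hc]   += frame[r][c];
            count[hr][hc] += 1;
        }
    }
}

void finalize(const float sum[HR][HR], const int count[HR][HR],
              float out[HR][HR])
{
    for (int r = 0; r < HR; r++)
        for (int c = 0; c < HR; c++)
            out[r][c] = count[r][c] ? sum[r][c] / count[r][c] : 0.0f;
}
```

As in the description above, the more frames that are accumulated, the more fine-grid cells receive samples and the better the averaged estimate becomes.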
3D Lidar for Autonomous Landing Site Selection
Aerial planetary exploration spacecraft require lightweight, compact, and low power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria as well as being able to withstand harsh planetary environments. Further, the new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar). The Ocellus 3D lidar quickly scans a near-infrared laser across a planetary surface, receives that signal, and translates it into a 3D point cloud. Using a laser source, fast scanning MEMS (micro-electromechanical system)-based mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At altitudes of ~2 km, Ocellus acts as an altimeter; at altitudes below 200 m, the tool produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevations are used by the spacecraft to assess safe landing sites. The Ocellus 3D lidar is applicable to planetary and lunar exploration by unmanned or crewed aerial vehicles and may be adapted for assisting in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
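A highly simplified view of the scan-to-terrain-map step is sketched below in C. It is illustrative only (the actual Ocellus processing runs in NASA-developed electronics); the nadir-pointing geometry, grid size, and cell spacing are assumptions.

```c
/* Illustrative only: convert scanned lidar returns (scan azimuth, off-nadir
 * angle, range) into points and bin them into a simple elevation map,
 * keeping the highest return per grid cell. A downward-looking sensor at a
 * known altitude is assumed; grid size and cell spacing are arbitrary. */
#include <math.h>
#include <float.h>

#define GRID 100
#define CELL 0.25f       /* meters per grid cell (assumed) */

typedef struct { float az, off_nadir, range; } LidarReturn;

void build_elevation_map(const LidarReturn *ret, int n, float altitude,
                         float elev[GRID][GRID])
{
    for (int r = 0; r < GRID; r++)
        for (int c = 0; c < GRID; c++)
            elev[r][c] = -FLT_MAX;                       /* "no data" marker */

    for (int i = 0; i < n; i++) {
        /* Horizontal distance from the ground track and terrain height. */
        float ground = ret[i].range * sinf(ret[i].off_nadir);
        float x = ground * cosf(ret[i].az);
        float y = ground * sinf(ret[i].az);
        float z = altitude - ret[i].range * cosf(ret[i].off_nadir);

        int col = (int)(x / CELL) + GRID / 2;
        int row = (int)(y / CELL) + GRID / 2;
        if (row < 0 || row >= GRID || col < 0 || col >= GRID) continue;
        if (z > elev[row][col]) elev[row][col] = z;      /* keep highest surface */
    }
}
```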
Unique Datapath Architecture Yields Real-Time Computing
The DLC platform is composed of three key components: a NASA-designed field programmable gate array (FPGA) board, a NASA-designed multiprocessor system-on-a-chip (MPSoC) board, and a proprietary datapath that links the boards to available inputs and outputs to enable high-bandwidth data collection and processing. The inertial measurement unit (IMU), camera, Navigation Doppler Lidar (NDL), and Hazard Detection Lidar (HDL) navigation sensors are connected to the DLC’s FPGA board. The datapath on this board consists of high-speed serial interfaces for each sensor, which accept the sensor data as input and convert it to an AXI stream format. The sensor streams are multiplexed into a single AXI stream, which is then formatted for input to a XAUI high-speed serial interface. This interface sends the data to the MPSoC board, where it is converted back from the XAUI format to a combined AXI stream and demultiplexed into individual sensor AXI streams. These AXI streams are then fed into respective DMA interfaces that provide access to the DDRAM on the MPSoC board. This architecture enables real-time, high-bandwidth data collection and processing while preserving the MPSoC’s full processing capability. This sensor datapath architecture may have other potential applications in aerospace and defense, transportation (e.g., autonomous driving), medical, research, and automation/control markets, where it could serve as a key component in a high-performance computing platform and/or critical embedded system for integrating, processing, and analyzing large volumes of data in real time.
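The mux/demux concept can be modeled in a few lines of C. This is purely a conceptual software model, not the FPGA firmware: it tags each sensor word with a source ID before merging the streams, then routes words back to per-sensor buffers by that tag. Packet framing, flow control, and the XAUI link are omitted, and the sensor IDs and word width are assumptions.

```c
/* Conceptual model of the sensor-stream mux/demux (not the FPGA firmware):
 * each word from a sensor is tagged with a source ID, merged into one
 * combined stream, and later routed back to its per-sensor buffer by the
 * tag, much as the DMA engines deliver per-sensor streams to memory. */
#include <stdint.h>
#include <stddef.h>

enum sensor_id { SENSOR_IMU = 0, SENSOR_CAMERA, SENSOR_NDL, SENSOR_HDL,
                 SENSOR_COUNT };

typedef struct {
    uint8_t  id;     /* which sensor produced this word */
    uint64_t data;   /* one word of sensor payload      */
} StreamWord;

/* Multiplex: append one tagged sensor word to the combined stream. */
size_t mux_push(StreamWord *stream, size_t len,
                enum sensor_id id, uint64_t data)
{
    stream[len].id   = (uint8_t)id;
    stream[len].data = data;
    return len + 1;
}

/* Demultiplex: route each word of the combined stream back to the buffer
 * belonging to the sensor that produced it. */
void demux(const StreamWord *stream, size_t len,
           uint64_t *out[SENSOR_COUNT], size_t out_len[SENSOR_COUNT])
{
    for (size_t i = 0; i < len; i++) {
        uint8_t id = stream[i].id;
        if (id >= SENSOR_COUNT) continue;    /* drop malformed words */
        out[id][out_len[id]++] = stream[i].data;
    }
}
```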
Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors. To solve this Perspective-n-Point challenge, the software uses computer vision algorithms to determine alignment solutions between the position of the camera eyepoint and the position of the end effector, since the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms. The software includes a machine learning component that uses a trained regional Convolutional Neural Network (R-CNN) to analyze a live camera feed and determine ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations. Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications, outside of controlled environments, may also find value in the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit. This computer vision software is at technology readiness level (TRL) 6 (system/subsystem model or prototype demonstration in an operational environment), and the software is now available to license. Please note that NASA does not manufacture products itself for commercial sale.
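The Perspective-n-Point relationship underlying the pose estimate can be expressed as a reprojection-error minimization. The C sketch below is illustrative rather than the flight software: it assumes a pinhole camera model with hypothetical intrinsics and simply evaluates the error for a candidate pose; an iterative solver would adjust the rotation and translation to minimize it.

```c
/* Illustrative PnP cost: given known 3D points on the grapple fixture and
 * their detected 2D image locations, compute the summed squared reprojection
 * error for a candidate pose (rotation R, translation t) under a pinhole
 * camera model. A solver (e.g., Gauss-Newton) would minimize this over R, t. */
#include <math.h>

typedef struct { double fx, fy, cx, cy; } Intrinsics;   /* pinhole model */

double reprojection_error(const double R[3][3], const double t[3],
                          const double model[][3],   /* fixture points, fixture frame */
                          const double image[][2],   /* detected pixel locations      */
                          int n, const Intrinsics *K)
{
    double err = 0.0;
    for (int i = 0; i < n; i++) {
        /* Transform the model point into the camera frame: p = R*X + t. */
        double p[3];
        for (int r = 0; r < 3; r++)
            p[r] = R[r][0]*model[i][0] + R[r][1]*model[i][1] +
                   R[r][2]*model[i][2] + t[r];

        /* Pinhole projection to pixel coordinates. */
        double u = K->fx * (p[0] / p[2]) + K->cx;
        double v = K->fy * (p[1] / p[2]) + K->cy;

        double du = u - image[i][0];
        double dv = v - image[i][1];
        err += du*du + dv*dv;
    }
    return err;
}
```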
Low Cost Star Tracker Software
The current Star Tracker package comprises a Lumenera LW230 monochrome machine-vision camera and a FUJINON HF35SA-1 35mm lens. The star tracker cameras are all connected to and powered by the PC/104 stack via USB 2.0 ports. The software is written in C++ and can easily be adapted to other camera and lens platforms by setting new variables in the software for new focal conditions. To identify stars in images, the software contains a star database derived from the 118,218-star Hipparcos catalog [1]. The database contains a list of every star pair within the camera field of view and the angular distance between those pairs, as well as the inertial position information for each individual star taken directly from the Hipparcos catalog. To keep the database small, only stars of magnitude 6.5 or brighter were included.

The star tracking process begins when image data is retrieved by the software from the data buffers in the camera. The image is converted to a binary image via a brightness threshold, so that on (bright) pixels are represented by 1s and off (dark) pixels by 0s. The binary image is then searched for blobs, which are connected groups of on pixels. These blobs represent unidentified stars or other objects such as planets, deep-sky objects, other satellites, or noise. The centroids of the blobs are computed, and a unique pattern recognition algorithm is applied to identify which, if any, stars they represent. During this process, false stars are effectively removed and only repeatedly and uniquely identifiable stars are retained.

After stars are identified, another algorithm uses their position information to determine the attitude of the satellite. The attitude is computed as a set of Euler angles: right ascension (RA), declination (Dec), and roll. The first two angles are computed from a linear system derived from vector algebra and the information of two identified stars in the image. The roll angle is computed using an iterative method that relies on the information of a single star and the first two angles.

[1] ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
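The pair-matching step relies on the fact that the angular separation between two stars does not depend on spacecraft attitude, so separations measured in an image can be compared against the precomputed pair database. The C sketch below is illustrative only (the actual database layout is not described here): it converts catalog right ascension and declination to inertial unit vectors and computes a pair's angular distance.

```c
/* Illustrative sketch of the star-pair angular distance used for matching:
 * convert catalog right ascension/declination (radians) to inertial unit
 * vectors, then take the angle between them. Angles measured between blob
 * centroids in the camera frame can be compared against these values. */
#include <math.h>

typedef struct { double ra, dec; } CatalogStar;   /* radians */

static void to_unit_vector(const CatalogStar *s, double v[3])
{
    v[0] = cos(s->dec) * cos(s->ra);
    v[1] = cos(s->dec) * sin(s->ra);
    v[2] = sin(s->dec);
}

/* Angular separation between two catalog stars, in radians. */
double angular_distance(const CatalogStar *a, const CatalogStar *b)
{
    double va[3], vb[3];
    to_unit_vector(a, va);
    to_unit_vector(b, vb);
    double dot = va[0]*vb[0] + va[1]*vb[1] + va[2]*vb[2];
    if (dot >  1.0) dot =  1.0;   /* guard against rounding error */
    if (dot < -1.0) dot = -1.0;
    return acos(dot);
}
```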