Photogrammetric Method for Calculating Relative Orientation (LAR-TOPS-38)
Highly accurate, flexible system measures relative dynamics in six degrees of freedom
Overview
NASA's Langley Research Center has developed a novel method to calculate the relative position and orientation between two rigid objects using a simplified photogrammetric technique. The system quantitatively captures the relative orientation of objects in six degrees of freedom (6-DOF) using one or more cameras with non-overlapping fields of view (FOV) that record strategically placed photogrammetric targets.
This high-speed camera system provides an algorithmic foundation for photogrammetry applications where detecting relative positioning is important. Originally developed to evaluate the separation stage of NASA's Max Launch Abort System (MLAS) spacecraft crew module, the technology has also been used to evaluate the effect of water impact on the MLAS crew module and for trajectory analysis of military aircraft.
The Technology
The NASA technology uses a photogrammetry algorithm to calculate the relative orientation between two rigid bodies. The software, written in LabVIEW and MATLAB, quantitatively analyzes the photogrammetric data collected from the camera system to determine the 6-DOF position and rotation of the observed object.
The system comprises an arrangement of arbitrarily placed cameras rigidly fixed to one body and a collection of photogrammetric targets rigidly fixed to the second body. The cameras can either be mounted on rigidly fixed objects surrounding the second body (facing inward) or on an object directed toward the surrounding environment (facing outward). At any given point in time, the cameras must capture at least five non-collinear targets. The 6-DOF accuracy increases as additional cameras and targets are used. The equipment requirements include a set of heterogeneous cameras, a collection of photogrammetric targets, a data storage device, and a processing PC. Camera calibration and initial target measurements are required before image capture.
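The flight software is written in LabVIEW and MATLAB; the Python/OpenCV sketch below is only a hedged illustration of the core geometric step described above: recovering a 6-DOF pose from non-collinear targets observed by one calibrated camera. The target layout, camera intrinsics, and simulated pose are placeholder values, not data from the NASA system.

```python
# Minimal single-camera sketch of the 6-DOF pose step (illustrative values only).
import cv2
import numpy as np

# Six non-collinear photogrammetric target locations on the observed body (body frame, meters)
targets_body = np.array([
    [0.00, 0.00, 0.00],
    [0.30, 0.00, 0.00],
    [0.00, 0.30, 0.00],
    [0.30, 0.30, 0.05],
    [0.15, 0.15, 0.10],
    [0.05, 0.25, 0.02],
], dtype=np.float64)

# Intrinsics from a prior camera calibration; lens distortion assumed already removed
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Synthesize image measurements from a known relative pose (stand-in for the
# target centroids a real camera frame would provide)
rvec_true = np.array([0.05, -0.10, 0.02], dtype=np.float64).reshape(3, 1)
tvec_true = np.array([0.10, -0.05, 2.00], dtype=np.float64).reshape(3, 1)
targets_px, _ = cv2.projectPoints(targets_body, rvec_true, tvec_true, K, dist)

# Recover the 6-DOF pose (rotation and translation) of the target body in the camera frame
ok, rvec, tvec = cv2.solvePnP(targets_body, targets_px, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("rotation:\n", R)
print("translation (m):", tvec.ravel())
```

In the full multi-camera system, measurements from all cameras and targets contribute to a single estimate, which is why accuracy improves as more cameras and targets are added.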
A nonprovisional patent application on this technology has been filed.
Benefits
- Short set-up time: Overlapping camera FOVs are not required, making the system ideal where physical constraints limit camera placement.
- Flexibility: Placement of camera and target locations is arbitrary, and multiple types of camera lenses may be used simultaneously.
- Minimal user intervention: Algorithm automatically calculates relative orientation after initial set-up.
- Low cost: Simplified photogrammetry system has minimal equipment requirements.
- Adjustable accuracy: Accuracy can be increased by adding more cameras and photogrammetric targets to the system.
- Quantitative data: The system provides both quantitative and qualitative motion measurements.
Applications
- Astronomy - satellite-based star tracking
- Automobiles and other vehicles - car crash dynamics, vehicle separation tests
- Medical - computer-assisted surgery
- Military - ballistics testing
- Terrestrial surveying
- Wind tunnel testing
Technology Details
Category: optics
Reference numbers: LAR-TOPS-38, LAR-17908-1
Thomas W. Jones, Thomas H. Johnson, Dave Shemwell, and Christopher M. Shreves, "Photogrammetric Technique for Center of Gravity Determination," 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 23-26, 2012, AIAA Paper 2012-1882. https://ntrs.nasa.gov/api/citations/20120008800/downloads/20120008800.pdf
Similar Results
FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm
FlashPose is a combination of software written in C and FPGA firmware written in VHDL. It is designed to run under Linux in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating the rotation and translation of the point cloud. This procedure is repeated a number of times for a single image until a predefined mean square error metric is met, at which point the process repeats for a new image.
The rotation and translation operations performed on the point cloud represent an estimate of relative attitude and position, otherwise known as pose.
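FlashPose itself is C software plus VHDL firmware; the Python sketch below is only an illustration of the ICP loop described above (closest-point matching, a rigid best-fit via SVD, repeated until a mean-square-error threshold is met). The data, threshold, and NumPy/SciPy dependencies are assumptions for the sketch, not part of the flight code.

```python
# Illustrative ICP loop: align a measured point cloud to a target model at the origin.
import numpy as np
from scipy.spatial import cKDTree

def icp(cloud, model, max_iters=50, mse_tol=1e-6):
    """Estimate rotation R and translation t aligning `cloud` (Nx3) to `model` (Mx3)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(model)
    for _ in range(max_iters):
        # 1. Pair each cloud point with its closest model point
        dists, idx = tree.query(cloud)
        matched = model[idx]
        # 2. Best-fit rigid transform for these pairs (Kabsch / SVD)
        c_cen, m_cen = cloud.mean(axis=0), matched.mean(axis=0)
        H = (cloud - c_cen).T @ (matched - m_cen)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = m_cen - R @ c_cen
        # 3. Apply the update and accumulate the pose estimate
        cloud = cloud @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        if np.mean(dists ** 2) < mse_tol:  # stop once the MSE metric is met
            break
    return R_total, t_total               # relative attitude and position (pose)
```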
In addition to six-degree-of-freedom (6-DOF) pose estimation, FlashPose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information and analyzes characteristics of the histogram to produce the range and bearing estimate. The estimate can be generated quickly and provides valuable information for seeding the FlashPose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
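As a rough illustration of that histogram-based seed, the sketch below bins the range returns, takes the most populated bin as the range estimate, and uses the image centroid of those returns for bearing. The field-of-view and bin-width values are placeholders, not FlashPose parameters.

```python
# Illustrative histogram-based range/bearing seed from a range image.
import numpy as np

def range_bearing_estimate(range_image, fov_deg=(20.0, 20.0), bin_width_m=0.25):
    """range_image: HxW array of ranges in meters (0 where there is no return)."""
    valid = range_image > 0
    ranges = range_image[valid]
    # Histogram the returns and take the most populated bin as the target range
    edges = np.arange(ranges.min(), ranges.max() + 2 * bin_width_m, bin_width_m)
    counts, edges = np.histogram(ranges, bins=edges)
    k = np.argmax(counts)
    est_range = 0.5 * (edges[k] + edges[k + 1])
    # Bearing: centroid of the pixels in that bin, scaled by the sensor field of view
    h, w = range_image.shape
    rows, cols = np.nonzero(valid & (range_image >= edges[k]) & (range_image < edges[k + 1]))
    az = (cols.mean() / w - 0.5) * np.deg2rad(fov_deg[0])
    el = (0.5 - rows.mean() / h) * np.deg2rad(fov_deg[1])
    return est_range, az, el
```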
Low Cost Star Tracker Software
The current Star Tracker package pairs a Lumenera LW230 monochrome machine-vision camera with a FUJINON HF35SA-1 35 mm lens. The star tracker cameras are all connected to and powered by the PC/104 stack via USB 2.0 ports. The software is written in C++ and can easily be adapted to other camera and lens platforms by setting new variables in the software for the new focal conditions.
To identify stars in images, the software contains a star database derived from the 118,218-star Hipparcos catalog [1]. The database lists every star pair within the camera field of view and the angular distance between the stars of each pair, along with the inertial position of each individual star taken directly from the Hipparcos catalog. To keep the database small, only stars of magnitude 6.5 or brighter were included.
The star tracking process begins when image data is retrieved by the software from the data buffers in the camera. The image is converted to a binary image via a brightness threshold so that on (bright) pixels are represented by 1s and off (dark) pixels by 0s. The binary image is then searched for blobs (connected groups of on pixels), which represent unidentified stars or other objects such as planets, deep-sky objects, other satellites, or noise. The centroids of the blobs are computed, and a unique pattern-recognition algorithm is applied to identify which, if any, stars they represent. During this process, false stars are effectively removed and only repeatedly and uniquely identifiable stars are stored.
After the stars are identified, another algorithm uses their position information to determine the attitude of the satellite, expressed as a set of Euler angles: right ascension (RA), declination (Dec), and roll. The first two Euler angles are computed from a linear system derived from vector algebra and the information of two identified stars in the image. The roll angle is computed with an iterative method that relies on the information of a single star and the first two Euler angles.
[1] ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
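The Star Tracker flight code is C++; purely as an illustration of the front-end steps described above (brightness threshold, blob search, centroiding, and the pair angles matched against the database), here is a Python sketch. The threshold, focal length, and pinhole-camera model are assumed placeholder values, not the actual Star Tracker parameters.

```python
# Illustrative star-detection front end: threshold, blobs, centroids, pair angles.
import numpy as np
from scipy import ndimage

def detect_star_vectors(image, threshold=60, focal_px=5270.0):
    """Threshold an image, find blob centroids, and return camera-frame unit vectors."""
    binary = image > threshold                       # on/off pixels
    labels, n = ndimage.label(binary)                # connected groups of on pixels
    centroids = ndimage.center_of_mass(image, labels, range(1, n + 1))
    h, w = image.shape
    vectors = []
    for row, col in centroids:
        # Pinhole model: pixel offset from the optical axis -> unit direction
        v = np.array([col - w / 2.0, row - h / 2.0, focal_px])
        vectors.append(v / np.linalg.norm(v))
    return np.array(vectors)

def pair_angles(vectors):
    """Angular separation (radians) of every blob pair, for matching against the database."""
    angles = {}
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            angles[(i, j)] = np.arccos(np.clip(vectors[i] @ vectors[j], -1.0, 1.0))
    return angles
```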
Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors. To solve this Perspective-n-Point (PnP) problem, the software uses computer vision algorithms to determine the alignment between the camera eyepoint and the end effector, since the borescope camera sensors are typically located several centimeters from their respective end-effector grasping mechanisms.
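The key geometric step above is compensating for the offset between the borescope camera eyepoint and the grasping mechanism. The Python sketch below shows only that idea: a fixture pose from any PnP solver is chained with a calibrated camera-to-end-effector transform to give the pose the operator actually needs. All transforms and numbers are hypothetical, not values from the ISS software.

```python
# Illustrative offset correction: fixture pose in the end-effector frame.
import numpy as np

def compose(T_a_b, T_b_c):
    """Chain two 4x4 homogeneous transforms: frame a <- b and frame b <- c."""
    return T_a_b @ T_b_c

# Pose of the grapple fixture in the camera frame, e.g. from a PnP solver (placeholder)
T_cam_fixture = np.eye(4)
T_cam_fixture[:3, 3] = [0.02, -0.01, 0.80]   # fixture about 0.8 m in front of the camera

# Fixed offset between the end effector and its borescope camera, from calibration (placeholder)
T_ee_cam = np.eye(4)
T_ee_cam[:3, 3] = [0.00, 0.05, 0.00]         # camera a few centimeters off the grasp axis

# Pose the operator needs: fixture relative to the end effector
T_ee_fixture = compose(T_ee_cam, T_cam_fixture)
print("fixture position in end-effector frame (m):", T_ee_fixture[:3, 3])
```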
The software includes a machine learning component that uses a trained regional Convolutional Neural Network (r-CNN) to analyze a live camera feed and determine which ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations.
Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications outside of controlled environments may also find value in the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit.
This computer vision software is at a technology readiness level (TRL) of 6 (system/subsystem model or prototype demonstration in an operational environment), and the software is now available to license. Please note that NASA does not manufacture products itself for commercial sale.
Ruggedized Infrared Camera
This new technology applies NASA engineering to a FLIR Systems Boson® Model No. 640 to enable a robust IR camera for use in space and other extreme applications. Enhancements to the standard Boson® platform include a ruggedized housing, connector, and interface. The Boson® is a small, uncooled, COTS IR camera based on microbolometer technology that operates in the long-wave infrared (LWIR) portion of the IR spectrum and is available with several lens configurations. NASA's modifications allow the IR camera to survive launch conditions and improve heat removal for space-based (vacuum) operation. The design includes a custom housing to secure the camera core, along with a lens clamp to maintain a tight lens-core connection during high-vibration launch conditions. The housing also provides additional conductive cooling for the camera components, allowing operation in a vacuum environment. A custom printed circuit board (PCB) in the housing allows for a USB connection using a military-standard (MIL-STD) miniaturized locking connector instead of the standard USB Type-C connector. The system maintains the USB standard protocol for easy compatibility and "plug-and-play" operation.
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. A feature detection technique such as Hough circle or Harris corner detection is used to find portions of the image that may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to known landmarks. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate aircraft landing.
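As a hedged illustration of the feature-detection front end described above, the Python/OpenCV sketch below runs Hough circle and Harris corner detection on a synthetic frame. The parameters and the synthetic landmarks are placeholders, not the VALS configuration.

```python
# Illustrative landmark-feature detection on a synthetic descent frame.
import cv2
import numpy as np

# Synthetic grayscale frame with one circular and one rectangular "landmark"
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (320, 240), 30, 255, -1)
cv2.rectangle(frame, (100, 100), (180, 160), 255, -1)
blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)

# Candidate circular landmarks via the Hough transform
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=30, minRadius=5, maxRadius=80)

# Candidate corner landmarks via the Harris response
harris = cv2.cornerHarris(np.float32(blurred), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(harris > 0.01 * harris.max())   # (row, col) of strong corners

# These detections would then be matched against the stored landmark list,
# with the matched world coordinates passed to the COPOSIT and Kalman-filter stages.
print(0 if circles is None else circles.shape[1], "circles,", len(corners), "corner pixels")
```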