Computer Vision Lends Precision to Robotic Grappling
AI analysis of live camera feed yields delta commands to human operator
Overview
Innovators at NASA Johnson Space Center (JSC) have developed computer vision software that quickly determines the posture of a target and then instructs an operator how to properly align a robotic end-effector with the target they are trying to grapple. As an added benefit, the software's object identification capability can also help detect physical defects on targets.
This technology was originally created to aid robotic arm operators aboard the International Space Station (ISS), who relied heavily upon grappling maneuver instructions derived by flight controllers on the ground at JSC's Mission Control Center (MCC). Despite the aid of computer-based models to predict the alignment of both the robotic arm and its target, iterative realignment procedures were often required to correct botched grapple operations, costing valuable time.
To solve this problem, NASA's computer vision software analyzes the live feed from the robotic arm's single borescope camera and provides the operator with the delta commands required for an ideal grasp operation. This process is aided by a machine learning component that monitors the camera feed for any of the ISS's potential target fixtures. Once a target fixture is identified, the proper camera and target parameters are automatically sequenced to prepare for grasping operations.
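The fixture-triggered parameter sequencing described above can be pictured as a simple lookup: once the detector reports a fixture type, the matching camera and target settings are applied. The sketch below is purely illustrative; the fixture names and parameter values are hypothetical, not flight data.

```python
# Hypothetical fixture catalogue mapping a detected fixture type to the
# camera/target parameters to sequence before a grasp (values invented).
FIXTURE_PARAMS = {
    "flange_A": {"zoom": 2.0, "exposure_ms": 8, "model_points": 8},
    "pin_grid": {"zoom": 4.0, "exposure_ms": 4, "model_points": 12},
}

def sequence_parameters(detected_fixture):
    """Return the camera/target setup for a detected fixture type."""
    try:
        return FIXTURE_PARAMS[detected_fixture]
    except KeyError:
        raise ValueError(f"no grasp parameters for fixture {detected_fixture!r}")
```

In a real system the lookup would be driven by the detector's class output and the parameters pushed to the camera and pose-estimation pipeline automatically.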
The Technology
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimates of grapple fixtures relative to the robotic arm's end effectors. To solve this Perspective-n-Point challenge, the software uses computer vision algorithms to determine the alignment between the camera eyepoint and the position of the end effector, since the borescope camera sensors are typically located several centimeters from their respective end-effector grasping mechanisms.
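The camera-to-end-effector offset means a target pose recovered in the camera frame must be re-expressed in the gripper frame before it is useful to the operator. A minimal sketch of that frame composition is below; the offset and pose values are invented for illustration, and in practice the target pose would come from a Perspective-n-Point solver.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical extrinsic: borescope camera mounted a few centimeters
# from the end-effector grasp axis (values are illustrative only).
R_ee_cam = np.eye(3)                      # camera aligned with gripper axes
t_ee_cam = np.array([0.04, 0.0, 0.0])     # 4 cm lateral offset, meters
T_ee_cam = pose_to_matrix(R_ee_cam, t_ee_cam)

# Target pose in the camera frame, e.g. as returned by a PnP solver.
R_cam_tgt = np.eye(3)
t_cam_tgt = np.array([0.0, 0.01, 0.50])   # 50 cm ahead, 1 cm high
T_cam_tgt = pose_to_matrix(R_cam_tgt, t_cam_tgt)

# Re-express the target in the end-effector frame; the translation part
# is the delta command the operator must null out before grasping.
T_ee_tgt = T_ee_cam @ T_cam_tgt
delta = T_ee_tgt[:3, 3]                   # delta command, meters
```

With non-trivial rotations the same matrix product also yields the attitude correction, which is why pose is carried as a full homogeneous transform rather than a translation alone.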
The software includes a machine learning component that uses a trained regional Convolutional Neural Network (R-CNN) to analyze a live camera feed and determine which ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations.
Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications operating outside of controlled environments may also find value in the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit.
This computer vision software is at technology readiness level (TRL) 6 (system/subsystem model or prototype demonstration in an operational environment) and is now available to license. Please note that NASA does not manufacture products itself for commercial sale.
Benefits
- Machine-guided efficiency: high-fidelity position data allows human flight controllers to eliminate errors and save mission time.
- Trainable object recognition: a neural network capable of learning new targets automatically identifies objects as they enter a camera's field of view.
- Real-time tracking: recognized objects can be tracked, and positional data updated from frame to frame.
- Dynamic feedback: flight controllers are shown the solution quality of target-object position calculations.
Applications
- Orbital service: grasping operations for deployment or capture of spacecraft, satellites, and debris; robotic in-situ assembly
- Hazardous environments: robotic vision systems for mining, power generation, marine, reconnaissance and military
- Vehicle docking: mechanized capture, transport, and storage solutions
- Industrial automation: robotic assembly and manufacturing, inspection, automated local area transport
- Telesurgery: robotic vision systems to assist surgeons remotely in the operative field
Technology Details
information technology and software
MSC-TOPS-114
MSC-27184-1
Lucier et al. "International Space Station (ISS) Robotics Development Operations Team Results in Robotic Remote Sensing, Control, and Semi-Automated Ground Control Techniques." 16th International Conference on Space Operations, Cape Town, South Africa, May 3-5, 2021.
Similar Results
Method and Associated Apparatus for Capturing, Servicing, and De-Orbiting Earth Satellites Using Robotics
This method begins with optical seeking and ranging of a target satellite using LiDAR. Upon approach, the tumble rate of the target satellite is measured and matched by the approaching spacecraft. As rendezvous occurs, the spacecraft deploys a robotic grappling arm or berthing pins to provide a secure attachment to the satellite. A series of robotic arms then perform servicing autonomously, either executing a pre-programmed sequence of instructions or a sequence generated by Artificial Intelligence (AI) logic onboard the robot. Should it become necessary or desirable, a remote operator maintains the ability to abort an instruction or utilize a built-in override to teleoperate the robot.
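The capture-and-servicing sequence above is essentially a phased mission with an operator abort available at every step. A minimal state-machine sketch of that idea follows; the phase names and structure are assumptions for illustration, not the actual flight software design.

```python
from enum import Enum, auto

class Phase(Enum):
    """Hypothetical mission phases for the capture/servicing sequence."""
    SEEK = auto()          # LiDAR seeking and ranging
    MATCH_TUMBLE = auto()  # match target tumble rate
    GRAPPLE = auto()       # deploy arm or berthing pins
    SERVICE = auto()       # autonomous servicing sequence
    DONE = auto()
    ABORTED = auto()

def run_mission(abort_at=None):
    """Step through the phases in order; a remote operator may abort any phase."""
    order = [Phase.SEEK, Phase.MATCH_TUMBLE, Phase.GRAPPLE,
             Phase.SERVICE, Phase.DONE]
    log = []
    for phase in order:
        if abort_at is not None and phase == abort_at:
            log.append(Phase.ABORTED)  # operator override ends the sequence
            break
        log.append(phase)
    return log
```

The point of the structure is that the abort check sits outside any single phase, mirroring the always-available operator override described above.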
Robotic Inspection System for Fluid Infrastructures
The Robotic Inspection System improves the inspection of deep sea structures such as offshore storage cells/tanks, pipelines, and other subsea exploration applications. Generally, oil platforms are composed of pipelines and/or subsea storage cells. These storage cells not only provide a stable base for the platform but also provide intermediate storage and separation capability for oil. Surveying these structures to examine their contents is often required when the platforms are being decommissioned. The Robotic Inspection System provides a device and method for imaging the inside of the cells, comprising both hardware and software components. The device is able to move through interconnected pipes, even making 90-degree turns with minimal power. The Robotic Inspection System is able to display three-dimensional range data derived from two-dimensional information. This inspection method and device could significantly reduce the cost of decommissioning cells. The device has the capability to map interior volume, interrogate the integrity of cell fill lines, display real-time video and sonar, and, with future development, possibly sample sediment or oil.
FlashPose: Range and intensity image-based terrain and vehicle relative pose estimation algorithm
FlashPose is a combination of software written in C and FPGA firmware written in VHDL. It is designed to run under the Linux OS environment in an embedded system or within a custom development application on a Linux workstation. The algorithm is based on the classic Iterative Closest Point (ICP) algorithm originally proposed by Besl and McKay. The algorithm takes in a range image from a three-dimensional imager, filters and thresholds the image, and converts it to a point cloud in the Cartesian coordinate system. It then minimizes the distances between the point cloud and a model of the target at the origin of the Cartesian frame by manipulating point cloud rotation and translation. This procedure is repeated a number of times for a single image until a predefined mean square error metric is met; at this point the process repeats for a new image.
The rotation and translation operations performed on the point cloud represent an estimate of relative attitude and position, otherwise known as pose.
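The ICP loop described above alternates between matching each cloud point to its nearest model point and solving for the rigid transform that best aligns the matched pairs. A compact NumPy sketch of that classic scheme follows; it is a simplified illustration under the stated assumptions (brute-force nearest neighbours, noise-free points), not the FlashPose implementation.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) minimizing squared distance between paired points
    (Kabsch/SVD solution)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # correct an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(cloud, model, iters=50, tol=1e-8):
    """Align `cloud` to `model`; the returned (R, t) is the pose estimate."""
    src = cloud.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity)
        d = np.linalg.norm(src[:, None] - model[None], axis=2)
        nn = model[d.argmin(axis=1)]
        R, t = best_fit_transform(src, nn)
        src = src @ R.T + t                      # move the cloud onto the model
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(np.linalg.norm(src - nn, axis=1) ** 2)
        if abs(prev_err - err) < tol:            # mean-square-error stopping metric
            break
        prev_err = err
    return R_total, t_total
```

The accumulated rotation and translation are exactly the "pose" the surrounding text refers to; a production system would replace the brute-force matching with a spatial index (e.g. a k-d tree) for speed.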
In addition to six-degree-of-freedom (6-DOF) pose estimation, FlashPose also provides a range and bearing estimate relative to the sensor reference frame. This estimate is based on a simple algorithm that generates a configurable histogram of range information and analyzes characteristics of the histogram to produce the range and bearing estimate. It can be generated quickly and provides valuable information for seeding the FlashPose ICP algorithm as well as external optical pose algorithms and relative attitude Kalman filters.
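One plausible reading of that histogram scheme: bin the valid ranges, take the dominant bin as the target surface range, and take the image centroid of the pixels in that bin (mapped through the field of view) as the bearing. The sketch below is an assumption-laden illustration of that idea, not the FlashPose algorithm itself; the field-of-view mapping is simplified to linear.

```python
import numpy as np

def range_and_bearing(range_img, fov_deg=(40.0, 40.0), bins=64):
    """Coarse range/bearing estimate from a range image via a range histogram.
    fov_deg is an assumed (horizontal, vertical) field of view in degrees."""
    valid = range_img > 0                       # zero = no lidar return
    hist, edges = np.histogram(range_img[valid], bins=bins)
    k = hist.argmax()                           # dominant range bin = target surface
    est_range = 0.5 * (edges[k] + edges[k + 1])
    # Bearing: centroid of the pixels in the dominant bin, mapped through
    # the sensor field of view (linear small-angle approximation).
    in_bin = valid & (range_img >= edges[k]) & (range_img < edges[k + 1])
    rows, cols = np.nonzero(in_bin)
    h, w = range_img.shape
    az = (cols.mean() / (w - 1) - 0.5) * fov_deg[0]   # + = right of boresight
    el = (rows.mean() / (h - 1) - 0.5) * fov_deg[1]   # + = below boresight
    return est_range, az, el
```

Because the histogram pass touches each pixel only once, an estimate like this is cheap enough to run every frame, which is what makes it useful for seeding slower ICP or Kalman-filter stages.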
Circumferential Scissor Spring Enhances Precision in Hand Controllers
The traditional scissor spring design for hand controllers has been improved upon with a circumferential spring controller mechanism that facilitates easy customization, enhanced durability, and optimum controller feedback. These advantages are partially facilitated by locating the spring on the outside of the mechanism, which allows for easier spring replacement, whether to adjust the deflection force or for maintenance.
The new mechanism comprises two rounded blades, or cams, that pivot forward and back under operation and meet to form a circle. An expansion spring is looped around the blade perimeter and resides in a channel, providing the restoring force that returns the control stick to a neutral position. Because the circumferential spring is longer, the proportion of spring expansion is smaller for a given distance of deflection, so the forces associated with the deflection remain on a more linear portion of the force-deflection curve.
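The linearity argument is easy to see numerically: the same absolute extension is a much smaller fraction of a long spring's free length than of a short one's. The figures below are invented for illustration only; they are not dimensions of the NASA design.

```python
# Illustrative numbers (not from the NASA design): the same 5 mm of cam
# travel stretches a short traditional spring and a long circumferential one.
deflection = 5.0        # mm of spring extension caused by stick deflection
short_spring = 40.0     # mm free length, traditional scissor spring
long_spring = 160.0     # mm free length, circumferential spring

strain_short = deflection / short_spring   # 12.5% of free length
strain_long = deflection / long_spring     # ~3.1% of free length
```

Since real springs deviate from Hooke's law at larger relative extensions, keeping the strain small keeps the restoring force closer to the linear region, which is the feel improvement the text describes.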
The Circumferential Scissor Spring for Controllers is at technology readiness level (TRL) 8 (actual system completed and flight qualified through test and demonstration) and is available for patent licensing. Please note that NASA does not manufacture products itself for commercial sale.
Spacecraft to Remove Orbital Debris
An approach to mitigating the creation of additional orbital debris is to remove the sources of future medium debris by actively removing large spent objects from congested orbits. NASA has introduced the ADRV, an efficient and effective solution to remove large debris from LEO such as spent rocket bodies and non-functional satellites. The concept yields a single-use, low-cost, lightweight, high-mass-fraction vehicle that enables the targeted removal of large orbital debris (1,000 to 4,000 kg mass, 200 to 2,000 km altitude, and 20- to 98-degree inclination). The ADRV performs rendezvous, approach, and capture of non-cooperative tumbling debris objects, maneuvering of the mated vehicle, and controlled, targeted reposition or deorbit of the mated vehicle. Due to its small form factor, up to eight ADRVs can be launched in a single payload, enabling high-impact orbital debris removal missions within the same inclination group.
Three key technologies were developed to enable the ADRV:
- The spacecraft control system (SCS) is a guidance, navigation, and control system that provides vehicle control during all phases of a mission.
- The debris object characterization system (DOCS) characterizes the movement and capture of non-cooperative targets.
- The capture and release system (CARS) allows the vehicle to capture and mate with orbital debris targets.
These technologies can significantly improve the current state-of-the-art capabilities of automated rendezvous and docking technology for debris objects with tumbling rates up to 25 degrees per second. This approach leverages decades of spaceflight experience while automating key mission areas to reduce cost and improve the likelihood of success.