
Optics
Image from internal NASA presentation developed by inventor and dated May 4, 2020.
Reflection-Reducing Imaging System for Machine Vision Applications
NASA's imaging system comprises a small CMOS camera fitted with a C-mount lens affixed to a 3D-printed mount. Light from a high-intensity LED is passed through a lens that both diffuses and collimates the LED output, and this light is coupled onto the camera's optical axis using a 50:50 beam-splitting prism. Use of the collimating/diffusing lens to condition the LED output provides an illumination source that is of similar diameter to the camera's imaging lens. This is the feature that reduces or eliminates shadows that would otherwise be projected onto the subject plane as a result of refractive-index variations in the imaged volume. By coupling the light from the LED unit onto the camera's optical axis, reflections from windows, which are often present in wind tunnel facilities to allow direct views of a test section, can be minimized or eliminated when the camera is placed at a small angle of incidence relative to the window's surface. This effect is demonstrated in the image on the bottom left of the page. Eight imaging systems were fabricated and used to capture background-oriented schlieren (BOS) measurements of flow from a heat gun in the 11-by-11-foot test section of the NASA Ames Unitary Plan Wind Tunnel (see test setup on right). Two additional camera systems (not pictured) captured photogrammetry measurements.
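The BOS processing these cameras feed is not detailed above, but the basic idea can be sketched as follows. This is a minimal illustration, not NASA's pipeline: it assumes a flow-off reference image and a flow-on image of the same background pattern (file names are placeholders) and uses OpenCV's dense optical flow in place of the cross-correlation step typical of BOS codes to recover the apparent background displacements caused by refractive-index variations.

    # Minimal background-oriented schlieren (BOS) sketch (illustrative only):
    # density gradients in the imaged volume shift the apparent position of a
    # background pattern; those shifts are recovered by comparing a flow-off
    # reference image with a flow-on image. File names are placeholders.
    import cv2
    import numpy as np

    reference = cv2.imread("background_flow_off.png", cv2.IMREAD_GRAYSCALE)
    flow_on = cv2.imread("background_flow_on.png", cv2.IMREAD_GRAYSCALE)

    # Dense optical flow stands in here for the cross-correlation step used in
    # typical BOS processing; the displacement field is proportional to the
    # integrated refractive-index gradient along each line of sight.
    displacement = cv2.calcOpticalFlowFarneback(
        reference, flow_on, None,
        pyr_scale=0.5, levels=3, winsize=31,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

    # Visualize the displacement magnitude as a schlieren-like image.
    magnitude = np.linalg.norm(displacement, axis=2)
    scaled = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("bos_magnitude.png", scaled.astype(np.uint8))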
Information Technology and Software
https://images.nasa.gov/details-iss062e000422
Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors. To solve this Perspective-n-Point challenge, the software uses computer vision algorithms to determine alignment solutions between the position of the camera eyepoint and the position of the end effector, since the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms. The software includes a machine learning component that uses a trained regional Convolutional Neural Network (R-CNN) to analyze a live camera feed and determine the ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations. Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications operating outside of controlled environments may also find value in the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit. This computer vision software is at technology readiness level (TRL) 6 (system/subsystem model or prototype demonstration in an operational environment), and the software is now available to license. Please note that NASA does not manufacture products itself for commercial sale.
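The core Perspective-n-Point step described above can be sketched with OpenCV's generic solver. This is an illustrative sketch, not the flight software: the fixture geometry, detected image points, and camera intrinsics below are placeholders, and detection of the 2D points (handled on orbit by the trained R-CNN) is assumed to have already occurred.

    # Minimal Perspective-n-Point (PnP) sketch: given the known 3D geometry of
    # a grapple fixture and the 2D pixel locations of those same points in a
    # camera frame, recover the fixture pose relative to the camera.
    # All coordinates and intrinsics below are illustrative placeholders.
    import cv2
    import numpy as np

    # Known 3D feature points on the fixture, in the fixture frame (meters).
    fixture_points_3d = np.array([
        [0.00, 0.00, 0.00],
        [0.10, 0.00, 0.00],
        [0.10, 0.10, 0.00],
        [0.00, 0.10, 0.00],
        [0.05, 0.05, 0.02],
    ], dtype=np.float64)

    # Matching 2D detections in the image (pixels), e.g. from a trained detector.
    image_points_2d = np.array([
        [320.0, 240.0],
        [400.0, 242.0],
        [398.0, 320.0],
        [318.0, 318.0],
        [360.0, 280.0],
    ], dtype=np.float64)

    # Pinhole camera intrinsics (placeholder focal length and principal point).
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)  # assume the image has already been undistorted

    ok, rvec, tvec = cv2.solvePnP(fixture_points_3d, image_points_2d,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if ok:
        rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the fixture pose
        print("fixture position in camera frame (m):", tvec.ravel())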
Optics
Ruggedized Infrared Camera
This new technology applies NASA engineering to a FLIR Systems Boson® Model No. 640 to enable a robust IR camera for use in space and other extreme applications. Enhancements to the standard Boson® platform include a ruggedized housing, connector, and interface. The Boson® is a small, uncooled, commercial off-the-shelf (COTS) IR camera based on microbolometer technology that operates in the long-wave infrared (LWIR) portion of the IR spectrum. It is available with several lens configurations. NASA's modifications allow the IR camera to survive launch conditions and improve heat removal for space-based (vacuum) operation. The design includes a custom housing to secure the camera core along with a lens clamp to maintain a tight lens-core connection during high-vibration launch conditions. The housing also provides additional conductive cooling for the camera components, allowing operation in a vacuum environment. A custom printed circuit board (PCB) in the housing allows for a USB connection using a military-standard (MIL-STD) miniaturized locking connector instead of the standard USB Type-C connector. The system maintains the USB standard protocol for easy compatibility and "plug-and-play" operation.
Robotics, Automation and Control
Airborne Machine Learning Estimates for Local Winds and Kinematics
The MAchine learning ESTimations for uRban Operations (MAESTRO) system is a novel approach that couples commodity sensors with advanced algorithms to provide real-time onboard local wind and kinematics estimations to a vehicle's guidance and navigation system. Sensors and computations are integrated in a novel way to predict local winds and promote safe operations in dynamic urban regions where Global Positioning System/Global Navigation Satellite System (GPS/GNSS) and other network communications may be unavailable or difficult to obtain when surrounded by tall buildings, due to multi-path reflections and signal diffusion. The system can be implemented onboard an Unmanned Aerial System (UAS), and once airborne, the system does not require communication with an external data source or with GPS/GNSS. Estimations of the local winds (speed and direction) are created using inputs from onboard sensors that scan the local building environment. This information can then be used by the onboard guidance and navigation system to determine safe and energy-efficient trajectories for operations in urban and suburban settings. The technology is robust to dynamic environments, input noise, missing data, and other uncertainties, and has been demonstrated successfully in lab experiments and computer simulations.
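The description does not specify MAESTRO's model architecture or sensor suite, so the following is only a hedged sketch of the general pattern it describes: a regressor trained offline to map onboard sensor features to local wind speed and direction, then queried in real time onboard. The feature set and synthetic training data here are hypothetical.

    # Illustrative sketch only: MAESTRO's actual models and inputs are not
    # public in this description. This shows the general pattern of regressing
    # local wind speed and direction from onboard measurements. Feature names
    # (airspeed, attitude rates, range to nearby structures) are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic data standing in for logged flight + wind-truth samples:
    # columns are [indicated airspeed, pitch rate, roll rate, range to building].
    X_train = rng.normal(size=(500, 4))
    y_train = np.column_stack([
        2.0 + 0.5 * X_train[:, 0] + rng.normal(scale=0.1, size=500),    # wind speed (m/s)
        90.0 + 30.0 * X_train[:, 3] + rng.normal(scale=5.0, size=500),  # wind direction (deg)
    ])

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Onboard, the latest sensor vector would be fed through the trained model
    # to produce a wind estimate for the guidance and navigation system.
    latest_sensors = rng.normal(size=(1, 4))
    wind_speed, wind_direction = model.predict(latest_sensors)[0]
    print(f"estimated wind: {wind_speed:.1f} m/s from {wind_direction:.0f} deg")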
Aerospace
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. In this system, feature detection techniques such as Hough circle and Harris corner detection are used to identify which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to known landmarks. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate aircraft landing.
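The landmark-candidate detection step described above can be sketched with standard OpenCV calls. This is an illustration only, not the VALS implementation: the input frame and detector thresholds are placeholders, and the later stages (matching against the stored landmark list, the COPOSIT pose solve, and the extended Kalman filter) are omitted.

    # Minimal sketch of landmark-candidate detection: Hough circles and Harris
    # corners flag image regions that may correspond to known ground landmarks.
    # The input frame and all thresholds below are placeholders.
    import cv2
    import numpy as np

    frame = cv2.imread("approach_frame.png", cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(frame, (9, 9), 2)

    # Circular features (e.g. tanks, helipad markings) via the Hough transform.
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=30, minRadius=5, maxRadius=80)

    # Corner-like features (e.g. building or runway corners) via Harris response.
    harris = cv2.cornerHarris(np.float32(frame), blockSize=2, ksize=3, k=0.04)
    corner_pixels = np.argwhere(harris > 0.01 * harris.max())  # (row, col) pairs

    candidates = []
    if circles is not None:
        candidates += [(float(x), float(y)) for x, y, _ in circles[0]]
    candidates += [(float(c), float(r)) for r, c in corner_pixels]

    # Each candidate would next be compared against the stored landmark list;
    # matched landmarks' world coordinates feed the pose and filtering stages.
    print(f"{len(candidates)} candidate landmark features detected")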
Stay up to date by following NASA's Technology Transfer Program on Facebook, Twitter, LinkedIn, and YouTube.