Aerospace
Real-Time Drag Optimization Control Framework
According to International Air Transport Association statistics, the global airline industry spent an estimated $140 billion on fuel in 2017, making fuel a major cost driver for airlines. Advanced future transport aircraft will likely employ adaptive wing technologies that allow the wings to reconfigure themselves into optimal shapes for improved aerodynamic efficiency throughout the flight envelope. The need for adaptive wing technologies is driven by the cost of fuel consumption in commercial aviation.
NASA Ames has developed a novel way to address aerodynamic inefficiencies experienced during aircraft operation. The real-time drag optimization control method uses on-board, real-time sensor data on aircraft conditions and performance during flight (such as engine thrust or wing deflection). These sensor data are fed into an on-board model estimation and drag optimization system, which estimates the aerodynamic model and calculates the optimal settings of the flight control surfaces. As the wings deflect during flight, the system continuously updates the optimal solution for the flight control surfaces, iteratively optimizing the wing shape to reduce drag throughout the flight. The new control system for the flight control surfaces can be integrated into an existing flight control system.
This technology can be used on passenger aircraft, cargo aircraft, or high-performance supersonic jets to reduce drag, improve aerodynamic efficiency, and increase fuel efficiency during flight. In addition, because it does not require a specific aircraft math model, it needs no customization for different aircraft designs. The system promises both economic and environmental benefits to the aviation industry, as less fuel is burned.
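The estimate-then-optimize loop described above can be illustrated with a minimal sketch. All numbers and function names here are hypothetical: `measured_drag` stands in for the on-board sensor measurements, and the "model estimation" step is reduced to a local quadratic fit of drag versus a single control surface deflection, which the loop then drives toward its minimum.

```python
import numpy as np

# Hypothetical stand-in for on-board sensor data: drag coefficient as a
# function of one control surface deflection (degrees). In flight this
# would come from measurements, not a known formula.
def measured_drag(deflection_deg):
    return 0.021 + 0.0004 * (deflection_deg - 3.0) ** 2

def optimize_drag(initial_deflection=0.0, iterations=5, probe=1.0):
    """Iteratively estimate a local quadratic drag model from sampled
    'sensor' readings and move the surface toward the model's minimum."""
    deflection = initial_deflection
    for _ in range(iterations):
        # Model estimation step: sample drag at three nearby settings.
        xs = np.array([deflection - probe, deflection, deflection + probe])
        ys = np.array([measured_drag(x) for x in xs])
        a, b, _ = np.polyfit(xs, ys, 2)   # fit CD ~ a*d^2 + b*d + c
        if a <= 0:
            continue                      # fit not convex; keep current setting
        # Optimization step: analytic minimum of the fitted quadratic.
        deflection = -b / (2.0 * a)
    return deflection

best = optimize_drag()  # converges to the drag-minimizing deflection
```

In the real system this loop runs continuously, so the "optimal" deflection tracks the changing aerodynamic model as the wings deflect during flight rather than settling on a single fixed answer.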
Information Technology and Software
Computer Vision Lends Precision to Robotic Grappling
The goal of this computer vision software is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors. Because the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms, the software solves a Perspective-n-Point problem, using computer vision algorithms to determine alignment solutions between the position of the camera eyepoint and the position of the end effector.
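The camera-to-end-effector alignment step can be sketched as a rigid-frame change of coordinates. The rotation and offset values below are hypothetical placeholders for the fixed camera mounting; the fixture pose in the camera frame would in practice come from a PnP solver (e.g. OpenCV's `cv2.solvePnP`), not be hand-supplied as here.

```python
import numpy as np

# Hypothetical fixed mounting of the borescope camera relative to the
# end effector: aligned axes, offset by a few centimeters (per the text).
R_cam_in_ee = np.eye(3)
t_cam_in_ee = np.array([0.0, 0.05, 0.03])  # 5 cm lateral, 3 cm forward

def fixture_pose_in_ee_frame(R_fix_in_cam, t_fix_in_cam):
    """Re-express a fixture pose estimated in the camera frame
    (e.g. output of a PnP solver) in the end-effector frame."""
    R = R_cam_in_ee @ R_fix_in_cam
    t = R_cam_in_ee @ t_fix_in_cam + t_cam_in_ee
    return R, t

# A fixture seen 1 m straight ahead of the camera lands slightly
# off-axis in the end-effector frame because of the mounting offset.
R_ee, t_ee = fixture_pose_in_ee_frame(np.eye(3), np.array([0.0, 0.0, 1.0]))
```

This is the "alignment solution" in miniature: without the correction, the operator would be steering the camera eyepoint onto the fixture rather than the grasping mechanism itself.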
The software includes a machine learning component that uses a trained regional Convolutional Neural Network (R-CNN) to analyze a live camera feed and identify ISS fixture targets a robotic arm operator can interact with on orbit. This feature is intended to increase the grappling operational range of the ISS's main robotic arm from a previous maximum of 0.5 meters for certain target types to greater than 1.5 meters, while significantly reducing computation times for grasping operations.
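Downstream of the network, the detections still have to be reduced to targets an operator can actually act on. The sketch below assumes a detection record with a label, a confidence score, and an estimated range; only the 1.5-meter operational range comes from the text, and the confidence threshold is an invented placeholder.

```python
from dataclasses import dataclass

GRASP_RANGE_M = 1.5  # extended operational range cited for the system

@dataclass
class Detection:
    label: str        # fixture type reported by the network
    confidence: float # network confidence score, 0..1
    range_m: float    # estimated distance from the end effector

def graspable_targets(detections, min_confidence=0.8):
    """Filter raw network detections down to fixtures within the
    system's grappling range (hypothetical confidence threshold)."""
    return [d for d in detections
            if d.confidence >= min_confidence and d.range_m <= GRASP_RANGE_M]

# Example: one confident in-range fixture, one out of range, one low-confidence.
dets = [Detection("fixture", 0.95, 1.2),
        Detection("fixture", 0.95, 2.0),
        Detection("fixture", 0.50, 0.8)]
targets = graspable_targets(dets)  # keeps only the first detection
```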
Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities. A wide range of emerging terrestrial robotic applications operating outside of controlled environments may also benefit from the dynamic object recognition and state determination capabilities of this technology, as successfully demonstrated by NASA on orbit.
This computer vision software is at technology readiness level (TRL) 6 (system/sub-system model or prototype demonstration in an operational environment), and the software is now available to license. Please note that NASA does not manufacture products itself for commercial sale.