Airborne Machine Learning Estimates for Local Winds and Kinematics (TOP2-277)

Robotics, Automation and Control
GPS-free Estimations Using COTS Sensors for UAS and Air Taxi Operations in Complex Urban Environments
Overview
Future Unmanned Aerial Systems (UAS) and air taxis will require advanced onboard autonomy to operate safely within complex and dynamic urban environments. Urban canyons often create multi-directional, intense, and seemingly unpredictable winds, and exact knowledge of current building sizes, shapes, and positions is often unavailable for real-time navigation. NASA Ames has developed a novel system, MAESTRO (MAchine learning ESTimations for uRban Operations), that improves the flight safety of UAS and air taxis in these environments by enabling rapid onboard estimation of the local surrounding winds and of vehicle kinematics using commercial off-the-shelf (COTS) sensors and advanced onboard computing.

The Technology
The MAchine learning ESTimations for uRban Operations (MAESTRO) system is a novel approach that couples commodity sensors with advanced algorithms to provide real-time onboard estimates of local winds and vehicle kinematics to a vehicle's guidance and navigation system. Sensors and computations are integrated in a novel way to predict local winds and promote safe operations in dynamic urban regions where Global Positioning System/Global Navigation Satellite System (GPS/GNSS) signals and other network communications may be unavailable, or are difficult to obtain among tall buildings due to multi-path reflections and signal diffusion. The system can be implemented onboard an Unmanned Aerial System (UAS); once airborne, it requires no communication with an external data source or with GPS/GNSS. Estimates of the local wind speed and direction are produced from onboard sensors that scan the surrounding building environment. This information can then be used by the onboard guidance and navigation system to determine safe and energy-efficient trajectories for operations in urban and suburban settings. The technology is robust to dynamic environments, input noise, missing data, and other uncertainties, and has been demonstrated successfully in lab experiments and computer simulations.
Left: UAS in synthetic environment using MAESTRO for safe urban operations
Right: GPS-free Onboard Sensing and Computing Integrated with MAESTRO
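While the details of MAESTRO's learned models are not public, the data flow described above (onboard sensors scan the surrounding building geometry, and a trained model maps that scan plus airspeed to a wind estimate) can be sketched as follows. The network architecture, layer sizes, and random weights below are illustrative placeholders, not the NASA model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class WindEstimator:
    """Illustrative stand-in for a learned wind estimator: a tiny
    feedforward network mapping a range-scan of nearby building
    geometry (plus airspeed) to a local wind vector."""

    def __init__(self, n_scan=36, hidden=16):
        # Weights would normally be trained offline on simulated
        # urban-canyon flow fields; random values are used here
        # purely so the sketch runs end-to-end.
        self.W1 = rng.normal(0.0, 0.1, (hidden, n_scan + 1))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (3, hidden))
        self.b2 = np.zeros(3)

    def estimate(self, scan_ranges, airspeed):
        """Return (wind speed m/s, wind direction rad) from onboard data."""
        x = np.concatenate([scan_ranges, [airspeed]])
        h = relu(self.W1 @ x + self.b1)
        u, v, _ = self.W2 @ h + self.b2   # horizontal wind components
        return np.hypot(u, v), np.arctan2(v, u)

# One 36-beam scan of distances to surrounding buildings (metres).
scan = rng.uniform(5.0, 120.0, 36)
speed, direction = WindEstimator().estimate(scan, airspeed=12.0)
```

In a real deployment the scan features would come from the vehicle's building-scanning sensors and the weights from offline training; the point here is only that inference is a handful of matrix products, consistent with the sub-0.1-second runtimes cited under Benefits.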
Benefits
  • Accurate and fast estimates of the local wind environment and vehicle kinematics
  • Uses advanced machine learning algorithms linked with commodity sensors
  • Allows for safe navigation through and around complex urban environments
  • Allows UAS and air taxis to estimate winds based on geometry of surroundings
  • Provides all-azimuth predictions
  • Integrates well with existing airborne sensors
  • Requires no external communication or network after deployment
  • Provides robust and efficient predictions for dynamic (or possibly unknown) urban geometry
  • Runs efficiently (less than 0.1 sec) on commodity portable computer hardware
  • Produces actionable advisories that fit seamlessly into the GNC process stream
  • Provides GPS-free position and attitude estimations

Applications
  • Urban air taxis / Urban Air Mobility (UAM)
  • Urban UAS package delivery
  • UAS Emergency Medical Services (EMS), e.g., toxic plume/smoke/ash prediction for urban fires and pollution spills
  • UAS-based surveillance and infrastructure inspection services
  • Defense and Intelligence operations
  • Ship air wake predictions for safe maritime UAS operations
  • Landing zone wind field predictions for precision parachute airdrops
  • Detailed wind field predictions at urban airports
  • Wind predictions in mountain valleys, canyons, etc.
  • Improved local ballistic trajectory predictions
Technology Details

Reference Number: TOP2-277
Case Number: ARC-17836-1
Patent Number: 11,046,430 (patent-pending)
Availability: Patent Only / No Software
Similar Results
Unmanned Aerial Systems (UAS) Traffic Management
NASA Ames has developed the Autonomous Situational Awareness Platform for UAS (ASAP-U), a traffic management system for incorporating Unmanned Aerial Systems (UAS) into the National Airspace System. ASAP combines existing navigation technology (both aviation and maritime) with new procedures to safely integrate UAS with other airspace vehicles. The ASAP-U module includes a transmitter, receivers, and links to other UAS systems. It collects Global Positioning System (GPS) coordinates and time from a satellite antenna and feeds this data to the UAS's flight management system for navigation. The module autonomously and continuously broadcasts UAS information via a radio frequency (RF) antenna using Self-Organized Time Division Multiple Access (SOTDMA) to prevent signal overlap, and it receives ASAP data from other aircraft. In case of transmission overload, priority is given to closer aircraft. The module can also receive weather data, navigational-aid data, terrain data, and updates to the UAS flight plan.

The collected data is relayed to the flight management system, which uses its databases and a navigation computer to calculate any necessary flight plan modifications based on regulations, right-of-way rules, terrain, and geofencing. Proposed flight plans are checked against these databases: if no conflicts are found, the plan is implemented; if conflicts arise, the plan is modified. Because the module continuously receives and transmits data, both for the host UAS and from other aircraft, it can detect conflicts with other aircraft, terrain, weather, and geofencing in flight. Based on this information, the flight management system determines the need for course adjustments, and the flight control system executes them to maintain a safe route.
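The SOTDMA slot-sharing and closer-aircraft-priority behavior described above can be illustrated with a toy slot-selection routine. The frame size (borrowed from AIS-style one-minute frames), the candidate window, and the overload rule below are assumptions for the sketch, not ASAP-U's actual parameters.

```python
import random

FRAME_SLOTS = 2250  # AIS-style slots per one-minute frame (assumed here)

def choose_slot(occupied, candidate_window, rng=random):
    """Pick a transmit slot in the candidate window.

    occupied maps slot number -> range (km) to the station using it.
    If a free slot exists, pick one at random (self-organization).
    If every candidate slot is taken (transmission overload), reuse
    the slot held by the most distant station, so that closer
    aircraft keep their reservations -- the priority rule above.
    """
    free = [s for s in candidate_window if s not in occupied]
    if free:
        return rng.choice(free)
    return max(candidate_window, key=lambda s: occupied[s])

# Three neighbors heard at 3 km, 45 km, and 8 km; one slot still free.
occupied = {10: 3.0, 11: 45.0, 12: 8.0}
slot = choose_slot(occupied, candidate_window=range(10, 14))
```

When the window shrinks so that no slot is free, the routine deliberately collides with the farthest station (slot 11 in this example), matching the "priority is given to closer aircraft" behavior.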
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. A feature detection technique, such as Hough circles or Harris corner detection, identifies which portions of each image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to them. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. These estimates are then fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate landing.
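The final refinement stage of the pipeline above can be sketched with a simplified filter. A linear, constant-velocity Kalman filter over a single axis stands in here for the full extended Kalman filter over position, velocity, and orientation; the noise parameters and the sequence of vision fixes are illustrative.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.5, r=4.0):
    """One predict/update cycle.  x = [position, velocity]; z is a
    noisy position fix from the vision pose module."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])     # process noise
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the vision measurement.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Fuse a sequence of descending altitude fixes (metres) at 1 Hz.
x, P = np.zeros(2), np.eye(2) * 100.0       # uninformative prior
for z in [50.0, 48.9, 47.8, 46.7, 45.6, 44.5]:
    x, P = kf_step(x, P, np.array([z]), dt=1.0)
```

After a few fixes the filter tracks both the position and the descent rate (a negative velocity), which is exactly the extra state the vision fixes alone do not provide.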
Adaptive wind estimation for small unmanned aerial systems using motion data
This technology presents an onboard estimation, navigation, and control architecture for multi-rotor drones flying in urban environments. It consists of adaptive algorithms that estimate the vehicle's aerodynamic drag coefficients with respect to still air and the urban wind components along the flight trajectory, with guaranteed fast and reliable convergence to the true values. Navigation algorithms generate feasible trajectories between given waypoints that account for the estimated wind, and control algorithms track those trajectories as long as the vehicle retains enough functioning rotors to compensate for it. The technology measures wind profiles using only the motion sensors any drone already carries to operate, such as the inertial measurement unit (IMU) and rate gyroscopes; no additional sensors are required. The resulting wind estimates can be used for stability or trajectory calculations and are adaptable to any UAV, regardless of prior knowledge of its weight and inertia. The estimation method runs on onboard computing power in real time: it converges rapidly to the true values, is computationally inexpensive, and requires neither special hardware nor specific vehicle maneuvers for convergence. The software is developed in a Matlab/Simulink environment and has executable versions suitable for the majority of existing onboard controllers. The algorithms have been tested in simulations.
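The core estimation idea can be illustrated under an assumed linear drag model along one axis: measured acceleration a = u - c·(v - w), with thrust input u, ground speed v, unknown drag coefficient c, and unknown wind component w. Rewriting as a - u = -c·v + (c·w) makes the problem linear in the parameters [c, c·w], so both c and w can be recovered from motion data alone. The source describes recursive adaptive laws with convergence guarantees; the batch least-squares fit below is a simplified stand-in, and the drag model and numbers are illustrative.

```python
import numpy as np

def estimate_drag_and_wind(u, v, a):
    """Fit drag coefficient c and wind w from arrays of thrust input,
    ground speed, and measured acceleration, assuming a = u - c*(v - w)."""
    # Regress (a - u) on [-v, 1]:  a - u = -c*v + (c*w)
    A = np.column_stack([-v, np.ones_like(v)])
    (c, cw), *_ = np.linalg.lstsq(A, a - u, rcond=None)
    return c, cw / c

# Synthetic flight data with true drag c = 0.3 and true wind w = 4.0 m/s.
rng = np.random.default_rng(1)
v = rng.uniform(-10.0, 10.0, 500)            # varying ground speed
u = rng.uniform(-2.0, 2.0, 500)              # varying thrust input
a = u - 0.3 * (v - 4.0) + rng.normal(0.0, 0.05, 500)  # noisy IMU reading
c_hat, w_hat = estimate_drag_and_wind(u, v, a)
```

Because only u, v, and a appear, the estimate uses exactly the quantities an IMU and velocity estimator already provide, matching the claim that no additional sensors are needed; the varying speed and thrust play the role of the excitation that ordinary flight provides.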
Low Weight Flight Controller Design
Increasing demand for smaller UAVs (e.g., sometimes with wingspans on the order of six inches and weighing less than one pound) has generated a need for much smaller flight and sensing equipment. NASA Langley's new sensing and flight control system for small UAVs includes both an active flight control board and an avionics sensor board. Together, these compare the UAV's position, heading, and orientation with pre-programmed data to determine and apply the flight control inputs needed to maintain the desired course. To satisfy the small form-factor requirements, micro-electro-mechanical systems (MEMS) are used to realize the various flight control sensing devices. MEMS-based devices are commercially available single-chip devices that lend themselves to easy integration onto a circuit board. The system uses less energy than current systems, allowing solar panels mounted on the vehicle to generate the system's power. While the lightweight technology was designed for smaller UAVs, the sensors could be distributed throughout larger UAVs, depending on the application.
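The compare-and-correct loop described above can be sketched with a single-axis PID controller driving a sensed heading toward a pre-programmed course. The gains, timestep, and toy vehicle response below are illustrative, not the Langley board's actual control laws.

```python
class PID:
    """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Steer a simulated heading from 0 toward a 90-degree programmed course.
pid = PID(kp=0.8, ki=0.2, kd=0.2)
heading, target, dt = 0.0, 90.0, 0.1
for _ in range(300):
    cmd = pid.update(target - heading, dt)   # compare sensed vs. programmed
    heading += cmd * dt                      # toy vehicle response
```

On the real board the same comparison would run per axis (position, heading, orientation) against the avionics sensor readings, with the outputs driving the control surfaces.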
Real-Time, High-Resolution Terrain Information in Computing-Constrained Environments
NASA Armstrong collaborated with the U.S. Air Force to develop algorithms that interpret highly encoded large-area terrain maps with geographically user-defined error tolerances. A key feature of the software is its ability to locally decode and render digital terrain maps (DTMs) in real time for a high-performance airplane that may need automatic course correction due to unexpected and dynamic events. Armstrong researchers are integrating the algorithms into a Global Elevation Data Adaptive Compression System (GEDACS) software package, which will enable customized maps from a variety of data sources.

How It Works
The DTM software achieves its high-performance encoding and decoding using a unique combination of regular and semi-regular geometric tiling for optimal rendering of a requested map. This tiling allows the software to retain important slope information and continuously and accurately represent the terrain. Maps and decoding logic are integrated into an aircraft's existing onboard computing environment and can operate on a mobile device, an electronic flight bag (EFB), or flight control and avionics computer systems. Users can adjust the DTM encoding routines and error tolerances to suit evolving platform and mission requirements, and maps can be tailored to the flight profiles of a wide range of aircraft, including fighter jets, UAVs, and general aviation aircraft. The DTM and GEDACS software enable the encoding of global digital terrain data into a file size small enough to fit onto a tablet or other handheld/mobile device for next-generation ground collision avoidance, and with improved digital terrain data, aircraft could attain better performance. The system monitors the ground approach and an aircraft's ability to maneuver by predicting several multidirectional escape trajectories, a feature that will be particularly advantageous to general aviation aircraft.
Why It Is Better
Conventional DTM encoding techniques used aboard high-performance aircraft typically achieve relatively low encoding ratios, and the computational complexity of their decoding can be high, making them unsuitable for the real-time, constrained computing environments of high-performance aircraft. Implementation costs are also often prohibitive for general aviation aircraft. This software achieves its high encoding ratio by intelligently interpreting its maps rather than requiring absolute retention of all data. For example, the DTM software notes the perimeter and depth of a mining pit but ignores contours that are irrelevant given the climb and turn performance of a particular aircraft, and therefore does not waste valuable computational resources. Through this type of intelligent processing, the software eliminates the need to retain all data and achieves a much higher encoding ratio than conventional terrain-mapping software. The resulting compact encoding allows users to store a larger library of DTMs in one place, enabling comprehensive map coverage at all times. Additionally, the ability to selectively tailor resolution enables high-fidelity sections of terrain data to be incorporated seamlessly into a map.
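The encode-to-a-tolerance idea can be illustrated with a toy quadtree coder: flat tiles collapse to a single value while detailed terrain keeps fine tiles. The real GEDACS tiling (regular plus semi-regular geometric tiles that preserve slope) is far richer; this sketch only shows how a user-defined error tolerance drives the encoding ratio.

```python
import numpy as np

def encode(tile, tol):
    """Encode a square elevation tile to within a max error tolerance.

    Returns either a float (the whole tile approximated by its mean)
    or a list of four encoded sub-tiles."""
    mean = float(tile.mean())
    if np.abs(tile - mean).max() <= tol or tile.shape[0] == 1:
        return mean                       # flat enough: one value suffices
    h = tile.shape[0] // 2                # otherwise split into quadrants
    return [encode(tile[:h, :h], tol), encode(tile[:h, h:], tol),
            encode(tile[h:, :h], tol), encode(tile[h:, h:], tol)]

def count_leaves(node):
    """Number of stored values -- a proxy for encoded size."""
    return 1 if isinstance(node, float) else sum(count_leaves(c) for c in node)

# Gentle slope with one sharp ridge: smooth areas collapse to coarse
# tiles, the ridge keeps fine detail.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
terrain = 100.0 * x + 500.0 * np.exp(-((x - 0.5) ** 2) / 0.001)
tree = encode(terrain, tol=5.0)
```

Tightening or loosening `tol` trades file size against fidelity, which is the selectively tailored resolution described above; the 64x64 grid here shrinks to far fewer stored values than raw samples while keeping the ridge sharp.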