Real-Time, High-Resolution Terrain Information in Computing-Constrained Environments

Software for aeronautics collision avoidance and a range of research areas
Overview
Data-adaptive algorithms are the critical enabling technology for automatic collision avoidance system efforts at NASA's Armstrong Flight Research Center. These Armstrong-developed algorithms provide an extensive and highly efficient encoding process for global-scale digital terrain maps (DTMs) along with a real-time decoding process to locally render map data. Available for licensing, these terrain-mapping algorithms are designed to be easily integrated into an aircraft's existing onboard computing environment or into an electronic flight bag (EFB) or mobile device application. In addition to its use within next-generation collision avoidance systems, the software can be adapted for use in a wide variety of applications, including aerospace satellites, automobiles, scientific research, marine charting systems, and medical devices.

The Technology
NASA Armstrong collaborated with the U.S. Air Force to develop algorithms that interpret highly encoded large-area terrain maps with geographically user-defined error tolerances. A key feature of the software is its ability to locally decode and render DTMs in real time for a high-performance airplane that may need automatic course correction due to unexpected and dynamic events. Armstrong researchers are integrating the algorithms into a Global Elevation Data Adaptive Compression System (GEDACS) software package, which will enable customized maps from a variety of data sources.

How It Works
The DTM software achieves its high-performance encoding and decoding using a unique combination of regular and semi-regular geometric tiling to optimally render a requested map. This tiling allows the software to retain important slope information and represent the terrain continuously and accurately. Maps and decoding logic are integrated into an aircraft's existing onboard computing environment and can operate on a mobile device, an EFB, or flight control and avionics computer systems. Users can adjust the DTM encoding routines and error tolerances to suit evolving platform and mission requirements, and maps can be tailored to the flight profiles of a wide range of aircraft, including fighter jets, UAVs, and general aviation aircraft. The DTM and GEDACS software enable the encoding of global digital terrain data into a file small enough to fit on a tablet or other handheld/mobile device for next-generation ground collision avoidance. With improved digital terrain data, aircraft could attain better performance. The system monitors the ground approach and an aircraft's ability to maneuver by predicting several multidirectional escape trajectories, a feature that will be particularly advantageous to general aviation aircraft.
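The tiling scheme itself is proprietary, but the core idea, splitting terrain into tiles just fine enough to meet a vertical error tolerance, can be sketched as a quadtree over a height grid. Everything below (the planar-fit criterion, the synthetic terrain, all names) is an illustrative assumption, not the GEDACS algorithm:

```python
import numpy as np

def fit_plane(tile):
    """Least-squares plane z = a*x + b*y + c over a tile.
    Returns the coefficients and the worst-case vertical error."""
    h, w = tile.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
    return coeffs, np.abs(A @ coeffs - tile.ravel()).max()

def encode(tile, x, y, tol, out):
    """Recursively split a tile until one plane fits within `tol` metres."""
    coeffs, err = fit_plane(tile)
    h, w = tile.shape
    if err <= tol or min(h, w) <= 2:
        out.append((x, y, h, w, coeffs))  # one small record per leaf tile
        return
    h2, w2 = h // 2, w // 2
    encode(tile[:h2, :w2], x, y, tol, out)
    encode(tile[:h2, w2:], x + w2, y, tol, out)
    encode(tile[h2:, :w2], x, y + h2, tol, out)
    encode(tile[h2:, w2:], x + w2, y + h2, tol, out)

def decode(records, shape):
    """Render the encoded tiles back into a full-resolution height grid."""
    dtm = np.zeros(shape)
    for x, y, h, w, (a, b, c) in records:
        ys, xs = np.mgrid[0:h, 0:w]
        dtm[y:y + h, x:x + w] = a * xs + b * ys + c
    return dtm

# Synthetic terrain: a gentle ramp with a flat-topped bump.
grid = np.fromfunction(lambda y, x: 0.05 * x + 0.02 * y, (64, 64))
grid[20:28, 20:28] += 5.0
records = []
encode(grid, 0, 0, 1.0, records)
recon = decode(records, grid.shape)
```

Flat regions collapse into a handful of large tiles while the bump's edges force fine tiles, which is the trade that keeps file sizes small while honoring a per-region error tolerance.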
Why It Is Better
Conventional DTM encoding techniques used aboard high-performance aircraft typically achieve relatively low compression ratios, and the computational complexity of their decoding can be high, making them unsuitable for the real-time, constrained computing environments of high-performance aircraft. Implementation costs are also often prohibitive for general aviation aircraft. This software achieves its high compression ratio by intelligently interpreting its maps rather than requiring absolute retention of all data. For example, the DTM software notes the perimeter and depth of a mining pit but ignores contours that are irrelevant given the climb and turn performance of a particular aircraft, and therefore does not waste valuable computational resources. Through this type of intelligent processing, the software eliminates the need to retain all data and achieves a much higher compression ratio than conventional terrain-mapping software. The resulting compression allows users to store a larger library of DTMs in one place, enabling comprehensive map coverage at all times. Additionally, the ability to selectively tailor resolution enables high-fidelity sections of terrain data to be incorporated seamlessly into a map.
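The article does not disclose how "irrelevant" contours are identified. One standard technique with the same flavor is Douglas-Peucker line simplification applied to a terrain cross-section, where a more maneuverable aircraft justifies a looser vertical tolerance; the profile and tolerance values below are invented for illustration:

```python
import numpy as np

def simplify(xs, zs, tol):
    """Douglas-Peucker simplification of a terrain profile: keep only the
    points needed so the piecewise-linear result stays within `tol` metres
    (vertically) of the original samples."""
    keep = np.zeros(len(xs), dtype=bool)
    keep[0] = keep[-1] = True

    def rec(i, j):
        if j <= i + 1:
            return
        # Vertical deviation of interior points from the chord i -> j.
        t = (xs[i + 1:j] - xs[i]) / (xs[j] - xs[i])
        dev = np.abs(zs[i + 1:j] - (zs[i] + t * (zs[j] - zs[i])))
        k = int(np.argmax(dev)) + i + 1
        if dev[k - i - 1] > tol:
            keep[k] = True
            rec(i, k)
            rec(k, j)

    rec(0, len(xs) - 1)
    return keep

# A rolling profile with a deep pit: the pit's rim and depth survive
# simplification, the small undulations do not.
xs = np.arange(0.0, 101.0)
zs = 2.0 * np.sin(xs / 5.0)
zs[40:61] -= 30.0
keep_agile = simplify(xs, zs, tol=5.0)   # agile aircraft: coarse map suffices
keep_tight = simplify(xs, zs, tol=0.5)   # low-performance aircraft: finer map
```

The loose tolerance discards the small undulations entirely while still recording the pit's walls and floor, mirroring the "note the pit, ignore the irrelevant contours" behavior described above.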
Benefits
  • Efficient: Provides very high compression ratios (5,000:1 in some configurations) and rapid, high-performance downsampling in real-time, constrained-computing environments
  • Powerful: Integrates more than 250 billion separate pieces of terrain information into a single terrain map
  • Improved imaging: Produces images that are 1,000 times more detailed, with 2 to 3 times greater fidelity, than current aircraft mapping systems
  • Highly configurable: Merges any number of DTM products to create the best available global DTM at any desired resolution with easily defined geo-referenced variable fidelity that requires a minimum file size
  • Accurate: Features spatially controlled allowable-error induction (vertical and horizontal) in several independent regions
  • Portable: Works on mobile devices or EFB applications, making it usable for the general aviation community
  • Affordable and accessible: Can be implemented on existing aircraft systems, offering an industry-standard C/C++ code base and map formats

Applications
  • Military and civil aeronautics (collision avoidance, aerial firefighting, crop dusting)
  • Unmanned aerial vehicle (UAV) navigation and research
  • Automotive global positioning systems (GPS)
  • Geographical prediction and planning (wind turbines, watershed, weather, urban planning)
  • Marine charting systems
  • Geospatial information systems
  • Medical software
  • Earth science data collection
  • Gaming systems
Technology Details

Category: information technology and software
Reference number: DRC-TOPS-8
Case number(s): DRC-009-008, DRC-009-008DIV
Patent number(s): 8,886,445; 10,019,835
Similar Results
Improved Ground Collision Avoidance System
This critical safety tool can be used for a wider variety of aircraft, including general aviation aircraft (GAA), helicopters, and unmanned aerial vehicles (UAVs), while also improving performance in the fighter aircraft currently using this type of system.

Demonstrations/Testing
This improved approach to ground collision avoidance has been demonstrated on both small UAVs and a Cirrus SR22 while running the technology on a mobile device. These tests were performed to prove the feasibility of the app-based implementation of this technology. The testing also characterized the flight dynamics of the avoidance maneuvers for each platform, evaluated collision avoidance protection, and analyzed nuisance potential (i.e., the tendency to issue false warnings when the pilot does not consider ground impact to be imminent).

Armstrong's Work Toward an Automated Collision Avoidance System
Controlled flight into terrain (CFIT) remains a leading cause of fatalities in aviation, resulting in roughly 100 deaths each year in the United States alone. Although warning systems have virtually eliminated CFIT for large commercial air carriers, the problem remains for fighter aircraft, helicopters, and GAA. Innovations developed at NASA's Armstrong Flight Research Center are laying the foundation for a collision avoidance system that would automatically take control of an aircraft that is in danger of crashing into the ground and fly it, and the people inside, to safety. The technology relies on a navigation system to position the aircraft over a digital terrain elevation database, algorithms to determine the potential and imminence of a collision, and an autopilot to avoid the potential collision. The system is designed not only to provide nuisance-free warnings to the pilot but also to take over when a pilot is disoriented or unable to control the aircraft.
The payoff from implementing the system, designed to operate with minimal modifications on a variety of aircraft, including military jets, UAVs, and GAA, could be billions of dollars and hundreds of lives and aircraft saved. Furthermore, the technology has the potential to be applied beyond aviation and could be adapted for use in any vehicle that has to avoid a collision threat, including aerospace satellites, automobiles, scientific research vehicles, and marine charting systems.
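The escape-trajectory logic described above (predict several candidate maneuvers and intervene only when every one of them fails) can be illustrated with a toy kinematic model. The maneuver set, clearance margin, and ridge-shaped terrain here are invented for the sketch and are not the flight-tested algorithm:

```python
import math

MANEUVERS = [
    {"climb_rate": 15.0, "turn_rate": 0.0},   # straight-ahead max climb
    {"climb_rate": 5.0,  "turn_rate": 0.2},   # climbing left turn (rad/s)
    {"climb_rate": 5.0,  "turn_rate": -0.2},  # climbing right turn
]

def predict_trajectory(x, y, alt, heading, speed,
                       climb_rate, turn_rate, dt=0.5, horizon=20.0):
    """Roll one candidate escape maneuver forward with simple kinematics."""
    pts, t = [], 0.0
    while t < horizon:
        heading += turn_rate * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        alt += climb_rate * dt
        pts.append((x, y, alt))
        t += dt
    return pts

def terrain_height(x, y):
    """Stand-in DTM lookup: a 300 m ridge running north-south near x = 400 m."""
    return 300.0 if 350.0 <= x <= 450.0 else 0.0

def must_avoid(state, maneuvers, clearance=50.0):
    """Intervene only when *every* escape trajectory violates the clearance.
    Warning only once the last viable escape is closing is what keeps the
    system nuisance-free."""
    for m in maneuvers:
        traj = predict_trajectory(*state, **m)
        if all(alt - terrain_height(x, y) >= clearance for x, y, alt in traj):
            return False  # at least one escape still works: stay quiet
    return True           # no escape left: warn or take over
```

With these made-up numbers, an aircraft at 200 m heading east at 100 m/s triggers intervention 400 m short of the ridge, but not 1,400 m short, where a straight climb still clears it.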
eVTOL UAS with Lunar Lander Trajectory
This NASA-developed eVTOL UAS is a purpose-built, electric, reusable aircraft with rotor/propeller thrust only, designed to fly trajectories with high similarity to those flown by lunar landers. The vehicle has the unique capability to transition into wing-borne flight to simulate the cross-range, horizontal approaches of lunar landers. The initial transition favors a traditional airplane configuration, with the propellers in front and smaller surfaces in the rear, allowing the vehicle to reach high speeds. Once wing-borne, however, the vehicle can transition to flying in the opposite (canard) direction; in this mode the vehicle remains controllable, and the propellers can be powered or unpowered. This NASA invention can also decelerate rapidly during the descent phase, again to simulate lunar lander trajectories. Such rapid deceleration is required to reduce vehicle velocity so the propellers can be turned back on without stalling the blades or catching the propeller vortex. The UAS also has the option of using variable-pitch blades, which contribute to the overall controllability of the aircraft and reduce the likelihood of stalling the blades during the deceleration phase. In addition to testing EDL sensors and precision landing payloads, NASA's innovative eVTOL UAS could be used in applications where fast, precise, and stealthy delivery of payloads to specific ground locations is required, including military applications. This concept of operations could entail deploying the UAS from a larger aircraft.
Unique Datapath Architecture Yields Real-Time Computing
The DLC platform is composed of three key components: a NASA-designed field-programmable gate array (FPGA) board, a NASA-designed multiprocessor system-on-a-chip (MPSoC) board, and a proprietary datapath that links the boards to available inputs and outputs to enable high-bandwidth data collection and processing. The inertial measurement unit (IMU), camera, Navigation Doppler Lidar (NDL), and Hazard Detection Lidar (HDL) navigation sensors are connected to the DLC's FPGA board. The datapath on this board consists of high-speed serial interfaces, one per sensor, which accept the sensor data and convert it to an AXI stream format. The sensor streams are multiplexed into a single AXI stream, which is then formatted for a XAUI high-speed serial interface. This interface sends the data to the MPSoC board, where it is converted back from the XAUI format into the combined AXI stream and demultiplexed into the individual sensor AXI streams. These streams are then fed into respective DMA interfaces that provide access to the DDRAM on the MPSoC board. This architecture enables real-time, high-bandwidth data collection and processing while preserving the MPSoC's full processing capacity. The sensor datapath architecture may have other potential applications in aerospace and defense, transportation (e.g., autonomous driving), medical, research, and automation/control markets, where it could serve as a key component in a high-performance computing platform and/or critical embedded system for integrating, processing, and analyzing large volumes of data in real time.
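In software terms, the mux/demux halves of the datapath amount to tagging each sensor packet with a source ID, merging the tagged packets onto one link, and routing them back out by tag on the far side. This minimal Python model (sensor names taken from the text; everything else assumed) shows why the round trip is lossless:

```python
from itertools import zip_longest

SENSORS = ["IMU", "CAM", "NDL", "HDL"]

def multiplex(streams):
    """Interleave per-sensor packet queues into one tagged stream,
    a software stand-in for the AXI-stream mux feeding the XAUI link."""
    merged = []
    for packets in zip_longest(*(streams[s] for s in SENSORS)):
        for sensor_id, pkt in zip(SENSORS, packets):
            if pkt is not None:
                merged.append((sensor_id, pkt))
    return merged

def demultiplex(merged):
    """Route packets back to per-sensor queues by their source tag,
    as the MPSoC side does before handing each stream to its DMA."""
    out = {s: [] for s in SENSORS}
    for sensor_id, pkt in merged:
        out[sensor_id].append(pkt)
    return out
```

Because the tag travels with every packet, per-sensor ordering survives the shared link regardless of how the streams are interleaved.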
Airborne Machine Learning Estimates for Local Winds and Kinematics
The MAchine learning ESTimations for uRban Operations (MAESTRO) system is a novel approach that couples commodity sensors with advanced algorithms to provide real-time onboard local wind and kinematics estimations to a vehicle's guidance and navigation system. Sensors and computations are integrated in a novel way to predict local winds and promote safe operations in dynamic urban regions where Global Positioning System/Global Navigation Satellite System (GPS/GNSS) and other network communications may be unavailable or difficult to obtain among tall buildings due to multi-path reflections and signal diffusion. The system can be implemented onboard an Unmanned Aerial System (UAS), and once airborne it does not require communication with an external data source or with GPS/GNSS. Estimations of the local winds (speed and direction) are created using inputs from onboard sensors that scan the local building environment. This information can then be used by the onboard guidance and navigation system to determine safe and energy-efficient trajectories for operations in urban and suburban settings. The technology is robust to dynamic environments, input noise, missing data, and other uncertainties, and has been demonstrated successfully in lab experiments and computer simulations.
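MAESTRO's estimator is not published, but the classic wind-triangle relation, v_ground = v_air + v_wind, conveys how a wind vector can be recovered from onboard kinematic measurements taken on several headings. The code below is a deliberately simplified stand-in, not the MAESTRO algorithm:

```python
import numpy as np

def estimate_wind(ground_vel, airspeed, heading):
    """Average-residual wind estimate from the wind triangle:
    v_ground = v_air + v_wind, so v_wind is the mean of
    (v_ground - v_air) across samples on different headings."""
    v_air = np.column_stack([airspeed * np.cos(heading),
                             airspeed * np.sin(heading)])
    return (ground_vel - v_air).mean(axis=0)

# Synthetic check: fly eight headings at 30 m/s in a known 2-D wind.
heading = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
airspeed = np.full(8, 30.0)
true_wind = np.array([3.0, -2.0])
ground_vel = np.column_stack([airspeed * np.cos(heading),
                              airspeed * np.sin(heading)]) + true_wind
wind_est = estimate_wind(ground_vel, airspeed, heading)
```

Sampling several headings is what makes the wind observable; on a single heading, a wind error along the flight path is indistinguishable from an airspeed bias.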
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. In this system, a feature detection technique, such as Hough circles or Harris corner detection, is used to detect which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to the known landmarks. The world coordinates of the best-matched image landmarks are passed to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate aircraft landing.
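VALS's extended Kalman filter is more elaborate, but a linear constant-velocity Kalman filter over scalar position fixes shows the refinement step in miniature: each vision-derived fix nudges a position-and-velocity state toward the measurement. All parameters below are illustrative assumptions, not VALS's tuning:

```python
import numpy as np

def kalman_track(fixes, dt=1.0, meas_var=25.0, accel_var=1.0):
    """Constant-velocity Kalman filter over noisy scalar position fixes.
    Returns (position, velocity) estimates after each measurement update."""
    F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
    H = np.array([[1.0, 0.0]])                        # observe position only
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])    # process noise
    R = np.array([[meas_var]])                        # measurement noise
    x = np.array([[fixes[0]], [0.0]])                 # state: [position, velocity]
    P = np.eye(2) * 100.0                             # loose initial covariance
    estimates = []
    for z in fixes[1:]:
        x = F @ x                                     # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)         # update with the fix
        P = (np.eye(2) - K @ H) @ P
        estimates.append((x[0, 0], x[1, 0]))
    return estimates
```

Fed fixes from an aircraft moving at a steady 10 m/s, the filter converges to that velocity within a few updates even though velocity is never measured directly, which is the same mechanism VALS uses to recover velocity from camera-derived positions.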