Reconfigurable Image Generator and Database Generation System (TOP2-171)

Patent Only, No Software Available For License.
Overview
This invention was developed as part of the U.S. Air Force sponsored Operational Based Vision Assessment (OBVA) program, which was tasked with developing a high-fidelity flight simulation laboratory to examine the relationship between pilot visual capabilities and performance in simulated, operationally relevant tasks. The exceptional visual acuity of the Air Force pilot population required the simulator to present significantly greater pixel density than existing technologies could provide, which in turn required a higher-fidelity image generator, built on emerging technologies, than any system then available as a complete solution. The resulting innovation is a synchronized, continuous display system of at least 150 megapixels, driving real-time computer-generated imagery at refresh rates of 120 Hz or higher.

The Technology
The system, the Reconfigurable Image Generator (RIG), consists of image-generation software, a hardware configuration, and a Synthetic Environment Database Generation System (RIG-DBGS). This innovative Image Generator (IG) uses Commercial-Off-The-Shelf (COTS) technologies and can support virtually any display system. The DBGS software leverages high-fidelity real-world data, including aerial imagery, elevation datasets, and vector data. Through a combination of COTS tools and in-house applications, the semi-automated system can process large amounts of data in days rather than the weeks or months required by manual database generation. A major benefit of the RIG technology is that existing simulation users can leverage their investment in existing real-time 3D databases (such as OpenFlight) as part of the RIG system.
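As a rough illustration of what such a semi-automated database-generation pipeline can look like, the sketch below uses GDAL's Python bindings to mosaic aerial imagery, reproject it alongside an elevation dataset, and cut runtime tiles. The file names, projection, and tile size are hypothetical; the actual RIG-DBGS toolchain combines COTS tools with in-house applications.

```python
# Minimal sketch of a semi-automated terrain-database build step, assuming a
# GDAL-based workflow. File names, projection, and tile size are illustrative
# assumptions, not the RIG-DBGS implementation.
import os
from osgeo import gdal

gdal.UseExceptions()

# 1. Mosaic the source aerial imagery into a single virtual dataset.
mosaic = gdal.BuildVRT("imagery_mosaic.vrt", ["ortho_a.tif", "ortho_b.tif"])

# 2. Reproject imagery and elevation into a common geodetic reference.
gdal.Warp("imagery_wgs84.tif", mosaic, dstSRS="EPSG:4326")
gdal.Warp("elevation_wgs84.tif", "dem_source.tif", dstSRS="EPSG:4326")

# 3. Cut the reprojected imagery into fixed-size tiles for the runtime pager.
os.makedirs("tiles", exist_ok=True)
src = gdal.Open("imagery_wgs84.tif")
tile = 4096  # tile edge in pixels (assumed)
for y in range(0, src.RasterYSize, tile):
    for x in range(0, src.RasterXSize, tile):
        w = min(tile, src.RasterXSize - x)
        h = min(tile, src.RasterYSize - y)
        gdal.Translate(f"tiles/img_{x}_{y}.tif", src, srcWin=[x, y, w, h])
```

Each tile would then be draped over the corresponding elevation grid and exported to a runtime format such as OpenFlight for the image generator to page in real time.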
Operational Based Vision Assessment (OBVA) Simulator
Benefits
  • Very low cost simulator
  • Highly scalable/reconfigurable
  • COTS hardware/software
  • Easy technology upgrades
  • Drives 100+ megapixel visuals
  • Based on proven technology
  • 120 Hz refresh capable
  • Synchronized, continuous, multiple 4K displays
  • Leverages existing databases
  • No recurring software fees

Applications
  • Flight Simulation
  • Virtual Environments
  • DataWalls
  • Entertainment
  • Ultra Realistic Scenery
  • Planetariums
  • Remote Visualization
Technology Details

Category: Information Technology and Software
Reference: TOP2-171
Case Number: ARC-16933-1
Patent Number: 9,583,018
Similar Results
Video Acuity Measurement System
The Video Acuity metric provides a unique and meaningful measurement of the quality of a video system. The automated measurement system is based on a model of human letter recognition and comprises a camera with its associated optics and sensor, processing elements including digital compression, transmission over an electronic network, and an electronic display viewed by a human observer. The quality of a video system affects the viewer's ability to perform public-safety tasks, such as reading automobile license plates and recognizing faces or handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other degradations because it does not rely on any particular theoretical model of imaging; it simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers it yields have immediate, concrete meaning and can be related to the human visual acuity needed for the task. The system uses sets of optotypes and automated letter recognition to simulate the human observer.
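To make the procedure concrete, here is a minimal, self-contained sketch of the kind of staircase logic such a measurement can use. The optotype set, the toy recognizer, and the 1-up/2-down rule are illustrative assumptions, not the patented implementation.

```python
# Toy staircase for an acuity-style threshold measurement. The recognizer
# below is a stand-in for the real chain (optotype -> camera -> video system
# -> display -> automated letter recognition); all constants are assumed.
import random

LETTERS = "CDHKNORSVZ"   # Sloan-like optotype set (assumed)
TRUE_THRESHOLD = 12.0    # hidden "system" threshold, used only for this demo

def recognize(letter: str, size: float) -> str:
    """Stand-in recognizer: near-perfect above threshold, chance below."""
    p_correct = 0.95 if size >= TRUE_THRESHOLD else 1.0 / len(LETTERS)
    return letter if random.random() < p_correct else random.choice(LETTERS)

def measure_acuity(start: float = 64.0, trials: int = 200) -> float:
    """1-up/2-down staircase on letter size; converges near the smallest
    size the system can still read, analogous to human visual acuity."""
    size, streak = start, 0
    for _ in range(trials):
        target = random.choice(LETTERS)
        if recognize(target, size) == target:
            streak += 1
            if streak == 2:            # two correct in a row: shrink letter
                size, streak = size * 0.9, 0
        else:                          # one error: enlarge letter
            size, streak = size * 1.1, 0
    return size

print(f"estimated threshold letter size: {measure_acuity():.1f}")
```

The threshold letter size found by the staircase would then be converted to an acuity value in the same units used for human observers.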
Computational Visual Servo
The innovation improves upon passive automatic enhancement of digital images, yielding better contrast, lightness, and sharpness than prior automatic processing methods. It brings active measurement and control to bear on the basic problem of enhancing a digital image by defining absolute measures of visual contrast, lightness, and sharpness, then automatically applying the type and degree of enhancement needed based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual-measure computation and servo-controlled enhancement. The cycle repeats until the servo achieves acceptable scores for the visual measures or decides it has enhanced the image as much as is possible or advantageous; images that need no enhancement are bypassed. The system determines experimentally how much absolute sharpening can be applied before detrimental sharpening artifacts appear. These are stop decisions, triggered when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts. The invention was developed to provide completely new capabilities for extending pilot visual performance by automatically clarifying turbid, low-light, and extremely hazy images for pilot view on head-up or head-down displays during critical flight maneuvers.
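The sketch below illustrates the measure-enhance-remeasure servo loop on a grayscale image, with deliberately simple stand-ins (standard deviation for contrast, mean for lightness, a fraction-clipped test for the stop decision); the invention's actual visual measures and enhancement operators are more sophisticated.

```python
# Toy servo loop: measure, enhance, remeasure, with stop decisions.
# All measures and gains are simplified stand-ins, not the patented ones.
import numpy as np

def contrast(img):  return float(img.std())    # stand-in contrast measure
def lightness(img): return float(img.mean())   # stand-in lightness measure
def clipped(img):   return float(np.mean((img <= 0.0) | (img >= 1.0)))

def servo_enhance(img, c_target=0.20, l_target=0.50,
                  max_iters=10, max_clip=0.01):
    """Iterate until scores are acceptable, the budget is spent, or further
    enhancement would produce unacceptable clipping (a stop decision)."""
    for _ in range(max_iters):
        if contrast(img) >= c_target and abs(lightness(img) - l_target) < 0.05:
            break                                  # acceptable scores: done
        candidate = img + 0.2 * (l_target - lightness(img))   # lightness step
        candidate = (candidate - candidate.mean()) * 1.1 \
            + candidate.mean()                                 # contrast step
        candidate = np.clip(candidate, 0.0, 1.0)
        if clipped(candidate) > max_clip:
            break                                  # stop: artifacts ahead
        img = candidate
    return img

# Usage: a dim, low-contrast test image.
dim = np.clip(0.3 + 0.03 * np.random.randn(64, 64), 0.0, 1.0)
out = servo_enhance(dim)
print(f"contrast {contrast(dim):.3f} -> {contrast(out):.3f}, "
      f"lightness {lightness(dim):.3f} -> {lightness(out):.3f}")
```

An image that already scores above the targets passes through the loop untouched, which is the bypass behavior described above.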
NASA robotic vehicle prototype
Super Resolution 3D Flash LIDAR
This suite of technologies includes a method, algorithms, and computer processing techniques that provide image photometric correction and resolution enhancement at video rates (30 frames per second). The 3D (2D spatial plus range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance spatial resolution and range and photometric accuracy. In other words, these technologies can generate an elevation (3D) map of a targeted area (e.g., terrain) with much enhanced resolution by blending consecutive camera image frames. The degree of resolution enhancement increases with the number of acquired frames.
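One classical way to blend overlapping frames onto a finer grid is shift-and-add super-resolution, sketched below with NumPy. The sub-pixel shifts are assumed known here rather than estimated, and this toy omits the patented technique's range and photometric corrections.

```python
# Toy multi-frame "shift-and-add" resolution enhancement: each low-res
# frame lands on a finer grid at its sub-pixel offset, then overlapping
# samples are averaged. Shifts are assumed known for this illustration.
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of HxW range images; shifts: per-frame (dy, dx) in
    low-res pixels; returns a (H*scale)x(W*scale) fused image."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its position on the fine grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)  # average where samples landed

# Usage: four quarter-pixel-shifted views of the same (toy) scene.
rng = np.random.default_rng(0)
truth = rng.random((32, 32))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [truth for _ in shifts]   # stand-in for real overlapping captures
hi = shift_and_add(frames, shifts)
print(hi.shape)  # (64, 64): a 2x denser grid
```

As in the patented approach, the more overlapping frames are accumulated, the more fine-grid positions are filled and the better the effective resolution.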
AAM
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. A feature-detection technique, such as Hough circles or Harris corner detection, identifies which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to them. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. That estimate is then fed into an extended Kalman filter to further refine the aircraft's position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate landing.
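A minimal sketch of the image-to-pose portion of this chain is shown below using OpenCV, with cv2.solvePnP standing in for the COPOSIT module and the landmark matching reduced to precomputed correspondences; both substitutions are assumptions for illustration, not the patented implementation.

```python
# Sketch of the VALS-style image-to-pose chain, assuming OpenCV.
# cv2.solvePnP is a stand-in for COPOSIT; the world<->image landmark
# matching is assumed to have been done upstream.
import cv2
import numpy as np

def estimate_camera_pose(image, world_pts, image_pts, K):
    """world_pts: Nx3 known landmark coordinates; image_pts: Nx2 matched
    detections; K: 3x3 camera intrinsics. Returns rotation and translation."""
    # 1. Detect candidate landmark features (Harris corners).
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

    # 2. (Assumed done) match detected corners against the stored landmark
    #    list to obtain the world<->image correspondences passed in.

    # 3. Estimate camera pose from the correspondences (solvePnP here,
    #    COPOSIT in the actual system).
    ok, rvec, tvec = cv2.solvePnP(world_pts.astype(np.float64),
                                  image_pts.astype(np.float64), K, None)
    return ok, rvec, tvec

# The resulting pose per frame would then feed an extended Kalman filter
# that fuses successive estimates into smoothed position, velocity, and
# orientation without GPS.
```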
A player preparing to practice putting while wearing the VRZONE headset.
Apparatus and Method for Biofeedback Training
Measured values of physiological signals can be associated with physiological states and used to define the presence of such states. For example, in a physiological state of anxiety, adrenaline diverts blood from the body surface to the body's core in response to a perceived danger; as warm blood is withdrawn from the surface of the skin, the skin temperature drops. Similarly, in a physiological state of stress, perspiration generally increases, making the skin more conductive to electrical current and thereby increasing the galvanic skin response. It is well known in performance psychology that peak performance of a task (such as putting in golf, foul shooting in basketball, serving in tennis, marksmanship in archery or on a gunnery range, shooting pool, or throwing darts) requires an optimal physiological state, comprising one or more optimal measured values of physiological signals, coincident with the physical performance of the task. The presence of such a state in athletics is colloquially referred to as being in the zone. The technology provides an apparatus and method of performance-enhancing biofeedback training that has intuitive and motivational appeal to the trainee, because the training is tightly embedded in the actual task whose performance is to be improved, and that operates in real time, precisely at the moment when a task or exercise, such as an athletic or military maneuver, must be performed. The feedback behavior of the physical environment also provides aids to visualization that the trainee can use in the real-world skill-performance setting.
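As a sketch of how such a real-time loop can be structured, the Python below polls two of the signals mentioned above and drives an environmental cue whenever the trainee leaves an assumed optimal band. The sensor and cue functions, and the band values, are hypothetical names for illustration, not the patented apparatus.

```python
# Toy real-time biofeedback loop: poll physiological signals and change a
# cue in the training environment the moment the trainee leaves the
# optimal state. Sensor/cue callables and band values are assumptions.
import time

OPTIMAL_TEMP_C = (31.0, 34.0)   # assumed "in the zone" skin-temperature band
OPTIMAL_GSR_US = (1.0, 4.0)     # assumed galvanic-skin-response band (uS)

def in_band(value, band):
    lo, hi = band
    return lo <= value <= hi

def biofeedback_loop(read_temp, read_gsr, cue, period_s=0.1, duration_s=60.0):
    """read_temp/read_gsr: callables returning current sensor values;
    cue: callable that updates the training environment. Feedback is
    issued every period so it coincides with task performance."""
    end = time.time() + duration_s
    while time.time() < end:
        ok = in_band(read_temp(), OPTIMAL_TEMP_C) and \
             in_band(read_gsr(), OPTIMAL_GSR_US)
        cue("in_zone" if ok else "out_of_zone")
        time.sleep(period_s)
```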