Video Acuity Measurement System

Video Acuity Measurement System (TOP2-164)
Patent Only, No Software Available For License.
Overview
There is a widely acknowledged need for metrics to quantify the performance of video systems. NASA's new empirical Video Acuity metric is simple to measure and relates directly to task performance. Video acuity is determined by the smallest letters that can be automatically identified using a video system, and it is expressed most conveniently in letters per degree of visual angle. Video systems are used broadly for public safety, and range from very simple, inexpensive systems to very complex, powerful, and expensive systems. These systems are used by fire departments, police departments, homeland security, and a wide variety of commercial entities. They are used in streets, stores, banks, airports, cars, and aircraft, as well as many other settings. They are used for a variety of tasks, including detection of smoke and fire, recognition of weapons, face identification, and event perception. In all of these contexts, the quality of the video system impacts performance in the visual task. The Video Acuity metric matches the quality of the system to the demands of its tasks.

The Technology
The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The Video Acuity measurement system comprises a camera with its associated optics and sensor, processing elements including digital compression, transmission over an electronic network, and an electronic display viewed by a human observer. The quality of a video system impacts the ability of the human viewer to perform public safety tasks, such as reading automobile license plates, recognizing faces, and recognizing handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other degradations. This is because it does not rely on any particular theoretical model of imaging, but simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers that it yields have immediate and concrete meaning. Furthermore, they can be related to the human visual acuity needed to do the task. The Video Acuity measurement system uses sets of optotypes and automated letter recognition to simulate the human observer.
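As an illustration of how such a measurement could be scripted, the sketch below searches a grid of letter sizes for the smallest one that the automated recognizer reads reliably and converts it to letters per degree. It is a minimal Python sketch, not the patented implementation: the render_through_system and recognize_letter callables, the Sloan-style letter set, and the 75% criterion are all assumptions made for illustration.

    # Minimal sketch of the video-acuity measurement loop (illustrative only).
    # 'render_through_system' and 'recognize_letter' are hypothetical callables standing in
    # for the camera/compression/display chain under test and the automated letter recognizer.
    import random

    LETTERS = "CDHKNORSVZ"     # an assumed Sloan-style optotype set
    CRITERION = 0.75           # assumed proportion correct required to call a size readable

    def proportion_correct(size_deg, render_through_system, recognize_letter, trials=50):
        """Estimate recognition accuracy for letters subtending size_deg degrees."""
        correct = 0
        for _ in range(trials):
            target = random.choice(LETTERS)
            frame = render_through_system(target, size_deg)   # optotype through the system under test
            if recognize_letter(frame, LETTERS) == target:
                correct += 1
        return correct / trials

    def video_acuity(sizes_deg, render_through_system, recognize_letter):
        """Return acuity in letters per degree: 1 / smallest readable letter size."""
        readable = [s for s in sorted(sizes_deg)
                    if proportion_correct(s, render_through_system, recognize_letter) >= CRITERION]
        return 1.0 / readable[0] if readable else None

For example, if the smallest reliably identified letter subtends 0.2 degrees, the video acuity is 5 letters per degree; in practice an adaptive staircase or a psychometric-function fit would replace the coarse grid search shown here.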
Benefits
  • Simple
  • 100% objective
  • Collapses all system issues into one single metric
  • Metric is relevant to end user
  • Metric can be related to human visual acuity
  • Automated

Applications
  • Monitor events and locations
  • Video surveillance
  • Face identification
  • Homeland security
  • Safety and security
  • Detection of smoke and fire
  • Recognition of weapons
Technology Details

Category: optics
Reference Number: TOP2-164
Case Number: ARC-16661-1
Patent Number: 9,232,215
Similar Results
Automated Vision Test
The Wavefront Aberrations (WA) are a collection of different sorts of optical defects, including the familiar defocus and astigmatism that are corrected by eyeglasses, but also more complex higher-order aberrations such as coma, spherical aberration, and others. The WA provide a comprehensive description of the optics of the eye, and thus determine the acuity. But until recently, a practical method of computing this relationship did not exist. Our solution to this problem is to simulate the observer performing the acuity task with an eye possessing a particular set of WA. When a letter is presented, we first distort a digital image of the letter by the specified WA, and add noise to mimic the noisiness of the visual system. From previous research, we have determined the appropriate noise level to match human performance. We then attempt to match the blurred noisy image to similarly blurred candidate letter images, and select the closest match. We repeat this for many trials at many letter sizes, and thereby determine the smallest letter that can be reliably identified: the visual acuity. We have streamlined and simplified the key steps for this simulation approach so that the entire process is robust, accurate, simple, and fast. Results are typically obtained in a few seconds.
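A single simulated trial of this procedure could look roughly like the Python sketch below. It is only an illustration of the approach described above, not the NASA software: the letter_image renderer, the point spread function psf assumed to be derived from the wavefront aberrations, and the noise level are stand-ins.

    # Sketch of one simulated-observer trial (illustrative, not the NASA implementation).
    # 'letter_image' renders a letter as a 2-D array; 'psf' is the point spread function
    # assumed to be derived from the specified wavefront aberrations.
    import numpy as np
    from scipy.signal import fftconvolve

    NOISE_SD = 0.05   # assumed noise level; in practice it is calibrated to human performance

    def simulate_trial(target, candidates, letter_image, psf, rng):
        blurred = fftconvolve(letter_image(target), psf, mode="same")   # distort by the WA
        noisy = blurred + rng.normal(0.0, NOISE_SD, blurred.shape)      # visual-system noise
        # Compare against candidate letters blurred by the same optics; pick the closest match.
        def distance(letter):
            template = fftconvolve(letter_image(letter), psf, mode="same")
            return np.sum((noisy - template) ** 2)
        return min(candidates, key=distance)

Repeating such trials over many letter sizes and finding the smallest size that is identified reliably yields the simulated visual acuity, as described above.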
Reconfigurable Image Generator and Database Generation System
The system consists of the Reconfigurable Image Generator (RIG), comprising software and a hardware configuration, and a Synthetic Environment Database Generation System (RIG-DBGS). This innovative Image Generator (IG) uses Commercial-Off-The-Shelf (COTS) technologies and is capable of supporting virtually any display system. The DBGS software leverages high-fidelity real-world data, including aerial imagery, elevation datasets, and vector data. Through a combination of COTS tools and in-house applications, the semi-automated system can process large amounts of data in days rather than the weeks or months required by manual database generation. A major benefit of the RIG technology is that existing simulation users can leverage their investment in existing real-time 3D databases (such as OpenFlight) as part of the RIG system.
Vision-based Approach and Landing System (VALS)
The novel Vision-based Approach and Landing System (VALS) provides Advanced Air Mobility (AAM) aircraft with an Alternative Position, Navigation, and Timing (APNT) solution for approach and landing without relying on GPS. VALS operates on multiple images obtained by the aircraft's video camera as the aircraft performs its descent. In this system, feature detection techniques such as Hough circle and Harris corner detection are used to detect which portions of the image may contain landmark features. These image areas are compared with a stored list of known landmarks to determine which features correspond to the known landmarks. The world coordinates of the best-matched image landmarks are input to a Coplanar Pose from Orthography and Scaling with Iterations (COPOSIT) module to estimate the camera position relative to the landmark points, which yields an estimate of the position and orientation of the aircraft. The estimated aircraft position and orientation are fed into an extended Kalman filter to further refine the estimates of aircraft position, velocity, and orientation. Thus, the aircraft's position, velocity, and orientation are determined without the use of GPS data or signals. Future work includes feeding the vision-based navigation data into the aircraft's flight control system to facilitate aircraft landing.
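A rough per-frame version of this chain is sketched below in Python with OpenCV. It is illustrative only: cv2.solvePnP stands in for the COPOSIT module named above, the match_to_landmarks callable that associates detected corners with surveyed landmark coordinates is hypothetical, and the Harris parameters are arbitrary.

    # Illustrative sketch of the per-frame VALS chain using OpenCV stand-ins.
    import cv2
    import numpy as np

    def estimate_camera_pose(frame_gray, match_to_landmarks, camera_matrix, dist_coeffs):
        # Detect candidate landmark features via the Harris corner response.
        response = cv2.cornerHarris(np.float32(frame_gray), blockSize=2, ksize=3, k=0.04)
        corners = np.argwhere(response > 0.01 * response.max())[:, ::-1].astype(np.float32)

        # Associate detections with known landmarks (hypothetical matcher).
        world_pts, image_pts = match_to_landmarks(corners)

        # Recover camera position/orientation relative to the matched landmark points
        # (stand-in for the COPOSIT step described above).
        ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None

Each per-frame estimate would then feed the extended Kalman filter to refine aircraft position, velocity, and orientation during the descent.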
Optical Head-Mounted Display System for Laser Safety Eyewear
The system combines laser goggles with an optical head-mounted display that displays a real-time video camera image of a laser beam. Users are able to visualize the laser beam while their eyes are protected. The system also allows for numerous additional features in the optical head-mounted display, such as digital zoom, overlays of additional information (e.g., power meter data), a Bluetooth wireless interface, and digital overlays of beam location. The system is built on readily available components and can be used with existing laser eyewear. The software converts the color being observed to another color that transmits through the goggles. For example, if a red laser is being used and red-blocking glasses are worn, the software can convert red to blue, which is readily transmitted through the laser eyewear. Similarly, color video can be converted to black-and-white to transmit through the eyewear.
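The color-remapping step lends itself to a simple sketch. The Python/OpenCV function below is an assumed illustration of the idea for red-blocking eyewear; the actual system integrates this processing with the goggles and head-mounted display hardware.

    # Minimal sketch of the color-remapping idea for red-blocking laser eyewear (assumed).
    import cv2

    def remap_for_red_blocking_goggles(frame_bgr, to_grayscale=False):
        if to_grayscale:
            # Convert color video to black-and-white, which transmits through the eyewear.
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
        # Swap the red and blue channels so a red beam is displayed in blue,
        # a color that red-blocking laser eyewear readily transmits.
        b, g, r = cv2.split(frame_bgr)
        return cv2.merge((r, g, b))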
Photogrammetric Method for Calculating Relative Orientation
The NASA technology uses a photogrammetry algorithm to calculate the relative orientation between two rigid bodies. The software, written in LabVIEW and MATLAB, quantitatively analyzes the photogrammetric data collected from the camera system to determine the 6-DOF position and rotation of the observed object. The system comprises an arrangement of arbitrarily placed cameras, rigidly fixed on one body, and a collection of photogrammetric targets, rigidly fixed on the second body. The cameras can be either placed on rigidly fixed objects surrounding the second body (facing inwards), or can be placed on an object directed towards the surrounding environment (facing outwards). At any given point in time, the cameras must capture at least five non-collinear targets. The 6-DOF accuracy increases as additional cameras and targets are used. The equipment requirements include a set of heterogeneous cameras, a collection of photogrammetric targets, a data storage device, and a processing PC. Camera calibration and initial target measurements are required prior to image capture. A nonprovisional patent application on this technology has been filed.
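For a single calibrated camera, the core pose recovery can be sketched as below. This Python/OpenCV fragment is only an assumed stand-in for the LabVIEW/MATLAB photogrammetry software described above.

    # Illustrative 6-DOF sketch for one camera; cv2.solvePnP stands in for the
    # LabVIEW/MATLAB photogrammetry algorithm described above.
    import cv2
    import numpy as np

    def relative_pose(target_xyz, target_uv, camera_matrix, dist_coeffs):
        """target_xyz: Nx3 target coordinates fixed to body B (N >= 5, non-collinear).
        target_uv: Nx2 detected image locations from a calibrated camera fixed to body A."""
        ok, rvec, tvec = cv2.solvePnP(np.asarray(target_xyz, dtype=np.float64),
                                      np.asarray(target_uv, dtype=np.float64),
                                      camera_matrix, dist_coeffs)
        if not ok:
            return None
        rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of body B in the camera frame
        return rotation, tvec               # rotation plus translation give the 6-DOF pose

With several cameras rigidly fixed to the first body, the individual estimates could be transformed into a common frame and fused (for example, by least squares), which is where the accuracy gain from additional cameras and targets would come from.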