Spatial Standard Observer (SSO)

Patent Only, No Software Available For License.
Overview
The Spatial Standard Observer (SSO) was developed to predict the detectability of spatial contrast targets such as those used in the ModelFest project. The SSO is a lumped-parameter model that bases its predictions on the generalized energy of the visible contrast. Visible contrast means that the contrast has been reduced by a contrast sensitivity function (CSF). Generalized energy means that the visible contrast is raised to a power higher than 2 before spatial and temporal integration. To adapt the SSO to predict the effects of variations in optical image quality on task performance, the optical component of the SSO CSF must be removed, leaving the neural CSF. Also, since target detection is not the typical criterion task for assessing optical image quality, the SSO concept needs to be extended to other tasks, such as Sloan character recognition.
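The sequence of operations described above (CSF filtering of the contrast image followed by generalized-energy pooling) can be sketched in a few lines of code. The sketch below is illustrative only and is not NASA's calibrated SSO: the CSF shape, the pooling exponent, and the JND scaling constant are placeholder assumptions.

```python
import numpy as np

def sso_visibility(contrast, pixels_per_degree=60.0, beta=2.8):
    """Illustrative SSO-style visibility estimate (assumed parameters,
    not the calibrated NASA SSO)."""
    h, w = contrast.shape
    # Spatial-frequency grid in cycles per degree.
    fy = np.fft.fftfreq(h, d=1.0 / pixels_per_degree)
    fx = np.fft.fftfreq(w, d=1.0 / pixels_per_degree)
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

    # Placeholder band-pass contrast sensitivity function (CSF), peak-normalized.
    csf = np.exp(-0.5 * f) - 0.85 * np.exp(-1.2 * f)
    csf = np.clip(csf, 0.0, None) / max(csf.max(), 1e-12)

    # "Visible contrast": attenuate the contrast image by the CSF.
    visible = np.real(np.fft.ifft2(np.fft.fft2(contrast) * csf))

    # Generalized energy: raise visible contrast to a power higher than 2,
    # integrate over space, and take the corresponding root (Minkowski pooling).
    pooled = (np.abs(visible) ** beta).sum() ** (1.0 / beta)

    # A calibration constant would map the pooled response to JND units;
    # the real SSO is calibrated against ModelFest and related human data.
    jnd_per_unit = 1.0  # assumed
    return jnd_per_unit * pooled
```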

The Technology
The Spatial Standard Observer (SSO) provides a tool that allows measurement of the visibility of an element, or the visual discriminability of two elements. The device may be used whenever it is necessary to measure or specify visibility or visual intensity. The SSO is based on a model of human vision and has been calibrated by an extensive set of human test data. The SSO operates on a digital image or a pair of digital images. It computes a numerical measure of the perceptual strength of the single image, or of the visible difference between the two images. The visibility measurements are provided in units of Just Noticeable Differences (JND), a standard measure of perceptual intensity. A target that is just visible has a measure of 1 JND. The SSO will be useful in a wide variety of applications, most notably in the inspection of displays during the manufacturing process. It is also useful for evaluating vision from unpiloted aerial vehicles (UAVs), predicting the visibility of UAVs from other aircraft and the visibility of aircraft on runways from the control tower, measuring the visibility of damage to aircraft and to the shuttle orbiter, evaluating the legibility of text, icons, or symbols in a graphical user interface, specifying camera and display resolution, estimating the quality of compressed digital video, and predicting outcomes of corrective laser eye surgery.
One of the applications of the technology is predicting outcomes of corrective laser eye surgery.
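As a purely hypothetical usage example, reusing the sketch above and its assumed calibration, the visible difference between a reference image and a test image can be expressed in JND by feeding the SSO-style function their difference:

```python
import numpy as np

reference = np.zeros((256, 256))        # blank background (contrast units)
test = reference.copy()
test[120:136, 120:136] = 0.05           # faint square target at 5% contrast

jnd = sso_visibility(test - reference)  # visible difference in JND (1 JND = just visible)
print(f"Predicted visibility: {jnd:.2f} JND")
```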
Benefits
  • Rapid, objective means of estimating degrees of visibility and discriminability
  • Simple and efficient design that produces an accurate visibility metric
  • Avoids the need for complicated spatial frequency filter banks
  • Permits accurate predictions of the visibility of oblique patterns

Applications
  • Evaluating vision from unmanned aerial vehicles
  • Predicting outcomes of corrective laser eye surgery
  • Inspection of displays during the manufacturing process
  • Estimation of the quality of compressed digital video
  • Evaluation of legibility of text
  • Measuring visibility of damage to aircraft and to the shuttle orbiter
Technology Details

optics
TOP2-102
ARC-14569-1, ARC-14569-2
Similar Results
Strobing to Mitigate Vibration for Display Legibility
The dominant frequency of the vibration that requires mitigation can be known in advance, measured in real time, or predicted with simulation algorithms. That frequency (or a lower frequency that divides it evenly) is then used to drive the strobing rate of the illumination source. For example, if the vibration frequency is 20 Hz, one could employ a strobe rate of 1, 2, 4, 5, 10, or 20 Hz, depending on which rate the operator finds the least intrusive; a minimal sketch of this rate-selection rule follows below. The strobed illumination source can be internal or external to the display. Perceptual psychologists have long understood that strobed illumination can freeze moving objects in the visual field. This effect can be exploited artistically or in technical applications. The present innovation is instead applicable to environments in which the human observer, rather than just the viewed object, undergoes vibration. Such environments include space, air, land, and sea vehicles, as well as travel on foot (e.g., walking or running on the ground or on treadmills). The technology itself can be integrated into handheld and fixed display panels, head-mounted displays, and cabin illumination for viewing printed materials.
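The sketch below illustrates only the rate-selection rule from the 20 Hz example above (it is not code from the innovation itself): the candidate strobe rates are the integer divisors of the dominant vibration frequency.

```python
def candidate_strobe_rates(vibration_hz: int) -> list[int]:
    """Strobe rates (Hz) that divide the dominant vibration frequency evenly.
    Illustrative sketch only; the operator picks whichever rate is least intrusive."""
    return [rate for rate in range(1, vibration_hz + 1) if vibration_hz % rate == 0]

print(candidate_strobe_rates(20))  # -> [1, 2, 4, 5, 10, 20]
```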
Computational Visual Servo
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, the image enhancement process is improved in terms of resulting contrast, lightness, and sharpness over prior automatic processing methods. The innovation brings the technique of active measurement and control to bear upon the basic problem of enhancing the digital image by defining absolute measures of visual contrast, lightness, and sharpness. This is accomplished by automatically applying the type and degree of enhancement needed, based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and a servo-controlled enhancement effect. The cycle is repeated until the servo achieves acceptable scores for the visual measures or decides that it has enhanced the image as much as is possible or advantageous. The servo control will bypass images that it determines need no enhancement. The system determines experimentally how much sharpening can be applied before encountering detrimental sharpening artifacts. The latter decisions are stop decisions, triggered when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts. The invention was developed to provide completely new capabilities for exceeding pilot visual performance by automatically clarifying turbid, low-light-level, and extremely hazy images for pilot viewing on head-up or head-down displays during critical flight maneuvers.
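The measure-then-enhance feedback loop might look roughly like the sketch below. It is a simplified stand-in, not the patented servo: RMS contrast substitutes for the absolute contrast, lightness, and sharpness measures, and the stop test and gain step are assumed values.

```python
import numpy as np

def visual_servo_enhance(image, target=0.5, tol=0.05, max_iters=10):
    """Simplified measure/enhance feedback loop (assumed measures and thresholds)."""
    img = image.astype(float)
    for _ in range(max_iters):
        score = img.std()                        # stand-in visual measure (RMS contrast)
        if abs(score - target) < tol:            # acceptable score: servo stops (or bypasses)
            break
        clipped = (img <= 0.0).mean() + (img >= 1.0).mean()
        if clipped > 0.05:                       # stop decision: too much saturation/clipping
            break
        gain = 1.1 if score < target else 0.9    # servo-controlled enhancement step
        img = np.clip((img - img.mean()) * gain + img.mean(), 0.0, 1.0)
    return img
```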
Image from internal NASA presentation developed by inventor and dated May 4, 2020.
Reflection-Reducing Imaging System for Machine Vision Applications
NASA's imaging system consists of a small CMOS camera fitted with a C-mount lens affixed to a 3D-printed mount. Light from the high-intensity LED is passed through a lens that both diffuses and collimates the LED output, and this light is coupled onto the camera's optical axis using a 50:50 beam-splitting prism. Use of the collimating/diffusing lens to condition the LED output provides an illumination source that is of similar diameter to the camera's imaging lens. This is the feature that reduces or eliminates shadows that would otherwise be projected onto the subject plane as a result of refractive index variations in the imaged volume. By coupling the light from the LED unit onto the camera's optical axis, reflections from windows, which are often present in wind tunnel facilities to allow direct views of a test section, can be minimized or eliminated when the camera is placed at a small angle of incidence relative to the window's surface. This effect is demonstrated in the image on the bottom left of the page. Eight imaging systems were fabricated and used for capturing background-oriented schlieren (BOS) measurements of flow from a heat gun in the 11-by-11-foot test section of the NASA Ames Unitary Plan Wind Tunnel (see test setup on right). Two additional camera systems (not pictured) captured photogrammetry measurements.
Automated Vision Test
The Wavefront Aberrations (WA) are a collection of different sorts of optical defects, including the familiar defocus and astigmatism that are corrected by eyeglasses, but also more complex higher-order aberrations such as coma, spherical aberration, and others. The WA provide a comprehensive description of the optics of the eye, and thus determine the acuity. But until recently, a practical method of computing this relationship did not exist. Our solution to this problem is to simulate the observer performing the acuity task with an eye possessing a particular set of WA. When a letter is presented, we first distort a digital image of the letter by the specified WA, and add noise to mimic the noisiness of the visual system. From previous research, we have determined the appropriate noise level to match human performance. We then attempt to match the blurred, noisy image to similarly blurred candidate letter images, and select the closest match. We repeat this for many trials at many letter sizes, and thereby determine the smallest letter that can be reliably identified: the visual acuity. We have streamlined and simplified the key steps of this simulation approach so that the entire process is robust, accurate, simple, and fast. Results are typically obtained in a few seconds.
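One letter-size condition of the simulation described above could be sketched as follows. This is a schematic reconstruction under stated assumptions, not the actual test software: the function name, the use of SciPy's fftconvolve to apply an aberration-derived point-spread function, and the squared-error template matcher are all illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve  # assumed dependency for the PSF blur

def proportion_correct(letter_images, wa_psf, noise_sd, n_trials=100, rng=None):
    """Fraction of trials in which a WA-blurred, noisy letter is correctly identified.

    letter_images : dict mapping letter -> 2-D template image at one letter size
    wa_psf        : point-spread function derived from the wavefront aberrations
    noise_sd      : noise level previously chosen to match human performance
    """
    rng = rng or np.random.default_rng()
    # Blur every candidate template with the same aberration-derived PSF.
    blurred = {k: fftconvolve(v, wa_psf, mode="same") for k, v in letter_images.items()}
    letters = list(letter_images)
    correct = 0
    for _ in range(n_trials):
        shown = rng.choice(letters)
        stimulus = blurred[shown] + rng.normal(0.0, noise_sd, blurred[shown].shape)
        # Pick the candidate whose blurred template is closest to the noisy stimulus.
        guess = min(letters, key=lambda k: np.sum((stimulus - blurred[k]) ** 2))
        correct += guess == shown
    return correct / n_trials
```

Repeating this over a range of letter sizes and finding the smallest size at which the proportion correct stays above a chosen criterion yields the simulated visual acuity.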
Reconfigurable Image Generator and Database Generation System
The system consists of the Reconfigurable Image Generator (RIG), comprising software and a hardware configuration, and a Synthetic Environment Database Generation System (RIG-DBGS). This innovative Image Generator (IG) uses Commercial-Off-The-Shelf (COTS) technologies and is capable of supporting virtually any display system. The DBGS software leverages high-fidelity real-world data, including aerial imagery, elevation datasets, and vector data. Through a combination of COTS tools and in-house applications, the semi-automated system can process large amounts of data in days rather than the weeks or months required by manual database generation. A major benefit of the RIG technology is that existing simulation users can leverage their investment in existing real-time 3D databases (such as OpenFlight) as part of the RIG system.