Computational Visual Servo

Computational Visual Servo (LAR-TOPS-61)
Automatic measurement and control for smart image enhancement
Overview
NASA's Langley Research Center researchers have developed an automatic measurement and control method for smart image enhancement. Pilots, doctors, and photographers will benefit from this innovation, which offers a new approach to image processing. Initial advantages will be seen in improved medical imaging and nighttime photography. Standard image enhancement software cannot correct for poor-quality conditions such as low light, poor clarity, and fog. The technology consists of a set of comprehensive methods that perform well across the wide range of conditions encountered in arbitrary images, including large variations in lighting, scene characteristics, and atmospheric (or underwater) turbidity. NASA is seeking market insights on commercialization of this new technology, and welcomes interest from potential producers, users, and licensees.

The Technology
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, the image enhancement process is improved in terms of resulting contrast, lightness, and sharpness over prior automatic processing methods. The innovation brings the technique of active measurement and control to bear on the basic problem of enhancing a digital image by defining absolute measures of visual contrast, lightness, and sharpness, and by automatically applying the type and degree of enhancement needed based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and a servo-controlled enhancement effect. The cycle repeats until the servo achieves acceptable scores for the visual measures or decides that it has enhanced the image as much as is possible or advantageous; the servo-control bypasses images that it determines need no enhancement. The system determines experimentally what absolute degree of sharpening can be applied before detrimental sharpening artifacts are encountered. The latter are stop decisions, triggered when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts. The invention was developed to provide completely new capabilities for exceeding pilot visual performance by automatically clarifying turbid, low-light, and extremely hazy images for pilot view on heads-up or heads-down displays during critical flight maneuvers.
Technology example: aerial photo before enhancement
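The feedback loop above can be sketched in code. This is a minimal illustrative sketch only: the `measure` and `enhance` functions below are simple stand-ins (mean lightness, RMS contrast, and a clipped contrast stretch), not NASA's patented measures or enhancement operators.

```python
def measure(image):
    """Illustrative visual measures: mean lightness and RMS contrast.
    `image` is a flat list of pixel values in [0, 255]."""
    n = len(image)
    mean = sum(image) / n
    contrast = (sum((p - mean) ** 2 for p in image) / n) ** 0.5
    return mean, contrast

def enhance(image, gain=1.1):
    """One servo step: stretch contrast about the mean, clipped to [0, 255]."""
    mean = sum(image) / len(image)
    return [min(255.0, max(0.0, mean + gain * (p - mean))) for p in image]

def visual_servo(image, target_contrast=40.0, max_iters=10):
    """Repeat measure -> enhance until the contrast score is acceptable,
    no further improvement is possible, or the iteration budget is spent."""
    for _ in range(max_iters):
        _, contrast = measure(image)
        if contrast >= target_contrast:
            break  # acceptable score: bypass / stop enhancing
        improved = enhance(image)
        if measure(improved)[1] <= contrast:
            break  # stop decision: clipping has eaten the gain
        image = improved
    return image
```

Note how the two exit conditions mirror the text: an image whose measures already score acceptably is bypassed untouched, and enhancement halts once clipping prevents further measurable improvement.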
Benefits
  • Systematic improvements in contrast, light, and sharpness
  • Compatible with varied imaging technology
  • Correction for both overexposure and underexposure
  • Improved image clarity at dusk and in fog

Applications
  • Photography - expanded enhancement capabilities
  • Aviation - improved pilot visibility
  • Automobile - improved driver visibility
  • Video - real-time digital enhancement
  • Medical imaging - X-rays, computed tomography (CT), and magnetic resonance imaging (MRI)
  • Surveillance - thermal and night vision
  • Military - enhanced pilot vision and targeting
Technology Details

Category: information technology and software
Reference: LAR-TOPS-61
Case Number: LAR-17240-1
U.S. Patent: 8,111,943
Similar Results
Image from internal NASA presentation developed by inventor and dated May 4, 2020.
Reflection-Reducing Imaging System for Machine Vision Applications
NASA's imaging system comprises a small CMOS camera fitted with a C-mount lens affixed to a 3D-printed mount. Light from a high-intensity LED is passed through a lens that both diffuses and collimates the LED output, and this light is coupled onto the camera's optical axis using a 50:50 beam-splitting prism. Conditioning the LED output with the collimating/diffusing lens provides an illumination source of similar diameter to the camera's imaging lens; this is the feature that reduces or eliminates shadows that would otherwise be projected onto the subject plane by refractive-index variations in the imaged volume. By coupling the light from the LED unit onto the camera's optical axis, reflections from windows (often present in wind tunnel facilities to allow direct views of a test section) can be minimized or eliminated when the camera is placed at a small angle of incidence relative to the window's surface. This effect is demonstrated in the image on the bottom left of the page. Eight imaging systems were fabricated and used to capture background-oriented schlieren (BOS) measurements of flow from a heat gun in the 11-by-11-foot test section of the NASA Ames Unitary Plan Wind Tunnel (see test setup on right). Two additional camera systems (not pictured) captured photogrammetry measurements.
Automatic Extraction of Planetary Image Features
Automatic Extraction of Planetary Image Features and Multi-Sensor Image Registration
NASA's Goddard Space Flight Center's method extracts Lunar and/or planetary features by combining several image processing techniques. The technology was developed to register images from multiple sensors and to extract features from images under low-contrast and uneven-illumination conditions. The image processing and registration techniques can include, but are not limited to, watershed segmentation, marked point processes, graph cut algorithms, wavelet transforms, multiple birth and death algorithms, and the generalized Hough transform.
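To give a flavor of one listed technique, here is a deliberately simplified, pure-Python sketch of marker-based watershed-by-flooding: labeled seed regions grow outward in order of increasing pixel intensity, so region boundaries tend to form along intensity ridges. It is an illustration of the general technique only, not GSFC's implementation.

```python
import heapq

def marker_watershed(image, markers):
    """Grow labeled markers outward in order of increasing intensity.
    `image` is a 2D grid of intensities; `markers` uses 0 for unlabeled
    cells and positive integers for seed labels."""
    rows, cols = len(image), len(image[0])
    labels = [row[:] for row in markers]
    heap = []
    for r in range(rows):
        for c in range(cols):
            if markers[r][c]:
                heapq.heappush(heap, (image[r][c], r, c, markers[r][c]))
    while heap:
        _, r, c, label = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = label  # claim neighbor for this basin
                heapq.heappush(heap, (image[nr][nc], nr, nc, label))
    return labels
```

On an image with two low-intensity basins separated by a bright ridge, seeds placed in each basin flood their own side before the ridge pixels are claimed.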
Strobing to Mitigate Vibration for Display Legibility
The dominant frequency of the vibration that requires mitigation can be known in advance, measured in real time, or predicted with simulation algorithms. That frequency (or a lower frequency multiplier) is then used to drive the strobing rate of the illumination source. For example, if the vibration frequency is 20 Hz, one could employ a strobe rate of 1, 2, 4, 5, 10, or 20 Hz, depending on which rate the operator finds the least intrusive. The strobed illumination source can be internal or external to the display. Perceptual psychologists have long understood that strobed illumination can freeze moving objects in the visual field. This effect can be used for artistic effect or for technical applications. The present innovation is instead applicable for environments in which the human observer rather than just the viewed object undergoes vibration. Such environments include space, air, land, and sea vehicles, or on foot (e.g., walking or running on the ground or treadmills). The technology itself can be integrated into handheld and fixed display panels, head-mounted displays, and cabin illumination for viewing printed materials.
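The candidate strobe rates in the example are exactly the integer divisors of the vibration frequency, which can be enumerated directly. A minimal sketch (the function name is illustrative, not from the source):

```python
def candidate_strobe_rates(vibration_hz: int) -> list[int]:
    """Return all integer strobe rates (Hz) that evenly divide the
    measured vibration frequency, so each strobe flash lands at the
    same phase of the vibration cycle."""
    return [rate for rate in range(1, vibration_hz + 1)
            if vibration_hz % rate == 0]
```

For a 20 Hz vibration this yields 1, 2, 4, 5, 10, and 20 Hz, matching the rates given above; the operator then chooses whichever is least intrusive.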
Reconfigurable Image Generator and Database Generation System
The Reconfigurable Image Generator (RIG) consists of image-generation software, a hardware configuration, and a Synthetic Environment Database Generation System (RIG-DBGS). This innovative Image Generator (IG) uses Commercial-Off-The-Shelf (COTS) technologies and is capable of supporting virtually any display system. The DBGS software leverages high-fidelity real-world data, including aerial imagery, elevation datasets, and vector data. Through a combination of COTS tools and in-house applications, the semi-automated system can process large amounts of data in days rather than the weeks or months required by manual database generation. A major benefit of the RIG technology is that existing simulation users can leverage their investment in existing real-time 3D databases (such as OpenFlight) as part of the RIG system.
Plenoptic Camera
This camera incorporates an array of 470 x 360 microlenses, with each microlens producing an image onto a 14 x 14 pixel array. The collected colors or spectra can be continuous or arbitrarily determined, and can be easily and inexpensively modified. Modifications of the collected spectra can be useful for different applications where the emitted light needs to be analyzed to determine qualitative or quantitative information about a flow, object, or scene. The sensor can measure fluid, mechanical, thermodynamic, or structural properties of gases, liquids, and solids.
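The stated geometry implies the underlying sensor's pixel budget, a trade-off characteristic of plenoptic designs: spatial resolution is set by the microlens count, while the pixels behind each lens sample angle or spectrum. A quick back-of-the-envelope check:

```python
# Pixel budget implied by the geometry described above.
microlenses = 470 * 360      # spatial samples (one per microlens)
pixels_per_lens = 14 * 14    # angular/spectral samples behind each lens
total_pixels = microlenses * pixels_per_lens

print(microlenses)      # 169200 microlenses
print(total_pixels)     # 33163200 sensor pixels, about 33 MP
```

So a roughly 33-megapixel sensor yields only a 470 x 360 spatial image, with the remaining resolution spent on the per-lens samples.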