Computational Visual Servo

Computational Visual Servo (LAR-TOPS-61)
Automatic measurement and control for smart image enhancement
Overview
NASA's Langley Research Center researchers have developed an automatic measurement and control method for smart image enhancement. Pilots, doctors, and photographers will benefit from this innovation, which offers a new approach to image processing. Initial advantages will be seen in improved medical imaging and nighttime photography. Standard image enhancement software is unable to improve images captured under poor conditions such as low light, poor clarity, and fog. The technology consists of a set of comprehensive methods that perform well across the wide range of conditions encountered in arbitrary images, including large variations in lighting, scene characteristics, and atmospheric (or underwater) turbidity. NASA is seeking market insights on commercialization of this new technology and welcomes interest from potential producers, users, and licensees.

The Technology
The innovation improves upon the performance of passive automatic enhancement of digital images. Specifically, it improves the resulting contrast, lightness, and sharpness over prior automatic processing methods. The innovation brings the technique of active measurement and control to bear on the basic problem of enhancing a digital image: it defines absolute measures of visual contrast, lightness, and sharpness, then automatically applies the type and degree of enhancement needed based on automated image analysis. The foundation of the processing scheme is the flow of digital images through a feedback loop whose stages include visual measurement computation and a servo-controlled enhancement effect. The cycle repeats until the servo achieves acceptable scores for the visual measures or decides that it has enhanced the image as much as is possible or advantageous. The servo-control bypasses images that it determines need no enhancement. The system determines experimentally the absolute degree of sharpening that can be applied before detrimental sharpening artifacts appear. The latter decisions are stop decisions, triggered when further contrast or lightness enhancement would produce unacceptable levels of saturation, signal clipping, or sharpening artifacts. The invention was developed to provide completely new capabilities for exceeding pilot visual performance by automatically clarifying turbid, low-light, and extremely hazy images for pilot view on head-up or head-down displays during critical flight maneuvers.
Technology example: aerial photo before enhancement.
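
The measure-and-enhance feedback loop can be sketched in a few lines of code. The sketch below uses simple stand-in measures (RMS contrast, mean lightness, gradient energy) and generic enhancement steps (gamma nudge, contrast stretch, unsharp mask); the patented method defines its own absolute visual measures and servo logic, so every threshold and function name here is illustrative only.

```python
# Minimal sketch of the measure-and-enhance feedback loop described above.
# The visual measures and enhancement steps are simple stand-ins; the
# patented method defines its own absolute measures. Names are illustrative.
import numpy as np

def measure(img):
    """Return (contrast, lightness, sharpness) proxies for a grayscale image in [0, 1]."""
    contrast = img.std()                    # RMS contrast proxy
    lightness = img.mean()                  # mean luminance proxy
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy).mean()     # gradient-energy proxy
    return contrast, lightness, sharpness

def enhance_once(img):
    """One servo step: nudge lightness toward mid-gray, stretch contrast, unsharp-mask."""
    img = img ** (0.9 if img.mean() < 0.5 else 1.1)          # gamma nudge
    img = np.clip(0.5 + 1.1 * (img - 0.5), 0.0, 1.0)         # mild contrast stretch
    blur = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return np.clip(img + 0.5 * (img - blur), 0.0, 1.0)       # unsharp mask

def visual_servo(img, max_iters=8, clip_limit=0.01):
    """Loop: measure, enhance, stop when scores are acceptable or artifacts loom."""
    for _ in range(max_iters):
        contrast, lightness, sharpness = measure(img)
        if contrast > 0.2 and 0.35 < lightness < 0.65 and sharpness > 0.05:
            break                            # acceptable scores: bypass / stop
        candidate = enhance_once(img)
        clipped = np.mean((candidate <= 0.0) | (candidate >= 1.0))
        if clipped > clip_limit:
            break                            # stop decision: saturation/clipping artifacts
        img = candidate
    return img

if __name__ == "__main__":
    hazy = 0.4 + 0.2 * np.random.rand(64, 64)   # synthetic low-contrast "hazy" image
    print("before:", measure(hazy))
    print("after: ", measure(visual_servo(hazy)))
```
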
Benefits
  • Systematic improvements in contrast, light, and sharpness
  • Compatible with varied imaging technology
  • Correction for both overexposure and underexposure
  • Clarification of images degraded by dusk and fog

Applications
  • Photography - expanded enhancement capabilities
  • Aviation - improved pilot visibility
  • Automobile - improved driver visibility
  • Video - real-time digital enhancement
  • Medical imaging - X-rays, computed tomography (CT), and magnetic resonance imaging (MRI)
  • Surveillance - thermal and night vision
  • Military - enhanced pilot vision and targeting
Technology Details

Category: information technology and software
Reference number: LAR-TOPS-61
Case number: LAR-17240-1
Patent number: 8,111,943
Similar Results
Image from internal NASA presentation developed by inventor and dated May 4, 2020.
Reflection-Reducing Imaging System for Machine Vision Applications
NASA's imaging system comprises a small CMOS camera fitted with a C-mount lens affixed to a 3D-printed mount. Light from the high-intensity LED is passed through a lens that both diffuses and collimates the LED output, and this light is coupled onto the camera's optical axis using a 50:50 beam-splitting prism. Use of the collimating/diffusing lens to condition the LED output provides an illumination source of similar diameter to the camera's imaging lens. This is the feature that reduces or eliminates shadows that would otherwise be projected onto the subject plane as a result of refractive index variations in the imaged volume. By coupling the light from the LED unit onto the camera's optical axis, reflections from windows, which are often present in wind tunnel facilities to allow for direct views of a test section, can be minimized or eliminated when the camera is placed at a small angle of incidence relative to the window's surface. This effect is demonstrated in the image on the bottom left of the page. Eight imaging systems were fabricated and used for capturing background-oriented schlieren (BOS) measurements of flow from a heat gun in the 11-by-11-foot test section of the NASA Ames Unitary Plan Wind Tunnel (see test setup on right). Two additional camera systems (not pictured) captured photogrammetry measurements.
NASA robotic vehicle prototype
Super Resolution 3D Flash LIDAR
This suite of technologies includes a method, algorithms, and computer processing techniques to provide image photometric correction and resolution enhancement at video rates (30 frames per second). This 3D (2D spatial and range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance the spatial resolution and the range and photometric accuracies. In other words, the technology allows for generating an elevation (3D) map of a targeted area (e.g., terrain) with much-enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
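
One common baseline for blending overlapping frames is "shift-and-add" super resolution, sketched below for illustration. The actual NASA algorithms are not detailed in this summary; the shifts, scale factor, and function names here are assumptions.

```python
# Hedged sketch of multi-frame "shift-and-add" super resolution: one simple
# way to blend overlapping frames onto a finer grid. Names are illustrative;
# the patented method's actual algorithms are not public in this summary.
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Blend low-res frames with known sub-pixel shifts onto a scale-x finer grid."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-res sample at its shifted position on the fine grid.
        ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
        acc[ys, xs] += frame
        cnt[ys, xs] += 1
    cnt[cnt == 0] = 1                       # avoid division by zero in empty cells
    return acc / cnt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((16, 16))
    shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
    frames = [base for _ in shifts]         # stand-in frames; real frames are shifted views
    print(shift_and_add(frames, shifts, scale=2).shape)   # (32, 32): 2x-resolution grid
```
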
NASA UAV
Low Weight Flight Controller Design
Increasing demand for smaller UAVs (e.g., sometimes with wingspans on the order of six inches and weighing less than one pound) generated a need for much smaller flight and sensing equipment. NASA Langley's new sensing and flight control system for small UAVs includes both an active flight control board and an avionics sensor board. Together, these compare the status of the UAV's position, heading, and orientation with the pre-programmed data to determine and apply the flight control inputs needed to maintain the desired course. To satisfy the small form-factor system requirements, micro-electro-mechanical systems (MEMS) are used to realize the various flight control sensing devices. MEMS-based devices are commercially available single-chip devices that lend themselves to easy integration onto a circuit board. The system uses less energy than current systems, allowing solar panels mounted on the vehicle to generate the system's power. While the lightweight technology was designed for smaller UAVs, the sensors could be distributed throughout larger UAVs, depending on the application.
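
The compare-and-correct loop, comparing sensed heading against the pre-programmed course and computing a corrective input, can be illustrated with a generic PID controller. This is a stand-in for the actual NASA control laws, which this summary does not describe; all gains and names are illustrative.

```python
# Generic PID controller as a stand-in for the flight control loop described
# above: compare sensed state to the pre-programmed setpoint, output a
# corrective command. Gains and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_error: float = 0.0

    def step(self, setpoint, measured, dt):
        """One control update: proportional + integral + derivative terms."""
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

heading_pid = PID(kp=1.2, ki=0.1, kd=0.3)
desired_heading, sensed_heading = 90.0, 84.0   # degrees; sensed value from MEMS sensors
rudder_cmd = heading_pid.step(desired_heading, sensed_heading, dt=0.02)
print(f"rudder command: {rudder_cmd:+.2f}")
```
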
Spatial Standard Observer (SSO)
The Spatial Standard Observer (SSO) provides a tool for measuring the visibility of an element or the visual discriminability of two elements. The device may be used whenever it is necessary to measure or specify visibility or visual intensity. The SSO is based on a model of human vision and has been calibrated against an extensive set of human test data. The SSO operates on a digital image or a pair of digital images: it computes a numerical measure of the perceptual strength of the single image, or of the visible difference between the two images. Visibility measurements are provided in units of Just Noticeable Differences (JND), a standard measure of perceptual intensity; a target that is just visible has a measure of 1 JND. The SSO will be useful in a wide variety of applications, most notably inspection of displays during the manufacturing process. It is also useful for evaluating vision from unpiloted aerial vehicles (UAVs), predicting visibility of UAVs from other aircraft, predicting visibility of aircraft on runways from the control tower, measuring visibility of damage to aircraft and to the shuttle orbiter, evaluating legibility of text, icons, or symbols in a graphical user interface, specifying camera and display resolution, estimating the quality of compressed digital video, and predicting outcomes of corrective laser eye surgery.
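
The shape of an SSO-style computation (weight the image difference by contrast sensitivity, then pool to a single score) can be sketched as follows. This is only an illustrative stand-in: the real SSO uses a calibrated human-vision model, and the filter, pooling exponent, and units below are assumptions, not calibrated JNDs.

```python
# Illustrative stand-in for an SSO-style visibility measure: weight the
# difference of two images by a rough contrast-sensitivity filter, then pool.
# The real SSO is a calibrated human-vision model; this only shows the shape
# of the computation (filter -> difference -> pooled score).
import numpy as np

def visibility_score(img_a, img_b, peak_cpd=4.0, pixels_per_degree=32.0):
    """Return a pooled, CSF-weighted difference score for two grayscale images."""
    diff = np.asarray(img_a, float) - np.asarray(img_b, float)
    f = np.fft.fft2(diff)
    fy = np.fft.fftfreq(diff.shape[0]) * pixels_per_degree   # cycles/degree
    fx = np.fft.fftfreq(diff.shape[1]) * pixels_per_degree
    radius = np.hypot(fy[:, None], fx[None, :])
    csf = (radius / peak_cpd) * np.exp(1.0 - radius / peak_cpd)  # crude band-pass CSF
    weighted = np.real(np.fft.ifft2(f * csf))
    return np.power(np.mean(np.abs(weighted) ** 2.4), 1 / 2.4)  # Minkowski pooling

a = np.zeros((64, 64))
b = a.copy()
b[28:36, 28:36] = 0.1                                   # faint square target
print(f"visibility ~ {visibility_score(a, b):.4f}")     # arbitrary units, not calibrated JND
```
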
GONASA grids mapping clouds on Titan. Credit: NASA
Grid-Oriented Normalization for Analysis of Spherical Areas (GONASA)
NASA's GONASA technology is a mathematical algorithm for creating a grid composed of equal-area cells that span the entire visible hemisphere of a spherical object. Traditional longitude and latitude grids produce cells that diminish in size toward the poles due to convergence of longitudinal lines. GONASA circumvents this problem by carefully adjusting the latitude increments, resulting in a network of truly equal-area cells. This adjustment ensures that any feature observed on the spherical surface is accurately represented, regardless of its location. To implement GONASA, the spherical surface is first segmented into discrete latitude bands or rings, each chosen to encompass an identical surface area. Within each ring, longitude divisions maintain equal cell areas, creating a uniform Cartesian grid. The result is a consistent, distortion-corrected matrix suitable for automatic computation, enabling simplified, efficient, and accurate measurements of spatial characteristics such as feature area, centroid location, perimeter, compactness, orientation, and aspect ratio. GONASA grids are computationally efficient and readily adaptable to a range of data processing workflows, from spreadsheets to sophisticated data analysis frameworks such as Pandas data frames in Python. Due to their consistent cell sizing and straightforward indexing, GONASA grids facilitate automation, enabling rapid, high-volume data processing and analysis, essential for modern remote sensing and planetary missions that require immediate, reliable data analysis in limited-bandwidth communications environments. At NASA, GONASA has already been successfully implemented to study images of Titan (e.g., mapping its clouds) taken by the Cassini space probe.
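
The equal-area construction can be illustrated with a standard result from spherical geometry: if latitude band edges are spaced uniformly in the sine of latitude, every band, and hence every cell with a fixed number of longitude divisions per band, covers the same area. The sketch below assumes this simple formulation; the published GONASA method may differ in detail, and all names are illustrative.

```python
# Minimal equal-area latitude/longitude grid in the spirit of GONASA:
# latitude band edges spaced uniformly in sin(latitude) give equal band areas
# (band area is proportional to the difference in sin(lat)). Illustrative only.
import numpy as np

def equal_area_grid(n_bands, n_lon):
    """Return latitude and longitude edges (degrees) of an n_bands x n_lon
    equal-area grid over the full sphere."""
    sin_edges = np.linspace(-1.0, 1.0, n_bands + 1)   # uniform in sin(lat)
    lat_edges = np.degrees(np.arcsin(sin_edges))
    lon_edges = np.linspace(-180.0, 180.0, n_lon + 1)
    return lat_edges, lon_edges

def cell_area_steradians(lat_edges, lon_edges):
    """Solid angle of each cell; every value should be identical."""
    dsin = np.diff(np.sin(np.radians(lat_edges)))
    dlon = np.diff(np.radians(lon_edges))
    return np.outer(dsin, dlon)

lat_edges, lon_edges = equal_area_grid(n_bands=18, n_lon=36)
areas = cell_area_steradians(lat_edges, lon_edges)
print(np.allclose(areas, areas[0, 0]))   # True: all cells have equal area
```

Note that the latitude increments shrink toward the equator and widen toward the poles, which is exactly the adjustment the description above attributes to GONASA.
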