Multispectral Imaging, Detection, and Active Reflectance (MiDAR)

A novel next-generation remote sensing instrument
Overview
NASA has developed a novel next-generation remote sensing instrument with advanced scientific capabilities for Multispectral Imaging, Detection, and Active Reflectance (MiDAR). The MiDAR transmitter and receiver demonstrate a novel, cost-effective solution for simultaneous high-frame-rate, high-signal-to-noise-ratio (SNR) multispectral imaging with hyperspectral potential, high-bandwidth simplex communication, and in-phase radiometric calibration. Computational imaging further allows the multispectral data to be fused using Structure from Motion (SfM) and Fluid Lensing algorithms to produce 3D multispectral scenes and high-resolution underwater imagery of benthic systems as part of future scientific airborne field campaigns.

The Technology
The MiDAR transmitter emits coded narrowband structured illumination to generate high-frame-rate multispectral video, perform real-time radiometric calibration, and provide a high-bandwidth simplex optical data-link under a range of ambient irradiance conditions, including darkness. A theoretical framework, based on unique color band signatures, is developed for the multispectral video reconstruction and optical communications algorithms used on MiDAR transmitters and receivers.

Experimental tests demonstrate a 7-channel MiDAR prototype consisting of an active array of multispectral high-intensity light-emitting diodes (the MiDAR transmitter) coupled with a state-of-the-art, high-frame-rate NIR computational imager, the NASA FluidCam NIR, which functions as the MiDAR receiver. A 32-channel instrument is currently in development. Preliminary results confirm efficient, radiometrically-calibrated, high-SNR active multispectral imaging in 7 channels from 405-940 nm at 2048x2048 pixels and 30 Hz. These results demonstrate a cost-effective and adaptive sensing modality with the ability to change color bands and relative intensities in real time, in response to changing science requirements or dynamic scenes.

Potential applications of MiDAR include high-resolution nocturnal and diurnal multispectral imaging from air, space, and underwater environments, as well as long-distance optical communication, bidirectional reflectance distribution function characterization, mineral identification, atmospheric correction, UV/fluorescent imaging, 3D reconstruction using Structure from Motion (SfM), and underwater imaging using Fluid Lensing. Multipurpose sensors such as MiDAR, which fuse active sensing and communications capabilities, may be particularly well suited to mass-limited robotic exploration of Earth and the solar system, and represent a possible new generation of instruments for active optical remote sensing.
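The actual MiDAR coding scheme (unique color band signatures) is not detailed in this summary. As a minimal illustrative sketch of the demultiplexing concept only, the Python below assumes a simple one-band-per-frame illumination schedule and a known reference target for in-phase calibration; the function, its signature, and the calibration model are hypothetical, not the flight algorithm.

```python
import numpy as np

def demux_multispectral(frames, band_schedule, calib_frames, dark_frame):
    """Reconstruct a multispectral cube from a time-multiplexed frame stream.

    frames        : (N, H, W) receiver frames (e.g., a high-frame-rate NIR imager)
    band_schedule : length-N list of band indices, one per frame, matching the
                    transmitter's illumination sequence (assumed known)
    calib_frames  : (B, H, W) frames of a reference target, one per band,
                    ordered to match sorted band indices
    dark_frame    : (H, W) frame with the transmitter off (ambient + sensor offset)
    returns       : (B, H, W) calibrated reflectance cube
    """
    bands = sorted(set(band_schedule))
    cube = np.zeros((len(bands), *frames.shape[1:]))
    for b_i, b in enumerate(bands):
        idx = [i for i, s in enumerate(band_schedule) if s == b]
        signal = frames[idx].mean(axis=0) - dark_frame     # remove ambient/offset
        reference = calib_frames[b_i] - dark_frame         # known reference target
        cube[b_i] = np.clip(signal / np.maximum(reference, 1e-6), 0.0, None)
    return cube

# Example: 7 bands cycled in sequence, as in the 7-channel prototype
# (frame sizes reduced here for the toy example)
rng = np.random.default_rng(0)
frames = rng.random((21, 64, 64))
schedule = [i % 7 for i in range(21)]
calib = np.ones((7, 64, 64))
dark = np.zeros((64, 64))
cube = demux_multispectral(frames, schedule, calib, dark)
print(cube.shape)  # (7, 64, 64)
```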
Benefits
  • High resolution, SNR, and frame-rate multispectral imaging technology
  • Demonstrated 7-channel NASA-developed MiDAR transmitter and receiver
  • MiDAR's SNR is not limited by ambient light
  • MiDAR transmitter directly illuminates an object with narrowband structured light
  • Cost-effective multispectral system
  • MiDAR transmitter simultaneously provides high-bandwidth one-way communication while imaging

Applications
  • Multispectral Remote Sensing from aircraft, robotic explorers, spacecraft, and underwater environments in both low-light and normal lighting conditions
  • Hyperspectral Imaging
  • Simultaneous Optical Communications
  • Fluid Lensing for cm-scale benthic imaging
  • Mineral identification
  • UV/fluorescent imaging from UAVs
  • 3D imaging using Structure from Motion (SfM)
  • Mass-limited robotic exploration of Earth and the solar system
  • Noninvasive medical imaging and diagnosis
  • Semiconductor imaging and engineering structure analysis
Technology Details

Category: Optics
Reference Number: TOP2-262
NASA Case Number: ARC-17697-1
Patent Number: 10,041,833
Publication: Chirayath, Ved, and Alan Li. "Next-Generation Optical Sensing Technologies for Exploring Ocean Worlds-NASA FluidCam, MiDAR, and NeMO-Net." Frontiers in Marine Science 6 (2019): 521. https://doi.org/10.3389/fmars.2019.00521
Similar Results
Image: A Porites coral imaged at the air-water interface that causes fluid lensing. From the 2013 American Samoa field campaign (Dr. Ved Chirayath).
Fluid Lensing System for Imaging Underwater Environments
Fluid lensing exploits optofluidic lensing effects at two-fluid interfaces, such as air and water. Coupled with computational imaging and a unique computer-vision processing pipeline, it not only removes strong optical distortions along the line of sight but also significantly enhances the angular resolution and signal-to-noise ratio of an otherwise underpowered optical system. As high-frame-rate multispectral data are captured, fluid lensing software processes the data onboard and outputs a distortion-free 3D image of the benthic surface. This includes accounting for the way an object can appear magnified or smaller than usual, depending on the shape of the wave passing over it, and for the increased brightness caused by caustics. By running complex calculations, the algorithm at the heart of fluid lensing technology is largely able to correct for these troublesome effects.

The process introduces a fluid distortion characterization methodology, caustic bathymetry concepts, the Fluid Lensing Lenslet Homography technique, and a 3D Airborne Fluid Lensing Algorithm as novel approaches for characterizing the aquatic surface wave field, modeling bathymetry using caustic phenomena, and robust high-resolution aquatic remote sensing. The formation of caustics by refractive lenslets is an important concept in the fluid lensing algorithm.

The use of fluid lensing technology on drones is a novel means of 3D imaging of aquatic ecosystems from above the water's surface at the centimeter scale. Fluid lensing data are captured from low-altitude, cost-effective electric drones to produce multispectral imagery and bathymetry models at the centimeter scale over regional areas. In addition, this breakthrough technology is being developed for future in-space validation for remote sensing of shallow marine ecosystems from Low Earth Orbit (LEO). NASA's FluidCam instrument, developed for airborne and spaceborne remote sensing of aquatic systems, is a high-performance multispectral computational camera that uses fluid lensing. The space-capable FluidCam instrument is robust enough to collect data while mounted on an aircraft (including drones) over water.
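The fluid lensing algorithm itself, with its lenslet homography and caustic bathymetry steps, is well beyond a short example. As a toy illustration of just the caustic-rejection idea, though, a per-pixel temporal median over a high-frame-rate stack discards transient caustic highlights and recovers the static scene; this sketch is an assumption-laden simplification, not NASA's algorithm.

```python
import numpy as np

def estimate_albedo(frames):
    """Estimate a static benthic scene from high-frame-rate video.

    Caustic highlights are transient, so a per-pixel temporal median over
    many frames rejects them far better than any single frame does.
    frames : (N, H, W) video captured through the wavy surface
    returns: (H, W) scene estimate
    """
    return np.median(frames, axis=0)

# Toy scene: static albedo multiplied by sparse, bright caustic flashes
rng = np.random.default_rng(1)
albedo = rng.random((32, 32))                          # true static scene
gain = 1.0 + (rng.random((240, 32, 32)) > 0.8) * 2.0   # 3x flashes, 20% of the time
video = albedo * gain

estimate = estimate_albedo(video)
print(np.abs(video[0] - albedo).mean())   # single frame: large caustic error
print(np.abs(estimate - albedo).mean())   # temporal median: near zero
```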
Image: Collage of applications for this technology, including bridges, buildings, oil rigs, cargo, and robotics.
Adaptive Spatial Resolution Enables Focused Fiber Optic Sensing
This technology can be applied to most optical frequency domain reflectometry (OFDR) fiber optic strain sensing systems. It is particularly well suited to Armstrong's FOSS technology, which uses efficient algorithms to determine, in real time, a variety of critical parameters from strain data, including twist and other structural shape deformations, temperature, pressure, liquid level, and operational loads.

How It Works
This technology enables smart-sensing techniques that adjust parameters as needed in real time so that only the necessary amount of data is acquired: no more, no less. Traditional signal processing in fiber optic strain sensing systems is based on the fast Fourier transform (FFT), which has two key limitations. First, FFT requires analysis sections that are equal in length along the whole fiber. Second, if high resolution is required along one portion of the fiber, FFT processes the whole fiber at that resolution. Armstrong's adaptive spatial resolution innovation makes it possible to efficiently break the length of the fiber into analysis sections that vary in length. It also allows the user to measure data from only a portion of the fiber. If high resolution is required along one section of fiber, only that portion is processed at high resolution, and the rest of the fiber can be processed at lower resolution. A generic sketch of this idea appears after this section.

Why It Is Better
To quantify this innovation's advantages: the adaptive method requires only a small fraction of the calculations needed to provide additional resolution compared to FFT (thousands versus millions of additional calculations). This innovation provides faster signal processing and precision measurement only where it is needed, saving time and resources. The technology also lends itself well to long-term, bandwidth-limited monitoring systems that experience few variations but could be vulnerable when anomalies occur. More importantly, Armstrong's adaptive algorithm enhances safety, because it automatically adjusts the resolution of sensing based on real-time data. For example, when strain on a wing increases during flight, the software automatically increases the resolution on the strained part of the fiber. Similarly, as bridges and wind turbine blades undergo stress during big storms, this algorithm could automatically adjust the spatial resolution to collect more data and quickly identify potentially catastrophic failures. This innovation greatly improves the flexibility of fiber optic strain sensing systems, providing valuable time and cost savings to a range of applications.

For more information about the full portfolio of FOSS technologies, see DRC-TOPS-37 or visit https://technology-afrc.ndc.nasa.gov/featurestory/fiber-optic-sensing
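Armstrong's actual algorithm is not given in this summary, but the underlying signal-processing idea can be sketched generically: in OFDR, beat frequency maps to position along the fiber, so evaluating a dense DFT only over a chosen frequency band amounts to processing only one fiber segment at high resolution. In the minimal Python sketch below, the zoom_dft helper and all parameters are illustrative assumptions, not the FOSS implementation.

```python
import numpy as np

def zoom_dft(signal, f_lo, f_hi, n_bins, fs):
    """Evaluate the DFT on a dense grid restricted to [f_lo, f_hi] Hz.

    In OFDR, beat frequency maps to position along the fiber, so zooming a
    frequency band is zooming a fiber segment. Computing n_bins outputs
    directly costs O(N * n_bins), instead of zero-padding and re-running a
    full-length FFT over the entire fiber just to sharpen one section.
    """
    n = np.arange(len(signal))
    freqs = np.linspace(f_lo, f_hi, n_bins)
    basis = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, basis @ signal

# Toy beat signal: two "reflectors", one inside the zoomed segment
fs, N = 10_000.0, 4096
t = np.arange(N) / fs
signal = np.sin(2 * np.pi * 1200.0 * t) + 0.5 * np.sin(2 * np.pi * 3100.0 * t)

# Coarse pass over the whole "fiber" (bin spacing fs/N ~ 2.4 Hz),
# then a fine pass over just one segment (~0.2 Hz spacing)
coarse = np.abs(np.fft.rfft(signal))
freqs, fine = zoom_dft(signal, 1150.0, 1250.0, 512, fs)
print(np.argmax(coarse) * fs / N)        # coarse peak near 1200 Hz
print(freqs[np.argmax(np.abs(fine))])    # same peak, resolved much more finely
```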
Hierarchical Image Segmentation (HSEG)
Currently, HSEG software is being used by Bartron Medical Imaging as a diagnostic tool to enhance medical imagery. Bartron Medical Imaging licensed the HSEG technology from NASA Goddard, adding color enhancement and developing MED-SEG, an FDA-approved tool that helps specialists interpret medical images. HSEG is available for licensing outside of the medical field (specifically for soft-tissue analysis).
Image: Berlin, Germany.
CubeSat Compatible High Resolution Thermal Infrared Imager
This dual-band infrared imaging system is capable of 60 m spatial resolution from orbit, with an expected noise-equivalent differential temperature (NEDT) of less than 0.2 °C for Earth observation. It is designed to fit within the top two-thirds of a 3U CubeSat envelope, to be installed on the International Space Station, or to be deployed on other orbiting or airborne platforms. The imaging system utilizes a newly conceived strained-layer superlattice GaSb/InAs broadband detector array cooled to 60 K by a miniature mechanical cryocooler. The camera is controlled by a sensor chip assembly consisting of a newly developed 25 μm pitch, 640 x 512 pixel array.
Image: NASA's VIPER rover mission (https://science.nasa.gov/mission/viper/).
3D Lidar for Improved Rover Traversal and Imagery
The SQRLi system is made up of three major components: the laser assembly, the mirror assembly, and the electronics and data processing equipment (electronics assembly). The three main systems work together to send and receive the lidar signal and then translate it into a 3D image for navigation and imaging purposes. The rover sensing instrument makes use of a unique fiber optic laser assembly with high, adjustable output that increases the dynamic range (i.e., contrast) of the lidar system. The commercially available mirror setup used in the SQRLi is small, reliable, and has a wide aperture that improves the field of view of the lidar while maintaining a small instrument footprint. Lastly, the data processing is done by an in-house designed processor capable of translating the light signal into a high-resolution (sub-millimeter) 3D map. These components enable successful hazard detection and navigation in visibility-impaired environments. The SQRLi is applicable to planetary and lunar exploration by unmanned or crewed vehicles and may be adapted for in-space servicing, assembly, and manufacturing purposes. Beyond NASA missions, the new 3D lidar may be used for vehicular navigation in the automotive, defense, or commercial space sectors. The SQRLi is available for patent licensing.
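The SQRLi processor's actual pipeline is not described in this summary. As a generic illustration of how any scanning lidar translates returns into a 3D map, the sketch below converts time-of-flight measurements and mirror pointing angles into sensor-frame points; the function name and geometry convention are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def lidar_returns_to_points(tof_s, azimuth_rad, elevation_rad):
    """Convert time-of-flight returns and scan-mirror angles to 3D points.

    tof_s                      : round-trip time of flight per return, seconds
    azimuth_rad, elevation_rad : mirror pointing angles per return, radians
    returns                    : (N, 3) points in the sensor frame
    """
    r = 0.5 * C * np.asarray(tof_s)  # one-way range from round-trip time
    az = np.asarray(azimuth_rad)
    el = np.asarray(elevation_rad)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# A target 2 m straight ahead: round-trip time of 2 * 2 m / c (~13.3 ns)
pts = lidar_returns_to_points([2 * 2.0 / C], [0.0], [0.0])
print(pts)  # [[2. 0. 0.]]
```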