Non-Scanning 3D Imager (GSC-TOPS-34)
High-resolution, real-time three-dimensional imaging using an innovative single lens system
Overview
The present innovation is a method and instrument for generating a topographic profile of an object, surface, or landscape in a single, simultaneous acquisition.
Scanning LiDAR systems are most often used to achieve high spatial resolution and height (or depth) resolution over the maximum possible optical field of view in as short a time as possible. The disadvantage of a scanning system is the time it takes to scan, which prevents simultaneous image acquisition. This can be a problem for systems involving moving observers, moving objects, or both. Diffraction grating systems are also commonly used for three-dimensional imaging. These systems are limited by the grating throughput efficiency and have difficulty generating a large number of spots with equal energy. Flash LiDAR systems with uniform light distribution are also used, but these systems suffer from adjacent-pixel crosstalk, reduced measurement efficiency, and difficulty in giving equal intensity weighting to each pixel. The present innovation overcomes the shortcomings of these three-dimensional imaging systems by employing a simple lens system.
The Technology
NASA Goddard Space Flight Center has developed a non-scanning, 3D imaging laser system that uses a simple lens system to simultaneously generate a one-dimensional or two-dimensional array of optical (light) spots to illuminate an object, surface, or scene and generate a topographic profile.
The system includes a microlens array configured in combination with a spherical lens to generate a uniform spot array for a two-dimensional detector, an optical receiver, and a pulsed laser as the transmitter light source. The pulsed laser light travels from the light source to the object and back. A fraction of the returned light is imaged onto the optical detector, and a threshold detector determines the time at which the pulse arrived at the detector (with picosecond to nanosecond precision). Distance information can then be determined for each pixel in the array and displayed to form a three-dimensional image.
The system produces real-time three-dimensional images at television frame rates (30 frames per second) or higher.
Alternate embodiments of this innovation include the use of a light emitting diode in place of the pulsed laser, and/or a macrolens array in place of the microlens array.
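As a rough illustration of the per-pixel ranging step described above, the sketch below (Python, with illustrative array sizes and timing values that are not from the source) converts threshold-detector pulse arrival times into distances using the round-trip time of flight:

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def range_map_from_arrival_times(arrival_times_s, emit_time_s):
    """Convert per-pixel pulse arrival times (s) into one-way distances (m).

    Each pixel's time of flight is its arrival time minus the emission time;
    dividing by two accounts for the out-and-back path of the laser pulse.
    """
    time_of_flight = np.asarray(arrival_times_s) - emit_time_s
    return 0.5 * C * time_of_flight

# Hypothetical example: a 4x4 detector array with ~6.67 ns round trips (~1 m range)
arrivals = np.full((4, 4), 6.67e-9)
print(range_map_from_arrival_times(arrivals, emit_time_s=0.0))
```

The resulting distance array maps directly onto the detector pixels, so displaying it as an image yields the topographic profile described above.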
Benefits
- Simple design: the invention does not require scanning or moving parts to produce high resolution images.
- Greatly improved system efficiency and reduced crosstalk: the physical separation of spots in the object plane, using a microlens array to generate an array of equal-intensity spots, improves efficiency and reduces crosstalk between pixels.
Applications
- Remote sensing (i.e. LiDAR mapping)
- Machine vision
- Robotic vision
Similar Results
Super Resolution 3D Flash LIDAR
This suite of technologies includes a method, algorithms, and computer processing techniques that provide image photometric correction and resolution enhancement at video rates (30 frames per second). This 3D (2D spatial and range) resolution enhancement uses the spatial and range information contained in each image frame, in conjunction with a sequence of overlapping or persistent images, to simultaneously enhance the spatial resolution and the range and photometric accuracies. In other words, the technologies allow an elevation (3D) map of a targeted area (e.g., terrain) to be generated with much enhanced resolution by blending consecutive camera image frames. The degree of image resolution enhancement increases with the number of acquired frames.
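As a loose sketch of the frame-blending idea (not the patented algorithms), the following Python snippet averages a stack of co-registered range images onto a finer grid; the actual processing would also perform registration and photometric correction:

```python
import numpy as np

def blend_frames(frames, upsample=2):
    """Crude shift-and-add style blend of co-registered frames.

    Each frame is upsampled onto a finer grid and averaged; with many
    overlapping frames, noise is reduced and finer structure can be
    recovered. This stands in for the far more sophisticated processing
    described above.
    """
    accum = None
    for frame in frames:
        up = np.kron(frame, np.ones((upsample, upsample)))  # naive upsampling
        accum = up if accum is None else accum + up
    return accum / len(frames)

# Hypothetical example: ten noisy 64x64 range images of a static scene
frames = [np.random.rand(64, 64) for _ in range(10)]
print(blend_frames(frames).shape)  # (128, 128)
```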
Real-Time LiDAR Signal Processing FPGA Modules
The developed FPGA modules discern the time-of-flight of laser pulses for LiDAR applications by correlating a Gaussian pulse with a discretely sampled waveform from the LiDAR receiver. For GRSSLi, up to eight cross-correlation engines were instantiated within an FPGA to process the discretely sampled transmit and receive pulses from the LiDAR receiver and ultimately measure the time-of-flight of laser pulses at 20-picosecond resolution. The number of engines is limited only by the resources within the FPGA fabric and is configurable with a constant, so potential time-of-flight measurement rates could go well beyond the 200 kHz required by GRSSLi. Additionally, the engines have been designed to be extremely efficient and to use the smallest possible amount of FPGA resources.
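The cross-correlation approach can be sketched in software as follows (Python; the FPGA engines are fixed-point hardware pipelines, and the waveforms and sample rate here are illustrative assumptions):

```python
import numpy as np

def time_of_flight(tx, rx, sample_period_s):
    """Estimate pulse time of flight by cross-correlating the sampled
    transmit and receive waveforms, then refining the peak location with
    a parabolic (sub-sample) fit."""
    corr = np.correlate(rx, tx, mode="full")
    k = int(np.argmax(corr))
    # Parabolic interpolation around the correlation peak for sub-sample delay
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        k += 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag_samples = k - (len(tx) - 1)
    return lag_samples * sample_period_s

# Hypothetical example: a Gaussian pulse delayed by 37.3 samples at 1 ns sampling
t = np.arange(256)
tx = np.exp(-0.5 * ((t - 40) / 5.0) ** 2)
rx = np.exp(-0.5 * ((t - 77.3) / 5.0) ** 2)
print(time_of_flight(tx, rx, 1e-9))  # ~3.73e-8 s
```

A parabolic fit around the correlation peak is one common way to obtain sub-sample (and hence picosecond-scale) timing from nanosecond-scale sampling.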
3D Lidar for Autonomous Landing Site Selection
Aerial planetary exploration spacecraft require lightweight, compact, and low-power sensing systems to enable successful landing operations. The Ocellus 3D lidar meets those criteria and can also withstand harsh planetary environments. Further, the new tool is based on space-qualified components and lidar technology previously developed at NASA Goddard (i.e., the Kodiak 3D lidar).
The Ocellus 3D lidar quickly scans a near-infrared laser across a planetary surface, receives the return signal, and translates it into a 3D point cloud. Using a laser source, fast-scanning MEMS (micro-electromechanical system) mirrors, and NASA-developed processing electronics, the 3D point clouds are created and converted into elevations and images onboard the craft. At altitudes of ~2 km, Ocellus acts as an altimeter; at altitudes below 200 m, the tool produces images and terrain maps. The resulting high-resolution (centimeter-scale) elevations are used by the spacecraft to assess safe landing sites.
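As a simple illustration of how a scanned lidar's angle and range measurements become a 3D point cloud (this is not the Ocellus flight software; the angle convention and values are assumptions):

```python
import numpy as np

def points_from_scan(azimuth_rad, elevation_rad, range_m):
    """Convert per-return scan angles and measured ranges into Cartesian
    points, the basic step in turning scanned lidar returns into a 3D
    point cloud."""
    az, el, r = map(np.asarray, (azimuth_rad, elevation_rad, range_m))
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# Hypothetical example: three returns from a small MEMS scan pattern
print(points_from_scan([0.0, 0.01, 0.02], [0.0, 0.0, 0.005], [1500.0, 1498.7, 1501.2]))
```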
The Ocellus 3D lidar is applicable to planetary and lunar exploration by unmanned or crewed aerial vehicles and may be adapted for assisting in-space servicing, assembly, and manufacturing operations. Beyond exploratory space missions, the new compact 3D lidar may be used for aerial navigation in the defense or commercial space sectors. The Ocellus 3D lidar is available for patent licensing.
ShuttleSCAN 3-D
How It Works
The scanner's operation is based on the principle of laser triangulation. The ShuttleSCAN contains an imaging sensor; two lasers mounted on opposite sides of the imaging sensor; and a customized, on-board processor for processing the data from the imaging sensor. The lasers are oriented at a given angle and surface height based on the size of the objects being examined. For inspecting small details, such as defects in space shuttle tiles, the scanner is positioned close to the surface. This creates a small field of view but with very high resolution. For scanning larger objects, such as in a robotic vision application, the scanner can be positioned several feet above the surface. This increases the field of view but results in slightly lower resolution. The laser projects a line on the surface, directly below the imaging sensor. For a perfectly flat surface, this projected line will be straight. As the ShuttleSCAN head moves over the surface, defects or irregularities above and below the surface cause the line to deviate from perfectly straight. The SPACE processor's proprietary algorithms interpret these deviations in real time and build a representation of the defect that is then transmitted to an attached PC for triangulation and 3-D display or printing. Real-time volume calculation of the defect is a capability unique to the ShuttleSCAN system.
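A minimal sketch of the triangulation geometry follows (Python; the pixel scale and laser angle are illustrative assumptions, not ShuttleSCAN parameters):

```python
import math

def height_from_deviation(pixel_deviation, pixels_per_mm, laser_angle_deg):
    """Estimate a surface height change from the lateral deviation of the
    projected laser line on the imaging sensor (laser triangulation).

    With the laser at angle theta from the surface normal, a height change h
    shifts the projected line laterally by h * tan(theta) in the object plane.
    """
    deviation_mm = pixel_deviation / pixels_per_mm
    return deviation_mm / math.tan(math.radians(laser_angle_deg))

# Hypothetical example: a 12-pixel deviation at 50 px/mm with the laser at 30 degrees
print(height_from_deviation(12, 50.0, 30.0))  # height change in mm (~0.42)
```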
Why It Is Better
The benefits of the ShuttleSCAN 3-D system are unique in the industry. No other 3-D scanner offers its combination of speed, resolution, size, power efficiency, and versatility. In addition, ShuttleSCAN can be used as a wireless instrument, unencumbered by cables. Traditional scanning systems trade off resolution against speed; ShuttleSCAN's onboard SPACE processor eliminates this tradeoff. The system scans at speeds greater than 600,000 points per second, with a resolution smaller than 0.001 inch. Results of the scan are available in real time, whereas conventional systems scan over the surface, analyze the scanned data, and display the results long after the scan is complete.
Receiver for Long-distance, Low-backscatter LiDAR
The NASA receiver is specifically designed for use in coherent LiDAR systems that leverage high-energy (i.e., > 1 mJ) fiber laser transmitters. Within the receiver, an outgoing laser pulse from the high-energy laser transmitter is precisely manipulated using robust dielectric and coated optics including mirrors, waveplates, a beamsplitter, and a beam expander. These components appropriately condition and direct the high-energy light out of the instrument to the atmosphere for measurement. Lower-energy atmospheric backscatter that returns to the system is captured, manipulated, and directed using several of the previously noted high-energy-compatible bulk optics. The beamsplitter redirects the return signal to mirrors and a waveplate ahead of a mode-matching component that couples the signal into a fiber optic cable routed to a 50/50 coupler and photodetector. The receiver's hybrid optical design capitalizes on the advantages of both high-energy bulk optics and fiber optics, resulting in an order-of-magnitude enhancement in performance, enhanced functionality, and increased flexibility that make it ideal for long-distance or low-backscatter LiDAR applications.
The related patent is now available to license. Please note that NASA does not manufacture products itself for commercial sale.