Assistive Technologies

Mobility
Circuit Design
Optics, Machine Vision, OCR
3D Printing
AI, Software
Haptic Feedback
Audio
Optics, Machine Vision, OCR
LAR-TOPS-123
NASA's Langley Research Center has created a new system that allows an operator to safely view the beam of a high-powered laser. Currently, viewing an otherwise invisible laser beam requires cumbersome equipment such as laser viewing cards and video cameras. This system integrates an optical head-mounted display with laser safety eyewear so that an operator can safely see a laser beam in real time while retaining freedom of movement. The display provides a picture-in-picture augmented-reality view, which can include additional information and multiple viewing options.
LAR-TOPS-101
During instrument flight training, the pilot's view through the aircraft windscreen must be restricted to simulate low-visibility conditions while still permitting the pilot to view the instrument panel. In current practice, a hood is draped across the aircraft windscreen, or the pilot wears a face mask or blackened glasses. All such methods create potentially hazardous disorientation and an unnatural environment for the trainee. In particular, the face mask and blackened glasses restrict the pilot's peripheral vision and require uncomfortable, unnatural head positions to see the entire instrument panel. Researchers at NASA's Langley Research Center have developed and tested special glasses to be worn by a pilot during instrument flight training. Using novel sensors to determine head position, the glasses restrict the view out of the aircraft windscreen while allowing the pilot to clearly see the entire instrument panel, providing a much more realistic low-visibility instrument flying experience.
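As a rough sketch of the head-position-driven occlusion logic such a device implies (the panel boundary, transition band, and sensor interface below are illustrative assumptions, not the NASA design):

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw_deg: float    # positive to the right of the aircraft centerline
    pitch_deg: float  # positive above the horizon

def lens_opacity(pose: HeadPose, panel_top_pitch_deg: float = -12.0) -> float:
    """Illustrative occlusion rule (not the NASA design): darken the view
    when the head is oriented toward the windscreen, stay clear when it is
    pitched down toward the instrument panel. The -12 degree boundary and
    the 4 degree transition band are assumed values."""
    margin = 4.0
    if pose.pitch_deg <= panel_top_pitch_deg:
        return 0.0                      # looking at the panel: fully clear
    if pose.pitch_deg >= panel_top_pitch_deg + margin:
        return 1.0                      # looking out the windscreen: opaque
    # Blend smoothly across the boundary to avoid an abrupt edge
    return (pose.pitch_deg - panel_top_pitch_deg) / margin

print(lens_opacity(HeadPose(yaw_deg=0.0, pitch_deg=-20.0)))  # panel: clear
print(lens_opacity(HeadPose(yaw_deg=0.0, pitch_deg=5.0)))    # windscreen: opaque
```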
LAR-TOPS-159
NASA Langley Research Center has developed a process methodology for making red, green, and blue LED device structures on the same substrate (wafer), which is not possible with current techniques. Today, such devices must be manufactured individually because their crystal structures differ. This innovation builds on prior NASA Langley Research Center innovations.
TOP2-102
The Spatial Standard Observer (SSO) was developed to predict the detectability of spatial contrast targets such as those used in the ModelFest project. The SSO is a lumped-parameter model that bases its predictions on visible contrast generalized energy. Visible contrast means that the contrast has been reduced by a contrast sensitivity function (CSF). Generalized energy means that the visible contrast is raised to a power higher than 2 before spatial and temporal integration. To adapt the SSO to predict the effects of variations in optical image quality on tasks, the optical component of the SSO CSF needs to be removed, leaving the neural CSF. Also, since target detection is not the typical criterion task for assessing optical image quality, the SSO concept needs to be extended to other tasks, such as Sloan character recognition.
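As a rough illustration of the generalized-energy idea described above (the CSF shape, the exponent value, and the Minkowski pooling below are simplified assumptions, not the calibrated SSO):

```python
import numpy as np

def sso_detectability(contrast_img, pixels_per_degree, beta=2.8):
    """Toy sketch of the SSO generalized-energy idea: attenuate a contrast
    image by a contrast sensitivity function (CSF), then pool the visible
    contrast with an exponent beta > 2. Square image assumed; the CSF shape
    and beta value are illustrative, not the calibrated SSO parameters."""
    n = contrast_img.shape[0]
    # Spatial-frequency grid in cycles per degree (FFT ordering)
    f = np.fft.fftfreq(n, d=1.0 / pixels_per_degree)
    fx, fy = np.meshgrid(f, f)
    radial = np.hypot(fx, fy)
    # Simple band-pass CSF as a placeholder
    csf = radial * np.exp(-radial / 8.0)
    csf /= csf.max() + 1e-12
    # "Visible contrast": contrast reduced by the CSF
    visible = np.real(np.fft.ifft2(np.fft.fft2(contrast_img) * csf))
    # Generalized energy: pool |visible contrast|^beta over space
    energy = np.mean(np.abs(visible) ** beta)
    return energy ** (1.0 / beta)

# Example: a faint Gabor-like target on a uniform background
x = np.linspace(-2, 2, 256)
xx, yy = np.meshgrid(x, x)
target = 0.05 * np.exp(-(xx**2 + yy**2)) * np.cos(2 * np.pi * 4 * xx)
print(sso_detectability(target, pixels_per_degree=64))
```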
LAR-TOPS-61
NASA's Langley Research Center researchers have developed an automatic measurement and control method for smart image enhancement. Pilots, doctors, and photographers will benefit from this innovation, which offers a new approach to image processing. Initial advantages will be seen in improved medical imaging and nighttime photography. Standard image enhancement software is unable to improve images captured under poor conditions such as low light, poor clarity, and fog. The technology consists of a set of comprehensive methods that perform well across the wide range of conditions encountered in arbitrary images, including large variations in lighting, scene characteristics, and atmospheric (or underwater) turbidity. NASA is seeking market insights on commercialization of this new technology and welcomes interest from potential producers, users, and licensees.
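NASA's specific enhancement algorithms are not disclosed in this summary; the sketch below only illustrates the general measure-then-control idea, with all statistics and parameter choices being assumptions rather than the NASA method:

```python
import numpy as np

def auto_enhance(img):
    """Generic measure-then-control enhancement sketch (not NASA's method):
    estimate how dark and how flat the image is, then pick a gamma and a
    contrast stretch accordingly. `img` is a float array scaled to [0, 1]."""
    mean_lum = float(img.mean())
    spread = float(img.std())
    # Measurement step: darker images get a stronger (smaller) gamma
    gamma = np.clip(0.4 + 1.2 * mean_lum, 0.4, 1.0)
    out = np.power(np.clip(img, 0.0, 1.0), gamma)
    # Control step: stretch contrast only when the histogram is compressed
    if spread < 0.15:
        lo, hi = np.percentile(out, [2, 98])
        out = np.clip((out - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

# Example: a synthetic dim, low-contrast frame
frame = 0.1 + 0.05 * np.random.default_rng(1).random((240, 320))
enhanced = auto_enhance(frame)
print(frame.mean(), enhanced.mean())
```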
LAR-TOPS-168
NASA Langley Research Center has developed 3-D imaging technologies (Flash LIDAR) for real-time terrain mapping and synthetic vision-based navigation. To take advantage of the information inherent in a sequence of 3-D images acquired at video rates, NASA Langley has also developed an embedded image-processing algorithm that can simultaneously correct, enhance, and derive relative motion by processing this image sequence into a high-resolution 3-D synthetic image. Traditional scanning LIDAR techniques generate an image frame by raster scanning, one laser pulse per pixel at a time, whereas Flash LIDAR acquires an image much like an ordinary camera, using a single laser pulse per frame. The benefits of the Flash LIDAR technique and the corresponding image-to-image processing enable autonomous vision-based guidance and control for robotic systems. The current algorithm offers up to eightfold image resolution enhancement as well as a six-degree-of-freedom state vector of motion in the image frame.
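The embedded algorithm itself is not disclosed in this summary; as a generic illustration of recovering a 6-degree-of-freedom motion estimate from two corresponding 3-D point sets (for example, successive Flash LIDAR frames), the sketch below uses the standard SVD/Kabsch solution with correspondences assumed known:

```python
import numpy as np

def rigid_motion_6dof(points_prev, points_curr):
    """Illustrative sketch (not NASA's algorithm): recover the rigid motion
    (rotation R, translation t) between two corresponding 3-D point sets from
    successive frames via the standard SVD/Kabsch solution, so that
    points_curr is approximately R @ points_prev + t."""
    c_prev = points_prev.mean(axis=0)
    c_curr = points_curr.mean(axis=0)
    H = (points_prev - c_prev).T @ (points_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Example: a synthetic cloud rotated by 5 degrees about z and shifted
rng = np.random.default_rng(0)
cloud = rng.uniform(-1, 1, size=(500, 3))
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.05, 0.2])
R_est, t_est = rigid_motion_6dof(cloud, cloud @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```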
LEW-TOPS-82
Scientists at NASA's Glenn Research Center have successfully developed a novel subcutaneous structure imager for locating veins in challenging patient populations, such as juvenile, elderly, dark-skinned, or obese patients. Spurred initially by the needs of pediatric sickle-cell anemia patients in Africa, Glenn's groundbreaking system includes a camera-processor-display apparatus and uses an innovative image-processing method to provide two- or three-dimensional, high-contrast visualization of veins or other vasculature structures. In addition to helping practitioners find veins in challenging populations, this system can also help novice healthcare workers locate veins for procedures such as needle insertion or excision. Compared to other state-of-the-art solutions, the imager is inexpensive, compact, and very portable, so it can be used in remote third-world areas, emergency response situations, or on military battlefields.
GSC-TOPS-34
The present innovation is a method and instrument to generate a topographic profile of an object, surface, or landscape simultaneously, i.e., in a single acquisition. Scanning LiDAR systems are most often used to achieve, in as short a time as possible, high spatial resolution and height (or depth) resolution over the maximum possible optical field of view. The disadvantage of a scanning system is the time it takes to scan, which prevents simultaneous image acquisition; this is a problem when the observer, the object, or both are moving. Diffraction grating systems are also commonly used for three-dimensional imaging, but they are limited by grating throughput efficiency and have difficulty generating a large number of spots with equal energy. Flash LiDAR systems with uniform light distribution are also used, but these suffer from adjacent-pixel crosstalk, reduced system measurement efficiency, and difficulty in giving equal intensity weighting to each pixel. The present innovation overcomes the shortfalls of these three-dimensional imaging systems by employing a simple lens system.
LAR-TOPS-347
NASA researchers have developed a compact, cost-effective imaging system using a co-linear, high-intensity LED illumination unit to minimize window reflections for background-oriented schlieren (BOS) and machine vision measurements. The imaging system, tested in NASA wind tunnels, can reduce or eliminate shadows that occur with many existing BOS and photogrammetric measurement systems; these shadows arise for a variety of reasons, including severe back-reflections from wind tunnel viewing-port windows and variations in the refractive index of the imaged volume. Due to its compact size, the system can easily fit in the space behind a typical wind tunnel's view port. As a cost-effective, compact imaging system, NASA's technology could be deployed for BOS, Tomo-BOS, photogrammetric, and general machine vision applications.
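Background-oriented schlieren itself works by comparing an image of a patterned background seen through the flow against an undisturbed reference image and estimating the apparent pixel displacements caused by refractive-index gradients. Below is a minimal, generic displacement-estimation sketch; the window size, correlation method, and synthetic example are assumptions, not the NASA system:

```python
import numpy as np

def bos_displacement(ref, distorted, win=32, step=32):
    """Generic BOS sketch: split the reference and flow-distorted background
    images into interrogation windows and estimate each window's apparent
    shift from the peak of an FFT-based circular cross-correlation."""
    h, w = ref.shape
    shifts = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            a = ref[y:y+win, x:x+win] - ref[y:y+win, x:x+win].mean()
            b = distorted[y:y+win, x:x+win] - distorted[y:y+win, x:x+win].mean()
            corr = np.real(np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap shifts larger than half the window back to negative values
            dy = dy - win if dy > win // 2 else dy
            dx = dx - win if dx > win // 2 else dx
            shifts.append((y, x, dy, dx))
    return shifts

# Example: a random-dot background shifted by one pixel as a stand-in for
# the refraction-induced displacement; expect roughly (dy, dx) = (1, 0)
rng = np.random.default_rng(2)
background = rng.random((128, 128))
shifted = np.roll(background, shift=(1, 0), axis=(0, 1))
print(bos_displacement(background, shifted)[:2])
```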
GSC-TOPS-102
NASA Goddard Space Flight Center has developed FlashPose, relative navigation measurement software and VHDL firmware for space flight missions requiring vehicle-relative and terrain-relative navigation and control. FlashPose processes real-time or recorded range and intensity images from 3D imaging sensors such as Lidars, and compares them to known models of the target surfaces to output the position and orientation of the known target relative to the sensor coordinate frame. FlashPose provides a relative navigation (pose estimation) capability to enable autonomous rendezvous and capture of non-cooperative space-borne targets. All algorithmic processing takes place in the software application, while custom FPGA firmware interfaces directly with the Ball Vision Navigation System (VNS) Lidar and provides imagery to the algorithm.
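FlashPose's internal matching algorithm is not described in this summary; as a generic illustration of model-based pose estimation from lidar data, the sketch below runs a brute-force iterative-closest-point (ICP) alignment of a known target model against a measured point cloud (the correspondence search, iteration count, and toy data are assumptions):

```python
import numpy as np

def icp_pose(model, scan, iters=20):
    """Generic model-based pose estimation sketch (not the FlashPose
    algorithm): iteratively match each transformed model point to its
    nearest scan point, then solve for the rigid transform (R, t) so that
    scan is approximately R @ model + t, i.e. the pose of the known target
    in the sensor frame."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model @ R.T + t
        # Brute-force nearest-neighbor correspondences (fine for small clouds)
        d2 = ((moved[:, None, :] - scan[None, :, :]) ** 2).sum(axis=2)
        matched = scan[np.argmin(d2, axis=1)]
        # Best-fit rigid transform between matched sets (SVD solution)
        cm, cs = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - cm).T @ (matched - cs)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        R, t = dR @ R, dR @ t + (cs - dR @ cm)
    return R, t

# Example: a toy target model observed with a small rotation and offset
rng = np.random.default_rng(3)
model = rng.uniform(-0.5, 0.5, size=(200, 3))
angle = np.deg2rad(3.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
scan = model @ R_true.T + np.array([0.05, 0.02, -0.03])
R_est, t_est = icp_pose(model, scan)
print(np.allclose(R_est, R_true, atol=1e-3))
```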
LAR-TOPS-164
Electron Beam Freeform Fabrication, or EBF3, is a process that uses an electron beam gun, a dual wire feed, and computer controls to manufacture metallic structures for building parts or tools in hours rather than days or weeks. EBF3 can manufacture complex geometries in a single operation and makes efficient use of power and feedstock. The technology has a wide range of applications, including automotive, aerospace, and rapid prototyping. It can build large metallic parts measuring feet in length, and it has been reduced in size and power to enable zero-gravity experiments conducted on NASA's Reduced Gravity aircraft.
LAR-TOPS-64
NASA's Langley Research Center researchers have a strong technology foundation in the use of electron-beam (e-beam) deposition for free-form fabrication of complex shaped metal parts. While e-beam wire deposition is of interest for rapid prototyping of metal parts, cost-effective near-net shape manufacturing, and potential use in space, it is also of intense interest for industrial welding and fabrication in a range of applications, from small components to large aerospace structures. Through significant advancements in techniques to improve control of the process, NASA greatly expands upon the capabilities of the e-beam fabrication and welding process.
LAR-TOPS-173
NASA Langley Research Center has developed a method to apply soft lithography to mold micro-scale structures into the surface of polymer and composite parts. These micro-scale polymer structures can be utilized for super-hydrophobic surfaces, drag reduction, and adhesion between composite parts.
TOP2-268
This novel technology is a screening tool that detects oculomotor signatures of neurological disorders or injury. The tool can be used to measure and monitor the severity and nature of such symptoms. Eye movements are the most frequent, shortest-latency, and biomechanically simplest voluntary motor behavior, and thus provide a model system to assess perceptual and sensory processing disturbances arising from trauma, fatigue, aging, environmental exposures, or disease states. Scientists at NASA have developed and validated a rapid, non-invasive, eye-movement-based testing system to evaluate neural health across a range of brain regions. The technology applies a 5-minute behavioral tracking task consisting of randomized step-ramp radial target motion to capture several aspects of neural responses to dynamic visual stimuli, including pursuit initiation, steady-state tracking, direction and speed tuning, pupillary responses, and eccentric gaze holding.
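The task described is a step-ramp (Rashbass-style) pursuit paradigm: the target steps away from fixation and then ramps back through it at constant velocity along a randomized radial direction. Below is a minimal trial generator; the timing, speeds, and sample rate are illustrative assumptions rather than the validated NASA protocol:

```python
import numpy as np

def step_ramp_trial(speed_deg_s, direction_deg, step_s=0.2, dur_s=1.0, fs=240):
    """Illustrative Rashbass-style step-ramp trial generator (timing, speed,
    and sample rate are assumed values, not the validated NASA protocol).
    The target steps opposite the motion direction by speed*step_s, then
    ramps at constant velocity so it crosses fixation at t = step_s."""
    t = np.arange(0, dur_s, 1.0 / fs)
    u = np.array([np.cos(np.deg2rad(direction_deg)),
                  np.sin(np.deg2rad(direction_deg))])
    # Radial position along the motion direction: -step size + velocity * t
    radial = -speed_deg_s * step_s + speed_deg_s * t
    xy = np.outer(radial, u)   # (n_samples, 2), degrees from screen center
    return t, xy

# Example: randomized directions and speeds for a short block of trials
rng = np.random.default_rng(4)
for _ in range(3):
    direction = rng.uniform(0, 360)
    speed = rng.choice([8, 12, 16, 20])          # deg/s, assumed values
    t, xy = step_ramp_trial(speed, direction)
    print(round(direction, 1), speed, xy[0], xy[np.argmin(np.abs(t - 0.2))])
```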
Stay up to date, follow NASA's Technology Transfer Program on Facebook, Twitter, LinkedIn, and YouTube.