Computer-Brain Interface for Display Control (LAR-TOPS-369)

Communications
Visually induced brain signals determine eye-gaze location on a display
Overview
NASA has developed a novel brain-computer display interface that improves a user’s ability to interact with a computer by automatically and continuously locating the position of the user’s gaze on a computer display. This capability can significantly increase the user’s ability to communicate with the computer and the systems the computer controls. Unlike commercial optical eye-tracking devices, the NASA innovation matches signals from the visual cortex region of the brain to invisible signals embedded in the displayed information to directly determine the location on the display where the user is looking. A few electrodes arranged over the back of the user’s head capture these brain signals. Importantly, the NASA innovation indicates eye-gaze location only for the area of the display on which the user is focused or paying attention, not simply where the user’s eyes are pointed. This technology may be paired with other eye-gaze technology to provide robust gaze tracking without requiring the user to calibrate the gaze-tracking system. The inventors developed this innovation specifically for aircraft and space vehicle cockpits, where users rely heavily on computer display interaction but also face many competing distractions and tasks.

The Technology
The basis of the NASA innovation is the brain signal created by flashing light, referred to as a Visually-Evoked Cortical Potential (VECP). The VECP brain signal can be detected by electroencephalogram (EEG) measurements recorded by electrode sensors placed over the brain’s occipital lobe. In the NASA innovation, the flashing light is embedded as an independent function in an electronic display, e.g., a backlit LCD or OLED display. The frequency of the flashing light can be controlled separately from the display refresh rate, providing a large number of different frequencies for identifying specific display pixels or pixel regions. The independently controlled flashing also allows flashing rates to be chosen such that the display user sees no noticeable flicker. Further, because the VECP signal is correlated with the flashing frequency assigned to each specific region of the display, the approach determines the absolute location of eye fixation, eliminating the need to calibrate the gaze tracker to the display. Another key advantage of this novel method of brain-display eye-gaze tracking is that it is only sensitive to where the user is focused and attentive to the information being displayed. Conventional optical eye-tracking devices detect where the user is looking, regardless of whether they are paying attention to what they are seeing.

An early-stage prototype has proven the viability of this innovation. NASA seeks partners to continue development and commercialization.
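The frequency-tagging idea can be illustrated with a brief sketch: each display region flickers at its own rate, and the EEG power at each candidate frequency is compared to infer which region the user is fixating. The region names, flicker frequencies, sampling rate, and simple FFT-based power estimate below are illustrative assumptions, not NASA's actual signal-processing chain.

```python
import numpy as np

# Hypothetical flicker frequencies assigned to display regions (Hz).
REGION_FREQS = {"upper_left": 7.2, "upper_right": 8.4,
                "lower_left": 9.6, "lower_right": 10.8}
FS = 256  # assumed EEG sampling rate in Hz

def band_power(eeg, fs, freq, half_width=0.2):
    """Estimate spectral power of an occipital EEG channel near `freq`."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= freq - half_width) & (freqs <= freq + half_width)
    return spectrum[mask].sum()

def decode_fixated_region(eeg, fs=FS):
    """Return the region whose flicker frequency dominates the EEG response."""
    powers = {name: band_power(eeg, fs, f) for name, f in REGION_FREQS.items()}
    return max(powers, key=powers.get)

# Example: a synthetic 2-second occipital recording dominated by a 9.6 Hz response.
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 9.6 * t) + 0.5 * np.random.randn(t.size)
print(decode_fixated_region(eeg))  # expected: "lower_left"
```

Because each region's frequency is known in advance, the strongest spectral response directly identifies the fixated region, which is why no per-user calibration to the display is needed.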
Schematic of the computer-brain interface with display control.
Benefits
  • Improves and speeds up a user’s interaction with a computer via the computer display.
  • Enables hands-free cursor movement and selection control.
  • Increases a user’s ability to do other tasks while using their computer.
  • Increases the rate of interaction between a user and their computer, particularly important for situations where displays are becoming large and unwieldy for conventional cursor control.
  • Provides insight to automated assistants regarding where the user is looking and the information they have accessed from the display.

Applications
  • Aerospace: Advanced cockpit computer display systems
  • Health: Hands-free computer control for handicapped persons
  • Electronics: Improved ability to interact with the display environment for use in computer games, virtual reality, augmented reality, and other time sensitive human computer interaction environments
Technology Details

Communications
LAR-TOPS-369
LAR-19816-1, LAR-19816-1-CON
Similar Results
A NASA researcher using the technology
Oculometric Testing for Detecting/Characterizing Mild Neural Impairment
To assess various aspects of dynamic visual and visuomotor function, including peripheral attention, spatial localization, perceptual motion processing, and oculomotor responsiveness, NASA developed a simple five-minute clinically relevant test that measures and computes more than a dozen largely independent eye-movement-based (oculometric) measures of human neural performance. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance and may prove to be a useful assessment tool for mild functional neural impairments across a wide range of etiologies and brain regions. The technology may be useful to clinicians to localize affected brain regions following trauma, degenerative disease, or aging, to characterize and quantify clinical deficits, to monitor recovery of function after injury, and to detect operationally relevant altered or impaired visual performance at subclinical levels. This novel system can be used as a sensitive screening tool by comparing the oculometric measures of an individual to those of a normal baseline population, to measures from the same individual before and after exposure to a potentially harmful event (e.g., a boxing match, football game, combat tour, extended work schedule with sleep disruption, blast or toxic exposure, space mission), or on an ongoing basis to monitor performance for recovery to baseline. The technology provides a set of largely independent metrics of visual and visuomotor function that are sensitive and reliable within and across observers, yielding a signature multidimensional impairment vector that can be used to characterize the nature of a mild deficit, not simply detect it. Initial results from peer-reviewed studies of traumatic brain injury, sleep deprivation with and without caffeine, and low-dose alcohol consumption have shown that this NASA technology can be used to assess subtle deficits in brain function before overt clinical symptoms become obvious, as well as the efficacy of countermeasures.
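As a rough illustration of the screening idea (not the actual NASA metric battery or clinical thresholds), an individual's oculometric measures can be expressed as z-scores against a normative baseline, forming the multidimensional impairment vector described above. The metric names, baseline values, and cutoff below are placeholders.

```python
# Placeholder oculometric metrics; the real battery includes more than a dozen.
METRICS = ["pursuit_latency_ms", "saccade_amplitude_deg", "tracking_gain"]

def impairment_vector(subject, baseline_mean, baseline_std):
    """Z-score each metric against a normative baseline population."""
    return {m: (subject[m] - baseline_mean[m]) / baseline_std[m] for m in METRICS}

def flag_possible_impairment(z, cutoff=2.0):
    """Flag metrics deviating more than `cutoff` standard deviations (illustrative only)."""
    return [m for m, v in z.items() if abs(v) > cutoff]

baseline_mean = {"pursuit_latency_ms": 180.0, "saccade_amplitude_deg": 9.5, "tracking_gain": 0.92}
baseline_std = {"pursuit_latency_ms": 15.0, "saccade_amplitude_deg": 1.0, "tracking_gain": 0.05}
subject = {"pursuit_latency_ms": 230.0, "saccade_amplitude_deg": 9.3, "tracking_gain": 0.78}

z = impairment_vector(subject, baseline_mean, baseline_std)
print(flag_possible_impairment(z))  # e.g. ['pursuit_latency_ms', 'tracking_gain']
```

The same comparison can be run against an individual's own pre-exposure baseline rather than a population norm, supporting the before/after and ongoing-monitoring uses described above.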
Device prototype in use
Optical Head-Mounted Display System for Laser Safety Eyewear
The system combines laser goggles with an optical head-mounted display that shows a real-time video camera image of the laser beam. Users can visualize the laser beam while their eyes remain protected. The system also allows for numerous additional features in the optical head-mounted display, such as digital zoom, overlays of additional information such as power meter data, a Bluetooth wireless interface, and digital overlays of beam location. The system is built on readily available components and can be used with existing laser eyewear. The software converts the color being observed to another color that transmits through the goggles. For example, if a red laser is being used and red-blocking glasses are worn, the software can convert red to blue, which is readily transmitted through the laser eyewear. Similarly, color video can be converted to black-and-white to transmit through the eyewear.
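The color-remapping step can be sketched as a simple per-frame channel swap. The use of OpenCV and the specific red-to-blue mapping here are assumptions for illustration, not the actual system software.

```python
import cv2

def remap_red_to_blue(frame_bgr):
    """Swap the red and blue channels so a red beam appears blue,
    allowing it to pass through red-blocking laser eyewear."""
    b, g, r = cv2.split(frame_bgr)
    return cv2.merge((r, g, b))

# Example: process a live camera feed frame by frame (illustrative).
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("laser-safe view", remap_red_to_blue(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```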
Combat training course
System for Incorporating Physiological Self-Regulation Challenge into Parcourse/Orienteering Type Games and Simulations
Although biofeedback is an effective treatment for various physiological problems and can be used to optimize physiological functioning in many ways, its benefits can only be attained through a number of training sessions, and such gains can only be maintained over time through regular practice. However, adherence to regular training has been a persistent problem in the field of physiological self-regulation, limiting its utility. As with any exercise, incorporating biofeedback training into another activity encourages participation and enhances its usefulness.
Front Image
Low Cost Star Tracker Software
The current Star Tracker package comprises a Lumenera LW230 monochrome machine-vision camera and a FUJINON HF35SA-1 35mm lens. The star tracker cameras are all connected to and powered by the PC/104 stack via USB 2.0 ports. The software is written in C++ and can easily be adapted to other camera and lensing platforms by setting new variables in the software for new focal conditions. To identify stars in images, the software contains a star database derived from the 118,218-star Hipparcos catalog [1]. The database contains a list of every star pair within the camera field of view and the angular distance between those pairs, as well as the inertial position information for each individual star taken directly from the Hipparcos catalog. To keep the star database small, only stars of magnitude 6.5 or brighter were included. The star tracking process begins when image data is retrieved by the software from the data buffers in the camera. The image is translated into a binary image via a threshold brightness value so that on (bright) pixels are represented by 1s and off (dark) pixels are represented by 0s. The binary image is then searched for blobs, which are connected groups of on pixels. These blobs represent unidentified stars or other objects such as planets, deep-sky objects, other satellites, or noise. The centroids of the blob locations are computed, and a unique pattern recognition algorithm is applied to identify which, if any, stars are represented. During this process, false stars are effectively removed and only repeatedly and uniquely identifiable stars are stored. After stars are identified, another algorithm is applied to their position information to determine the attitude of the satellite. The attitude is computed as a set of Euler angles: right ascension (RA), declination (Dec), and roll. The first two Euler angles are computed using a linear system derived from vector algebra and the information of two identified stars in the image. The roll angle is computed using an iterative method that relies on the information of a single star and the first two Euler angles. [1] ESA, 1997, The Hipparcos and Tycho Catalogues, ESA SP-1200
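The first steps of the pipeline, thresholding the frame into a binary image and extracting blob centroids, can be sketched as follows. The threshold value and the use of SciPy's connected-component labeling are assumptions for illustration; the actual flight software is written in C++ and uses its own blob and pattern-recognition routines.

```python
import numpy as np
from scipy import ndimage

def find_star_centroids(image, threshold=128):
    """Threshold a grayscale frame into a binary image (bright pixels = 1),
    label connected blobs, and return their centroids as (row, col) pairs."""
    binary = image > threshold                       # on/off pixels
    labels, n_blobs = ndimage.label(binary)          # connected groups of on pixels
    return ndimage.center_of_mass(binary, labels, range(1, n_blobs + 1))

# Example: a synthetic frame with two bright "stars".
frame = np.zeros((64, 64), dtype=np.uint8)
frame[10:12, 20:22] = 255
frame[40, 50] = 200
print(find_star_centroids(frame))  # approximately [(10.5, 20.5), (40.0, 50.0)]
```

The resulting centroid list is what the pattern-recognition stage would match against the star-pair angular-distance database to identify stars and, ultimately, compute the attitude Euler angles.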
Two young women playing video games
Game and Simulation Control
The technology is constructed to allow modulation of player inputs to a video game or simulation from a user interface device based on the player's psychophysiological state. The invention exploits current wireless motion-sensing technologies to utilize physiological signals for input modulation. These signals include, but are not limited to, heart rate, muscle tension, and brain wave activity. The current capability has been successfully prototyped using the Nintendo Wii console and wireless Wii remote. The experience of electronic game play may also be enhanced by introducing a multiplayer component in which various players collaboratively pursue the goals of the game. The device can also enhance multiplayer experiences such as a video game tournament, in which the skill set required in competitive game play is increased by allowing players to interact with the game, and compete with one another, on a psychophysiological level. This system is compatible with the Nintendo Wii, and prototypes have been designed and are being developed to extend this capability to the PlayStation Move, Xbox Kinect, and other similar game platforms.
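A minimal sketch of the input-modulation idea, assuming a heart-rate signal and a generic controller axis; the gain formula and parameter values are illustrative placeholders, not the patented method.

```python
def modulated_input(raw_axis, heart_rate, resting_hr=65.0, sensitivity=0.02):
    """Attenuate a controller axis value (-1..1) as heart rate rises above
    the player's resting baseline, rewarding physiological self-regulation."""
    arousal = max(0.0, heart_rate - resting_hr)     # beats/min above baseline
    gain = max(0.0, 1.0 - sensitivity * arousal)    # 1.0 when calm, toward 0.0 when highly aroused
    return raw_axis * gain

# Example: a calm player keeps full control authority; a stressed player loses it.
print(modulated_input(0.8, heart_rate=66))   # ~0.784
print(modulated_input(0.8, heart_rate=110))  # ~0.08
```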