Expanded COBRA Oculometrics (ECO) to Include Perimetry and Directional Assessment of Visual and Visuomotor Performance

Health Medicine and Biotechnology
Expanded COBRA Oculometrics (ECO) to Include Perimetry and Directional Assessment of Visual and Visuomotor Performance (TOP2-324)
Oculometric expansion for localized retinal and visual field impairments
Overview
Injury and disease can cause localized impairment of the retina and visual cortex. However, standard functional clinical assessment of sub-regions of the visual field and retina employs a coarse spatial test, called perimetry, that is insensitive to milder localized defects and vulnerable to eye-movement artifacts. NASA Ames Research Center’s current patented suite of COBRA oculometrics (TOP2-268) provides assessment of general retinal/visual health and performance for the entire visual field or retina, but would not detect defects limited to a small, specific location on the retina or in the visual field. NASA Ames has developed a novel Expanded COBRA Oculometrics (ECO) technology, which enhances COBRA by systematically probing “localized” impairment of specific portions of the retina or visual field that could result from patchy retinal or brain degenerative disease, brain tumors, strokes, etc., and by extending the range of retina or visual field tested using a set of concentric rings divided into octants (or quadrants).

The Technology
Expanded COBRA Oculometrics (ECO) extends the oculometric assessment into the peripheral visual field and subdivides the visual field into sub-regions defined by their eccentricity (ring) and angular range (sector), enabling more direct comparison with current state-of-the-art clinical imaging (e.g., OCT (Optical Coherence Tomography)) and functional testing (e.g., perimetry) methodologies. ECO improves on COBRA in two ways: 1) ECO probes multiple rings at multiple eccentricities (nominally at ~3 and ~6 deg of eccentricity, to correspond to classic OCT imaging measures), and 2) ECO subdivides the conventional COBRA analysis into angular sectors (nominally into Nasal, Temporal, Superior, and Inferior quadrants, again to correspond to classic OCT imaging measures, but potentially into more refined octants) to sub-sample the retina (monocularly) or the visual field (binocularly). The data are thus parceled into smaller sub-regions organized in polar coordinates that correspond to classic OCT and other clinical imaging measures. This enables ECO to detect spatially localized impairments that would otherwise be blurred out by larger healthy regions of the retina/brain, and allows for direct comparison with clinical OCT results and other clinical imaging systems. ECO sectors/rings could also be tailored for comparison with the brain MRI/A (Magnetic Resonance Imaging/Angiography) or CAT (Computed Axial Tomography) scans commonly used in neurology clinics, so that the structural damage or pathology revealed by these standard imaging techniques can be correlated with actual loss of function in corresponding sub-regions of the visual field, in support of both clinical and research applications.
Example ECO dataset/plot --
TOP: Blue dashed line indicates the average oculometric value across all directions. Red line indicates the directional tuning across eight different octants and differs significantly from a circle, indicating localized variation in performance across the retina and sectors of the visual field (p < 0.001).
Bottom: OCT data set illustrating its quadrant analysis.
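The ring/sector parceling described above can be illustrated with a short sketch that assigns a visual-field location to an eccentricity ring and a quadrant in polar coordinates. The ring edges, quadrant names, and the right-eye naming convention below are illustrative assumptions chosen to match the nominal ~3 and ~6 deg rings and Nasal/Temporal/Superior/Inferior quadrants mentioned in the text, not the actual ECO parameters.

```python
import math

# Hypothetical ring edges (deg of eccentricity): the two bands are centered
# near ~3 and ~6 deg to echo the nominal ECO rings described above.
RING_EDGES = [1.5, 4.5, 7.5]
# Quadrant names in counterclockwise order, assuming a right-eye convention.
QUADRANTS = ["Temporal", "Superior", "Nasal", "Inferior"]

def eco_bin(x_deg, y_deg):
    """Return (ring_index, quadrant_name) for a visual-field point in degrees,
    or None if the point falls outside the tested eccentricity range."""
    ecc = math.hypot(x_deg, y_deg)          # polar radius = eccentricity
    ring = None
    for i in range(len(RING_EDGES) - 1):
        if RING_EDGES[i] <= ecc < RING_EDGES[i + 1]:
            ring = i
            break
    if ring is None:
        return None
    # Polar angle, rotated 45 deg so quadrant boundaries fall between the axes
    angle = (math.degrees(math.atan2(y_deg, x_deg)) + 45.0) % 360.0
    return ring, QUADRANTS[int(angle // 90.0)]
```

For example, a stimulus at (3, 0) deg lands in the inner ring's Temporal quadrant, while (0, 6) deg lands in the outer ring's Superior quadrant; per-trial oculometric measures binned this way can then be averaged within each ring/sector for comparison with OCT quadrant analyses.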
Benefits
  • Expanded COBRA Oculometrics (ECO) can be as quick as a 5-minute test
  • Enhanced precision and depth of data analysis - advances COBRA technology by separately examining sub-regions of the retinae or of the visual field
  • Provides enhanced sensitivity to detect impairment even if it is restricted to a small sub-region of the retina or visual field
  • By design, ECO provides additional direct information about the spatial location of the impairment that conventional COBRA does not
  • Expands the technology’s scope - the added rings of ECO enable the examination of visual function in more eccentric regions of the retina and visual field
  • Offers a comprehensive assessment of visual function - measures subtle perceptual impairment of motion processing that can otherwise resist detection even in the presence of pathology
  • Provides early detection of visual impairment due to neural degeneration - assesses several sensitive local performance measures that may deteriorate due to deficits in synaptic function before actual neural death (and blindness) occurs
  • Prevents data contamination or blurring caused by eye-movement artifacts - monitors eye position to ensure accurate stimulus presentation in the proper peripheral locations of the visual field

Applications
  • Healthcare
  • Neuroscience Research
  • Medical Monitoring and Telemedicine
  • Training and Simulation
  • Sports Performance Analysis
  • Assistive Technologies
  • Clinical research facilities
  • Neurology clinics
  • Ophthalmology clinics
  • Emergency Department (stroke and TBI screening)
  • Law enforcement (screening for DUI)
  • Evaluation of effectiveness (and adverse effects) of therapeutic intervention
Technology Details

Health Medicine and Biotechnology
TOP2-324
ARC-18890-1
https://ntts.arc.nasa.gov/file/arc/attachment/2023/ARC-18890-1_1675699288/HRPIWSPoster2023StanfordFin.pdf
https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1504628/full
https://www.frontiersin.org/journals/ophthalmology/articles/10.3389/fopht.2024.1354892/full
https://technology.nasa.gov/patent/TOP2-268
Similar Results
Oculometric Testing for Detecting/Characterizing Mild Neural Impairment
To assess various aspects of dynamic visual and visuomotor function, including peripheral attention, spatial localization, perceptual motion processing, and oculomotor responsiveness, NASA developed a simple five-minute clinically relevant test that measures and computes more than a dozen largely independent eye-movement-based (oculometric) measures of human neural performance. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance and may prove to be a useful assessment tool for mild functional neural impairments across a wide range of etiologies and brain regions. The technology may be useful to clinicians to localize affected brain regions following trauma, degenerative disease, or aging; to characterize and quantify clinical deficits; to monitor recovery of function after injury; and to detect operationally relevant altered or impaired visual performance at subclinical levels. This novel system can be used as a sensitive screening tool by comparing the oculometric measures of an individual to a normal baseline population, or from the same individual before and after exposure to a potentially harmful event (e.g., a boxing match, football game, combat tour, extended work schedule with sleep disruption, blast or toxic exposure, space mission), or on an ongoing basis to monitor performance for recovery to baseline. The technology provides a set of largely independent metrics of visual and visuomotor function that are sensitive and reliable within and across observers, yielding a signature multidimensional impairment vector that can be used to characterize the nature of a mild deficit, not just simply detect it.
Initial results from peer-reviewed studies of Traumatic Brain Injury, sleep deprivation with and without caffeine, and low-dose alcohol consumption have shown that this NASA technology can be used to assess subtle deficits in brain function before overt clinical symptoms become obvious, as well as the efficacy of countermeasures.
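The comparison of an individual's oculometric measures against a normal baseline population can be sketched as a per-metric z-score computation, yielding the kind of multidimensional impairment vector described above. The metric names and numbers below are purely illustrative assumptions, not actual COBRA metrics or clinical values.

```python
import statistics

def impairment_vector(individual, baseline):
    """Sketch of a baseline comparison: for each metric, compute how many
    standard deviations the individual deviates from the population mean.
    individual: {metric: value}; baseline: {metric: [population values]}.
    Returns {metric: z-score}."""
    vector = {}
    for metric, value in individual.items():
        pop = baseline[metric]
        mu = statistics.mean(pop)
        sigma = statistics.stdev(pop)
        vector[metric] = (value - mu) / sigma
    return vector

# Illustrative (made-up) baseline population and one subject's measures
baseline = {"pursuit_latency_ms": [95, 100, 105, 110, 90],
            "initial_accel_deg_s2": [48, 52, 50, 54, 46]}
subject = {"pursuit_latency_ms": 130, "initial_accel_deg_s2": 40}
z = impairment_vector(subject, baseline)
```

The same function applied to pre- and post-exposure measures from one individual would support the before/after monitoring use case; the pattern of large-magnitude z-scores across metrics forms the "signature" that characterizes, rather than merely detects, a deficit.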
Computer-Brain Interface for Display Control
The basis of the NASA innovation is the brain signal created by flashing light, referred to as a Visually-Evoked Cortical Potential (VECP). The VECP brain signal can be detected by electroencephalogram (EEG) measurements recorded by electrode sensors placed over the brain’s occipital lobe. In the case of the NASA innovation, the flashing light is embedded as an independent function in an electronic display, e.g., a backlit LCD or OLED display. The frequency of the flashing light can be controlled separately from the display refresh-rate frequency so as to provide a large number of different frequencies for identifying specific display pixels or pixel regions. Also, the independently controlled flashing allows flashing rates to be chosen such that the display user sees no noticeable flickering. Further, because the VECP signal is correlated with the frequency of the signal in specific regions of the display, the approach determines the absolute location of eye fixation, eliminating the need to calibrate the gaze tracker to the display. Another key advantage of this novel method of brain-display eye gaze tracking is that it is only sensitive to where the user is focused and attentive to the information being displayed. Conventional optical eye-tracking devices detect where the user is looking, regardless of whether they are paying attention to what they are seeing. An early-stage prototype has proven the viability of this innovation. NASA seeks partners to continue development and commercialization.
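The frequency-tagging idea above can be illustrated with a toy sketch: each display region flickers at its own tag frequency, and the fixated region is inferred as the tag frequency with the most power in the occipital EEG signal. This is a generic single-bin DFT demonstration under assumed sample rates and frequencies, not NASA's implementation.

```python
import math

def tone_power(signal, fs, freq):
    """Power of `signal` (sampled at `fs` Hz) at `freq` Hz via a single-bin DFT."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def fixated_region(signal, fs, tag_freqs):
    """Return the index of the tag frequency that dominates the EEG signal."""
    powers = [tone_power(signal, fs, f) for f in tag_freqs]
    return powers.index(max(powers))

# Simulate 1 s of an idealized EEG response entrained at 13 Hz,
# with candidate region tags at 11, 13, and 17 Hz (all values assumed).
fs = 250
eeg = [math.sin(2 * math.pi * 13 * t / fs) for t in range(fs)]
region = fixated_region(eeg, fs, [11, 13, 17])  # → 1 (the 13 Hz region)
```

A real system would additionally filter and average the EEG over multiple flicker cycles, since the VECP response is far noisier than this idealized sine wave.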
First Responders
Subcutaneous Structure Imager
Current subcutaneous vessel imagers use large, multiple, and often separate assemblies with complicated optics to image subcutaneous structures as two-dimensional maps on a wide monitor, or as maps extracted by a computer and focused onto the skin by a video projection. The scattering of infrared light that takes place during this process produces images that are shadowy and distorted. By contrast, Glenn's innovative approach offers a relatively compact and inexpensive alternative to the conventional setup, while also producing clearer images that can be rendered in either two or three dimensions. Glenn's device uses off-the-shelf near-infrared technology that is not affected by melanin content and can also operate in dark environments. In Glenn's novel subcutaneous imager, a camera is configured to generate a video frame. Connected to the camera is a processor that receives the signal for the video frame and adjusts the thresholds for darkness and whiteness. The result is that the vein (or other subcutaneous structure) shows very dark, while other surrounding features (which would register as gray) become closer to white due to the heightened contrast between thresholds. With no complex algorithms required, the image is presented in real time on a display, yielding immediate results. Glenn's advanced technology also allows the operator to achieve increased depth perception through the synchronization of a pair of imaging devices. Additionally, the novel use of a virtual-reality headset affords a three-dimensional view of the field, thereby improving the visualization of veins. In short, Glenn's researchers have produced an inexpensive, lightweight, high-utility device for locating and identifying subcutaneous structures in patients.
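The darkness/whiteness thresholding described above can be sketched in a few lines: pixels below a darkness threshold are pushed to black (vein), pixels above a whiteness threshold are pushed to white (surrounding tissue), and the band in between is linearly stretched. The 8-bit grayscale frame and the specific threshold values are illustrative assumptions.

```python
def stretch_contrast(frame, dark=80, white=150):
    """Sketch of the contrast-thresholding idea: frame is a 2-D list of 8-bit
    grayscale values; returns the thresholded frame with heightened contrast."""
    out = []
    for row in frame:
        new_row = []
        for px in row:
            if px <= dark:
                new_row.append(0)        # vein: render very dark
            elif px >= white:
                new_row.append(255)      # surrounding tissue: render white
            else:
                # linearly stretch the remaining gray band between thresholds
                new_row.append(int(255 * (px - dark) / (white - dark)))
        out.append(new_row)
    return out

# A one-row "frame": a dark vein pixel, a bright tissue pixel, a mid gray
result = stretch_contrast([[60, 200, 115]])  # → [[0, 255, 127]]
```

Because this is a simple per-pixel operation, it can run in real time on each video frame, consistent with the immediate display of results described above.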
Video Acuity Measurement System
The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The Video Acuity measurement system comprises a camera with associated optics and sensor; processing elements, including digital compression; transmission over an electronic network; and an electronic display for viewing by a human viewer. The quality of a video system impacts the ability of the human viewer to perform public safety tasks, such as reading automobile license plates, recognizing faces, and recognizing handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other effects. This is because it does not rely on any particular theoretical model of imaging, but simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers that it yields have immediate and concrete meaning. Furthermore, they can be related to the human visual acuity needed to do the task. The Video Acuity measurement system uses different sets of optotypes and uses automated letter recognition to simulate the human observer.
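Since the metric is structurally identical to human visual acuity, the final computation can be sketched as finding the smallest optotype size that the end-to-end video chain supports recognizing, then expressing it as a Snellen-style fraction. The 75% recognition criterion, the 5-arcmin standard letter height, and the example numbers are conventional assumptions for illustration, not the published Video Acuity procedure.

```python
def video_acuity(results, criterion=0.75):
    """Sketch: results maps optotype letter height (arcmin) to the fraction of
    letters the automated recognizer got correct through the video chain.
    Returns the Snellen denominator (20/20 = recognizing a 5-arcmin letter),
    or None if no tested size met the criterion."""
    passed = [size for size, frac in results.items() if frac >= criterion]
    if not passed:
        return None
    smallest = min(passed)          # smallest reliably recognized letter
    return 20 * smallest / 5.0      # e.g., a 10-arcmin threshold → 20/40

# Illustrative recognition rates at three optotype sizes
acuity = video_acuity({5.0: 0.40, 10.0: 0.80, 20.0: 0.98})  # → 40.0, i.e. 20/40
```

A real procedure would interpolate a psychometric function across sizes rather than take the smallest passing size, but the output has the same concrete meaning: the acuity a human would need to do the task through that video system.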
Spatial Standard Observer (SSO)
The Spatial Standard Observer (SSO) provides a tool for measuring the visibility of an element, or the visual discriminability of two elements. The device may be used whenever it is necessary to measure or specify visibility or visual intensity. The SSO is based on a model of human vision and has been calibrated with an extensive set of human test data. The SSO operates on a digital image or a pair of digital images. It computes a numerical measure of the perceptual strength of the single image, or of the visible difference between the two images. The visibility measurements are provided in units of Just Noticeable Differences (JND), a standard measure of perceptual intensity; a target that is just visible has a measure of 1 JND. The SSO will be useful in a wide variety of applications, most notably in the inspection of displays during the manufacturing process. It is also useful for evaluating vision from unpiloted aerial vehicles (UAVs), predicting the visibility of UAVs from other aircraft or from the control tower of aircraft on runways, measuring the visibility of damage to aircraft and to the shuttle orbiter, evaluating the legibility of text, icons, or symbols in a graphical user interface, specifying camera and display resolution, estimating the quality of compressed digital video, and predicting the outcomes of corrective laser eye surgery.
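The overall shape of such a visibility computation, operating on a pair of images and returning a single perceptual number, can be illustrated with a toy sketch. The real SSO uses a calibrated model of human vision; here the pooling exponent, the scale constant, and the unweighted pixel differences are all arbitrary stand-ins, so the output is only JND-like in structure, not calibrated.

```python
def jnd_estimate(img_a, img_b, beta=2.4, k=0.05):
    """Toy visibility measure for two equal-sized 2-D grayscale images:
    Minkowski-pool the absolute pixel differences, then scale by an arbitrary
    constant k so that ~1.0 would mean "just visible". Not the calibrated SSO."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(img_a, img_b)
             for a, b in zip(row_a, row_b)]
    pooled = sum(d ** beta for d in diffs) ** (1.0 / beta)
    return k * pooled

# Identical images differ by 0 JND; any pixel difference yields a positive value
same = jnd_estimate([[10, 20]], [[10, 20]])      # → 0.0
faint = jnd_estimate([[10, 20]], [[12, 20]])     # small positive value
```

The calibrated SSO additionally filters the difference image through a model of human contrast sensitivity before pooling, which is what anchors its output to true JND units.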
Stay up to date, follow NASA's Technology Transfer Program on:
Facebook, X, LinkedIn, YouTube