information technology and software
PCI Assembly Design
The PCI assembly design features flash-based Field Programmable Gate Array (FPGA) technology, providing in-system reprogrammable functionality from power-on. The system hosts flash-based FPGAs, SRAMs, I/O interfaces, and clocking circuitry. The circuit accommodates in-system programming via on-board connectors and supports separate hardware implementations with either controller or add-on-card functionality.
information technology and software
Radiation Hardened 10BASE-T Ethernet Physical Interface
Currently, no radiation-hardened Ethernet interface device or circuit is available commercially. In this Ethernet solution, the portion of the PHY implemented in the FPGA is responsible for meeting the IEEE 802.3 protocol, decoding received packets and link pulses, and encoding data packets for transmission. The decoded payload data is sent to a user interface internal to the FPGA, which sends data for transmission back to the FPGA PHY. The transmit portion is composed of two AD844 op amps from Analog Devices with appropriate filtering. The receive portion is composed of a transformer, an Aeroflex Low-Voltage Differential Multi-drop device, and appropriate filtering.
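The line coding the FPGA-side PHY must encode and decode is Manchester coding, as specified for 10BASE-T in IEEE 802.3: each bit is represented by a mid-bit transition, a falling edge for 0 and a rising edge for 1. A minimal software sketch of that coding (illustrative only — the actual PHY logic would be implemented in HDL inside the FPGA):

```python
def manchester_encode(bits):
    """Manchester-encode a bit list using the IEEE 802.3 convention:
    0 -> high-to-low half-bit pair, 1 -> low-to-high half-bit pair."""
    out = []
    for b in bits:
        out.extend([1, 0] if b == 0 else [0, 1])
    return out

def manchester_decode(symbols):
    """Recover the bit list from an ideal (noise-free) half-bit stream."""
    bits = []
    for i in range(0, len(symbols), 2):
        bits.append(0 if (symbols[i], symbols[i + 1]) == (1, 0) else 1)
    return bits
```

Because every bit carries a transition, the receiver can recover the clock from the data stream itself, which is why the scheme suits a transformer-coupled link like the one described above.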
information technology and software
Hierarchical Image Segmentation (HSEG)
Currently, HSEG software is being used by Bartron Medical Imaging as a diagnostic tool to enhance medical imagery. Bartron Medical Imaging licensed the HSEG technology from NASA Goddard, adding color enhancement and developing MED-SEG, an FDA-approved tool that helps specialists interpret medical images. HSEG is available for licensing outside of the medical field (specifically for soft-tissue analysis).
optics
Video Acuity Measurement System
The Video Acuity metric is designed to provide a unique and meaningful measurement of the quality of a video system. The automated system for measuring video acuity is based on a model of human letter recognition. The Video Acuity measurement system comprises a camera with associated optics and sensor, processing elements including digital compression, transmission over an electronic network, and an electronic display for viewing by a human observer. The quality of a video system impacts the ability of the human viewer to perform public safety tasks, such as reading automobile license plates, recognizing faces, and recognizing handheld weapons. The Video Acuity metric can accurately measure the effects of sampling, blur, noise, quantization, compression, geometric distortion, and other effects. This is because it does not rely on any particular theoretical model of imaging, but simply measures performance in a task that incorporates essential aspects of human use of video, notably recognition of patterns and objects. Because the metric is structurally identical to human visual acuity, the numbers that it yields have immediate and concrete meaning. Furthermore, they can be related to the human visual acuity needed to do the task. The Video Acuity measurement system uses different sets of optotypes and uses automated letter recognition to simulate the human observer.
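An acuity measurement of this kind ultimately reduces to a threshold estimate: the smallest optotype size at which the automated letter-recognition model reaches some criterion proportion correct. A minimal sketch of that threshold step, assuming recognition rates have already been measured at several letter sizes (the function name, data, and 75% criterion are illustrative, not taken from the actual system):

```python
def acuity_threshold(sizes, prop_correct, criterion=0.75):
    """Linearly interpolate the letter size at which the proportion of
    correctly recognized optotypes crosses the criterion level.
    `sizes` is ascending; `prop_correct` is the hit rate at each size."""
    for i in range(1, len(sizes)):
        p0, p1 = prop_correct[i - 1], prop_correct[i]
        if (p0 - criterion) * (p1 - criterion) <= 0 and p0 != p1:
            frac = (criterion - p0) / (p1 - p0)
            return sizes[i - 1] + frac * (sizes[i] - sizes[i - 1])
    return None  # criterion never crossed in the measured range
```

Because the same threshold logic applies to a human observer reading a letter chart, the resulting number maps directly onto conventional visual-acuity scales.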
information technology and software
Otoacoustic Protection In Biologically-Inspired Systems
This innovation is an autonomic method capable of transmitting a neutralizing data signal to counteract a potentially harmful signal. This otoacoustic component of an autonomic unit can render a potentially harmful incoming signal inert. For self-managing systems, the technology can offer a self-defense capability that brings new levels of automation and dependability to systems.
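The neutralizing-signal idea can be illustrated with simple destructive interference: an anti-phase copy of the incoming signal sums with it to zero. A toy sketch over discrete samples (an assumed simplification for illustration, not the actual autonomic implementation):

```python
def neutralize(harmful, gain=1.0):
    """Produce an inverted (anti-phase) copy of the incoming signal.
    With gain=1.0, summing the copy with the original cancels it exactly."""
    return [-gain * s for s in harmful]

# Hypothetical incoming samples and the residual after cancellation
signal = [0.5, -1.0, 0.25, 0.75]
residual = [a + b for a, b in zip(signal, neutralize(signal))]
```

In practice an autonomic unit would have to detect and characterize the harmful signal before synthesizing the counteracting one; the cancellation step itself is the easy part.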
sensors
Method and Device for Biometric Verification and Identification
The advantage of using cardiac biometrics over existing methods is that heart signatures are more difficult to forge than other biometric identifiers. Iris scanners can be fooled by contact lenses and sunglasses, and a segment of the population does not have readable fingerprints due to age or working conditions. Previous electrocardiographic approaches employed a single template and compared that template with new test templates by means of cross-correlation or linear-discriminant analysis. The benefit of this technology over competing cardiac biometric methods is greater reliability, with a significant reduction in error rates: it creates a probabilistic model of a person's electrocardiographic features instead of a single signal template of the average heartbeat. The probabilistic model, a Gaussian mixture model, allows various modes of the feature distribution, in contrast to a template model that characterizes only a mean waveform. Another advantage is that the model uses both physiological and anatomical characterization of the heart, unlike other methods that mainly use only physiological characterization. By combining features from different leads, the heart is better characterized in terms of anatomical orientation, because each lead represents a different projection of the electrical vector of the heart. Thus, employing multiple electrocardiographic leads provides better performance in subject verification or identification.
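The verification step can be sketched as fitting a probabilistic model to a subject's enrollment heartbeats and scoring test heartbeats by their likelihood under that model. The sketch below uses a single diagonal Gaussian as a deliberately simplified one-component stand-in for the Gaussian mixture model described above; the feature vectors, threshold, and function names are illustrative assumptions:

```python
import math

def fit_gaussian(feature_vectors):
    """Fit a per-feature mean/variance model from enrollment heartbeats.
    Each vector would concatenate features from multiple ECG leads."""
    n, d = len(feature_vectors), len(feature_vectors[0])
    means = [sum(v[j] for v in feature_vectors) / n for j in range(d)]
    vars_ = [sum((v[j] - means[j]) ** 2 for v in feature_vectors) / n + 1e-9
             for j in range(d)]  # small floor avoids zero variance
    return means, vars_

def log_likelihood(model, x):
    """Diagonal-Gaussian log-density of one test feature vector."""
    means, vars_ = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, vars_))

def verify(model, test_beats, threshold):
    """Accept the identity claim if the average log-likelihood of the
    test heartbeats under the enrolled model exceeds the threshold."""
    avg = sum(log_likelihood(model, b) for b in test_beats) / len(test_beats)
    return avg >= threshold
```

A real mixture model would sum several weighted Gaussian components, capturing the multiple modes of the feature distribution that a single template cannot.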
information technology and software
Shaped External Occulter Software Suite
Assessment of the optical performance of a given occulter design requires two-dimensional Fresnel diffraction to propagate the light from the external occulter to the telescope, followed by further diffractive propagation through the telescope to the focal plane. This software performs these two propagations, with the addition of manufacturing, deployment, and vibrational errors, as well as holes and deformations on the external occulter, and with the addition of wavefront and amplitude errors within the telescope. The propagation approach is parametric with respect to the wavelength of light and with respect to mispointing of the occulter and telescope relative to the star. The software propagates the ideal occulter separately from calculating errors, eliminating the need for large computers to perform these calculations.
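The core numerical operation here, Fresnel propagation of a sampled complex field, is commonly implemented with FFTs via the transfer-function method. An illustrative sketch of that single step under assumed sampling parameters (this is a generic textbook propagator, not the NASA suite itself):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a square sampled complex field a distance z using the
    Fresnel transfer function H(fx, fy) = exp(-i*pi*lambda*z*(fx^2 + fy^2)),
    with the constant exp(ikz) phase factor dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies for grid pitch dx
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because |H| = 1 everywhere, the propagator conserves energy exactly; an occulter simulation would apply the occulter's (possibly deformed) transmission mask to the field before propagating it to the telescope aperture.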
manufacturing
Fabrication of Nanopipette Arrays for Biosensing
This invention provides an array of nanopipette channels, formed and controlled in a metal-like material that supports anodization. The invention also permits selective first and second functionalizations, which may be the same or different, of first and second channel surfaces so that different reactions of a multi-component fluid flowing in these channels can be evaluated simultaneously. The materials that support anodization include aluminum, magnesium, zinc, titanium, tantalum, and niobium, referred to as "AN-metals." The relevant, controllable anodization parameters include applied electrical potential, current density, electrolyte concentration, solution pH, solution temperature, and anodization time. The channel parameters that can be controlled include pore diameter, pore density or spacing, and the maximum channel length of a pore. An anodization process is initially applied to provide a plurality of adjacent nanopipette channels having inner diameters in a selected range, such as 10-50 nanometers (nm). The nanopipette array can sense the presence of a specified component by producing a characteristic signal associated with the functionalized site in the presence of that component. Differing concentrations of the same specified component can also be estimated and controlled.
information technology and software
Inductive Monitoring System
The Inductive Monitoring System (IMS) software provides a method of building an efficient system health monitoring software module by examining data covering the range of nominal system behavior in advance and using parameters derived from that data for the monitoring task. This software module also has the capability to adapt to the specific system being monitored by augmenting its monitoring database with initially suspect system parameter sets encountered during monitoring operations, which are later verified as nominal. While the system is offline, IMS learns nominal system behavior from archived system data sets collected from the monitored system or from accurate simulations of the system. This training phase automatically builds a model of nominal operations, and stores it in a knowledge base. The basic data structure of the IMS software algorithm is a vector of parameter values. Each vector is an ordered list of parameters collected from the monitored system by a data acquisition process. IMS then processes select data sets by formatting the data into a predefined vector format and building a knowledge base containing clusters of related value ranges for the vector parameters. In real time, IMS then monitors and displays information on the degree of deviation from nominal performance. The values collected from the monitored system for a given vector are compared to the clusters in the knowledge base. If all the values fall into or near the parameter ranges defined by one of these clusters, it is assumed to be nominal data since it matches previously observed nominal behavior. The IMS knowledge base can also be used for offline analysis of archived data.
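The training and monitoring steps described above can be sketched as building clusters of per-parameter value ranges from nominal vectors, then scoring each new vector by its distance to the nearest cluster. This is an illustrative simplification of the IMS approach, with an assumed greedy clustering rule and tolerance parameter, not the actual NASA algorithm:

```python
def train_ims(vectors, tolerance):
    """Build a knowledge base of clusters, each a list of per-parameter
    (lo, hi) ranges. A training vector extends the first cluster it falls
    within `tolerance` of on every parameter; otherwise it seeds a new one."""
    clusters = []
    for v in vectors:
        for c in clusters:
            if all(lo - tolerance <= x <= hi + tolerance
                   for x, (lo, hi) in zip(v, c)):
                for j, x in enumerate(v):          # grow the matched cluster
                    lo, hi = c[j]
                    c[j] = (min(lo, x), max(hi, x))
                break
        else:
            clusters.append([(x, x) for x in v])    # seed a new cluster
    return clusters

def deviation(clusters, v):
    """Euclidean distance from the vector to the nearest cluster's ranges;
    0.0 means the vector matches previously observed nominal behavior."""
    best = float("inf")
    for c in clusters:
        d = sum(max(lo - x, 0.0, x - hi) ** 2
                for x, (lo, hi) in zip(v, c)) ** 0.5
        best = min(best, d)
    return best
```

A monitoring loop would flag vectors whose deviation exceeds some alarm level, and vectors later verified as nominal could be fed back through `train_ims` to augment the knowledge base, mirroring the adaptation described above.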
optics
Measuring Optical Null Lens Aberrations
This innovation uses image-based wavefront sensing, or phase retrieval, in which light travels through the optical system in generally the same way that it will in its as-used configuration. It estimates the wavefront of the optical system using images collected on a light detector (which can be the same detector used by the optical system in its intended use). This method uses several images taken through the null optic under test, with some aspect of the test setup systematically varied from one image to the next. Usually this variation involves somewhat out-of-focus images taken systematically with different amounts of defocus. The light for each diversity-defocus image passes through the optical test setup and the null lens. The resulting irradiance of the transmitted beam is measured by a light detector, and the image is saved on a computer, where algorithms determine an estimate of the null lens's wavefront aberrations.
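The alternating-projection core of phase retrieval is often illustrated with the classic Gerchberg-Saxton error-reduction iteration: enforce the known pupil constraint in one plane and the measured image-plane amplitude in the other, cycling via Fourier transforms. The actual system described above uses multiple defocus-diversity images and more sophisticated estimation, but a single-image sketch of the basic loop looks like this (all parameters are illustrative):

```python
import numpy as np

def error_reduction(support, measured_amp, iters=100, seed=0):
    """Gerchberg-Saxton error-reduction phase retrieval (illustrative).
    `support` is the known pupil amplitude (1 inside the aperture, 0 outside);
    `measured_amp` is the measured focal-plane amplitude (sqrt of irradiance).
    Returns the estimated pupil field and the amplitude-error history."""
    rng = np.random.default_rng(seed)
    field = support * np.exp(1j * rng.uniform(-np.pi, np.pi, support.shape))
    errs = []
    for _ in range(iters):
        focal = np.fft.fft2(field)                           # propagate to detector
        errs.append(np.linalg.norm(np.abs(focal) - measured_amp))
        focal = measured_amp * np.exp(1j * np.angle(focal))  # impose measured amplitude
        back = np.fft.ifft2(focal)                           # propagate back to pupil
        field = support * np.exp(1j * np.angle(back))        # impose pupil constraint
    return field, errs
```

The error in this scheme is provably non-increasing, but a single image can stagnate at ambiguous solutions; collecting several images at different known defocus settings, as the innovation does, constrains the estimate far more strongly.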