Noise is difficult to escape in our daily lives. Such noise is generated by transportation vehicles, industrial equipment, hospital machines, phones, alarms, crowds, and more. Some sounds we want to suppress (e.g., airplane noise vs. conversation) and others we want to enhance (e.g., our ringing phone vs. subway noise). Predicting the extent to which one sound is heard over another is difficult, yet doing so could help engineers design better sound management. Innovators at the NASA Langley Research Center (LaRC) and the National Institute of Aerospace (NIA) have developed an algorithm for Statistical Audibility Prediction (SAP) of an arbitrary signal in the presence of noise. The SAP algorithm compares the loudness of signal and noise samples at matching time instances to assess audibility versus time. Continued development of this algorithm could allow engineers to control how noise is heard relative to sounds of interest. SAP can be implemented either as software or hardware. The algorithm has been tested using subject response data gathered in the Exterior Effects Room (EER) at NASA LaRC.
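The core idea described above — comparing signal and noise loudness at matching time instances to produce an audibility-versus-time judgment — can be illustrated with a minimal sketch. This is not NASA's SAP algorithm; the frame length, the loudness proxy (frame RMS level in dB), and the audibility margin are all simplifying assumptions for illustration only.

```python
import math

def rms_db(frame):
    """Approximate loudness proxy: RMS level of a frame in dB (relative).
    A real audibility model would use a psychoacoustic loudness measure."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return 20.0 * math.log10(max(rms, 1e-12))  # floor avoids log(0)

def audibility_vs_time(signal, noise, frame_len=256, margin_db=0.0):
    """Return one boolean per frame: True where the signal's level
    exceeds the noise's level at the same time instant by margin_db.
    frame_len and margin_db are illustrative assumptions."""
    verdicts = []
    n = min(len(signal), len(noise))
    for start in range(0, n - frame_len + 1, frame_len):
        s_db = rms_db(signal[start:start + frame_len])
        n_db = rms_db(noise[start:start + frame_len])
        verdicts.append(s_db - n_db > margin_db)
    return verdicts
```

For example, a tone well above the background level would be judged audible in every frame, while one buried far below it would not.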
The field of autonomic computing (also known in other parlance as organic computing, biologically inspired computing, self-managing systems, etc.) has emerged as a promising means of ensuring reliability, dependability, and survivability in computer-based systems, particularly in systems where autonomy is important. Scientists at NASA Goddard Space Flight Center have been looking at various mechanisms inspired by nature, and by the human body, to improve dependability and security in such systems. Otoacoustic emission is used by the mammalian ear to protect against exceptionally loud noises; tailoring it to autonomic systems would enable the system to be protected from spurious signals or signals from rogue agents.
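The protective idea above — an ear-inspired reflex that shields the system from dangerously strong or spurious inputs — can be sketched as a simple input gate. All names, the threshold, and the attenuation factor are illustrative assumptions, not details of the NASA Goddard work.

```python
def protective_reflex(samples, threshold=1.0, attenuation=0.1):
    """Ear-inspired protective gate (illustrative sketch only):
    any input whose magnitude exceeds `threshold` is attenuated
    before reaching downstream processing, shielding the system
    from spurious signals or signals from rogue agents."""
    guarded = []
    for x in samples:
        if abs(x) > threshold:
            guarded.append(x * attenuation)  # reflex engages: damp the spike
        else:
            guarded.append(x)                # normal input passes unchanged
    return guarded
```

In an autonomic system, such a gate would sit at the boundary between untrusted inputs and the self-managing core, analogous to the ear damping sound before it reaches sensitive structures.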