NASA Ames has developed a community-driven, context-aware intelligent research assistant system (MATA, the Sanskrit name for the Earth) capable of engaging users in natural language dialogue, invoking external community-provided web services to obtain information or perform actions, and vocalizing the response back to the user. This novel patent-pending technology is an intelligent, virtual, personalized conversational research assistant system. Specifically designed for geospatial queries of Earth science data, the software provides conversational computing, not just a conversational assistant, and runs on a personal computer or mobile phone that facilitates user interaction with the system. The technology allows users to add new capabilities and new Application Programming Interfaces (APIs), so it can be applied to a wide variety of applications.
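The pipeline described above can be sketched as an intent router that community developers extend with new service handlers. This is a minimal illustration under assumed names (`QueryRouter`, `register`, `surface_temperature`); the actual MATA interfaces are not public here, and the handler below is a stub rather than a real Earth-science web service.

```python
# Minimal sketch of a MATA-style pipeline: route a recognized query intent to
# a community-registered web service handler and return a spoken-style reply.
# All class, method, and handler names are hypothetical.

class QueryRouter:
    """Routes recognized intents to community-provided service handlers."""

    def __init__(self):
        self._handlers = {}

    def register(self, intent, handler):
        # Community developers add new capabilities by registering handlers,
        # mirroring the text's "add new capabilities and new APIs".
        self._handlers[intent] = handler

    def dispatch(self, intent, **params):
        handler = self._handlers.get(intent)
        if handler is None:
            return "Sorry, I don't have a service for that yet."
        return handler(**params)

# A stand-in community capability: a stubbed geospatial Earth-science query.
def surface_temperature(lat, lon):
    return f"The surface temperature at ({lat}, {lon}) is 287 K."

router = QueryRouter()
router.register("surface_temperature", surface_temperature)
print(router.dispatch("surface_temperature", lat=37.4, lon=-122.1))
```

In a full system the `dispatch` result would be passed to a speech synthesizer to "vocalize the response"; the router keeps that concern separate from the community services themselves.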
Researchers and expert operators may be familiar with the concept of trust in automation, but how would advanced automation make decisions regarding control without establishing trust in the operator? Vehicles outfitted with sensors and systems that can operate with varying degrees of autonomy are being developed. Optimizing human-machine interaction remains critical for maintaining and improving safety as vehicles become increasingly autonomous. Human status is highly variable and difficult to predict. Despite a recent history of consistent reliability, at any given moment the operator's status may range from completely incapacitated to ready to take control as necessary or as preferred. The intelligent system itself needs to know what the human is doing now to make real-time decisions regarding role assignments, safe operation, and critical functional task allocation.
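The real-time role-assignment decision the text describes can be illustrated as a mapping from an estimated operator state to a control mode. The readiness scale, thresholds, and mode names below are illustrative assumptions, not values from the actual system.

```python
# Hedged sketch of operator-state-driven task allocation: the system estimates
# how ready the human is and assigns control roles accordingly. The 0-1
# readiness scale and the thresholds are invented for illustration.

def allocate_control(operator_readiness):
    """Map an estimated readiness score (0.0 = incapacitated,
    1.0 = fully ready) to a role assignment."""
    if operator_readiness >= 0.8:
        return "manual"       # operator ready: human retains control
    elif operator_readiness >= 0.4:
        return "shared"       # partial readiness: supervised autonomy
    else:
        return "autonomous"   # incapacitated: system takes safety-critical tasks

print(allocate_control(0.9))  # -> manual
print(allocate_control(0.5))  # -> shared
print(allocate_control(0.2))  # -> autonomous
```

A deployed system would of course derive the readiness estimate from physiological and behavioral sensing rather than take it as a given number, and would smooth decisions over time to avoid mode thrashing.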
In-space and planetary surface assembly for human exploration is a challenging domain that encompasses various technological thrusts to support human missions. NASA is developing autonomous assembly agents to build structures like habitats and antennae on the Moon. These modular and reconfigurable Assembler robots will provide robotic assembly of structures, even in locations that prohibit constant human oversight and teleoperation. This system is capable of scheduling, reconfiguring, and executing structural assembly tasks; assessing construction; and correcting errors in assembly as needed. On command, the Assemblers stack themselves into robot team members for the task. For example, a few Assemblers might build a solar array as shown in the above image. The Assembler technology builds upon recent advancements in lightweight materials, multi-agent planning, state estimation, modern control theory, and machine learning. Compared with existing short-reach/high-accuracy and long-reach/low-accuracy assembly robots, Assemblers provide both long- and short-reach capability with accuracy and precision. NASA has developed a prototype of the technology and seeks companies that are interested in licensing the technology and commercializing it for space or other applications.
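The schedule / execute / assess / correct cycle attributed to the Assemblers can be sketched as a simple task loop that re-queues any step whose inspection fails. The task names, the `execute`/`assess` callbacks, and the retry-based error correction are invented for illustration; the real system's planning and assessment are far richer.

```python
# Illustrative sketch of an assembly loop: execute scheduled tasks, assess
# each result, and correct errors by re-queueing failed steps. All names and
# the toy error model are assumptions, not the actual Assembler software.

def run_assembly(tasks, execute, assess):
    """Run tasks in order, re-queueing any step whose assessment fails."""
    queue = list(tasks)
    completed = []
    while queue:
        task = queue.pop(0)
        execute(task)
        if assess(task):
            completed.append(task)
        else:
            queue.append(task)  # correct by re-attempting the failed step
    return completed

# Toy usage: "mount_panel" fails its first inspection and succeeds on retry.
attempts = {}
def execute(task):
    attempts[task] = attempts.get(task, 0) + 1
def assess(task):
    return task != "mount_panel" or attempts[task] > 1

print(run_assembly(["place_truss", "mount_panel", "connect_bus"],
                   execute, assess))
```

A production loop would bound retries and escalate persistent failures to replanning; the sketch keeps only the assess-and-correct structure the description names.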
The field of autonomic computing (also known in other parlance as organic computing, biologically inspired computing, self-managing systems, etc.) has emerged as a promising means of ensuring reliability, dependability, and survivability in computer-based systems, in particular in systems where autonomy is important. Scientists at NASA Goddard Space Flight Center have been looking at various mechanisms inspired by nature and the human body to improve dependability and security in such systems. Otoacoustic emission is used by the mammalian ear to protect it from exceptionally loud noises; tailoring this mechanism to autonomic systems would enable the system to be protected from spurious signals or signals from rogue agents.
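One way to picture the otoacoustic-inspired protection is as a reflex-like stage that screens signals before they reach the core system: rejecting traffic from unrecognized (rogue) agents and damping exceptionally strong inputs. The threshold, trusted-agent set, and signal fields below are assumptions made for illustration, not details of the Goddard work.

```python
# Minimal sketch of a reflex-like protection stage inspired by the ear's
# response to loud noise: damp over-strong signals and reject rogue sources.
# SAFE_THRESHOLD, TRUSTED_AGENTS, and the signal fields are hypothetical.

SAFE_THRESHOLD = 10.0
TRUSTED_AGENTS = {"sensor-a", "sensor-b"}

def protect(signal):
    """Return the signal to pass onward, attenuated or rejected as needed."""
    if signal["source"] not in TRUSTED_AGENTS:
        return None  # reject signals from rogue agents outright
    if abs(signal["amplitude"]) > SAFE_THRESHOLD:
        # Reflexively clamp an exceptionally strong signal, as the ear
        # protects itself from exceptionally loud noise.
        signal = {**signal, "amplitude": SAFE_THRESHOLD}
    return signal

print(protect({"source": "sensor-a", "amplitude": 25.0}))
print(protect({"source": "rogue-x", "amplitude": 1.0}))
```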
NASA sensor networks can be highly distributed autonomous systems of systems that must operate with a high degree of reliability. Solar system and planetary exploration networks necessarily experience long communications delays with Earth. These networks are partially and intermittently out of contact with Earth and mission control for long periods of time, and must operate under extremes of dynamic environmental conditions. Due to the complexity of these systems, as well as their distributed and parallel nature, exploration networks have an extremely large state space and are impossible to test completely using traditional testing techniques. The more code or instructions that can be generated automatically from a verifiably correct model, the less likely it is that human developers will introduce errors.
Inspired by psychology, these algorithms could be developed and applied toward creating stable, predictable, and artificially intelligent networks. These algorithms collectively represent ways for intelligent systems to identify and correct unpredictable or unstable behaviors, create stable emotional states that govern behavior under specific circumstances, and establish an evolvable synthetic neural network that can eventually be scaled from low-level functions to higher-level decision-making processes. These algorithms could be key to research in autonomous spacecraft, nanorobotic swarms, and sensor networks.