Lara Crawford, Graduate Student
Tak-Kuen John Koo, Graduate Student
Yi Ma, Graduate Student
George Pappas, Graduate Student
Claire Tomlin, Graduate Student
Jeff Wendlandt, Graduate Student
Datta Godbole, Postdoctoral Researcher
Jana Kosecka, Postdoctoral Researcher
John Lygeros, Postdoctoral Researcher
Professors Leon O. Chua, Thomas A. Henzinger, Jitendra Malik, Stuart J. Russell, S. Shankar Sastry, Pravin Varaiya, and Lotfi A. Zadeh
Funding Sources: ARO MURI DAAH04-96-1-0341
Impressive advances in computation, communication, smart materials, and MEMS bring the cybernetic dream of building autonomous intelligent systems closer to realization. These are systems that sense and manipulate their environment by gathering multi-modal sensor data, compressing and representing it in symbolic form at various levels of granularity, and using the representations to reason and learn about how to optimally interact with the environment. The problem is hard because real-world environments are complex, spatially extended, dynamic, stochastic, and largely unknown; intelligent systems must also accommodate massive sensory and motor uncertainty and must act in real time.
We believe that qualitative leaps in scope and performance will emerge from addressing the basic problems together. Complexity and spatial extent are addressed by system decomposition based on hierarchical, hybrid, and multi-agent designs, using multiple levels of abstraction for sensory and control functions. Structural and parametric learning methods adapt the system to initially unknown environments, while generalized estimation methods, uncertainty management, and robust control techniques cope with the residual uncertainty inherent in stochastic, partially observable environments. Real-time decision-making is achieved by parallelism, reflexive control, compilation, and anytime approximation algorithms. Above and beyond the development of these specific methods, we see the possibility for a reunification of control, AI, and computational neuroscience into a theoretical and technological continuum, with enormous benefits for the science and engineering of intelligent systems.
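Among the real-time methods listed above, the anytime property — a valid answer is available whenever computation is interrupted, and the answer improves with more computation — can be illustrated with a minimal Monte Carlo sketch. This is hypothetical code for exposition only, not part of the project's software; the function name and deadline parameter are our own:

```python
import random
import time

def anytime_pi_estimate(deadline_s, seed=0):
    """Monte Carlo estimate of pi that refines until a deadline.

    At every point during execution a valid (if coarse) estimate
    exists, so the computation can be cut off at any real-time
    deadline -- the defining trait of an anytime algorithm.
    """
    rng = random.Random(seed)
    inside = 0   # samples falling inside the quarter-circle
    total = 0    # total samples drawn
    estimate = 0.0
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        x, y = rng.random(), rng.random()
        inside += (x * x + y * y) <= 1.0
        total += 1
        estimate = 4.0 * inside / total  # best answer so far
    return estimate, total

est, n = anytime_pi_estimate(0.05)  # 50 ms budget
```

Shortening or lengthening the deadline trades accuracy for latency without changing the code, which is what makes such algorithms attractive for real-time decision-making.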
The main scope of the project is to examine new paradigms in the analysis and design of intelligent multi-agent hybrid systems. Such systems typically arise in domains where multiple agents compete for scarce resources, such as Air Traffic Management Systems or Intelligent Highway Systems. Since the complexity of these systems prevents the use of the traditional central control paradigm, new design and analysis techniques for distributed systems will be developed and examined in theory, simulation, and experiment. These techniques will provide tools and guidelines for designers to assess safety requirements as well as to address performance issues.
The successful functioning of a large-scale distributed system depends on the robust operation of its individual agents. To assure reliable operation of individual agents in dynamically changing environments, rich sensory information needs to be an integral part of the sensory-motor strategies. New ideas and designs will be evaluated on two experimental platforms: Intelligent Highway Systems and the helicopter project. In both cases, various control strategies using visual information will be examined. Both platforms are suitable for addressing important questions of appropriate representation of perceptual data extracted by visual sensors, as well as the technical challenge of acquiring the desired information in real time using VLSI technology.
Due to the inaccuracies in models of these systems and the uncertainties of the environments they reside in, adaptation and learning need to take place at all levels of the system. We will examine the relation between work on learning environment models from noisy sensors and work on perception and representation. Similarly, work on learning control laws from reinforcement signals will be closely integrated with work on representation and control.
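The idea of learning a control law from a reinforcement signal can be sketched with textbook tabular Q-learning on a toy chain-walk task. This is an illustrative assumption on our part — a standard algorithm on a made-up problem, not the project's actual platforms or methods:

```python
import random

def q_learning_chain(n_states=5, episodes=400, alpha=0.5,
                     gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a chain of states.

    The agent starts at state 0, may move LEFT or RIGHT, and receives
    a reinforcement signal of +1 only upon reaching the rightmost
    state; the control law (policy) is learned from that signal alone.
    """
    rng = random.Random(seed)
    LEFT, RIGHT = 0, 1
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy exploration
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = RIGHT if Q[s][RIGHT] >= Q[s][LEFT] else LEFT
            s2 = max(0, s - 1) if a == LEFT else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
# Greedy policy extracted from the learned Q-table
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(len(Q))]
```

After training, the greedy policy moves RIGHT from every non-terminal state, i.e., the control law has been recovered purely from the sparse reward; in the project, the same principle is to be integrated with learned representations rather than a hand-built state table.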
The WWW homepage for this MURI program is at http://robotics.eecs.berkeley.edu/~mayi/muri/muri.html .