
Conference heron::euro_swas_ai

Title:Europe-Swas-Artificial-Intelligence
Moderator:HERON::BUCHANAN
Created:Fri Jun 03 1988
Last Modified:Thu Aug 04 1994
Last Successful Update:Fri Jun 06 1997
Number of topics:442
Total number of notes:1429

219.0. "An interesting article for late night discussion!" by HERON::ROACH (TANSTAAFL !) Fri Aug 10 1990 13:05

Printed by: Pat Roach                                    Document Number: 012573
--------------------------------------------------------------------------------

                  I N T E R O F F I C E   M E M O R A N D U M

                                        Date:     09-Aug-1990 11:11pm CET
                                        From:     ROACH
                                                  ROACH@UTURN@SELECT@HERON@MRGATE@HUGHI
                                        Dept:      
                                        Tel No:    

TO:  ROACH@A1NSTC


Subject: what do you think about this?


 Artificial Intelligence - 1 1/2 page article on MIT's AI Laboratory
	{The Boston Globe, 30-Jul-90, p. 29}
   [There are two articles: "At MIT, robots march toward independence" and
 "To hackers, it's all doable." The article includes photographs of robots
 Genghis, Squirt, Attila and several of the people who create and work with the
 machines. Here's most of the two articles - TT]
   Genghis is a 2.2-pound robot that can climb over rough terrain. Squirt is
 one cubic inch in size and contains a motor, battery, microchip, interfaces
 and three sensors.
   Colin Angle received his bachelor's degree in electrical engineering from
 MIT in 1989. While still an undergraduate, he constructed Genghis, the first
 legged robot to apply professor Rodney Brooks' ideas on artificial
 intelligence. Genghis has 57 behavioral modules, each a simple computer
 program that receives impulses from sensors, interacts with other programs,
 then activates the appropriate motors. Now 23, Angle is working on Attila,
 Genghis' successor, which may end up with 3,000 modules. He never took an AI
 course before he came into the Mobot Lab.
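   To make the module idea concrete, here is a minimal Python sketch of one
 such sense-and-act module as the article describes it: it reads a sensor, may
 be suppressed by other modules, and drives a motor. The class, threshold, and
 wiring are illustrative assumptions, not Brooks' or Angle's actual code.

# Hypothetical sketch of a single behavioral module of the kind the article
# describes for Genghis: sense, check whether another module has suppressed
# this one, then actuate a motor. Names and threshold are illustrative only.

class BehaviorModule:
    def __init__(self, name, read_sensor, drive_motor):
        self.name = name
        self.read_sensor = read_sensor   # callable returning a sensor reading
        self.drive_motor = drive_motor   # callable that actuates one motor
        self.suppressed = False          # other modules may set this flag

    def step(self):
        """One cycle: sense, then act unless another module suppresses us."""
        reading = self.read_sensor()
        if not self.suppressed and reading > 0.5:   # arbitrary trigger level
            self.drive_motor(reading)

# Genghis reportedly runs 57 such modules; a robot program would simply
# loop over them each cycle:
#     for module in modules:
#         module.step()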
   Traditionally, research in artificial intelligence has been based on the
 premise that AI in a computer must arise from a model of the human mind,
 translated into symbols and implanted in the machine. Intelligence, in this
 view, lies in perceiving the world as humans perceive it, reasoning, then
 acting. "Nouvelle AI," as its practitioners sometimes call it, embraces a
 different definition of intelligence, and it reverses science's approach to
 creating it. Nouvelle AI denies that a mathematical model of the world, as
 perceived by humans, must be installed in a computer, and holds that
 "creatures" that can grasp and choose among ways of accomplishing tasks in
 real environments are, by definition, intelligent. Instead of trying to
 simulate the human brain and work down from there to practical
 applications, lugging along the huge memory and computing power the
 simulation requires, practitioners of nouvelle AI advocate starting with
 robots the size of rabbits, with nervous systems more like those of roaches
 and rats than of humans, and working up.
   "Problem-solving behavior, language, expert knowledge and application, and
 reason, are all rather simple once the essence of being and reacting are
 available," asserts Rodney A. Brooks, the mathematician at MIT's AI Lab who
 has taken the lead in elaborating the new approach. "That essence is the
 ability to move around in a dynamic environment, sensing the surroundings"
 to a sufficient degree to survive and, in the human case, to reproduce, or, in
 the robot case, to accomplish the programmed mission.
   The National Aeronautics and Space Administration's Jet Propulsion
 Laboratory three times has rejected proposals from the MIT group to develop
 model robots for space exploration, and there has been a hail of criticism in
 MIT's direction from other AI shops.
   "The descent from human-level AI to 'artificial orthoptera' [insects] is not
 to be recommended," wrote one scientist reviewing a proposal to the Jet
 Propulsion Lab. "No doubt these mechanical insects amuse the MIT graduate
 students."
   Advocates of modeling the human brain as a basis for AI have defended that
 approach stoutly in the face of the MIT challenge. "Any intelligent coupling
 of perception to action, any planning at a higher level than a reflex, needs a
 model of the world," wrote Charles E. Thorpe of Carnegie-Mellon University,
 assessing MIT's heresy. "Robots without models make great protozoans, or
 even with much cleverness may produce artificial insects, but I wouldn't trust
 one to be a chauffeur."
   Donald Michie, chief scientist at Turing Labs in Scotland, a leading AI
 shop, last year dismissed the work as "a Great Leap Backward at MIT's
 Artificial Insect Lab."
   The Mobot Lab - short for mobile robot lab - people are, not surprisingly, undaunted.
 Brooks keeps the most painful critiques posted on his office door. Angle
 shrugs: "People have been afraid to build little tiny robots because others
 would say 'what a nice toy.'"
   Professor Patrick H. Winston, director of the MIT AI Laboratory of which the
 Mobot Lab is a part, says much of the resistance is "what always goes on when
 someone has an idea that threatens the conventional wisdom. There are always
 people looking for ways to dismiss it and carry on with the old idea. That is
 natural in science and engineering."
   MIT's AI Lab encompasses 202 faculty, staff and graduate students and last
 year spent $8.9 million in pursuit of interests ranging from robots to
 parallel computing to vision. According to a recently completed section for
 the MIT president's annual report, seven of the 22 professors in the AI Lab
 are working on aspects of robots, the greatest concentration of interest in
 the shop.
   The idea that complex behavior could emerge from comparatively simple
 systems was being kicked around in AI circles long before Rod Brooks began to
 make waves; so was the intuition that AI had gone too far in separating the
 internal reasoning of its creations from their external capacities to perceive
 and act.
   Theories that complex behavior could simply emerge, without having to be
 installed by deliberate human effort, were attractive on several counts:
   For one, they opened up the possibility of creating behavior without having
 to understand it first, a conceptual advantage that grew as the difficulty of
 understanding the nature of intelligence grew progressively more daunting.
   For another, such theories offered a way out to those repelled by the notion
 of reducing the concept of thinking to mechanical functions that can be
 replaced by mathematics.
   But until Brooks came along, there was no practical approach to building a
 from-the-bottom-up system.
   He conceived of layering, adding module upon module of functions: The
 capacity to wander about, followed by the capacity to explore, followed by the
 capacity to build internal mathematical maps of where the robot had been. As
 higher levels were reached, conditions would encourage the machine - or would
 it be creature by this point? - to make plans for changing its world.
   Each individual module is capable of activating the robot, in sharp contrast
 to systems built on internal models, where stimuli are analyzed and actions
 chosen in a centralized fashion, and many stimuli may be required to activate
 the robot. When responses that conflict are activated - for example "hide in
 the dark" vs. "go to the noise" - a system devised by Brooks to organize the
 layered processes lets one response go forward and blocks the other.
   Described originally in 1986 and refined last year, this system - called
 subsumption architecture - has been subject to criticism that it is just a
 fancy computer program that falls short of artificial intelligence.
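   The layering and arbitration described above can be illustrated with a
 short sketch; the priorities, layer names, and sensor readings below are
 assumptions made for the example, not the published subsumption formalism.

# Illustrative sketch of layered behaviors with a simple arbiter: every layer
# may propose a command on its own, and when proposals conflict the arbiter
# lets one go forward and blocks the rest (here, the highest layer wins).

class Layer:
    def __init__(self, name, level, propose):
        self.name = name
        self.level = level        # higher layers sit on top of lower ones
        self.propose = propose    # callable: sensor state -> command or None

def arbitrate(layers, sensors):
    """Return the winning layer's command, suppressing the others."""
    proposals = [(layer.level, layer.name, layer.propose(sensors))
                 for layer in layers]
    active = [p for p in proposals if p[2] is not None]
    if not active:
        return None
    level, name, command = max(active)    # highest active layer goes forward
    return f"{name}: {command}"

# Layers in the order the article lists them: wander, explore, map.
wander  = Layer("wander",  1, lambda s: "pick a random heading")
explore = Layer("explore", 2, lambda s: "head for open space" if s.get("open_space") else None)
mapper  = Layer("map",     3, lambda s: "record this place" if s.get("new_area") else None)

print(arbitrate([wander, explore, mapper], {"open_space": True}))
# -> "explore: head for open space"; the wander layer's command is blocked.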
   But also last year, Patti Maes, a Belgian AI researcher then working at the
 lab, devised an overlay on the subsumption architecture which appears to make
 possible much more complicated and sophisticated behavior.
   The approach is analogous to the way hormones work in biological systems.
 Instead of producing a certain behavior - say, stopping in the face of
 danger - only in response to a preset level of activation in the sensors for
 danger, the robot becomes cautious when other sensors detect certain kinds of
 conditions, and then will stop with less stimulation of the danger sensors.
   Think of a household robot setting out to mop the floor. In its background -
 the Maes overlay - is awareness of a need to recharge itself at some point,
 analogous to human hunger. The robot decides there is enough charge in its
 batteries to do the floor before recharging, but signals of the charge level
 keep circulating, gradually becoming stronger as the charge level decreases.
   The children come home from school and continuously redirty the floor; the
 robot keeps trying to mop it up. But instead of running itself down, the robot
 eventually changes behavior in response to its power need and puts aside the
 floor until after a recharge, much as a housekeeper might put aside until
 after lunch a task that has proven more complicated than expected.
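   A few lines of Python can show the kind of trade-off the mopping example
 describes; the charge and dirtiness numbers below are invented for
 illustration and are not drawn from Maes' actual overlay.

# Sketch of the hormone-like overlay in the mopping example: the urge to
# recharge is not a fixed threshold but a signal that keeps growing as the
# battery drains, until it outweighs the chore at hand. Numbers are made up.

def choose_activity(battery_charge, floor_dirtiness):
    """Return what the robot does next given its two competing drives."""
    recharge_need = 1.0 - battery_charge   # grows as the charge level drops
    mop_urge = floor_dirtiness             # grows as the children make a mess
    if recharge_need > mop_urge:
        return "put the floor aside and recharge"
    return "keep mopping"

# The floor stays dirty, but the circulating charge signal gets stronger:
for charge in (0.9, 0.6, 0.3, 0.1):
    print(f"charge {charge:.1f}: {choose_activity(charge, floor_dirtiness=0.7)}")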
   Professor John Laird of Michigan, a leader in traditional AI, says, "You've
 got to put your scientific bet somewhere, and Rod and I are putting it in
 different places. But he has systems that do things. AI had sort of ignored
 that for a while, and it's had a very good effect on the AI community.
   "The open question of his research is whether it extends upward to what
 would be called higher levels of intelligence and problems. That is actually
 his research agenda. Rod has demonstrated rudimentary planning and learning;
 it is still an open question whether it will scale up."
   Whatever happens with Brooks' work, attempts to model the human mind will
 continue, Laird and others are certain.
   "Modeling of human cognition will help us understand people better," he
 reasons. "And on the functional, engineering side, well, humans solve
 problems. We would be crazy to ignore an existing system that is intelligent
 and can do the things we want our [mechanical] systems to do."
   Such confidence in the value and richness of the field of artificial
 intelligence - whatever its evolution, whatever its commercial failures so
 far - is widespread.
   As a noted mathematician explained it recently, "The idea of artificial
 intelligence was a brilliant and large idea from the beginning. The first
 people [in the field] invented the idea. The idea produced children and
 grandchildren, and that was a great accomplishment, even though the
 grandchildren scorn the grandparents.
   "It has both led and exploited the technology of its time."