next up previous contents
Next: AI Seminar 2 ``Bugs Up: Artificial Intelligence (AI) Previous: AI Assignment   Contents

AI Tutorial

Based on Assignment 2 : AI

Some early AI language understanding programs

ELIZA (Joseph Weizenbaum, 1966)

Aim: To simulate the language behaviour of a psychiatrist talking to a patient; that is, to produce an acceptable typed response to any keyboard input from a human user.

ELIZA is based on ``non-directive psychotherapy'', which aims to elicit feelings from the patient and reflect them back to him/her: ``the active listening strategies of a touchy-feely 1960s Carl Rogerian therapist''. The therapist's contributions are noncommittal (they would sound odd outside the psychotherapeutic context). ELIZA never expresses her own attitudes or feelings. She asks questions on topics introduced by the patient and guides the discussion towards topics of likely emotional significance to the patient, such as father and family.

The system is robust: it never breaks down, even in response to a word not in its vocabulary. The program is free to assume a pose of ignorance about the world.

Both ELIZA and PARRY (K. M. Colby's artificial paranoiac, 1971, a program that was subjected to systematic Turing tests) have built-in rules that enable them to generate appropriate responses to given inputs (the following rules refer mainly to ELIZA):

With zero understanding, ELIZA gives plausible responses. But ELIZA can be caught out if the human (a) ``plays it straight'', refusing to play the role of a patient and instead questioning the therapist, or (b) starts to talk nonsense. ELIZA also does not understand idiomatic language.
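The keyword-and-template mechanism behind such responses can be sketched as follows. This is a minimal illustration, not Weizenbaum's script: the rules, keywords, and reply templates here are invented for the example, but the shape (match a keyword pattern, reflect pronouns, reassemble a canned template, fall back to a noncommittal default) is the essence of how ELIZA stays robust with zero understanding.

```python
import re
import random

# Illustrative ELIZA-style rules (hypothetical; the 1966 script differs).
# Each rule pairs a keyword pattern with reassembly templates; {0} is
# filled with the text captured after the keyword, pronouns reflected.
RULES = [
    (r"\bmy (.*)", ["Tell me more about your {0}.",
                    "Why do you say your {0}?"]),
    (r"\bi am (.*)", ["How long have you been {0}?"]),
    (r"\bfather\b|\bmother\b", ["Tell me more about your family."]),
]

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    # Swap first- and second-person words so the fragment reads back
    # naturally ("my father scares me" -> "your father scares you").
    return " ".join(REFLECT.get(word, word)
                    for word in fragment.lower().split())

def respond(line):
    for pattern, templates in RULES:
        match = re.search(pattern, line.lower())
        if match:
            groups = [reflect(g) for g in match.groups() if g is not None]
            return random.choice(templates).format(*groups)
    # Noncommittal default: this is what keeps the system from ever
    # breaking down on input outside its vocabulary.
    return "Please go on."
```

For example, `respond("My father scares me")` produces a reply containing ``your father scares you'', while any unmatched input falls through to the default, so the program never has to admit ignorance.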

Both ELIZA and PARRY are, in anthropomorphic terms, frauds. In this respect BASEBALL and SHRDLU differ from them.

BASEBALL (B. F. Green and others, 1963) stored lists of information on the results of all baseball games played in one season in the USA. Within this highly restricted domain, BASEBALL had complete understanding.
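The contrast with ELIZA can be made concrete: BASEBALL's ``understanding'' amounts to matching a parsed question against attribute-value records. A minimal sketch of that idea, with invented data and attribute names (not BASEBALL's actual representation):

```python
# Hypothetical game records; BASEBALL stored one season's results in
# attribute-value lists of roughly this kind.
GAMES = [
    {"month": "July", "day": 4, "place": "Boston",
     "winner": "Red Sox", "loser": "Yankees", "score": "5-3"},
    {"month": "July", "day": 6, "place": "Boston",
     "winner": "Yankees", "loser": "Red Sox", "score": "2-1"},
]

def answer(**constraints):
    # A question, once parsed, becomes a set of attribute constraints;
    # the answer is every stored record satisfying all of them.  Within
    # this closed domain, every well-formed question can be answered.
    return [game for game in GAMES
            if all(game.get(attr) == value
                   for attr, value in constraints.items())]
```

So a question like ``Who won in Boston on July 4?'' reduces to `answer(place="Boston", month="July", day=4)`, whose single matching record names the Red Sox. Complete understanding is bought by keeping the domain small enough that questions map onto such lookups.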

See comparison of ELIZA and BASEBALL in Greene (1986), pp. 108-109. sec. [*]

SHRDLU (Terry Winograd, 1972; a nonsense word composed of the 7th-12th most frequent letters in a printer's type array). The program operated in a microworld of blocks: it simulated the actions of a robot arm that manipulated toy blocks on a table in accordance with instructions from the user. SHRDLU's knowledge base comprised:

1. A sentence parser ``Syntax module''
2. Lexicon and semantic rules ``Semantic module''
3. Knowledge of the blocks world (colour, shape, position) ``Blocks module''
4. Procedural problem solver (syntactic and semantic processing)
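The interplay of the blocks module (declarative facts) and the procedural problem solver can be sketched in miniature. All names and the world state below are invented for illustration; Winograd's actual system was written in LISP/PLANNER and was far richer:

```python
# Blocks module: declarative facts about each block (colour, shape,
# and what it rests on).  Names here are hypothetical.
blocks = {
    "b1": {"colour": "red", "shape": "cube", "on": "table"},
    "b2": {"colour": "green", "shape": "pyramid", "on": "b1"},
}

def clear(name):
    # A block is clear when nothing rests on top of it.
    return all(info["on"] != name for info in blocks.values())

def find(colour=None, shape=None):
    # Declarative query: e.g. "which block is the green pyramid?"
    return [name for name, info in blocks.items()
            if (colour is None or info["colour"] == colour)
            and (shape is None or info["shape"] == shape)]

def pick_up(name):
    # Procedural knowledge: to grasp a block it must first be clear;
    # a fuller solver would recursively clear it by moving obstructions.
    if not clear(name):
        raise ValueError(f"{name} has something on it")
    blocks[name]["on"] = "hand"
```

A command like ``pick up the green pyramid'' is first parsed, then resolved by the declarative query (`find(colour="green", shape="pyramid")`), and finally executed by the procedural solver, which checks and updates the world state. This combination of represented facts and procedures acting on them is what the four modules above jointly provide.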

A heterarchical model, but syntactically driven. It simulated real understanding of a small domain. Both declarative and procedural knowledge were represented, giving the ability to make inferences based on linguistic and general knowledge. Flexible and rule-based, unlike BASEBALL.

In later years, Winograd and Flores (1986), in Understanding Computers and Cognition, interpreted the restrictedness of programs like SHRDLU from the broader perspective of what they saw as the failure of the rationalist tradition in Western thought.
