Kenneth Colby

Kenneth Mark Colby
Born: c. 1920
Education: Yale Medical School
Institutions: Stanford University, UCLA
Main interests: Artificial intelligence, psychiatry

Kenneth Mark Colby (1920 – April 20, 2001) was an American psychiatrist dedicated to the theory and application of computer science and artificial intelligence to psychiatry. He is best known for PARRY, a computer simulation of paranoia that sparked serious debate about the possibility and nature of machine intelligence.

Early life and education

Colby was born in 1920 and received his medical degree from Yale Medical School in 1943.

Career

Colby began his career in psychoanalysis and later turned to computer science and artificial intelligence, working at Stanford University before joining UCLA as a professor of psychiatry in 1974; he was jointly appointed professor in the Department of Computer Science a few years later. Over the course of his career, he wrote numerous books and articles on psychiatry, psychology, psychotherapy and artificial intelligence.

Psychoanalysis

Early in his career, in 1955, Colby published Energy and Structure in Psychoanalysis,[1] an effort to bring Freud's basic doctrines into line with modern scientific concepts. Psychoanalysis as developed by Freud sets forth explanations for a patient's mental state without regard for whether the patient agrees or not; if the patient does not agree, he or she has repressed the truth, a truth that the psychoanalyst alone can be entrusted with uncovering. Colby found the psychoanalyst's authority to decide the nature or validity of a patient's state, and the lack of any empirical means of verifying that decision, unacceptable.

Colby's disenchantment with psychoanalysis would be further expressed in several publications, including his 1958 book, A Skeptical Psychoanalyst. He began to vigorously criticize psychoanalysis for failing to satisfy the most fundamental requirement of a science: the generation of reliable data. In his 1983 book, Fundamental Crisis in Psychiatry, he wrote, “Reports of clinical findings are mixtures of facts, fabulations, and fictives so intermingled that one cannot tell where one begins and the other leaves off. …we never know how the reports are connected to the events that actually happened in the treatment sessions, and so they fail to qualify as acceptable scientific data.”[2]

Likewise, in Cognitive Science and Psychoanalysis, he stated, "In arguing that psychoanalysis is not a science, we shall show that few scholars studying this question get to the bottom of the issue. Instead, they start by accepting, as do psychoanalytic theorists, that the reports of what happens in psychoanalytic treatment -- the primary source of the data -- are factual, and then they lay out their interpretations of the significance of facts for theory. We, on the other hand, question the status of the facts."[3] These issues would shape his approach to psychiatry and guide his research efforts.

Computer science

In the 1960s, Colby began thinking about the ways in which computers could be applied to the understanding and treatment of mental disorders. One early project along these lines was an intelligent speech prosthesis intended to help patients with word-finding difficulties (anomia) produce words using whatever phonemic and semantic clues they were able to generate.[4]
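As a rough illustration of the clue-driven lookup such a prosthesis might perform, the sketch below ranks candidate words by how well they match the partial clues a patient can supply. It is a minimal, hypothetical example: the tiny lexicon, scoring weights, and function names are invented here and are not taken from Colby's system (his 1980 paper describes a word-finding algorithm with a dynamic lexical-semantic memory).

```python
# Hypothetical sketch of clue-based word finding, in the spirit of a speech
# prosthesis for anomia. The lexicon, weights, and names are invented for
# illustration; they are not Colby's actual algorithm or data.

LEXICON = {
    "fork":   {"category": "utensil", "related": {"knife", "spoon", "eat"}},
    "spoon":  {"category": "utensil", "related": {"fork", "soup", "stir"}},
    "hammer": {"category": "tool",    "related": {"nail", "wood", "build"}},
}

def find_word(first_letter=None, category=None, related_words=()):
    """Rank lexicon entries by how well they match the partial phonemic and
    semantic clues the patient is able to generate."""
    scores = {}
    for word, info in LEXICON.items():
        score = 0
        if first_letter and word.startswith(first_letter):
            score += 2                                       # phonemic/letter clue
        if category and info["category"] == category:
            score += 2                                       # semantic-category clue
        score += len(info["related"] & set(related_words))   # associative clues
        scores[word] = score
    return sorted(scores, key=scores.get, reverse=True)      # best match first

if __name__ == "__main__":
    # Patient knows the word starts with "s" and is something used for soup.
    print(find_word(first_letter="s", category="utensil", related_words=["soup"]))
    # -> ['spoon', 'fork', 'hammer']
```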

Later, Colby would be one of the first to explore the possibilities of computer-assisted psychotherapy. A therapy program he developed was used by the U.S. Navy and the Department of Veterans Affairs and was distributed to individuals who used it without supervision from a psychiatrist. This practice was challenged by the media. To one journalist Colby replied that the program could be better than human therapists because "After all, the computer doesn't burn out, look down on you or try to have sex with you."[5]

Artificial intelligence

In the 1960s at Stanford University, Colby embarked on the creation of software programs known as "chatterbots," which simulate conversations with people. One well-known chatterbot at the time was ELIZA, a computer program developed by Joseph Weizenbaum in 1966 to parody a psychotherapist. ELIZA, by Weizenbaum's own admission, was developed more as a language-parsing tool than as an exercise in human intelligence. Named after the Eliza Doolittle character in Pygmalion, it was one of the first conversational computer programs, designed to imitate a psychotherapist by asking questions instead of giving advice. It appeared to give conversational answers, although it could be led to lapse into obtuse nonsense.
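To make the "language-parsing" character of ELIZA concrete, the sketch below shows the general keyword-and-reflection style of transformation such a program performs. It is a simplified illustration only; the handful of rules and the reflect helper are invented here and are far cruder than Weizenbaum's actual DOCTOR script.

```python
import re

# Minimal ELIZA-style sketch: match a keyword pattern, reflect pronouns, and
# reassemble a question. These few rules are invented for illustration and are
# not Weizenbaum's actual decomposition/reassembly rules.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first and second person so the echoed fragment reads naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."   # default when no keyword rule fires

if __name__ == "__main__":
    print(respond("I am worried about my work"))
    # -> "How long have you been worried about your work?"
```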

In 1972, at the Stanford Artificial Intelligence Laboratory, Colby built upon the idea of ELIZA to create a natural-language program called PARRY that simulated the thinking of a paranoid individual. This thinking entails the consistent misinterpretation of others' motives: others must be up to no good, they must have concealed motives that are dangerous, or their inquiries into certain areas must be deflected. PARRY achieved this via a complex system of assumptions, attributions, and "emotional responses" triggered by shifting weights assigned to verbal inputs.
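The following toy sketch illustrates the "shifting weights" idea described above: words in the interviewer's input raise internal emotional variables, and the current levels determine which kind of reply is selected. PARRY did track variables such as fear, anger, and mistrust, but the trigger words, increments, thresholds, and canned replies below are invented for illustration and are not Colby's actual model.

```python
import re

# Toy illustration of weight-shifting: input words adjust emotional variables,
# and the resulting state picks a response style. The specific triggers,
# increments, thresholds, and replies are invented; only the general idea of
# fear/anger/mistrust variables reflects descriptions of PARRY.

TRIGGERS = {
    "police": {"fear": 0.3, "mistrust": 0.2},
    "mafia":  {"fear": 0.4, "anger": 0.1},
    "crazy":  {"anger": 0.5, "mistrust": 0.3},
}

class ParanoidState:
    def __init__(self):
        self.levels = {"fear": 0.1, "anger": 0.1, "mistrust": 0.2}

    def update(self, utterance):
        """Shift the emotional weights according to words in the input."""
        for word in re.findall(r"[a-z]+", utterance.lower()):
            for emotion, delta in TRIGGERS.get(word, {}).items():
                self.levels[emotion] = min(1.0, self.levels[emotion] + delta)

    def respond(self, utterance):
        self.update(utterance)
        if self.levels["fear"] > 0.5:
            return "I don't want to talk about that."   # deflect the topic
        if self.levels["mistrust"] > 0.3:
            return "Why do you want to know?"           # suspect hidden motives
        if self.levels["anger"] > 0.4:
            return "You're just like the others."       # hostility
        return "Go on."

if __name__ == "__main__":
    parry = ParanoidState()
    print(parry.respond("Have you had any trouble with the police?"))
    # -> "Why do you want to know?"
    print(parry.respond("Do you know anything about the mafia?"))
    # -> "I don't want to talk about that."
```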

PARRY: A Computer Model of Paranoia

Colby's aim in writing PARRY had been practical as well as theoretical. He thought of PARRY as a virtual reality teaching system for students before they were let loose on real patients.[6] However, PARRY's design was driven by Colby's own theories about paranoia. Colby saw paranoia as a degenerate mode of processing symbols in which the patient's remarks "are produced by an underlying organized structure of rules and not by a variety of random and unconnected mechanical failures."[7] This underlying structure was an algorithm, not unlike a set of computer processes or procedures, which could be accessed and reprogrammed, in other words "cured."

Shortly after it was introduced, PARRY generated intense discussion and controversy over the possibility and nature of machine intelligence. PARRY was the first program to pass a version of the Turing Test, named for the British mathematician Alan Turing, who in 1950 suggested that if a computer could successfully impersonate a human by carrying on a typed conversation with a person, it could be called intelligent. PARRY succeeded in passing this test when human interrogators, interacting with the program via remote keyboard, were unable with more than random accuracy to distinguish PARRY from an actual paranoid individual.
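One way to make "no better than random accuracy" concrete is a simple binomial check of the judges' correct identifications against a 50% guessing baseline, as sketched below. The counts used here are hypothetical and are not data from Colby's experiments; his own methodology is described in the 1972 paper on Turing-like indistinguishability tests.

```python
from math import comb

# Hypothetical illustration of testing whether judges can tell PARRY from a
# human patient better than chance. The counts below are invented; they are
# not results from Colby's 1972 indistinguishability experiments.

def prob_at_least(successes, trials, p=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p): the chance of scoring at
    least this well by pure guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

correct_calls = 6   # judges' correct identifications (hypothetical)
total_calls = 10    # total judgments made (hypothetical)

p_value = prob_at_least(correct_calls, total_calls)
print(f"{correct_calls}/{total_calls} correct; "
      f"probability of doing at least this well by guessing = {p_value:.3f}")
# A large value (here about 0.377) means this accuracy is entirely consistent
# with random guessing, i.e. the judges could not reliably identify PARRY.
```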

As philosopher Daniel Dennett stated in Alan Turing: Life and Legacy of a Great Thinker,

To my knowledge, the only serious and interesting attempt by any program designer to win even a severely modified Turing test has been Kenneth Colby. He had genuine psychiatrists interview PARRY. He did not suggest that they might be talking or typing to a computer; rather he made up some plausible story about why they were communicating with a real live patient via teletype. Then he took the PARRY transcript, inserted it into a group of teletype transcripts and gave them to another group of experts—more psychiatrists—and said, 'One of these was a conversation with a computer. Can you figure out which one it was?' They couldn't.[8]

Much of the criticism of ELIZA as a model for artificial intelligence focused on the program's lack of an internal world model that influenced and tracked the conversation. PARRY, by contrast, simulates paranoid behavior by tracking its own internal emotional state on a few different dimensions. To illustrate this, Colby created another program called RANDOM-PARRY which chose responses at random; its responses did not model the human patients' responses as well as those of standard PARRY. Some have argued that PARRY fooled its judges because paranoid behavior makes inappropriate responses or non sequiturs appropriate. Yet there is still a certain logic to such responses, which PARRY simulates effectively. It may be simpler to simulate paranoid behavior, but it is not trivial.[9]

The two computer programs, ELIZA and the paranoid PARRY, eventually met for a series of "talks," each running as a separate program. One such exchange went like this:

PARRY: I know the mob controls the big rackets.

ELIZA: Who are you really talking about?

PARRY: Have you ever really thought about the Mafia?

ELIZA: Suppose you ever really thought about the Mafia?

PARRY: Let's talk about something else.

ELIZA: I am not sure I understand you fully.

PARRY: You keep asking me the same things, why not change the topic?

ELIZA: In what way?[10]

Colby would claim that PARRY mimics the natural process by which a person (in this case a person with paranoid schizophrenia) engages in conversation. The structure of the program that makes the linguistic decisions in PARRY is isomorphic to the 'deep structure' of the mind of the paranoiac. As Colby stated: "Since we do not know the structure of the 'real' simulative processes used by the mind-brain, our posited structure stands as an imagined theoretical analogue, a possible and plausible organization of processes analogous to the unknown processes and serving as an attempt to explain their workings".[11]

Yet, some critics of PARRY expressed the concern that this computer program does not in actuality "understand" the way a person understands and continued to assert that the idiosyncratic, partial and idiolectic responses from PARRY cover up its limitations.[12] Colby attempted to answer these and other criticisms in a 1974 publication entitled, "Ten Criticisms of PARRY."[13]

Colby also raised his own ethical concerns over the application of his work to real-life situations. In 1984, he wrote,

With the great amount of attention now being paid by the media to artificial intelligence, it would be naive, shortsighted, and even self-deceptive to think that there will not be public interest in scrutinizing, monitoring, regulating, and even constraining our efforts. What we do can affect people’s lives as they understand them. People are going to ask not only what we are doing but also whether it should be done. Some might feel we are meddling in areas best left alone. We should be prepared to participate in open discussion and debate on such ethical issues.[14]

Still, PARRY has withstood the test of time and for many years has continued to be acknowledged by researchers in computer science for its apparent achievements. In a 1999 review of human-computer conversation, Yorick Wilks and Roberta Catizone from the University of Sheffield comment:

The best performance overall in HMC (human-machine conversation) has almost certainly been Colby’s PARRY program since its release on the net around 1973. It was robust, never broke down, always had something to say and, because it was intended to model paranoid behaviour, its zanier misunderstandings could always be taken as further evidence of mental disturbance, rather than the processing failures they were.[15]

Other areas of study

During his career, Colby ventured into other, more esoteric areas of research, including the classification of dreams in "primitive tribes." His findings suggested that men and women of such tribes differ in their dream life, and that these differences might contribute an empirical basis to theoretical constructs of masculinity and femininity.[16]

Colby was also a chess player, and published a respected chess book called "Secrets of a Grandpatzer."[17] The book focuses on improving one's Elo rating from an average level ("patzer") to a very strong level ("grandpatzer", in the range 1700 to 2200).[18]

Books

  • (1951) A Primer for Psychotherapists.
  • (1955) Energy and Structure in Psychoanalysis.
  • (1957) An Exchange of Views on Psychic Energy and Psychoanalysis.
  • (1958) A Skeptical Psychoanalyst.
  • (1960) Introduction to Psychoanalytic Research.
  • (1973) Computer Models of Thought and Language.
  • (1975) Artificial Paranoia: A Computer Simulation of Paranoid Processes.
  • (1979) Secrets of a Grandpatzer: How to Beat Most People and Computers at Chess.
  • (1983) Fundamental Crisis in Psychiatry: Unreliability of Diagnosis.
  • (1988) Cognitive Science and Psychoanalysis.

Publications

  • "Sex Differences in Dreams of Primitive Tribes" American Anthropologist, New Series, Vol. 65, No. 5, Selected Papers in Method and Technique (Oct., 1963), pp. 1116–1122
  • "Computer Simulation of Change in Personal Belief Systems." Behavioral Science, 12 (1967), pp. 248–253
  • "Dialogues Between Humans and an Artificial Belief System." IJCAI (1969), pp. 319–324
  • "Experiments with a Search Algorithm for the Data Base of a Human Belief System." IJCAI (1969), pp. 649–654
  • "Artificial Paranoia." Artif. Intell. 2(1) (1971), pp. 1–25
  • "Turing-like Indistinguishability Tests for the Validation of a Computer Simulation of Paranoid Processes." Artif. Intell. 3(1-3) (1972), pp. 199–221
  • "Idiolectic Language-Analysis for Understanding Doctor-Patient Dialogues." IJCAI (1973), pp. 278–284
  • "Pattern-matching rules for the recognition of natural language dialogue expressions." Stanford University, Stanford, CA, 1974
  • "Appraisal of four psychological theories of paranoid phenomena." Journal of Abnormal Psychology. Vol 86(1) (1977), pp. 54–59
  • "Conversational Language Comprehension Using Integrated Pattern-Matching and Parsing." Artif. Intell. 9(2) (1977), pp. 111–134
  • "Cognitive therapy of paranoid conditions: Heuristic suggestions based on a computer simulation model." Journal Cognitive Therapy and Research Vol 3 (1) (March 1979)
  • "A Word-Finding Algorithm with a Dynamic Lexical-Semantic Memory for Patients with Anomia Using a Speech Prosthesis." AAAI (1980), pp. 289–291
  • "Reloading a Human Memory: A New Ethical Question for Artificial Intelligence Technology." AI Magazine 6(4) (1986), pp. 63–64

References

  1. ^ Energy and Structure in Psychoanalysis (1955)
  2. ^ Fundamental Crisis in Psychiatry (1983)
  3. ^ Cognitive Science and Psychoanalysis (1988)
  4. ^ "Kenneth Mark Colby". Archived from the original on 2008-06-07. Retrieved 2008-07-05.
  5. ^ quoted in Mind as Machine: A History of Cognitive Science By Margaret A. Boden
  6. ^ Mind as Machine: A History of Cognitive Science By Margaret A. Boden p. 370
  7. ^ Artificial Paranoia : A Computer Simulation of Paranoid Processes p. 99-100
  8. ^ In: Alan Turing: Life and Legacy of a Great Thinker By Christof Teuscher, Douglas Hofstadter, p 304
  9. ^ http://robot-club.com/lti/pub/aaai94.html Archived 2008-07-04 at the Wayback Machine "Chatterbots, Tinymuds, And The Turing Test: Entering The Loebner Prize Competition" by Michael L. Mauldin
  10. ^ http://www.stanford.edu/group/SHR/4-2/text/dialogues.html Archived 2007-07-11 at the Wayback Machine "Dialogues with colorful personalities of early AI"
  11. ^ Artificial Paranoia: A Computer Simulation of Paranoid Processes. p.21
  12. ^ http://cultronix.eserver.org/sengers/ Archived 2006-10-12 at the Wayback Machine "Wallowing in the Quagmire of Language: Artificial Intelligence, Psychiatry, and the Search for the Subject". Phoebe Sangers, Cultronix.
  13. ^ "Ten Criticisms of PARRY" (1974). S2CID 1553976.
  14. ^ "Reloading a Human Memory: A New Ethical Question for Artificial Intelligence Technology." AI Magazine 6(4) (1986), pp. 63-64
  15. ^ arXiv:cs.CL/9906027 v1 25 Jun 1999 "Human-Computer Conversation" by Yorick Wilks and Roberta Catizone
  16. ^ "Sex Differences in Dreams of Primitive Tribes," American Anthropologist, New Series, Vol. 65, No. 5: 1116-1122
  17. ^ Spar, James; McGuire, Michael. "IN MEMORIAM". University of California. Archived from the original on 17 April 2015. Retrieved 12 September 2013.
  18. ^ Pearson, Robert (3 December 2007). ""Secrets of a Grandpatzer" (Part 1)". Archived from the original on 20 July 2012. Retrieved 12 September 2013.
