Although, at the time, the failure of computers to pass what appeared to be fairly fundamental benchmarks was considered a major setback, it had the advantage of causing some people to question their initial assumptions about the nature of intelligence. Initially, these assumptions rested on a paradigm suggesting that intelligence would simply emerge as we built bigger, faster computers. However, without first understanding the nature of intelligence, it became increasingly difficult to see how it was going to be replicated. In 1950, the English mathematician Alan Turing devised the 'Imitation Game', in which a man and a woman each try to fool a questioner as to their real gender identity. The basic Turing Test is only a slight modification of this game, in which one set of answers comes from a computer.
- Questioner: aims to discover whether (A) or (B) is the computer
- (A) Computer: aims to fool the questioner
- (B) Human: aims to help the questioner
The job of the questioner is to determine whether the answers are coming from a computer or a human. Of course, the real challenge is for the computer to be 'intelligent' enough to play and win. The basic rationale behind this test is that intelligence may or may not be locked within the box, but as long as the responses from the box appear intelligent, it must, for all practical purposes, be considered intelligent. Although Turing devised his test back in 1950, and the Loebner Prize of $100,000 has been offered since 1991, no program has yet achieved the necessary success rate. However, even if a program were to pass, there has always been much debate surrounding the Turing Test and what it really proves in terms of intelligence. So, although the Turing Test has not been passed in its original format, many already want it to be extended so that true AI has to replicate not only intelligence, but also the total human persona. Yet others believe that a test based on human criteria is misleading, as the important issue is that the computer demonstrates its cognitive ability regardless of behaviour.
However, in 1980, John Searle devised a thought experiment called 'The Chinese Room' to demonstrate the weakness of Turing's test, by showing that a box could appear to respond intelligently without any real understanding.
- Questioner: only speaks Chinese
- Answerer: only speaks English, and uses instructions written in English to respond to specific Chinese ideograms
In this thought experiment, a man who speaks no Chinese is shut inside a room. However, in the room is a book that provides the rules for producing new Chinese symbols based on the ones received. On receiving a slip of paper with Chinese symbols, the man responds according to the rules. At no time does the man understand the questions or answers, which are always in Chinese. However, to the world outside the room, the responses appear intelligent. The essence of Searle's argument was that the computer in Turing's test is only playing the role of the man who speaks no Chinese. This man does not create the rules that allow his responses to appear intelligent; he is only mechanically following the instructions or program. Even today, it is still true that computers only act out what we will call 'codified human reasoning', which is, in essence, aligned to the following programming principle:
If (x=y) then do <this task>
Else do <this other task>
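Searle's rule book amounts to exactly this kind of codified reasoning: a mechanical mapping from input symbols to output symbols, with no understanding anywhere in the loop. A minimal sketch, in which the ideograms and replies are invented purely for illustration:

```python
# A toy 'Chinese Room': the operator matches incoming symbols against a
# rule book and copies out the prescribed reply, understanding neither.
# The symbols and replies below are invented for illustration only.
RULE_BOOK = {
    "你好": "你好",            # greeting -> greeting
    "你会说中文吗": "会",      # "do you speak Chinese?" -> "yes"
}

def respond(symbols: str) -> str:
    # Mechanically follow the rules; an unlisted input gets a stock reply.
    return RULE_BOOK.get(symbols, "请再说一遍")  # "please say that again"

print(respond("你会说中文吗"))  # the room appears to understand, but does not
```

From outside, the replies look fluent; inside, there is only table lookup.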
While this is a gross simplification of the real complexity within computer science, it is representative of the type of logic that drives a program to perform different tasks. The limitation of this type of logic quickly becomes self-evident when you try to instruct an imaginary robot to go and make a cup of coffee:
- What is the current location of your robot?
- Where is the facility to make coffee?
- How does the robot get there?
- What could prevent it getting there?
- Having got there, where are the cups?
- What happens if the cups are dirty?
- Where are the coffee, sugar and milk?
- Where is the water and kettle?
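The questions above can be sketched as the contingency-by-contingency code they would force a programmer to write. Everything here, including the robot's state and routine, is imaginary and exists only to illustrate the style of logic:

```python
# Imaginary coffee-making routine: every contingency must be spelled out.
def make_coffee(state: dict) -> str:
    if state.get("location") != "kitchen":
        if state.get("path_blocked"):
            return "fail: cannot reach kitchen"      # contingency covered
        state["location"] = "kitchen"                # assume travel succeeds
    if not state.get("clean_cups", 0):
        if state.get("dirty_cups", 0):
            state["clean_cups"] = state.pop("dirty_cups")  # wash the cups
        else:
            return "fail: no cups"                   # contingency covered
    for item in ("coffee", "water", "kettle"):
        if not state.get(item):
            return f"fail: no {item}"                # each must be anticipated
    return "coffee made"

# Anything not anticipated above -- a power cut, a locked door -- simply
# falls through; the program has no common sense to fall back on.
print(make_coffee({"location": "hall", "dirty_cups": 2,
                   "coffee": True, "water": True, "kettle": True}))
```

Each `if` handles exactly one listed contingency and nothing else, which is the brittleness the bullet list is driving at.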
OK, that is enough to convey the idea. Unless every possible contingency is covered, this type of computer logic has no capability to apply common sense. Yet even today, complex software programs are still developed using essentially this type of logic, requiring millions of lines of instructions. Having designed and written all this code, there is an even bigger problem:
How do you ensure that every logic path through the total maze of computer instructions actually works?
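The scale of that question can be hinted at with simple arithmetic: each independent two-way branch doubles the number of distinct execution paths, so n branches give 2^n paths. A sketch:

```python
# Each independent if/else branch doubles the number of execution paths,
# so exhaustively testing every path quickly becomes impossible.
def path_count(branches: int) -> int:
    return 2 ** branches

for n in (10, 30, 100):
    print(n, path_count(n))
# 10 branches -> 1,024 paths; 30 -> over a billion; 100 -> ~1.3e30
```

A program with millions of lines of instructions has vastly more than 100 independent branches, which is why every logic path can never be individually checked.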
Of course, computer science has made incredible advances, especially in terms of the speed, size and cost of hardware. There are now computer chips that can perform tens of millions of instructions per second at a cost of a few hundred dollars. While some physical limitations may slow further hardware progress by 2020, this could still allow a 1000-fold increase in computer performance by that date. However, speed alone only adds to the problem by allowing software programs to become ever bigger and more complex. Microsoft Windows stands as an obvious example of the potential to create an ever-larger software system, one that even this corporate giant struggles to release bug-free. At face value, pre-programmed human reasoning is probably not a viable approach to AI.