Alan Turing: Can machines think? Do “thinking” machines think? Testing with passion

11.07.2023

Alan Turing published a long article, later to become a classic: Computing Machinery and Intelligence. Its title is usually rendered in Russian as “Can a machine think?” In the section “Contrary Views on the Main Question,” the author examined various objections and myths associated with artificial intelligence and the modeling of creative processes, and gave his comments on them...

1. Theological objection. “Thinking is a property of the immortal soul of man. God gave an immortal soul to every man and every woman, but did not give a soul to any other animal or machine. Therefore, neither animal nor machine can think.”

I cannot accept any part of what has just been said, and I will attempt to reply in theological terms. I should find this objection more convincing if animals were placed in the same class with men, for, in my opinion, there is a greater difference between the typical animate and the typical inanimate than between man and the other animals. The arbitrary character of this orthodox view becomes still clearer if we consider how it might appear to a person professing another religion. How, for example, do Christians regard the Muslim view that women have no souls? But let us leave this question aside and turn to the main objection. It seems to me that the argument cited above, with its reference to the soul of man, implies a serious restriction of the omnipotence of the Almighty.

There may be certain things that God cannot do, such as making one equal to two; but who among believers would not agree that He is free to infuse a soul into an elephant if He finds that the elephant deserves it? We can look for a way out in the assumption that He uses His power only in combination with mutations that improve the brain enough for it to satisfy the requirements of the soul He wishes to infuse. But exactly the same can be said in the case of machines. This reasoning may seem different only because in the case of machines it is harder to “digest.” In essence it means that we consider it highly unlikely that God would find the circumstances suitable for giving a soul to a machine, i.e. it really comes down to the other arguments discussed in the rest of the article. In attempting to build thinking machines we act no more disrespectfully towards God, usurping His power to create souls, than we do in procreating offspring; in both cases we are only instruments of His will and merely prepare refuges for the souls that, again, God creates.

All this, however, is empty speculation. Whatever such theological arguments may be used to support, they do not make much impression on me. In former times, though, such arguments were found very convincing. In the days of Galileo it was believed that church texts such as “The sun stood in the midst of heaven and hasted not to go down about a whole day” (Joshua 10:13) and “You have set the earth on solid foundations; it shall not be shaken for ever and ever” (Psalm 103:5) sufficiently refuted the Copernican theory. In our time such evidence seems groundless. But when the modern level of knowledge had not yet been reached, such arguments produced quite a different impression.

2. Objection from the “ostrich” point of view. “The consequences of machine thinking would be too terrible. Let us hope and believe that machines cannot think.”

This objection is rarely expressed in so open a form. But it sounds convincing to most of those who so much as think of it. We are inclined to believe that man is intellectually superior to the rest of nature. It would be best if it could be proven that man is necessarily the most perfect being, for then there would be no danger of his losing his dominant position. It is clear that the popularity of the theological objection stems from this feeling. The feeling is probably especially strong among intelligent people, since they value the power of thinking more highly than others do and are more inclined to base their belief in man's superiority on this ability. I do not believe this objection is significant enough to require any rebuttal. Consolation would be more appropriate here; perhaps it should be sought in the doctrine of the transmigration of souls.

3. Mathematical objection. There are a number of results in mathematical logic that can be used to show that there are certain limitations on the capabilities of discrete-state machines. The best known of these results, Gödel's theorem, shows that in any sufficiently powerful logical system it is possible to formulate statements that can be neither proved nor disproved within the system, unless possibly the system itself is inconsistent. There are other, in some respects similar, results due to Church, Kleene, Rosser and Turing. The result of the latter is especially convenient for us, since it relates directly to machines, while the other results can be used only as a comparatively indirect argument (for example, if we were to rely on Gödel's theorem, we would also need some means of describing logical systems in terms of machines and machines in terms of logical systems). Turing's result refers to a machine that is essentially a digital computer with unlimited memory capacity, and establishes that there are certain things such a machine cannot do. If it is set up to answer questions, as in the “imitation game,” then there will be questions to which it will either answer incorrectly or fail to answer at all, no matter how much time it is given. There can, of course, be many such questions, and questions that cannot be answered by one machine may be answered satisfactorily by another. We are, of course, assuming here that the questions are of the yes-or-no type rather than questions such as “What do you think of Picasso?” The following is the kind of question we know a machine cannot answer: “Consider a machine characterized as follows: ...Will this machine always answer ‘yes’ to every question?” If in place of the dots we put a description (in some standard form, for example like the one used in Section V) of a machine that stands in some relatively simple relation to the machine we are questioning, then it can be shown that the answer to this question will either be incorrect or will not be given at all. This is the mathematical result; it is claimed to prove a limitation on machines to which the human mind is not subject. […]
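To see the shape of the diagonal trick this objection leans on, here is a minimal sketch (not from Turing's paper; all names are invented). Take any candidate “decider” that claims to predict whether a given machine answers “yes,” and build the machine that asks the decider about itself and then does the opposite:

```python
# Illustrative sketch of the diagonal argument; names are invented.
# Any fixed prediction strategy fails the same way this one does.

def candidate_decider(machine, question) -> bool:
    """Pretend decider: claims to know whether `machine` would answer
    'yes' to `question`. Here it simply always predicts True."""
    return True

def contrary(machine):
    # Ask the decider about the machine applied to its own description,
    # then give the opposite answer.
    return "no" if candidate_decider(machine, machine) else "yes"

# The decider predicts that `contrary` answers "yes" to itself...
print(candidate_decider(contrary, contrary))  # True
# ...but by construction it answers "no": the prediction is wrong.
print(contrary(contrary))                     # no
```

Whatever decider is substituted for `candidate_decider`, the machine `contrary` built from it defeats it; this is exactly the kind of question the text says a given machine must get wrong.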

The answer to this objection, briefly, is as follows. It has been established that the capabilities of any particular machine are limited, but the objection under examination contains the unsubstantiated assertion, offered without any evidence, that no such limitations apply to the human mind. I do not think this side of the matter can be dismissed so easily. When one of these machines is asked the relevant critical question and gives a definite answer, we know in advance that the answer will be wrong, and this gives us a feeling of a certain superiority. Is this feeling not illusory? It can undoubtedly be quite sincere, but I do not think too much importance should be attached to it. We ourselves give incorrect answers to questions far too often for the feeling of satisfaction that arises in us at the sight of the fallibility of machines to be justified. Besides, the feeling of superiority can apply only to the one machine over which we have won our, in essence, very modest victory. There can be no question of a simultaneous triumph over all machines. So, in short, for any given machine there may be people who are smarter than it, but then again there may be other, still smarter machines, and so on. I think that those who share the view expressed in the mathematical objection would generally be willing to accept the “imitation game” as a basis for further discussion. Those convinced of the validity of the two previous objections will probably not be interested in any criterion at all.

Can a machine think?

It is not entirely clear how a computer can do anything that is not “in the program.” Can anyone be commanded to reason, guess, draw conclusions?

Opponents of the thesis about “thinking machines” usually consider it sufficient to cite a well-known fact: a computer in any case does only what is specified in its program, and therefore will never be able to “think,” since “thoughts according to a program” can no longer be counted as “thoughts.”

This is both true and false. Strictly speaking, indeed: if a computer does not do what is currently prescribed to it by the program, then it should be considered damaged.

However, what appears to be a “program” to a person and what appears to be a program to a computer are very different things. No computer can carry out the grocery shopping “program” that you put in your ten-year-old son’s head—even if that “program” includes only completely unambiguous instructions.

The difference is that computer programs are made up of a huge number of much smaller, more elementary commands. Tens and hundreds of such micro-commands make up a single step; thousands and even millions make up the entire grocery shopping program in a form a computer could execute.

No matter how ridiculous such petty regulation may seem to us, for a computer this method is the only applicable one. And the most amazing thing is that it gives the computer the opportunity to be much more “unpredictable” than is commonly believed!

Indeed, if the entire program consisted of the single order “go grocery shopping,” the computer, by definition, could do nothing else: it would stubbornly go to the supermarket, no matter what was happening around it. In other words, although “human” intelligence is required to understand such a short program, the result of executing it, if it were executed by a computer rather than a person, would be very rigidly determined.

We, however, are forced to give computers much more detailed instructions, specifying their every smallest step. At the same time, we have to add to the program instructions that are not directly related to the task itself. Thus, in our example, the robot needs to be told the rules for crossing the street (including the rule “if a car is coming at you, jump aside”).

These instructions must necessarily include checking certain conditions before making decisions, looking up information (about the weather, about the location of stores) in certain databases, weighing the importance of various circumstances, and much more. As a result, a computer with such a program has many more “degrees of freedom”: there are many places where it can deviate from the path to the final goal.
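To make this concrete, here is a minimal sketch (purely illustrative; every name and rule in it is invented, and a real robot program would expand each of these calls into thousands of micro-commands):

```python
# Illustrative sketch only: one human-level order expanded into small
# explicit commands with condition checks and information lookups.
import random

STORES_DB = {"supermarket": 1.2, "corner shop": 0.3}   # distances, km

def check_weather():
    return random.choice(["clear", "rain", "storm"])   # stand-in for a real lookup

def car_is_coming():
    return random.random() < 0.3                       # stand-in for a real sensor

def cross_street():
    while car_is_coming():      # the "if a car is coming, jump aside" rule
        pass                    # step aside and wait
    return "crossed"

def go_grocery_shopping(shopping_list):
    if check_weather() == "storm":                # a decision point
        return "stayed home"
    store = min(STORES_DB, key=STORES_DB.get)     # weigh circumstances: nearest store
    cross_street()              # this call hides many more micro-commands
    return f"bought {shopping_list} at the {store}"

print(go_grocery_shopping(["bread", "milk"]))
```

Each condition check is one of the “degrees of freedom” mentioned above: a place where execution can branch away from the straight path to the goal.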

Of course, in the overwhelming majority of cases, these deviations will be undesirable, and we try to create conditions for the computer to operate in which the risk of a “car jumping out from around the corner” would be minimal. But life is life, and it is impossible to foresee all conceivable surprises. That is why a computer is capable of surprising both with an unexpectedly “reasonable” reaction to seemingly unpredictable circumstances, and with incredible “stupidity” even in the most ordinary situations (more often, unfortunately, the latter).

It is the construction of complex programs based on a detailed analysis of the smallest steps that make up the human thinking process that constitutes the modern approach to creating “thinking machines” (at least, one of the approaches). Of course, complexity isn't everything. And yet, among the scientists dealing with this problem, few doubt that the “smart” programs of the 21st century will differ from today's primarily in their immeasurably greater complexity and number of elementary instructions.

Many modern information processing systems are already so complex that some features of their behavior simply cannot be deduced from the programs themselves - they have to be literally studied by conducting experiments and testing hypotheses. And vice versa - many features of intelligent human activity, which at first glance seem almost like “insights from above,” are already quite well modeled by complex programs consisting of many simple steps.

Genrikh Altov

Can a Machine Think?

I am going to consider the question “Can a machine think?” But to do this, one must first define the meaning of the term “think”...

A. Turing

Trigger Chain

Twice a week, in the evenings, the grandmaster came to the Institute of Cybernetics and played with an electronic machine.

In the spacious and deserted room there was a low table with a chessboard, a clock, and a push-button control panel. The grandmaster sat down in a chair, set up the pieces, and pressed the “Start” button. A moving mosaic of indicator lamps lit up on the front panel of the electronic machine. The lens of the tracking system was aimed at the chessboard. Then a short inscription flashed on the matte display. The machine made its first move.

It was quite small, this machine. It sometimes seemed to the grandmaster that he was playing against the most ordinary refrigerator. But this “refrigerator” always won. In a year and a half, the grandmaster had barely managed to draw only four games.

The machine never made a mistake. The threat of time pressure never loomed over it. The grandmaster more than once tried to throw the machine off by making a deliberately absurd move or sacrificing a piece. Each time, as a result, he had to hastily press the “Resign” button.

The grandmaster was an engineer and experimented with the machine to refine the theory of self-organizing automata. But at times he was infuriated by the absolute equanimity of the “refrigerator.” Even at critical moments in the game, the machine did not think for more than five or six seconds. Calmly blinking the multicolored lights of its indicator lamps, it would record the strongest possible move. The machine knew how to adjust to its opponent's style of play. Sometimes it raised its lens and looked at the man for a long time. The grandmaster grew nervous and made mistakes...

During the day, a silent laboratory assistant came into the room. Gloomily, without looking at the machine, he reproduced on the chessboard games played at various times by outstanding chess players. The “refrigerator's” lens extended all the way out and hung over the board. The machine did not look at the laboratory assistant. It recorded the information dispassionately.

The experiment for which the chess machine had been created was nearing its end. It was decided to organize a public match between man and machine. Before the match, the grandmaster began appearing at the institute even more often. The grandmaster understood that a loss was almost inevitable. And yet he persistently looked for weak points in the “refrigerator's” game. The machine, as if guessing about the upcoming fight, played stronger and stronger every day. It unraveled the grandmaster's most cunning plans with lightning speed. It smashed his pieces with sudden, exceptionally sharp attacks...

Shortly before the start of the match, the machine was transported to the chess club and installed on the stage. The grandmaster arrived at the very last minute. He already regretted having agreed to the match. It was unpleasant to lose to the “refrigerator” in front of everyone.

The grandmaster put all his talent and all his will to win into the game. He chose an opening he had never played against the machine before, and the game immediately sharpened.

On the twelfth move, the grandmaster offered the machine a bishop for a pawn. The bishop sacrifice set up a subtle, pre-prepared combination. The machine thought for nine seconds and declined the sacrifice. From that moment on, the grandmaster knew that he would inevitably lose. However, he continued the game: confidently, boldly, riskily.

None of those present in the hall had ever seen such a game. It was art of the highest order. Everyone knew that the machine always won. But this time the position on the board changed so quickly and so dramatically that it was impossible to say who would win.

After the twenty-ninth move, the inscription “Draw” flashed on the machine's display. The grandmaster looked at the “refrigerator” in amazement and forced himself to press the “No” button. The indicator lights flared, rearranging their pattern, and froze warily.

In the eleventh minute, the machine made the move the grandmaster feared most. A rapid exchange of pieces followed. The grandmaster's position worsened. However, the word “Draw” reappeared on the machine's display. The grandmaster stubbornly pressed the “No” button and led his queen into an almost hopeless counterattack.

The machine's tracking system immediately began to move. The glass eye of the lens stared at the man. The grandmaster tried not to look at the machine.

Gradually, yellow tones began to predominate in the light mosaic of indicator lamps. They became richer, brighter - and finally all the lamps went out except the yellow ones. A golden sheaf of rays fell on the chessboard, surprisingly similar to warm sunlight.

In tense silence, the hand of the large control clock clicked, jumping from division to division. The machine was thinking. It thought for forty-three minutes, although most of the chess players sitting in the hall believed there was nothing special to think about and that it could safely attack with the knight.

Suddenly the yellow lights went out. The lens, quivering uncertainly, returned to its usual position. A record of the move appeared on the display: the machine had cautiously moved a pawn. There was a noise in the hall; many felt that this was not the best move.

After four moves, the machine admitted defeat.

The grandmaster, pushing his chair aside, ran up to the machine and jerked up the side panel. Under the panel, the red light of a control mechanism flashed on and off.

A young man, a correspondent for a sports newspaper, barely made his way onto the stage, which was immediately filled with chess players.

“It looks like it simply gave in,” someone said uncertainly. “It played so amazingly, and then suddenly...”

“Well, you know,” objected one of the famous chess players, “it happens that even a human fails to notice a winning combination. The machine played at full strength, but its capabilities were limited. That's all.”

The grandmaster slowly lowered the machine's side panel and turned to the correspondent.

“So,” he repeated impatiently, opening his notebook, “what is your opinion?”

“My opinion?” the grandmaster echoed. “Here it is: a trigger chain in block one hundred and nine has failed. Of course, the pawn move is not the strongest. But now it is hard to say which is cause and which is effect. Maybe because of that trigger chain the machine failed to notice the better move. Or maybe it really decided not to win, and that cost it the triggers. After all, it is not so easy even for a human to overcome himself...”

“But why this weak move, why lose?” the correspondent asked in surprise. “If a machine could think, it would strive to win.”

The grandmaster shrugged his shoulders and smiled:

“How shall I put it... Sometimes it is much more humane to make a weak move.”

Ready for Takeoff

The lighthouse stood on a high rock jutting far out into the sea. People appeared at the lighthouse only occasionally, to check the automatic equipment. About two hundred meters from the lighthouse an island rose out of the water. For many years a spaceship that had returned to Earth after a long voyage had stood on the island, as if on a pedestal. It made no sense to send such ships into space again.

I came here with an engineer who was in charge of lighthouses along the entire Black Sea coast. When we climbed to the top platform of the lighthouse, the engineer handed me binoculars and said:

“There will be a storm. We are very lucky: before bad weather it always comes to life.”

The reddish sun glowed dimly on the gray crests of the waves. The rock cut the waves; they went around it and noisily climbed onto the slippery, rusty stones. Then, with a loud sigh, they spread out into foamy streams, opening the way for new waves. This is how Roman legionaries advanced: the front rank, having struck, retreated back through the opened ranks, which then closed and attacked with renewed vigor.

Through the binoculars I could clearly see the ship. It was a very old two-seater starship of the Long-Range Reconnaissance type. Two neatly repaired holes stood out in the bow. A deep dent ran along the hull. The ring of the gravity accelerator was split in two and flattened. Above the wheelhouse, the cone-shaped seekers of a long-obsolete system and an infrasonic weather scout rotated slowly.

“You see,” said the engineer, “it senses that there will be a storm.”

Somewhere a seagull screamed in alarm, and the sea responded with the dull crash of waves. A gray haze rising above the sea gradually obscured the horizon. The wind pulled the lightened wave crests up towards the clouds, and the clouds, overloaded with bad weather, sank towards the water. A storm was bound to break out where sky and sea touched.

“Well, that much I can understand,” the engineer continued. “The solar batteries feed the accumulators, and the electronic brain controls the instruments. But everything else... Sometimes it seems to forget about the land, the sea, the storms, and begins to take an interest only in the sky. The radio telescope extends, the locator antennas rotate day and night... Or something else: suddenly a tube rises and begins to watch the people. In winter the winds here are cold and the ship gets covered with ice, but as soon as people appear at the lighthouse, the ice instantly disappears... By the way, algae does not grow on it...”

One of the most remarkable inventions of our time is the high-speed electronic computer. In some cases it is able to do the work of a “thinking” person. But some people, rightly admiring these successes, go so far as to equate human thinking with the computational work of electronic devices.

Scientific psychology shows that this identification is impermissible. At the same time, its data help to compare the operation of machines and mental activity and to identify their fundamental differences.

The comparison is based on the fact that computers, under certain conditions, give the same result as a thinking person. Moreover, they achieve it much faster and more accurately, and often do things that are altogether inaccessible to humans. Thus, it took the English mathematician Shanks almost fifteen (!) years to calculate the number pi to 707 digits. In less than one (!) day, an electronic machine “derived” this number to 2048 decimal places.

Currently, there are machines that play chess, translate from one language to another, solve algebraic equations with many unknowns, and perform many other actions that before them were the “privilege” of only human thinking.

It would seem that this is proof of the identity of human thought and the work of computers. However, one should not rush to such a conclusion. It is necessary first to understand whether there is identity in the methods of achieving the same results when thinking and working a machine.

Scientific psychology answers this question in the negative. Let us return to what has already been said about human thinking in problem solving. In Yablochkov's invention of his “candle,” in Kekule's discovery of the benzene ring formula, in our crossing out of the nine dots, a distinctive feature of human thinking is revealed: the ability to find a new principle, a new way of solving a problem that a person has never solved before and whose methods of solution he did not yet know. Human thinking manifests itself in the posing of ever new problems and in the search for solutions for which there are no ready-made recipes. In the process, previously found methods are compared, and attempts are made to find a solution in areas that seem quite unlike the problem being solved (recall the circumstances of the discoveries made by Yablochkov and Kekule).

But as soon as a person finds the principle of a solution, he turns it into a general rule, into a formula, following which one can cope with problems of the same type without much searching.

We all know well that a “difficult” school problem ceases to be “difficult” when the rule for solving it is found - then it becomes typical, essentially already turned into an example. So, if you have found the principle for solving a problem with nine points, then it will be easy for you to solve a problem with four points located in the shape of a square.

As the history of mathematics testifies, at one time the proof and use of the famous Pythagorean theorem was so difficult and required such intense and complex work of thought that it was considered the limit of learning. Now the use of formulas based on this theorem is quite accessible to any schoolchild familiar with elementary geometry.
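For reference, the theorem that was once considered the limit of learning now fits in a single line, with the classic 3-4-5 right triangle as a worked check:

```latex
a^2 + b^2 = c^2, \qquad 3^2 + 4^2 = 9 + 16 = 25 = 5^2 .
```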

But it is precisely this search for new problems and for the principles of solving them, this determination of new methods of action under changed conditions, that is inaccessible to electronic machines.

In all their actions, even the most complex, machines are guided by a special table of commands compiled for them by a person who has already found a principle for solving the problem, a principle the machine can then reproduce and repeat. Such a table of commands, which gives precise guidance for solving problems of a given type, is called a program. A machine can perform any job for which a person, using his own thinking, has previously compiled such a program. Without it, and therefore without a person's preliminary mental activity, a “thinking” machine cannot work. But following the program, the machine will perform the necessary actions millions of times faster than a person. That is why it can output the number pi to a thousand digits, but only according to rules already discovered by man and turned by him into the necessary program.
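As a minimal sketch of what such a “table of commands” looks like, here is pi computed by a rule discovered by man, Machin's formula of 1706; the code merely executes the rule mechanically:

```python
# Digits of pi by a human-discovered rule: Machin's formula
#   pi/4 = 4*arctan(1/5) - arctan(1/239),
# with arctan(1/x) expanded as a Taylor series. The machine only
# repeats these commands; the principle was found by a person.
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """Taylor series for arctan(1/x), carried to the needed precision."""
    getcontext().prec = digits + 10            # guard digits against rounding
    total = term = Decimal(1) / x
    n, sign = 1, 1
    while term > Decimal(10) ** -(digits + 5):
        n += 2
        sign = -sign
        term = Decimal(1) / (x ** n)
        total += sign * term / n
    return total

def pi_digits(digits: int) -> Decimal:
    pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    getcontext().prec = digits + 1             # "3" plus the requested decimals
    return +pi                                 # unary plus rounds to precision

print(pi_digits(50))  # 3.1415926535897932384626433832795028841971...
```

Nothing in these lines searches for a new principle; every step repeats, very quickly, a rule a person wrote down in advance.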

Thus, the machine can perform only those actions whose principle has already been discovered and thought out by man. That is why electronic computing devices lighten a person's mental labor, freeing him from tedious work for which a fundamental solution has already been found. But these machines can never replace the thinking itself, the mental work of people aimed at finding principles for solving the ever-new problems that life puts forward.

Therefore, the term “thinking” machine is only a metaphor, but one that correctly captures the connection between electronic machines and thinking. These machines use the results of the work of the human mind, facilitating it, but they themselves do not possess thinking. Thinking is inherent only to man.


Alan Turing proposed an experiment to test whether a computer has consciousness, and John Searle proposed a thought experiment intended to refute Turing's. Let us examine both arguments and, along the way, try to understand what consciousness is.

Turing test

In 1950, in his paper “Computing Machinery and Intelligence,” the British mathematician Alan Turing proposed his famous test, which, in his opinion, makes it possible to determine whether a particular computer is capable of thinking. The test essentially copied the imitation game then widespread in Britain. Three people took part in it: a host, a man, and a woman. The host sat behind a screen and could communicate with the other two players only through notes. His task was to guess the gender of each of his interlocutors, who were not at all obliged to answer his questions truthfully.

Turing used the same principle in his test of machine intelligence. Only here the host must guess not the gender of the interlocutor but whether it is a machine or a person. If the machine can successfully imitate human conversation and mislead the host, it passes the test and, presumably, demonstrates that it has consciousness and that it thinks.
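Schematically, the protocol looks like the sketch below (not Turing's own formulation; the two toy “players” and their identical canned replies are invented, while the sample question and answer are borrowed from Turing's paper):

```python
# Toy schematic of the imitation game. The host sees only typed
# replies from two hidden channels and must decide which is the machine.

def human_player(question: str) -> str:
    return "Count me out on this one. I never could write poetry."

def machine_player(question: str) -> str:
    return "Count me out on this one. I never could write poetry."

def imitation_game(questions):
    channels = {"A": human_player, "B": machine_player}  # assignment hidden from the host
    for q in questions:
        print("Host:", q)
        for name, player in channels.items():
            print(f"  {name}: {player(q)}")   # notes passed through the screen

imitation_game(["Please write me a sonnet on the subject of the Forth Bridge."])
```

If the replies are indistinguishable, the host can do no better than chance, which is exactly what “passing the test” means.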

Young Alan Turing (passport photo).
Source: Wikimedia.org

Chinese room

In 1980, philosopher John Searle proposed a thought experiment that could refute Turing's position.

Let us imagine the following situation. A person who neither speaks nor reads Chinese enters a room. In this room there are cards with Chinese characters, as well as a book, written in a language the person does know, that describes what to do with the symbols when other symbols are passed into the room. Outside the room is an independent observer who speaks Chinese. His task is to talk with the person in the room, for example through notes, and find out whether his interlocutor understands Chinese.

The purpose of Searle's experiment is to demonstrate that even if an observer believes that his interlocutor can speak Chinese, the person in the room will still not know Chinese. He will not understand the symbols with which he operates. In the same way, a “Turing machine” that could pass the test of the same name would not understand the symbols it uses and, accordingly, would not have consciousness.
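In effect, the room behaves like a lookup table. A toy sketch (the rule-book entries here are invented stand-ins) shows why no step requires understanding:

```python
# Toy "Chinese room": incoming symbol strings are matched against a
# rule book and the prescribed output is returned. No step requires
# knowing what any symbol means.
RULE_BOOK = {
    "你好吗?": "我很好。",        # invented entry: "How are you?" -> "I am fine."
    "你会中文吗?": "当然会。",    # invented entry: "Do you know Chinese?" -> "Of course."
}

def room(incoming: str) -> str:
    # The person inside only matches shapes against the book.
    return RULE_BOOK.get(incoming, "请再说一遍。")   # default: "Please say that again."

print(room("你好吗?"))   # prints "我很好。", yet nothing here understands Chinese
```

On Searle's view, a program that passes the Turing test differs from this table only in scale, not in kind.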

According to Searle, even if such a machine could walk, talk, operate objects and pretend to be a full-fledged thinking person, it would still not have consciousness, since it would only execute the program embedded in it, responding with given reactions to given signals.

Philosophical Zombie

However, imagine the following situation, proposed by David Chalmers in 1996. Let us picture a so-called “philosophical zombie”: a creature that in all respects resembles a person. It looks like a person, talks like a person, reacts to signals and stimuli like a person, and generally behaves like a person in every possible situation. But it has no consciousness, and it experiences no feelings. It reacts to what would cause a person pain or pleasure as if it actually experienced those sensations, yet in fact it does not experience them; it only imitates the reaction.

Is such a creature possible? How do we distinguish it from a real person who is experiencing feelings? What generally distinguishes a philosophical zombie from people? Could it be that they are among us? Or maybe everyone except us are philosophical zombies?

The fact is that in any case we have no access to the internal subjective experience of other people. No consciousness other than our own is accessible to us. We merely assume from the outset that other people have it, that they are like us, because on the whole we have no particular reason to doubt it: others behave the same way we do.
