The future of AI: towards truly intelligent artificial intelligence

The ultimate goal of AI, to make a machine have a general type of intelligence similar to that of humans, is one of the most ambitious goals that science has set itself. Explanations of the mind have long borrowed from the technology of the day; at one time the dominant metaphor was the telephone system, since its web of connections seemed comparable to a network of neurons.

In the lecture given on receiving the prestigious Turing Award in 1975, Allen Newell and Herbert Simon (Newell and Simon, 1975) formulated the Physical Symbol System hypothesis, according to which "a physical symbol system has the necessary and sufficient means for general intelligent action." Since human beings are capable of displaying intelligent behavior in the general sense, it follows, according to the hypothesis, that we too are physical symbol systems.

It is worth clarifying what Newell and Simon mean by a physical symbol system (PSS). A PSS consists of a set of entities called symbols that, through relationships, can be combined to form larger structures, just as atoms combine to form molecules, and that can be transformed by applying a set of processes. These processes can generate new symbols, create and modify relationships between symbols, store symbols, compare whether two symbols are the same or different, and so on. The symbols are physical in that they have a physical substrate: electronic in the case of computers, biological in the case of human beings. In computers, symbols are realized by digital electronic circuits; in humans, by networks of neurons. Ultimately, according to the PSS hypothesis, the nature of the substrate (electronic circuits or neural networks) is irrelevant as long as it allows symbols to be processed.

Let us not forget that this is a hypothesis and should therefore be neither accepted nor rejected a priori; its validity or refutation must be tested by the scientific method, through experiment. AI is precisely the scientific field devoted to testing this hypothesis in the context of digital computers, that is, to verifying whether a suitably programmed computer is or is not capable of generally intelligent behavior.
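The ingredients Newell and Simon enumerate (atomic symbols, structures built from them, and processes that create, store, compare, and transform those structures) can be sketched in a few lines of Python. This is purely an illustrative toy, not anything from their paper; every name in it is invented:

```python
# A toy physical symbol system (illustrative only, names are my own):
# symbols combine into larger structures, and processes generate,
# store, compare, and transform those structures.

class SymbolSystem:
    def __init__(self):
        self.store = set()  # stored symbol structures

    def designate(self, *symbols):
        """Combine atomic symbols into a larger structure (an expression)."""
        return tuple(symbols)

    def remember(self, expr):
        """Store a symbol structure for later use."""
        self.store.add(expr)

    def same(self, a, b):
        """Compare whether two symbol structures are the same."""
        return a == b

    def transform(self, expr, rule):
        """Apply a process that produces a new symbol structure."""
        return rule(expr)

pss = SymbolSystem()
mol = pss.designate("H", "H", "O")           # atoms combining into a molecule
pss.remember(mol)
rev = pss.transform(mol, lambda e: e[::-1])  # a process yields a new structure
print(pss.same(mol, rev))                    # prints False: distinct structures
```

The hypothesis is substrate-neutral: nothing in this sketch depends on whether the tuples live in silicon or, by analogy, in neurons; only the symbol-processing operations matter.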

It is important to note that the goal is general intelligence, not specific intelligence, since human intelligence is of a general kind. Exhibiting specific intelligence is quite another thing. For example, programs that play chess at Grandmaster level are unable to play checkers, even though it is a much simpler game; a different, independent program must be designed and run for the same computer to play checkers. The machine cannot draw on its ability to play chess to adapt it to checkers. Human beings are not like this: any chess player can exploit their knowledge of that game to play checkers perfectly well within a few minutes. This capability for specific tasks is what is now called weak AI, as opposed to the strong AI that Newell and Simon and the other founding fathers of AI had in mind.

In difficulty, this ultimate goal of AI, a machine with general intelligence similar to that of humans, is comparable to explaining the origin of life or of the universe, or to uncovering the structure of matter.

It was the philosopher John Searle who introduced this distinction between weak and strong AI, in an article critical of AI published in 1980 (Searle, 1980) that caused, and continues to cause, much controversy. Strong AI would imply that a suitably designed computer does not simulate a mind but is a mind, and should therefore be capable of an intelligence equal to or even superior to that of a human. In his article, Searle tries to show that strong AI is impossible. At this point it should be clarified that general AI is not the same as strong AI. There is a connection, but only in one direction: any strong AI will necessarily be general, but there can be general, multitasking AIs that are not strong, systems that emulate the ability to exhibit human-like general intelligence without experiencing mental states.

Weak AI, on the other hand, would consist, according to Searle, of building programs that perform specific tasks, obviously without the need for mental states. The ability of computers to perform specific tasks, even better than people, has been amply demonstrated. In certain domains the achievements of weak AI far exceed human expertise: finding solutions to logical formulas with many variables, playing chess or Go, medical diagnosis, and many other aspects of decision-making. Weak AI is also associated with formulating and testing hypotheses about aspects of the mind (for example, the capacity to reason deductively or to learn inductively).
