Article "Two Concepts of Artificial Intelligence: Realistic and Utopian" - journal "Philosophical Thought" - NotaBene.ru
Philosophical Thought
Reference:

Two Concepts of Artificial Intelligence: Realistic and Utopian

Rozin Vadim Markovich

Doctor of Philosophy

Chief Scientific Associate, Institute of Philosophy of the Russian Academy of Sciences 

109240, Russia, Moscow, ul. Goncharnaya 12, bld. 1, office 310

rozinvm@gmail.com
DOI: 10.25136/2409-8728.2023.2.39739

EDN: DDLZIT

Received: 06-02-2023

Published: 20-02-2023


Abstract: The article analyzes modern concepts of artificial intelligence. The author proposes distinguishing two main ones, utopian and realistic: the first claims that it is possible to create machines that think like a human being, or even better; the second presents intelligence as a kind of psychobiological computer. An excursion into the history of artificial intelligence considers three approaches that have seriously shaped the development of this phenomenon: the reductionist, the psychological, and the computational. Artificial intelligence is compared with natural intelligence, the latter understood as a complex semiotic, cultural, and social formation involving human communication, development, and the filiation of ideas and techniques. The author argues that the utopian concept of artificial intelligence cannot be realized, although efforts to implement it will continue for a long time. One argument here is the contradictions in the ethical interpretation of artificial intelligence, discussed in the form of a virtual dialogue between the author and a designer of modern robots. The realistic concept, by contrast, is being implemented as a permanent project. It makes practically no anthropological claims, and its notion of intelligence is reduced: intelligence means the imitation of only those established intellectual processes that can be modeled and algorithmized, which is a necessary condition for programming.


Keywords:

intelligence, mind, consciousness, robot, project, realization, programming, modeling, computer, utopia

This article is an automatic translation; the Russian original is available on the journal's website.


Although I have written on the philosophy of technology (several books and even more articles), in previous years I did not take up the topic of artificial intelligence. But since artificial intelligence is related to technology (though of course not reducible to it), and since so much is said and written about it, I decided it was time to think the topic through; after all, we are talking about an important trend of our time. I began to read and realized that the ideologists and theorists of artificial intelligence honestly admit that they themselves cannot agree on how to understand this phenomenon. "There is no single answer to the question of what artificial intelligence does," we read in a very good Wikipedia article on artificial intelligence. "Almost every author writing a book about artificial intelligence starts from some definition of it, considering the achievements of this science in its light" [4].

More precisely, the authors point out that understanding artificial intelligence presupposes knowledge of the essence and features of intelligence and, more broadly, of creativity, as well as a more or less strict, unambiguous concept; however, the relevant knowledge and concepts cannot currently be considered satisfactory. "Philosophy has not solved the question of the nature and status of human intelligence. Nor is there an exact criterion for when computers achieve 'intelligence', although at the dawn of artificial intelligence a number of hypotheses were proposed, for example, the Turing test or the Newell-Simon hypothesis ... The nature of human creativity is even less studied than the nature of intelligence" [4].

Despite much criticism, the concept of artificial intelligence that claims it is possible to create machines that think like a human being, and even better, is still popular and taken seriously. By machines are meant computers and modern robots operating on the basis of programs; both fall under the concept of modern technology. We will call this concept "anthropocomputer". Here is just one example. "Artificial intelligence is a technology, or rather a branch of modern science, that studies ways to train a computer, robotic equipment, or an analytical system to think intelligently, as a person does... Research in the field of AI is conducted by studying human mental abilities and transferring the results into the field of computer activity... A neural network is a mathematical model that simulates the structure and functioning of the nerve cells of a living organism. Accordingly, ideally, it is a self-training system" [5] (our italics - V.R.).
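The cited definition of a neural network as a "mathematical model that simulates the structure and functioning of nerve cells" and trains itself can be made concrete with a minimal sketch. The network size, learning rate, and XOR task below are arbitrary illustrative choices, not anything taken from the cited source:

```python
import numpy as np

# A toy two-layer network trained on XOR by gradient descent: one concrete
# instance of a "mathematical model" that adjusts itself from data.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden "neurons"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output "neuron"
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)          # forward pass: output layer
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: gradients of the squared error (up to a constant).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(losses[0], losses[-1])  # the error shrinks as the network "self-trains"
```

The "self-training" the quotation speaks of is here nothing more than repeated numerical adjustment of the weight matrices to reduce an error measure.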

Only at first glance is this one concept; in fact there are several. After all, these are not the same thing: learning (self-learning); the brain understood as a network of neurons; the mental abilities of an individual and their study; "the transfer of the results obtained into the field of computer activity"; the psychological interpretation of thinking (understanding, ways of solving problems, processes of analysis, synthesis, reflection, etc.); the computer, including the computational version of thinking. In science all of these differ, both as concepts and as discourses. But in the literature on artificial intelligence these and similar notions run together, easily combining into complexes and definitions. We see approximately the same thing in the already cited article on artificial intelligence:

"The problem is," John McCarthy wrote back in 1956, "that we cannot yet determine in general which computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and do not understand the rest. Therefore, intelligence within this science is understood only as the computational component of the ability to achieve goals in the world."

At the same time, there is also a point of view according to which intelligence can only be a biological phenomenon ... The following definitions of artificial intelligence are given:

· The scientific direction within which problems of hardware or software modeling of those types of human activity that are traditionally considered intellectual are posed and solved.

· The property of intelligent systems to perform (creative) functions that are traditionally considered the prerogative of humans. An intelligent system here is a technical or software system capable of solving tasks traditionally considered creative, belonging to a specific subject area, knowledge about which is stored in the memory of such a system. The structure of the intelligent system includes three main blocks: a knowledge base, a solver, and an intelligent interface that allows communicating with the computer without special data-entry programs.

· The direction in computer science and information technology whose task is to recreate intelligent reasoning and actions with the help of computer systems and other artificial devices.

· The ability of a system to correctly interpret external data, learn from such data, and use the knowledge gained to achieve specific goals and objectives through flexible adaptation.

One particular definition of intelligence, common to humans and "machines", can be formulated as follows: "Intelligence is the ability of a system to create programs (primarily heuristic ones) in the course of self-learning, in order to solve problems of a certain class of complexity, and to solve these problems" [4].
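The "three blocks" definition above, with a knowledge base, a solver, and an intelligent interface, can be illustrated by a toy rule-based sketch. The facts and rules are invented for the example; real intelligent systems are of course far more elaborate:

```python
# Knowledge base: facts plus if-then rules over a toy subject area.
facts = {"has_fur", "gives_milk"}
rules = [
    ({"has_fur"}, "is_warm_blooded"),
    ({"has_fur", "gives_milk"}, "is_mammal"),
]

def solve(facts, rules):
    """Solver: forward-chain over the rules until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def query(question, facts, rules):
    """Interface: answer a yes/no question without special data-entry programs."""
    return question in solve(facts, rules)

print(query("is_mammal", facts, rules))  # True
```

Everything "intelligent" here is the mechanical closure of a stored knowledge base under stored rules, which is exactly the reduced sense of intelligence the definition presupposes.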

Such material, and it is typical, forces us to raise two fundamental questions. Perhaps there is nonetheless a holistic concept of artificial intelligence, only an unusual one: hybrid, as they say today, or, in V.S. Stepin's terms, built within the framework of post-non-classical rationality? And if there is no such concept, is the existence of many concepts of artificial intelligence legitimate, and how is such multiplicity to be explained? For example, L.S. Vygotsky in the 1920s believed that the existence of several psychologies, each explaining the human psyche differently, was unacceptable; here he proceeded from the natural-science understanding and ideal of cognition [13, pp. 44-54]. However, I show that he was wrong: first, psychology, with its contradictions and other problems, cannot be brought under this ideal; second, each version of psychology serves and supports a certain type of personality characteristic of modern European culture (and there are indeed several such types) [13, pp. 219-233]. But can artificial intelligence be compared with the types of modern European personality? Let us set these questions aside for now and consider under the influence of which approaches the concept of artificial intelligence developed.

The first approach (let us call it "reductionist") was set by a program of reforming logic that aimed to reduce it to a kind of mathematics. The forerunner here was Leibniz, who formulated the task of constructing a symbolic calculus that would allow calculating instead of reasoning (recall McCarthy, who wrote that "intelligence within this science is understood only as the computational component"). Leibniz says that "he hit upon the astonishing idea that it is possible to find a certain alphabet of human thoughts, and that by combining the letters of this alphabet and analyzing the words made up of them, everything can be deduced and discussed <...> The 'universal characteristic', understood in logical terms, is a system of precisely defined signs by means of which, in logic and the other deductive sciences, the simple elements of the objects constituting the subject matter of the science are to be designated. First, these signs should be short and concise in form; they should contain the maximum of meaning in the minimum of extension. Second, these signs should correspond isomorphically to the concepts they denote, representing simple ideas in the most natural way possible. Complex ideas will be represented by connections or combinations of elementary ideas. In the language of the 'universal characteristic', the abstract theses of logic will appear in the form of visual rules governing actions with symbols. These rules describe the formal properties of sign transformations and have as their source the habits of visual representation <...> the only way to improve our inferences is to make them, as mathematicians do, visual, so that we can find our mistakes with our eyes, and, if a dispute arises among people, to say: 'Let us calculate!'; then, without any special formalities, it will be possible to see who is right" (cited from [16, p. 122]).

As I show, the path from this idea to its realization took several centuries and presupposed: "1) the interpretation of concepts as mathematical functions (Frege and Russell), which made it possible to designate individual statements with symbols and treat them as variables; 2) the reduction of all meanings of statements to two main ones, truth and falsity (Frege); 3) the creation, on the basis of the two previous inventions, of truth tables (a kind of generative system), with whose help an autonomous language of symbolic logic is built (Wittgenstein); 4) the construction of models of logical systems on the basis of the axiomatic method" [10, p. 133]. But there was also a serious price for the reduction of thinking to mathematics: the recasting of ordinary-language statements to fit the artificial symbolic logic, the limitation of symbolic discourse to typical cases only, the mathematical interpretation of events, and the reduction of the truth functions to two, truth and falsity [10, pp. 138-139]. What went unrecognized was not only the reduction of objective meanings to mathematical ones, but also 1) the difference between logic as a tool for constructing rules that allow consistent reasoning and logic as a tool of justification that organizes already established fields of reasoning into a single system (a theory); 2) the contradiction between the sense of autonomy of symbolic logic and its dependence on traditional logic; and 3) the lack of philosophical comprehension of the new discipline [10, pp. 140-142].
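Leibniz's "Let us calculate!" and the truth tables mentioned above can be shown in a short sketch that mechanically checks whether a propositional formula is a logical truth. The formula tested here, modus ponens, is chosen only as an example:

```python
from itertools import product

def truth_table(formula, variables):
    """Enumerate every assignment of True/False to the variables and
    evaluate the formula on each row: reasoning replaced by calculation."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, formula(**env)))
    return rows

def is_tautology(formula, variables):
    """A formula is a logical truth iff it is True on every row."""
    return all(result for _, result in truth_table(formula, variables))

# Modus ponens as a Boolean function: (p and (p -> q)) -> q,
# writing the implication "x -> y" as (not x) or y.
modus_ponens = lambda p, q: (not (p and ((not p) or q))) or q
print(is_tautology(modus_ponens, ["p", "q"]))  # True
```

The dispute is settled "with the eyes", exactly as Leibniz hoped: one simply inspects the finished table rather than arguing. The price named in the text is also visible: only two truth values, and only statements already forced into symbolic form.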

The second approach, which seriously influenced the concept of artificial intelligence, is the "psychological" one, in which understanding, thinking, and creativity (and, accordingly, intelligence) were reduced to the abilities of the individual. In contrast to the philosophical interpretation of these phenomena, which presupposed the analysis of a broader whole including Others, God, culture, consciousness, experience, values, reflection, development, and a number of other factors, psychologists reduced intelligence to psychological processes, which turned out to be very convenient for modeling and programming (this, in fact, is the third approach, the "computational" one). Computer modeling and programming also involve reduction, but a more complex one: not only to mathematical constructions but also to circuits, processes, and technical structures [11, pp. 140-143]. A separate variant of the psychological approach is the "physiological" ("biological") one, in which intelligence is reduced to the work of the brain and, further, to "neural networks".

These three influences led to the formation of the second concept of artificial intelligence, the "realistic" one, according to which intelligence is a kind of psychobiological computer. Recall: "an intelligent system is a technical or software system capable of solving problems ... knowledge about which is stored in the memory of such a system ... includes three main blocks: a knowledge base, a solver, and an intelligent interface that allows communicating with a computer without special data-entry programs ... a direction in computer science and information technology whose task is the recreation of intelligent reasoning and actions with the help of computer systems and other artificial devices."

Accordingly, the first concept of artificial intelligence mentioned above can be called not only anthropocomputer but also "utopian", and here is why. Whatever reservations one makes, artificial intelligence remains a technology, albeit one that includes anthropological components (unlike, say, the Internet, which is a planetary anthropo-socio-technical form of life [14]). And technology cannot be taught to think and create as a human being does. But to understand this last statement, we need to discuss what constitutes "natural intelligence" as opposed to artificial. Let us confine ourselves to human thinking and creativity, bearing in mind that the concept of natural intelligence includes other components as well, for example, the semiotic and technical filiation of ideas. Let us first consider one case, in order to give the reader material on which to rely in understanding what, from the author's point of view, natural intelligence is.

Plato's dialogue the Symposium may well be considered one of the first examples of both creativity and thinking, although both are only just taking shape there. The heroes of the Symposium give, in particular, two fundamentally new definitions of love: "the search for one's other half and the pursuit of wholeness", and "bearing spiritual fruits: beauty, goodness, and immortality" [7, p. 100]. Incidentally, if we follow the realistic concept of artificial intelligence, such a representation of love can be interpreted procedurally, and that is sufficient for modeling, algorithmization, and programming. However, analysis shows that these definitions point only to the tip of the iceberg, whose whole is hidden under water. The whole can be understood only if we reconstruct the situation and the problem that forced Plato to introduce new ideas about love; take into account the reaction of Athenian citizens, accustomed to believe that love is caused by the gods of love (Aphrodite and Eros), whereas Plato proceeded from the premise that an individual should choose whom to love; and explain how Plato arrived at his new ideas about love, and who influenced him.

I show that Plato in the Symposium was responding to a new anthropological situation (he had in mind the needs of the emerging ancient personality) and was solving the problem that, within the old social model of love, the individual could not love. To do this, he created new schemes and definitions of love, switching it onto personality and independent choice; and, taking into account how the new ideas could be understood, he belittled in every way the image of the traditional goddess of love, calling her the vulgar Aphrodite and contrasting her with the heavenly Aphrodite. Plato sought to think dialectically, which presupposed fulfilling a number of conditions: naming love, building its definitions, constructing schemes, consistent reasoning, friendly questions and answers, and, finally, a kind of insight that allows one to recall the idea of love which the soul contemplated before its incarnation in an earthly human body. Reflection was also involved: in the Phaedrus, the dialogue following the Symposium, Plato writes that he gave definitions of love and referred them to the idea of love (which presupposed coordinating these definitions), and this allowed love to be thought without contradictions [9, pp. 19-20; 8, p. 176]. One should not think that Plato discovered all of this himself. No, he relied on Pythagorean teaching and implemented Parmenides' program of building thinking, as well as a number of propositions expressed by Socrates [9, pp. 23-34].

The above case is just one example, but other studies of thinking and creativity, for example of Galileo's, confirm the main positions obtained in the analysis of the Symposium. Namely, the whole that makes it possible to rationally explain the structure, functioning, and development of natural intelligence includes not only the processes of thinking and creativity that are modeled in the realistic concept, but also the communication within which these processes unfold; the situations and problems that force new ideas to be created; the mechanisms for solving these problems; the audience's reactions to the new ideas and the correction of those ideas; changing conditions, situations, and problems; and forms of awareness and conceptualization. All of this is conditioned in human communication, historically and culturally. Not only semiotics, language, and discursive constructions contribute to the creation of new forms and contents of intellectual activity, but also the personality, with its feelings, relationships, and understanding of sociality.

I do not claim priority here, only a more voluminous and detailed picture; a number of the characteristics of natural intelligence indicated here in comparison with artificial intelligence have already been noted in the scientific literature. "The decision of an artificial intelligence," writes, for example, Sonia Spielerg, "looks like shuffling and choosing from loaded options of 'experience', whereas a person analyzes and chooses the best option not only empirically but also emotionally. Homo sapiens also has associative thinking, so his decision is ontologically correct, whereas AI does not have its own phenomenal experience, that is, ideas about the logic of the development of culture. Therefore, its decision may shock us with its 'unethicality'" [15].

One can also refer to H. Dreyfus's book What Computers Can't Do. "The main assumption underlying research in the field of artificial intelligence is the assumption that a person acts like a device for the symbolic processing of information. In turn, this assumption breaks down into four assumptions, which Dreyfus consistently refutes. 1. The psychological assumption: 'Thinking can be considered as the processing of information given in binary code, and the processing occurs in accordance with some formal rules' [2, p. 105]. 2. The epistemological assumption: 'All knowledge can be formalized; that is, everything that can be understood can be expressed in terms of logical relations, more precisely, in terms of Boolean functions, the logical calculus that sets the rules for handling information given in binary code' [2, p. 105]. 3. The ontological assumption: 'The machine model of thinking assumes that all information about the world, everything that forms the basis of reasonable behavior, should in principle allow analysis in terms of a multitude of elements indifferent to situations' [2, p. 105]. Thus, everything that happens in the world can be represented as a set of facts, each of which is logically independent of the others. 4. The biological assumption: 'At some level, it is usually assumed to be the level of neurons, information-processing operations are discrete in nature and occur on the basis of some biological equivalent of switching circuits' [2, p. 105].

The psychological assumption turns out to be possible only by conflating the ordinary meaning of the word "information" with the special meaning it has in the cybernetic theory of information. Refuting the psychological assumption, however, does not in the least prevent the epistemological assumption from continuing to exist. In this case, there is an unjustified extension of the methodology of the natural sciences to the field of mental phenomena. However, Dreyfus writes, "a complete refutation of the epistemological assumption would require proof that the world cannot in principle be analyzed in terms of well-defined data" [2, p. 164], and such a proof is simultaneously a refutation of the ontological assumption. According to Dreyfus, the ontological assumption accords poorly with our experience; it was a consequence of the need to understand the world and manage it, which forced the Western tradition to simplify reality, whereas reality is much more complicated. Advances in physics contributed to the spread of this desire for simplification. Although within the framework of physical theory the world can be represented as a set of atomic facts, this representation, taken beyond the limits of that theory, agrees poorly with our experience. The fact that the world can be "divided" into atomic facts does not mean that, if we try to "assemble" a complete picture of the world from these elements, we will actually get one. The biological assumption rests on the interpretation of the neural impulse as a unit of information circulating in the brain, like a machine bit. The possibility of such an interpretation, however, is refuted by Dreyfus, which cancels the biological assumption.

Because the assumptions refuted by Dreyfus underlay the assumption that a person acts like a device for the symbolic processing of information, both this assumption and the entire concept of classical AI, for which it is fundamental, are refuted. All the serious difficulties faced by AI development can no longer be solved by increasing speed and memory capacity. The refutation of the above-mentioned fundamental assumption of AI leaves no grounds for confidence in the possibility of modeling human behavior. And because of this, even the very question of whether a machine can pass the Turing test becomes superfluous" [6, pp. 129-131]. Indeed, the possibility of confusing human behavior with the action of a machine is very far from the distinctions between natural and artificial intelligence indicated by Dreyfus and by us.

Reference is sometimes made to the fact that "in May 1997, the computer Deep Blue beat world chess champion Garry Kasparov in a six-game match. But such an identification, in our opinion, is not entirely legitimate. First, because human thinking is a psychological, semiotic, and social phenomenon (the brain is only the substrate of thinking), not a machine one. Second, the supporters of computer thinking fail to see that Deep Blue's programs included not only iterations and comparisons of moves and their consequences, but also a generalization of the best chess games and even of the practice of preparing for chess championships developed and successfully used by our world champion M. Botvinnik. In this sense, Kasparov fought not with a machine but with chess culture embodied in a machine environment; and a culture is an order of magnitude more powerful than an individual" [12, p. 264]. Kasparov was beaten by a machine-amplified chess culture.
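Deep Blue's actual search machinery was vastly more elaborate, but the "iterations and comparisons of moves and their consequences" mentioned in the quotation can be shown on a toy game invented for the example: players alternately take one or two stones from a pile, and whoever takes the last stone wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(pile):
    """True if the player to move can force a win from this pile size.
    A position is winning if some legal move leaves the opponent in a
    losing position: the move-and-consequence iteration in miniature."""
    return any(not can_win(pile - take) for take in (1, 2) if take <= pile)

def best_move(pile):
    """Compare the consequences of each move; prefer one that leaves
    the opponent in a losing position."""
    for take in (1, 2):
        if take <= pile and not can_win(pile - take):
            return take
    return 1  # no winning move exists; take one stone and hope

print(best_move(4), best_move(5))  # 1 2
```

Nothing here resembles human deliberation: the program merely exhausts the game tree and compares outcomes, which is precisely the reduced, computational sense of "playing" that the passage contrasts with the chess culture embodied in Deep Blue's opening and preparation knowledge.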

Natural intelligence is a fundamentally anthropological, historical, and cultural formation. It is therefore not only law-governed but also singular. From this point of view, the expression "to train a computer, robotic equipment, an analytical system to think intelligently as a person" makes sense only as a utopian task, not a meaningful one. Artificial intelligence is a formation that is fundamentally programmable and constructed, acting as a means; but it is also a "social body" of the human being, since it begins to define new human capabilities, abilities, and competencies.

Albert Efimov, in his dissertation, tried to bring together all three versions of intelligence: natural, realistic, and utopian. On the one hand, he agrees with the criticism of the utopian version, noting that J. Searle and T. Nagel "criticized machine functionalism for ignoring epistemological issues of the value-semantic plane (emphasizing the computer's lack of understanding of the meaning of the information it perceives) and for an extremely simplified interpretation of the concept of natural intelligence. These philosophers and their followers raised fundamental objections to the creation of machine intelligence with capabilities comparable to human thinking" [3, pp. 15-16]. On the other hand, Efimov proposes the concept of "general artificial intelligence", which is basically close to the realistic concept of artificial intelligence. However, probably under the influence of his supervisor, Professor David Dubrovsky, whose main works are devoted to the study of consciousness, Efimov also enriches his concept with anthropological overtones, arguing that it is possible to create programs and technical devices (namely robots) that will reproduce human consciousness, emotions, and thinking.

"The modern specialization of artificial intelligence and its latest results," Efimov writes, "reveal a significant gap between theoretical research in the philosophy and methodology of artificial intelligence and the practical efforts of researchers to create intelligent machines (robots) whose abilities would not be inferior to human ones <...> This approach significantly increases the complexity of the task because of the universalism of the systems being developed. It requires wider use, in building the corresponding cognitive architectures, of the results of modern phenomenological studies of the dynamic structure of the phenomena of consciousness (D.I. Dubrovsky). To a certain extent, this approach reflects the process of the formation of the psyche in the course of biological evolution…

"The robot E.LENA is a virtual humanoid that possesses a virtual humanoid body endowed with facial expressions; it can use natural language, fully supporting Russian (visemes, phonemes); it is autonomous (requiring no operator actions when performing basic functions); it has its own personality (determined by the personality of the chatbot); and it has an initial capacity to learn about changes in the external world (the robot recognizes the faces of its interlocutors). The architecture of such a robot is presented for the first time in the scientific literature. Its uniqueness lies in the fact that its construction used not only a number of essential characteristics of human subjective reality (based on modern studies of the phenomenology of consciousness), but also the complex set of external manifestations of the conscious activity of a person working as a television presenter: facial expressions, the expression of the eyes, gestures, voice intonation, etc. ... Modeling and programming such properties of robots as would meet our legal and ethical principles and completely exclude their aggressiveness and 'unfriendly intentions' will probably require the creation of 'virtual people': programs that will emotionally identify themselves with people and possess the features of human self-consciousness and self-identity" [3, pp. 7, 19-21] (our italics - V.R.)

Efimov's work is pioneering and very informative (it was brilliantly defended at the Institute of Philosophy), but it is difficult to agree with the anthropological overtones he introduces: human emotions, meanings, consciousness, the self all belong to natural intelligence and, in my opinion, cannot be identically recreated in the form of artificial intelligence.

At first glance, the dream of the ideologists and designers of artificial intelligence to create technology (robots, computers) that thinks like a human being seems perfectly clear, but only at first glance. On reflection, serious contradictions are revealed, which can be seen, for example, in ethical questions. By way of explanation, let us imagine a virtual conversation between the author and an ideologue-designer (hereafter, the Designer) of roughly Efimov's persuasion.

 

- The Author. Why do you think that an ideal robot, one psychologically superior, say, to E.LENA, might suddenly turn against a person and harm him, the endless theme of so many stories and films?

- The Designer. Well, because it can think and generally behave like a human being, only an artificial human created by us. And some people commit crimes, go against society, steal, rape, kill. If a robot thinks and behaves like a human, why can't it, too, go off the rails?

- The Author. Why then create a robot that can, as you say, go off the rails and violate the first two laws of robotics?

- The Designer. How else can one create a robot with the necessary competencies, for example, one that helps a person and communicates with him? And a robot can go off the rails not because such a possibility is built into it, but because its structure is too complex and ambiguous for us to grasp completely; after all, you work on a computer or talk on a mobile phone without knowing exactly how they are constructed.

- The Author. I prefer to communicate not with technology but with people who are close or interesting to me. If you create for a robot an artificial intelligence identical to the human one, you will have to accept that the robot can harm a person, for you yourself write about the self-awareness and self-identity of E.LENA. Where there is a self, be prepared for unprogrammed actions.

- The Designer. The laws of robotics are designed for that case. Perhaps they are imperfect, and it is better to speak of the ethics and morality of artificial intelligence, in order to formulate more effective laws on their basis.

- The Author. "Ethics and morality for technology" sounds strange; I always thought they applied only to humans. Besides, won't your robots resemble the slaves of the ancient world? There were rules and laws for them too, yet, as you know, slaves periodically rebelled, remember Spartacus at least. Won't your robots also rise up against man?

- The Designer. Possibly; one needs to think this eventuality through and introduce additional programs into the robots' artificial intelligence. Besides, any new technology eventually reveals negative consequences. You cannot foresee everything at once.

- The Author. So are your robots artificial people or technology? If they are people, then for them to behave like people rather than soulless machinery they need not only laws but also a human environment: other people, communication, a sense of their finitude and limitation (people are not gods), change, history, a present and a future, and much, much else that is inherent in natural intelligence. But if this is technology, however complex and human-looking, then one should not strive for it to have self-awareness and selfhood; on the contrary, one must remember that it is just technology and that control cannot be weakened, let alone removed. Yes, in modern robots a person exercises control through special programs rather than directly with control panels, but this does not change the essence: it is not the robot that controls itself, but a person. Ultimately, a program is also a tool (a more complex one, naturally), also a technology, though a semiotic one.

 

The impossibility of implementing the utopian concept of artificial intelligence does not cast doubt on the effectiveness and prospects of the realistic concept. The realistic concept makes practically no anthropological claims, and the concept of intelligence is reduced here: intelligence means the imitation of only those established intellectual processes that can be modeled and algorithmized, which is a necessary condition for programming [11, pp. 140-144].
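The "reduced" sense of intelligence described above can be made concrete with a deliberately trivial sketch (my illustration, not the article's): a rule-based program imitates an established, fully algorithmized intellectual act, here a crude diagnostic judgment, and it is precisely this algorithmizability that makes the act programmable. Nothing in the sketch resembles communication, development, or the filiation of ideas; the rule set and symptom names are invented for the example.

```python
# The "realistic" (reduced) concept of intelligence: imitate only those
# intellectual processes that are already established and algorithmizable.
# A toy rule base imitates a diagnostic judgment without any claim to
# human-like thinking.

RULES = [
    ({"fever", "cough"}, "suspected flu"),
    ({"sneezing", "runny_nose"}, "suspected cold"),
]

def diagnose(symptoms):
    """Return the verdict of the first rule whose conditions all hold."""
    observed = set(symptoms)
    for conditions, verdict in RULES:
        if conditions <= observed:  # every condition is present
            return verdict
    return "no rule applies"

print(diagnose(["fever", "cough", "fatigue"]))  # suspected flu
print(diagnose(["headache"]))                   # no rule applies
```

The program is "intelligent" only in the reduced sense: the intellectual process has been fixed in advance as an algorithm over signs, which is exactly what the realistic concept requires.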

With the development of artificial intelligence and the Internet, intellectual activity began to change rapidly. Alongside natural intelligence and reasoning (which persist wherever traditional problems are solved), programming is gaining ever greater influence, on the one hand, and thinking, creativity, and the filiation of ideas in the Internet environment, on the other.

Although the utopian concept of artificial intelligence cannot be realized, it infects developers and users, encouraging them to work toward its implementation. What does the long-term existence of the utopian concept of artificial intelligence mean?

In general, we are dealing with a humanitarian and social situation and therefore, as Z. Bauman noted, with a variety of interpretations and a plurality (multiplicity) of modes of existence of the phenomenon (in this case, artificial intelligence) [1]. In my opinion, for example, the interpretation of thinking and creativity in biological terms leads to an epistemological impasse; but since there is a scientific community that shares biological views on man and sociality, attempts not merely to create a neural computer but also to reduce intelligence to the work of neurons will continue. Other interpretations of artificial intelligence, and work on creating the corresponding types of equipment, will continue as well.

"Recently," Efimov notes, "despite the successful development of science and technology in the field of artificial intelligence, researchers and engineers have no clear answer to the question of how technologies should be developed over the coming decades. The reason is that the theoretical, methodological, and philosophical foundations of such planning are very poorly developed. K. Jaspers' thought fully retains its relevance: 'the whole future fate of man depends on the way in which he subordinates the consequences of technical development and their impact on his life.'

There are constant discrepancies in forecasts for the creation of a General artificial intelligence. The computing power that seemed to A. Turing sufficient for creating a 'thinking machine' has long been achieved, and the possibilities of creating, storing, and transmitting information on the Internet exceed the wildest dreams of half a century ago. However, all these are only partial results, far from the creation of a General intelligence" [3, p. 5].

If General artificial intelligence is understood in the utopian sense, then this project cannot be implemented in principle; if in the realistic sense, then it has already been implemented and is developing as a "permanent project." At the same time, it is not difficult to predict that the project of realistic artificial intelligence will be refined more than once, and perhaps even revised, yet still in the course of its implementation.

 

 

References
1. Bauman, Z. (1996). Thinking Sociologically. Moscow: Aspect-Press.
2. Dreyfus, H. (1978). What Computers Can't Do: A Critique of Artificial Intelligence. Moscow: Progress.
3. Efimov, A.R. (2021). Philosophical and Methodological Foundations of Post-Turing Intelligent Robotics. Dissertation abstract. Moscow.
4. Artificial intelligence (2021). https://ru.wikipedia.org/wiki/Artificial_intelligence
5. Artificial intelligence (2020). https://www.calltouch.ru/blog/glossary/iskusstvennyj-intellekt/
6. Klyashtorny, A.S. (2021). Criticism of the concept of "strong" AI in H. Dreyfus's "What Computers Can't Do". https://elar.urfu.ru/bitstream/10995/93892/1/978-5-7525-4025-7_2011_037.pdf
7. Plato. (1993). Symposium. Collected works in 4 vols. Vol. 2. Moscow: Mysl'.
8. Plato. Phaedrus. Collected works in 4 vols. Vol. 2. Moscow: Mysl'.
9. Rozin, V.M. (2015). Plato's "Symposium": A New Reconstruction and Some Reminiscences in Philosophy and Culture. Moscow: LENAND.
10. Rozin, V.M. (2014). Logic and Methodology: From Aristotle's "Analytics" to Wittgenstein's "Tractatus Logico-Philosophicus". Moscow: LENAND.
11. Rozin, V.M. (2018). Design and Programming: A Methodological Study. Design. Development. Implementation. Historical and Social Context. Moscow: LENAND.
12. Rozin, V.M. (2016). Technique and Technology: From Stone Tools to the Internet and Robots. Yoshkar-Ola: PSTU.
13. Rozin, V.M. (2019). Methodological Problems of Psychology. 2nd ed. Moscow: Yurayt.
14. Rozin, V.M., & Golubkova, L.G. (2016). The Internet as a Technical Structure and a Living Planetary Organism. In: Rozin, V.M. Technique and Technology: From Stone Tools to the Internet and Robots. Yoshkar-Ola: PSTU.
15. The Seven Deadly Sins of Artificial Intelligence (2021). https://trends.rbc.ru/trends/social/5eb299089a79476e9fd77f5c
16. Styazhkin, N.I. (1967). The Formation of Mathematical Logic. Moscow: Nauka.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The author of the reviewed article, turning to one of the most relevant and broadly interesting problems, the problem of artificial intelligence, points out that the evaluation of developments in this area depends on one or another understanding of "intelligence" as such, since only the latter can serve as a criterion for the success of any specific developments. However, intelligence (in the strict sense) is only one aspect of human creative activity, which is essentially what researchers are talking about when they pose the problem of creating artificial intelligence. And defining creativity and activity is an even more difficult task. Analyzing the state of development of these problems, the author of the article rightly points out that the bearers of "optimistic" ideas about the possibilities of artificial intelligence, as a rule, do not adequately differentiate the characteristics of the tasks that computers or robots already solve today. The main theoretical content of the article is presented as a characterization of the approaches proposed for solving the problem of artificial intelligence. One of them (which the author calls reductionist because of its inherent reduction of logic to "calculus," that is, to a kind of mathematics) goes back to the idea of Leibniz's "universal characteristic" (note that Leibniz's developments had a "reverse" influence on mathematics, giving rise to so-called "logicism" in mathematics). It is opposed by a psychological approach based on the idea of an individual's abilities as the criterion for understanding intelligence and creativity (here again it must be pointed out that the understanding of the soul as a "system of abilities" also goes back centuries; for example, it dominated the philosophical psychology of the 18th century).
From the latter the path leads to the "computer" approach proper, convenient for solving programming and modeling problems. On the basis of these three approaches, the author argues, the so-called "realistic" concept of artificial intelligence was formed, in which the latter is understood as a "psychobiological computer." As far as the author's subsequent examples can be understood, he is inclined to believe that the "realism" in question indicates a fundamental and ineradicable dependence of "machine" intelligence on "human" intelligence; "utopianism," in contrast to "realism," begins when this dependence is ignored: "Natural intelligence is a fundamentally anthropological, historical and cultural formation. Therefore, it is not only law-governed but also singular. From this point of view, the expression 'to teach a computer, robotic equipment, or an analytical system to think intelligently like a human being' is meaningful only as a utopian task. Artificial intelligence is a formation that is fundamentally programmable and constructed, acting as a means, etc." "If by General artificial intelligence we mean the utopian concept," the author summarizes, "then this project cannot be implemented in principle, but if the realistic one, then it has already been implemented and is developing as a 'permanent project.'" There is no doubt that the reviewed article will arouse readers' interest; however, before publication it is necessary to correct typos (for example, already in the first sentence), and there are also some errors in punctuation. I recommend the article for publication in a scientific journal.