Security Issues
Reference:

Ethical Regulation of Artificial Intelligence as a factor of Financial and Banking Sector Security: China's Experience

Gorian Ella

ORCID: 0000-0002-5962-3929

PhD in Law

Associate Professor, Vladivostok State University

690014, Russia, Primorsky Krai, Vladivostok, Gogol str., 41, office 5502

ella-gorjan@yandex.ru
DOI: 10.25136/2409-7543.2022.2.38380

EDN: IGFWAA

Received: 27-06-2022

Published: 04-07-2022


Abstract: The object of the study is relations in the field of ethical regulation of the use of artificial intelligence technologies in the financial and banking sector. The subject of the study is China's regulatory documents that establish requirements for the deployment and application of artificial intelligence technologies. The features of the Chinese approach to regulating relations in this area are determined, and the general and special ethical requirements for artificial intelligence technologies are characterized. The article examines the activities of the financial regulator, the People's Bank of China, in regulating the ethics of artificial intelligence and fintech companies, and identifies the key security aspects for which ethical requirements are especially necessary. China systematically implements its state policy on the legal regulation of the digital economy by incorporating ethical requirements for artificial intelligence technologies into legal mechanisms. The ethical regulations cover all aspects of these relations and focus on the security of personal data, freedom of competition and the responsibility of the actors involved. We especially note the emphasis of the Chinese authorities on the principle of non-discrimination against particular social groups and administrative-territorial entities, which should narrow the gap in the use of digital technologies on a national scale and contribute to the unity of the nation. The financial regulator keeps the ethics of artificial intelligence under its control, developing the corresponding regulatory acts for financial institutions. The People's Bank of China pays special attention to the legally "weak" aspects of the financial and banking sector: personal data protection and unfair competition.


Keywords:

financial and banking sector, information security, ethics, artificial intelligence, financial regulator, China, data security, digital economy, fintech, personal data

This article is an automatic translation; the original Russian text is available on the journal's website.

Introduction. Artificial intelligence (hereinafter, AI) has become the main driving force of a new round of industrial transformation around the world, and many countries are using the opportunity of the AI revolution to promote domestic economic and technological development. Over the past few decades we have witnessed many advances in computing and AI-related approaches, driven by progress in machine learning, growth in computing power, the availability of the mobile Internet, and the availability of large amounts of data and data storage technologies. This has led to a number of breakthroughs that have allowed AI to become an engine of economic development in many countries.

Government, the scientific community and industry have reached a consensus that the security of applications using AI technologies is becoming a key factor in their adoption; therefore, safeguards to reduce the potential risks associated with AI need to be adopted at the state level. To this end, the private and public sectors are developing catalogues of ethical AI principles, and verifying compliance with them in existing AI systems and products requires sophisticated AI management systems covering training, testing and security verification. These management systems are still at the stage of intensive development and are therefore not yet ready for widespread commercial implementation. The main technical obstacles are deeply rooted in fundamental problems of modern AI research, such as human-level moral cognition, common-sense ethical reasoning and the interdisciplinary development of AI ethics. Nevertheless, some states define, at the governmental level, fundamental ethical principles within which AI technologies are to be developed and applied.

Researchers note that, against the background of global competition for the opportunities provided by artificial intelligence, many countries and regions are openly participating in the "race for AI" [1]. In their view, greater visibility of the risks associated with AI technology has led to increasingly loud calls for regulators not to limit themselves to capturing the benefits, but also to ensure proper regulation of "trustworthy" AI, that is, AI that is lawful, ethical and reliable. In addition to minimizing risks, such regulation could encourage the adoption of AI, increase legal certainty and thereby also strengthen countries' positions in the race. Consequently, according to these researchers, the "race for AI" also generates a "race for AI regulation". China is striving to take a leading place in both races, but approaches the issue systematically, developing legal instruments as part of the formation of a national information security system. In July 2017, the State Council of China unveiled the country's artificial intelligence strategy, the New Generation Artificial Intelligence Development Plan. The strategy sets out China's goals of becoming a world leader in artificial intelligence by 2030, of monetizing artificial intelligence into an industry worth a trillion yuan (about $150 billion), and of becoming a driving force in defining ethical norms and standards for artificial intelligence. Several reports analyze specific aspects of China's AI policy or assess the country's technical capabilities. The document is intended as a blueprint for a complete national AI ecosystem. Chinese scientists expect AI to change all aspects of economic activity and to generate new requirements, new products, new technologies and new forms of business of all sizes; in their opinion, China should take advantage of this opportunity and initiate a serious evolution of its economic structure [2].

In the fall of 2021, the National Professional Committee for the Management of New Generation Artificial Intelligence issued the Code of Ethics for New Generation Artificial Intelligence. In January 2022, the People's Bank of China issued the Financial Technology Development Plan for 2022-2025, in which it outlined the basic principles of its ethics management system in the field of financial technology. China's financial regulator is stepping up efforts to create an ethical governance system for the financial technology sector by the end of 2022, as algorithms and new technologies, including blockchain, artificial intelligence and big data, are increasingly embedded in financial products and services. The Chinese authorities therefore plan to create a holistic regulatory mechanism to hold businesses accountable and to punish abuses.

The purpose of our study is to determine the features of the Chinese approach to the ethical regulation of the use of artificial intelligence in the financial and banking sector. To obtain the most reliable scientific results, a number of general scientific methods (system-structural and formal-logical) and special legal methods of cognition (comparative-legal and formal-legal) were used.

The main part. Chinese researchers have laid a scientifically sound foundation for the need for ethical regulation of the use of AI from a security perspective. In a 2019 study, scientists proposed creating a universal ethical basis for AI governance within the framework of the theory of responsible research and innovation in order to form an open scientific management system [3]. To do this, they identified the ethical risks that AI poses to society, analyzed the elements of an AI management system and compared foreign and Chinese approaches to AI regulation.

The 2017 New Generation Artificial Intelligence Development Plan sets, first of all, material goals, but it also highlights China's explicit ambition to become a world leader in defining ethical norms and standards for AI. After the approval of this document, the public and private sectors were relatively slow to develop the foundations of AI ethics. In March 2019, however, the Ministry of Science and Technology of China established the National Expert Committee on the Management of New Generation Artificial Intelligence, and in June of the same year this body published eight principles of AI governance. The principles emphasized that, first of all, the development of AI should proceed from improving the overall well-being of mankind; they also stressed respect for human rights, privacy and justice, and noted the importance of transparency, responsibility, cooperation and flexibility in dealing with new and emerging risks [4]. Continuing this process, the Standardization Administration of the People's Republic of China, responsible for the development of technical standards, issued a White Paper on AI Standardization. The document discusses the security and ethics issues related to the technology and outlines three key principles for establishing ethical requirements for AI technologies. First, the principle of human interest states that the ultimate goal of AI is the well-being of people. Second, the principle of responsibility emphasizes the need to establish accountability as a requirement for the development and deployment of AI systems and solutions; transparency, understood as the ability to comprehend how an AI system works, is included in this principle. Third, the principle of consistency of rights and obligations requires, on the one hand, that data be properly recorded and subject to supervision and, on the other, that commercial organizations be able to protect their intellectual property.

Government agencies and private companies have also developed their own principles of AI ethics. For example, the Beijing Academy of Artificial Intelligence, a research institution established in November 2018 that brings together leading Chinese companies and Beijing universities, published the "Beijing AI Principles", which must be followed in the research, development, use and governance of AI. The document states that AI technologies should be used "properly", for the benefit of humanity, and that future threats should be anticipated and adapted to. The principles have been supported by various elite Chinese universities and companies, including Baidu, Alibaba and Tencent [5]. In our opinion, these principles have some similarities with those supported by Western countries [6]; however, institutional and cultural differences mean that the results are likely to differ significantly: the ethics of AI in China must be understood in terms of the country's culture, ideology and public opinion. The principles adopted by China place greater emphasis on social responsibility and on group and community relations, while paying relatively less attention to individual rights, thus echoing earlier discussions of Confucian ethics in social networks.

China's approaches to AI regulation are part of its broad and consistent strategy of adapting to digital governance. AI is part of a package of digital technologies that China's regime actively uses not only to improve public services, but also to strengthen authoritarian governance. There is a view that China's digital progress benefits from its huge Internet market, strong state power and weak civil consciousness, which makes it more competitive than Western democratic societies, where concern for privacy constrains AI development. However, China's ambitious AI plan carries significant risks; its overall impact depends on how AI affects the main sources of political legitimacy, including economic growth, social stability and ideology. China's approach is to stake its success on (a) ensuring a booming AI economy, (b) ensuring a smooth social transformation in the AI era, and (c) proving the ideological superiority of its authoritarian and communist values [7].

Another group of Chinese researchers notes the need to build consensus on the ethical principles of AI through broad cross-sectoral and interdisciplinary involvement of all representatives of the public and private sectors. These actors should step up joint research and development of AI governance technologies in order to keep pace with the rapid progress of AI. The researchers emphasize the need to actively promote ethics education for all stakeholders in the research and development, application and governance of AI in order to significantly raise their awareness of ethics and to establish common rules of responsible behavior with AI [8].

In September 2021, the National Professional Committee for the Management of New Generation Artificial Intelligence issued the Code of Ethics for New Generation Artificial Intelligence (hereinafter referred to as the Code of Ethics), which aims to integrate ethics into the entire life cycle of AI and to provide ethical guidance for individuals, legal entities and other related institutions engaged in AI-related activities.

The Code of Ethics consists of 6 chapters comprising 25 articles, which establish both basic ethical requirements and special requirements for specific types of activity. The scope of the Code covers the activities of individuals and legal entities (and other related institutions) in the management of AI, scientific research in the field of AI, the distribution of AI products and services, and the use of AI technologies (Article 2). Management activities cover strategic planning, the formulation and implementation of policies, rules and technical standards related to AI, resource allocation, supervision and review. Research activity is understood as scientific research and the development of technologies and products related to artificial intelligence. By the distribution of AI products and services, the Code means the production, operation and sale of AI products and services. Use covers the purchase, consumption and operation of AI products and services.

The basic ethical requirements for AI are set out in Article 3 of the Code and comprise six detailed items. Firstly, the improvement of human well-being: the human being, human rights and interests, and national or regional values must come first. It is necessary to give priority to public interests, to promote harmony between humans and machines, and to promote the economically, socially and environmentally sustainable development of human society.

Secondly, when using AI technologies, it is necessary to adhere to universality and inclusiveness, effectively protect the legitimate rights and interests of all relevant actors, promote the fair distribution of the benefits of AI technologies to the whole society, as well as promote social justice, equity and equal opportunities. Special attention is paid to vulnerable categories of the population (the elderly, people with disabilities, children) who need to be provided with technical support or alternative ways of obtaining services.

The third principle concerns the protection of privacy. The collection, processing and use of personal data must be carried out in accordance with the principles of legality, necessity and integrity. It is necessary to exclude violations of the right to privacy, as well as the illegal collection and use of personal data through theft, falsification, leakage, etc.

Fourth, AI technologies should be controllable and worthy of users' trust. An entity deploying AI technologies is obliged to ensure that people retain full independent decision-making power: they have the right to choose whether to accept services provided by AI, the right to refuse to interact with AI at any time, and the right to suspend the operation of AI systems at any time. A presumption of constant human control over AI technology is thereby established.

Fifth, the use of AI technologies implies strengthening the responsibility of the subject. Human beings are the ultimate subjects of responsibility; the rights and obligations of all actors involved should be defined, and all stages of the AI life cycle should be subject to constant analysis and adjustment. Each organization should develop an effective and regularly reviewed AI accountability mechanism.

And finally, the requirement to raise ethical literacy: it is necessary to popularize knowledge about AI, to participate in discussions of the ethical issues of AI, to promote in depth the practice of ethical governance of AI technologies, and to improve response capabilities.

Special requirements for specific activities are divided into four groups depending on the type of activity: (1) management, (2) scientific research, (3) distribution and (4) use.

There are five ethical requirements in the field of AI management. The first is the promotion of flexible governance (Article 5): taking into account the laws of AI development, understanding its potential and limitations, constantly optimizing governance mechanisms and methods, and promoting the sustainable development of AI in strategic decision-making, system building and resource allocation. The second is compliance with the relevant regulations, policies and standards in the field of AI, the integration of ethical standards into the management process, and the timely generalization and dissemination of AI governance experience (Article 6).

The requirement of the "correct" use of authority (Article 7) deserves attention. It includes (a) specification of the powers and limits of management activities related to AI and (b) standardization of working conditions and procedures. When exercising authority, the subject is obliged to respect and protect the rights of the persons concerned (privacy, freedom, dignity, security, etc.) and to respond to violations of the rights and freedoms of persons by other subjects of managerial activity. Risk management occupies a special place in management activities: within the framework of AI ethics, management personnel should engage in risk prevention by conducting timely and systematic monitoring and risk assessment, creating effective early-warning mechanisms and improving risk control and management capabilities (Article 8). Finally, the organizational requirement addressed to managers is attention to the rights, interests and demands of the various stakeholders in the field of AI, encouragement of the use of diverse AI technologies to solve practical problems of economic and social development, encouragement of interdisciplinary, interregional and cross-border exchange and cooperation, and promotion of an AI governance structure and standards based on broad consensus (Article 9).

In the field of scientific research, the ethical requirements include (1) self-discipline and adherence to ethical requirements (Article 10), (2) improving data quality (Article 11), (3) improving security and transparency (Article 12) and (4) non-discrimination (Article 13). When conducting research, it is important to be aware of one's responsibility and to refrain from actions that may violate ethical rules and harm protected interests. It is necessary to integrate scientific ethics and AI ethics into all aspects of technological research and development, to consciously engage in self-examination and not to engage in AI research and development that violates ethics (Article 10).

When working with data (collection, storage, processing, transmission, etc.), it is important to observe the principles of completeness, timeliness, consistency and accuracy. Strict compliance with laws, standards and regulations in the field of data security is an integral requirement for conducting research and development in the field of AI (Article 11). The development, implementation and application of AI algorithms should be transparent, interpretable, understandable, reliable and manageable. Increasing the stability and adaptability of AI systems is one of the security factors, achieved through verification, audit, supervision, tracking and forecasting of processes (Article 12). The problem of potential discrimination in the use of AI in decision-making is addressed by the requirement to identify and prevent possible distortions in data and algorithms (Article 13).

There are four basic requirements for the distribution of AI products and services: fair competition, quality control, user protection and emergency preparedness. The Code requires compliance with the laws of the market and with competition (antimonopoly) legislation, as well as respect for the intellectual property rights of other entities (Article 14), strengthened quality monitoring and the introduction of a system for evaluating the use of AI products and services. The assessment should take into account the personal and property security of users and their privacy; it is unacceptable to operate, sell or provide products and services that do not meet quality standards (Article 15). The logical continuation of this rule is the requirement to create an effective and accessible mechanism for protecting the rights and interests of users. It should provide users with clear and unambiguous information about the set (catalogue) of functions and the limitations of AI products and services, and an important element of such a mechanism is obtaining the user's informed consent to interact with AI (Article 16). For emergency situations, emergency mechanisms, plans and measures for eliminating harm and compensating losses should be developed and regularly reviewed. Responsible entities must monitor AI systems, maintain feedback channels with users and respond to their requests in a timely manner. In addition, they are obliged to prevent system failures and to remain ready to assist the relevant organizations in intervening in AI systems, in accordance with laws and regulations, in order to reduce losses and avoid risks (Article 17).
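To illustrate one element of the user-protection requirement, the informed-consent mechanism of Article 16, the following is a minimal hypothetical sketch in Python. The type and function names (AIDisclosure, ConsentRecord, require_consent) are the author's own assumptions for illustration and do not come from the Code or from any regulatory specification.

```python
# Hypothetical sketch of the informed-consent gate described in Article 16.
# All names are illustrative, not taken from the Code or any PBoC document.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AIDisclosure:
    """Plain-language catalogue of what an AI feature does and cannot do."""
    functions: list[str]        # declared capabilities of the product or service
    limitations: list[str]      # known limits, per the "clear and unambiguous" rule
    opt_out_available: bool     # users may refuse to interact with AI at any time


@dataclass
class ConsentRecord:
    user_id: str
    accepted: bool
    disclosed: AIDisclosure
    timestamp: str


def require_consent(user_id: str, disclosure: AIDisclosure, accepted: bool) -> ConsentRecord:
    """Record the user's decision before any AI-driven interaction is enabled."""
    if not disclosure.opt_out_available:
        # A non-AI alternative must exist before the consent question is even asked.
        raise ValueError("An alternative, non-AI service path must be offered")
    return ConsentRecord(
        user_id=user_id,
        accepted=accepted,
        disclosed=disclosure,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )


# Example: a banking app recording that a user accepted an AI-assisted feature.
record = require_consent(
    user_id="u-001",
    disclosure=AIDisclosure(
        functions=["automated credit pre-assessment"],
        limitations=["final decisions are reviewed by a human officer"],
        opt_out_available=True,
    ),
    accepted=True,
)
print(record.accepted)  # True
```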

When using AI products and services, the following requirements apply. First, it is necessary to promote the good-faith use of AI technologies by demonstrating and evaluating products and services before using them and by familiarizing oneself with the technical characteristics and advantages of the technologies (Article 18). The use of products and services with AI technologies should correspond to their purpose: the subject must understand in which area a product or service is used and what negative consequences its use may entail. The subject is therefore obliged to respect the right of users not to use AI products or services, to avoid both improper use and abuse of them, and to avoid unintentional damage to the legitimate rights and interests of others (Article 19). The ethical rules prohibit the use of AI products and services that do not comply with laws, regulations, ethical norms and standards, as well as the use of these technologies for illegal activities affecting national, public and industrial safety; harming social and public interests is strictly prohibited (Article 20). When using AI products and services, it is necessary to provide feedback to the relevant actors and to assist in eliminating the technical security vulnerabilities and the policy and regulatory gaps discovered in the course of using artificial intelligence products and services (Article 21). The final ethical requirement is the continuous improvement of skills in the operation, maintenance, emergency response and other aspects of the use of AI products and services (Article 22).
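Taken together, Articles 5 to 22 attach special requirements to each of the four activity types defined in Article 2. The sketch below is a purely illustrative Python summary of that structure, a reading aid assembled from the paraphrases given above rather than an official rendering of the Code.

```python
# Illustrative summary of the Code's special requirements (Articles 5-22),
# grouped by the four activity types defined in Article 2. The wording is a
# paraphrase of the discussion above, not an official translation.
SPECIAL_REQUIREMENTS: dict[str, dict[int, str]] = {
    "management": {
        5: "flexible governance adapted to the laws of AI development",
        6: "compliance with regulations, policies and standards",
        7: "correct use of authority within specified limits",
        8: "risk prevention, monitoring and early warning",
        9: "attention to stakeholders; interdisciplinary cooperation",
    },
    "research": {
        10: "self-discipline and adherence to research ethics",
        11: "data quality and data-security compliance",
        12: "security, transparency and auditability of algorithms",
        13: "prevention of bias and discrimination",
    },
    "distribution": {
        14: "fair competition and respect for intellectual property",
        15: "quality monitoring and evaluation of products and services",
        16: "user protection, disclosure and informed consent",
        17: "emergency mechanisms, feedback and loss mitigation",
    },
    "use": {
        18: "good-faith use: evaluate products before deployment",
        19: "use according to purpose; respect the right to refuse AI",
        20: "no illegal or harmful applications",
        21: "feedback on vulnerabilities and regulatory gaps",
        22: "continuous improvement of operational skills",
    },
}

if __name__ == "__main__":
    for activity, articles in SPECIAL_REQUIREMENTS.items():
        print(activity, "->", ", ".join(f"Art. {n}" for n in articles))
```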

Based on this Code of Ethics, the financial regulator, the People's Bank of China, outlined the basic principles of its ethics management system for the financial and banking sector in the Financial Technology Development Plan for 2022-2025. The financial regulator has taken the initiative to create an ethics management system that will contain not only rules but also a self-management structure that companies must follow for both existing and new financial products and services, such as online payments, insurance and banking. It will cover issues such as the recognition, assessment and elimination of risks associated with unethical actions.

Setting up this system was one of the first tasks set in the plan under consideration. Its importance was highlighted at the March meeting of the Financial Technology Committee of the People's Bank of China, where it was listed as one of the main policy priorities for 2022: the system "will improve early warning, monitoring, analysis and flexible management regarding risks related to technology ethics, directing relevant institutions to fulfill their responsibilities for ethics management ... for the effective protection of the legitimate rights and interests of consumers and the service of the real economy" (PBC Fintech Committee Convenes Meeting to Set Tasks for 2022, URL: http://www.pbc.gov.cn/en/3688110/3688172/4437084/4516922/index.html). Although a number of laws already regulate relations in the field of technology use (the Law on the Protection of Personal Information, the Law on Cybersecurity, the Law on Data Security and the Antitrust Law), they do not specifically address questions of ethics in relation to the use of new technologies.

Ethics is not the only issue requiring regulation in the financial and banking sector. In March 2021, the financial regulator published the "Evaluation Specification of the Artificial Intelligence Algorithm in Financial Applications", which aims to provide financial institutions with a systematic framework for evaluating the use of intelligent algorithms in their products based on security, accuracy and performance, so that they can better manage the new technology and prevent risks.
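The text of the Evaluation Specification is not reproduced here, so the following Python sketch only illustrates the general idea of assessing an algorithm along the three dimensions the document names (security, accuracy and performance). The field names, the per-dimension gating rule and the 0.8 threshold are assumptions made for the example.

```python
# Hypothetical illustration of an evaluation record along the three dimensions
# named by the PBoC specification: security, accuracy and performance.
# Thresholds and field names are assumptions for the sketch.
from dataclasses import dataclass


@dataclass
class AlgorithmEvaluation:
    model_name: str
    security: float      # e.g. robustness and data protection, scored 0..1
    accuracy: float      # e.g. validated predictive quality on held-out data, 0..1
    performance: float   # e.g. latency/throughput fitness for the product, 0..1

    def passes(self, threshold: float = 0.8) -> bool:
        """Every dimension must clear the threshold on its own, so a fast but
        insecure model cannot compensate with speed."""
        return min(self.security, self.accuracy, self.performance) >= threshold


evaluation = AlgorithmEvaluation("credit-scoring-v2", security=0.92, accuracy=0.88, performance=0.75)
print(evaluation.passes())  # False: performance is below the assumed 0.8 threshold
```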

The basic principles of the ethics management system in the financial and banking sector should be based on the promotion of innovation and risk prevention, and industry participants should adhere to both institutional rules and their own management policies. The plan provides for assigning joint responsibility for the application of ethical standards to all entities of the financial sector. Thus, financial institutions should take responsibility for setting up internal ethics committees, as well as for implementing work processes, such as ethics checks and disclosure of information, to prevent misuse of technology. Industry associations should develop and implement self-regulation agreements, and financial institutions should undertake self-regulation obligations in order to control their activities. As for employees of the fintech sector, they are encouraged to take the initiative in studying ethical digital practices and follow them in their work.

Given the size of China's financial and banking sector, the need to address ethical issues is becoming increasingly urgent. The volume of financial transactions carried out by financial institutions in China reached 7.5 trillion yuan ($1.1 trillion) in 2020, and the number of fintech unicorns (financial companies with a capitalization of at least $1 billion) reached 23, accounting for 41% of the world total (Bartels J. China Fintech Regulations: Fintech Ethics Governance Regulations Loom, URL: https://www.biia.com/china-fintech-regulations-fintech-ethics-governance-regulations-loom/).

The ethics management system for the financial and banking sector proposed by the People's Bank of China is centered on consumer protection. The plan under consideration is therefore aimed at eradicating the unchecked practices that some companies use to make a profit: 1) data leakage, including the practice of obtaining users' personal data without consent, selling users' personal data to third parties, and using such data for targeted advertising; 2) abuse of algorithms and algorithmic bias (discrimination); 3) abuse of technologies that fosters disorderly and unfair competition; 4) practices that deepen the country's digital divide, that is, unbalanced financial development among demographic groups and regions, which has led to segmentation and an excessive focus on products for digitally literate groups while ignoring others (the elderly, people with disabilities and ethnic minorities) (Bartels J. China Fintech Regulations: Fintech Ethics Governance Regulations Loom, URL: https://www.biia.com/china-fintech-regulations-fintech-ethics-governance-regulations-loom/).
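A minimal sketch, assuming a fintech product team wanted to self-check a product against the four categories of practices listed above; the field names and labels below are the author's paraphrase, not regulator-defined terms.

```python
# Illustrative self-check against the four practice categories targeted by the
# PBoC plan, as summarized above. Field names are a paraphrase, not official terms.
from dataclasses import dataclass


@dataclass
class FintechProductAudit:
    collects_data_without_consent: bool   # 1) data leakage, non-consensual collection or resale
    uses_opaque_biased_algorithms: bool   # 2) algorithm abuse and discriminatory outcomes
    enables_unfair_competition: bool      # 3) technology used for disorderly competition
    excludes_vulnerable_groups: bool      # 4) design that deepens the digital divide

    def violations(self) -> list[str]:
        labels = {
            "collects_data_without_consent": "personal data misuse",
            "uses_opaque_biased_algorithms": "algorithmic bias",
            "enables_unfair_competition": "unfair competition",
            "excludes_vulnerable_groups": "digital-divide exclusion",
        }
        return [label for field, label in labels.items() if getattr(self, field)]


audit = FintechProductAudit(False, True, False, True)
print(audit.violations())  # ['algorithmic bias', 'digital-divide exclusion']
```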

Conclusions. As a result of the conducted research, we came to the following conclusions. China systematically implements its state policy on the legal regulation of the digital economy by incorporating ethical requirements for artificial intelligence technologies into legal mechanisms. The ethical regulations cover all aspects of these relations and focus on the security of personal data, freedom of competition and the responsibility of the actors involved. We especially note the emphasis of the Chinese authorities on the principle of non-discrimination against particular social groups and administrative-territorial entities, which should narrow the gap in the use of digital technologies on a national scale and contribute to the unity of the nation. The financial regulator keeps the ethics of artificial intelligence under its control, developing the corresponding regulatory acts for financial institutions. The People's Bank of China pays special attention to the legally "weak" aspects of the financial and banking sector: the protection of personal data and unfair competition.

The research was carried out with the financial support of the RFBR as part of the scientific project 20-011-00454  "Ensuring the rights of investors in the banking and financial sectors in the context of the digitalization of the economy in the Russian Federation and the leading financial centers of East Asia: a comparative legal aspect".

References
1. Smuha, N.A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57-84.
2. Wu, F., Lu, C., & Zhu, M. (2020). Towards a new generation of artificial intelligence in China. Nature Machine Intelligence, 2, 312–316.
3. Zhang, H., & Gao, L. (2019). Shaping the Governance Framework towards the Artificial Intelligence from the Responsible Research and Innovation. IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), 213-218.
4. Roberts, H., Cowls, J., & Morley, J. (2021). The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI & Society, 36, 59–77.
5. Daly, A., Hagendorff, T., Li, H., Mann, M. et al. (2019). Artificial Intelligence, Governance and Ethics: Global Perspectives. The Chinese University of Hong Kong Faculty of Law Research Paper, 15.
6. Gorian, E.V. (2020). Artificial Intelligence in the financial and banking sector: experience of Singapore. The Territory of New Opportunities. The Herald of Vladivostok State University of Economics and Service, 3(12), 86–99.
7. Zeng, J. (2020). Artificial intelligence and China's authoritarian governance. International Affairs, 6(96), 1441–1459.
8. Wu, W., Huang, T., & Gong, K. (2020). Ethical Principles and Governance Technology Development of AI in China. Engineering, 3(6), 302-309.

Peer Review


The subject of the study. The global digitalization of all spheres of public life, including all sectors of the economy, poses real problems for legislators in finding legal mechanisms to ensure the security of public relations. The subject of the research of the reviewed article is the peculiarities of the legal regulation of public relations involving artificial intelligence in the financial and banking sector on the example of China. It is the search for legislative solutions for the participation of artificial intelligence (the consolidation of ethical norms) in the financial and banking sector on the example of China that is the subject of this article.

Research methodology. The methodological basis of the article is a system-structural approach to the subject of the study. The main research method is analysis, including a comparative analysis of legislation and law enforcement using the example of China. Without studying foreign experience and using comparative jurisprudence, which makes it possible to avoid repeating mistakes in domestic legislation and judicial practice, it is difficult to assess the effectiveness of solving problems of one's own law and the practice of its application, which was the purpose of the article. The methodological apparatus consists of the following dialectical methods of scientific cognition: abstraction, induction, deduction, hypothesis, analogy, synthesis, historical, theoretical-prognostic, formal-legal and system-structural methods, legal modeling, as well as typology, classification, systematization and generalization.

The relevance of the research. The relevance of the study is beyond doubt, given the growing trend of the participation (or, more precisely, use) of artificial intelligence in public relations, including financial and banking relations. The author points out that China's approaches to regulating public relations regarding artificial intelligence are part of its broad and consistent strategy of adapting to governance through digital means. Artificial intelligence is part of a digital technology package that China is actively using to improve public services. The Chinese experience can be used to legislatively define artificial intelligence as an object rather than a subject of legal relations. For this reason, it is difficult to overestimate the relevance of this article.

Scientific novelty. The article "Ethical regulation of artificial intelligence as a factor in the security of the financial and banking sector: the experience of China" is a new scientific study. The scientific novelty of the article lies in the particular aspect chosen for studying the Chinese experience of legal regulation of public relations involving artificial intelligence in a special area, the financial and banking sector. In addition, the ethical norms governing these special relations are within the author's field of view.

The style, structure and content of the article. The article is written in a scientific style and is well structured; its content corresponds to the stated topic. The material is presented consistently, competently and clearly.

Bibliography. We believe that the author has sufficiently studied the bibliographic sources on the topic of the article. However, it would be worthwhile to refer to individual works of Russian scientists dealing with the legal regime of artificial intelligence (A.V. Minbaleev, T.A. Polyakova, T.Ya. Khabrieva and some others) to assess the correctness of the position of the Chinese legislator. At the same time, this remark does not detract from the work done by the author on the research topic.

Appeal to opponents. The author, in order to confirm his own position, quite correctly appeals to the opinions of others. The scientific discussion is conducted at a high level.

Conclusions, the interest of the readership. Undoubtedly, the article will be of interest to specialists in the field of information law, financial and banking law, as well as to a wide range of readers. The article "Ethical regulation of artificial intelligence as a factor in the security of the financial and banking sector: the Chinese experience" meets the requirements for scientific articles and is recommended for publication.