Theoretical and Applied Economics
Publications of Kopyrin Andrey Sergeevich
Theoretical and Applied Economics, 2022-2
Kopyrin A.S., Vidishcheva E.V. - Development of a model of the influence of tourist flows on the stability of the tourist territory (region) pp. 46-56

DOI:
10.25136/2409-8647.2022.2.36623

Abstract: The development of the tourism sector of the economy is one of the priorities set by the leadership of the Krasnodar Territory and of the Russian Federation, so building a model of the influence of tourist flows on the stability of a territory is highly relevant. The object of the study is the interconnected economic, social, and ecological system of a resort destination. The subject of the study is the interaction between tourist flows and the key indicators affecting the sustainability of a tourist destination. The work aims to build an integrated computer model that can be used for studying the interaction of key variables and for conducting scenario analysis and modeling to determine the possible consequences of management decisions. The authors develop a single synthetic model that combines the social, economic, and environmental aspects of the subject of research. The model is evaluated using the adjusted net savings indicator and makes it possible to study trends in the development of the tourism sector, as well as to conduct scenario analysis of the consequences of various management decisions. Using this tool for medium- and long-term planning gives the decision-maker more information under uncertainty, helping to avoid many managerial mistakes. In the future, it is planned to refine and adjust the model with new statistical data and to conduct computational experiments to identify economic trends in the impact of integrated programs and scenarios for the development of the tourism sector.
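The kind of scenario loop the abstract describes can be illustrated with a toy simulation. This is a minimal sketch, not the authors' model: the adjusted-net-savings (ANS) formula follows the standard definition (gross savings minus capital depreciation plus education spending minus environmental damage), while all coefficients, growth rates, and per-tourist figures are illustrative assumptions, not calibrated values.

```python
def adjusted_net_savings(gross_savings, fixed_capital_dep,
                         education_exp, env_damage):
    """ANS = gross savings - capital depreciation + education spending - environmental damage."""
    return gross_savings - fixed_capital_dep + education_exp - env_damage

def simulate(years, tourists0, growth, revenue_per_tourist, damage_per_tourist):
    """Run a toy yearly scenario: the tourist flow grows, tourism revenue feeds
    savings, and the load on the territory raises environmental damage."""
    tourists = tourists0
    path = []
    for _ in range(years):
        revenue = tourists * revenue_per_tourist
        ans = adjusted_net_savings(
            gross_savings=0.25 * revenue,      # assumed savings share
            fixed_capital_dep=0.05 * revenue,  # assumed depreciation share
            education_exp=0.03 * revenue,      # assumed education spending
            env_damage=tourists * damage_per_tourist,
        )
        path.append(ans)
        tourists *= (1 + growth)
    return path

# Two scenarios that differ only in the growth of the tourist flow.
baseline = simulate(10, 1_000_000, 0.03, 500.0, 60.0)
stressed = simulate(10, 1_000_000, 0.10, 500.0, 60.0)
```

Comparing the two trajectories is the "scenario analysis" step: a planner would vary the management-controlled parameters and inspect where the ANS path turns negative.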
Software systems and computational methods, 2021-3
Makarova I.L., Ignatenko A.M., Kopyrin A.S. - Detection and interpretation of erroneous data in statistical analysis of consumption of energy resources pp. 40-51

DOI:
10.7256/2454-0714.2021.3.36564

Abstract: Monitoring and analysis of energy resource consumption in various contexts, and measurement of its parameters (indicators) over time, are of great importance for the modern economy. This work examines and interprets anomalies in data on energy resource consumption (using gas consumption as an example) in a municipality. Gas consumption matters for the socioeconomic sphere of cities, and unauthorized connections are the key cause of non-technological losses of the resource. Traditional methods of detecting gas theft are ineffective and time-consuming, whereas modern data analysis technologies make it possible to detect and interpret consumption anomalies and to form lists of objects to be checked for unauthorized connections. The authors' particular contribution lies in applying a set of statistical methods to the processing and identification of anomalies in the energy consumption of a municipality. It is worth noting that the use of such technologies requires effective algorithms and the implementation of automation and machine learning. This perspective on time-series data facilitates the identification of anomalies, the optimization of decision-making, and related tasks, and these processes can be automated. The presented methodology, tested on time-series data describing gas consumption, can be applied to a broader range of tasks, and the research can be combined with knowledge discovery methods and deep learning algorithms.
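One common statistical step of the kind the abstract mentions is flagging readings that deviate sharply from recent history. The sketch below uses a rolling z-score; the window size, threshold, and sample readings are illustrative assumptions, not the authors' actual method or data.

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=7, threshold=3.0):
    """Return indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Stable daily gas readings with one injected spike; an unauthorized
# connection would typically show up as a sustained shift instead.
readings = [100, 101, 99, 100, 102, 98, 100, 101, 99, 500, 100, 101]
flagged = rolling_zscore_anomalies(readings)
```

In practice the flagged indices would be mapped back to metering points to build the inspection lists the abstract refers to.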
Software systems and computational methods, 2021-1
Kopyrin A.S., Kopyrina A.O. - Building a standard system of inference rules based on the knowledge base pp. 1-9

DOI:
10.7256/2454-0714.2021.1.34798

Abstract: The authors propose to combine logical inference with the apparatus of fuzzy sets. When each decision is associated with a set of possible outcomes with known conditional probabilities, the decision is chosen on the basis of digital information under conditions of uncertainty. The main purpose of using fuzzy logic in expert systems is therefore to create computing devices (or software complexes) capable of simulating human reasoning and explaining how decisions are made. The purpose of the work is to describe in detail a reproducible standard method for constructing the inference rules of an expert system for various economic subject areas, using a universal knowledge base scheme. For decision-making in a fuzzy system, the authors propose identifying the structure of a rule, i.e. determining the structural characteristics of the fuzzy system, such as the number of fuzzy rules and the number of linguistic terms into which the input variables are divided. This identification is carried out by fuzzy cluster analysis implemented with fuzzy decision trees. The authors present a block diagram of the fuzzy-logic inference methodology. The rule construction method and the fuzzy inference algorithm presented in the article can be used in various spheres of the economy. The novelty of the work lies in automating the determination of fuzzy inference rules and integrating it with the stage of collecting input data in the subject area.
Software systems and computational methods, 2020-3
Kopyrin A.S., Makarova I.L. - Algorithm for preprocessing and unification of time series based on machine learning for data structuring pp. 40-50

DOI:
10.7256/2454-0714.2020.3.33958

Abstract: The subject of the research is the process of collecting and preliminarily preparing data from heterogeneous sources. Economic information is heterogeneous and semi-structured or unstructured in nature. Owing to the heterogeneity of the primary documents, as well as the human factor, the initial statistical data may contain a large amount of noise and records whose automatic processing is very difficult. This makes preprocessing of dynamic input data an important precondition for discovering meaningful patterns and domain knowledge, and makes the research topic relevant. Data preprocessing comprises a series of distinct tasks, which has led to the emergence of various algorithms and heuristic methods for solving them, such as merging, cleanup, and the identification of variables. In this work, a preprocessing algorithm is formulated that brings information on time series from different sources together into a single structured database. The key modification of the preprocessing method proposed by the authors is a technology of automated data integration. The proposed technology combines methods for constructing fuzzy time series with machine lexical comparison on a thesaurus network, and uses a universal database built on the MIVAR concept. The preprocessing algorithm forms a single data model that can transform the periodicity and semantics of a data set and integrate data arriving from various sources into a single information bank.
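The unification step described above — aligning series that arrive with different periodicity and differently named variables — can be sketched as follows. The alias table is a stand-in for the thesaurus-based lexical matching, and MIVAR-style storage is not modelled; all names and records are hypothetical.

```python
# Hypothetical alias table standing in for thesaurus-network lexical matching.
ALIASES = {"gas_cons": "gas", "Gas consumption": "gas", "ELEC": "electricity"}

def canonical(name):
    """Map a source-specific variable name to its canonical form."""
    return ALIASES.get(name, name.lower())

def to_monthly(records):
    """Average daily records ((year, month, day), value) into monthly points."""
    buckets = {}
    for (y, m, _d), v in records:
        buckets.setdefault((y, m), []).append(v)
    return {k: sum(vs) / len(vs) for k, vs in buckets.items()}

def unify(sources):
    """sources: {raw_name: [((y, m, d), value), ...]} -> one monthly data bank."""
    bank = {}
    for raw_name, records in sources.items():
        bank[canonical(raw_name)] = to_monthly(records)
    return bank

bank = unify({
    "gas_cons": [((2020, 1, 1), 10.0), ((2020, 1, 2), 14.0), ((2020, 2, 1), 8.0)],
    "ELEC":     [((2020, 1, 15), 5.0)],
})
```

The result is a single model keyed by canonical variable names on a common monthly grid, which is the precondition for the downstream analysis the abstract mentions.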
Software systems and computational methods, 2019-4
Ignatenko A.M., Makarova I.L., Kopyrin A.S. - Methods for preparing data for the analysis of poorly structured time series pp. 87-94

DOI:
10.7256/2454-0714.2019.4.31797

Abstract: The aim of the study is to prepare poorly structured source data for analysis, to analyze them, and to study the influence of data "pollution" on the results of regression analysis. Structuring data and preparing them for qualitative analysis is a distinct task for each specific set of source data; it cannot be solved by a general algorithm and always has its own characteristics. The paper considers the problems that may cause difficulties when working with poorly structured data (analysis, processing, search), gives examples of poorly structured and structured data used in preparing data for analysis, and describes algorithms for preparing weakly structured data for analysis. Cleaning and analysis procedures were carried out on the data set, and four regression models were constructed and compared. As a result, the following conclusions were formulated: excluding various kinds of suspicious observations from the analysis can drastically reduce the size of the population and lead to an unjustified decrease in variation; at the same time, such an approach is completely unacceptable if, as a result, important objects of observation are excluded and the integrity of the population is violated. The quality of the constructed model may deteriorate in the presence of abnormal values, but may also improve because of them.
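The effect the conclusions describe can be demonstrated with a miniature version of the experiment: fit the same simple regression with and without one contaminated observation and compare the slopes. The data here are synthetic illustrations, not the study's series.

```python
def ols(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 5.9, 8.2, 10.0]              # roughly y = 2x
slope_clean, _ = ols(xs, ys)
slope_dirty, _ = ols(xs + [6], ys + [40.0])  # one "polluted" observation
```

A single outlier triples the estimated slope here, while silently dropping every suspicious point would instead shrink the variation of the sample — the two failure modes the abstract weighs against each other.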