Cybernetics and programming - rubric Mathematical models and computer simulation experiment
Mathematical models and computer simulation experiment
Karimov T.I., Karimov A.I., Butusov D.N., Sol'nitsev R.I., Bulakhov A.V., Sigaeva M.S. - Delta-operator in the design of digital systems on low-bit computers pp. 1-14


Abstract: The use of low-bit computers is a well-known way to reduce the cost and energy consumption of embedded control and signal-processing systems. At the same time, the construction of adequate digital models of dynamic systems on low-bit computers is hindered by a number of problems associated with the rounding error of the coefficients. To increase the accuracy of the representation of discrete systems, alternative discrete operators can be used instead of the traditional z-operator, in particular the δ-operator. In addition, this mathematical apparatus makes it possible to synthesize discrete systems that cannot be realized with a short word length by standard methods. This article discusses a methodology for applying the δ-operator in the construction of digital systems on low-bit hardware platforms. The theoretical results obtained are proved analytically and supported by experimental results. The criteria and the region of preference for the δ-operator in comparison with the z-operator are shown. The paper considers issues related to the implementation of discrete systems using the δ-operator on low-bit floating-point computers. In the course of the work, it was found that the precision of the machine representation of numbers (the bit width of the computer) does not affect the choice of a method for implementing a discrete system; the key parameter when choosing a discrete operator is the sampling rate. A generalized criterion is formulated that allows one to choose the preferred discrete operator from the type of the continuous system and the required sampling frequency.
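As a hedged illustration of the effect the abstract describes (not the authors' code, and with hypothetical parameter values), the sketch below discretizes a first-order continuous system at a high sampling rate and quantizes the coefficients of the shift-operator (z) and delta-operator forms to a coarse fixed-point grid. Near the fast-sampling limit the shift-form pole crowds toward 1, so quantizing it loses most of the information, while the delta-form coefficient stays well scaled:

```python
# Hypothetical illustration: why the delta operator can beat the shift (z)
# operator on low-bit hardware at high sampling rates. All parameter values
# are assumptions chosen for the demonstration.
import math

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

# Continuous first-order system dy/dt = -p*y, discretized with period T.
p, T, bits = 2.0, 1e-3, 8          # fast sampling, coarse 8-bit coefficients
a = math.exp(-p * T)               # exact shift-form pole, very close to 1
alpha = (a - 1.0) / T              # equivalent delta-form coefficient

aq = quantize(a, bits)             # quantized shift-form coefficient
alphaq = quantize(alpha, bits)     # quantized delta-form coefficient

y_exact = y_z = y_d = 1.0
for _ in range(1000):
    y_exact = a * y_exact            # full-precision reference
    y_z = aq * y_z                   # shift form:  y[k+1] = a*y[k]
    y_d = y_d + T * (alphaq * y_d)   # delta form:  y[k+1] = y[k] + T*alpha*y[k]

err_z = abs(y_z - y_exact)
err_d = abs(y_d - y_exact)
print(err_z, err_d)                # delta-form error is far smaller here
```

With these numbers the quantized shift-form pole decays almost twice too fast, while the delta form tracks the reference closely, which matches the abstract's claim that the sampling rate is the key parameter in the choice of operator.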
Alpatov A.N. - Assessment of the impact of distributed computing complex parameters on the performance of load balancing algorithms. pp. 1-10


Abstract: The purpose of this article is to consider the relationship between the performance of computational load balancing algorithms for globally distributed computing systems implementing the principle of volunteer computing and the basic attributes of a distributed system. As the main parameters, the author examines the file system structure and the type of network protocol. The object of the study is a globally distributed computing system with scheduling of node loading. The subject of the study is the methods of balancing the load of the system's nodes that implement a dynamic computational load balancing strategy. The methodological basis of the article is formed by methods of fundamental and applied sciences: methods of analysis, methods of mathematical statistics, and simulation modeling. The author suggests a model of node computational load in the form of a nonlinear piecewise-stationary model. The paper presents a method of computational experiment for determining the effectiveness of the balancing algorithms. The author develops a simulation model of a distributed complex that makes it possible to set the basic parameters of the computer system and assess their impact on system response time. It is shown that such parameters as the file system structure and the type of network protocol have a particular impact on the efficiency of load balancing algorithms. Thus, the necessity of taking these parameters into account to ensure the adequacy of the developed model of a distributed computing system implemented on the basis of volunteer computing is demonstrated.
Litvinov V.A. - On the sensitivity of particle flow to diffusion coefficient variations pp. 7-14


Abstract: The subject of the study is the sensitivity of the experimentally observed characteristics of the particle and radiation flow to variations in the parameters of the model of interaction of particles with the medium. The object of study is the diffusion flow of particles propagating in an unbounded medium. The author considers the variations in particle flux due to changes in the diffusion coefficient, both over time and at different points in space. It is emphasized that the expressions obtained in this work can be considered as a statement of the inverse problem in the form of integral equations of the first kind. The research method is variational analysis and numerical experiment. The justification of the method is based on a comparison of the results of calculations by the proposed method with those obtained by solving the diffusion equation numerically by the difference method. The main results of the study are the expressions for the sensitivity function of the diffusion flux with respect to functional variations of the diffusion coefficient. The novelty of the study lies in the formulation of the inverse problem of determining the functional form of the coefficients of the diffusion (heat conduction) equation through the sensitivity function of the particle flux to variations of the desired parameters.
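A minimal numerical experiment in the spirit of the abstract (not the author's code; grid, time step and observation point are all assumed values) is to solve the 1D diffusion equation with an explicit difference scheme for two nearby values of the diffusion coefficient and observe the induced variation of the flux at a fixed point:

```python
# Hypothetical sketch: explicit difference scheme for u_t = D*u_xx with a
# spike initial condition, then the flux J = -D*du/dx at a fixed node for
# the nominal and a perturbed diffusion coefficient.
def diffuse(D, n=101, steps=400, dx=0.1, dt=0.004):
    """March the explicit scheme; stability requires r = D*dt/dx**2 <= 0.5."""
    u = [0.0] * n
    u[n // 2] = 1.0 / dx                       # approximate unit delta source
    r = D * dt / dx ** 2
    for _ in range(steps):
        u = [u[i] if i in (0, n - 1)           # keep boundary values fixed
             else u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
             for i in range(n)]
    return u

def flux(u, D, i, dx=0.1):
    """Fick's law J = -D*du/dx, central difference at node i."""
    return -D * (u[i + 1] - u[i - 1]) / (2 * dx)

u1, u2 = diffuse(1.0), diffuse(1.05)           # a 5% variation of D
j1, j2 = flux(u1, 1.0, 60), flux(u2, 1.05, 60)
print(j1, j2)   # the flux at the observation point responds to the variation
```

Differencing such flux responses over many perturbations is the brute-force counterpart of the sensitivity functions the paper derives analytically.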
Toropov B.A. - Flajolet-Martin algorithm as an effective social network analysis tool. pp. 8-16


Abstract: The study is devoted to the influence (centrality) model of a social network participant. The object of study involves calculations of centrality metrics based upon the shortest path lengths between graph vertices, performed for a social graph via iterative application of the Flajolet-Martin algorithm. The author evaluates the possibility of approximate evaluation of closeness centrality on a simple example. Then, having obtained the calculation results, the author compares the computed values with the real closeness values as computed by the breadth-first search (BFS) algorithm. The methodology of the study involves elements of graph theory, as well as the apparatus of social network analysis, which allows one to compute various centrality metrics of a graph vertex. The key conclusion is that the Flajolet-Martin algorithm is an easily adaptable tool for the approximate evaluation of those social graph vertex centralities related to shortest paths, such as closeness or decay centrality as introduced in the work of M. Jackson. In turn, this provides new possibilities for modeling the spread of information in social networks.
Gorbaneva O.I., Murzin A.D., Lazareva E.I. - Parameters of sustainable ecological and economic development in the dynamic model of the combination of general and private interests of the development of the system of subjects pp. 9-17


Abstract: The article is devoted to the development of a dynamic model for the coordination of general and private interests in the complex socio-ecological-economic synergetic development of a system of economic entities, the target control parameter of which is the maximization of specific consumption. The parameters of the model under consideration are supplemented with criteria of the ecological and economic sustainability of a group of subjects localized on a separate territory: within the boundaries of a separate industrial cluster, municipal formation, region or macroregion. The conditions for sustainable development (homeostasis) of the socio-ecological-economic system in the model are set by the requirements for the economic development of the agent and the maximum permissible emissions and discharges of pollutants into the environment in the process of economic activity. The resulting formalized dependencies make it possible to substantiate the environmental and social significance of the management decisions taken, as well as the effects associated with the dynamics and uncertainty of the external environment. The paper presents an algorithm for finding the Nash equilibrium in the previously presented dynamic model for combining general and private interests of territorial development. Some components of the vector of control actions are found analytically; for the rest, an economical numerical procedure is described. It is shown that in the absence of established threshold values for GRP and for the concentration of pollutants in water and air, it is unprofitable for territorial entities to spend resources either on investments in production and fixed production assets or on cleaning up pollution. Consequently, the expediency of introducing sustainable development conditions that are fulfilled by the subject in the form of equalities is shown.
Borodin A.V. - Reconstruction and study of the pseudo-random number generator in the VBA subsystem of Microsoft Office pp. 14-45


Abstract: The article reviews some aspects of the practical use of pseudo-random number generators in computational mathematics and cryptography. In particular, the author studied the inappropriate behavior of the Monte Carlo method in solving the task of risk assessment of a uniform credit portfolio using the regular pseudo-random number generator of the Microsoft Office programming system. The article identifies the limits of the regular pseudo-random number generator. The author reconstructs it in terms of residue arithmetic with a single modulus, on that basis proves the unreasonableness of its use in cryptographic applications, and explains certain aspects of the inadequate behavior of the Monte Carlo method in the given example. The article proposes, as an alternative for solving complex tasks of computational mathematics, a pseudo-random number generator based on the Mersenne twister, and shows the results of the corresponding numerical experiments. The research is based on numerical experiments using the Monte Carlo method. The reconstruction and study of the pseudo-random number generator of the Microsoft Office programming system involved number-theoretic methods. Probability theory formalism is used in formulating the model problem and interpreting the results of its solution. The paper for the first time compares the "Value at Risk" results for a task of risk assessment of a uniform credit portfolio obtained using the Monte Carlo method with exact values calculated using the methods of the algebraic theory of risk. This comparison made it possible to determine the limitations of Monte Carlo based methods and other methods that use the regular pseudo-random number generator of the Microsoft Office programming system. The author proposes an alternative solution to the problem and shows its adequacy in the corresponding numerical experiment.
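As a hedged illustration of the kind of structural weakness such a reconstruction exposes (this is not the paper's code, and it assumes the widely documented 24-bit linear congruential recurrence of VBA's Rnd, state = (state*16598013 + 12820163) mod 2^24), note that both constants are odd, so the lowest bit of the state flips on every step:

```python
# Illustrative sketch: a VBA-style 24-bit LCG versus the Mersenne twister
# behind Python's random module. The LCG constants are the commonly cited
# ones (0xFD43FD and 0xC39EC3); treat them as an assumption here.
import random

def vba_rnd_stream(seed, n):
    """Yield n outputs of the reconstructed 24-bit LCG, scaled to [0, 1)."""
    state = seed & 0xFFFFFF
    for _ in range(n):
        state = (state * 16598013 + 12820163) & 0xFFFFFF
        yield state / 16777216.0

# Odd multiplier and odd increment mean the low state bit strictly
# alternates 0,1,0,1,... -- a regularity fatal for cryptographic use.
draws = list(vba_rnd_stream(1, 32))
low_bits = [int(d * 16777216) & 1 for d in draws]
print(low_bits)

mt_bits = [random.getrandbits(1) for _ in range(32)]  # no such pattern
print(mt_bits)
```

The full state space also has only 2^24 values, so the sequence must cycle within about 16.8 million draws, far too few for the portfolio-scale Monte Carlo runs discussed in the article.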
Dyakonova T., Khoperskov A., Khrapov S. - Numerical Simulation of the Dynamics of Floods in the Event of Emergency Situations Using Parallel Computing Technology pp. 17-34


Abstract: The subject of the study is the hydrological conditions in the territory of the Volga-Akhtuba floodplain taking into account high volumes of water passing through the dam of the Volzhsk hydroelectric power station. The hydrological regime determines both the environmental condition of the unique landscape of about 20 thousand square kilometers and the possibility of rational usage of the territory for fish farming, agricultural and recreational purposes. Mathematical modeling makes it possible to solve a large number of problems of floodplain hydrology and to maintain an optimal level of territory control. The research is focused on the study of the consequences of emergency situations occurring because of a high water level in the Volgograd reservoir or in case of accidents at the hydroelectric power station dam. The models offered by the author can be used to provide optimal schemes of evacuation of people from the danger zone between the rivers depending on external conditions. Particular attention is paid to the development of effective software for hydrodynamic computational experiments. The study is based on numerical simulations of surface water dynamics using the Eulerian-Lagrangian scheme of integration of the Saint-Venant equations and the parallel CUDA technology for GPU calculations. The author has created software which allows one to calculate the hydrological conditions in the territory of the Volga-Akhtuba floodplain in case of very high water flow through the Volga hydroelectric power station. Application of the CUDA-based parallelized code for GPUs can reduce the duration of simulations to several hours on Tesla K40 processors. Another striking feature of the author's model is the precise digital terrain model based on remote sensing data.
It was found that the area between the Volga river and the right bank of the Akhtuba river, as well as the urbanized territory including Volgograd, Volzhsky, the Southern Industrial Areas and Svetlyi Jar, will not be much affected even in case of catastrophic flooding.
Golik F.V. - Pearson distributions of the sum of identically distributed independent random variables. pp. 17-41


Abstract: The article is devoted to working out a constructive method for approximating the sum of independent identically distributed random variables by Pearson curves. The summation theory was and still is one of the key parts of probability theory. Limiting theorems are proven within this theory, and they allow one to understand which distributions may be used to approximate the sum of random variables for large m. At the same time, the approximation error is evaluated only by an admissible bound, and in most practical cases the number of summed values is not large, so such an error estimate may not be sufficiently precise. The purpose of the study is to develop a constructive method for approximating the density function of the sum of a finite number of independent identically distributed random variables, with Pearson curves used as the approximating densities. Such an approximation lacks the defects related to the application of limiting theorems and is applicable for any number of summed random variables m > 1. Calculated relations for the initial moments of the finite sum of independent random variables are obtained. It is shown that the parameters of the Pearson curves for the sum of m random variables are related by simple ratios to the corresponding parameters of the summed variable. The solution is based upon the method of moments. The author offers a recursion formula for calculating the initial moments of the sum of independent random variables, which allows one to find the central moments of the sum as well as the parameters of the Pearson curves. An exact expression is found for the distance, in the coordinate system of Pearson parameters, from the point corresponding to the distribution of the sum of the random variables to the point (0, 3) corresponding to the normal distribution.
By this distance, one can indirectly assess the applicability of the normal approximation. The author studies the possibility of approximating Pearson curves by the normal distribution. An approximate formula for estimating the error of approximating the sum of random variables by the normal distribution is given. The author provides examples of approximations of distributions of sums of random variables that are often met in statistical radio engineering tasks. The reference materials include complete formulae for the key types of Pearson curves. All the obtained results are applicable for any random variables having finite first four initial moments. The correctness of the conclusions is confirmed by numerical calculations performed in MathCad.
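The moment recursion mentioned in the abstract can be sketched with the standard binomial relation for raw moments of a sum of independent variables (a generic illustration; the paper's own recursion formula may differ in form): if S_m = S_{m-1} + X, then a_k(S_m) = Σ_{j=0..k} C(k,j) a_j(S_{m-1}) a_{k-j}(X).

```python
# Hypothetical sketch of the moment recursion: raw moments of the sum of
# m iid variables from the raw moments of one summand.
from math import comb

def sum_raw_moments(x_moments, m):
    """Raw moments [a_0..a_K] of the sum of m iid variables with moments x_moments."""
    s = list(x_moments)                    # moments of S_1 = X
    K = len(x_moments) - 1
    for _ in range(m - 1):                 # fold in one more summand each pass
        s = [sum(comb(k, j) * s[j] * x_moments[k - j] for j in range(k + 1))
             for k in range(K + 1)]
    return s

# Uniform(0,1): a_k = 1/(k+1), so a_0..a_4 = 1, 1/2, 1/3, 1/4, 1/5.
u = [1.0, 1 / 2, 1 / 3, 1 / 4, 1 / 5]
m = 6
a = sum_raw_moments(u, m)
mean = a[1]
var = a[2] - a[1] ** 2                     # central moments from raw moments
mu3 = a[3] - 3 * a[1] * a[2] + 2 * a[1] ** 3
print(mean, var, mu3)   # 3.0, 0.5 (= m/12), and ~0 (symmetric distribution)
```

From the first four central moments one then obtains the skewness-kurtosis pair (β1, β2) that places the sum in the Pearson plane, where (0, 3) is the normal distribution referred to in the abstract.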
Efimov A.I., Balily N.A. - Methods of training control systems for unmanned aerial vehicles by immersing them in virtual reality pp. 17-22


Abstract: The subject of study is the implementation of control systems for unmanned aerial vehicles. A method of training and testing these systems is proposed that immerses the entire system and its individual components into a virtual reality as close as possible to real conditions. The advantages and difficulties of implementation for each of the system modules involved are considered, and for each of the difficulties the authors propose solutions. The most promising areas of application are revealed, and the possibility of applying the given method to land and surface vehicles is identified. Within the framework of this work, the existing aircraft control systems and the use of virtual reality for training their individual parts are investigated, and an option of extending the use of such methods to the entire control system is proposed, with an analysis of the advantages and disadvantages of this approach. The novelty of this article lies in training control systems of unmanned aerial vehicles by immersing them in virtual reality. The completeness and flexibility of such a training system make it possible, on the one hand, to adapt to any equipment configuration and, on the other hand, to provide the highest quality training. The most important aspect is that a larger proportion of learning algorithms can be used than is possible in other cases. In addition, this approach is extremely useful in the framework of video navigation in connection with the possibility of a better implementation of computer vision. The article shows the relevance of the research and the effectiveness of this method for aircraft control systems and proposes its application to other vehicles.
Gorbaneva O.I., Murzin A.D., Anopchenko T.Y. - Application of a dynamic model for the coordination of general and private interests of sustainable development at the regional level pp. 18-28


Abstract: The article presents the results of approbation of the previously presented dynamic socio-ecological-economic model of the synergetic development of individual entities within the South Russian macro-region, which makes it possible to harmonize the general and private interests of each region. The model is investigated on the materials of the Southern Federal District. The identification of the model for the macro-region was carried out on the basis of publicly available data from the website of the State Statistics Committee. For identification, the time series of 2005, 2010 and 2015-2017 were taken. This choice is explained both by theoretical reasons (a step of five years was chosen for the long-term data and one year for the near-term data) and by empirical ones (the data of the crisis periods of 2008 and 2012, as well as the years nearest to them, were not taken). As a result of the study, several controversial conclusions were drawn regarding the strategies of the subjects' behavior. Calculations show, in particular, that in the current conditions it is not profitable for any of the regions to develop its own production sphere; the optimal strategy for each of them is only to increase consumption in the hope of the production activity of neighboring regions. If all regions choose this rational strategy at the same time, a general degradation of the production sphere and stagnation of the regional economy are predicted. At the same time, it was revealed that a further reduction in production becomes unprofitable for almost all lagging regions (the republics of the Southern Federal District), while for the leading regions of the Southern Federal District production inaction remains profitable.
Filgus D.I. - The method of forming the load on the arcs of the search graph of the shortest Hamiltonian path in relation to solving the problem of scheduling the implementation of multiple transactions and queries in a network database pp. 19-34


Abstract: The subject of the research is network databases of distributed computing systems. The object of the research is the model and scheduling algorithms for the distribution of workload in network databases. The author considers in detail such aspects of the topic as a model for optimizing the process of managing queries in distributed information systems. It is shown that, in order to eliminate the queue of requests, it is necessary to choose the most appropriate way of selecting queries and the best method for implementing the selected approach. Promising options for serving queries are group sampling, as well as group sampling with individual segmentation. Special attention is paid to the application and development of a method for solving problems of Boolean linear and nonlinear programming based on the rank approach, which has a small time complexity and provides the required accuracy, as well as to developing a common approach to solving arbitrary problems of Boolean programming. The research methodology comprises methods and models for planning, optimizing and improving the performance of network databases of distributed computing systems. The scientific novelty lies in the finding that an increase in the number of constraints in problems of nonlinear Boolean programming leads to a decrease in the error of their solution, while the degree of nonlinearity itself does not significantly affect the magnitude of the error. An experimental study of the developed algorithms shows that, when Boolean programming problems are formalized in graph formulation, in most cases it is possible to reduce the time complexity of the algorithms for solving them. The proposed model can be used to optimize the process of managing queries in distributed computing systems.
Pekunov V.V. - Simulation of the absorption of gaseous SO2 by fog droplets using a refined interpolation-sectional droplet model pp. 19-32


Abstract: This article examines the problem of numerical simulation of the interaction between the gaseous sulfur dioxide emitted by road transport and fog in conditions of high humidity. For this purpose, the author applies a multi-factor two-phase mathematical model, which takes into account the dynamics of the turbulent main phase, the dynamics and kinetics of the multi-sectional droplet phase, the presence of thermal inhomogeneities formed as a result of direct and diffuse solar radiation in various ranges, the diffusion of sulfur dioxide, and its absorption by the fog droplets. The article carries out a numerical calculation of the corresponding task within the AirEcology-P modeling system of environmental processes, which allows generating the optimal calculation code for a particular mathematical model. The proposed complex mathematical model that describes the interaction between the emitted sulfur dioxide gas and the fog droplets is new; it refines the calculation of the kinetics of the droplet phase by taking into account the additional factor of droplet fusion characteristic of fog. The submodel of the droplet phase was tested in numerical simulation (the results were compared with the data of direct Lagrangian modeling of an ensemble of 1,000 droplets), showing decent accuracy. The article presents the results of numerical simulation of the interaction between the emitted SO2 and the droplets. The author demonstrates the self-cleaning ability of the atmosphere, the degree of which correlates with the initial concentration of the smallest droplets and the height above the surface.
Rumyantsev A.A., Bikmuratov F.M., Pashin N.P. - Entropy estimation of the fragments of chest X-ray images pp. 20-26


Abstract: The subject of this research is medical chest X-ray images. After fundamental pre-processing, the accumulated database of such images can be used for training deep convolutional neural networks, which have become one of the most significant innovations in recent years. The trained network carries out preliminary binary classification of the incoming images and serves as an assistant to the radiologist. For this purpose, the neural network must be trained to carefully minimize type I and type II errors. A possible approach towards improving the effectiveness of the application of neural networks, by the criteria of reduced computational complexity and quality of image classification, is the use of auxiliary techniques: image pre-processing and preliminary calculation of the entropy of fragments. The article provides the algorithm for X-ray image pre-processing, its fragmentation, and calculation of the entropy of separate fragments. In the course of pre-processing, the region of the lungs and spine is selected, which comprises approximately 30-40% of the entire image. Then the image is divided into a matrix of fragments, and the entropy of separate fragments is calculated in accordance with Shannon's formula based on the analysis of individual pixels. Determination of the rate of occurrence of each of the 255 colors allows calculating the total entropy. The use of entropy for detecting pathologies is based on the assumption that its values for separate fragments, and the overall picture of its distribution, differ between images with the norm and images with pathologies. The article analyzes the statistical values: standard deviation of the error and variance. A fully connected neural network is used for determining the patterns in the distribution of entropy and its statistical characteristics over various fragments of the chest X-ray image.
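The per-fragment entropy computation the abstract describes can be sketched directly from Shannon's formula over the empirical histogram of pixel intensities (a minimal generic illustration, not the authors' pipeline; the sample "fragments" below are synthetic):

```python
# Hypothetical sketch: Shannon entropy (in bits) of the intensity
# distribution of one grayscale image fragment.
import math
from collections import Counter

def fragment_entropy(pixels):
    """H = -sum p_i * log2(p_i) over the empirical intensity frequencies."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [128] * 256           # a uniform gray patch: zero entropy
varied = list(range(256))    # every 8-bit level once: maximal entropy, 8 bits
print(fragment_entropy(flat), fragment_entropy(varied))   # 0.0 and 8.0
```

Computing this value for every cell of the fragment matrix yields the entropy map whose statistics (standard deviation, variance) the article then feeds into the fully connected network.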
Litvinov V.A. - On the influence of the method of approximation of an unknown function on the stability of numerical methods for solving the anomalous diffusion equation pp. 23-29


Abstract: The subject of the research is numerical algorithms for solving fractional partial differential equations. The object of the study is the stability of several algorithms for the numerical solution of the anomalous diffusion equation. Algorithms based on the difference representation of the fractional Riemann-Liouville derivative and the Caputo derivative for various orders of accuracy are considered. A comparison is made of the results of numerical calculations using the analyzed algorithms for a model problem with the exact solution of the anomalous diffusion equation for various orders of the fractional derivative with respect to the spatial coordinate. The results of the work were obtained on the basis of the stability analysis of the constructed difference schemes, the conducted numerical experiments and a comparative analysis of the data obtained. The main conclusion of the study is the advantage of using the approximation of the fractional Caputo derivative over the difference scheme for the fractional Riemann-Liouville derivative in the numerical solution of the anomalous diffusion equation. The paper also indicates the importance of choosing the method of difference approximation of the second derivative, which enters into the Caputo derivative.
Milovanov M.M. - Using Wealth Lab software for technical and graphical analysis to evaluate the effectiveness and application of a trading algorithm pp. 24-29


Abstract: Modern software makes it possible to use technical and graphical analysis to build charts and predictions based on technical indicators and oscillators. The article describes a technique of building a trading algorithm for the stock market using Wealth Lab. The author reviews the features of Wealth Lab and describes a trading algorithm using the standard indicators available in it. The article gives an analysis of the developed algorithm and evaluates its effectiveness based on the gathered data. Observation is the main method of the study: the author observes a set of data described by price and time. Since the stock market is constantly changing, it is essential to have an accurate trading algorithm to make a profit, and applying software makes it possible to evaluate the algorithm. Using techniques of finding the optimal selection of parameters, such as exhaustive search and the Monte Carlo method, the author gathers all the data needed. Wealth Lab allows one to test the algorithm using C#, find parameters using the optimizer, and build charts using built-in methods to evaluate the performance of the algorithm visually.
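The article's strategy itself is built in Wealth Lab (in C#); as a language-neutral sketch of what such a platform automates, here is a minimal moving-average-crossover backtest. The indicator, window sizes and price series are all hypothetical, not taken from the paper:

```python
# Hypothetical sketch: a simple-moving-average crossover strategy with a
# toy backtest loop of the kind trading platforms run internally.
def sma(prices, window, i):
    """Simple moving average of the window ending at index i (inclusive)."""
    return sum(prices[i - window + 1 : i + 1]) / window

def backtest(prices, fast=3, slow=6):
    """Long while fast SMA > slow SMA; returns final equity from a 1.0 start."""
    equity, position = 1.0, 0          # position: 0 = flat, 1 = long
    for i in range(slow, len(prices)):
        if position:
            equity *= prices[i] / prices[i - 1]   # mark to market while long
        # update the signal at the close of bar i
        position = 1 if sma(prices, fast, i) > sma(prices, slow, i) else 0
    return equity

prices = [100, 101, 102, 104, 103, 105, 107, 108, 106, 109, 111, 110]
result = backtest(prices)
print(result)   # final equity of the toy strategy on the toy series
```

Parameter selection (the fast/slow windows here) is exactly what the exhaustive-search and Monte Carlo optimizers mentioned in the abstract sweep over, scoring each candidate by a backtest like this.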
Urazaeva T.A. - On the mathematical essence of the related borrowers concept. pp. 24-40


Abstract: The object of study in the article is credit; however, the article does not concern its economic essence regarding either the creditor or the borrower. The key aspects of the study concern the mathematical foundations for modeling the phenomenon of related borrowers. The author starts her research by analyzing the economic content of the phenomenon of relatedness, while giving a strictly mathematical interpretation to the economic terms she employs. Further, the author explicitly considers the functional and stochastic aspects of the relatedness of borrowers, showing the continuous connection between these aspects. The methodological basis of the research is formed by the set-theoretic and probability-theoretic approaches. Applied aspects of loan portfolio modeling are explained in terms of the algebraic theory of risk. The graphic illustration of the main results of the work is based on classical representations of the Cartesian product of finite sets and on traditional (quasi-)trees of classification. The key result of this study is the revealing of a complete mathematical list of possible options for the relatedness of borrowers in the functional and probability-theoretic contexts. At the same time, the article contains detailed descriptions of all possible relation options within the object field of study. This is a novel relation classification; similar studies are so far not found among either Russian or foreign works. Another result of this study is a practical one, since some of the obtained relations can be used to form efficient algorithms for the direct analysis of risk processes within complicated systems.
Shelemetev A.M. - Principle of control of wheeled vehicle, based on desynchronization of rotational speeds of the steered wheels taking into account the effect of suspension travel pp. 26-32


Abstract: When controlling a wheeled vehicle there is always a need for forced changing of its trajectory, i.e. for a steering system whose operation is based on changing the steering angle through a steering mechanism (using an external force). That requires a kinematic connection between the steering mechanism and the steering drive and thus increases the complexity of the machine design. In this regard, the question arises: is it possible to avoid this complication? It turns out that it is possible by using a steering system based on the difference between the rotational speeds of the steered wheels. The article describes a method of controlling a wheeled vehicle by setting certain speeds of rotation of the steered wheels without using a steering mechanism. The author describes an algorithm for controlling a vehicle based on the proposed method. For the inventive method, in the form in which it is described in the appended claims, the possibility of its implementation using the methods described in the application and the means known before the priority date is confirmed. Therefore, the claimed invention meets the condition of "industrial applicability". Thus, it is possible to control the rotation of the vehicle wheels using the chosen law, providing the conditions necessary for rotational motion taking into account surface irregularities.
Pekunov V.V. - Modification of the Marquardt method for training a neural network predictor in eddy viscosity models pp. 27-34


Abstract: The subject of this article is the numerical optimization techniques used to train neural networks that serve as predictor components in certain modern eddy viscosity models. A high-quality solution of the training problem (minimization of the functional of neural network residuals) often requires significant computational cost, which makes it necessary to speed up training by combining numerical methods with parallelization of calculations. The Marquardt method is of particular interest, since it contains a parameter that allows the solution to be accelerated by switching the method from gradient descent far from the solution to Newton's method near the solution. The article proposes a modification of the Marquardt method that uses a limited series of random samples to improve the current point and to calculate the method's parameter. The author demonstrates the descent characteristics of the method in numerical experiments, both on the Himmelblau and Rosenbrock test functions and on the actual task of training the neural network predictor applied in modeling turbulent flows. The method can significantly speed up the training of the neural network predictor in corrective eddy viscosity models. It is less time-consuming than random search, particularly with a small number of compute kernels, while providing a solution close to the result of random search and better than that of the original Marquardt method.
Fatkhullin R.R., Sidorkina I.G. - The "Monitoring" automated information system as a tool for regional education quality assessment pp. 28-33


Abstract: One of the leading trends in the development of education worldwide is the creation of a comprehensive system of education quality assessment. A highly urgent task today is to develop an automated information system that provides collection, storage and structuring of information on the education system and allows that information to be promptly processed and interpreted. The article presents the "Monitoring of the education system in the Republic of Mari El" automated information system tool with an integrated mathematical apparatus. The authors propose an integrated model for comprehensive evaluation of the quality of regional educational activities, which allows calculating a composite quality indicator for educational organizations within a cluster. Using the tool to assess quality in the region solves a number of tasks, such as end-to-end automation of the collection, processing, analysis and consolidation of monitoring indicators for the region's education system. It also provides information support for the implementation of regional education policy.
Fatkhullin R.R. - Substantiation of invariant solutions based on educational qualimetry and theory of neural networks in evaluating the quality of educational institutions pp. 33-73


Abstract: One of the leading trends in the development of education in today's world is the creation of a system for comprehensive evaluation of the quality of education. Approaches to defining the quality of education are being actively developed, and different aspects, indicators and parameters are being studied. Comprehensive estimation of the effectiveness of educational organizations is a multicriteria task whose solution requires establishing criteria and procedures for their evaluation. The article analyzes methods of multicriteria optimization that may become components of a comprehensive assessment of the quality of educational activities, together with the theory of neural networks. The article reviews multicriteria statistical models for a comprehensive evaluation of the quality of educational organizations. One of them is based on educational qualimetry and involves the convolution of dominant and compensated indicators into an integrated indicator. Another model involves the use of neural networks and is based on analytical data processing technology. Implementation of the models considered can improve such important properties of a comprehensive assessment of the effectiveness of educational institutions as objectivity of evaluation, scalability, simplicity and ease of use. The results obtained can be widely used in the comprehensive quality assessment of educational institutions at various education levels.
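The convolution of indicators into an integrated indicator can be illustrated with a minimal sketch. The scores and weights below are assumptions, not the article's data; additive convolution lets a strong indicator compensate a weak one, while multiplicative convolution lets the weak indicator dominate the result.

```python
def additive(scores, weights):
    """Compensatory (additive) convolution of normalized indicators."""
    return sum(w * s for w, s in zip(weights, scores))

def multiplicative(scores, weights):
    """Dominant (multiplicative) convolution: a weak indicator pulls the score down."""
    prod = 1.0
    for w, s in zip(weights, scores):
        prod *= s ** w
    return prod

scores = [0.9, 0.6, 0.8]    # indicators scaled to [0, 1] (illustrative)
weights = [0.5, 0.3, 0.2]   # importance weights summing to 1 (illustrative)
print(additive(scores, weights))
print(multiplicative(scores, weights))
```

By the weighted AM-GM inequality the multiplicative score never exceeds the additive one, which is exactly the "dominant" behavior: it penalizes unbalanced indicator profiles.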
Oleinikova S.A. - Approximation of the distribution law of the sum of beta distributed random variables pp. 35-54


Abstract: The subject of the research is the probability density function (PDF) of a random variable that is the sum of a finite number of beta-distributed values. The beta distribution is widespread in probability theory and mathematical statistics, because a sufficiently large number of random events can be described by it whenever the corresponding continuous random variable is concentrated in a certain range. Since the required sum of beta values cannot be expressed by any of the known laws, the problem arises of estimating its density. The aim is to find an approximation for the PDF of the sum of beta values that has the least error. To achieve this goal, a computational experiment was conducted in which, for a given number of beta values, the numerical PDF was compared with the candidate approximations of the desired density. The normal and the beta distributions were used as approximations. The experimental analysis yielded results indicating that approximating the desired law by a beta distribution is appropriate. As one field of application of the results, the project management problem with random durations of works is considered; here the key issue is the evaluation of project implementation time, which, because of the specifics of the subject area, can be described by a sum of beta values.
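A beta approximation of the sum can be sketched by moment matching: rescale the sum of k values to [0, 1] and fit a beta law with the same mean and variance. The parameters and the Monte Carlo check below are illustrative, not taken from the paper.

```python
import numpy as np

def beta_moments(a, b):
    """Mean and variance of Beta(a, b) on [0, 1]."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

def fit_beta_to_sum(a, b, k):
    """Moment-matched Beta approximation for S = X1 + ... + Xk, Xi ~ Beta(a, b).
    The fit is done for S/k, which again lives on [0, 1]."""
    m1, v1 = beta_moments(a, b)
    m, v = m1, v1 / k                       # mean and variance of S/k
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common     # alpha, beta of the approximation

# Monte Carlo check of the approximation (illustrative parameters)
rng = np.random.default_rng(1)
a, b, k = 2.0, 3.0, 5
alpha, beta_ = fit_beta_to_sum(a, b, k)
samples = rng.beta(a, b, size=(100_000, k)).sum(axis=1) / k
print(alpha, beta_)
print(samples.mean(), alpha / (alpha + beta_))   # empirical vs fitted mean
```

For k = 1 the procedure returns the original parameters exactly, which is a quick sanity check of the moment equations.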
Agafonnikova E., Khoperskov A., Khrapov S. - The Problem of Forecasting and Managing Hydrological Conditions in the Mountainous Area During Flash Floods Based on Hydrodynamical Numerical Experiments pp. 35-53


Abstract: The object of the research is mathematical modeling, and its numerical implementation, for the description of non-stationary floods. The structure, interface and computing capabilities of the EcoGIS-Simulation software package, based on geo-information and supercomputing technologies for modeling the hydrological regime of floods, are discussed concisely. The digital terrain model is considered the most important factor determining the quality of the modeling results. The main focus of the study is the hydrological and emergency situation of 2012 in the area of Krymsk, which caused numerous deaths. A digital terrain model has been built for the Krymsk area of the Krasnodar region, enabling adequate modeling of the hydrological regime under severe flood conditions using the "EcoGIS-Simulation 2.0" software package. The software takes into account all the main physical factors determining flood dynamics and uses numerical algorithms specially adapted for solving the Saint-Venant equations. The implemented two-tier client-server architecture allows a user to launch several calculations from a client machine on remote computing clusters. Through numerical experiments the authors have succeeded in reproducing the dynamics of a flood wave capable of causing an emergency like that of 2012. A number of features of the hydrological conditions during the 2012 flash flood, connected with the landscape and the distribution of precipitation, have been detected.
Ipatov Y.A., Kalagin I.V. - Analysis of the dynamic characteristics for target groups of social networks pp. 37-50


Abstract: The object of research is the dynamic characteristics of target groups in social networks. The subject of the study is the analysis of methods and models for the evolutionary characteristics of large social graphs. The study examines in detail the analysis approaches and the quantitative characteristics of graph models. An algorithm for analyzing the dynamic characteristics of target groups in social networks is synthesized. The experimental results capture the moment a user joins the subject area of interest and visualize the entire process in real time. The developed software tools can be useful for further development and research on topics related to social networks. The tasks were solved using methods of mathematical logic, graph theory, mathematical statistics, mathematical analysis, linear algebra, mathematical modeling, the theory of algorithms, and object-oriented programming techniques. The novelty of the study lies in determining the dynamic characteristics of target groups of social networks and in visualizing the entire process in real time. The main conclusion of the study is that the developed software tool makes it possible to trace cause-and-effect indicators of changes in the social graph. The proposed software prototype will be of interest primarily to marketers, system analysts, and professionals involved in the analysis and study of social networks.
Korobeinikov A.G., Grishentsev A.Y. - Increasing the rate of convergence of the finite difference method through the use of intermediate solutions pp. 38-46
Abstract: To study the characteristics of the functioning of any system by mathematical methods, including computational ones, the process must be formalized, i.e. a mathematical model must be constructed. By mathematical modeling we mean the process of establishing a correspondence between a real object and a mathematical object (the mathematical model), and the study of that model, which allows characteristics of the real object to be obtained. The type of mathematical model depends on the nature of the real object, the tasks of the research, and the required reliability and accuracy of the solution. Any mathematical model, like any other model, describes the real object only with a certain degree of approximation to reality. This paper presents a method for calculating intermediate solutions in an n-dimensional boundary value problem that accelerates the convergence of the finite difference method. In a practical implementation of the method, the number of iterations needed to reach a given residual was reduced by a factor of 10 to 100 thanks to the search for intermediate solutions. The method can therefore significantly improve the efficiency of the finite difference method.
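The general idea of accelerating an iterative finite-difference solver with an intermediate solution can be illustrated by a warm start from a cheaply computed coarse-grid solution. This is a sketch of the principle only, not the paper's method; grid sizes and tolerances are assumptions.

```python
import numpy as np

def jacobi(u, tol=1e-4, max_iter=50_000):
    """Jacobi iterations for Laplace's equation; boundary values stay fixed.
    Returns the solution and the number of iterations used."""
    for n_iter in range(1, max_iter + 1):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        if np.abs(new - u).max() < tol:
            return new, n_iter
        u = new
    return u, max_iter

n = 33
cold = np.zeros((n, n)); cold[0, :] = 1.0        # heated top edge, cold start
_, iters_cold = jacobi(cold)

# intermediate solution: solve cheaply on a coarse grid, interpolate up
m = n // 2 + 1
coarse = np.zeros((m, m)); coarse[0, :] = 1.0
coarse, _ = jacobi(coarse)
x, xc = np.linspace(0, 1, n), np.linspace(0, 1, m)
tmp = np.array([np.interp(x, xc, row) for row in coarse])      # refine columns
warm = np.array([np.interp(x, xc, col) for col in tmp.T]).T    # refine rows
warm[:, 0] = warm[:, -1] = warm[-1, :] = 0.0; warm[0, :] = 1.0 # restore boundary
_, iters_warm = jacobi(warm)
print(iters_cold, iters_warm)   # the warm start needs fewer fine-grid iterations
```

The coarse solve is cheap (a quarter of the unknowns), while the interpolated intermediate solution starts the fine-grid iteration much closer to the fixed point, which is the same mechanism that multigrid methods exploit systematically.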
Matrosova N. - A sequence of transformations of information in the simulation of management of an educational institution pp. 42-77


Abstract: The subject of the study is the analysis of a sequence of transformations of information about the functioning of an educational institution when simulating the management of its activities as a training and research center. The paper analyzes the structure and functioning of higher educational institutions within the trends of modernization and quality of education set by the legal documents of the educational system of the Russian Federation. In addition, the article presents an example of a developed diagram of the university, which makes it possible to search for and predict the causes of inefficiency in its activities and is relevant for choosing ways to achieve the best possible performance. The article describes a mathematical model of the university as a dynamic control system over a continuous period of time, which allows estimating the dynamics of an educational establishment and developing a software system to improve decision-making in university management. The set of techniques used includes methods of system analysis, expert evaluation, functional analysis, the theory of differential equations, mathematical statistics, decision theory, the control theory of complex systems, the theory of databases and expert systems, discrete mathematics, and the theory of automatic control. A list of indicators of the university as an educational and scientific center is formed in terms of the indicators used to measure the quality of functioning of modern educational institutions. The author presents an impact diagram of the components of the university as a dynamic control system, and a mathematical model of the educational institution as a dynamic control system over a continuous period of time is given.
Gulyaikhina E.S. - Computer simulation of biochemical analysis in medical education pp. 46-56


Abstract: The article is devoted to the problem of computer simulation of the biochemical analysis of biological fluids using the Stat Fax® 1904+R analyzer. The author describes the methodology and stages of creating an educational simulator (formulation of the problem, project planning, defining the system, gathering input data, translation of the model, verification and validation, implementation into the educational process). The article presents the main methods for calculating substance concentrations imitated by the simulator (calculation against a standard, factor calculation, multi-point calibration and measurement of kinetic characteristics). Significant attention is paid to the interface design of the application's main window, which visualizes a virtual desktop for carrying out biochemical analysis. The essential elements of the simulated desktop are images of the device with a working keyboard, a display, a printing area for the analysis results, tubes (blank, standard, sample) with corresponding buttons, and the power button. The task of simulating the quantitative and qualitative characteristics of the test mixture is solved using a random number generator. The results of the study can be used in building educational computer simulators not only in medicine but also in other areas of professional activity.
Kormilitsyn A.Y., Skorubskiy V. - Model of endocrine subsystem and calculation of injection parameters pp. 46-60


Abstract: The human body is viewed as an information system, and the endocrine subsystem is selected for study. The authors use blood sugar level data to calculate its dependencies for healthy patients and for patients with type 2 diabetes during the control intervals between injections. They propose a generalization, in the form of a mathematical model, of the autoregulation of these processes in the endocrine system using natural internal and external medical injections of insulin and glucose. The model is intended for calculating the time intervals between possible critical states of patients and the parameters of external compensation of concentration deviations (the intake of dietary carbohydrates and insulin injections) needed to maintain the patient's normal state. The blood sugar concentration dependencies that define the state of the endocrine subsystem can be presented as continuous exponential curves with typical monotonic increases and decreases. This suggests representing the control law as a polynomial whose coefficients are specified in real time at the control points. Interpolation and extrapolation help to predict the near-term blood glucose concentration from the accumulated data measured by portable glucometers and, most importantly, to predict extreme states of hyperglycemia and hypoglycemia and to perform an approximate calculation of the parameters of external insulin and glucose injections. The article presents a new model of the human endocrine subsystem for diabetes and gives recommendations on the use of injections to maintain normal life. The described model is simple and can be used efficiently to monitor the patient's state and to calculate the parameters of external intervention (dietary carbohydrate intake and insulin injections) for maintaining a normal state using available mobile measurement and evaluation means.
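The polynomial interpolation/extrapolation step can be sketched as follows. The glucometer readings and the alarm threshold below are hypothetical illustrations, not clinical values and not the article's data.

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # hours since the first measurement
g = np.array([5.2, 5.9, 6.8, 7.9, 9.1])   # hypothetical readings, mmol/L

# fit a low-order polynomial to the recent control points, as suggested by the
# smooth monotonic shape of the concentration curves, and extrapolate ahead
coeffs = np.polyfit(t, g, deg=2)
forecast = np.polyval(coeffs, 2.5)          # half an hour beyond the last reading

HYPER = 10.0   # illustrative alarm threshold, not a clinical recommendation
print(f"forecast at t=2.5 h: {forecast:.2f} mmol/L, alarm: {forecast > HYPER}")
```

Crossing the threshold in the extrapolated curve, rather than in the measured one, is what gives the advance warning of an approaching extreme state.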
Teplovodskii A.V. - Motion simulator algorithm for aircraft guided missiles pp. 48-60


Abstract: The subject of the study is the development of algorithmic support for test methods for aircraft guided missiles, based on the integration of differential equations in the Cauchy form adopted for the mathematical description of aircraft motion, which is the basis for modeling the motion of aircraft guided missiles. The author suggests an approach to the algorithmization of the flight path of aircraft guided missiles that should be used in developing methods for the research and testing of aircraft guided missiles with modeling systems, making it possible to reliably determine and assess the compliance of the motion characteristics of aircraft guided missiles with the specified tactical and technical requirements. The research methodology is based on the methods of mathematical modeling, optimal control, computational mathematics, and differential and integral calculus. The main result of the study is the basic motion algorithm for a simple model of an aircraft guided missile subject to wind disturbances. The algorithm's complexity can be increased by including missile guidance algorithms and control algorithms that take into account the angular motion about the rocket's center of mass, the dynamics of the data sensors and the dynamics of the steering drives.
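Integration of Cauchy-form motion equations with a wind disturbance can be sketched with a classical fourth-order Runge-Kutta step. The point-mass model and all coefficients below are illustrative assumptions, not the article's missile model.

```python
import numpy as np

def deriv(state, wind):
    """Cauchy-form right-hand side for a point mass with quadratic drag and
    gravity; the drag coefficient is an assumed, illustrative value."""
    x, y, vx, vy = state
    g, k = 9.81, 0.001                        # gravity, drag coefficient
    rvx, rvy = vx - wind[0], vy - wind[1]     # velocity relative to the air
    speed = np.hypot(rvx, rvy)
    return np.array([vx, vy, -k * speed * rvx, -g - k * speed * rvy])

def rk4_step(state, wind, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state, wind)
    k2 = deriv(state + 0.5 * dt * k1, wind)
    k3 = deriv(state + 0.5 * dt * k2, wind)
    k4 = deriv(state + dt * k3, wind)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.0, 0.0, 200.0, 100.0])    # x, y, vx, vy
for _ in range(100):                          # 1 s of flight at dt = 0.01 s
    state = rk4_step(state, (5.0, 0.0), 0.01) # steady 5 m/s tail wind
print(state)
```

Guidance and control loops would enter this scheme as additional terms in the right-hand side, which is why the Cauchy form makes the basic algorithm easy to extend.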
Sklyar A. - Analysis and elimination of noise components in time series with a variable step pp. 51-59


Abstract: The article discusses a methodology for estimating the noise component in time series with a variable step, its justification, and an algorithm for removing noise from the data. The analysis is based on the requirement of smoothness of the function representing the original data, with continuous derivatives up to the third order. The algorithm is based on minimizing the deviations of the calculated values from the smooth function, provided that the deviations from the source data correspond to the noise level. Under the smoothness assumption, the proposed method and algorithms make it possible to reasonably determine both the absolute and the relative noise in the data, regardless of the uniformity of the measurement step or the noise level in the source data, and to remove the noise component from the data. Owing to the smoothness of the data obtained as a result of noise elimination, they are suitable for detecting both analytical and differential dependencies.
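The minimization idea can be illustrated with a Whittaker-style penalized smoother on a uniform grid: deviations from the data are balanced against a penalty on high-order differences of the result. The article's own variable-step algorithm is not reproduced; the penalty weight and the third-order difference operator here are illustrative choices.

```python
import numpy as np

def smooth(y, lam=100.0, order=3):
    """Minimize ||z - y||^2 + lam * ||D z||^2, where D is the order-th
    finite-difference operator; the closed-form solution is a linear solve."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)          # difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + rng.normal(0.0, 0.1, t.size)
denoised = smooth(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```

The third-order penalty leaves the slowly varying signal almost untouched while strongly attenuating the noise, so the recovered curve has the smooth derivatives needed for detecting differential dependencies.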
Pesterev E.V., Klyushin Y.G. - Decision-Making Simulation Based on Multidimensional Data Analysis pp. 54-65


Abstract: Methods of multidimensional data processing in decision-making are an essential part of business-process analysis. In this research the authors analyze multidimensional operations when numerous alternatives are presented. The subject of the research is thus the decision-making process based on the analysis of multidimensional data obtained from system functioning statistics. The authors approach the processing of statistical data from a general point of view, in particular within the framework of pattern recognition theory and chemometrics. The article offers particular methods for generating and processing statistical data; these methods involve building a multidimensional data structure and then processing its production function (certainty function). The original features constituting the selected data are analyzed from the point of view of the dominating motivation principle, which is mathematically expressed as the mutual influence of these features on each other and on the decision to be made. To verify the methods, the authors performed a number of numerical experiments aimed both at comparing the developed algorithms with the Bayesian approach and at comparing different production functions. One series of experiments estimates the arithmetic mean of several generated random numbers. As a result, the authors establish the dependence of the number of correct responses on the constitutive parameters (the number of objects, the number of features, the volume of the selected data, and the number of possible values for each feature). The results demonstrate that classification based on the authors' methods outperforms classification based on the probabilistic approach.
The results can be used to solve a wide range of tasks that are not directly related to decision-making but deal with multidimensional data analysis.
Borodin A.V. - Feasibility study of redundancy solutions for the network component of a special-purpose fault-tolerant scalable computing system pp. 55-70


Abstract: The research is devoted to architectural aspects of creating special-purpose fault-tolerant scalable computing systems. In particular, the study focuses on the backup principles that can be used in the network subsystem of a computing system when its total cost of ownership depends essentially on the level of degradation of performance metrics. The authors consider such redundancy approaches as duplication and triple redundancy. For triple redundancy, the research offers a new concept of functional adaptation of the redundant elements. Special attention is paid to the dependence of the Value at Risk measure, which characterizes the random total cost of ownership of the computing system and defines the greatest possible loss at a given probability level, on such system parameters as the number of functional groups of hosts and the influence of single and group faults on the degradation of performance metrics. The risk process in the computing system is described using the notation of ordinary stochastic Petri nets, and the Value at Risk measure on a given time interval is computed using methods of the algebraic theory of risk. The main result of the research is a proof of concept for the productivity of the triple redundancy approach with functional adaptation of the redundant elements in the task of synthesizing the topology of a network subsystem. The novelty of the research consists in applying methods of the algebraic theory of risk to the synthesis of an optimal computing system architecture over given discrete sets of possible decisions.
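The Value at Risk measure itself can be illustrated with a minimal Monte Carlo sketch: the loss model below (Poisson fault counts with a fixed cost per fault) is an assumption for illustration only, not the article's Petri-net or algebraic-risk formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_tco(n_runs, base_cost, fault_rate, cost_per_fault):
    """Total cost of ownership per period: base cost plus fault-induced losses."""
    faults = rng.poisson(fault_rate, size=n_runs)   # faults per period
    return base_cost + faults * cost_per_fault

def value_at_risk(losses, level=0.95):
    """Smallest loss not exceeded with probability `level`."""
    return np.quantile(losses, level)

tco = simulate_tco(100_000, base_cost=1000.0, fault_rate=2.0, cost_per_fault=50.0)
print(value_at_risk(tco, 0.95))
```

Comparing this quantile across candidate topologies (duplication vs. triple redundancy, different numbers of functional groups) is the kind of decision the article's analytical machinery supports without resorting to simulation.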
Sklyar A. - Time series analysis and identification of processes with diffuse periodicity pp. 56-64


Abstract: The subject of the research is a method for estimating and removing the noise component in time series and for extracting the trend and fluctuations with different periods; the concepts of T-ε and T-h-ε almost-periods for finite series are introduced. The analysis is based on the requirement of smoothness of the function representing the original data, with derivatives up to the fourth order inclusive, and on the extraction of almost-periods using functions of the Alter-Johnson type. The trend in the length of the periods of the fluctuations identified in the data is treated separately. The algorithm for solving the problem is based on minimizing the deviations of the calculated values from the smooth function, provided that the deviations from the source data correspond to the noise level. To identify the oscillatory component and the trend of the almost-periods, a modified Alter-Johnson function is used. The proposed methodology and algorithms make it possible to reasonably determine the noise level in the data, remove the noise component, identify almost-periods in the sense of the definitions introduced in the article, extract the trend and oscillation components, and, if necessary, identify the trend in the change of the almost-periods.
Mayer R.V. - Computational experiments in the study of wave processes in linear and nonlinear media pp. 57-65


Abstract: The physics course in colleges and universities includes various wave processes: reflection and transmission of a pulse at the interface between two media, interference, wave propagation in a dispersive medium, and the formation and interaction of solitons. It is important to combine theoretical and practical approaches to studying these phenomena with computer models that allow creating a visual image of a phenomenon and analyzing its behavior under different conditions. The subjects of the study are simple computer models and computational experiments that demonstrate wave processes in one-dimensional linear and nonlinear media. The experiments require mathematical and computer modeling: building a mathematical model and creating software that simulates the studied phenomenon based on the numerical solution of the corresponding system of equations. The novelty of the work lies in the three simple Pascal programs presented by the author, which simulate pulse propagation in a one-dimensional medium, its reflection from the boundary between two media and its passage into the second medium, wave propagation in a dispersive medium, and the formation and interaction of solitons. The analysis of the modeling results shows that computational experiments based on simulating a one-dimensional medium by a system of coupled spring or simple pendulums, or on solving the sine-Gordon equation, really allow wave processes to be studied at a higher level and foster interest in physics and information technology.
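A one-dimensional analogue of such computational experiments can be sketched with an explicit finite-difference scheme: a Gaussian pulse travels toward an interface where the wave speed drops, splitting into reflected and transmitted parts. The article's programs are in Pascal; this Python sketch and all its parameters are illustrative.

```python
import numpy as np

# 1D wave equation u_tt = c(x)^2 u_xx, explicit leapfrog scheme;
# the Courant number c*dt/dx = 0.5 keeps the scheme stable.
n, dt, dx = 400, 0.5, 1.0
x = np.arange(n)
c = np.where(x < n // 2, 1.0, 0.5)               # two media, slower on the right
u_prev = np.exp(-((x - 100) / 10.0) ** 2)        # Gaussian pulse at t = 0
u = np.exp(-((x - 100 - c * dt) / 10.0) ** 2)    # shifted copy: moves right
for _ in range(300):
    u_next = np.zeros(n)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1] +
                    (c[1:-1] * dt / dx) ** 2 *
                    (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
print(u.max())   # transmitted pulse amplitude, roughly 2*c2/(c1+c2) of the input
```

Plotting `u` at successive steps shows the transmitted pulse (reduced amplitude, shorter wavelength) and the inverted reflected pulse, the phenomena the article's first program visualizes.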
Mayer R.V. - A numerical method for solving a boundary value problem for an oscillating membrane pp. 59-67


Abstract: The article considers a simple method for the numerical solution of the two-dimensional wave equation, which allows simulating the following phenomena: 1) propagation and reflection of waves; 2) change of wavelength as a wave passes from one medium to another; 3) interference of waves from several coherent sources; 4) formation of a standing wave; 5) waves bending around obstacles (diffraction); 6) forced oscillations of an elastic plate; 7) free oscillations of an elastic plate of arbitrary shape; 8) self-oscillations of an elastic plate. The methods used are mathematical modeling, the numerical solution of partial differential equations, and color mapping of two-dimensional fields on the screen. Novelty of the work: the article presents two simple computer programs written in Free Pascal that allow simulating a rather large set of phenomena connected with the propagation of waves in two-dimensional media and the oscillations of an elastic plate. The programs can be used in studying numerical methods and the basics of computer modeling.
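A minimal sketch of such an explicit scheme for the two-dimensional wave equation is shown below: a point displacement at the center of a square membrane with fixed edges produces an expanding circular wave. The article's Free Pascal programs are not reproduced; the grid and parameters are illustrative.

```python
import numpy as np

# 2D wave equation u_tt = c^2 (u_xx + u_yy), explicit leapfrog scheme with
# fixed (reflecting) boundaries; c*dt/dx = 0.5 < 1/sqrt(2) keeps it stable.
n, steps = 101, 120
c, dt, dx = 1.0, 0.5, 1.0
u_prev = np.zeros((n, n))
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                        # initial displacement at the center
for _ in range(steps):
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4 * u[1:-1, 1:-1])                # discrete Laplacian (interior)
    u_next = np.zeros((n, n))
    u_next[1:-1, 1:-1] = (2 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                          + (c * dt / dx) ** 2 * lap)
    u_prev, u = u, u_next
print(np.abs(u).max())
```

Rendering `u` as a color map after each step (the article's color-mapping method) shows the expanding ring and, after it reaches the edges, the reflected pattern that builds standing waves.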
Rozhentsov V.V. - A technology for assessing the ability to perceive and estimate distance pp. 61-66


Abstract: Perception and estimation of distance is possible both with monocular (one-eye) and binocular (two-eye) vision; in the latter case the distance is estimated much more precisely. Questions of distance perception and estimation are relevant when hiring employees in various specialties, such as motor vehicle drivers, surveyors, and experts in engineering graphics and graphic activity. These abilities are especially important in physical education and sports, particularly in games, martial arts and orienteering. The article proposes a method for evaluating distance perception using a camera and computer-controlled light emitters placed on a flat surface. The light emitters create two light spots on the surface, and the examinee stands at the middle of the line drawn through the centers of the spots. The software randomly changes the area, direction and velocity of the light spots for a predetermined time; the examinee evaluates the transformation and movement of the spots and moves so as to stay at the middle of the line through their centers. The transformation of the light spots and the movements of the examinee are recorded, and the video is then analyzed by software that calculates the middle point, the examinee's location and the arithmetic mean of the measured distances. This arithmetic mean is used to judge the examinee's ability to perceive and estimate distance. Known methods of evaluating distance perception use a static position of the examinee and are less useful in professional selection for various activities, including sport. The proposed technology assesses the perception of distance in the course of the examinee's motor actions, which is typical of many activities.
Gatchin Y.A., Polyakov V.I., Skorubskii V.I., Chebanenko A.A. - Endocrine subsystem model pp. 62-69


Abstract: For many years, various tools, from the conventional thermometer to sophisticated measuring instruments, have been used in medicine to improve the diagnosis of disease. Verification of the effectiveness and safety of diagnostic techniques is the basis of evidence-based medicine, and it requires mathematical methods for assessing the quality of treatment and of the measuring tools. Diabetes is a widespread and incurable disease; however, with self-monitoring, adherence to the recommendations for the particular type of disease, and collection of information on the state of the organism, injection parameters can be calculated that keep the patient's health near normal, which is highly important at the pace of modern life. In this article the authors propose a model of the endocrine subsystem of a person with type 2 diabetes, based on the analysis of the characteristics of changes in blood glucose concentration. The model cannot be considered exhaustive, owing to the wide variety of internal and external factors affecting the state of the subsystem and thereby the general health of the person. Nevertheless, with a certain objectivity, the model can be used to calculate the parameters of external influences (caloric intake and insulin injections) needed to maintain a normal state of the patient using mobile means, reducing the threat of dangerous consequences.
Urazaeva T.A., Smirnova S.Y. - On the Experience of Using Various Pseudorandom Number Generators in Random Search Algorithms for the Global Extremum of Functions pp. 64-69


Abstract: The subject of the research is optimization methods, in particular methods of random search for the global extremum of a function. The object of the research is the problems that arise in random search when a stream of uniformly distributed truly random numbers is replaced by pseudorandom sequences. The authors have constructed a simulation example that clearly demonstrates the limitations of uniform random search when the period of the pseudorandom sequence used is comparable to the potentially achievable number of target function evaluations in a typical computation. The authors demonstrate that there are serious limitations to the random number generator built into the VBA subsystem of the Microsoft Office package when it is used in random search algorithms. In synthesizing the model target function, the authors used methods of algebra and analysis. The main results of the research are the conclusion that the pseudorandom number generator built into the VBA subsystem of the Microsoft Office package is impractical for random search algorithms, and recommendations to replace the built-in generator with new-generation generators such as the Mersenne Twister.
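The period issue can be illustrated by running the same uniform random search with two generators: Python's built-in Mersenne Twister (period 2^19937 - 1) and a deliberately short-period linear congruential generator. The LCG here merely stands in for a weak generator; the VBA generator itself is not reproduced, and the target function is an illustrative construction.

```python
import random

def target(x, y):
    """Smooth test function with global minimum 0 at (0.3, -0.7)."""
    return (x - 0.3) ** 2 + (y + 0.7) ** 2

def random_search(next_unit, n_iter=20_000):
    """Uniform random search over [-1, 1]^2 driven by a unit-interval generator."""
    best = float("inf")
    for _ in range(n_iter):
        x = -1.0 + 2.0 * next_unit()
        y = -1.0 + 2.0 * next_unit()
        best = min(best, target(x, y))
    return best

best_mt = random_search(random.Random(123).random)   # Mersenne Twister

state = 1
def lcg():
    """Toy LCG with at most 257 states: the search can only ever visit
    a fixed handful of points, however many iterations are run."""
    global state
    state = (state * 75 + 74) % 257
    return state / 257.0

best_lcg = random_search(lcg)
print(best_mt, best_lcg)
```

With 20,000 evaluations the Mersenne Twister search reliably lands very close to the optimum, while the short-period generator keeps cycling through the same few hundred candidate points, which is the mechanism behind the limitation the article demonstrates.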
Zakharov V.M., Pesoshin V.A., Shalagin S.V., Eminov B.F. - Automaton Model of Nonlinear Pseudorandom Sequences with the Output Function on the Basis of an Injective Mapping System pp. 64-78


Abstract: The subject of the research is methods of complicating the analytical structure of pseudorandom sequences by applying an additional mapping of nonlinear external logic, in particular a nonlinear complication function, to the elements of an initial pseudorandom sequence. The purpose of the research is to define and develop an algorithm for a mathematical model of the nonlinear complication function based on the modular operation of exponentiation modulo a prime. This makes it possible to obtain nonlinear pseudorandom sequences whose statistical properties are close to those of a random sequence at the set maximum period. To present the model, the authors have used the formalism of automata theory, finite field theory, and the theories of residue numbers and primes. The authors offer their own automaton model for creating nonlinear pseudorandom sequences with set periods of L = 2^n − 1 and L = 2^n, n > 1, with the output function as the nonlinear complication function based on nonlinear mappings modulo Fermat primes. It has been proved that the automaton output function is an injective function that permutes elements of the De Bruijn sequence. It has been demonstrated that the algorithmic model of the automaton output function makes it possible to change the structure of nonlinear sequences by pseudorandomly permuting the values of the primitive roots of the Fermat prime. The size of the ensemble of nonlinear sequences formed by the nonlinear complication function depends on the number of primitive roots and is bounded below by a quantity of the form 2^n, n > 1.
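A small sketch of the underlying idea (an illustration, not the authors' construction): for a Fermat prime F and a primitive root g of F, the map x → g^x mod F is injective on {1, …, F−1}, so applying it element-wise to a base sequence permutes the values while preserving the period. The choices F = 17 and g = 3 below are demonstration values.

```python
# Nonlinear "complication" output function built from exponentiation
# modulo the Fermat prime F = 2^4 + 1 = 17 (illustrative values).
F = 17          # Fermat prime 2^4 + 1
g = 3           # a primitive root modulo 17 (chosen for the demo)

def complicate(seq):
    """Apply the nonlinear output function x -> g^x mod F element-wise."""
    return [pow(g, x, F) for x in seq]

# Injectivity: the 16 distinct inputs 1..16 yield 16 distinct outputs,
# i.e. the map is a permutation of {1, ..., 16}.
inputs = list(range(1, F))
outputs = complicate(inputs)
```

Choosing a different primitive root of F yields a different permutation, which corresponds to the ensemble of distinct nonlinear sequences the abstract describes.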
Apartsev O.R. - Algebra of Generalization theory pp. 65-90


Abstract: The article continues to provide a rationale for a new philosophical point of view on the processes of Life and Consciousness, using Population Dynamics as the main mechanism of evolution of biological, psychic, social, and other systems. The presented study is the first step in developing a mathematical analysis of population dynamics that uses Ambivalent Generalization Theory to build an operational mathematical system for simulations and studies of arbitrary populations. The simplicity and consistency of the Theory make it possible not only to solve qualitative problems in different sciences, but also to build models of natural populations with any level of approximation to the original, if we ignore the limitations of available computing power and the knowability of the initial parameters of these populations. The author purposely does not look for a specific field of application for the new Algebra, in order to focus attention on the methodological part of the universal approach. In fact, the article presents a fairly simple adaptation of Ambivalent Generalization Theory to the language of mathematical objects such as vectors and matrices. The author expects a wide field of application for the new Algebra; if that proves true, it will be a consequence of the generality of Ambivalent Generalization Theory in describing the laws of population evolution. In that case, Ambivalent Generalization Theory will be a step toward giving a mathematical basis to both the biological and the social sciences. To further stress the importance of this direction of development of the Theory of Population Dynamics, the author points out one of the main conclusions of Generalization Theory: that the development of populations is, in itself, a mechanism of world perception, and that the development of the Generalization Theory Algebra may become a new tool for materialistic scientific knowledge.
Goryachev A.V., Novakova N.E. - Network traffic modeling based on the token bucket algorithm pp. 65-79


Abstract: The object of research in this article is a system for simulating network traffic and its optimization. The subject of research is the token bucket algorithm and methods for optimizing network traffic. Particular attention is paid to the network parameters under special control. The problem of traffic management is considered with the aim of ensuring the quality of network service. Dynamic filter models are proposed based on the token bucket algorithm and a multiplexer that supports network quality control. The task of choosing the optimal strategy for controlling the parameters of traffic filters operating by the token bucket algorithm is considered. The main research methodology is simulation modeling. Metaheuristic optimization algorithms such as the genetic algorithm, the harmony search algorithm, and the lifting algorithm are investigated. As a result of the research, a mathematical model for assessing the effectiveness of a network site was developed. A simulation and analytical model of network traffic based on the token bucket algorithm has been developed and implemented. The possibilities of several optimization algorithms are analyzed. Simulation experiments were conducted, resulting in the identification of optimal solutions. The study can be used to solve problems of improving the quality of network services.
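For reference, the token bucket algorithm named in the abstract can be sketched as follows (an illustrative implementation; the rate and capacity values are arbitrary, not taken from the article). Tokens accumulate at a fixed rate up to the bucket depth; a packet is conformant only if enough tokens are available.

```python
import time

class TokenBucket:
    """Minimal token bucket traffic filter (illustrative sketch)."""

    def __init__(self, rate, capacity):
        self.rate = rate            # token fill rate (tokens/sec)
        self.capacity = capacity    # bucket depth (maximum burst size)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, n):
        """Return True if a packet costing n tokens conforms to the profile."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True             # packet conforms: forward it
        return False                # packet exceeds profile: drop or mark it
```

The filter-parameter optimization the authors describe then amounts to searching over `rate` and `capacity` (per filter) with a metaheuristic such as a genetic algorithm, scoring each candidate by the simulated quality-of-service metrics.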
Ivanov K.V., Koshpaev A.A., Vasyaeva N.S. - Software model of PCI Express switch arbitration pp. 66-75


Abstract: The authors study the arbitration system for data flows between the ports of the modern serial PCI Express bus. The article is devoted to the development of a model for that system. For the model, the authors assume that the switching matrix of the switch is non-blocking. The authors describe the principles of operation of the software model, which allows studying the arbitration algorithm and the dependence of flow characteristics on various factors. Using this model, the authors examine the influence of the number of virtual channels and of the unevenness of the load on the input ports of the switch on the amount of buffering memory of the port and on the virtual-channel arbiters. The research method is a computational experiment based on the software model of PCI Express bus arbitration. The model is based on the concepts and algorithms regulated by the official PCI Express protocol specification. The authors present a software model of a multistage arbiter switch for PCI Express, in which the set of arbitration parameters is varied to reflect the real bus load in cluster systems. The modular approach allows modifying the software model and including different priority schemes. The model designed and described in the article may be used in the design of the switch structure, as well as in the configuration of an arbiter, which can be useful when creating cluster systems with external PCI Express switches, which have found practical use relatively recently. The model can also be used in the study of the PCI Express protocol.
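To make the arbitration step concrete, here is a sketch (not the authors' model) of plain round-robin arbitration over virtual-channel queues, one of the basic schemes the PCI Express specification permits alongside weighted round-robin; the queue contents are made-up examples.

```python
def round_robin_arbiter(queues, start=0):
    """Grant the next non-empty virtual-channel queue, searching
    cyclically starting just after the previously granted channel.

    queues: list of per-virtual-channel packet queues (lists).
    Returns (channel_index, packet), or (None, None) if all are empty.
    """
    n = len(queues)
    for i in range(n):
        idx = (start + i) % n
        if queues[idx]:
            return idx, queues[idx].pop(0)
    return None, None

# Three virtual channels; channel 1 was granted last, so the search
# starts at channel 2 and wraps around if needed.
vc = [["a1"], [], ["c1", "c2"]]
granted, pkt = round_robin_arbiter(vc, start=2)
```

A weighted or priority scheme would replace the cyclic search order with one derived from the arbitration table, which is the kind of parameter the authors' modular model lets one vary.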
Mukhametzyanov I.Z. - Identification of the Structure in Computer Simulations of Clusters in Oil Disperse Systems pp. 66-75


Abstract: The subject of the research is the identification of the cluster system as a whole during computer simulation of the cluster-cluster aggregation process. The object of the research is a computer simulation model of the formation/destruction of macromolecular clusters of heavy oil and residual oil products. The researcher examines such aspects of the topic as developing significant indicators for identifying the cluster system in the cluster-cluster aggregation simulation model. The author analyzes two kinds of indicators: average statistical dispersion, which characterizes the homogeneity of clusters, and entropy of the cluster system, which characterizes the order of the cluster system. The research method is based on a numerical experiment involving various control parameters of the model and a subsequent statistical analysis of the indicators under review. Evaluation of the quality of the integral indicators of cluster-system identification is performed on the basis of the minimum criterion of the variation ratio and verification of statistical hypotheses about a significant difference between indicators when the control parameters of the model change. Based on the results of the numerical experiment, a so-called 'divergence ratio' has been identified as the best metric for the statistical dispersion indicator according to the minimum criterion. The statistical dispersion indicator and the cluster-system entropy indicator make it possible to perform a qualitative analysis of the macroscopic structure of oil systems by simulating the growth of clusters as the physical and chemical properties of the disperse system and the technological parameters of the industrial process change.
For numerical experiments simulating the process of thermal cracking and the subsequent thermocondensation of high-boiling fractions of petroleum hydrocarbons, the author has identified technological parameters that trigger the growth of small clusters with dense structures and, conversely, of large clusters with less dense structures. The patterns described by the author are important for the further use of cracking residues as raw material for manufacturing petroleum coke with a specified structure in the carbon industry.
Labkovskaya R.Y., Kozlov A.S., Pirozhnikova O.I., Korobeinikov A.G. - Modeling the dynamics of reed sensors of control systems pp. 70-77


Abstract: The article is devoted to the development of methods and algorithms for finding eigenvalues for analyzing the dynamics and stability of reed sensors. The matrix-topological method, oriented toward computer implementation, uses a model with lumped parameters to describe the oscillatory processes of the beam elements of microsensors and the contact cores of reeds. Replacement of the system with distributed parameters by an equivalent model with lumped parameters is achieved using the Rayleigh method. This method leads to a system with higher stiffness than the original, thereby yielding higher frequencies than the actual ones. The authors developed a matrix-topological model for the frequency analysis of geometrically complex and multilink reed sensors, based on the transition to a lumped-parameter system using the following methods: electromechanical analogies, the Rayleigh-Ritz method, and graph theory. Using one of three designs of reed oscillatory systems as an example, the authors compare the theoretical eigenfrequencies of reed sensors calculated with the matrix-topological method against the corresponding values obtained experimentally with piezoelectric and optical methods of analyzing the frequency spectra of reed sensors. The correctness of the frequency analysis was ensured using Fourier frequency filters.
Mayer R.V. - On the use of computational experiments in the study of physics pp. 74-84


Abstract: The author reviews the problem of using educational computational experiments in the study of physical phenomena. By an educational computational experiment the author means an experiment on a mathematical model of an object, carried out with the aid of a computer for the purpose of learning. A set of simplified versions of educational computational experiments adapted to the conditions of learning forms a system of educational computational experiments. The article analyzes examples of the use of educational computational experiments: 1) for the study of the magnetization of a ferromagnet, including calculation of the magnetization curve and the hysteresis loop; 2) for the study of chaotic oscillations of a Duffing oscillator, the occurrence of bifurcations under changes in the profile of the potential well, the Poincaré section, and the evolution of the phase volume. The author applies mathematical and computational (simulation) modeling, which involves constructing a mathematical model and building software that simulates the phenomenon under study based on the numerical solution of the corresponding system of equations. The novelty of the work lies in the four simple computer programs in Pascal proposed by the author, which allow one to: 1) obtain the magnetization curve and hysteresis loop for a ferromagnet in a changing magnetic field; 2) simulate the oscillations of the Duffing oscillator; 3) study the transition of the oscillator into the chaotic regime under a change of the potential well; 4) obtain a Poincaré section and study the evolution of the phase volume for the Duffing oscillator.
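The article's programs are in Pascal; as a hedged Python analogue (parameter values chosen here for illustration, not taken from the article), the driven Duffing oscillator x'' + d·x' + a·x + b·x³ = F·cos(w·t) can be integrated with the semi-implicit Euler scheme:

```python
import math

def duffing(d=0.3, a=-1.0, b=1.0, F=0.5, w=1.2,
            x=0.1, v=0.0, dt=0.001, steps=100_000):
    """Integrate the driven Duffing oscillator
        x'' + d*x' + a*x + b*x**3 = F*cos(w*t)
    with semi-implicit Euler; a = -1, b = 1 gives the double-well
    potential in which chaotic regimes are commonly demonstrated.
    Returns the trajectory of x."""
    traj = []
    t = 0.0
    for _ in range(steps):
        acc = F * math.cos(w * t) - d * v - a * x - b * x ** 3
        v += acc * dt      # update velocity first (semi-implicit Euler)
        x += v * dt        # then position, using the new velocity
        t += dt
        traj.append(x)
    return traj

traj = duffing()
```

Sampling the state once per drive period (every 2π/w of simulated time) instead of every step would give the Poincaré section mentioned in the abstract.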
Glushenko S.A., Dolzhenko A.I. - Training of a Neural-Fuzzy Network Using a Genetic Algorithm pp. 79-88


Abstract: The authors of the article describe the features and advantages of using a genetic algorithm to train a neural-fuzzy network. The authors review literature sources that consider modifications of the genetic algorithm adapted to various problems. The authors found that the existing approaches to implementing the genetic algorithm have a number of drawbacks for training a neural-fuzzy network: when forming a chromosome, the interval containing the peak of the membership function is encoded as 1, and otherwise as 0, which limits the resolution of the search for optimal solutions. The authors also discuss in detail the operators of the basic algorithm and give a scheme of the combined method of training the network. To train a neural-fuzzy network with a genetic algorithm, the authors propose an approach to coding membership functions based on the α-level. The root-mean-square error was used as the fitness function of the genetic algorithm. The novelty of the research lies in the authors' own approach to coding membership functions based on the α-level, which increases the resolution of the algorithm in the search for the optimal solution. The main conclusion of the research is that the proposed approach makes it possible to adjust the parameters of the membership functions of linguistic variables for the neural-fuzzy network and to obtain more adequate values for the parameters of the network's output layer.
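The genetic-algorithm loop with an RMSE fitness can be sketched as follows. This is a generic illustration, not the authors' α-level encoding: the chromosomes here are plain real-valued parameter vectors, and the toy target (a line fit) stands in for the network's tunable membership-function parameters.

```python
import random

def rmse(pred, target):
    """Root-mean-square error, used as the GA fitness (lower is better)."""
    return (sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)) ** 0.5

def evolve(fitness, n_params, pop_size=30, gens=100, seed=1):
    """Minimize `fitness` over real-valued chromosomes of length n_params."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # elitist truncation selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:               # Gaussian mutation
                i = rng.randrange(n_params)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy stand-in for the network: fit y = 0.5*x + 0.2 with two parameters.
xs = [i / 10 for i in range(11)]
ys = [0.5 * x + 0.2 for x in xs]
fit = lambda p: rmse([p[0] * x + p[1] for x in xs], ys)
best = evolve(fit, n_params=2)
```

In the authors' setting, the chromosome instead encodes the membership functions via their α-level cuts, which is what raises the resolution of the search compared with the 0/1 interval encoding they criticize.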
Evstigneeva O.A., Yuganson A.N., Korobeinikov A.G., Grishentsev A.Y., Kubasheva E.S. - Optimization of Transitional Commutation Processes in Thermal Operations Systems pp. 89-102


Abstract: The subject of the research is a method of reducing the spark load and switching noise by separating the power elements of thermal plants. The object of the research is thermal installations. The aim of the research is to develop a method for separating the power elements of thermal plants in order to reduce the switching currents arising during thermal control. In the course of the research the authors conducted a simulation based on the example of a heating installation. The results of simulating and comparing systems with and without separation of the thermal circuits show the efficiency of the separation method for reducing switching currents while maintaining admissible system characteristics by the cos φ criterion. The authors also offer a method for reducing switching currents during temperature control of thermal installations by moving from a single power element to several, whose power is distributed in proportion to the first terms of the Fibonacci series. The analysis carried out in the research shows the possibility of reducing switching currents by increasing the number of thermal circuits. The possibility of reducing the switching voltages severalfold is demonstrated. The authors propose dividing power between consumers (thermal loops or cryogenic elements) in proportion to the first several terms of the Fibonacci series. A typical side effect of dividing the system into several power circuits is an increase in reactive power. The authors offer a method of returning part of the reactive power in thermal plants by back-connection of the spiral heating elements.
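The Fibonacci-proportional division described above can be sketched in a few lines (the 1000 W total and the circuit count are illustrative numbers, not values from the article):

```python
def fibonacci_shares(total_power, n):
    """Split total_power among n circuits in proportion to the first
    n Fibonacci terms (1, 1, 2, 3, 5, ...), so that switching any
    single circuit commutes only a fraction of the total power."""
    fib = [1, 1]
    while len(fib) < n:
        fib.append(fib[-1] + fib[-2])
    fib = fib[:n]
    s = sum(fib)
    return [total_power * f / s for f in fib]

# A 1000 W heater split into 4 circuits, proportions 1 : 1 : 2 : 3.
shares = fibonacci_shares(1000, 4)
```

Because consecutive partial sums of the Fibonacci series cover a range of intermediate powers, switching combinations of these circuits lets the controller approximate many power levels while never commuting the full load at once.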
Kolesnikov A.G. - Simulation and Optimization of Rigid Pavement Parameters pp. 103-109


Abstract: The subject of the research is a simulation model of pavements in the form of a multilayered construction on a rigid (elastic) base. In his research Kolesnikov pays special attention to simulating the physical behavior of rigid pavement as a set of equations that reflect changes in the stress-strain state of such constructions. The equation system takes into account the geometric nonlinearity of the construction material and makes it possible to analyze the influence of various parameters on the stress and displacement values. Boundary conditions have been chosen to satisfy various types of fixing. The author shows that it is possible to optimize the geometrical parameters of the construction and to minimize the weight and volume of the consumed material. The nonlinear equations that model the behavior of the construction were solved using the Bubnov-Galerkin method. The author used such optimization criteria as the shape and thickness of the construction and the features of the elastic base. The novelty of the research lies in the author's simulation of the rigid pavement surface as a shallow shell on an elastic base with a slight camber of the arch in the middle. This makes the task more complicated; however, it also makes it possible to consider more realistic working conditions. The parameter optimization problem and the solutions offered by the author make it possible to significantly minimize the weight and to reduce the stress in the construction.
Kolesnikov A.G. - The algorithm for determining the optimal parameters of thin-walled spatial structures pp. 191-198


Abstract: The subject of this study is an algorithm for determining the optimal parameters of shallow thin-walled, geometrically nonlinear spatial shells on an elastic base. Main attention is paid to the choice of an optimization algorithm for such structures and its adaptation to the problem of finding the optimal shape of a shallow shell construction, taking the material nonlinearity into account. The paper also demonstrates the possibility of achieving significant savings in the volume (weight) of structures such as thin shallow shells on an elastic base by changing their shape and thickness distribution. The optimization algorithm is based on a modification of the random search method, including a combination of random and gradient search, as well as the method of "ravines". The algorithm for determining the optimal parameters of thin-walled spatial structures can be used to determine the critical forces and stresses for geometrically nonlinear shallow shells on an elastic base with a variable middle-surface shape under various constraints. The novelty of the research lies in the use of a combined method of finding the extremum of nonlinear functions under various restrictions.
Toropov B.A. - Independent cascade model of reposts spreading in online social network pp. 199-205


Abstract: The article focuses on the modeling of information dissemination in online social networks through reposts. The author considers the threshold model and the independent cascade model of information spreading in a network. In the context of the independent cascade model, the author suggests a method of estimating the probability of information spreading for each pair of connected users. The article reviews the visible activity of online social network users that affects the likelihood that a particular user will read an information message published by one of his friends and then repost it. The methodology of the study is related to the mathematical modeling of social processes. To build the model, the author uses elements of set theory and graph theory. The main conclusion is that, of the models considered (the threshold model and the independent cascade model), both of which are special cases of the same generalized model, it is the independent cascade model that most accurately reflects the logic of information spreading in an online social network via reposts. The scientific novelty of the work lies in the fact that, using the results for the independent cascade model, the author provides a method for evaluating the information transmission probabilities for each pair of connected users.
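For reference, the independent cascade model itself can be simulated in a few lines (a standard textbook sketch; the graph and edge probabilities below are made-up illustrations, not the per-pair estimates the author derives). Each newly activated user gets exactly one chance to activate each inactive neighbor, succeeding independently with the edge probability.

```python
import random

def independent_cascade(edges, seeds, rng):
    """Simulate one repost cascade.

    edges: {user: [(neighbor, prob), ...]} — directed influence graph.
    seeds: initially active users (original posters).
    Returns the set of users the cascade eventually reaches.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v, p in edges.get(u, []):
                # One independent activation attempt per edge.
                if v not in active and rng.random() < p:
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return active

graph = {"A": [("B", 0.9), ("C", 0.1)], "B": [("D", 0.9)]}
reached = independent_cascade(graph, {"A"}, random.Random(0))
```

Averaging `len(reached)` over many seeded runs estimates the expected cascade size; the author's contribution is estimating the per-edge probabilities `p` from users' visible activity.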
Stepanov P.P., Ezhov A.K. - On the prospects of using game-theory methods in strategic planning pp. 206-212


Abstract: The object of this study is strategic planning in the management process. The authors note the importance of innovation and strategic planning in forming an innovative economy. The article gives a detailed review of the main tool of strategic analysis: mathematical modeling. The authors conclude that a more flexible mathematical apparatus, such as that of game theory, needs to be used. The authors also note that game theory is a very complex area of scientific knowledge, and that it is especially important to understand the boundaries of its applicability: a simplified interpretation of the apparatus of game theory can be very dangerous. Theoretical analysis has been used as the method of research. Individual aspects, features, and characteristics of mathematical modeling in strategic planning were identified and reviewed. The main result of this paper is the identification and clear delineation of a promising field of research, since the apparatus of game theory applied to strategic planning aimed at the transition to an innovative economy can, in the authors' view, significantly increase the efficiency of state management of the economy and ensure its rapid transition from a resource-production economy to an innovative one.
Levina T.M., Filippov V.N., Nasyrova R.T. - Mathematical model of an information-measuring system for electric current and magnetic field control pp. 292-309


Abstract: The article studies methods of constructing the sets of elements required for the existence and functioning of the developed mathematical model of an information-measuring system for electric current and magnetic field control. The authors research numerical methods for constructing a model of an information-measuring system. On the basis of system analysis, the authors review the specifications of an information-measuring system for electric current and magnetic field control and ways of constructing its mathematical model using numerical methods whose accuracy will reduce the error of the measured values. The study is based on the use of system analysis, statistical optics, systems theory, electromagnetic field transformation theory, and numerical methods. For most tasks, a full implementation of a mathematical model of an information-measuring system is difficult because of the complicated structure of the system components, as well as the presence of factors affecting the operation of the system. In this connection, the authors attempt to determine the relationship between the characteristics and parameters of the process by which the measuring-system component transforms the input signal, taking all factors into account.