Cybernetics and programming
Databases
Melikov A.V. - Multidimensional data organization in survey information systems pp. 1-16

DOI:
10.7256/2306-4196.2014.1.10331

Abstract: The author analyzes the advantages of multidimensional logical data organization in survey information systems and proposes a conceptual model of the questioning process in which the results of expert evaluation are processed through multivariate analysis to support decision-making. A mathematical model is developed for transforming data from its source representation into relational database storage. This representation of the multidimensional data model provides reliable and compact storage of complex information structures and makes it possible to highlight important information during data processing, which increases the efficiency of handling expert information and facilitates the design of an adaptive, integrable and dynamic survey information system on its basis. The work draws on the theory of information processes and systems, database theory, set theory and graph theory. The author develops a new data warehouse structure based on the algebra of tuples, which increases the reliability and informativeness of conclusions drawn from expert interview data by excluding the processing of semantically equivalent information and reducing the number of empty values in hypercube tables. The proposed data warehouse structure also makes it possible to analyze unexpected data not covered by the survey plan, thereby enriching the information obtained from processing expert responses.
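A minimal sketch of the general idea of avoiding empty hypercube cells by storing survey answers sparsely, keyed by dimension coordinates (the dimension and value names here are illustrative assumptions, not taken from the article):

```python
# Sparse "hypercube" of expert survey answers: cells with no answer
# consume no storage, unlike a dense table full of empty values.
class SparseSurveyCube:
    def __init__(self, dimensions):
        self.dimensions = dimensions          # e.g. ("expert", "question", "round")
        self.cells = {}                       # coordinates -> answer value

    def put(self, coords, value):
        if len(coords) != len(self.dimensions):
            raise ValueError("coordinate arity must match the cube dimensions")
        self.cells[coords] = value

    def slice(self, **fixed):
        # Return all cells matching the fixed coordinates, e.g. question="Q1".
        idx = {d: i for i, d in enumerate(self.dimensions)}
        return {c: v for c, v in self.cells.items()
                if all(c[idx[d]] == val for d, val in fixed.items())}

cube = SparseSurveyCube(("expert", "question", "round"))
cube.put(("E1", "Q1", 1), 4)
cube.put(("E2", "Q1", 1), 5)
print(cube.slice(question="Q1"))
```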
Sokol'nikov A.M. - Comparative analysis of approaches to developing database management systems and their architecture for highly loaded web services pp. 1-13

DOI:
10.7256/2306-4196.2014.4.12800

Abstract: In today’s world the problem of processing and storing huge amounts of data is becoming increasingly pressing. Messages in social networks, photos and streaming video together create a high load on server-side software. For this reason, common approaches used in desktop software design may be ineffective, since they do not take into account the load created by a vast number of users. Currently there is no strict definition of a highly loaded system. In most cases the term is used when software fails to operate under some momentary load; there is no specific threshold at which a system can be considered highly loaded, since every application is different and the same number of requests can produce completely different loads on resources. The study of database management systems consisted of several experiments measuring the speed of common database operations such as inserting, selecting and deleting records. Based on the results of these experiments the author draws conclusions and gives recommendations on choosing a database management system. The article reviews approaches to developing highly loaded systems, highlights their features and disadvantages, and shows examples of their use in popular web services such as ВКонтакте, Facebook, Google and Яндекс. The article presents a comparative analysis of the MySQL and MongoDB database management systems. In conclusion the author gives recommendations on selecting a database management system depending on the approach taken to designing the architecture of a highly loaded project.
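A minimal sketch of the kind of timing experiment described, measuring insert, select and delete operations (the standard-library sqlite3 module is used here as a self-contained stand-in for MySQL/MongoDB; the table layout and row counts are illustrative assumptions):

```python
import sqlite3, time

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.4f} s")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
rows = [(i, f"message {i}") for i in range(100_000)]

# Time the three common operations on the same data set.
timed("insert", lambda: (conn.executemany(
    "INSERT INTO messages VALUES (?, ?)", rows), conn.commit()))
timed("select", lambda: conn.execute(
    "SELECT COUNT(*) FROM messages WHERE body LIKE 'message 9%'").fetchone())
timed("delete", lambda: (conn.execute(
    "DELETE FROM messages WHERE id % 2 = 0"), conn.commit()))
```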
Malashkevich V.B., Malashkevich I.A. - Efficient data structure pp. 1-6
Abstract: The efficiency of information retrieval systems depends significantly on the structure of the data. The selected data structure determines the speed of data operations (search, insert, delete) and the required memory cost. Because optimizing data structures is such an important problem, implementations of a variety of data structures and analyses of their effectiveness are well represented in the modern scientific and technical literature. A wide range of known efficient data structures uses the properties of linear arrays and binary trees. The article deals with a special data structure known as a digital trie (trie, as distinct from tree). Search speed in the proposed structure is a statistical value: the worst case is characterized by O(log(N/2)) operations and the average case by O(log(N/2)/2) operations. It also has better memory cost than a traditional digital tree. Thus the authors propose and implement an efficient data structure, a "vertical" digital tree, characterized by high-speed data retrieval and low memory consumption.
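A minimal sketch of a plain binary digital trie (bitwise trie) over fixed-width integer keys, to illustrate the general family of structures discussed; it is not the authors' "vertical" digital tree:

```python
# Bitwise trie: each level consumes one bit of the key, most significant first.
class BitTrieNode:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = [None, None]   # child for bit 0 and bit 1
        self.value = None

class BitTrie:
    def __init__(self, bits=16):
        self.bits = bits
        self.root = BitTrieNode()

    def _walk(self, key, create=False):
        node = self.root
        for i in range(self.bits - 1, -1, -1):
            b = (key >> i) & 1
            if node.children[b] is None:
                if not create:
                    return None
                node.children[b] = BitTrieNode()
            node = node.children[b]
        return node

    def insert(self, key, value):
        self._walk(key, create=True).value = value

    def search(self, key):
        node = self._walk(key)
        return None if node is None else node.value

trie = BitTrie()
trie.insert(42, "answer")
print(trie.search(42))   # -> "answer"
```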
Milushkov V.I., Gatchin Y.A. - Using binary search to optimize data retrieval queries pp. 1-9
Abstract: With the increasing popularity of DBMSs, their use inevitably begins to demand more and more resources. At first it is possible (and, of course, necessary) to lower the load through optimization of the algorithms and/or architecture of the application. But what if everything that can be optimized has already been optimized and the application still cannot cope with the load? This article reviews methods and ways of using binary search to optimize data retrieval queries. The authors give an overview of a PHP + MySQL setup and solve the problem of transferring the queue from fields without indexes to tables with primary keys, which significantly speeds up both the query and the database itself. The proposed solution greatly accelerates the search for the desired item by reducing the search range, at the cost of some accuracy in the computations. For statistical purposes it is not critical if a few elements out of millions are not taken into account. Otherwise, epsilon must be set to zero and the search completed only after reaching the last level of the tree.
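A minimal sketch of the general idea of binary search over an ordered primary key to narrow a retrieval range (the table, column names and the use of an in-memory key list are illustrative assumptions, not the authors' PHP + MySQL code):

```python
import bisect

# Suppose ids is the sorted list of primary keys already loaded from the table
# (e.g. SELECT id FROM queue ORDER BY id). Binary search narrows the range of
# keys whose rows actually need to be fetched, instead of scanning unindexed fields.
def key_range(ids, low_key, high_key):
    lo = bisect.bisect_left(ids, low_key)    # first index with id >= low_key
    hi = bisect.bisect_right(ids, high_key)  # one past the last index with id <= high_key
    return ids[lo:hi]

ids = [3, 8, 15, 21, 42, 57, 91, 120]
print(key_range(ids, 10, 60))   # -> [15, 21, 42, 57]
```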
Belikova M.Y., Karanina S.Y., Karanin A.V., Glebova A.V. - Visualization and analysis of WWLLN network data on the territory of the Altai-Sayan region using Web-GIS pp. 1-8

DOI:
10.25136/2306-4196.2018.2.25405

Abstract: At present, the technology of creating information-analytical systems in the field of climate-ecological monitoring is quite well developed. Such systems are built on GIS and Internet technologies and include both data from monitoring stations and remote sensing data. The article describes the architecture of a web application that implements elements of GIS technology and was developed to solve the tasks of collecting, storing, visualizing, searching and analyzing information on lightning discharges recorded by the World Wide Lightning Location Network (WWLLN). The software and technology platform of the system is based on freely distributed technologies and software, including the Ubuntu operating system, the NGINX web server, Python as the main development language together with the Django framework, the PostgreSQL/PostGIS database, the GDAL library and OpenLayers. The WWLLN archive data and the results of clustering are included in the web-GIS database. The system allows selecting information about lightning discharges and performing cluster analysis on the resulting sample. The developed web GIS can provide specialists with convenient web-based tools for using WWLLN data to study the regional climatology of lightning activity.
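A minimal GeoDjango-style sketch of how lightning-discharge records might be modeled and selected by region and time window; it assumes a configured Django project with django.contrib.gis and a PostGIS database, and the model and field names are hypothetical rather than taken from the article:

```python
from django.contrib.gis.db import models
from django.contrib.gis.geos import Polygon


class LightningDischarge(models.Model):
    occurred_at = models.DateTimeField()
    location = models.PointField(srid=4326)     # WGS84 longitude/latitude


def discharges_in_region(xmin, ymin, xmax, ymax, start, end):
    """Select discharges inside a bounding box and time window."""
    bbox = Polygon.from_bbox((xmin, ymin, xmax, ymax))
    return LightningDischarge.objects.filter(
        location__within=bbox,
        occurred_at__range=(start, end),
    )
```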
Ivanov A.Y. - Pathfinding System for 3D Space pp. 6-11
Abstract: The article is devoted to extending the navigation graph (NG) method to 3D-space pathfinding systems by using numerous NGs, one for each object, instead of a single graph. This method significantly reduces the volume of manual work needed to set up NGs as well as the overall running time of the algorithm without distorting the adequacy of the path being found.
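A minimal sketch of shortest-path search over one navigation graph (plain Dijkstra over an adjacency dictionary; the per-object graph and its node names are illustrative assumptions):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a navigation graph given as {node: [(neighbor, cost), ...]}."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

# One navigation graph per moving object; here a tiny hand-made example.
nav_graph_for_drone = {
    "A": [("B", 1.0), ("C", 2.5)],
    "B": [("D", 1.2)],
    "C": [("D", 0.7)],
    "D": [],
}
print(shortest_path(nav_graph_for_drone, "A", "D"))   # -> (2.2, ['A', 'B', 'D'])
```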
Luchinin Z.S. - A method for maintaining the referential integrity of a document-oriented database pp. 17-22

DOI:
10.7256/2306-4196.2014.1.11081

Abstract: The subject of this study is the document-oriented database, which can store semi-structured information and belongs to the non-relational approach to data storage. The study centers on referential integrity, and the concept is explained not only for relational databases. Based on an analysis of how the relational approach supports referential integrity, the author formulates the task of handling large amounts of data in a distributed environment using document-oriented database technology while taking the structure of the data into account. The article proposes a method for maintaining the referential integrity of a distributed document-oriented database based on evaluating links between documents. The method introduces two types of links between documents, namely strong and weak links, which mimic the strategies for dealing with foreign keys in relational databases. The author substantiates implementing this method as a unit separate from the database management system.
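A minimal sketch of one possible reading of strong and weak links between documents: a strong link blocks deletion of its target (like RESTRICT on a foreign key), while a weak link is simply cleared (like SET NULL). The exact semantics in the article may differ; this is an illustrative assumption.

```python
class DocumentStore:
    def __init__(self):
        self.docs = {}            # doc_id -> {"strong": set(), "weak": set()}

    def add(self, doc_id, strong=(), weak=()):
        self.docs[doc_id] = {"strong": set(strong), "weak": set(weak)}

    def delete(self, doc_id):
        # Refuse deletion while any other document holds a strong link to doc_id.
        holders = [d for d, doc in self.docs.items() if doc_id in doc["strong"]]
        if holders:
            raise ValueError(f"cannot delete {doc_id}: strong links from {holders}")
        # Weak links to the deleted document are silently removed.
        for doc in self.docs.values():
            doc["weak"].discard(doc_id)
        del self.docs[doc_id]

store = DocumentStore()
store.add("order#1", strong=["customer#7"])
store.add("customer#7")
store.add("note#3", weak=["customer#7"])
try:
    store.delete("customer#7")
except ValueError as e:
    print(e)                      # blocked by the strong link from order#1
```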
Raikhlin V.A., Minyazev R.S., Klassen R.K. - The efficiency of a large conservative type DBMS on a cluster platform pp. 44-62

DOI:
10.25136/2306-4196.2018.5.22301

Abstract: The article discusses the results of original research on the principles of organization and the operational features of conservative cluster-type DBMSs. The relevance of the chosen orientation toward large-scale databases is determined by modern trends in the intelligent processing of large information arrays. Growing database volumes require the data to be hash-partitioned across cluster nodes, which necessitates a regular query processing plan with dynamic segmentation of intermediate and temporary relations. A comparative evaluation against the alternative "core-to-query" approach is provided, under the condition that the database is replicated across cluster nodes. A significant part of the article is devoted to a theoretical analysis of GPU acceleration for conservative DBMSs with a regular query processing plan. Experimental studies were carried out on specially developed full-scale models, Clusterix, Clusterix-M and PerformSys, with MySQL at the executive level. The theoretical analysis of GPU acceleration is performed using the proposed Clusterix-G project as an example. The article shows: the behavior of the Clusterix DBMS in dynamics and the optimal architectural variant of the system; the manyfold increase in scalability and system performance in the transition to multiclustering (the Clusterix-M DBMS) or to the advanced "core-to-query" technology (PerformSys); and the non-competitiveness of GPU acceleration compared with the "core-to-query" approach for medium-sized databases that do not exceed the cluster's memory but do not fit into the GPU's global memory. For large-scale databases, a hybrid technology (the Clusterix-G project) is proposed in which the cluster is divided into two parts: one performs selection and projection over a database that is hashed across nodes and compressed, while the other performs joins in the "core-to-query" manner. The functions of the GPU accelerators differ between the two parts. Theoretical analysis showed this technology to be more effective than Clusterix-M, but the question of whether graphics accelerators are advisable within this architecture requires further experimental research. It is noted that the Clusterix-M project remains viable in the Big Data field, as does the "core-to-query" approach given the availability of modern, expensive information technologies.
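A minimal sketch of hash-partitioning a relation's tuples across cluster nodes, the kind of data distribution that a regular query processing plan relies on (the node count, relation and key choice are illustrative assumptions):

```python
def partition_by_hash(tuples, key_index, num_nodes):
    """Distribute tuples to num_nodes buckets by hashing the chosen attribute."""
    buckets = [[] for _ in range(num_nodes)]
    for row in tuples:
        node = hash(row[key_index]) % num_nodes
        buckets[node].append(row)
    return buckets

orders = [(1, "E1", 40.0), (2, "E2", 15.5), (3, "E1", 7.2), (4, "E3", 90.0)]
for node_id, part in enumerate(partition_by_hash(orders, key_index=1, num_nodes=2)):
    print(f"node {node_id}: {part}")
```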
Suchkova E.A., Nikolaeva Y.V. - Developing the Best Possible Data Storage Structure for Decision Support Systems pp. 58-64

DOI:
10.7256/2306-4196.2016.4.18281

Abstract: The article presents the results of the development and experimental comparison of data structures and data storage methods. The models were built around a financial-market decision support system and the expert evaluations of an electronic tendering system. In both cases the authors built conceptual data models, stored the data in text files, relational and non-relational databases, and evaluated the efficiency of each organization in terms of storage and access efficiency, automatic integrity control and data consistency. Using theoretical methods (abstraction, analysis, synthesis and idealization), the authors developed conceptual database models; using empirical methods (experiment and comparison), they checked the efficiency of data storage in text files, relational and non-relational databases. As the main conclusion of the research, the authors provide recommendations on selecting the best data storage structures for electronic decision support systems. The experimental comparison showed that for the developed expert-evaluation storage structure a relational database management system is the most effective option, whereas for the developed decision support system storing information about financial markets it is better to use text files.
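A minimal sketch of the kind of storage comparison described, timing a lookup of the same record kept in a plain text file (sequential scan) and in an SQLite table (primary-key lookup); the record layout, sizes and file names are illustrative assumptions:

```python
import csv, sqlite3, time

# Prepare the same records in a CSV file and in an in-memory SQLite table.
records = [(i, f"expert-{i % 10}", round(i * 0.1, 1)) for i in range(50_000)]
with open("scores.csv", "w", newline="") as f:
    csv.writer(f).writerows(records)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (id INTEGER PRIMARY KEY, expert TEXT, value REAL)")
conn.executemany("INSERT INTO scores VALUES (?, ?, ?)", records)

def lookup_csv(target_id):
    with open("scores.csv", newline="") as f:
        for row in csv.reader(f):          # sequential scan of the text file
            if int(row[0]) == target_id:
                return row
    return None

def lookup_db(target_id):
    return conn.execute(                   # indexed lookup on the primary key
        "SELECT * FROM scores WHERE id = ?", (target_id,)).fetchone()

for label, fn in (("csv scan", lookup_csv), ("sqlite lookup", lookup_db)):
    start = time.perf_counter()
    fn(49_999)
    print(f"{label}: {time.perf_counter() - start:.5f} s")
```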
Lobanov A.A., Filgus D.I. - pp. 63-75

DOI:
10.25136/2306-4196.2018.5.26513

Abstract: