Software systems and computational methods - rubric Quality aspects and improving the margin of reliability of software systems
Quality aspects and improving the margin of reliability of software systems
Tikhanychev O.V. - On clarifying the quality control of software products pp. 1-12

DOI:
10.7256/2454-0714.2022.2.37985

EDN: ZXYEKP

Abstract: Despite extensive experience in the field of control automation, the development of automated systems, including their application software, still faces many problems. With this in mind, the software development process for automated control systems is chosen as the subject of research, and a quality control model for this process as its object. Current legal regulation of software quality control rests on a paradigm under which program quality is checked exclusively for compliance with the requirements of the contract. As practice has shown, this paradigm does not fully meet modern conditions: it provides incomplete quality control, whereas what is needed is verification that programs meet the customer expectations formulated at the system design stage. To find ways to solve the problem, the article uses general scientific methods of analysis. Based on an analysis of currently used methods and models of software testing, proposals for refining the paradigm of software evaluation and control are synthesized. The article formulates a scientific and practical problem and suggests a possible approach to its solution: refining the currently used quality assessment paradigm by moving from a "rigid", preset model to an expanded one that takes into account not only the contractual requirements but also the conditions of their implementation. The novelty of the proposed approach is that solving this task will improve the overall quality of control, raising the safety and effectiveness of programs through the use of an extended dynamic testing model for the software being developed, implemented within the refined quality assessment paradigm.
Keywords: quality assessment model, the quality assessment paradigm, program quality assessment, software quality, software, decision support, automated control system, quality management methodology, the principle of quality assessment, testing programs
Knyazkov V.S., Volchenskaya T.V. - Markov models for performance evaluation of parallel computing pp. 10-27

DOI:
10.7256/2454-0714.2013.1.62445

Abstract: The article discusses a hierarchical approach to analyzing the complexity of implementing parallel computing processes. The authors introduce the concepts of elementary homogeneous and elementary heterogeneous computational processes, on which the analysis of heterogeneous branching parallel processes and heterogeneous computational processes is based. The article shows that complex parallel processes and heterogeneous computational processes can be represented as discrete Markov processes of a special type. This approach makes it possible to reduce such processes to classical discrete Markov processes with a sequential change of states.
Keywords: Software, efficiency, parallel computations, computing, computational structures, Markov models, organization of computations, conveyor computations, computations with parallel objects, optimization of computations
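The reduction described in this abstract, a parallel process represented as a discrete Markov chain with a sequential change of states, can be illustrated with a small numeric sketch. The transition matrix below is invented for illustration and is not taken from the article:

```python
import numpy as np

# Illustrative chain: two transient computation stages and an absorbing
# "done" state; each row gives the probabilities of staying or advancing.
P = np.array([
    [0.3, 0.7, 0.0],   # stage 1: stays with p=0.3, advances with p=0.7
    [0.0, 0.4, 0.6],   # stage 2: stays with p=0.4, finishes with p=0.6
    [0.0, 0.0, 1.0],   # absorbing "done" state
])

# Expected steps to absorption from each transient state:
# t = N·1, where N = (I - Q)^-1 and Q is the transient-to-transient block.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)
expected_steps = N.sum(axis=1)
print(expected_steps)  # expected number of steps from stage 1 and stage 2
```

For this invented matrix, the expected completion time from stage 2 is simply 1/0.6 steps, which the fundamental-matrix computation reproduces.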
Tikhanychev O.V. - On the quality indicators of automated control systems software pp. 22-36

DOI:
10.7256/2454-0714.2020.2.28814

Abstract: The subject of the research is the process of developing software for automated control systems; the object is the quality control system of this process. Regulatory documents establish a list of the main characteristics for assessing program quality which, as practice has shown, does not fully serve its purpose: it provides not quality control but verification that programs comply with the customer's requirements formulated in the terms of reference. One reason is the impossibility of relying exclusively on quantitative quality indicators for systems that include both technical means and a human operator. Attempts to apply world practice, for example the relatively successful quality models of the ISO/IEC 25000:2014 standards, have not yet succeeded: regulatory documents (GOST R ISO/IEC 25010-2015) permit the model itself, but the quality indicators it describes have not been adopted. Partial improvements to existing methods do not solve the problem systematically. The article uses general scientific methods of analysis and synthesis. Based on an analysis of existing approaches to assessing the quality of software development, proposals for improving this process are synthesized. The article formulates a scientific and practical problem and offers an approach to its solution based on refining existing quality assessment methods around the model described in GOST R ISO/IEC 25010, taking into account the real needs of users, interpreted as reducing the likelihood of errors of the first and second kind arising from the use of the software. Solving the formulated problem will increase the overall efficiency of automated control through the use of quantitative and qualitative assessments of the software being developed.
Keywords: quality assessment model, control automation, decision support, qualitative assessments, quantitative methods, quality control, software, automated management system, regulations, program errors
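The abstract's proposal to interpret quality through errors of the first and second kind can be sketched numerically. All figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical test-campaign outcomes: how often the software raised a false
# alarm on clean inputs (error of the first kind) and how often it missed a
# real fault (error of the second kind).
def error_rates(false_alarms, missed_faults, clean_runs, faulty_runs):
    type1 = false_alarms / clean_runs    # first kind: false positive rate
    type2 = missed_faults / faulty_runs  # second kind: false negative rate
    return type1, type2

t1, t2 = error_rates(false_alarms=3, missed_faults=2,
                     clean_runs=150, faulty_runs=50)
print(f"type I: {t1:.3f}, type II: {t2:.3f}")
```

Lower values of both rates would correspond, in the article's terms, to software that better meets the real needs of its users.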
Mikheev I.V., Vishtak O.V., Kondratov D.V. - System of quantitative characteristics of software quality assessment pp. 28-35

DOI:
10.7256/2454-0714.2018.2.25981

Abstract: The subject of the study is the process of teaching programming. Information technologies are closely integrated with various spheres of human activity; their distinguishing features are the rapid pace of development and the constant need to modernize information and technical means already in use. Existing standards and other normative documents apply only to production software and cannot be directly applied to programs developed by students in the learning process; before such documents can be used to assess students' knowledge, the most significant characteristics must be identified and adapted to the specifics of teaching. In the course of the research, quantitative characteristics were singled out: metrics that reveal a student's actual command of the technologies. Using these quantitative characteristics as an integral system, teachers can obtain an objective assessment of a program developed by a student, and such an assessment fully meets the requirements of a graded rating of students' performance. As a result of the analysis, metrics from the group "Metric characteristics" were singled out and described: the program exit code, total program execution time, maximum amount of physical memory used, maximum number of memory pages used, maximum amount of virtual memory used, processor time usage, and total processor time. These can serve as the basis for a software product that tests student programs using a dynamic approach.
Keywords: benchmarking system, quality of the program, program evaluation, automation of knowledge testing, dynamic testing of programs, testing system, student programs, program metrics, pedagogical tool, system of program metrics
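A dynamic-testing harness collecting the metrics this abstract lists (exit code, execution time, processor time, peak memory) might look like the following sketch. The function name and the 10-second timeout are assumptions, and the standard-library `resource` module makes it Unix-only:

```python
import subprocess
import sys
import time
import resource

def run_with_metrics(cmd, stdin_data=""):
    # Run a student's program and collect quantitative quality metrics.
    start = time.monotonic()
    proc = subprocess.run(cmd, input=stdin_data, capture_output=True,
                          text=True, timeout=10)
    wall_time = time.monotonic() - start
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    return {
        "exit_code": proc.returncode,                   # program exit code
        "wall_time_s": wall_time,                       # total execution time
        "cpu_time_s": usage.ru_utime + usage.ru_stime,  # total processor time
        "max_rss_kb": usage.ru_maxrss,                  # peak physical memory
    }

# Demo on a trivial "student program":
metrics = run_with_metrics([sys.executable, "-c", "print(sum(range(10**6)))"])
print(metrics)
```

A grading system could then map each metric onto a scale and combine them into the integral assessment the article describes.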
Kubasheva E.S., Gavrilov A.G. - Methodology of the Web-application quality evaluation pp. 28-34

DOI:
10.7256/2454-0714.2013.1.62446

Abstract: The article offers a methodology for evaluating website quality. The key feature of this methodology is that it is based on governmental standard requirements and covers more quality characteristics than other methodologies. In addition, the proposed methodology makes it possible to evaluate a web project as a whole as well as its separate parts (i.e. pages and sections).
Keywords: Software, quality, web-development, metrics, methodology, standard, script, design layout
Sharipov R.R., Yusupov B.Z. - The research of electrical parameters of threshold detectors pp. 29-47

DOI:
10.7256/2454-0714.2023.3.43682

EDN: ZSVLGS

Abstract: This research provides an in-depth analysis of fire alarm systems, considered as a security tool for a variety of facilities, from industrial buildings to residential premises. Two key subsystems serve as the basis of the study: the fire alarm subsystem, designed to detect and report the occurrence of fire, and the intrusion alarm subsystem, whose task is to detect attempts at unlawful entry. For each subsystem, its functions and components are examined, and its mechanism of action, principles of operation, and possible implementation options are described depending on the specific conditions and security requirements. Emphasis is placed on three types of automatic fire alarm systems: threshold, addressable, and addressable-analog, each with its own features, advantages, and disadvantages. The article goes beyond theoretical analysis and presents the results of a practical study of the three main types of fire alarm systems: wired, wireless, and addressable. The study is based on a specially designed training stand that makes it possible to simulate the operation of fire alarm systems under conditions as close to reality as possible. The article demonstrates the connection schemes of the detectors and explains their states in the "normal" and "alarm" modes. The currents and voltages of the alarm loops were measured in the different modes; diagrams of these dependencies are presented, and the alarm threshold levels are determined. The research makes a valuable contribution to the study and optimization of security and fire alarm systems, providing meaningful data for their development and testing. The paper can be useful for fire and security professionals and anyone interested in improving the performance of these systems.
Keywords: security systems, measurements, voltammetric characteristics, wiring diagram, alarm loop, monitoring device, siren, fire protection system, detector, laboratory bench
Shchemelinin D., Efimov V.V. - Methodology for assessing the intensity of maintenance of a globally distributed computing system pp. 39-47

DOI:
10.7256/2454-0714.2018.4.27842

Abstract: The subject of the research is the restrictions on the intensity with which new software releases can be installed in modern cloud services, which are complex globally distributed computing systems requiring continuous maintenance. The object of the research is the globally distributed cloud computing system of RingCentral (USA). The authors consider in detail the criteria important for business in the transition to a cloud service, including the availability levels of modern information services for globally distributed computing systems, taking into account the growing number of changes in them. The article proposes a new method for experimentally calculating the maximum intensity of software installations and updates in cloud information systems without degrading the quality of information services. As the degree of functional decomposition of cloud systems and the number of servers grow, assessing the intensity of software changes for globally distributed computing systems becomes an urgent issue. The proposed method allowed the authors to maintain RingCentral's global cloud information system efficiently and without interruption of its operation.
Keywords: software deployment, software release cycle, cloud technologies, high availability, resilience, maintenance, distributed IT systems, change management, outage prevention, continuous deployment
Dobrynin A.S., Koynov R.S., Kulakov S.M., Purgina M.V. - On the practical application of flexible mechanisms of software development pp. 45-53

DOI:
10.7256/2454-0714.2018.3.26877

Abstract: The subject of the study is life cycle models and approaches to software development under significant time and financial constraints. A software development workflow using an agile methodology and an iterative life cycle model is considered. The objects of research are modern approaches to software development and ways of increasing labor efficiency through the proposed incentive mechanisms based on customer requirements. The authors also pay attention to the organization of labor in software development. The methods used include system analysis and approaches to collective decision-making under uncertainty, including surveys and expert assessments. The authors propose an incentive mechanism for software developers working under an iterative model of an incomplete life cycle. An important result of the study is the conclusion about the effectiveness of an incentive approach based on the functionality implemented in the current development iteration. The presented approach allows an objective assessment of each developer's contribution to the solution of individual project tasks.
Keywords: unit testing, iterative model, testing, incentive mechanism, development, software, life cycle, agile development, development management, stimulating the developer
Zorchenkov A.M. - Automating third-party library migrations pp. 71-87

DOI:
10.7256/2454-0714.2022.1.34337

Abstract: Manual migration between third-party libraries is a problem for software developers. Developers usually need to study the application programming interfaces of both libraries and read their documentation to find suitable mappings between the replaced and replacement methods. In this article, I present a new machine learning approach (MIG) that recommends mappings between the methods of two library APIs. My model learns from manually curated data on implemented migrations and extracts a set of features based on the similarity of method signatures and text documentation. I evaluated the model on 8 popular migrations compiled from 57,447 open-source Java projects. The results show that the model can recommend appropriate library API mappings with an average accuracy of 87%. The study examines the problem of recommending method mappings when migrating between third-party libraries: the approach recommends method mappings between two unfamiliar libraries using features extracted from the lexical similarity of method names and the textual similarity of method documentation. I evaluated the result by checking how this approach and the three most commonly used alternatives recommend migration method mappings for 8 popular libraries, and showed that the proposed approach achieves much better accuracy and performance than the other three methods. Qualitative and quantitative analysis of the results shows an accuracy gain of 39.51% compared with other well-known approaches.
Keywords: Library documentation, feature engineering, Reverse document frequency, the method of support vectors, Frequency of the term, Natural Language Processing, machine learning, library migrations, extracting information, vector space model
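One of the features this abstract describes, lexical similarity between method names, can be sketched as follows. This is an illustrative reconstruction, not the author's MIG implementation, and the candidate method names are hypothetical:

```python
import re
from math import sqrt

def tokens(name):
    # Split camelCase / snake_case identifiers into lowercase tokens.
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return [p.lower() for p in parts]

def cosine(a, b):
    # Cosine similarity between the token bags of two method names.
    ta, tb = tokens(a), tokens(b)
    vocab = sorted(set(ta) | set(tb))
    va = [ta.count(w) for w in vocab]
    vb = [tb.count(w) for w in vocab]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = sqrt(sum(x * x for x in va)) * sqrt(sum(x * x for x in vb))
    return dot / norm if norm else 0.0

# Rank replacement candidates for a method being migrated:
candidates = ["readFully", "writeString", "readAllBytes"]
ranked = sorted(candidates, key=lambda m: cosine("readBytes", m), reverse=True)
print(ranked[0])  # the lexically closest candidate
```

A full system would combine this score with the documentation-similarity and signature features the abstract mentions before training a classifier on known migrations.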
Afonderkin S.Yu., Gayazov S.E., Ignatov F.V. - The use of distributed processing for data quality control in CMD-3 detector. pp. 124-135

DOI:
10.7256/2454-0714.2016.2.67833

Abstract: CMD-3 is a particle detector operating at the VEPP-2000 electron-positron collider at the Budker Institute of Nuclear Physics. To ensure final quality control of the recorded data and diagnosis of faults in the detector subsystems, a full reconstruction of events is performed by software developed by the CMD-3 collaboration. Depending on the luminosity of the collider, up to 1500 events per second can be detected, which is 30 times higher than the average speed at which a single process can handle them. To improve the efficiency of the data acquisition and detector system as a whole, the speed of reconstruction was increased by distributing the processing across multiple processes so as to keep up with the rates arising at peak luminosity. The article gives an overview of the detector's data acquisition and processing systems and describes the architecture of a distributed version of the software that reconstructs the events recorded by the detector. To find the best architectural solution, methods of system analysis and structural decomposition of the components of the data acquisition and processing systems were used, resulting in a solution that implements the "pipes and filters" pattern. A methodology of imperative parallel programming was applied, and modules for transferring the necessary data between processes were added to the application. The implemented system can process events at the recording speed, provides information on the integral characteristics of the complex, and allows responding in real time to malfunctions in the detector's electronics and software.
Keywords: reconstruction of the events of the detector, rapid analysis, accelerating complex, data analysis, distributed system, increasing the speed of data processing, CMD-3, distributed processing, data processing, quality control
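The "pipes and filters" pattern mentioned in this abstract, with several worker processes sharing the reconstruction load, can be sketched schematically. This is not the CMD-3 collaboration's code; doubling an integer stands in for event reconstruction:

```python
from multiprocessing import Process, Queue

SENTINEL = None  # end-of-stream marker passed down the pipe

def worker(inq, outq):
    # Filter stage: consume raw events, emit "reconstructed" ones.
    while (event := inq.get()) is not SENTINEL:
        outq.put(event * 2)  # stand-in for event reconstruction
    outq.put(SENTINEL)

def run(events, n_workers=4):
    inq, outq = Queue(), Queue()
    procs = [Process(target=worker, args=(inq, outq)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for ev in events:          # pipe stage: feed raw events
        inq.put(ev)
    for _ in procs:            # one sentinel per worker
        inq.put(SENTINEL)
    results, done = [], 0
    while done < n_workers:    # sink stage: collect until all workers finish
        item = outq.get()
        if item is SENTINEL:
            done += 1
        else:
            results.append(item)
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(sorted(run(range(10))))  # processed events, order restored
```

Results arrive in nondeterministic order, so a real event-reconstruction sink would reassemble them by event identifier rather than by sorting values.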
Mironov S.V., Kulikov G.V. - Analysis of the potential methods for software testing without source code pp. 150-162

DOI:
10.7256/2454-0714.2015.2.67097

Abstract: The article considers the contradictions prevailing between the nature of source code vulnerabilities, the safety-requirement limitations of the regulatory and methodological testing framework, and software developers who do not provide source code for testing. Methods of analyzing software products that do not require program source code are widely used abroad but are not yet well known in Russia. The article investigates whether such methods and tools can increase the effectiveness of software certification testing. The authors determine the changes to the regulations needed to open up the possibility of applying source-code-free testing methods in certification tests. Methods used in the study: software engineering, analysis of complex systems, the theory of reliability of complex systems, software synthesis, and software compilation. The paper shows that testing without source code makes it possible to find common software vulnerabilities that cannot be effectively detected under regulatory restrictions requiring the presence of source code. The experience of certification tests for the absence of undeclared features and program backdoors, as well as of independent software testing, makes it possible to determine priority areas for improving the regulatory framework based on applying methods of testing software without source code.
Keywords: evaluation of data protection, software security, detection of undeclared features, detection of software bookmarks, software testing, software certification, software vulnerability, identification of vulnerable programs, signature analysis, testing programs
Boikov S.A. - Expert evaluation of functional completeness of automated information systems for public institutions pp. 166-173

DOI:
10.7256/2454-0714.2014.2.65260

Abstract: The article studies methods of determining the functional completeness of automated information systems implemented in state social institutions. The author defines a list of automated functions that directly affect the efficiency of the institution. The article reviews an expert evaluation technique based on the Delphi method, considering its features as well as Spearman's rank correlation coefficient as a measure of the convergence of expert evaluations. The application of the method is demonstrated on eight different software products for automating the provision of services by public social institutions. For the first time, the article proposes using a Delphi-based expert evaluation technique together with Spearman's rank correlation coefficient for the convergence of expert evaluations to assess the quality and functional completeness of automated information systems implemented in state social institutions. The analysis described in the article shows that applying this technique significantly increases the objectivity of the evaluation through the use of feedback, analysis of the results of previous stages, and their consideration when estimating the significance of expert opinions.
Keywords: functional completeness, expert evaluation, information systems, Delphi method, Spearman coefficient, methods of peer review, the convergence of expert evaluations, the pair correlation coefficient, coefficient of concordance, Kendall coefficient
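Spearman's rank correlation coefficient, used in this abstract as a convergence measure between expert rankings, can be computed directly from the classical formula. The two rankings below are hypothetical:

```python
def spearman(rank_a, rank_b):
    # Classical formula for untied ranks:
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Two experts rank eight software products (1 = best):
expert1 = [1, 2, 3, 4, 5, 6, 7, 8]
expert2 = [2, 1, 3, 5, 4, 6, 8, 7]
rho = spearman(expert1, expert2)
print(round(rho, 3))  # a value close to 1 indicates converging opinions
```

In a Delphi procedure, a low coefficient would trigger another round of anonymous feedback before the evaluations are aggregated.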
Ponachugin A.V., Sokruta V.V. - The Data Storage System as a Factor in Increasing the Efficiency of Information Systems pp. 240-249

DOI:
10.7256/2454-0714.2016.3.68104

Abstract: In this article the authors discuss ways to improve enterprise performance by increasing the speed of access to databases and overall system reliability. They study characteristics of data storage systems such as IOPS (the number of input/output operations per second), the types of storage devices, and how they are connected. The authors also analyze specific scenarios of using data storage systems depending on their process connections and other basic characteristics, as well as their application at the enterprise. They discuss further ways to expand disk space and the actions to be taken when scaling enterprise computing networks, taking into account the emergence of new hybrid storage solutions based on flash memory. The article first discusses types of data storage and compares data storage systems depending on enterprise objectives. The authors also study methods of selecting connections, and the advantages and disadvantages of each, by examining the specifications and features of software and hardware storage systems. The impact of IOPS on the speed of database access is also analyzed, and a table comparing the characteristics of various data storage systems is provided. As a result of the work, the authors identify specific patterns of storage system use for small and medium-sized enterprises. This approach makes it possible to create a flexible solution that does not require complete replacement of equipment when increasing the required disk space and connecting additional servers, thus improving the economic and process efficiency of the enterprise.
Keywords: fault tolerance, network storage, disk arrays, SSDs, hard disks, hypervisors, storage system, servers, client-server, scalability
Malykhin A.Yu., Slyusar’ V.V. - On the possibility of debugging and testing of applications for the Android OS on a sample application for the measurement of characteristics of electric transport pp. 273-281

DOI:
10.7256/2454-0714.2014.3.65643

Abstract: Using a sample application for measuring the characteristics of electric transport, the authors discuss debugging and logging in the Eclipse IDE integrated programming environment used for developing mobile software for the Android operating system. The article reviews a practical example of the classical black-box testing approach applied to this Android application. The authors present the results of a brief study of existing Android-specific information on ways of reporting errors directly to the developer and their practical use. Data were obtained experimentally in the course of software development: getting an incorrect result, followed by testing, correcting the input data and code, and, as a result, producing workable software. The article shows specific software testing methods applicable only to Android OS software, as well as to software that receives and processes data from sensors in electric transport. The article thus covers debugging, logging, white-box and black-box testing, and the sending of bug reports. The authors also present samples of data that can be used to validate the performance of various parts of a program displaying the characteristics of electric vehicles.
Keywords: testing, Android OS, Eclipse IDE, characteristics of electric transport, application, black box, white box, mobile devices, debugging, bug reports
Ponachugin A.V. - Problems of functioning and maintenance reliability of modern computer systems pp. 365-373

DOI:
10.7256/2454-0714.2015.4.67454

Abstract: In the middle of the last century, the problem of reliable functioning and maintenance acquired scientific status, a development driven by the active growth of computer systems. The implementation and support of information systems are long and complicated processes that shape future interactions between users and the system. The article discusses various approaches to allocating and using the time required to support an information system and analyzes the importance of such support at all stages of the system's life cycle. The author reviews different approaches to ensuring reliable operation and maintenance for various classes of information systems and gives a list of recommendations for ensuring the functioning of an information system at different levels of management. The study presents an analysis of existing software and hardware tools and methods related to the maintenance and reliable functioning of information systems. The author concludes that the maintenance of computer systems is equally important at all stages of the life cycle, occupies a considerable share of the information technology department's working time, and is inextricably linked with the reliability indicators of information systems. The author also suggests a technique for using software tools to monitor the characteristics of an information system.
Keywords: durability, life cycle, reliability, soft agents, support of information system, computer systems, infallibility, obsolescence of the information system, design of information systems, automated control systems