Software systems and computational methods - rubric Data encryption and data protection
Data encryption and data protection
Baltaev R.K. - Method of covert information transfer in still images using a chaotic oscillator. pp. 1-7

DOI:
10.7256/2454-0714.2020.2.32359

Abstract: The subject of the research is a steganographic method of embedding information in digital images. Steganography is able to hide not only the content of information but also the very fact of its existence. The paper presents a method of embedding information into digital images and extracting it using a chaotic dynamical system. Chaotic systems are sensitive to certain signals and at the same time immune to noise. These properties allow chaotic systems to be used for embedding information with little statistical and visual distortion of the image. The methodological basis of the study comprises methods of dynamical systems theory, mathematical statistics, and image processing theory. The novelty of the study lies in the development of a new method of embedding information in still images. The author examines in detail the use of the chaotic Duffing dynamical system for embedding information into and extracting it from digital still images. It is shown that the proposed method allows information to be embedded in digital images without significant distortion.
Keywords: information embedding algorithm, image processing, signal detection, digital images, information security, chaotic oscillators, steganography, image distortion, transfer of hidden information, Lyapunov exponent
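As an illustration of the general scheme the abstract describes, the following minimal Python sketch uses a chaotic Duffing oscillator to select pixels for LSB modification. The parameter values, function names, and keying scheme are illustrative assumptions, not the authors' implementation; extraction would regenerate the same chaotic sequence from the shared initial state.

# Minimal sketch (not the authors' method): a Duffing oscillator drives
# pixel selection for LSB embedding in a grayscale image. All parameter
# values and the index-mapping scheme are illustrative assumptions.
import numpy as np

def duffing_sequence(n, x0=0.1, v0=0.0, delta=0.2, alpha=-1.0, beta=1.0,
                     gamma=0.3, omega=1.2, dt=0.01):
    """Integrate x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
    with explicit Euler and return n samples of x."""
    x, v, t = x0, v0, 0.0
    out = np.empty(n)
    for i in range(n):
        a = gamma * np.cos(omega * t) - delta * v - alpha * x - beta * x ** 3
        v += a * dt
        x += v * dt
        t += dt
        out[i] = x
    return out

def embed(image, bits, key_state=(0.1, 0.0)):
    """Embed a bit sequence into LSBs of pixels chosen by the chaotic sequence."""
    flat = image.flatten().astype(np.uint8)
    # Oversample the trajectory, map samples to pixel indices and keep the
    # first occurrence of each index (assumes enough unique positions).
    chaos = duffing_sequence(4 * len(bits), x0=key_state[0], v0=key_state[1])
    idx = (np.abs(chaos) * 1e6).astype(np.int64) % flat.size
    _, first = np.unique(idx, return_index=True)
    positions = idx[np.sort(first)][: len(bits)]
    for pos, bit in zip(positions, bits):
        flat[pos] = (flat[pos] & 0xFE) | int(bit)
    return flat.reshape(image.shape)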
Yur'eva R.A., Komarov I.I., Dorodnikov N.A. - Designing the information interloper model for the multi-agent decentralized control robotic system pp. 42-48

DOI:
10.7256/2454-0714.2016.1.67596

Abstract: The primary objective of the interloper is to keep a swarm of robots from performing their functions by any means, including their destruction. The primary goal of the information security system is to provide a proper level of protection for the swarm of robots against any natural or artificial interference; to do so it is necessary to take into account not only information security but also the physical security of the actors. For the physical security of a wide range of facilities (the authors propose to call them 'facilities under potential interloper's influence', or FPIP), it is very important to prepare a list of potential threats, especially design threats. Such a list makes it possible to design the system needed for the physical protection of a particular facility and to evaluate its efficiency. The interloper model is a constituent element of the design threat, so the development of such a model is a high priority. According to the guideline documents, the interloper model should be based both on the features of the facility and the technological operations performed (stable factors) and on variable factors, i.e. the social conditions of the territory where the facility is located, social relations, psychological traits of a particular group of workers and/or individuals, as well as the global, national, and regional environment, etc. Therefore, the model should take into account all kinds of factors that relate to various aspects of reality and are often divorced from one another. The authors of the present article analyze the proposed interloper models from different points of view. Nevertheless, these models are interconnected, so examination of these connections is one of the goals of the vulnerability analysis. If the obtained characteristics of the operational interloper model are below the desired characteristics of the design interloper model (for example, the number of interloper agents that could be involved in the destructive activity is lower than the number of interlopers whose activities the designed information security system must prevent according to the design project prescribed for the facility, i.e. Z > У), then the facility can be deemed sufficiently hard, and there is no need to perform unscheduled actions (facility vulnerability analysis with ISS performance evaluation, improvement of the ISS, changes in the task execution technology, etc.).
Keywords: information security, multi-agent robotic system, decentralized control, interloper model, modeling, robotic system, destructive influence, swarm robotics, disorganized behavior, protective measures
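The decision rule at the end of the abstract (the facility is sufficiently hard when the operational interloper model does not exceed the design interloper model) can be illustrated with the sketch below; the field names and comparison criteria are assumptions introduced only for illustration, not taken from the article.

# Illustrative sketch of the hardness decision rule: the facility is
# considered sufficiently hard if every capability of the operational
# interloper model stays within the design interloper model.
from dataclasses import dataclass

@dataclass
class InterloperModel:
    agents: int           # number of agents the interloper can involve
    skill_level: int      # qualitative skill score
    insider_access: bool  # whether insider assistance is assumed

def facility_is_sufficiently_hard(design: InterloperModel,
                                  operational: InterloperModel) -> bool:
    """True if the operational model does not exceed the design model,
    i.e. no unscheduled vulnerability analysis or ISS upgrade is needed."""
    return (operational.agents <= design.agents
            and operational.skill_level <= design.skill_level
            and (design.insider_access or not operational.insider_access))

# Example: the design model tolerates 5 agents, the operational estimate is 3.
design = InterloperModel(agents=5, skill_level=3, insider_access=True)
operational = InterloperModel(agents=3, skill_level=2, insider_access=False)
print(facility_is_sufficiently_hard(design, operational))  # True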
Rebrun I.A., Borodina N.I. - Automatic evaluation of the indistinct attributes of graphic objects. pp. 53-59
Abstract: In this article the authors discuss ways to automate tezigraphy, a crystal-growing method of diagnosis. It has long been known that biological fluids crystallize when dried, and under pathological conditions the crystallization properties of biological fluids change. Studies have shown that this method is very sensitive and can be used at a preclinical stage to diagnose viral and infectious diseases. To determine the nature of the disease it is necessary to analyze the obtained images of biocrystals, describe their shape, and determine the distribution over a number of attributes. The article gives algorithms and methods for the automated evaluation of image attributes on samples obtained during tezigraphy studies. It investigates a segmentation algorithm based on edge detection with filtering, an algorithm for calculating areas and detecting attributes of form, and algorithms of modeling and fractal analysis for tree-like and other complex crystallograms. The authors present a program that makes it possible to obtain estimates of the distribution of the crystals by area, perimeter, and other characteristics, as well as statistical data for a variety of crystals grown on the same surface.
Keywords: software, tezigramma, biochip, segmentation, sign, analysis, image, form, object, saliva
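The kind of processing the abstract describes can be sketched as edge-based segmentation of a crystallogram image followed by per-object area and perimeter statistics. The thresholding choices and the use of scikit-image below are illustrative assumptions, not the authors' implementation.

# Sketch: segment a grayscale crystallogram by edge detection, then collect
# area and perimeter statistics over the detected objects.
import numpy as np
from skimage import filters, measure

def crystal_statistics(gray_image: np.ndarray) -> dict:
    edges = filters.sobel(gray_image)               # edge detection
    mask = edges > filters.threshold_otsu(edges)    # binarize strong edges
    labels = measure.label(mask)                    # connected components
    regions = measure.regionprops(labels)
    areas = [r.area for r in regions]
    perimeters = [r.perimeter for r in regions]
    return {"object_count": len(regions),
            "mean_area": float(np.mean(areas)) if regions else 0.0,
            "mean_perimeter": float(np.mean(perimeters)) if regions else 0.0}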
Lyapustin A., Kolesnikova S., Mel'nik D. - Security Model Based on a Multi-Agent System pp. 81-90

DOI:
10.7256/2454-0714.2018.3.26575

Abstract: The article is devoted to the urgent problem of ensuring the security of heterogeneous information platforms using a multi-agent threat detection system. The object of the study is a multi-agent platform. The authors pay special attention to such aspects of the topic as the security of multi-agent platforms, the management of threat detection agents, the interaction between different threat detection agents, and vulnerabilities in multi-agent platforms. Trends in the development of new distributed security models are considered. The paper presents a multi-agent architecture for security services, as well as a general scenario for deploying security in multi-agent threat detection systems. The proposed security model applies a multi-agent approach that gives the sender control over transmitted data by exploiting the mobility and extensibility of agents. The multi-agent protection model for heterogeneous information platforms offers an improved approach to online communications, providing flexible mechanisms for the comprehensive protection of heterogeneous information platforms that can satisfy various security requirements and give the sender better control over the data.
Keywords: threat detection agent, heterogeneous information platforms, multiagent platform, intelligent protection system, threat detection, information security, analysis systems, detection algorithms, agent platform, multi-agent system
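A minimal sketch of how threat-detection agents might report to a coordinating platform in an architecture of this kind is given below; the class and method names, the toy detection rule, and the escalation policy are assumptions for illustration, not the authors' design.

# Sketch of a multi-agent threat-detection architecture: detection agents
# report suspicious events to a coordinating security platform.
from dataclasses import dataclass
from typing import List

@dataclass
class Alert:
    agent_id: str
    description: str
    severity: int  # e.g. 1 (low) .. 5 (critical)

class SecurityPlatform:
    def __init__(self) -> None:
        self.alerts: List[Alert] = []

    def report(self, alert: Alert) -> None:
        self.alerts.append(alert)
        if alert.severity >= 4:
            print(f"[{alert.agent_id}] escalating: {alert.description}")

class ThreatDetectionAgent:
    def __init__(self, agent_id: str, platform: SecurityPlatform):
        self.agent_id = agent_id
        self.platform = platform

    def inspect(self, event: dict) -> None:
        # Toy rule: repeated failed logins are treated as a possible threat.
        if event.get("failed_logins", 0) > 3:
            self.platform.report(Alert(self.agent_id,
                                       "possible password guessing", severity=4))

platform = SecurityPlatform()
agent = ThreatDetectionAgent("node-1", platform)
agent.inspect({"failed_logins": 5})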
Voytyuk T.E., Zudilova T.V., Tsymzhitov G.B. - Protection against password guessing using two-factor authentication pp. 173-183

DOI:
10.7256/2454-0714.2016.2.67837

Abstract: Two-factor authentication is required to establish a secure connection when a remote user tries to connect to corporate web services. Authentication is a prerequisite for web services that process confidential information, and two-factor authentication is a way to improve corporate information security. There are many ready-made solutions for implementing a two-factor authentication system, but they have several disadvantages, such as high cost or difficult integration into the existing corporate information infrastructure. The aim of this study is to define a system architecture that overcomes these disadvantages. To design a protection system against password guessing, the authors first used static analysis to justify the demand for systems of this type. The authors also used a data analysis method to determine the requirements for the two-factor authentication system; an experiment confirmed the results of the research. The presented architecture provides protection from password guessing, does not depend on additional hardware or software, and has a modular structure, which gives the advantage of scalability. The architecture defines advanced functionality for such systems: determining the geographic location of real IP addresses, and address filtering based on geolocation and proxy addresses using POST requests. It also allows building modules that can be easily integrated with the existing enterprise infrastructure. The results of using the proposed system show that the percentage of intruders accessing the corporate information system is reduced.
Keywords: information security, password guessing, single sign-on technology, service-oriented architecture, control permissions, one-time password, authentication code, two-factor authentication, secure connection, web service
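A second authentication factor of the kind discussed in the abstract can be sketched with a time-based one-time password (RFC 6238 style) combined with a simple address filter. The sketch uses only the Python standard library; the geolocation lookup and the blocked-country policy are placeholders, not a real service call or the authors' filtering logic.

# Sketch: verify a login using password status, a TOTP code, and an IP filter.
import hmac, hashlib, struct, time

def totp(secret: bytes, period: int = 30, digits: int = 6, at=None) -> str:
    counter = int((time.time() if at is None else at) // period)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

BLOCKED_COUNTRIES = {"XX"}  # placeholder policy

def lookup_country(ip: str) -> str:
    return "RU"  # placeholder: a real system would query a geolocation service

def allow_login(ip: str, password_ok: bool, submitted_code: str,
                secret: bytes) -> bool:
    if lookup_country(ip) in BLOCKED_COUNTRIES:
        return False
    return password_ok and hmac.compare_digest(submitted_code, totp(secret))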
Prokhozhev N.N., Mikhaylichenko O.V., Bashmakov D.A., Sivachev A.V., Korobeynikov A.G. - Study the effectiveness of statistical algorithms of quantitative steganalysis in the task of detecting hidden information channels pp. 281-292

DOI:
10.7256/2454-0714.2015.3.67272

Abstract: Countering hidden channels of information transmission is an important task in the organization of information security. One kind of passive countermeasure is the detection of steganographic modification of the investigated container. The widespread use of digital still images as stego containers is due to their large share of total data traffic. The task of passive counteraction (steganalysis), i.e. identifying digital images with embedded information, is essentially a binary classification problem. At the core of the classifier lies a statistical algorithm of quantitative steganalysis that estimates the number of modified pixels in the container. The accuracy of this algorithm directly affects the quality of classification and the practical effectiveness of passive counteraction as a whole. By effective counteraction the article means the ratio between the probability of a true positive classification and the probability of a false positive classification. Currently there are many statistical algorithms for quantitative steganalysis; however, there are no comparative studies of them, which complicates the choice of an algorithm when countering steganographic channels of information leakage. The practical effectiveness of passive counteraction to steganographic channels based on embedding in the least significant bits of the pixels of a digital image also remains an open question. The subject of the study is the effectiveness of modern statistical algorithms of quantitative steganalysis. Based on the results of the study, the authors have plotted confidence regions that allow a comparative assessment of the effectiveness of passive counteraction to LSB steganography. For the study the authors selected the following steganalysis algorithms: RS analysis, sample pair analysis, difference image histogram, triples analysis, and weighted stego-image. From a set of test images an image is selected and its capacity (defined by the maximum payload) is evaluated; in the experiments this value is taken to be the total number of pixels in the image. The steganographic effect is modeled by changing the value of the least significant bit for a predetermined number of pixels (the payload). The modified image is used as input to a particular implementation of a steganalysis algorithm, whose output is the estimated number of changed pixels in the image. The experiments were carried out under the same conditions for all implementations of the steganalysis algorithms. The main conclusion of the study is that modern statistical steganalysis algorithms make it possible to organize effective passive counteraction to channels with LSB steganography when the embedded payload exceeds 5% of the container. Reducing the payload below 5% dramatically reduces the effectiveness of passive counteraction. A small 600x400 pixel image carrying a payload of 1-2% is practically undetectable by classifiers based on statistical quantitative steganalysis algorithms. Taking into account the possibility of pre-compression of hidden data and matrix embedding, the considered modern steganalysis algorithms need further improvement.
Keywords: steganalysis algorithm, weighted stego-image, difference histogram analysis, sample pair analysis, LSB-based steganography, statistical quantitative steganalysis, steganography, digital watermark, still images, statistical analysis algorithms
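The experimental procedure described in the abstract (flip the LSBs of a given fraction of pixels, run a quantitative estimator, record its error) can be sketched as follows. The estimator passed in is a trivial stand-in; RS analysis, sample pair analysis, and the other cited algorithms would be plugged in instead, and the seed and fractions shown are arbitrary.

# Sketch of the comparison experiment: embed a known payload by flipping LSBs
# of randomly chosen pixels, then compare the estimator's answer with the
# true embedding rate.
import numpy as np

def embed_lsb(image: np.ndarray, payload_fraction: float,
              rng: np.random.Generator) -> np.ndarray:
    stego = image.copy()
    n = int(payload_fraction * stego.size)
    positions = rng.choice(stego.size, size=n, replace=False)
    bits = rng.integers(0, 2, size=n, dtype=np.uint8)
    flat = stego.reshape(-1)                 # view into the copy
    flat[positions] = (flat[positions] & 0xFE) | bits
    return stego

def run_experiment(image: np.ndarray, estimator, payload_fractions):
    rng = np.random.default_rng(0)
    results = []
    for p in payload_fractions:
        stego = embed_lsb(image, p, rng)
        estimated = estimator(stego) / stego.size   # estimated embedding rate
        results.append((p, estimated, abs(estimated - p)))
    return results

# Usage with a placeholder estimator that always answers zero:
# image = np.asarray(Image.open("test.png").convert("L"), dtype=np.uint8)
# print(run_experiment(image, estimator=lambda img: 0,
#                      payload_fractions=[0.01, 0.05, 0.2]))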
Savinov A.N., Merkushev O.Yu. - Protection of biometric access control subsystems pp. 335-343

DOI:
10.7256/2454-0714.2013.4.63908

Abstract: The article discusses a zero-knowledge authentication protocol based on a biometric fuzzy extractor and the ElGamal cryptosystem. The authors review the advantages, disadvantages, and practical aspects of applying this protocol. The article describes the types of biometric cryptographic systems (key release, key binding, and key generation cryptosystems), provides their brief descriptions, and reviews possible attacks. The authors state that there are two ways of generating a biometric key from biometric data that meet the requirements of modern cryptography while having a low probability of type II errors. One of the major factors determining the level of security of a key-based information infrastructure is the effectiveness of its access control subsystem. The authors propose a zero-knowledge biometric authentication protocol. The key element of the protocol's reliability is the single use of a session key "k". The main advantage of the presented protocol is that there is no need to store confidential user data on the side of the access control subsystem.
Keywords: biometric cryptographic system, fuzzy extractor, Elgamal cryptosystem, protection of biometric subsystems, access control, biometric authentication protocol, session key, reliability, authentication, threats
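The ElGamal primitive named in the abstract can be illustrated with the toy sketch below. The group parameters are deliberately tiny and insecure, and the fuzzy extractor that would derive the private key from a noisy biometric template is reduced to a placeholder hash; none of this reflects the authors' protocol.

# Toy ElGamal sketch with a placeholder "fuzzy extractor". Not secure and
# not the protocol from the article; parameters are for illustration only.
import hashlib, secrets

p = 2 ** 127 - 1          # toy prime modulus (a Mersenne prime), NOT secure
g = 3                     # assumed generator, for illustration only

def fuzzy_extract(biometric_template: bytes) -> int:
    """Placeholder: a real fuzzy extractor tolerates noisy biometric inputs."""
    return int.from_bytes(hashlib.sha256(biometric_template).digest(), "big") % (p - 1)

def keygen(biometric_template: bytes):
    x = fuzzy_extract(biometric_template)   # private key derived from biometrics
    return x, pow(g, x, p)                  # (private, public)

def encrypt(public_y: int, m: int):
    k = secrets.randbelow(p - 2) + 1        # fresh session randomness per message
    return pow(g, k, p), (m * pow(public_y, k, p)) % p

def decrypt(private_x: int, c1: int, c2: int) -> int:
    return (c2 * pow(c1, p - 1 - private_x, p)) % p

x, y = keygen(b"enrolled biometric sample")
c1, c2 = encrypt(y, m=42)
assert decrypt(x, c1, c2) == 42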
Gorokhov V.G., Syuntyurenko O.V. - Technological risks: information aspects of public security pp. 344-353

DOI:
10.7256/2454-0714.2013.4.63909

Abstract: The definition of technological risks differs not only across different areas of technology but also within a single field. The problem of technological risks in information technologies in the modern knowledge society acquires a strongly pronounced social character. Informatization and the convergence of computer, telecommunication, and multimedia technologies provide a fundamentally new level of civilizational development, increasingly affecting human life and society. Is this influence clearly positive? Does it help find a path to the sustainable development of civilization? Does the development of information and communication technologies contain new sources of instability and threats? All of these questions require special studies and discussions. The contemporary knowledge society is fundamentally ambivalent. Modern society becomes a field of permanent experimentation with new information technologies, the consequences of which can be not only positive but also negative, both for society as a whole and for its individual citizens.
Keywords: technological risks, the knowledge society, information security, information technologies, negative consequences, expertocracy, technocracy, computer science, information society, internet
Yur'eva R.A., Komarov I.I., Maslennikov O.S. - Development of the method for detection and identification of hidden destructive impact on the multi-agent robotic systems pp. 375-382

DOI:
10.7256/2454-0714.2016.4.68454

Abstract: Increased information security risks in multi-agent robotic systems create a need to assess both new and known algorithms in terms of security. Such risks include the loss or inaccessibility of data, the spreading of false information about the purpose of the grouping, the use of distorted information, and a lack of energy resources. The authors note that common approaches to information security in multi-agent robotic systems have not yet been formed. The research is aimed at developing a model of information security in multi-agent robotic systems that takes into account the specifics of the technology. The existing scientific and methodological apparatus and technical solutions for ensuring information security in multi-agent systems are not applicable to multi-agent robotic systems because of the specifics of the technology and the special types of threat models and offender models associated with them. Existing methods of providing information security for multi-agent systems do not offer a comprehensive solution to the information security problems of multi-agent robotic systems, since they do not take into account the specifics of their composition and structure. The scientific novelty consists in the development of software models of information security risks in multi-agent robotic systems. An advantage of the method is the ability to detect new types of attacks without modifying or updating the model parameters, since an intrusion by the offender can be described as a deviation from nominal behavior.
Keywords: hidden destructive effect, identification of the destructive impact, feature space, informative signs, information attack, data protection, swarm intelligence, information security model, decentralized control, multi-agent robotic system
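The core idea of the abstract, detecting a destructive impact as a deviation from the nominal behavior of the agents, can be sketched as a distance-from-baseline check over a feature vector. The features, the z-score criterion, and the threshold below are illustrative assumptions, not the authors' feature space.

# Sketch of deviation-from-nominal detection: an agent's feature vector
# (e.g. message rate, energy use) is compared against a baseline learned
# in the absence of attacks.
import numpy as np

def fit_nominal(history: np.ndarray):
    """history: (n_samples, n_features) collected under nominal operation."""
    return history.mean(axis=0), history.std(axis=0) + 1e-9

def is_destructive_impact(sample: np.ndarray, mean: np.ndarray,
                          std: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag the sample if any feature deviates more than `threshold` sigmas."""
    z = np.abs((sample - mean) / std)
    return bool(np.any(z > threshold))

nominal = np.random.default_rng(1).normal(loc=[10.0, 0.5], scale=[1.0, 0.05],
                                          size=(500, 2))
mean, std = fit_nominal(nominal)
print(is_destructive_impact(np.array([10.4, 0.52]), mean, std))  # False
print(is_destructive_impact(np.array([25.0, 0.50]), mean, std))  # True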
Goryainov S.I. - Rebuilding of binary trees in Huffman algorithm pp. 464-471

DOI:
10.7256/2454-0714.2014.4.65864

Abstract: The subject of this study is the time required to perform a full rebuilding of a binary tree, as well as the degree of text compression in the Huffman algorithm. The author determines how both the program execution time and the level of text compression depend on the length of a string formed from a random set of unique symbols, on the length of a string consisting of a fixed set of unique symbols, and on the number of unique symbols in a string of fixed length. It is shown that the time required to rebuild the binary tree is a small part of the total program execution time. The algorithm for constructing the character codes comprises the following steps: 1) reading the text from a file; 2) counting the different symbols of the text; 3) filling and sorting the data array; 4) building the binary tree. Some sources state that the approach with full rebuilding of the binary tree is ineffective; however, that statement is not supported by relevant facts. Through the analysis of texts of varying length and with different sets of unique characters, presented in tabular and graphical form, the author shows that rebuilding the binary tree has little effect on the program execution time.
Keywords: Huffman algorithm, binary tree rebuilding, compression of text data, program execution time, thrifty information encoding, prefix encoding, unique symbols, the degree of text compression, length of input string, average approximation error
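The steps listed in the abstract (read the text, count symbols, order them by frequency, build the binary tree) map directly onto the standard Huffman construction. The sketch below uses a heap rather than the full tree-rebuilding strategy studied in the article, so it illustrates the underlying algorithm only, not the author's method.

# Sketch of Huffman code construction: count symbol frequencies, then
# repeatedly merge the two least frequent nodes into a binary tree.
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    freq = Counter(text)                       # step 2: count symbols
    # Each heap entry: (frequency, tie-breaker, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    if not heap:
        return {}
    heapq.heapify(heap)                        # step 3: order by frequency
    tie = len(heap)
    while len(heap) > 1:                       # step 4: build the binary tree
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (left, right)))
        tie += 1
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):            # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                  # leaf: a single symbol
            codes[tree] = prefix or "0"
    walk(heap[0][2])
    return codes

print(huffman_codes("abracadabra"))  # e.g. {'a': '0', 'r': '111', ...}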