New method for assets sensitivity calculation and technical risks assessment in the information systems
2019; Institution of Engineering and Technology; Volume: 14; Issue: 1; Language: English
10.1049/iet-ifs.2018.5390
ISSN: 1751-8717
Authors: Mohammad Shakibazad, Ali Jabbar Rashidi
Topic(s): Advanced Malware Detection Techniques
IET Information Security, Volume 14, Issue 1, pp. 133-145. Research Article. First published: 01 January 2020.
Mohammad Shakibazad (corresponding author, shakibazad@gmail.com), Information Technology Engineering, Security Branch, Malek Ashtar University of Technology, Tehran, Iran; Ali Jabbar Rashidi, Department of Electrical Engineering, Malek-Ashtar University of Technology, Tehran, Iran.

Abstract

One of the most important components of an information security management system is the risk assessment process. Information technology system risks have a direct impact on the mission of organisations. Risk assessment allows organisations to identify weaknesses and security threats, and to adopt appropriate solutions to deal with risks. Risk identification and assessment is the most important and complex part of the risk management process. In this study, a method has been presented to assess technical risks with regard to the sensitivity of each of the assets. In this research, the cyber battlefield framework has been presented to analyse the assets' sensitivity and then to determine the risk of each. The cyber battlefield contains exact information about the cyber environment, including a vulnerability knowledge repository, the tangible and intangible components of the cyber environment, and the relationships between them. Cyber-attacks are performed using vulnerabilities in the cyber environment's components, so the present study focuses on providing a method to determine the risk due to these vulnerabilities. Considering the cost of risk treatment, the risks have been prioritised.

1 Introduction

Information is a perennially significant business asset in all organisations. Therefore, it must be protected like any other valuable asset. This is the objective of information security, which provides such protection for a company's information assets as well as for the company as a whole. One of the best ways to address information security problems in the corporate world is to apply a risk-based approach. It is predicted that by 2020 there will be 50 billion internet-connected devices, including fixed communications, mobile communications, computers, consumer electronic devices, medical devices, industrial devices, and automotive devices [1].
The 'internet of things' is bringing about a worldwide culture in which citizens are entwined in a digital world. While cyberspace brings advantages to citizens, businesses, and governments, it also introduces risk from a security and privacy perspective. Cyber risks can lead to financial loss, reputational damage, and even the violation of human rights, whether through ignorance on the part of the user or organisation or as a result of targeted cybercrime. Risk assessment provides a systematic way for the organisation to obtain a comprehensive view of existing information security risks and their consequences, and of the countermeasures to deal with them [2]. Note that mistakes in risk assessment can be dangerous and costly. Underestimating the risks can leave the organisation vulnerable to severe threats, whereas overestimating them may lead to useful IT services and technologies being withdrawn [3]. After some 15 years of expertise in cyber security, one of the main interests in this field remains risk assessment [4]. Risk analysis identifies the organisation's valuable information assets and their vulnerabilities, and reveals the threats that may take advantage of those vulnerabilities and put the organisation at risk. Finally, it estimates the possible damage and potential losses resulting from those risks [5]. The purpose of risk identification is to determine the consequences of a potential loss, the point where this loss may occur, and its cause. The activities performed at the risk identification stage are the identification of assets, threats, vulnerabilities, consequences, and existing controls [6]. When a white hat researcher discovers a vulnerability, the next transition is likely to be internal disclosure leading to patch development. After being notified of a discovery by a white hat researcher, software vendors are given a grace period, typically 30 or 45 days, for developing patches [7, 8]. Shameli-Sendi et al. [9] presented a taxonomy of intrusion response systems and a review of existing systems; their work helps the prediction and autonomic response components determine the probability of a detected abnormality. Risk assessment may be static or dynamic. Static assessment assigns a value to every system resource; it is done offline and is not suitable for real-time assessment. Dynamic assessment is a real-time process that returns a risk index for critical system resources; in response systems, it minimises the performance penalty due to attacks [10] (a sketch contrasting the two appears at the end of this paragraph). Studies show that the time gap between public disclosure and exploit is getting smaller [11]. The Norwegian Honeynet Project found that the time from public disclosure to the exploit event has a median of 5 days (the distribution is highly asymmetric). When an exploit script becomes available, whether disclosed to a small group of people or to the public, it increases the probability of exploitation. Alternatively, the vulnerability could be patched. Usually, public disclosure is the next transition right after patch availability. When the patch is flawless, applying it ends the life of the vulnerability, although sometimes a patch can inject a new fault [12, 8]. Frei [13] found that 78% of examined exploitations occur within a day, and 94% of them occur within 30 days of the public disclosure day. In addition, he analysed the distribution of discovery, exploit, and patch times with respect to the public disclosure date, using a very large dataset.
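To make the static/dynamic distinction concrete, the following minimal sketch (a hypothetical illustration, not the paper's method; the asset values, weights, and names are assumptions) fixes static values per resource and recomputes a dynamic risk index from live inputs:

# Hypothetical sketch: static vs. dynamic risk scoring.
# Asset values and the multiplicative rule are illustrative assumptions.

STATIC_ASSET_VALUES = {          # static assessment: fixed, offline values
    "db-server": 0.9,
    "web-server": 0.6,
    "workstation": 0.3,
}

def dynamic_risk_index(asset: str, threat_level: float, exposure: float) -> float:
    """Dynamic assessment: recompute a risk index in real time from the
    asset's static value and live inputs (threat level and current
    exposure), each taken in [0, 1]."""
    base = STATIC_ASSET_VALUES.get(asset, 0.1)
    return base * threat_level * exposure

# The same asset yields different risk indices as network conditions change.
print(dynamic_risk_index("db-server", threat_level=0.8, exposure=0.5))  # ~0.36
print(dynamic_risk_index("db-server", threat_level=0.2, exposure=0.5))  # ~0.09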
The current information security risk assessment approaches are classified under three headings: quantitative, qualitative, and hybrid (semi-quantitative) appraisals [14-17]. The major problem with quantitative appraisals is the lengthy and time-consuming process, which depends on detailed information [18, 6]. Qualitative appraisals are widely used, since there are often not enough accurate historical data to calculate the impact and probability of occurrence of risk scenarios, and also because they are much easier to understand and implement [19]. Moreover, they are based on knowledge and experience. Accurate analysis can be made on the basis of quantitative data, provided the data have clear semantics. Although quantitative data are more precise than qualitative data, the latter are often more descriptive but harder to compare, as both the syntax and the semantics might be unclear [20]. Thus, a quantitative risk assessment model is preferable. In this paper, the quantitative approach has been used for risk assessment and ranking. Implementation and investigation of the proposed method have been shown on a sample network. One of the limitations is the lack of datasets [21]. To create the dataset and knowledge base needed to examine the algorithms, the framework and process presented in Figs. 1 and 2 were used.
Fig. 1: Macro-architecture of technical risk assessment.
Fig. 2: Process of creating a vulnerability knowledge base.
The issues discussed and solved in this study are as follows: (a) an architecture for technical risk assessment was provided; (b) vulnerable points of the software and hardware of the cyber environment were automatically identified and prioritised; (c) a vulnerability knowledge base and its dynamic updating module were created; (d) the issue of implementing penetration testing in organisations was solved; (e) algorithms for sensitivity factor calculation were provided; (f) algorithms for risk assessment were provided. This paper is organised as follows: in Section 2, related work is discussed; in Section 3, the architecture of technical risk assessment is regarded; in Section 4, the sample network specifications, the process of creating a vulnerability knowledge base, and the hierarchy structure of the information items existing in the model are discussed; in Section 5, sensitivity factor calculation for zone, mission, user type, vulnerability, service, host, and users is expressed; in Section 6, risk assessment algorithms for service, host, zone, and network are stated; and finally, Section 7 concludes the paper.

2 Related work

One of the known methods to assess the security level and reduce security risks is the penetration test. A pentest is a controlled attempt to penetrate assets on a network in order to identify vulnerabilities. A penetration test applies the same techniques used in a regular attack by a hacker. This allows appropriate measures to be taken to eliminate the vulnerabilities before they can be exploited by unauthorised people. Regarding the activities and criteria of penetration testing, there are several issues that have to be taken into consideration, such as legal implications and the type of information being accessed [22].
One of the issues and requirements of this area is the identification of vulnerable points of the software and hardware of the cyber environment. There are challenges to implementing penetration tests in organisations: the possibility of disrupting service delivery, the difficulty of placing complete trust in the companies conducting the test, and the cost. Hence, it is essential to provide an alternative solution to identify the vulnerabilities and risks existing in the cyber environment. This issue has been solved in this study.

Risk assessment: In [23], attack graphs and Bayesian networks were used for risk assessment. That work proposed an attack graph-based probabilistic security metric to quantify the overall security of a network system, with the attack graph representing the causal relationships between vulnerabilities. Poolsappasit et al. [24] propose a risk assessment framework using Bayesian networks that enables a network administrator to quantify the chances of network compromise at various levels, and show how to use this information to develop a security mitigation and management plan. Ralston et al. [25] provide a broad overview of cyber security and risk assessment for SCADA and DCS. Major concepts related to risk assessment methods are introduced, with references cited for more detail, including hierarchical holographic modelling (HHM), the input-output model (IIM), and risk filtering, ranking, and management (RFRM). These have been applied successfully to SCADA systems with many interdependencies and have highlighted the need for quantifiable metrics. That paper concluded with a general discussion of two recent methods (one based on compromise graphs and one on augmented vulnerability trees) that quantitatively determine the probability of an attack, the impact of the attack, and the reduction in risk associated with a particular countermeasure.

Risk assessment standards: A number of information security risk assessment approaches have been developed, such as NIST SP 800-30 [14], ISO/IEC 27005 [6], CRAMM (Central Computing and Telecommunications Agency Risk Analysis and Management Method) [26], and Microsoft's Risk Assessment model [15], which can be applied in all types of organisations [27, 28]. However, despite the variety of these approaches, some business requirements cannot be met by any of them [29]. The problem with many of these methodologies is that they concentrate mainly on general principles and guidelines, leaving users without adequate details for implementation [30]. Even industry standards, like COBIT (Control Objectives for Information and Related Technology) [31] and ISO/IEC 27002 [32], fail to provide managers with a clear and simple visualisation of the security risk assessment and leave the operational details untouched [33]. Rezvani et al. [34] introduce two concepts for their risk assessment method: (1) an interdependency relationship among the risk scores of a network flow and its source and destination hosts, and (2) a method called flow provenance, which represents risk propagation among network flows and considers the likelihood that a particular flow is caused by other flows. Based on these two concepts, they develop an iterative algorithm for computing the risk scores of hosts and network flows.
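As a minimal sketch of this iterative interdependency idea (the damping factor, the max-based update rule, and all values below are assumptions for illustration, not the algorithm published in [34]), host and flow risk scores can be made to reinforce each other until they stabilise:

# Illustrative iterative host/flow risk scoring in the spirit of [34];
# the exact update rule here is an assumption.

def iterate_risk(flows, base_host_risk, rounds=20, alpha=0.5):
    """flows: list of (src_host, dst_host, base_flow_risk) tuples.
    base_host_risk: dict mapping host -> intrinsic risk in [0, 1].
    Each round, a flow inherits risk from its endpoint hosts, and a host
    inherits risk from its incident flows, damped by alpha."""
    host_risk = dict(base_host_risk)
    flow_risk = {i: r for i, (_, _, r) in enumerate(flows)}
    for _ in range(rounds):
        for i, (src, dst, base) in enumerate(flows):      # flows inherit from hosts
            flow_risk[i] = (1 - alpha) * base + alpha * max(host_risk[src], host_risk[dst])
        for h in host_risk:                               # hosts inherit from flows
            incident = [flow_risk[i] for i, (s, d, _) in enumerate(flows) if h in (s, d)]
            if incident:
                host_risk[h] = (1 - alpha) * base_host_risk[h] + alpha * max(incident)
    return host_risk, flow_risk

hosts = {"A": 0.2, "B": 0.7, "C": 0.1}                    # intrinsic host risks
flows = [("A", "B", 0.4), ("B", "C", 0.3)]                # observed flows
print(iterate_risk(flows, hosts))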
There is a general awareness of the need for TI, and vendors today are rushing to provide a diverse array of TI products, specifically focusing on technical threat intelligence. Although TI is being increasingly adopted, there is little consensus on what it actually is or how to use it. Without any real understanding of this need, organisations invest large amounts of time and money without solving their existing security problems [4, 35, 36]. That work presented a new approach to cyber threat intelligence (CTI) to help protect organisations. It aimed at using this framework to gather TI that would improve the efficiency and effectiveness of risk management, automate processes, and allow organisations to go on the hunt for potential threats and stop them before they happen. In some papers [37], the wide-band Delphi method was proposed as a scientific means to collect the information necessary for the assessment of security risks.

Cyber maneuvers: Performing cyber maneuvers and risk assessment in an operational environment is not easy. Some studies resolved the challenges of implementing maneuvers through dynamic simulation of the cyber battlefield. The provided battlefield model has the essential information for detecting cybercrime events [38-40].

Limitations: One of the limitations regarding this issue was the lack of databases and related papers, for confidentiality reasons. In the review part of their paper, the authors of [21] posed the challenge and limitation of the impossibility of validation and verification. Prototyping has been used in several papers, such as [41, 42], to compare and evaluate model performance. Thus, the present paper has used prototyping and simulation.

Innovation: The innovation of this research was to provide an integrated framework including asset identification, a vulnerability knowledge base, and sensitivity and risk calculation. The purpose of this research was to identify and prioritise the technical risks existing in the cyber environment in a quantitative manner, to enable correct and timely decisions for confronting cyber-attacks. Considering that cyber-attacks are performed using vulnerabilities existing in the assets of the cyber environment, the focus of this research was to provide a method to identify these vulnerabilities as they emerge, determine the risk resulting from them, and inform network administrators. After a vulnerability is identified and registered, it takes some time for network administrators to be informed about it through the security media, and many breaches occur during this period [11, 12, 8]. In this research, this issue has been removed by creating a vulnerability knowledge base and a dynamic updating module for the field. One of the issues of the information security field is the lack of tools that provide security solutions to improve the overall security of the network, since buying security and network equipment, purchasing or upgrading software, and changing the network configuration need a lot of cost and time. Based on the security analyses, the results offer prioritisation and suggestions to the network administrator to improve the network security level (a cost-aware ranking sketch follows this paragraph). A dynamic updating subsystem has been designed to automatically update the vulnerability knowledge base and to track changes in the topology and features of elements, accesses, services, hosts, and users.
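The paper states that risks are prioritised considering the cost of treatment; one plausible ranking rule (an assumption for illustration only, since the ranking formula is not given at this point in the text) is risk reduction per unit of treatment cost:

# Hypothetical prioritisation sketch: rank identified risks by how much
# risk each unit of treatment cost removes. The ratio rule and the data
# are illustrative assumptions.

risks = [
    {"name": "CVE on mail server", "risk": 0.8, "treatment_cost": 4.0},
    {"name": "open Telnet service", "risk": 0.6, "treatment_cost": 1.0},
    {"name": "outdated workstation OS", "risk": 0.5, "treatment_cost": 5.0},
]

for r in sorted(risks, key=lambda r: r["risk"] / r["treatment_cost"], reverse=True):
    print(f'{r["name"]}: risk={r["risk"]}, cost={r["treatment_cost"]}')

# The cheap, high-risk Telnet fix ranks first (0.6 risk per unit cost),
# even though the mail-server CVE has the highest absolute risk.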
3 Research findings

In this section, the research findings, including the architecture, processes, structure, sensitivity calculation algorithms, and risk assessment algorithms, are explained.

Macro-architecture of technical risk assessment: There are eight processes in the architecture presented in Fig. 1. They are grouped into three general categories: (1) asset identification, (2) extraction of vulnerabilities and mapping them to services, and (3) sensitivity calculation and risk assessment. The main processes are as follows: (1) producing and updating the asset database, (2) producing and updating network images, (3) producing and updating the vulnerability knowledge base, (4) performing the sensitivity factor calculation algorithms, and (5) performing the risk assessment algorithms. In the following, these processes are described.
Fig. 3: Hierarchy structure of the information items existing in the model.
Assets identification: In this step, a basic information database was created. Assets in the cyber environment include tangible and intangible components. Tangible components include workstations, servers, and users; these data were collected from the network environment using network scanners and configuration files. Intangible components include services and vulnerabilities. After collecting the basic data, the network and service model generator performed preprocessing, integration, and storage in the database. In this step, the connections between services and hosts were also created. The review and finalisation of the network topology were performed by the network administrator. Finally, the network and service model (network image) was created as the output of this step. This output is a tree graphic model, represented by the hierarchy structure of the information items in Fig. 3.

Extraction of vulnerabilities and mapping: The process of creating a vulnerability knowledge base is shown in Fig. 2. The problem of automatically identifying vulnerabilities in the cyber environment and announcing them is solved in this section. The vulnerabilities discovered daily in cyberspace are recorded in a number of basic references. Besides these references, standards have been developed to classify and scan vulnerabilities and to capture the information required to understand the current state of cyberspace. By integrating, correlating, and categorising vulnerabilities, the vulnerability knowledge base generator creates and updates the knowledge base. Each server or workstation in cyberspace contains several active services, each of which may contain vulnerabilities. The identification of the vulnerabilities of services and other components of the battlefield is done by this subsystem. The CPE standard has been used to identify and name services and to establish connections with vulnerabilities. An automated agent monitors the vulnerability registration references on the internet online and updates the list of vulnerabilities. If a new vulnerability is detected that affects one of the network services, then the threats related to it must be investigated. Hence, the risk assessment algorithms are run again and the network's current threat level is reported to the administrator.
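A minimal sketch of this detect-and-notify loop follows; every helper is a hypothetical stand-in for the paper's subsystems (the feed pull, the CPE-based matching, and the notification are placeholder logic, not a real API):

# Sketch of the automated update-and-notify loop described above; all
# helpers are hypothetical stand-ins, not real feed APIs.
import time

def fetch_new_cves():
    """Stand-in for pulling newly registered CVEs from online references."""
    return [{"id": "CVE-0000-0001", "cpes": {"cpe:/a:openbsd:openssh:7.4"}}]

def affected_services(cve, services):
    """Match the CVE's CPE names against the services deployed on the network."""
    return [s for s in services if s["cpe"] in cve["cpes"]]

def monitor(services, polls=1, poll_seconds=0):
    for _ in range(polls):                    # bounded here; a daemon in practice
        for cve in fetch_new_cves():
            hits = affected_services(cve, services)
            if hits:
                # re-run the risk assessment algorithms and notify the admin
                print(f"{cve['id']} affects {[s['host'] for s in hits]}; re-assessing risk")
        time.sleep(poll_seconds)

monitor([{"host": "srv1", "cpe": "cpe:/a:openbsd:openssh:7.4"}])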
Complementary information required to identify how to remove this vulnerability is also given to the network administrator with the aid of the OVAL Definitions section. The integration and mapping of the network model information, the service model, and the vulnerability knowledge base are also performed in this section. The proposed structure contains the necessary information for conducting security analyses. The two previous sections explained how to collect, process, and integrate this information. The resulting output is an integrated cyber battlefield. This model includes the mapping of the network model information, the service model, and the vulnerability knowledge base. One of the duties of this section is to integrate the informational, physical, and conceptual models. In Fig. 3, the hierarchy structure of the information items existing in the cyber battlefield model can be observed. The components used in the field, the characteristics of each component, and the relationships between the components are considered in this structure. For example, the cyber battlefield includes a number of hosts, each of which includes a number of services, and each service includes a number of vulnerabilities. This structure is stored in XML format.

4 Selected sample network

Simulation and sampling have been used to analyse and assess the model. It is necessary to select a sample network similar to organisational networks, one that has a variety of security zones, services (those used in servers or workstations), host missions, and users. Therefore, the sample network has been selected with the specifications presented in Table 1. This sample has six zones, each of which has a different degree of security and importance.

Table 1. Sample network specifications
- zones (6): DMZ, internal, Department 1, Department 2, Department 3, Department 4
- host clusters (4): host clusters of Departments 1, 2, 3, and 4
- network assets (20): servers (11), workstations (4 categories), firewalls (5)
- users (11): network administrator (1), department managers (4), users (6)
- used services (10): Windows File Server, Linux Mail Server, IIS Web Server, Windows Domain Controller, FTP Server, SQL Server, Windows XP, SSH, Telnet, RealPlayer

According to the proposed framework (Fig. 1), in the first step, the network components and the relationships between them were identified to construct the network image. According to the process of creating a vulnerability knowledge base (Fig. 2), and using the network model and the service model created in the preceding step, the mapping of vulnerabilities to services was performed. The raw data required for the vulnerability knowledge base were collected from the vulnerability registration references [43]. Due to the highly dynamic cyber environment, the vulnerability knowledge base was created and updated by an automated process. In this process (Fig. 2), real-time identification of new vulnerabilities and their automatic mapping to network services were necessary. In the first step, the service database was created or updated from the CPE reference. In the second step, the vulnerabilities were extracted or updated from the CVE reference. In the third step, the mapping between vulnerabilities and services was done. In the fourth step, the vulnerabilities were classified based on CWE. In the fifth step, the sensitivity of each vulnerability was calculated based on the common vulnerability scoring system (CVSS) framework. In the sixth step, the OVAL definitions were extracted for each vulnerability. In the seventh step, the information extracted in the preceding six steps was aggregated and integrated. Finally, a new record was added to the knowledge base or an existing one was updated.
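A minimal sketch of this seven-step pipeline, with feed parsing replaced by in-memory dictionaries; the identifiers and scores are placeholders, and the score/10 normalisation of CVSS is an assumption consistent with sensitivity factors lying in [0, 1]:

# Sketch of the knowledge-base pipeline; all identifiers and values are
# illustrative placeholders, not real feed data.

cpe_services = {"cpe:/a:vendor:sql_server:1.0"}                  # step 1: CPE services
cve_records = {                                                  # step 2: CVE records
    "CVE-0000-0002": {"cpes": {"cpe:/a:vendor:sql_server:1.0"},
                      "cwe": "CWE-200",                          # step 4: CWE class
                      "cvss": 6.8,                               # step 5: CVSS score
                      "oval": "oval:example:def:1"},             # step 6: OVAL def
}

knowledge_base = {}
for cve_id, rec in cve_records.items():
    for cpe in rec["cpes"] & cpe_services:                       # step 3: mapping
        knowledge_base.setdefault(cpe, []).append({              # step 7: aggregate
            "cve": cve_id,
            "cwe": rec["cwe"],
            "sensitivity": rec["cvss"] / 10.0,  # assumed normalisation to [0, 1]
            "oval": rec["oval"],
        })

print(knowledge_base)   # final step: record or update the knowledge base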
In accordance with the proposed model and architecture, the design and implementation of the simulator were accomplished. The processes of Figs. 1-3 and the calculations were done automatically by the simulator, so the network administrator is not faced with the computational complexity. After this process, according to Fig. 3, the structure of the required information items was created. With the help of these information items and the identified relationships, the security analyses (sensitivity calculation and risk calculation) were performed in accordance with the following equations. The tree structure consists of four general levels. The first level covers the characteristics of the routers/switches, users, hosts/host clusters, communication links, activity status over time intervals, and subnet specifications. The second level describes the list and profile of the network hosts. The third level describes the service specifications for each host. To extract and create the information items at levels one to three, the output of steps 1, 3, and 4 of the macro-architecture (Fig. 1) was used. The fourth level describes the vulnerabilities of each service; the output of steps 2 and 5 in Fig. 1 was utilised to create this level of the tree structure. The process of extracting and building the knowledge base is shown in Fig. 2.

5 Sensitivity factors calculation

The sensitivity factor determines the intensity of sensitivity of an element in the cyber environment and is used for risk assessment and for assessing the impact of cyber-attacks. The value of a sensitivity factor is between 0 and 1; zero indicates that the component is not related to the mission, and one indicates that the component is fully necessary for realising the mission. Sensitivity scores should be calculated dynamically and in real time with regard to the current state of the network.

Zone sensitivity factor: Zoning is performed in order to secure computer networks with regard to the sensitivity of the systems. The zone sensitivity factor is determined with regard to the importance and mission of each zone. For example, the DMZ zone is directly connected to the internet and is therefore exposed to direct attacks by hackers; compared to other zones, less important services are placed there.

Mission sensitivity factor: Each service or host in the network has been created with the aim of performing a mission. For example, with regard to the importance of the database service's mission, the mission sensitivity factor of the database is high.

User type sensitivity factor: Users are categorised according to their degree of access at three levels (normal user, department administrator, network administrator). The network administrator's sensitivity is considered 1, because he has access to all components of the field. The zone, mission, and user type sensitivity factors have been determined by the network administrator and the focal group.

Vulnerability sensitivity factor: In the process of producing the vulnerability knowledge base, various characteristics of each vulnerability are collected from various sources. One of these characteristics is the sensitivity factor based on CVSS scoring. The CVSS provides an open framework to capture the principal characteristics of a vulnerability and produce a numerical score reflecting its severity, as well as a textual representation of that score. It provides standardised vulnerability scores, enables prioritising risk, and helps provide a better understanding of the risk posed to the organisation by a vulnerability [44]. A CVSS score is a decimal number in the range [0.0, 10.0], where 0.0 indicates no rating (it is nearly impossible to exploit the vulnerability) and 10.0 is the full score (the vulnerability is easy to exploit).
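The following sketch illustrates these factors; the zone and user-type values are examples of the kind of assignments an administrator and focal group might make, and the score/10 normalisation of CVSS is an assumption consistent with sensitivity factors lying in [0, 1]:

# Illustrative sensitivity factors; all concrete values are assumptions.

ZONE_SENSITIVITY = {"DMZ": 0.4, "internal": 0.9, "department": 0.6}

USER_TYPE_SENSITIVITY = {
    "normal user": 0.3,
    "department administrator": 0.7,
    "network administrator": 1.0,   # access to all components of the field
}

def vulnerability_sensitivity(cvss_score: float) -> float:
    """Map a CVSS base score in [0.0, 10.0] to a sensitivity factor in [0, 1]."""
    return min(max(cvss_score, 0.0), 10.0) / 10.0

print(vulnerability_sensitivity(7.5))   # 0.75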
Service sensitivity factor: The sensitivity factor of each service is specified with regard to the mission sensitivity factor of that service. For each service, at least one mission is determined, and each mission has a different degree of importance. For example, the mission importance of a database service is higher than that of a file sharing service.

Host sensitivity factor: Host sensitivity depends on three main parameters: the host's mission, the host's services, and the zone in which the host is located. To achieve the best results, the host sensitivity factor is calculated using four different methods, and the results are then compared with each other.

First method: the host sensitivity is the average of the sensitivities of the host's services (the Avg. Sn. of services column in Table 2) multiplied by the host's mission sensitivity (Mission Sn.) and the host's zone sensitivity (Zone Sn.). Results have been presented in the Chost Avg column in Table 5 (formula (1)):

C_{H}^{\mathrm{avg}} = \Bigl(\frac{1}{n}\sum_{i=1}^{n} C_{S_i}\Bigr)\, C_{\mathrm{mission}}\, C_{\mathrm{zone}} \quad (1)

where C denotes criticality (sensitivity), S a service, H a host, and n the number of services on the host.

Second method: the maximisation method takes the maximum of the sensitivities of the host's services, multiplied by the mission sensitivity and the zone sensitivity (formula (2)). Results have been presented in the Chost Max column in Table 2:

C_{H}^{\max} = \bigl(\max_{i} C_{S_i}\bigr)\, C_{\mathrm{mission}}\, C_{\mathrm{zone}} \quad (2)

In this method, even a single sensitive service on a host gives that host great importance in the final result. Therefore, this method is suitable for networks with high sensitivity.

Third method:
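A minimal sketch of the first two methods, reading formulas (1) and (2) as reconstructed above; the service, mission, and zone factors are illustrative inputs in [0, 1], and this is an assumed reading rather than the authors' implementation:

# Sketch of the averaging and maximisation host-sensitivity methods,
# per formulas (1) and (2) as reconstructed above.

def host_sensitivity_avg(service_factors, mission_factor, zone_factor):
    """First method: average of the host's service sensitivities, scaled
    by the host's mission and zone sensitivity factors."""
    return (sum(service_factors) / len(service_factors)) * mission_factor * zone_factor

def host_sensitivity_max(service_factors, mission_factor, zone_factor):
    """Second method: maximum service sensitivity, scaled the same way;
    a single sensitive service dominates, suiting high-sensitivity networks."""
    return max(service_factors) * mission_factor * zone_factor

services = [0.9, 0.4, 0.2]   # e.g. database, file sharing, telnet
print(host_sensitivity_avg(services, mission_factor=0.8, zone_factor=0.9))  # ~0.36
print(host_sensitivity_max(services, mission_factor=0.8, zone_factor=0.9))  # ~0.648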