A Research on Some Cloud Computing Security Management Areas: A Review

Enhancing Security Management for the Inter-Cloud Environment

by P. Nageswara Rao*, Dr. K. Venkatesh Sharma

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 15, Issue No. 1, Apr 2018, Pages 643 - 649 (7)

Published by: Ignited Minds Journals


ABSTRACT

Within the context of Cloud Computing, one of the most important security challenges is to manage and assure secure usage across multi-provider Inter-Cloud environments with dedicated communication infrastructures, security mechanisms, processes, and policies. From a functional security management perspective, the aim of security controls in Cloud computing is, for the most part, no different from that of security controls in any IT environment. We propose the adaptation and reuse of existing traditional security management areas, enhanced for specific Cloud computing requirements (e.g., dynamic reconfiguration, distributed services). Based on a collection of Inter-Cloud use cases and scenarios from the private and public sector, such as those of the DMTF (Distributed Management Task Force), NIST (National Institute of Standards and Technology), GICTF (Global Inter-Cloud Technology Forum), and ENISA (European Network and Information Security Agency), we analyzed and summarized the range of requirements for security management. As these requirements are not yet fulfilled by current security management approaches, we derived a set of security management areas that describes all identified functional aspects. This set will serve as the foundation of our future development towards a security management architecture for the Inter-Cloud.

KEYWORDS

Cloud Computing, Security Management, Inter-Cloud, Security Controls, IT Environment, Dynamic Reconfiguration, Distributed Services, Use Cases, Scenarios, Security Management Architecture

INTRODUCTION

Security Management (SM) consolidates capabilities that control and secure access to an organization's assets, information, data, and IT services in order to ensure confidentiality, integrity, and availability. Security management capabilities include means of authentication, authorization, encryption, and so on. Unfortunately, the widely used definitions and models around security management do not describe a common set of security management areas. The Security Management Infrastructure approach, also called Enterprise Security Management (ESM), serves as a comprehensive security framework for our investigation. This approach contains security management capabilities such as Identity Management, Privilege Management, Metadata Management, Policy Management, and Crypto Key Management. Additionally, there are several sources that describe Cloud computing security areas. However, they differ in their coverage of essential security management functional areas and of the collaboration aspects needed for a comprehensive Cloud Security Management. For example, the management of metadata or the configuration management of security capabilities is not covered; generally, they focus on Identity, Privilege, Access, and Crypto Key Management (Greene, T. et al., 2009). Based on the sources presented above, we present the following ten security management areas for Cloud computing, which can be briefly described as follows:

• Identity Management is the capability to assert and manage the life cycle of a secured identity (human/device/process). Federated Identity Management provides end users with secure access across multiple external applications through federated single sign-on.

• Credential Management is the capability to manage the life cycle of digital credentials. Examples of credentials include certificates, private keys, user IDs, and passwords. Credential management is also responsible for checking the authenticity of credentials.

• Attribute Management is the capability to manage the assigned attributes of entities. An attribute is a detail that describes a property of an entity. The key elements are: publishing of attribute requirements, support for user consent, and common attribute policies. In addition, it is responsible for requesting new attributes and related attribute values from the service upon attempts to access the service.

• Policy Management is the capability to define which resources users can access and to give control of that policy to those in the departments who make the decisions.

• Digital Policy Management (DPM) mitigates compliance and data-protection risks through efficient and effective management of digital policies. It is the capability to create, convert, manage, and supersede digital policies. Digital policies are those expressed in machine-readable languages and can be used to control the behavior of systems in an automated or semi-automated way.

• Configuration Management (CM) is a management discipline that focuses on establishing and maintaining consistency of a system's or product's performance, and of its functional and physical attributes, with its requirements, design, and operational information throughout its life. It manages security-related configuration items, for instance the definition, control, requesting, and loading of configuration data for services.

• Cryptographic Key Management encompasses all of the activities required to provide key material for security services, especially integrity, authentication, and confidentiality.
• Metadata Management involves storing information about other information. It is a fundamental part of Enterprise Information Management (EIM). It is the capability to create and manage all security-relevant metadata syntax and values over their life cycle.

• Audit Management is the capability that records auditable security-relevant events. Management audits are often required by significant changes in a business; some of the events that require a management audit are top management changes, mergers and acquisitions, and succession planning. It covers the analysis and assessment of the skills and capabilities of an organization's management in order to evaluate their effectiveness, especially with regard to the strategic objectives and processes of the business.

• SM Information Management is the capability to collect and administer security-relevant information about Cloud services (for instance location, time, etc.) that is not a native part of the other security management areas. A Security Information Management (SIM) product automates the collection of event log data from security devices such as firewalls, proxy servers, intrusion detection systems, and antivirus software, and translates the logged data into correlated and simplified form.
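To make the idea of Digital Policy Management concrete, the following minimal sketch shows a machine-readable policy evaluated automatically. The rule format and attribute names are our own hypothetical illustration, not part of any cited specification.

```python
# Minimal sketch of a machine-readable ("digital") policy evaluated
# automatically. The rule format and attribute names are hypothetical
# illustrations, not a standardized policy language.

# A digital policy: each rule maps a required subject role to a decision.
POLICY = [
    {"resource": "billing-db", "required_role": "admin",   "decision": "permit"},
    {"resource": "billing-db", "required_role": "auditor", "decision": "permit"},
]

def evaluate(subject_role: str, resource: str) -> str:
    """Return 'permit' if some rule matches the subject's role, else 'deny'."""
    for rule in POLICY:
        if rule["resource"] == resource and rule["required_role"] == subject_role:
            return rule["decision"]
    return "deny"  # default-deny, the usual safe fallback

if __name__ == "__main__":
    print(evaluate("auditor", "billing-db"))  # permit
    print(evaluate("guest", "billing-db"))    # deny
```

Because such policies are data rather than prose, they can be created, converted, and superseded programmatically, which is exactly the automated or semi-automated control of system behavior that DPM describes.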

Figure 4: How the security concepts fit together
• Security controls risk; it does not eliminate it.
• Information systems have vulnerabilities.
• Vulnerabilities have countermeasures.
• Countermeasures control risk.

MAJOR COMPONENTS OF SECURITY

The figure shows the three major components of security: Integrity, Availability, and Confidentiality.

Integrity
• Defense against malicious or accidental attempts to alter data.
• Prevention of unauthorized data modification.
• Prevention of skipped steps that would compromise data integrity in an automated process flow.
• Integrity covers data at rest, in processing, and in transit.

Availability
• Availability protects against unauthorized deletion, or anything that otherwise causes a denial of access to the data or service.

Figure 5: Major components of security

Confidentiality

• Confidentiality protects against unauthorized attempts to read data.
• Confidentiality covers data at rest, in processing, and in transit.
• Confidentiality is not the same as privacy.

B. Farroha et al. (2010) introduced security management areas; the functional and process components for a Security Manager architecture within the Inter-Cloud still have to be identified and defined. Together with derived security data artifacts, this can help the Cloud provider community to implement a Security Manager system for an Inter-Cloud environment and facilitate the adoption of these results in the private and public sector.

Structure of the System Architecture

The representative system architecture for data storage in cloud systems used in existing work ([13, 80, 165, 169, 197]) consists of three entities: the Client, the Cloud Service Provider (CSP), and the Third Party Auditor (TPA). The proposed model uses the same storage architecture with different algorithmic techniques, as shown in the figure.

Fig. 6: Structure of the Proposed System Architecture

(Throughout this thesis, the terms verifier and TPA, and server and CSP, are used interchangeably.)

Clients: The Clients are the parties who have data to be stored and who interact with the Cloud Service Provider (CSP) to manage their data in the cloud. They are typically PCs, workstations, or mobile devices. After storing data in the cloud, the Client should be able to manage the stored data, which means they can frequently verify the integrity of their data without keeping a local copy of it. If Clients do not have the time to verify the integrity of their data in the cloud, they can delegate this task to a trusted Third Party Auditor (TPA).

Cloud Service Provider (CSP): The CSP has significant resources and expertise in building and managing distributed cloud storage servers and offers storage or software services to clients via the Internet. The CSP is responsible for data maintenance and responds to verifier queries honestly.

Third Party Auditor (TPA): The TPA, who has expertise and capabilities that Clients may not have, is trusted to assess the risk of cloud data storage services on behalf of the Clients. The auditor is independent and therefore has no incentive to favor (or collude with) either the storage service or the Client in assigning blame. To permit dispute arbitration, the storage service and the Client must both trust the auditor in assigning blame. We assume the auditor, who is in the business of auditing, is more reliable than Clients and can therefore maintain a small amount of audit results over the long term. Finally, Client data and results derived from it have external value, so the auditor has an incentive to learn about its content.

The interaction proceeds as follows, as sketched in the code after this list:
1) The Client pre-processes the file, sends the metadata to the TPA (or keeps it locally for later integrity verification), and sends the file to the CSP.
2) The CSP stores the file.
3) The TPA stores the metadata.
4) The verifier (Client/TPA) generates a challenge, sends it to the CSP, and checks the validity of the response: if it is valid it returns 1, otherwise it returns 0.
5) The CSP generates a response and sends it to the verifier.
6) The Client generates an update request, sends it to the CSP, and verifies whether the CSP has updated the data correctly. If yes, it sends the updated metadata to the TPA; otherwise it resends the challenge to the CSP.
7) The CSP updates the data and generates an update response based on the Client's request.
8) The TPA stores the updated metadata in case of a modification or insertion.

The detailed descriptions of the algorithms in these three phases are given in the following sections.
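A minimal sketch of this Client/CSP/TPA interaction is given below. The class and method names are our own illustration, and per-block SHA-256 hashes stand in for whatever metadata scheme the actual protocols use.

```python
import hashlib
import secrets

BLOCK = 64  # toy block size in bytes

def block_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class CSP:
    """Stores file blocks and answers challenges (steps 2 and 5)."""
    def __init__(self):
        self.blocks = []
    def store(self, blocks):
        self.blocks = list(blocks)
    def respond(self, index: int) -> bytes:  # response = the challenged block
        return self.blocks[index]

class TPA:
    """Stores metadata and verifies on the Client's behalf (steps 3 and 4)."""
    def __init__(self):
        self.metadata = []
    def store_metadata(self, metadata):
        self.metadata = list(metadata)
    def audit(self, csp: CSP) -> int:
        i = secrets.randbelow(len(self.metadata))  # random challenge
        return 1 if block_hash(csp.respond(i)) == self.metadata[i] else 0

# Step 1: the Client pre-processes the file into blocks and metadata.
data = b"some file content worth auditing" * 10
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
csp, tpa = CSP(), TPA()
csp.store(blocks)                                    # step 2
tpa.store_metadata([block_hash(b) for b in blocks])  # step 3
print(tpa.audit(csp))                                # steps 4-5: 1 if intact
```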

SETUP PHASE

To guarantee the Confidentiality, Availability, and Integrity of the file, the Client pre-processes the file before storing it in the cloud during the setup phase. The setup phase consists of four procedures, as shown in Fig. 3.2:

a) Encoding: To guarantee the Availability of the data stored in the cloud, the Client encodes the file.

b) Key Generation: In this algorithm, the Client generates a private and public key pair for the later processing of the file in the proposed framework.

c) Encryption: If the Client needs to guarantee data Confidentiality, the Client encrypts the data using public key cryptography.

d) Metadata Generation: To later verify the integrity of the data stored in the cloud, the Client computes the metadata for each block of the file.

Overall, the setup phase is categorized into two kinds:

1) Application without encryption: it consists of the file encoding, key generation, and metadata generation algorithms. This is useful where an application demands Integrity and Availability of data. For instance, after writing a thesis we may store it in the cloud; for this kind of data we need only Availability and Integrity and do not need Confidentiality. Similarly, organizations, governments, and universities store their financial, personal, and general data to ensure its Availability and Integrity over time.

2) Application with encryption: it consists of the encoding, key generation, encryption, and metadata generation algorithms. This is useful where an application needs Confidentiality, Availability, and Integrity of data, for example a research paper in progress that is stored in the cloud, or clinical reports. For this kind of data we need Availability, Integrity, and Confidentiality; because of plagiarism concerns, we do not want to reveal the data externally until it is published somewhere.

Fig. 3.2: Setup Phase
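The four setup procedures can be condensed into the following sketch under simplifying assumptions: encoding is reduced to fixed-size blocking (a real scheme would add erasure coding for Availability), RSA via the third-party `cryptography` package stands in for key generation and public key encryption, and the per-block metadata is a keyed hash. All names here are illustrative, not the thesis's exact algorithms.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
import hmac, hashlib

BLOCK = 64

# a) Encoding: here just fixed-size blocking; a real scheme would add
#    erasure coding so the file stays Available if some blocks are lost.
def encode(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# b) Key Generation: an RSA key pair (an assumption for this sketch).
def keygen():
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return priv, priv.public_key()

# c) Encryption (only for "application with encryption"): public key crypto.
def encrypt_block(pub, block: bytes) -> bytes:
    return pub.encrypt(block, padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(), label=None))

# d) Metadata Generation: a keyed hash per block, kept by the Client/TPA.
def metadata(key: bytes, blocks):
    return [hmac.new(key, b, hashlib.sha256).hexdigest() for b in blocks]

priv, pub = keygen()
blocks = encode(b"thesis chapter draft" * 20)
ciphertexts = [encrypt_block(pub, b) for b in blocks]  # skipped for type 1
meta = metadata(b"client-metadata-key", ciphertexts)
print(len(blocks), "blocks;", len(meta), "metadata entries")
```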

VERIFICATION PHASE

Whenever the Client wants to verify the data stored in the cloud servers, the verifier (either the Client himself or his designated agent, the TPA) checks the Integrity of the data, without having a local copy of it, through a challenge-response protocol. The verification phase consists of three procedures, as shown in Fig. 3.3:

a) Challenge

In order to check the Integrity of the data, the verifier first creates a random challenge and sends it to the CSP.

b) Response

After receiving a challenge request from the verifier, the CSP generates a response as an integrity proof corresponding to the challenge and sends it back to the verifier.

c) Check Integrity

After receiving a response from the CSP, the verifier checks whether the integrity proof is valid by comparing the response with the previously computed metadata. For the Integrity to hold, the response must match the metadata; otherwise, the data has been corrupted.

Fig. 3.3: Verification Phase
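The three verification procedures can be sketched as follows, again with hash-based metadata as a stand-in: random block indices form the challenge, and hashes of the challenged blocks form the response. Sampling only a subset of blocks is what makes the guarantee probabilistic.

```python
import hashlib, secrets

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def challenge(num_blocks: int, sample: int):
    """a) Challenge: pick 'sample' random block indices."""
    return secrets.SystemRandom().sample(range(num_blocks), sample)

def respond(stored_blocks, indices):
    """b) Response: the CSP returns proofs for the challenged blocks."""
    return [h(stored_blocks[i]) for i in indices]

def check_integrity(meta, indices, response) -> int:
    """c) Check Integrity: compare response with stored metadata; 1 = intact."""
    return 1 if all(meta[i] == r for i, r in zip(indices, response)) else 0

blocks = [f"block-{i}".encode() for i in range(100)]
meta = [h(b) for b in blocks]  # computed in the setup phase
idx = challenge(len(blocks), sample=10)
print(check_integrity(meta, idx, respond(blocks, idx)))  # 1: intact

blocks[7] = b"tampered"
idx2 = challenge(len(blocks), sample=10)
# Prints 0 only if block 7 happens to be among the sampled indices --
# the detection probability grows with the sample size.
print(check_integrity(meta, idx2, respond(blocks, idx2)))
```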

The above two phases are sufficient for applications that use static or archival data, for example libraries, certificates, and so on. However, cloud data storage is dynamic, so we need the following phase to support data dynamics and auditing of cloud data storage.

DYNAMIC DATA OPERATIONS AND VERIFICATION PHASE

One of the core design principles of cloud data storage is to provide dynamic flexibility of data for various practical applications. This means that remotely stored data can not only be accessed but also updated and scaled by the Clients, which includes Modification, Insertion, and Deletion. One obvious way to support all dynamic data operations is for the Client to download the entire data set from the CSP and update it; this would be secure but highly inefficient. We now show that the proposed protocols support dynamic data operations on data stored in the cloud without retrieving it. This phase consists of five procedures, as shown in the figure and sketched in code after it.

1. Prepare Update

Suppose the Client wants to update the data in the cloud. The Client runs the Prepare Update algorithm to create an update request and sends it to the CSP. The update request specifies the particular data operation (modify, insert, delete) to be performed.

2. Execute Update

After receiving an update request from the Client, the CSP runs the Execute Update algorithm to update the data in the cloud.

3. Update Challenge

The update operations may also introduce additional risks to the auditing system. Hence, in order to verify whether the CSP has performed the update operation correctly, the Client immediately challenges the CSP for proof of the update operation.

4. Update Response

After receiving an update challenge, the CSP generates a proof for the update operation and returns it to the Client.

5. Check Update

After receiving an update response from the CSP, the Client runs the Check Update algorithm to check the correctness of the update operation by comparing the response with the pre-computed metadata for the newly updated block.

Fig. 6: Dynamic Data Operations and Verification Phase
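The five dynamic-operation steps can be sketched as below. The request format and the hash-comparison check are our simplification of whatever update proof the real protocols use; the Client computes the expected metadata for the new block itself, so no local copy of the rest of the file is needed.

```python
import hashlib

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# 1. Prepare Update: the Client builds a request naming the operation.
def prepare_update(op, index, block=None):
    return {"op": op, "index": index, "block": block}  # op: modify/insert/delete

# 2. Execute Update: the CSP applies the request to its stored blocks.
def execute_update(blocks, req):
    if req["op"] == "modify":
        blocks[req["index"]] = req["block"]
    elif req["op"] == "insert":
        blocks.insert(req["index"], req["block"])
    elif req["op"] == "delete":
        del blocks[req["index"]]

# 3./4. Update Challenge and Update Response: the Client immediately asks
# for proof of the touched block; the CSP answers with its hash.
def update_response(blocks, index):
    return h(blocks[index])

# 5. Check Update: compare against metadata the Client pre-computed itself.
def check_update(expected_meta, response):
    return expected_meta == response

blocks = [b"a", b"b", b"c"]            # CSP-side storage
req = prepare_update("modify", 1, b"B")
expected = h(req["block"])             # Client-side metadata for the new block
execute_update(blocks, req)            # CSP side
print(check_update(expected, update_response(blocks, 1)))  # True
```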

Based on the above system architecture, this thesis proposes new probabilistic, efficient, and secure protocols for data storage security in cloud computing, addressing the Confidentiality, Integrity, and Availability of data. These are as follows. The Homomorphic Distributed Verification Protocol (HDVP) guarantees the Availability and Integrity of data in the cloud with partial dynamic data support through private verifiability; however, it does not support an efficient insertion operation and it assumes limited computing power. The RSA-based Dynamic Public Audit Protocol (RSA-DPAP) guarantees the Availability and Integrity of data stored in the cloud with public verifiability and efficient dynamic data operations, and overcomes the drawbacks of the HDVP; however, it falls short in addressing the Confidentiality issue, which is a major security requirement of data storage in certain applications. The ECC-based Dynamic Public Audit Protocol (ECC-DPAP) guarantees the Confidentiality, Availability, and Integrity of data stored in the cloud with public verifiability and dynamic data support, using Elliptic Curve Cryptography (ECC) instead of RSA. This protocol can offer the same level of security with smaller keys compared to the RSA-based protocol; it is mainly designed for devices with limited computing power and/or memory, such as smartcards, cell phones, and PDAs. However, it introduces non-trivial key management issues for the Clients: in order to protect the encryption keys that provide data Confidentiality, the Clients need to encrypt the keys again, which shifts the problem rather than solving it.
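To illustrate how an RSA-based, publicly verifiable audit can aggregate many blocks into one short proof, here is a toy homomorphic-tag scheme in the spirit of classic RSA-based provable data possession. The parameters are deliberately tiny and insecure, and this is our illustration rather than the exact RSA-DPAP construction.

```python
import hashlib, secrets

# Toy RSA parameters -- far too small for real security, illustration only.
p, q = 10007, 10009
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent
g = 2                               # public generator

def H(i: int) -> int:              # hash a block index into Z_N
    return int.from_bytes(hashlib.sha256(str(i).encode()).digest(), "big") % N

# Setup: the Client tags each block m_i with sigma_i = (H(i) * g^m_i)^d mod N.
blocks = [secrets.randbelow(1000) for _ in range(8)]  # blocks as small ints
tags = [pow(H(i) * pow(g, m, N) % N, d, N) for i, m in enumerate(blocks)]

# Challenge: random indices i with random coefficients nu_i.
chal = [(i, secrets.randbelow(100) + 1)
        for i in secrets.SystemRandom().sample(range(8), 4)]

# Response: one aggregated tag and one aggregated block value.
sigma, mu = 1, 0
for i, nu in chal:
    sigma = sigma * pow(tags[i], nu, N) % N
    mu += nu * blocks[i]

# Public verification: anyone holding (N, e, g) checks
#   sigma^e == prod_i H(i)^nu_i * g^mu  (mod N)
rhs = pow(g, mu, N)
for i, nu in chal:
    rhs = rhs * pow(H(i), nu, N) % N
print(pow(sigma, e, N) == rhs)  # True if the CSP still holds the blocks
```

The point of the homomorphism is that the proof stays constant-size no matter how many blocks are challenged, which is what makes public auditing by a TPA practical.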

Table 1: Structure of the Protocols

The detailed descriptions of these proposed protocols are given in the following chapters, 4 to 7. The Publicly Verifiable Dynamic Secret Sharing Protocol (PVDSSP) solves the key management issues for the Clients and efficiently guarantees all three fundamental security properties of data stored in the cloud, namely Confidentiality, Integrity, and Availability, without the burden of maintaining encryption keys falling on the Clients. The Efficient Distributed Verification Protocol (EDVP) executes the above protocols in a distributed manner: here, n verifiers challenge the n servers concurrently to validate the Integrity. The general design strategies of these protocols are given in Table 1.
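A sketch of the distributed idea behind EDVP follows, with n verifiers concurrently challenging n servers, here simulated with threads over replicas. The per-server sampling and the way results are combined are our assumptions, not the protocol's exact rules.

```python
import hashlib, secrets
from concurrent.futures import ThreadPoolExecutor

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

N_SERVERS = 4
blocks = [f"block-{i}".encode() for i in range(50)]
meta = [h(b) for b in blocks]                        # from the setup phase
servers = [list(blocks) for _ in range(N_SERVERS)]   # each server: a replica
servers[2][5] = b"corrupted"                         # one server corrupts a block

def verifier(server_id: int) -> bool:
    """One verifier challenges one server with random block indices."""
    store = servers[server_id]
    idx = secrets.SystemRandom().sample(range(len(store)), 10)
    return all(h(store[i]) == meta[i] for i in idx)

# n verifiers challenge the n servers concurrently.
with ThreadPoolExecutor(max_workers=N_SERVERS) as pool:
    results = list(pool.map(verifier, range(N_SERVERS)))

# e.g. [True, True, False, True] if verifier 2 happens to sample block 5
print(results)
```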

CONCLUSION

Cloud computing is a very attractive environment for the business world in terms of providing required services in a very cost-effective way. However, assuring and enhancing security and privacy practices will attract more enterprises to the world of cloud computing. Identity and Access Management (IAM) should be properly implemented to ensure mutual authentication, authorization, and auditing for cloud computing management. Based on the presented security management areas, functional and process components for a Security Manager architecture in the Inter-Cloud need to be identified and defined. Together with derived security data artifacts, this will support the Cloud provider community in implementing a Security Manager system for a future Inter-Cloud environment and facilitate the adoption of these results in the private and public sector.

REFERENCES

1. A. Celesti, F. Tusa, M. Villari and A. Puliafito (2010). How to enhance Cloud architectures to enable cross-federation. Cloud Computing (CLOUD), 2010 IEEE 3rd International Conference on, pp. 337-345.
2. B. Farroha and D. Farroha (2010). Cyber security components for pervasive Enterprise Security Management and the virtualization aspects. Systems Conference, 2010 4th Annual IEEE.
3. Cloud Security Alliance (2009). Security Guidance for Critical Areas of Focus in Cloud Computing V2.1.
4. Farroha, B. and Farroha, D. (2010). Cyber security components for pervasive Enterprise Security Management and the virtualization aspects. Systems Conference, 4th Annual IEEE, pp. 553-558.
5. GICTF (2010). Use cases and functional requirements for inter-Cloud computing. GICTF White Paper, Global Inter-Cloud Technology Forum.
6. Grundy, J., Almorsy, M. and Ibrahim, A. S. (2011). Collaboration-Based Cloud Computing Security Management Framework. 4th International Conference on Cloud Computing, IEEE, pp. 364-371.
7. Implementation Handbook for Enterprises. Recursive Press.
8. J. W. Rittinghouse, J. F. Ransome (2009). Cloud Computing: Implementation, Management and Security. CRC Press, ISBN: 978-1-4398-0680-7.
9. Kretzschmar, M., Golling, M. and Hanigk, S. (2011). Security Management Areas in the Inter-Cloud. International Conference on Cloud Computing, IEEE, pp. 762-763.
10. M. Dikaiakos, G. Pallis, D. Katsaros, P. Mehra and A. Vakali (2009). Cloud Computing: Distributed Internet Computing for IT and Scientific Research. IEEE Internet Computing, vol. 13, no. 5.
11. T. Mather, S. Kumaraswamy and S. Latif (2009). Cloud Security and Privacy. O'Reilly, ISBN: 978-0-596-80276-9.

Corresponding Author

P. Nageswara Rao*

Research Scholar, Shri Venkateshwara University, Uttar Pradesh

nageshpambala@gmail.com