This document discusses resource management and security in cloud computing. It covers inter-cloud resource management, which allows clouds to share resources beyond their own infrastructure through contracts with other cloud providers. It describes methods for provisioning compute resources like virtual machines on demand in the cloud, including demand-driven, event-driven, and popularity-driven approaches. It also addresses security challenges in cloud computing and standards for areas like identity and access management.

Unit 4

RESOURCE MANAGEMENT AND


SECURITY IN CLOUD

Inter Cloud Resource Management – Resource Provisioning and Resource Provisioning Methods – Global Exchange of Cloud Resources – Security Overview – Cloud Security Challenges – Software-as-a-Service Security – Security Governance – Virtual Machine Security – IAM – Security Standards.
INTER-CLOUD RESOURCE
MANAGEMENT
Intercloud, or 'cloud of clouds,' is a term referring to a
theoretical model for cloud computing services based on
the idea of combining many different individual clouds into
one seamless mass in terms of on-demand operations.
The Intercloud would simply make sure that a cloud
could use resources beyond its reach, by taking
advantage of pre-existing contracts with other cloud
providers.
The Intercloud scenario is based on the key concept
that no single cloud has infinite physical resources or a
ubiquitous geographic footprint. If a cloud saturates the
computational and storage resources of its infrastructure,
or is requested to use resources in a geography where it
has no footprint, it should still be able to satisfy such
requests for service allocations sent from its clients.
 The Intercloud scenario would address such situations
where each cloud would use the computational, storage,
or any kind of resource (through semantic resource
descriptions, and open federation) of the infrastructures
of other clouds.
 This is analogous to the way the Internet works, in that
a service provider, to which an endpoint is attached, will
access or deliver traffic from/to source/destination
addresses outside of its service area by using Internet
routing protocols with other service providers with
whom it has a pre-arranged exchange or peering
relationship.
 It is also analogous to the way mobile operators
implement roaming and inter-carrier interoperability.
Such forms of cloud exchange, peering, or roaming may
introduce new business opportunities among
cloud providers if they manage to go beyond the
theoretical framework.
Real-Time Example
 IBM researchers are working on a solution that
they claim can seamlessly store and move data
across multiple cloud platforms in real time. The
firm thinks that the technology will help
enterprises with service reliability concerns. On
top of this, they hope to “cloud-enable” almost
any digital storage product.
 Researchers at IBM have developed a “drag-and-
drop” toolkit that allows users to move file
storage across almost any cloud platform. The
company cloud would host identity authentication
and encryption technologies as well as other
security systems on an external cloud platform
(the ‘InterCloud Store’) to keep each cloud
autonomous, while also keeping them synced
together.
 The cloud-of-clouds invention can help avoid service
outages because it can tolerate crashes of any number
of clients. It does this by using the independence of
multiple clouds linked together to increase overall
reliability.
 Storage services don’t communicate directly with each
other but instead go through the larger cloud for
authentication. Data is encrypted as it leaves one
station and decrypted before it reaches the
next. If one cloud happens to fail, a back-up cloud
responds immediately.
 The cloud-of-clouds is also intrinsically more
secure: “If one provider gets hacked there is little
chance they will penetrate other systems at the same
time using the same vulnerability.” Alessandro Sorniotti,
cloud storage scientist at IBM and one of the
researchers, says. “From the client perspective, we will
have the most available and secure storage system.”
Resource Provisioning and Resource
Provisioning Methods
Resource Provisioning and Platform
Deployment
The emergence of computing clouds suggests
fundamental changes in software and hardware
architecture. Cloud architecture puts more emphasis
on the number of processor cores or VM instances.
Parallelism is exploited at the cluster node level.
This section describes:
The techniques to provision compute
resources or VMs.
Storage allocation schemes to interconnect
distributed computing infrastructures by
harnessing VMs dynamically.
Provisioning of Compute Resources (VMs)
 Providers supply cloud services by signing SLAs with end
users.
 The SLAs must commit sufficient resources such as CPU,
memory, and bandwidth that the user can use for a preset
period.
 Under-provisioning of resources will lead to broken SLAs and
penalties.
 Over-provisioning of resources will lead to resource
underutilization and, consequently, a decrease in revenue for
the provider.
 Deploying an autonomous system to efficiently provision
resources to users is a challenging problem.
 The difficulty comes from the unpredictability of consumer
demand, software and hardware failures, heterogeneity of
services, power management, and conflicts in signed SLAs
between consumers and service providers.
 Efficient VM provisioning depends on the cloud
architecture and management of cloud
infrastructures.
 Resource provisioning schemes also demand fast
discovery of services and data in cloud computing
infrastructures.
 In a virtualized cluster of servers, this demands
efficient installation of VMs, live VM migration, and
fast recovery from failures.
 To deploy VMs, users treat them as physical hosts
with customized operating systems for specific
applications. For example, Amazon’s EC2 uses
Xen as the virtual machine monitor (VMM). The
same VMM is used in IBM’s Blue Cloud.
 In the EC2 platform, some predefined VM templates
are also provided. Users can choose different kinds of
VMs from the templates.
 IBM’s Blue Cloud does not provide any VM templates.
In general, any type of VM can run on top of Xen.
 Microsoft also applies virtualization in its Azure cloud
platform. The provider should offer resource-
economic services.
 Power-efficient schemes for caching, query processing,
and thermal management are mandatory due to
increasing energy waste by heat dissipation from
data centers.
 Public or private clouds promise to streamline the
on-demand provisioning of software, hardware, and
data as a service, achieving economies of scale in IT
deployment and operation.
Resource Provisioning Methods
Figure shows three cases of static cloud resource
provisioning policies.
 In case (a), over provisioning with the peak load
causes heavy resource waste (shaded area).
 In case (b), under provisioning (along the capacity
line) of resources results in losses by both user and
provider in that paid demand by the users (the
shaded area above the capacity) is not served and
wasted resources still exist for those demanded areas
below the provisioned capacity.
 In case (c), the constant provisioning of resources
with fixed capacity to a declining user demand could
result in even worse resource waste. The user may
give up the service by canceling the demand, resulting
in reduced revenue for the provider. Both the user
and provider may be losers in resource provisioning
without elasticity.
Three resource-provisioning methods are
presented in the following sections.
 The demand-driven method provides
static resources and has been used in
grid computing for many years.
 The event-driven method is based on
predicted workload by time.
 The popularity-driven method is based
on monitored Internet traffic.
Demand-Driven Resource Provisioning
 This method adds or removes computing instances
based on the current utilization level of the allocated
resources.
 For example, the demand-driven method automatically
allocates two Xeon processors for a user application
if the user has been using one Xeon processor more than
60 percent of the time for an extended period.
 In general, when a resource has surpassed a threshold
for a certain amount of time, the scheme increases
that resource based on demand. When a resource is
below a threshold for a certain amount of time, that
resource could be decreased accordingly.
 Amazon implements such an auto-scale feature in its
EC2 platform. This method is easy to implement, but
the scheme does not work well if the workload
changes abruptly.
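The threshold rule just described can be sketched as a small control loop. The 60 percent upper threshold echoes the text's example; the lower threshold, the sustain period, and the class name are illustrative assumptions, not any provider's actual auto-scaling API.

```python
class DemandDrivenScaler:
    """Adds or removes instances when utilization stays beyond a
    threshold for a sustained number of periods (a sketch, not EC2's API)."""

    def __init__(self, upper=0.60, lower=0.20, sustain=3, min_inst=1, max_inst=20):
        self.upper, self.lower = upper, lower
        self.sustain = sustain                # periods the threshold must hold
        self.min_inst, self.max_inst = min_inst, max_inst
        self.instances = min_inst
        self._above = self._below = 0         # consecutive-period counters

    def observe(self, utilization):
        """Feed one utilization sample (0.0-1.0); returns the instance count."""
        if utilization > self.upper:
            self._above += 1
            self._below = 0
        elif utilization < self.lower:
            self._below += 1
            self._above = 0
        else:
            self._above = self._below = 0     # load is in the normal band

        if self._above >= self.sustain and self.instances < self.max_inst:
            self.instances += 1               # scale out on sustained high load
            self._above = 0
        elif self._below >= self.sustain and self.instances > self.min_inst:
            self.instances -= 1               # scale in on sustained low load
            self._below = 0
        return self.instances
```

Feeding three consecutive samples above 60 percent grows the allocation by one instance, mirroring the one-to-two Xeon example in the text; an abrupt spike shorter than the sustain window is ignored, which is exactly the weakness the text notes.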
The x-axis in Figure 4.25 is the time scale in
milliseconds. In the beginning, heavy fluctuations of
CPU load are encountered.
All three methods demanded a few VM instances
initially.
Gradually, the utilization rate becomes more
stabilized with a maximum of 20 VMs (100 percent
utilization) provided for demand-driven provisioning
in Figure 4.25(a).
However, the event-driven method reaches a stable
peak of 17 VMs toward the end of the event and
drops quickly in Figure 4.25(b).
The popularity-driven provisioning shown in Figure 4.25(c)
leads to a similar fluctuation, with peak VM utilization
in the middle of the plot.
Event-Driven Resource Provisioning
 This scheme adds or removes machine instances
based on a specific time event.
 The scheme works better for seasonal or
predicted events such as Christmastime in the
West and the Lunar New Year in the East.
 During these events, the number of users grows
before the event period and then decreases
during the event period.
 This scheme anticipates peak traffic before it
happens. The method results in a minimal loss of
QoS, if the event is predicted correctly.
Otherwise, wasted resources are even greater
due to events that do not follow a fixed pattern.
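The scheme above can be sketched as a capacity schedule that raises provisioning during the lead-up window before each known event (when the text says user numbers grow) and returns to the base level once the event begins. The event list, lead time, and instance counts are all illustrative assumptions.

```python
from datetime import date, timedelta

def event_driven_capacity(today, events, base=4, peak=16, lead_days=7):
    """Return the instance count for `today`, scaling up during the
    lead-up window before each known event (parameters are illustrative)."""
    for event_day in events:
        lead_start = event_day - timedelta(days=lead_days)
        if lead_start <= today < event_day:
            return peak    # anticipate the traffic surge before the event
    return base            # normal provisioning otherwise
```

If the Christmas surge does not materialize, the peak capacity held during the lead-up window is simply wasted, which is the risk the text describes for mispredicted events.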
Popularity-Driven Resource Provisioning
 In this method, the provider monitors Internet
traffic to gauge the popularity of certain applications,
and creates instances according to popularity demand.
 The scheme anticipates increased traffic with
popularity. Again, the scheme has a minimal loss of
QoS, if the predicted popularity is correct.
 Resources may be wasted if traffic does not occur
as expected. In Figure 4.25(c), EC2 performance
by CPU utilization rate (the dark curve with the
percentage scale shown on the left) is plotted
against the number of VMs provisioned (the light
curves with scale shown on the right, with a
maximum of 20 VMs provisioned).
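The popularity-driven idea can be sketched as a mapping from a monitored popularity signal to a provisioned VM count, capped at the 20-VM maximum used in the Figure 4.25 example. The `hits_per_vm` ratio is an invented illustration of how a popularity signal might be converted into capacity.

```python
def popularity_driven_vms(search_hits, hits_per_vm=500, max_vms=20):
    """Map a monitored popularity signal (e.g., search hits per period)
    to a VM count, capped at the 20-VM maximum from the Figure 4.25
    example. The hits_per_vm conversion ratio is illustrative."""
    needed = -(-search_hits // hits_per_vm)   # ceiling division
    return max(1, min(needed, max_vms))       # always keep at least one VM
```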
Dynamic Resource Deployment
 The cloud uses VMs as building blocks to create an
execution environment across multiple resource sites.
 The Inter-Grid-managed infrastructure was developed
by a Melbourne University group. Dynamic resource
deployment can be implemented to achieve scalability
in performance.
 The Inter-Grid is a Java-implemented software system
that lets users create execution cloud environments
on top of all participating grid resources.
 Peering arrangements established between gateways
enable the allocation of resources from multiple grids
to establish the execution environment.
In Figure 4.26, a scenario is illustrated by which an
inter-grid gateway (IGG) allocates resources from
a local cluster to deploy applications in three
steps:
 (1) requesting the VMs,
 (2) enacting the leases, and
 (3) deploying the VMs as requested. Under
peak demand, this IGG interacts with another
IGG that can allocate resources from a cloud
computing provider.
 A grid has predefined peering arrangements with
other grids, which the IGG manages. Through multiple
IGGs, the system coordinates the use of InterGrid
resources.
 An IGG is aware of the peering terms with other
grids, selects suitable grids that can provide the
required resources, and replies to requests from
other IGGs.
 Request redirection policies determine which peering
grid InterGrid selects to process a request and the
price at which that grid will perform the task.
 An IGG can also allocate resources from a cloud
provider.
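A price-based redirection policy of the kind described might look like the following sketch, which picks the cheapest peering grid with sufficient free capacity. The peer record fields are invented for illustration and do not reflect the actual InterGrid data model.

```python
def select_peering_grid(request_cores, peers):
    """Pick the cheapest peering grid that can supply the requested
    cores; a sketch of a price-based redirection policy. Each peer is
    a dict with invented fields: name, free_cores, price_per_core_hour."""
    candidates = [p for p in peers if p["free_cores"] >= request_cores]
    if not candidates:
        return None   # no peer fits; e.g., fall back to a cloud provider
    return min(candidates, key=lambda p: p["price_per_core_hour"])
```

Returning `None` models the case the text describes where the IGG, unable to satisfy peak demand from its peering grids, turns to a cloud computing provider instead.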
 The cloud system creates a virtual environment to
help users deploy their applications. These
applications use the distributed grid resources.
 The InterGrid allocates and provides a distributed
virtual environment (DVE).
 This is a virtual cluster of VMs that runs isolated
from other virtual clusters.
 A component called the DVE manager performs
resource allocation and management on behalf of
specific user applications.
 The core component of the IGG is a scheduler
for implementing provisioning policies and
peering with other gateways.
 The communication component provides an
asynchronous message-passing mechanism.
 Received messages are handled in parallel by a
thread pool.
Provisioning of Storage Resources
 The data storage layer is built on top of the physical or virtual
servers.
 Since cloud computing applications often provide services to
users, it is unavoidable that the data is stored in the clusters of the
cloud provider.
 The service can be accessed anywhere in the world.
 One example is e-mail systems. A typical large e-mail system might
have millions of users and each user can have thousands of e-mails
and consume multiple gigabytes of disk space.
 Another example is a web searching application.
 In storage technologies, hard disk drives may be augmented with
solid-state drives in the future. This will provide reliable and high-
performance data storage. The biggest barriers to adopting flash
memory in data centers have been price, capacity, and, to some
extent, a lack of sophisticated query-processing techniques.
However, this is about to change as the I/O bandwidth of solid-
state drives becomes too impressive to ignore.
 A distributed file system is very important for
storing large-scale data.
 However, other forms of data storage also exist.
Some data does not need the namespace of a
tree structure file system, and instead, databases
are built with stored data files.
 In cloud computing, another form of data
storage is (key, value) pairs. The Amazon S3 service
uses SOAP to access the objects stored in the
cloud.
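A (key, value) object store of the kind described exposes a very small interface. The sketch below is an in-memory stand-in with S3-like bucket/put/get semantics; it is not the actual SOAP API, and the method names are illustrative.

```python
class ObjectStore:
    """A minimal in-memory sketch of a (key, value) object store with
    bucket semantics loosely modeled on S3-style put/get/delete."""

    def __init__(self):
        self._buckets = {}

    def put(self, bucket, key, value):
        self._buckets.setdefault(bucket, {})[key] = value

    def get(self, bucket, key, default=None):
        return self._buckets.get(bucket, {}).get(key, default)

    def delete(self, bucket, key):
        self._buckets.get(bucket, {}).pop(key, None)

    def list_keys(self, bucket):
        return sorted(self._buckets.get(bucket, {}))
```

Note what is missing compared with a file system: there is no directory tree, only a flat namespace per bucket, which is exactly why the text says some data "does not need the namespace of a tree structure file system."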
Table 4.8 outlines three cloud storage services provided
by Google, Hadoop, and Amazon.
 Many cloud computing companies have
developed large-scale data storage systems
to keep the huge amounts of data collected every
day.
 For example, Google’s GFS stores web data
and some other data, such as geographic
data for Google Earth.
 A similar system from the open source
community is the Apache Hadoop Distributed File
System (HDFS). Hadoop is the open source
implementation of Google's cloud computing
infrastructure.
 Similar systems include Microsoft’s Cosmos
file system for the cloud.
 Although the storage service or distributed file
system can be accessed directly, similar to traditional
databases, cloud computing does provide some forms
of structured or semi-structured database processing
capability.
 For example, applications might want to process the
information contained in a web page. Web pages are
an example of semi-structured data in HTML format.
 If some forms of database capability can be used,
application developers will construct their application
logic more easily.
 Another reason to build a database-like service in
cloud computing is that it will be quite convenient for
traditional application developers to code for the
cloud platform.
 Databases are quite common as the underlying
storage device for many applications.
 Thus, such developers can think in the same way they do
for traditional software development.
 Hence, in cloud computing, it is necessary to build database-like
large-scale systems on top of data storage or distributed
file systems.
 The scale of such a database might be quite large for
processing huge amounts of data.
 The main purpose is to store the data in structured or semi-
structured form so that application developers can use it
easily and build their applications rapidly.
 Traditional databases hit a performance bottleneck
when the system is expanded to a larger scale. However,
some real applications do not need such strong consistency.
The scale of such databases can be quite large.
 Typical cloud databases include BigTable from Google,
SimpleDB from Amazon, and the SQL service from
Microsoft Azure.
Global Exchange of Cloud
Resources
 In order to support a large number of application
service consumers from around the world, cloud
infrastructure providers (i.e., IaaS providers) have
established data centers in multiple geographical
locations to provide redundancy and ensure reliability in
case of site failures.
 For example, Amazon has data centers in the United
States (e.g., one on the East Coast and another on the
West Coast) and Europe. However, currently Amazon
expects its cloud customers (i.e., SaaS providers) to
express a preference regarding where they want their
application services to be hosted.
 Amazon does not provide seamless/automatic
mechanisms for scaling its hosted services across
multiple geographically distributed data centers.
 This approach has many shortcomings.
 First, it is difficult for cloud customers to determine in
advance the best location for hosting their services as
they may not know the origin of consumers of their
services.
 Second, SaaS providers may not be able to meet the
QoS expectations of their service consumers
originating from multiple geographical locations.
 This necessitates building mechanisms for seamless
federation of data centers of a cloud provider, or
providers, supporting dynamic scaling of applications
across multiple domains in order to meet the QoS targets
of cloud customers.
 In addition, no single cloud infrastructure provider will be
able to establish its data centers at all possible locations
throughout the world.
 As a result, cloud application service (SaaS) providers will
have difficulty in meeting QoS expectations for all their
consumers.
 Hence, they would like to make use of services of multiple
cloud infrastructure service providers who can provide
better support for their specific consumer needs.
 This kind of requirement often arises in enterprises with
global operations and applications such as Internet services,
media hosting, and Web 2.0 applications.
 This necessitates federation of cloud infrastructure service
providers for seamless provisioning of services across
different cloud providers.
 To realize this, the Cloudbus Project at the University of
Melbourne has proposed the InterCloud architecture,
supporting brokering and exchange of cloud resources for
scaling applications across multiple clouds.
 By realizing InterCloud architectural principles in
mechanisms in their offering, cloud providers will be able to
dynamically expand or resize their provisioning capability
based on sudden spikes in workload demands by leasing
available computational and storage capabilities from other
cloud service providers; operate as part of a market-driven
resource leasing federation, where application service
providers such as Salesforce.com host their services based
on negotiated SLA contracts driven by competitive market
prices; and deliver on-demand, reliable, cost-effective, and
QoS-aware services based on virtualization technologies
while ensuring high QoS standards and minimizing service
costs.
 They need to be able to utilize market-based utility models
as the basis for provisioning of virtualized software services
and federated hardware infrastructure among users with
heterogeneous applications.
 The architecture consists of client brokering and coordinator
services that support utility-driven federation of clouds: application
scheduling, resource allocation, and migration of workloads.
 The architecture cohesively couples the administratively
and topologically distributed storage and compute
capabilities of clouds as part of a single resource leasing
abstraction.
 The system will ease cross-domain capability integration
for on-demand, flexible, energy-efficient, and reliable access
to the infrastructure based on virtualization technology.
 The Cloud Exchange (CEx) acts as a market maker for
bringing together service producers and consumers. It
aggregates the infrastructure demands from application
brokers and evaluates them against the available supply
currently published by the cloud coordinators.
 It supports trading of cloud services based on
competitive economic models such as commodity
markets and auctions.
 CEx allows participants to locate providers and
consumers with fitting offers. Such markets enable
services to be commoditized, and thus will pave the
way for creation of dynamic market infrastructure for
trading based on SLAs.
 An SLA specifies the details of the service to be
provided in terms of metrics agreed upon by all
parties, and incentives and penalties for meeting and
violating the expectations, respectively.
 The availability of a banking system within the market
ensures that financial transactions pertaining to SLAs
between participants are carried out in a secure and
dependable environment.
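An SLA's agreed metrics, incentives, and penalties can be captured in a simple settlement function. The choice of uptime as the metric, and the penalty and bonus rates below, are illustrative assumptions, not terms from any real cloud contract.

```python
def settle_sla(agreed_uptime, measured_uptime, base_fee,
               penalty_rate=0.05, bonus_rate=0.01):
    """Compute the fee due under an SLA: a small incentive for meeting
    or exceeding the agreed uptime, and a penalty proportional to the
    shortfall otherwise (rates are illustrative assumptions)."""
    if measured_uptime >= agreed_uptime:
        return base_fee * (1 + bonus_rate)          # incentive for meeting the SLA
    shortfall_points = (agreed_uptime - measured_uptime) * 100
    # deduct penalty_rate of the fee per percentage point of shortfall
    return base_fee * max(0.0, 1 - penalty_rate * shortfall_points)
```

Encoding the terms this way makes the "metrics agreed upon by all parties" machine-checkable, which is what lets a market's banking system settle transactions automatically.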
Security Overview
 Cloud service providers are leveraging virtualization
technologies combined with self-service capabilities for
computing resources via the Internet.
 In these service provider environments, virtual
machines from multiple organizations have to be co-
located on the same physical server in order to
maximize the efficiencies of virtualization.
 Cloud service providers must learn from the managed
service provider (MSP) model and ensure that their
customers’ applications and data are secure if they
hope to retain their customer base and
competitiveness.
 Today, enterprises are looking toward cloud computing
horizons to expand their on-premises infrastructure,
but most cannot afford the risk of compromising the
security of their applications and data.
For example, IDC recently conducted a survey (see Figure) of 244 IT
executives/CIOs and their line-of-business (LOB) colleagues to gauge
their opinions and understand their companies' use of IT cloud
services. Security ranked first as the greatest challenge or issue of
cloud computing.
Cloud Security Challenges
Lack of Visibility and Control
 In both public and hybrid cloud environments, the
loss of overall service visibility and the associated lack of
control can be a problem.
 A loss of visibility in the cloud can mean a loss
of control over several aspects of IT management and data
security. Whereas legacy-style in-house infrastructure was
entirely under the control of the company, cloud services
delivered by third-party providers don't offer the same level
of granularity with regard to administration and
management.
 When it comes to identifying potential security
vulnerabilities, this lack of visibility can lead to a business
failing to recognize potential risks. In some sectors, such as
media, cloud adoption is as low as 17%, which has been
blamed on this lack of visibility and control.
Data Breaches and Downtime
 Despite the fact that generally speaking,
enterprise-grade cloud services are more secure
than legacy architecture, there is still a potential
cost in the form of data breaches and downtime.
With public and private cloud offerings, resolving
these types of problems is in the hands of the
third-party provider. Consequently, the business
has very little control over how long critical
business systems may be offline, as well as how
well the breach is managed.
 The 12th annual Cost of Data Breach Study,
sponsored by IBM, found that the average total
cost of a data breach globally amounted to $3.62 million,
so we can see how this particular issue is a major
one with regard to cloud adoption.
Vendor Lock-In
 For companies that come to rely heavily on public
and hybrid cloud platforms, there is a danger that
they become forced to continue with a specific
third-party vendor simply to retain operational
capacity. If critical business applications are locked
into a single vendor, it can be very difficult to
make tactical decisions such as moving to a new
vendor. In effect, the vendor is being provided
with the leverage it needs to force the customer
into an unfavourable contract.
 Logicworks recently performed a survey which
found that some 78% of IT decision
makers blame the fear of vendor lock-in as a
primary reason for their organization failing to
gain maximum value from cloud computing.
Compliance Complexity
 In sectors such as healthcare and finance, where
legislative requirements with regard to storage of
private data are heavy, achieving full compliance whilst
using public or private cloud offerings can be more
complex.
 Many enterprises attempt to gain compliance by using
a cloud vendor that is deemed fully compliant. Indeed,
data shows that some 51% of firms in the USA rely
on nothing more than a statement of compliance
from their cloud vendor as confirmation that all
legislative requirements have been met.
 But what happens when at a later stage, it is found
that the vendor is not actually fully compliant? The
client company could find itself facing non-
compliance, with very little control over how the
problem can be resolved.
A Lack of Transparency
 When a business buys in third-party cloud
services as either a public or hybrid cloud
offering, it is likely they will not be provided
with a full service description, detailing
exactly how the platform works, and the
security processes the vendor operates.
 This lack of service transparency makes it
hard for customers to intelligently evaluate
whether their data is being stored and
processed securely at all times. Surveys have
shown that around 75% of IT managers are
only marginally confident that company data
is being stored securely by their cloud
vendor.
Insecure Interfaces and APIs
 Cloud vendors provide their customers with a
range of Application Programming Interfaces
(APIs), which the customer uses to manage the
cloud service.
 Unfortunately, not every API is entirely secure.
An API may have been deemed secure initially, and
then at a later stage be found to be insecure in
some way. This problem is compounded when the
client company has built its own application layer
on top of these APIs. The security vulnerability
will then exist in the customer’s own application.
This could be an internal application, or even a
public facing application potentially exposing
private data.
Insufficient Due Diligence
 For companies that lack the internal resources to fully
evaluate the implications of cloud adoption, then the risk
of deploying a platform that is ineffective and even
insecure is real.
 Responsibility for specific issues of data security needs to
be fully defined before any deployment. Failing to do so
could lead to a situation where there is no clearly defined
way to deal with potential risks and solve current security
vulnerabilities.
Shared Technology Vulnerabilities
 Using public or hybrid cloud offerings can expose a
business to security vulnerabilities caused by other users
of the same cloud infrastructure.
 The onus is upon the cloud vendor to see that this does
not happen, yet no vendor is perfect. It is always possible
that a security vulnerability caused by another user in the
same cloud will affect every user.
Other Potential Threats
 Alongside the potential security vulnerabilities
relating directly to the cloud service, there are also a
number of external threats which could cause an
issue. Some of these are:
 Man in the Middle attacks – where a third party
manages to become a relay of data between a source
and a destination. If this is achieved, the data being
transmitted can be altered.
 Distributed Denial of Service – a DDoS attack
attempts to knock a resource offline by flooding it
with too much traffic.
 Account or Service Traffic Hijacking – a
successful attack of this kind could provide an
intruder with passwords or other access keys which
allow them access to secure data.
Software-as-a-Service Security
 Cloud computing models of the future
will likely combine the use of SaaS (and
other XaaS’s as appropriate), utility
computing, and Web 2.0 collaboration
technologies to leverage the Internet to
satisfy their customers’ needs.
 New business models being developed as
a result of the move to cloud computing
are creating not only new technologies
and business operational processes but
also new security requirements and
challenges as described previously.
 As the most recent evolutionary step in the cloud service model (see
Figure 6.2), SaaS will likely remain the dominant cloud service model
for the foreseeable future and the area where the most critical need
for security practices and oversight will reside.
 Just as with a managed service provider, corporations or
end users will need to research vendors' policies on data
security before using vendor services, to avoid losing or not
being able to access their data.
 The technology analyst and consulting firm Gartner lists
seven security issues which one should discuss with a cloud-
computing vendor:
1. Privileged user access—Inquire about who has
specialized access to data, and about the hiring and
management of such administrators.
2. Regulatory compliance—Make sure that the vendor is
willing to undergo external audits and/or security
certifications.
3. Data location—Does the provider allow for any
control over the location of data?
4. Data segregation—Make sure that encryption is
available at all stages, and that these encryption
schemes were designed and tested by experienced
professionals.
5. Recovery—Find out what will happen to data in the
case of a disaster. Do they offer complete restoration?
If so, how long would that take?
6. Investigative support—Does the vendor have the
ability to investigate any inappropriate or illegal
activity?
7. Long-term viability—What will happen to data if the
company goes out of business? How will data be
returned, and in what format?
 Determining data security is harder today, so data
security functions have become more critical than they
have been in the past.
 A tactic not covered by Gartner is to encrypt the data
yourself. If you encrypt the data using a trusted
algorithm, then regardless of the service provider’s
security and encryption policies, the data will only be
accessible with the decryption keys.
 Of course, this leads to a follow-on problem: How do
you manage private keys in a pay-on-demand computing
infrastructure?
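The encrypt-it-yourself tactic amounts to performing authenticated encryption on the client before any data reaches the provider. The sketch below is pedagogical: it builds a keystream from SHA-256 and an HMAC integrity tag using only the standard library, because it is an assumption-free way to show the idea. A real deployment should use a vetted cipher such as AES-GCM from a maintained cryptography library, and the key-management question raised above still remains.

```python
import hashlib
import hmac
import os

def _keystream(key, nonce, length):
    """Derive a keystream by hashing key||nonce||counter. Pedagogical only:
    production code should use a vetted cipher such as AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """Encrypt client-side before upload; output = nonce || ciphertext || tag."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + ct + tag

def decrypt(key, blob):
    """Verify the integrity tag, then recover the plaintext."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("ciphertext tampered with or wrong key")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Because the provider only ever stores the output of `encrypt`, its own security and encryption policies become irrelevant to confidentiality, exactly as the text argues; the data is readable only to whoever holds the decryption key.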
 To address the security issues listed above along with
others mentioned earlier in the chapter, SaaS providers
will need to incorporate and enhance security practices
used by the managed service providers and develop
new ones as the cloud computing environment evolves.
Security Governance
What Is Cloud Security Governance?
 Cloud security governance refers to the management model
that facilitates effective and efficient security management
and operations in the cloud environment so that an
enterprise’s business targets are achieved. This model
incorporates a hierarchy of executive mandates, performance
expectations, operational practices, structures, and metrics
that, when implemented, result in the optimization of
business value for an enterprise.
 Cloud security governance helps answer leadership
questions such as:
 Are our security investments yielding the desired
returns?
 Do we know our security risks and their business
impact?
 Are we progressively reducing security risks to
acceptable levels?
 Have we established a security-conscious culture
within the enterprise?
Key Objectives for Cloud Security Governance
 Building a cloud security governance model for an
enterprise requires strategic-level security management
competencies in combination with the use of
appropriate security standards and frameworks (e.g.,
NIST, ISO, CSA) and the adoption of a governance
framework (e.g., COBIT).
 The first step is to visualize the overall governance
structure, inherent components, and to direct its
effective design and implementation.
 The use of appropriate security standards and
frameworks allows a minimum standard of security
controls to be implemented in the cloud, while also
meeting customer and regulatory
compliance obligations where applicable.
 A governance framework provides referential guidance
and best practices for establishing the governance
model for security in the cloud.
The following represents key objectives to pursue in establishing a
governance model for security in the cloud. These objectives assume
that appropriate security standards and a governance framework have
been chosen based on the enterprise’s business targets, customer
profile, and obligations for protecting data and other information
assets in the cloud environment.
Strategic Alignment
Enterprises should mandate that security investments, services,
and projects in the cloud are executed to achieve established
business goals (e.g., market competitiveness, financial, or
operational performance).
Value Delivery
Enterprises should define, operationalize, and maintain an
appropriate security function/organization with appropriate
strategic and tactical representation, and charged with the
responsibility to maximize the business value (Key Goal
Indicators, ROI) from the pursuit of security initiatives in the
cloud.
Risk Mitigation
Security initiatives in the cloud should be subject to measurements
that gauge effectiveness in mitigating risk to the enterprise (Key Risk
Indicators). These initiatives should also yield results that
progressively demonstrate a reduction in these risks over time.
Effective Use of Resources
It is important for enterprises to establish a practical operating
model for managing and performing security operations in the
cloud, including the proper definition and operationalization of due
processes, the institution of appropriate roles and responsibilities,
and use of relevant tools for overall efficiency and effectiveness.
Sustained Performance
Security initiatives in the cloud should be measurable in terms of
performance, value and risk to the enterprise (Key Performance
Indicators, Key Risk Indicators), and yield results that demonstrate
attainment of desired targets (Key Goal Indicators) over time.
Virtual Machine Security
Virtual Environment Security
Mechanisms
 The security of VM-based services rests on the assumption that
the underlying trusted computing base (TCB) is also secure. If the
TCB is compromised, then all bets are off for the VM-based
services.
Security of Virtual Machines
 In a Type I virtual machine, the trusted computing base is the
virtual machine monitor. Some services also need to include a
dedicated secure VM as part of the TCB. The TCB is considered to be
secure because “it is so simple that its implementation can be
reasonably expected to be correct.” Virtual machine monitors are
only responsible for virtualizing the physical machine’s hardware
and partitioning it into logically separate virtual machines.
 Compared to a full operating system, which may have several
million lines of code, VMMs have around 30,000 lines of code. Also,
the secure VM typically has a reduced mini-OS without any
unneeded services or components.
 In addition to having a small code base, the interfaces to VMM and
the dedicated security VM are much simpler, more constrained, and
better specified than a standard operating system. This helps
reduce the risk of security vulnerabilities.
Consider the following to provide
VM security
a) Mandatory access control:
 The MAC component runs in a separate VM, and the
administrator can modify the security policy.
b) Para-virtualization:
 The interface exposed to the guest OS consists of three
components: memory management, CPU, and device I/O. The
guest OS is responsible for managing these resources.
c) Policy considerations:
 It is better to have proper guidelines and security policies that
can be applied dynamically in accordance with changes in
the virtual environment.
Benefits
 Security: An important feature of virtualization is isolation: software
running in one VM cannot interact with software in another VM
running on the same machine. This provides significant security
benefits.
 Intrusion protection: Michael Price [4] introduces the concept of
clones in the context of signature-based intrusion detection, where
the state of a system is determined by monitoring system activity.
He suggests that instead of looking for attack patterns on the
original machines, clones can be created and their events monitored
[4]. Clones can run in standby mode, be synchronized with the real
machine, and then have their activity patterns monitored [4, 15]. In
this manner, the real system need not be compromised.
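The clone-based approach above can be sketched in a few lines of Python. This is a toy illustration with invented event names and signatures: the event log comes from the synchronized standby clone, so the production VM itself is never disturbed by the monitoring.

```python
# Invented attack signatures: each is a contiguous run of suspicious events.
SIGNATURES = {
    "repeated-auth-failure": ["auth_fail", "auth_fail", "auth_fail"],
    "probe-then-exec":       ["port_scan", "shell_exec"],
}

def matches(events, pattern):
    """True if `pattern` occurs as a contiguous run inside `events`."""
    n = len(pattern)
    return any(events[i:i + n] == pattern for i in range(len(events) - n + 1))

def scan_clone(clone_log):
    """Return the names of all signatures that fire on the clone's event log."""
    return [name for name, pat in SIGNATURES.items() if matches(clone_log, pat)]

# Event stream captured from the standby clone, not the real machine.
clone_log = ["login", "auth_fail", "auth_fail", "auth_fail",
             "port_scan", "shell_exec"]
assert scan_clone(clone_log) == ["repeated-auth-failure", "probe-then-exec"]
```

Real signature-based systems match far richer patterns, but the key design point is the same: analysis runs against the clone's event stream, not the live system.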
Identity Access Management (IAM)
What is Identity Access Management in
Cloud Computing
 The concept of identity in the cloud can refer
to many things, but for the purpose of this
discussion, we will focus on two main
entities:
 users
 cloud resources.
 IAM policies are sets of permission policies
that can be attached to either users or cloud
resources to authorize what they access and
what they can do with it.
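As a sketch of the attach-a-policy idea, the evaluator below uses a simplified, invented policy grammar; real providers (for example, AWS IAM) use richer JSON documents, but share the same default-deny, explicit-deny-wins evaluation logic.

```python
# Simplified, hypothetical IAM-style policy documents attached to users.
POLICIES = {
    "alice": [
        {"effect": "allow", "actions": {"storage:read", "storage:write"},
         "resources": {"bucket/reports"}},
        {"effect": "deny", "actions": {"storage:write"},
         "resources": {"bucket/reports"}},
    ],
}

def is_allowed(user: str, action: str, resource: str) -> bool:
    """Default deny; an explicit deny beats any allow."""
    allowed = False
    for stmt in POLICIES.get(user, []):
        if action in stmt["actions"] and resource in stmt["resources"]:
            if stmt["effect"] == "deny":
                return False
            allowed = True
    return allowed

assert is_allowed("alice", "storage:read", "bucket/reports")
assert not is_allowed("alice", "storage:write", "bucket/reports")  # explicit deny
assert not is_allowed("bob", "storage:read", "bucket/reports")     # default deny
```

The same evaluation applies whether the policy is attached to the user or to the cloud resource; what changes is only where the policy document is looked up.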
Roles of Identity Access Management in Cloud
Security
 IAM is crucial to protecting sensitive enterprise
systems, assets, and information from unauthorized
access or use. It represents the systematic
management of each individual identity, providing
authentication, authorization, privileges, and roles
across the enterprise.
 The primary goal is to improve security and
productivity while decreasing total cost, repetitive
tasks, and system downtime. Identity access
management in cloud computing covers all types of
users, who may work with different devices under
different circumstances.
 In a cloud system, the storage and processing of data
are performed by organizations themselves or with the help of
third-party vendors. The service provider has to ensure
that data and applications stored in the cloud are
protected and that the underlying infrastructure is
secure. Further, users need to verify that their
credentials for authentication are secure.
 Many security issues can compromise data during
data access and storage in the cloud
environment, especially when data is stored with
the help of third-party vendors who may themselves
be malicious. Though standards and best
practices are available for overcoming such security
problems, cloud service providers are often reluctant to
secure their networks with the updated set of security
standards.
 Identity and access management is one of the best
practices for securing cloud services. Presently, Identity
and Access Management (IAM) provides effective security
for cloud systems. IAM systems perform different
operations for providing security in the cloud
environment, including authentication, authorization,
provisioning of storage, and verification. An IAM system
guarantees the security of the identities and attributes of
cloud users by ensuring that only the right persons are
allowed into the cloud systems. IAM systems also help to
manage access rights by checking that the right person with
the right privileges is accessing the information stored
in cloud systems.
 Currently, many organizations use Identity and Access
Management systems to provide more security for
sensitive information that is stored in the cloud
environment.
How Identity Access Management can control Interactions
with Data and Systems
IAM can move beyond simply allowing or blocking access to data
and systems.
For example, IAM can:
Restrict access to data: Specific roles can access only necessary
parts of systems, databases, and information.
Only allow view access: Users with such roles can only view data,
they cannot add, update, or amend it.
Only permit access on certain platforms: Users may have
access to some platforms (e.g., production systems) but not to
others (e.g., development or testing platforms).
Only allow access to create, amend, or delete data, not to
transmit it: Some roles may not be able to send or receive data
outside the system, meaning it cannot be exposed to other third
parties and applications.
Based on a company’s specific requirements, there are many ways to
implement IAM policies to define and enforce exactly how individual
roles can access systems and data.
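A minimal sketch of the restrictions listed above, with invented role, platform, and action names: each role maps the platforms it may use to the actions it may perform there, so view-only, platform-scoped, and no-transmit roles all fall out of a single lookup.

```python
# Invented role definitions: each role maps platforms to permitted actions.
ROLES = {
    "analyst":    {"production": {"view"}},                     # view-only
    "developer":  {"development": {"view", "create", "amend", "delete"},
                   "testing":     {"view", "create", "amend", "delete"}},
    "integrator": {"production": {"view", "transmit"}},         # may send data out
}

def can(role: str, platform: str, action: str) -> bool:
    """Default deny: only an explicitly granted (role, platform, action) passes."""
    return action in ROLES.get(role, {}).get(platform, set())

assert can("analyst", "production", "view")
assert not can("analyst", "production", "amend")      # view access only
assert not can("developer", "production", "view")     # wrong platform
assert not can("developer", "testing", "transmit")    # cannot transmit data
assert can("integrator", "production", "transmit")
```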
Why Identity Access Management is a Vital IT
Enablement & Security Layer
IAM offers several advantages over traditional identity
products. Below is a list of some benefits of
identity management in cloud computing:
 Enhanced Network Abilities: Identity access management
(IAM) makes it simple to share network capabilities
with the complete grid of users connected to it.
 On-demand Support: 24×7 support
and monitoring can be provided based on need.
 Increased Overall Productivity: Cloud-based services are
configured and hosted by service providers. As a result, many
organizations can improve their overall productivity instead
of worrying about the infrastructure.
 Centralized Management System: Clients can
manage all their services and programs in one place with
cloud-based services. Identity access management can be
done with one click on a single dashboard.
Security Standards
Cloud Security Standards: Benefits
• Standards promote interoperability, reducing vendor lock-in
• Standards facilitate hybrid cloud computing by making it easier to
integrate on-premises security technologies with those of cloud
service providers (CSPs)
• Standards provide assurance that best practices are being followed
both internally within an enterprise and by CSPs
• Standards support provides an effective means by which cloud
service customers can compare and contrast CSPs
• Standards support enables an easier path to regulatory compliance
Cloud Security Standards: Current Landscape
Cloud security standards are maturing, and different types of
standards need to be considered:
 Formal standards specific to cloud security have already been published
 General IT security standards are applicable to cloud computing;
insist that cloud service providers support these
 Cloud service customers should insist on security certifications
from providers
 Cloud-specific security certifications are available today (emerging)
 Existing general security certifications are also applicable

Advisory Standards
 Interpreted and applied to all types and sizes of organizations
 Flexibility: latitude to adopt information security controls that
make sense to users
 Unsuitable for compliance testing
 Examples: ISO 27002, ISO 38500, COBIT

Security Frameworks
 Define specific policies, controls, checklists, procedures
 Define processes for examining support; auditors use them to
assess and measure CSP conformance
 Suitable for compliance testing
 Examples: ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018

Standards Specifications
 Define APIs and protocols
 Extensibility: optional functions that go beyond those defined in
the standard
 Formal certifications not provided; compliance and interoperability
test suites may be available
 Examples: OAuth 2.0, SAML 2.0, SSL/TLS, X.509
10 Steps to Evaluate Cloud Security
1. Ensure effective governance, risk & compliance
2. Audit operational & business processes
3. Manage people, roles & identities
4. Ensure proper protection of data & information
5. Enforce privacy policies
6. Assess the security provisions for cloud applications
7. Ensure cloud networks & connections are secure
8. Evaluate security controls on physical infrastructure & facilities
9. Manage security terms in the cloud service agreement
10. Understand the security requirements of the exit process
Step 1: Ensure effective governance, risk and compliance
GRC Requirements
 Cloud computing presents different risks than traditional
IT solutions
 Customers must understand their risk tolerance and
must focus on mitigating the risks that are most crucial
to the organization
 Customers must fully understand specific laws or
regulations that apply to the services (data retention,
privacy requirements, etc.)
 Customers should be notified if any breach occurs,
regardless of whether the customer is directly impacted
 The primary means to ensure application and data security is
through the Cloud Service Agreement
 General IT governance standards apply to the cloud
 Country- and industry-specific governance standards also
apply
Step 2: Audit operational & business processes
Audit Requirements
 Security audits of cloud service providers
are essential
 Security audits should be carried out by
appropriately skilled staff
 Security audits should leverage an
established standard for security controls
 Typically done as part of a formal
certification process
 Insist on ISO/IEC 27001 and ISO/IEC
27017 or equivalent certification
 For cloud services with impact on
financial activities, seek SSAE 16
certification
Step 3: Manage people, roles & identities
Considerations
 The cloud service provider should support:
• Federated identity management
• Delegated user administration
• Single sign-on
• Strong, multi-factor, mutual and/or
even biometric authentication
• Role, entitlement and policy
management
• Identity & access audit
 Any access to the provider’s management
platform should be monitored and logged
 Several standards are available for federated
identities, single sign-on and access control
(e.g., SAML, OAuth, WS-Federation)
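One of the multi-factor mechanisms listed above is a time-based one-time password (TOTP, RFC 6238), the code behind most authenticator apps. The sketch below uses only the Python standard library and is checked against the published RFC 4226/6238 test secret:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at // step)        # moving factor = time step
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Published RFC test secret: the ASCII string "12345678901234567890".
SECRET = base64.b32encode(b"12345678901234567890").decode()
assert totp(SECRET, at=59) == "287082"   # matches the RFC 6238 vector (6 digits)
assert totp(SECRET, at=1) == "755224"    # RFC 4226 vector for counter 0
```

Because both sides derive the code from a shared secret and the current time, the provider can verify the second factor without any round trip to the user's device.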
Step 4: Ensure proper protection of data & information
Considerations
 Security considerations apply to data at rest
as well as data in motion
 Controls for securing data in the cloud:
• Create a data asset catalog
• Consider all forms of data
• Consider privacy requirements
• Apply confidentiality, integrity and
availability principles
• Apply identity and access
management
 Seek security standards for the control
framework, data in motion and data
encryption (e.g., KMIP for key management)
Step 5: Enforce policies for protection of personal data
Considerations
 “Privacy”: the acquisition, storage and use of personally
identifiable information (PII)
• Gaining importance
• Laws and regulations usually apply (e.g., the
EU-US Privacy Shield)
 Privacy requirements include:
• Limitations on use of and access to PII
• Tagging PII data correctly
• Securing storage of PII
• Limiting access to authorized users
 Specific types of PII require special treatment
• Health data: HIPAA
• Credit card data: PCI-DSS
 Privacy issues should be addressed in the CSA
• ISO/IEC 19086 covers cloud SLAs
 Required controls are specified in ISO/IEC 27018
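Limiting the exposure of PII often comes down to masking identifiers before they leave a controlled store. The helpers below are invented illustrations (keeping only the last four card digits follows the familiar PCI-DSS display rule):

```python
import re

def mask_card(pan: str) -> str:
    """Mask a payment card number, keeping only the last four digits."""
    digits = re.sub(r"\D", "", pan)        # strip spaces and dashes
    return "*" * (len(digits) - 4) + digits[-4:]

def mask_email(addr: str) -> str:
    """Mask the local part of an e-mail address, keeping its first character."""
    local, _, domain = addr.partition("@")
    return local[0] + "***@" + domain

assert mask_card("4111 1111 1111 1111") == "************1111"
assert mask_email("alice@example.com") == "a***@example.com"
```

Tagging PII fields in the data asset catalog tells the system which columns need such masking before display, logging, or export.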
Step 6: Assess the security provisions for cloud applications
Considerations
 Application security in the cloud environment spans the complete
lifecycle: secure development, secure deployment, and security
testing (see ISO 27034 on application security, OWASP, and NIST
SP 800-115 on security testing)
 The deployment model impacts application security:
 Infrastructure as a Service
• Customer is responsible for the majority of security components
 Platform as a Service
• Provider is responsible for providing a secure operating system,
middleware and network
• Customer is responsible for application security
 Software as a Service
• Provider provides application security
• Customer must understand data encryption standards, audit
capabilities and SLAs
Step 7: Ensure cloud networks & connections are secure
Considerations
 Customer should gain assurance on the
provider’s internal and external network
security
 Areas of concern: confidentiality, integrity
and availability
 External network requirements
• Traffic screening
• Intrusion detection / prevention
• Logging and notification
 Internal network requirements
• Protect clients from each other
• Protect the provider’s network
• Monitor for intrusion attempts
 ISO 27033 addresses network security
 NIST 800-53 R4 has useful controls
 FedRAMP specifies specific controls
Step 8: Evaluate security controls on physical infrastructure &
facilities
Considerations
 Customer should gain assurance on the provider’s
physical security
• Physical infrastructure & facilities should
be in a secure area
• Protection against external &
environmental threats
• Control of personnel in working areas
• Equipment security controls
• Controls on supporting utilities
• Control security of cabling
• Proper equipment maintenance
• Removal & disposal of assets
• Human resource security
• DR & BC plans in place
 Look for certification to the ISO/IEC 27002 standard
(Security Techniques: Code of Practice for Information
Security Controls)
 ANSI TIA-942 covers physical security
Step 9: Manage security terms in the cloud service agreement (CSA)
Considerations
 Understand who is responsible for what (provider or
customer)
 CSA should specify that (and how) the customer is notified
of security incidents
 CSA must also cover recovery measures and customer
compensation
 Security clauses in the CSA apply to the cloud provider as
well as its subcontractors
 SLAs must include metrics for the performance and
effectiveness of information security management
(see ISO/IEC 27004 and 19086, NIST SP 800-55 and the
CIS Consensus Metrics 1.1.0)
 Data protection should follow ISO/IEC 27018
 Require compliance reports covering security
controls, services and mechanisms (e.g., CSA STAR
Registry entries)
 ISO/IEC 27017 specializes ISO/IEC 27002 to the cloud
 Related references: Common Weakness Enumeration
(CWE), TR178
Step 10: Understand the security requirements of the exit process
Considerations
 Once the termination process is complete, “the
right to be forgotten” should be achieved
 No customer data should reside with the
provider after the exit process
 Require the provider to cleanse log and audit
data
• Some jurisdictions may require
retention of records of this type for
specified periods by law
 The exit process must allow the customer a smooth
transition without loss or breach of data
 The emerging ISO/IEC 19086 standard (Cloud Computing –
Service Level Agreement Framework) contains language on
the exit process