Resource pooling is one of the essential characteristics of cloud computing, in which the provider's resources are pooled together and shared across multiple customers.
The essential characteristics of cloud computing are defined by NIST. The video and text instructions can be found at the bottom of the page.
The NIST definition of resource pooling states that the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model. The customer has no control over the exact location of the provided resources, but may be able to specify location at a higher level of abstraction. Examples of resources include storage, processing, memory, and network bandwidth.
Let's take a closer look at this. We can pool the resources of the physical server that we are running virtual machines on.
We need to go back to our lab demo for this. You can see that I have two hosts in my lab, both running VMware. This is similar to the kind of software that cloud providers use to manage their hosts and virtual machines, although they may be using a different vendor's hypervisor.
The physical host has two processor sockets with two cores per processor, and 2 GB of RAM. A real-world cloud provider would be using far more powerful hosts than I have here.
The virtual machines running on the host are allocated their resources from that physical hardware. I can see that I have three virtual machines on here, which I use for my lab demonstrations.
If I click on OpenFiler1, I can see that it has over 300 MB of memory and four virtual CPUs. It also has a large amount of storage space, which is located on my external SAN storage.
Nostalgia2 is a virtual machine I use to run old DOS games. It has one virtual processor and 32 MB of memory.
The virtual machines share access to the physical host's hardware. The job of the hypervisor is to make sure that each virtual machine gets its fair share of those resources.
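The idea of a fair share can be sketched as a simple proportional allocation. This is a minimal illustration, not the actual algorithm any particular hypervisor uses; the VM names and share values are hypothetical.

```python
# Sketch of proportional fair-share CPU allocation, similar in spirit to
# what a hypervisor scheduler does. Shares and capacities are made up.

def fair_share(total_mhz, shares):
    """Divide total CPU capacity among VMs in proportion to their shares."""
    total_shares = sum(shares.values())
    return {vm: total_mhz * s / total_shares for vm, s in shares.items()}

# Three VMs contending for a 10,000 MHz host; OpenFiler1 has double weight.
allocation = fair_share(10_000, {"OpenFiler1": 2000, "Nostalgia2": 1000, "LabVM": 1000})
print(allocation)  # OpenFiler1 gets half the capacity, the others a quarter each
```

In practice, hypervisors also use reservations and limits on top of shares, but the core principle is the same: contended capacity is divided in proportion to each VM's entitlement.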
With cloud computing we have the concept of tenants. Customer A is a different customer than Customer B, so they are separate tenants. A system where multiple customers use the same underlying infrastructure is a multi-tenant system.
We often have virtual machines for different customers running on the same physical server. The cloud provider will make sure not to put too many virtual machines on any single server, so that all of them get good levels of performance, and the virtual machines are kept securely separated from each other.
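A provider's placement logic can be sketched as a simple admission check: a new VM is only placed on a host if the host stays within an overcommit limit. This is an illustrative toy, not any real provider's scheduler, and the numbers are made up.

```python
# Hypothetical placement check: only admit a new VM onto a host if total
# allocated RAM stays under an overcommit limit.

def can_place(host_ram_mb, placed_vms_mb, new_vm_mb, max_ratio=1.5):
    """Allow placement while allocated RAM <= max_ratio * physical RAM."""
    return sum(placed_vms_mb) + new_vm_mb <= host_ram_mb * max_ratio

host_ram = 2048                      # the 2 GB lab host from above
existing = [320, 32, 512]            # RAM already allocated to tenant VMs
print(can_place(host_ram, existing, 1024))  # True: still within the limit
print(can_place(host_ram, existing, 4096))  # False: host would be overloaded
```

Real placement engines weigh CPU, memory, storage, and network together, but the principle is the same: no single host is loaded to the point where tenants' performance suffers.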
Storage is the next resource that we can pool. In the diagram, the big blue box is a centralised storage system containing many hard drives, represented by the smaller white squares.
My centralised storage allows me to slice up the capacity and give each virtual machine its own small part of it. The example below shows a slice of the first disk being allocated as a boot disk.
I can give each tenant exactly as much storage as they need, rather than having to dedicate whole disks to different servers. Storage efficiency techniques such as thin provisioning, deduplication, and compression can be used to make further savings.
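Thin provisioning in particular can be illustrated with a small sketch: each tenant is promised a logical volume size, but physical capacity is only consumed as data is actually written. The class and figures below are hypothetical.

```python
# Sketch of thin provisioning: logical size is what the tenant sees,
# physical capacity is only consumed on write.

class ThinVolume:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb   # capacity promised to the tenant
        self.used_gb = 0               # capacity physically consumed

    def write(self, gb):
        # Physical usage grows with writes, capped at the logical size.
        self.used_gb = min(self.logical_gb, self.used_gb + gb)

# Three tenants each provisioned a 100 GB volume.
vols = [ThinVolume(100), ThinVolume(100), ThinVolume(100)]
vols[0].write(20)
vols[1].write(5)

promised = sum(v.logical_gb for v in vols)   # 300 GB promised in total
consumed = sum(v.used_gb for v in vols)      # only 25 GB physically used
print(promised, consumed)
```

The provider can therefore promise far more logical capacity than the physical disks hold, adding real disks only as actual usage grows.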
All of the different tenants are going to have firewall rules controlling what traffic is allowed in to their virtual machines, such as RDP for management, and HTTP traffic on port 80 if the virtual machine is a web server.
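A per-tenant rule table can be sketched like this; the tenant names and port sets are hypothetical, and a real shared firewall would match on far more than the destination port.

```python
# Hypothetical per-tenant inbound firewall rules on a shared firewall:
# each tenant permits only the ports its VMs need.

RULES = {
    "customer_a": {3389, 80},   # web server: RDP management + HTTP
    "customer_b": {3389},       # RDP management access only
}

def allowed(tenant, port):
    """Return True if inbound traffic on this port is permitted for the tenant."""
    return port in RULES.get(tenant, set())

print(allowed("customer_a", 80))    # True: HTTP permitted for the web server
print(allowed("customer_b", 80))    # False: customer B allows no web traffic
```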
Because we don't need to give every customer their own dedicated firewall, we can share the same physical firewalls between different customers. Load balancers can likewise be shared between multiple customers.
There are multiple switches and routers in the main section of the diagram. Traffic for different customers passes through the same shared devices.
On the right-hand side of the diagram, you can see that the cloud provider is also providing various shared services to the customers. For example, Windows Update and Red Hat Update servers are provided for operating system patching, and a centralised DNS service means customers don't have to provide their own DNS solution.
NIST states that the customer has no knowledge of or control over the exact location of the provided resources, but may be able to specify location at a higher level of abstraction, such as the country, state, or data center.
When I spun up a virtual machine, I did it in the Singapore data centre because I am in the South East Asia region. Having it close to me gives me the lowest network latency and the best performance.
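That choice amounts to picking the region with the lowest latency from the customer's location. A minimal sketch, with made-up region names and latency figures:

```python
# Hypothetical latency measurements (ms) from a customer in South East Asia
# to a few candidate cloud regions.

latencies_ms = {
    "singapore": 12,
    "us-east": 240,
    "eu-west": 190,
}

# Pick the region with the lowest measured latency.
best_region = min(latencies_ms, key=latencies_ms.get)
print(best_region)  # singapore: the closest region wins
```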
I know which data centre my virtual machine is in, but not the actual server it is running on. It could be on any of the physical servers in that data centre, and it could be using any of the individual storage systems there. The customer doesn't need to care about those specifics.
The cloud provider needs less equipment in their data centers if they share equipment between customers rather than dedicating separate hardware to each one. Economies of scale bring better efficiency and cost savings, which can be passed on to the customer. It's a more viable solution from a financial point of view.