Which of the following can manage OS and application as a single unit by encapsulating them?

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Full Virtualization

Full virtualization is a virtualization technique used to provide a VME that completely simulates the underlying hardware. In this type of environment, any software capable of execution on the physical hardware can be run in the VM, and any OS supported by the underlying hardware can be run in each individual VM. Users can run multiple different guest OSes simultaneously. In full virtualization, the VM simulates enough hardware to allow an unmodified guest OS to be run in isolation. This is particularly helpful in a number of situations. For example, in OS development, experimental new code can be run at the same time as older versions, each in a separate VM. The hypervisor provides each VM with all the services of the physical system, including a virtual BIOS, virtual devices, and virtualized memory management. The guest OS is fully decoupled from the underlying hardware by the virtualization layer.

Full virtualization is achieved by using a combination of binary translation and direct execution. With full virtualization hypervisors, the physical CPU executes nonsensitive instructions at native speed; OS instructions are translated on the fly and cached for future use, and user-level instructions run unmodified at native speed. Full virtualization offers the best isolation and security for VMs and simplifies migration and portability as the same guest OS instance can run on virtualized or native hardware. Figure 1.5 shows the concept behind full virtualization.
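
As a rough illustration of that split between direct execution and translate-and-cache, here is a toy Python sketch; the instruction names, the SENSITIVE set, and the translate() rule are invented for illustration and are not VMware's actual translator.

```python
# Toy model of full virtualization via binary translation + direct execution.
# Instruction names and translation rules are illustrative only.

SENSITIVE = {"POPF", "SGDT", "MOV_CR3"}   # sensitive x86-style ops (invented set)
translation_cache = {}                     # translated sequences, reused on later hits

def translate(instr):
    """Replace a sensitive instruction with a safe, equivalent sequence."""
    return [f"VMM_EMULATE({instr})"]       # stand-in for the real translated code

def run(instruction_stream):
    executed = []
    for instr in instruction_stream:
        if instr in SENSITIVE:
            if instr not in translation_cache:          # translate on first sight...
                translation_cache[instr] = translate(instr)
            executed.extend(translation_cache[instr])   # ...reuse the cached result after
        else:
            executed.append(instr)                      # direct execution at native speed
    return executed

print(run(["ADD", "POPF", "MOV", "POPF"]))
# ['ADD', 'VMM_EMULATE(POPF)', 'MOV', 'VMM_EMULATE(POPF)']
```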

Figure 1.5. Full Virtualization Concepts

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Service Creation and Service Function Chaining

Ken Gray, Thomas D. Nadeau, in Network Function Virtualization, 2016

Virtual Service Creation and SFC

At this point in the discussion, what NFV potentially introduces is a much greater degree of dynamic elasticity, since the functions share a common hardware/software base … theoretically, substitutable infrastructure. This can lead to lower service deployment friction and, at the same time, a higher degree of service “bin packing” on devices that are already deployed and in service. While this approach has the potential for less optimal traffic routing, because a service function instance may have to be located somewhere farther away but available, it also results in far fewer stranded or under-utilized resources. This model also makes for a potentially far easier service cost/benefit calculus for the network operator.

The role of SFC is not only to render the service path from the service chain, creating true topological independence, but also, in doing so, to give the operator a bridge to the missing functionality of the highly integrated solution set.

Transport technologies like MPLS and/or SDN-associated technologies (eg, VXLAN or NVGRE abetted by orchestration or DevOps tooling) allow the network operator to create orchestrated overlays.7 Whether you use Layer 2 (VLAN-stitching) or Layer 3 (VRF-stitching or tunnel-stitching), transport-only solutions lack the additional functionality needed to address the entire service creation problem directly. For example, these solutions do not address the specifics of the placement of the virtualized network elements or the lifecycle management of those constructs.
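
As a concrete illustration of one of these overlay encapsulations, the sketch below packs a VXLAN header (per RFC 7348: an 8-bit flags field, reserved bits, and a 24-bit VNI) in Python; the VNI value is arbitrary.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags, reserved, 24-bit VNI, reserved."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24                          # 'I' flag set: VNI is valid; rest reserved
    return struct.pack("!II", flags, vni << 8)  # VNI occupies the upper 24 bits of word 2

print(vxlan_header(5000).hex())                 # '0800000000138800'
```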

Although operators have been creating services with these technologies, just as they have through “brute-force” physical chaining, the attraction of SFC is in yet-to-be-created services that take advantage of the additional information that can be passed in the creation of a true service overlay.

Fig. 2.7 demonstrates two service chains, A (A1, A2, A3) and B (B1, B2, B3), and also shows service function reuse in that both chains traverse the same service function, A2. In this figure we demonstrate how SFC should also provide the additional benefit of service component reuse where it makes sense; that is, a single component/function can be utilized by more than one chain or path. Thus SFC will help an operator manage both the physical and logical separation of service functions while, at the same time, compressing and optimizing resources.

Figure 2.7. Two service chains (A and B) with two separate paths while reusing service function 2, which provides an additional abstraction in the form of multiple instances.
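
A hypothetical data-structure rendering of the reuse shown in Fig. 2.7: chains are logical sequences of functions, the shared function is backed by one or more instances, and rendering a path simply resolves each hop to an instance. All names are invented.

```python
# Hypothetical model of Fig. 2.7: chains are logical sequences of service functions;
# function SF2 is shared by both chains and backed by more than one instance.
instances = {
    "SF1": ["sf1-inst-1"],
    "SF2": ["sf2-inst-1", "sf2-inst-2"],   # shared function, multiple instances
    "SF3": ["sf3-inst-1"],
    "SF4": ["sf4-inst-1"],
    "SF5": ["sf5-inst-1"],
}

chains = {
    "A": ["SF1", "SF2", "SF3"],            # chain A: A1, A2, A3
    "B": ["SF4", "SF2", "SF5"],            # chain B: B1, B2, B3 -- its middle hop reuses SF2
}

def render_path(chain, pick=lambda insts: insts[0]):
    """Turn a logical chain into a concrete service path by picking an instance per hop."""
    return [pick(instances[fn]) for fn in chains[chain]]

print(render_path("A"))   # ['sf1-inst-1', 'sf2-inst-1', 'sf3-inst-1']
print(render_path("B"))   # ['sf4-inst-1', 'sf2-inst-1', 'sf5-inst-1']
```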

Ultimately, the problem of configuration complexity will have to be solved outside of SFC.8 Note that by “configuration” we mean more than the network path being configured. This is best expressed as the logical operation of the function itself when it is applied to a specific flow, affecting not only forwarding of the flow but also embedding state/policy dependencies. This can be accomplished through the use of common service models, which can eliminate or obscure the CLI of individual vendor implementations. This can be achieved using a standards-based (or de facto standard derived from open source) REST API call that is locally translated into a configuration. The next step here will likely involve some sort of evolution from vendor-specific to community-wide multivendor models. An early example of this normalization is the use of service models in the IETF NETMOD WG, or even the OpenDaylight project’s northbound API for SFC.
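
As a hedged sketch of that “common model, local translation” idea, the toy code below renders one vendor-neutral service model into two vendor-flavored configurations; the model fields and the output syntax are invented and do not correspond to any real YANG model or vendor CLI.

```python
# Hypothetical translation of a vendor-neutral service model into per-vendor config.
service_model = {                      # fields are illustrative, not a real YANG model
    "function": "firewall",
    "flow": {"src": "10.0.0.0/24", "dst": "192.0.2.10", "port": 443},
    "action": "permit",
}

def render(model, vendor):
    """Render the common model into a vendor-flavored configuration snippet."""
    f = model["flow"]
    if vendor == "vendor-a":
        return f"access-rule {model['action']} {f['src']} -> {f['dst']}:{f['port']}"
    if vendor == "vendor-b":
        return f"set policy {model['function']} match {f['src']} {f['dst']} {f['port']} then {model['action']}"
    raise ValueError("unknown vendor")

for v in ("vendor-a", "vendor-b"):
    print(render(service_model, v))    # same intent, two different device configurations
```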

Note that service chaining still has to deal with architectural requirements around bidirectional flows. This is particularly true for stateful services, where the restrictions imposed by highly integrated and loosely coupled services implicitly avoid these issues.

For stateless service functions, high availability will be realized through the “swarm” or “web-scale”9 approach.

This paradigm relies on orchestration and monitoring to eliminate failed members of a swarm of servers (ie, far more than a few servers) that scale to handle demand and simple load distribution. The collection of servers is either dictated in overlay network directives through central control or managed inline. In the latter case, the abstraction between chain-and-path and function-and-instance is critical to scale.
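
A minimal sketch of that reconcile-and-replace pattern, with invented worker names and a stand-in health probe:

```python
import random

# Hypothetical orchestration loop for a stateless service-function swarm.
pool = {f"worker-{i}": "healthy" for i in range(4)}
next_id = 4

def health_check(name):
    return random.random() > 0.1             # stand-in for a real monitoring probe

def reconcile(target_size):
    """Remove failed members and grow the pool back to the target size."""
    global next_id
    for name in list(pool):
        if not health_check(name):
            del pool[name]                   # failed members are simply discarded
    while len(pool) < target_size:
        pool[f"worker-{next_id}"] = "healthy"   # replacements, not repairs
        next_id += 1

reconcile(target_size=6)                      # scale up and heal in the same pass
print(sorted(pool))
```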

For stateful service functions (eg, proxy services: any service that maps one communication session to another and thus has the reverse mapping state or monitors the status of a session in order to trigger some action), traditional HA mechanisms can be leveraged. These have traditionally been active/active or active/passive, 1:1 or 1:N, and with or without heartbeat failure detection.

Admittedly, traditional stateful function redundancy schemes have a cost component to be considered as well.

These traditional mechanisms have been labeled “weak” by recent academic work10 (or at least “nondeterministic” regarding the state synchronization, which can lead to session loss on failover). Potential mitigation techniques for “nondeterministic state sharing” have their own potential costs in delay and overall scale (eg, requirements to write to queues instead of directly to/from NIC, freezing the VM to snapshot memory) that need to be balanced.

New system design techniques such as those used in high scale distributed systems can be used to decouple these applications from a direct linkage to their state store (common store for worker threads) or their backup (if the state store is necessarily “local” by design), potentially enabling the web-scale availability model while reducing cost.11 This “nonmigratory” HA, like the recommendations to solve determinism problems in other schemes, assumes a rewrite of code in the transition from appliance to VM providing an opportunity to improve HA.

The traditional stateful HA approaches often include a scheme to appear as a single addressable network entity, thus masking their internal complexities (an abstraction that collapses detail).

In Fig. 2.8, the service chain “A” is bidirectional and the function A2 is stateful and elastic. The service path in both the forward and reverse directions of a flow distributed by A2 to a virtual instance of its function must transit the same instance. Here A2 is shown as an HA pair with A2′.

Figure 2.8. High availability of stateful service functions.
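
One common way to satisfy that forward/reverse affinity requirement is a symmetric (direction-independent) flow hash. A minimal sketch, assuming a simple 5-tuple key and a fixed instance pool:

```python
import hashlib

instances = ["A2-instance-1", "A2-instance-2", "A2-instance-3"]

def pick_instance(src_ip, src_port, dst_ip, dst_port, proto):
    """Symmetric hash: sorting the endpoints makes both flow directions map identically."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return instances[digest % len(instances)]

fwd = pick_instance("10.0.0.5", 40000, "192.0.2.10", 443, "tcp")
rev = pick_instance("192.0.2.10", 443, "10.0.0.5", 40000, "tcp")
assert fwd == rev            # forward and reverse traffic transit the same instance
print(fwd)
```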

To some degree, SFC might actually provide relief for common network operational problems through limited geographical service function or service path redundancy. The seemingly requisite distribution function, whether centralized or locally available, may ultimately be leveraged to allow operational flexibility (eg, A/B software upgrade schemes or “live migration”).

Ultimately, our view of a “chain” has to change from a linear concept to that of a “graph” (Fig. 2.9) whose vertices are service functions and edges can be IP or overlay connectivity—with less focus on the network connectivity and more on the relationship of the functions.

Figure 2.9. Service graphs.
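
That graph view maps directly onto a data structure: vertices are service functions, edges are IP or overlay connectivity, and a “chain” becomes one walk through the graph. A small illustrative sketch with invented function names:

```python
# Hypothetical service graph (Fig. 2.9): vertices are functions, edges are connectivity.
graph = {
    "classifier": ["firewall", "cache"],     # branch: no longer a strictly linear chain
    "firewall":   ["nat"],
    "cache":      ["nat"],
    "nat":        [],                        # egress function
}

def paths(node, acc=()):
    """Enumerate every service path (walk) from a vertex to an egress function."""
    acc = acc + (node,)
    if not graph[node]:
        yield acc
    for nxt in graph[node]:
        yield from paths(nxt, acc)

for p in paths("classifier"):
    print(" -> ".join(p))
# classifier -> firewall -> nat
# classifier -> cache -> nat
```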

Varying Approaches to Decomposition

As we transition from the tightly coupled, through loosely coupled, and on to full virtualization, it is important to note that different vendors may choose widely varying decomposition, scale, and packaging strategies for a service.

While the “base use case” (referenced in Chapter 1: Network Function Virtualization) is an “atomic” function that does not decompose (it may “multi-thread” to scale) and is thus relatively simple, some integrated service platforms are far more complex and potentially decomposable.

Consumers and vendors often refer to this decomposition as creating “micro services,” allowing them to sell/consume independently a service that was formerly bundled in a “macro service” (eg, GiLAN may have an integrated NAT or Firewall service, which can now be “parted out” into a service chain), enabling “best of breed” consumption. However, true “micro services” go beyond decomposition to the function/service level and can approach the process/routine level as an enabler of software agility, which we will touch on in a later chapter.

This is particularly well illustrated in the area of mobility with the GiLAN and vIMS (both of which we pointed to in Chapter 1: Network Function Virtualization as services that were well on their way to virtualization prior to the NFV mandate, and thus “low hanging fruit”).

For its part, the GiLAN decomposes into more readily identifiable services/atoms; IMS is a much more interesting study.

In a 2014 IEEE paper on cloudified IMS,12 the authors propose three possible solutions/implementation designs to address the scale requirements of a cloud implementation of IMS: a one-to-one mapping (or encapsulation of existing functionality, see Fig. 2.10), a split into subcomponents (completely atomic) and a decomposition with some functions merged. Each of these architectures preserves (in one way or another) the interfaces of the traditional service with minimal alteration to messaging to preserve interoperability with existing/traditional deployments.

Figure 2.10. IMS decomposition in default, function-per-VM mode.

These views illustrate the complexity and decision making involved in decomposing highly integrated functions that make up a service—outside of the mechanisms used to chain the components together (the networking piece)!

In the one-to-one mapping, each function of a traditional IMS would be placed in its own VM (they do not have to be on the same host; this is a simplification). Note that some of the functions are stateful (and some are not). In this decomposition, 3GPP defines the discovery, distribution, and scaling of the individual functions.

In the split decomposition, each function has a function specific load balancer (Fig. 2.11). In this imagining, each function is rendered into a stateless worker thread (if it is not already stateless) in a container, with any shared state for the function being moved to shared memory. Even the load balancing is stateless, though it has its own complexities (it has to implement the 3GPP interfaces that the function would traditionally present).

Figure 2.11. The PCSCF function split decomposition with its own load balancer and back-end state sharing.
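
A toy rendering of this split decomposition: a function-specific (round-robin) load balancer spreads messages over stateless workers, and all session context lives in a shared store, here a plain dict standing in for shared memory. Names and messages are invented.

```python
import itertools

# Hypothetical split decomposition: stateless workers + shared state + simple LB.
shared_state = {}                       # stand-in for shared memory / back-end store

def worker(worker_id, session, message):
    """Stateless worker: all session context is read from and written to shared state."""
    ctx = shared_state.setdefault(session, {"count": 0})
    ctx["count"] += 1
    return f"{worker_id} handled '{message}' (#{ctx['count']} for {session})"

workers = ["worker-1", "worker-2", "worker-3"]
rr = itertools.cycle(workers)           # function-specific load balancer (round robin)

for msg in ("INVITE", "ACK", "BYE"):
    print(worker(next(rr), "session-42", msg))
# Any worker can serve the session, because none of them holds the state locally.
```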

The last decomposition (Fig. 2.12) combines a subset of functions in the same VM (as threads) to reduce communication costs between them. It also removes the state in a common web-scale fashion into a back-end database and replaces function-specific load balancing with a simple proxy (the proxy has to support one of the traditional IMS interfaces, Mw).

Figure 2.12. The IMS “merged” decomposition proposition.

These choices in decomposition were grist for some debate in the ETSI work, and ultimately were left undefined (with respect to how they would be managed and orchestrated) in its early work (see Chapter 3: ETSI NFV ISG).

Ultimately, many of the scale techniques use architectures that work within the limits of the von Neumann architecture that dominates COTS compute. We pick up on this theme in Chapter 7, The Virtualization Layer—Performance, Packaging, and NFV and Chapter 8, NFV Infrastructure—Hardware Evolution and Testing.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012802119400002X

Advanced Topics

Peter Barry, Patrick Crowley, in Modern Embedded Computing, 2012

Hardware Support for Virtualization

In response to the rising popularity of virtualization, Intel and AMD both added hardware support to their processors to provide full virtualization without requiring changes to operating systems. The hardware support is similar in spirit to binary rewriting, but rather than rewriting code sequences above the application binary interface, hardware support can be built into the CPU to trap and virtualize hardware-specific operations and code sequences beneath it, in the CPU’s microarchitecture.

Providing hardware support for virtualization involves very little runtime overhead but may lead to considerable resource additions in the microarchitecture to more effectively share internal CPU state that in a nonvirtualized context does not need to be shared.
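
On a Linux host, whether the CPU advertises these extensions can be read from the flags in /proc/cpuinfo (“vmx” for Intel VT, “svm” for AMD-V). A minimal sketch, assuming a Linux system:

```python
# Check for hardware virtualization support on a Linux host (assumes /proc/cpuinfo exists).
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

print(hw_virt_support() or "no hardware virtualization extensions advertised")
```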

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123914903000151

GIS Methods and Techniques

Shaun Fontanella, Ningchuan Xiao, in Comprehensive Geographic Information Systems, 2018

1.02.4.2 Containerization

Containerization is a form of virtualization that is becoming increasingly popular; it is an even denser version of virtualization. As described above, virtualization takes an entire operating system and condenses it down to one file. With full virtualization, all the files for the entire operating system exist within each VM. If a host server has 10 VMs, it has 11 copies (including its own) of all of the operating system files. A container is a VM that exists as a separate OS from the host OS, but it holds only the files that are unique to it. Containerization removes the redundant files and maintains only the configuration files necessary to preserve the system state. This sharing of base files means that the VMs take even fewer resources to host. Also, library images of containers with stock configurations are much smaller and can be shared more efficiently. Containerization allows researchers to spin up many VMs to keep applications logically separate while not having to worry about idling hardware.
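
The file-sharing argument can be illustrated with a copy-on-write layering sketch: each container stores only the files that differ from a shared base image, so the per-container footprint stays small. The layer contents below are invented.

```python
# Toy copy-on-write layering: containers store only files that differ from the base image.
base_image = {"/bin/sh": "shell-v1", "/etc/os-release": "distro 1.0", "/lib/libc": "libc-v1"}

containers = {
    "web": {"/etc/nginx.conf": "server config"},                        # only its unique files
    "gis": {"/srv/parcels.gpkg": "geodata", "/etc/app.cfg": "app config"},
}

def resolve(container, path):
    """Read a file as the container sees it: its own layer first, then the shared base."""
    return containers[container].get(path, base_image.get(path))

print(resolve("web", "/bin/sh"))          # shared from the base image: 'shell-v1'
print(resolve("gis", "/etc/app.cfg"))     # container-specific: 'app config'

unique = sum(len(layer) for layer in containers.values())
print(f"{len(containers)} containers store {unique} unique files on top of "
      f"{len(base_image)} shared base files")
```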

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489096007

An Introduction to Virtualization

In Virtualization for Security, 2009

Server Virtualization

Although the concepts we have discussed so far have been about virtualization in general, they are most clearly exhibited in server virtualization products. Server virtualization has become the most successful form of virtualization today, and it is sometimes called full virtualization. Server virtualization abstracts both the hardware resources on the physical computer and the hosted guest operating systems that run on the virtualization platform. A virtual machine needs no special software in order to run on the virtualized server. Implementations of server virtualization exist on, and for, all CPU platforms and architectures, the most popular being IA-32, or x86. The challenges posed by the x86 architecture's ISA and the Popek and Goldberg requirements have led to several approaches to VMM development. Although there are many different implementations of a VMM for x86, they can be summarized into four distinct categories. Table 1.4 provides additional information about each category of server virtualization.

Table 1.4. Types of Server Virtualization

Full virtualization
Description: A virtualization technique that provides complete simulation of the underlying hardware. The result is a system in which all software capable of execution on the raw hardware can be run in the virtual machine. Full virtualization has the widest range of support of guest operating systems.
Pros: Provides complete isolation of each virtual machine and the VMM; most operating systems can be installed without any modification; provides near-native CPU and memory performance; uses sophisticated techniques to trap and emulate instructions at runtime via binary patching.
Cons: Requires the right combination of hardware and software elements; not quite possible on the x86 architecture in its pure form because of some of the privileged calls that cannot be trapped; performance can be impacted by trap-and-emulate techniques of x86 privileged instructions.

Paravirtualization
Description: A virtualization technique that provides partial simulation of the underlying hardware. Most, but not all, of the hardware features are simulated. The key feature is address space virtualization, granting each virtual machine its own unique address space.
Pros: Easier to implement than full virtualization; when no hardware assistance is available, paravirtualized guests tend to be the highest-performing virtual machines for network and disk I/O.
Cons: Operating systems running in paravirtualized virtual machines cannot be run without substantial modification; virtual machines suffer from lack of backward compatibility and are not very portable.

Operating system virtualization
Description: This concept is based on a single operating system instance.
Pros: Tends to be very lean and efficient; single OS installation for management and updates; runs at native speeds; supports all native hardware and OS features that the host is configured for.
Cons: Does not support hosting mixed OS families, such as Windows and Linux; virtual machines are not as isolated or secure as with the other virtualization types; Ring-0 is a full operating system rather than a stripped-down microkernel as the VMM, so it adds overhead and complexity; difficult to identify the source of high resource loads; also difficult to limit resource consumption per guest.

Native virtualization
Description: This technique is the newest to the x86 group of virtualization technologies. Often referred to as hybrid virtualization, it is a combination of full virtualization or paravirtualization with I/O acceleration techniques. Similar to full virtualization, guest operating systems can be installed without modification. It takes advantage of the latest CPU technology for x86: Intel VT and AMD-V.
Pros: Handles non-virtualizable instructions by using trap-and-emulate in hardware versus software; selectively employs acceleration techniques for memory and I/O operations; supports x64 (64-bit x86 extensions) targeted operating systems; has the highest CPU, memory, and I/O performance of all types of x86 virtual machines.
Cons: Requires a CPU architecture that supports hardware-assisted acceleration; still requires some OS modification for paravirtualized guests, although less than pure paravirtualization.

Designing & Planning…

Hardware-Assistance Enhances Virtualization

To maximize the performance of your x86-based physical platform and the hosted virtual machines, be sure to select processors that support hardware-assisted virtualization. Both Intel, providing Intel Virtualization Technology (Intel VT), and AMD, providing “Pacifica” (AMD-V), offer such technologies in their latest generation of processors available for servers as well as desktops and notebooks.

Hardware-assisting processors give the guest OS the authority it needs to have direct access to platform resources without sharing control of the hardware. Previously, the VMM had to emulate the hardware to the guest OS while it retained control of the physical platform. These new processors give both the VMM and the guest OS the authority each needs to run without hardware emulation or OS modification.

They also help VMM developers design a more simplified VMM. Since hardware-assisted processors can now handle the compute-intensive calculations needed to manage the tasks of handing off platform control to a guest OS, the computational burden on the VMM is reduced. Also, key state information for the CPU and guest OS can now be stored in protected memory that only the VMM has access to, protecting the integrity of the handoff process.

Finally, hardware-assisted processors, all of which support 64-bit processing, now allow the benefits of 64-bit computing to filter up to the guest OS and its hosted applications. This provides virtual machines with greater capabilities, headroom, and scalability.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000013

Creating a dynamic data center with Microsoft System Center

Thomas Olzak, ... James Sabovik, in Microsoft Virtualization, 2010

As discussed in Chapter 1, workload consolidation is usually one of the first and most focused-upon benefits in the early discussions of a virtualization implementation project. However, as also discussed in Chapter 1, you will quickly learn that full virtualization of an entire data center is often not feasible. So your workload consolidation plan must be thorough, with proper expectations set and presented to the stakeholders of the project from the earliest stages.

Note

It is important to understand that workload consolidation is NOT the same thing as hardware consolidation. We will discuss hardware consolidation later in this chapter, but for now just realize that there are some significant differences between the two. You will need this mindset as we go through the discussions in this chapter.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597494311000102

Virtualization

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

3.6.2.1 Full virtualization and binary translation

VMware is well known for its capability to virtualize the x86 architecture, running guest operating systems unmodified on top of its hypervisors. With the new generation of hardware architectures and the introduction of hardware-assisted virtualization (Intel VT-x and AMD-V) in 2006, full virtualization is made possible with hardware support; before that date, the use of dynamic binary translation was the only solution that allowed running x86 guest operating systems unmodified in a virtualized environment.

As discussed before, x86 architecture design does not satisfy the first theorem of virtualization, since the set of sensitive instructions is not a subset of the privileged instructions. This causes a different behavior when such instructions are not executed in Ring 0, which is the normal case in a virtualization scenario where the guest OS is run in Ring 1. Generally, a trap is generated and the way it is managed differentiates the solutions in which virtualization is implemented for x86 hardware. In the case of dynamic binary translation, the trap triggers the translation of the offending instructions into an equivalent set of instructions that achieves the same goal without generating exceptions. Moreover, to improve performance, the equivalent set of instructions is cached so that translation is no longer necessary for further occurrences of the same instructions. Figure 3.12 gives an idea of the process.

Figure 3.12. A full virtualization reference model.

This approach has both advantages and disadvantages. The major advantage is that guests can run unmodified in a virtualized environment, which is a crucial feature for operating systems for which source code is not available. This is the case, for example, for operating systems in the Windows family. Binary translation is a more portable solution for full virtualization. On the other hand, translating instructions at runtime introduces an additional overhead that is not present in other approaches (paravirtualization or hardware-assisted virtualization). Even though this disadvantage exists, binary translation is applied to only a subset of the instruction set, while the remaining instructions are managed through direct execution on the underlying hardware. This somewhat reduces the performance impact of binary translation.

CPU virtualization is only a component of a fully virtualized hardware environment. VMware achieves full virtualization by providing virtual representation of memory and I/O devices. Memory virtualization constitutes another challenge of virtualized environments and can deeply impact performance without the appropriate hardware support. The main reason is the presence of a memory management unit (MMU), which needs to be emulated as part of the virtual hardware. Especially in the case of hosted hypervisors (Type II), where the virtual MMU and the host-OS MMU are traversed sequentially before getting to the physical memory page, the impact on performance can be significant. To avoid nested translation, the translation look-aside buffer (TLB) in the virtual MMU directly maps physical pages, and the performance slowdown only occurs in case of a TLB miss. Finally, VMware also provides full virtualization of I/O devices such as network controllers and other peripherals such as keyboard, mouse, disks, and universal serial bus (USB) controllers.
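
A toy model of that nested translation and of the virtual TLB shortcut: a guest-virtual page is first mapped by the guest's (virtual) MMU, then by the host's, while the virtual TLB caches the composed mapping so the double walk happens only on a miss. Page numbers and table contents are invented.

```python
# Toy nested address translation with a virtual TLB caching the composed mapping.
guest_page_table = {0x1: 0x10, 0x2: 0x11}     # guest-virtual page -> guest-physical page
host_page_table  = {0x10: 0xA0, 0x11: 0xA1}   # guest-physical page -> host-physical page
vtlb = {}                                     # caches guest-virtual -> host-physical directly

def translate(gva_page):
    if gva_page in vtlb:                      # TLB hit: single lookup, no nested walk
        return vtlb[gva_page], "hit"
    gpa = guest_page_table[gva_page]          # walk the virtual MMU...
    hpa = host_page_table[gpa]                # ...then the host MMU (the costly path)
    vtlb[gva_page] = hpa                      # cache the composed mapping
    return hpa, "miss"

hpa, result = translate(0x1)
print(hex(hpa), result)   # 0xa0 miss -- full two-level walk
hpa, result = translate(0x1)
print(hex(hpa), result)   # 0xa0 hit  -- slowdown occurs only on a TLB miss
```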

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000036

Virtualization

Dijiang Huang, Huijun Wu, in Mobile Cloud Computing, 2018

2.3.3 Comparison to Hypervisor Virtualization

Compared to hypervisor-based virtualization, the OS level virtualization or containers have the following differences:

Fast deployment – A full VM starts in minutes, whereas containers start guests in seconds. Containers avoid initializing the guest OS, which makes guests start much faster.

Less resource requirement – Since full virtualization allocates resources for each guest OS, it requires many more resources. Guests in containers, on the other hand, either share the OS with the host or run with no OS at all (as in OSGi), so resource consumption is much lower.

Flexibility – Some containers provide start and stop features, lightweight operations that freeze and resume guests while keeping guest state in memory. Full-VM freeze and resume usually saves guest state to disk because of the large VM state image, at a much higher cost than containers.

Forensics – A container's state is easy for the host to access, since guests share some resources with the host. Full-VM state is much harder to obtain because the host has to interpret a full memory image and reconstruct the guest OS state, which is not that easy.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012809641300003X

Protection through isolation

Johanna Ullrich, Edgar R. Weippl, in The Cloud Security Ecosystem, 2015

2.1 General architectures

A hypervisor provides an efficient, isolated duplicate of the physical machine for virtual machines. Popek and Goldberg (1974) claimed that all sensitive instructions, i.e., those changing resource availability or configuration, must be privileged instructions in order to build an effective hypervisor for a certain system. In such an environment, all sensitive instructions cross the hypervisor, which is able to control the virtual machines appropriately.

This concept is today known as full virtualization and has the advantage that the guest operating system does not have to be adapted to work with the hypervisor, i.e., it is unaware of its virtualized environment. Obviously, a number of systems are far from perfect and require additional measures in order to be virtualizable, leading to the technologies of paravirtualization, binary translation, and hardware-assisted virtualization (Pearce et al., 2013).

Paravirtualization encompasses changes to the system in order to redirect these nonprivileged, but sensitive, instructions over the hypervisor to regain full control of the resources. Therefore, the guest operating system has to undergo various modifications to work with the hypervisor, and the guest is aware that it is virtualized. Applications running atop the altered OS do not have to be changed. Undoubtedly, these modifications require more work to implement, but on the other hand they may provide better performance than full virtualization, which often has to intervene (Rose, 2004; Crosby and Brown, 2006). The best-known hypervisor of this type is Xen (Xen Project, n.d.).

Hardware-assisted virtualization is achieved by means of additional functionality included into the CPU, specifically an additional execution mode called guest mode, which is dedicated to the virtual instances (Drepper, 2008; Adams and Agesen, 2006). However, this type of virtualization requires certain hardware, in contrast to paravirtualization, which is in general able to run on any system. The latter also eases migration of paravirtualized machines. A popular representative for this virtualization type is the Kernel-based Virtual Machine (KVM) infrastructure (KVM, n.d.). Combinations of the two techniques are commonly referred to as hybrid virtualization.

Binary translation is a software virtualization technique that includes the use of an interpreter. It translates binary code into another binary, excluding nontrapping instructions. This means that the input contains a full instruction set, but the output is a subset thereof and contains only the innocuous instructions (Adams and Agesen, 2006). This technology is also the closest to emulation, where the functionality of a device is simulated and all instructions are intercepted. The performance is dependent on the instructions to translate. VMware is an example of virtualization using binary translation (VMware, n.d.).

Hypervisors can also be distinguished by their relation to the host operating system. Where the hypervisor fully replaces the operating system, it is called a bare-metal or Type I hypervisor; where a host operating system is still necessary, the hypervisor is hosted, or Type II. Classifying the aforementioned hypervisors: Xen and KVM are both bare-metal, as is VMware ESX, but its Workstation equivalent is hosted. Most workstation hypervisors are hosted, as they are typically used for testing or training purposes.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128015957000069

Cloud Resource Virtualization

Dan C. Marinescu, in Cloud Computing, 2013

5.15 History notes

Virtual memory was the first application of virtualization concepts to commercial computers. It allowed multiprogramming and eliminated the need for users to tailor their applications to the physical memory available on individual systems. Paging and segmentation are the two mechanisms supporting virtual memory. Paging was developed for the Atlas Computer, built in 1959 at the University of Manchester. Independently, the Burroughs Corporation developed the B5000, the first commercial computer with virtual memory, and released it in 1961. The virtual memory of the B5000 used segmentation rather than paging.

In 1967 IBM introduced the 360/67, the first IBM system with virtual memory, expected to run on a new operating system called TSS. Before TSS was released, an operating system called CP-67 was created. CP-67 gave the illusion of several standard IBM 360 systems without virtual memory. The first VMM supporting full virtualization was the CP-40 system, which ran on an S/360-40 that was modified at the IBM Cambridge Scientific Center to support Dynamic Address Translation, a key feature that allowed virtualization. In CP-40, the hardware’s supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.

In this early age of computing, virtualization was driven by the need to share very expensive hardware among a large population of users and applications. The VM/370 system, released in 1972 for large IBM mainframes, was very successful. It was based on a reimplementation of CP/CMS. In the VM/370 a new virtual machine was created for every user, and this virtual machine interacted with the applications. The VMM managed hardware resources and enforced the multiplexing of resources. Modern-day IBM mainframes, such as the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line.

The production of microprocessors, coupled with advancements in storage technology, contributed to the rapid decrease of hardware costs and led to the introduction of personal computers at one end of the spectrum and large mainframes and massively parallel systems at the other end. The hardware and the operating systems of the 1980s and 1990s gradually limited virtualization and focused instead on efficient multitasking, user interfaces, the support for networking, and security problems brought in by interconnectivity.

The advancements in computer and communication hardware and the explosion of the Internet, partially due to the success of the World Wide Web at the end of the 1990s, renewed interest in virtualization to support server security and isolation of services. In their review paper, Rosenblum and Garfinkel write [308]: “VMMs give operating system developers another opportunity to develop functionality no longer practical in today’s complex and ossified operating systems, where innovation moves at a geologic pace.”

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124046276000051

What manages OS and application as a single unit by encapsulating them into virtual machines?

VirtualCenter is virtual infrastructure management software that centrally manages an enterprise's virtual machines as a single, logical pool of resources.

What is OS virtualization in cloud computing?

OS Virtualization, or Operating System Virtualization, works as the last mode of Cloud Computing Virtualization. It is the mode of virtualization of the server. OS Virtualization means using software that lets system hardware run different operating systems on a single computer.

Which is most commonly used for managing the resources for every virtual system?

That's called system virtualization. It most commonly uses a hypervisor for managing the resources of every virtual system. The hypervisor is software that can virtualize the hardware resources.

Which of the following server virtualization types offers the best guest OS isolation?

Full virtualization offers the best isolation and security for VMs and simplifies migration and portability as the same guest OS instance can run on virtualized or native hardware. Figure 1.5 shows the concept behind full virtualization.