
AMD Virtualization Journal



Making Sense of Virtualization

The tidal wave of innovation has begun

Companies are finding it increasingly difficult to manage their enterprise data centers as they become highly complex, expensive to build out, and difficult to reconfigure as needs change. In an effort to address these challenges, many IT professionals are turning to virtualization technologies.

Virtualization addresses a number of these issues and offers a variety of benefits, including improved hardware utilization, operational efficiency, and data center agility. However, many customers and their technology partners are becoming increasingly frustrated with the proprietary and expensive nature of the available virtualization software solutions. Fortunately, a new wave of virtualization-related technologies is emerging to address these challenges and improve the economics of virtualization.

These emerging solutions are enabling a more dynamic IT infrastructure that helps transform the static, hard-wired data center into a software-based dynamic pool of shared computing resources. They provide simplified management of industry-standard hardware and enable today's business applications to run on virtual infrastructure without modification. Using centralized policy-based management to automate resource and workload management, the solutions deliver "capacity on demand" with high availability built in.

Virtualization 101
Despite the increased need for virtualization and the constant industry discussion around it, many IT professionals still have difficulty grasping the terminology and navigating the many choices of hypervisors and hardware that make up the complicated virtualization landscape.

Originally part of mainframe technology, virtualization isn't a new concept. It's been applied to various technology problems throughout computing history and is now receiving renewed interest as an approach for managing standardized (x86) servers, racks, and blade systems.

Virtualization lets administrators focus on service delivery by abstracting hardware and removing physical resource management. It decouples applications and data from the functional details of the physical systems, increasing the flexibility with which the workloads and data can be matched with physical resources. This enables administrators to develop business-driven policies for delivering resources based on priority, cost, and service-level requirements. It also enables them to upgrade underlying hardware without having to reinstall and reconfigure the virtual servers, making environments more resilient to failures.
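The priority-driven policies described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the `Workload` class and `allocate` function are invented names, and real policy engines also weigh cost and service-level targets.

```python
# Hypothetical sketch of policy-based resource allocation: each workload
# declares a priority, and available CPU capacity is granted in priority
# order until the host is exhausted.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int      # lower number = higher priority
    cpu_request: int   # CPU units requested

def allocate(workloads, capacity):
    """Grant CPU to workloads in priority order until capacity runs out."""
    grants = {}
    for w in sorted(workloads, key=lambda w: w.priority):
        grant = min(w.cpu_request, capacity)
        grants[w.name] = grant
        capacity -= grant
    return grants

# Example: a 16-unit host shared by three workloads of differing priority
grants = allocate(
    [Workload("batch", 3, 8), Workload("web", 1, 10), Workload("dev", 2, 6)],
    capacity=16,
)
# The high-priority "web" workload is satisfied first; "batch" gets nothing.
```

In this toy example the web workload receives its full request, the dev workload takes the remainder, and the low-priority batch job is deferred, which is exactly the behavior a business-driven policy is meant to encode.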

At the core of most virtualization software solutions is a "virtual machine monitor," or "hypervisor" as it's sometimes called. A hypervisor is a very low-level virtualization program that lets multiple operating systems - either different operating systems or multiple instances of the same operating system - share a single hardware processor. A hypervisor is designed for a particular processor architecture, such as x86. Each operating system appears to have the processor, memory, and other resources all to itself. However, the hypervisor actually controls the real processor and its resources, allocating what's needed to each operating system in turn. Because an operating system is often used to run a particular application or set of applications on a dedicated hardware server, the use of a hypervisor can make it possible to run multiple operating systems (and their applications) on a single server, reducing overall hardware costs.
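The "allocating what's needed to each operating system in turn" idea can be made concrete with a minimal sketch. This is not a real hypervisor - actual schedulers run on the hardware with timer interrupts - but it shows the core accounting: each guest receives CPU time slices in proportion to an assigned weight.

```python
# Minimal illustration (assumed names, not a real hypervisor API) of how a
# hypervisor multiplexes one physical CPU among guest operating systems:
# each guest gets a share of the time slices proportional to its weight.

def schedule(guests, total_slices):
    """guests: list of (name, weight) pairs.
    Returns a dict mapping each guest to its share of total_slices."""
    total_weight = sum(weight for _, weight in guests)
    plan = {}
    for name, weight in guests:
        # Integer division: leftover slices would go to an idle loop
        plan[name] = total_slices * weight // total_weight
    return plan

# Three guests sharing 100 time slices; the first is weighted double
plan = schedule([("linux-vm", 2), ("windows-vm", 1), ("linux-vm2", 1)], 100)
```

Each guest behaves as if it owns the CPU, while the scheduler's plan determines how much real processor time it actually receives.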

Server Virtualization versus Data Center Virtualization
Server virtualization is the masking of server resources from server users. The technology can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization, and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization is also seen as a likely requirement for both utility computing, in which computer processing power is treated as a utility that clients can pay for as needed, and grid computing, in which an array of computer processing resources, often in a distributed network, is used for a single application.

While first-generation technologies were limited to working on a single machine or with small clusters of machines, data center virtualization manages the utilization and sharing of many machines and devices including server, storage, and network resources. This enables enterprises to automate numerous time-intensive manual tasks such as provisioning new servers, moving capacity to handle increased workloads, and responding to availability issues. In this environment, any application can run on any machine or be moved to any other machine without disrupting the application or requiring time-consuming SAN or network configuration changes. With these capabilities companies can transform the data center into a manageable and dynamic pool of shared computing resources, enabling IT to rapidly respond to changing business demands and dramatically reduce the costs of managing and operating the data center.
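One of the automated tasks mentioned above - deciding where a new or migrating virtual machine should run - reduces to a placement decision. The sketch below is purely illustrative (the `place_vm` function and host names are assumptions, not a vendor API) and uses the simplest possible policy: pick the host with the most free capacity that can fit the VM.

```python
# Illustrative sketch of automated workload placement in a virtualized
# data center: choose the host with the most free capacity that can
# accommodate the VM's demand. Names here are hypothetical.

def place_vm(hosts, vm_demand):
    """hosts: dict mapping host name -> free capacity units.
    Returns the chosen host name, or None if no host fits."""
    candidates = {h: free for h, free in hosts.items() if free >= vm_demand}
    if not candidates:
        return None  # no host fits; a real system would alert or queue
    return max(candidates, key=candidates.get)

# Three hosts with differing free capacity
hosts = {"host-a": 4, "host-b": 12, "host-c": 7}
```

Production placement engines weigh many more factors (memory, network locality, affinity rules, licensing), but the principle is the same: the policy engine, not an administrator, decides where each workload lands.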

More Stories By Alex Vasilevsky

Alex Vasilevsky is Co-Founder and CTO of Virtual Computer. Before that he was the founder and CTO of Virtual Iron Software, and held senior engineering and management roles at Ucentric Systems, Omtool, Avid Technology, and Thinking Machines. He is an industry-recognized expert in virtualization, open source, parallel processing, video systems, and advanced optimizing compilers. He has authored numerous papers and patents (6 granted and 16 pending) on data center and networking topics including security, network and server virtualization, resource optimization, and performance. Listed in The History of the Development of Parallel Computing, Alex is the winner of three IEEE Gordon Bell Awards for practical applications of parallel processing research. He has a BS in computer engineering from Syracuse University and an MS in computer science from Boston University.
