Virtual Machines (VM) and the Capacity Management (CM) advantage

An important attraction of Virtual Machines (VMs) over traditional servers is best described as a Capacity Management (CM) advantage.

While a traditional server can serve multiple purposes up to a point, it is not built to be flexible and does not scale in the same way. A server is a combination of hardware and software, with the software being relatively static in configuration. If needs are highly variable, more servers of different types are needed, so multiple stochastic demands must be met with relatively independent servers.

But a VM environment does two key things: it decouples the software from the hardware, so the software becomes a pool; and it allows the hardware to be configured more freely, so it can be shared as well. Software can now be provisioned rapidly, closer to real time, and hardware can be shared across a larger pool of resources. Because the various demands now draw on one larger shared pool, they vary less in aggregate, so utilization can be managed more tightly even when each individual demand is just as variable, and hardware resources are wasted less. Software, because it can be freely copied in such an environment, can be rapidly created and destroyed as needed, as long as licensing allows it.
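The statistical intuition behind this pooling effect can be sketched numerically. The figures below (ten workloads, a normal demand model, a 99th-percentile service target) are illustrative assumptions, not numbers from this article: independent variances add, so the standard deviation of pooled demand grows only as the square root of the number of workloads, while dedicated servers must each carry their own peak.

```python
# A minimal sketch of the pooling argument, under assumed, illustrative
# numbers: ten workloads with independent, normally distributed demand.
import math

n_workloads = 10
mean_demand = 100.0   # average capacity units each workload needs (assumed)
std_demand = 30.0     # per-workload demand variability (assumed)
z = 2.33              # ~99th percentile of the standard normal

# Dedicated servers: each must be sized for its own demand peak.
dedicated_capacity = n_workloads * (mean_demand + z * std_demand)

# Shared pool: independent variances add, so the pooled standard
# deviation grows as sqrt(n) rather than n.
pooled_std = std_demand * math.sqrt(n_workloads)
pooled_capacity = n_workloads * mean_demand + z * pooled_std

savings = 1 - pooled_capacity / dedicated_capacity
print(f"dedicated: {dedicated_capacity:.0f} units")
print(f"pooled:    {pooled_capacity:.0f} units")
print(f"capacity saved by pooling: {savings:.1%}")
```

With these numbers the shared pool needs roughly a quarter less capacity for the same service level, which is the "same variability, tighter utilization" claim above in arithmetic form.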

So what used to involve very long lead times, leading to higher costs and lower utilization of resources, can now be managed with higher utilization and lower costs, simply because VMs can be rapidly created and destroyed as needed, in the form needed. The true advantage of VMs, and indeed of Cloud architectures as well, is better systems behavior for CM.

About Rupe

Dr. Jason Rupe wants to make the world more reliable, even though he likes to break things. He received his BS (1989) and MS (1991) degrees in Industrial Engineering from Iowa State University, and his Ph.D. (1995) from Texas A&M University. He worked on research contracts at Iowa State University for CECOM on the Command & Control Communication and Information Network Analysis Tool, and conducted research on large-scale systems and network modeling for Reliability, Availability, Maintainability, and Survivability (RAMS) at Texas A&M University. He has taught quality and reliability at these universities, published several papers in respected technical journals, reviewed books, and refereed publications and conference proceedings. He is a Senior Member of IEEE and of IIE. He has served as Associate Editor for IEEE Transactions on Reliability, and currently works as its Managing Editor. He has served as Vice-Chair for RAMS, on the program committee for DRCN, and on the committees of several other reliability conferences because free labor is always welcome. He has also served on the advisory board for IIE Solutions magazine, as an officer for the IIE Quality and Reliability division, and in various local chapter positions for IEEE and IIE. Jason has worked at USWEST Advanced Technologies, and has held various titles at Qwest Communications Intl., Inc., most recently as Director of the Technology Modeling Team, Qwest's Network Modeling and Operations Research group for the CTO. He has always been those companies' reliability lead. Occasionally, he can be found teaching as an Adjunct Professor at Metro State College of Denver. Jason is the Director of Operational Modeling (DOM) at Polar Star Consulting, where he helps government and private industry plan and build high-performing, reliable networks and services. He holds two patents. If you read this far, congratulations for making it to the end!
