The other day I walked over to our local Starbucks for my usual early morning coffee, only to be surprised to see a number of folks standing around a pretty good outdoor kitchen. When I first saw the line I thought a small protest was forming outside the local supermarket and wondered what the problem was. As I came closer to the group I began to wonder if it wasn’t a political group assembling for a morning of campaigning door to door.
I am an early riser these days, and it was only a little after 5:00am, with just the first signs of dawn visible, as I walked up to the coffee shop. The usual marine fog layer was still blanketing the area, so the atmosphere was a little surreal. It was only as I sat outside drinking my coffee that I understood what was going on – Hollywood had come to Simi Valley and to the local Wood Ranch village shopping area. I have included a picture of the scene as it unfolded early that morning.
It was to be the first day of shooting of a very low budget film, and I was looking at the entire production team – cameramen, sound engineers, actors, make-up artists, the caterers, and the director. I am not familiar with the sport of wiffleball, a scaled-down variation of baseball played with a perforated plastic ball, but I was told that the film was about the emergence of a wiffleball champion, with the first day’s shoot featuring our local restaurant. The movie was going to be a piece of whimsical fantasy, with none of the supporting cast taking it very seriously.
It’s not the first time I have been close to a “location shoot”.
Back in the late ’70s, the Dallas restaurant to which I took my business associate quickly turned into an on-location set for the popular TV program Dallas, and as we ate our steaks we were fortunate enough to see the stars rehearsing their roles. More recently, as I walked across the famous Ponte Vecchio over the Arno River in Florence, I happened upon Anthony Hopkins trying to shoot a scene from one of the follow-on productions to the Silence of the Lambs movie.
As the director tried desperately to transition into the make-believe world required for the movie, he bellowed into a megaphone, “Quiet please! This is a real take!” The surrounding Italian crowd burst into raucous cheers until eventually the local police had to be called in to clear the bridge. In Simi Valley, it was a totally different scene – we had no crowds and, for most of the time, no director!
The last couple of times I have been in Sydney, I came across a number of movies being filmed, including the most recent Superman movie as well as Mask 2, which I never saw released. Every time I encounter these film crews I am fascinated by the fantasy and the make-believe world that directors can create out of common, everyday locations.
As I thought about the world of movies, I couldn’t help thinking about virtualization. While virtualization doesn’t pursue quite the same objective as the movies, the two share some things in common. We enjoy going to the movies to be transported into another life, and into another world. For the short time we watch the movie, we witness events and scenes totally disconnected, for the most part, from anything that’s real. There’s a level of abstraction with both that allows everything that’s visible to be layered on top of foundations about which we know very little. One moment a film can be transporting us to a street in a major city, when the very next instant, it’s all a veneer on a film studio’s back lot.
I have addressed the topic of virtualization in a couple of blog postings. Last year, back on September 13th when I was in Sydney, I posted “A taste of Virtualization,” where I observed: “I believe … the chip sets and boards, are heading to where there will be no limit to the number of CPUs, or processing engines, that end up occupying just a single socket on a processing board. To take advantage of this new packaging … then, for vendors where any number of Operating Systems (OS) may need to share a basic board package, some level of abstraction between the OS and the metal has advantages.”
Later that year, in the December 19th posting “Virtualization? Unreal, mate!,” I referenced an entry in Wikipedia that stated “the common theme of all virtualization technologies is the hiding of technical detail, through encapsulation. Virtualization creates an external interface that hides an underlying implementation.” In that same posting I observed that, behind the scenes, “For those managing data centers, there is a huge push to consolidate the servers. The economics from simple server consolidation are overwhelming.”
Doesn’t this all sound very similar to the movie industry? Is it just me, or aren’t the challenges facing technology vendors like HP similar to those faced every day by movie directors? “Some level of abstraction” has advantages! “Hiding the underlying implementation” is becoming necessary! None of us would like to see what really was being used to make up a movie set, and none of us would be excited if the special video effects or the audio track were left off … In other words, the final products – an application accessed on a server, or a finished movie viewed in a theater – both benefit equally from being an abstraction, divorced from the mechanics of “what lies beneath”!
The reference to server consolidation cannot be ignored. Moving beyond the comparisons between movie making and deploying an application on a modern, multi-core server, data center managers are indeed one of the key driving forces behind virtualization.
In the articles Dr. Bill Highleyman writes for the Availability Digest newsletter (http://www.availabilitydigest.com/), there was a three-part piece (covered in the March, April, and June issues) where he opened with “recent studies have shown that typical servers in a data-center environment that is governed by a one-application, one-server policy are running at only 10% to 15% of capacity. If only we could harness this excess capacity, we could significantly reduce the number of servers in a data center by a factor of two, three, or even more … Virtualization lets one physical server do the work of many.”
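To make that arithmetic concrete, here is a back-of-envelope sketch; the numbers are illustrative assumptions of mine, not figures taken from the Availability Digest articles:

```python
# Back-of-envelope server consolidation estimate (illustrative numbers only).
servers = 100            # servers under a one-application, one-server policy
avg_util_pct = 12        # within the 10%-15% utilization range the studies report
target_util_pct = 60     # an assumed safe utilization for consolidated hosts

work = servers * avg_util_pct          # total real work, in "percent-units"
needed = -(-work // target_util_pct)   # ceiling division: 20 consolidated servers
print(f"{servers} servers -> {needed} servers, a {servers // needed}x reduction")
```

Even with a conservative 60% utilization target, a one-in-ten utilization figure implies a five-fold reduction – comfortably within Bill’s “factor of two, three, or even more.”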
In Part 2, Bill added: “The physical layer requests made by the guest operating systems are adjudicated by an intervening layer, the hypervisor … (that) in effect, multiplexes the requests from the operating system and allows only one request at a time to be passed to the physical server.” What Bill is highlighting is that virtualization can bring huge advantages to a data center manager overwhelmed by the sheer number of servers requiring support, and that the arrival of hypervisors as the layer of abstraction hiding all the metal “on the board” will go a long way toward helping with any consolidation.
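As a toy illustration of that multiplexing idea – and nothing more; this is not how any real hypervisor is built – picture guest requests funneled through a single queue and passed to the hardware one at a time:

```python
import queue
import threading

# Toy model of Bill's description: guest operating systems issue
# physical-layer requests, and an intervening layer (the "hypervisor")
# serializes them, passing only one at a time to the physical server.
requests = queue.Queue()

def guest(name: str, count: int) -> None:
    """A guest OS issuing physical-layer requests."""
    for i in range(count):
        requests.put((name, f"disk-io-{i}"))

def hypervisor() -> None:
    """Dequeue one request at a time and 'pass it to the hardware'."""
    while True:
        item = requests.get()
        if item is None:                    # shutdown sentinel
            break
        guest_name, op = item
        print(f"hypervisor -> hardware: {op} on behalf of {guest_name}")

hv = threading.Thread(target=hypervisor)
hv.start()
guests = [threading.Thread(target=guest, args=(f"guest-{g}", 3)) for g in range(2)]
for t in guests:
    t.start()
for t in guests:
    t.join()
requests.put(None)                          # stop the hypervisor loop
hv.join()
```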
But as much as I have always been a supporter of virtualization, I am no longer as certain as I once was that it will bring real value to the NonStop marketplace. Is such a layer of abstraction really going to help the cause of NonStop and support the level of continuous availability we have come to depend upon?
Coming from a communications and networking background, I was an early witness to the flexibility and power that came with the separation of multiple logical networks running over a single physical network. This year, in my January 14th posting, “Virtualization? A bargain at any price!”, I referenced the early work done with SNA and how it gave “the networking community a way to isolate an ever-changing physical network from the mission critical online applications of the day.” But the NonStop environment really does need to know about the “ever-changing physical” components lying underneath.
In my posting of May 18th, “Virtualization? A testing time,” I asked the question “how will we simulate fail-over in a completely virtual world? And will it be worth it? Will we want to see a return to simpler configuration just for reliability?” In simpler terms, will the arrival of virtualization with support for NonStop really work? Or will it decrease the levels of availability we need? Indeed, Bill Highleyman goes so far as to suggest that it may reduce the continuous availability characteristics we accept today down to the high availability characteristics that pretty much every other vendor, with some form of clustering support, can provide today.
As I watched the very first take for the wiffleball movie and talked with a number of the crew working on the shoot, I was struck by how different the real world was from what would finally make it into the projection rooms of local theaters. And there really wasn’t any need for the audience to know about the specifics.
But I can’t say the same for virtualization when it comes to the HP NonStop. While it may not be that important for the application to know what’s going on behind the scenes, for the operating system, an incredibly crucial component of NonStop, it is important. But will we have to call in the authorities, and chase away the crowds cheering for virtualization, before we can determine what’s best for NonStop?
Comments
A nice take on virtualization - interesting analogy to a movie.
Actually, NonStop systems have been providing the advantages of virtualization since Tandem first delivered product. What are the touted advantages of virtualization today?
- Running servers full blast by hosting several applications on a single server. Sound familiar? When was the last time you saw a NonStop system running at 10% of capacity?
- Simpler management. NonStop systems beat the management complexity of other systems hands down.
- High availability? With server virtualization, you are putting all of your eggs in one basket. If that server fails, you have lost many applications. Some new products can provide failover, but failover is minutes at best, especially if the failure caused data corruption (which must be checked for). NonStop applications fail over in seconds.
- Reduced cost? Well, let's see how NonStop blades play out.
Question – running a number of virtual machines on one server, and being vulnerable to a physical server outage: is that any different from what a Pathway environment exposed? In other words, a total server failure or outage needed to be accommodated with access to a second server even with NonStop, right? So, from a planning perspective, shouldn’t you view the need for a second server when running virtual machines much the same way? Just a thought, Bill.
Cheers, and again, thanks ...
Absolutely. If one of your virtualized servers is going to be running a mission-critical application, you'd better have a backup. The virtual machine products now generally provide failover facilities. For planned downtime, failover can be seamless with no application downtime. However, following a crash, failover is like a cluster. The database has to be recovered and applications brought up. This can take minutes or more, especially if the crash caused database corruption (which at least has to be checked for).
With NonStop, the backup is already there and failure recovery is virtually instantaneous.
Got it! Thanks ...
Now, to change tack on you here – let’s look at Pathway. While positioned nicely as an infrastructure / middleware product addressing the needs of users developing transaction processing (TP) applications – can’t you imagine that Pathway, too, helps out here and provides a level of virtualization? After all, Pathway developers never had to understand or deal with the processors supported by Pathway – be it a simple 2-CPU configuration or a much larger 16-CPU configuration?
Thoughts?
Yes, Pathway is definitely a form of virtualization. Actually, it is virtualization in two ways:
1. On the one hand, Pathway supports multiple applications on a single server. The applications each have their own virtual view of the physical server. That is virtualization in the hypervisor sense, but without a hypervisor.
2. On the other hand, a single application can be a server class. Several like servers appear as a single server to the outside world, with load balancing and fault tolerance. That is virtualization in another sense: Pathway virtualizes the server class so that it appears as a single server to the outside world.
Wikipedia defines virtualization as an abstraction of a physical process. In the first case above, Pathway abstracts the NonStop server and OS to make it appear to the application that it has its own machine (a virtual machine, in effect). In the second case, Pathway abstracts the server class and makes it look like a single application server. Many servers for one machine (1), or many machines for one server (2).
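To make that second sense concrete, here is a minimal sketch of a server class behind one dispatcher – round-robin load balancing with a crude skip-the-dead-instance failover. The names and interfaces are invented for illustration; they are not Pathway’s actual APIs:

```python
import itertools

class Instance:
    """One member of the server class (a toy stand-in for a server process)."""
    def __init__(self, name: str):
        self.name, self.alive = name, True
    def handle(self, request: str) -> str:
        return f"{self.name} handled {request}"

class ServerClass:
    """Toy model of a Pathway-style server class: several like servers
    presented to the outside world as one, with load balancing and a
    simple form of fault tolerance (dead instances are skipped)."""
    def __init__(self, instances):
        self.instances = instances
        self._rr = itertools.cycle(range(len(instances)))
    def send(self, request: str) -> str:
        # Try each instance at most once, starting from the next in rotation.
        for _ in range(len(self.instances)):
            inst = self.instances[next(self._rr)]
            if inst.alive:
                return inst.handle(request)
        raise RuntimeError("no server instance available")

sc = ServerClass([Instance("srv-0"), Instance("srv-1"), Instance("srv-2")])
print(sc.send("txn-1"))        # srv-0 handled txn-1
sc.instances[1].alive = False  # simulate an instance failure
print(sc.send("txn-2"))        # srv-2 handled txn-2 (dead srv-1 is skipped)
```

The caller talks to a single ServerClass object and never learns how many instances stand behind it, or which one did the work – the “single server to the outside world” described above.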