
Virtualization? It’s on its way - or is it!

The other day I walked over to our local Starbucks for my usual early morning coffee, only to be surprised to see a number of folks standing around a pretty good outdoor kitchen. When I first saw the line I thought a small protest was forming outside the local supermarket and wondered what the problem was. As I came closer to the group I began to wonder if it wasn’t a political group assembling for a morning of campaigning door to door.

I am an early riser these days, and it was only a little after 5:00am, with the first signs of dawn just visible, as I walked up to the coffee shop. The usual marine fog layer was still blanketing the area, and the atmosphere was a little surreal. I sat outside to drink my coffee, and it was only then that I understood what was going on – Hollywood had come to Simi Valley and to the local Wood Ranch village shopping area. I have included a picture of the scene as it unfolded early that morning.

It was to be the first day of shooting of a very low-budget film, and I was looking at the entire production team – cameramen, sound engineers, actors, make-up artists, the caterers, and the director. I am not familiar with the sport of wiffleball, a variation of baseball played with a plastic ball and bat, but I was told that the film was about the emergence of a wiffleball champion, with the first day’s shoot featuring our local restaurant. The movie was going to be a piece of whimsical fantasy, with none of the supporting cast taking it very seriously.

It’s not the first time I have been close to a “location shoot”.

Back in the late 70s, the Dallas restaurant to which I took my business associate quickly turned into an on-location set for the popular TV program Dallas, and as we ate our steaks, we were fortunate enough to see the stars rehearsing their roles. More recently, as I walked across the famous Ponte Vecchio bridge over the Arno River in Florence, I happened upon Anthony Hopkins trying to shoot a scene from one of the follow-on productions to the Silence of the Lambs movie.

As the director tried desperately to transition into the make-believe world required for the movie, bellowing into a megaphone “Quiet please! This is a real take!”, the surrounding Italian crowd burst into raucous cheers until eventually the local police had to be called in to clear the bridge. In Simi Valley, it was a totally different scene – we had no crowds and, for most of the time, no director!

The last couple of times I have been in Sydney, I came across a number of movies being filmed, including the most recent Superman movie as well as Mask 2, which I never saw released. Every time I encounter these film crews I am fascinated by the fantasy and the make-believe world that directors can create out of common, everyday locations.

As I thought about the world of movies, I couldn’t help thinking about virtualization. While they don’t pursue quite the same objective, the two share some things in common. We enjoy going to the movies to be transported into another life and another world. For the short time we watch the movie, we witness events and scenes totally disconnected, for the most part, from anything that’s real. There’s a level of abstraction in both that allows everything visible to be layered on top of foundations about which we know very little. One moment a film can be transporting us to a street in a major city when, the very next instant, it’s all a veneer on a film studio’s back lot.

I have addressed the topic of virtualization in a couple of blog postings. Last year, on September 13th while I was in Sydney, I posted “A taste of Virtualization,” where I observed: “I believe … the chip sets and boards, are heading to where there will be no limit to the number of CPUs, or processing engines, that end up occupying just a single socket on a processing board. To take advantage of this new packaging … then, for vendors where any number of Operating Systems (OS) may need to share a basic board package, some level of abstraction between the OS and the metal has advantages.”

Later that year, in the December 19th posting “Virtualization? Unreal, mate!,” I referenced an entry in Wikipedia that stated “the common theme of all virtualization technologies is the hiding of technical detail, through encapsulation. Virtualization creates an external interface that hides an underlying implementation.” In that same posting I made the observation that, behind the scenes, “For those managing data centers, there is a huge push to consolidate the servers. The economics from simple server consolidation are overwhelming.”

Doesn’t this all sound very similar to the movie industry? Is it just me, or aren’t the challenges facing technology vendors, like HP, similar to those faced every day by movie directors? “Some level of abstraction” has advantages! “Hiding the underlying implementation” is becoming necessary! None of us would want to see what was really used to make up a movie set, and none of us would be excited if the special video effects or the audio track were left off … In other words, the final product – an application accessed on a server, or a finished movie viewed in a theater – benefits equally from being an abstraction, divorced from the mechanics of “what lies beneath”!

The reference to server consolidation cannot be ignored. Moving beyond the comparisons between movie making and deploying an application on a modern, multi-core server, data center managers are indeed one of the key driving forces behind virtualization.

In the articles Dr. Bill Highleyman writes for the Availability Digest newsletter (http://www.availabilitydigest.com/), there was a three-part piece (covered in the March, April, and June issues) where he opened with “recent studies have shown that typical servers in a data-center environment that is governed by a one-application, one-server policy are running at only 10% to 15% of capacity. If only we could harness this excess capacity, we could significantly reduce the number of servers in a data center by a factor of two, three, or even more … Virtualization lets one physical server do the work of many.”
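To put rough numbers on Bill’s observation, here is a back-of-the-envelope sketch in Python. The 10% to 15% utilization figure comes from the quote above; the fleet size and the target utilization ceiling are purely illustrative assumptions:

```python
import math

# Back-of-the-envelope consolidation estimate. The 12% average utilization
# reflects the 10%-15% range quoted from the Availability Digest articles;
# the other numbers are illustrative assumptions, not measured data.
servers = 100              # physical servers under a one-application, one-server policy
avg_utilization = 0.12     # typical utilization per the quoted studies
target_utilization = 0.60  # assumed safe ceiling for a consolidated host

total_work = servers * avg_utilization                     # aggregate demand, in "server units"
consolidated = math.ceil(total_work / target_utilization)  # hosts needed after consolidation

print(f"{servers} servers -> {consolidated} hosts "
      f"(about {servers / consolidated:.0f}:1 consolidation)")
```

Even with a conservative 60% ceiling on the consolidated hosts, the sketch yields a five-to-one reduction – comfortably within Bill’s “factor of two, three, or even more.”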

Bill added in Part 2 with the comment “The physical layer requests made by the guest operating systems are adjudicated by an intervening layer, the hypervisor … (that) in effect, multiplexes the requests from the operating system and allows only one request at a time to be passed to the physical server.” What Bill is highlighting is that virtualization can bring huge advantages to a data center manager overwhelmed by the sheer number of servers requiring support, and that the arrival of hypervisors as the layer of abstraction hiding all the metal “on the board” will go a long way toward helping with any consolidation.
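Bill’s description of the hypervisor as an adjudicating layer can be sketched in a few lines of Python. This is only a toy illustration of the “one request at a time” multiplexing idea – no real hypervisor is implemented this way:

```python
import queue
import threading

# Toy illustration: guest operating systems issue requests for the physical
# layer, and an intervening "hypervisor" multiplexes them, passing only one
# request at a time through to the hardware.
requests = queue.Queue()

def guest_os(name, n_requests):
    # Each guest independently queues up physical-layer requests.
    for i in range(n_requests):
        requests.put((name, f"disk-io-{i}"))

def hypervisor():
    # Adjudicates requests from all guests, strictly one at a time.
    while True:
        guest, req = requests.get()
        print(f"hypervisor: servicing {req} on behalf of {guest}")
        requests.task_done()

threading.Thread(target=hypervisor, daemon=True).start()
guests = [threading.Thread(target=guest_os, args=(f"guest-{g}", 3)) for g in range(2)]
for t in guests:
    t.start()
for t in guests:
    t.join()
requests.join()  # wait until every queued request has been serviced
```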

But as much as I have always been a supporter of virtualization, I am no longer as certain as I once was that it will deliver real value in the NonStop marketplace. Is such a layer of abstraction really going to help the cause of NonStop and support the level of continuous availability we have come to depend upon?

Coming from a communications and networking background, I was an early witness to the flexibility and power that came with the separation of multiple logical networks running over a single physical network. This year, in my January 14th posting, “Virtualization? A bargain at any price!”, I referenced the early work done with SNA and how it gave “the networking community a way to isolate an ever-changing physical network from the mission critical online applications of the day.” But the NonStop environment really does need to know about the “ever-changing physical” components lying underneath.

In my posting of May 18th, “Virtualization? A testing time,” I asked the question “how will we simulate fail-over in a completely virtual world? And will it be worth it? Will we want to see a return to simpler configuration just for reliability?” In simpler terms, will the arrival of virtualization with support for NonStop really work? Will it erode the levels of availability we need? Indeed, Bill Highleyman goes so far as to suggest that it may reduce the continuous availability characteristics we accept today down to the high availability characteristics that pretty much every other vendor, with some form of clustering support, can provide today.
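The gap between those two availability classes is easy to quantify. A quick sketch in Python – equating high availability with four nines and continuous availability with six nines is a common rule of thumb, not a NonStop specification:

```python
# Annual downtime implied by common availability levels. Mapping "high
# availability" to four nines and "continuous availability" to six nines
# is a conventional rule of thumb, not a figure from the postings above.
MINUTES_PER_YEAR = 365 * 24 * 60

levels = [
    ("high availability (four nines)", 0.9999),
    ("continuous availability (six nines)", 0.999999),
]
for label, availability in levels:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label}: ~{downtime:.1f} minutes of downtime per year")
```

Roughly 53 minutes a year versus barely half a minute – that is the difference at stake if virtualization pulls NonStop down to what clustering vendors already offer.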

As I watched the very first take for the wiffleball movie, and talked with a number of the crew working on the shoot, I was struck by how different the real world was from what would finally make it into the projection rooms of local theaters. And there really wasn’t any need for the audience to know about the specifics.

But I can’t say the same for virtualization when it comes to HP NonStop. While it may not be that important for the application to know what’s going on behind the scenes, for the operating system, an incredibly crucial component of NonStop, it is important. But will we have to call in the authorities, and chase away the crowds cheering for virtualization, before we can determine what’s best for NonStop!

Comments

Anonymous said…
Richard -
A nice take on virtualization - interesting analogy to a movie.
Actually, NonStop systems have been providing the advantages of virtualization since Tandem first delivered product. What are the touted advantages of virtualization today?
- Running servers full blast by hosting several applications on a single server. Sound familiar? When was the last time you saw a NonStop system running at 10% of capacity?
- Simpler management. NonStop systems beat the management complexity of other systems hands down.
- High availability? With server virtualization, you are putting all of your eggs in one basket. If that server fails, you have lost many applications. Some new products can provide failover, but failover is minutes at best, especially if the failure caused data corruption (which must be checked for). NonStop applications fail over in seconds.
- Reduced cost? Well, let's see how NonStop blades play out.
Richard Buckle said…
Bill,

Question - running a number of virtual machines on one server, and being vulnerable to a physical server outage; is that any different from what a Pathway environment exposed? In other words, a total server failure / outage needed to be accommodated with access to a second server even with NonStop, right? So, shouldn't you view, from a planning perspective, the need for a second server when running virtual machines much the same way? Just a thought, Bill.

Cheers, and again, thanks ...
Anonymous said…
Richard -

Absolutely. If one of your virtualized servers is going to be running a mission-critical application, you'd better have a backup. The virtual machine products now generally provide failover facilities. For planned downtime, failover can be seamless with no application downtime. However, following a crash, failover is like a cluster. The database has to be recovered and applications brought up. This can take minutes or more, especially if the crash caused database corruption (which at least has to be checked for).

With NonStop, the backup is already there and failure recovery is virtually instantaneous.
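Bill’s distinction – minutes of cluster-style recovery versus near-instantaneous NonStop takeover – dominates the availability arithmetic. A small sketch, where the crash rate and recovery times are illustrative assumptions only:

```python
# How recovery time drives achieved availability. The failure rate and the
# recovery times below are illustrative assumptions, not measured figures.
HOURS_PER_YEAR = 365 * 24
failures_per_year = 4  # assumed crash rate for both configurations

scenarios = [
    ("cluster-style failover", 10.0),  # assumed minutes per recovery
    ("NonStop takeover", 0.1),         # seconds-scale recovery, in minutes
]
for style, recovery_minutes in scenarios:
    downtime_minutes = failures_per_year * recovery_minutes
    availability = 1 - (downtime_minutes / 60) / HOURS_PER_YEAR
    print(f"{style}: {availability:.6f} availability "
          f"({downtime_minutes:.1f} minutes down per year)")
```

Same failure rate, same applications – only the recovery time differs, and it alone separates a high availability result from a continuous availability one.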
Richard Buckle said…
Bill,

Got it! Thanks ...

Now, to change tack on you here - let's look at Pathway. While positioned nicely as an infrastructure / middleware product addressing the needs of users developing transaction processing (TP) applications - can't you imagine that Pathway, too, helps out here and provides a level of virtualization? After all, Pathway developers never had to understand or deal with the processors supported by Pathway - be it a simple 2-CPU configuration or a much larger 16-CPU configuration?

Thoughts?
Anonymous said…
Richard -

Yes, Pathway is definitely a form of virtualization. Actually, it is virtualization in two ways:

1. On the one hand, Pathway supports multiple applications on a single server. The applications each have their own virtual view of the physical server. That is virtualization in the hypervisor sense, but without a hypervisor.

2. On the other hand, a single application can be a server class. Several like servers appear as a single server to the outside world with load balancing and fault tolerance. That is virtualization in another sense. Pathway virtualizes the server class to appear as a single server to the outside world.

Wikipedia defines virtualization as an abstraction of a physical process. In the first case above, Pathway abstracts the NonStop server and OS to make it appear to the application that it has its own machine (a virtual machine, in effect). In the second case, Pathway abstracts the server class and makes it look like a single application server. Many servers for one machine (1), or many machines for one server (2).
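Bill’s second case – several like servers presented to the outside world as a single server – can be illustrated with a small dispatcher sketch. The class and method names here are invented for the example; real Pathway link management and fault tolerance are far more involved:

```python
import itertools

# Toy sketch of a "server class": several like servers appear to requesters
# as one logical server, with round-robin load balancing and failed members
# skipped over. Invented names; not an actual Pathway interface.
class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, request):
        return f"{self.name} processed {request}"

class ServerClass:
    def __init__(self, members):
        self.members = list(members)
        self._cycle = itertools.cycle(self.members)

    def send(self, request):
        # The requester sees a single server; the class picks a healthy member.
        for _ in range(len(self.members)):
            member = next(self._cycle)
            if member.healthy:
                return member.handle(request)
        raise RuntimeError("no healthy servers left in the class")

tp = ServerClass(Server(f"server-{i}") for i in range(3))
tp.members[1].healthy = False      # simulate one member failing
for i in range(4):
    print(tp.send(f"txn-{i}"))     # requests keep flowing to healthy members
```

The load balancing and the masking of the failed member are exactly the two properties Bill points to: one logical server outside, many cooperating servers inside.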
