Would it come as a surprise to anyone following the HP NonStop Server that it’s making it into the Cloud Computing strategy? And would it be a surprise to anyone that it’s doing so on the strength of the database?
Spring is finally making it to Boulder, and at last our snowberry trees are in bloom. For the past couple of weeks we have looked at the trees alongside our driveway through the glass doors of our kitchen nook every morning, just to see if they had blossomed. The picture alongside this post shows that yes, they finally gave us the display of white flowers we had been hoping for as the weather began to warm.
It has been five years since we last saw this display; business had kept us away from home for extended periods, and no matter how hard we tried to schedule being home for this time of year, we never managed to sidestep commitments elsewhere. So the experience of this week has gone a long way to compensate for the distractions that came with almost continuous travel!
However, the picture isn’t quite as rosy as it may appear, and we lived through several false starts that had me wondering whether I would see any flowers develop!
Storms during the past two weeks dumped several inches of snow, driving us indoors and forcing us to abandon taking the car to a track-day event at a nearby road circuit (yes, a Colorado facility with no snow-removal equipment), and it appeared that winter wasn’t about to let go of its grip on us. Friends to the east of us swear winter will be with them for several more weeks and, like us, are cautious about becoming prematurely optimistic!
However, the time indoors has seen me devoting a lot of time to database technology and to looking at the more popular implementations. My business colleagues are often amused by the enthusiasm I have for all things database – wasn’t I the manager of the product management team within Tandem Computers overseeing product roadmaps for communications and networking products and technologies? Wasn’t it me who was always quick to step up and present new features in excruciating detail?
It was several years ago, in the post of June 5th, 2008, “My road to the vendor side …”, that I disclosed how my first years within IT were spent working with databases. From my earliest days I worked with data and file access methods, becoming a master of IBM’s ISAM and BDAM file systems. These were times when good application programmers, equipped solely with the machine’s assembler, could write file sorts that would execute entirely within the channel.
I moved to London to work on an early commercial deployment of the Cullinane database management system, IDMS, and then, after a bitterly damp English winter, I struck out for North America only to land in Edmonton, Alberta, where I parlayed my knowledge of IDMS into my first job as a Database Administrator (DBA). For the corporation I had just joined, I quickly dropped all enthusiasm for IDMS and pushed hard to try my hand at deploying an “inverted file” type DBMS, a new product from Insyte Datacom Inc. called Datacom/DB, a precursor of what was to follow with “relational” type DBMS (RDBMS).
Perhaps the most memorable thing those years spent working with databases produced was my ticket out of Canada. As I wrote in that post, what happened with all that Datacom knowledge – well, it turned out to be the key that allowed me to return home to Australia. For the next three decades, my focus switched to networking and communications; initially with TeleProcessing (TP) Monitors, before delving deeper into the underlying network architectures.
However, today we live with the Internet and the protocols it spawned, and networking has become comparatively dull; so getting the opportunity from my client to revisit the topic of databases tapped into a lot of experiences (and late nights) that had lain dormant for a long time. Perhaps not quite on the same scale as the coming of spring, but certainly approached with as much anticipation!
Before I paint too rosy a picture, databases have seen their fair share of storms as well, but we are starting to see support flourish – and that shouldn’t be a surprise to anyone within the NonStop community!
This post is not going to cover the same material I have developed for my client, but it does give me the opportunity to segue nicely into describing the future I see for the HP NonStop server, in light of the messages being developed within HP in support of the just-announced strategy focusing on Clouds and Mobility. Obviously “Mobility”, centered on mobile devices and the WebOS operating system, rules out NonStop for the time being, at least. And with that observation, the future of NonStop as a thriving participant in the enterprise offerings of HP implies a future within Cloud Computing.
With all the talk of virtualization and clustering, the NonStop SQL/MX database (NS SQL/MX) remains one of the best-kept secrets in the industry. Whereas every other popular database product sits atop the operating system, often in conflict with it when it comes to efficient resource oversight, NS SQL/MX is part of the operating system. Years of development went into making the Tandem Computer fault tolerant, and that pursuit gave us an n+1 technology built on a shared-nothing architecture – a copy of the operating system inside every processor in the system, or node.
The first byproduct of providing users with fault tolerance and the ability to run 24 x 7 was that, from the very outset, Tandem Computers could scale linearly. Today, packaged as very modern HP NonStop servers, they have lost none of these availability and scalability properties. Why is this so important today?
Cloud Computing is coming after a period of server consolidation and, with it, the emergence of mixed workloads. Before we consolidated, we had to come to terms with the fact that we had too many lightly used servers, and the manageability challenge they presented was proving daunting – even for the biggest users. The growth in business needs fueled the popularity of this model, and as we contemplate Cloud Computing there’s no evidence that we will stop growing – so we need to be able to scale the cloud.
For many corporations considering building out their data centers as a super “private cloud”, populated with as many servers as required to satisfy the corporation’s IT needs, this means there will be mixed workloads. From all that I have uncovered talking to users and vendors alike, nothing runs mixed workloads quite like the HP NonStop server – and yes, nothing else includes the database as part of the mixed workload the way NonStop does! There is no mystery in how this is achieved; after all, mixed workloads are simply riding the coattails of the services within the NonStop operating system designed to ensure fault tolerance!
I wonder how long it will take before application providers targeting Cloud Computing recognize this – or how inexpensive it is to utilize the database that is part of the NonStop operating system, NS SQL/MX. And before you think I’m painting too rosy a hue on the prospects of NS SQL/MX here, over the past six weeks I have talked with a dozen or so corporations with NS SQL/MX deployed, including the DBAs overseeing its maintenance, and the story really is very positive. However, there’s a lot more that is still required, and there’s nowhere near critical mass in terms of solutions vendors opting for NS SQL/MX.
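Part of what makes NS SQL/MX approachable for application providers is that it is reached through standard ODBC and JDBC interfaces, so the code looks like code written for any ANSI SQL database. What follows is a minimal, hypothetical Java sketch using only the standard java.sql API; the connection URL, credentials and the accounts table are placeholders invented for illustration, with the real driver details coming from the SQL/MX driver documentation rather than anything shown here.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SqlMxQuerySketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: the real JDBC URL and credentials come from the
        // SQL/MX driver documentation and your own configuration.
        String url = System.getProperty("db.url");
        String user = System.getProperty("db.user");
        String password = System.getProperty("db.password");

        // Plain ANSI SQL against a hypothetical accounts table.
        try (Connection conn = DriverManager.getConnection(url, user, password);
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT account_id, balance FROM accounts WHERE balance > ?")) {
            stmt.setBigDecimal(1, new java.math.BigDecimal("1000.00"));
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("account_id") + " " + rs.getBigDecimal("balance"));
                }
            }
        }
    }
}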
HP has a long way to go before everyone understands how the vision of Cloud Computing and Mobility will come to fruition and how this strategy will impact their own partnership with HP. For the users of HP NonStop servers, however, the steps into Cloud Computing and the reasons behind taking those steps are becoming clearer. And it has a lot to do with the merits of NS SQL/MX.
Time to dust off my database skills and to check out the latest analysts’ reports! Who could have guessed that after so many years, the industry would turn so markedly back to databases! As a colleague of mine was quick to point out today, the ball is squarely in HP’s court on this issue – if NS SQL/MX is that good, they need to build programs to attract partners; follow the Apple model, if you like.
It may be a stretch and HP may hedge, of course, but if I have learned anything from talking to NonStop customers these past few weeks, it is that HP is not lacking when it comes to referenceable customers! And watching NS SQL/MX develop strong advocates, perhaps others apart from HP will begin to pay attention! Perhaps even … you know who!
Comments
I do agree that SQL/MX is the "hidden secret" and I also agree that it could mark the entry of NonStop into providing cloud services. What I don't know is the answer to a couple of questions:
(1) how hard is it to port something from more popular databases (Oracle, DB2, MySQL, ...) to SQL/MX?
(2) in how many cases (if at all) is the database layer nicely separated from the application layer so you can easily switch (only) the database?
(3) a lot of cloud services are written in powerful scripting languages which combine the "Internet stuff", application logic and database in one tool (Ruby on Rails, PHP, Perl, ...). These tools allow startups to start with very little code but they are also powerful enough to drive a full cloud service (?). Which of these are ported (or portable) to NonStop?
Cheers, Thomas
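On question (2): when the database layer really is nicely separated, the application touches only standard interfaces and plain ANSI SQL, so switching the database underneath – to NS SQL/MX or anything else – becomes largely a matter of swapping the JDBC driver and connection details in configuration. Here is a minimal, hypothetical Java sketch of that separation, assuming only the standard java.sql and javax.sql APIs; the interface, class and table names are invented for the example and are not taken from any specific NonStop deployment.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// The application depends only on this interface, never on a vendor API.
interface CustomerStore {
    String findName(long customerId) throws SQLException;
}

// One implementation, written against plain ANSI SQL and java.sql only.
// Which database it talks to – Oracle, DB2, MySQL or NS SQL/MX – is decided
// by the DataSource (driver + URL) injected from configuration, assuming
// the SQL itself stays within the ANSI standard.
class JdbcCustomerStore implements CustomerStore {
    private final DataSource dataSource; // configured externally

    JdbcCustomerStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public String findName(long customerId) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT full_name FROM customers WHERE customer_id = ?")) {
            stmt.setLong(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}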
Availability of labor is still an issue. We talk to management about TCO, but the fact remains that NonStop skills are relatively rare and expensive compared to Windows skills. OSS/SQL/MX/Java/Eclipse, etc. are all nice, but skills for the underlying components (Guardian, DP2, load balancing, TMF, Pathway), along with proper development skills for the architecture, are in short supply and significantly affect management decisions on platform choices.
HP should revive the old fundamentals courses and give them away. Maybe then, serious inroads into cloud computing could be made.