Saturday, September 29, 2007

The ebb and flow in technology deployments!

Back home in Boulder again - and this is my Boulder office. For those of you who were present at the closing session of the ITUG 2005 Summit, you may recall the gift I received from Volker Dietz - a painting created for the Tandem Germany office. It's now framed and takes pride of place in my office.

I am pleasantly surprised with the readership this blog is receiving and with the number of good comments being posted. Just to let you know more about how this works, at the request of ITUG, all comments that are posted are sent my way before appearing on the blog – I can accept, or reject complete postings – but I cannot edit them. So what you see posted is what was submitted – in total.

I am also pleased with the marketing done by others – already the link to this blog is being promoted in other newsletters – electronic and printed. Watch for even more references in upcoming ITUG Connection issues. I have always been a fan of “guerrilla marketing” and the support I am receiving is clear evidence to me that this is always a good avenue to pursue – especially in support of a medium like this blog.

The dialogue I am having with a number of HP NonStop product managers is also encouraging. Having said this, I understand HP has a policy in place that really does limit how much feedback is provided to issues and comments posted in a blog – for HP, as you can imagine, there are just a lot of topical blogs out there and it would be an endless, and perhaps thankless, pursuit to chase down every one of them and to sort out fact from fiction. The channel they do watch is obviously the SIG discussion forums – and I just want to reinforce the necessity for maintaining the SIG exchanges. My blog is intended to complement these more formal channels, and not to replace them.

Having said that, I am particularly interested in any comments you elect to post about new clients and new applications for the Integrity NonStop server. I hear a lot of anecdotal buzz about this agency purchasing a large NonStop server, and about a new NonStop server being deployed for that new application. Later this year I am going down to Singapore to look at one such new customer deployment.

There’s a lot that I hear, but like many of you, until it comes out in a press release, it’s hard to just jump in and talk about it. And there are some categories of users – government agencies in particular, especially any to do with state or homeland security – that could tell you about this new application or that new deployment, but then perhaps they would have to kill you, if you know what I mean. But I have to add, from some of the licenses I have seen being prepared even within my own company – there are some pretty big configurations out there that are still covered by nondisclosure.

So, where it’s appropriate, just add a comment or two to let me know what you are hearing about and I will follow-up and report back on this blog. It’s very important for most of us to know that the need for NonStop is still alive, and that fault tolerant servers remain a much sought-after solution.

There are a lot of external factors now that suggest we are returning to a “centralized deployment model” once again. For as long as I have been associated with IT, I have watched the ebb and flow that takes place between IT groups and business units. Companies have let the end users add more smarts into their groups and then, over time, re-thought the deployments and moved whatever smarts were deployed back into the data center.

Whether it was out of frustration over the timeliness of a new application, the angst over the inflexibility of legacy applications, or quite simply the discomfort over the manner in which some IT chiefs used information in much the same way as feudal lords did back in the Dark Ages, there was always the potential to argue strongly for local control of selected information.

From distributed computing with remote minicomputers, to client/ server computing with departmental servers, or even today, with web services and the multi-tier architectures that they introduce, the arrival of each of these has spawned the appearance of computing power outside of IT’s direct management control.

Distributed computing fell out of favor as the people costs skyrocketed – just how many people were hidden inside headcount allocations that were really working on IT support? How much additional software did we need to oversee it all? Client/server began to lose ground when we added up how much we were paying for the Wintel platforms springing up everywhere – and companies could not standardize on anything, with new applications arriving faster than the then-current tracking systems could keep up. When even Dell said it couldn’t fill some major orders because the technology would change across the delivery timeframe, then yes, customers did have every right to question the value proposition.

The latest push for some centralization is coming from two sources – security, and the need to reduce the energy bill. I heard a term for the first time only a few months back, when a client talked about a new metric – MIPS per kilojoule! It turns out that in some markets – the island of Manhattan being one – only so much power can be delivered. In the ongoing battle to balance computing power with heating / cooling needs, trade-offs were being made to the point where you could introduce no additional processing power.
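As a back-of-the-envelope check, a kilowatt is one kilojoule per second, so a server's MIPS rating divided by its kilowatt draw gives millions of instructions per kilojoule. Here is a small sketch of that trade-off under a fixed power budget – all figures are invented for illustration, not vendor numbers:

```python
# Compare hypothetical servers under a fixed power budget.
# All figures below are invented for illustration only.

POWER_BUDGET_KW = 50.0  # total power the facility can deliver

servers = {
    # name: (MIPS per unit, kW drawn per unit)
    "dense_blade": (12_000, 0.35),
    "legacy_rack": (9_000, 0.50),
}

for name, (mips, kw) in servers.items():
    # 1 kW = 1 kJ/s, so MIPS / kW = millions of instructions per kilojoule
    efficiency = mips / kw
    max_units = int(POWER_BUDGET_KW / kw)   # how many units fit in the budget
    total_mips = max_units * mips
    print(f"{name}: {efficiency:,.0f} MIPS per kJ, "
          f"{max_units} units fit, {total_mips:,} total MIPS")
```

Under this toy model the denser, cooler-running blade nearly doubles the total compute the same room can host – which is exactly the pressure pushing deployments back toward the data center.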

Security these days has more to do with compliance than with shutting off access, I suspect. Recent legislation has made the whole process onerous, but like just about anything else tied to government, now that legislation is in place and agencies have been established to monitor it, I can’t see us changing it any time soon.

Energy and security are among the most widely covered topics in the press and on the web. Back at the Gartner Symposium/ITxpo in 2003, researchers Carl Claunch and Al Lill said in their keynote presentation “Gartner Predicts: The Future of IT” that we would see the next massive wave of innovation and demand for IT starting in 2006, and that the drivers would include:

Transition to SOA
Low power consumption mobile devices
Secure broadband networks
Real Time Infrastructures

Gartner has its critics, for sure, but I recently paged through this particular presentation and found its accuracy uncanny. SOA, Security, Real Time, and Energy Usage are all front and center in most discussions I have participated in lately.

And this bodes well for the future of Integrity NonStop. This is a definite elevation in visibility for a platform that supports many of the above drivers and counts mature technology options among its attributes.

So, as this blog continues to develop, as the guerrilla marketing expands, and as more of the NonStop community shares information, I would really like to get more feedback from you all on the many daily successes you experience from your own internal promotion of NonStop. Large, highly energy-efficient and extremely secure server packages have now become very important. And to paraphrase a well-known declaration by an Orc in The Lord of the Rings: “Well boys, looks like NonStop is back on the menu”!

Wednesday, September 26, 2007

It's time to leave the Duke!

Back in Boulder again for a few days, and then it’s back to Simi Valley next week. Later next week, it’s off to the Euro ITUG event in Brighton. So what I would like to do tonight is to pick up on one theme I have been covering – the Integrity NonStop platform itself.

It is clear to me that in the months to come, HP will be distancing itself from its competitors based on its roll-out, top to bottom, of blade packages. And nowhere will the impact on users be more visible than in the NonStop community. IBM has already made it clear that, for the time being, its mainframes will not be based on blades – the basic building block of “books” will continue for new product introductions for some time to come. This is not to say that IBM’s decision is wrong – it is certainly enjoying an up-tick in mainframe usage of late – just that it will be pursuing a different path from HP.

As I flew to Denver I grabbed a couple of magazines and among them were a couple of recent ITUG Connection issues. I was flipping through the pages of the July / August 2007 issue when I ran across the Winston Prather piece “News from HP’s NonStop Enterprise Division”. Winston is the VP and General Manager of NonStop Enterprise Division & High Performance Computing within HP. With overall responsibility for the Integrity NonStop platform, his is a column it just makes sense to always read. So it was with interest that I caught the statement “HP is making significant investments in the platform, most notably in moving it to a bladed architecture.”

What further caught my attention was the follow-on remark “we will also be moving storage and communications to Linux front-end servers in the future. Greater leverage of volume economics through increased use of industry-standard components will translate into lower TCO for our customers.”

With IBM and HP going in slightly different directions, the user community will face a number of decisions – but I am always comfortable when there are choices. Blades versus books! Itanium versus Power! And different views on virtualization! These are all topics I plan to cover in future blog postings.

What Winston is foreshadowing, to me, is a new paradigm where a common blade building block – even within a single system, such as the Integrity NonStop – may be the same, but may be running different OSes. Underneath the covers of a future system may be a combination of OSes, each selected to support a specific function – be it storage, communications, etc. As most of you know, I have worked in communications and networking for more than three decades, and today I take no issue with electing to run communications stacks on top of a Linux distribution. It makes all the sense in the world and I can only see more complete protocol offerings and better industry support as a result.

Servers, based on standardized hardware, have a shot at being more energy efficient and we will see their “greening” become a priority. By this I mean placement of the blades within the total package of racks can be scrutinized and then organized for optimal heat management. I just have to believe mapping the heat signature of a common building block will then lead to better blade placements and a more energy efficient overall package.
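To make that idea concrete, here is a toy model of heat-aware placement: deal the blades out hottest-first, always assigning the next one to whichever rack is currently coolest, so no single rack concentrates the hottest building blocks. Everything here, including the wattage figures, is an assumption for illustration only:

```python
def place_blades(heat_watts, racks=2):
    """Greedy heat-aware placement (toy model).

    heat_watts: per-blade heat output in watts (illustrative figures).
    Returns a list of racks, each a list of blade indices.
    """
    # Consider the hottest blades first
    order = sorted(range(len(heat_watts)),
                   key=lambda i: heat_watts[i], reverse=True)
    placement = [[] for _ in range(racks)]
    for blade in order:
        # Assign to whichever rack currently carries the least heat
        coolest = min(range(racks),
                      key=lambda r: sum(heat_watts[b] for b in placement[r]))
        placement[coolest].append(blade)
    return placement

# Four identical-form-factor blades with different heat signatures:
print(place_blades([400, 350, 300, 250], racks=2))
```

Even this greedy pass balances the example racks exactly; a real placement tool would also model airflow and hot/cold aisles, but the principle of mapping heat signatures onto a common building block is the same.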

We are going to see a lot of changes, and have to adapt to new ways of doing things. We may be moving in a totally new direction with different manageability priorities and a whole range of new interfaces and tools.

Last year, in a magazine insert called Next-Gen IT (July, 2006) that was put together by the editors of Computerworld and CIO magazine, Michelle Bailey, an IDC research director said “CIOs are seeing that the economics of yesterday’s data center isn’t going to work in the future – you can’t have more people, more I/O, and more servers with every new application.”

This week I caught up with Wil Marshman and we met in the Duke of Edinburgh for a drink. For all of you not familiar with the Cupertino campus, for the past 25 years or so many of us thought the pub was part of the campus. Back in Tandem days, on special occasions, it was even graced with a regular Tandem building “tombstone” - although I forget the specific location number it was assigned. Being back in the pub with Wil brought back a lot of old memories and as I looked around, there was Jack Trice – in his usual corner! I walked outside and there was Andy Hall on his cell phone, talking to family – it was Andy who first took me into the Duke in 1987 and where I first met folks like Roger Mathews and Steve Saltwick.

And it just stopped me dead in my tracks – for 20 years, we had kicked back in this place and brainstormed all sorts of wild and crazy ideas. It was a comfortable place, and we all knew where to head at the end of the day to catch up with folks we needed to see. We were kind of reluctant to try any place else! And it reminded me of what I had written only days earlier in my August 29th blog posting (Back Home … To NonStop) “what we have considered as our safe and trusted turf may be moving underneath us”!

I guess in some ways, we all would like to stay with what we have. We feel most comfortable working with tools and utilities that we have depended on for many years – where we can quickly comprehend, and react to, the information returned. But I sense we are headed in an entirely new direction and so much of it will be different, unfamiliar, and perhaps a little frightening.

As I look ahead, I really don’t know all the details about what’s coming and so I need to be cautious. But we do know that a bladed architecture is coming, and we will have an industry-standard building block from which any number of configurations will be built. We will see multiple operating systems in these packages, and we will have a much more efficient “green” product. But among the packages, there will be NonStop and it will be a key part of the HP server strategy.

What we can’t predict with any certainty is whether this new NonStop will find universal acceptance and whether it will usher in a new era of growth – in selected niches or across a broad mixture of industries. The potential is definitely there, however, and I am looking forward to its arrival.

My favorite baseball coach, Tony LaRussa, talks about how he prefers his players to get themselves into positions to “manufacture runs”. Tony prefers to manage the game, one inning at a time, so that a number of runs can be scored over the duration of the game. He doesn’t build a team that relies on individuals hitting home runs to win a game, becoming dependent on a single swing of the bat.

Returning to the Next-Gen IT story, they noted that “it’s clear that the next generation of data center will be a bastion of virtualization, consolidation, and automation technologies”. And so it is that I see the strategy of HP unfolding – no longer dependent on a single outrageously successful product, but rather on the compilation of many successful packages built from a bladed architecture. We do know we are heading towards a future where the basics will be simpler, and where we can’t load up on the data center staff to look after it all.

And this leads me to a couple of closing thoughts.

While Winston forecasts that Integrity NonStop servers will potentially include multiple Linux components – can we realistically rule out other HP Integrity server offerings including NonStop components? Can we rule out NonStop becoming part of every BCS offering?

I can already see a day when the option to run a data base may come complete with a NonStop foundation – but would there be any limits to how NonStop evolves? I just don’t think so, and see the NonStop server line living well beyond any previous expectation we had.

Monday, September 24, 2007

What did you have in mind, eh?

With a dozen postings already up on the blog, I am beginning to enjoy the exchange that’s begun. However, among the postings and emails that have arrived are some questions about the scope of my topics, the style of commentary I am providing, as well as some concerns about who exactly my audience is. So, I thought it was probably the right time to take this on and provide some feedback as to my views on all three of these issues. The picture I have included here is of me relaxed, looking out to sea, and I can't think of a better way to introduce the topic!

When it comes to the scope of topics I will cover, it was clearly outlined in my first Connection magazine column, the Real Time View, back in the July – August 2006 issue – when I said “I plan to focus exclusively on the data center, and to cover the many combinations of server platforms and infrastructure we all rely upon to support our business. This is not to dismiss other important areas within IT - but just reflects the area of interest that I am most passionate about”.

I recently had an exchange with a couple of folks in Cupertino on this topic, and I think it’s worthwhile to repeat it here in my blog. But since putting it together for them, I have had a little more time to think it through, and now believe that the scope of my topics will center on three themes:
1. The platform,
2. System attributes (or fundamentals), together with a recent addition,
3. Data Bases and Business Intelligence.
Within these three themes, I will cover a fair amount of territory, as follows:

1. Platform

Blades: I foresee dramatic changes on the horizon (2008?) as blades first make an appearance in support of NonStop – this to me, bundled in potentially hybrid packages, will put NonStop directly in the cross-hairs of IBM and its System z. I say this not because there’s anything wrong, or bad, with this eventuality, but rather to frame such a positioning in terms of having a choice.

Open Systems: I really liked what Wendy Bartlett said in a recent blog comment to the “Got Security?” posting I made a few weeks back. In her comment she said “while OSS is POSIX compliant, OSS is not POSIX, UNIX, or LINUX. OSS does use the high-level interface code from the OSF/1 implementation.” Wendy then went on to add “however, the low-level kernel code was implemented by HP on HP’s NonStop Operating System to our own software engineering standards and therefore inherits the NonStop Operating System fundamentals.” This is huge, and something we should all remain cognizant of, and actively promote within our corporations!

Service-Oriented Architecture (SOA) Journey: I continue to follow this topic as I see us transition to a services model and my previous blog served as an introduction to this topic (Service? I’ll give you service … Sept 21, 07). From the early days of Web services, on to SOA, and potentially, to an Event-Driven Architecture (EDA), I have been an active participant, and I see great potential here, not only with the product-set from HP, but from the vendor community as well. I have borrowed the phrase SOA Journey from another user group but really I do believe the deployment of an SOA suite is but the first big step on a journey that will continue for many years to come.

2. System Attributes

Availability: I am pretty passionate about this topic and truly believe it is the most important product differentiator. I see this as fundamental to the question of “why NonStop?” I see every need to have systems that have no downtime due to unplanned outages. I am also actively involved with the Business Continuity SIG as the SIG leader, and concerns over availability are front-and-center of the discussions within this SIG.

Scalability: I am equally as passionate about this topic and view it as an attribute that is often skipped and/or glossed over. There are three great lies in this world –

“The check is in the mail!”
“I checked, and your meal will be right out!”
“This system will be as big as you need!”

Many of us sat through a great presentation at the ITUG event (in 2006, I believe) when the ANZ bank (Australia) talked about their migration from NonStop to a cluster of Intel servers running NT. The work they did to address availability moved them close to NonStop, with a couple of exceptions – but just as they were about to go into full production, the ANZ bank bought another bank and found they could not quickly scale up to meet the new demands. The return to NonStop, for them, was purely based on the scalability of NonStop.

Security: And, winding its way through all of this will be Security – something I have become extremely sensitive to lately – and not just because my wife, who joined XYPRO last year, talks to me about it over coffee (not the specifics of her products, mind you, but in general terms), and not because at GoldenGate data integration and business continuity are bumping into security as well, but because it’s right at the top of the list of concerns with every CIO I have talked to recently.

3. Data Base and Business Intelligence

Real-Time: I don’t think we can escape this subject anymore, as the term “real time” is becoming more important as discussions turn to Business Intelligence (BI). I view the definition of real-time along the lines of the IEEE Standard Dictionary, Sixth Edition (John Wiley, 1996), when it states “the actual time in the real world during which an event takes place … an event or data transfer in which, unless accomplished within an allotted amount of time, the accomplishment of the action has either no value or diminishing value …” Again, this is an attribute of the data base that runs on NonStop that puts a lot of distance between it and what some other platforms can provide. In a real-time world, there’s increased pressure to remove the downtime due to planned outages.

Software / Data Integration: I foresee we will all be playing in a world of “busses” as we interconnect our servers. Will catching the bus be worth the price of a ticket? Part of this I see tied to the emergence of data virtualization or, the data base backplane, as I put it, while another part is tied to the growing trend in look-to-book configuration optimization. In other words, while we have sorted out the communications / network pieces and no longer think about how to assemble networks and access data, there are a heck of a lot of issues around ensuring the data we do end up seeing is the right data for our purposes.
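The real-time definition above – results that lose value once a deadline passes – can be captured in a few lines. Here is a minimal sketch, with purely illustrative thresholds:

```python
def value_of_result(latency_ms: float,
                    deadline_ms: float = 100.0,
                    cutoff_ms: float = 500.0) -> float:
    """Value of a real-time action as a function of completion latency.

    Within the deadline the result has full value; past it, value decays
    linearly, reaching zero at the cutoff (the "no value" case in the
    IEEE-style definition). Thresholds here are illustrative only.
    """
    if latency_ms <= deadline_ms:
        return 1.0
    if latency_ms >= cutoff_ms:
        return 0.0
    # Linear decay between the deadline and the cutoff
    return (cutoff_ms - latency_ms) / (cutoff_ms - deadline_ms)
```

Planned downtime sits at the far end of this curve: while the system is out, every action lands past the cutoff and is worth nothing – which is why a real-time world pressures us to remove it.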

As for the style of writing you will find in this blog – consider it as lying somewhere between Peter Egan’s columns in Road & Track (Side Glances) and Cycle World (Leanings), and the fictional Carrie Bradshaw and her Sex and the City column. I am a huge fan of the anecdotal style of Peter Egan, but I also like the questioning style of Carrie. While I have a passion for user groups I am also just as passionate about cars and motorcycles! And while I like technology in general, this blog will stay focused on computer technology.

In my very first Connection magazine column (Real Time View July – August 2006) I said that "I am writing (the column) with the expectation of presenting a slightly different view of NonStop than might be presented elsewhere and I am writing it to generate further discussion. I am openly soliciting your feedback and I can assure you I will be reading all correspondence that I receive."

Following an anecdotal style gives me the freedom to entertain as much as inform. I don’t want to become repetitive or boring. As I run across topics that interest me or become engaged in conversations that I find enjoyable, I will work out ways to include them here. Availability touches all of our lives today and has an influence over many of the decisions we take on technology, so I can’t believe I will ever run out of subject matter for this column.

With respect to the audience for this column – this is still a work-in-progress. For those of you who have stayed close to this blog and have been reading my postings, you will see that I have thrown together a mix of topics for a very broad audience. Right now, it’s a bit of a dilemma for me, and it could go either way. Whether I engage in a dialogue at the bits-and-bytes level, or approach topics from a business perspective, still has to be worked out. However, what is obvious to me from the comments of the past few weeks is that I will be heavily influenced by the comments I receive. What triggers an active exchange will have an impact on the direction I head and on the column-inches I put together on the subject.

To wrap up this posting, I would like to address one final point. The creation of this blog is not in competition with any other exchanges that exist within the ITUG community; it is being undertaken to complement other programs. The thought behind the production of this blog is that across today’s community there are different generations of users. Some of us are more comfortable reading a trade publication while we are on a plane, some of us prefer to email our friends, while others like to get information from a number of online sources, including blogs. It would be remiss of ITUG not to have such a channel as this operating.

As just another volunteer, I will try to maintain “the separation between Church and State”, between my day job and my hobby, and between the different user groups I support - so, please remain open and blunt with your comments and let’s see if we can build something we all can enjoy.

Friday, September 21, 2007

Service? I’ll give you service …

I’m starting to put my things together and am getting ready to head up to the Bay area to drop in on Cupertino, as well as my own head office in San Francisco. I like my Simi Valley office – and here it is! After a few days in the Bay area, I will head back home to Boulder. I need to catch up with my good friend Lyman, one of my business partners, and I have to take our motorcycles in for service.

And as I was thinking about getting the bikes checked in for routine services, it occurred to me that service, any kind of service, is just going downhill. Like most of you, I have to believe there just aren’t too many 1-800 numbers we care to call anymore. I find it so much easier today if I can just go online and punch a few keys on someone’s web site. This was brought home to me recently when, as a result of my own mistakes, I have to admit, I put myself into a situation that I am still trying to sort out.

As the Memorial Day weekend arrived (that’s back in late May, for my international readers) I bought a car. Not a very difficult transaction these days – although the paperwork is ballooning out of sight. But I toughed it out, and just took a car right off the lot. Sharing time between Colorado and California, I thought I should do the right thing by my Californian friends, and slip into a two-seater ragtop – with a Beach Boys CD in the player!

From this point on, everything began to fall apart. I planned on driving the car back to Boulder, registering it in the State of Colorado, and simply adding it to our existing insurance plan. Pretty normal, and seriously, nobody really wants to deal with California insurance rates these days. My Colorado agent processed my request over the phone, no worries!

So, after I had driven back to Boulder, I walked into the local branch of the Motor Vehicle department (the Colorado DMV) and began the process. Yes, I had the proof of insurance firmly in my hand. I paid for the registration, picked up the new Colorado plates, and then took a quick look at the documentation I had to keep in the car. My wife’s name was wrong!

“It’s not our problem – you need to correct it back in California!” was the response. “You’re OK and the car is now registered here, in Colorado, and with Colorado insurance, you should have no problems driving it in California.”

My insurance company agreed, after they checked with a California Highway Patrolman they knew – and don’t ask me how, in Colorado, they were good friends with a Californian Patrolman. Still, I wanted to have the paperwork corrected.

But how do you change the title – when you financed the purchase? Who made the error and where do you start? I had to fill in the right forms to get my finance company to request a title name change in California, get the DMV in California to issue an amended title for the finance company, and carry the temporary validated new California registration back to Colorado. It took two trips to the Colorado offices and four trips to the California offices, before each party was satisfied.

In the meantime, California suspended the registration (which they thought they still had, as we were still updating it) because they never received the insurance papers – a completely separate item that fell through the cracks as the paperwork was being ferried around.

And no, none of this could be done over the phone or even by mail – and no, there was no self-help kiosk available that could guide you through the process. On two separate occasions I almost lost it when it was suggested that I should consider starting again at the beginning! And on one occasion I was given a completely wrong set of instructions.

In today’s “Service-Oriented” world, why couldn’t I just sit down at a self-service kiosk, or even my own PC with Internet access, and walk through a process that sorts out this mess? Yes, I am aware of the security implications, but I had all the information in hand and could answer any random questions on file (and yes, I do remember what my first car was, which city I was born in, and what is the name of my favorite pet!).

I would have liked to have been able to walk up to such a kiosk, probably located in a state government building of course, and select car registration transfer. I would have let the kiosk read my encoded California receipt, double-clicked on a car icon in the state of California (courtesy of Google maps) and dragged the car icon to Colorado, swiped my credit card for payment (yes, I know – that’s an issue today as well), watched for my month / year stickers and a new replacement encoded Colorado receipt to be printed, and then taken the receipt to the kiosk that dispenses plates.

But really, couldn’t I simply log on to the Colorado DMV from my office PC – initiate a transfer, check the names on the title, click the request for change (with the replacement title going back to the finance company, as it was already on file), pay the fees and rest easy? All the while, calmly sipping a glass of wine?

Gimme a break – this is not hard to do today! And to do it securely!

But you are trying to smash together applications from different departments; you want to cut across a number of different networks, public and private; and you have to be authenticated at every step. Yes, I know – but it’s all there. I have it all!

Why do I need to make multiple visits to a bricks-and-mortar shop-front?

A couple of reports and magazines arrived in my office this week – inside InformationWeek were the results of a survey it had just completed, across 229 companies, on Service-Oriented Architecture (SOA) / Web services. I also received a survey report, “The New Mainframe: Data Integration and Service-Oriented Architecture, Big Iron Style”, conducted by Unisphere Research (publisher of Database Trends and Applications), for which some 431 responses were collected. All up, pretty healthy numbers for both research pieces.

These reports went on to say much the same thing – even though SOA has been with us for a while now, and has had a lot of time to get traction inside many corporations, the results were proving to be a little disappointing.

While the initial goals of most users were application integration and service reuse, as InformationWeek noted, SOA’s perceived complexity seemed to be the culprit. The Unisphere paper still pointed a finger firmly in the direction of the protocols, XML in particular, and how this text-based language, with its parsing requirements, was proving to be too much for many servers.
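The parsing-cost complaint is easy to reproduce. This sketch builds a toy XML order document (the element names are invented for illustration) and times Python's standard parser as the payload grows:

```python
import time
import xml.etree.ElementTree as ET

def make_payload(rows: int) -> str:
    """Build a toy order document; field names are made up for illustration."""
    items = "".join(
        f"<item><sku>SKU{i}</sku><qty>{i % 9 + 1}</qty></item>"
        for i in range(rows)
    )
    return f"<order>{items}</order>"

for rows in (100, 1_000, 10_000):
    doc = make_payload(rows)
    start = time.perf_counter()
    root = ET.fromstring(doc)          # parse the whole text document
    elapsed = time.perf_counter() - start
    print(f"{rows:>6} items, {len(doc):>8} bytes, parse {elapsed * 1000:.2f} ms")
```

The point is not the absolute numbers, which vary by machine, but that parse time grows with payload size – and in a text-based SOA every hop between services pays that cost again.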

But the goals of SOA are right on the money. We really do need to unlock many of our applications, turn them into services, and think through better ways to integrate with other systems and services. The ubiquitous nature of the Internet, and the Web, has effectively removed the complexities of the network from our subconscious and we can get to anything we need these days. So now it’s up to us to push more of our interfaces out to our clients and users.

There’s a slow but sure revolution going on among end users themselves, however – they are beginning to understand how to integrate multiple feeds right there, on their desktops. Many applications and services that they can access now provide feeds and APIs that are pretty easy to use. End users are combining, or “mashing”, their own applications and these “mashups” are turning up in lots of places, and pretty quickly.
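To make that concrete, here is a minimal sketch of the kind of join a desktop mashup performs when it combines two feeds on a common key – the feed names and fields are invented for illustration:

```python
# Two hypothetical feeds, as a desktop mashup might receive them:
price_feed = [
    {"sku": "A100", "price": 19.99},
    {"sku": "B200", "price": 5.49},
]
stock_feed = [
    {"sku": "A100", "on_hand": 12},
    {"sku": "B200", "on_hand": 0},
]

def mash(prices, stock):
    """Join the two feeds on SKU, the way a client-side mashup might."""
    by_sku = {row["sku"]: row["on_hand"] for row in stock}
    return [
        # Merge each price row with the matching stock figure
        {**row, "on_hand": by_sku.get(row["sku"], 0)}
        for row in prices
    ]
```

Moving a join like this off the desktop and onto an always-on, centrally managed server is precisely the argument for hosting mashups closer to the data center.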

The hope with SOA, and now with mashups, is that end users will gravitate to those corporations that just make it easy to work with them. And yes, I would advocate that mashups be moved closer to the center, onto IT-managed, highly available servers, rather than left out on the client devices.

Let’s be blunt here – systems like the HP NonStop server should be more seriously considered for functionality like this, given how critical it is to any corporation’s image. Nobody comes off looking good when a service goes offline for any reason. And I am a huge supporter of SOA, and have been for many years. Isn’t it time we pushed more of the systems supporting our end users back out to them, for their benefit?

I have to believe we all see the value of doing it ourselves. Even if some of us experience glitches and gotchas, the rest of us could prove very useful in the QA testing of early roll-outs. Just give me the displays and let me sort it all out – once I have satisfied you that I am who I say I am, then just get out of my way! I just don’t need hand-holding here; save your money.

Our knowledge of SOA has had a lot of time to take hold – aren’t we at the stage where we can leverage it to better serve our clients? Hopefully, as we work through the issues and begin to open up more applications this way, I will never have to go back and explain what I need to have done – mind you, I am not there yet.

This story is not finished by any means – I still have one more trip (I hope) to the Colorado DMV. Maybe there are more forms I need to complete. I am optimistic that we can figure this all out, and that I will never have to stand in line again!

Wednesday, September 19, 2007

Brave new open world!

Back in my Simi Valley office, and once again knee-deep in my day job. But a number of exchanges and conference calls today have shared a common thread. While in Sydney I had the opportunity to catch up with old friends, some of them mentioned in previous blog postings, and the conversations turned to how cool development has become.

No matter the age – and yes, I talked with folks my own age as well as with those the age of my children – cutting code was what they really liked to do. Whether deep inside software development shops, or simply providing applications to their own organizations, the environments they worked in and the tools they used began to look alike. I really saw for the first time the full impact of standards and an open development environment – and the flexibility and power that have come as a result.

As most of you know, I have a passion for user groups and for those folks who support user groups. It should come as no surprise to anyone that I actively participate in a number of them and am a board member of a group that is not ITUG. I think we all recognize that today, it’s a heterogeneous world and I like to stay engaged with the users of a number of different technologies. No real surprise here, I suspect.

It was while on a phone conference with another group that we began to look at who we should be targeting as potential members of the community, and from somewhere the term “Java Dude” came up. Immediately, all of us could visualize exactly who that individual was – we could all relate to someone we knew as that Java Dude! First playing with Java in their home computing environment, in many instances, and developing a taste for the ease of development they experienced, these folks became champions of the environment and, over time, saw it accepted within their workplace. Taking on the role of evangelists, they became increasingly proud of the inroads this modern technology was making within their organization.

But at the core of all of this was the increasing use of industry standards and open systems. This changes a lot of things and, when applied in the development environment, pretty much shields today’s developer from any uniqueness associated with the target server platform.

The move to Intel’s Itanium chip at the core of today’s NonStop servers really was a significant step in the right direction as far as I am concerned – and riding the Intel Itanium roadmap should deliver significant price-performance improvements going forward. Suddenly, developers are running virtual machine environments on their desktops that support multiple OS images with any number of database management systems deployed. And Linux is becoming widely used on these powerful development clients.

The Itanium play is only part of the story. The Java Dude, and his peers using any mixture of C/C++ and even C#, have all gravitated to some pretty powerful client desktops. Industry standards and open systems are really opening up the productivity floodgates!

And it was against this backdrop that I had a very lively conversation with Marty Turner, now at Integrated Research and a former Tandem Cupertino developer (remember SNAX/CDF?). I caught up with Marty in Sydney last week, and what follows here is basically how the dialogue developed. I am including it as it happened, to make sure I don’t miss anything or edit errors into the dialogue:

RT Writer(RTW): Now, about all this Linux stuff - are you happy with it on clients? Or just on servers? Does it help you as a developer?

Marty: I use it mostly as a client, building software I download, using VI (I also have SlickEdit and use the KDE workbench, but good old VI (actually VIM) is still my favorite for a "quick" edit). VI is the old ubiquitous text editor written by Bill Joy in the 70s; easy to use – once one gets a subset of the (myriad of) available commands into one's repertoire. VIM is a much improved (hence, VI iMproved) version that is on most all Linux distributions (and available on many/most Unixes). KDE is the Open Source version of CDE (the Common Desktop Environment): CDE was an industry attempt (by Sun, IBM, etc.) to build a common Unix GUI desktop manager. KDE is very powerful and has many configurable options, whereas Gnome (being somewhat less easily configurable) is prettier and easier to use (industry perception) than KDE.
I run a MySQL server for playing with some Internet stuff and run a CVS and a Subversion server just to keep my skills up with those two. Oh, and since I have a 250GB RAID1 setup on it (it being my PogoLinux box), I use it as our master backup server for the likes of our personal data and my 48GB of iTunes!

RTW: Could real dummies like me easily install on my home PC? Or, is it still for power users / developers and for servers?

Marty: Absolutely you could get into it; the state of the art of some of the distributions (or “distros”) has become quite easy to use. There are of course distros for the "power" users (like Debian), but the Linux community has worked really hard in the last 3-5 years to make installing it less painful (so more folks use it). A good website just to see what folks are using is DistroWatch. I have about 9 or so distros under VMware on my Windows box and use Mandriva 2007 (Mandriva used to be Mandrake until they merged with Conectiva) on my PogoLinux box. For a Linux tyro I'd suggest Ubuntu or Kubuntu as they are for the most part currently the easiest to install, configure, maintain, and use, either directly on a PC or under VMware (I've only installed on desktops, but have heard that it's easy on laptops as well). Both Ubuntu and Kubuntu are based on Debian and as such are quite stable, have a good selection of software (applications, tools and games), and are very easy to use, Linux-distro-wise: the main difference between the two being that Ubuntu uses Gnome as its window manager and desktop, and Kubuntu uses KDE. Ubuntu is kept up to date (stable Linux kernel and tools) the most as it's the most popular, and it'd be the one I suggest for a Linux installation/user novice as it's easy to install, use, and keep up-to-date. Too bad I didn't know you were interested in it, or I could have given you a DVD copy of 7.10 (the current version) the other night. See the Ubuntu and Kubuntu websites (again, I'd still suggest Ubuntu first). The easiest way to learn how to use all this is to install it under VMware (I use Workstation 6); else, just use/buy an el cheapo desktop and blow away Windows with an Ubuntu install (you can dual boot, but that sort of setup is not easy for a novice).

RTW: What do you think of VMware? And, based on your previous IBM experience, is it a kind of IBM VM, but for Intel? And for AMD too?

Marty: Functionally, it allows me to do just what I used to do with VM/370: run a different OS than the native one in control of my hardware; the main difference being that VMware integrates its functionality with a host OS (in my case WinXP, but it can do Linux etc. as a host) rather than being a top-level, hardware-controlling OS.
It runs on Intel and AMD hardware (my box is an HT-technology Pentium 4) with supposedly no major differences; the key here is that Intel has new chips (coming out already or soon) that provide instructions to help mitigate context switching etc. on both x86 and Itanium. AMD has its own acceleration coming (or maybe here already) to do the same thing (think VM/370 hypervisor microcode if you will).
I just run Linux distros under WinXP-hosted VMware, but I could also host other WinTel OS's (e.g. Win2k) or have a Linux-hosted VMware running WinTel stuff with other Linuxes etc. – very powerful. Now note that we've not even addressed other VMs like the open-source Xen, but my choice is VMware for now as it's a real commercial product that will (hopefully) evolve in a useful fashion over time.

RTW: For what we are talking about here, are there minimum memory / CPU requirements as I have heard others struggle with them on older PC platforms?

Marty: You need a fair amount of grunt (CPU), disk space (for swap etc.) and real memory. My 3.2GHz Dell Pentium 4 has two 160GB hard drives and 1GB of memory, and I can only run two active VMs at a time; I'm told another GB of real memory will help that, but I haven't purchased the memory yet.

RTW: And just one point, are you involved in any way in the development of code for Tandem / NonStop these days?

Marty: That's all I work on at IR. IR has PROGNOSIS, which not only works on NonStop but on a bunch of other stuff these days; however, my job since 9/2006 has been all on NonStop (pTAL, C, C++).

RTW: Many thanks …

On my return to the US, I asked my own development VP here at GoldenGate, Sachin, and was surprised to see we were heading down the same path. According to Sachin, “in general, we are starting to give powerful desktops to our developers and they are running VMware so they can have private environments of different OS's and databases.”

As this was the second reference to VMware, I decided to send my good friend Al Williams, over at Penn State University, a quick email query; in previous meetings of the other user group I am involved in, he often talked about it. Now, mind you, he also brought a Mac to the board meetings, so I was a bit cautious. But Al provided the following response.

Al: VMware is a pretty big topic. We use the server products for server consolidation – pretty common for lots of places. The other interesting part is the VMware desktop products (Workstation, Player, Fusion and ACE). I don't know much about ACE – it is for deploying secure images in a managed environment. Player is a free download and will allow you to run VMware virtual machines created by the server products or Workstation in a Windows or Linux environment – kind of like Adobe Acrobat Reader lets you view PDFs, but not create them. Workstation is the product that runs on Linux or Windows and allows you to create and run Linux, Windows, Solaris and several other OS's (not Mac OS). Fusion is the version that runs on Mac OS X and lets you run all the other OS's except Mac OS.

So who would want this desktop stuff? Developers can run different OS's on their single development machine to test out a product they are working on to make sure it works across various releases and types of OS's. The Player allows you to run any of those environments.

RTW: Again thanks …

But now I began to want to dig deeper. Was Marty’s setup unique, or could we expect to see more deployments along similar lines? I decided to check in with Brad Poole of ACI, another former Tandem Cupertino guy (this time, remember DSM and the EMS component?) and now heavily engaged in field Proofs of Concept (PoCs) – did he have a similar setup?

Brad: It looks like Marty's development environment is focused on Unix (VI and VIM are text editors for Unix.... I think the macro capability of this is considered quite good though I've never used it). My world is primarily PC Windows based and I mostly use Eclipse for Java development and, of course, Visual Studio for C# development.

RTW: OK! thanks ...

From this, it appears the playing field has moved to Microsoft, and I wondered whether this was by choice, or whether a company like ACI was setting basic standards. So this time I fired off a quick email to Dave Finnie, a key architect and developer I had talked with a few days earlier in Sydney.

Dave: It is a bit of an individual thing. Most people still use VC++/Visual Studio. No-one uses Linux to develop on as far as I know. Although, I've got to say, I used to develop on Linux and I much preferred it. The whole Windows reliance is pretty much mandated by the gear that ACI has set up out of Omaha. About half of us use Eclipse to some degree. Several people use Eclipse for building but Visual Studio for editing. I use emacs for editing (and probably will until I hang up my programming gloves :-)). A number of us make use of a lot of the cygwin utilities to mimic the facilities that you would normally get on a Linux system. We do use the NSK cross-compilers a lot - in fact they are part of our build process for the real products that we ship. Do I see this changing - probably not, under ACI.

RTW: Fair enough …

But is this a unique ACI variation? When it comes to being a power developer working on NonStop, then the best source I knew of had to be Sam Ayres. Sam is well known to many of us with his Advocacy and SIG support and Sam is currently an ITUG Board member. So, again, another quick email exchange led to this.

RTW: Do you develop under Linux? Or something else – any VMWare, for instance?

Sam: While I’ve done development under Linux (Red Hat) and also under VMware, these days I work primarily under Eclipse running under Windows XP on a Lenovo laptop (ThinkPad). I’ve written Java client code (a Java payment SDK) that runs on all platforms; I tested it on Linux, HP-UX, Windows, and Solaris. While we do run Linux servers in the middle tier for various parts of our system, we use HP-UX for our Oracle databases.

Here is a big consideration for a developer: note that the Eclipse plugins for C++, COBOL and pTAL cross-compilation for NonStop are only supported under Windows, so I don’t think I’ll be leaving my Windows workstation soon. That being said, personally speaking, I seldom do anything other than Java these days, so I could work under Linux as easily as Windows. But then there is that issue of connection to the company’s e-mail systems, shared filesystems, remote access, etc. etc. etc.

Answer in a nutshell: due to company infrastructure I develop primarily on a Windows PC, developing under Eclipse, targeting NonStop. I don’t think that will change here; there is too much dependence on Windows infrastructure.

I would suggest for you, rather than running multiple boot environments, just buying a laptop/workstation with Linux pre-installed; I don’t believe you want to be experimenting with dual boots on the machine that you use for your day-to-day business…

RTW: Aha … I am getting the picture! Thanks …

I have covered a lot of ground here, and this blog entry is longer than I normally provide. But I really wanted to make sure I lost none of the content in the exchanges that occurred. Do we all need to upgrade our PCs and workstations and install software like VMware so we can kick off multiple OS's and databases? For many partner ISVs, yes, I can see that happening, and there are many benefits associated with doing so. For other users, I suspect a Windows environment running Eclipse with the plug-ins for NonStop is the way to go.

But throughout all of these exchanges, what really hit me today was how easily a broad cross-section of the NonStop community is now developing using exactly the same tools, utilities, and services used to support the development of applications for any other platform. The same can be said for all the other platforms as well – today, industry standards and open systems have simply leveled the playing field. There is no requirement for today’s developers to step outside their comfort zones as they switch between the platforms of our heterogeneous world. NonStop is as standard and as open as any other platform.

Just engage in a conversation with any of these Java or C/C++/C# Dudes, and you will come away with a whole new perspective on how well integrated the NonStop platform has become. Choice abounds and options are aplenty – and we really do have to ask, “why do any of us have issues today with recommending NonStop?” As delivering services in a real-time world becomes more imperative, surely the availability and scalability characteristics of NonStop servers remain the differentiators – so why do we often penalize the selection of NonStop when it is as standard and open as any other platform across the world of developers we rely on so strongly?

Thoughts? I look forward to your comment postings.

Monday, September 17, 2007

Fast Forward! Transformation and Morphing!

I have now returned to the US – after a short stopover in Boulder, I am back in my Simi Valley office, and the picture I have included here is of me just off Presidential Drive overlooking Simi Valley. It doesn’t matter how many times I make the trip to Sydney, I still come back whacked each time … there’s just no fun in those long-haul over-water trips.

But now that I am back at work, with access to my system, I have gone back and added a few comments to earlier postings. I won’t cover them all here, I promise! I have to admit though that I am pretty pleased with the level of response I have seen so far.

The blog postings “Is 30 Minutes too Long?”, “Got Security?”, and “Whereto CTO?” all generated comments that I felt were worth revisiting. A steady trickle of postings is beginning to appear. Clearly, this isn’t the only dialogue taking place across the ITUG Community – a number of online discussion forums are supported by ITUG and can be easily accessed from the ITUG web site.

There are newsletters as well, and Bill Highleyman’s Availability Digest is a particularly good source to learn how users are deploying highly available solutions. Bill has told me that “the Digest really is not a newsletter. It does not talk about current events. It is a Digest in the purest form of the word, summarizing various topics in high availability.”

What’s a little different about this blog is that it is not so much logging questions as they arise but rather, attempting to anticipate the questions that are likely to arise soon; covering topics and trends I pick up on as I talk to clients and partners – and to HP itself. So far, I have discussed the user group itself and where it could be headed, looked at virtualization and Linux, thrown in some thoughts on availability and even given consideration to the roles of architects and technologists. For the next few weeks I will continue to introduce different subjects and see which of them generate traction with the community - I am looking at the comments posted as one way to determine interest levels, and to validate my belief that the issues are about to arise.

What users of HP NonStop servers should be glad to see is that a bit of an ecosystem is developing around the platform – and more information is becoming available. Whether you pick up information from this blog, from a discussion forum on the ITUG web site, or from newsletters or digests such as the one Bill is producing – the good news is that people are now talking about NonStop in public forums. In the past these exchanges went on, but they were usually captured within emails, so most of us never had the opportunity to read about, and to leverage, other users’ experiences – an incredibly important element of growth.

I don’t think there are any of us who do not want to see growth in NonStop, or see more users deploy HP NonStop servers. Every time we hear of a new application on NonStop we get excited, and every time we hear of a new customer – someone trying NonStop for the first time – we are pretty impressed. Throughout the corporate world, consolidation is rampant, and when discussions focus on consolidating many servers back onto a single platform, the questions always come back to “is the platform reliable? Is it highly available?” and “can the platform scale?” – and still, there’s no platform the equal of NonStop on both counts.

Now, talking about Boulder and Sydney, have you read the recent announcements from IBM? Most of us have been made aware of the mission of HP’s CIO, Randy Mott, to collapse thousands of HP internal servers back to a few large complexes based on HP NonStop, and to deploy them across six data centers – some of which are still under construction – but did you pick up on IBM adopting pretty much the same plan?

In case you missed it, check out:

ARMONK, NY - 01 Aug 2007:

IBM data centers in Poughkeepsie, New York; Southbury, Connecticut; Boulder, Colorado; Portsmouth, UK; Osaka, Japan; and Sydney, Australia, will participate in the initiative. IBM has established world-class teams to migrate, test and deploy the applications, which include: WebSphere® process, portal and application servers; SAP applications; and DB2®.

And for even more details, check out this announcement from Thursday, August 09, 2007:

In the most significant transformation of its worldwide data centers in a generation, IBM has announced that it will consolidate thousands of computer servers onto about 30 System z mainframes running the Linux operating system. "The mainframe is the single most powerful instrument to drive better economics and energy conservation at the data center today," said James Stallings, general manager, IBM System z mainframe. "By moving globally onto the mainframe platform, IBM is creating a technology platform that saves energy while positioning our IT assets for flexibility and growth."

When you take a good look at the configurations, each of these System z mainframes will be a hybrid of sorts – with z/OS and DB2 at the center, and the application code (mostly Java) running within application servers on Linux. Sound familiar? Among the locations will be Boulder, Colorado as well as Sydney, Australia. Go figure … and I had nothing to do with it!

The point here is that the HP NonStop server was not only the logical choice to anchor HP’s consolidation, but HP’s main rival also picked its own largest system in pretty much a parallel move. It’s kind of reassuring, at the end of the day, to see both HP and IBM electing to go with the systems most of their customers have been depending on for key transactional and database applications for several decades.

Over time, you will see both of these mainframes, or “mainframe class of servers” morph to where they will be very similar in many ways – the main difference will always be that the HP servers have NonStop highly integrated for better availability and scalability. HP will leverage industry-standard chips – and as long as IBM continues to own its own chip technology, IBM will depend on higher-cost proprietary technology. I kind of like where this is headed – I have worked with both systems over the years – and they are both suited to the consolidation tasks awaiting them.

I think most of us understand that we live with a many-tiered technology deployment model: Microsoft on the desktop, some Microsoft and even Linux on the web servers, mostly Unix on the application servers, and significant populations of NonStop and z/OS at the core. So what we are seeing today from HP and IBM plays into this many-tiered model and potentially reflects where many of us are headed. Perhaps more so as blades become better defined.

With the advent of newsletters, digests, discussion forums, and blogs – an information ecosystem focused on providing independent news to the NonStop community – it will become much harder to ignore the success HP is having with NonStop. And so the questions need to be asked: are we tapping into these services? Are we forwarding information to our management? Or do we think that the moves being made by the two biggest vendors don’t really apply to our situation?

Thursday, September 13, 2007

A Taste for Virtualization

I am still in Sydney but getting close to leaving – and what a change this second week has seen. The high-powered leaders of many nations have now left, and the city is beginning to look a lot better. The security fences are coming down and access to the city is returning to normal.

Following a couple of positive comments on my use of a picture last week to show the extremes of Sydney’s security measures, I couldn’t help myself when I saw a city bus go by with this plastered across its side – I just couldn’t come up with a better word to describe the transformation of Sydney as it was locked down.

I have caught up with a couple of old friends from my time at Nixdorf back in the early ‘80s: David Miller, Kim Brebach, Rolf Jester, Kevin McCormack – all folks I worked closely with for many years, at a time when Nixdorf made a very serious push into the IBM Plug Compatible Mainframe marketplace with the 8890 product family.

But it wasn’t the past we spent time on – it was the future. And center stage was the future of virtualization and operating systems – in particular, Linux. Kim Brebach has done a pretty solid job of developing a multi-part feature on his efforts to load a Linux distribution onto his desktop – a hilarious tale about the road he went down, told pretty much in a medieval style. For those interested, I will forward a link. But after the missteps and stumbling, Kim is a real Linux-on-the-desktop supporter.

I also had a chance to kick around some ideas with Rolf Jester, a Vice President and Distinguished Analyst, ITU Services Market Strategies, for Gartner. Rolf is looking way out beyond what we see today and thinks much of our future IT growth will come from the CME – Communications, Media, Entertainment – market segment, as traditional IT collapses down to just a handful of global servers. For sure, the transaction loads on these servers will be enormous, but they won’t be exciting to work with at all. It’s all going to happen over in CME-land, where new data types and formats will drive innovation. And again, the conversation drifted towards Linux – and the future potential of Linux in this space.

And so it’s been this week – Linux and, almost in lock-step, virtualization. The two seem to be getting as much media time as each other, and this, I believe, is because the underlying metalwork – the chip sets and boards – is heading to where there will be no limit to the number of CPUs, or processing engines, occupying a single socket on a processing board. To take advantage of this new packaging – essentially an endorsement of the more traditional Symmetric Multi-Processing (SMP), or Shared-memory Multi-Processor as I have also seen it defined, model – some level of abstraction between the OS and the metal has advantages for vendors where any number of operating systems may need to share a basic board package.

Hence the discussion about virtualization and the talk about hypervisors – the software that supports a Virtual Machine (VM) environment in which any number of OS’s can be deployed. Today there are a number of different views on how this is done, with some vendors now distinguishing “hosted vs. operating-system-level vs. native” VMs. OS-level virtualization has become visible on servers (Unix, Windows, etc.) and looks a lot like partitioning, where a server is split into a number of smaller “machines” but there is only ever one OS kernel. IBM’s z/VM is one example of a native VM, as are VMware and Xen, while the Java Virtual Machine (JVM) and the Common Language Runtime (CLR) for C#, .NET, etc. are examples of hosted, process-level VMs. In a conversation many years back with a financial services company’s architect designing new applications for NonStop, he suggested that perhaps we should consider Pathway an example of OS-level virtualization – and I suspect he may be right.

In the research I was doing for this piece, I ran across a pretty good description on Wikipedia: “system virtual machines (sometimes called hardware virtual machines) allow the multiplexing of the underlying physical machine between different virtual machines, each running its own operating system. The software layer providing the virtualization is called a virtual machine monitor or hypervisor. A hypervisor can run on bare hardware (Type 1 or native VM) or on top of an operating system (Type 2 or hosted VM).”

I just read in InformationWeek (August 13, 2007) where Ann Livermore, talking about virtualization, said “Innovation has just begun. This is an area where you’ll see whole sets of services and services companies”! For those of us working with NonStop and familiar with the principles of the NonStop OS Kernel, the NSK, some of this may sound alien. But let’s pause for a minute here and look at what’s going on inside HP.

For starters, we know that Martin Fink heads up BCS and has managed the Open Source / Linux Organization (OSLO) in Ft. Collins, CO. We know Hal Massey, from NonStop, now works directly for Martin and is hard at work on the next round of hardware to be shared across a number of OS’s – and that HP is firmly committed to blades. We have also seen Linux as part of Neoview, supporting load capabilities, and we have heard from a number of sources that the new NonStop comms controllers will be Linux. Not all of these observations are connected and some usage is temporary – but I think the dots and the lines are all there, and it’s pretty easy to connect them.

Apart from meeting my old colleagues from Nixdorf, I had the opportunity to drop in on Marty Turner – now with Integrated Research here in Sydney, but formerly with Tandem and one of the team that brought us SNAX/CDF and, later, NonStop NET/MASTER. Marty is old-school development – it’s all about cutting code and solving clients’ problems. It doesn’t matter to Marty – pTAL, C, C++ … whatever is needed. Marty is a classic power user, and as I looked around his home office it was littered with books and servers. He runs Linux on his desktop – multiple Linux distributions on top of VMware’s product offering. Marty absolutely sees the need today to be very aware of getting the most out of the new hardware packages with the latest chip sets.

And so it is – as we push ahead in the world of NonStop, I will not be surprised by what we see happening under the covers. As I connect the dots – and I am the first to admit that a lot of this is speculation on my part – and as I put it into context based on what I see my peers doing, I am both excited by the potential here and extremely curious about how HP will accommodate it within NSK. Do we see signs of what I am talking about here – or am I alone in my observation? Am I connecting the dots all wrong? I don’t think so – I really do see fundamental changes occurring in the direction the NSK OS is headed.

Will this be a big step forward for NonStop? Will it even look like the NonStop product we are all so familiar with – or will the NSK OS morph and become some derivative of the traditional NonStop messaging system (a la ServerWare, which we saw all too briefly back in the mid ‘90s) layered on top of Linux, on top of a highly tailored HP-specific hypervisor? In future blog postings I will take a closer look at virtualization and at its potential benefits, as well as possible pitfalls.

Will we be running native NonStop in the future at all? Or will we be dealing with a new and fresh flavor of NonStop?

Wednesday, September 12, 2007

Glitches, the norm?

Still in Sydney – but the headlines of the past few weeks have been bothering me somewhat. Have you seen them all? It looks like computer glitches are hitting us hard! I have included a picture of Sydney in case you aren't sure what it looks like!

In my August 27th blog posting – Is 30 mins too long? – I remarked that “I have little patience for any retailer or financial institution that skimps on their infrastructure”. But now I am seeing whole sectors of the community being affected. I have to start wondering – are we becoming desensitized to all of this?

What caught my attention was a headline here last week – well, actually, a small article in one of the financial papers – “Glitch shuts out Westpac online customers”! It turned out that about 30 percent of the bank’s 400,000 internet banking customers could not access the (online banking) service. The paper I was reading went on to add that, according to Westpac, “it appears to be related to an internal systems error which we’re still trying to isolate”, and that the bank wasn’t sure whether this was related to a recent website revamp.

Now, in isolation, this would have just been something I read and had a brief chuckle about. But unfortunately, only moments earlier I had read on my blackberry about Barclays having a big problem in the UK that forced them to borrow 1.6 billion pounds. The headline read “Barclays blames technical glitch for 1.6 Bn Pound emergency loan”! The link between its electronic settlement system and the CREST settlement house broke down on Wednesday … for an hour!

Going back to my August 27th blog posting, you may recall that I mentioned, in passing, that Wells Fargo had suffered a serious outage on the West Coast that affected not only ATMs but a major portion of the branch banking business as well. I just went back and googled the Wells Fargo outage, and the first link I was directed to carried the heading “Wells Fargo ATM, other glitches last longer than first reported”. The report also put the timing in perspective when it added “Wells’ computer glitch came at a poor time for nervous banking customers, considering the recent turmoil in the mortgage and stock markets.”

I began to look at this after I met with a former colleague of mine, Dieter Monch. Dieter was the Australian Managing Director of Nixdorf Computer when I worked for Nixdorf back in the early ‘80s. Dieter is an investor in, and now manages, the company that sells red-light and speeding cameras around Australia. He recently attended a state government presentation that asked potential vendors to look into providing a camera network that wouldn’t fail – borrowing words from NASA, failure was not an option. Dieter simply, and I have to believe politely, asked: how much are you prepared to pay?

Now, I am not all that sympathetic to the loss of a speeding camera – or the revenue opportunity missed. I don’t think many of us are, and few of us look positively on this form of revenue generation. But looking at it from a different perspective – if these were cameras tracking vital security operations, and they went down at the time a key illegal or terrorist activity was being executed – then I can see a time in the future when even these types of networks just have to remain operational at all costs.

So, glitches and their implied outages, as well as the implications of lost revenue, are beginning to show up across all industries and markets. We seem to understand the problem and to be taking the issue pretty seriously. But with the news coverage I have seen over the past couple of days, I am not sure how seriously we are taking the fall-out from today’s glitches. Surely the loss of credibility in a marketplace of 400,000, as was the case in Australia – or millions, I would have to believe, in the US – as well as the real cost in terms of interest on the short-term borrowing of 1.6 Bn pounds, is pretty serious. Again, have we become desensitized to the issue of computer glitches? Has the term become an easy way out – a catch-all phrase to cover up any infrastructure stuff-up we may make?

Do we aggressively promote the value of applications and databases that survive single (and now, multiple) points of failure? Do we explain how all this works and the value we can provide? Or do we simply leave it to others – the comms guys, the web server guys – to explain why an element of the infrastructure failed?

Do we still believe that some subset of these applications is so fundamentally important to us that we view them as "mission critical applications", and are we prioritizing and routing these "mission critical transactions" to a platform that is orders of magnitude more reliable than the other servers we may have deployed?

While we, as users of NonStop, have come a long way in removing many sources of outages – how strong a voice do we have in other areas of infrastructure? And are we still strongly advocating NonStop in support of mission critical applications, or have we elected to just sit back and watch as less reliable platforms siphon off these transactions? In other words, have glitches become the norm, and have we reached a time where it’s OK to simply explain away a service interruption as the dreaded glitch?

Saturday, September 8, 2007

Whereto, CTO?

In Sydney, I have had the opportunity to catch up with a couple of former Insession colleagues. It’s been a year now since I left Insession / ACI and went over to GoldenGate, but I still enjoy a discussion with the lads.

Peter Shell, who used to manage the Sydney development team, has now moved on to a new job as the boss of IT across AsiaPac. This is bringing him face to face with many of the issues IT execs face today – dealing with an ever-demanding user community while navigating the vagaries of regulation and compliance. I often lose sight of the fact that ISV development shops share many of the same issues as the corporate world, where any outage at all can put a project at risk as valuable time is lost.

Peter is known to many of us as the former leader of OzTUG and spent many years associated with an event I always looked forward to – in a few weeks’ time there will be another OzTUG event, which will take on a different look as it aligns with other HP activities within the region. It will be interesting to see how the new format fares.

I also met with Neil Coleman, the chief architect of Insession responsible for bringing the ICE product to market. Neil was very instrumental in the development of the follow-on products WebGate and SafeTGate – now very much integrated into ACI’s payments products. Neil is now developing a technology group and mentoring others to take on more responsibility.

Neil was particularly enthusiastic to relay to me how the original ICE product – a NonStop implementation of core IBM communications protocols and services – is now being groomed as a set of key infrastructure components in support of ACI’s cross-platform initiative. Already, key communications modules are running on a number of Unix platforms including HP-UX and, shortly, on IBM’s zSeries mainframes. The irony here doesn’t escape me at all – as Star Wars’ Darth Vader would say, “the circle is now complete; the student is now the master!” Neil and his crew have taken a sliver of IBM’s mainframe SNA code that they first developed for NonStop, added support for HTTP, XML, and SOAP, thrown in a healthy amount of security, and now have it implemented back on the IBM mainframe, superseding any need to run IBM middleware – even for comms!

In talking to Neil I was reminded that the role of CTO is evolving: just recently, I was asked by a leading IT manager – who really does understand that he needs a technical leader and a group responsible for architecture – “how does one motivate such a team, and how do you convince your CEO?” Surely, he went on to add, “you know where such a document exists, or maybe you yourself have something like this?”

At first, I had to believe there was a wealth of material out there in Gartner white papers and in popular magazines like CIO Magazine. But seriously, do any of us have the time to wade through these publications – and shouldn’t we just know all of this instinctively?

As I began to look into this in a bit more detail, I began to recognize that the right answer is - maybe! It depends! It varies across different groups! All pretty wimpy answers on the surface – so I went back to basics and looked at what I have been involved in over the years.

Just before leaving Insession and joining GoldenGate, I spent about a year at ACI’s HQ in Omaha working in the CTO office with a minor role looking at open source usage across the organization. Within ACI there was an architecture group as well as a Technology Steering Group. The Steering Group included some stellar technicians. But what I came away with is that, even in large organizations, it’s not easy to have an architecture group too far removed from the daily development activity. Documenting recommendations and defining models and templates is all well and good, but in some situations the technology moves so quickly that there’s always the risk that for all the good work that is put into an architecture option, the industry has moved on and there’s just a better way to do things.

The Steering Group was expected to “keep a finger on the pulse”, but the distinction between its role and that of the CTO architecture group was blurry.

Within GoldenGate we do not have a CTO per se – but rather a VP of Technology who stays very closely connected to any new “area of interest” prototype deployment. In other words, as we push into a new area or try a new technology, and as we find a customer or partner interested in trying it, his team becomes intimately involved. Supporting the VP of Technology is a group of specialist architects experienced with different platforms and operating systems. To date, I have seen this model working well.

But back to the question – do we need a CTO today? Do we need to have an architecture team? From the above experiences I have had across a number of ISVs – the answer is yes, but with a caveat. Don’t put too much distance between such a group and the folks charged with getting their hands on the code. As you descend into smaller ISVs, the role is not necessarily a full time job – often the founder performs this function.

Within the user world, someone has to take responsibility for setting directions, for putting together roadmaps. There is a definite need to have someone staying just one step ahead of the offerings from key vendors and partners.

Knowing when to skip a generation, and when to switch from one partner to another, can really make a difference. Making the right decision on hardware platforms, on operating systems, on databases, and on systems and network management is crucial to building and then supporting a stable and reliable environment. Layering a security framework across all this and keeping it current and free of unprotected entrances, and then making sure there’s a sound business model behind a dual-site strategy that accommodates planned as well as unplanned, or disaster, outages – all of this needs a wealth of experience and knowledge, and a serious commitment of time and money to stay current.

You know, in the end, I feel far more comfortable when a group or team is involved with this responsibility. Too often I have seen individuals steering companies down the wrong path – perhaps not initially, but over time, one decision at a time. I saw a good team approach being developed at Insession and I guess this has now biased me somewhat. And the picture I have included here, at the top of the posting, is of the former Tandem DSM team that worked on the NonStop NET/MASTER program in the early '90s.

Getting this all right is hard, and getting a good team together who really do understand the “balance” between costs and benefits is equally challenging. But the downside here is beginning to become burdensome – with legislation now on the books in many countries that includes penalties for leaving personal data unprotected, and where unavailability of a service precludes timely financial reporting.

For those companies that generate revenues from their IT infrastructure – services bureaus and application services providers, card processing and switching companies, etc. – this function can be vital in keeping the company competitive. Whether it is an individual or a team will depend upon the size of the organization, as again, one size doesn’t fit all!

How do you motivate? Like everything today, this has as much to do with the engagement of the group or individual as anything else. By this I mean the people in this field have to be a part of the action and be visibly recognized with any success – this is not an ivory-tower position any more, but one where communication and people skills are attributes just as highly prized as technical prowess. Folks who just can’t communicate and motivate are not suited for these positions in the future. Motivation comes from a series of little wins, of small battles won well, and it’s up to the senior execs of these companies to encourage them with early goals that can be achieved. I don’t know of anyone in the user world pursuing the grand deployment anymore.

Talking to Peter and Neil brought it all back to me. There’s no substitute for professional management – folks who have had the opportunity to develop the skills that are so much in demand these days. I am seeing the role of the CTO becoming less executive and more hands-on, operational. Indeed, I am seeing user companies assigning the role to folks that are really tuned into the ever-changing technology landscape and who themselves understand the investment in time this all takes. And in the months ahead, I will be looking at many of the senior roles present in our organizations and at the value they provide.

So, will we see the role of the CTO within the user community simply fade away being replaced by a Technology Leader, whose team is directly involved in progressing technical developments?

Tuesday, September 4, 2007

Got Security?

After a couple of good days in Boulder, I am now down in Sydney – and what a change! Not just in terms of the weather: Sydney is coming out of winter, and the first days of spring are pushing up the thermometer, whereas Boulder continues to bake in some of the hottest weather it has experienced all year.

The real shock has come in arriving in "Fortress Sydney"!
For the week we are here, the APEC conference is on – a meeting involving the heads of state of the US, Russia, China, Japan and many other countries that ring the Pacific. Concerned over the potential for demonstrations and even terrorist actions, authorities have surrounded the CBD with concrete barriers and 10-foot-high fences.

I have included a picture (above) here to give you a feeling for what it’s like – the perimeter goes for miles as it fences in most of the North Eastern corner of the city including the Opera House, the residence of the Governor, and most of the major hotels.

Gates have been set up to control access into the restricted zone. Police are highly visible, checking all credentials, and there are helicopters in the air at all times. We have even heard an F18 fighter make a couple of passes over the city. The Navy and Water Police are also highly visible.

I have never seen anything like it and until I paid the city a visit to take a look, I hadn’t thought it was possible to lock down a city the size of Sydney – but they seem to have pulled it off. They have sealed it all off and limited access to a select few entrances through highly controlled gateways.

And this really made me think about security and what we are doing today to secure our systems. Back in the ‘70s Honeywell went so far as to provide a system with no comms or networking capabilities whatsoever, and went on to promote it as completely secure! But that approach would be of no use these days, in the highly connected business world we all have to deal with. Essentially, we have to live with systems that have many entrance gates and with access allowed from pretty much anywhere in the world.

Corporations I talk to often change gears on me and talk about meeting regulatory guidelines and requirements. But is compliance really what security is all about? And is it only an issue for larger corporations? Doesn’t security really involve every organization in our technology ecosystem – from the large vendors all the way down to the smaller specialist tools and monitoring companies we deal with day to day? In erecting our own fences and opening only a few gates, do we really believe we can secure all the applications and databases we are responsible for?

So, is compliance a well-defined path toward security? Is it because we don’t know what would make systems secure that we happily comply, thinking it relieves us of the responsibility of defining security ourselves?

Or do we comply grudgingly, knowing well enough that following the rules defined by regulations doesn’t change a thing?

I have to admit, seeing the lengths Sydney has gone to in order to demonstrate how seriously it takes security only reinforced in my mind how open to all sorts of threats we really are. There has always been the saying “I don’t really know what I want – but I will recognize it when I see it”. Today I have seen a city locked down and secured – and I am pretty sure that’s what I would want if it were my responsibility. But have any of us really seen a system so secure that we immediately recognized it as exactly what we need for ourselves? I am not sure I have seen such a system yet!

How do you feel about security and the function of compliance?

Saturday, September 1, 2007

BLOG Launch ...

This week, ITUG will begin active promotion of this blog and I will be very interested to see how many in the community begin to read my entries and the comments these generate. I am keeping my fingers crossed here and am hopeful that a good number of you will turn to the blog from time to time.

In my upcoming ITUG Connection columns I will be talking about the NS1000 and asking questions about its role in a distributed environment - will the NS1000 generate new opportunities to offload other platforms, and not just existing NonStop servers? Would it be good business to offload other Unix applications, and perhaps even IBM mainframe applications, onto this family of NonStop servers? Could you take advantage of NS SQL on the NS1000 to create small EDW systems - ones where much less than tens of terabytes are needed?

I elected to address these possibilities for the NS1000 as I see it as a transitional architecture - it takes advantage of a building block that costs so much less to develop and manufacture than the previous "packages" used - and while not yet a true blade package, for me it is clearly a step down the path to blades. So, how much attention are you giving this additional NonStop server line?

I am also continuing to give presentations at various ITUG events on the contents of a number of my earlier columns - but as I do, I am updating them with new material from these just-produced columns. To date I have given the presentation at SATUG, and at the recent HPTF&E. Shortly I will be giving it at the Euro ITUG event in Brighton, and I am hopeful that I will get other opportunities later this year.

I refer to the upcoming columns, and events, as much of what I will be covering in this blog will be tied into them - this blog will be a place where I develop a number of threads that I believe are important to the community and where your feedback is something I value. From my earliest days on the ITUG Board I always felt it was important to pursue different models that opened up a two-way dialogue. But now the technology is at a point where I am comfortable that such a dialogue can be developed pretty easily.

So, with the promotion efforts of ITUG beginning this week I will be keenly watching the outcome - if you have seen something I have written in the blog entries posted to date that interests you - please add your comments. The content, in the end, will only be as exciting and as meaningful as you dictate. Turn to Connection and read Real Time View! And, if you are attending the Euro ITUG event, check out my presentation and drop by to let me know what you think of this blog.
