Monday, April 30, 2007

Sign of the Zodiac

I mentioned in last week’s blog that I’d been to Mainz in Germany with IBM. The focus of the meeting was on SMB customers rather than mainframe users, although I would guess plenty of mainframe sites have a host of other boxes around the place.

One thing that surprised me was the number of horror stories they could quote of sites that had x86 servers scattered around the company, but weren’t sure quite how many there were or what the boxes they knew about actually did – ie what applications were running on them.

Before these sites could even think about virtualizing, they needed to discover what they actually had installed – some way of finding out what boxes they owned and what applications were running on them. And they needed to do this without having to install an agent on each box, because, if you don’t know what boxes you’ve got, you can’t put an agent on them!

This is where a very clever piece of software called Zodiac comes in. This complex software can link into other software where necessary and help a company build up an accurate picture of what’s going on where on its servers. The software sits on the network and picks up message traffic – eg it will spot a query going to a database, and the response coming back from that database.
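To make the idea concrete, here is a minimal sketch of that kind of passive, agentless discovery. To be clear, this is not Zodiac’s code or API – the port-to-service table and everything else here are my own illustrative assumptions – it simply records which hosts answer on well-known service ports.

```python
# A toy version of agentless discovery by watching network traffic.
# Requires scapy (pip install scapy) and the privileges to sniff the network.
from collections import defaultdict
from scapy.all import sniff, IP, TCP

# A few well-known ports mapped to a service guess (illustrative only).
KNOWN_PORTS = {1433: "SQL Server", 3306: "MySQL", 50000: "DB2", 80: "HTTP"}

seen = defaultdict(set)  # server IP -> set of (service, client IP) observed

def note(pkt):
    """Record any packet heading for a port we recognize."""
    if IP in pkt and TCP in pkt:
        service = KNOWN_PORTS.get(pkt[TCP].dport)
        if service:
            seen[pkt[IP].dst].add((service, pkt[IP].src))

sniff(prn=note, filter="tcp", store=0, timeout=60)  # watch for a minute

for server, clients in seen.items():
    for service, client in sorted(clients):
        print(f"{server} appears to run {service} (queried by {client})")
```

Run for long enough, an approach like this builds an inventory of servers and the applications they host without touching any of the boxes themselves.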

Once a site knows where it is at the moment (in terms of hardware and software), it becomes possible to plan for a more idealized working environment and how to get there from here – because currently there seem to be a lot of sites that don’t know where “here” actually is! Obviously, a business case needs to be built, and Zodiac works with Cobra, a component that can help build a business case.

This is perhaps harder for many sites than it might at first appear because, as well as consolidating and reorganizing the hardware and software at a site (sites will be looking to virtualize their servers in order to use fewer of them and reduce overall costs), it also involves reorganizing people. There is a strong likelihood that, after the reorganization, the jobs needed to run the data centre will be different from those needed before it. Some new skills will be required and some old ones may not be needed, or two or more jobs might be consolidated because the amount of work to be done is reduced. These HR effects are important, and will need dealing with by companies making the change.

Steve Weeks, who heads the Zodiac project at IBM, said that they were now visiting sites that had used Zodiac for the initial inventory report and to create the business case for the migration, and that were now ready to move forward and wanted to use Zodiac’s facilities a second time.

Zodiac doesn’t depend on IBM boxes being on site; it is completely vendor neutral and will identify whatever it finds, from whichever manufacturer. It seemed like a very useful product, and one that I was completely unaware of before this meeting.

Monday, April 23, 2007

Big Blue goes green?

I was recently with IBM in Mainz discussing data centre challenges for the 21st century. Interestingly, one of the issues under discussion was about having a green data centre.

Now, environmental friendliness is very much on every politician’s agenda, with everyone trying to outdo the opposing candidate on how green they are – in terms of recycling waste, cutting energy use, and creating fewer carbon emissions by avoiding cars and planes whenever possible. And if people are recycling at home, turning down the thermostat on the heating, and cycling to work, it makes sense for them to look at being green in the work environment too.

Now this is where the problems start. Hands up anyone who can define what is meant by a green data centre. We all know what we think it means, but it is quite hard to come up with a definition that is worth including in a dictionary. And in many ways, it is impossible to have a truly green data centre, because of the amount of energy needed to create the processors and data storage devices in the first place, the amount of energy necessary to run them so that we’re getting decent processing speeds, and the energy required to do something environmentally friendly with the hardware when it is past its best and being shipped out.

At the meeting in Mainz, IBM was suggesting ways that the data centre could become greener – by which they meant more energy efficient. They were specifically talking about blade servers rather than mainframes, but I guess most sites have a mixture of technologies and this will apply to them.

IBM made a statement that I found quite startling, but everyone else in the room nodded sagely, so I guess it’s true. I suppose it’s my mainframe background that made the idea seem so strange – you’ll probably be saying, “of course, everyone knows that”. They suggested that the average utilization of an x86 server was around the 20% mark and that people were likely to go out and buy another server when they hit 25% utilization. This shopping expedition wasn’t necessarily caused by the increased server utilization; it was just the sort of pattern that they had observed. That means these companies would end up with rooms full of servers running at around a quarter utilization.

The first way to become greener is obviously to get rid of half the servers and double the utilization figures. But how can you do that? Well, IBM is very keen on virtualization (and why wouldn’t they be, having been using VM for forty years?). Obviously, virtualization does use slightly more power on a single server than not virtualizing, but significantly less than running two servers. A back-of-an-envelope calculation of the potential saving is sketched below.
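Here is that arithmetic as a runnable sketch. The server count, power draw, and target utilization are invented figures of my own; IBM quoted only the 20–25% utilization pattern.

```python
import math

# Back-of-an-envelope consolidation arithmetic. All figures below are
# assumptions for illustration; only the ~20% average utilization was quoted.
servers = 100            # physical x86 boxes before consolidation
utilization = 0.20       # average utilization per box
watts_per_server = 400   # assumed draw per box (simplification: flat draw)

work = servers * utilization          # "servers' worth" of real work: 20
target = 0.60                         # utilization aimed for after virtualizing
hosts = math.ceil(work / target)      # virtualized hosts needed: 34

print(f"work to host:  {work:.0f} server-equivalents")
print(f"hosts needed:  {hosts}")
print(f"power before:  {servers * watts_per_server / 1000:.0f} kW")
print(f"power after:   {hosts * watts_per_server / 1000:.1f} kW")
```

Even allowing for the virtualized hosts running hotter and drawing somewhat more power each, the saving from switching off two thirds of the boxes dwarfs the overhead.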

Their other greening strategy was in the way the blades are cooled. Apparently, air conditioning warm air works better than air conditioning cooler air! They told horror stories of servers that were drawing lots of power and running hotter, so the fans would spin faster to cool them down, which meant the fans were drawing more power (and creating more heat, which meant…). Their solution simply involved keeping the hot and cold air separate, which meant the air conditioning worked more efficiently and less energy was used.

They also had ways of water-cooling the doors of blade servers to keep the temperature down, and some of the components were now more energy efficient, which meant they were greener – although this was a consequence of the desire for efficiency rather than of anyone specifically following a green agenda.

So, to answer my question in the title of this blog, Big Blue is moving towards greenness. It’s doing so because it makes sense: energy efficiency means customers can save money – always a strong selling point. And also because customers are asking for greener solutions at affordable prices, which IBM is able to provide. However, I doubt we’ll be seeing a data centre that an environmentalist would consider to be green for a long time yet.

Friday, April 13, 2007

CICS V3.2 – do I need it?

IBM has been excitedly telling everyone recently about the latest release of CICS. But the real question is whether sites should be looking to upgrade from 3.1 to 3.2. Is there really any point?

IBM reckons that the upgrade rates to CICS 3.1 were the fastest it had ever experienced, and there was probably a good reason for that – SOA. Service-Oriented Architecture support arrived as SOAP support in earlier releases, but it was V3.1 that provided the first full production implementation. With so much pressure on sites to save money and provide better business value, you can see that a migration to V3.1 was going to be on the agenda for everyone in order to maximize the benefits that SOA can offer a company. And I would list those benefits here, but you see them in every PowerPoint presentation you sit through these days, so I won’t bother. You know what they are!

So having migrated to V3.1 for all these benefits, do I need to take the next step to the newly-announced V3.2? The answer from IBM is obviously yes, so let’s see what 3.2 has to offer.

V3.1 of CICS in a Web services environment is a heavy user of resources, and IBM has tried to do better by optimising the HTTP client and server in the new version. There’s also better management, including a way to trace the progress of an end-to-end transaction. This makes use of WSRR – an acronym you’re going to become more familiar with over the next few months. The WebSphere Service Registry and Repository is a single location for all the available Web services.

Some people found the old message size limit restrictive – a transaction couldn’t handle all the data they wanted to send. IBM now supports MTOM (Message Transmission Optimization Mechanism), which overcomes the problem by sending binary data as raw MIME attachments rather than encoding it inside the message itself. V3.2 has increased transaction granularity and, by exploiting 64-bit architecture, it can handle larger payloads. CICS 3.2 has also seen improvements to the user interface, making it easier to install and define regions, and problem location has been enhanced.
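To see why MTOM matters, here’s a quick standalone sketch (not CICS code, and the 1MB payload is a made-up figure) of the overhead it avoids: without it, binary payloads travel base64-encoded inside the SOAP body, which inflates them by about a third.

```python
import base64

# Compare the raw size of a binary payload (what MTOM puts on the wire as a
# MIME part) with its base64-encoded size (what inline SOAP encoding sends).
payload = bytes(1_000_000)           # 1 MB of binary data (illustrative)
encoded = base64.b64encode(payload)  # the pre-MTOM, inline representation

print(f"raw (MTOM) size:    {len(payload):>9,} bytes")
print(f"base64 inline size: {len(encoded):>9,} bytes")
print(f"inflation avoided:  {len(encoded) / len(payload) - 1:.0%}")
```

One third more bytes to build, ship, and parse on every message soon adds up in a busy transaction environment.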

So do you need CICS Version 3.2? With the important improvements to SOA and Web services support, and the other enhancements including problem identification, I think the answer is a resounding yes.

Monday, April 09, 2007

SOA – Same Old Architecture

Last week I blogged about a session at a legacy application modernization seminar I attended. This week I’d like to tell you about another presentation I saw later that same day. This second one was by Gary Barnett, Research Director at Ovum Consulting.

His approach was less one of telling us what to do and more one of raising our consciousness to stop us making the same mistakes that other people have made in the past. He is responsible for defining SOA as Same Old Architecture – which, although intended as a joke, made the point that this isn’t all new. He reminded us that Web services weren’t the first type of services we’d come across; we’d looked at work in terms of services before, with things like CORBA services and Tuxedo services (from BEA).

Gary also confidently predicted that 80% of SOA projects would fail. He based this prediction partly on the fact that they relied on ASCII and XML, and partly on the observation that 80% was probably the proportion of IT projects that failed anyway.

He also had some important thoughts on re-use. He suggested that it wasn’t enough simply to have a nice interface; if re-use was to occur, it had to have been planned in from the design phase. There is no way to retro-fit re-use! He also insisted that “best practice” only worked when it really was practised!

Gary likened many IT projects to building a bridge. IT people know how to build metaphorical bridges, so when someone says “let’s have a bridge”, the IT people start building. The reason so many projects fail is that it is not until they are halfway across the river that anyone from IT stops to ask, “just how wide is this river?” or, “do you really want the bridge here?”.

Gary said that most presentations show large coloured squares joined by thin lines and warned that the reason the lines were so thin was that people didn’t want anyone to notice them and ask questions. However, he stressed, it is often the links between applications or services that are the most difficult to modernize.

On a serious note, Gary insisted that the focus for change should be on business processes. He said that in any successful company there would be no such thing as a legacy system modernization project, there would only ever be business modernization projects.

Definitely a “make you think” session, and well worth seeing for anyone contemplating modernization (ie all of us!).

Monday, April 02, 2007

Legacy application modernization

Last Monday I was lucky enough to attend a one-day seminar near Heathrow in London, organized by Arcati. It had a number of speakers, and gave a very interesting positioning of where many companies are today and where they’d like to be – and the all-important guidelines describing how to get there.

It highlighted two very important points – that getting there is going to take time and effort; and, by the time you have arrived there, you’ll want to be somewhere else and the whole process will start all over again!

Dr Mike Gilbert, principal of Legacy Directions, suggested that there were three immediate problems organizations face in terms of modernization strategies. The first was the COBOL skills problem – he suggested that the average age of a COBOL programmer was 45, and few youngsters wanted to learn COBOL (or could even find places that teach it!). In ten years’ time, most COBOL experts will be looking to retire rather than take on a modernization challenge.

His second problem he likened to an octopus. The core legacy application was the body of the octopus, and the tentacles were the peripheral systems that the application touched. While it is easy (a relative term, obviously) to do something with the tentacles, it is much harder to move or integrate the octopus’s body.

The third problem was simply cost. What happens if the modernization project goes wrong? The cost can be enormous – Mike gave the example of one company at which all the senior officers resigned following a project failure, because of the expense the company had suffered.

Mike Gilbert then explained that we should be looking at the big picture when thinking of legacy systems. Firstly, there are people involved – not just IT staff, but the users who are familiar with the applications. Then we should think about the processes – there are the old (original) processes and the new ones. Then we get to the applications themselves, which could be locked into particular databases etc. And finally there is the infrastructure that must be considered.

Before a modernization project can be undertaken, it’s important that the business leaders understand the need for the project in business terms and can see the benefits the business will get from the change. The business leaders must support the modernization project. The project must use the best methodologies (and in some cases the methodologies may be in their infancy). Lastly, companies must have appropriate tools for the project – again these might not exist for all modernization projects.

Mike suggested that any modernization project should go through five stages. Stage 1 was to define the challenge. Stage 2 was to define success – that way you knew when you’d finished. Stage 3 was to plan the project. Stage 4 was to carry out the project. And Stage 5 was to return to Stage 1.

He suggested always starting with processes, because these are used by people. Then look at the people, because they use the actual applications. Then look at the applications, which run on the infrastructure. And lastly, look at the infrastructure. He suggested that by scoring assets it was possible to produce a decision table showing what changes were possible in terms of cost and risk – something like the sketch below. The final decisions should always be taken by the business leaders, so that they are supportive of the project.
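The talk didn’t spell out how the scoring worked, so the following is only a minimal sketch of the idea, with invented asset names, score ranges, and decision rules.

```python
from dataclasses import dataclass

# A toy version of "score the assets, produce a decision table". Every name,
# range, and rule here is an invented assumption for illustration.
@dataclass
class Asset:
    name: str
    business_value: int  # 1 (low) .. 5 (high)
    change_cost: int     # 1 (cheap) .. 5 (expensive)
    change_risk: int     # 1 (safe)  .. 5 (risky)

def recommendation(a: Asset) -> str:
    """Crude rule: do the valuable, cheap, safe changes first."""
    if a.business_value >= 4 and a.change_cost <= 2 and a.change_risk <= 2:
        return "modernize now"
    if a.change_cost >= 4 or a.change_risk >= 4:
        return "refer to business leaders"
    return "candidate for a later phase"

assets = [
    Asset("order entry (CICS/COBOL)", 5, 4, 4),
    Asset("reporting front end", 3, 2, 1),
    Asset("batch reconciliation", 4, 2, 2),
]

# The decision table the business leaders would review.
for a in sorted(assets, key=lambda x: -x.business_value):
    print(f"{a.name:<26} value={a.business_value} cost={a.change_cost} "
          f"risk={a.change_risk} -> {recommendation(a)}")
```

The point of a table like this isn’t the scores themselves; it’s that it gives the business leaders something concrete to argue over and sign off, which is exactly where Mike said the final decisions should sit.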

This is just a flavour of one session at the seminar. All in all, it was a very useful day with lots of valuable information.