Monday, January 29, 2007

Mainframe futures… (part 1)

If I were in charge of a data centre at the moment, what issues would I be concerned about for 2007? That’s the sort of question CIOs and others must have been asking themselves lately. I’ve been kicking the question around and below are some of my thoughts. I’d love to hear your thinking on this topic.

The perennial problem of how to do more with less must be at the top of everyone’s agenda. There is always less money and there are always more demands put on the data centre. For example, the amount of data that needs to be stored somewhere just seems to grow each year, and this is compounded in some companies by the need to store sound files (whether music or podcasts) and pictures (including videos). This is on top of the growing databases and the increasing number of unstructured files, and, of course, the need to retain everyone’s e-mails for audit purposes.
Doing more with less also applies in many cases to manpower. The ageing COBOL programmer is a frequently discussed phenomenon. As well as the loss of application programmers, there are also fewer operators (certainly fewer than in the heyday of the late 1970s, when you might find 20 operators on a single shift!) and fewer systems programmers. The solution is more and more automation – which would be much easier if companies didn’t insist on changing the hardware and software in use!
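
To make that concrete, here is a minimal, hypothetical sketch of the kind of automation that replaces a pair of operator eyes: a script that polls a list of tasks and shouts when one goes quiet. The task names and the process check are invented for illustration; a real site would query its own scheduler or console and page someone rather than print.

```python
import subprocess
import time

# Hypothetical list of tasks an operator used to watch by eye.
WATCHED_TASKS = ["CICSPROD", "DB2PROD", "MQPROD"]

def task_is_running(name):
    """Return True if a process matching this name is active.
    pgrep stands in for whatever the real platform would use."""
    result = subprocess.run(["pgrep", "-f", name], capture_output=True)
    return result.returncode == 0

def watch(interval_seconds=60):
    while True:
        for task in WATCHED_TASKS:
            if not task_is_running(task):
                # Printing stands in for the alerting or restart step.
                print(f"ALERT: {task} is not running")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch()
```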

High Availability is also an important issue. Again, there is a lot of talk about how this can be accomplished on mainframes and mid-range boxes. But IT departments can also be responsible for a fairly mixed bag of PCs and networks that need to be up and running too. All of this needs to be taken into account when an HA initiative is undertaken.
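
As a back-of-envelope illustration of why the mixed bag matters, a sketch of availability checking that treats mainframe, mid-range, and PC-facing services uniformly: just a TCP connect to each host and port. The host names and ports here are made up, not a recommended monitoring design.

```python
import socket

# Invented inventory: an HA plan has to cover all of these, not just the big iron.
HOSTS = [
    ("mainframe.example.com", 23),    # TN3270 to the mainframe
    ("midrange.example.com", 22),     # SSH to a mid-range box
    ("fileserver.example.com", 445),  # file share the PCs depend on
]

def is_reachable(host, port, timeout=3.0):
    """Crude availability test: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in HOSTS:
    status = "up" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} is {status}")
```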

Many sites also have licensing problems. William Kim Mongan has an article on this topic in a forthcoming issue of Xephon’s z/OS Update (www.xephonusa.com). He describes the licence problems his company faced when moving processing from one location to another. Some vendor products are dependent on the CPU-ID, which means that they won’t run on different hardware; some have a tolerance mode giving the user a week or two’s grace before new keys need to be entered. IT departments are going to have to ensure that back-up or recovery sites will run the software needed in a failover situation, or even following the all-too-common company mergers.
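
To illustrate the mechanism Mongan describes, here is a hypothetical sketch of a CPU-ID-bound licence check with a tolerance (grace) period. The record layout, field names, and two-week grace length are all invented for the example, not any real vendor’s scheme.

```python
from datetime import date, timedelta

# Invented licence record: key material cut for one specific CPU serial.
LICENCE = {
    "cpu_id": "0123456789AB",       # hardware this key was issued for
    "tolerance_days": 14,            # grace period after a CPU change
    "first_seen_on_new_cpu": None,   # set when a mismatch is first noticed
}

def check_licence(licence, current_cpu_id, today=None):
    """Return True if the product may run on this hardware today."""
    today = today or date.today()
    if current_cpu_id == licence["cpu_id"]:
        return True
    # CPU-ID mismatch: start (or continue) the tolerance clock.
    if licence["first_seen_on_new_cpu"] is None:
        licence["first_seen_on_new_cpu"] = today
    deadline = (licence["first_seen_on_new_cpu"]
                + timedelta(days=licence["tolerance_days"]))
    return today <= deadline  # after this, new keys must be entered

# After a failover to different hardware, the product still runs for a while:
print(check_licence(LICENCE, "FEDCBA987654"))  # True, inside the grace period
```

This is why a failover test matters: the grace period buys time to obtain new keys, but only if someone notices the clock has started.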

Most IT managers will be looking more favourably at Open Source software in 2007. This is a difficult decision for many sites. On the one hand, licence fees cost money; on the other, they bring with them the assurance that someone cares about that software. Open Source may have been developed and debugged by hundreds of different people and should be robust and resilient, but should you experience a problem, who are you going to call? Having said that, there are many examples where Open Source software may be just what you need for PCs and mid-range machines. And with Linux on the mainframe, we will see the growth of Open Source applications running on the mainframe.

Next time I will look at the future for mainframes in relation to virtualization, SOA, and compliance.
