Monday, January 29, 2007

Mainframe futures… (part 1)

If I were in charge of a data centre at the moment, what issues would I be concerned about for 2007? That’s the sort of question CIOs and others must have been asking themselves recently. I’ve been kicking the question around and below are some of my thoughts. I’d love to hear your thinking on this topic.

The perennial problem of how to do more with less must be at the top of everyone’s agenda. There is always less money and there are always more demands on the data centre. For example, the amount of data that needs to be stored somewhere just seems to grow each year, and this is compounded in some companies by the need to store sound files (whether music or podcasts) and pictures (including videos). This is on top of the growing databases and the increasing number of unstructured files, and, of course, the need to retain everyone’s e-mails for audit purposes.

Doing more with less also applies in many cases to manpower. The ageing COBOL programmer is a frequently discussed phenomenon. As well as the loss of application programmers, there are also fewer operators (certainly fewer than in the heyday of the late 1970s, with 20 operators on a single shift!), and fewer systems programmers. The solution is more and more automation – which would be much easier if companies didn’t insist on changing the hardware and software in use!

High Availability is also an important issue. Again, there is a lot of talk about how this can be accomplished on mainframes and mid-range boxes. But IT departments can also be responsible for a fairly mixed bag of PCs and networks that also need to be up and running. This all needs to be taken into account when an HA initiative is undertaken.

Many sites also face licensing problems. William Kim Mongan has an article on this topic in a forthcoming issue of Xephon’s z/OS Update (www.xephonusa.com). He describes the problems with licences his company faced when moving processing from one location to another. Some vendor products are dependent on CPU-ID, which means that they won’t run on different hardware. Some have a tolerance mode giving the user a week or two’s grace before new keys need to be entered. IT departments are going to have to ensure that back-up or recovery sites will run the software needed in a failover situation, or even following the all-too-common company mergers.
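Just to make the CPU-ID problem concrete, here’s a toy sketch in Python. The product name, key format, and hashing scheme are entirely my own invention (real vendor licensing is far more involved), but it shows why a key cut for one machine’s serial number simply won’t validate on the kit at a recovery site:

import hashlib

# Hypothetical licence key locked to one machine's CPU-ID.
# Everything here is invented for illustration; real vendor schemes differ.
def make_key(cpu_id, product, vendor_secret):
    """The vendor cuts a key from the product name and the target CPU-ID."""
    return hashlib.sha256(f"{product}|{cpu_id}|{vendor_secret}".encode()).hexdigest()[:16]

def key_is_valid(key, current_cpu_id, product, vendor_secret):
    """At start-up the product recomputes the key for the CPU it actually finds itself on."""
    return key == make_key(current_cpu_id, product, vendor_secret)

key = make_key("0001234567890ABC", "EXAMPLE-MONITOR", "vendor-secret")
print(key_is_valid(key, "0001234567890ABC", "EXAMPLE-MONITOR", "vendor-secret"))  # True - the original machine
print(key_is_valid(key, "000FEDCBA9876543", "EXAMPLE-MONITOR", "vendor-secret"))  # False - the recovery-site machine

That second False is exactly what you don’t want to discover halfway through a disaster recovery test.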

Most IT managers will be looking more favourably at Open Source software in 2007. This is a difficult decision for many sites. On the one hand, licence fees cost money, but they do bring with them the assurance that someone cares about that software. On the other hand, Open Source may have been developed and debugged by hundreds of different people and should be robust and resilient, but, should you experience a problem, who are you going to call? Having said that, there are many examples where Open Source software may be just what you need for PCs and mid-range machines. And with Linux on the mainframe, we will see the growth of Open Source applications running on the mainframe.

Next time I will look at the future for mainframes in relation to virtualization, SOA, and compliance.

Monday, January 22, 2007

Virtualization – it’s really clever

I’ve recently been taking a look at virtualization and the IBM Virtualization Engine platform, and I’ve got to say that I am very impressed with the concept behind it. I’d really like to hear from people who have implemented it to see how successful they have found it to be.

Virtualization started life in the late 1960s with the original implementation of VM/CMS. The problem that VM/CMS solved was how to let lots of people work at the same time on the fairly limited hardware that was available. It was not unknown in those days for developers to book slots on the hardware to do their work. CMS (Conversational Monitor System) was developed at IBM’s Cambridge Scientific Center and gave each person sitting at a terminal their own virtual computer. They had disks, memory, processing power, and things like card readers and card punches, all apparently available to them. They would do their work, and VM (Virtual Machine) would run as a hypervisor (rather than an operating system as such), dispatching the different virtual machines according to the priorities it was given.
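To make the idea concrete, here’s a toy dispatcher in Python. The machine names, priorities, and time slices are all my own invention, and real VM dispatching is far more sophisticated, but the shape is the same: pick the highest-priority virtual machine that still has work to do, give it a slice, and put it back in the queue:

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class VirtualMachine:
    priority: int                                  # lower number = dispatched first
    name: str = field(compare=False)
    slices_needed: int = field(compare=False, default=3)

def dispatch(machines):
    """Hand out time slices in priority order until every machine has finished."""
    queue = list(machines)
    heapq.heapify(queue)
    while queue:
        vm = heapq.heappop(queue)       # the highest-priority runnable machine
        print(f"dispatching {vm.name}")
        vm.slices_needed -= 1
        if vm.slices_needed > 0:
            heapq.heappush(queue, vm)   # still has work, so back in the queue it goes

dispatch([VirtualMachine(1, "PRODCICS"), VirtualMachine(2, "TESTVM"), VirtualMachine(3, "DEVVM")])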

In the late 1980s and early 1990s, the story took the next step forward with the introduction of PR/SM (Processor Resource/Systems Manager) and LPARs (Logical PARtitions). This worked by having something like the VM hypervisor running in microcode on the hardware. Users were then able to divide up the existing processor power and channels between different LPARs. And the reason they would want to do that is so that the same physical hardware could run multiple operating systems. That meant one large partition for production work and smaller ones for development and testing, but all located on the same hardware. It made management and control much easier.
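Here’s a back-of-an-envelope Python sketch of the weighting idea (the LPAR names, weights, and engine count below are invented): each LPAR’s share of the shared processor pool is simply its weight divided by the total of all the active weights:

# Rough illustration of PR/SM-style weights - all figures invented
lpar_weights = {"PROD": 700, "DEV": 200, "TEST": 100}
shared_engines = 8   # physical processors in the shared pool

total_weight = sum(lpar_weights.values())
for lpar, weight in lpar_weights.items():
    share = weight / total_weight
    print(f"{lpar}: {share:.0%} of the pool, roughly {share * shared_engines:.1f} engines")

So in this made-up example PROD is entitled to 70% of the pool when everything is busy, but an idle TEST partition’s unused share can still be soaked up by the others – which is rather the point.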

The next big leap forward took place in the middle of 2005 with the introduction of the z9 processor. This took the idea of processors and peripherals appearing to be available one step further. Rather than physically dividing up the processor and channels into LPARs, everything was logically divided. These virtual machines were then prioritized and dispatched accordingly. What it also did, and this is very clever, was allow insufficient or non-existent resources to be simulated, so that they appeared to be available.

If that was the end of the story, it would still be quite clever – but it’s not. IBM long ago realized that it couldn’t pretend it was the only supplier of computer equipment. And most companies – through takeovers, mergers, and anomalous decisions – have ended up with a mish-mash of server hardware, most of which, eventually, becomes the IS department’s responsibility. IBM has cleverly extended its virtualization concept to cover all the servers that exist at a site. It combines them all into one large unit. Now you might think this would be large and unwieldy and completely the wrong thing to do, but in fact the opposite is true. It now becomes possible to control these disparate servers from a single console and monitor them from one place (which could be a browser). Management becomes much easier. It’s not only possible to manage System z, System i, and System p components; it’s also possible to manage x86-based servers. It can also manage virtual machines generated by Microsoft Virtual Server, VMware, and open source Xen.
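The single-console idea is easier to see with a sketch. This little Python fragment is not the Virtualization Engine interface – the host names, platforms, and figures are all invented – it simply illustrates the appeal of pulling status from a mixed bag of servers into one view:

# Invented data: one view across very different platforms
hosts = [
    {"name": "zprod01", "platform": "System z",             "cpu_busy": 78},
    {"name": "ifarm02", "platform": "System i",             "cpu_busy": 35},
    {"name": "pbox03",  "platform": "System p",             "cpu_busy": 52},
    {"name": "winvs04", "platform": "x86 / Virtual Server", "cpu_busy": 12},
    {"name": "xenvm05", "platform": "x86 / Xen",            "cpu_busy": 64},
]

def console_view(hosts, busy_threshold=70):
    """One place to see everything, whatever the underlying platform."""
    for host in sorted(hosts, key=lambda h: h["cpu_busy"], reverse=True):
        flag = "  <-- worth a look" if host["cpu_busy"] >= busy_threshold else ""
        print(f'{host["name"]:<10} {host["platform"]:<22} {host["cpu_busy"]:>3}% busy{flag}')

console_view(hosts)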

So, basically, IBM has come up with a way of making virtual components appear to be available to virtual computer systems running across almost any server that currently exists. This maximizes the usage of the resources available to suit the workload. And it has done it in a way that makes management of such a complex system fairly straightforward. Really clever, eh?

Monday, January 15, 2007

I say I say I say!

It must have been about six years ago that I last looked at “speakwrite” software, and at that time it was a bit useless! A speakwrite machine, you’ll remember from George Orwell’s novel 1984, was a device that allowed the user to speak into a “mouthpiece” (microphone) and have his spoken words written onto the page. Such a device would be really useful for us bloggers – we could record a podcast (OK, Orwell never mentioned these) and let the speakwrite machine create this text that you’re reading!

I know there are an enormous number of products currently available that can take voice commands, but I thought that the leaders in this area were probably IBM, with its ViaVoice, and Nuance, with Dragon NaturallySpeaking. I decided to give Nuance’s Dragon NaturallySpeaking a try. To be honest, I was not expecting too much.

Previously, I had spent time reading through pages of text, training the software to respond to my dulcet tones. Following that, I had said a couple of sentences and been amazed to see what words the software had written on the screen. I would then read out these new words from the screen and see what the software wrote next. As you can imagine, this could go on for a long time as the software produced sentences less and less like my original.

To be fair, I did know of one IT worker who, having damaged one wrist in a car accident and suffering RSI in the other, turned to voice recognition software as the only way to continue working without having to type. He did use the software successfully, but his work rate was much slower than before.

Anyway, Dragon NaturallySpeaking is now at Version 9 and was released last summer (2006). One interesting development is that there is no longer a need to train the software to recognize your voice patterns. It also comes with a wireless headset to make dictation easier – you’re less likely to rip the headset off as you move your head away from the computer!

The advantage of speaking your text into the computer is that you can get up to 160 words per minute, whereas most people type at well under half that rate. As well as being useful for lazier people (like me), voice recognition software is ideal for people like my friend who are unable to type. Nuance also makes a business case for its use by illustrating how much time can be saved.

The big question that’s hovering at the back of your mind now is, “Does it work?” Is it any better than six years ago or is it still just a bit of a fad? Well, I can honestly say that I was very impressed with the improvement in the technology. It does take a little bit of getting used to as you switch from dictation to command mode and back, but once you get the hang of it, it’s very good. I’m not sure whether I’m writing any faster yet; I’ve still only been using it for a little while.

So if you do a lot of typing into your computer (words rather than program code), I think it is worth giving it a try.

BTW you might have noticed that Xephon’s MVS Update and MQ Update journals have changed their names this month. They are now called z/OS Update and WebSphere Update. The first change reflects the fact that most people refer to the operating system as z/OS these days rather than by the sort-of generic title of MVS; the second reflects the additional content necessary to satisfy users of WebSphere MQ. Full details about the publications can be found at www.xephonusa.com.

Monday, January 08, 2007

Extending a small network

In an earlier blog (16 October 2006) I looked at Netgear’s wall-plugged wireless range extender kit and mentioned that there were other similar products available. One of those “others” is the Solwise HomePlug, which I’ve recently had a look at.

The problem that both these products try to overcome is the one where a wireless network doesn’t reach as far as a computer that wants to use it. They can also be used where a LAN doesn’t stretch to reach a remote user, but the mains power supply does. And that’s really the trick with these devices: they make use of the network that most buildings already have installed – the electrical circuits.

The Solwise HomePlug plugs into a router and also into an electrical socket. A second HomePlug connects to a socket near where you are working. Here’s the big difference between the Netgear device and the Solwise HomePlug – the Netgear plug now acts as a wireless device and a number of laptops can share it; the Solwise HomePlug connects by wire to a single computer. But this is not a bad thing because, usually, you have only the one computer that needs connecting in this way. Of course, if you have more computers to connect, then you can purchase more HomePlugs.

The HomePlug provides 200Mbps connectivity. It is also very easy to set up. It comes with a CD containing the HomePlug AV Utility, which is installed first, and then you must install .NET Framework 1.1. The main use of the utility is to change the Private Network Name, and you can also use it to detect any other HomePlugs; changing the Network Name allows a password to be added. It’s also possible to upgrade the firmware of the HomePlugs from the utility.

Those of you who work from home or a small office will find devices like the Solwise HomePlug very easy to use and an easy way to extend your existing network.