Monday, October 30, 2006

CICS and AJAX

Trevor Eddolls

I’ve mentioned both CICS and AJAX before in these blogs, and bringing the two together seems like a marriage made in heaven. On the one hand you have all the advantages of transaction processing on the mainframe – speed, reliability, security, etc – and on the other you have the fastest way of letting users work from a browser. AJAX, for those of you just returned from the planet Tharg, allows users to update parts of the page on their screen without having to send the whole screen to the server and receive a complete new screen back. It basically lets users work as if the application were running locally on their computer – it can make things that quick. No more press the button, go and get a coffee, sit down again and hope the response has arrived!
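
As a minimal sketch of what that looks like in the browser (the URL here is invented for illustration, and Internet Explorer 6 needs new ActiveXObject("Microsoft.XMLHTTP") instead of XMLHttpRequest):

    var xhr = new XMLHttpRequest();                       // Firefox, Safari, Opera, etc
    xhr.open("GET", "/cics/stockQuote?symbol=IBM", true); // true = asynchronous
    xhr.onreadystatechange = function() {
      if (xhr.readyState == 4 && xhr.status == 200) {
        // Update just the quote field - the rest of the page stays put
        document.getElementById("quote").innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);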

HostBridge has recently produced a newsletter about CICS and AJAX. They suggest that AJAX enables true two-tier access to CICS (rather than three-tier): the browser contains the application logic and makes calls directly to CICS. The newsletter goes on to say that “AJAX applications also allow you to retrieve data from CICS, maintain the data in memory, and repurpose the data as needed”. What this means in effect is that a single call to CICS can provide the information needed for two or more views in the browser. This has the immediate impact of reducing network traffic and speeding up the response the user sees.
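
A sketch of that idea (the URL and field names are invented; the eval trick was the usual way to parse a JSON reply in 2006, though it should only ever be used on data from a trusted server):

    var customer = null;   // held in browser memory after one call to CICS

    function loadCustomer(id) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/cics/customer?id=" + id, true);
      xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
          customer = eval("(" + xhr.responseText + ")"); // parse the JSON reply
          showSummary();                                 // first view of the data
        }
      };
      xhr.send(null);
    }

    // Both views repurpose the same in-memory data - no second call to CICS
    function showSummary() {
      document.getElementById("view").innerHTML = customer.name + ": " + customer.balance;
    }
    function showHistory() {
      document.getElementById("view").innerHTML = customer.orders.join("<br>");
    }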

The newsletter also has some recommendations for the design of the AJAX interface. Their three suggestions are to use AJAX frameworks, to add server-side processing, and to design based on patterns. The third one really means looking at what users like to do and designing things that way – otherwise the users just might not want to use it.

HostBridge isn’t alone in looking at CICS and AJAX combinations. Back in August, NetManage introduced NetManage OnWeb for CICS, software that transforms CICS data into standard XML. Once you’ve got XML, you’re halfway towards being able to use AJAX on the end-user interface. Illustro has its z/XML-Host product for XML conversion (see “Putting it all together…”). And FireXML has FireXML for CICS to integrate its ObjectStar system with Web server applications. A search on Google will probably find lots of others.
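
That’s because the other half is already built into the browser: the responseXML property hands the reply back as a parsed DOM document, ready to query (again, the URL and element names below are invented):

    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/cics/order?number=12345", true);
    xhr.onreadystatechange = function() {
      if (xhr.readyState == 4 && xhr.status == 200) {
        var doc = xhr.responseXML;  // the XML reply, parsed into a DOM tree
        var status = doc.getElementsByTagName("status")[0].firstChild.nodeValue;
        document.getElementById("orderStatus").innerHTML = status;
      }
    };
    xhr.send(null);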

It’s also rumoured that Williams Data Systems, the people who brought us Implex and other network monitoring and management software, has an AJAX interface on new monitoring technology they’re introducing at the end of November.

The great thing about CICS is that it is both a consumer and a provider of Web services, which means that it can be used with all these “new-fangled” Web 2.0 applications. Adding AJAX to the mix just seems the perfect fit.

Monday, October 23, 2006

What’s Project ECLipz?

Just recently, people have been talking a lot about Project ECLipz, and, I have to admit, I wasn’t really sure what they were talking about. Yet it appears that this IBM project has been in existence since 2001 or thereabouts. The reason behind the recent gossip is the POWER6 announcements earlier this year.

Basically, it seems, IBM has a goal to converge all its non-Intel server lines onto a single platform. Those non-Intel servers, of course, are what we know as zSeries, pSeries, and iSeries – and if you reverse the order you get the last three letters of the project name (i, p, and z). In fact the whole acronym is meant to stand for Enhanced Core Logic for iSeries, pSeries, and zSeries. Cynics among you will probably say that as Sun is a major competitor of IBM, they came up with the word “eclipse” and then made it into an acronym – you know, “total eclipse of the sun”.

Why would IBM want to converge its server technology like this? The answer is very simple: it saves money. Obviously, the development of new hardware has a cost, so if you can develop hardware that works for two types of server, that will cut your development costs in half. If you can split it three ways, well then it’s only a third of the cost!
If you look at the iSeries, you find that OS/400 now runs on what is more or less a pSeries POWER5 system. This was probably the easiest convergence for IBM because, I’m told, OS/400’s underlying instruction set is an intermediate code (applications never target the hardware directly), which meant that only very few parts of the operating system needed porting to PowerPC. In terms of costs, the hardware development is split in half, with the OS/400 people needing to do only a little software development. Now that’s got to look good on the budget sheet.

The big problem for IBM and the ECLipz project is that zSeries hardware is quite a bit different from the other two. For a start there are all those wonderful coprocessors, there are System Assist Processors (SAPs) that are used for I/O, and there’s support for hexadecimal floating point and for decimal numbers in packed and zoned formats. Not forgetting those go-faster stripes down the side!
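
For anyone who hasn’t met them: zoned decimal stores one decimal digit per byte, while packed decimal squeezes two digits into each byte, with the sign in the final nibble. A rough JavaScript sketch of the packed layout (purely illustrative – on the real hardware this is done by dedicated instructions):

    // Pack a whole number into packed-decimal bytes, eg 1234 -> "01 23 4C"
    function toPackedDecimal(n) {
      var nibbles = String(Math.abs(n)).split("");
      nibbles.push(n < 0 ? "D" : "C");                   // sign nibble: C positive, D negative
      if (nibbles.length % 2 == 1) nibbles.unshift("0"); // pad to a whole number of bytes
      var bytes = [];
      for (var i = 0; i < nibbles.length; i += 2) {
        bytes.push(nibbles[i] + nibbles[i + 1]);
      }
      return bytes.join(" ");
    }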

POWER6 itself was announced in February and should be available next year. Some sources outside IBM are suggesting that POWER6 includes some type of z/OS emulation through on-chip microcode that would build each CISC mainframe instruction out of simpler RISC instructions. What this means is that the POWER6 processor would be the first piece of hardware able to support all three server lines.

Converging the server lines does make economic sense for IBM. For users of the servers, anything that helps reduce the cost has got to be a good thing. IBM needs to bear in mind that zSeries users, while paying a high price premium, do expect something that is very fast and extremely reliable. Those users won’t be happy if compatibility issues halt the inexorable increase in processor performance they have come to expect.

It just goes to show (as the old Chinese curse would have it) that we live in interesting times.

Tuesday, October 17, 2006

While on the subject of Wireless Tech...

There is an article on cnet.com on the subject of USB (Universal Serial Bus) and the slow transition to UWB, or “ultrawideband”. Maybe the most valuable part, aside from the prospect of shedding the litany of cables surrounding nearly everything I own, is this:
UWB technology can deliver data rates at up to 480 megabits per second at around 3 meters, with speeds dropping off as the range grows to a limit of about 10 meters. Real-world speeds will probably be a little slower, but this is as fast as the wired version of USB 2.0 and much faster than current Wi-Fi networks are capable of transmitting data.
Oh, and maybe this, if you're a gadget person like me:
MP3 players are another potentially big market for this technology, Broockman said. Microsoft’s Zune player is going to ship with an 802.11g Wi-Fi chip later this year, allowing two Zune users to share songs. But Certified Wireless USB is much faster and uses less power than Wi-Fi, he said.
Thoughts? Predictions? Anyone know how much this is going to cost??? -colin

Monday, October 16, 2006

Wireless working

We’ve been working away on our laptops in a wireless environment for about three years. Everyone can access the wireless router and get on to the Internet, and everyone can share files. Because of our Macintosh heritage we still talk about “Transfer” files rather than “SharedDocs” in “My Network Places” – but that’s just us!

Obviously, all the new laptops come with wireless technology – either on the chip or on-board somehow – whereas our older ones have dongles hanging off the back to access the wireless network. The problem with our office is that we have walls! You might have come across this problem in your own office: suddenly the predicted range doesn’t match reality. In the past, we’d overcome this problem by using two metres of USB cable – one end attached to the USB port on the laptop, the other end, with the wireless dongle attached, hanging down the stairs. The result was that everything worked nicely – and no-one bumped into the hanging cable (so no need to start on Health & Safety issues).

We recently got our hands on the Netgear wall-plugged wireless range extender kit. This is a clever piece of technology that makes use of the wired network found in every building – the electricity supply. In fact, it works in a similar way to some baby listening devices, in that one “plug” is put in the wall socket in one room and another “plug” is put in the wall socket in a different room. For baby listening devices, one plug goes in the baby’s bedroom and the other goes in the kitchen or lounge, or wherever the parents are going to be. With Netgear’s extender kit, one end is plugged into a socket and also into the router; the other end is put in a socket in whichever room it’s needed. The wireless technology in the laptop talks to the nearby “plug”, which relays the traffic round the building over the mains cabling. The “plug” at the other end then talks to the router. And, of course, the whole process works in reverse – pages from the Internet (or whatever) pass from the router to the first plug, round the cables, out of the second plug, and wirelessly into the laptop.

Obviously it isn’t quite that simple – there’s a little bit of setting up to do. Luckily, the CD that comes with the extender kit runs a small program that makes everything work. The total set-up time is about ten minutes – allowing time to read the simple instructions and run through the program. It’s simple enough for a novice to do. The signal can be encrypted in the usual way, so you’re not setting up a Wi-Fi hotspot for other people to make use of. The two plugs are different – one has an Ethernet connection so you can attach it to the router (the XE102), and the other doesn’t (the WGX102). Our IT people also tried using the plugs through extension leads – which the instructions say not to do – and the connection still worked. I assume that the data transfer rate would be lower going through an extension lead, but we never actually tested that. It seemed OK at displaying Web pages – that was our test!

Of course Netgear aren’t the only people selling this kind of extender kit; there are similar products from Devolo, eConnect, and Solwise. We just tested and used Netgear’s product and found it to be very useful. If you’ve got problems with walls or distance, then this is a very easy solution to the problem – which is why I thought I’d pass it on.

In a future blog I hope to look at some hints and tips for improving CICS performance. If you have any that you’d like to share with the wider CICS community then drop an e-mail to TrevorE@xephon.com.

Monday, October 09, 2006

What’s the OpenDocument Format?

Trevor Eddolls

It used to be the case that whenever someone sent me a file from a PC it would be in a format that I didn’t have, and I would have to run it through some other application before I could use it. Paint Shop Pro was brilliant because you could use it to convert so many picture file types from one to another. I have a piece of music software, dBpowerAMP, that can convert MP3 files to WAV (or most other sound file types) and back again. And I still have Microsoft Office and IBM’s Lotus SmartSuite, so I can open most word processor files people send me.

So why should a paragraph moaning about the diversity of file types be followed immediately by one suggesting that we adopt yet another “standard” file type? Well, it does sound a bit odd, but what I’d like to suggest is a kind of lingua franca file type – one that could be produced by any word processor and opened by any other word processing software. So, not quite such a silly idea!

Back in May, the International Organization for Standardization (ISO) approved the open-standard OpenDocument Format (ODF) as an international data format standard. And, although not usually thought of as a leader in IT, the Belgian government has instructed its departments to use ODF for all internal communications. Similarly, the National Archives of Australia has decided to use OpenDocument as its cross-platform, cross-application document format.

There is an OpenDocument Format Alliance, which is made up of a mixture of vendors and other organizations and has around 140 members – probably more by the time you read this. The format itself was developed by the OASIS industry consortium and is based on the XML format originally created by OpenOffice.org. IBM, Sun Microsystems, and Novell are keen promoters of it.

You might ask what the thinking is behind ODF. Is it a way of getting back at Microsoft, with its ubiquitous DOC format, or at Adobe’s PDF? No, there’s much more to it than that. The problem really first appeared when people tried to access older documents – not dusty scrolls tucked away at the back of ancient vaults, just documents that had been saved to disk using their word processor of choice some years ago and which couldn’t easily be read any more. When you throw away old word processing software, what happens to the files that were created with it? And even if you’ve kept the same product but upgraded to the latest release, there’s always a chance that you used a feature that’s no longer supported – backward compatibility is a nightmare as more development work goes into a product. So, with ODF, you have a standard that works now and, they predict, will still work in a hundred years’ time.

The file extensions used for OpenDocument documents are ODT for text documents, ODS for spreadsheets, ODP for presentations, and ODG for graphics (and there’s a proposed ODF extension for formulae). Technically, an OpenDocument file can be either a simple XML file using <office:document> as the root element, or a ZIP archive comprising any number of files and directories. The ZIP-based format is the one generally used because it can contain binary content and, obviously, is much smaller.
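
By way of illustration, rename a .odt file to .zip and open it, and you’ll find a layout something like this (the exact contents vary from application to application; this is typical of a file saved by OpenOffice.org):

    mimetype                -- plain text: application/vnd.oasis.opendocument.text
    content.xml             -- the text of the document itself
    styles.xml              -- named paragraph and character styles
    meta.xml                -- author, dates, and other metadata
    settings.xml            -- application-specific settings
    META-INF/manifest.xml   -- lists every file in the archive
    Pictures/               -- any embedded images, in their native binary formats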

Older versions of Microsoft Office don’t support the standard but, apparently, new versions will. If you’d like software that supports ODF now, there’s OpenOffice (http://www.openoffice.org/) and KOffice (http://www.koffice.org/). Go to http://odf-converter.sourceforge.net/ for an early version of Microsoft’s Open XML translator for Word.

So if you’re looking for file types that will work across platforms and across time, then ODF is what you need. If you’ve got millions of DOC and XLS files archived, then you either hope that Microsoft’s Office product remains backward compatible forever and that you never migrate from Windows, or you have a lot of conversion work ahead of you! And with big names like IBM and Sun behind it, and people like Google joining in, you know this standard isn’t going to suddenly disappear.

Monday, October 02, 2006

More mainframe information

Trevor Eddolls

Last week I wrote about the excellent Arcati Yearbook 2006 at www.arcati.com/yearbook.html, but that isn’t the only source of mainframe information on the Web. There are numerous Web sites created by mainframe enthusiasts out there, many of which are well worth a look. This is a topic that I intend to return to in future blogs.

The site at http://www.mximvs.com/ is home to Rob Scott’s OS/390 and MVS resources. The Web site says, “This site offers resources for professionals working with MVS, OS/390, ISPF, REXX, and Assembler. Here you will find free software to download and most of them come with the source code. Also included on the site is my free MVS monitor software ‘MXI’, which is the result of over 8 years of effort.”

The site suggests that MXI hasn’t been updated since September 2004. In fact, the product was acquired by Rocket Software (www.rocketsoftware.com/) and is downloadable from www.rocketsoftware.com/portfolio/mxi/download.php. Information about the latest version is at www.rocketsoftware.com/portfolio/mxi/. The page says, “Rocket MXI G2 for z/OS is an ISPF-based application that enables the systems programmer to display important configuration information about the active MVS, OS/390 or z/OS system”.

Rob Scott’s site has a number of downloads available. There are utility programs such as VTOCUTIL, VARYDASD, CONFIGXX, DELNOENQ, DDDEFCHK, and DDDEFPTH. There are external REXX functions such as STEMPUSH and STEMPULL, LISTSYM, LISTMEM, and the very useful SLEEP. In the MISC section is IEFACTRT, a sample step termination exit that prints job summary messages at the top of the JES2 job log and I/O statistics for each DDname for each step.

Bill Lalonde’s Big Iron site is at http://billlalonde.tripod.com/. The site describes itself by saying, “This page is dedicated to S/390 mainframes and the MVS world”. Clicking on the “Stupid JCL tricks – The Ongoing Series!” takes you to “This Month's Topic”, which discusses searching HFS directories, and includes example JCL.

Clicking on “The REXX page” takes you to a links page offering a number of different options, one of which is “sample code”. This takes you to a page offering a large number of REXX samples including: INTDATE, REXX, TERMINFO, READDIR, DI, DT, FINDJSAB, MVSVAR, RGNINFO, SYMDEF, ISEE, BOOKSEEK, URLINFO, WHOHAS, TIMEUP, ACFRES, XDSI, FINDJCT, LINKEXT, FINDNTTP, ADDREGN, EQUAL, SUBCOM, and OEMVAT.

Obviously, the IBM site has lots of software – www-03.ibm.com/servers/eserver/zseries/zos/downloads/ says, “you can find the following types of z/OS downloads on this page: as-is z/OS downloads; SMP/E installable z/OS Web deliverables”. Definitely worth a look, although you’ve probably looked already!

Finally, and I hope you won’t mind me mentioning it, the Xephon Web site at www.xephonusa.com has lots of code, some of which is freely available. Xephon has been publishing its Update journals since 1985 – the first issue of CICS Update came out in December of that year. Since then, thousands of Assembler macros, COBOL programs, REXX EXECs, bits of JavaScript, and so on have been published (in all the Update publications), and they are all on the Web site.