Having fallen from favour as a result of the hype it attracted during the dot-com era, middleware is now starting to settle into a useful role as part of an overall Enterprise Application Integration (EAI) environment. But like all aspects of IT in financial services at the moment, the ultimate driver is the business need to reduce costs and operational risk.
This need has been focused in the minds of banks by the Basel II proposals. Even though their implementation in 2007 looks increasingly in doubt, the general thrust is one that banks (and other financial institutions not directly governed by the Basel Committee) realise can pay dividends in other ways, simply by reducing the operational risk and costs caused by errors in processing.
Combined with a tight squeeze on spending, this drive towards automating the entire transaction lifecycle in a true straight-through processing (STP) environment means that firms have to make the most of their existing systems. Alongside that, there is a need to bring those systems together in a way that reflects the business organisation.
The key thing in middleware and EAI deployment, for the financial sector at any rate, is the definition of a consistent business logic and the formalisation of rules for how a particular firm goes about its trading business. In business schools and management seminars, that much-abused word "ontology" is brought into play: logicians use it to mean a formal definition of structures, and that is what businesses need to produce. From the definition of the trading structures will come the workflow rules and best-practice definitions that can be used to build an efficient overall system.
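One way to see what "formalising the rules" means in practice is to encode part of a trade lifecycle as an explicit state machine that a workflow engine can enforce. The sketch below is purely illustrative: the state names and allowed transitions are invented for the example, and any real firm's ontology would be far richer.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: one firm's trade-lifecycle rules written down as an
// explicit state machine, so every system enforces the same workflow.
public class TradeLifecycle {
    public enum State { CAPTURED, ENRICHED, CONFIRMED, SETTLED, FAILED }

    // Allowed transitions (hypothetical; a real rule set would be larger).
    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.CAPTURED,  EnumSet.of(State.ENRICHED, State.FAILED));
        ALLOWED.put(State.ENRICHED,  EnumSet.of(State.CONFIRMED, State.FAILED));
        ALLOWED.put(State.CONFIRMED, EnumSet.of(State.SETTLED, State.FAILED));
        ALLOWED.put(State.SETTLED,   EnumSet.noneOf(State.class));
        ALLOWED.put(State.FAILED,    EnumSet.noneOf(State.class));
    }

    // Returns true only if the firm's formal rules permit this transition.
    public static boolean canMove(State from, State to) {
        return ALLOWED.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to);
    }
}
```

The point of writing the rules down this explicitly is that they stop living in any one developer's head: the same table can drive validation in the front office and exception handling in the back.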
Ideally, what any trading organisation should have is a common bus architecture integrating its front, middle and back office operations. And it should cut across the various asset classes the firm trades in, which may, of course, have their own front and middle offices, if not all the way to the back.
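The bus idea can be sketched very simply: rather than front, middle and back office calling each other directly, each publishes and subscribes to named topics on a shared bus. The in-process version below is a toy, assuming invented topic names; a production deployment would use a messaging product, but the decoupling principle is the same.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch of a common message bus: components are coupled to topic
// names, not to each other. Topic names here are invented for illustration.
public class MessageBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // Register a handler for a topic, e.g. a middle-office risk engine
    // subscribing to "trades.equity".
    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Deliver a message to every subscriber of the topic; publishers never
    // know who, if anyone, is listening.
    public void publish(String topic, String message) {
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(message);
        }
    }
}
```

Because each asset class can keep its own front and middle office while still publishing to the same bus, this shape matches the organisational reality the article describes rather than forcing a single monolithic system.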
Doing this means developing a common taxonomy across these operations, something we are now seeing emerge through the work of putative standards bodies such as XML.org and the work being done on the Market Data Definition Language (MDDL).
While the leaders and shapers may be implementing systems based on Sun Microsystems' Java or Microsoft's .NET framework, perhaps with some Linux and a few new database types thrown in, the vast majority of businesses are not.
Far more are using legacy systems (you might even call some of them heritage systems) in which a great deal of data vital to the STP process is stored, but structured in a way that no one understands: the business logic has been lost, probably along with the IT staff who originally coded the system.
By its very definition, middleware can solve this problem. In the initial buzz around the technology, much was made of it allowing data to be sucked out of aging data silos for use on state-of-the-art boxes.
It could, and can, do that, but the reality is that middleware has matured both in the range of data and system types it can interface with and in the complexity of its deployment at the core of an EAI project. Increasingly, systems like IBM's WebSphere or BEA Systems' WebLogic have developed from Internet novelties into mature technologies that are being marketed as application servers, used not only for sucking data out of older systems but also for passing data back to them for processing.
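In practice this bidirectional role usually takes the form of an adapter: middleware code that translates between the legacy system's record layout and a modern representation, in both directions. The sketch below assumes an invented fixed-width layout (an 8-character trade id followed by a 12-character amount); any real legacy format would need its own mapping.

```java
// Hedged sketch of a middleware adapter for a legacy data store. The
// fixed-width record layout used here is hypothetical.
public class LegacyTradeAdapter {
    // Read direction: parse a legacy fixed-width record into (id, amount).
    public static String[] fromLegacy(String record) {
        String id = record.substring(0, 8).trim();
        String amount = record.substring(8, 20).trim();
        return new String[] { id, amount };
    }

    // Write direction: format a modern trade back into the legacy layout,
    // so the old system can still do the processing it has always done.
    public static String toLegacy(String id, String amount) {
        return String.format("%-8s%-12s", id, amount);
    }
}
```

Keeping both directions in one adapter is what lets the aging system remain the processing engine where that suits the business logic, rather than being treated purely as a data source to be drained.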
Where that is appropriate to the business logic, it is a perfectly valid application of the if-it-ain't-broke rule of engineering. Much more crucial is being able to work out if it is appropriate in the first place.
Once a definition of the business logic and workflow is in place, application integration can become a reality. Only once you've achieved integration of the systems can you monitor and manage them, and that is the benefit that middleware and EAI are at last starting to deliver.