Enterprise Data Management – Finding Our Nirvana


by | March 9, 2011

Manual data collection is proving to be a painful thorn in the side of financial services firms seeking to implement enterprise data management (EDM) strategies. Reliance on manually collected data could scupper drives towards enterprise data management in the financial services industry.

While data projects remain high on the agenda for the industry as firms continue to centralise data functions and manage costs, many are struggling to deal with the enormity of manually collected data that flows through their operations.

The financial services industry has dedicated considerable thought and resources to achieving the nirvana of a single, complete and correct version of a core dataset. What is less often discussed is how to deal with the exceptions and the non-vendor content. However, there is growing awareness of the risks associated with manual data collection and its contribution to valuation errors, missed deadlines, overstretched resources and scalability constraints, as well as to operational risk.

The changing regulatory environment and international accounting standards are further adding to the need for greater transparency.

Eradicating manual data collection can help resolve all these issues simultaneously. The biggest challenges lie with illiquid fixed income and over-the-counter (OTC) derivatives, where the structured nature of the assets makes data less transparent and therefore harder to collect. But organisations also struggle to capture complete and accurate records for instruments such as American depositary receipts and contracts for difference, where an underlying security can add confusion, as well as for all variants of funds. Even mainstream activities such as unit trust pricing can prove troublesome. These challenges are not limited to pricing data but extend to income and capital events as well as asset identification and static data.

There are numerous systems on the market that can help organisations construct their own data management platforms. These can add some value in managing the bulk collection, storage and processing of readily available data.

Building a process for capturing and processing the existing data feeds may improve clarity and transparency. However, this is far from an all-encompassing data initiative. Many organisations can achieve good levels of automated processing for the bulk of their data, but manually collected data often remains untouched by the EDM strategy.

If manual data that is input to a central data platform is not subjected to the same stringent routines as readily available data, it will create background noise and confusion. If multiple sources are used to validate listed content but only a single manually input entry exists for other datasets, consistency can never be achieved.
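The consistency point above can be sketched in a few lines. A minimal illustration, assuming a simple median-consensus check; the source names and the 0.5% tolerance are hypothetical, not a prescription:

```python
# Sketch: apply the same stringent routine to every quote, whether it
# came from a vendor feed or was keyed in manually. The tolerance and
# source names are illustrative assumptions.
from statistics import median

TOLERANCE = 0.005  # maximum allowed deviation from the consensus price


def validate_prices(quotes):
    """Compare quotes from all sources against their median.

    Returns the consensus price plus any sources flagged as outliers.
    A single-source (manually keyed) quote can never be cross-checked,
    so it is flagged for human review instead.
    """
    if len(quotes) < 2:
        return {"consensus": None, "outliers": [], "needs_review": True}
    consensus = median(quotes.values())
    outliers = [src for src, px in quotes.items()
                if abs(px - consensus) / consensus > TOLERANCE]
    return {"consensus": consensus, "outliers": outliers,
            "needs_review": bool(outliers)}
```

A quote backed by three vendors can be validated against the consensus; the lone manual entry cannot, which is exactly why it ends up in an exception queue.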

– Building solutions

It is widely acknowledged that an entity's overall performance is limited by its weakest component. In data management terms, that is the human element. Manual data collection will still exist once all of the readily available content has been automated, and it will continue to cause problems, eroding quality, inflating costs, tying up resources and damaging reputations.

The challenge facing the industry is finding a way to collect, database, normalise, reconcile and validate all data, including the labour-intensive and risk-prone manual content. The core principle of automating data collection is surely the correct approach.
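As an illustration of the normalise-and-reconcile steps in that chain, consider two quotes for the same bond delivered in different conventions. The conventions, field names and tolerance below are illustrative assumptions:

```python
# Sketch: normalise quotes delivered in different conventions before
# reconciling them. "percent_of_par" vs "decimal" is an assumed example
# of the inconsistencies that manual sources introduce.

def normalise(record):
    """Convert a quote to a decimal price per unit of face value."""
    out = dict(record)
    if record["convention"] == "percent_of_par":
        out["price"] = record["price"] / 100 * record["face"]
    out["convention"] = "decimal"
    return out


def reconcile(a, b, tolerance=0.01):
    """True when two normalised quotes agree within the tolerance."""
    return abs(a["price"] - b["price"]) <= tolerance
```

Only after both records are expressed in the same convention does a reconciliation result mean anything; comparing raw inputs would flag false breaks on every percent-of-par quote.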

Automation is the key to removing the weakest link in the data management chain: human error. Computers do not care whether they are performing mundane tasks, nor do they mind that, as a senior machine, they are being given junior work and left unfulfilled. Credit crunch worries and deciding what to have for lunch fail to distract them.

Add complexity to these mundane tasks, such as having to collect data from numerous sources via emails, websites, extranets, terminals and internal departments (whose primary function is not to provide data to other teams) and, ultimately, having to contact somebody at another organisation, and it is easy to see why data management is such a complex, labour-intensive and fragmented process.

Once gathered, the data typically resides in an array of spreadsheets: colour coding and bold fonts to stop the various users falling through the cracks, locked cells to stop one user deleting another user's macros, and all of it without audit trails to assist in unravelling queries or problems.

In this scenario, it is easy to see why mistakes happen so frequently. This is a very common situation that we see and is the starting point for defining a potential data solution. The data is generally available somewhere, just not in an easily accessible form. It could be embedded in an encrypted PDF, a Word document, a spreadsheet, the text of an email on a distribution list, or on a website or extranet, among many other places.
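As a sketch of what "available somewhere, just not in an easily accessible form" means in practice, the snippet below pulls a structured record out of free-form email text. The message layout, field names and regular expression are hypothetical:

```python
# Sketch: recover a structured price record from unstructured email text.
# The assumed layout is "ISIN: <code> ... NAV: <price>"; real valuation
# emails vary, which is why anything unparseable goes to a human.
import re

PRICE_RE = re.compile(
    r"ISIN[:\s]+(?P<isin>[A-Z]{2}[A-Z0-9]{9}\d)"   # 12-character ISIN
    r".*?NAV[:\s]+(?P<nav>\d+\.\d+)",              # decimal NAV price
    re.IGNORECASE | re.DOTALL,
)


def extract_nav(email_body):
    """Parse an ISIN and NAV out of an unstructured valuation email."""
    m = PRICE_RE.search(email_body)
    if not m:
        return None  # route to an exception queue for human review
    return {"isin": m.group("isin").upper(), "nav": float(m.group("nav"))}
```

The important design point is the `None` branch: automation handles the predictable layouts, and everything else is surfaced as an exception rather than silently dropped.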

This problem is unlikely to go away. The ever-expanding range of instruments will continue to pose challenges. The advent of additional trading platforms and the growth of off-exchange and algorithmic trading mean liquidity is being forced into less transparent pockets. Collecting the pricing and reference data for these instruments will remain difficult.

Clearly, a defined strategy to tackle manual data collection is necessary. Technology alone cannot solve all the problems in this area. A rigid, purely technology-focused approach can lead to silos of data, with individual successes and gains that cannot be reused elsewhere, potentially resulting in a proliferation of unconnected mini data management solutions.

– Flexible automation

Combining a flexible model with process controlled applications is critical to successful data management initiatives. Furthermore, spreadsheets alone are not the solution. Databasing the content is the logical starting point, allowing it to be managed via controlled processes with full audit trails.
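A minimal sketch of databasing manually collected content with a full audit trail, using an append-only table; the schema and field names are illustrative assumptions, not a product design:

```python
# Sketch: move manual entries out of spreadsheets and into a database
# where every change is recorded. Entries are appended, never
# overwritten, so the table's history IS the audit trail.
import sqlite3
from datetime import datetime, timezone


def open_store(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS prices (
            isin TEXT, price REAL, source TEXT,
            entered_by TEXT, entered_at TEXT
        )""")
    return conn


def record_price(conn, isin, price, source, user):
    """Append an entry stamped with who keyed it and when."""
    conn.execute(
        "INSERT INTO prices VALUES (?, ?, ?, ?, ?)",
        (isin, price, source, user,
         datetime.now(timezone.utc).isoformat()),
    )


def audit_trail(conn, isin):
    """Full change history for one instrument, oldest first."""
    return conn.execute(
        "SELECT price, source, entered_by, entered_at FROM prices "
        "WHERE isin = ? ORDER BY entered_at, rowid", (isin,)
    ).fetchall()
```

Locked cells and colour coding disappear; instead, any disputed value can be traced back to a user, a source and a timestamp.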

A small gain in one area is all well and good but if it adds problems either up or downstream, no overall gains are made. The objective is to automate every possible step, but flexibility must be maintained to ensure an automated process can be adjusted to accommodate changes and evolve with the advent of similar requirements elsewhere in the overall process. Flexibility in individual solutions allows successes to be combined, leading to marked enterprise-wide gains.

This type of solution is not easy to achieve and requires bespoke systems, skills and maintenance, but to make significant gains and provide scalable solutions that can evolve as business requirements change, flexible automation is a must. With the ability to reuse the framework or template from each individual success for other similar challenges, it is clear to see how versatile technology based solutions can contribute to executing the high level data management strategy most efficiently.

However, exceptions do and will continue to occur, and when they do, experienced staff will be required to solve them. With senior administration staff and analysts removed from the shackles and monotony of actually collecting data, they can focus more clearly on applying their knowledge and experience to those situations that warrant it.

So is the ultimate goal of fully integrated, enterprise-wide data management actually achievable? Perhaps it is nirvana. It will certainly remain out of reach without a method for handling manual data. Years of experience introducing solutions to the myriad challenges that clients face in the highly pressured valuation environment show that very few organisations fully understand the principles needed to deliver them. Focused, experienced specialist suppliers of data validation services must develop tailored processes to automate 'manual data' capture and validation. Experience tells us there are no quick fixes.

While the goal must be kept in mind, if the problem is tackled in achievable steps with each one contributing to the overall objective, progress will be made.

In this way, a defined set of practices, approaches and procedures, together with the experience acquired along the way, allows past achievements to be adapted and evolved into future solutions.

The reality is that big goals are daunting, ever changing and can become overwhelming. Cutting out the background noise and interference leaves a clear view of what remains to be done. From this lofty vantage point next steps can be defined that build on the gains already made and maybe, just maybe, the goal is achievable.


