On the third day of BCBS 239, G. Sibley encounters another fundamental concept – data architecture. After establishing a shared understanding of meaning, and roles and responsibilities for data governance, it’s time to set up a data architecture to support risk data aggregation.
Until now, it has been possible to comply with most regulations by doing the minimum necessary to get by. For firms that took that route, data infrastructure still involves many manual processes, takes a long time to produce results, and can’t stand up to the quality and flexibility demands of sound risk data management.
BCBS 239 closes off the “minimum necessary” route. It calls for enterprise-wide data harmonisation, without cutting corners.
- BCBS 239 Principle 2 calls for integrated data taxonomies and architecture across the banking group.
- BCBS 239 Principle 3 states plainly that banks should strive for a single, authoritative source for each type of data.
This requires an EDM/MDM approach to sourcing, standardising, verifying quality, aggregating, and distributing critical data.
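To make that pipeline concrete, the flow of sourcing, standardising, quality-checking, and merging into a single authoritative “golden copy” can be sketched in a few lines. This is a minimal illustration only: the source names, fields, and priority rules below are invented, not drawn from any particular EDM/MDM product.

```python
# Hypothetical sketch of a "golden copy" merge: records for one security
# arrive from several sources; we standardise, validate, then take each
# attribute from the highest-priority source that supplies it.

SOURCE_PRIORITY = {"vendor_a": 1, "vendor_b": 2, "internal": 3}  # lower rank wins

def standardise(record):
    """Normalise formats so records from different sources are comparable."""
    out = dict(record)
    if out.get("currency"):
        out["currency"] = out["currency"].strip().upper()
    return out

def validate(record):
    """Simple quality gate: reject records missing the key identifier."""
    return bool(record.get("isin"))

def golden_copy(records):
    """Merge per-source records into one authoritative record."""
    ranked = sorted(
        (standardise(r) for r in records if validate(r)),
        key=lambda r: SOURCE_PRIORITY.get(r["source"], 99),
    )
    golden = {}
    for rec in ranked:
        for field, value in rec.items():
            # Keep the first (highest-priority) non-empty value per field.
            if field != "source" and value and field not in golden:
                golden[field] = value
    return golden

feeds = [
    {"source": "internal", "isin": "US0378331005", "currency": "usd", "name": None},
    {"source": "vendor_a", "isin": "US0378331005", "currency": "USD", "name": "Apple Inc"},
]
print(golden_copy(feeds))
# → {'isin': 'US0378331005', 'currency': 'USD', 'name': 'Apple Inc'}
```

Real EDM/MDM platforms layer much more on top (survivorship rules per attribute, audit trails, exception workflows), but the core idea is the same: one governed path from many feeds to one authoritative record.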
The remaining days of our BCBS 239 exploration will build on this foundation. The last thing you’ll want to do is get your ‘stocking’ loaded up with risk data attributes, only to have the bottom fall out partway through. It’s important to take the time now to set your infrastructure up properly.
This certainly can be a lot of work. But, to wrap this topic up on a positive note, there are additional benefits to be gained from applying an EDM/MDM data infrastructure. So a bit of extra effort now will pay off many times over in the future. G. Sibley will discover this for himself in a future post.
Look out for day four tomorrow.
By Steve Engdahl, SVP, Product Strategy, Goldensource