To bring innovative products to market faster, global financial enterprises seek cleaner, real-time, actionable data. But that is more easily articulated than achieved. In fact, our present-day predicament mirrors the legend of El Dorado. For the better part of four centuries, the lore of El Dorado changed from being a man, to a city, to a kingdom and finally to an empire. The legend launched expeditions across an entire continent and influenced popular culture. The lure of gold snared generations.
Not unlike the spell that data-led disciplines (AI, ML, DL and data science) cast on banks today. This time, though, enterprises are better prepared not to fall victim to the El Dorado myth.
Or are they?
The data conundrum and the diamond connection
Gartner’s 2021 Data and Analytics (D&A) top trends note how organisations are using various solutions to address high levels of diversity, distribution, scale and complexity in their data assets. As top banks invest more in the ‘diamond’ technologies – scalable AI, composable D&A, data fabric, XOps, graph technologies, and edge computing – it will help CIOs, CDOs and CXOs to discern how the basic ‘carbon’ (data) transforms into the ‘glitter’ (AI-led solutions).
After all, isn’t it easy to take a diamond’s shine for granted – much as it is to overlook the processes that make data innovation-worthy?
For one, diamonds are increasingly cut in sophisticated factories with high-tech equipment rather than by hand. Next, the diamonds are sorted in the rough, planned for manufacture, cleaved or sawed into preliminary shapes, their girdles shaped, and their facets polished. Needless to say, decisions at each step influence the shape, size, cut, colour and clarity of the final gem, and consequently its value.
As with data, analysing raw diamond batches is arguably the trickiest step in the cutting process, requiring the most experience and technological expertise. Does a craftsman cut one large round shape that sells for more per carat but wastes more of the raw stock? Or two smaller cuts that sell for a lower price but waste less rough? What combination is likely to yield the most? In the end, all calculations centre on one question: how does one maximise the market value of the gems that can be produced from a starter batch?
Come to think of it, these decisions are analogous to ones banks make about data. But when these decisions are taken sub-optimally, there are costs to pay.
The challenges and costs of taking data for granted
The problem for financial enterprises, simply put, starts with: “We have too much data, and it piles on relentlessly, unstructured and in multiple formats.” And it ends in the premise: “We need to extract faster and richer customer insights.” Between those two poles lies the cause-continuum that blocks data exploitation. Be it the lack of data access, data integration complexities, data stagnating in silos, poor data quality, opaque data governance, or the thorny challenge of sharing data between cloud configurations (public cloud and on-premises); not to mention adherence to changing security and regulatory mandates – most organisations face deep challenges in ingesting, integrating, analysing, and sharing data.
As the amount, types and sources of data increase, the challenge only snowballs, and the stakes grow higher. No more than three percent of companies’ data meets basic quality standards. Research shows that 74 percent of data is not analysed in most organisations, and up to 82 percent of enterprises are inhibited by data silos. There are human costs as well. Data scientists report that their primary job dissatisfaction comes from spending most of their time massaging data rather than mining or modelling it. In fact, a leading AI company cites 55 percent of surveyed data scientists saying that the quality and quantity of training data poses the biggest challenge in their jobs.
So how does all the bad Big Data (inaccurate, incomplete, inappropriate) add up economically? Consider multiple sources, and the detrimental effects pile up.
- Gartner’s survey of 500 organisations estimates that poor data quality (DQ) costs organisations an average of $15 million every year
- MIT Sloan reports that employees waste 50 percent of their time coping with mundane data quality tasks
Data scientists are held back by the data, not the science – so what are the possible solutions?
Depending on a bank’s DQ maturity, a quick answer may include: defining adequate data standards and then establishing them across the enterprise; strengthening data governance by including DQ dashboards; and D&A leaders engaging diverse groups (vendors, service providers) to exchange alternative perspectives on best practices and insights.
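To make the idea of enterprise data standards and a DQ dashboard concrete, here is a minimal sketch in Python. The rule names, fields and sample records are illustrative assumptions, not any bank’s actual standard; the point is that codified rules yield pass rates that a governance dashboard can track over time.

```python
# Illustrative data quality (DQ) rules - the fields, rules and
# thresholds are hypothetical, chosen only to show the pattern.

records = [
    {"customer_id": "C001", "email": "a@bank.com", "balance": 1500.0},
    {"customer_id": "C002", "email": "", "balance": -50.0},
    {"customer_id": None, "email": "c@bank.com", "balance": 320.0},
]

def check_completeness(rec):
    """Mandatory fields must be present and non-empty."""
    return bool(rec.get("customer_id")) and bool(rec.get("email"))

def check_validity(rec):
    """Balances must be non-negative; emails must contain '@'."""
    return rec.get("balance", 0) >= 0 and "@" in rec.get("email", "")

rules = {"completeness": check_completeness, "validity": check_validity}

def dq_scorecard(records, rules):
    """Pass rate per rule - the raw feed for a DQ dashboard."""
    return {
        name: sum(rule(r) for r in records) / len(records)
        for name, rule in rules.items()
    }

for name, rate in dq_scorecard(records, rules).items():
    print(f"{name}: {rate:.0%} of records pass")
```

In practice, such rules would run inside the ingestion pipeline and the resulting scores would surface on governance dashboards, turning “poor data quality” from an anecdote into a measurable, trackable metric.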
The other dynamic that governs customer analytics is that fresh implications surface with each technology wave. Take, for example, the dependence of AI-first banks on data.
For these new-age enterprises, reimagining customer engagement across diverse platforms and partner ecosystems doesn’t begin with building a highly flexible, fully automated decisioning layer via an AI-powered capability stack. To manage data accuracy, they have to first reimagine the entire data pipeline and its integrations. Only then can highly personalised, timely and tailored communication be delivered through preferred channels – communication that builds loyalty and maximises customer lifetime value.
All that glitters is gold – when mined correctly
How then does the new-age C-suite banker work out priorities?
The eye-blinding lustre of AI’s much-touted power notwithstanding, CIOs, CDOs and CXOs have to maintain laser focus on data management, particularly its lifecycle – starting with data discovery, definitions, lineage, quality, and frictionless access.
After all, if the diamond industry teaches us one lesson, it is that the skills of a master craftsman can shape the ‘exquisite’ and ‘priceless’ only after the pipeline processes push through the best quality of raw stones.
To learn more about Maveric Systems Data and Analytics capabilities, visit our website.