Unleashing Big Data into the open

By Debra Walton | 15 September 2014

There is no doubt today that big data is spurring innovation. Online retailers use it to better serve customers and thus improve their bottom lines, while sites such as Google and Facebook use it to target ads with incredible precision. The “internet of things” will soon know everything about what’s going on in our physical environment—as will the companies tracking it.

In the financial industry, surprisingly, very few firms are using big data in the way that Amazon, Google, Facebook, and others do. A recent study we commissioned, “Big Data in Capital Markets: At the Start of the Journey,” found that the majority of firms active in capital markets today don’t have a big data strategy in place across the enterprise. Only five percent of the 423 firms contacted felt they had enough knowledge of the subject to participate or were willing to talk about their big data programs. Where such strategies do exist, they tend to sit in silos, separated from the rest of the operation.

Of course, there are a few standouts, such as State Street, which in 2013 announced the launch of a big data initiative called State Street Global Exchange, focused on providing a centralised analytical data platform that its quantitative equity teams can use for portfolio modelling. The situation will certainly change as firms seek more insight, faster response, and greater scalability. Big data mining offers enormous potential for the financial services industry, and firms seeking alpha—in other words, trying to beat their benchmarks—will have no choice but to incorporate big data strategies. Change is on the horizon, and coming fast.

In our financial services industry, there are at least three key categories of big data. Financial firms can create potent tools for achieving alpha by utilising any or all of them, as I’ve been discussing with Stephen Malinak, global head of analytics at our Thomson Reuters Finance & Risk division.

Text is an abundant type of “unstructured” big data. Companies regularly correspond with the Securities and Exchange Commission, other regulators, and investors via documents such as 10-K, 10-Q, and 8-K filings and annual reports, to name just a few. These leave a regulatory paper trail and an imprint of valuable data, yet they are not fully incorporated into research by most fund managers. If and when these reports are used, they are usually read by an analyst, not a computer. Not only does it take an immense amount of time to work through all the footnotes, but the effort then has to be replicated by every other analyst who needs the same data.
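To make the difference between an analyst reading a filing and a computer reading one more concrete, here is a minimal sketch, assuming the filing text has already been downloaded to a local file. The word lists are deliberately tiny illustrations in the spirit of published finance tone dictionaries such as Loughran-McDonald, not the real lists, and the file name is a hypothetical placeholder rather than any particular product or feed.

```python
import re
from collections import Counter

# Tiny illustrative word lists; a production system would use full published
# finance tone dictionaries (e.g., Loughran-McDonald), not these samples.
NEGATIVE = {"loss", "impairment", "litigation", "default", "restatement"}
UNCERTAIN = {"may", "could", "approximately", "uncertain", "contingent"}

def tone_per_thousand_words(filing_text):
    """Return negative and uncertainty word rates per 1,000 words of a filing."""
    words = Counter(re.findall(r"[a-z']+", filing_text.lower()))
    total = sum(words.values()) or 1
    return {
        "negative": 1000 * sum(words[w] for w in NEGATIVE) / total,
        "uncertain": 1000 * sum(words[w] for w in UNCERTAIN) / total,
    }

if __name__ == "__main__":
    # Hypothetical path standing in for a downloaded 10-K, footnotes included.
    with open("10k_filing.txt", encoding="utf-8") as f:
        print(tone_per_thousand_words(f.read()))
```

A scan like this covers every footnote in seconds and can be repeated across thousands of filings, which is precisely the duplication of effort that manual reading cannot avoid.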

Financial professionals must also digest a vast amount of news—which increasingly comes from social media, blogs, and online forums—to stay on top of companies, sectors, industries, and more.

Transactional data, essentially records of what people are buying and selling, is another key source of big data. Just a few years ago it was available only at set times, typically after long delays, but today it can be accessed far more quickly. Transactional data is not limited to bond or stock trades; it also includes consumer-level information such as credit card use or the number of cars in a retailer’s parking lot during a given period, providing valuable insight into consumer sentiment.
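As a purely illustrative sketch, and assuming anonymised card transactions are available as simple (date, merchant category, amount) records, the snippet below rolls them up into a weekly spending series of the kind an analyst might compare against a retailer’s reported sales. The sample records and category names are hypothetical.

```python
from collections import defaultdict
from datetime import date

# Hypothetical anonymised card records: (transaction date, merchant category, amount).
transactions = [
    (date(2014, 9, 1), "apparel", 54.20),
    (date(2014, 9, 3), "apparel", 17.80),
    (date(2014, 9, 9), "grocery", 92.10),
    (date(2014, 9, 10), "apparel", 33.45),
]

def weekly_spend(records, category):
    """Sum spending in one merchant category by ISO calendar week."""
    totals = defaultdict(float)
    for day, cat, amount in records:
        if cat == category:
            year, week, _ = day.isocalendar()
            totals[(year, week)] += amount
    return dict(sorted(totals.items()))

# The direction of a series like this is one crude read on consumer sentiment.
print(weekly_spend(transactions, "apparel"))
```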

A lesser-used form of data, but one that will increasingly become a key part of the alpha equation, is sensor data, such as the output of image processing. This encompasses a wide variety of sources: satellite images (such as those used to monitor crops), facial recognition software, radar images (such as those of oil tank reserves), and more. The internet of things continues to add to the vast store of sensor data from houses, cars, and public infrastructure such as transit systems.

The common thread among these three groups is that they are essentially data exhaust—that is, the by-product of a trade, a news event, or other market activity. Using today’s big data analytics, they can be leveraged faster and with greater finesse than in the past. Instead of spending time slogging through piles of paper, firms can focus on developing winning insights.

We ourselves, as an information company, have always aggregated data—it’s just that today there is so much more of it, arriving at much higher speeds. The amount of data being generated continues to grow at an enormous rate. A McKinsey study found that by 2003 the world had created six exabytes of digitised content since the dawn of civilisation—but that by mid-2013 the world was creating that amount of data every other day. At the same time, the cost of storing it continues to fall, thanks to technologies such as cloud computing.

It is increasingly becoming mandatory for financial services firms to find a way to manage big data in order to comply with regulations such as Sarbanes-Oxley. That means using cutting-edge computing power coupled with curation by the smartest data scientists in the field, who can explain how conclusions were reached and what the implications are for businesses. This stands in contrast to the sort of “black box” data provided by some consultants to financial services firms, with no underlying explanation of the results, or the “I can feel it in my gut” approach still relied upon by many people in the industry.

Firms seeking alpha will have no choice but to dive in—which means accepting that insight gleaned from big data will not just supplement but eventually supplant paper-based research, making personal instinct and judgment all the more powerful.

 

By Debra Walton, Chief Content Officer, Thomson Reuters 
