In the financial services industry, data has always been “big”. Information derived from data is the fuel that drives the engines of global capital markets. Behind every trading decision – to buy, sell, or hold – is an analysis, whether based on a company’s fundamentals, its past market performance, or a trader’s own instincts, that relies on data points. From the ticker tape to modern market data feeds, data has served as a catalyst for both investment decisions and innovation in the ways our markets function.
With the advent of the commercial cloud, organisations of all sizes across all industries have access to incredibly flexible, scalable, enterprise-grade infrastructure. Across industries such as healthcare, oil and gas, media, and yes – financial services, companies’ ability to make the best use of their data sets is no longer limited by the computing power available in their own server rooms or under their employees’ desks; instead, it is enriched by tools such as real-time streaming, predictive analytics, machine learning, low-cost data storage, and petabyte-scale data warehousing. At Amazon Web Services (AWS), we’re helping our financial services customers – from startups to the largest global enterprises – harness the power of big data to bring innovative solutions to market.
Imagine reading 15,000 tweets in a single day. Chris Camillo, the founder of fintech startup TickerTags and author of the book Laughing at Wall Street, used to do just that as an individual investor: starting in 2007, he took $20,000 and turned it into just over $2 million in three years. Camillo’s premise was a simple one: read as much information as possible about companies that interested him and then make investments based on the sentiment of the social networks that he scanned. In 2015, Camillo co-founded TickerTags to enable investors to perform social sentiment analysis using data from online content streams including Twitter, blogs, message boards, and more. Instead of the 15,000 tweets that Camillo could physically read each day, TickerTags users now have access to the Twitter Firehose data feed (more than 500 million tweets per day) and libraries of curated “tags” to query, thanks to the company’s use of Amazon Simple Storage Service (S3) and Hadoop.
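The tag-and-score idea behind this kind of social sentiment analysis can be sketched in a few lines. Everything below – the word lists, the sample tweets, and the “AcmeCo” tag – is invented for illustration; TickerTags’ actual pipeline (Twitter Firehose into S3, queried with Hadoop) operates at a vastly larger scale and with far richer models.

```python
# Toy sketch of tag-based social sentiment scoring. All names and word
# lists here are assumptions for illustration, not TickerTags' method.
import re

POSITIVE = {"love", "great", "record", "beat", "up"}
NEGATIVE = {"hate", "miss", "recall", "down", "lawsuit"}

def tokenize(text):
    """Lowercase a tweet and split it into a set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def sentiment_score(tweets, tag):
    """Return (tweets mentioning `tag`, net sentiment).

    Net sentiment adds +1 per positive word and -1 per negative word
    across every tweet that mentions the tag.
    """
    matched, score = 0, 0
    for tweet in tweets:
        words = tokenize(tweet)
        if tag.lower() in words:
            matched += 1
            score += len(words & POSITIVE) - len(words & NEGATIVE)
    return matched, score

sample = [
    "Love the new widget from AcmeCo, sales will beat estimates",
    "AcmeCo facing a recall, stock heading down",
    "Unrelated chatter about lunch",
]
print(sentiment_score(sample, "AcmeCo"))  # -> (2, 0)
```

Here the bullish first tweet and the bearish second one cancel out – a reminder that raw counts of tagged mentions matter as much as the net score.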
Fintech startups aren’t the only financial firms using big data to innovate. Both FINRA, the primary regulatory agency for broker-dealers in the US, and Nasdaq, the second-largest exchange by market capitalisation, leverage the cloud for data analytics. For market surveillance, each night FINRA loads approximately 35 billion rows of data into cloud storage and uses Amazon Elastic MapReduce (EMR), our managed Hadoop framework, to monitor trading activity on exchanges and market centres in the US. Nasdaq leverages a petabyte-scale data warehouse to store an average daily volume of 7 billion rows of data, upon which it runs analytics for its internal business teams and external broker-dealer customers. Beyond these use cases, financial services firms are leveraging big data analytics, data warehousing, and machine learning to better enable fraud detection; risk analytics, including stress tests mandated by global regulatory agencies; and mobile, voice, and internet banking.
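At its core, a nightly surveillance load like FINRA’s is a large batch aggregation: group billions of trade rows, compute per-group statistics, and flag outliers for review. The sketch below shows that shape in miniature; the schema, threshold, and sample rows are assumptions for illustration only – FINRA’s actual surveillance logic runs on EMR at a scale of tens of billions of rows and is far more sophisticated.

```python
# Minimal, invented sketch of a batch surveillance-style aggregation.
# Real jobs of this kind run distributed (e.g. on Amazon EMR) over
# billions of rows; the threshold and data here are hypothetical.
from collections import defaultdict

def flag_heavy_traders(trades, threshold):
    """Sum share volume per (account, symbol) and flag pairs above `threshold`.

    `trades` is an iterable of (account, symbol, shares) rows.
    Returns the flagged (account, symbol) pairs in sorted order.
    """
    volume = defaultdict(int)
    for account, symbol, shares in trades:
        volume[(account, symbol)] += shares
    return sorted(key for key, total in volume.items() if total > threshold)

trades = [
    ("A1", "XYZ", 600_000),
    ("A1", "XYZ", 500_000),
    ("A2", "XYZ", 200_000),
    ("A2", "ABC", 1_200_000),
]
print(flag_heavy_traders(trades, 1_000_000))  # -> [('A1', 'XYZ'), ('A2', 'ABC')]
```

On EMR the same group-then-filter pattern would be expressed as a MapReduce or Spark job so the aggregation can shard across a cluster instead of a single machine’s memory.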
While big data is “big” in financial services today, it will only get bigger – both in the volume, velocity, and variety of the data sets themselves, and in the use of analytics to produce actionable information and improve the customer experience as the industry’s technical transformation continues. Today’s drivers – agility, the ability to get products and services into customers’ hands more quickly, and the ability to constantly and seamlessly update and improve those services in real time as customer demand changes – will continue to drive innovation in the future. The most innovative organisations embrace a DevOps mentality that breaks down the traditional silos separating business users, software development, and infrastructure management, enabling them to evolve, improve, and deploy applications quickly and efficiently.
In the future, big data – the tools and processes behind it – will no longer be the focus; rather, its output will be expected simply to deliver “best information”, through channels that abstract away the heavy lifting of the actual analytics. Today, financial firms are preparing for a future in which their customers use devices such as voice-powered personal assistants to ask complicated questions about their financial lives. In that future, rather than booking an in-person review with a financial advisor, customers might simply ‘ask’ their devices: “How did my portfolio perform this month?”, “How long until I can retire comfortably?”, “Show me how much money I could save on my utility bill by setting the thermostat four degrees higher”, or even “Please show me options for rebalancing my portfolio to protect against instability in Europe, and then execute the strategy I select”. In the not-so-distant future, these big data applications may not only deliver simple answers to complex calculations but also execute customer commands based on the information provided.
As you consider the future of your own big data projects, think long-term. As you move to take greater advantage of cloud-based analytics, focus on the long-term benefits of flexible, scalable infrastructure that supports projects requiring fast and powerful data collection, processing, analysis, and delivery. By doing so, as the volume, velocity, variety, variability, and veracity of your data sets continue to grow, your speed of innovation will keep pace and you will be well equipped to meet the “best information” demands of our rapidly evolving financial markets.
By Scott Mullins, Head of Worldwide Financial Services Business Development, Amazon Web Services