One person’s risk is another person’s reward. A trite phrase, to be sure, though the sentiment behind it is subtler than it sounds. Evaluative judgments such as “risky,” “good” and “bad” are subjective, and subjectivity has long been the bane of IT and computing systems. The programs, algorithms, reports and systems we rely on to help us evaluate risk are all built on the far more objective binary logic that their creators wielded.
The financial landscape continues to evolve, as does our understanding of a vast, interconnected global network. Many of our models for risk rest on tried-and-true methods: actuarial science is very well established, and the methods derived from it for structured analysis and process are critical to the investment and financial markets. But those same methods represent a formalised, top-down, one-size-fits-all view of risk. Set the dynamism of the markets against these relatively staid and static methods for calculating risk, and the potential for a disconnect is easy to see.
The advent of cloud computing places computational power – processing potential that, only a few generations ago, was unthinkable outside universities and governments – into the hands of ordinary consumers. The significance of this revolution is staggering. Corporations, like individual consumers, can now build applications that scale enormously and that collect and process data of mammoth proportions. The result of this phenomenon is commonly and collectively referred to as ‘big data.’
These new systems don’t necessarily adhere to the rigour and structure of their database predecessors. Data are no longer constrained into predefined schemas.
The challenge in dealing with risk in financial services is that we must continue applying the same proven models and methods of calculating and forecasting risk while marrying those outcomes with the subjectivity and context that each consumer brings to the data. In other words, we need to design systems that break away from the one-size-fits-all, top-down world and combine the fundamentals of risk analysis with ways to make that information relevant to the individual consumer. It is no longer sufficient to flag something as risky because our algorithms say so. We must consider which variables matter most to the consumer, understand why they matter, and let the user’s underlying needs guide the system’s responses. This is where the risk management systems of the future can benefit from the fundamental lack of structure in the large, unstructured data sets that are becoming prevalent today.
It’s not enough to merely say that specific events, actions or data are risky. If we are able to gain insight into the consumer’s motivations, we are able to infer what is important to them when considering risk. For example, one might ask:
- Is the user concerned with complexity?
- Are they concerned with timeliness?
- How does tolerance for financial loss play into calculating risk?
- Does one geography/domicile or source of data present an elevated level of risk?
Fundamentally, these considerations juxtapose a specific system design paradigm (design-driven and user-centric in nature) with traditional risk measurements (time-series, financial and geo-centric risk calculations) and large, unstructured datasets.
If we map the above problem onto a specific domain, such as corporate actions processing, we find an excellent example of the confluence of data, risk, subjectivity and automation. One of the prevalent challenges corporate actions managers face is bringing a subjective evaluation to bear on the events, announcements, positions and options in play on any given announcement. All too often the evaluation of risk is performed at too high a level, which obscures the details and renders everything “risky” (and therefore nothing is). Occasionally it is performed at too low a level – only for a specific event or account – so that risk stays hidden because the evaluation lacks the proper context.
Therefore, the solution is to marry context – the user’s perception of risk – with an organisational standard of risk evaluation. This gives the user the ability to specify, at varying levels and at varying times, the criteria that matter most for the need at hand (the subjective). The consumer’s view of risk stays uncluttered because only the factors he or she cares about enter into the weighting of risk. In other words, the user’s subjective evaluation of what is “risky” is made available to them. Other critical details can still be surfaced, but it is up to the consumer to decide whether they are valuable enough to affect the display.
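As a minimal sketch of how such a marriage of organisational standard and user context might work, consider the following Python fragment. The factor names echo the considerations listed earlier (complexity, timeliness, financial loss, geography); the scores, weights and function name are purely illustrative assumptions, not a description of any actual system.

```python
# Hypothetical sketch: combine an organisational baseline risk score
# per factor with user-specified weights for the factors that matter
# to this particular consumer. All names and numbers are illustrative.

def personalised_risk(baseline_scores, user_weights):
    """Weight each organisational risk factor by the user's stated
    priority. Factors the user has not prioritised contribute nothing
    to the headline score, but remain available in the detail view."""
    detail = {factor: score * user_weights.get(factor, 0.0)
              for factor, score in baseline_scores.items()}
    total_weight = sum(user_weights.get(f, 0.0) for f in baseline_scores)
    headline = sum(detail.values()) / total_weight if total_weight else 0.0
    return headline, detail

# Organisational standard: per-factor risk scores on a 0-1 scale.
baseline = {"complexity": 0.8, "timeliness": 0.4,
            "financial_loss": 0.6, "geography": 0.2}

# This user cares chiefly about timeliness and potential loss.
weights = {"timeliness": 0.7, "financial_loss": 0.3}

score, detail = personalised_risk(baseline, weights)
print(round(score, 2))  # headline reflects only the user's chosen factors
```

The organisational scores are never discarded; the user’s weights simply determine which of them drive the headline figure, which is the article’s point about keeping other details available without letting them clutter the display.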
Many examples could carry this illustration of the marriage of context and content further. The design of future systems will be heavily influenced by the needs of the user, while the paradigm of forcing data and consumers to conform to rigid ideals slowly recedes. Increasing data volume and complexity is a reality we must recognise. Rather than creating ever more complex workarounds for the inability of traditional IT systems to accommodate subjectivity and context, we must embrace the diversity of data and emphasise the user’s perspective on its value.
By Daniel Retzer, Senior Vice President and Chief Technology Officer, SunGard’s North American Securities Business and SunGard’s XSP