Data Science for Risk Management in a Global Market

Monetizing in a global market is challenging. Many organizations hope that by the time an opportunity arises, they’ll have a plan in place to handle both the risk and the opportunity of monetizing any new market by using data science for risk management.

Prabhu Sadasivam’s 2019 talk for ODSC’s Accelerate AI, “Data Science for Risk Management in a Global Market,” lays out strategies for using data science to manage risk as companies expand into new markets and identify up-and-coming ones. He offers practical strategies, along with pitfalls organizations must avoid in order to build a reliable approach. Let’s learn from his expertise.

Consolidate and Simplify Your Data

When you move into new markets, you must create a master file identifying the unique list of businesses or customers. You most likely have a flood of information coming in, but the idea is to distill it into something simpler: your master file. It gives you a single source of truth to work from.

It also helps you identify inconsistencies in your data and uncover real-world risks in your market. In Sadasivam’s real-world example, creating a master list revealed individuals applying for fraudulent life insurance policies in multiple states. Once the information was compiled into a single record, the risk of loss through fraud was clear.

Building a centralized database also allows you to create robust services. Customer records move with customers as they interact not only with your business but with the entire field. Again, centralizing information makes it easier to detect fraud, accidental duplicate records, and other mistakes that cost companies money.
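The consolidation step above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the method from the talk: it assumes each raw record carries a name, date of birth, and state, and collapses the same identity seen in several states into one master entry, the pattern that exposed the multi-state fraud in Sadasivam’s example.

```python
# Hedged sketch: consolidate per-state records into a master file keyed by a
# normalized (name, dob) identifier. Field names are illustrative assumptions.
from collections import defaultdict


def build_master_file(records):
    """Group raw records by a normalized key and collect the states each
    identity appears in."""
    master = defaultdict(lambda: {"states": set(), "records": []})
    for rec in records:
        key = (rec["name"].strip().lower(), rec["dob"])
        master[key]["states"].add(rec["state"])
        master[key]["records"].append(rec)
    return dict(master)


def flag_multi_state(master, limit=1):
    """Return keys that appear in more states than expected -- candidates
    for a fraud review."""
    return [k for k, v in master.items() if len(v["states"]) > limit]
```

A record for “Jane Doe” filed in two states would surface as a single master entry with two states attached, ready for review.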

Link Your Data Entries

Once you’ve consolidated your information, the next step is using machine learning to link potential duplicates. You don’t have to use unsupervised learning; you could use something like fuzzy matching to start making these connections.
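To make the fuzzy-matching idea concrete, here is a minimal sketch using Python’s standard-library `difflib`. The talk does not prescribe a specific tool, so `SequenceMatcher` stands in for whatever similarity math your team prefers, and the threshold is an arbitrary assumption.

```python
# Hedged sketch: pairwise fuzzy matching of record names to surface
# potential duplicates for review. Threshold is illustrative.
from difflib import SequenceMatcher


def link_candidates(names, threshold=0.85):
    """Return index pairs of names similar enough to be potential duplicates."""
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            ratio = SequenceMatcher(None, names[i].lower(), names[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((i, j))
    return pairs
```

The pairwise loop is O(n²), so in practice you would block records (e.g. by postal region) before comparing, but the linking logic is the same.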

Identifying these potential duplicates lets you weed out risky customers, and it also surfaces market opportunities by identifying new ones. This master file becomes your market pipeline, continually producing insights that reveal new opportunities and flag risky entries.

Not Everything Is Predictive Analytics

Not all of your data involves predictive analytics. Sometimes, the opportunity to move into a new market exists before you have predictive capabilities. So how do you work with this type of market?

For example, in a country where there’s no defined target variable, your work with your master file is twofold:

  • Analyze the data for outliers (potential fraud, waste, or abuse cases) – For example, Sadasivam’s team uses Mahalanobis distances to identify outliers.
  • Build performance data through a database – This allows you to identify opportunities and become a leader in an unestablished market.
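The first bullet can be illustrated with a small, self-contained example. This is a generic Mahalanobis-distance outlier check, not Sadasivam’s actual implementation; it works in two dimensions with a hand-rolled covariance inverse to keep the sketch dependency-free, and the distance threshold is an assumption.

```python
# Hedged sketch: flag outliers by Mahalanobis distance from the sample mean.
# Pure Python, 2-D features only, for illustration.
import math


def mahalanobis_outliers(points, threshold=3.0):
    """Return indices of (x, y) points whose Mahalanobis distance from the
    sample mean exceeds `threshold`."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Sample covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    # Inverse of the 2x2 covariance matrix
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    outliers = []
    for i, (x, y) in enumerate(points):
        dx, dy = x - mx, y - my
        d2 = dx * (ixx * dx + ixy * dy) + dy * (ixy * dx + iyy * dy)
        if math.sqrt(d2) > threshold:
            outliers.append(i)
    return outliers
```

Unlike a plain Euclidean check, the Mahalanobis distance accounts for the scale and correlation of the features, so a point is flagged only when it is unusual relative to the data’s own spread. In production you would typically reach for `scipy.spatial.distance.mahalanobis` and robust covariance estimates rather than hand-rolling the math.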

When you build your own file, you can begin working with the markets without blind trial and error. As you move into a developing market that has no established data sources, you can make decisions and take advantage of the opportunity with less risk.

The Lean Model for Opportunity Markets

So what does this data set get you? Once you’ve identified your market opportunity, it’s time to move quickly. Lean models apply here, as with many things in the new continuous intelligence economy, so make sure your company is ready to pivot.

This isn’t as simple as taking your data algorithm from an established market and applying it to the new market. Instead, Sadasivam suggests a different approach — separation.

Separate your market algorithm into three sections:

  1. Attributes – separate the attributes from the model.
  2. Model algorithm “math” – separate the math layer, or go to an open-source library for an as-is model.
  3. Calibration and reason codes – separate these into your final layer.

Now you can set up publishable libraries, keeping the variable attributes within each layer. You won’t have to start from scratch in each market, but you also aren’t forcing algorithms where they don’t fit.
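The three-layer separation above can be sketched as three independent functions. All names here are hypothetical, and the linear score is a stand-in for whatever model math (or open-source library) a given market uses; only the shape of the separation comes from the talk.

```python
# Hedged sketch of the three-layer separation: attributes, model "math",
# and calibration/reason codes, each swappable per market.

def extract_attributes(record, spec):
    """Attribute layer: a market-specific mapping from raw fields to model
    inputs. `spec` maps attribute names to extraction functions."""
    return [spec[name](record) for name in sorted(spec)]


def score_model(features, weights, bias=0.0):
    """Model 'math' layer: a market-agnostic linear score. This layer could
    be replaced as-is by an open-source model."""
    return bias + sum(w * f for w, f in zip(weights, features))


def calibrate(raw_score, cutoffs):
    """Calibration layer: map a raw score to a market-specific reason code.
    `cutoffs` is a list of (minimum_score, code) pairs, highest first."""
    for cutoff, code in cutoffs:
        if raw_score >= cutoff:
            return code
    return "REVIEW"
```

Entering a new market then means supplying a new `spec` and new `cutoffs` while reusing the scoring layer, which is exactly the reuse-without-forcing trade-off the layering is meant to buy.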

The Importance of Adaptation

One of the most significant ways to minimize risk within a global economy and emerging markets is to remain adaptive to circumstances. No part of your strategy can be applied entirely to a new market; you must understand how to adapt within these frameworks.

  • Local expertise: We must understand how valuable subject matter expertise is within a developing market. Sadasivam is clear: you must have boots on the ground giving you real-time insight into the market itself. It isn’t enough to study the data in an office from 1000 miles away. 
  • Local compliance: Compliance and regulations are other obstacles. Each new market has its own compliance and regulatory requirements; you must have an intimate understanding of the unique regulations for each new market without assuming your current market regulations also apply (or apply in the same way).
  • Local geographic markets: The US market has ZIP codes to define location, but this concept isn’t universal. There’s no such thing as a global address cleaner.
  • Local data types: Understand the barriers for types of data specific to your emerging markets.

The Importance of Macroeconomics

Another concept is understanding how macroeconomics differ across markets. For example, in some markets, there’s no concept of positive credit. Reports are only filed if there are adverse remarks on someone’s credit. You must adapt your models and data methods to account for these differences.
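The negative-only credit example has a concrete modeling consequence: the absence of a report means something different in each regime. The mapping below is purely illustrative, an assumption about how a model input might encode that difference, not a method from the talk.

```python
# Illustrative only: in a negative-only bureau, "no file" usually means
# "no adverse remarks", not "no history". Encodings and the 650 cutoff
# are assumptions for the sketch.

def credit_signal(report, bureau_type):
    """Return a model-ready signal from a (possibly missing) credit report."""
    if bureau_type == "full":            # positive and negative reporting
        if report is None:
            return "thin_file"           # genuinely unknown history
        return "positive" if report["score"] >= 650 else "adverse"
    # Negative-only bureau: a report exists only when something went wrong.
    return "clean" if report is None else "adverse"
```

A model trained on full-bureau data would silently mis-score a negative-only market if this distinction were ignored, which is the kind of adaptation Sadasivam is pointing at.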

This provides the chance for your organization to be the driver of the market and understand how to operate. When you invest and study the market, you can be a driver of change and become an expert.

The Size and Availability Matter for Data Science and Risk Management

However, not everything is a big data problem. The most crucial ingredient is subject matter expertise. You don’t have to fight the math in all your algorithms; sometimes single-node solutions perform better. Choose between big data and single-node wisely.

Cloud computing can support all of these approaches. Still, you must consider whether your cloud provider operates in your target market and whether it supports disaster recovery within the developing market.

Once you’ve assessed the size of your market and the availability of your cloud solutions, you’ve reached the final step of risk mitigation and movement within a target market.

Watch Sadasivam’s talk, and other real-world cases from Accelerate AI, to find out what algorithms and methods his team deploys through real-world use cases and to understand how this can play out in emerging markets.

Elizabeth Wallace, ODSC

Elizabeth is a Nashville-based freelance writer with a soft spot for startups. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do. Connect with her on LinkedIn here: https://www.linkedin.com/in/elizabethawallace/