Understanding the “Machine Learning Way” to Solve Business Problems through Real-World Scenarios

Ironically, one of the foremost barriers preventing the exploitation of machine learning in a business is neither the implementation of the algorithm nor the retrieval of the data (the how): the toughest part is to recognize the right occasion to use it (the why)! We need to identify within the complex map of business processes (the operating model of the company) the specific steps where algorithms can bring real value if adopted. If we develop our sensitivity to recognize such leverage points, then we will be able to find in our work the first opportunities to put ML into practice.  

This article is a section taken from one of the chapters of the book Data Analytics Made Easy by Andrea De Mauro. The book introduces the concepts of data analytics and helps you get your data ready and apply algorithms without having any prior knowledge of implementing analytics solutions. 

There is a machine learning way (we can call it the ML way) to operate business processes. Let’s go through three sample scenarios to distinguish between the traditional and the ML ways to create value with data. 


Scenario #1: Predicting market prices 

You work for a car dealership, specializing in multibrand, used vehicles. You notice that some cars take much longer to sell because their initial listing price was too high versus customers’ expectations. To improve this situation, you want to implement a technical solution to guide the price-setting process in a data-based manner: the objective is to anticipate the actual market price of each car to keep inventory under control while maximizing revenues.  

Here are two possible approaches to solve this: 

  • The traditional way is to codify a set of rules that define how the car’s features impact the market price and build a “calculator” that implements these rules. The calculator will leverage a mix of available data points (like the starting price of new cars by brand and model, or the cost of accessories) and some rules of thumb derived from common sense and the expertise of those who have been in the business for some time. For example, some rules can be: the car depreciates by 20% during its first year and then by an additional 15% for every further year of age, or cars driven more than 10,000 miles per year count as high-mileage and lose a further 15% of their value, and so on. To build this calculator, you will need to implement these if-then rules in a programming language, which means you also need a programmer to develop and maintain the code over time.  
  • The ML way is to get an algorithm to analyze all the data related to previous sales and autonomously “learn” the rules that connect the car features (like mileage, age, make, model, accessories, and so on) with the actual price at which the car sold. The algorithm can identify recurrent patterns, which partly confirm the rules we already had in mind (maybe adding a further level of precision to those approximate rules of thumb) and partly reveal new and unexpected connections, which humans failed to recognize and summarize in a simple rule (like, for instance, model X depreciates 37% more slowly when equipped with this rare optional). Using this approach, the machine can keep learning over time: the rules underlying the price will evolve and get automatically updated as new sales happen and new car models enter the market.   

The main difference is that in the traditional approach, our price prediction will leverage the existing human knowledge on the matter (if adequately documented and codified), while the ML way provides for that knowledge (and possibly more) to be autonomously learned from data by the machine.  
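The contrast between the two approaches can be sketched in a few lines of code. This is only an illustration: the rule thresholds come from the example above, while the tiny sales dataset, feature names, and numbers below are invented, and a real model would be trained on thousands of past sales with many more features.

```python
# Traditional way vs. ML way for used-car pricing (toy illustration).
from sklearn.linear_model import LinearRegression

def rule_based_price(new_price, age_years, miles_per_year):
    """Traditional way: hand-coded rules of thumb."""
    price = new_price * 0.80                 # -20% in the first year
    price *= 0.85 ** max(age_years - 1, 0)   # -15% for each further year
    if miles_per_year > 10_000:              # high-mileage penalty
        price *= 0.85
    return price

# ML way: learn the pricing rules from past sales instead of coding them.
# Columns: [new_price, age_years, miles_per_year] — all values made up.
past_sales_X = [
    [30_000, 1,  8_000],
    [30_000, 3, 12_000],
    [20_000, 2,  9_000],
    [20_000, 5, 15_000],
]
past_sales_y = [24_000, 15_500, 14_800, 8_200]  # actual selling prices

model = LinearRegression().fit(past_sales_X, past_sales_y)
learned_estimate = model.predict([[25_000, 2, 10_000]])[0]
manual_estimate = rule_based_price(25_000, 2, 10_000)
```

The key difference is visible in the code itself: in the first function every rule is written (and must be maintained) by a human, while in the second the coefficients are estimated from the sales history and can be re-estimated automatically as new sales arrive.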

Scenario #2: Segmenting customers 

You work in the marketing team of a local supermarket chain. You are responsible for preparing a weekly newsletter to be sent to those customers who signed up to the loyalty program and opted in to receive emails from you. Instead of sending a one-size-fits-all newsletter to everyone, you want to create a reasonable number of different versions and send each version to the corresponding group of customers. By selecting key messages and special offers that are closer to each group’s needs, you aim to maximize the engagement and loyalty of your entire customer base.  

There are at least two ways to create such groups: 

  • The traditional way is to use common sense and your existing knowledge to select the features that can reasonably discriminate across different types of customers, each having a more specific set of needs. For instance, you can decide to use the age of household members and average income level to define different groups: you will end up with groups like the affluent families with children (to whom you might talk about premium-quality products for kids) and low-income 60+ empty nesters (more interested in high-value deals). This traditional approach assumes that the needs within each group—in this case, solely defined by age and income—are homogeneous: we might end up ignoring some meaningful differences across other dimensions. 

  • The ML way is to ask an algorithm to identify for us a number of homogeneous groups of customers that systematically display similar shopping behavior. Not being restricted by the cognitive capacity of humans (who would struggle to take dozens of variables into account at once) or by their personal biases (driven by individual and specific experiences), the algorithm can come up with groups that are more specific and more closely connected to the actual preferences of each customer, like: food lovers who shop on the weekend (to whom we might send some fancy recipes every Saturday morning) and high-income pet owners (wanting to take care of their beloved furry companions).   

Traditionally, we would differentiate actions by considering apparent—and, sometimes, naïve—differences, while the ML way goes straight to the core of the matter and identifies those homogeneous groups that best describe the diversity of our customer base, keeping everyone included and engaged. 
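The ML way described here is essentially clustering. Here is a minimal sketch using k-means, a standard clustering algorithm; the three behavioral features and the handful of customer rows are invented for illustration, and a real loyalty-card dataset would carry far more dimensions.

```python
# Letting a clustering algorithm find homogeneous customer groups.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: [weekly_spend, weekend_visit_share, pet_aisle_share] — invented.
customers = [
    [120.0, 0.9, 0.02],   # looks like a weekend food lover
    [110.0, 0.8, 0.01],
    [ 60.0, 0.2, 0.40],   # looks like a pet owner
    [ 70.0, 0.3, 0.35],
    [ 30.0, 0.5, 0.05],   # looks like a low-spend shopper
    [ 25.0, 0.4, 0.03],
]

# Scale the features so no single dimension dominates the distance metric.
X = StandardScaler().fit_transform(customers)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Customers sharing a label receive the same newsletter version.
```

Note that nobody told the algorithm what “pet owner” means: the groups emerge from the shopping behavior itself, which is exactly the point made above about escaping our preconceived segmentation dimensions.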

Scenario #3: Finding the best ad strategy 

You work in the digital marketing department of a mid-sized company providing online photo printing services. Your responsibility is to define and execute digital advertising campaigns with the ultimate objective of maximizing their return. Instead of having a single campaign based on the same content, you want to optimize your strategy by testing different digital assets and seeing what works best. For example, you might have banners showing different products (like photo books and cheesy mugs with a portrait on them), various colors and fonts, or alternate versions of the tagline text. You can post your ads on social media and search engines, and you can control the available budget and duration of each test. 

Also in this case, we can recognize two possible approaches to make this happen: 

  • The traditional way is to run so-called A/B/n testing: you define several alternative executions (for instance, three similar ads with the same graphic but three different call-to-action texts like buy me now, check it out, or click to learn more), run each of them—let’s imagine—10,000 times, and calculate their individual return by counting, for example, the number of orders generated by each execution. You will need to repeat the test over time to check if it’s still valid and, if you want to optimize across other dimensions (like the time of day at which the ad is served, or the location of users, and so on), you will end up with a growing number of combinations to test (and a larger cost of the experiment). 

  • The ML way is to let an algorithm dynamically decide what to test and how, moving progressively along the path leading to the best combinations of variable aspects. At first, the algorithm will start similarly to an A/B/n test, trying some random combinations: this will be enough to grasp some initial knowledge about the most promising directions to take. Over time, the algorithm will focus its attention (and budget) more and more on the few paths that are working best, fine-tuning and enriching them with an increasingly larger number of factors. In the end, the algorithm might settle on some very specific choices, like using a pink font and displaying a photo mug for people in their 50s who are connecting through a laptop. 

In both approaches, we have used tests to learn what the best ad strategy was. However, in the traditional A/B/n testing approach, we had to define test settings based on prior knowledge and common sense. In the ML way, we put the machine in the driving seat and let it interact with the environment, learn progressively, and dynamically adjust the testing strategy, so as to minimize costs and get higher returns. 
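One common way to implement this adaptive testing is a multi-armed bandit; the article does not prescribe a specific algorithm, so the epsilon-greedy sketch below is just one simple option, with invented conversion rates standing in for the unknown performance of three ads.

```python
# Toy epsilon-greedy bandit: budget flows toward the best ad over time,
# instead of being split evenly across ads as in A/B/n testing.
import random

random.seed(42)
true_rates = [0.02, 0.03, 0.05]   # hidden conversion rate of each ad (invented)
shown = [0, 0, 0]                 # impressions served per ad
won = [0, 0, 0]                   # conversions observed per ad

for _ in range(10_000):
    if random.random() < 0.1:     # explore: serve a random ad 10% of the time
        arm = random.randrange(3)
    else:                         # exploit: serve the best ad seen so far
        arm = max(range(3), key=lambda a: won[a] / shown[a] if shown[a] else 1.0)
    shown[arm] += 1
    won[arm] += random.random() < true_rates[arm]

# Typically, most of the 10,000 impressions end up on the strongest ad,
# while the weaker ads still receive a small exploratory share.
```

Unlike a fixed A/B/n split, the exploration rate keeps a small share of traffic on the losing ads, which is what lets the algorithm notice if customer preferences shift later on.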

The business value of learning machines 

These three scenarios unveil some recurrent differences between the traditional and the ML way of operating business processes. Let’s have a look at the incremental benefits we get from ML: 

  • Both approaches rely on technology and data, but the ML way leverages them more extensively. If you allow algorithms to explore the full information content of a large database, you capitalize on the massive horsepower of digital technologies and avoid the bottleneck of human cognitive limitations. With ML, more data can be considered at once (think about the many attributes of customers to be segmented or the factors that differentiate digital ads), which is likely to result in better business outcomes. In other words, the ML way tends to be more accurate and effective than traditional approaches, leading to an economic advantage for the company relying on it.  
  • Once it is correctly set up, the ML way can operate independently and with minimal supervision from its human companions. This means automating intellectual tasks and, as a result, gaining incremental efficiency and productivity for the business. Because of this automation, ML algorithms can stay always on and keep learning unceasingly on a 24/7 schedule. This means that they will get better and better at what they do over time, as more data comes in, without necessarily requiring further upgrades or human intervention. Think about the new car models appearing in the market or the evolving preferences of customers served by a digital ad: algorithms will observe reality vigilantly, spot trend breakers, and react accordingly to keep the business going. 
  • The traditional approach relies on previous human knowledge, while the ML way generates additional knowledge. This is a game-changing and fascinating benefit of ML called Knowledge Discovery: learning algorithms can provide a deeper and sometimes surprising understanding of reality (think about the subtle rules that explain car price formation, or the unexpected connections pulling consumers together into homogeneous groups) that can’t be spotted by just looking at the data. It is the capacity to hack reality by finding unexpected patterns in the way things work. If the learning algorithm provides for its outcome to be humanly intelligible (and many of them do), this knowledge will accrue to the overall know-how of the organization in terms of customer understanding, market dynamics, operating model, and more: this can be as valuable as gold, if used well! 

Making processes more efficient and effective while systematically deepening our understanding of the business: these benefits alone are enough to explain why ML is currently exploding in modern business, making it a competitive advantage that nobody wants to risk being without. Let’s now meet the types of algorithms that can enable all of this. 


In this article, we went through a series of practical business scenarios where we saw intelligent algorithms at work. These examples showed us how, if we look carefully, we can often recognize occasions to leverage machines for getting intellectual work done. We saw that, as an alternative to the traditional mode of operating, there is an ML way to get things done: whether we are predicting prices, segmenting consumers, or optimizing a digital advertising strategy, learning algorithms can be our tireless companions. If we coach them well, they can extend human intelligence and provide a sound competitive advantage to our business. 

About the Author 

Andrea De Mauro is Director of Business Analytics at Procter & Gamble, looking after the continuous elevation of the role of data and algorithms in the business and the development of digital fluency across the global organization. He is a professor of Marketing Analytics and Applied Machine Learning at the Universities of Bari and Florence, Italy, and at the International University in Geneva, Switzerland. His research investigates the essential components of Big Data as a phenomenon and the impact of AI and Data Analytics on companies and people. He is the author of popular science books on data analytics and various research papers in international journals.

Editor’s note:

At our upcoming event this November 16th-18th in San Francisco, ODSC West 2021 will feature a plethora of talks, workshops, and training sessions on machine learning and machine learning research. You can register now for 30% off all ticket types before the discount drops to 20% in a few weeks. Some highlighted sessions on machine learning include:

  • Towards More Energy-Efficient Neural Networks? Use Your Brain!: Olaf de Leeuw | Data Scientist | Dataworkz
  • Practical MLOps: Automation Journey: Evgenii Vinogradov, PhD | Head of DHW Development | YooMoney
  • Applications of Modern Survival Modeling with Python: Brian Kent, PhD | Data Scientist | Founder The Crosstab Kite
  • Using Change Detection Algorithms for Detecting Anomalous Behavior in Large Systems: Veena Mendiratta, PhD | Adjunct Faculty, Network Reliability and Analytics Researcher | Northwestern University

Sessions on MLOps:

  • Tuning Hyperparameters with Reproducible Experiments: Milecia McGregor | Senior Software Engineer | Iterative
  • MLOps… From Model to Production: Filipa Peleja, PhD | Lead Data Scientist | Levi Strauss & Co
  • Operationalization of Models Developed and Deployed in Heterogeneous Platforms: Sourav Mazumder | Data Scientist, Thought Leader, AI & ML Operationalization Leader | IBM
  • Develop and Deploy a Machine Learning Pipeline in 45 Minutes with Ploomber: Eduardo Blancas | Data Scientist | Fidelity Investments

Sessions on Deep Learning:

  • GANs: Theory and Practice, Image Synthesis With GANs Using TensorFlow: Ajay Baranwal | Center Director | Center for Deep Learning in Electronic Manufacturing, Inc
  • Machine Learning With Graphs: Going Beyond Tabular Data: Dr. Clair J. Sullivan | Data Science Advocate | Neo4j
  • Deep Dive into Reinforcement Learning with PPO using TF-Agents & TensorFlow 2.0: Oliver Zeigermann | Software Developer | embarc Software Consulting GmbH
  • Get Started with Time-Series Forecasting using the Google Cloud AI Platform: Karl Weinmeister | Developer Relations Engineering Manager | Google

ODSC Community

The Open Data Science community is passionate and diverse, and we always welcome contributions from data science professionals! All of the articles under this profile are from our community, with individual authors mentioned in the text itself.