How to make better decisions with AI simulations
Have you ever wondered what your world would be like if you could just wind back the clock, make a different decision about something you weren’t happy with, and see how things played out? “If Jane was managing that account instead of Peter, would they have churned?” Wouldn’t it be wonderful if we could try out different scenarios before committing to them? Well, I have good news, my friends, because this is an area where machine learning can really come to your aid, and in this article we’ll take a look at what you can do and why it’s so useful.
Models but not as we know it
This is a model
When I think of models, the inner child in me thinks of something like this. It’s a scale model of an F-14 Tomcat, and I remember making one very similar as a ten-year-old, getting glue on the kitchen table and soaking transfers in water for hours. The point, though, is that it’s a representation of something that exists in real life, and it’s clear to anyone who looks at this lump of plastic that the thing it represents is an aircraft that flies and has recognisable features like wings and a cockpit. This model represents a physical object.
Now in the world of AI the word model gets bandied about a lot: we have LLMs (large language models), computer vision models, forecasting models, and I think the word gets used so much that we forget what it means. At the end of the day the models we use in machine learning are no different from our F-14 Tomcat - they’re representations of something. In the machine learning world the things they represent can be obvious or abstract. A computer vision model that finds growths in an X-ray models a radiologist; the models in autonomous vehicles are representations of drivers; a model that plays chess emulates a chess master.
Machine learning models that we use in business represent more abstract concepts: lead health, marketing channel effectiveness, sales cycle durations, customer satisfaction. Nevertheless they are still representations of things that happen in the real world and carry value.
The beauty of models, of course, is that they can be modified to let us see the consequences of our actions without actually having to perform those actions in real life. We could paint our plane bright pink and put a Barbie sticker down the side without incurring the wrath of some rear admiral, or we could control the sensor inputs on an autonomous vehicle in a lab to simulate someone stepping into the road without putting anyone’s life at risk.
What If?
It follows that we can use the same principle to simulate scenarios with the machine learning models we use in a business setting. By simply adjusting the inputs to our model, we can run as many simulations as we like to answer questions like:
If I changed the proportion of spend on individual marketing channels what would be the effect on lead generation?
If I changed the Sales Rep on an opportunity how much more likely would it be that it would be won?
If I shared specific blog posts with my leads how much more likely would they be to convert to opportunities?
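To make the mechanics concrete, here’s a minimal sketch in Python. The `win_probability` function is a hand-rolled stand-in for a real trained model (its features and weights are invented purely for illustration); the point is that a what-if scenario is just a copy of the model’s inputs with one value changed, re-scored.

```python
def win_probability(deal: dict) -> float:
    """Stand-in for a trained model: scores a deal on a few features.
    In practice this would be a real ML model's predict function."""
    score = 0.2
    score += {"Jane": 0.25, "Peter": 0.10}.get(deal["sales_rep"], 0.0)
    score += 0.05 * deal["meetings_held"]
    return min(score, 1.0)

def simulate(deal: dict, **changes) -> float:
    """Run a what-if: copy the deal, apply the changes, re-score."""
    scenario = {**deal, **changes}
    return win_probability(scenario)

deal = {"sales_rep": "Peter", "meetings_held": 2}
baseline = win_probability(deal)             # 0.40
what_if = simulate(deal, sales_rep="Jane")   # 0.55
print(f"Switching rep changes win probability by {what_if - baseline:+.2f}")
```

Note that the original deal is never mutated - each scenario is a fresh copy, so you can run as many simulations as you like against the same baseline.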
Another way of looking at it: if you have a machine learning system that models part of your operation, then you have the ability to run simulations on that part of your operation. Now obviously the quality of the simulations will only be as good as the quality of the model. The better the model represents the thing we want to emulate, the more realistic our simulations will be.
Recommendations
There’s more, because you can use the results of many simulations to do more powerful things like generate recommendations. For example, something I have done successfully in the past is to use a model that predicts outcomes on opportunities based on activity data. I was then able to ask the model to simulate what the outcome would be if I performed one of a set of actions (e.g. proposing a webinar, emailing marketing material, or asking for a meeting). By trying each action in turn I could infer the best action to take based on the predicted improvement in the deal outcome, and this became the basis of a “next best action” recommendation. The same principle could be applied to do things like:
Recommend a pricing discount
Recommend a piece of content to share with a lead
Of course there are other ways of making recommendations, but this is a simple approach that can work well.
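Here’s a sketch of that next-best-action loop. Again, `predict_win` is a hypothetical stand-in for the trained opportunity model (the action weights are invented); the recommender just simulates each candidate action and picks the one with the biggest predicted uplift.

```python
def predict_win(opp: dict) -> float:
    """Stand-in model: more engagement activity -> higher win chance."""
    weights = {"webinar": 0.08, "marketing_email": 0.03, "meeting": 0.12}
    base = 0.30
    for action in opp["activities"]:
        base += weights.get(action, 0.0)
    return min(base, 1.0)

def next_best_action(opp: dict, candidates: list) -> str:
    """Simulate appending each candidate action; pick the biggest uplift."""
    baseline = predict_win(opp)
    uplifts = {}
    for action in candidates:
        scenario = {**opp, "activities": opp["activities"] + [action]}
        uplifts[action] = predict_win(scenario) - baseline
    return max(uplifts, key=uplifts.get)

opp = {"activities": ["marketing_email"]}
print(next_best_action(opp, ["webinar", "marketing_email", "meeting"]))  # meeting
```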
Analysis
There’s even more though, and this one’s a bit trickier to explain, but the aim here is to find the true underlying effect of some factor that influences the performance of the sales operation. Let’s take Lead Source, for example. We want to compare the true effects of Referrals, Inbound Web Enquiries, Cold Calling and Partner Generated Leads on win rates, as this will help us decide which channels we should invest in.
Now, running a simulation on a single deal for each of those Lead Source values, as we did for the recommendations, is certainly possible, but it would only give us a single data point since we are only using one deal. What we really want is to run the simulations on lots and lots of deals, as this will increase our sample size and allow us to derive more statistically robust insights.
As it turns out, we can do this by taking the historical sales data the machine learning model was trained on and running lots of simulations in which we change the Lead Source across all the deals in the dataset. We simply set the Lead Source to each value in turn and measure the difference in the predictions.
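In code, the sweep looks something like this. `predict_win` is again a made-up stand-in for the trained model, and the per-source effects are invented numbers; the technique itself is just “overwrite one feature on every historical deal, then average the predictions”.

```python
def predict_win(deal: dict) -> float:
    """Stand-in for the trained model, with invented per-source effects."""
    source_effect = {"Referral": 0.30, "Inbound Web": 0.20,
                     "Cold Call": 0.05, "Partner": 0.15}
    return min(0.25 + source_effect[deal["lead_source"]]
                    + 0.02 * deal["contacts_engaged"], 1.0)

def average_effect(deals: list, feature: str, value) -> float:
    """Force `feature` to `value` on every deal and average the predictions."""
    return sum(predict_win({**d, feature: value}) for d in deals) / len(deals)

# Pretend historical dataset (in practice: the model's training data).
deals = [{"lead_source": "Cold Call", "contacts_engaged": n} for n in range(1, 6)]
for source in ["Referral", "Inbound Web", "Cold Call", "Partner"]:
    print(source, round(average_effect(deals, "lead_source", source), 3))
```

Because every deal is scored under every Lead Source value, the other features are held constant and the differences between the averages isolate the source’s contribution.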
This principle is something I saw in one of Jeremy Howard’s fast.ai videos a long time ago, and I really like the idea that with a bit of imagination we can repurpose a model to help solve completely new tasks. This technique is used to create powerful plots like Partial Dependence Plots (PDPs) and ICE plots, which can help uncover patterns, trends, and key drivers in sales data by showing how specific factors influence outcomes. These in turn can help provide insights like:
show that leads from LinkedIn have the highest conversion rates, while cold outreach has the lowest.
reveal that customers who log in less than twice a week have a much higher churn risk.
show that when a deal remains in the same stage for more than 30 days, the risk of slippage increases significantly.
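The same sweep idea extends to numeric features. The sketch below computes ICE-style curves: for each deal we vary only “days in stage” across a grid and record the predicted slippage risk, and averaging the curves gives a partial dependence curve. `predict_slippage` and its built-in 30-day penalty are invented for illustration.

```python
def predict_slippage(deal: dict) -> float:
    """Stand-in model: risk grows with days in stage, with an invented
    step penalty once a deal has stalled for more than 30 days."""
    risk = 0.10 + 0.01 * deal["days_in_stage"]
    if deal["days_in_stage"] > 30:
        risk += 0.20
    return min(risk + 0.05 * deal["stage"], 1.0)

def ice_curves(deals: list, grid: list) -> list:
    """One predicted-risk curve per deal, varying only days_in_stage."""
    return [[predict_slippage({**d, "days_in_stage": g}) for g in grid]
            for d in deals]

deals = [{"days_in_stage": 10, "stage": s} for s in range(3)]
grid = [10, 20, 30, 40]
curves = ice_curves(deals, grid)
pdp = [sum(col) / len(col) for col in zip(*curves)]  # average of ICE = PDP
print(pdp)  # the jump between 30 and 40 days exposes the slippage threshold
```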
Conclusion
Running what-if scenario simulations is an incredibly useful and powerful technique that can be used to sense-check ideas before we put them into practice. As with everything in machine learning, I think it works best when we use it as a guide to inform our decision making, having first understood the constraints these models are operating under. I hope you’ll agree that with a bit of imagination we can make models work harder for us, ultimately helping us make decisions that lead to better sales performance and more revenue.