We hear two common questions when talking with users who are new to the artificial intelligence (AI) space and looking to leverage AI in their business: “What should I predict?” and “Can I build a model to accomplish [insert random task]?”
Because there are so many applications and tasks you can enhance by deploying a machine learning (ML) model, we often find it helpful to start by sharing a basic mental framework for how models work, then apply it to the business and its data.
The most important thing to understand about ML models is that, at their core, they are pattern-matching machines. When you train a model, you teach it to recognize patterns in data. When you ask that model to make a prediction, you show it an example, and it does its best to determine what pattern it matches.
For example, let’s say you want to predict whether a written restaurant review is positive or negative. You first train a model on a dataset of reviews, each labeled as positive or negative. The cleaner the separation between positive and negative examples in the training data, the better the model will classify new reviews.
On the other hand, if you tried to map each review to a “star rating” between 1 and 5 (where 1 is very negative and 5 is very positive), there would very likely be a lot of overlap between, say, a rating of 2 and a rating of 3.
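To make the pattern-matching idea concrete, here is a deliberately crude sketch: a word-overlap scorer standing in for a trained classifier. The reviews and labels are invented for illustration, and a real model would learn weighted features rather than raw word counts, but the core idea – match a new example against patterns seen in labeled training data – is the same.

```python
# A toy stand-in for a review classifier, using only the standard
# library; the training reviews and labels are invented examples.
from collections import Counter

train = [
    ("the food was amazing and the staff was friendly", "positive"),
    ("best pasta i have ever had will come back", "positive"),
    ("cold food rude waiter terrible experience", "negative"),
    ("waited an hour and the soup was bland", "negative"),
]

# "Training": count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    word_counts[label].update(text.split())

def classify(review):
    """Predict by which label's vocabulary the review overlaps most."""
    words = review.lower().split()
    scores = {
        label: sum(counts[w] for w in words)
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("amazing food and friendly staff"))  # → positive
```

Notice that the scorer works only because the two classes use clearly different vocabulary; if positive and negative reviews shared most of their words, the overlap scores would tie and predictions would degrade – exactly the overlap problem described above.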
Question #1: Could a human perform this task?
It’s called “artificial intelligence” for a reason: our brains are also sophisticated pattern-matching machines. If you asked ten people to go through a list of 100 reviews and rate each as positive or negative, you would likely get very similar results. If you asked the same ten people to rate each review on a scale of 1 to 5, you would get directionally similar ratings with a lot of variation between categories, because the patterns are not sufficiently differentiated.
To hammer that point home, if you gave one person the task of ranking 100 reviews on a 1-5 scale, then gave them a month off and asked them to repeat the job, would their answers be the same? Very unlikely. Just like a person matching patterns, a machine learning model will be more accurate if the prediction targets have minimal overlap.
The same principle applies to scoring leads on their likelihood to buy, forecasting revenue, detecting fraud, or even predicting the weather. If a subject matter expert examining individual records or building a financial model has the right data, they can do the job. But if it’s an impossible task for a human – like predicting which days it will rain in August two years from now – then it’s unlikely a machine learning model will work either.
The benefit of machine learning is that you don’t need a subject matter expert, and you can avoid all of the manual effort that goes into matching new examples to the proper categories in order to predict outcomes.
Question #2: Do I have the data I need to train a model?
For a model to learn to match a pattern, you need to train it on data (often historical) that contains the patterns you are hoping to predict. Each example (or record in the dataset) needs to be correctly labeled (or tagged). If the data is not tagged, you will need to tag it manually or via an auto-tagging service. So the second question you should ask is if you have enough data to teach the model.
What is “enough” data? That depends on the complexity of the problem, but as a rule of thumb, more data usually means better results, and the less overlap between your classes, the better. Consider, for example, an image classifier you are training to tell the difference between a cat and a lion. Cats and lions look reasonably similar, so you will have to show the model a lot of examples for it to learn the nuances that separate them. On the other hand, if you are training the model to tell the difference between a cat and a rocket ship, far less data will do.
Keep in mind that most AutoML solutions will hold back a random 20% of your data, train on the remaining 80%, and then run the held-back examples through the model to determine its performance. Practically, that means that if your dataset is too small, the model may not be able to train and evaluate at all – which is why many systems set the lower limit around 1,000 records (although some allow you to go down to 100).
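The 80/20 holdout described above can be sketched in a few lines. This is a simplified illustration using integers as stand-in records; real AutoML tools typically also stratify the split so both sets contain every class.

```python
# A sketch of a random 80/20 train/holdout split; the "records" are
# just integers standing in for 1,000 labeled examples.
import random

records = list(range(1000))

random.seed(42)          # seeded only so the sketch is reproducible
random.shuffle(records)  # randomize before splitting

split = int(len(records) * 0.8)
train_set = records[:split]   # 80% used to train the model
holdout = records[split:]     # 20% held back to measure performance

print(len(train_set), len(holdout))  # → 800 200
```

Because the holdout examples are never seen during training, scoring the model on them approximates how it will perform on genuinely new data.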
If you have a relatively straightforward classification problem, you can get away with as few as 100 records. But if you’re hunting for a needle in a haystack (say, detecting credit card fraud with an incidence rate of 1 in 1,000), you will need 50,000 to 100,000 examples. A quick way to answer the “enough data” question is to just train a model and find out!
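The fraud numbers above are just incidence arithmetic: the rarer the pattern, the more total records you need before the model has seen enough positive examples to learn from. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope math for the fraud example: at an incidence
# rate of 1 in 1,000, how many fraud examples does a dataset of a
# given size actually contain?
incidence = 1 / 1000

for dataset_size in (1_000, 10_000, 50_000, 100_000):
    expected_positives = int(dataset_size * incidence)
    print(f"{dataset_size:>7} records -> ~{expected_positives} fraud examples")
```

A 1,000-record dataset would contain only about one fraud example – nowhere near enough pattern to learn from – while 50,000 to 100,000 records yield a more workable 50 to 100 positives.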
Example: Prioritizing Sales Leads
Let’s apply our two-question framework to ranking the incoming sales leads generated by the marketing team. A salesperson can take any given lead, review the person’s job title, learn something about the company they work for, look at how the lead came in, and do a reasonable job of figuring out whether the lead is a qualified potential customer. So the answer to the first question is clearly yes – it is indeed possible to build an AI model to do the same.
For the second question, do we have the data we need to train a model? Well, that depends. To build a competent lead-scoring model, you will need historical data about your past customers. Do you have the same information available that the salesperson needed to judge a lead as qualified or not, and do you have enough examples of leads that were both qualified and not qualified? If the answer is yes – then you are good to go!
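Concretely, the training data for a lead-scoring model is just a table where each row is a historical lead, the columns are the same signals the salesperson would look at, and one column is the label the model learns to predict. The column names and values below are hypothetical, not drawn from any real CRM:

```python
# A hypothetical shape for lead-scoring training data; every field
# here is invented for illustration.
leads = [
    # (job_title, company_size, lead_source, qualified_label)
    ("VP Engineering", 500, "webinar",    True),
    ("Student",          1, "blog post",  False),
    ("CTO",             50, "demo form",  True),
    ("Intern",          20, "newsletter", False),
]

# The final column is what the model learns to predict; the other
# columns are the input features a salesperson would also consider.
qualified = [row for row in leads if row[3]]
print(f"{len(qualified)} of {len(leads)} historical leads were qualified")
```

If your CRM already records these fields plus the eventual outcome of each lead, you have the labeled examples a model needs; if the outcome column is missing, that is the tagging gap described under Question #2.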
Luckily, most businesses today already gather the data necessary for implementing a machine learning solution as part of their everyday operations. If a human can do the task, and you have the historical data, you should have everything you need to use AI to make your business smarter and faster.
About the author:
Jon Reilly is co-founder and COO of Akkio, where he focuses on driving the company’s product-led growth (PLG) mission of democratizing AI. Prior to Akkio, Reilly was VP of Marketing at Markforged and led the Music Player product management team at Sonos.