The 10-minute master class:
Everything You Need to Understand About AI
What is Artificial Intelligence?
Artificial intelligence is a software capability that allows computers to detect patterns and abstract rules based on a series of extremely complex inputs.
These inputs can come from any kind of dataset:
- pixels in an image,
- sound waves in an audio file,
- letters on a page,
- the mathematical relationships between all the words in the English language,
- financial transactions of the Fortune 500,
- the location of every star in the galaxy.
The dataset itself doesn’t matter much. As long as an AI model can examine inputs and match them to outputs, it can begin to map, and ultimately predict, the rules that connect the two.
For example, a financial institution may have a dataset of historical stock market prices and lots of information about each company. By matching company information as the input with stock price as the output, an AI could learn to identify company information that is highly correlated with a rising stock price.
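To make that idea concrete, here is a toy sketch in Python of the “find a correlated input” step. The company attribute (R&D spend) and every number below are invented for illustration; a real system would test thousands of attributes at once.

```python
# Toy sketch: measure how strongly one hypothetical company
# attribute correlates with stock price movement.

def pearson(xs, ys):
    """Pearson correlation: +1 means the two move in lockstep."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

rnd_spend    = [1.0, 2.0, 3.0, 4.0, 5.0]  # input: company attribute
price_change = [0.5, 1.1, 1.4, 2.2, 2.4]  # output: stock movement

r = pearson(rnd_spend, price_change)
print(f"correlation: {r:.2f}")  # close to 1.0: a strong predictor
```

An AI model automates exactly this kind of search, but across every attribute and combination of attributes in the dataset at once.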
What’s great about AI is that it is uniquely able to consider thousands of inputs and correlations far too subtle for humans to even notice. For example, almost no stockbroker integrates CEO commute time into their investment strategy. An AI model sifting through enough data, however, could surface it as a relevant predictor of CEO performance.
In any case, over time, these predictions become more accurate. Eventually, the model becomes reliable and useful enough that it can be deployed and put to work.
Let’s look at another example.
At a factory, quality assurance officers must pull broken eggs from a conveyor belt. To accomplish this task, the inspectors will look for obvious indicators such as a cracked shell or a shimmer of yolk on the conveyor belt. In other words, these folks are using visual information (inputs) to categorize eggs (outputs).
This effort is a perfect task for an AI model. In fact, if we were to assign an AI model the same task, it would tackle the problem in much the same way as its human counterparts: it would look at an egg and check to see if there were any visual indications that it was cracked.
To accomplish this task, however, an AI would need to be trained. In this case, the model would need to chew on hundreds, maybe thousands, of photos of cracked and uncracked eggs to start to figure out the pattern.
Eventually, the model would create mathematical rules to define what a cracked egg looks like. With more training, the accuracy of the model would improve, and eventually, it would be reliable enough to put into production.
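Here is a minimal sketch of what “training” means in this example. It learns a single rule (a threshold on one made-up visual feature) from labeled examples; a real vision model would learn thousands of such rules directly from pixels.

```python
# Toy "training": learn a decision threshold on an invented
# shell-irregularity score (0 to 1) from labeled example eggs.

labeled_eggs = [  # (irregularity_score, is_cracked)
    (0.05, False), (0.10, False), (0.12, False),
    (0.70, True),  (0.85, True),  (0.90, True),
]

def train_threshold(examples):
    # Place the boundary midway between the highest intact score
    # and the lowest cracked score seen during training.
    highest_intact = max(s for s, cracked in examples if not cracked)
    lowest_cracked = min(s for s, cracked in examples if cracked)
    return (highest_intact + lowest_cracked) / 2

def predict(score, threshold):
    return score > threshold  # True means "cracked"

t = train_threshold(labeled_eggs)
print(predict(0.95, t), predict(0.02, t))  # True False
```

With more and more labeled eggs, the learned boundary settles into a reliable rule; that is the sense in which additional training improves accuracy.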
A Note About Accuracy and AI
Artificial intelligence is not yet capable of achieving 100% accuracy. So, depending on the use case, data scientists must make judgments about whether an AI model meets a certain threshold to put it into production.
Most AI models can get to about 75% accuracy without too much trouble. Beyond that, there’s an exponential increase in (a) the number of inputs the model needs to train on and (b) the amount of processing power it takes to create the model. The former is time-consuming, the latter is expensive, and in most cases, 100% accuracy isn’t necessary. Depending on the task at hand, the accuracy threshold will vary.
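The deployment decision itself is simple to express in code. This sketch scores a model against a handful of invented test predictions and compares the result to a made-up 90% bar.

```python
# Toy sketch: is this model accurate enough to deploy?
# Predictions, labels, and the 90% bar are all invented.

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # model's answers
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # ground truth

accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)
REQUIRED = 0.90  # acceptable threshold for this use case

print(f"accuracy={accuracy:.0%}, deploy={accuracy >= REQUIRED}")
```

Here the model scores 80%, so it stays in training; a spam filter might happily ship at that number while a medical model would not.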
This lack of 100% accuracy is one of the reasons self-driving cars still have a long way to go. In data science, a 90% accuracy rate is quite good, but imagine if an autonomous vehicle noticed only 90% of the pedestrians who crossed the street.
Moreover, how things are inaccurate can also be a factor. For example, in a lung cancer model, false positives aren’t nearly as bad as false negatives. Therefore, the best disease diagnostic models will be optimized to detect every single possible instance of cancer (avoiding false negatives) — even if it means a few healthy lungs get incorrectly flagged along the way (allowing for false positives).
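That tradeoff can be sketched in a few lines. The scores and labels below are invented; the point is that lowering the decision threshold eliminates false negatives at the cost of a false positive.

```python
# Toy sketch: the same model scores, cut at two different
# thresholds. All numbers are invented for illustration.

scores = [0.2, 0.55, 0.5, 0.7, 0.9]       # model's cancer scores
labels = [False, False, True, True, True]  # ground truth

def recall(threshold):
    """Fraction of real cancer cases the model catches."""
    tp = sum(s >= threshold and l for s, l in zip(scores, labels))
    fn = sum(s < threshold and l for s, l in zip(scores, labels))
    return tp / (tp + fn)

def false_positives(threshold):
    """Healthy lungs incorrectly flagged."""
    return sum(s >= threshold and not l for s, l in zip(scores, labels))

print(recall(0.60), false_positives(0.60))  # misses one real case
print(recall(0.45), false_positives(0.45))  # catches all, flags one
```

A diagnostic team would pick the lower threshold here: one unnecessary follow-up scan is a far better failure mode than one missed cancer.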
Experts who build AI solutions must decide on acceptable accuracy thresholds before putting a model into production.
What does it mean to put a model into production?
Well, as we’ve discussed, AI models are really good at matching inputs to outputs. However, for that capability to be useful to anyone, the model must be piped into some kind of app or hardware to be put to work.
In the egg example, the food processing plant would have to do more than train the broken-or-not-broken model to get any use out of it. It would need to set up a camera to film the eggs coming down the belt, pipe that footage into the model for processing, and then alert the line workers of a broken egg via some kind of interface.
Most AI service providers consider training a model and putting it into production as two separate steps. Because the former is much more complex (and expensive), many IT departments opt to do the production piece themselves. In most cases, the average programmer can access the AI model (usually via an API) and put it to work.
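Here is a rough sketch of what that production plumbing might look like for the egg example. The `model_predict` function stands in for the provider’s API call; its name, the frame format, and the response shape are all invented.

```python
# Toy sketch of the "production" half: the plant's own code calls
# the trained model (typically over an API) and acts on the result.

def model_predict(frame):
    # In production this would be e.g. an HTTP POST of a camera
    # frame to the provider's inference endpoint.
    return {"label": "cracked" if frame["yolk_visible"] else "intact"}

def inspect(frames):
    """Run each frame through the model; return positions to flag."""
    alerts = []
    for position, frame in enumerate(frames):
        if model_predict(frame)["label"] == "cracked":
            alerts.append(position)  # alert line workers here
    return alerts

frames = [{"yolk_visible": False}, {"yolk_visible": True},
          {"yolk_visible": False}]
print(inspect(frames))  # flags the second egg
```

Notice that none of this code is AI: it is ordinary plumbing around the model, which is why an average programmer can handle the production piece.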
What is Deep Learning?
For AI, identifying whether an egg is broken is relatively simple. By contrast, looking down the road at a red, octagonal object and labeling it a “stop sign” is much more complex.
This is where deep learning comes in. Deep learning enables AI models to sort things into finer and finer categories. AI experts think about this sorting as a layered process.
In the case of the stop sign, the relevant layers might be:
- Distance from car.
- Estimated height.
- Position on the road.
- Text on the sign.
To identify the stop sign, a self-driving car would have to pass the image of the stop sign through a series of layers to make sure it wasn’t a mountain, a tree, a speed limit sign, or a Starbucks. By layering simple AI categorization models on top of each other, AIs become much more sophisticated.
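To illustrate the narrowing-categories idea (and only that; real deep learning layers are learned from data, not hand-written), here is a toy chain of checks in Python. Every check name and value below is invented.

```python
# Toy sketch of layered categorization: each layer is a simple
# check, and only objects passing every layer get the final label.

def plausible_distance(obj): return obj["distance_m"] < 100
def plausible_height(obj):   return 1.5 < obj["height_m"] < 3.5
def on_roadside(obj):        return obj["position"] == "roadside"
def says_stop(obj):          return obj["text"] == "STOP"

LAYERS = [plausible_distance, plausible_height, on_roadside, says_stop]

def classify(obj):
    # An object is a stop sign only if it survives every layer.
    return "stop sign" if all(layer(obj) for layer in LAYERS) else "other"

sign = {"distance_m": 40, "height_m": 2.1,
        "position": "roadside", "text": "STOP"}
storefront = {"distance_m": 40, "height_m": 6.0,
              "position": "roadside", "text": "STARBUCKS"}
print(classify(sign), classify(storefront))
```

The storefront fails the height and text layers, so it never reaches the “stop sign” label; in a real deep network, each layer plays an analogous filtering role.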
Let’s look at another example related to emotion detection. Could an AI detect whether someone was in a bad mood? To consider that question, let’s first think about how humans can tell whether someone is in a bad mood.
In order to make that judgment, our brains must integrate a lot of visual and auditory information about the subject.
- Mouth position (smiling, frowning)
- Eyebrow position (raised, narrow)
- Pronounced wrinkles (laugh or frown lines)
- Tone and volume of voice (yelling)
- Speaking speed
- Shoulder position
In fact, there are so many subtle and not-so-subtle indicators that it’s nearly impossible to list them all out. What’s nice about AI is that we never have to define those messy and long-tail indicators. AI’s superpower is that it can figure out the rules for itself.
By showing an AI model dozens of faces and labeling each with an emotion, the model could be trained to find the pattern, thereby integrating a wealth of obscure information about how someone looks to predict how someone is feeling.
This is the same facial recognition capability that Apple uses to unlock your iPhone. In that case, instead of matching your face with an emotion, the system is ensuring your face matches your account.
AI researchers are using images of people’s faces as inputs for all kinds of models, including lie detection, age detection, disease diagnostics, and even imaging technologies that age up photos of missing children.
As you begin to think more and more about the ways artificial intelligence can transform your company, challenge yourself to think in terms of inputs (data) and outputs (categorization).
Remember, AI is amazingly powerful. AI models can use X-rays to diagnose disease, billions of financial transactions to detect fraud, and boxes of handwritten case files to prove someone is innocent.
Don’t worry if the distance between inputs and outputs seems vast. AI is more than capable of making those connections.
Explore Other Guides
We've written up lots of articles to help business professionals orient themselves around AI. Learn how artificial intelligence can meaningfully change how your organization does business by exploring the resources below.