In our experience as technologists, consultants, and public speakers, we are constantly meeting people with different opinions about the definitions of AI, data science, and ML. Although many are quite opinionated, few can defend their position. Indeed, finding a universal definition of AI is not as trivial as it might look.
Going by its name, we might try to define artificial intelligence by identifying the human traits we associate with intelligence. Once we agree on what makes humans intelligent, we can say that any computer that does the same thing is AI. It makes sense, right? Although this is a common approach, it falls apart even in simple scenarios. For instance, a human who could divide 13.856 by 13 to the tenth decimal place in a split second would definitely be called intelligent, yet their artificial counterpart is a $2 pocket calculator that nobody would dare call AI. At the same time, we would never call someone intelligent just because they can drive in heavy traffic, yet a self-driving car is generally considered one of the toughest AI challenges the tech industry is working on today. We shouldn’t be surprised by how hard defining intelligence is; after all, philosophers and scientists have been debating it for centuries.
Not only do we use different yardsticks to measure human and machine intelligence, but we also seem to change our minds pretty quickly about what counts as AI and what doesn’t. Let’s take an example from Paul Graham, founder of Y Combinator, the most successful Silicon Valley startup accelerator and arguably one of the most forward-looking people in tech. In 2002, Graham wrote an essay proposing a new solution for detecting spam emails. Back then, email was just getting off the ground, and spam (unwanted email) was one of the most serious threats to widespread adoption of the internet by non-techies. It seems hard to imagine now, but the best computer scientists were busy writing complex rules to let computers automatically filter out Viagra advertisements.
In his essay, Graham proposed a new ML-based approach: software that would learn to classify an email by processing thousands of “good” and spam emails. Graham’s simple software learned to recognize spam better than the complex rules concocted by engineers. Fast-forward 20 years, and automatic spam detection is such boring technology that we would be laughed out of the room if we dared call it AI.
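To make the learn-from-examples idea concrete: Graham’s filter was based on word statistics. Here is a minimal sketch of that kind of word-frequency classifier (a naive Bayes approach with made-up toy messages, not Graham’s actual code or corpus), showing how the “rules” emerge from the data rather than from an engineer:

```python
from collections import Counter

# Hypothetical toy training data: a few spam and "good" messages.
spam = ["buy viagra now", "cheap viagra deal now"]
good = ["meeting notes attached", "lunch tomorrow at noon"]

def word_counts(messages):
    """Count how often each word appears across a list of messages."""
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, good_counts = word_counts(spam), word_counts(good)
spam_total, good_total = sum(spam_counts.values()), sum(good_counts.values())
vocab = set(spam_counts) | set(good_counts)

def spam_score(message):
    """Probability-like score in (0, 1): higher means more spam-like.

    Multiplies per-word likelihoods for each class (naive Bayes with
    add-one smoothing so unseen words don't zero out the product),
    assuming equal priors for spam and good mail.
    """
    p_spam = p_good = 1.0
    for word in message.split():
        p_spam *= (spam_counts[word] + 1) / (spam_total + len(vocab))
        p_good *= (good_counts[word] + 1) / (good_total + len(vocab))
    return p_spam / (p_spam + p_good)

print(spam_score("viagra deal"))   # > 0.5: classified as spam
print(spam_score("lunch notes"))   # < 0.5: classified as good
```

Nobody wrote a rule saying “viagra means spam”; the software inferred it from examples, which is exactly what made the approach beat hand-crafted rules.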
In fact, it seems like AI is about mastering tasks that our imagination suggests computers shouldn’t be able to do. Once we get used to a technology in our daily life, we remove the AI badge of honor and start calling it just computer software. This is a well-studied phenomenon called the AI effect.
Because of the AI effect, the goalposts for what we call AI move just as quickly as technology improves. The definition of AI we draw from these considerations is “a temporary label for a piece of software that does something cool and surprising, until we get used to it.” We don’t know about you, but that just doesn’t feel like a satisfying definition.
We hope we have convinced you that it is extremely hard to find a definition of AI that makes everyone happy and remains valid as technology evolves. With the AI effect in mind, we decided to avoid a narrow definition that rewards “flashy” applications only to ditch them once the hype is gone. Instead, we embrace a broader definition that includes less flashy applications as well. This is our definition of AI:
Software that solves a problem without explicit human instruction.
As you can see, our definition focuses on the outcome of the technology rather than the specific techniques used to build it. Some people will not agree with it, because it’s almost equivalent to what we said about machine learning in the previous blog post. The truth is, learning is an intelligent trait, and while ML is just a tool, it is the tool behind 99% of the successful applications we happen to call AI today. This may change in the future, but we don’t see any new approaches on the horizon that hold the same promise as ML.