Key Takeaways
- The market appears to incentivize both the deliberate and the inadvertent misrepresentation of organizations' actual AI capabilities, aided by widespread confusion over terminology and functionality
- The rampant overstatement of AI use is detrimental not only to consumers but also to entrepreneurs acting in good faith
- There are some basic sanity checks one can conduct, as well as questions one can ask, to help gain clarity on a company's actual AI capabilities in production
There is ample evidence that we have long since emerged from the proverbial AI Winter. However, one often-cited datum we may wish to reconsider as irrefutable evidence is the number of "AI-powered" companies in the market. Given that roughly 40% of businesses purporting to be "AI startups" show no evidence that the technology is material to the execution of their value proposition, this appears a prudent juncture at which to take stock of exactly what role Artificial Intelligence can and does play in various industries.
To state this more directly: the alarming rate at which companies appear to be misrepresenting, deliberately or unintentionally, the role that AI plays in their business model necessitates that investors, regulators, policymakers, and consumers alike become vigilant in detecting this technological chicanery.
Why do companies overstate their use of AI?
With Artificial Intelligence startups having received a record $26.6bn in funding in 2019, it’s no wonder the demand for Machine Learning Engineers and Data Architects has skyrocketed, with entrepreneurial fervor rushing into the sector.
The reverential mystique that surrounds AI, inflated by Hollywood lore and sensationalized journalism, fuels consumer demand even among user bases that don't necessarily understand the technology.
The Dunning-Kruger Effect is not lost on purchasers of AI products and solutions. In one study, 72% of respondents confidently claimed to understand Artificial Intelligence, only to go on and indicate the exact opposite. Only half of respondents understood that AI solutions enable machines to learn new things, and one-third were certain AI would never know their individual preferences as well as another human. While only 34% of respondents thought they had interacted with an AI product in the past year, questions about their daily lives indicated that 84% actually had.
With this rampant confusion and conflation surrounding what Artificial Intelligence and Machine Learning are, as well as how they can be applied, the market is all but incentivizing stakeholders to incorporate the technology into pitch decks and sales materials before tech stacks and code bases.
Anyone who has launched a startup and sought funding can easily imagine the competitive pressure created by a rival business's announcement that its offering is now powered by AI.
Another study examined 2,830 businesses claiming to be "AI startups", only to find that over 40% offered no evidence whatsoever that AI was "in any way material to their company's value proposition."
What is the impact of the overstatement of the use of AI?
Chief among the problematic downstream impacts of this behavior is a severe dilution of the good-faith efforts and value established by the companies that truly are using AI to drive value and execute on their vision. Investors who see funding squandered by companies making such overstatements may sour on future investments that rely on a belief in the competitive advantage offered by AI. In addition, human nature dictates that, as we see others successfully raise money and acquire customers using deceitful tactics, we become marginally more inclined to do the same, particularly if we see ourselves as standing in a zero-sum relationship with those other parties.
Unfortunately, the knowledge and skill set required to tease apart real AI from fake are generally correlated with the financial luxury of being able to make a mistake in that very assessment, particularly as it relates to purchasing and investment decisions.
There is also a harm that accrues to consumers who spend their limited resources on goods and services expecting to receive incremental value owing to the power of AI. They are likely to have overpaid for those items, to end up less satisfied with them, and to become less likely to purchase products they perceive as substitutes.
What are obvious signs that a company is using AI?
One of the hallmarks of AI being used in production is the real-time assimilation of new data into the decision engine, thereby enabling iterative improvement. Generally, AI projects have minimal need for human involvement while the true learning is taking place. Of course, training data sets are often collected, cleansed, and labeled by human operators. However, the weights of a neural network, for example, should be set independently of human intervention.
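As a minimal sketch of what that looks like in practice (pure NumPy, with synthetic data standing in for a real training set), note that the weights below are never set by a human; they emerge from gradient descent and would continue to shift as new data arrives:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                            # synthetic features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

w = np.zeros(3)                                          # weights start at zero
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                   # logistic prediction
    w -= 0.1 * X.T @ (p - y) / len(y)                    # gradient step, no human tuning

print(w)  # learned from data, not hand-set; new data would keep refining these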
Implementing Artificial Intelligence systems within a business model is a massive undertaking that requires an incredible combination of resources, domain expertise, experience, and critical problem solving. As such, founding and executive teams populated by people with advanced technical degrees and established technological track records are a common marker that an organization is at least staffed properly for the task at hand.
While AI ethics boards and governance policies will likely be the status quo across industries in the future, that is not the case today. Still, forward-thinking companies that rely heavily on massive predictive engines to make decisions with real-world impact are realizing the importance of establishing such internal committees. As such, the presence of one is often a positive indicator of real AI at work, though its absence is not necessarily evidence to the contrary.
Ideally, the company has not only staffed itself in the way one would expect a scalable AI company to, but is also seeking to fill new jobs with descriptions and qualifications that would support the type of operation being advertised.
In its current form, the type of AI used in commercial applications is generally some form of Machine Learning performing extensive optimization of a narrowly defined process. As such, there should exist an obvious aspect of the underlying business model that would dramatically benefit from a quantitative form of optimization. Even in cases where, for example, AI assists in writing advertising copy, it still does so with a specific goal in mind, such as minimizing the cost of customer acquisition or maximizing an unambiguously defined conversion rate.
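To make the point about narrow optimization concrete, the sketch below (with invented ad variants and conversion rates) shows an epsilon-greedy bandit, one of the simplest learning approaches, doing nothing more than maximizing a single unambiguous metric:

import random

variants = {"copy_a": 0.021, "copy_b": 0.034, "copy_c": 0.027}  # true conversion rates, unknown to the agent
counts = {v: 0 for v in variants}
successes = {v: 0 for v in variants}
epsilon = 0.1  # fraction of traffic spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(list(variants))  # explore a random variant
    else:  # exploit the best observed conversion rate so far
        choice = max(variants, key=lambda v: successes[v] / counts[v] if counts[v] else 0.0)
    counts[choice] += 1
    successes[choice] += random.random() < variants[choice]  # simulated conversion

print(max(counts, key=counts.get))  # traffic concentrates on the best-converting copy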
What are obvious signs that a company is not using AI?
Perhaps more important than being able to adeptly identify "real AI" is being able to identify "fake AI". For example, companies with non-technical founders that claim to be purveying sophisticated technical solutions are a classic red flag. It is first important to remember that automation and AI are not the same thing. And, while automation is often a fantastic value driver, it should not be conflated with Artificial Intelligence. The same goes for static algorithms and any other hand-crafted, rules-based decision system.
History is also riddled with examples of founders who started with the noblest of intentions in the execution of their value proposition, but were forced to pursue suboptimal, often unethical means as a result of unexpected events. For example, Theranos infamously raised $700mm from investors, promising to revolutionize blood testing with a new proprietary device, only to end up running blood tests on commercially available equipment disguised as something new, in a desperate but recklessly dangerous effort to save the company. Similarly, investors and consumers should be wary of those who raise funds for the purpose of developing AI systems, only to be found with no product in market years later.
The importance of such skepticism is underscored by the success of The Mechanical Turk, a fake chess-playing machine built in the late 18th century, which would be showcased as a feat of automation, despite being manually operated by a human chess master hidden inside the apparatus. Though its name has since been all but overwritten by Amazon’s crowdworking platform, in the 84 years during which the Turk was in circulation, it notoriously played and defeated the likes of Napoleon Bonaparte and Benjamin Franklin.
We should be similarly skeptical of the 21st-century Mechanical Turk: a company that pretends to use AI for decision-making but actually relies on hidden human labor or some other inferior, less effective mechanism. If human intervention is consistently required to keep the system running, it is very unlikely that the system is taking full advantage of the benefits AI offers.
A number of companies, across sectors, have been rumored to advertise AI solutions that are really based on more conventional analytical frameworks. To finish the Theranos analogy, imagine a platform that claims to use AI in order to predict weather patterns with greater accuracy than the stochastic simulations that competitors are running, only to actually use Monte Carlo simulations without relying on AI in any form.
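For contrast, here is what a plain Monte Carlo estimate looks like (with a made-up 30% daily rain probability as its only input); there is random sampling, but no learning, and nothing in it improves with experience:

import random

def prob_many_rainy_days(threshold=10, days=30, p_rain=0.3, trials=100_000):
    hits = 0
    for _ in range(trials):
        rainy = sum(random.random() < p_rain for _ in range(days))  # simulate one month
        hits += rainy > threshold
    return hits / trials

print(prob_many_rainy_days())  # a useful estimate, but not AI in any form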
If the types of data analysis being performed by the company are relatively basic, it’s likely the company isn’t using AI at all. If, in fact, some form of Machine Learning is being utilized, there’s a good chance the company is simply wasting resources without adding incremental value.
Harder to detect are the companies that really do use AI, but not in a way that enhances their unique selling proposition. For example, it should generally be considered disingenuous for a weather service to claim that it is AI-powered, if the main role of artificial intelligence has more to do with optimizing advertising spend or CRM functionality than actually predicting the weather.
Where it Gets Confusing
I recently saw a tweet (from a name in tech with a blue check mark) that got quite a bit of traction and was then screenshotted onto Reddit, where it gained even more velocity. The tweet first quotes a press release that states, "Uber will use artificial intelligence to identify drunk passengers. It will use factors like time of day, pickup location, and how long a user interacts with the app before making a decision." The user then states, "That's not AI. That's an if statement."
And, at first glance, this seems like a cleverly crafted excoriation of Uber's overstatement of the role AI plays in the decision-making process. It appears, for example, that the entire press release could be referring to a simple conditional such as:
# A hypothetical hard-coded rule; the thresholds and values are invented
if hour_of_day >= 23 and pickup_location == "sports_bar" and seconds_on_screen > 60:
    user_is_drunk = True
However, it’s not quite this simple. For example, it may be the case that Machine Learning was used in order to fine-tune the weights placed on each of the input parameters considered. Additionally, there may be a feedback mechanism whereby drivers indicate whether they felt a passenger was intoxicated, such that errors can be iteratively incorporated into an updated predictive engine.
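As a hedged sketch of that learned alternative (the feature names, data, and use of scikit-learn are all assumptions on my part, not a description of Uber's actual system), it might look like:

import numpy as np
from sklearn.linear_model import SGDClassifier

# Columns: hour of day, pickup-location risk score, seconds on screen.
X = np.array([[23, 0.9, 75], [9, 0.1, 12], [1, 0.8, 90], [14, 0.2, 20]], dtype=float)
y = np.array([1, 0, 1, 0])  # hypothetical driver-reported intoxication labels

model = SGDClassifier(loss="log_loss")
model.partial_fit(X, y, classes=[0, 1])  # weights fitted to data, not hand-coded

# Each new ride with driver feedback nudges the weights, so the
# "if statement" thresholds are effectively learned and kept current.
model.partial_fit(np.array([[22.0, 0.7, 60.0]]), np.array([1]))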
It seems that a company like Uber would have far more to lose from lying about the use of AI than it would stand to gain from convincing people that it is being used when, in fact, it is not. Without direct insight into this particular project, it is difficult to say with certainty either way.
Regardless, the moral of the story is clear: even people working around, writing about, and commenting on matters pertaining to the technology don't seem to fully agree on what constitutes AI.
What are good questions to ask to tell whether a company is using AI?
Parties interested in making the type of determination discussed in this article are often in a position to put questions directly to a founder, CTO, or product manager. It is here that just a few strategic questions can be very informative in discerning real AI from fake.
- Can I demo the product using my own data? It is easy for an AI solution to be perfectly fine-tuned to a particular data set, or for AI-like results to be faked using manual calculations. However, a truly turnkey AI solution that is ready for production should be able to run on any data set without major alterations.
- What AI methods/algorithms are being relied upon and why? This is an extremely basic, fundamental question that any good-faith operator should be able to answer succinctly on the spot. Generally, hesitation and ambiguous responses from founders are a bad sign.
- How much training data was needed and how often is retraining expected to occur? Similar to the previous question, anyone involved in the sale or pitching of an AI product should have no trouble whatsoever providing a response.
- How was the training data labeled/annotated? How did you guard against bias? How will that process be applied to data collected in production? Though you may stumble across non-technical salespeople who aren't well versed in the nuances of training data, these are absolutely questions you should want answered before purchasing or investing in an enterprise solution. As such, even if the particular person with whom you're speaking cannot answer them, they should easily be able to put you in touch with someone who can.
- What is your budget for cloud services? It is generally the case that the size and scope of an AI project is directly correlated with the cost of associated cloud services, typically as relates to computational resources as well as storage. This is so commonly a large expense on the income statement that any AI company not beholden to the likes of AWS or Microsoft Azure should at least be able to explain how that came to be the case.
I so frequently find myself reminded of Eliezer Yudkowsky’s assertion that, "by far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it." I think the message is apropos, both in the apocalyptic sense as well as the practical one. Gaining a proper grasp on both the applications and limitations of emerging technologies should be seen as a critical skill for virtually all stakeholders, particularly as we emerge from a world rocked by the COVID-19 crisis, with a bruised economy that grows ever more dependent on software.
About the Author
Lloyd Danzig is the Chairman & Founder of the International Consortium for the Ethical Development of Artificial Intelligence, a 501(c)(3) non-profit NGO dedicated to ensuring that rapid developments in AI are made with a keen eye toward the long-term interests of humanity. He is also Founder & CEO of Sharp Alpha Advisors, a sports gaming advisory firm with a focus on companies deploying cutting edge tech. Danzig is the Co-Host of The AI Experience, a podcast providing an accessible analysis of relevant AI news and topics. He also serves as Co-Chairman of CompTIA AI Advisory Council, a committee of preeminent thought leaders focused on establishing industry best practices that benefit businesses while protecting consumers.