Previous technology shifts have largely been about improving business workflows – important but somewhat abstract. AI, in contrast, applies directly to objects and decisions in the real world. Unlike the PC, web and mobile revolutions that came before it, AI can move and affect tangible things like cars, patients and crops. It’s a genuine big deal.
“AI is the answer to so many real world challenges – moving cars, curing patients, delivering things on time, predicting agricultural yields. The number of applications is crazy.” — Hussein Mehanna, Former Director of AI Product & Engineering at Google
While there is no shortage of claims about the potential for AI in the enterprise, concrete examples of AI gains, as well as how to achieve them through repeatable systems, are harder to find. Gigster has delivered over 1,500 digital transformation projects, with over half of our projects now involving AI algorithms. This is one of two blog posts inspired by roundtable conversations Gigster had with partners and customers at the Google Next conference in San Francisco.
*“Everything comes down from the basic mission we set for ourselves.” – Victor Martin, VP of AI and Geosciences at Total* As the latest shiny object, AI is in danger of being sprinkled like pixie dust over every project. Many projects can benefit from AI, so the challenge is to select the projects that will benefit the most. At Total, this analysis starts by identifying the high-level objectives for each project, such as reducing costs or increasing safety and security.
These objectives are then mapped to holistic approaches that deliver the highest value and only incorporate AI as a means to an end. It’s critical to understand that while the effective use of AI does drive defensibility, it should be a component of a broader product strategy that’s laser focused on solving a real problem. Martin Casado of a16z wrote a good cautionary tale on assuming that just accumulating more and more data and leveraging AI will magically deliver more value to a business.
*“Explainability is the top problem forward thinking AI organizations have.” — Hussein Mehanna, Former Director of AI Product & Engineering at Google* There are crucial legal and ethical reasons to care about making models that can explain how they make their predictions so that we can understand the underlying “why.” In healthcare and finance, for example, the decisions being made by AI are in a heavily regulated context and without more insight into how these decisions were made, adoption will continue to be limited.
As AI methods like deep learning become increasingly sophisticated, figuring out what is going on inside the “black box” is critical. For example, Microsoft recently released InterpretML, a toolkit for building explainable systems – a sign of how strategic explainability has become in the AI arms race.
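To make the idea concrete, here is a minimal sketch of the simplest form of explainability: for a linear model, every prediction decomposes exactly into per-feature contributions, so the model can report *why* it scored an input the way it did. The loan-scoring weights and feature names below are hypothetical, chosen only for illustration.

```python
def explain(weights: dict, features: dict) -> dict:
    """Return each feature's signed contribution to the linear score."""
    return {name: weights[name] * features.get(name, 0.0) for name in weights}

# Hypothetical loan-scoring model (illustrative weights, not a real model).
weights = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3}
applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 4.0}

contributions = explain(weights, applicant)
score = sum(contributions.values())

# Ranking contributions by magnitude answers "why was this decision made?"
top_reason = max(contributions, key=lambda k: abs(contributions[k]))
```

Deep models do not decompose this cleanly, which is exactly why toolkits like InterpretML exist: they either train inherently interpretable models or approximate black-box behavior with explanations of this shape.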
“You can’t fake AI commitment any more than you could fake e-commerce commitment. Either you were a company that went web native – like Amazon – or you were a company that put a website up and declared victory – like Sears.” – Frank Chen
As was the case with web and mobile, companies must reinvent themselves as AI native. This often means that enterprises must be willing to disrupt existing business models in order to take advantage of new opportunities derived from AI.
With web and mobile, enterprises developed specific cultural habits like continuous deployment and A/B testing to achieve success. Similarly, success with AI requires deeply rooted engineering practices that seek out new data sources and algorithms to enable business transformation.
*“You have more data than you think.” — Frank Chen*
*“Use humans when you have no data. Begin manually and then automate, and even then consider rules, heuristics and much simpler statistical approaches to start.” — Debo Olaosebikan*
AI models are only as good as the data they can learn from. A common explanation for AI failure is that there was not enough data to solve the problem.
However, there are several techniques that enable boot-strapping ML models with little or no initial data:
• Use humans when you have no data. Some problems can start by collecting human-generated data. For example, to make a chatbot, it might make sense to first have humans answering messages until the service has amassed enough data to begin automating parts of the task.
• Repurpose existing data for new AI applications. Frank Chen makes the interesting observation that large organizations have often been collecting data for a very long time and just haven’t realized its value. An example is call logs (“your call is being monitored for quality assurance purposes”), which can be used to train customer support agents or to understand customer sentiment. Another example is to combine data from email, calendar invites and payment records to paint a picture of the customer lifecycle instead of giving up because the CRM is not up to date.
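The “begin manually, then automate” path above can be sketched in a few lines: route messages with simple keyword rules, escalate everything else to a human, and log every exchange as future training data. The intents, keywords and log format here are illustrative assumptions, not a specific product’s design.

```python
# Hypothetical keyword rules for a support chatbot; anything unmatched
# goes to a human agent while the log accumulates labeled examples.
RULES = {
    "refund": ["refund", "money back"],
    "shipping": ["where is my order", "tracking"],
}

training_log = []  # (message, intent) pairs collected for a future ML model

def route(message: str) -> str:
    text = message.lower()
    for intent, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            training_log.append((message, intent))
            return intent
    training_log.append((message, "human"))  # no rule matched: escalate
    return "human"
```

Once the log is large enough, the same labeled pairs can train a statistical classifier to replace the rules – the bootstrapping loop Olaosebikan describes.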
*“Seek out partners. Even if you have talent internally you need to augment that talent with external partners.” – Mike Lebron, Senior Director, IT Innovation at Canon* AI is about applying specific predictive algorithms to solve business problems in a new way. Most enterprises are experts in multiple business domains, but no enterprise has experts in every relevant AI domain. The most valuable partners for enterprises are firms that can provide a cross-section of AI skills and experience applying those skills to solve business problems. These AI partners supply not only talent but also cultural and organizational know-how.
*“A big vision buys the time to prevent the premature deaths of AI initiatives.” – Debo Olaosebikan, Co-founder & CTO Gigster* Since most companies are early in their AI journey, a key challenge is racking up early wins that validate the business value of AI projects. For enterprises this means convincing business executives that AI can ultimately deliver billions of dollars in value to the company.
Clearly it’s critical to pick problems that are big and contribute meaningfully to the business. One of the most common mistakes is halting AI projects prematurely; focusing on a big problem allows you to ask for the time for things to play out. Though pitching a big vision is important, the art of breaking problems down and starting with small wins is essential.
*“In the future, all AI teams will have an ethicist.” — Frank Chen* Google released its AI principles, and organizations like the Partnership on AI and OpenAI highlight the increasing importance of considering AI ethics. For example, a poorly calibrated AI model for reading resumes might use lopsided historical data to systematically reject more women engineers than men engineers. And this goes beyond doing what’s ethical: doing what’s good is also good for business, as when IBM released inclusive datasets (e.g., Diversity in Faces) to build more accurate machine learning models.
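The resume-screening failure above can be caught with a simple audit: compare selection rates across groups and flag ratios far below 1.0. The pass/fail outcomes below are fabricated for illustration, and the 0.8 threshold is the widely used “80% rule” heuristic, not a legal standard.

```python
def selection_rate(outcomes):
    """Fraction of candidates selected (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical screening decisions from a resume-reading model.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5 of 8 selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2 of 8 selected

# Demographic parity ratio: disadvantaged group's rate / advantaged group's rate.
parity_ratio = selection_rate(group_b) / selection_rate(group_a)

# The common "80% rule" heuristic flags ratios below 0.8 for human review.
needs_review = parity_ratio < 0.8
```

A check like this is cheap to run on every model release, which is one practical way the ethicist role Chen predicts gets operationalized.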
The road ahead for AI is exciting and — as with other major technology shifts — there will be an adjustment period to develop new conventions and best practices. Ultimately, the winners will focus on real problems and shift their cultures to become natives of the new AI-powered business world.