By The Open Group
Artificial Intelligence (AI) has a lot of promise, but it is still in the hype phase. Although strides have been made in machine learning and cognitive computing, most practical applications for AI are still nascent. As such, this is the right time to begin developing AI standards that can address some of the issues that have already been identified with AI, such as potential bias and the ethical concerns behind the technology, so that business value can be maximized.
Syed Husain, Manager of Enterprise Architecture for Accenture, examined these topics at The Open Group London event in April 2018. In this far-reaching conversation, we spoke with Husain about where AI stands today, what the ultimate promise of AI is, and the value standards may be able to bring to this still-emerging technology.
How do you define Artificial Intelligence?
There are three aspects to Artificial Intelligence (AI) as it relates to how the industry functions. One of them—the one that's most commonly used—is machine learning, the kind of technology that actually learns from its interactions with you and gets better over time. It consists of supervised learning, unsupervised learning, and reinforcement learning—these are the domains of machine learning. From an organizational perspective, this is where you have specific data or skill sets, and it's what gives you a strategic advantage: you can do what you currently do much better than the competition because you have either the data or the specific skills to interpret that data. That's the strategic differentiator of machine learning.
There’s also cognitive computing, which is another element of AI. Cognitive computing is basically reusing someone else’s solution in a way that makes your overall process more intelligent. For example, if you’re using Alexa for your chatbot channel, that makes your process AI-enabled, but it’s not going to give you a competitive advantage. You can use it to reduce customer-facing costs and come up with novel ways to incorporate it into your process, but it doesn’t really differentiate you specifically, because it’s so easy to do now that everybody’s doing it.
The third category, which is very heavily debated (a lot of people say it’s part of AI, others say it’s not), is the world of RPA, or robotic process automation, which looks at how you can take current processes and automate them. That’s very focused on reducing costs—it doesn’t really render any strategic differentiation, it just helps you cut costs pretty aggressively.
In three broad strokes, that’s the world of AI as most people define it.
AI has been around for a long time and yet it always seems to be on the cusp of breaking through and then that doesn’t materialize. Where does AI stand today?
AI as a technology has had two “winters.” Every time that we think we’ve cracked it, it turns out we actually haven’t. I think the first time was in the mid-1950s, when four researchers got together and said, ‘We think it will take us a week to crack AI,’ which was pretty idealistic. But they were like, ‘Don’t worry, we’ve got this.’ And lo and behold, 60 years later, nobody has the answer.
But there’s the Turing test. Alan Turing said that computers would get smart to the point where you wouldn’t be able to differentiate their responses from a human’s, and that’s when you’ve gotten to AI. What’s different now is that we actually have the ancillary tools to make AI usable for organizations and individuals. We have the data, and we have the processing power. I don’t want to say we have new algorithms, because the algorithms in use have been around for a long time. But people are upgrading and fixing those algorithms in different ways, experimenting with how they can be applied, and learning from that. So AI is benefitting from the ancillary technologies around it getting better.
As with all technologies, there are benefits and drawbacks to AI – what are some of the potential benefits of this technology?
If you take AI to its logical conclusion, what it gives you is the ability to map and customize products at levels unheard of before. We could create a movie that we know you would love and only you would love because you like certain things, and we could charge you a price that we know you would be willing to pay for that movie. That’s the promise of AI. It can create products that are customized to individual people, and it can do that because it knows a lot about you and it has the raw horsepower to actually customize things to match individual preferences and needs. That’s the ultimate promise of AI.
If you think about it in a very dumb sense, what AI does today is look at data and create a program, as opposed to you thinking about it and writing a program. It’s just a bunch of if/then statements—that’s the other end of the spectrum, if you think about it in a very reductionist way.
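As a minimal illustration of that reductionist picture, the sketch below—our own, using scikit-learn, a toolchain not named in the interview—fits a decision tree to data and prints out the if/then rules the model learned, instead of having a programmer write them:

```python
# A sketch of "AI looks at data and creates a program":
# a decision tree learns nested if/then rules from examples.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Instead of a human writing if/then statements, the model
# derives them from the data.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned rules as if/then branches.
print(export_text(tree, feature_names=load_iris().feature_names))
```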
On some level, what’s the benefit of having everything tailored to the individual like with your movie example? If you have something like that, you miss out on the community aspect of the experience of Star Wars or Black Panther and sharing that art or whatever the experience is. Why go that far?
That’s a really good question. If you individualize something like entertainment that’s for your personal enjoyment, that maximizes your personal enjoyment of watching a movie. But if you think about it from a different perspective, such as education, you could mass-customize everybody’s potential. For example, I learn best by doing. So if my schooling had been specifically customized to me, that would have had benefit. If teaching were customized to the way I learn best—a teacher slowing down where I can’t pick up a concept very quickly and speeding up where I grasp it quickly—that kind of customization could happen in fields like education or healthcare and be very beneficial. But in terms of entertainment, yes, part of the value is that we have a shared experience and we can talk about Black Panther after it’s done or discuss why Jaime Lannister would charge a full-grown dragon in Game of Thrones.
Why are companies having a difficult time finding ways to fit AI into the enterprise?
A bit of it is because the technology is so new. Finding people who actually understand how AI works is hard. But at the same time, the real power of AI as it stands today starts from something really boring, like redesigning your business processes around AI—it’s like doing digital all over again. Business process re-engineering was a big part of adopting digital into your organization, and now that we’re doing AI, you have to do the same thing again. And people expect really marvelous things from AI, when AI can’t actually do marvelous things yet. It can do certain things really well, and if you know what those are and start adopting them into your business processes, it’s relatively straightforward to start adopting AI. For example, using Alexa: if you want to create a new customer channel where you can chat with customers very easily and take advantage of the millions of users Alexa has now, you can do that relatively easily and quickly. But people haven’t even taken advantage of that yet, so the uptake is slower than you would expect given the hype levels we’ve experienced.
Why is that? Are there reasons for that?
A lot of companies haven’t even gotten through digital transformation yet. And a lot of companies, especially the ones that we talk to, are experimenting with AI on the fringes. So some parts of the company are experimenting with AI and looking at it, but the real value of AI will be delivered when you have an industrialized methodology to actually implement AI across the board for your entire organization. Having a center of excellence that does AI—this is what Google and Facebook have done; they’ve created centers of excellence for AI—means people can put it back into their value streams and actually go and deliver AI into the products they are creating. That takes money.
The biggest thing it takes is top-down management commitment. If you ever read the book ‘Moneyball,’ that was what Billy Beane brought to the Oakland A’s—he brought management commitment, trust, and analytics. The data they used had been around for many years. People have been publishing data about baseball since at least the 1940s or 1950s, but nobody used that data. The statistics they used were really simple—those had been around for a long time as well—so why didn’t people put two and two together? The answer is, people did put two and two together, but no baseball organization was willing to pay for it or act on the output of the analysis until management commitment made them do it.
We hear a lot of hype about AI disrupting how work is done, in particular, and a huge loss of jobs to AI – what are your thoughts on this?
Will there be disruption? Yes. We have it already. It started with things like driverless cars—that will impact the workforce in a significant manner. If you take it to the extreme, you come back to a world like in Star Trek, where the economy is basically a communist one in which everybody works and gets resources according to their needs. If you take machine learning to its extreme and you invent general artificial intelligence, that may be the world we end up in, where there’s a central intelligence controlling everything, and because the flow of data is instantaneous and perfect, communism may work. But that’s not realistic.
What will probably happen is that governments may need to redistribute wealth once companies displace enough workers, and start experimenting with a basic living wage—giving people a living wage whether they’re working or not. That’s one possibility once a large majority of the population can’t work. This has happened over and over again; we don’t have telegraph workers today, and this is similar. People need to be retrained to provide value-added activities that aren’t valuable today but may be in the future, so that needs to happen. Governments will have to step in, tax companies that have large-scale automated workforces, and use that to retrain people and provide a living wage—that’s just one example.
Many people also have ethical concerns about AI. What are some of the common concerns people have with AI?
Are you familiar with the example of how Target was using data about shopping patterns to predict pregnancy and how they revealed that a teenager was pregnant to her parents? That’s an example of where our technical capability exceeded our humanistic ability to understand the consequences of what we were implementing. That’s coming up more and more.
When it comes to AI, there are three ways you can mess it up ethically. The first is that the data you use to train your machine learning models can be skewed. For example, if you make racist decisions consistently, that’s in your data, and you train a machine learning model on that data, the machine learns from your example. If that’s racist, what the machine does will be racist. A great example of that is the Microsoft Tay chatbot, which became racist, misogynistic, and genocidal within three hours of talking to people. The machine learns from the data we feed it; if the data is bad, then what it learns is bad.
That’s one way. The second is that the implementation of a solution could be biased. For example, if certain classes of people don’t use mobile apps and all of the solutions you’re implementing machine learning in are delivered via mobile apps, then your solution is missing out on a large part of the population. And if you use the data you get from that in non-mobile-app contexts, then obviously you’re missing a large part of the population and they’re being discriminated against. An example of this is when it was noticed that Google’s ads for high-paying jobs weren’t even being shown to women—women were being precluded based on the data and how things were implemented.
The third way is the algorithms you use to train the model. If you’re not using ensemble models and you rely on one specific model, the model itself is based on a single, potentially biased view of the world.
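To illustrate that ensemble idea, here is a hedged sketch—an implementation choice of ours, using scikit-learn’s VotingClassifier—that combines several different model families so that no single model’s view of the world dominates:

```python
# A sketch of the ensemble idea: instead of trusting one model's
# view of the world, combine several families and let them vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities across models
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```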
How do you go about combatting that kind of bias?
One of the things people are looking at is the explainability of machine learning decisions. I believe that in Europe, when the General Data Protection Regulation (GDPR) was being introduced, the law as originally written said that any machine learning decision would need to be explainable to humans, and so people have been experimenting with how you make a decision that a machine makes explainable to humans. There are algorithms out there now. It sounds weird, but we’re using machine learning to explain machine learning to humans, and that’s really novel—but how far down does this go? How many turtles do you stand on?
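One common model-agnostic technique in this space is permutation importance, which explains a fitted model by measuring how much its score degrades when each input feature is shuffled. A minimal sketch, assuming scikit-learn (not a method named in the interview):

```python
# Permutation importance: probe a trained model by shuffling
# each feature and measuring how much the score drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts the model --
# a human-readable account of what the model relies on.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```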
You talked at The Open Group London event about how standards can help in overcoming some of the challenges AI poses. How can standards address some of the issues AI faces?
For one of them—specifically, how your data could be biased—you would have standards for how you actually go about examining your data for skew or bias. You could implement those standards, and some of them could include tools that examine your data for you and say, ‘Did you know that your data doesn’t include women? Your sample is 100 percent men.’ It can actually do that—it can go through your data and make you aware that some variables are completely non-deterministic. That’s one way to do it.
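Here is a hypothetical sketch of the kind of automated check such a standard might mandate—flagging a skewed sensitive attribute and columns that carry no signal. The column names, data, and thresholds are invented for illustration:

```python
# A toy data audit: warn when a sensitive attribute is heavily
# skewed, and when a column has only a single value.
import pandas as pd

def audit(df, sensitive_columns):
    for col in sensitive_columns:
        shares = df[col].value_counts(normalize=True)
        if shares.iloc[0] > 0.9:
            print(f"WARNING: '{col}' is {shares.iloc[0]:.0%} "
                  f"'{shares.index[0]}' -- the sample looks skewed")
    for col in df.columns:
        if df[col].nunique() <= 1:
            print(f"WARNING: '{col}' has a single value -- "
                  "it carries no signal")

# Hypothetical dataset: 98% men, one constant column.
df = pd.DataFrame({"gender": ["male"] * 98 + ["female"] * 2,
                   "region": ["EU"] * 100})
audit(df, sensitive_columns=["gender"])
```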
The other way to think about it is to have a standardized process. Even a poor standardized process is better than no process, because a process can always be improved—you can examine your standards and improve them over time. Whereas the way machine learning is done today is very ad hoc: you do the best you can with your data, data scientists pull all-nighters to put together a model, and it goes into production.
What are some ways that companies can succeed with AI today?
If you think about it, the cognitive solutions are the low-hanging fruit: text to speech, speech to text, optical character recognition. There are quite a few APIs you can use for natural language processing. You can actually extract topics and sentiment from text, and these are all easy to implement now—I don’t want to say very easy, but a lot easier than they used to be. For example, Google, Amazon, and IBM all have APIs on their cloud services where you can pass in a certain amount of text and get back the topics and the sentiment behind them. With that, you have a really easy way to start implementing machine learning within your organization.
A really interesting use case is using topic and sentiment analysis on customer data—figuring out what your clients don’t like about you and your product and understanding how much they don’t like it. For example, if a customer says, ‘Your mortgage process sucks, but Lisa from customer service was really nice,’ you’ll be able to understand that people don’t like your mortgage process, but also that your customer service is doing well.
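As an illustration of that use case, here is a hedged sketch using Amazon Comprehend—one possible choice among the cloud services mentioned above, not necessarily the one Husain has in mind. AWS credentials and region are assumed to be configured:

```python
# Sentiment and key-phrase extraction on a piece of customer
# feedback, via the Amazon Comprehend API (boto3).
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

feedback = ("Your mortgage process sucks, but Lisa from "
            "customer service was really nice.")

sentiment = comprehend.detect_sentiment(Text=feedback, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=feedback, LanguageCode="en")

print(sentiment["Sentiment"])                     # e.g. MIXED
print([p["Text"] for p in phrases["KeyPhrases"]])  # topics mentioned
```

A mixed sentiment on feedback like this is exactly the signal in the example above: the mortgage process draws the negative score while the customer-service mention draws the positive one.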
There are lots of examples in the world of anti-money laundering and fraud detection—a lot of companies are doing some really cool things in this space. The funniest example I came across was from a friend, who told me they found that if there’s a Diet Coke in an order, the order is less likely to be fraudulent. 7Up is more likely to be fraudulent. And an even better one: they’d never found a case of fraud where people were buying tofu.
http://www.opengroup.org @theopengroup