Do Androids Dream of Electric Sheep?

By Stuart Boardman, KPN

What does the apocalyptic vision of Blade Runner have to do with The Open Group’s Open Platform 3.0™ Forum?

Throughout history, from the ancient Greeks and the Talmud, through The Future Eve and Metropolis to I, Robot and Terminator, we seem to have been both fascinated and appalled by the prospect of an autonomous “being” with its own consciousness and aspirations.


But right now it’s not the machines that bother me. It’s how we try to do what we try to do with them. What we try to do is to address problems of increasingly critical economic, social and environmental importance. It bothers me because, like it or not, these problems can only be addressed by a partnership of man and (intelligent) machine and yet we seem to want to take the intelligence out of both partners.

Two recent posts that came my way via Twitter this week provoked me to write this blog. One is a GE Report that looks very thoroughly, if somewhat uncritically, at what it calls the Industrial Internet. The other, by Forrester analyst Sarah Rotman Epps, appeared in Forbes under the title There Is No Internet of Things and laments the lack of interconnectedness in most “Smart” technologies.

What disturbs me about both of those pieces is the suggestion that if we sort out some interoperability and throw masses of computing power and smart algorithms at a problem, everything will be dandy.

Actually, it could just make things worse. Technically everything will work, but the results will be a matter of chance. The problem lies in the validity of the models we use. And our ability to model complex problems effectively is at best unproven. If the model is faulty and the calculation perfect, the results will be wrong. In fact, when the systems we try to model are complex or chaotic, no deterministic model can deliver correct results other than by accident. But we like deterministic models, because they make us feel like we’re in control. I discussed this problem and its effects in more detail in my article on Ashby’s Law of Requisite Variety. There’s also an important article by Joyce Hostyn, which explains how a simplistic view of objectivity leads to (at best) biased results. “Data does not lie. It just does not (always) mean what you think it does” (Claudia Perlich, Chief Scientist at Dstillery via CMSWire).
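To make that concrete, here is a minimal sketch in Python. It is my own illustration, not something from the GE Report or the articles cited above: a “perfect” deterministic calculation run against a model of a chaotic system (the logistic map) that is wrong by a tiny fraction of a percent. Every step is computed exactly, and the prediction is still worthless within a few dozen iterations.

```python
# Illustration (my own, not from any of the cited sources): a flawless
# calculation on a slightly wrong model of a chaotic system. The "real world"
# is the logistic map with r = 3.9; our model assumes r = 3.8999.

def logistic(x, r):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

def simulate(x0, r, steps):
    """Iterate the map deterministically -- the arithmetic itself is exact."""
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(logistic(trajectory[-1], r))
    return trajectory

true_world = simulate(0.5, r=3.9, steps=30)
our_model = simulate(0.5, r=3.8999, steps=30)

for step in (5, 10, 20, 30):
    print(f"step {step:2d}: world={true_world[step]:.4f}  model={our_model[step]:.4f}")
# After a few dozen steps the prediction bears no relation to reality,
# even though every individual calculation was perfect.
```

The point is not the toy system itself but the shape of the failure: the computation never goes wrong, the model does.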

None of that detracts from the fact that developing a robot vacuum cleaner which actually “learns” the layout of a room is pretty impressive. But it doesn’t mean the robot is aware that it is a vacuum cleaner and that it has a (single) purpose in life. And just as well. It might get upset about us continually moving the furniture and decide to get revenge by crashing into our best antique glass cabinet.

With the Internet of Things (IoT) and Big Data in particular, we’re deploying machines to carry out analyses and take decisions that can be critical for the success of some human endeavor. If the models are wrong or only sometimes right, the consequences can be disastrous for health, the environment or the economy. In my Ashby piece I showed how unexpected events can result in an otherwise good model leading to fundamentally wrong reactions. In a world where IoT and Big Data combine with Mobility (multiple device types, locations and networks) and Cloud, the level of complexity is obviously high and there’s scope for a large number of unexpected events.

If we are to manage the volume of information coming our way, and the speed with which it comes or with which we must react, we need to harness the power of machine intelligence. In an intelligent manner. Which brings me to Cognitive Computing Systems.

On the IBM Research Cognitive Computing page I found this statement: “Far from replacing our thinking, cognitive systems will extend our cognition and free us to think more creatively.”  Cognitive Computing means allowing the computer to say “listen guys, I’m not really sure about this but here are the options”. Or even “I’ve actually never seen one of these before, so maybe you’d like to see what you can make of it”. And if the computer is really really not sure, maybe we’d better ride the storm for a while and figure out what this new thing is. Cognitive Computing means that we can, in a manner of speaking, discuss this with the computer.
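By way of illustration only, here is a toy sketch in Python of that difference. It is my own invention, not Watson’s API or any vendor’s product, and the function name and thresholds are made up: the contrast is between a system that always returns an answer and one that is allowed to present its options and its doubt.

```python
# Toy sketch (my own, not any vendor's API): a classifier that reports its
# uncertainty instead of pretending to be sure.

def classify_with_options(scores, sure_threshold=0.75, unknown_threshold=0.30):
    """scores: dict mapping label -> probability-like score (assumed to sum to ~1)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_score = ranked[0]

    if best_score >= sure_threshold:
        return {"verdict": best_label, "note": "confident"}
    if best_score < unknown_threshold:
        # "I've actually never seen one of these before" -- hand it to the humans.
        return {"verdict": None, "note": "unknown - needs human judgement", "options": ranked}
    # "I'm not really sure about this, but here are the options."
    return {"verdict": None, "note": "uncertain", "options": ranked[:3]}

print(classify_with_options({"cat": 0.92, "dog": 0.06, "fox": 0.02}))
print(classify_with_options({"cat": 0.45, "dog": 0.40, "fox": 0.15}))
print(classify_with_options({"cat": 0.28, "dog": 0.27, "fox": 0.25, "???": 0.20}))
```

Real cognitive systems are vastly more sophisticated than this, of course, but the design choice is the same: uncertainty becomes part of the answer rather than something hidden behind a single confident-looking result.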

It’s hard to say how far we are from commercially viable implementations of this technology. Watson has a few children but the family is still at the stage of applied research. But necessity is the mother of invention and, if the technologies we’re talking about in Platform 3.0 really do start collectively to take on the roles we have envisaged for them, that could just provide the necessary incentive to develop economically feasible solutions.

In the meantime, we need to put ourselves more in the centre of things: to make optimal use of the technologies we do have available to us, but not to shirk our responsibility as intelligent human beings to use that intelligence rather than seek easy answers to wicked problems.

I’ll leave you with 3 minutes and 12 seconds of genius:
Marshall Davis Jones: “Touchscreen”


Stuart Boardman is a Senior Business Consultant with KPN where he co-leads the Enterprise Architecture practice as well as the Cloud Computing solutions group. He is co-lead of The Open Group Cloud Computing Work Group’s Security for the Cloud and SOA project and a founding member of both The Open Group Cloud Computing Work Group and The Open Group SOA Work Group. Stuart is the author of publications by the Information Security Platform (PvIB) in The Netherlands and of his previous employer, CGI. He is a frequent speaker at conferences on the topics of Cloud, SOA, and Identity.