Artificial Intelligence has been in the news a lot lately, mainly because more and more people at all levels of society are starting to recognize its potential in virtually every area of human activity. From a briefing paper published by the European Parliament in October 2016:
The ability of AI systems to transform vast amounts of complex, ambiguous information into insight has the potential to reveal long-held secrets and help solve some of the world’s most enduring problems. AI systems can potentially be used to help discover insights to treat disease, predict the weather, and manage the global economy. It is an undeniably powerful tool. And like all powerful tools, great care must be taken in its development and deployment. However, to reap the societal benefits of AI systems, we will first need to trust it.
What kind of trust are we referring to here? This is a very complex question. The more we let AI into our lives, the more likely we are to develop a dependency on it, and how much we are willing to trust it will be in direct proportion to our willingness to have our lives altered by its outcomes, since the rise of AI will without question have a bearing on them, regardless of what aspect of life we might be talking about.
But since we don’t really know what kind of future we want for ourselves, how will we be able to trust AI if it pushes us in a direction that at first glance appears not to be in our best interest, if only because we might not fully understand the consequences of an AI-suggested action?
And so the critical question remains: with or without AI, what course of action is ultimately in humanity’s best interest? I have written enough on that subject in the past to suggest that, at minimum – owing to the fact that we really don’t know what we are or what we are about – our survival as a successful species ought to be of primary interest to us.
Beyond that it is anyone’s guess how to proceed from there, and this on the assumption that we will in fact be successful in not wiping each other out in some stupefying display of nuclear grandstanding. Then, hopefully, a day will arrive in the not-too-distant future when we might have an inkling of what is going on with us, and start living our lives in that context. In the meantime, it will likely be AI that helps us figure out what course of action we need to chart in order to reach that future.
I believe AI will be able to gain our trust gradually and take an ever greater role in our daily lives. The technology will continue to seduce us with its ability to seemingly give us everything we ask for, leading to our ever greater dependency on it, but the caveat is that throughout the application of AI we cannot take its credibility for granted, as much as it may continually seem to prove itself. At bottom, AI is a calculator working with an algorithm (a set of rules governing a deductive process), and any data derived from it is subject to the age-old dictum “garbage in – garbage out”. Safeguarding the integrity of the process is one thing; safeguarding the integrity of the information it works on is a whole different matter.
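The “garbage in – garbage out” point can be made concrete with a trivial sketch (everything here is a hypothetical illustration, not any real system): the very same, perfectly correct procedure produces a sensible answer on clean data and nonsense on corrupted data. The integrity of the process guarantees nothing about the integrity of the result.

```python
# A correct procedure applied to bad data still yields garbage.
# The function below is deliberately simple and bug-free; only the
# input data differs between the two calls.

def average_temperature(readings):
    """Average a list of sensor readings in degrees Celsius."""
    return sum(readings) / len(readings)

clean_data = [21.0, 21.5, 20.8, 21.2]        # well-calibrated sensors
garbage_data = [21.0, 21.5, -9999.0, 21.2]   # one sensor failed silently

print(average_temperature(clean_data))    # 21.125  (plausible)
print(average_temperature(garbage_data))  # -2483.825 (nonsense, same algorithm)
```

The algorithm never misbehaves; it is the unexamined input that poisons the output, which is why safeguarding the data deserves at least as much care as safeguarding the code.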
My biggest worry, however, would be to allow AI to modify its own algorithms in order to overcome its limitations, e.g., to let it simulate an inductive or inferential process so as to make it seem more “human”, or as smart, if not smarter. I’m thinking about situations where AI is faced with incompatible observations – or where there is simply not enough data – in which case it might be allowed to arrive at some kind of “best guess” scenario by either modifying one of its procedural rules or by introducing some other random factor to settle the issue, in order to simulate a reasoned conclusion. When it comes to finding a way out of conflicting data, a machine cannot benefit from a “gut” feeling, or from an appeal to instinct or intuition, since that essential quality of being human cannot be translated into machine language, no matter how sophisticated the algorithm.
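The kind of “best guess” mechanism described above can be sketched in a few lines (a hypothetical toy, not a description of any actual AI system): when two rules disagree and no further data is available to settle the matter, the program falls back on a random draw, producing a confident-looking verdict with no reasoning behind it at all.

```python
import random

# Toy illustration (hypothetical): two "rules" each vote yes/no on a
# decision. When they conflict and nothing in the data can break the
# tie, the system resolves it with a random draw -- simulating a
# reasoned conclusion where none exists.

def rule_a(evidence):
    return "yes" if evidence.get("signal_a", 0) > 0 else "no"

def rule_b(evidence):
    return "yes" if evidence.get("signal_b", 0) > 0 else "no"

def decide(evidence, rng=random):
    verdicts = {rule_a(evidence), rule_b(evidence)}
    if len(verdicts) == 1:
        return verdicts.pop()             # rules agree: a deductive result
    return rng.choice(["yes", "no"])      # conflict: a disguised coin flip

print(decide({"signal_a": 1, "signal_b": 1}))   # "yes" -- genuine agreement
print(decide({"signal_a": 1, "signal_b": -1}))  # "yes" or "no" -- pure chance
```

To the outside observer both answers look equally authoritative, which is exactly the danger: the coin flip is indistinguishable from a judgment.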
But this is precisely the crux of the matter: in order to safeguard a future that is in fact in our best interest, AI should never be more than an augmentation of human intelligence. That way we will continue to strive for a future that always puts the interest of humans first – efficacy over efficiency – by seeking to maximize our socio-economic well-being and the ethical-legal framework that supports it.
AI will not be able to apply critical and uniquely human qualities such as empathy and compassion to its output, although it might be able to simulate them to a certain degree, based on what it has “learned” from the observation of human behavior. This may be good enough for some behaviorists out there – followers of the late great behaviorist psychologist B.F. Skinner – who hypothesize that human behavior is strictly a function of external factors, not driven by thoughts or emotions, but I think they are definitely out to lunch on that front. There is a logical gap between what is seen on the outside in human behavior and that which motivates it from within, and what it means to be human is the only thing that fits in that space and is able to connect the two.
(For those who think that the essence of a human being can be reduced to a bunch of neurons firing in some particular fashion – so what is so special about that, right? Well, a couple of things. Clearly, you need to get out more – and a lot more, I would think; watch a sunset or two, take in a play, maybe listen to a little Mozart, or do something more adventurous such as a hike up to Machu Picchu, whatever. And then also ask the question of why these neurons are firing, and the earlier question: why are these neurons here in the first place, i.e., why is there anything here at all? No aspect of the given world should ever be taken for granted; doing so is the ultimate arrogance of man, and not only does this diminish the value of the world, but especially the value of the one making the assertion.)
As a result, AI must remain in a secondary position to the human mind in determining what might be best for us, although there is no guarantee that we will always decide to do the right thing there. Such is our predicament at the moment, and for the foreseeable future.