Artificial Intelligence

Artificial Intelligence has been in the news a lot lately, mainly because more and more people at all levels of society are starting to recognize its potential in virtually every area of human activity. From a briefing paper published by the European Parliament in October 2016:

The ability of AI systems to transform vast amounts of complex, ambiguous information into insight has the potential to reveal long-held secrets and help solve some of the world’s most enduring problems. AI systems can potentially be used to help discover insights to treat disease, predict the weather, and manage the global economy. It is an undeniably powerful tool. And like all powerful tools, great care must be taken in its development and deployment. However, to reap the societal benefits of AI systems, we will first need to trust it.

What kind of trust are we referring to here? This is a complex question. The more we let AI into our lives, the more likely we are to develop a dependency on it, and how far we are willing to trust it will stand in direct proportion to our willingness to have our lives altered by its outcomes, for the rise of AI will no doubt have a bearing on them, whatever aspect of life we are talking about.

It remains an open question, however, whether we will be willing to trust AI when it pushes us in a direction that at first glance appears not to be in our best interest, if only because we may not fully understand the reasons behind an AI-derived conclusion. From an article in Bloomberg Businessweek titled "Artificial Intelligence Has Some Explaining to Do" by Jeremy Kahn:

This is what gives AI much of its power: It can discover connections in the data that would be more complicated or nuanced than a human would find. But this complexity also means that the reason the software reaches any particular conclusion is often largely opaque, even to its own creators.

Nevertheless, I believe AI will continue to gain our trust gradually and take an ever greater role in our daily lives. The technology will seduce us with its ability to seemingly give us everything we ask for, deepening our dependency on it and leading us to believe that we can take its credibility for granted, and that would be a dangerous thing. At bottom, AI is a machine: a calculator working with an algorithm (a set of rules governing a deductive process), and any conclusion derived from it is subject to the age-old dictum "garbage in, garbage out". Safeguarding the integrity of a process is one thing; safeguarding the integrity of the data it works on is a different matter altogether.
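
A trivial sketch in Python may make the point concrete (the sensor scenario and the numbers are hypothetical, invented purely for illustration): the procedure below is perfectly sound, yet a single corrupt input is enough to turn its output into garbage.

    # A sound deductive procedure: average a list of sensor readings.
    def average(readings):
        return sum(readings) / len(readings)

    clean = [21.0, 21.5, 20.8]      # plausible room temperatures, in Celsius
    dirty = [21.0, 21.5, -9999.0]   # same sensor, one corrupted value

    print(average(clean))   # about 21.1, a sensible answer
    print(average(dirty))   # about -3318.8, nonsense from the same correct code

The algorithm never failed; only the data did, which is exactly why guarding the one is not the same as guarding the other.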

In addition, we need to worry about what has been referred to as "machine learning": the ability of an AI machine to "improve" on its own programming in order to overcome its deductive limitations, e.g., to let it simulate an inductive or inferential process and so make it seem more "human", or as smart, if not smarter. I'm thinking of situations where AI is faced with incompatible observations, or where there is simply not enough data, in which case it might be allowed to arrive at some kind of "best guess" by either modifying one of its procedural rules or introducing some other random factor to settle the issue and arrive at a reasoned conclusion.
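
To make that idea concrete, here is a toy sketch in Python (entirely hypothetical, not modeled on any real system) of such a "best guess" rule: when the observations conflict and no label has a majority, the program settles the issue with a random factor rather than refusing to answer.

    import random
    from collections import Counter

    def best_guess(observations):
        """Return the majority label; when the evidence is split
        (incompatible observations), settle the tie at random."""
        counts = Counter(observations)
        top = max(counts.values())
        candidates = [label for label, n in counts.items() if n == top]
        if len(candidates) > 1:
            return random.choice(candidates)   # the arbitrary tie-breaker
        return candidates[0]

    print(best_guess(["spam", "spam", "ham"]))  # clear majority: "spam"
    print(best_guess(["spam", "ham"]))          # conflict: a coin flip

The answer looks reasoned from the outside, but the tie was broken by chance, not by anything resembling judgment.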

The fact remains that a mechanical analysis cannot find its way out of conflicting data by means of a "gut" feeling, i.e., an appeal to instinct or intuition, or the application of other uniquely human qualities such as empathy and compassion, since these cannot be translated into machine language. At most, a machine might be able to simulate them to an extent, based on what it has "learned" about these qualities from observing human behavior in a variety of scenarios. And if AI can only simulate human reasoning, that is not the same as replacing it. This may be good enough for some behaviorists, followers of the late great psychologist B.F. Skinner, who hypothesize that human behavior is strictly a function of environmental factors and not driven by thoughts or emotions, but I think they are definitely out to lunch on that front. There is a logical gap between what is seen on the outside in human behavior and that which motivates it from within, and what it means to be human is the only thing that fits in that space and is able to connect the two.

The upshot is that the essence of what it means to be human cannot be quantified and reduced to a set of rules governing machine language, and that AI can never be more than an augmentation of human intelligence. That way, we will continue to strive for efficacy over efficiency, to choose quality over quantity, and to ensure that our continuing development as a species will always be a reflection of those choices, uncertain as our future seems at the moment.
