"Artificial Intelligence - The robots are coming to get us and other such stories.” #74 #cong16

By Ciaran Cannon.

I must admit that I am a technophile. I firmly believe that digital technology will facilitate profoundly positive change over the coming centuries. So when I read recently that Stephen Hawking, Bill Gates, Elon Musk and Steve Wozniak had co-signed a letter warning that artificial intelligence could potentially be more dangerous than nuclear weapons, I had to sit up and take notice. When some of the world's greatest minds, who have been at the forefront of global innovation, begin to question how that very same innovation could ultimately threaten the survival of our species, I feel obliged to dig a little deeper to test that hypothesis.

As you might expect, and more than a little ironically, there are thousands of pages of debate on this very issue to be found on the internet. In the next thousand words or so, I will try to give you a little insight into the challenges and opportunities that lie ahead for us all.

Just a few short miles from Cong, in the fields of Athenry, Apple is planning to spend €850m on developing a new data centre. Apple is also building a similar one in Denmark. In fact, all of the world's largest tech companies are falling over themselves to develop additional cloud storage, simply because the amount of data being generated globally is expanding at an unprecedented rate.

By 2020 there will be over 20 billion devices connected to the internet, generating an unrelenting torrent of data every second of every day. Cisco predicts that global cloud traffic will reach 8.6 zettabytes by the end of 2019, four times what it is today.

  • A zettabyte is roughly 1000 exabytes.
  • An exabyte has the capacity to hold over 36,000 years' worth of HD-quality video (see the rough calculation below).
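
To put those units in perspective, here is a quick back-of-the-envelope check in Python. It is purely illustrative - the constants are simply the figures quoted above, and the names are my own:

```python
# Rough conversion of Cisco's predicted 8.6 zettabytes of cloud traffic,
# using the figures quoted above (1 ZB ~= 1000 EB; 1 EB ~= 36,000 years of HD video).

EXABYTES_PER_ZETTABYTE = 1000        # decimal units, as quoted above
HD_VIDEO_YEARS_PER_EXABYTE = 36_000  # figure quoted above

cloud_traffic_zb = 8.6               # Cisco's prediction for the end of 2019

exabytes = cloud_traffic_zb * EXABYTES_PER_ZETTABYTE
video_years = exabytes * HD_VIDEO_YEARS_PER_EXABYTE

print(f"{cloud_traffic_zb} ZB = {exabytes:,.0f} EB")
print(f"That could hold roughly {video_years:,.0f} years of HD video")
# Output: 8.6 ZB = 8,600 EB; roughly 309,600,000 years of HD video
```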

Our data storage capacity is increasing at an incredible rate, and it is being matched by the computing power required to analyse all of that data at a forensic level. For example, the servers that power the Xbox in 2016 contain more computing power than was available in the entire world back in 1995.

In fact, if the world's total computing capacity could be directed at running minds as efficiently as human ones, we would currently have the equivalent of 1,500 extra human minds available to us. By 2030, it is predicted that we will have around 50 million such "minds", increasing the world's effective population by about 1%.

However, it is important to point out that despite the availability of unprecedented computing power, no one has managed to artificially create something that can function as well as one human brain. Computer-based neural networks, which try to mimic the brain, are still a long way from replicating what their human counterparts can achieve. Even the biggest current neural networks are hundreds of times smaller than the human brain.

However, there is no question that at some point in the future a computer, or an array of computers, will have the intelligence of a human brain. It is then only a matter of time until such brains become ubiquitous and, perhaps more importantly, sentient.

So the question we must ask ourselves is quite simple. As we feed every minute aspect of our lives into these newly minted minds, will we be creating the perfect ally or the perfect adversary? 

As you might expect, opinion is deeply divided on this profound question.

Almost every generation of humanity has experienced what it perceives to be major technological change, and with that perception comes a deep fear of the unknown. Every century or so we whip ourselves into a frenzy and predict that some new technology will threaten our very existence. Even more surprisingly, despite the benefit of hindsight - and the fact that we are still around - we cannot resist making similar doomsday predictions over and over again. Every generation succeeds in convincing itself that it is confronting a new technology that far surpasses the power of those experienced by previous generations.

So you could argue that the whole Artificial Intelligence (AI) debate is much ado about nothing. However, there is already enough evidence to suggest that AI is indeed the change we should manage very carefully, the one that could break the rule of those centuries of experience. And it is because of the rapid pace at which computing power and AI are developing that we are being advised to proceed with caution.

Place a wheel from 1016 beside a wheel from 2016 and they look pretty much the same. The electricity that boiled your kettle this morning is much the same as that which powered Mr. Edison's bulb in 1879.

However, the smartphone in your pocket today contains 2.7 times the processing power of the Cray-2 supercomputer, developed in 1985. The Cray-2 was the size of a family car and cost $16m, putting it out of the reach of most humans. It was used by NASA, Ford and General Motors, amongst others, to carry out millions of very complex calculations. Now millions of humans carry more than twice that computing power in the palm of their hand. That pace of technological change is unprecedented in human history, and it is increasing in speed every day.

I believe fundamentally that AI will be a powerful force for positive change. With that kind of computing power at our fingertips we will see a blurring of the lines between man and machine. AI will bring together the complementary talents of people and computing systems. It's already happening.

AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and elderly to walk, run, and even dance. In 2011, gamers playing a protein-folding game called Foldit helped to determine the structure of an AIDS-related enzyme that had eluded the scientific community for a decade - a feat that neither people nor computers working alone could come close to matching. The solution represents a significant step forward in the quest to cure retroviral diseases like AIDS.

Professor Geoff Hinton, known as the godfather of deep learning, has recently said that we are only at the dawn of AI and that attempting to second-guess where it may take us is "very foolish".

In his words: "You can see things clearly for the next few years but look beyond 10 years and we can't really see anything - it is just a fog."

As we negotiate our way through that fog it would seem sensible to do so very carefully and by laying down some basic ground rules. If we get those rules right from the very beginning, I believe that we have little to fear and much to look forward to.

Many modern AI experts, rather strangely, fall back on a simple set of guidelines devised by science fiction writer Isaac Asimov, who was remarkably prescient when he wrote a short story called "Runaround"... in 1942.

Asimov proposed three laws of robotics - taken from the fictional "Handbook of Robotics", 56th edition, 2058 - and they are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
  • A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
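
Notice that the three laws are strictly ordered: each one yields to those above it. A toy sketch of that priority ordering in Python - purely illustrative, with invented names like Action and decide, and booleans standing in for judgements no real system could make so simply - might look like this:

```python
# Toy illustration of Asimov's three laws as a strict priority ordering.
# This is not a real safety system: each law is reduced to a boolean that
# a genuine AI could never evaluate so cleanly.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would injure a human, or allow harm through inaction
    ordered_by_human: bool = False  # was this action ordered by a human?
    endangers_self: bool = False    # would damage or destroy the robot itself

def decide(action: Action) -> str:
    # First Law outranks everything: never harm a human.
    if action.harms_human:
        return "refuse (First Law)"
    # Second Law: obey a human order, even at cost to the robot itself.
    if action.ordered_by_human:
        return "obey (Second Law)"
    # Third Law: otherwise, avoid actions that endanger the robot.
    if action.endangers_self:
        return "refuse (Third Law)"
    return "permitted"

print(decide(Action(harms_human=True, ordered_by_human=True)))    # refuse (First Law)
print(decide(Action(ordered_by_human=True, endangers_self=True))) # obey (Second Law)
print(decide(Action(endangers_self=True)))                        # refuse (Third Law)
```

Of course, the hard part is hidden inside those booleans: deciding what counts as "harm" is precisely the kind of question those basic ground rules will have to answer.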

To me, that seems like quite a good place from which to start.

