Innovation in AI is accelerating - should we be afraid? #31 #cong17

By Clare Dillon.

There is so much innovation happening in so many areas at the moment, from decoding the human genome and 3D printing of body parts, to VR and nanorobotics, that it's hard to keep track. But conversations around innovation in Artificial Intelligence seem to spark a level of emotion I haven't seen in other discussions of technology trends.

No-one can deny the phenomenal amount of innovation that has happened in the AI space in the last decade - and many experts agree that the rate of innovation is increasing. Recently, I've had the pleasure of speaking about the potential of AI to change the world for the better. I love the topic. Apart from geeking out at what AI can accomplish these days, it gives me hope that we can solve some of the problems society grapples with in new ways. We can all appreciate the ways in which AI already makes our lives easier (just by helping people like myself actually find the shortest way from A to B), but when you look at how AI may be used in medical scenarios and to innovate around food production and climate change, it's easy to get very excited indeed.

However, as I learn more about these topics, I find I have two main types of emotional responses to stories about innovation in AI: unbridled excitement, or gut-wrenching fear. Nestled among the stories of technical wizardry and promises of making the future a better place, there are a large number of articles predicting the end of the world as we know it: biased and racist AIs perpetuating the worst possible views of individuals, AI stealing your job, autonomous war robots that will bring nothing but DEATH AND DESTRUCTION... And, much as the optimist in me would love to dismiss folks who tell these stories as a group of Chicken Lickens predicting the sky will fall, there is general agreement that AI and automation will cause huge disruption for organisations, societies and for us all as individuals.

It's easy to understand why we might get scared of AI - we're built to be easily scared. It doesn't help to hear people like Stephen Hawking talking about how AI "could spell the end of the human race". Most movies and media featuring AI usually don't end with rainbows and butterflies. And if, as Elon Musk suggests, we are "summoning the demon" that poses the "biggest existential threat" to the human race, perhaps we should be shaking in our boots. 

But I believe fear is the wrong response to what's happening in the world of AI. Acting from a place of fear is just not a good idea. What good does it do us to have everyone ready to fight or take flight when confronted with the prospect of AI being integrated into our lives? Are you ready to up stakes and move off-grid? I doubt it! And if a group of modern-day Luddites think they're going to smash up my smartphone - they have another think coming. Getting stressed about the whole thing doesn't help anyone either - it's not like we need more stress these days. So how should we react?

Practically speaking, I don't believe we can avoid, out-run or reverse the AI revolution - that Pandora's box is already wide open. Regardless, I am not willing to give up on all the marvellous potential it has. I still believe we need to approach the coming AI revolution with optimism. Not the type of passive optimism which leaves us smiling, cooing at robotic pets, crossing our fingers and hoping for the best - but a type of engaged optimism which sees many more people getting involved in defining how and where AI should be employed for the best possible outcome for all of us.

There are a number of institutions already engaged in these efforts: Oxford's Future of Humanity Institute, NYU's AI Now Institute, the Partnership on AI and the Future of Life Institute are all looking at the ethical and social implications of a world with AI. But most organisations looking at the social impact of AI are either academic institutions (requiring you to take a PhD to participate in the conversation) or industry groups made up of tech companies interested in getting some standards set around AI. Some government agencies are getting in on the act, delivering economic impact reports and action plans. All these types of discussions are necessary and worthwhile - but I believe more individuals need to get in on the conversation as quickly as possible.

This shouldn't mean you have to get a PhD in AI. Many AI action plans feature a lot of recommendations about technical upskilling. I absolutely support all efforts to increase technical skills in the workforce and in our schools. But I also believe that they are not the highest priority when it comes to education around AI. Here is what I would like to see everyone spend some time on:

  1. Get informed about the latest trends in technology, and their potential benefits and risks. A quick Google search will get you a million articles around the latest AI innovation in whatever field you work in. You don't have to be technical to appreciate the benefits of the applications of AI. On the other hand, this article from the World Economic Forum or the most recent report from the AI Now Institute gives a good overview of the other considerations around AI adoption.
  2. Figure out the kind of world you want to live in. This is something we should all do anyway. However, because things are moving so fast in the area of AI, it is predicted that the related economic and social change is going to happen quicker than we think. Therefore, it is now more important than ever for people to be clear about the direction they want our society to move in. Getting to a collective understanding of where we want to head (and where we don't want to head) helps us shift trends, make decisions, and vote with purpose. I really liked the PwC report on the Workforce of the Future. It helped me visualise and describe the kind of world I want to see in the future (I like the Yellow one).
  3. Start new conversations about how we can get to that type of world. I don't think anyone has all the answers yet. There are already some good documented recommendations for how we approach the adoption of AI (for example, from the 2017 report from the AI Now Institute), and those need to be amplified and actioned. More conversations also need to be had on the topic by a wider set of people, standards need to be set, and policies need to be formed.

I look forward to furthering this conversation at #cong17. The implications of AI are too significant to leave to the academics and techies. It's time to conquer any fears we might have. It's time for us all to help shape the potential future these innovations are creating.

CongRegation © Eoin Kennedy 2017 eoin at congregation dot ie