Ethics of AI - The sweet spot between business, democracy and application of AI.
With each leap in our ability to invent and change what we can achieve, we have harnessed a new 'power' – a new energy that has redefined the boundaries of imagination. Steam and the industrial revolution; electricity and the age of light; and so, again, we stand on the precipice of another seismic leap: the era of Artificial Intelligence (AI).
However, part of this change will be our approach to the ethics of AI – arguably the most interesting debate policymakers will have. It is also one of the most urgent. The pace of algorithmic innovation and the scale of deployment mean it is no longer sustainable for businesses to decide ethical dilemmas in isolation. Politicians and regulators must engage.
Our present view of AI is heavily coloured by how this new power will deliver automation and potentially reduce process-reliant jobs; by how those who hold the pen on writing the algorithms behind AI could exert vast power and influence over the masses; or simply by a fear that, if we let the AI genie out of the bottle, we will no longer know who is really in control. The challenge is that the sheer, limitless potential of AI is intimidating. And if, like me, you are from a certain generation, these seeds of fear – and the fascination with artificial, computer-based intelligence – were planted by numerous Hollywood movies of every genre playing on our hopes, dreams and fears about what AI could do to us. Think of the unnerving subservience of HAL in 2001: A Space Odyssey (1968), the menacing and semi-obedient Maximilian from The Black Hole (1979), a fantasy woman conjured by the power of 80s home computing – 'Lisa' in Weird Science (1985) – and, of course, the ultimate hellish future of machine intelligence taking over the world in the form of Skynet in The Terminator (1984). These and many other futuristic interpretations of AI have fanned the flames in the minds of engineers, computer scientists and super geeks alike to get computers to talk, to walk, to run simulators or (and it was a great achievement!) even to beat the reigning world chess champion, Garry Kasparov, as Deep Blue did in 1997. We have strived for sci-fi to become reality.
But in the world of the movies, the difficult questions did not need to be addressed.
Processing power and supercomputing have continued to develop and have now converged with our ability to both create and harness vast amounts of data. Underpin this with the ability to connect everything to everything via the internet and voilà – we have the ultimate meta-ecosystem, in which data scientists can develop systems that not only replicate human activity but also learn, adapt, predict and decide. And this capability can be applied across the whole spectrum of machines and, more importantly, human services.
We have poured rules (algorithms) into machines and made them 'thinking machines' – and we have stopped prioritising making robots look and feel like us, focusing instead on enabling them to do more activities for us. In the process, some machines have surpassed humans, doing things faster and better, and are therefore perceived to be more intelligent than us.
As we peer into the near future, we see a move from the 'pioneering' era to the 'application' and generative era of AI, in which technology is now creating – and the line between human innovation and technological outcome is becoming harder for us to define.
So, with all this new power, and with so many opportunities and benefits to be derived from its application, what should we fear? Well, my answer is not one from Hollywood science fiction, and it does not concern individuals losing control to machines, but rather how we will ensure that this power remains democratic, transparent and accessible, and that it benefits the many.
How will we ensure that control does not fall into the hands of the few; that wealth does not determine who benefits from innovation; and that a small set of organisations does not gain ultimate global control over, or influence on, our lives?
Because what we are really learning in the pioneering era is that it is the vast amount of information being created and disseminated that has the power to influence and change.
The costlessness of creating, sending and sharing information means that huge tsunamis of messaging can be not only created but also targeted to drive specific results – whether by commercial businesses targeting consumers, or by political movements and parties seeking to inform citizens. But the real question is: when does information become misinformation – and, following on from that, who or what is the arbiter of what is right or wrong?
The impact of misinformation on society can and will be significant. Harm, negative outcomes and the erosion of trust can flow both from actors who deliberately spread falsehoods and from those who believe in what they are doing even when it leads to negative results.
So where will the power to arbitrate lie? Do we need to continue to rely on the interventions of the global platforms – Facebook, Twitter, Google and others – to censor content and set its 'moral settings'? If so, where is the democratic mandate in those organisations? And how does the state then influence or regulate these global platforms to enable the effective creation and application of moral rules?
The power of misinformation is one of our key concerns as we move forward with utilising the power of AI. Governments and global businesses must now work side by side to navigate and establish the right regulatory frameworks – to achieve the ethical sweet spot of transparency, accountability and fairness – though we know that both sets of actors can turn from gamekeepers into poachers in an environment where, to do what they believe is right, they must persuade us that they are the right people to do it.