My Lords
Are we ready for the power of artificial intelligence? With each leap in our ability to invent and to change what we can achieve, we have harnessed a new ‘power’ – a new energy that has redefined the boundaries of imagination. Steam and the industrial revolution; electricity and the age of light. And so, again, we stand on the precipice of another seismic leap. However, the future of AI is not just about what we can do with it, but also about who will have access to and control of its power. If you believe even half of what is written and said about AI, it has the ability to become ubiquitous, like electricity: a technological power capable of driving economic and social change for the next generation.
So, I welcome the attempt by the noble Lord, Lord Holmes, via this Bill to encourage an open, public debate on democratic oversight of AI.
However, I do have some concerns. Our view of AI at this stage is heavily coloured by how this new power will deliver automation and potentially reduce process-reliant jobs; by how those who hold the pen in writing the algorithms behind AI could exert vast power and influence over the masses; and by a fear that the AI genie is out of the bottle and that we may not be able to control it. The sheer limitless potential of AI is intimidating.
If, like me, you are from a certain generation, these seeds of fear and fascination with artificial, computer-based intelligence were planted long ago by numerous Hollywood movies picking at our hopes, dreams and fears as to what AI could do to us. Think of the unnerving subservience of HAL in Stanley Kubrick’s 2001: A Space Odyssey, made in 1968; the menacing and semi-obedient robot Maximilian from the 1979 Disney production The Black Hole; a fantasy woman called ‘Lisa’, created by the power of 80s home computing, in Weird Science from 1985; and of course the ultimate hellish future of machine intelligence taking over the world in the form of Skynet in The Terminator, made in 1984. These and many other futuristic interpretations of AI helped to fan the flames in the minds of engineers, computer scientists and super-geeks – many of whom created and now run the biggest tech firms in the world.
I previously wrote about these inspirations in Digital Vision for AI, a thought-leadership paper published by Atos over five years ago – and I refer to my entry in the register of interests, having previously been an executive of that business –
and in the years since, AI has increasingly become part of our lives. Advances in processing power, coupled with the availability of vast amounts of big data and the development of large language models, have led to an era of commercialisation of AI: dollops of AI are available in everyday software programmes, via chatbots and automated services, and the emergence of ChatGPT has obviously turbocharged public awareness and usage of the technology. We have poured algorithms into machines and made them ‘thinking machines’, and we have stopped prioritising trying to get robots to look and feel like us, focusing instead on the automation of systems and processes, enabling them to do more activities for us.
My Lords, we have moved from the ‘pioneering’ to the ‘application’ era of AI. So, with all this innovation, with so many opportunities and benefits to be derived from its application, what should we fear? Well, my answer is not one from the world of Hollywood science fiction, and it does not relate to individuals losing control to machines, but rather to how we will ensure that this power remains democratic and accessible and benefits the many. How will we ensure that control does not fall into the hands of the few; that wealth does not determine the ability to benefit from innovation; that a small set of organisations does not gain ultimate global control or influence over our lives; and that governments and bureaucracies do not end up ever furthering the power and control of the state through well-intentioned regulatory control? We are at a critical time, when the future power of AI is still being understood, but we already know that it is shaping advancements in every field. Only yesterday we saw headlines about breast cancer breakthroughs, as scientists develop AI tools that can predict treatment side effects and spot tiny signs of disease that human doctors miss. From science and medicine to space exploration and energy, we are going to remodel how society is served. That is why we must appreciate the size of this opportunity, think about the long-term future, and start to design the policy frameworks and new public bodies that will work in tandem with those who will design and deliver our future world.
But – and here is the rub – I do not believe we can control, manage or regulate this technology through a singular ‘Authority’. Let me say that I am extremely supportive of the ambitions of the noble Lord, Lord Holmes, to drive this debate. But may I humbly suggest that the question we need to focus on is how we can ensure that the innovations, outcomes and quality services that AI delivers are beneficial and well understood.
But the Bill as it stands may be over-ambitious in the scope it gives the AI Authority, which would be required:
to act as oversight across other regulators – described as ‘horizon scanning’;
to assess safety, risks and opportunities;
to monitor risks arising from AI across the economy;
to promote interoperability and a regulatory framework;
and to act as an ‘incubator’, providing testbeds and sandboxes for new innovations before they come to market.
My Lords, to achieve this and more, the AI Authority would need vast cross-cutting capability and resources.
Again, I appreciate what the noble Lord, Lord Holmes, is trying to achieve, and as such I would say we need to consider with more focus which questions we are trying to answer.
I wholeheartedly believe and agree that a critical role will be to drive public education, engagement and awareness of AI – where and how it is used – and to clearly identify the risks and benefits to end users, consumers, customers and the broader public. But I would strongly suggest we do not begin this journey by applying, as is stated in section 5, sub-section 1, clause 3, ‘unambiguous health warnings’ to AI products or services. Language matters, as does how we approach this task to reassure and support public understanding – and critical to this will be our need to work hand in hand with industry and trade bodies to build public confidence in this technology.
I do believe there will eventually be a need for some form of future government body or authority to help provide guidance to both industry and the public on how AI-supported outcomes – especially those delivering public sector services – are transparent, fair in their design and ethical in their approach.
This body will need to take note of the approach of other nations, and it will need to engage with local and global businesses to test and formulate the best approach. So, although I do not fully support many of the specifics of the Bill, I do welcome and support the journey on which this Bill, the noble Lord and this debate are setting us.