When John Barber patented the first gas turbine in 1791, he could have had no conception of the transformation his invention would trigger.
In 1886, after almost a century of innovation, Karl Benz began the first commercial production of a car powered by an internal combustion engine.
Ten years later one of his products ran down Bridget Driscoll near Crystal Palace, making the Croydon resident the first person in Britain to be killed by a car.
The driver that day didn’t need a licence – they weren’t introduced until 1903 – and wouldn’t need to pass a driving test for another 32 years.
By then there was a Highway Code (first published in 1931) that included instructions for horse-drawn vehicles; drink-driving one of those had been an offence nearly 30 years before the same rules applied to car drivers.
Seat belts, which cut fatalities by half, were not compulsory in the UK until 1983.
The point is that the internal combustion engine was a world-changing technology that took decades to develop, but effective regulation took much, much longer.
Like the man with a red flag paid to walk in front of the new-fangled machines, lawmakers were left choking in the dust as manufacturers charged ahead and tarmac transformed economies.
The AI revolution
Today artificial intelligence promises a similar industrial revolution, but the pace of development can be measured in months and years rather than decades, and its inventors are clear-eyed about the risks.
The last month has seen apocalyptic predictions, with leading developers warning that “generative” AI, capable of producing text and images from prompts and learning as it goes, poses a “societal-scale” risk similar to pandemics and nuclear war.
That makes regulation and oversight crucial, and Rishi Sunak says it’s an area he wants to own, declaring the UK can lead an international discussion.
He will host a "global summit" in the autumn, with suggestions that a UK-based agency, modelled on the International Atomic Energy Agency, could follow.
Beyond providing favourable coverage of the prime minister’s trip to Washington, the move is part of a wider ambition to position the UK as a centre of AI, and make digital innovation a priority for delivering growth.
He sees regulation as an opportunity too, though what that might look like in practice is much less clear.
We do know that as recently as March the government was intent on following the motor car model.
In a white paper, it said it would focus “on the use of AI rather than the technology itself” in order “to let responsible application of AI flourish”.
Regulating AI
Instead of passing laws to limit the technology, existing regulators will monitor its application in their areas, working with developers to establish acceptable boundaries.
So rather than establishing a central AI body, medical regulators would oversee its use in diagnostics, Ofcom would remain responsible for policing machine-generated misinformation online, and the Office of Rail and Road would decide whether it is safe for AI to analyse inspections of transport infrastructure.
This model effectively already applies in industries where generative AI is in use.
Energy company Octopus is using an AI tool to answer more than 40% of customer correspondence, but to comply with data protection law it strips out all personal data from emails before the AI reads them.
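Octopus has not published the details of that pipeline, but a pre-processing step of this kind can be sketched in a few lines of Python. The patterns and the redact helper below are purely illustrative, not the company's actual code:

```python
import re

# Illustrative patterns only: a production system would use a dedicated
# PII-detection library and cover far more cases (names, addresses, etc).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
    "POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a placeholder token
    before the text is handed to an AI model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("I'm at 21 Acacia Road, SW1A 1AA - ring me on 020 7946 0123."))
# -> I'm at 21 Acacia Road, [POSTCODE] - ring me on [PHONE].
```

Note that the street address survives the regexes, which is exactly why real systems lean on dedicated detection tools rather than hand-written patterns.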
Mr Sunak has appeared to go further in recent weeks, talking about the need for “guardrails” to steer AI, and there are already concerns regulators are falling behind.
The Trades Union Congress (TUC) believes tighter employment laws are already required. Employers are using AI to sift job applications and, unions believe, in some cases to make hiring and firing decisions.
They want everyone to have a right in law to appeal decisions to a human rather than be left to the judgement of a machine, not least because of the potential for biases and prejudices to become ingrained as AI learns and draws on previous experience.
The UK approach contrasts with the EU, where the European Commission has proposed what it says is the world’s first legal framework for AI, based on four levels of risk to people’s livelihoods, safety and rights.
The use of AI to conduct social scoring by governments, or in toys that might encourage dangerous behaviour, will be considered an “unacceptable risk” and outlawed.
Minimal risk areas include video games and spam filters, while limited risk covers the use of chatbots, as long as it is clear you are talking to a machine.
High-risk areas include any application in education, critical infrastructure, employment (such as the sorting of CVs), migration decisions, public sector decision-making and judicial systems.
To be legal in these fields AI tools will have to meet numerous conditions, including “appropriate human oversight”, and the ability to trace and log activity.
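The commission does not prescribe an implementation, but in practice those two conditions point towards an audit trail: every automated decision is written to a log with its inputs and confidence, and uncertain or high-stakes cases are flagged for a person. A minimal sketch, with invented names and thresholds throughout:

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

# Hypothetical confidence level below which a person must review the decision.
HUMAN_REVIEW_THRESHOLD = 0.85

@dataclass
class DecisionRecord:
    timestamp: str
    model_version: str
    inputs: dict
    outcome: str
    confidence: float
    needs_human_review: bool

def record_decision(inputs: dict, outcome: str, confidence: float) -> DecisionRecord:
    """Wrap a model's output in a logged, traceable record."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="cv-screener-0.1",  # invented identifier
        inputs=inputs,
        outcome=outcome,
        confidence=confidence,
        needs_human_review=confidence < HUMAN_REVIEW_THRESHOLD,
    )
    log.info(json.dumps(asdict(record)))  # the audit-trail entry
    return record

decision = record_decision({"cv_id": "1234"}, outcome="shortlist", confidence=0.72)
if decision.needs_human_review:
    print("Routed to a human reviewer before any action is taken.")
```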
When a computer says no, we will need to know why
A challenge for developers and users today is that it is not always clear how AI has reached its conclusions.
When ChatGPT or another language tool produces plausible human text, it's not possible to know where it drew its information or inspiration from.
If you have asked it to write a limerick or a letter, that may not matter.
If it is deciding whether you qualify for benefits, it very much does. When the computer says no, we will need to know why.
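Large language models make that kind of transparency genuinely hard, but for simpler rule-based systems one pragmatic answer is to have the software return its reasons alongside the decision. A toy sketch, with invented rules and field names:

```python
# Toy benefits-eligibility check that returns its reasons with every refusal,
# so "no" is never unexplained. All rules and thresholds here are invented.
RULES = [
    ("income below the threshold", lambda a: a["income"] < 20_000),
    ("savings below the threshold", lambda a: a["savings"] < 16_000),
    ("resident in the UK", lambda a: a["uk_resident"]),
]

def assess(applicant: dict) -> tuple[bool, list[str]]:
    """Return the decision and the list of rules the applicant failed."""
    failed = [name for name, test in RULES if not test(applicant)]
    return (not failed, failed)

eligible, reasons = assess({"income": 25_000, "savings": 5_000, "uk_resident": True})
if not eligible:
    print("Refused - failed:", "; ".join(reasons))
# -> Refused - failed: income below the threshold
```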
Making the UK the home of technology regulation, a sort of digital Switzerland, is an attractive post-Brexit ambition, but whether it’s possible is moot.
As the ongoing saga over retained EU law demonstrates, we might want to shape our own regulation, but commercial logic sometimes dictates we have to follow larger markets.
Doing nothing is not an option, however.
AI is moving so fast the UK cannot afford to be left on the roadside as it heads for the horizon.
Uniquely, it may come to be the first technology that knows more about its destination, for good and ill, than we do.