Image by wolfhf and DALL-E

Facepalms at Bletchley Park: How Not To Approach AI Regulation

Wolfgang Hauptfleisch

--

In the end, the UK AI Safety Summit¹, held symbolically at Bletchley Park in Buckinghamshire, was all choreographed happiness: 28 countries signed a joint declaration about the “catastrophic risk of artificial intelligence”.

Of course, the declaration was already agreed upon before the summit started rather than being the result of the summit itself, and it will be known as “The Bletchley Declaration”², though I doubt it will be taught in schools.

After the event, the British prime minister Rishi Sunak proudly announced that the technology companies “will allow governments to vet their artificial intelligence tools for the first time”. A success, according to him.

Three things, however, overshadowed the summit. First, in a well-timed move (though an unfortunate one for the UK government), Joe Biden stole some of the show with his executive order on artificial intelligence³, announced only one day before the summit kicked off.

Secondly, for some inconceivable reason, Rishi Sunak deemed it necessary to end the summit by consulting the world’s self-declared expert on AI and ethics: Elon Musk.

And last but not least, the summit rather missed the point by simply feeling out of touch.

Catastrophism Without Urgency

So what actually went down during the summit that was supposed to establish “Britain’s leadership in AI Regulation”?

Artificial intelligence poses a risk comparable to “nuclear war”, Sunak told the press, but there is no need to “be alarmist about it”. The two statements do not seem to go together, but they capture very well the UK government’s constantly shifting stance, oscillating between “AI is innovation” and “AI might kill us all”.

Such joint declarations have their place in politics, no doubt. The acknowledgement of AI-related risk would have been a step forward … if it had happened three years ago.

Catastrophism makes nice headlines, but it distracts from the gritty work that needs to be done

The problem here is one of attitude: catastrophism makes nice headlines, but it distracts from the gritty work that needs to be done, the work of defining risk levels, clarifying liability and establishing common standards that make accountability possible.

It is this task that Biden’s executive order begins to tackle, and it is the devil-in-the-detail work that the EU has now been occupying itself with for more than two years.

And it is exactly this that the UK government stubbornly refuses to do; it waited only days after the summit to state that, despite the impending doom, it is in no hurry to actually take action.⁴

Banalities And Awkward Fireside Chats

Not one to miss any chance for embarrassment, Rishi Sunak chose the least suitable person currently on this planet (or any planet, frankly) for a fireside chat on ethics: Elon Musk.⁶ The same Musk who, by the way, proposed a “moratorium” on AI development in the summer, while at the same time investing millions into his own AI startup to create a “ChatGPT without guardrails”.

least suitable person on this planet for a fireside chat on ethics: Elon Musk

The bromance didn’t last long, though, and only weeks later Sunak was quick to distance himself from Musk’s boosting of antisemitism.

No Medals to Win

I believe we have long passed the stage of general political statements on AI safety. Now it’s time to get to work. This won’t be glamorous, and it will conflict with parts of the industry, for the simple reason that the industry’s default incentive is to be first to market, not to “step forward carefully”.

Working out those details will earn those who take on AI regulation little praise and a lot of blame, as clearly demonstrated by the disagreements over the EU AI Act, which appear to drag on forever.⁵
