Black and white. Chalk and cheese. Artificial intelligence and ethics.

Run, Bambi, run! The hunters are <brzzchkkkk…>

As the final ozone breath whispers from a mature doe lying in a forest clearing, hydraulic oil bleeding into the moss from a single bullet wound, its technologically enhanced offspring standing nearby refuses to heed its late parent’s high-priority instruction to run.

Instead, the curiously sinewed young deer – serial no. 8AM8-1 – sniffs the air and turns to face the hunter’s sights. Leaning a little closer, he snarls: “Je reviendrai.” (“I’ll be back.”)

This scenario is one envisioned by French MP Damien Adam, who was derided mercilessly by colleagues this week for proposing – in parliament, no less – that robot animals rather than real ones should be let loose in the countryside for hunters to shoot at.

As a veggie-bore, this would suit me just fine. But it’s not going to happen – for more reasons than the obvious one that it’s nuts. First, Monsieur Adam probably doesn’t really mean it. He just felt he had to say something that sounded modern, tech-savvy and, er, you know, thrusting in order to distract voters’ attention away from the fact he is one of the French parliament’s few remaining paid-up members of President Macron’s Thatcherite LREM political party.

Second, and more importantly, you don’t mess with French hunters. They are not like huntsmen in the UK. Mention ‘hunting’ back in Blighty and people think of toffee-nosed twats in bright red tops who we’d laugh at even harder if they accidentally fell off their horses and broke their twatty necks. Mention ‘hunting’ in France, though, and it conjures visions of solitary men in their retirement, accompanied by their faithful but ageing and somewhat dopey dog, in search of a wild boar to put on the dinner table. It’s essentially Obelix in a donkey jacket.

An MP from an opposing party mocked M Adam by tweeting “Hunting is not laser quest”. This is true: it’s more like paintball, except it’s worse because the animals don’t have goggles and can’t shoot back. Possibly the arming of robo-fauna could follow with subsequent legislation. I suspect M Adam is eyeing the long-term here.

Journalists have even started giving the concept of drone-deer and robo-rabbits a name – ‘robots gibiers’ – which is doubly appropriate when translated into English as it becomes a pun: ‘game robots’.

What fun! Now I can see how hunting might evolve in the future: for every dozen electro-furries you blast away, you have to face a boss-level forest robo-creature. Hunters might be hoping for a mecha-stag or a somewhat-miffed hairy pig; I’m thinking more along the lines of a Pyrenean bear with deadly laser-beam optics and paw-mounted rocket launchers.

For this to be acceptable to hunters, and to the rest of humanity, we need to talk about ethics. No, not about the ethics of killing random animals for a laugh and the right to eat a chewy supper that takes four days to cook and leaves your house stinking of dung. I mean, of course, human ethics of artificial intelligence in terms of consumer trust.

Henry Jinman of British AI outfit EBI.AI has a three-point formula for […looks down to read press release…] “winning hearts and minds in the quest for consumer trust in the age of AI.” Before you snigger, bear in mind he’s trying to raise interest in AI ethics in the light of enterprise response to COVID-19. The pandemic is an excuse for some employers to replace staff who are alive with customer-facing AI such as chatbots, smart speaker assistants and other such script-spewing digital morons.

Would you be happy to let them replace us all without some sort of check?

Jinman’s first principle is Data Quality or, as he puts it: “Excellent data-driven outcomes that inspire consumer trust.” By this, I take it to mean not doing stuff such as getting the office work-experience lad to copy and paste personal data with life-and-death consequences into an outdated Excel spreadsheet. Good luck with that one, Henry. I suspect that ship has already set sail.

His second principle is Transparency: “Where organisations make it absolutely clear to consumers how, why and when their data is being used.” It’s funny, but organisations do that already, don’t they? The only problem is that what they make clear to us is not true. The next thing you know, websites will have us believing that those ‘essential’ cookies that cannot be disabled and exist only to ensure correct operation of the site itself don’t actually carry on gathering information about what we’re looking at.

Third is Privacy First: “All consumers have the right to meaningful information about the logic, significance and envisaged consequences of automated decisions.” Another sail on the horizon, I fear. There’s no point waving a GDPR flag as you storm the AI Bastille, as you probably ticked the ‘I accept’ box without having taken the trouble to sit down with your team of lawyers to evaluate the 15,000-word legal text above it.

Pessimist I may be, but at least I hold no illusions about developers of consumer-facing AI voluntarily choosing to respect a code of ethics or even rudimentary norms of morality.

What I do worry about is the user interface. Artificial intelligence is an illusion of intelligence. That is, the intelligence you might think you’re engaging with is not really there and it’s certainly not intelligent; if it goes wrong, consumers will quickly realise it’s a bugger to fix. Humans and robots do not think alike.

A great example arose last month in Melbourne when the city was forced to refund around 1,200 parking fines to motorists because they’d been typing capital ‘O’s instead of zeroes when entering their number plates into the city’s cash-free parking app. Shocking as it may seem to those who know a lot about vehicle registration – number-plate spotters, perhaps? – many people can’t tell O from 0, nor frankly do they give a flying fuck what font it’s in. Steve Jobs was right: typography matters… but not if you’re a tit about it.

Back in the days when I was working on the short-lived 1990s consumer tech publication Computer Life, we eschewed the convention of running comparative product reviews in a controlled testing environment. Instead, we invited a bunch of volunteer readers into one of the journalists’ homes and got them to test out competing IT products in the living room and kitchen while a stopwatch was running, to see which were the easiest to set up and use. We didn’t call it ‘Lab Test’ or ‘Reviewed & Rated’; we called it ‘Donkey Derby’.

The software Donkey Derbies were the worst. Why? In those days, you had to enter a serial number before you could even install a program. In one Donkey Derby for drawing packages, it took one of our readers 45 minutes just to do this one thing, as the font on the security sticker failed to make clear the difference between not just O and 0, but also 1, I and l (i.e. the number one, capital i and lower-case L).
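The fix for all this O-versus-0 misery has been known since those serial-sticker days: never make the user care which lookalike character they typed. Here’s a minimal sketch of the kind of defensive normalisation Melbourne’s parking app (or our drawing-package vendors) might have applied before matching input; the mapping and function name are illustrative, not any real system’s rule set.

```python
# Map visually ambiguous characters onto one canonical form before
# comparing user input against stored plates or serial numbers.
AMBIGUOUS = str.maketrans({
    "O": "0", "o": "0",   # letter O vs digit zero
    "I": "1", "l": "1",   # capital i / lower-case L vs digit one
})

def normalise_plate(raw: str) -> str:
    """Canonicalise user-typed registration or serial input."""
    # Strip the spaces and hyphens people add for readability,
    # collapse the lookalikes, then fold case.
    cleaned = raw.replace(" ", "").replace("-", "").translate(AMBIGUOUS)
    return cleaned.upper()
```

As long as both the stored value and the typed value pass through the same normalisation, it no longer matters whether the motorist read the font correctly.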

I pity the fool in a French forest, trying to enter the robots-gibiers override code on his smartphone with an array of mis-typed 2s and Zs as a hacked Duracell bunny rips out his throat.

Quality, transparency, privacy – or security. I don’t trust government or business to deliver on any of those. There’s more than ethics and mission statements at stake here.

On the other hand, if you want a mission statement, here’s mine:

ALISTAIR DABBS is a freelance technology tart, juggling tech journalism, training and digital publishing. He trusts you have enjoyed Autosave is for wimps so far. If so, you may also be interested to learn that his free weekly column at The Register – Something for the weekend, sir? – has returned. The new edition is already out. Subscribe to both! Note that to subscribe to SFTWS, you should register with The Register (yes, I know), go to your Reg account page, click on Alerts and then set up an alert for every time an article by ‘Alistair Dabbs’ is published. Phew!
