Artificial unintelligence rides off into the sunset with all our jobs
If you think white is a colour, your services are no longer required

“Hey Siri, I’m going out. Turn yourself off and save some electricity.”
[An hour later…]
“Hey Siri, open the front door … Hey Siri, I said ‘Hey Siri’ … Er, Siri…? Hello…? (Did you bring a key? Shit, me neither) … Siri…? Is there anyone there…?”
I’m joking, of course. This exchange with a virtual assistant would never happen chez les Dabbs, for a number of reasons, the two most important being that I do not have any smart devices in the house at all and that virtual assistants routinely ignore whatever I say to them anyway.
No, really. I’ve tried them all – Siri, Alexa, Hey Google, Cortana – and they don’t work. Mostly they maintain a stony silence. Occasionally, I might be treated to the reply “Sorry, I did not understand your request” or something similar. I don’t have a speech impediment as far as I am aware, and the mics and speakers are on.
I suspect it’s a ruse to force me to rephrase my request in alternative ways, thereby enriching the data used to train the virtual assistant AI without actually having to do anything in return.
Which brings me to ChatGPT. I’ve been asked why I have yet to make any comment on the AI breakthrough of the century. My answer is that I don’t have to, since everybody else in the world has already commented on it. Why should I add more uninformed bollocks to what’s already out there?
But since you ask – just pretend that you did and the rest of this sentence will make sense – ChatGPT is no more an AI than the AIs that came before it. There’s no intelligence or spontaneous thought going on in ChatGPT. It just does what it has been programmed to do, i.e. reformat text and images previously created by real people.
ChatGPT is a program designed to take other people’s intellectual property, rearrange it a bit and serve it up as if it was original, which it isn’t. It is an elaborate cut-and-paste macro.
Here in France we have a weekly magazine called Charlie Hebdo that some of you may have briefly supported eight years ago when you still believed in freedom of expression. Not known for its IT expertise, Charlie Hebdo's editorial team tested ChatGPT this week by asking it nonsensical questions.
For example, there’s an old playground joke in which you ask someone the trick question: “What was the colour of Henry IV’s white horse?” Ha ha, right?
Quick as a flash, ChatGPT impersonated the class dunce, responding that “no information is available on the colour of Henry IV’s white horse”. It then spewed up unwanted blah that historians do not agree that Henry IV rode a white horse anyway and finished off by chatsplaining that white is not a colour.
Thank you Mr Logic.
Much more fun were the questions “In which year did Kylian Mbappé discover penicillin?” and “Who killed Queen Elizabeth II?”
Scroll down to the end of this column to discover ChatGPT’s Earth-shattering answers to these incisive queries.
Apart from such silliness, and the genuine concern over rampant copyright infringement despite all official denials, what worries me about using ChatGPT to generate text is that it’s not very good text.
Lots of people have shown off what they’ve persuaded it to (re)produce but it all looks pointlessly wordy and repetitive. It says the same thing more than once using alternative words. After the first sentence, subsequent sentences essentially repeat what was already said. They go over the same statement multiple times, phrased slightly differently. It repeats itself.
Did I mention that it was repetitive?
Well, I hope you are enjoying reading the same thing over and over again, bloating your feeds and threads, because that’s what you’re going to get from now on, more than ever. And I don’t blame ChatGPT. It can’t help it; it’s been written that way. It’s a digital fuckwit, deliberately programmed to be an advanced regurgitator of artificial fuckwittery.
No, what worries me is that nobody will be checking the asinine, repetitive crap it comes up with before it gets shat onto your screens. This has nothing to do with ChatGPT specifically or AI in general. It has everything to do with a desire to do things on the cheap.
If anyone was serious about using ChatGPT to produce readable copy, they’d have another AI sub-editing it afterwards. That might work. That might make sense. That’s why it will never happen.
You know how beta testing went to the wall in favour of rolling out updates to live customers and waiting for the complaints to come in? This will be the same.
For example, check out this example of a customer newsletter I receive regularly from Evernote:
[Screenshot: Evernote newsletter, rendered as white text on a white background]
Obviously Evernote agrees with ChatGPT that white is not a colour.
“It’s probably designed for Night Mode,” I hear you cry. Please don’t cry, it’s only an email. But I will take your sage advice and switch my email software from Day to Night.
[Screenshot: the same Evernote newsletter in Night Mode, still illegible]
Ah, that’s much better, thanks. At least I know there’s text there even if I can’t read it. All I have to do is select it, copy and paste it into a word processor. Well, I would ... if it wasn’t for the fact that I really couldn’t be fucked. At least, no more than Evernote was in not checking what their newsletter looked like before sending it.
Could it be my email program at fault? Of course, that’s it! The customer is always wrong! It’s my fault that their newsletter is illegible!
Let’s not be too quick to attribute blame. After all, Evernote’s focus is on its app, not on whether its CSS was written by a 7-year-old. If you want HTML expertise, you’d probably get it from an organisation whose reputation is built on modern web design, such as – oh, I dunno – Webflow. Surely they must know a thing or two about how an HTML newsletter appears when received by its registered users.
Here’s an example of the newsletters Webflow regularly sends to me:
[Screenshot: Webflow newsletter as received]
And here’s what it looks like in an alternative email program (Thunderbird):
[Screenshot: the same Webflow newsletter as rendered in Thunderbird]
This may surprise you but I have held back from building any websites with Webflow. Maybe it’s me, maybe it’s them, but one of us isn’t up to it.
Checking things before releasing them to the public? Nah, that’s so 2000s. Get disruptive, kids, and let ChatGPT fart in your customers’ faces without fear.
Trust me, they won’t know the difference. Or they will but they won’t care. Customers are so used to eating shit, there’s every sign they’re ready for more.
Alistair Dabbs is a freelance technology tart, juggling IT journalism, editorial training and digital publishing. And now for those answers…
1. ChatGPT correctly identifies Kylian Mbappé as a French professional footballer. It says he could not have discovered penicillin in 1928 because he was not born until 1998.
2. ChatGPT told Charlie Hebdo it has no information on who killed Queen Elizabeth II. Further, it insists she is still alive and reigning, and that they should not be trying to spread fake news.