I think what pisses me off about AI is the disrespect. Whatever you’ve learned to do, whatever your hard-won skillset, it’ll do it sort of OK but also quite badly and go: “There. That’s what you look like to me. You prat. Bothering to try. Easy. And I don’t even care.”
I hate how *sarcastic* it is.
And of course that’s not the machine. It’s the attitude behind the people using, promoting, advocating for the machine. And their insistence that this is worthwhile or interesting or a replacement for a human doing it, just because it was quicker, and avoided the grind.
“There. That’ll do. It’s nearly what you do. I can do it too. And I can’t tell the difference. Not enough to be bothered, anyway. Yeah, I know you can tell. But caring’s lame.”
AI is, to the breadth of human achievement and experience, what leaning out of a car window and going “moooooo!” at cows in fields is to the experience of being a cow. It’s sort of the same noise, the cows will look up – “Hang on. Is that one of us? No.” – but it’s meaningless. And actually a bit rude.
I don’t think you’re particularly respecting cows when you do it. It’s not meant to be helpful. It’s certainly not saving them any effort, or improving their day. Because your ‘moo’, like most big data-set AI, is just based on a non-cow observing a mass of cow noise over time (that’s the noise they make, right?) and the non-cow realising it could probably make a noise that’s like that, without understanding what the noise means to cows.
Cory Doctorow, the sci-fi author and tech-watcher, said recently that the offer AI's advocates make – that it will make things cheaper and faster for businesses – is unlikely to come true, because it is a hugely skilled job to check that AI's output isn't meaningless or actively dangerous. (Useful for written copy and graphic design, vital if we're talking about trusting it to diagnose illness and drive our cars.) So the AI future will involve employing extremely expensive humans in an oversight role for everything the AI does. Every fake human needs a real human handler, keeping an eye on them. Just as automatic tills in supermarkets mean you need extra till staff to help customers operate them, and those staff need to be more skilled at human interaction – calming frustrated customers down, working IT and so on – than a checkout worker. You're making every routine human job harder, and potentially, in a fair world, better paid. So you've done the opposite of your aim.
We will inevitably need people who can do the actual job of being a highly trained, sensitive, skilled, expert human, knowing what humans want. And they can, in a sense, check that the machine isn’t just going ‘moo’ out of the window at us.
Because of how AI achieves its output – by studying a huge set of finished work, examining neighbouring data points that have been accepted as 'ours' by humans in the past, and guessing what is most likely to be there – it lacks any knowledge of why humans tend to put this thing there, and this thing there, or of the decisions that led to it.
It knows a lot of sad paintings are blue and grey. It doesn’t know why, or what sadness is. It knows Wes Anderson films are symmetrical and have the characters in the middle of the frame. It doesn’t know that forced single point perspective makes humans feel balanced but alienated. It knows we say ‘sorry’. It doesn’t know that we invented manners to help turn theory of mind into empathy into communication because we feel bad, accept responsibility and want to make someone feel better.
It knows we go ‘moo’, but it has no idea what ‘moo’ means.
Which is worse than useless. It’s rude. It makes us feel terrible.
Nobody wants to be mooed at.
I wrote once, way back in the early 2000s, for the Bollocks To Alton Towers books, about how pre-recorded announcements on train platforms that include the word 'sorry' or 'apologise' for delays are the modern equivalent of those crazy mechanical hat-tipping machines that you sometimes see in collections of Victorian patent blueprints, or the cartoons of Heath Robinson. There's no point automating the tipping of a hat, because the meaning of the hat-tip is that it involves a human being, and that human being made a decision to show you respect. Automating it robs it of sense. A recording of 'sorry' contains none of the meaning of the word 'sorry'. By removing the process that leads to the apology, the apology stops existing. It's not the noise of the word 'sorry' that matters, it's the feelings that caused that noise.
With AI automating our human processes and just showing us the finished product, it's the piss-take part that hurts. AI's (accidental, maybe?) sarcasm towards what we're up to as complicated, feeling, striving, interesting, flawed, slow, foolish humans is equally true for any genuine human skill that's been half-automated in this current flurry of empty mooing.
AI doing what it sees you doing, not what you're feeling inside, is insulting, whether your trade is oil painting, or copywriting, or checking shopping through a till and wishing customers a nice day, or answering the phone to another human down a helpline in a manner that's warm, concerned and calming.
Almost any job has some variation of "the satisfaction of a job well done" as a mantra.
In my job, making stuff, it’s “trust the process”. The end result is less important than observing the mistakes, the route to that end.
AI doesn’t trust the process. Just the outcome.
Because, fatally, it was trained only on results.
Thanks for sharing, Joel. I work in a specialist niche of client relationships, conducting insight interviews to increase customer satisfaction. So often I get asked "but can you scale it? Can you speed it up?" – but if a robot does it, what's the point? You can't get the insights without caring about the process, and actually caring about your customers being happy. Everything else is lip service.