AI and the impending end of the world


Reading Time: 2 minutes

[rant adapted from a string of Instagram posts on my personal and private account, after seeing too many catastrophizing and sensationalist posts]

Hello, friends! Most of you who know me well have put up with me talking about AI and tech at some length, including why it’s almost imperative to understand what it’s about. Understanding is NOT:

  • “omg did you hear about how AI automatically learns on its own and can go rogue?”
  • “did you hear about how AI can write fiction or make art? It’s going to overtake us all!!!!!”
  • or perhaps more pertinently in the US: “did you hear about how AI sent the wrong people to jail or identified them as at risk of reoffending?”

People, AI is powerful, but AI is a reflection of its human masters.

If AI is powerful, it’s because its computational structure loosely mirrors the human brain.

If AI is powerful, it’s because it processes input far quicker and in far greater amounts than we can.

If AI is powerful, it’s because it’s trained by human masters – nothing happens automagically.

A lot of work happens behind the scenes. Behind the scenes, it is an engineer or a decision-maker who decides to use an AI, whether or not that choice ever becomes apparent. In this sense, AI is like a business decision or model gone awry – except writ large, because people have entrusted so much to the tech. If there is a problem, do you pin it on the model, the person who engineered it, or the person who decided to go ahead with it? For every extreme success, there are many instances of AI that are not fit for purpose.

At the end of the day, AI is statistical modelling.

AI that writes fiction? It’s built up a statistical model of the language used: you can tell a lot about a word from the company it keeps. We call it context.
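To make that concrete, here’s a minimal sketch (with a made-up toy corpus) of the “company a word keeps” idea: count which word tends to follow which, and you already have a crude statistical language model.

```python
from collections import Counter, defaultdict

# Toy corpus (hypothetical) -- a real model uses billions of words,
# but the principle is the same: statistics over context.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which: an estimate of P(next word | current word).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen immediately after `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" -- it follows "the" most often here
```

Nothing in there “understands” cats; the model just reflects the statistics of whatever text its human masters fed it.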

AI that can play chess or Starcraft? Statistical model of which moves are most likely to bring it long-term rewards, expressed as a positive or negative number.

AI that streamlines production processes? Curve-fitting except it’s not one variable against another, but multidimensional. If efficiency is a landscape, it builds a statistical model of that landscape and tells you where the most efficient gains are to be had.
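Here’s that multidimensional curve-fitting in miniature, with hypothetical variable names and synthetic data: fit a plane to noisy “efficiency” observations, and the fitted coefficients tell you which direction improves things.

```python
import numpy as np

# Synthetic data: "efficiency" depends on two process variables
# (hypothetical -- say, temperature and line speed).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))            # 50 observations, 2 variables
true_coefs = np.array([2.0, -1.0])             # the hidden "landscape"
y = X @ true_coefs + 0.5 + rng.normal(0, 0.05, size=50)

# Ordinary least squares: the same curve-fitting you did in school,
# just with more than one input dimension. Add a column of ones for
# the intercept.
A = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted slopes say where the gains are: raise variable 1
# (slope near +2), lower variable 2 (slope near -1).
print(coefs)
```

That fitted landscape is only as good as the data humans chose to collect about the process – which is exactly the point.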

Don’t get me wrong, AI is revolutionary. But it is not sentient. Behind every bad AI decision or rogue AI is a bunch of humans that made a bad decision, whether data-related or management or otherwise. An unforeseen mishap does not just happen. Just because it is unforeseen does not mean it is uncaused, without any tether to human responsibility.

(Product liability law and tort law have a lot to say about this.)

So please, calm down. AND START TALKING ABOUT THE PEOPLE BEHIND THE MACHINES!

Like I said, AI is a mirror to one’s bias, and bad decision-making is going to be reflected in the AI. That’s what we need to work on.

But if you don’t have the tools to work on that at the moment, I think you’re better off doing your small part to stave off the ongoing and accelerating climate catastrophe.

(As always, hmu if you would like to learn more about this or have some references! I think I will create a follow-up post with just this.)