Mirror, mirror, on the wall… …should this guy get bail or not? #46 #cong23 #reality

Synopsis:

My contention is that even though AI (Generative AI) can’t draw a realistic hand to save its life, it is a powerful window into a reality we might otherwise not see.

Total Words

946

Reading Time in Minutes

4

Key Takeaways:

  1. AI is not just a bad renderer of human hands.
  2. AI is a mirror that shows us truths we might not want to see, but should.
  3. The material we use to train AI is a fair representation of ourselves. And the cold, unbiased eye of AI is the perfect way to see the truths contained in it.
  4. AI can show you the truth, but it’s up to you to do something about it.

About Richard Ryan

I have worked in Advertising for approximately 30 years. I am a copywriter, which means I wrote the very words that made you choose that specific box of cornflakes, cellphone plan, or midrange server.

I work in a small, full-service ad agency in Brooklyn, NY, called Something Different. What actually makes us something different is that we solve your business problems with smart, plain-spoken, deeply human ideas. It's what every agency should do, but sadly doesn't.

I live in New Jersey, where I enjoy having four distinct seasons.

Contacting Richard Ryan

You can check out Richard's personal site and the Something Different agency, or send him an email.

By Richard Ryan

We've all sniggered at the oddly webbed, six-fingered hands that AI draws for us. Or laughed when Bing's AI chatbot tried to gaslight a New York Times reporter into leaving his wife for it. And then there's the Pepperoni Hug Spot commercial.

But don’t let that sideshow fool you.

I think AI is a powerful window into our reality. Or, to be more precise, a mirror. A mirror that shows us truths we might not want to see, but should.

Consider how Generative or Creative AI works. We feed it a set of things. The more the better. Things we write, draw, and create. Images. Books. Letters. Scientific papers. Greek poetry. Whatever we want. And it absorbs them all. Then, using its super complicated algorithms, it “learns” what we’re showing it. It sees the patterns in what we’ve done. And then tries to recreate it. By guessing. Based on what it saw. It’s a hugely powerful trick. This way it can learn to code. Or converse in Chinese. Or if we give it millions of mammograms and medical data, it can learn to spot breast cancers with uncanny accuracy.
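If you want to see that trick in miniature, here's a deliberately silly toy sketch in Python, with made-up text and nothing like how any real model is actually built. It absorbs a scrap of writing, counts which word tends to follow which, and then "writes" by guessing the next word from those counts. Real generative AI is vastly bigger and cleverer, but the shape of the idea is the same: absorb, find patterns, guess.

    import random
    from collections import defaultdict

    # Toy illustration only. Real generative AI uses enormous neural
    # networks trained on billions of examples, but the loop is the same:
    # absorb, find patterns, guess.

    training_text = (
        "the doctor examined the patient and the doctor wrote a note "
        "and the patient thanked the doctor"
    )

    # 1. Absorb: break what we fed it into words.
    words = training_text.split()

    # 2. Learn the patterns: count which word tends to follow which.
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    # 3. Recreate by guessing, based only on what it has seen.
    def generate(start="the", length=8):
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(generate())  # e.g. "the doctor wrote a note and the patient thanked"

The point of the toy is this: the program has no idea what a doctor or a patient is. It only knows what it was fed, and it hands the patterns in that material straight back to us.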

You could argue that it doesn’t actually understand anything. It’s not filtered or underpinned by emotion or beliefs or context. It just spits back the reality of what it sees.

So to my point. What does it see? Well, it was recently reported that when you ask Midjourney (which is a picture-generating AI) to create pictures of doctors, what it sends back are images of white men.

Possibly not what you’d expect, but it’s reflecting back what it has seen. It’s the truth.

What do those images tell us about our reality? Or about opportunity? Or about whether we really value diversity?

Admittedly, thought-provoking as that is, those are just pictures. No harm done. But that's not always the case.

I said AI has taught itself to read mammograms. It’s way better and much faster than humans. It’s so good, doctors don’t quite understand what it’s seeing, or how it does it, but it has saved people’s lives. The problem is, while it’s very good at spotting cancers in white women, it’s not so good at spotting breast cancers in people of color.

That also teaches us something about our reality.

Because – just as with the doctor pictures – the data sets we’re using to train it are from real life, taken from a health care system that is biased and skewed.

The reality our AI is reflecting back at us is a reality where we don’t treat people equally. We treat some people worse.

That’s what the mirror is showing us.

In March of this year, a judge in India couldn't decide whether to grant bail to a murder suspect, so he just asked ChatGPT to give him the answer. ChatGPT said the guy didn't deserve bail because the program considered him “a danger to the community and a flight risk.” So the judge said fair enough and sent him back to jail.

Of course, that's a story of one lazy judge. That behavior would never become institutionalized, right? Wrong. Unfortunately, it could.

Right now, if you're booked into jail in New Jersey, the judge deciding whether to hold you or send you home has a small black box that uses risk-assessment algorithms to help him make the decision. Not quite autonomous. At least not yet. But when that AI does come online, what data sets will be used to teach it? Whichever they are, they won't be equitable. The data sets that hold all the information on the US incarceration system were built up over centuries of hugely racist government policies.
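To make that worry concrete, here's another deliberately crude sketch, with invented numbers and no connection to New Jersey's actual tool or any real risk-assessment product. It scores a new defendant the obvious way: by how often people with a similar record were detained in the past. Feed it a history produced by uneven policing and it faithfully hands that unevenness back as a "prediction."

    # Toy illustration with invented numbers; not any real risk-assessment tool.
    # Naive "risk model": score a defendant by how often similar past cases
    # ended in detention.

    # Fabricated history: two neighborhoods with identical behavior, but
    # neighborhood B was policed far more heavily, so its people carry more
    # prior arrests and more detentions on file.
    history = [
        # (neighborhood, prior_arrests, was_detained)
        ("A", 0, False), ("A", 0, False), ("A", 1, False), ("A", 1, True),
        ("B", 0, False), ("B", 1, True),  ("B", 2, True),  ("B", 2, True),
    ]

    def risk_score(neighborhood, prior_arrests):
        """Fraction of similar past cases that ended in detention."""
        similar = [detained for (n, p, detained) in history
                   if n == neighborhood and p >= prior_arrests]
        return sum(similar) / len(similar) if similar else 0.0

    # Two people whose actual conduct is identical:
    print("A:", risk_score("A", 1))  # 0.5
    print("B:", risk_score("B", 1))  # 1.0 -- higher, purely from skewed history

The numbers are made up, but the mechanism is the one that matters: the algorithm adds no malice of its own. It simply mirrors the bias already baked into the record it was trained on.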

So the decisions that AI will return – either go to jail or go home – will reflect and reinforce a reality that isn’t remotely fair.

That won't be a few harmless pictures of white doctors; that'll be someone's life.

So the next time your AI doesn’t send you back quite what you’re expecting, don’t blame it for not getting reality right. Consider that, in its unvarnished, unemotional way, it may be getting reality exactly right.

Then, once we see that reality, consider what we want to do about it.
