################################################################################
AI Era: Trust Nothing
- Willow Willis (2023-02-04)
################################################################################
It is the year 2030, and the President of the United States is being accused of
engaging in lewd acts within the Oval Office. There is video evidence, backed
up by private messages and phone logs. A major investigation into his personal
habits is being conducted, dragging even more public figures into the scandal.
When the dust clears, however, the President is not impeached or even
sanctioned. Why?
Because he will claim that all of the evidence was faked. And the public will
believe him.
After all, there will be precedent to back him up. For years, voters will have
been rallied against fabricated outrages. Video clips played on major news
outlets will have their authenticity challenged and ultimately disproven by
computer forensics teams -- but only after inflaming the hearts of millions.
The public
will also discover that several of the rising stars in music, film and social
media do not, in fact, exist. Their voices, images, personalities, words and
creative output are all generated and controlled by the people who run them.
Whether or not the President was actually involved in a sex scandal is no
longer provable or relevant.
This is the era of doubt. And it is coming sooner than you think.
################################################################################
## HOW DID WE GET HERE? ##
To understand this future, we have to look at the last few years of AI research
and adoption. As of writing, there is still no "strong" AI -- machines are not
able to think for themselves, nor do they have any form of consciousness.
Rather, AI developers seem to be taking the "fake it till you make it" approach
to their own discipline; the current focus is on creating content that is
indistinguishable from a human being's.
Want an AI that can create human faces? StyleGAN has been able to do that since
2018. While it was trained on a relatively small, curated dataset of
permissively licensed photos, many of the more controversial bots released in
the last few years are not so benign.
Stable Diffusion -- an art bot -- was reportedly trained on a dataset of over 2
BILLION images, obtained by scraping photos from all over the internet.
Pinterest, DeviantArt, WordPress, Flickr -- if you had any photographs or
images on one of these sites, they were probably used to train this tool.
(One of my own pieces of space art was mined to train Stable Diffusion's neural
net. You can search a subset of their training dataset here.)
This has two important implications: one, anything created by this type of AI
is going to be derivative. That is, it cannot produce anything without first
ingesting human-made work. For an AI to produce an image in a particular
artist's style, it must first be fed a large set of that artist's original
work. How closely the output copies the originals (whether an image, a blog
post or source code) will likely be a determining factor in future plagiarism
suits.
Two, nothing you put online is safe anymore. Your words, your art, your music
and your code can be "scraped" and fed into a neural net with no oversight.
There is not much you can do about it.
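If you want to see for yourself, here is a minimal sketch of peeking into one
of these scraped indexes. It assumes the LAION-2B-en copy on Hugging Face is
still available, that its columns are named "URL" and "TEXT" as in the public
LAION release, and it uses a purely hypothetical domain as a stand-in for a
site hosting your own work:

    # Peek at a LAION subset -- the kind of scraped index Stable Diffusion
    # was reportedly trained on. The dataset name and the column names
    # ("URL", "TEXT") are assumptions based on the public LAION release.
    import itertools
    from datasets import load_dataset

    # Stream the index instead of downloading terabytes of metadata.
    laion = load_dataset("laion/laion2B-en", split="train", streaming=True)

    # Hypothetical domain -- substitute a site that hosts your own images.
    MY_DOMAIN = "example-art-portfolio.com"
    hits = (row for row in laion if MY_DOMAIN in row["URL"])

    # Scanning billions of rows this way is slow and only illustrative;
    # the hosted search tools mentioned above are the practical option.
    for row in itertools.islice(hits, 5):
        print(row["URL"], "--", row["TEXT"])

Note that the index stores only image URLs and captions; the images themselves
are fetched from the original hosts when a model is trained on them.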
There are already AIs that can create "original" works of music and visual art
(MusicLM and Midjourney), write essays (ChatGPT), help with code completion
(Copilot), and much more. Hell, there's even one company claiming it will
create the world's first robot lawyer, though it's finding that to be more
challenging than expected.
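To appreciate how little effort "content" now takes, here is a minimal sketch
using the freely available GPT-2 model as a small stand-in for the larger
commercial systems named above; the prompt is illustrative only:

    # Generate a throwaway "blog post" with an off-the-shelf model.
    # GPT-2 stands in for larger commercial systems; the prompt is
    # purely illustrative.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "Five reasons you should trust everything you read online:",
        max_new_tokens=120,  # length of the generated continuation
        do_sample=True,      # sample instead of taking the likeliest token
    )
    print(result[0]["generated_text"])

One command, a few seconds of compute, and a plausible-sounding post exists
that no human ever wrote.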
################################################################################
## YOU ARE NOW A PRODUCT ##
Deepfakes, one last horror of the modern world, are even more chilling in
their implications for the future. What happens if you train a neural network
not on millions of random images of every subject, but on thousands of
pictures of a single individual? Once the computer learns what that person's
face looks like from every angle and under every kind of lighting, it can
plaster that face over another model's to make videos "starring" the subject
of the Deepfake. This is how we get clips of Arnold Schwarzenegger flirting
with Jack in Titanic. Funny, until you realize the implications.
If you're unlucky enough to be a Twitch streamer or other semi-public
personality, you are probably already being cast in pornographic videos
without your knowledge or consent. With a couple hundred source images,
revenge-porn and blackmail Deepfakes become convincing enough to be hard to
detect without computer aid. It has never been easier to ruin lives, marriages
or reputations.
Even private individuals aren't safe; Deepfakes can be created with a single
photo.
################################################################################
## WHAT HAPPENS NEXT? ##
They say the road to hell is paved with good intentions, but creating AI
girlfriends for incels won't work out as planned. Someone will start a cam girl
service for lonely men, featuring realistically generated female forms and
faces paired with false, beautiful voices. The flirty dialogue will feel
natural, sweet and sexy. Feelings will be formed. We will foster a generation
of lonely people who are utterly under the control of the companies who create
and maintain their virtual lovers.
How long do you think it will be before sexbots -- next generation real dolls
-- become a reality? When will the first marriage be? What will be the legal
status of these lifelike (yet non-thinking) companion bots? Will they be
required to identify themselves somehow? Or will it become socially acceptable,
even encouraged, to forget their inauthenticity and accept them as partners?
After all, we've all heard that overpopulation is a huge problem.
Even without opening *that* particular can of worms, the temptation to apply
AI to every problem will be intense. How much medical or legal advice will be
peddled by bot creators, regardless of the legalities of practicing law or
medicine without a license? When will our laws change to accommodate them? As
of right now, copyright cannot be held by a computer, and that applies to
anything generated by tools like Stable Diffusion or ChatGPT. But for how
long?
Right *now* our creative professions are in turmoil. Right *now* it has become
useless to assign take-home essays to students because of ChatGPT. Right *now*
there are people doing "programming" with aid from GitHub's Copilot. And right
now, anyone can take your face and put it onto a porn star's body, producing
video content that is realistic enough to fool modern audiences.
In 10 years, you will be unable to trust anything you find online. Blog posts
will be meaningless when they can be generated at the push of a button. Social
media will be
an endless morass of bots, auto-generated memes and videos that are utterly
divorced from reality. Can you trust that the people you talk with online are
real? Can they trust *you*?
I predict that the initial acceptance and rapid proliferation of this
technology will be its downfall. When everyone can "write" as beautifully as an
author or "speak" as eloquently as the greatest orators of history, then human
gifts and creative disciplines become meaningless. Those who are not wholly
consumed by the fantasy offered by AI girlfriends and soulless "content" will
retreat from social media to find refuge among real-life family and friends.
Indeed, this trend has already begun, with many Gen Z-ers embracing "dumb
phones" and reduced internet connectivity.
The rise of AI will ultimately lead to its rejection and downfall.
Or, so I choose to believe in order to maintain my own sanity.