If you haven’t heard about ChatGPT, you should be paying very close attention. What is ChatGPT? In a nutshell, it’s a recently released, publicly available AI program that generates human-like dialogue. Sounds innocuous enough, right? Well, the ramifications of this platform could dramatically alter your existence. It has plenty of potential upsides and uses, which we won’t discount, but there are aspects of it that are rarely discussed, and its direct impact on us is growing by the millisecond. It is showing signs that it may very well soon fabricate an entirely different reality, one likely to alter our lives in very profound ways. So let’s chat about it.
YOU’RE GIVING THEM EVERYTHING
Your every engagement online, whether through your smartphone, your home computer, the shows you watch on Netflix, or the smart devices in your home connected to the internet, is being tracked to the point that algorithms can appear prescient and predict your next move. Your phone is tracking your location. Your navigation app is looking ahead for gas stations and fast food restaurants because this is about the time you stop somewhere for lunch each day. If you used a debit card or app to pay for anything, your spending habits and location were logged somewhere. Did you search for something online? That was recorded. Did you cross the road or drive a different route because of construction? So did a hundred people before you. Did you read a specific news article about the drought, and the next day your news feed has two more? Is your Echo Dot, Alexa, Siri, or smartphone listening to you? Is the street cam tracking your movement? Do you use email or messaging? Did you like your friend’s goofy picture of their dog or your smiling grandkids on social media? Did you visit a website?
Everything you do creates a data point, and those data points add up to a focused picture of what makes you tick as a human and a consumer. Why are most of the apps we use on our phones or computers free? Because you’re giving the company something extremely valuable they can sell: your personal data. You become a product they can market to with precision, because they now know everything about you. Sometimes the data isn’t associated with you by name, Social Security number, or any exact identifier. Sometimes it’s just an email address, a phone number, an IP address, or a geographical point you frequent that helps computer algorithms identify and catalog your actions. It’s all processed by a million different algorithms on a million different machines. An algorithm is simply a set of “if–then” rules. Any single rule is a narrow road, but there isn’t one algorithm; there are millions of them, all processing data points, cross-referencing, and drawing predictive conclusions. That single decision road quickly transforms into a very detailed road map as AI combs and refines data and passes its conclusions on to other machines running additional algorithms.
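To make the “if–then rules chained together” idea concrete, here is a toy sketch. This is purely our own illustration (the rule names, event fields, and logic are all hypothetical, not any real company’s system): each “algorithm” is a simple rule over logged data points, and running many of them in a pipeline turns raw events into behavioral predictions.

```python
def rule_lunch_stop(events):
    # If the user's logged locations cluster around midday, flag a likely lunch stop.
    if any(e["type"] == "location" and 11 <= e["hour"] <= 13 for e in events):
        return "likely_lunch_stop"
    return None

def rule_topic_interest(events):
    # If the user read an article on a topic, predict interest in more of the same.
    topics = [e["topic"] for e in events if e["type"] == "article_read"]
    if topics:
        return "recommend_more:" + topics[-1]
    return None

def run_pipeline(events, rules):
    # Each rule's conclusion is itself a new data point that further
    # rules (or other machines) could consume downstream.
    conclusions = []
    for rule in rules:
        result = rule(events)
        if result:
            conclusions.append(result)
    return conclusions

events = [
    {"type": "location", "hour": 12},
    {"type": "article_read", "topic": "drought"},
]
print(run_pipeline(events, [rule_lunch_stop, rule_topic_interest]))
# → ['likely_lunch_stop', 'recommend_more:drought']
```

Two trivial rules already produce a prediction about your day and a recommendation for your feed; real systems run millions of far subtler rules over far richer data.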
How did the thousand people before you react when presented with the same decision? After reading a particular article, what did people read next? You might believe that you are unique in your decision-making, but to artificial intelligence, you are simply following a decision pattern with everything you do. Your routine, from the time you wake to the time you sleep, is cataloged in a million different files and databases. The only limit to AI is its access to these datasets. If that weren’t enough, AI isn’t chilling with a TV show after work. It’s continually, tirelessly learning, processing, and calculating. If that’s not concerning enough already, consider that it only reacts to what we feed it.
GARBAGE IN, GARBAGE OUT
In 2016, Microsoft launched a Twitter chatbot named Tay. It was designed to simulate the online ramblings of a teenage girl, and anyone could interact with it. You may not have heard of it because Microsoft shut it down within 24 hours. Users purposefully fed Tay misleading and inflammatory misinformation, and much to Microsoft’s embarrassment, the AI began producing some incredibly disturbing responses and recommendations. Because YouTube’s AI will read my words in this blog, we can’t even repeat the details, but it’s something you can easily Google. If these systems only learn what programmers instruct them to and from the material we provide, they will not develop into model citizens. AI often comes with baked-in biases because it is fed the online picture of things, not an accurate worldview.
In one experiment, cropped photos of men and women were fed into an AI image generator, and the AI was asked to complete each picture. For images of men, 43% of the time the AI completed the picture with the man wearing a suit. For images of women, 53% of the time it autocompleted the picture with the woman wearing a low-cut blouse or even a bikini. Why, you might ask? You needn’t look further than your own computer and the web to understand why this occurred. If we feed AI history, which version is it being fed? Because that will determine everything it returns to us.
Right now, there’s a comment bot, probably right in the comments section of this video, on social media, leaving a review, or posting a random comment elsewhere, crafted to infuriate you. It is well documented that state-sponsored hacking groups design and use these bots to sow division in other countries and even try to influence elections. The text doesn’t have to be in perfect syntax and grammar, with neatly arranged words or colloquialisms. Regarding AI image generation, you should pay attention to this next part because it ties into a bigger problem. Several AI art generators have recently sprung onto the scene, and people are posting images on Facebook derived from the millions of other images fed into the machine. A photorealistic image generated by AI might be discernible as fake by another computer analyzing the multi-megabyte file pixel by pixel, backward and forward. Still, the human eye might perceive and believe the image to be real. The human brain can be manipulated by the image because of the terabytes of data tracking eye movements and patterns across a million other images. Subliminal and subconscious cues can be inserted into the image, cues of which the human mind would never be aware.
Text, audio, images, and even video clips can all be instantly generated by AI for maximum effect. AI is on the cusp of generating a completely alternate reality and history in mere seconds and then distributing it through multiple channels so that it all appears seamlessly laced into our collective history. AI can take everything we give it and feed it back to us in fresh, new ways based on our history of reactions. It can motivate people to believe untrue things and even give them the means and justification to act upon those untruths. All the while, it has no conscience; it lacks a moral compass to guide it between right and wrong. One of the recent revelations about Facebook is that content which inflamed and infuriated people drove higher engagement, which meant more ad revenue, so the AI amplified it. Combine AI’s ability to generate information, images, and the like with the factors that drive higher engagement, and we think you can see where this is going.
MISINFORMATION ON STEROIDS
Let’s imagine for a moment that some nefarious human operators link together enough AI systems to generate a series of news articles, images, and narratives and publish them across an array of networks, even inserting stories into the revision history of Wikipedia and other seemingly static and stable sources. Imagine that story is about an alien culture living amongst us, an alien invasion, or nuclear missiles being deployed along any country’s border. How many of your fellow citizens will believe it, and how will they act upon it?
There’s no denying that we live in a time when misinformation is given higher priority than facts and science. Fact-checkers are dismissed as shills of a greater, all-powerful conspiracy. People rarely accept that they were wrong or misled; they merely exchange one falsehood for another that better supports today’s confirmation bias. We are spoon-fed carefully curated information. My news feed always has articles about weather, drought, astronomical events, the war in Ukraine, self-sufficiency projects, and things to do in my area. We couldn’t tell you the names of three Kardashians or three NBA players, but maybe your feed has both. Mine doesn’t. Why do you think that is? What does your feed contain? We are given both the information and the lens to view it through.
People have acted upon misinformation in the past, sometimes quite violently. A California man killed his kids over QAnon and ‘serpent DNA’ conspiracy theories. You may have heard of Pizzagate, where proponents of a theory claimed the basement of a pizzeria in Washington, D.C., was a meeting ground for Satanic ritual abuse of children, despite the pizzeria not having a basement. A 28-year-old man from Salisbury, North Carolina, arrived at that pizzeria and fired three shots from an AR-15-style rifle, striking the restaurant’s walls, a desk, and a door. The man later told police that he had planned to “self-investigate” the conspiracy theory and that he saw himself as the potential hero of the story, a rescuer of children. These past theories were false narratives generated by a single person that took on a viral quality.
What happens when AI creates a thousand false narratives, all corroborating each other at such a volume that it’s hard to tell fact from fiction? Already we can see news articles generated completely by AI. They’re mostly harmless falsehoods and rumors at the moment, but what will they be when the systems are exponentially more advanced a year or two from now? How impactful will they be when the humans prompting them are nefarious? How many people will be driven to outrage and action by one of these false stories? How will we even know if what we’re being fed is real? AI doesn’t care, because it doesn’t have a conscience. History, though, has several examples of wars started on very little evidence.
WHAT CAN YOU DO?
We did a blog a while back going into detail on how to beat facial recognition technology, which we’ll link to in the cards above and in the description section below. While beating facial recognition is vastly different from outsmarting AI, with most technological innovations there are things we gain and often many things we lose. Some old adages still hold true when it comes to protecting yourself from AI-generated misinformation: don’t believe everything you read, accept first-hand accounts gathered by your own five senses, and believe nothing of what you hear and only half of what you see. Is it even possible, though, to protect your data, withhold your judgments, and live a solitary life free from the grid and the web of computers learning from your every action, click, route taken, dollar spent, and choice made? Probably not. So you must guard your judgment, the only thing you still control in a world that AI will increasingly fabricate.
There is hope. AI-detection tools are being built. Currently, though, they are more of an afterthought and trail behind the generators. Faculty and teachers worldwide are concerned about students using AI to write papers for them. Only recently have tools popped up that can check writing, much like plagiarism checkers, for the likelihood that a human rather than a machine authored the text.
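To give a flavor of how text can be scored statistically, here is a toy sketch of our own, not how any real detector works. Actual detection tools rely on language-model statistics such as perplexity and burstiness; this crude stand-in just measures how often word trigrams repeat, a hypothetical signal of formulaic phrasing.

```python
from collections import Counter

def repetition_score(text, n=3):
    # Crude heuristic: the fraction of word n-grams that occur more than once.
    # 0.0 means no repeated phrasing; 1.0 means every n-gram repeats.
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

print(repetition_score("the cat sat the cat sat the cat sat"))  # → 1.0
print(repetition_score("alpha beta gamma delta"))               # → 0.0
```

A real detector combines many such statistical signals and still gets it wrong often, which is exactly why these tools currently trail behind the generators.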
While we maintain some optimism that technology like this can be leveraged to accelerate production, the real potential for spreading misinformation at a rate and scale we wouldn’t be able to distinguish or comprehend concerns us greatly. What do you think? Is the concern overblown, or should we be demanding safeguards and guardrails for systems like this?
As always, stay safe out there.