Artificial Intelligence (AI) has transformed numerous aspects of our lives, from automating tasks to enhancing decision-making processes. With those advancements, however, comes the potential for misuse. I’m sure you have seen it in action already.
Pictures of a former president getting “arrested” and pictures of the pope wearing a puffy jacket went viral recently.
Obviously, people have already taken advantage of fake images and videos to deceive others into believing they are real. These deepfakes have gained significant attention in recent years for their potential to disrupt many aspects of society: spreading misinformation, harming reputations, and even undermining trust in the news media (or any kind of media).
For years, photographers have manipulated film to realize their creative vision. At first it was film processing in the lab to change simple things. Then, with the advent of digital photography, the processing moved to software on your computer, such as Photoshop.
Until recently, you at least had to be in the right place at the right time to capture a photograph, or purchase the rights to use someone else’s. Now we can simply type out what we want and an image is created for us.
Fake images and videos are created using deep learning algorithms that can manipulate or synthesize content, making it difficult to distinguish between what is real and what isn’t.
These algorithms use vast amounts of data to learn and mimic human behavior, allowing them to generate highly realistic images, video, and audio.
You can imagine how this has raised serious concerns about the trustworthiness of what we see and hear on the internet, the television, and even something as “old fashioned” as the radio or over the phone.
One of the scariest consequences of these AI-generated images, videos, text, and audio is the ability to spread misinformation. With convincing fake content, someone could use deepfakes to spread hoaxes, false alerts, made-up news stories, and scams.
Deepfakes could be used to create fabricated evidence against political figures or even misrepresent historical events, with severe implications for public opinion and political systems. They can also inflame social tensions, create new conflicts, and damage the credibility of legitimate sources of information, eroding public trust in media and established institutions.
Journalists and media organizations play a crucial role in shaping public opinion. However, with the proliferation of deepfakes, the credibility of visual media can be undermined, creating doubts about the authenticity of news and information. This can lead to widespread skepticism and confusion among the public, eroding trust in those who tell the stories and report the news.
Breaking news would have to be delayed so that reporters can investigate thoroughly and confirm that the images and video they receive depict real events. Journalists and media organizations will face ever more difficult challenges in verifying the authenticity of their content, compromising their ability to provide accurate, timely, and reliable news to the public.
To show you how quick and easy these images are to create: to get the left image below, I typed in “elderly gentleman with deep wrinkles sitting on a bench in a mall while waiting for his wife to finish shopping”. Less than a minute later, it gave me four options to choose from.
The “Midjourney” AI image generator was used to make the “photographs” in this article, but other big names in AI image generation are “Stable Diffusion” and “DALL-E”. If you are looking for a text-based chat AI, you will want to look up “ChatGPT”. OpenAI owns both ChatGPT and DALL-E.
Unfortunately, a reliable solution to the deepfake problem hasn’t been found yet. We have to be extra diligent and not trust everything we see on the internet and the news at first glance. Those of us who are older grew up being able to believe a photograph. We can’t do that anymore.
Also keep in mind that even if a story or picture is shared with you by someone you trust, such as a friend, there is good reason to ask where they got it, because we can all be tricked. Nobody is exempt.
In our family, when someone shares something that seems a little outrageous, we say “source that for me”. Get in the habit of doing that because you will need it going forward.