AI and the Dangers of Deep Fakes

A News Report

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.”

-OpenAI’s GPT-2 Language Model (source)

Only the first sentence of this fake news report was written by a human. The rest was generated by OpenAI’s newest unsupervised language model, GPT-2. As the report shows, GPT-2 can produce human-like articles from a very short prompt with almost no additional guidance. This is remarkable, and many incredible things can come from this technology. With the help of artificial intelligence, we will soon see complicated scientific papers quickly summarized in language any person can understand. We will see fast, unsupervised translation between languages. We will see the rise of more capable AI writing assistants. These are all good things, and they will come in time.
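For a sense of how little effort this kind of generation takes today, here is a minimal sketch of prompt-based generation using the small GPT-2 checkpoint that OpenAI did release publicly, via the Hugging Face transformers library. The prompt is the report’s human-written first sentence; the sampling settings are illustrative assumptions on my part, not the exact configuration OpenAI used.

```python
# A minimal sketch: generate a continuation of a one-sentence prompt with the
# small, publicly released GPT-2 checkpoint. Sampling settings are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The single human-written sentence that seeded the fake report above.
prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today.")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=200,                       # stop after roughly 200 tokens
    do_sample=True,                       # sample instead of greedy decoding
    top_k=40,                             # only the 40 likeliest next tokens
    pad_token_id=tokenizer.eos_token_id,  # silence the padding warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each run produces a different plausible-sounding continuation, and the withheld full-size model generates far more coherent text than this small public checkpoint will.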

Unfortunately, this new technology can also be put to many harmful uses. These harms are unavoidable, so I believe it is our responsibility to address them now, before too much damage has been done. OpenAI has already taken the first step by withholding the complete GPT-2 model until the possible implications have been discussed further. To quote their website, “while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas” (OpenAI). It is time for all of us to seriously discuss and develop strategies for the ethical future of artificial intelligence.

Deep Fakes

Screenshots of a fake video of Obama developed using a generative adversarial network

In 2014, a machine learning technique called a generative adversarial network (GAN) was invented as a way to generate new data from existing data. For instance, after looking at a thousand photos of cats, such a network can create a new cat picture unlike any of the originals, yet close enough that it is difficult for humans to tell it apart from a real photo. In 2017, this technology was used maliciously by Reddit users to create digitally altered pornographic videos that superimposed the faces of female celebrities onto performers’ bodies. This is a massive violation of women’s privacy and raises deep ethical concerns about our bodies and their digital representation. In response, Reddit banned these videos, but it was too late. The creator of this technology has since released a platform that makes producing deep fakes easy for everyone. To quote the Guardian, “anyone with access to the internet and pictures of a person’s face could generate their own deep fakes” (Oscar Schwartz). Unfortunately, the implications of deep fakes go beyond non-consensual explicit videos.
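For the curious, the adversarial idea itself fits in a few lines of code. Below is a deliberately simplified PyTorch sketch of one training round; the tiny fully connected networks, layer sizes, and learning rates are illustrative placeholders (real image GANs use convolutional architectures), not any particular published model.

```python
import torch
import torch.nn as nn

latent_dim = 100  # random "seed" vector the generator turns into an image

# Toy networks: a forger (generator) and a detective (discriminator).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),  # fake image pixels in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial round on a batch of real photos flattened to vectors."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Teach the discriminator to tell real photos from current forgeries.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Teach the generator to produce forgeries the discriminator accepts.
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two networks improve in lockstep: every time the detective gets better at spotting forgeries, the forger is pushed to make more convincing ones, which is exactly why the end results can fool human eyes.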

Their Implications

We live in a time when we are constantly bombarded with information, so much so that we cannot possibly give everything the attention needed to absorb it properly. Unfortunately, creators of fake news have abused this situation. Articles can be written so that false information appears true at first glance, and since we seldom have time to give more than a first glance, the false information stands without being fact-checked. Given enough time, we remember the “fact” but forget where we read it, and suddenly a new “truth” is made. The more realistic the fake news sounds, the less likely we are to question it on first glance, and the more misinformation takes hold. Now consider what happens when all this fake news can be generated almost automatically using artificial intelligence models like GPT-2 (in fact, they test this here). Well-crafted fake news can then be produced almost instantaneously, for free, and misinformation will spread like wildfire.

With deep fakes, the possibilities are even more malicious. Right now, though we tend to give text the benefit of the doubt, everyone knows that something written could be filled with lies. This is not the case with video. The technology is approaching the point where fake, professional-looking videos can be made using AI to generate realistic voices and realistic-looking footage.

It is easy to see how this technology could be abused by individuals or governments with malicious intent. The deep fakes do not even have to be perfect. They just have to be good enough to cause outrage in a few people, who then spread the false information contained in the videos. While some people will go back to watch the video for themselves and may realize it is a fake, many will simply jump on the bandwagon and become wrapped up in the artificially manufactured discontent.

What Can Be Done

As with all technology, I do not believe it is possible for us to simply ban deep fakes. The technology will arrive regardless of policy. We have to accept this and ensure that policies are already in place by the time these technologies become commonplace.

First, with regard to pornographic deep fakes: while some protections exist, there is a considerable legal gray area, especially when celebrities are the ones being exploited. It is possible that these fakes could be argued to be parodies or some other form of protected work. As such, we should develop a clear legal line between what constitutes parody and what constitutes an abuse of privacy and body. Personally, I would say that any deep fake created without the consent of the people whose faces and bodies are used should be illegal. However, there are less extreme routes that could also be taken. Regardless, these conversations should be happening now.

Second, with AI research having such a large impact on our society, it is time for researchers to consider the possible ramifications of their findings when choosing what to research and publish. This is exactly what OpenAI did by withholding the full GPT-2 model from publication until more time has been spent addressing the ethical concerns. These considerations should occur publicly, allowing for feedback from others in the field. While researchers should not be held responsible for future abuse of their findings, taking possible malicious uses into account from the very beginning helps prepare everyone for the impact a new technology will have.


Third, I personally believe it is the responsibility of schools to help fight fake news by promoting critical reading skills from a young age. While education reform always comes slowly, we can start by encouraging children to think critically about what they read in the classroom, asking questions such as “what type of evidence does the author provide for this claim?” and “can you provide a counter-example, or a situation where the author’s claim does not apply?” In the future, schools could build units on critical reading into their English programs that have students directly compare true and fake news articles and that promote using the internet wisely to search for additional evidence.

Lastly, it is possible to use artificial intelligence to counter artificial intelligence. As deep fakes become more and more realistic, it will become increasingly difficult for the human eye to spot the differences. Thankfully, we have already created machine eyes. Those entering the field of artificial intelligence can choose to focus on developing technology for detecting AI rather than technology that alters or replaces human intelligence. For instance, with every heartbeat, blood rushes into our face and to our brain. Though undetectable to the human eye, this changes the tint of our skin ever so slightly. This tiny detail is easy to miss and hard for deep fakes to model. Someone could develop technology that looks for this slight change in skin color to determine whether a video is real or fake. I am sure there are many other tiny details left out of deep fakes, and as the technology gets better and better, these details will become increasingly important.
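As a sketch of what such a detector might look like, the toy function below averages the green channel (the most pulse-sensitive one) of a face region frame by frame, then checks how much of the signal’s energy falls in the normal human heart-rate band. The function name, the assumption that a face box is supplied externally, and the frequency band are my own illustrative choices, not an existing tool.

```python
import cv2
import numpy as np

def pulse_band_energy(video_path, face_box, fps=30.0):
    """Fraction of color-variation energy in the human heart-rate band.

    A real face should show a faint periodic tint change at ~0.7-4 Hz as
    blood pulses through the skin; a synthesized face often will not.
    Toy sketch only: assumes a fixed face box from an external detector.
    """
    x, y, w, h = face_box
    cap = cv2.VideoCapture(video_path)
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        greens.append(roi[:, :, 1].mean())  # mean green value of the face region
    cap.release()

    signal = np.asarray(greens) - np.mean(greens)   # remove the constant offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)            # plausible heart rates
    return spectrum[band].sum() / spectrum.sum()    # low values are suspicious
```

A real detector would also need face tracking, compensation for video compression noise, and a learned decision threshold, but the principle stands: look for physiological signals the fake never bothered to synthesize.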

Conclusion

Ultimately, the technology behind deep fakes and AI-generated fake news will reach a point where we cannot tell the difference. If we are not ready for this future, we run the risk of completely losing sight of what is true and what is fake. Thankfully, I believe that if we take the proper steps now, while this technology is still in its infancy, we can ensure that reaching that point does not cause mass distrust and chaos. At times I find it scary to think about the future and the dangers of AI. However, by addressing these fears and discussing them with one another, we open the door to important conversations that can turn into societal norms and policies preventing bad actors from abusing these remarkable technologies.

 

10 thoughts on “AI and the Dangers of Deep Fakes”

  1. Cool post David. If I had to guess, I’d say that the problem of fake news and even these new deep fake videos is likely to get worse before it gets better: platforms like Twitter/Facebook don’t even have a handle now on fake accounts and posts, and as you detail, fake content is only improving. I wonder if we’ll ever be able to fully ‘solve’ the problem of fake news; it might be a necessary tradeoff if we wish to maintain freedom of speech/expression. Additionally, we’re so inundated by information that it tends to be the extreme content that breaks through, which isn’t a hopeful sign.


  2. With all of the hype around AI, as we’ve talked about in class a couple times, it was interesting to me to read about it from a more ethical, philosophical perspective. As we’re developing new technologies and industries, I think the kinds of questions you’ve raised here are essential. Especially with issues of deep fakes, it’s important to be intentional about free speech in regulation and development– we don’t want fake images circulating without any way to tell the difference, but neither do we want to, for instance, require that photos be certified to be distributed.


  3. Nice post. I actually wonder whether deepfakes will have the opposite effect. When they become so easy to create, we stop trusting any video information online altogether. That could lead to a resurgence of journalism: trusted individuals who investigate and report the news without manipulating it. I could see some sort of reputation mechanism developing, with an associated encryption ID that proves a video’s authenticity. Paradoxically, only when the fakes get better do we return to trusted sources.


  4. Very interesting post! I agree with you that there needs to be more education and information regarding deepfakes and how to tell if something on the internet is real or not. From just skimming a news article such as the one you posted above, it is difficult for people to tell whether the article is legitimate or not. It is scary to think that we need to be skeptical of almost everything we see online nowadays.


  5. Cool topic! Parts of your discussion reminded me of a line from one of the videos we watched about blockchain last week: that new technologies are often first put into practice by criminals. It’s definitely concerning how this kind of AI technology can be used in malicious ways. However, it’s such a complicated topic that it is difficult to see exactly what lies ahead, and what potential solutions may be. Hopefully trustworthy sources will adapt to the new technology so we have somewhere to turn as we do our best to protect ourselves from this new kind of fake content.


  6. It’s interesting how on an ethics level, it is pretty clear that deepfakes shouldn’t be something we tolerate much in society, but we’ve already passed a point of no return. The idea you brought up about using AI and ML to essentially track deepfakes is really interesting. Since we can’t know exactly what logic a given ML model has learned, we can design another ML model to track it. It’s too much for me to handle and I prefer to not have to think about it.


  7. I read the news on GPT-2, and Elon Musk said that they are too scared to release the technology to the public. It was also reported that GPT-2 can generate high-scoring SAT essays within seconds. It sounds like my childhood dream of a robot completing essays for me is now becoming reality.
    Personally, I believe Facebook and other platforms should take a more active role in identifying fake news. Even without GPT-2, pictures can be photoshopped and fake news can be fabricated by humans. We are already living in an age where things cannot be easily identified with human eyes. AI like AlphaGo has already surpassed our intelligence level, at least in the game of Go. From a more optimistic view, I do believe that the development of GPT-2 can open new markets and opportunities in other fields. LOL, cannot wait to read the first fiction written by robots.


    • It is actually very interesting. When news came out about AI composing music, it was all praised as a breakthrough development. Composing lyrics isn’t so different from writing paragraphs, yet articles are far more concerned with the threat of GPT-2. I am not sure whether the media has played a role in guiding which direction the reaction goes.


  8. Great post David! Did you watch Shane Dawson’s recent conspiracy theory video on YouTube? He discusses all of these advanced forms of AI and their implications. In the video, he and his friends use AI to generate automated voices from their own recorded speech and use these computer-generated voices to call family members. I thought that this part was terrifying because if family members cannot tell the difference between real and fake, then there are many frightening applications of this technology. For example, like you mentioned, someone could use a fake version of the president’s voice during an important phone conversation with a foreign diplomat.


  9. This topic (and other similar ones) has been talked about so much online that many people are starting to worry about “AI taking over the world.” I think that the debate on whether AI will replace humans will never stop until we witness actual results. At least in the near future, many industries can grow and innovate while AI takes over the tedious yet not mindless work that has traditionally been done by human labor. Great post!

