A News Report
A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.
The incident occurred on the downtown train line, which runs from Covington and Ashland stations.
In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.
“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”
The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.
The Nuclear Regulatory Commission did not immediately release any information.
According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.
“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.
-OpenAI’s GPT-2 Language Model (source)
Only the first sentence of this fake news report was written by a human. The rest was generated by OpenAI’s newest unsupervised language model, GPT-2. As the report shows, GPT-2 can produce human-like articles from a very short prompt with almost no additional guidance. This is remarkable, and many incredible things can come from this technology. With the help of artificial intelligence we will soon see complicated scientific papers quickly summarized in language that anyone can understand. We will see fast, unsupervised translation between languages. We will see the rise of more capable AI writing assistants. These are all good things, and they will come in time.
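To make the idea of prompt continuation concrete, here is a vastly simplified sketch of my own. GPT-2 is a large transformer trained on millions of web pages, not a word-pair counter, but the core generation loop — predict a plausible next word, append it, repeat — looks the same. The tiny corpus and prompt below are invented purely for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Count word-to-next-word transitions in a training corpus."""
    model = defaultdict(list)
    words = text.split()
    for word, nxt in zip(words, words[1:]):
        model[word].append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:          # dead end: no observed continuation
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Toy corpus loosely echoing the fake report above (invented for the demo).
corpus = ("the train was stolen today . the train line runs downtown . "
          "the materials were stolen from the site .")
model = train_bigram_model(corpus)
print(generate(model, "the train"))
```

The output is grammatical-looking nonsense stitched from observed word pairs; GPT-2’s leap is that its “next-word statistics” come from a neural network large enough to stay coherent across whole paragraphs.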
Unfortunately, many negatives can also come from this new technology. These negatives are unavoidable, so I believe it is our responsibility to address them now, before too much damage has occurred. OpenAI has already taken the first step by withholding the release of the complete GPT-2 model until it has further discussed the possible implications. To quote their website, “while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas” (OpenAI). It is time for us all to seriously discuss and develop strategies for the ethical future of artificial intelligence.
In 2014, a machine learning technique called a generative adversarial network (GAN) was invented as a way to generate new data from existing data. For instance, after looking at a thousand photos of cats, such a network can create a new cat picture unlike any of the original thousand, yet close enough that humans find it difficult to tell the new picture apart from the real ones. In 2017 this technology was used maliciously by Reddit users to create digitally altered pornographic videos that superimposed the faces of female celebrities onto other people’s bodies. This is a massive violation of women’s privacy rights and raises serious ethical concerns about our bodies and their digital representation. In response, Reddit banned these videos, but it was too late. The creator of this technology has since released a platform that makes creating deep fakes easy for everyone. To quote the Guardian, “anyone with access to the internet and pictures of a person’s face could generate their own deep fakes” (Oscar Schwartz). Unfortunately, the implications of deep fakes go beyond non-consensual explicit videos.
We live in a time when we are constantly bombarded with information, so much so that we cannot possibly give everything the full attention needed to properly absorb it. Unfortunately, this situation has been abused by creators of fake news. Articles can be written so that false information appears true at first glance, and since we seldom have time to give more than a first glance, the false information survives without being fact-checked. Given enough time we remember the “fact” but forget where we read it, and suddenly a new “truth” is made. The more realistic the fake news sounds, the less likely we are to question it at first glance, and the more misinformation spreads. Now, consider what happens if all this fake news can be generated almost automatically using artificial intelligence models like GPT-2 (in fact they test this here). Well-crafted fake news could then be produced almost instantaneously, for free, and misinformation would spread like wildfire.
With regard to deep fakes, the possibilities are even more malicious. Right now, though we tend to give written text the benefit of the doubt, everyone knows that something written could be filled with lies. This is not the case with videos. The technology is approaching the point where fake, professional-looking videos can be made using AI to generate realistic voices and realistic-looking footage.
It is easy to see how this technology could be abused by individuals or governments with malicious intent. The deep fakes do not even have to be perfect. They just have to be good enough to cause outrage in a few people, who then spread the false information contained in the videos. While some people will try to go back and watch the video for themselves, and may realize it is a fake, many will simply jump on the bandwagon and become wrapped up in the artificially manufactured discontent.
What Can Be Done
As with all technology, I do not believe it is possible for us to simply ban deep fakes. The technology will come regardless of policy. We have to accept this and ensure that policies are already in place by the time these technologies become commonplace.
First, with regard to pornographic deep fakes: while some protections exist, there is a considerable legal gray area, especially when celebrities are being taken advantage of. It is possible that these fakes could be argued to be parodies or some other form of protected work. As such, we should try to draw a clear legal line between what constitutes parody and what constitutes an abuse of privacy and body. Personally, I would say that any deep fake created without the consent of the people whose faces and bodies are used should be illegal. However, there are less extreme routes that could also be taken. Regardless, these conversations should be happening now.
Second, with AI research having such a large impact on our society, it is time for researchers to consider the possible ramifications of their findings when choosing what to research and publish. This is exactly what OpenAI did by withholding the full GPT-2 model until more time has been spent addressing ethical concerns. These considerations should occur publicly, allowing for feedback from others in the field. While researchers should not be held responsible for future abuse of their findings, taking possible malicious uses into account from the very beginning helps prepare everyone for the impact a new technology will have.
Third, I personally believe it is the responsibility of schools to help fight fake news by promoting critical reading skills from a young age. While education reform always comes slowly, we can start by encouraging children to think critically about what they read in the classroom, asking questions such as “what type of evidence does the author provide for his or her claim?” and “can you provide a counter-example, or a situation where the author’s claim does not apply?” In the future, schools could build into their English programs units on critical reading that have students directly compare true and fake news articles and that promote using the internet wisely to search for additional evidence.
Lastly, it is possible to use artificial intelligence to counter artificial intelligence. As deep fakes become more and more realistic, it will become increasingly difficult for the human eye to spot the differences. Thankfully, we have already created machine eyes. Those interested in entering the field of artificial intelligence can choose to focus on developing technology for detecting AI rather than technology that alters or replaces human intelligence. For instance, with every heartbeat, blood rushes into our face and to our brain. Though undetectable to the human eye, this changes the tint of our skin ever so slightly. This is a tiny detail that is easy to miss and hard to model with deep fakes. Someone could develop technology that looks for this slight skin-color change in videos to determine whether a video is real or fake. I am sure there are many other tiny details that deep fakes leave out. As the technology becomes better and better, these tiny details will become increasingly important.
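As a rough sketch of what such a detector might look like — this is my own toy illustration, not a published detection system — one could average the green channel of each frame (the channel most affected by blood flow) and check whether that signal pulses at a plausible heart rate. The synthetic “video,” frame rate, and heart-rate band below are all invented for the demonstration; a real detector would first need to locate and track the face.

```python
import numpy as np

def pulse_score(frames, fps):
    """Score how strongly a video's average skin tone pulses at a
    plausible human heart rate (roughly 40-180 beats per minute).
    frames: array of shape (time, height, width, RGB)."""
    # Average green-channel intensity per frame: blood flow modulates it.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()            # drop the constant baseline
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    # Fraction of (non-DC) spectral energy inside the heart-rate band.
    return spectrum[band].sum() / (spectrum[1:].sum() + 1e-9)

rng = np.random.default_rng(1)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
base = rng.normal(120, 2, (len(t), 8, 8, 3))       # noisy "skin" pixels
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)          # 72 bpm tint change
real = base + pulse[:, None, None, None]           # real face: has a pulse
fake = base                                        # deep fake: no pulse
print(pulse_score(real, fps), pulse_score(fake, fps))
```

The “real” clip concentrates its energy at 1.2 Hz and scores higher than the pulseless “fake,” which is the whole trick: the detector looks for a physiological signal that current generators do not bother to reproduce.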
Ultimately, the technology behind deep fakes and AI-generated fake news will reach a point where we cannot tell the difference. If we are not ready for that future, we run the risk of completely losing sight of what is true and what is fake. Thankfully, I believe that if we take the proper steps now, while this technology is still in its infancy, we can ensure that reaching this point does not cause mass distrust and chaos. At times I find it scary to think about the future and the dangers of AI. However, by addressing these fears and discussing them with one another, we open the doors for important conversations that turn into important societal beliefs and policies that prevent bad actors from abusing these remarkable technologies.