Deepfake Technology, a Double-Edged Sword
  • Reporter Tae Jong-hyeok
  • Approved 2024.09.25 21:47

▲Deepfake technology / Gettyimagesbank

On Aug. 15, Korea’s National Liberation Day, Lee Ok-bi, a descendant of independence activist and poet Lee Won-rok (also known as Lee Yuk-sa), wept when she saw a photo of Lee Won-rok dressed in a turquoise Hanbok. The image was part of a video created by a food company in collaboration with the Ministry of Patriots and Veterans Affairs. It digitally recreated the appearance of Lee Won-rok, whose last known photo was a mugshot from his imprisonment. The video even showed independence activists walking outside Seodaemun Prison in Hanbok. These scenes were made possible by deepfake technology, which uses artificial intelligence (AI) to create highly realistic fake images or videos that are indistinguishable from real ones. Many Koreans were touched to see the independence activists smiling and celebrating National Liberation Day.

In 2018, a research team from the University of Washington created a video known as the “Obama Deepfake,” which brought deepfake technology to public attention. It was difficult to distinguish the actual footage of his speech from the version created with deepfake technology. How can deepfake technology synthesize such realistic videos? The core of deepfake technology is deep learning: an AI model learns from videos featuring a specific individual and generates a new face, seamlessly blending it into the original footage. In addition, it uses Generative Adversarial Network (GAN) technology to produce even more realistic, indistinguishable images.
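To make the idea of adversarial training more concrete, the sketch below shows a minimal GAN loop in PyTorch: a generator tries to produce images that fool a discriminator, while the discriminator learns to separate real images from generated ones. The network sizes, the 32x32 image resolution, and the random tensors standing in for a real dataset are illustrative assumptions, not the architecture of any actual deepfake system.

```python
# Minimal GAN training sketch (illustrative only, not a production deepfake pipeline).
# Random tensors stand in for a real image dataset.
import torch
import torch.nn as nn

LATENT_DIM = 64        # size of the random noise vector fed to the generator
IMG_PIXELS = 32 * 32   # flattened toy image size

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):                        # short toy training loop
    real = torch.rand(16, IMG_PIXELS) * 2 - 1  # placeholder batch of "real" images
    noise = torch.randn(16, LATENT_DIM)
    fake = generator(noise)

    # 1) Train the discriminator to tell real images from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(16, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(16, 1))
    g_loss.backward()
    opt_g.step()
```

As the two networks compete, the generator's outputs gradually become harder for the discriminator to reject, which is what makes GAN-based deepfakes look so realistic.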

Deepfake technology is widely used in various fields because it can synthesize not only images but also videos and voices that do not exist. Recently, creators of visual content, such as dramas and films, have been utilizing deepfake technology. In one Netflix drama, for example, the face of a child actor was created with deepfake technology to depict scenes from an actor’s childhood. In this way, deepfake technology gives the movie and entertainment industry greater creative freedom and realism, offering audiences a more immersive viewing experience. It is also useful for dangerous action or stunt scenes, allowing for seamless video production while reducing the physical demands on leading actors.

Meanwhile, deepfake technology is also being put to effective use in the medical field. In 2019, the Institute for Medical Informatics at the University of Lübeck in Germany created deepfake medical images using GAN technology. Because it is difficult to obtain enough real data for AI to learn disease diagnosis from, medical images generated by deepfake technology are used for training instead. This allows AI to learn to detect and diagnose diseases, facilitating active research in the field.
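As a rough illustration of how GAN-generated images can compensate for scarce medical data, the sketch below mixes a small set of real images with synthetic ones before training a simple classifier. The function `generate_synthetic_images`, the dataset shapes, and the labels are hypothetical placeholders; this does not reproduce the Lübeck study.

```python
# Illustrative sketch: augmenting a small medical-image dataset with GAN-generated samples.
# All data here is random placeholder tensors; generate_synthetic_images is a hypothetical
# stand-in for a trained GAN generator such as the one sketched earlier.
import torch
import torch.nn as nn

def generate_synthetic_images(n: int) -> torch.Tensor:
    """Hypothetical wrapper around a trained GAN generator (random data used here)."""
    return torch.rand(n, 32 * 32)

# A small "real" dataset: 100 images with binary disease labels (placeholder values).
real_images = torch.rand(100, 32 * 32)
real_labels = torch.randint(0, 2, (100,))

# Add 400 synthetic images; in practice their labels would come from the GAN's
# conditioning or from expert review. Here they are random placeholders.
synthetic_images = generate_synthetic_images(400)
synthetic_labels = torch.randint(0, 2, (400,))

images = torch.cat([real_images, synthetic_images])
labels = torch.cat([real_labels, synthetic_labels])

# A minimal classifier trained on the augmented dataset.
classifier = nn.Sequential(nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(classifier(images), labels)
    loss.backward()
    optimizer.step()
```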

However, because anyone can now easily create and distribute images with deepfake technology, harmful consequences have also emerged. In particular, sexual crimes using deepfakes have spread through Telegram, raising growing concern. Criminals create new videos by synthesizing the faces and bodies of different people with deepfake technology and then distribute them. Victims of this criminal activity come from almost all age groups, including teenagers. Shockingly, about 270,000 people have been found to be participating in an illegal synthetic-material production channel on Telegram. Anyone can access the channel through a simple procedure and, for only about 650 KRW, create illegal synthetic material. In addition, payment in virtual currency guarantees the anonymity of the seller.

The development of deepfake technology also makes fake news more realistic and convincing. In fact, before this year’s April 10 general election, 388 posts using deepfake images were found to violate the Public Official Election Act, yet 97 of them remained online. Synthesizing and distributing photos or voices of specific people as if they were real undermines the fairness of elections.

In May 2024, it became mandatory to display a watermark on content generated by AI. In addition, platform companies are now held responsible for deepfake videos created and distributed on their services. However, platforms like Telegram, where illegal videos circulate, make investigations difficult because of their anonymity. Professor Kim Myuhng-joo at Seoul Women’s University said, “Some people may use deepfakes for fun without realizing it is a crime. Education is essential to ensure AI tools are used positively.”