Deepfake Geography: The Problem of AI Falsifying Satellite Photos


Art imitates life, and so does AI. Imitation through illustrations and paintings has been an integral part of human culture since prehistoric cave paintings, and this capacity for imitation is often attributed to the mirror neurons in our brains. But while imitation is part of how we learn as humans, the spread of deepfakes in cyberspace shows the damaging repercussions this ability can have.

For the uninitiated, deepfakes are AI-generated manipulations of media available on the web. Recently, the American actor Bill Hader was seen in a deepfake video in which his face seamlessly morphs into Arnold Schwarzenegger’s while he does an impression of the actor. The video was made by the YouTube channel Ctrl Shift Face, whose creator told The Guardian that he wants his deepfakes to raise awareness of the damage this technology could cause.

While Twitter continues to tackle fake news, Bo Zhao of the University of Washington, together with his co-authors Shaozeng Zhang, Chunxue Xu, Yifan Sun, and Chengbin Deng, has published a paper in the journal Cartography and Geographic Information Science that tackles deepfake geography. Zhao boils the idea down by saying, “This isn’t just Photoshopping things. It’s making data look uncannily realistic.”

AI-based deepfake geography study

“To understand such a new mode of fake geography, it is necessary to comprehend the basic algorithm of deep fake techniques in making fake geospatial data and thus to inspire us to explore possible detection approaches,” write the authors. Deepfakes are made using a deep learning technique called a generative adversarial network (GAN). In their study, Zhao et al. used the Cycle-Consistent Adversarial Network (CycleGAN), a popular GAN variant, to generate simulated satellite images of a city.

Symbolic image, not satellite footage (image: Peter Nguyen / Unsplash)

CycleGAN trains two mapping functions, G: X → Y and F: Y → X, which translate images between two domains X and Y (here, for instance, map tiles and satellite imagery), alongside two discriminators, D_X and D_Y, which try to tell genuine images of each domain apart from translated ones. Training solves the objective G*, F* = arg min_{G,F} max_{D_X,D_Y} L(G, F, D_X, D_Y), where the full loss L combines the two adversarial terms with a cycle-consistency term that pushes F(G(x)) back toward x and G(F(y)) back toward y. In plain terms, the generators keep improving until the discriminators can no longer reliably tell real from fake, and the resulting translated images are the deepfake geographical maps.
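
For readers who like to see the idea in code, below is a minimal, hypothetical sketch of that objective using toy PyTorch networks. It is not the model Zhao et al. trained; the tiny generators, discriminators, and the lambda weight are illustrative assumptions, meant only to show how the adversarial and cycle-consistency terms fit together.

```python
# Minimal sketch of the CycleGAN generator objective (toy networks, not the study's model).
import torch
import torch.nn as nn

def make_generator():
    # Toy image-to-image generator; real CycleGAN generators are much deeper.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
    )

def make_discriminator():
    # Toy discriminator producing a patch of real/fake scores.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, stride=2, padding=1),
    )

G, F = make_generator(), make_generator()          # G: X -> Y, F: Y -> X
D_X, D_Y = make_discriminator(), make_discriminator()
mse, l1 = nn.MSELoss(), nn.L1Loss()

def generator_objective(x, y, lambda_cyc=10.0):
    """Generator-side part of L(G, F, D_X, D_Y): adversarial + cycle-consistency."""
    fake_y, fake_x = G(x), F(y)

    # Adversarial terms: generators try to make the discriminators predict "real" (1).
    pred_y, pred_x = D_Y(fake_y), D_X(fake_x)
    adv = mse(pred_y, torch.ones_like(pred_y)) + mse(pred_x, torch.ones_like(pred_x))

    # Cycle-consistency terms: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y.
    cyc = l1(F(fake_y), x) + l1(G(fake_x), y)
    return adv + lambda_cyc * cyc

# One toy step on random 3-channel 64x64 batches standing in for image tiles.
x = torch.rand(2, 3, 64, 64)   # domain X, e.g. base-map tiles
y = torch.rand(2, 3, 64, 64)   # domain Y, e.g. satellite tiles
loss = generator_objective(x, y)
loss.backward()
print(float(loss))
```

In a real training loop, the discriminators are updated in alternation with the generators on the opposing objective, which is what the min-max formulation above expresses.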

Zhao et al. chose Tacoma, Seattle, and Beijing as the cities for their research. Maps and satellite images were combined to create new images of one city that carry the visual characteristics of the other two: Tacoma served as the “base map” city, onto which the geographic features and urban structures of Seattle and Beijing were transferred, producing deepfake satellite imagery of Tacoma, as sketched below.
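
Once such a model is trained, producing a fake tile is just a forward pass through a generator. The following sketch is purely illustrative: the tile path, the toy generator, and the commented-out checkpoint name are my assumptions, not details from the study.

```python
# Hypothetical inference sketch: applying a trained generator to a Tacoma base-map
# tile to obtain a simulated satellite-style tile in the look of another city.
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

# Same toy generator shape as in the training sketch above; it stands in for the
# trained G: base map -> satellite style. In practice you would load real weights.
G = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
# G.load_state_dict(torch.load("generator_seattle_style.pt"))  # hypothetical checkpoint
G.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),                 # tile size used in the study
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

tile = preprocess(Image.open("tacoma_basemap_tile.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    fake = G(tile)                                 # simulated satellite-style tile

# Map the output from [-1, 1] back to an 8-bit image and save it for inspection.
out = ((fake.squeeze(0) * 0.5 + 0.5).clamp(0, 1) * 255).byte()
Image.fromarray(out.permute(1, 2, 0).numpy()).save("tacoma_deepfake_tile.png")
```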

Defense against deepfakes

Zhao et al. constructed a deepfake detection dataset containing 8,064 satellite images, each 256×256 pixels in size. The dataset includes authentic satellite images of Tacoma, Seattle, and Beijing, as well as the simulated images of Tacoma. They then computed 25 features over these images, such as CFI, the Brenner image quality index, and the gray-level co-occurrence matrix, to name a few. The results show that 21 out of 25 features had significantly different mean values between authentic and simulated images, which gives a basis for questioning an image’s authenticity.
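
To give a sense of how such handcrafted features can separate real from simulated tiles, here is a minimal sketch that computes just one of the simpler measures mentioned above, the Brenner sharpness index, for two sets of tiles and compares their group means with a t-test. The folder names and the choice of a single feature and test are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): compare the Brenner sharpness measure
# of authentic versus simulated satellite tiles.
import glob
import numpy as np
from PIL import Image
from scipy.stats import ttest_ind

def brenner(gray: np.ndarray) -> float:
    """Brenner focus measure: sum of squared intensity differences two pixels apart."""
    diff = gray[2:, :] - gray[:-2, :]
    return float(np.sum(diff ** 2))

def scores(pattern: str) -> np.ndarray:
    """Load grayscale tiles matching a glob pattern and score each one."""
    values = []
    for path in glob.glob(pattern):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        values.append(brenner(gray))
    return np.array(values)

# Hypothetical folders of 256x256 tiles, mirroring the study's real/simulated split.
real = scores("tiles/authentic/*.png")
fake = scores("tiles/simulated/*.png")

t_stat, p_value = ttest_ind(real, fake, equal_var=False)
print(f"real mean={real.mean():.1f}  fake mean={fake.mean():.1f}  p={p_value:.4f}")
```

A full detector would compute many such features per tile and feed them into a classifier, but even a single feature with a significantly different mean can flag a suspicious image for closer inspection.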

The field of geospatial artificial intelligence has connected Geographic Information Systems (GIS) with AI advances such as natural language processing, unstructured data classification, computer vision, and map style transfer. Although Zhao et al. recognize the opportunities here, they conclude that “GIS practitioners should also be aware of possible falsification of geospatial data and get prepared for that by developing detection approaches for identifying fake geospatial data and utilizing such an approach when necessary.”

The series of simulated satellite images of Tacoma reflects visual patterns similar to those of Seattle and Beijing. Zhao et al. end with a warning and a suggestion: the “study warns of the emergence and proliferation of deep fakes in geography just as ‘lies’ in maps. We suggest timely detections of deep fakes in geospatial data and proper coping strategies when necessary.”

Photo credit: The feature image is symbolic and was prepared by NASA. The photo in the body of the article has been taken by Peter Nguyen.
Sources: Kim Eckart (UW News) / Elle Hunt (The Guardian) / Jason Brownlee (Machine Learning Mastery)


Ujala Chowdhry
Hello, I'm a tech journalist here. I have been able to view many facets of technology at TechAcute and continue to learn more. I love covering global tech solutions and being socially available on Twitter.