Deepfakes are synthetic images and videos of people, created with artificial intelligence, that can look remarkably realistic. The technology is built on deep neural networks, a class of AI models that rely on layers of interconnected units to learn and perform tasks. Deepfakes can be put to a variety of purposes, including fraud and social manipulation: the technology has been used to splice politicians or celebrities into footage they never appeared in, and it can also imitate voices and emotional expressions.
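To make the idea of "layers of interconnected units" concrete, here is a minimal sketch in Python (NumPy only) of a tiny two-layer network learning the XOR function. The layer sizes, learning rate, and training loop are arbitrary choices for illustration, not the architecture of any real deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy dataset: the XOR function, which a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of "interconnected units": weights connect every unit
# in one layer to every unit in the next.
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden connections
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output connections
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through the layers
    # and nudge every connection to reduce it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# The outputs should move toward the targets [0, 1, 1, 0].
print(np.round(out.ravel(), 2))
```

Real deepfake models work on the same principle, only with millions of units and far more data.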
Some researchers and companies are working to develop technology that can detect and block deepfakes. These include Adobe, Meta, and Intel, whose FakeCatcher tool is built for this purpose. Some of these tools are designed to verify the source of photos and videos before they are posted on a platform, while others focus on subtle physiological cues, such as the tiny pixel-level color changes caused by blood flow in a real face, to identify fakes.
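As an illustration of the physiological idea behind a tool like FakeCatcher, the Python sketch below checks whether a face's average green-channel brightness pulses at a plausible heart rate. This is not Intel's actual algorithm; the function name, frame rate, frequency band, and synthetic data are assumptions made purely for illustration.

```python
import numpy as np

def has_pulse_like_signal(green_means, fps=30.0, band=(0.7, 4.0)):
    """green_means: per-frame average green value over the face region.

    Real skin shows a faint periodic color change driven by blood flow;
    crude fakes often lack it. Returns True if most spectral energy sits
    in a plausible heart-rate band (~42-240 beats per minute).
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # remove constant skin tone

    power = np.abs(np.fft.rfft(signal)) ** 2     # power spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    out_band = (freqs > 0) & ~in_band

    # A genuine pulse concentrates spectral energy in the heart-rate band.
    return power[in_band].sum() > power[out_band].sum()

# Synthetic demo: a "real" trace with a 1.2 Hz (72 bpm) pulse vs. pure noise.
t = np.arange(0, 10, 1 / 30.0)
real_trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
fake_trace = np.random.normal(0, 0.2, t.size)
print(has_pulse_like_signal(real_trace))   # expected: True
print(has_pulse_like_signal(fake_trace))   # expected: False
```

Production detectors combine many such cues with learned models, but the underlying intuition is the same: real faces carry physiological signals that are hard to fake.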
Making a convincing deepfake still requires expensive software and substantial computing power. Most are created on high-end desktop computers or on cloud-based GPUs, and a polished result can take days or weeks to produce. However, a growing number of services now let users generate deepfakes on inexpensive hardware in a matter of minutes.
The potential for abuse of deepfakes is causing concern among politicians, security experts, and ordinary citizens alike. It is easy to imagine how a threat actor could use the technology to manipulate public opinion, sow disinformation, or lend false legitimacy to an international armed conflict. As the technology becomes more widely available, it is critical that security officials and leaders understand how to counter its effects.
…