Earlier this year, British TV program Channel 4 News found that almost 4,000 celebrities have become victims across the top five most-visited websites for adult “deepfake” content. Among them is Channel 4 News presenter Cathy Newman, who described the experience as a violation of her autonomy.
This statistic only considers famous people—what about businesses, politicians, and private individuals going about their day, only to become subjects of deepfake content?
In a nutshell, deepfakes are artificial media, typically videos, photos, or audio, that use AI technologies to alter or generate realistic depictions of people saying or doing things they never actually did. They’ve become so convincing that many people can no longer distinguish deepfakes from authentic videos.
For businesses, deepfakes pose security and fraud risks; for societies, they have the power to fuel misinformation, worsen political tensions, and undermine trust in media. Private individuals may also find their identities misused for malicious purposes like blackmail or phishing scams. All this is to say that deepfake technology is a crucial concern to anyone online—basically everyone.
70 Key Deepfake Statistics You Need to Know
As deepfake technology progresses, understanding its implications on a global scale is crucial to avoid falling victim to it. So, we’ve listed some of the most pressing deepfake statistics to help you stay in the know.
The proliferation of deepfakes
Deepfakes first appeared in 2017 and have become increasingly prevalent since then. Here are some statistics about how deepfakes have taken the world by storm.
- The global market for AI-generated deepfakes is expected to reach USD 79.1 million by the end of 2024 and grow to roughly USD 1.39 billion by 2033, a 37.6% compound annual growth rate (CAGR). (Dimension Market Research)
- There were 95,820 deepfake videos online in 2023, a staggering 550% increase since 2019. (Security Hero)
- The number of online deepfake videos roughly doubles every six months; for example, there were 49,081 videos in June 2020, rising to 85,047 by December of the same year. (Sensity)
- Malicious entities have begun using deepfakes to bypass verification checks, with such attempts up 3,000% so far in 2024. (Onfido)
- Deepfake videos roughly doubled (a 100% increase) from 2018 to 2019, reaching 14,678. (Regmedia)
- Compared to 2022, there were three times as many video deepfakes and eight times as many voice-based deepfakes online in 2023. (Reuters)
- Social media users shared around 500,000 voice and video deepfakes in 2023, many of which depicted politicians, with significant implications for global politics. (Reuters)
- The number of detected deepfakes worldwide increased tenfold from 2022 to 2023: 1,740% in North America, 1,530% in the Asia Pacific, 780% in Europe, 450% in Africa and the Middle East, and 410% in Latin America. (Sumsub)
- Humans detect deepfakes with only 57% accuracy, considerably less than leading detection models, which reach 84%. (PNAS)
- The entire process of creating a deepfaked photo or video can be as quick as 8 minutes. (NPR)
- “Face swaps” are among the most common deepfake categories, with a 704% increase from the first to the latter half of 2023. (Tripwire)
Fraud and threats
Despite their relatively recent invention, deepfakes have already caused much damage to people. Here’s a short preview of this technology’s disruptive capabilities.
- Deepfakes and other AI-powered manipulations are among the most prevalent types of identity fraud. (Sumsub)
- Email is the most common delivery method for deepfake phishing attacks. (Broadcom)
- In 2024, roughly 26% of people came across a deepfake scam online, with 9% falling victim to one. (McAfee)
- Surprisingly, 80% of Telegram channels contain deepfake content. (Human or AI)
- The prevalence of generative AI and deepfakes makes facial recognition crucial for identity verification. (Jumio)
- Unfortunately, a considerable number of people (77%) who fell victim to deepfake attacks lost money as a result. (McAfee)
- A third of deepfake victims lost over USD 1,000. Meanwhile, 7% of victims lost up to USD 15,000 to fraudsters. (McAfee)
What people think about deepfakes
While the usage of deepfakes has increased, many people are still unaware that such technology exists. The following are statistics about what regular people think about this technology.
- People can no longer ignore the problem of deepfakes; 58% of surveyed respondents agree that deepfakes are a growing concern necessitating urgent regulations. (iProov)
- Despite these concerns, 71% of people don’t know what deepfakes are; fewer than a third do. (iProov)
- That said, awareness is growing: 29% of people knew what deepfakes were in 2022, a considerable increase from 13% in 2019. (iProov)
- Another source found a 66% increase in concerns about deepfakes in 2024, especially as the US elections draw near. (McAfee)
- In fact, 66% of Americans have encountered deepfaked videos and images aiming to mislead them, with 15% seeing such content often. Only 33% say they rarely or never come across deepfakes. (Pew Research Center)
- 60% of consumers encountered deepfake content in 2023, while 22% are unsure of the legitimacy of what they consume. (Jumio)
- The same source found that 72% of consumers worry about deepfakes tricking them. As such, they want governments to regulate AI more effectively. (Jumio)
How deepfakes affect businesses
The harm deepfakes cause individuals is obvious, but did you know that they can be used against businesses as well? Here’s how businesses have responded to the threat of deepfakes:
- Over 10% of organizations have become targets for successful or attempted deepfake fraud, primarily due to their outdated cybersecurity protocols. (Business.com)
- How about clientele? New research found that 40% of companies and their consumers have fallen victim to deepfake attacks. (Straits Research)
- There were 500,000 deepfaked videos and voices in 2023. (DeepMedia)
- Despite the prevalence of corporate-targeted deepfake attacks, only 52% of organizations are confident in their ability to discern deepfakes of their CEO. (PingIdentity)
- Around 80% of consumers are willing to go through extensive identity verification protocols when using financial services if it improves security. (Jumio)
- Also, three-fourths (75%) will switch banks if their fraud protection measures aren’t enough to stop deepfakes. (Jumio)
- Considering the rise of deepfake attacks and other cybersecurity threats, there’s a declining confidence in bank protections. As such, 69% of consumers demand more robust cybersecurity protocols. (Jumio)
- Most consumers (72%) are constantly worried about deepfakes fooling them. (Jumio)
What business leaders think about deepfakes
While the threat of deepfakes against businesses is apparent, many business leaders have not yet responded appropriately. Here’s what they think about deepfakes:
- Despite the growing prevalence of deepfake threats, 31% of business executives remain adamant that deepfakes haven’t increased their fraud risk. (Business.com)
- Likewise, 37% believe deepfakes don’t put their businesses at risk—mainly because they think their companies aren’t big enough to become targets and their cybersecurity measures are adequate. (Business.com)
- A third of leaders (32%) aren’t confident their staff can recognize deepfake fraud attempts targeting their businesses. (Business.com)
- Over 50% even say their employees don’t have sufficient training to identify and address deepfake attacks. (Business.com)
- Unfamiliarity with deepfake technology isn’t limited to private consumers: roughly 25% of company leaders are minimally or entirely unfamiliar with deepfakes. (Business.com)
- At the same time, 80% lack adequate protocols to handle and defend against deepfake attacks, leaving them vulnerable. (Business.com)
- Moreover, 61% of executives haven’t established protocols or processes to address deepfake risks in their organizations. (Business.com)
- Only a few companies (13%) have protocols sufficient to defend against fraud, impersonation, and other attacks. (Business.com)
- Almost half (45%) of surveyed individuals say they’d reply to a message claiming to be from a friend or loved one, regardless of its authenticity. (McAfee)
- As for who appears in deepfake content, over 90% of deepfaked YouTube videos feature Western subjects. (Deeptrace)
The social impact of deepfakes
The rise of deepfake technology has sparked growing concern about its potential social impact. From spreading misinformation to undermining trust in digital media, deepfakes pose a unique threat to both individuals and society. Understanding these risks is essential to navigating the evolving landscape of digital content and communication.
- People are becoming more suspicious of deepfakes, thanks to global efforts to raise awareness of the technology. (PLOS)
- Likewise, 77% of Americans want more regulations and restrictions for misleading deepfakes. (Pew Research)
- About 61% of US adults also say average Americans can’t recognize altered photos and videos. (Pew Research)
- No thanks to deepfakes, 32% of adults have become more suspicious of social media than ever. (McAfee)
- Deepfakes can also distort people’s memories of events, although the effect is minimal. (PLOS)
- Similarly, altered videos and photos confuse 63% of Americans about facts surrounding current events. (Pew Research)
- Some 43% of surveyed respondents believe that the most worrying use of deepfakes is influencing elections. Meanwhile, 37% cited undermining trust in media. (McAfee)
- In fact, 23% of Americans say they have come across a piece of political content they later discovered to be a deepfake. (McAfee)
Deepfakes and pornography
Deepfakes have become a troubling tool in the realm of pornography, often used without consent to create explicit content. This misuse not only violates personal privacy but also raises ethical and legal concerns, highlighting the urgent need for stricter regulations and protective measures in digital spaces.
- Deepfake creators take less than 25 minutes and spend nothing to create a one-minute pornographic video of anyone—all it takes is one clear image. (Security Hero)
- Unsurprisingly, 96% of pornographic deepfakes online are non-consensual. (Deeptrace)
- Most (98%) of online deepfakes aren’t political or entertaining—they’re pornographic. (Security Hero)
- Most deepfake fraud cases (87.7%) occur in the crypto sector, with fintech (7.7%) and online gaming (1.6%) following far behind. (Sumsub)
- Almost everyone (94%) featured in pornographic deepfakes works in the entertainment industry, such as celebrities and influencers. (Security Hero)
- One-third of all deepfake tools allow users to create pornographic deepfake content. (Security Hero)
- All but 1% of pornographic deepfake targets are women. (Security Hero)
How good are we at detecting deepfakes?
Despite advancements in technology, detecting deepfakes remains a challenging task. While AI tools and algorithms are improving, deepfakes continue to evolve, making it harder to distinguish between real and manipulated content. This raises concerns about the reliability of detection methods and their ability to keep pace with rapid technological progress.
- While 57% of people can spot deepfake videos, 43% can’t differentiate between altered and authentic content. (Statista)
- Unfortunately, humans can only detect voice cloning or speech deepfakes 73% of the time. (PLOS)
- The human brain can unconsciously detect deepfakes 54% of the time. (Analytics Insight)
- Humans have a considerable margin of error. In a test, respondents identified 69% of real faces as fake. (ScienceDirect)
- Another source tested 280 participants’ ability to detect deepfakes and reported an average accuracy of 62%. However, individual figures range from 30% to 85%. (Oxford Academic)
- Further emphasizing the unreliability of human eyes in deepfake detection is that training only increases accuracy by 3.84% on average. (PLOS)
- Altered texts are difficult to spot, with a detection rate of only 57%. Meanwhile, deepfaked audio (74%) and video (82%) are much easier to discern from genuine content. (Cornell University)
- Nearly half of tested people (48.2%) can’t recognize a real or deepfaked photo of a person—slightly lower than a 50-50 random guess. (PNAS)
- The same study discovered that people found deepfaked faces to be 7.7% more trustworthy than real people. (PNAS)
- Only 27% of people can tell whether a friend or loved one behind a call is genuine or AI-generated. (McAfee)
- There was a significant 60% increase in the development of AI-powered deepfake tools in 2023. (Human or AI)
- The deepfake detection industry had a market value of USD 5.5 billion in 2023 and could grow to USD 15.7 billion by 2026, a 42% CAGR (see the quick check after this list). (Liminal)
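If you’re curious how a compound annual growth rate (CAGR) figure like the 42% above is derived, the short Python sketch below recomputes it from the start and end values quoted in this article. The helper function name is our own, and the inputs are simply the figures cited above, so treat it as an illustration rather than a reproduction of the analysts’ models.

```python
# Minimal sketch: sanity-checking the CAGR figures quoted in this article.
# Assumes the start/end values and years cited above; no external data is used.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth rate that
    turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Deepfake detection market (Liminal): USD 5.5B in 2023 to USD 15.7B in 2026.
print(f"Detection market CAGR: {cagr(5.5, 15.7, 2026 - 2023):.1%}")      # ~41.9%, i.e., the cited ~42%

# Deepfake AI market (Dimension Market Research): USD 79.1M in 2024 to ~USD 1.39B in 2033.
print(f"Deepfake AI market CAGR: {cagr(79.1, 1390.0, 2033 - 2024):.1%}")  # ~37.5%, close to the cited 37.6%
```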
Stay Aware of Deepfakes
As you can see, deepfakes have had an unprecedented impact on governments, businesses, individuals, and entire societies. As deepfake technology continues to evolve, the world should expect greater potential for misuse. Hence, it’s crucial to stay vigilant. Discerning authentic content from manipulated or AI-generated media has become vital to maintaining trust.
Stay ahead of the digital curve by enlisting the help of Spiralytics, a leading digital marketing agency in the Philippines. With our help, you can deploy content marketing campaigns leveraging cutting-edge technologies to drive results that take you closer to your business goals. Contact us to elevate your marketing operations today.