{"id":87764,"date":"2023-11-14T23:55:09","date_gmt":"2023-11-15T04:55:09","guid":{"rendered":"https:\/\/sciencesensei.com\/?p=87764"},"modified":"2023-11-30T13:56:13","modified_gmt":"2023-11-30T18:56:13","slug":"ai-deepfakes-that-will-make-you-question-reality","status":"publish","type":"post","link":"https:\/\/dev.sciencesensei.com\/ai-deepfakes-that-will-make-you-question-reality\/","title":{"rendered":"AI Deepfakes That Will Make You Question Reality"},"content":{"rendered":"

Artificial intelligence is reshaping our world, and one of the most striking recent developments is the explosion of deepfake technology. Leveraging sophisticated algorithms, individuals can manipulate photos and videos by seamlessly substituting one person’s face with another’s. While some instances seem relatively harmless, individuals with malicious intent can easily generate fake revenge porn, amplifying concerns about personal privacy and national security. From Kylie Jenner to the Queen of England, it seems no one is safe from the ramifications of AI deepfakes.<\/p>\n

\n
\n
\"Kylie
[Image via YouTube]<\/figcaption><\/figure>\n

Kylie Jenner’s TikTok Deepfake<\/h2>\n

TikTok has become a breeding ground for celebrity doppelgängers, with Kylie Jenner joining the ranks of stars discovering their online lookalikes. A TikTok<\/a> user known as @kjdrafts has taken the platform by storm, amassing over 4 million likes with just 13 videos showcasing an uncanny resemblance to the beauty mogul. While fans marveled at the likeness, skepticism arose as some users questioned whether the resemblance was too good to be true. Speculation about the use of the ‘Reface’ app, or even deepfake technology, surfaced, fueled by observations of glitches and limited facial expressions in @kjdrafts’ videos.<\/p>\n

Deepfakes are fabricated videos manipulated to depict individuals saying or doing things they never did, with techniques ranging from replacing an entire face with that of a victim or celebrity to replicating specific lip movements and facial expressions. In 2019, Deeptrace<\/a> reported that a shocking 96% of deepfakes online were nonconsensual and often pornographic, primarily targeting women, including celebrities. Signs hint that Kylie Jenner’s lookalike could be a deepfake, such as consistent hand positioning that avoids disrupting potential filters, glitches around facial features, and limited expressions, but the truth remains uncertain.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\n
\n
\"Morgan
[Image via Creative Bloq]<\/figcaption><\/figure>\n

This Morgan Freeman Isn’t Real<\/h2>\n

Even on YouTube, the line between reality and illusion is becoming increasingly blurred. A viral video features an eerily realistic deepfake of Morgan Freeman delivering a message urging viewers to question reality. Unlike previous attempts, which often carried a subtly uncanny feeling, this deepfake is exceptionally convincing, making it difficult for viewers to discern its artificial nature. The clip was originally shared by the Dutch deepfake YouTube channel Diep Nep<\/a>, with Bob de Jong credited for the concept and Boet Schouwink for the impeccable voice acting. The disturbingly real video resurfaced on Twitter, amassing over 6.5 million views. The widespread attention has sparked concerns about the implications of such lifelike deceptions and their potential use for malicious purposes in the future.<\/p>\n

As social media, particularly Twitter<\/a>, becomes abuzz with discussions about this exceptionally well-executed deepfake, it highlights the growing challenges associated with the advancement of AI technology. The unsettling realism of the Morgan Freeman deepfake raises questions about the need for increased vigilance and safeguards against the misuse of such technology, prompting a broader conversation about the ethical implications and potential threats posed by the rapidly evolving world of deepfakes.<\/p>\n

<\/p>\n

\"Tom
[Image via Cnet]<\/figcaption><\/figure>\n

This Tom Cruise Is Actually Miles Fisher<\/h2>\n
\n
\n
\n

A series of TikTok videos featuring Tom Cruise engaged in atypical activities surfaced, showcasing the actor playfully goofing around in a high-end men’s clothing store, demonstrating a coin trick, and even singing a snippet of Dave Matthews Band’s<\/a> “Crash Into Me.” However, the catch was that this wasn’t the real Tom Cruise but an artificial intelligence-generated doppelgänger, created by visual and AI effects artist Chris Umé and actor Miles Fisher. The deepfake videos gained immense popularity on TikTok, amassing tens of millions of views and inspiring Umé to co-found a company called Metaphysic<\/a> in June 2021.<\/p>\n

Metaphysic utilizes deepfake technology to craft innovative advertisements and restore old films, pushing the boundaries of what was previously achievable in media production. Despite concerns about the nefarious uses of deepfakes, Umé and his co-founders believe in the technology’s potential for creativity and fun, envisioning applications like making older entertainers appear younger or generating video doubles of famous personalities for commercials. However, the ethical implications of such technology are not lost on them, prompting efforts to develop guidelines and guardrails to ensure responsible use. The company works directly with clients, requires consent for commercial projects, and remains vigilant about ethical considerations amid the evolving landscape of deepfake technology.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\n
\n
\"Zelenskyy
[Image via UVA Today]<\/figcaption><\/figure>\n

Disturbing Volodymyr Zelenskyy Identity Theft<\/h2>\n

A manipulated video featuring a deepfake of Ukrainian<\/a> President Volodymyr Zelenskyy surfaced on social media. It even made its way onto a Ukrainian news website, planted by hackers before being debunked and removed. The deepfake, lasting only a minute, depicted Zelenskyy supposedly urging his soldiers to surrender in the face of the conflict with Russia. It remains unclear who created the deepfake. For weeks, Ukrainian officials had been warning that Russia might deploy manipulated videos as part of its information warfare. Social media platforms such as Facebook, YouTube, and Twitter promptly removed the video for policy violations, but it unfortunately gained traction on Russian<\/a> social media.<\/p>\n

Despite the less-than-sophisticated quality of the deepfake, experts emphasize its potential danger. The Ukrainian government’s preemptive warnings and Zelenskyy’s swift denial helped stop its spread in the West, but concerns remain about the impact on global perceptions of future videos featuring the Ukrainian president. The political deepfake underscores broader challenges in the information ecosystem, casting doubt on the authenticity of content and raising questions about the potential manipulation of videos in times of crisis. As the complexity of the situation unfolds, the need for vigilance against deepfake threats and their implications becomes increasingly apparent.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\"Nancy
[Image via CBS News]<\/figcaption><\/figure>\n

Nancy Pelosi Political Deepfake<\/h2>\n

In a 2019 social media uproar, a manipulated video featuring House Speaker Nancy Pelosi circulated widely, amassing over 2.5 million views on Facebook. This incident shed light on the rising concern surrounding deepfakes, a sophisticated technology that enables the alteration of videos and images to create convincing but false content. Computer science<\/a> professor Hany Farid of the University of California, Berkeley noted that the Pelosi video was a relatively simple example, emphasizing the broader threat of using such technology to fabricate statements or actions that never occurred. U.S. intelligence officials have issued warnings<\/a> about the potential misuse of deepfakes, expressing concerns about their impact on political campaigns and the risk of spreading false information with significant consequences, especially in the context of upcoming elections.<\/p>\n

The repercussions of manipulated media extend beyond sophisticated deepfakes, as demonstrated by President Trump sharing an edited video of Pelosi aired on Fox Business Network. This video, selectively edited to highlight verbal missteps from Pelosi’s press conference, underscores the power of manipulated content in shaping public perception. Social media platforms like Facebook and YouTube grapple with the challenge of addressing such content, with Facebook reducing the distribution of the Pelosi video and YouTube opting to remove it entirely. The evolving landscape of deepfakes raises critical questions about the need for effective policies to mitigate the potential harm caused by deceptive digital content.<\/p>\n

<\/p>\n

\n
\n
\"Mark
[Image via DailyMail]<\/figcaption><\/figure>\n

Facebook Isn’t Safe from Deepfake Fraud<\/h2>\n

Artists Bill Posters and Daniel Howe worked with advertising company Canny<\/a> to craft a deepfake video featuring Facebook founder Mark Zuckerberg. The manipulated video, uploaded to Instagram, shows Zuckerberg delivering an ominous speech about Facebook’s influence, using broadcast chyrons to mimic a news segment. CBS later requested its removal, citing an “unauthorized use of the CBSN trademark.” The deepfake is part of the Spectre exhibition at the Sheffield Doc Fest in the UK, created by Posters and Howe in conjunction with Canny, which showcases similar synthetic videos featuring figures like Kim Kardashian and Donald Trump. Instagram, owned by Facebook, has pledged to treat the content like misinformation, relying on third-party fact-checkers to determine its authenticity.<\/p>\n

This Zuckerberg deepfake is a product of Canny’s proprietary AI algorithm<\/a>, trained on a short segment of the original video and additional footage of a voice actor. Despite some noticeable differences in the voice, the manipulated video convincingly replicates Zuckerberg’s facial expressions and movements. As concerns about deepfakes and altered content on social media platforms grow, Facebook’s response is scrutinized, especially given its decision to de-prioritize a manipulated video of Nancy Pelosi instead of removing it. Canny views this project not only as a technological showcase but also as an opportunity to prompt discussions about the current and future implications of AI in shaping our digital landscape.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\n
\n
\"Barack
[Image via BBC]<\/figcaption><\/figure>\n

Jordan Peele Creates Deepfake of Former President Obama<\/h2>\n

In 2018, filmmaker Jordan Peele collaborated with BuzzFeed<\/a> CEO Jonah Peretti to unleash a deepfake video featuring a convincingly simulated Barack Obama delivering a public service announcement. Peele, known for his thought-provoking film “Get Out,” utilized deepfake technology to emphasize the importance of skepticism in the digital age. The PSA conveys a warning, in Obama’s voice, about the consequences of blindly believing online content, raising critical questions about the potential dystopian outcomes if misinformation continues to thrive. Peele’s skillful impersonation of Obama, achieved through machine learning and 56 hours of training, highlights the unsettling ease with which deepfakes can manipulate public figures, pushing the boundaries of reality distortion.<\/p>\n

While the video reveals the serious implications of deepfake technology in an era dominated by fake news<\/a>, it also hints at the intriguing yet concerning realm of possibilities this technology presents. Peele’s contribution, although raising awareness, may inadvertently highlight the allure of manipulating reality for entertainment or even mischief. As deepfakes transcend their initial association with celebrity porn, the broader implications of this technology on political discourse and public perception become increasingly evident. Peele and Peretti’s timely message addresses the potential for adversaries to exploit these tools, emphasizing the urgent need for media literacy and vigilance in an age where appearances can be deceiving.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

\"Queen
[Image via LatestLY]<\/figcaption><\/figure>\n

Improper Deepfake of Queen Elizabeth<\/h2>\n

In a daring move that ignited both controversy and discussion, Channel 4 chose to air a deepfake video featuring a digitally altered version of Queen Elizabeth II<\/a> as an alternative to her traditional Christmas Day broadcast. The five-minute video, voiced by actress Debra Stephenson, showcases the deepfake Queen reflecting on the year’s events, including Prince Harry and Meghan Markle’s departure as senior royals and Prince Andrew’s connection to financier Jeffrey Epstein. The unexpected twist comes as the deepfake Queen surprises viewers with a dance routine borrowed from the popular social media platform TikTok.<\/p>\n

Channel 4 defends its decision, asserting that the broadcast serves as a “stark warning” about the looming threat of fake news in the digital era<\/a>. Director of Programmes Ian Katz describes the video as a “powerful reminder that we can no longer trust our own eyes.” However, some experts caution against potential misconceptions, suggesting that the broadcast may inadvertently exaggerate the prevalence of deepfake technology. While acknowledging the importance of exposing the public to deepfakes, technology policy researcher Areeq Chowdhury argues that the primary concern lies in the misuse of deepfakes, particularly in non-consensual deepfake pornography, rather than widespread manipulation of information. As society grapples with the increasing role of synthetic media, deepfake expert Henry Ajder encourages responsible practices such as disclaimers and watermarks to guide ethical use in this evolving landscape.<\/p>\n

\"Scarlett
[Image via Cosmopolitan]<\/figcaption><\/figure>\n

NSFW Scarlett Johansson Deepfake<\/h2>\n
\n
\n
\n

In a candid article for the Washington Post, Scarlett Johansson<\/a> delves into the complex and disconcerting realm of deepfake adult content, sharing her personal encounters with AI-generated explicit content featuring her likeness. Despite acknowledging the futility of combating this disturbing trend within the vast and lawless expanse of the internet, Johansson emphasizes the importance of individuals standing up for their right to control their image. The actress expresses both repulsion and resignation, describing her own unsuccessful attempts to counter the unauthorized use of her image in AI-generated porn. Johansson warns that deepfakes represent the inevitable evolution of hacking, extending beyond the realm of celebrities and underscoring the lawlessness of the online landscape.<\/p>\n

While Johansson acknowledges the daunting challenge of safeguarding oneself against internet depravity, she stresses the significance of the fight for image rights<\/a> and the potential for legal recourse. Despite the inevitability of deepfakes becoming more prevalent, Johansson’s narrative unveils the unsettling reality of a technology that, even in its infancy, poses significant ethical and legal concerns. In a chilling revelation, she notes that someone has already created a robot bearing her likeness, highlighting the increasingly blurred boundaries between reality and manipulated digital content.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\"Emma
[Image via MSN]<\/figcaption><\/figure>\n

Lewd Deepfake of Emma Watson<\/h2>\n

English actress Emma Watson<\/a> found herself unwittingly embroiled in a scandal involving sexually suggestive deepfake advertisements on Meta platforms, including Facebook, Messenger, and Instagram. The controversial ads, promoting the Facemega app, claimed to offer ‘DeepFake FaceSwap’ capabilities, illustrating the growing misuse of deepfake technology. These manipulated visuals, created through artificial intelligence, sparked outrage and discussions surrounding privacy and consent, prompting Meta to swiftly remove over 230 offending ads from its platform.<\/p>\n

This scandal sheds light on a broader trend as synthetic media<\/a>, particularly deepfakes, infiltrates various facets of the online landscape. While the scandal raised concerns about the potential for harassment and manipulation, industry experts predict a future where advertising relies heavily on synthetic media technology. Despite its nefarious applications, there are also instances where deepfakes serve positive purposes, as demonstrated in campaigns promoting social causes and values. As technology advances, the use of deepfakes is expected to become more commonplace, prompting a reevaluation of ethical standards and regulations in the evolving digital landscape.<\/p>\n

<\/p>\n

\n
\n
\n
\n
\n
\"Joe
[Image via TheVerge]<\/figcaption><\/figure>\n

Joe Rogan’s Deepfake Sparks Controversy<\/h2>\n

In the fast-paced world of TikTok, a controversial video ad featuring Joe Rogan<\/a>, renowned host of The Joe Rogan Experience podcast, has stirred up a storm. The clip promotes a supposed “libido booster for men” called Alpha Grind, with Rogan providing specific instructions on where to find the product on Amazon. However, this endorsement was never uttered by Rogan on his podcast. Instead, it appears to be a sophisticated deepfake, strategically crafted to boost product sales. TikTok swiftly removed the video, posted and promoted by user @mikesmithtrainer, citing a violation of their harmful misinformation policy, subsequently banning the account.<\/p>\n

The suspected deepfake not only gained traction on TikTok but also went viral on Twitter, amassing over 5 million views before being removed due to a reported copyright violation. Rogan’s podcast guest, Andrew D. Huberman<\/a>, clarified that the conversation in the video never occurred, revealing that certain segments were taken from the actual podcast while others were manipulated using AI deepfake technology. The episode illustrates the ongoing challenges of policing manipulated content in the digital realm, prompting a renewed focus on the potential misuse of artificial intelligence. TikTok, having banned deepfake videos in 2020, faces questions about the enforcement of its policies in the wake of this controversial content. As discussions surrounding the ethical implications of deepfake technology persist, the incident underscores the need for vigilance and scrutiny in the evolving landscape of online content.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\"Tom
[Image via Fox]<\/figcaption><\/figure>\n

Tom Hanks Sounds Alarm on AI Misuse in Dental Plan Ad<\/h2>\n

In October of 2023, Hollywood icon Tom Hanks<\/a> took to Instagram to warn fans about a false advertisement featuring an artificial intelligence version of himself promoting an unspecified dental plan. The Oscar-winning actor shared an image showcasing an AI likeness of his younger self, raising concerns about the unauthorized use of his image for promotional purposes. Despite Hanks sounding the alarm, CNN could not independently verify the content of the dental plan ad, prompting inquiries to Hanks’ representatives for clarification.<\/p>\n

Tom Hanks, known for his influential presence in the film industry, has been at the forefront of discussions regarding the intersection of artificial intelligence and Hollywood. As the industry grapples with the implications of AI-generated virtual actors, Hanks recently expressed his reservations on “The Adam Buxton Podcast<\/a>.” The actor pondered the possibility of AI allowing him to appear in movies even after his demise, emphasizing the need for actors to protect their likenesses as intellectual property. While acknowledging the potential limitations of AI performances, Hanks questioned whether audiences would discern or even care about the difference, shedding light on the evolving landscape where technology and entertainment converge.<\/p>\n

\n
\n
\"MrBeast\"
[Image via Unilad]<\/figcaption><\/figure>\n

MrBeast Deepfake Scam<\/h2>\n

In the fast-paced world of social media, YouTube sensation MrBeast<\/a>, aka Jimmy Donaldson, recently raised concerns about the rising tide of AI deepfakes infiltrating advertising platforms. Donaldson took to X, formerly known as Twitter, to question the readiness of social media platforms to tackle deepfake scams after a TikTok advertisement featured a convincing deepfake of him promoting a scam giveaway of iPhones for $2. The fake promotion underscores the growing sophistication of deepfake technology, making it challenging to discern manipulated content from reality. Although the ad has been removed from TikTok, the video reveals the deceptive realism that AI can now achieve, prompting a broader discussion about the potential misuse of deepfake technology on popular social platforms.<\/p>\n

Donaldson joins a growing list of public figures who have expressed concern about their likenesses being exploited without permission. This incident amplifies the urgency of addressing the impact of deepfake videos<\/a>, not only on individual reputations but also on the broader landscape of digital media and advertising. As social media platforms grapple with the evolving threat of deepfakes, questions arise about the effectiveness of current detection measures and the need for robust policies to prevent the spread of misleading content. The intersection of technology, celebrity identity, and the potential for scams in the digital realm brings to light the challenges that lie ahead in ensuring the integrity of online spaces.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\n
\n
\n
\"Robert
[Image via TheRecentTimes]<\/figcaption><\/figure>\n

Inside the AI World of Robert Pattinson<\/h2>\n

If you’ve been scrolling through TikTok lately, you might have stumbled upon the unexpected sight of 36-year-old movie star Robert Pattinson<\/a>, not in his usual Hollywood roles but cast as an aspiring vlogger on the account @unreal_robert. With a whopping 1.1 million followers since May 2022, this TikTok profile hosts a collection of surreal deepfake videos featuring the actor in bizarre scenarios, from dancing to a Sea Shanty Medley to performing amateur magic with a teddy bear and a mini Dutch oven. Despite the obvious disparities like mismatched lighting and pacing, the deepfake Pattinson’s uncanny smile and peculiar antics have garnered millions of views.<\/p>\n

The account has not escaped the attention of Pattinson, who, in a January 19 interview with the Evening Standard<\/a>, admitted to finding it “terrifying” and recounted instances where close friends mistook the deepfake for reality. As TikTok continues to feature such deepfake parody accounts, like @deeptomcruise with 5 million followers, questions about the potential consequences and public understanding of AI and deepfakes persist. In the evolving landscape of digital manipulation, @unreal_robert stands as both a source of confusion for some viewers and a testament to the unsettling advancements in deepfake technology, prompting Pattinson himself to ponder the future implications for his career.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\n
\n
\"Kim
[Image via Digital Trends]<\/figcaption><\/figure>\n

Kim Kardashian Wins Against Deepfake Issue<\/h2>\n

Following the viral deepfake video featuring Mark Zuckerberg, the same group created a satirical deepfake of Kim Kardashian, which has since been removed from YouTube. Unlike the Zuckerberg incident, where the conversation revolved around platform definitions of satire, the Kardashian video was taken down through a copyright claim filed by Condé Nast<\/a>, the original creator of the footage used in the deepfake. This raises a crucial question about the power of copyright holders to swiftly remove deepfakes, especially those created for political commentary.<\/p>\n

The Kardashian deepfake delves into the influence social media companies hold over their users. While the Zuckerberg video remained accessible on Facebook and Instagram, the Kardashian deepfake is absent from YouTube due to Condé Nast’s Content ID claim<\/a>. The use of a copyright claim in this instance prompts considerations about whether copyright holders should have the authority to remove deepfakes created for political statements. Legal experts suggest that the transformative nature of the deepfake, which uses only a fraction of the original video and serves as a commentary on social media power dynamics, might constitute fair use, challenging the prevailing use of copyright claims as a remedy for such issues. The case reflects the broader debate about the efficacy of copyright claims in addressing the nuanced challenges posed by political or social commentary deepfakes.<\/p>\n<\/div>\n<\/div>\n<\/div>\n

<\/p>\n

\"Boris
[Image via BBC]<\/figcaption><\/figure>\n

UK Prime Minister Boris Johnson Deepfake Election Meddling<\/h2>\n

In the midst of the UK’s polarizing 2019 general election, the political landscape takes an unexpected turn with the emergence of deepfake videos featuring Prime Minister Boris Johnson and Labour Party<\/a> leader Jeremy Corbyn. Digital artist Bill Posters, known for his previous deepfake creations involving figures like Mark Zuckerberg and various celebrities, collaborates with Future Advocacy<\/a> to release videos in which the political rivals appear to endorse each other. Despite the realistic nature of these deepfakes, the creators emphasize their intent to raise awareness about the dangers of misinformation and deepfake technology.<\/p>\n

Posters, who advocates for stricter regulations on online content, argues that the recent ban on political advertising by Twitter indicates the need for similar action from other platforms. However, experts warn against potential disadvantages, emphasizing that legislative attempts to control deepfakes could impede free speech online. The controversy surrounding these deepfakes unfolds against the backdrop of a broader debate about the real-world impact of such manipulated content, with studies revealing that the primary victims of malicious deepfakes are women in non-consensual porn—a stark contrast to the attention-grabbing but hypothetical concerns of political deepfakes. Despite criticism, Posters maintains that his creations serve a larger purpose by challenging public understanding of how personal data is wielded by powerful technologies and urging lawmakers to establish comprehensive privacy safeguards.<\/p>\n

\"Taylor
[Image via Radii China]<\/figcaption><\/figure>\n

Deepfake Taylor Swift Speaks Mandarin<\/h2>\n

In a set of AI-generated clips, Taylor Swift appears to captivate Chinese audiences by effortlessly speaking fluent Mandarin. The clips, crafted with technology from Chinese startup HeyGen<\/a>, depict Swift engaging in a talk<\/a> show conversation about her recent travels and musical inspirations—all while flawlessly syncing her Mandarin speech with her lip movements. The video quickly went viral, accumulating millions of views on social media platforms and prompting widespread discussion about the potential ramifications of AI dubbing technology.<\/p>\n

While many Chinese citizens marveled at the realism of the deepfake, concerns about its misuse surfaced. Some expressed worry about the technology being employed for deceptive purposes, such as creating convincing fake news. The ability of AI to simulate both voice and mouth movements raised fears about the ease with which people might be misled. Despite these apprehensions, many remain optimistic, suggesting creative applications such as translating and dubbing entire television series. The video captures the ongoing debate around the ethical use of deepfake technology and the challenges of regulation in the face of its rapidly advancing capabilities. In China, laws regarding deepfakes are already in effect: recent regulations mandate the labeling of AI-altered content and require consent from the individuals depicted, reflecting efforts to address concerns about misinformation and privacy. However, the enforcement of these rules remains an ongoing challenge.<\/p>\n

\"David
[Image via CampaignLive]<\/figcaption><\/figure>\n

David Beckham Uses Deepfake Technology for Good<\/h2>\n

In a global campaign, a deepfake David Beckham lends his voice to the fight against malaria, delivering a multilingual appeal in nine languages using controversial deepfake voice technology. The 55-second spot by the charity Malaria No More, titled “Malaria must die, so millions can live,” skillfully employs video synthesis technology from UK company Synthesia<\/a> to make Beckham’s appeal appear seamlessly multilingual. While the campaign aims to raise awareness for the world’s first voice petition against malaria ahead of the Global Fund Replenishment Conference in October, Synthesia’s deepfake technology has raised concerns about its potential misuse, with fears that it could be employed to doctor videos of politicians or newsreaders for fraudulent purposes.<\/p>\n

Beckham, a founding member of Malaria No More’s UK leadership council and a Unicef<\/a> goodwill ambassador, speaks passionately in the campaign, representing diverse voices from around the globe, including malaria survivors and doctors fighting the disease. Despite the innovative use of artificial intelligence in video synthesis, the technology’s potential dark side echoes worries expressed by politicians about the threat deepfakes pose to democracy. The campaign, created by R\/GA London, encourages people to add their voices to the petition, emphasizing the power of voice as a medium to draw attention to one of the world’s oldest and deadliest diseases.<\/p>\n

 <\/p>\n

Where Do We Find This Stuff? Here Are Our Sources:<\/strong><\/p>\n

Kylie Jenner Deepfake: https:\/\/www.glamourmagazine.co.uk\/article\/kylie-jenner-tiktok-lookalike<\/p>\n

Morgan Freeman Deepfake: https:\/\/www.creativebloq.com\/news\/morgan-freeman-deepfake<\/p>\n

Tom Cruise Deepfake: https:\/\/www.cnn.com\/2021\/08\/06\/tech\/tom-cruise-deepfake-tiktok-company\/index.html<\/p>\n

Volodymyr Zelenskyy Deepfake: https:\/\/www.npr.org\/2022\/03\/16\/1087062648\/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia<\/p>\n

Nancy Pelosi Deepfake: https:\/\/www.cbsnews.com\/news\/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25\/<\/p>\n

Mark Zuckerberg Deepfake: https:\/\/www.vice.com\/en\/article\/ywyxex\/deepfake-of-mark-zuckerberg-facebook-fake-video-policy<\/p>\n

Former President Obama Deepfake: https:\/\/www.vox.com\/2018\/4\/18\/17252410\/jordan-peele-obama-deepfake-buzzfeed<\/p>\n

Queen Elizabeth Deepfake: https:\/\/www.theguardian.com\/technology\/2020\/dec\/24\/channel-4-under-fire-for-deepfake-queen-christmas-message<\/p>\n

Scarlett Johansson Deepfake: https:\/\/www.vulture.com\/2018\/12\/scarlett-johansson-ruminates-on-deepfake-porn-of-her-image.html<\/p>\n

Emma Watson Deepfake: https:\/\/www.thedrum.com\/news\/2023\/03\/08\/after-emma-watson-deepfake-ad-scandal-experts-share-risks-and-rewards-synthetic<\/p>\n

Joe Rogan Deepfake: https:\/\/mashable.com\/article\/joe-rogan-tiktok-deepfake-ad<\/p>\n

Tom Hanks Deepfake: https:\/\/www.kcra.com\/article\/tom-hanks-ai-dental-video-ad\/45415149<\/p>\n

MrBeast Deepfake: https:\/\/www.nbcnews.com\/tech\/mrbeast-ai-tiktok-ad-deepfake-rcna118596<\/p>\n

Robert Pattinson Deepfake: https:\/\/www.insider.com\/tiktok-robert-pattinson-dancing-deep-fakes-people-believed-2023-1<\/p>\n

Kim Kardashian Deepfake: https:\/\/www.vice.com\/en\/article\/j5wngd\/kim-kardashian-deepfake-mark-zuckerberg-facebook-youtube<\/p>\n

Boris Johnson Deepfake: https:\/\/www.vice.com\/en\/article\/8xwjkp\/deepfake-of-boris-johnson-wants-to-warn-you-about-deepfakes<\/p>\n

Taylor Swift Deepfake: https:\/\/radii.co\/article\/taylor-swift-deepfake-video<\/p>\n

David Beckham Deepfake: https:\/\/www.campaignlive.com\/article\/deepfake-voice-tech-used-good-david-beckham-malaria-campaign\/1581378<\/p>\n\n","protected":false},"excerpt":{"rendered":"

Artificial intelligence is reshaping our world, prompting questions about the nature of these transformations. One…<\/p>\n","protected":false},"author":58,"featured_media":87766,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-87764","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general"],"lang":"en","translations":{"en":87764},"pll_sync_post":[],"_links":{"self":[{"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/posts\/87764","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/users\/58"}],"replies":[{"embeddable":true,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/comments?post=87764"}],"version-history":[{"count":13,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/posts\/87764\/revisions"}],"predecessor-version":[{"id":88374,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/posts\/87764\/revisions\/88374"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/media\/87766"}],"wp:attachment":[{"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/media?parent=87764"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/categories?post=87764"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dev.sciencesensei.com\/wp-json\/wp\/v2\/tags?post=87764"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}