
How adversarial AI creates shallow trust in a deepfake world




While 87% of Americans hold companies responsible for digital privacy, only 34% trust them to use AI effectively to protect against fraud, a significant trust gap. Despite 51% of companies deploying AI for cybersecurity and fraud management, only 43% of customers globally believe companies are getting it right. Companies urgently need to bridge this trust gap and ensure that their AI-driven security measures inspire confidence. Deepfakes are widening the gap.

Growing trust gap

The growing trust gap is everywhere, from customers’ purchasing relationships with companies they have trusted for years to the elections being held in seven of the world’s ten largest countries. Telesign’s 2024 Trust Index provides new insights into the widening gap between customers and the companies they buy from and, on a broader scale, into national elections.

Deepfakes deplete trust in brands and elections

Deepfakes and disinformation drive a wedge of distrust between companies, the customers they serve and citizens participating in this year’s elections.

“Once you’ve been fooled by a deepfake, you may no longer believe what you see online. And when people start to doubt everything because they can no longer distinguish fiction from fact, democracy itself is threatened,” says Andy Parsons, senior director of the Adobe Content Authenticity Initiative.




The widespread distribution of deepfakes on social media platforms filled with bot-based, often automated fake accounts makes it even more challenging to distinguish between fake and real content. This technique has become commonplace worldwide. One example is from September 2020, when analytics firm Graphika and Facebook blocked a Chinese network of accounts dubbed “Operation Naval Gazing,” which posted content on geopolitical issues, including US-China relations in the context of the South China Sea conflict.

Nation-states invest heavily in disinformation campaigns to influence the elections of countries with which they are in conflict, often with the aim of destabilizing democracy or creating social unrest. The U.S. Intelligence Community’s 2024 Annual Threat Assessment states: “Russia is using AI to create deepfakes and is developing the capability to fool experts. Individuals in war zones and unstable political environments can serve as some of the most valuable targets for such deepfake malign influence.”

Attackers are relentless in weaponizing AI, building arsenals of deepfake technologies that capitalize on the rapid advances in generative adversarial networks (GANs). Their craftsmanship has an immediate impact on voters worldwide.

According to Telesign’s Index, 72% of voters worldwide are concerned that AI-generated content featuring deepfake video and voice cloning will undermine this year’s elections. 81% of Americans are specifically concerned about the impact deepfakes and related GAN-generated content will have on elections. Americans are also among the most aware of AI-generated political ads or messages: 45% report seeing an AI-generated political ad or message in the past year, and 17% have seen one in the past week.

Trusting AI and machine learning

One promising sign from Telesign’s Index is that despite fears of adversarial AI-based attacks using deepfakes and voice clones to derail elections, a majority (71%) of Americans would trust election results more if AI and machine learning (ML) were used to prevent cyberattacks and fraud.

How GANs deliver increasingly realistic content

GANs are the technical engines powering the growing prevalence of deepfakes. Everyone from rogue attackers experimenting with the technology to advanced nation-states, including Russia, is doubling down on GANs to create videos and voice clones that appear authentic.

The greater the authenticity of deepfake content, the greater the impact on customer and voter trust. Because they are so difficult to detect, GAN-generated deepfakes are widely used in phishing attacks, identity theft, and social engineering schemes. The New York Times offers a quiz to see if readers can identify which of ten images are real or AI-generated, further underscoring how quickly GAN-generated deepfakes are improving.

A GAN pairs two competing neural networks: the first serves as a generator and the second as a discriminator. The generator continuously creates fake, synthetic data, including images, video, or audio, while the discriminator evaluates how real the generated content looks.

The goal is for the generator to continuously increase the quality and realism of the image or data to deceive the discriminator. The sophisticated nature of GANs makes it possible to create deepfakes that are virtually indistinguishable from authentic content, significantly eroding trust. These AI-generated counterfeits can be used to quickly spread disinformation through social media and fake accounts, eroding trust in brands and democratic processes.

(Figure: how GANs work. Source: CEPS Task Force Report, May 2021.)
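To make that adversarial loop concrete, below is a minimal training-step sketch, assuming PyTorch; the fully connected layer sizes, learning rates, and flattened 28x28 image shape are illustrative assumptions, not any production deepfake pipeline.

```python
# Minimal GAN training-step sketch in PyTorch (illustrative only).
# Assumes 28x28 grayscale images flattened to vectors in [-1, 1];
# all hyperparameters are arbitrary choices for the example.
import torch
import torch.nn as nn

LATENT_DIM = 100
IMG_DIM = 28 * 28

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = G(noise).detach()  # detach: don't update G in this step
    d_loss = (loss_fn(D(real_images), real_labels)
              + loss_fn(D(fake_images), fake_labels))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(D(G(noise)), real_labels)  # G wants D to say "real"
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Each step pushes the discriminator to separate real from generated samples while the generator learns to defeat it, which is exactly the arms race that makes mature GAN output so hard to distinguish from authentic content.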

Protecting trust in a deepfake world

“The rise of AI over the past year has brought the importance of trust in the digital world to the forefront,” said Christophe Van de Weyer, CEO of Telesign. “As AI continues to evolve and become more accessible, it is critical that we prioritize fraud-protection solutions powered by AI to protect the integrity of personal and institutional data. AI is the best defense against AI-enabled fraud attacks. At Telesign, we are committed to using AI and ML technologies to combat digital fraud, ensuring a more secure and reliable digital environment for everyone.”

By harnessing the intelligence of more than 2,200 digital identity signals, Telesign’s AI models enable companies to transact with their customers with greater trust, realizing the growth potential that today’s digital economies represent. Telesign helps its customers block more than 30 million fraudulent messages per month and protects more than 1 billion accounts from takeover every year. Telesign’s Verify API uses AI and ML to add contextual intelligence and consolidate omnichannel authentication into a single API, streamlining transactions and reducing fraud risk.
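As an illustration of the pattern described here, and not Telesign’s actual interface, the sketch below shows what consolidating omnichannel verification behind a single API call might look like; the endpoint, request fields, and risk-score semantics are all hypothetical.

```python
# Hypothetical omnichannel verification sketch. The URL, JSON fields,
# and risk-score threshold are invented for illustration; consult the
# real Verify API documentation for the actual interface.
import requests

API_URL = "https://api.example.com/v1/verify"  # hypothetical endpoint

def verify_user(phone_number: str, channel: str = "sms") -> bool:
    """Request a one-time code over the given channel; report success."""
    resp = requests.post(
        API_URL,
        json={"phone_number": phone_number, "channel": channel},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    # Contextual identity signals could gate the flow: a high-risk
    # number gets stepped up to a stronger channel, or blocked.
    if body.get("risk_score", 0) > 0.8:
        if channel == "sms":
            return verify_user(phone_number, channel="voice")
        return False
    return body.get("status") == "code_sent"

if __name__ == "__main__":
    print(verify_user("+15555550100"))
```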

The Telesign Index also shows there is cause for concern when it comes to getting basic cyber hygiene in order. Its research shows that 99% of successful digital break-ins begin with accounts that have multi-factor authentication (MFA) disabled. CISA provides a useful fact sheet on MFA that explains why it is important and how it works.

A well-executed MFA plan requires the user to present a combination of something they know, something they have, or a biometric factor. One of the main reasons so many Snowflake customers have been hacked is that MFA is not enabled by default. Microsoft will begin enforcing MFA on Azure in July, and GitHub has required users to enable MFA since March 2023.
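For a concrete sense of the “something they have” factor, here is a minimal time-based one-time password (TOTP, RFC 6238) sketch, assuming the third-party pyotp library; the account name, issuer, and clock-drift window are illustrative choices.

```python
# Minimal TOTP sketch of the "something you have" MFA factor,
# using the third-party pyotp library (pip install pyotp).
import pyotp

# Generated once at enrollment and shared with the user's authenticator
# app (e.g. via QR code); stored server-side next to the password hash.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="user@example.com",
                            issuer_name="ExampleCorp"))

# At login, after the password check (something they know), verify the
# six-digit code from the user's device (something they have).
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code, valid_window=1):  # tolerate 1 step of drift
    print("MFA check passed")
else:
    print("MFA check failed")
```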

Identity-based breaches quickly erode customer trust. The lack of a solid identity and access management (IAM) hygiene plan almost always leads to orphaned, dormant accounts that often remain active for years. Attackers are constantly honing their skills to find new ways to identify and exploit dormant accounts.
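A simple hygiene control that follows from this is a periodic dormant-account audit. The sketch below is a hypothetical illustration; the Account fields, the 90-day threshold, and the sample data are all invented for the example.

```python
# Hypothetical IAM hygiene sketch: flag dormant accounts for review or
# disabling. Assumes a directory export with a last-login timestamp;
# field names and the 90-day threshold are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Account:
    username: str
    last_login: datetime | None  # None = never logged in
    enabled: bool

DORMANCY_THRESHOLD = timedelta(days=90)

def find_dormant(accounts: list[Account]) -> list[Account]:
    """Return enabled accounts with no login inside the threshold."""
    cutoff = datetime.now(timezone.utc) - DORMANCY_THRESHOLD
    return [
        a for a in accounts
        if a.enabled and (a.last_login is None or a.last_login < cutoff)
    ]

accounts = [
    Account("active.user", datetime.now(timezone.utc), True),
    Account("former.contractor",
            datetime(2023, 1, 5, tzinfo=timezone.utc), True),
    Account("never.used", None, True),
]
for acct in find_dormant(accounts):
    print(f"Review or disable: {acct.username}")
```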

Recent research from Ivanti found that 45% of companies believe former employees and contractors may still have active access to corporate systems and files. “Companies and large organizations often fail to consider the vast ecosystem of third-party apps, platforms and services that provide access long after an employee or contractor has been dismissed,” said Dr. Srinivas Mukkamala, Chief Product Officer at Ivanti, in an interview with VentureBeat earlier this year. “There are a shockingly large number of security professionals – and even leadership-level managers – who still have access to the systems and data of former employers.”

Conclusion – Maintaining trust in a deepfake world

Telesign’s Trust Index quantifies current trust deficits and where they are headed. One of the most pragmatic findings of the Index is how important it is to get IAM and MFA right. Another is the extent to which customers depend on CISOs and CIOs to make the right decisions about AI/ML to protect their customers’ identities and data.

As neural networks continue to improve and GANs gain accuracy, speed, and the ability to create ever more deceptive content, doubling down on security will become the core of every CISO’s roadmap. Nearly all breach attempts start with a compromised identity. Stopping them, whether or not they begin with deepfake content, is an achievable goal for any company.
