The dangers of anthropomorphizing AI: An infosec perspective (2024)

The generative AI revolution is showing no signs of slowing down. Chatbots and AI assistants have become an integral part of the business world, whether for training employees, answering customer queries or something else entirely. We’ve even given them names and genders and, in some cases, distinctive personalities.

There are two very significant trends happening in the world of generative AI. On the one hand, the desperate drive to humanize them continues, sometimes recklessly and with little regard for the consequences. At the same time, according to Deloitte’s latest State of Generative AI in the Enterprise report, businesses’ trust in AI has greatly increased across the board over the last couple of years.

However, many customers and employees clearly don’t feel the same way. More than 75% of consumers are concerned about misinformation. Employees are worried about being replaced by AI. There’s a growing trust gap, and it’s emerged as a defining force of an era characterized by AI-powered fakery.

Here’s what that means for infosec and governance professionals.

The dangers of overtrust

The tendency to humanize AI and the degree to which people trust it highlights serious ethical and legal concerns. AI-powered ‘humanizer’ tools claim to transform AI-generated content into “natural” and “human-like” narratives. Others have created “digital humans” for use in marketing and advertising. Chances are, the next ad you see featuring a person isn’t a person at all but a form of synthetic media. Actually, let’s stick to calling it exactly what it is — a deepfake.

Efforts to personify AI are nothing new. Apple pioneered it way back in 2011 with the launch of Siri. Now, we have countless thousands more of these digital assistants, some of which are tailored to specific use cases, such as digital healthcare, customer support or even personal companionship.

It’s no coincidence that many of these digital assistants come with imagined female personas, complete with feminine names and voices. After all, studies show that people overwhelmingly prefer female voices, and that makes us more predisposed to trusting them. Though they lack physical forms, they embody a competent, dependable and efficient woman. But as tech strategist and speaker George Kamide puts it, this “reinforces human biases and stereotypes and is a dangerous obfuscation of how the technology operates.”

Ethical and security issues

It’s not just an ethical problem; it’s also a security problem since anything designed to persuade can make us more susceptible to manipulation. In the context of cybersecurity, this presents a whole new level of threat from social engineering scammers.

People form relationships with other people, not with machines. But when it becomes almost impossible to tell the difference, we’re more likely to trust AI when making sensitive decisions. We become more vulnerable; more willing to share our personal thoughts and, in the case of business, our trade secrets and intellectual property.

This presents serious ramifications for information security and privacy. Most large language models (LLMs) keep a record of every interaction, potentially using it for training future models.

Do we really want our virtual assistants to reveal our private information to future users? Do business leaders want their intellectual property to resurface in later responses? Do we want our secrets to become part of a massive corpus of text, audio and visual content to train the next iteration of AI?

If we start thinking of machines as substitutes for real human interaction, then all these things are much likelier to happen.

Learn more on AI cybersecurity

A magnet for cyber threats

We’re conditioned to believe that computers don’t lie, but the truth is that algorithms can be programmed to do precisely that. And even if they’re not specifically trained to deceive, they can still “hallucinate” or be exploited to reveal their training data.

Cyber threat actors are well aware of this, which is why AI is the next big frontier in cyber crime. Just as a business might use a digital assistant to persuade potential customers, so too can a threat actor use it to dupe an unsuspecting victim into taking a desired action. For example, a chatbot dubbed Love-GPT was recently implicated in romance scams thanks to its ability to generate seemingly authentic profiles on dating platforms and even chat with users.

Generative AI will only become more sophisticated as algorithms are refined and the required computing power becomes more readily available. The technology already exists to create so-called “digital humans” with names, genders, faces and personalities. Deepfake videos are far more convincing than just a couple of years ago. They’re already making their way into live video conferences, with one finance worker paying out $25 million after a video call with their deepfake chief financial officer.

The more we think of algorithms as people, the harder it becomes to tell the difference and the more vulnerable we become to those who would use the technology for harm. While things aren’t likely to get any easier, given the rapid pace of advancement in AI technology, legitimate organizations have an ethical duty to be transparent in their use of AI.

AI outpacing policy and governance

We have to accept that generative AI is here to stay. We shouldn’t underestimate its benefits either. Smart assistants can greatly decrease the cognitive load on knowledge workers and they can free up limited human resources to give us more time to focus on larger issues. But trying to pass off any kind of machine learning capabilities as substitutes for human interaction isn’t just ethically questionable; it’s also contrary to good governance and policy-making.

AI is advancing at a speed governments and regulators can’t keep up with. While the EU is putting into force the world’s first regulation on artificial intelligence — known as the EU AI Act — we still have a long way to go. Therefore, it’s up to businesses to take the initiative with stringent self-regulation concerning the security, privacy, integrity and transparency of AI and how they use it.

In the relentless quest to humanize AI, it’s easy to lose sight of those crucial elements that constitute ethical business practices. It leaves employees, customers and everyone else concerned vulnerable to manipulation and overtrust. The result of this obsession isn’t so much humanizing AI; it’s that we end up dehumanizing humans.

That’s not to suggest businesses should avoid generative AI and similar technologies. What they must do, however, is be transparent about how they use them and clearly communicate the potential risks to their employees. It’s imperative that generative AI becomes an integral part of not just your business technology strategy but also your security awareness training, governance and policy-making.

A dividing line between human and AI

In an ideal world, everything that’s AI would be labeled and verifiable as such. And if it isn’t, then it’s probably not to be trusted. Then, we could go back to worrying only about human scammers, albeit, of course, with their inevitable use of rogue AIs. In other words, perhaps we should leave the anthropomorphizing of AI to the malicious actors. That way, we at least stand a chance of being able to tell the difference.

generative ai|generative artificial intelligence|chatbot|Artificial Intelligence (AI)

Charles Owen-Jackson

Freelance Content Marketing Writer

The dangers of anthropomorphizing AI: An infosec perspective (2024)
Top Articles
Latest Posts
Article information

Author: Velia Krajcik

Last Updated:

Views: 6253

Rating: 4.3 / 5 (74 voted)

Reviews: 81% of readers found this page helpful

Author information

Name: Velia Krajcik

Birthday: 1996-07-27

Address: 520 Balistreri Mount, South Armand, OR 60528

Phone: +466880739437

Job: Future Retail Associate

Hobby: Polo, Scouting, Worldbuilding, Cosplaying, Photography, Rowing, Nordic skating

Introduction: My name is Velia Krajcik, I am a handsome, clean, lucky, gleaming, magnificent, proud, glorious person who loves writing and wants to share my knowledge and understanding with you.