
Responding to the MGM hack: is call center voice AI a vulnerability or a security asset?

September 22, 2023


Last week, MGM Las Vegas fell victim to a cyber attack as a result of social engineering. Reports indicate members of a cybercrime group made phone calls to the IT service desk and persuaded a service representative to reset all Multi-Factor Authentication (MFA) factors for multiple highly privileged users, gaining access to sensitive systems and customer data.

Empathy and compassion are two essential traits for effective customer service representatives. Career customer service reps work hard and stay in their jobs because they want to help callers. It makes sense that they would want to help a colleague or a customer in a bind. It’s exactly this kind of behavior that pulls in high CSAT scores and builds loyal customer relationships.

But the very skills that make for excellent customer service representatives also present a vulnerability that remains the Achilles' heel of many organizations' defenses. Attackers use fear, urgency, and sympathy as emotional tactics to manipulate and cloud judgment, prompting impulsive decisions that can cause irreversible brand damage.

While much has been speculated about the risks associated with AI, this hack could have been prevented with an AI-powered solution that isn’t vulnerable to emotional manipulation.

With many of our hotel and casino clients reaching out to understand how PolyAI can help to provide a secure authentication process, we decided to put together this quick guide to using call center voice AI to prevent social engineering hacks.

An emotionless approach to security

Passwords and MFA are prime examples of emotionless security processes that have stood the test of time. They succeed because they follow a strict protocol: if you don’t know the password or can’t provide the necessary verification, access is denied. Of course, in MGM’s case, this secure process was broken through emotional manipulation.

Security is a matter of probabilities, and with every high-profile attack, it’s clear that no organization is impenetrable. However, leveraging multiple security measures is the most effective way to reduce the chances and potential impact of a successful cyber attack. Security professionals call this a “defense-in-depth” strategy.

Cyberattackers will always use the latest tools to gain an edge, and they will use AI to probe for vulnerabilities and weaknesses. However, security professionals, particularly those working in network, application, and physical security, can use AI to identify and stop potential attackers before a breach succeeds.

Effective voice assistants for ID&V

Call center voice AI solutions like voice assistants can handle Knowledge-Based Authentication (KBA) processes, running callers through a series of security questions in the same way a customer service representative would.

KBA is naturally conversational, and although it requires a little effort from the customer, it is generally a simple process that callers are already familiar with. KBA is popular in contact centers as a conversational method of both identifying and verifying customers. It is secure in that multiple customers are unlikely to share the same combination of personal details (e.g., name and date of birth).
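For illustration only, here is a minimal sketch of what an automated KBA check might look like once the voice assistant has collected a caller's answers. This is not PolyAI's implementation; the record store, field names, and functions are hypothetical.

```python
from datetime import date

# Hypothetical customer records; in practice this would be a lookup against
# the organization's CRM or identity system, not an in-memory dictionary.
CUSTOMER_RECORDS = {
    "ACC-1001": {
        "name": "jane doe",
        "date_of_birth": date(1985, 4, 12),
        "postcode": "nv 89109",
    },
}

def normalize(value: str) -> str:
    """Lower-case and collapse whitespace so formatting differences don't fail a match."""
    return " ".join(value.lower().split())

def verify_caller(account_id: str, answers: dict) -> bool:
    """Return True only if every supplied answer matches the stored record.

    The check is strictly rule-based: there is no override path for an
    insistent or distressed caller, which is the point of automating KBA.
    """
    record = CUSTOMER_RECORDS.get(account_id)
    if record is None:
        return False
    return all([
        normalize(answers.get("name", "")) == record["name"],
        answers.get("date_of_birth") == record["date_of_birth"],
        normalize(answers.get("postcode", "")) == record["postcode"],
    ])

# A caller who gets any detail wrong is simply not verified.
print(verify_caller("ACC-1001", {
    "name": "Jane Doe",
    "date_of_birth": date(1985, 4, 12),
    "postcode": "NV 89109",
}))  # True
```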

Unlike customer service representatives, voice assistants are not susceptible to social engineering. Using a voice assistant to automate KBA processes frees up people to focus on calls where empathy and compassion are an asset, not a vulnerability.

If you’re thinking of implementing a voice assistant to handle identification & verification (ID&V), it’s important that your partners and solutions do not require customer data to deploy. Your vendor should not store customer data; it should only process it in line with the necessary security and compliance protocols.
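As a rough sketch of the "process, don't store" principle, an ID&V service can run the check in memory and log only the outcome, never the caller's personal details. The function below is hypothetical and simply wraps a matching function such as the verify_caller sketch above.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("idv")

def handle_verification_call(account_id: str, answers: dict, verify) -> bool:
    """Run an ID&V check without persisting the caller's personal details.

    `verify` is any matching function (e.g. the verify_caller sketch above).
    Only the pass/fail outcome and a truncated account reference are logged;
    the raw answers stay in memory for the duration of the call and are not
    written anywhere by this service.
    """
    verified = verify(account_id, answers)
    log.info("ID&V outcome for account ending %s: %s",
             account_id[-4:], "pass" if verified else "fail")
    return verified
```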

Using call center voice AI for data-driven security

Voice assistants gather structured conversational data as standard. By running analytics on security-related calls, security analysts can use this data to identify trends indicative of attempted attacks or targeting.
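To illustrate the kind of analysis this enables, the sketch below flags caller numbers with repeated failed verifications on a security-sensitive intent within a time window. The field names, intent labels, and thresholds are illustrative assumptions, not PolyAI's schema.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical structured call records as a voice assistant might emit them.
calls = [
    {"timestamp": datetime(2023, 9, 20, 14, 2), "intent": "mfa_reset", "verified": False, "ani": "+1555000111"},
    {"timestamp": datetime(2023, 9, 20, 14, 9), "intent": "mfa_reset", "verified": False, "ani": "+1555000111"},
    {"timestamp": datetime(2023, 9, 20, 14, 15), "intent": "mfa_reset", "verified": False, "ani": "+1555000111"},
    {"timestamp": datetime(2023, 9, 20, 15, 30), "intent": "booking_change", "verified": True, "ani": "+1555000222"},
]

def flag_repeated_failures(calls, now, window=timedelta(hours=2), threshold=3):
    """Return caller numbers with repeated failed verifications on a sensitive intent."""
    recent_failures = [
        c for c in calls
        if not c["verified"]
        and c["intent"] == "mfa_reset"
        and now - c["timestamp"] <= window
    ]
    counts = Counter(c["ani"] for c in recent_failures)
    return [ani for ani, n in counts.items() if n >= threshold]

print(flag_repeated_failures(calls, now=datetime(2023, 9, 20, 16, 0)))  # ['+1555000111']
```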

As the voice assistant generates usage data, security and contact center teams can work together to develop further automation strategies that reduce security vulnerabilities while freeing up customer service representatives to use their valuable soft skills to build lasting customer relationships.

Conclusion

The recent MGM hack serves as a reminder that the compassion and empathy we so value in customer service remain a critical risk in cyber security. While human vulnerabilities continue to be a significant challenge, it’s essential to recognize that AI helps security teams adapt to constantly evolving threats.

By integrating AI voice assistants into identification and verification processes, organizations can ensure consistency, impartiality, and efficiency in verifying critical information, all while maintaining the natural flow of conversation.

Ready to hear it for yourself?

Get a personalized demo to learn how PolyAI can help you drive measurable business value.

Request a demo