
Responding to the Coinbase ransomware attack: Strengthening contact center security with voice AI

May 16, 2025



Coinbase, the largest U.S. cryptocurrency exchange, recently faced a serious ransomware attack. Criminals bribed overseas customer service agents for access to customers’ personal data, such as names, dates of birth, and partial Social Security numbers.

The attackers then used this information to try to scam Coinbase customers out of their crypto funds. They demanded $20 million in bitcoin and threatened to release the stolen data if Coinbase didn’t pay.

This attack highlights a common problem: people often remain the weakest link in security systems. Social engineering attacks trick people by playing on emotions like trust and urgency, leading to mistakes and breaches.

Why human empathy can be a security risk

Customer service reps are trained to help and be understanding. That’s what makes good customer experiences. But criminals can exploit this kindness to manipulate staff into giving up sensitive information or bypassing security checks.

This emotional manipulation is a big security risk that can cause serious damage to companies and customers.

How voice AI can help prevent these attacks

Unlike people, AI agents can’t be tricked by fear or sympathy. Solutions like AI agents can handle identity checks and verification using set security questions and strict rules, making the process consistent and harder to fool.

Using voice AI for these tasks lets human agents focus on helping customers where empathy really matters, while keeping sensitive steps secure and automated.

Call center voice AI solutions like AI agents can handle Knowledge-Based Authentication (KBA) processes, running callers through a series of security questions in the same way a customer service representative would.

Knowledge-based authentication

Knowledge-based authentication (KBA) is the process of verifying customers through a series of questions, such as “What’s your mother’s maiden name?” or “What street did you grow up on?” It requires little effort from the customer and follows a simple pattern that callers are already familiar with. KBA is popular in contact centers as a conversational method of both identifying and verifying customers. It is secure in that multiple customers are unlikely to match the same combination of personal details (e.g., name and date of birth).
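To make the mechanics concrete, here is a minimal sketch of how a rules-based KBA check might compare caller answers against a customer record. The field names, questions, and pass threshold are illustrative assumptions, not any vendor’s actual implementation:

```python
# Minimal sketch of a knowledge-based authentication (KBA) check.
# Field names and the pass threshold are illustrative assumptions.

KBA_QUESTIONS = {
    "mother_maiden_name": "What's your mother's maiden name?",
    "childhood_street": "What street did you grow up on?",
    "date_of_birth": "What is your date of birth?",
}

def normalize(answer: str) -> str:
    """Compare answers case- and whitespace-insensitively."""
    return " ".join(answer.strip().lower().split())

def verify_caller(record: dict, answers: dict, required_matches: int = 2) -> bool:
    """Pass only if enough answers match the customer record.

    Unlike a human agent, the rule never bends for an upset
    or insistent caller: the threshold is the threshold.
    """
    matches = sum(
        1
        for field, given in answers.items()
        if field in record and normalize(given) == normalize(record[field])
    )
    return matches >= required_matches

record = {
    "mother_maiden_name": "Smith",
    "childhood_street": "Elm Street",
    "date_of_birth": "1990-01-01",
}
print(verify_caller(record, {"mother_maiden_name": "smith", "childhood_street": "elm street"}))  # True
print(verify_caller(record, {"mother_maiden_name": "Jones", "childhood_street": "Oak Ave"}))     # False
```

The key design point is that the pass/fail decision is a fixed rule over exact matches, leaving no room for a caller to talk their way past a failed check.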

Unlike customer service representatives, AI agents are not susceptible to social engineering. Automating KBA processes with an AI agent frees customer service reps to focus on calls where empathy and compassion are an asset, not a vulnerability.

If you’re considering an AI agent to handle identification and verification, note that your partners and solutions should not require customer data to deploy. Your vendor should not store customer data, only process it in line with the necessary security and compliance protocols.

Data security and privacy controls for AI agents

AI agents often handle sensitive customer information, such as personal details, financial data, or health records. With database access restrictions in place, the LLM can be configured to have no direct access to databases, ensuring a strong separation between customer interactions and sensitive data stores.
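One way to realize that separation is a mediation layer: the LLM never queries a database directly, and may only invoke narrow tools that return verification outcomes, never raw records. The class and method names below are illustrative assumptions, a sketch of the pattern rather than a specific product’s architecture:

```python
# Sketch of a mediation layer between an LLM and sensitive data stores.
# Class and method names are illustrative assumptions.

class CustomerStore:
    """Stands in for the real database; never exposed to the LLM."""

    def __init__(self, records: dict):
        self._records = records

    def matches(self, customer_id: str, field: str, value: str) -> bool:
        rec = self._records.get(customer_id)
        return rec is not None and rec.get(field) == value

class VerificationTool:
    """The only interface the LLM is allowed to call.

    It returns booleans (match / no match), never raw customer
    records, so no PII can leak into the model's context.
    """

    def __init__(self, store: CustomerStore):
        self._store = store

    def check_answer(self, customer_id: str, field: str, value: str) -> bool:
        return self._store.matches(customer_id, field, value)

store = CustomerStore({"c42": {"date_of_birth": "1990-01-01"}})
tool = VerificationTool(store)
print(tool.check_answer("c42", "date_of_birth", "1990-01-01"))  # True — outcome only, no record returned
```

Because the tool’s return type is a bare boolean, even a fully compromised prompt cannot coax customer data out of the store through the LLM.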

Filtering can be applied to the LLM’s generated outputs to remove or anonymize sensitive information, particularly personally identifiable information (PII). These filters require regular updates to keep pace with evolving data privacy standards.
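A minimal sketch of such an output filter, assuming simple regex-based redaction: the patterns below catch a few common PII shapes and are deliberately basic; production filters need broader, regularly updated rules and often NER-based detection as well.

```python
import re

# Sketch of an output filter that redacts common PII patterns from
# LLM-generated text before it reaches the caller. Patterns are
# deliberately simple and illustrative.

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Your SSN 123-45-6789 is on file under jane@example.com."))
# Your SSN [SSN REDACTED] is on file under [EMAIL REDACTED].
```

Keeping the patterns in a single table is what makes the “regular updates” requirement practical: tightening the filter is a data change, not a code change.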

Using AI-driven insights to detect and prevent emerging threats

AI agents generate structured data from interactions, enabling security teams to analyze patterns and detect suspicious activity or emerging threats early. This intelligence can be combined with human oversight to strengthen “defense-in-depth” strategies and reduce vulnerabilities.
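As a toy illustration of what analyzing that structured data can look like, here is a sketch that flags callers with repeated failed verification attempts. The log schema and threshold are assumptions for illustration only:

```python
from collections import Counter

# Sketch of flagging suspicious patterns in structured interaction
# logs, e.g. repeated failed verifications from one caller ID.
# The event schema and threshold are illustrative assumptions.

def flag_suspicious(events: list[dict], max_failures: int = 3) -> list[str]:
    """Return caller IDs whose failed verifications meet the threshold."""
    failures = Counter(
        e["caller"] for e in events if e["outcome"] == "verification_failed"
    )
    return sorted(c for c, n in failures.items() if n >= max_failures)

events = (
    [{"caller": "+15550001", "outcome": "verification_failed"}] * 4
    + [{"caller": "+15550002", "outcome": "verified"}]
)
print(flag_suspicious(events))  # ['+15550001']
```

In practice the flagged IDs would feed a human review queue rather than trigger automatic blocking, which is the “human oversight” half of the defense-in-depth approach described above.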

Automating powerful customer interactions with robust safety

Putting customer interactions in the hands of automated systems requires a lot of trust. If your agents were unsure of how to resolve a customer’s issue, you’d want them to check their response so they deliver a trustworthy, secure, and correct answer.

PolyAI’s customer-led AI agents are consistent, reliable, and safe. Our proprietary generative AI framework incorporates the benefits of generative AI while retaining the safety guardrails that are so important to enterprises looking to use AI responsibly and keep customer conversations and data secure.

Balancing human empathy and AI security for stronger protection

The Coinbase ransomware attack shows how social engineering and insider threats are still big security challenges. Social engineering exploits the natural empathy of customer service reps, putting sensitive data and customers at risk.

Voice AI offers an effective way to reduce this vulnerability. By automating identity verification with consistent, rules-based checks, AI agents can prevent emotional manipulation and free human agents to focus on genuine customer care.

At the same time, strong data security controls and privacy measures must be in place to protect customer information handled by AI. Combining AI-driven insights with human oversight creates a stronger, layered defense against these emerging threats.

Ready to hear it for yourself?

Get a personalized demo to learn how PolyAI can help you drive measurable business value.

Request a demo
