In this week’s U.S. Senate hearing on “Big Tech and the Online Child Exploitation Crisis,” the spotlight was on the pressing need for robust protections against digital harms. This event, coupled with my own experiences speaking to legislators about the potential and pitfalls of AI, underscores a critical yet often overlooked challenge: the rise of deepfakes and AI-powered cyberattacks, as millions of Taylor Swift fans have recently learned.
The Unseen Threat of Voice Cloning

Deepfakes, or AI-generated falsifications, are evolving rapidly. Beyond the well-publicized fake images and videos, there’s a lower-tech threat: voice cloning. Technologies like VALL-E can now mimic a person’s voice from just three seconds of audio, reproducing not just the sound but the emotional tone and even the background noise. This capability opens a Pandora’s box of potential scams, from emergency pleas for money to unauthorized requests for sensitive information, posing significant risks to individuals and organizations alike. Consider how you would react if your boss were at an overseas conference and called to say they had been pickpocketed and lost their passport, credit cards, and money. They need you to wire them $500 to get a car to the Embassy in London. You’re up for promotion next month. What would you do?
The Expanding GenAI Battlefield
With tens of millions of content creators across platforms like TikTok and YouTube, the raw material for voice cloning is readily available, making it easier for malicious actors to execute scams. From the “Family Emergency Scam” to the “Boss Emergency Scam” I mentioned earlier, the methods of exploitation are becoming more sophisticated, leveraging the human element to bypass traditional security measures. According to Verizon’s Data Breach Investigations Report, 74% of data breaches in 2021–2022 involved a human element, highlighting the urgent need for enhanced vigilance and preparedness. And it’s not just families and organizations that are being targeted: the recent fake Biden robocall in New Hampshire, which used AI-based voice cloning to discourage citizens from voting, is just the beginning.
Beyond Deepfake Tay: The Threat of Image and Video Cloning
The potential misuse of deepfake technology extends beyond voice cloning to include image and video manipulation. The scenario of deepfake nudes, for instance, represents a grave violation of privacy and can lead to severe reputational damage. This threat demands not just awareness but proactive measures to safeguard personal and organizational integrity.
Strategies for All to Combat Deepfake Threats
1. Cultivate AI Fluency: Both organizations and families must stay informed about AI advancements and their associated risks. Regular education and awareness sessions can empower individuals to recognize and respond to suspicious activities effectively.
2. Implement Personal Verification: Establish unique verification methods with family, friends, and colleagues. Challenge codes or personal-knowledge questions provide a critical layer of security against voice cloning attacks. Do not trust; verify.
3. Adopt Multi-Factor Authentication and Network Monitoring: These technologies are essential for preventing unauthorized access to sensitive information and should be standard practice in all organizations.
4. Secure Communication Channels for Sensitive Information: The use of encrypted messaging apps and secure email services can protect against the interception of sensitive communications. Avoid sharing codenames, project names, or uniquely identifiable resources over unsecured texts.
5. Engage with Legislators and Demand AI Fluency: Advocating for AI fluency among policymakers is crucial for developing informed, effective strategies to combat cyber threats. Only Georgia, Hawaii, Texas, and Virginia have laws that criminalize deepfakes. California and Illinois give victims the right to sue. Minnesota and New York offer both protections. Meanwhile, initiatives to identify and stop attacks before they happen, such as the FTC’s Voice Cloning Challenge, represent positive steps, but more comprehensive efforts are necessary.
This Genie is Not Going Back in the Bottle

The advancement of AI technology, while offering immense potential, also brings new vulnerabilities and challenges, and a new reality we all need to prepare for. Imagine that instead of Taylor Swift, it’s you or one of your employees who is targeted with deepfake nudes built from all those vacation photos you posted on social media. The attacker contacts you with a photo, the explicit parts blurred out, that slips past email protections, and threatens to send the fakes to everyone in your address book unless you follow their instructions. A scenario like this was explored in Netflix’s series “Black Mirror” in 2016, where a young man’s computer is hacked and his webcam exposed. Since then, everyone from former FBI Director James Comey to Facebook CEO Mark Zuckerberg has been seen with black tape over their webcams. Text-based attacks like this, emails claiming to have hacked your webcam, are happening all the time. In the new age of deepfakes, no amount of tape will protect us.
As we navigate this evolving landscape, the need for AI fluency and proactive cybersecurity measures has never been more critical. By fostering awareness, implementing robust security protocols, and advocating for informed legislative action, we can mitigate the risks posed by deepfakes and other AI-powered cyberattacks. Let’s take collective action to secure our digital future.
What do you think? Are there additional measures or perspectives to consider? Be sure to leave your thoughts in the comments.


