A leading voice on technology policy warned that artificial intelligence is moving faster than public safeguards, calling for tighter guardrails and practical fixes during a national TV appearance.
The interim executive director at Georgetown University’s Center for Security and Emerging Technology (CSET) joined Fox News’ Fox Report on Sunday to outline growing risks. The discussion touched on election security, deepfakes, national security, and the pace of regulation. The appearance came as lawmakers and tech firms weigh how to reduce harm without stifling useful tools.
Why The Warning Matters
CSET is a Washington-based research group that studies how new technologies affect security and policy. Its analysts brief Congress, agencies, and allies. The interim director’s message signals where policy debates may head next.
Public anxiety has climbed after a year of high-profile AI rollouts and missteps. Companies launched new image and text systems, and users quickly showed that they can produce convincing fake audio, altered photos, and tailored phishing messages in seconds. Experts say these tools lower the barrier for fraud and influence campaigns.
Election Security And Deepfakes
One urgent focus is elections. Cheap tools can clone a candidate’s voice or face. In early 2024, fake robocalls using a cloned voice urged voters in New Hampshire to skip a primary. State officials opened investigations. The Federal Communications Commission later declared AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act.
Security researchers warn that audio and video forgeries can spread faster than fact-checks. Even after they are debunked, false clips can leave lingering doubts. That makes quick detection and clear labeling key. It also pushes campaigns and newsrooms to verify media before sharing.
- Threats: Voice cloning scams, forged videos, tailored disinformation.
- Targets: Voters, seniors, small businesses, public officials.
- Defenses: Authentication, content provenance, faster takedowns.
Regulation Catches Up
Policymakers have begun to react. The White House issued an AI executive order in October 2023, calling for testing, reporting, and safety standards for powerful models. Federal agencies are drawing up rules for procurement, critical infrastructure, and consumer protection.
Abroad, the European Union approved the AI Act in 2024. It sets risk tiers, bans certain uses, and requires transparency for high-risk systems. Supporters say it gives clarity. Critics worry about compliance costs and vague terms.
Many U.S. states are advancing narrower bills on deepfakes, data privacy, and automated hiring. The patchwork could spur companies to adopt baseline practices nationwide. Still, gaps remain for open-source models, small developers, and cross-border misuse.
National Security And Industry Impact
AI also shapes defense and cyber operations. Analysts highlight risks from model theft, data poisoning, and automated hacking. They also flag how adversaries might use generative tools to craft more persuasive spear-phishing or tailor propaganda at scale.
For businesses, the near-term issues are trust and liability. Banks and retailers face rising fraud attempts using cloned voices and fake documents. Media companies confront manipulated images that can inflame coverage of breaking events. Insurers and auditors are asking for clearer controls and logs.
Companies are testing solutions. Watermarks and content provenance standards aim to show how a file was created and edited. Red-team testing and safety evaluations are more common before launch. Some firms are building models that refuse to respond to certain prompts. Others publish usage rules and offer rewards for finding flaws.
Competing Views On The Path Forward
Advocates for stricter rules say the public bears too much risk. They want clear penalties for harmful uses and firm duties for companies releasing powerful tools. Civil liberties groups warn that broad bans or overbroad monitoring could chill speech and research.
Developers argue for flexible standards, open research, and targeted enforcement against misuse. They say innovation can deliver better detection, filters, and safer designs. Many agree on the need for transparency about training data, model limits, and known failure modes.
What Comes Next
The next year will test whether policy and practice can keep pace. Watch for federal guidance on watermarking, identity proofing for high-risk actions, and rules for government use. Expect more lawsuits over deepfakes and consumer harm. Election officials are likely to expand rapid response units for false media.
The Georgetown expert’s on-air warning boiled down to a pragmatic note: act now on known risks while building longer-term safeguards. That means basic hygiene—verification tools, disclosure, incident reporting—paired with clearer accountability for those who deploy powerful systems.
As AI tools spread, the measure of progress will be simple. Are scams and forgeries harder to pull off? Are users better informed? The answers will shape public trust, business adoption, and the safety of key institutions this year and next.
