Technology

Georgetown Expert Flags Rising AI Risks

Kelsey Walters
Last updated: February 13, 2026 10:18 pm

A leading voice on technology policy warned that artificial intelligence is moving faster than public safeguards, calling for tighter guardrails and practical fixes during a national TV appearance.

Contents
  • Why The Warning Matters
  • Election Security And Deepfakes
  • Regulation Catches Up
  • National Security And Industry Impact
  • Competing Views On The Path Forward
  • What Comes Next

The interim executive director at Georgetown University’s Center for Security and Emerging Technology (CSET) joined Fox News’ Fox Report on Sunday to outline the growing risks. The discussion touched on election security, deepfakes, national security, and the pace of regulation. The appearance came as lawmakers and tech firms weigh how to reduce harm without stifling useful tools.

Interim executive director at Georgetown’s Center for Security and Emerging Technology discusses growing concerns over A.I. on ‘Fox Report.’

Why The Warning Matters

CSET is a Washington-based research group that studies how new technologies affect security and policy. Its analysts brief Congress, agencies, and allies. The interim director’s message signals where policy debates may head next.

Public anxiety has climbed after a year of high-profile AI rollouts and missteps. Companies launched new image and text systems. Users then showed how those systems can produce convincing fake audio, altered photos, and tailored phishing messages in seconds. Experts say these tools lower the barrier for fraud and influence campaigns.

Election Security And Deepfakes

One urgent focus is elections. Cheap tools can clone a candidate’s voice or face. In early 2024, fake robocalls using a cloned voice urged voters in New Hampshire to skip a primary. State officials opened investigations. The Federal Communications Commission later declared AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act.

Security researchers warn that audio and video forgeries can spread faster than fact-checks. Even after a clip is debunked, it can leave lingering doubts. That makes quick detection and clear labeling key. It also pushes campaigns and newsrooms to verify media before sharing.

  • Threats: Voice cloning scams, forged videos, tailored disinformation.
  • Targets: Voters, seniors, small businesses, public officials.
  • Defenses: Authentication, content provenance, faster takedowns.

Regulation Catches Up

Policymakers have begun to react. The White House issued an AI executive order in October 2023, calling for testing, reporting, and safety standards for powerful models. Federal agencies are drawing up rules for procurement, critical infrastructure, and consumer protection.

Abroad, the European Union approved the AI Act in 2024. It sets risk tiers, bans certain uses, and requires transparency for high-risk systems. Supporters say it gives clarity. Critics worry about compliance costs and vague terms.

Many U.S. states are advancing narrower bills on deepfakes, data privacy, and automated hiring. The patchwork could spur companies to adopt baseline practices nationwide. Still, gaps remain for open-source models, small developers, and cross-border misuse.

National Security And Industry Impact

AI also shapes defense and cyber operations. Analysts highlight risks from model theft, data poisoning, and automated hacking. They also flag how adversaries might use generative tools to craft more persuasive spear-phishing or tailor propaganda at scale.

For businesses, the near-term issues are trust and liability. Banks and retailers face rising fraud attempts using cloned voices and fake documents. Media companies confront manipulated images that can inflame tensions around breaking events. Insurers and auditors are asking for clearer controls and logs.

Companies are testing solutions. Watermarks and content provenance standards aim to show how a file was created and edited. Red-team testing and safety evaluations are more common before launch. Some firms are building models that refuse certain categories of requests. Others publish usage rules and offer rewards for finding flaws.

Competing Views On The Path Forward

Advocates for stricter rules say the public bears too much risk. They want clear penalties for harmful uses and firm duties for companies releasing powerful tools. Civil liberties groups warn that broad bans or overbroad monitoring could chill speech and research.

Developers argue for flexible standards, open research, and targeted enforcement against misuse. They say innovation can deliver better detection, filters, and safer designs. Many agree on the need for transparency about training data, model limits, and known failure modes.

What Comes Next

The next year will test whether policy and practice can keep pace. Watch for federal guidance on watermarking, identity proofing for high-risk actions, and rules for government use. Expect more lawsuits over deepfakes and consumer harm. Election officials are likely to expand rapid response units for false media.

The Georgetown expert’s on-air warning boiled down to a pragmatic note: act now on known risks while building longer-term safeguards. That means basic hygiene—verification tools, disclosure, incident reporting—paired with clearer accountability for those who deploy powerful systems.

As AI tools spread, the measure of progress will be simple. Are scams and forgeries harder to pull off? Are users better informed? The answers will shape public trust, business adoption, and the safety of key institutions this year and next.
