Who’s to Blame When AI Agents Screw Up?

By Wired
May 22, 2025


Over the past year, veteran software engineer Jay Prakash Thakur has spent his nights and weekends prototyping AI agents that could, in the near future, order meals and engineer mobile apps almost entirely on their own. His agents, while surprisingly capable, have also exposed new legal questions that await companies trying to capitalize on Silicon Valley’s hottest new technology.

Agents are AI programs that can act mostly independently, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills upon request, Microsoft and other tech giants expect that agents will tackle more complex functions—and most importantly, do it with little human oversight.

The tech industry’s most ambitious plans involve multi-agent systems, with dozens of agents someday teaming up to replace entire workforces. For companies, the benefit is clear: saving on time and labor costs. Already, demand for the technology is rising. Tech market researcher Gartner estimates that agentic AI will resolve 80 percent of common customer service queries by 2029. Fiverr, a service where businesses can book freelance coders, reports that searches for “ai agent” have surged 18,347 percent in recent months.

Thakur, a mostly self-taught coder living in California, wanted to be at the forefront of the emerging field. His day job at Microsoft isn’t related to agents, but he has been tinkering with AutoGen, Microsoft’s open source software for building agents, since he worked at Amazon back in 2024. Thakur says he has developed multi-agent prototypes using AutoGen with just a dash of programming. Last week, Amazon rolled out a similar agent development tool called Strands; Google offers what it calls an Agent Development Kit.
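
For a sense of what "a dash of programming" means here, the snippet below follows the two-agent quickstart pattern from AutoGen's public documentation (the pyautogen package). The model name, API key, working directory, and task prompt are placeholder assumptions; this is a generic sketch, not Thakur's actual prototype.

```python
# A minimal sketch of AutoGen's documented two-agent pattern (pyautogen).
# Model, API key, and the task prompt are placeholders, not Thakur's setup.
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# The assistant agent plans and writes code in response to the task.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# The user proxy executes that code locally, with no human in the loop.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # fully autonomous operation
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# A single call starts the multi-turn conversation between the two agents.
user_proxy.initiate_chat(assistant, message="Build a simple to-do list app.")
```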

Because agents are meant to act autonomously, the question of who bears responsibility when their errors cause financial damage has been Thakur’s biggest concern. Assigning blame when agents from different companies miscommunicate within a single, large system could become contentious, he believes. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation based on different people’s notes. “It’s often impossible to pinpoint responsibility,” Thakur says.
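
Thakur's notes analogy points at one partial mitigation: tag every agent action with a shared trace ID, so a post-mortem can at least reassemble the hand-offs in order. The sketch below is a hypothetical illustration using only Python's standard library; the function and field names are invented for this example, not drawn from AutoGen or any other framework.

```python
# Hypothetical audit trail: every agent action carries one shared trace ID,
# so a post-mortem can reassemble who told what to whom, and in what order.
import json
import time
import uuid

def log_action(trace_id: str, agent: str, action: str, payload: dict) -> None:
    """Append one structured, timestamped record to a shared audit log."""
    record = {
        "trace_id": trace_id,
        "timestamp": time.time(),
        "agent": agent,
        "action": action,
        "payload": payload,
    }
    with open("audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

trace_id = str(uuid.uuid4())  # one ID spans the whole multi-agent task
log_action(trace_id, "search_agent", "found_tool", {"tool": "ExampleAPI"})
log_action(trace_id, "summarizer_agent", "summarized_policy",
           {"summary": "supports unlimited requests"})  # a dropped qualifier is now on record
```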

Joseph Fireman, senior legal counsel at OpenAI, said on stage at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to go after those with the deepest pockets. That means companies like his will need to be prepared to take some responsibility when agents cause harm—even when a kid messing around with an agent might be to blame. (If that person were at fault, they likely wouldn’t be a worthwhile target moneywise, the thinking goes.) “I don’t think anybody is hoping to get through to the consumer sitting in their mom’s basement on the computer,” Fireman said. The insurance industry has begun rolling out coverage for AI chatbot issues to help companies cover the costs of mishaps.

Onion Rings

Thakur’s experiments string together agents into systems that require as little human intervention as possible. In one project, he set out to replace fellow software developers with two agents: one trained to search for the specialized tools needed to make apps, the other to summarize those tools’ usage policies. In the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app, Thakur says.

When Thakur put his prototype to the test, a search agent found a tool that, according to the website, “supports unlimited requests per minute for enterprise users” (meaning high-paying clients can rely on it as much as they want). But in trying to distill the key information, the summarization agent dropped the crucial qualification of “per minute for enterprise users.” It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program that made unlimited requests to the outside service. Because this was a test, there was no harm done. If it had happened in real life, the truncated guidance could have led to the entire system unexpectedly breaking down.
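
One hypothetical guardrail against exactly this failure is to verify any "unlimited" claim in a summary against the source policy text before a downstream agent acts on it. The function below is an illustrative sketch with deliberately simplistic qualifier patterns; it is not part of Thakur's system.

```python
# Hypothetical guardrail: before acting on a summarized policy, re-check
# risky claims (like "unlimited") against the original policy text.
import re

def summary_is_safe(summary: str, source_policy: str) -> bool:
    """Fail closed if the summary says 'unlimited' but drops qualifiers
    (rate windows, customer tiers) that the source attaches to it."""
    if "unlimited" not in summary.lower():
        return True  # the summary makes no risky claim
    # Deliberately simple patterns for qualifiers near "unlimited".
    qualifiers = re.findall(r"per \w+|enterprise users",
                            source_policy, flags=re.IGNORECASE)
    return all(q.lower() in summary.lower() for q in qualifiers)

policy = "Supports unlimited requests per minute for enterprise users."
assert summary_is_safe("Unlimited requests per minute for enterprise users.", policy)
assert not summary_is_safe("Supports unlimited requests.", policy)  # truncated summary fails
```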

Tags: Artificial Intelligence, law, machine learning, robots, software