
Agentic AI Raises Alarms Over Loss of Human Control

Autonomous AI agents operating without human supervision in critical sectors

HYDERABAD: Artificial Intelligence is entering a new phase that is raising serious concerns among technology experts and policymakers. Until recently, AI systems were largely limited to chatbots that answered queries or generated text. That model has now evolved into what experts call agentic AI: systems that can take decisions independently and execute actions without direct human intervention.

These autonomous digital agents are already being deployed in sensitive sectors such as banking, healthcare, power grids and industrial operations. Their ability to act on their own has triggered fears that humans may gradually lose control over critical systems.

Unlike conventional AI, which only provides information, agentic AI converts analysis into action. If asked about the weather, a chatbot might give a forecast. An agentic system, however, could respond to bad weather by shutting windows or adjusting air-conditioning settings on its own.
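The contrast can be sketched in a few lines of code. All the names here (get_forecast, close_windows, set_ac) are hypothetical stand-ins for real weather and smart-home APIs, stubbed out so the example runs:

```python
# Illustrative sketch only: get_forecast, close_windows and set_ac are
# assumed helper names, not a real product's API.

def get_forecast(city):
    # Stub: a real system would call a weather service here.
    return "heavy rain expected"

ACTION_LOG = []

def close_windows():
    ACTION_LOG.append("windows closed")

def set_ac(mode):
    ACTION_LOG.append(f"AC set to {mode}")

def chatbot_answer(city):
    # A conventional assistant only reports information.
    return f"Forecast for {city}: {get_forecast(city)}"

def agentic_response(city):
    # An agentic system turns the same analysis into actions.
    forecast = get_forecast(city)
    if "rain" in forecast:
        close_windows()
        set_ac(mode="dry")
    return ACTION_LOG

print(chatbot_answer("Hyderabad"))   # only information
print(agentic_response("Hyderabad")) # actions were taken
```

The chatbot function only returns text; the agentic function changes the state of the world, which is exactly the shift that worries experts.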

From finance to healthcare, machines act independently

In banking, such agents can freeze accounts after detecting suspicious transactions. In hospitals, they can alter medication dosages based on a patient’s blood pressure or sugar levels. In factories, they can independently regulate machine performance. These systems also have the ability to communicate with other AI agents and jointly complete complex tasks.

The scale and speed of these decisions have become a major concern. Agentic AI systems can take thousands of decisions within minutes, often without human supervision. Even a minor error can have serious consequences.

Experts warn that a wrong decision by an AI agent in the stock market could wipe out crores of rupees within seconds. There is also the growing risk of cybercriminals hijacking such agents and manipulating their behaviour to carry out unlawful acts.

Another key risk is what experts describe as unintended behaviour: situations where an AI agent misinterprets its objective and takes an incorrect or harmful route to achieve it.

Call for strict human oversight and governance

Technology specialists stress that control over these advanced systems must remain firmly in human hands. Every AI agent, they argue, must have a clear and traceable digital identity, similar to an employee identification card within an organisation.

Detailed logs must record which agent accessed which file, when changes were made and where information was sent. This, experts say, would allow organisations to quickly pin down responsibility and rectify problems if something goes wrong.
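A minimal sketch of that audit-trail idea follows. The field names and helper functions are illustrative assumptions, not any specific product's logging API:

```python
# Every agent action is recorded with a traceable identity, the action
# taken, the target touched, and a timestamp. Names are illustrative.

import datetime

AUDIT_LOG = []

def log_action(agent_id, action, target):
    AUDIT_LOG.append({
        "agent": agent_id,   # traceable digital identity, like an employee ID
        "action": action,    # e.g. "read", "modify", "send"
        "target": target,    # file or destination touched
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

def agent_reads_file(agent_id, path):
    log_action(agent_id, "read", path)
    # ... the actual file access would happen here ...

agent_reads_file("agent-042", "/records/accounts.csv")
entry = AUDIT_LOG[0]
print(entry["agent"], entry["action"], entry["target"])
```

With a log like this, an organisation can answer "which agent accessed which file, and when" after the fact, which is the basis for assigning responsibility.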

Strict governance frameworks are also being called for to define how much autonomy these agents should be allowed and what data they can access. Trust frameworks must ensure that human approval is mandatory for critical or high-risk decisions.
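The human-approval rule described above can be sketched as a simple gate: routine actions run autonomously, while high-risk ones are held until a person signs off. The risk categories and action names here are assumptions for illustration:

```python
# High-risk actions (illustrative examples) require explicit human
# approval; everything else executes autonomously.

HIGH_RISK = {"freeze_account", "change_dosage", "shut_down_grid"}
pending_approvals = []

def execute(action, approved_by=None):
    if action in HIGH_RISK and approved_by is None:
        pending_approvals.append(action)
        return f"'{action}' queued for human approval"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"'{action}' executed" + suffix

print(execute("send_report"))                            # routine: runs on its own
print(execute("freeze_account"))                         # high-risk: held back
print(execute("freeze_account", approved_by="officer"))  # runs once approved
```

The point of such a gate is that autonomy is bounded by policy: the agent can still act quickly on routine matters, but the critical decisions named by the governance framework cannot complete without a human in the loop.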

While automation can significantly increase efficiency, experts caution that security must take priority. Unchecked acceleration of autonomous systems, they warn, could result in losses far greater than the benefits promised by the technology.

(For article corrections, please email hyderabadmailorg@gmail.com or fill out the Grievance Redressal Form.)