As large language models have proven increasingly capable and accurate, a new paradigm is emerging in the AI world: Agentic AI. Agentic AI refers to technology where we create “AI agents” that can act on their own, or in some sense have “agency” of their own. It goes beyond simply following pre-programmed rules or responding to specific commands.
Of course, we have been using agents and routines for a long time. An alarm clock can be seen as an agent that makes noise at a specific time, but it is not an intelligent agent. Asking hotel staff to call you once your shuttle arrives, on the other hand, is an example of delegating to an intelligent agent.
Large language models and other multi-modal models are now capable of far more complex processing and of detecting changes in their environment. This enables us to give AI agents more open-ended commands to act on our behalf.
An example of agentic AI could be an AI agent that takes care of your plants. For example, you could tell your AI agent, “I have planted a lemon tree and connected it to sprinkler 1; figure out how to water it on an as-needed basis.” A good AI agent will then ensure the tree gets water as and when needed, check whether it has already rained, and so on.
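To make this concrete, here is a minimal sketch of what such an agent's decision loop might look like. The sensor and sprinkler APIs (get_soil_moisture, rained_recently, Sprinkler) are hypothetical placeholders invented for illustration; a real agent would call actual sensor and weather services, and would likely use an LLM to plan around situations this simple loop cannot handle.

```python
import time

# Hypothetical thresholds for the lemon-tree example.
MOISTURE_THRESHOLD = 0.3    # below this fraction, the tree needs water
CHECK_INTERVAL_SECS = 3600  # re-evaluate once an hour


class Sprinkler:
    """Placeholder for a real smart-sprinkler controller."""

    def __init__(self, sprinkler_id: int):
        self.sprinkler_id = sprinkler_id

    def run(self, seconds: int) -> None:
        print(f"Running sprinkler {self.sprinkler_id} for {seconds}s")


def get_soil_moisture() -> float:
    """Placeholder for a real soil-moisture sensor reading (0.0 to 1.0)."""
    return 0.25


def rained_recently() -> bool:
    """Placeholder for a weather-API lookup of recent rainfall."""
    return False


def watering_agent() -> None:
    """Open-ended goal ('keep the tree watered') turned into a sense-decide-act loop."""
    sprinkler = Sprinkler(sprinkler_id=1)
    while True:
        # The agent acts only when its observations say water is needed,
        # skipping the sprinkler if it has already rained.
        if get_soil_moisture() < MOISTURE_THRESHOLD and not rained_recently():
            sprinkler.run(seconds=120)
        time.sleep(CHECK_INTERVAL_SECS)
```

The point of the sketch is the structure: the human states the goal once, and the agent decides on its own, repeatedly, whether and when to act.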
Agentic AI is still an emerging field, but it has the potential to revolutionize various industries by automating complex tasks, improving decision-making, and enabling new forms of human-AI collaboration.
A more formal research paper, Visibility into AI Agents (Chan et al., 2024), gives us a better summary:
Increased delegation of commercial, scientific, governmental, and personal activities to AI agents -- systems capable of pursuing complex goals with limited supervision -- may exacerbate existing societal risks and introduce new risks. Understanding and mitigating these risks involves critically evaluating existing governance structures, revising and adapting these structures where needed, and ensuring accountability of key stakeholders. Information about where, why, how, and by whom certain AI agents are used, which we refer to as visibility, is critical to these objectives. In this paper, we assess three categories of measures to increase visibility into AI agents: agent identifiers, real-time monitoring, and activity logging. For each, we outline potential implementations that vary in intrusiveness and informativeness. We analyze how the measures apply across a spectrum of centralized through decentralized deployment contexts, accounting for various actors in the supply chain including hardware and software service providers. Finally, we discuss the implications of our measures for privacy and concentration of power. Further work into understanding the measures and mitigating their negative impacts can help to build a foundation for the governance of AI agents.
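To make the paper's categories of visibility measures more concrete, here is a minimal sketch of two of them, agent identifiers and activity logging. The VisibleAgent class and its act method are hypothetical constructs invented for this illustration, not an API from the paper; the idea is simply that every action an agent takes carries a unique identifier and leaves a log record.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-activity")


class VisibleAgent:
    """Hypothetical wrapper: every action carries an identifier and is logged."""

    def __init__(self, deployer: str):
        self.agent_id = str(uuid.uuid4())  # agent identifier
        self.deployer = deployer           # who is accountable for this agent

    def act(self, action: str, payload: dict) -> dict:
        # Attach the identifier so counterparties know they are dealing
        # with an AI agent, and with which one.
        request = {
            "agent_id": self.agent_id,
            "deployer": self.deployer,
            "action": action,
            "payload": payload,
        }
        # Activity logging: record what was done, by whom, and when.
        # A real-time monitor could subscribe to this same log stream.
        log.info(json.dumps({
            **request,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }))
        return request


agent = VisibleAgent(deployer="example-corp")
agent.act("book_shuttle", {"pickup": "hotel lobby", "time": "09:00"})
```

As the paper notes, implementations of these measures can vary in how intrusive and how informative they are; this sketch sits at the lightweight end of that spectrum.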
Risks of AI Agents
AI agents are perhaps where some of the biggest AI risks manifest themselves. What if AI agents use their agency to do things that cause active harm? Who ultimately is responsible for actions taken by an AI agent? How can we possibly ensure that an AI agent does no harm? After all, even in the real world, good humans with good intentions sometimes produce bad outcomes. The same risk exists for AI agents.
The study of risks from agentic AI is an evolving field, and we have much to learn.
References:
Chan, A., et al. (2024). Visibility into AI Agents. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT). arXiv:2401.13138.