Anekanta: Agentic AI for Leaders, The Good, Bad & the Ugly

Anekanta

According to Anekanta, the shift from insight to agency is the defining pivot of 2026, creating new opportunities and new risks for organisations.

The first wave of Generative AI (Gen AI) largely focused on analysis and content generation. Systems could analyse data, summarise reports, generate text, images or code, or assist users with queries. In almost all cases, Gen AI's involvement in business activities had to be initiated by humans, and its outputs were used or controlled directly by them.

Agentic AI changes that model.

Rather than simply responding to prompts, AI agents can be instructed to plan a chain of tasks, select tools and execute actions across systems. Instead of merely producing information, the system may now initiate and complete work. Agentic AI allows organisations to move beyond edge productivity and narrowly defined automation, towards enterprise-wide, outcome-driven operations, where systems can coordinate tasks and execute actions to achieve goals.

For organisations already familiar with automation technologies such as Robotic Process Automation (RPA) or machine-to-machine (M2M) integrations, the concept is not entirely unfamiliar. Whilst RPA is about adherence to a path, Agentic AI is about commitment to a goal. The delegated decision-rights and operational autonomy of agentic AI systems represent a significant step forward.

As with many technological advances, the emergence of Agentic AI presents both opportunities and risks.

Understanding the distinction is now becoming an important leadership issue.

What actually makes AI “agentic”?

Traditional automation systems operate using deterministic workflows. These workflows follow predefined logic that is predictable and bounded.

For example: If invoice > £10,000, then route to finance director for approval. Every possible pathway is defined in advance.

Agentic systems operate differently. Instead of following fixed instructions, the system is given a goal and determines how to achieve it.
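To pin down the deterministic pattern before turning to the agentic characteristics below, here is the invoice rule as a minimal code sketch; the threshold and routing names are illustrative only:

```python
# Deterministic workflow: every pathway is written out in advance.
# Threshold and routing targets are illustrative, not a real system.
APPROVAL_THRESHOLD_GBP = 10_000

def route_invoice(amount_gbp: float) -> str:
    """Route an invoice according to a fixed, predefined rule."""
    if amount_gbp > APPROVAL_THRESHOLD_GBP:
        return "finance_director_approval"  # high-value path
    return "automatic_processing"           # default path

print(route_invoice(12_500))  # -> finance_director_approval
# An agentic system would instead be handed the goal ("process this
# invoice correctly") and choose its own steps at run time.
```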

Agentic AI systems typically demonstrate three characteristics:

  1. Autonomy – the ability to initiate and execute tasks through accessible software tools
  2. Goal-driven reasoning – the ability to determine how to achieve an objective
  3. Adaptability – the ability to adjust behaviour in dynamic environments

This means the sequence of actions may not be predetermined.

The potential benefits of Agentic AI are discussed in a recent paper published by the European Commission, which supports the use of agentic systems to streamline complex workflows by orchestrating tasks across multiple systems and automating routine decisions.

For example, an AI procurement agent tasked with securing the best supplier contract might: analyse previous purchasing data, search external supplier markets, compare price and reliability indicators, request quotes and initiate procurement documentation. The exact path taken by the system can vary depending on the circumstances. In other words, the system is reasoning about how to achieve an objective, a step beyond simply executing predefined instructions.
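A hedged sketch of what such a goal-driven loop might look like in code is shown below. The tool names and the stubbed planner are assumptions for illustration, not a real agent framework; in production the planner would be an LLM choosing the next action at each step.

```python
# Minimal goal-driven agent loop (Python 3.10+; all names illustrative).
# A real system would call an LLM planner; here the planner is stubbed.
from typing import Callable

TOOLS: dict[str, Callable[[dict], dict]] = {
    "analyse_purchasing_history": lambda s: {**s, "history": "loaded"},
    "search_supplier_markets":    lambda s: {**s, "suppliers": ["A", "B"]},
    "compare_price_reliability":  lambda s: {**s, "best": "A"},
    "request_quotes":             lambda s: {**s, "quotes": {"A": 950}},
    "draft_procurement_docs":     lambda s: {**s, "docs_ready": True},
}

def plan_next_action(goal: str, state: dict) -> str | None:
    """Stand-in for an LLM planner: pick the next tool, or None when done."""
    for step in TOOLS:                        # a real planner reasons here;
        if step not in state["done"]:         # the order is not fixed upfront
            return step
    return None

def run_agent(goal: str) -> dict:
    state: dict = {"goal": goal, "done": []}
    while (action := plan_next_action(goal, state)) is not None:
        state = TOOLS[action](state)          # execute the chosen tool
        state["done"].append(action)
    return state

print(run_agent("secure the best supplier contract"))
```

The point of the sketch is that the sequence of tool calls is selected at run time rather than fixed in the program.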

This is what distinguishes agentic AI from traditional automation.

The Good: AI that gets work done

The potential advantages of Agentic AI are significant. Rather than assisting humans with individual tasks, AI agents can potentially complete multi-step processes across systems, where specialised agents coordinate subtasks and tools to achieve a larger objective.

For example, an agent could:

  • Retrieve relevant operational data
  • Analyse compliance requirements
  • Generate documentation
  • Submit reports
  • Escalate issues where required

This type of automation may extend beyond what traditional workflow systems can achieve.
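To make the escalation step above concrete, here is a hedged sketch of such a multi-step process with an explicit hand-off to a human when a check fails; every function, threshold and system name is hypothetical.

```python
# Hypothetical multi-step compliance workflow with a human escalation path.
def retrieve_operational_data() -> dict:
    return {"incidents": 2, "uptime_pct": 99.2}   # stand-in for a data source

def check_compliance(data: dict) -> list[str]:
    issues = []
    if data["uptime_pct"] < 99.5:                 # illustrative SLA threshold
        issues.append("uptime below contractual SLA")
    return issues

def generate_report(data: dict, issues: list[str]) -> str:
    return f"Operational report: {data}; issues: {issues or 'none'}"

def submit_report(report: str) -> None:
    print("Submitted:", report)                   # stand-in for a reporting API

def escalate_to_human(issues: list[str]) -> None:
    print("Escalating to compliance officer:", issues)

data = retrieve_operational_data()
issues = check_compliance(data)
submit_report(generate_report(data, issues))
if issues:                                        # the agent does not decide alone
    escalate_to_human(issues)
```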

For organisations operating large digital infrastructures, the ability to automate complex workflows may deliver significant efficiency gains.

Agentic Commerce (Banco Santander & Mastercard)

In the financial sector, early experiments are emerging around agent-driven transactions, where AI systems initiate and execute actions within tightly controlled environments.

One of the clearest signals that Agentic AI is moving from theory to practice emerged in early 2026 when Banco Santander and Mastercard completed Europe’s first live end-to-end payment executed by an AI agent within a regulated banking framework.

In the pilot, the AI agent was able to initiate and complete a purchase transaction on behalf of a user through Mastercard's “Agent Pay” infrastructure. The transaction was executed on live banking rails under controlled conditions, demonstrating that AI systems are now capable not only of recommending purchases but also of executing financial transactions autonomously within defined governance limits.

The experiment marks a significant milestone in what some analysts describe as “agentic commerce,” where AI systems search, negotiate, and transact on behalf of humans. While still experimental, the development suggests that financial institutions are actively preparing for a future in which non-human actors participate directly in economic activity.

Cybersecurity Implications – Agentic AI as both defender and attacker

Agentic AI is also beginning to reshape cybersecurity operations. Research suggests that autonomous AI agents could eventually assist security teams by automating tasks traditionally performed in Security Operations Centres (SOCs), including threat detection, vulnerability analysis, and incident response.

As agent architectures become more complex – combining reasoning, memory, and system integration – traditional cybersecurity frameworks may struggle to fully address the emerging risk landscape.

The Bad: When Agentic AI accelerates cyber attacks

At the same time, the capabilities that make Agentic AI useful can introduce new attack surfaces. Because AI agents can reason, access tools and execute actions across systems, they create security risks that differ fundamentally from traditional software vulnerabilities. Tool misuse, cross-system propagation of attacks and goal misalignment may all result in agents pursuing objectives in unintended ways.

The same capabilities that enable productive automation can also be exploited. In 2025, Anthropic reported what is believed to be one of the first cyber espionage campaigns in which AI agents conducted the majority of operational tasks autonomously.

According to analysis of the incident, attackers used agentic AI systems to automate large portions of the attack process, including reconnaissance and operational execution. In some cases, the AI agent carried out 80–90% of attack tasks with minimal human intervention, even assessing and organising the most valuable data prior to exfiltration.

The implications are significant.

Traditionally, sophisticated cyber attacks required teams of highly skilled specialists. Agentic AI has the potential to make decisions which direct parts of that process, enabling a single operator to coordinate attacks that previously required a larger group, and to accelerate activities such as vulnerability discovery, credential exploitation and phishing campaigns. Conversely, the same technology can benefit defenders by dynamically identifying vulnerabilities in software at scale, potentially allowing organisations to patch weaknesses faster than agentic AI can exploit them.

Evidence of accelerated vulnerability discovery

In one widely reported collaboration between Anthropic and Mozilla, the AI model Claude Opus 4.6 identified more than 100 previously unknown bugs in Firefox, widely regarded as one of the most security-hardened browsers on the web, in just two weeks, including multiple high-severity vulnerabilities. The Firefox engineers implemented fixes immediately. The experiment highlighted how quickly advanced AI systems can analyse large software codebases to identify weaknesses, helping to automate resilience.

Arguably, the Mozilla case illustrates the moment Agentic AI moved from “assisting” security researchers to “performing” as a security researcher, complete with its own tools, logic loops and the ability to work 24/7 without human intervention. It is also a good example of an autonomous system with a virtuous lifecycle: no human could have analysed the volume of code covered, so if the AI agent finds even one critical vulnerability that is then fixed and checked by a human specialist, that alone should be considered a significant value add.

The Ugly: When Agentic AI is unleashed without governance

Perhaps the greatest risk arises when agentic systems are deployed without appropriate governance structures. Unlike traditional enterprise software, AI agents may interact with multiple internal and external systems simultaneously.

These systems may include: financial platforms, operational control systems, security systems, enterprise databases, external services and APIs.

Without clear boundaries, this creates the possibility that an AI system could initiate actions with real-world consequences.

Key areas organisations must therefore address include:

  • Defining the parameters of autonomy
  • Deciding which actions require human approval
  • Determining which systems the agent can access
  • Establishing who is responsible and accountable for decisions made by the system
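One way to operationalise these controls, sketched below under stated assumptions, is to encode them as an explicit policy the agent runtime must consult before every action; the system and action names are hypothetical.

```python
# Hypothetical governance policy consulted before each agent action.
ALLOWED_SYSTEMS = {"crm", "document_store"}            # systems the agent may touch
HUMAN_APPROVAL_REQUIRED = {"payment", "contract_signature"}

def authorise(action: str, system: str, approver: str | None = None) -> bool:
    """Return True only if the action falls within the agent's mandate."""
    if system not in ALLOWED_SYSTEMS:
        return False                                   # outside the defined boundary
    if action in HUMAN_APPROVAL_REQUIRED and approver is None:
        return False                                   # human-in-the-loop gate
    # Logging the approver here would also give an accountability trail.
    return True

assert authorise("read_record", "crm")
assert not authorise("payment", "crm")                 # blocked until approved
assert authorise("payment", "crm", approver="finance_director")
```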

Emerging integration standards such as the Model Context Protocol (MCP), which originated at Anthropic, are helping connect AI agents directly to enterprise tools, applications and data sources. MCP is essentially a universal translator and orchestration layer for agents. However, these integrations also carry security risks, including the potential for malicious tools or injected instructions to manipulate the system.

In other words, the same architecture that enables powerful dynamic automation also introduces new attack surfaces.
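One practical mitigation is to expose only pre-approved tools to the agent. The sketch below uses the open-source MCP Python SDK to connect to a server and filter its tool list; the server command and tool names are assumptions, and the SDK's API may differ between versions.

```python
# Sketch: expose only vetted MCP tools to an agent. Uses the MCP Python SDK
# as documented at the time of writing; the API may change between versions.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

APPROVED_TOOLS = {"search_catalogue", "read_document"}  # hypothetical tool names

async def main() -> None:
    params = StdioServerParameters(command="my-mcp-server", args=[])  # hypothetical server
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            listing = await session.list_tools()
            # Drop anything not explicitly approved, so a malicious or
            # compromised server cannot inject unexpected capabilities.
            exposed = [t for t in listing.tools if t.name in APPROVED_TOOLS]
            print("Tools offered to the agent:", [t.name for t in exposed])

asyncio.run(main())
```

Constraining the tool surface in this way does not eliminate injected-instruction risk, but it narrows what a manipulated agent can actually do.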
