Enterprise AI Agents: Drivers, Challenges And Opportunities

Gaurav Pathak is the Vice President of Product Management, AI and Metadata, at Informatica.

For months, business and technology leaders have been bombarded with claims about how generative AI is going to change everything. But although AI transformed personal productivity in 2024, its enterprise impact has largely amounted to false starts with conversational AI and little movement on the bottom line.
But that's about to change. AI agents are coming, and they promise a lot more than conversations. They are fully autonomous systems with memory that extends beyond a chat session, native integration with the tools required to complete workflows and human-like reasoning to get out of tight corners.
I believe this will be a transformative shift for organizations, initiating a multiyear wave of innovation. This transformation will be driven by three primary trajectories: GPU infrastructure and energy requirements, algorithmic advancement and domain-specific data acquisition. The foundation of agentic progress relies heavily on computational infrastructure, particularly GPU architecture and energy consumption.
Although energy costs remain substantial at AI scale, they're a commodity constraint rather than a technological barrier. The industry anticipates significant innovation in hardware architecture, enabling the operation of larger models across various scales, including local deployments. This development mirrors the revolutionary vision of personal computing, suggesting a future where AI agents could become as ubiquitous as desktop computers.
The core algorithmic landscape of AI remains anchored in established methodologies: transformers, deep learning, gradient descent, Q-learning, A* pathfinding and Monte Carlo tree search. The innovation lies instead in the sophisticated chaining of these algorithms within training workflows, particularly in post-training reinforcement learning methods like those used for OpenAI's o1 and now o3 models. These approaches leverage human-in-the-loop annotations within simulated environments.
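As a loose illustration of that chaining, the toy sketch below combines candidate sampling, a reward signal and a selection step in a rejection-sampling style post-training loop. Every function here is a simplified placeholder I made up for this column (score_with_annotations stands in for a reward model trained on human preference labels); it is not a description of how any particular lab actually trains its models.

import random

# Toy illustration of chaining sampling with a reward signal in a
# post-training loop (rejection-sampling style). The generator and the
# reward stub are placeholders, not any lab's real pipeline.

def generate_candidates(prompt, n=8):
    """Stand-in for sampling n candidate reasoning traces from a base model."""
    return [f"{prompt} -> candidate reasoning {i}" for i in range(n)]

def score_with_annotations(trace):
    """Stand-in for a reward model trained on human-in-the-loop annotations."""
    return random.random()  # placeholder score in [0, 1]

def select_for_finetuning(prompts, keep_top=2):
    """Keep only the highest-scoring traces per prompt as new training data."""
    kept = []
    for prompt in prompts:
        candidates = generate_candidates(prompt)
        ranked = sorted(candidates, key=score_with_annotations, reverse=True)
        kept.extend(ranked[:keep_top])
    return kept

new_training_data = select_for_finetuning(["Plan a refund workflow"])
print(len(new_training_data), "traces selected for the next fine-tuning round")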
Although some researchers see no evidence of machine intelligence, many others, including Nobel laureates like Geoffrey Hinton, believe these systems are intelligent, have experiences and are capable of making decisions based on those experiences, much as people do. I anticipate further formalization of these concepts through 2025, with major players like OpenAI, Anthropic, Google and X leading developments in reasoning and planning models. Notably, open-source innovation will continue to advance rapidly, matching the pace of closed-source counterparts.
Most AI agents will follow a similar training cycle, relying on real-world and domain-simulated synthetic data (a simplified sketch of this loop follows the list):

1. Initial training utilizing high-quality, human-annotated real-world data.
2. Deploying agent prototypes in domain-specific simulations.
3. Generating synthetic training data.
4. Human and AI annotation of the synthetic data.
5. Refining agent capabilities.
6. Controlled introduction into real environments.
7. Collecting new real-world data to continue the cycle.
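A minimal sketch of that cycle, assuming hypothetical helper functions, might look like this (train_agent, run_in_simulation, annotate and deploy_with_guardrails are placeholders rather than any particular framework's API):

# Minimal sketch of the agent improvement loop described above.
# All helper functions are hypothetical placeholders, not a real framework API.

def train_agent(examples):
    """Fit or fine-tune the agent on annotated examples (stubbed here)."""
    return {"version": len(examples)}  # stand-in for model weights

def run_in_simulation(agent, n_episodes=100):
    """Roll the agent out in a domain-specific simulator to produce synthetic traces."""
    return [{"episode": i, "trace": "..."} for i in range(n_episodes)]

def annotate(traces, use_human_review=True):
    """Score traces with AI judges, escalating uncertain cases to human reviewers."""
    return [dict(t, label="acceptable") for t in traces]

def deploy_with_guardrails(agent, traffic_fraction=0.05):
    """Controlled rollout: expose the agent to a small slice of real traffic."""
    return [{"source": "production", "trace": "...", "label": None}]

# 1. Initial training on human-annotated real-world data.
real_world_data = [{"source": "human", "trace": "...", "label": "acceptable"}]
agent = train_agent(real_world_data)

for iteration in range(3):
    # 2-3. Prototype in simulation and generate synthetic traces.
    synthetic = run_in_simulation(agent)
    # 4. Human and AI annotation of the synthetic data.
    labeled = annotate(synthetic)
    # 5. Refine the agent on the combined dataset.
    agent = train_agent(real_world_data + labeled)
    # 6-7. Controlled real-world rollout, collecting new data for the next cycle.
    real_world_data += annotate(deploy_with_guardrails(agent))

The essential property is that each pass through the loop feeds newly annotated simulated and real-world traces back into training.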
Simulation environments serve as crucial testing grounds for agent development. Waymo's autonomous driving program exemplifies this approach, combining 20 million miles of real-world autonomous driving with approximately 20 million simulation miles per day. This approach exposes AI agents to long-tail cases faster and provides the opportunity to evaluate them in low-stakes scenarios.

The successful implementation of AI agents presents several key challenges alongside its opportunities. First, organizations must provide agents with contextual data, including real-world information, simulation data and annotated training sets drawn from structured and unstructured enterprise sources. In customer service applications, for example, this means integrating call logs, interaction histories, customer profiles and technical support databases. The challenge lies in creating data foundations that maximize high-quality context availability in a secure environment that preserves user privacy and minimizes harmful hallucinations (a sketch of such context assembly appears below).
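As an illustration only, the following sketch pulls records from a few assumed enterprise sources and applies a placeholder privacy filter before handing the result to an agent. The source names and fields (call_logs, crm_profile, kb) are hypothetical, not any product's actual schema.

# Illustrative sketch of assembling agent context from enterprise sources.
# A real deployment would pull from governed data catalogs and apply
# organization-specific masking policies rather than this simple filter.

from dataclasses import dataclass, field

@dataclass
class AgentContext:
    customer_id: str
    call_logs: list = field(default_factory=list)
    interactions: list = field(default_factory=list)
    profile: dict = field(default_factory=dict)
    kb_articles: list = field(default_factory=list)

def redact_pii(record: dict) -> dict:
    """Placeholder privacy step: drop fields flagged as sensitive."""
    return {k: v for k, v in record.items() if k not in {"ssn", "card_number"}}

def build_context(customer_id: str, sources: dict) -> AgentContext:
    """Pull structured and unstructured records and apply privacy controls."""
    return AgentContext(
        customer_id=customer_id,
        call_logs=[redact_pii(r) for r in sources["call_logs"]],
        interactions=[redact_pii(r) for r in sources["interactions"]],
        profile=redact_pii(sources["crm_profile"]),
        kb_articles=sources["kb"],  # unstructured support articles
    )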
Second, sophisticated protocols are required for both human-agent and agent-agent interaction, including standardized data exchange formats, privacy protocols and communication frameworks; a minimal message envelope is sketched below. The resulting interactions will generate valuable data for continuous system improvement and optimization.
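To make the idea of a standardized exchange format concrete, here is a hedged sketch of an agent-to-agent message envelope. The field names are illustrative assumptions on my part, not an existing interoperability standard.

# Hypothetical sketch of a standardized agent-to-agent message envelope.
# Field names are illustrative, not an established protocol.

import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AgentMessage:
    sender: str                # identity of the sending agent or human
    recipient: str             # target agent
    intent: str                # e.g., "get_invoice_summary"
    payload: dict              # structured task data, already privacy-filtered
    trace_id: str = ""         # correlates multi-agent workflows for auditing
    timestamp: str = ""

    def to_json(self) -> str:
        if not self.trace_id:
            self.trace_id = str(uuid.uuid4())
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

# Example exchange: a support agent asks a billing agent for an invoice summary.
msg = AgentMessage(
    sender="support-agent",
    recipient="billing-agent",
    intent="get_invoice_summary",
    payload={"account_id": "A-1024", "period": "2024-12"},
)
print(msg.to_json())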
Third, AI safety considerations include responsibility and accuracy, repeatability and transparency, ethical considerations and privacy, and governance frameworks. I anticipate strong regulatory interventions that could follow high-profile safety incidents, particularly regarding copyright protection for training data, bias mitigation and transparency controls in enterprise contexts.

As we move through 2025, AI agents will likely proliferate in almost everything we do. The success of these systems will ultimately depend on:

1. The speed and efficiency of training loops to cover the long tail of reality.
2. The quality of human experience and interactions.
3. Evaluation systems for agent output (a simple harness is sketched after this list).
4. Strong data management foundations.
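On the evaluation point, a minimal output-scoring harness might look like the following. The rubric checks and pass threshold are illustrative assumptions rather than an established benchmark.

# Toy evaluation harness for agent outputs. The rubric checks and threshold
# are illustrative assumptions, not an established benchmark.

def contains_required_fields(output: dict) -> bool:
    """Check that the agent returned the fields the workflow expects."""
    return {"answer", "sources"} <= output.keys()

def cites_allowed_sources(output: dict, allowed: set) -> bool:
    """Check that every cited source comes from an approved knowledge base."""
    return set(output.get("sources", [])) <= allowed

def evaluate(outputs: list, allowed_sources: set, pass_threshold: float = 0.9) -> bool:
    """Score a batch of agent outputs and gate deployment on the pass rate."""
    passed = sum(
        contains_required_fields(o) and cites_allowed_sources(o, allowed_sources)
        for o in outputs
    )
    pass_rate = passed / max(len(outputs), 1)
    print(f"pass rate: {pass_rate:.2%}")
    return pass_rate >= pass_threshold

# Example: two sampled outputs, one missing its citations.
batch = [
    {"answer": "Refund issued", "sources": ["kb-001"]},
    {"answer": "Escalated to billing"},
]
evaluate(batch, allowed_sources={"kb-001", "kb-002"})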
Organizations that establish strong data foundations and implement effective agent improvement loops will likely gain significant competitive advantages in their respective domains. And who knows? Maybe one day these AI agents will finally stop us from doom-scrolling so much, although they'll probably need a few million high-quality predictions to figure that one out.