The New Agentic AI Battleground: The Case for Unified Architecture

Enterprises that move AI from pilot to production will win by unifying structured and unstructured data with agentic AI — without sacrificing governance.

Sumeet Arora
April 28, 2026 · 4 min read

This is an all-too-common scenario: An organization is excited about the possibilities of AI. There’s tremendous internal buzz about the launch of an AI pilot. After the launch, though, there’s not much news about any real results. Eventually, the pilot winds down with little fanfare, and things go back to normal, except for a lingering, nagging, company-wide fear that the organization is further behind in the AI race.

According to an IDC report, approximately 88% of AI proofs of concept (POCs) launched by surveyed enterprises never reach production. The report states, “The high number of AI POCs but low conversion to production indicates the low level of organizational readiness in terms of data, processes and IT infrastructure.” MIT Media Lab’s “State of AI in Business 2025” report produced an even starker finding: 95% of generative AI pilots in enterprises have delivered no measurable ROI.

Why is this happening? Teradata, an autonomous AI knowledge platform, suggests that AI pilots stall and fail to scale because of fragmented data silos and architectures designed for static reporting instead of dynamic intelligence.

The unstructured data gap enterprises can't afford to ignore 

Enterprises are looking to extract insights, but they’re not taking the steps to handle structured data alongside the torrent of unstructured data like images, audio, PDFs and customer chats. They must come to grips with the fact that unified data architecture is not merely a technical preference for AI, but a strategic prerequisite.

Structured data is relatively easy to query, parse, and analyze because it’s organized in databases with strict structures and predefined fields. But unstructured data is fueling this new era of LLMs — and to a greater extent, agentic AI.

Unstructured data is growing three times faster than structured data. Common industry estimates indicate that unstructured data accounts for 80–90% of all new enterprise information.

To understand why this gap matters, consider an airline trying to analyze customer feedback from unstructured channels such as emails, chat logs, and qualitative surveys. The airline first tried an external LLM with strong prompt engineering. That approach worked well enough in development, but it broke down at scale.

The airline solved the problem by using open-source models to convert customer messages into vector embeddings. (Embeddings numerically represent pieces of unstructured data so AI models can compare them.) That made it possible to match conversations by topic and sentiment rather than by keywords alone.
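As an illustration only, not the airline's actual pipeline, the idea of matching by similarity rather than keywords can be sketched in a few lines. The `embed` function here is a toy bag-of-words stand-in so the example is self-contained; a real pipeline would plug in an open-source sentence-embedding model.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: a sparse
    # bag-of-words vector keyed by lowercase token.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

messages = [
    "my flight was delayed and the gate agent was rude",
    "loved the extra legroom on the new seats",
    "bag never arrived at the carousel",
]

# Rank stored messages against a topic query by vector
# similarity rather than exact keyword overlap.
query = embed("delayed flight complaint")
ranked = sorted(messages, key=lambda m: cosine(query, embed(m)), reverse=True)
print(ranked[0])  # the delayed-flight complaint ranks first
```

With a real embedding model, the same ranking step would also surface messages that share a topic without sharing any words, which is what keyword search misses.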

The external model that the airline initially tried to use may have been perfectly capable, but the barrier to moving it into production was a data architecture problem that they had to solve first. The lesson is that many AI failures are not model failures. They are architecture failures. And solving them requires enterprises to rethink the way their data environments are built.

Traditional data pipelines were built to move information from one place to another. Agentic AI requires something much more dynamic. 

From information pipelines to intelligence architecture 

Information pipelines, along with the extract, transform, load (ETL) processes that move structured data, are no longer sufficient. Today’s enterprise data also depends on an intelligence architecture comprising knowledge, context and measurable outcomes.

Dynamic context engines handle constant uncertainty, unpredictability and variability, particularly within the context of an enterprise’s own knowledge. Even the most advanced model is only as useful as the context it can access.

A unified knowledge layer is necessary to integrate important business context with data and insights to make data actionable — one where AI systems can reason, decide and act. Alongside measurable business outcomes, there must also be a governance layer built into the architecture, so enterprises can experiment and work safely, with built-in compliance and security.

All of this has to happen at speed, not just at scale, especially in environments where decisions must be made in real time and governance cannot be compromised. One example is defense, where structured and unstructured data must be processed in real time within strict security protocols. If military organizations need to determine the survivability of a given camouflage application in real time, troops on the ground can use secure apps to capture images of camouflaged assets and send them for analysis.

Combining structured and unstructured data in a single, governed database allows the system to process those images alongside data such as terrain patterns and threat signatures and deliver guidance to soldiers in situ.
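A minimal sketch of that pattern, using a hypothetical schema: structured attributes (terrain, threat type) and an image-derived embedding live in one table, so a single query path can filter on structured fields and then rank by similarity to a freshly captured image. SQLite and the two-number "embeddings" are stand-ins for a real governed analytic database and a vision model.

```python
import json
import sqlite3

# One store holding structured fields alongside image-derived
# embeddings (hypothetical schema, for illustration only).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE camo_assets (
    asset_id TEXT, terrain TEXT, threat TEXT, embedding TEXT)""")
rows = [
    ("A1", "desert", "thermal", json.dumps([0.9, 0.1])),
    ("A2", "forest", "optical", json.dumps([0.2, 0.8])),
    ("A3", "desert", "optical", json.dumps([0.7, 0.3])),
]
db.executemany("INSERT INTO camo_assets VALUES (?, ?, ?, ?)", rows)

def similarity(a, b):
    # Dot product as a stand-in for a real vector-distance function.
    return sum(x * y for x, y in zip(a, b))

# Toy "embedding" of the image just captured in the field.
field_image = [0.8, 0.2]

# Filter on structured attributes, then rank by embedding
# similarity: one query path over one governed store.
candidates = db.execute(
    "SELECT asset_id, embedding FROM camo_assets WHERE terrain = ?",
    ("desert",),
).fetchall()
best = max(candidates, key=lambda r: similarity(field_image, json.loads(r[1])))
print(best[0])  # the desert asset whose signature best matches the image
```

The design point is that the structured filter and the vector ranking run against the same governed store, so access controls and audit apply to both in one place rather than across two disconnected systems.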

The agentic AI opportunity, and the gap between ambition and execution 

Gartner predicts that 40% of agentic AI projects will be canceled by 2027, “due to escalating costs, unclear business value or inadequate risk controls.” Regardless, enterprises need to invest in foundational capabilities to build towards implementation, which Bain reports could demand 5% to 10% of technology spending over the next three to five years.

The Capgemini Research Institute pegs the economic value generation of agentic AI at $450 billion by 2028, even though just 2% of organizations surveyed are currently at full-scale deployment. And the Futurum Group predicts that as agentic AI replaces data pipelines, and enterprises move from experimental pilots to production, the data market could reach $541.1 billion in 2026 — and $1.2 trillion by 2031.

The market opportunity is enormous, and there are numerous real-world examples where a unified, context-rich architecture enables agentic AI to have an impact.

The enterprises that move beyond AI pilot projects into production will be those that solve their data readiness challenges: unifying structured and unstructured data with agentic AI capabilities across any environment, without compromising on governance.

About Sumeet Arora

Sumeet Arora is chief product officer at Teradata, leading engineering, product management, and innovation strategy. With deep expertise in analytics, AI, and cloud technology, he has a proven track record of building products that drive revenue growth. Previously, he was chief development officer at ThoughtSpot, leading its global engineering, product, and design teams. Prior to that, he served as SVP/GM of service provider networking at Cisco, overseeing its engineering and product management team. He serves on the board of CloudBees.