
From AI Ambition to AI at Scale: How Banks Industrialize Intelligence

Learn how banks are closing the gap between AI pilots and production with disciplined architecture, governance, and data foundations.

Simon Axon
May 5, 2026 · 18 min read

The AI inflection point and the paradox it created in banking

Artificial intelligence did not arrive in banking in 2022.

Financial institutions have been applying advanced analytics, machine learning, optimization algorithms, and statistical modeling for decades. Credit scoring, fraud detection, risk modeling, and liquidity forecasting have long relied on sophisticated data science.

What changed in late 2022 was not the existence of AI, but its visibility.

When OpenAI released ChatGPT in November 2022, generative AI moved from specialist research labs and enterprise deployments into the hands of the general public. For the first time, millions of people could directly interact with a system that appeared to reason, synthesize information, and respond conversationally to complex questions.

The technological foundations underpinning large language models had been evolving for years. But the psychological shift was immediate: advanced AI was suddenly in everyone's hands.

Customers could see it. Employees could test it. Executives could demonstrate it in board meetings. Regulators could observe its implications in real time. Within months, generative AI had reshaped expectations across industries, including banking.

This was the true inflection point: the start of an acceleration in expectation.

For banks, the impact of LLMs has been profound.

The impact of LLMs in banking

From below, customers who were already accustomed to seamless digital experiences, whether in retail, entertainment, travel, or e-commerce, began to expect more intelligent, responsive, and conversational interactions from their financial institutions. The bar for user experience had been rising steadily for years. Generative AI raised it again, sharply.

From above, boards and executive teams faced growing pressure to articulate an AI strategy. Investors asked about productivity gains. Analysts began to forecast efficiency improvements. Competitors announced pilots and partnerships. What had previously been a measured, multi-year innovation journey suddenly became an urgent strategic priority.

And from the side, regulators accelerated their focus. Public exposure to generative AI intensified conversations around explainability, bias, resilience, operational risk, and governance. In Europe particularly, regulatory frameworks began to crystallize around AI risk management and operational resilience.

In short, banks were hit by a convergence of forces, including rising customer expectations, executive and investor urgency, accelerating regulatory scrutiny, and a rapidly expanding vendor ecosystem. And this all happened at once. For an industry that is, by necessity, deliberate and risk-conscious, this compression of timelines created tension.

The challenge banks face

Banks are not designed for overnight reinvention. They operate within deeply interconnected systems built over decades. They serve millions, often tens of millions, of customers. They manage mixed analytical workloads across real-time payments, batch reporting, fraud detection, regulatory submissions, liquidity monitoring, credit modeling, and increasingly AI-driven decisioning. They operate across jurisdictions, under continuous supervisory oversight, with strict expectations around auditability, transparency, and resilience.

An AI demonstration that works elegantly in isolation does not automatically succeed within this environment.

Proofs of concept thrive in controlled conditions:

  • Curated data
  • Limited integration requirements
  • Constrained user groups
  • Temporary cost tolerances

Production systems must withstand something very different:

  • Continuous, high-concurrency demand
  • Cross-system integration
  • Full regulatory audit trails
  • Explainability requirements
  • Multi-year cost predictability
  • Operational resilience at national or regional scale

At that level, technical performance is only part of the story. Systems must stay up. Controls must hold. And costs must make sense over the long term.

This is where the paradox becomes unavoidable. AI capability has advanced rapidly and become highly visible, creating the impression that scaling it should now be straightforward. Yet the institutions trying to deploy it at scale are constrained by complexity, regulation, legacy systems, and economic accountability. The technology feels ready, but the organization often isn't.

The release of generative AI exposed the industrialization gap in banking.

Can a successful AI pilot actually make things worse?

In the past few years, successful AI pilots have often been mistaken for enterprise readiness. Early pilots create confidence, yet scaling them often reveals friction. Banks showed that they could build capable models, but the bigger question was whether the organization around those models was ready to live with them.

The hard part of AI in banking is not building models; it is absorbing extreme complexity predictably, securely, compliantly, and at scale, and doing so within defined economic boundaries.

The events of 2022 compressed timelines and magnified ambition. Infrastructure interdependence, fragmented data estates, and regulatory scrutiny became even more significant challenges, and what followed was inevitable. The industry moved rapidly from experimentation to execution and discovered that the distance between the two was far greater than anticipated.

Banks are now forced to ponder, “How can we industrialize AI in a way that withstands the full weight of banking complexity?”

Those who answer that question well will define the next decade of global financial services. Those who cannot will discover that pilot mode is not a competitive position.

The production-scale gap

If 2025 was the year of AI ambition, 2026 is the year of AI reality.

Across global banking, experimentation has been widespread. McKinsey's 2025 State of AI survey found that 88% of organizations use AI in at least one function, yet only 7% have achieved full enterprise-wide deployment.1 This gap reflects something deeper than effort or intent; it goes to how banks are built.

From hundreds of consumers to tens of millions

Proofs of concept, as noted earlier, typically operate in controlled environments: tens or hundreds of internal users, curated data sets, limited system integration, and short-term funding tolerance.

Production banking environments operate on an entirely different order of magnitude:

  • Tens of millions of retail customers
  • High-concurrency transaction volumes
  • 24/7 availability requirements
  • Multi-jurisdiction regulatory oversight
  • Continuous auditability
  • Multi-year cost accountability

The difference between a successful pilot and a production system is not incremental; it spans orders of magnitude.

Consider institutions already operating AI at true scale. Digital assistants such as Capital One’s Eno and Bank of America’s Erica serve tens of millions of customers; Erica alone has surpassed one billion cumulative interactions.2 These systems are embedded into core digital experiences, risk frameworks, and operational processes.

Scaling to that level requires model accuracy, but more than that, it requires robust infrastructure discipline.

Where initiatives stall

When AI programs fail to progress beyond a pilot, the underlying causes are rarely about algorithmic sophistication. They are typically rooted in four structural constraints:

  1. Data fragmentation: AI systems amplify underlying data inconsistencies. Fragmented architectures, duplicated data marts, and inconsistent definitions become more visible, and more costly, at scale.
  2. Integration complexity: Production AI must integrate with payments systems, CRM platforms, risk engines, reporting environments, and regulatory controls. Integration effort often exceeds model development effort.
  3. Governance overhead: Explainability, bias testing, resilience testing, model validation, audit logging, and regulatory documentation add layers of operational weight that are absent in pilots.
  4. Economic unpredictability: Consumption-based infrastructure models can create cost volatility as usage scales. What appears affordable at pilot scale can become financially opaque at enterprise scale.

Most stalled programs have run up against the realities of scale. Without architectural coherence, sustainable scale will forever be out of reach.

The illusion of readiness

Early generative AI demonstrations created a powerful narrative that capability had outrun institutional adoption. In reality, institutional absorption was a greater constraint than capability.

Demonstration success was widely mistaken for enterprise readiness. What worked for 5,000 users did not automatically work for 50 million. What functioned under temporary budget approval behaved differently under multi-year cost scrutiny. The challenges that arise in the transition to production scale often stem from gaps in systemic preparedness.

From transformation to industrialization

For several years, the dominant narrative in banking has been “AI transformation.” Transformation suggests disruption, reinvention, and bold strategic change. But the next phase for AI in banking requires something far less theatrical and more disciplined: industrialization.

Industrialization refers to the ability to deploy, govern, and scale AI systems in a repeatable, economically predictable, and regulatorily compliant manner across the enterprise.

Although breakthrough models are important, successful industrialization demands disciplined architecture. It requires:

  • Standardized data foundations rather than proliferating data silos
  • Repeatable deployment pipelines rather than bespoke integration efforts
  • Embedded governance frameworks rather than post-hoc compliance reviews
  • Economic models designed for sustained scale rather than temporary experimentation

Industrialization replaces heroic pilots with systematic capability, which signals maturity. It recognizes that the competitive advantage in AI will arise from sustained, repeatable execution at scale.

The institutions that close the production-scale gap will be those whose platforms are designed to absorb complexity safely and predictably as AI scales.

The industrialization imperative

The next phase of enterprise AI will be defined by orchestration: the ability to coordinate multiple intelligent systems while continuously absorbing the operational burden they create.

In my discussions with senior leaders across banking, the emphasis is already shifting. Early deployments focused on discrete use cases: chat interfaces, coding assistants, document summarization, targeted risk models. Those initiatives demonstrated capability.

In contrast, the emerging challenge concerns coordination. Leaders want to know how multiple models can interact across customer journeys, risk processes, compliance workflows, and operational systems in a controlled and sustainable way.

From model deployment to coordinated systems

Agentic AI—systems capable of autonomous multistep decision-making across workflows—has become part of this evolving conversation. Institutions are exploring autonomous workflows in fraud investigation, credit assessment, compliance monitoring, and customer servicing. These systems rely on coordinated decision-making across multiple models, each drawing on shared data, escalating exceptions where necessary, and operating within defined governance boundaries.
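The coordination-with-escalation pattern described above can be sketched in a few lines. This is an illustration only, not any bank's actual logic: the model stand-ins, risk scores, and the escalation threshold are all hypothetical.

```python
# Illustrative sketch only: coordinating multiple models on shared data,
# with exceptions escalated to human review inside a defined governance
# boundary. All model names, scores, and thresholds are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Decision:
    action: str                                   # "approve" or "escalate"
    reasons: list = field(default_factory=list)   # audit trail of model scores


def fraud_model(txn: dict) -> float:
    """Stand-in for a fraud-risk model; returns a risk score in [0, 1]."""
    return 0.9 if txn["amount"] > 10_000 else 0.1


def credit_model(txn: dict) -> float:
    """Stand-in for a credit-risk model; returns a risk score in [0, 1]."""
    return 0.2


def orchestrate(txn: dict, escalate_above: float = 0.8) -> Decision:
    """Run several models against the same record, logging every score so
    the final decision stays traceable and explainable."""
    decision = Decision(action="approve")
    for name, model in [("fraud", fraud_model), ("credit", credit_model)]:
        score = model(txn)
        decision.reasons.append((name, score))
        if score > escalate_above:
            # Governance boundary: route to a human investigator
            decision.action = "escalate"
    return decision


if __name__ == "__main__":
    result = orchestrate({"amount": 25_000})
    print(result.action, result.reasons)
```

Even in this toy form, the point is visible: every model output is recorded on the decision object, so the audit trail and the escalation boundary are part of the workflow itself rather than bolted on afterwards.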

At enterprise scale, this coordination exposes deep systemic complexity. Institutions such as the Bank for International Settlements have highlighted how interconnected AI systems can introduce new forms of systemic risk, particularly where decision-making becomes distributed across multiple models and data sources.3

Every additional model increases data interdependence. Automated workflows generate new audit requirements. Interacting agents introduce heightened resilience expectations. What began as model deployment gradually becomes systems engineering.

At scale, agentic AI does not simply automate tasks. It introduces systems that make interconnected decisions across domains, often in real time, and under regulatory scrutiny. In a banking context, this means credit decisions influenced by multiple models, fraud investigations that evolve dynamically, and customer interactions shaped by continuously adapting logic. The challenge goes well beyond deploying individual models to managing networks of decision-making systems whose behavior must remain controlled, explainable, and economically viable under sustained load.

The key is to approach AI from a foundational perspective: design architecture, data flows, and governance with scale in mind from the outset. Doing so can dramatically shorten the path from pilot to production, reducing friction and enabling coordinated growth rather than continual stalls for reactive fixes.

Why the data foundation becomes decisive

As AI ecosystems expand, the underlying data architecture becomes increasingly consequential. Fragmented data estates that were manageable under traditional analytics introduce instability under sustained AI load. Duplication creates inconsistency, cross-system latency undermines real-time decisioning, and divergent definitions increase regulatory exposure, particularly in multi-jurisdiction environments.

Orchestration cannot rely on improvisation. Sustainable industrialization depends on a coherent data foundation, one that minimizes unnecessary duplication, preserves lineage, and embeds governance directly into the data layer rather than retrofitting it at the model layer.

Without that coherence, increasingly sophisticated AI systems risk amplifying inconsistency rather than insight. Intelligence, at scale, reflects the quality and integrity of the data layers beneath it.

Economic discipline at enterprise scale

Economic considerations intensify as usage expands. Consumption-based infrastructure models may offer flexibility during experimentation, yet continuous operation across millions of interactions introduces a different planning dynamic. Infrastructure costs scale with usage, inference workloads compound, and data movement multiplies across environments.
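The planning dynamic above can be made concrete with a toy comparison of two pricing shapes. All per-call prices, volumes, and capacity figures below are hypothetical; the point is only how the two cost curves behave as usage grows.

```python
# Illustrative sketch only: why consumption-based pricing becomes harder
# to forecast at scale. Prices, volumes, and capacities are hypothetical.

def consumption_cost(interactions: int, price_per_call: float = 0.002) -> float:
    """Cost grows linearly with usage: every interaction is billed, so a
    spike in demand is also a spike in spend."""
    return interactions * price_per_call


def fixed_capacity_cost(interactions: int, monthly_fee: float = 50_000.0,
                        capacity: int = 40_000_000) -> float:
    """Cost steps up in pre-committed capacity blocks, so spend for the
    planning period is known in advance."""
    blocks = -(-interactions // capacity)  # ceiling division
    return max(blocks, 1) * monthly_fee


if __name__ == "__main__":
    for monthly_interactions in (100_000, 5_000_000, 50_000_000):
        print(f"{monthly_interactions:>12,} interactions: "
              f"consumption ${consumption_cost(monthly_interactions):>10,.0f}  "
              f"fixed ${fixed_capacity_cost(monthly_interactions):>10,.0f}")
```

At pilot volumes the consumption model looks far cheaper; at tens of millions of interactions the curves cross, and the consumption figure moves with every fluctuation in demand. That movement, not the absolute level, is what complicates multi-year planning.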

When financial exposure becomes difficult to forecast, it introduces hesitation at precisely the moment institutions are seeking operational confidence.

Long-term AI operating models therefore require economic clarity alongside technical rigor. Predictable cost structures align more naturally with multi-year investment horizons and regulatory scrutiny. Financial opacity, even in the presence of technical capability, can constrain strategic ambition.

From initiative to operating fabric

Over time, the implementation of AI changes how the bank actually operates. Architectural coherence across hybrid environments becomes a precondition for scale. Governance must naturally shift from episodic oversight to embedded control. Deployment processes must accommodate repetition rather than exception. Leadership alignment across technology, risk, compliance, and business functions becomes continuous rather than reactive.

In this phase, competitive advantage emerges from coherence. Coordinated model ecosystems, resilient data foundations, disciplined economic structures, and aligned operating models form part of a single industrial design. Institutions that engineer deliberately for that coherence will convert AI ambition into durable capability. Those that treat orchestration as an afterthought will find that scale introduces friction faster than it delivers advantage.

The data foundation: The prerequisite for AI at scale

AI does not fail in banking because models are weak. It fails because the data underneath them is fragmented.

Although that may sound obvious, it is not.

Most large banks do not operate on a single, clean data estate; they operate on layers built over decades. Core banking systems, payments platforms, risk engines, CRM environments, regulatory reporting warehouses, cloud-native applications—each was rational at the time it was introduced. Together, they form something far more complex.

Traditional analytics can tolerate a degree of fragmentation, but AI at scale cannot.

When models are deployed into live customer journeys, credit processes, fraud investigations, and compliance workflows, data inconsistencies surface quickly. Definitions that differ slightly across systems begin to matter. Latency that once seemed acceptable quickly becomes a barrier to real-time decisioning. Copies of data created for convenience multiply governance effort and introduce reconciliation risk. Under sustained AI load, duplication becomes drag.

A problem beyond data storage

The issue is not simply storage architecture. The real issue is trust.

At production scale, every automated decision must be explainable. Every output must be traceable to a data source. Every data transformation must be auditable. When data exists in multiple physical marts, with local logic layered on top, lineage becomes harder to demonstrate and control becomes more expensive to maintain.

This is where the idea of a single, trusted data foundation moves from architectural preference to strategic requirement. A foundation does not mean a single monolithic system. It means a coherent layer where data is governed once, defined once, and made available to multiple use cases without repeated copying and re-engineering.

When data is physically replicated across environments, each copy creates new governance overhead. Each transformation increases the surface area for inconsistency. AI systems amplify those inconsistencies because they operate across domains rather than within them.

By contrast, when institutions reduce unnecessary duplication and virtualize access where possible, they lower friction across the entire AI lifecycle. Models can be trained, validated, and deployed against consistent data. New use cases can be added without rebuilding pipelines from scratch. Governance is embedded at the data layer rather than bolted onto each model individually.

This becomes even more important as agentic systems emerge.

The emergence of agentic AI in banking

When multiple models interact across workflows, they draw on shared data in near real time. If that data is fragmented or inconsistently defined, coordination breaks down. If it is coherent and governed centrally, orchestration becomes manageable.

The data foundation is therefore not an implementation detail. It determines whether AI remains a series of isolated initiatives or becomes an operating capability.

For banks seeking to industrialize AI, this is the point where architectural discipline replaces incremental patching. The question is no longer whether a model can perform a task. It is whether the underlying data environment can support hundreds of such models interacting simultaneously, under regulatory scrutiny, without multiplying cost and structural friction.

Institutions that treat data architecture as core infrastructure will move faster in the long run. Those that allow fragmentation to persist will find that every new AI initiative carries hidden integration and governance effort.

At scale, intelligence reflects its foundation.

Cost predictability as a strategic differentiator

AI may begin as a technology investment. At scale it becomes a capital allocation decision.

AI is often framed as a productivity story. And in many cases, that promise is real. Coding assistants reduce development time. Automation lowers manual processing effort. Intelligent routing improves service efficiency.

In many institutions, the most important stakeholder in AI is not the CIO or the head of data. It is the CFO.

AI initiatives do not stall because models fail to perform. They stall because the financial model does not align with how banks plan, allocate capital, and manage risk over time.

Most AI programs are designed from a technical perspective first, with cost considerations layered in later. For CFOs operating on multi-year planning cycles, this creates a disconnect. Variable, consumption-driven cost models may work for experimentation, but they are far harder to reconcile with the predictability required for sustained investment at scale.

But at enterprise scale, another conversation quickly needs to take center stage: cost visibility.

Often, CFOs and risk leaders are not skeptical of AI’s potential. What they resist is financial opacity. If usage-based pricing fluctuates with every spike in customer interaction or model invocation, long-term planning becomes harder. When cost curves are difficult to forecast, investment committees hesitate.

And that hesitation is rational.

This dynamic is increasingly visible in industry research. Deloitte has noted that cost management and unclear ROI remain among the top barriers to scaling AI in financial services, while Gartner estimates that a significant proportion of AI projects fail to reach production in part due to unresolved economic and operational constraints.4,5

Why banks hesitate

Banks operate on multi-year planning cycles. Capital allocation decisions must stand up to regulatory scrutiny and shareholder expectations. Leaders need to understand not just the technical feasibility of an AI program, but its economic behavior over time.

Unpredictable cost structures introduce uncertainty into that equation.

This creates a quiet tension inside many institutions. On one hand, AI promises efficiency and competitive advantage. On the other, cloud consumption models can introduce volatility precisely as usage scales. The result is friction between innovation teams and finance functions, even when both are aligned on strategic intent.

Cost discipline therefore becomes a strategic capability, not simply a budgeting exercise.

Institutions that can align AI workloads with predictable operating models are more likely to sustain investment. When leaders have clarity on how costs scale, they can commit with confidence. They can expand use cases without reopening fundamental funding debates. They can treat AI as core infrastructure rather than experimental spend.

This is especially relevant in regulated environments, where operational resilience and financial resilience are closely linked. Infrastructure that cannot be forecast reliably introduces risk beyond technology alone.

Predictability does not mean inflexibility. It means that scale behaves in ways leadership understands.

Over the next decade, the banks that industrialize AI successfully will be those that treat economic architecture with the same seriousness as technical architecture. They will design platforms and partnerships that absorb growth without introducing uncontrolled financial volatility.

AI may begin as a technology discussion. At scale, it becomes a capital allocation decision. And capital allocation is where strategy becomes real.

The European advantage: Regulation as a catalyst, not a constraint

Regulation is often described as a headwind for innovation.

In the context of AI, that assumption deserves to be challenged.

Europe is entering the AI era with two major regulatory anchors already in place: the EU AI Act and the Digital Operational Resilience Act (DORA). Together, they set expectations around governance, transparency, resilience, and operational control. For some institutions, this has initially felt like added complexity.

A third anchor is emerging through the European Banking Authority, particularly in its guidance on internal governance and the use of machine learning in credit risk.6 These frameworks extend beyond technology and resilience into the core of banking decision-making, setting expectations for how models are developed, validated, and governed in high-impact use cases such as lending.

Over time, it may prove to be an advantage.

Clear rules reduce ambiguity. When expectations around risk management, model documentation, human oversight, and operational resilience are defined upfront, institutions can design with confidence. They do not need to guess what “good” looks like. They can build towards it.

A marathon, not a sprint

In less structured environments, innovation can move quickly at first. But as scrutiny increases, organizations often find themselves retrofitting controls into systems that were never designed for them. That process is disruptive and expensive.

European banks are unlikely to have that luxury. The regulatory bar is already visible. This changes behavior.

When governance is embedded earlier and documentation standards are higher, structural AI considerations such as human-in-the-loop oversight can be designed into workflows rather than added after incidents occur. Resilience testing becomes routine rather than reactive.

In the short term, this can slow superficial experimentation. In the long term, it accelerates industrialization.

Europe has seen this pattern before.

Learning from other technology regulation

The General Data Protection Regulation initially raised concerns about compliance burden and innovation drag. Yet over time, it established global norms around privacy and data protection. Many non-European institutions adopted similar standards because operating multiple regimes became impractical.

Financial regulation has followed similar paths. European frameworks around capital, operational risk, and stress testing have often shaped global supervisory expectations. Institutions that adapted early were better prepared when comparable standards emerged elsewhere.

AI may follow the same trajectory.

As generative and agentic systems become embedded in financial services globally, expectations around transparency, explainability, resilience, and accountability are unlikely to weaken. If anything, they will tighten.

European banks, operating within structured frameworks from the outset, are positioned to develop production-ready AI under conditions of scrutiny. That discipline can translate into trust. And trust, in financial services, compounds.

There is also a sovereignty dimension.

Why sovereignty matters

Hybrid and sovereign cloud models, common across Europe, align naturally with regulatory expectations around control and resilience. They support the idea that critical data and critical decisioning capabilities remain within defined governance boundaries. In an era where AI systems influence credit, fraud prevention, liquidity, and customer outcomes, that control becomes strategically important.

This direction is reinforced by European Central Bank supervisory expectations, which continue to emphasize robust model risk management, explainability, and institutional accountability in AI-driven decisioning.7

None of this guarantees a leadership position in the market. Regulation alone does not create advantage.

But institutions that treat regulatory clarity as a design input rather than constraint can move decisively. They can industrialize AI with governance, resilience, and accountability already built in. When global expectations converge around responsible AI, those institutions may find they have been preparing for years.

By the early 2030s, the competitive edge in banking will not rest solely on who deployed AI first. It will rest on who deployed it responsibly, predictably, and at scale.

Europe not only has an opportunity to define that standard; I would argue it is already doing so.

What leaders must do now

Moving from AI experimentation to AI operating at scale is not simply a technical progression. It requires leadership choices that shape architecture, governance, and long-term investment.

Most banks already have access to capable models and skilled teams. What determines whether those assets translate into meaningful outcomes is the clarity of direction from the top of the organization. Industrializing AI means building platforms and operating models capable of absorbing the complexity that intelligence inevitably introduces.

Establishing architectural coherence

AI programs often begin in isolated parts of the organization. Innovation teams experiment with new models, product groups explore customer-facing applications, and operational teams automate specific processes. These efforts can produce useful insights, but over time they can also create fragmentation.

Leadership plays a critical role in deciding whether those initiatives remain separate or converge into a shared platform. When AI development is supported by common data foundations, governance standards, and deployment environments, progress becomes easier to sustain. Teams spend less time solving the same infrastructure problems repeatedly and more time applying intelligence to real business challenges.

Architectural coherence is rarely accidental. It reflects deliberate decisions about platforms, standards, and long-term operating models.

Embedding governance from the start

As AI systems move deeper into customer journeys and operational processes, the expectations around oversight increase. Decisions influenced by models must be explainable. Data sources must be traceable. Accountability must remain clear, even when automation accelerates decision-making.

In highly regulated sectors such as banking, governance cannot be an afterthought. It needs to be designed into systems from the beginning. Leaders set the tone by defining the boundaries within which AI systems operate and ensuring those boundaries are visible to both regulators and internal stakeholders.

Human judgement remains central in this environment. Automated systems can process vast amounts of information, but responsibility for outcomes ultimately sits with the institution.

Bringing financial clarity to AI investment

As AI initiatives expand beyond pilot programs, financial considerations become more prominent. Infrastructure usage increases, new models are deployed, and data environments grow in intricacy.

Leadership teams need a clear view of how these dynamics translate into operating costs over time. When the financial behavior of AI systems is well understood, investment decisions become easier to sustain. Institutions can expand successful use cases without revisiting fundamental questions about economic viability.

Predictability in this context supports confidence. It allows AI to move from experimental spending to long-term capability building.

Amplifying human expertise

The impact of AI inside banks will depend heavily on how it interacts with people.

Financial institutions already employ large numbers of skilled analysts, technologists, and domain experts. AI systems have the potential to extend their reach by reducing routine analysis and surfacing insights more quickly. When integrated thoughtfully into workflows, these systems allow employees to focus on interpretation, judgement, and decision-making.

This dynamic reflects a broader shift in how organizations think about intelligence. Rather than replacing human expertise, successful AI systems tend to expand what skilled professionals can accomplish.

Maintaining that balance requires intentional design. Leaders must ensure that technology strengthens human decision-making rather than obscuring it.

Committing to long-term capability

Industrializing AI is not a short project. It unfolds over years as institutions refine data foundations, operating models, and governance frameworks.

This kind of progress rarely attracts immediate attention. It is slower and less visible than headline-grabbing demonstrations of new technology. Yet these foundational investments are what allow intelligence to operate reliably across large organizations.

Banks that make these commitments early tend to find that momentum builds over time. Once the underlying architecture is stable, new AI capabilities can be introduced with far less friction. Governance becomes routine, costs become easier to manage, and innovation moves closer to everyday operations.

Leadership determines whether that long-term trajectory takes shape.

The next decade of banking

Over the next decade, artificial intelligence will become embedded across every layer of banking. But its impact will not be determined by who adopts it first, or who experiments most aggressively.

It will be determined by which institutions can operate it reliably, repeatedly, and at scale.

Banking has always been defined by its ability to manage risk, capital, and trust within complex operating environments. AI does not change that. It intensifies it.

The institutions that succeed will be those that treat AI not as a series of initiatives, but as infrastructure. They will build data foundations that support multiple use cases without fragmentation. They will design governance that scales with automation. They will align financial models with long-term operational reality.

In doing so, they will close the gap between capability and execution.

And that is where competitive advantage will be defined. Not in what AI can do, but in what banks can sustain.

 

1. Alex Singla, et al., “The state of AI in 2025: Agents, innovation, and transformation,” McKinsey & Company, 2025.
2. Gary Class, “Conversational Intelligence in Banking,” Teradata, 2025.
3. Iñaki Aldasoro, et al., “Intelligent financial system: how AI is transforming finance,” Bank for International Settlements, 2024.
4. Richard Horton, et al., “AI ROI: The paradox of rising investment and elusive returns,” Deloitte, 2025.
5. Roxane Edjlali, “Lack of AI-Ready Data Puts AI Projects at Risk,” Gartner, 2025.
6. “EBA Discussion Paper on Machine Learning for IRB Models,” European Banking Authority, 2021.
7. “ECB guide to internal models,” European Central Bank, 2025.

European Commission, “The EU Artificial Intelligence Act,” 2024, https://artificialintelligenceact.eu/ (accessed March 22, 2026).

European Central Bank, “Guide to internal models and model risk management,” 2023, https://www.bankingsupervision.europa.eu (accessed March 22, 2026).


About Simon Axon

Simon’s primary focus is to help Teradata customers drive more business value from their data by understanding the impact of integrated data, advanced analytics and AI. With a background that includes leadership roles in Data Science, Business Analysis and Industry Consultancy across Europe, Middle East & Asia-Pacific, Simon applies his diverse experience to understand customers’ needs and identify opportunities to put data and analytics to work – achieving high-impact business outcomes.

Having worked for the Sainsbury’s Group and CACI Limited prior to joining Teradata in 2015, Simon is now the Global Financial Services Industry Strategist for Teradata.
