Why Data Platforms in Banking Rarely Fit Pure Agile Frameworks

Banking leaders often like the idea of running every technology program the same way.

It sounds efficient. It sounds modern. It gives the organization one language for delivery.

But data platforms rarely cooperate with that idea.

In banks, data work is not just another product backlog. It sits at the intersection of risk, finance, operations, compliance, analytics, customer reporting, and regulatory obligations. That alone changes the shape of delivery. McKinsey says banks now spend about 6 to 12 percent of their annual technology budgets on data, and that choosing the right data architecture can cut implementation time in half and lower costs by 20 percent. The same research says many banks still struggle to complete data transformations or realize the value they expected. 

That helps explain why pure Agile often looks better in theory than it works in practice for banking data platforms.

The problem is not that Agile is bad.

The problem is that data platforms in banking carry too much shared responsibility, too much architectural weight, and too many control obligations to behave like ordinary feature-delivery teams. When leaders ignore that, programs start looking productive at the ceremony level while staying stuck at the platform level. 

Data platforms are not just another product

A customer-facing feature usually has a visible user, a visible journey, and a visible outcome.

A data platform is different.

It serves many consumers at once. One part supports regulatory reporting. Another feeds risk aggregation. Another powers analytics. Another supplies front-end personalization or pricing logic. Another supports finance controls. Even when one use case is driving the investment, the platform often has to satisfy many more needs than that one team can see. 

That shared nature is why banking data work feels heavier.

The Basel Committee’s progress work on BCBS 239 says effective governance arrangements, adequate IT infrastructure, and sound data architecture are central to strong risk data aggregation and reporting. It also says many banks still struggle to produce timely, accurate, and complete risk reporting during stress because of fragmented IT infrastructure and manual aggregation processes. 

Once a platform is responsible for accuracy, timeliness, lineage, and cross-functional reuse, delivery stops being just about speed.

It becomes about trust.

Why pure Agile feels attractive at first

Pure Agile feels attractive because it promises movement.

For banks that are tired of long architecture debates, giant warehouse programs, or multi-year transformation roadmaps with little visible value, the idea of smaller increments is appealing. McKinsey’s data-architecture work says traditional data modernization efforts often get tied up in overplanning, long technology assessments, and road-map debates that consume months before real implementation begins. 

That critique is fair.

Plenty of data programs in banking have been too big, too slow, and too detached from actual business use. Some teams really do spend too much time designing the “future-state platform” and too little time delivering something useful. That is part of the reason product leaders push for more Agile approaches in the first place. 

And to be fair, some parts of data work absolutely benefit from Agile habits.

Use-case development, analytics products, dashboards, model features, service-layer APIs, and business-facing tools often improve when teams work iteratively. Where banks get into trouble is assuming the entire data-platform journey should follow the same pattern, at the same speed, with the same tolerance for ambiguity. 

The first problem is data quality, not sprint velocity

A lot of data-platform programs fail for a simple reason.

The data itself is not ready.

A team can write user stories, run stand-ups, and hit sprint goals, but none of that fixes poor lineage, missing definitions, duplicated records, inconsistent taxonomies, weak ownership, or manual controls buried in source systems. In banking, those issues are not edge cases. They are often the work itself. The Basel Committee said in late 2023 that nearly ten years after BCBS 239, banks were still at different stages of alignment and that additional work was required at all banks to attain or sustain full compliance. It also said boards should oversee robust data-governance frameworks and that banks should foster ownership and accountability for data quality across the organization. 

That is one of the clearest reasons pure Agile struggles here.

Agile assumes teams can discover and adapt as they go. Banking data platforms can do some of that, but only within limits. If core definitions are unstable, if control expectations are unclear, or if the same data feeds multiple regulated outputs, then too much flexibility early on can create expensive rework later.

This is why data teams often say the backlog is not the real bottleneck.

The real bottleneck is agreeing what the data means, who owns it, and whether it is safe to use in production at all.

The second problem is shared consumers

A feature team can optimize for its own user journey.

A banking data platform usually cannot.

One schema change can affect downstream reporting, risk models, surveillance controls, dashboards, partner interfaces, and operational workflows. One ingestion decision can make life easier for analytics while making audit evidence harder. One shortcut for one business line can create consistency issues for every other line using the same platform. McKinsey says banks often struggle because data transformations leave them managing fragmented warehouses and lakes at the same time, or because old and new environments end up running in parallel longer than planned, increasing cost and complexity. 

This is exactly why data-platform work does not behave like simple product delivery.

There are too many consumers, and they do not all value the same thing. Analytics teams may want flexibility. Finance may want consistency. Risk may want traceability. Operations may want stable interfaces. Compliance may want explainability. The platform team has to serve all of them, not just the loudest requester in the sprint review. 

That is not a sign that the team is un-Agile.

It is a sign that the platform has institutional responsibilities that no single product squad can fully absorb on its own.
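One way to make those institutional responsibilities visible is to check every proposed schema change against the dependencies each consumer has registered, not just against the requesting team's needs. This is a minimal sketch; the consumer names and schema shapes are hypothetical.

```python
# Each downstream consumer registers the columns (and types) it depends on.
# Names here are illustrative, not from any real bank.
consumers = {
    "regulatory_reporting": {"customer_id": "string", "balance": "decimal"},
    "risk_models": {"customer_id": "string", "opened_on": "date"},
    "analytics_dashboards": {"balance": "decimal"},
}

def breaking_changes(new_schema: dict) -> dict:
    """Map each consumer to the dependencies a proposed schema would break."""
    broken = {}
    for name, deps in consumers.items():
        missing = [col for col, typ in deps.items()
                   if new_schema.get(col) != typ]
        if missing:
            broken[name] = missing
    return broken

# Renaming 'balance' looks harmless to the requesting team, but it breaks
# two consumers that team may never see in its sprint review.
proposed = {"customer_id": "string", "ledger_balance": "decimal",
            "opened_on": "date"}
```

Here `breaking_changes(proposed)` flags both regulatory reporting and analytics, which is the whole point: the blast radius of a platform change is wider than any single squad's backlog.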

The third problem is architecture

In banking, data platforms are shaped by old architecture long before a team writes its first story.

Most large banks did not build their data environments cleanly from scratch. McKinsey describes data architecture in many organizations as something that has taken shape over time across transactional systems, warehouses, master-data tools, and newer platforms such as lakes. It also notes that these layers were added gradually, so many now fall short on scalability and flexibility. 

That layered history matters.

If a bank has a legacy warehouse, a newer lake, a set of manually maintained extracts, and separate reporting logic across functions, pure Agile will not magically dissolve that complexity. The team still has to decide what gets carved out, what stays, what gets replicated, what becomes canonical, and what remains at the edge. McKinsey’s legacy-modernization work argues that a strong banking modernization approach has to include the data stack, the core systems, and surrounding systems together, not one in isolation. It also says some banks have cut typical transformation timelines in half and costs by 70 percent by following a more deliberate legacy-modernization model. 

That does not sound like pure Agile.

It sounds like architecture-led sequencing with iterative delivery inside it.

And that is usually much closer to how good banking data-platform work actually gets done.

The fourth problem is that platform work is often invisible

Feature work is easier to defend because people can see it.

Platform work is harder because it often feels indirect.

A team may spend months standardizing ingestion, cleaning taxonomies, building reusable pipelines, documenting lineage, or separating reporting-grade data from exploratory data. Business sponsors sometimes get restless during that phase because they do not see a flashy front-end outcome. But without that work, the platform turns into a pile of local wins that never scales. 

McKinsey makes this distinction explicitly when it talks about data platform teams and data product teams.

Its data-architecture article says successful organizations increasingly orient around a product-and-platform model. Platform teams build and operate architecture, ingestion, modeling, and standard APIs, while data product teams focus on business-facing use cases. 

That split matters a lot in banking.

If leadership measures both groups by the same short-term delivery lens, the platform side is almost guaranteed to be undervalued. Yet that side is often what determines data quality, reuse, resilience, and regulatory defensibility later on. In other words, the platform does not just support delivery. In banking, it protects credibility. 

Why hybrid delivery is usually the stronger model

This is why hybrid delivery models tend to work better for banking data platforms.

A hybrid approach lets teams stay iterative where iteration is useful, but it also accepts that some decisions need more structure. Data products can often be developed incrementally. Platform standards, lineage models, critical reporting pipelines, governance controls, and shared architecture decisions often need clearer checkpoints and broader agreement. 

That is not a retreat from agility.

It is a recognition that banking data work includes both exploratory and foundational layers. PwC’s perspective on data mesh and data fabric says the right modernization approach has to align with business outcomes and the organization’s data-governance culture. It also says legal and compliance functions need to be closely involved in rollout. 

That point lands especially well in banking.

A bank can build a backlog for reporting features, data access, or analytics services. But the moment those outputs start feeding regulated processes, risk decisions, or customer-affecting operations, governance can no longer be treated as something that waits until the end. It has to shape the delivery model itself. 
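"Governance shaping the delivery model" can be as simple as tying required checks to how an output is classified, so a regulated pipeline physically cannot ship on the same path as an exploratory dashboard. The classifications and check names below are assumptions for illustration.

```python
# Sketch: governance embedded in the release path rather than bolted on at
# the end. Classification values and required checks are illustrative.
REQUIRED_CHECKS = {
    "exploratory": ["peer_review"],
    "business_reporting": ["peer_review", "data_quality_tests"],
    "regulated": ["peer_review", "data_quality_tests",
                  "lineage_evidence", "control_owner_signoff"],
}

def can_release(classification: str, completed_checks: set[str]) -> bool:
    """A change ships only when every check for its classification passed."""
    return set(REQUIRED_CHECKS[classification]) <= completed_checks
```

The iterative cadence is untouched for exploratory work; only the outputs feeding regulated processes pick up the heavier checkpoints. That is hybrid delivery in one function.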

A realistic example banks should recognize

A practical banking example helps.

Imagine a bank trying to modernize data for anti-money-laundering, know-your-customer, and enterprise risk reporting. The first instinct might be to create squads around each use case and move quickly. That can help at the edges. But before long, the teams run into common customer data, entity resolution, source-system inconsistency, local customization, and shared reporting needs across business units and geographies. 

McKinsey describes a French bank that improved the quality of its AML and KYC reporting while lowering the cost of subsidiary data architecture by 30 percent through a more harmonized approach that combined a shared global data vault and local customization. In a separate example, McKinsey says a bank reduced the time needed to implement new use cases from six months to six weeks after introducing a new reference data architecture and shifting work away from environments that were never designed for those use cases. 

Those are not examples of pure Agile squads sprinting their way to victory.

They are examples of architecture, governance, and delivery being designed together.

That is usually what successful banking data programs look like once you get past the slogans.

Why regulated reporting changes everything

The pure-Agile model struggles even more when data platforms support critical reporting.

McKinsey says a reference data architecture works best when each pillar is used for what it is suited for, with classical data warehouses supporting highly critical reporting such as regulatory reporting and financial reporting, while lakes and streaming support other use cases. It also says one common mistake is using the warehouse for advanced analytics just because it feels safer, even when a better architecture would preserve stability while serving modern use cases more effectively. 

That is a very banking-specific lesson.

Not every workload belongs in the same environment. Not every change deserves the same release logic. And not every data consumer can tolerate the same level of experimentation. Banks need enough architectural discipline to separate what must stay stable from what can evolve more freely. 
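The "each pillar for what it suits" idea can be expressed as a routing rule that sends workloads to an environment based on criticality and access pattern. A rough sketch, with platform names and workload fields as placeholder assumptions:

```python
# Route a workload to the environment suited to it, rather than defaulting
# everything to the warehouse. Field names and targets are illustrative.
def target_platform(workload: dict) -> str:
    if workload["criticality"] == "regulatory":
        return "warehouse"    # stable, controlled, audit-friendly releases
    if workload.get("latency") == "real_time":
        return "streaming"    # event-driven and real-time use cases
    return "lake"             # exploratory analytics and model development
```

The value is less in the three-line function than in the agreement behind it: criticality is decided once, explicitly, instead of re-litigated per sprint.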

This is one reason data-platform leaders often sound more cautious than product leaders.

They are not just protecting a backlog. They are protecting the integrity of reports, controls, and decisions that other parts of the bank rely on.

Leadership mistakes that make the problem worse

Leaders often make this harder than it needs to be.

One common mistake is forcing data-platform teams to justify themselves only through near-term business features. Another is treating every delay as a sign the team is not Agile enough. A third is asking for enterprise-wide reuse while still allowing inconsistent ownership, unclear taxonomies, and one-off exceptions for every business line. 

There is also a bigger strategic mistake.

Some leaders assume that if the bank moves to hybrid cloud, modern tools, or a mesh-style architecture, the delivery problem is mostly solved. It is not. Deloitte notes that banks are increasingly adopting hybrid cloud because of their investments in on-premises systems and because of regulatory-compliance requirements. That means many institutions will keep operating in mixed environments for years, which makes delivery design more complicated, not less. 

Tooling helps.

Architecture matters.

But operating model still decides whether the platform becomes dependable or just modern-looking.

What works better in practice

The stronger model is usually straightforward, even if it is not simple.

Banks do better when they separate platform responsibilities from data-product responsibilities, stage critical architectural decisions more deliberately, and let high-value use cases move iteratively on top of that foundation. They also do better when they stop trying to centralize everything at once. McKinsey’s legacy-modernization guidance says banks can often move faster by leaving data at the edge and building a flexible platform over time rather than starting from scratch. 

That is a practical lesson.

A banking data platform does not have to be fully finished before it creates value. But it does need enough governance, enough structure, and enough architectural honesty to avoid becoming another expensive layer that teams work around instead of trusting.

This is where hybrid really proves its value.

It keeps the bank from getting trapped in old-style multi-year data programs with nothing to show. But it also keeps the bank from pretending every data challenge can be solved sprint by sprint without deeper design choices. It balances learning speed with institutional responsibility. That is exactly what banking data work needs. 

Pure Agile is usually too narrow for the job

Banking data platforms rarely fit pure Agile frameworks because the work is broader than feature delivery.

It includes data governance, shared consumers, legacy architecture, critical reporting, control evidence, and platform responsibilities that cut across the whole institution. Agile habits still matter. Iteration still matters. Product thinking still matters. But on their own, they are usually too narrow for the job. 

That is why the better answer is not to reject Agile.

It is to stop pretending pure Agile is enough.

For banking data platforms, the more realistic path is hybrid delivery with clear platform ownership, stronger architecture choices, embedded governance, and iterative business use cases on top. That model may sound less fashionable than “Agile everything,” but it is much closer to what real financial institutions need when the data is not just supporting a feature, but supporting the bank itself.

© 2025 Phoenix Marcus. All rights reserved.