Most lenders believe their affordability models are compliant. They are not. If you cannot explain how your model makes a decision within seconds, you are already misaligned with FCA expectations. The FCA’s Data First strategy makes one point painfully clear: black box decisioning is no longer tolerated in UK financial services. Yet most firms continue to run affordability engines that cannot articulate even their most basic internal logic. And if you think your internal notes, ad hoc analyst explanations, or model documentation from three years ago count as explainability, you are already behind.

Two out of three firms are in the same position. The 2024 Bank of England and FCA AI Survey found that although 75 percent of firms are already using AI and machine learning, only 34 percent fully understand those systems. That statistic alone should unsettle you. It means most regulated organisations are deploying decision engines they cannot control, cannot justify, and cannot defend. If your affordability model sits within that majority, you are on borrowed regulatory time.
Your Model Was Built for Performance, Not Proof
The truth is simple. You built your affordability model to maximise predictive accuracy, reduce losses, and accelerate lending decisions. You did not build it to satisfy Consumer Duty requirements, fair value outcomes, or real time supervisory expectations. That is why most affordability models collapse the moment the FCA asks a difficult question. They hide undocumented variable interactions. They depend on inconsistent datasets. They rely on transformations nobody can reproduce. They output decisions nobody can explain. The FCA’s retail banking portfolio letter is explicit about the sector’s failings. Weak MI. Inconsistent definitions. Unclear model governance. Gaps in oversight. If any of this sounds familiar, then your affordability model is not an asset. It is evidence of foreseeable harm in waiting.
The Data That Should Make You Uncomfortable
You might think your firm is the exception. You are probably not. The 2024 AI Survey shows that 46 percent of firms only partially understand their deployed models. If you only partially understand your model, how can you claim it is fair, transparent, or safe? You cannot. Academic research puts a price on explainability: it suggests that shifting from an opaque model to an explainable one costs 15 to 20 basis points of ROI annually. That number is tiny compared to the cost of remediation, enforcement, or consumer redress. Meanwhile, the FCA’s enforcement data shows over 100 enforcement cases annually in which weak MI or governance failings contributed to potential consumer harm. The pattern is obvious. Firms that cannot explain their decisioning expose themselves to supervisory intervention long before they realise they are at risk.
Before You Read Further, Test Yourself
If you cannot answer all five of these questions within 60 seconds, your affordability model is not compliant.
- What version of your affordability model was running last Tuesday?
- Which three variables most influenced a decline decision yesterday?
- Where did each data point originate, and how exactly was it transformed?
- Can you reproduce yesterday’s decisions exactly, without manual reconstruction?
- Can you prove the model does not disadvantage a protected group under Consumer Duty?
If you hesitated on any of these, the FCA will not hesitate with you. The sketch after this list shows the kind of record that makes these questions answerable in seconds.
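To make that concrete, here is a minimal sketch of the kind of decision record a compliant pipeline would write for every outcome. It is illustrative only: the schema, the pinned version string, and the hypothetical way reason codes are derived from score contributions are assumptions, not a prescribed FCA format.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

MODEL_VERSION = "affordability-2.3.1"  # pinned, immutable model version (illustrative)

@dataclass
class DecisionRecord:
    """Everything needed to reproduce and explain one decision."""
    model_version: str   # answers: which model was running last Tuesday?
    input_hash: str      # answers: exactly what data went in?
    inputs: dict         # feature values as the model saw them
    reason_codes: list   # answers: which variables drove the outcome?
    outcome: str
    decided_at: str

def top_reason_codes(contributions: dict, n: int = 3) -> list:
    """Rank features by the magnitude of their contribution to the score."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [name for name, _ in ranked[:n]]

def record_decision(inputs: dict, contributions: dict, outcome: str) -> DecisionRecord:
    # Hash the exact inputs so the decision can be replayed byte-for-byte later.
    payload = json.dumps(inputs, sort_keys=True).encode()
    return DecisionRecord(
        model_version=MODEL_VERSION,
        input_hash=hashlib.sha256(payload).hexdigest(),
        inputs=inputs,
        reason_codes=top_reason_codes(contributions),
        outcome=outcome,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical decline: in practice the contributions come from the scoring model.
record = record_decision(
    inputs={"income_monthly": 2100, "essential_spend": 1750, "existing_credit": 9200},
    contributions={"income_monthly": -0.8, "essential_spend": -1.6, "existing_credit": -2.1},
    outcome="decline",
)
print(json.dumps(asdict(record), indent=2))
```

Persist records like this immutably and the first four questions become a simple lookup rather than a forensic reconstruction.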
Manual Explainability Is Not Explainability
If you depend on analysts to reconstruct rationale manually, you already fail the standard. Manual explanation is slow, inconsistent, subjective, and entirely incompatible with real time supervision. And real time supervision is no longer theoretical. It is already emerging through the FCA’s shift to continuous oversight and machine readable submissions. McKinsey’s Global Banking Annual Review 2025, “Why precision, not heft, defines the future of banking,” states that firms relying on manual oversight face escalating operational fragility and increased regulatory exposure. If your model cannot speak for itself automatically, then it cannot meet FCA expectations.
If You Do Nothing, You Are Choosing Regulatory Failure
Doing nothing is not the neutral option. It is an active decision to expose your firm to regulatory scrutiny. The FCA already intervenes where firms cannot produce model rationale. When your model declines a vulnerable customer without transparent justification, you will be expected to prove that decision was fair. When your input data is inconsistent across systems, you will be expected to explain why. When your model exhibits bias, you will be expected to remediate immediately. Failure to do so is not a misunderstanding. It is a breach of Consumer Duty. Eventually, an unexplainable model becomes the catalyst for costly remediation, a skilled person review, or worse. You are not waiting for a technical failure. You are waiting for a supervisory one.
What Good Actually Looks Like
A compliant affordability model does the following. It documents every variable and transformation. It maintains full version control. It captures lineage from raw data to final outcome. It produces real time reason codes for every decision. It monitors fairness continuously. It integrates directly with MI, governance, and board oversight. It does not rely on a human to “interpret” anything. It explains itself.
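On continuous fairness monitoring, the starting point can be as simple as comparing outcome rates across groups on every batch of decisions. Here is a minimal sketch, assuming the widely cited four-fifths rule of thumb as the alert threshold; the threshold, the group labels, and the shape of the decisions data are illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, outcome) pairs."""
    counts = defaultdict(lambda: {"approve": 0, "total": 0})
    for group, outcome in decisions:
        counts[group]["total"] += 1
        if outcome == "approve":
            counts[group]["approve"] += 1
    return {g: c["approve"] / c["total"] for g, c in counts.items()}

def disparate_impact_alerts(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times
    the highest group's rate (the four-fifths rule of thumb)."""
    rates = approval_rates(decisions)
    benchmark = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * benchmark}

# Illustrative daily batch: group labels would come from your own monitoring data.
batch = [("group_a", "approve")] * 80 + [("group_a", "decline")] * 20 \
      + [("group_b", "approve")] * 55 + [("group_b", "decline")] * 45

print(disparate_impact_alerts(batch))  # {'group_b': 0.55} -> investigate
```

An alert from a check like this is a prompt to investigate, not proof of bias. But without at least this level of monitoring, the fifth question in the self-test above has no honest answer.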
Where Panintelligence Fixes the Problem You Cannot Ignore
Panintelligence provides real time explainability for every affordability decision your organisation makes. It generates instant rationale, not reconstructed narratives. It records full data lineage from source to output, eliminating guesswork. It provides version controlled explainers so you can reproduce every historic decision exactly. It monitors fairness, bias, and outcomes continuously. It unifies fragmented data, removing the inconsistencies that undermine your credibility with regulators. And, by design, it wraps around your existing models: you do not need to rebuild your risk engines, only give them the governance and transparency they have always lacked. You can continue defending decisions you cannot explain, or you can switch to technology that explains every decision instantly. That is the choice.
Your Model Is Not Dangerous Because It Is Wrong. It Is Dangerous Because You Cannot Prove When It Is Right.
The FCA will not accept decisions you cannot justify. Customers will not trust decisions you cannot explain. And you should not run decisions you cannot defend. The future of affordability is transparent, governed, and explainable. Panintelligence gives you the ability to meet that standard before the regulator forces you to. If you cannot explain your model today, it is not your model controlling the risk. It is the risk controlling you.