CQC has scrapped numerical scoring: what evidence strategy looks like now

Two years of compliance investment have just been retired. Most large providers spent 2024 and 2025 rebuilding evidence libraries around the 34 quality statements, tagging artefacts to evidence categories, and engineering reporting so each category could score cleanly in inspection. In the initial response to its "Better regulation, better care" consultation, the Care Quality Commission committed to "remove scoring from our assessment approach", with "future rating judgements made holistically using professional judgement informed by evidence". The draft sector-specific frameworks published on 24 March 2026 confirm the direction. Quality statements are gone. Scoring is gone. What remains is a smaller, sharper set of questions at key-question level, and rating characteristics that describe what Outstanding, Good, Requires Improvement or Inadequate actually look like in practice.

This piece is about what that means for evidence strategy between now and implementation at the end of 2026, and what compliance teams should start doing differently this quarter.

Why CQC removed scoring from the assessment approach

The scoring model was not a design choice CQC defended and then changed its mind on. It was a rollout casualty of the Single Assessment Framework introduced in 2023. Two reviews commissioned in 2024 landed on the same day and reached compatible conclusions.

Dr Penny Dash's review into the operational effectiveness of the Care Quality Commission, published on 15 October 2024, found that CQC's ability to identify poor performance had "deteriorated" under the SAF. Assessments were slower, reports were less useful, and the regulator's operational credibility had narrowed. Published the same day, Professor Sir Mike Richards's review of the single assessment framework and its implementation recommended retaining the five key questions but suspending, and in all likelihood scrapping, the evidence categories and scoring system.

CQC accepted both. The initial response to the consultation records what has now been committed to publicly: scoring removed, rating characteristics re-introduced at each level, sector-specific frameworks. Of the 1,703 responses to the consultation, around 95% agreed or strongly agreed with each proposed framework change. Only around 80% agreed with the proposed changes to inspection and rating methodology. That 15-point gap is the most under-discussed data point in the whole reset. Providers are bought in on the framework. They are more sceptical about how inspection will actually feel under it.

Scoring was never legislated. It was CQC's operational translation of the framework, and the regulator is now translating it differently. For compliance directors, that means the shift in evidence posture is immediate in principle, even though the current framework continues to apply until implementation.

What replaces scoring: rating characteristics at key-question level

The five key questions stay: Safe, Effective, Caring, Responsive, Well-led. The four rating levels stay with them, and the "I statements" reflecting lived experience remain in place. What changes is the machinery underneath.

The 34 quality statements are out. In their place sit Key Lines of Enquiry, framed as structured investigative questions, and rating characteristics describing what each rating level looks like at key-question level. The adult social care draft carries 24 KLOEs split across the five key questions: six under Safe, six under Effective, three under Caring, four under Responsive and five under Well-led. The other three draft frameworks use the same architecture, tailored to sector shape.

Anyone who worked under CQC before 2023 will recognise this model. Rating characteristics and KLOE-style questions are re-introduced, not invented. The goal is not a new regulatory vocabulary. It is a return to one that CQC, providers and the public had shared understanding of, stripped of the SAF overlays that weakened it.

"Professional judgement informed by evidence" is the phrase doing the heavy lifting, and the one boards most need to understand. It is not subjective discretion. Inspectors will still be judging against published rating characteristics at each of the four levels. What changes is that they will not be aggregating scored evidence categories into a numeric rollup. They will be looking at the evidence a provider puts forward, reading it against the rating characteristics for that key question, and making a rating judgement across that key question as a whole.

The vocabulary shift is the workload shift. Fewer regulatory objects to evidence against, each one sharper. The compliance director's question is no longer "have we covered all 34 quality statements at an acceptable score". It is "against each key question, does our evidence demonstrate the rating characteristic we intend to be rated at".

What evidence strategy looks like when quantity no longer scores

The scoring-era instinct was to over-produce evidence. A well-run compliance function could map hundreds of artefacts across the 34 quality statements, often in multiple locations, because duplication protected scores where categories overlapped. That instinct is now an active disadvantage.

Rating characteristics are written at a level that rewards artefacts capable of demonstrating several characteristics at once. A clean, versioned training completion record that also shows competence sign-off, linked to named individuals and role definitions, can support Safe, Effective and Well-led simultaneously. A cluttered library of screenshots, spreadsheets and duplicate evidence submissions supports none of them well.

The board framing is straightforward: scoring optimisation rewarded coverage, and professional judgement rewards precision.

Inventory evidence against the new vocabulary

Every artefact currently mapped to a quality statement should be re-mapped against the five key questions and the draft rating characteristics most relevant to it. Artefacts that served only the scoring mechanism, and do not support a rating characteristic, can be parked.
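
A minimal sketch of what that re-mapping can look like as a data exercise. The statement codes, artefact fields and mapping table below are illustrative assumptions, not CQC's published taxonomy:

```python
# Hypothetical re-mapping of a tagged evidence library from quality-statement
# codes to the five key questions. All codes and fields are illustrative.
STATEMENT_TO_KEY_QUESTION = {
    "QS-SAFE-01": "Safe",
    "QS-EFF-03": "Effective",
    "QS-WL-02": "Well-led",
    # ...remaining statements mapped the same way
}

artefacts = [
    {"id": "A-101", "title": "Training and competence log",
     "quality_statements": ["QS-SAFE-01", "QS-EFF-03", "QS-WL-02"]},
    {"id": "A-102", "title": "Scoring-category cross-reference sheet",
     "quality_statements": []},  # served only the scoring mechanism
]

remapped, parked = [], []
for artefact in artefacts:
    key_questions = sorted({STATEMENT_TO_KEY_QUESTION[qs]
                            for qs in artefact["quality_statements"]
                            if qs in STATEMENT_TO_KEY_QUESTION})
    if key_questions:
        remapped.append({**artefact, "key_questions": key_questions})
    else:
        parked.append(artefact)  # no rating characteristic to support: park it

print(f"{len(remapped)} artefact(s) re-mapped, {len(parked)} parked")
```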

Compress duplication

Where the same underlying fact is evidenced in three systems to cover three quality statements, one well-managed source with role-based access and audit history is stronger under professional judgement than three partial ones.
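
What that consolidation decision looks like in data terms, as a sketch. The fact keys, source systems and fields are invented for illustration:

```python
# Hypothetical consolidation: the same underlying fact evidenced in three
# systems collapses to one canonical, well-managed source.
from collections import defaultdict

records = [
    {"fact": "dbs:nurse-0042", "system": "HR spreadsheet",
     "audit_history": False, "role_based_access": False},
    {"fact": "dbs:nurse-0042", "system": "shared-drive scan",
     "audit_history": False, "role_based_access": False},
    {"fact": "dbs:nurse-0042", "system": "compliance platform",
     "audit_history": True, "role_based_access": True},
]

by_fact = defaultdict(list)
for record in records:
    by_fact[record["fact"]].append(record)

for fact, sources in by_fact.items():
    # Prefer the source with audit history and role-based access; the
    # remaining partial duplicates can be retired.
    keep = max(sources, key=lambda s: (s["audit_history"], s["role_based_access"]))
    print(f"{fact}: keep '{keep['system']}', retire {len(sources) - 1} duplicate(s)")
```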

Move from completion to competence

Training completion is a common example. It has never been the same as competence. CQC's draft rating characteristics are expected to lean harder into demonstrated competence than logged completion. Evidence that records completion, assessment of competence, and named supervisor confirmation carries further than completion alone.
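
In data terms the gap is visible in the record itself. A minimal sketch, with hypothetical field names rather than any published evidence schema:

```python
# Hypothetical training record: completion is one field, competence is three.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrainingRecord:
    clinician_id: str
    course: str
    completed_on: date
    competence_assessed_on: Optional[date] = None  # observed assessment, not attendance
    supervisor_sign_off: Optional[str] = None      # named supervisor confirmation

    def evidences_competence(self) -> bool:
        # Completion alone is necessary but not sufficient.
        return (self.competence_assessed_on is not None
                and self.supervisor_sign_off is not None)

record = TrainingRecord("N-0042", "Medicines management", date(2026, 1, 14))
print(record.evidences_competence())  # False: completion logged, competence not shown

record.competence_assessed_on = date(2026, 2, 2)
record.supervisor_sign_off = "J. Patel, Clinical Lead"
print(record.evidences_competence())  # True: this record carries further
```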

Present evidence at key-question level

Board reporting in the old model aggregated quality statement scores. The new reporting layer needs to read at key-question level: Outstanding, Good, Requires Improvement or Inadequate per key question, backed by the rating characteristics the provider believes it is evidencing.
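
A sketch of that rollup, with invented ratings and coverage figures purely for illustration:

```python
# Hypothetical board-report rollup at key-question level. Ratings, counts
# and field names are invented example data.
KEY_QUESTIONS = ["Safe", "Effective", "Caring", "Responsive", "Well-led"]

evidence_positions = [
    {"key_question": "Safe", "intended_rating": "Good",
     "characteristics_evidenced": 5, "characteristics_total": 6},
    {"key_question": "Well-led", "intended_rating": "Outstanding",
     "characteristics_evidenced": 3, "characteristics_total": 5},
]

by_question = {p["key_question"]: p for p in evidence_positions}
for kq in KEY_QUESTIONS:
    position = by_question.get(kq)
    if position is None:
        print(f"{kq:<12} no evidence position recorded -- gap to close")
        continue
    coverage = (position["characteristics_evidenced"]
                / position["characteristics_total"])
    print(f"{kq:<12} intended: {position['intended_rating']:<12} "
          f"characteristics evidenced: {coverage:.0%}")
```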

None of this requires new regulatory scaffolding. The Regulation 19 evidence base for fit and proper persons, the six NHS Employment Check Standards mandated by the Department of Health and Social Care for NHS appointments, and the right-to-work, DBS and professional registration checks already sit underneath almost every Safe and Well-led rating characteristic in the drafts. The architecture that provides them has not changed. The way they are surfaced for rating has.

Scoring-era evidence investments that no longer pay their keep

Three investments made sense under scoring but no longer carry their weight.

Quality-statement-mapped evidence libraries organised around the 34-way taxonomy are the first. The taxonomy has been removed from the assessment approach. A library structured around it is an organising layer for a classification that no longer exists.

Score-maximising annotation is the second. Annotating evidence to explain how it supported multiple scoring categories added time without strengthening the underlying evidence, and professional judgement does not reward annotation density.

Large SAF-era binders are the third. Where these exist on paper or as bound PDFs, they should be retired. Evidence needs to be queryable, versioned and timestamped by source. Binders are a display layer, and CQC is not inspecting display.

What replaces these investments is a workforce compliance data pipeline that produces evidence in the format the rating characteristics now call for: expiry monitoring so certifications do not lapse between inspections, verification automation for GMC, NMC and HCPC registrations where possible, DBS renewal tracking, right-to-work re-check schedules, and reporting that rolls up to the board in the four-level language of the rating characteristics.
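
The expiry-monitoring leg of that pipeline is straightforward to sketch. The credential records, field names and 60-day renewal window below are assumptions for illustration, not a real register feed or any platform's schema:

```python
# Hypothetical expiry monitoring: surface credentials due for renewal before
# they lapse between inspections.
from datetime import date, timedelta

RENEWAL_WINDOW_DAYS = 60  # assumed lead time; tune per credential type

credentials = [
    {"clinician": "N-0042", "type": "NMC registration", "expires": date(2026, 4, 30)},
    {"clinician": "D-0017", "type": "DBS check", "expires": date(2027, 1, 15)},
    {"clinician": "G-0003", "type": "Right to work", "expires": date(2026, 3, 1)},
]

def due_for_renewal(creds, today, window_days=RENEWAL_WINDOW_DAYS):
    """Return credentials lapsing within the window, soonest first."""
    cutoff = today + timedelta(days=window_days)
    return sorted((c for c in creds if c["expires"] <= cutoff),
                  key=lambda c: c["expires"])

for cred in due_for_renewal(credentials, today=date(2026, 2, 20)):
    print(f"{cred['clinician']}: {cred['type']} expires {cred['expires']} "
          f"-- schedule re-check")
```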

How automation maps to evidence quality under the new framework

Where this begins to connect to operational tooling, it connects directly to what Credentially builds. The platform's 68% admin reduction number has historically been framed as time saved on onboarding tasks. Under the new framework it is more usefully read as a shift from quantity-of-evidence work to quality-of-evidence work.

Under scoring, a large share of compliance admin went into producing, organising and cross-referencing evidence mapped to scoring categories. When scoring is removed, that work loses its regulatory payoff. What compliance teams now need is fewer, better artefacts, produced as a by-product of operational flow and capable of supporting multiple rating characteristics across key questions. Automated primary source verification of professional registration, continuous credential expiry monitoring, role-based evidence of right-to-work and DBS status, and audit history by individual clinician map directly onto what the draft rating characteristics ask for under Safe and Well-led. Our guide to digital evidence management for CQC inspection covers what this looks like in practice.
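
One piece of that list, audit history by individual clinician, can be as simple as an append-only event log. This is a sketch under assumptions, not Credentially's actual data model:

```python
# Hypothetical append-only audit trail: verification events are timestamped
# and attributable, never edited or removed.
from datetime import datetime, timezone

audit_log = []

def record_event(clinician_id, event, detail, actor):
    entry = {
        "clinician_id": clinician_id,
        "event": event,
        "detail": detail,
        "actor": actor,  # who or what performed the check
        "at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry

record_event("N-0042", "registration_verified",
             "Professional register checked, status: registered", "automated check")
record_event("N-0042", "dbs_renewal_started",
             "Enhanced DBS re-check initiated", "compliance officer")

# Evidence for inspection: the full, timestamped history for one clinician.
for e in (e for e in audit_log if e["clinician_id"] == "N-0042"):
    print(f"{e['at']}  {e['event']}: {e['detail']} ({e['actor']})")
```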

A related operational point on onboarding itself. Healthcare providers using Credentially commonly move from an industry norm of around 60 days for clinical onboarding to platform-managed onboarding in as little as 5 days. The "5 days" number refers to platform-managed onboarding steps. Third-party checks, including DBS, still run in parallel on their own clocks, typically two to six weeks. The commercial and the regulatory points converge here. Shorter, better-evidenced onboarding is both a workforce fix for the dropout problem and a standing source of rating-characteristic-ready evidence on fit and proper persons. From the clinician's side, it also turns an intrusive, repetitive paperwork experience into something closer to a single completion path.

This is not a claim that any platform can produce the evidence CQC will ultimately want from the final framework. The drafts are open for consultation until 12 June 2026, and the final framework publishes in summer 2026. It is a claim that the direction of travel, from scoring of categories to professional judgement against rating characteristics, rewards precisely the kind of evidence that a healthcare-specific workforce compliance system produces as operational output.

Continuous credential monitoring is a useful example of the framing to get right. It is a provider-side operational posture, covered in our piece on continuous credential monitoring as the new standard, not a claim about CQC pushing for continuous assessment. The regulator's 2025/26 direction is sector-tailored, risk-based scheduling, not a rolling continuous assessment model. Continuous readiness on the provider side fits that direction precisely because it produces current evidence whenever assessment happens.

The next 12 months for compliance directors

Three things worth doing before final publication.

One, an internal evidence audit against the five key questions and the draft rating characteristics. Use the ASC draft's 24 KLOE structure as an illustrative map, adapted to the sector you operate in. The aim is a shorter, stronger evidence set, not a re-tagging exercise.

Two, a training review. Completion remains necessary. It is no longer sufficient. Where training evidence sits alongside competence assessment and supervisor sign-off, the rating characteristic is easier to evidence. Where it does not, the gap is visible.

Three, a governance reporting refresh. Board reporting should be capable of presenting a rating position per key question, with evidence summaries against the draft rating characteristics for each level. This both prepares the provider for rating at key question level and surfaces the evidence gaps worth closing before the framework settles.

Credentially has put together a CQC evidence readiness whitepaper aligned to the five key questions and the draft rating characteristics. It walks a compliance team through current evidence coverage against the new vocabulary, flags the scoring-era artefacts that no longer pay their keep, and highlights where workforce compliance data already produces rating-characteristic-ready evidence. It is a structured one-afternoon exercise, and the output is a shorter, sharper evidence position ready to inform board reporting and the provider's response to the 12 June 2026 consultation.

Download the evidence readiness whitepaper or read the rating-characteristics-aligned checklist to frame the next three months of work.
