The Building Safety Regulator’s Gateway 2 process is forcing project teams to prove design intent and control plans with a level of clarity the industry has often struggled to achieve under programme pressure. AI plan-checking is emerging as a practical way to interrogate drawings, models and documents before they go near a submission, surfacing omissions, contradictions and missing evidence while leaving a clear audit trail.
# TL;DR
– AI plan-checking stress-tests Gateway 2 bundles against project requirements and common regulatory criteria, flagging gaps for the design team to resolve early.
– It works best when hooked into the CDE, using rulepacks aligned to the project’s codes, scope and naming standards.
– Don’t expect automation to sign off compliance; use it to triage, evidence decisions and prove control rather than replace competent reviewers.
– Key risks are poor data provenance, unmanaged revisions, vague rule definitions and tools that can’t explain their outputs.
– Start with a pilot on one building zone, tune the rules, and only then scale across the full submission set.
# What AI plan-checking actually means for Gateway 2
AI plan-checking combines pattern recognition and rule-based logic to examine your submission set: 2D PDFs and DWGs, BIM models (IFC/Revit/NWC), schedules, specifications and method statements. The aim is not to certify compliance, but to create a repeatable preflight that catches missing documents, inconsistent dimensions, untagged fire doors, undefined penetrations, mismatched product references and unclear roles or control plans. Think of it as a design assurance layer that produces a machine-readable issues list and human-readable evidence pack.
Under the bonnet, most platforms blend three capabilities. First, rules engines that check naming conventions, document metadata and presence/absence of required items against a Gateway 2 checklist. Second, computer vision and geometry queries that measure escape routes, detect door swings, find unprotected shafts, or spot clashes around fire-stopping zones in a model. Third, language models that parse specifications and method statements to find contradictions (for example, a spec calling for a different door rating from the fire strategy) and to extract commitments into a traceable register.
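To make the first capability concrete, a presence/status check over the submission index can be plain set logic. This is a minimal sketch: the document codes, status codes and required set below are invented for illustration, not drawn from any real Gateway 2 checklist.

```python
# Hypothetical submission index: document code -> metadata.
submission = {
    "FS-001": {"title": "Fire strategy", "revision": "P04", "status": "S4"},
    "CCP-001": {"title": "Construction control plan", "revision": "P02", "status": "S4"},
}

required_docs = {"FS-001", "CCP-001", "SMK-CALC-001"}  # SMK-CALC-001 is absent above
authorised_statuses = {"S4", "A1"}  # illustrative CDE status codes

def preflight(index, required, statuses):
    """Flag missing required documents and documents at an unauthorised status."""
    flags = []
    for code in sorted(required - index.keys()):
        flags.append(("MISSING", code, "required document not in submission set"))
    for code, meta in index.items():
        if meta["status"] not in statuses:
            flags.append(("STATUS", code, f"status {meta['status']} not authorised"))
    return flags

for severity, code, reason in preflight(submission, required_docs, authorised_statuses):
    print(severity, code, reason)
```

Real platforms layer metadata parsing and naming-convention rules on top, but the core is the same: a declared requirement set checked mechanically against what was actually ingested.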
Success depends on how well your project requirements are translated into rules. That includes Approved Documents relevant to the scheme, your fire engineer’s strategy, accessibility expectations, structural design assumptions, manufacturer data and your naming standards (Uniclass/BS 1192/ISO 19650). The better the mapping, the sharper the flags.
The output needs to be explainable. Each flag should show provenance: which file, which revision, which element, which clause it relates to, and what evidence would close it. That audit trail is what helps answer Building Safety Regulator queries without spiralling the RFI count.
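A flag with that provenance might be structured like the record below. The field names and values are illustrative assumptions, but the principle is the point: every flag carries its source file, revision, model element and criterion, plus what would close it.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One explainable check result; all fields are illustrative."""
    rule_id: str          # which rulepack rule fired
    severity: str         # "high" / "medium" / "low"
    source_file: str      # file the evidence came from
    revision: str         # revision ID of that file
    element_guid: str     # model GUID, if geometry-related ("" otherwise)
    clause: str           # criterion or clause the rule maps to
    finding: str          # what was observed
    evidence_needed: str  # what would close the flag

flag = Flag(
    rule_id="FD-RATING-01",
    severity="high",
    source_file="A-201-P05.pdf",
    revision="P05",
    element_guid="2O2Fr$t4X7Zf8NOew3FLKI",
    clause="Fire strategy s.4.2 (illustrative reference)",
    finding="Door tagged FD30 on a protected escape route",
    evidence_needed="Confirm FD60 rating or justify departure",
)
```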
# On real UK projects: where it fits in the Gateway 2 sprint
On a typical higher‑risk residential project, the Gateway 2 sprint starts when Stage 4 design is nearing freeze. The design manager will push the latest federated model and key PDFs into the CDE. AI plan-checking sits immediately after that upload, running a preconfigured rulepack. The tool creates an issues register linked back to model elements and documents, assigning items to discipline leads and forming part of the design team’s weekly coordination cycle. When all high and medium issues are resolved or justified, the curated evidence bundle is exported into the submission pack with a decision log.
Here’s a scenario from a UK mid‑rise mixed‑use scheme heading for Gateway 2 approval. The Principal Designer is under pressure to lock the fire strategy while the frame contractor is pressing for precast drawings and the MEP lead has late changes around risers to suit procurement. A two-hour AI sweep of the federated model and PDFs flags twenty-eight doors on escape routes tagged FD30 where the strategy implies FD60, and highlights a smoke extract calculation referenced in the narrative but missing from the appendices. It spots three floor plans where the stair widths differ from the sections, and notes that the construction control plan mentions inspections but lacks hold points for firestopping sign-off. The team triages the list, assigns actions in the CDE, and a day later exports a clean evidence log showing how each point was addressed or why it is not applicable. That log becomes an index in the cover letter and a backstop if queries arise. The submission goes in with fewer surprises and a firmer story about control.
## Checklist for standing up AI plan-checking on a live Gateway 2 submission
– Define your rulepack: map project standards, fire strategy criteria and submission requirements into clear checks with acceptance thresholds.
– Lock your sources: agree which model versions, PDFs and schedules are in-scope, and freeze them in the CDE for the preflight run.
– Tag responsibilities: ensure each flagged item routes to a named discipline lead (architect, fire, structures, MEP, PD) with a due date.
– Calibrate geometry tolerances: set sensible thresholds for measurements (doors, stairs, corridor widths) to avoid noise from model rounding.
– Wire in provenance: keep file hashes, revision IDs and model GUIDs so every issue can be traced back to a specific object and revision.
– Run a pilot zone: test on a single core or typical floor before scaling across the building to tune false positives and language parsing.
– Export neatly: standardise the issues log format and evidence pack so it drops cleanly into your Gateway 2 submission index.
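The tolerance-calibration step above can be sketched as a three-state check: pass, fail, or a "review" band around the threshold that routes likely rounding noise to a human rather than raising a hard failure. The 850 mm requirement and 5 mm band below are assumed values for the sketch, not figures from any Approved Document.

```python
REQUIRED_MM = 850    # assumed minimum clear width for the sketch
TOLERANCE_MM = 5     # assumed band to absorb model rounding

def check_clear_width(measured_mm):
    """Return 'pass', 'review' (inside the tolerance band) or 'fail'."""
    if measured_mm >= REQUIRED_MM:
        return "pass"
    if measured_mm >= REQUIRED_MM - TOLERANCE_MM:
        return "review"  # likely rounding noise; route to a human reviewer
    return "fail"

for width in (851.0, 848.2, 830.0):
    print(width, check_clear_width(width))
```

The review band is what keeps the issues register useful: without it, every 849.8 mm corridor becomes a hard failure and the team stops trusting the output.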
# Pitfalls and fixes when using AI for BSR submissions
AI that cannot explain its findings creates more questions than answers. Choose tools that show exactly why a flag was raised and how it links to design intent or a cited clause. If the software reads like a black box, your team will re‑check everything manually, defeating the point of automation.
Don’t treat rules as static. Gateway 2 is iterative, and rulepacks should be versioned as design develops and procurement clarifies products. Keep a change log when criteria are updated, especially if you shift from performance specs to named products.
Mind the data lineage. Gateway 2 relies on trust in your controls; if a flagged issue refers to a superseded revision or a model that isn’t frozen, you undermine your own case. Embed AI runs in your CDE workflow with status codes that only ingest authorised revisions.
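One way to enforce that is to pin each run to the frozen set by hashing the in-scope files at freeze time and refusing to proceed on any mismatch. This is a sketch under assumptions: the manifest structure and function names are invented, and a real CDE integration would key on container IDs and status codes rather than local paths.

```python
import hashlib

def sha256_of(path):
    """Hash a file so a check run can be pinned to the exact frozen bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_frozen(manifest):
    """manifest maps path -> hash recorded at freeze time.

    Raise if any in-scope file no longer matches its frozen hash,
    so a run can never silently ingest a superseded revision.
    """
    mismatches = [p for p, expected in manifest.items() if sha256_of(p) != expected]
    if mismatches:
        raise RuntimeError(f"superseded or altered inputs: {mismatches}")
```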
Guard against scope creep. AI plan-checking should focus on clarity and completeness against agreed criteria, not open-ended design critique. Resist turning it into a proxy for value engineering or aesthetic debate; keep the checks measurable and relevant to safety and control.
## Common mistakes
– Uploading a mixed bag of drafts and finals, then blaming the AI for “conflicting results.” Freeze the set before running checks.
– Letting subcontractor models bypass the rulepack because “they’re specialists.” Their inputs often drive the highest-risk penetrations and interfaces.
– Assuming the AI “approves” compliance. It doesn’t; it highlights gaps and contradictions for competent people to resolve.
– Ignoring language parsing on specs. Many issues hide in narrative text, not the drawings, and weak parsing misses them.
# What to watch next for AI-enabled Gateway 2 assurance

The next wave will tighten model-to-text alignment: geometry checks tied directly to narrative commitments in fire strategies and construction control plans. Expect deeper CDE integrations so issues, decisions and evidence logs are locked to information containers, and better out-of-the-box rulepacks tuned to UK standards without heavy project-by-project scripting.
Bottom line: use AI to make Gateway 2 submissions cleaner, faster and better evidenced, not to replace professional judgement. Watch for tools that can prove provenance and explain decisions as clearly as any human reviewer.
# FAQ

## Does AI plan-checking replace the Principal Designer’s duty at Gateway 2?
No. It’s a support tool that helps the PD and design leads find gaps, contradictions and missing evidence more quickly. The legal responsibilities around design coordination, competence and control remain with dutyholders. Use AI outputs as structured inputs to professional decision-making.
## How do we handle subcontractor models and documents in the AI run?

Bring them into scope with the same naming standards and revision control as the consultant models. If a specialist insists on native formats, agree an exchange view (such as IFC or locked PDF sets) to maintain traceability. Make sure responsibilities for closing flags are allocated to the right party through the CDE.
## What about data privacy and ownership when using cloud AI tools?

Check where data is processed and stored, and ensure contracts reflect UK data requirements and client confidentiality. Disable any vendor training on your project data by default, and use project-specific environments with clear access controls. Keep an internal copy of all inputs, outputs and model GUID mappings.
## How do we prove to the Building Safety Regulator that AI-assisted checks are credible?

Provide an evidence appendix that shows the rulepack version, the information set ingested, issue logs with provenance, and the close-out notes or justifications. Avoid claiming that automation “approved” compliance; frame it as a documented assurance process that supports competent reviews. The clarity and traceability usually matter more than the brand of tool.
## How do we control false positives and avoid drowning the team in noise?

Start with a pilot area and tune thresholds, then lock the rulepack for each run to avoid drift. Categorise findings by risk and assign owners so noise doesn’t clog workflows. If an entire class of flags is consistently not applicable, disable that rule rather than relying on manual dismissals every time.
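That suppression step can be as simple as filtering by rule ID before counting what remains by severity. The rule IDs and flag list below are hypothetical, chosen only to show the shape of the triage.

```python
from collections import Counter

# Hypothetical flag list: (rule_id, severity). Rule IDs are illustrative.
flags = [
    ("FD-RATING-01", "high"),
    ("NAMING-03", "low"),
    ("NAMING-03", "low"),
    ("NAMING-03", "low"),
    ("STAIR-WIDTH-02", "medium"),
]

disabled_rules = {"NAMING-03"}  # a rule class confirmed not applicable

triaged = [f for f in flags if f[0] not in disabled_rules]
by_severity = Counter(sev for _, sev in triaged)
print(by_severity)  # severities remaining after suppression
```

Disabling the rule in the rulepack (with a note in the change log) keeps the decision auditable, unlike silently dismissing the same flags run after run.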