Kav AI Platform (KAP) — PRD v3.3

Real-Time Integrity Intelligence System™ — Active Physical Intelligence at Industrial Scale

Kav AI Development Team

2026-04-28

**Document status.** This document presents Kav AI’s product vision, completed milestones, and roadmap through Q4 2026. It is intended for enterprise procurement stakeholders, customers, executive decision-makers, and partner integrity-engineering firms. Technical implementation details are contained in the Technical Appendix. v3.3 preserves the v3.2 strategic framing and integrity architecture in full. It consolidates external-facing material that matured between v3.2 and late April 2026: the OI.Expert Letter of Intent (partner-integrated delivery), MVP v0.2 (facility-scale evidence — Repeatability, Scale, Decision Context), and the Active Physical Intelligence promo deck (continuous coverage, fixed infrastructure, cross-source correlation moat). Q2 2026 commitments are engineering-backed. Q3 and Q4 items are directional and subject to revision.
| Platform status | Current focus | Next milestone |
| --- | --- | --- |
| Live — App MVP deployed | AI Q2 Delivery — Jun 2026 | Q3 Integration — Sep 2026 |

1 Executive Summary

1.1 Kav AI — Real-Time Integrity Intelligence System™ · Active Physical Intelligence™

Kav AI is a Real-Time Integrity Intelligence System for refinery and petrochemical operators. It is the first platform to close the loop between what sensors see, what process data says, and what engineering codes require — delivering continuously updated risk assessments that an integrity team can act on in hours, not weeks.

The platform operates as a continuous Active Physical Intelligence™ layer over the facility: it ingests visual inspection data (RGB, thermal, OGI), reads operational data from SCADA and process historians, and reasons across both inside a persistent 3D model of the plant. In v3.2 that loop was extended by autonomous robot patrols, fixed plant infrastructure, and CAD / engineering context. v3.3 sharpens the operating posture for that loop: continuous coverage from commodity fleets, cross-source confirmation as the noise filter, and a partner-integrated delivery model that puts qualified integrity engineers in the human-in-the-loop seat. The result is a single system that connects physical condition to process context to damage mechanism to risk score to recommended action — the full integrity chain, automated and auditable.

Kav AI is not a hardware manufacturer, not a SCADA replacement, not a robotics company, and not a general-purpose industrial AI platform. It is the integrity intelligence layer that sits above existing systems — purpose-built for the closed-loop analytical chain from Integrity Operating Windows (IOWs) through Damage Mechanism Reviews (DMRs) to prioritised inspection plans. Kav AI reads from operational systems; it never writes to SCADA or control systems, never actuates valves, and never commands field equipment.

1.1.1 Why refinery and petrochemical first

Kav AI’s beachhead market is downstream oil and gas — refineries and petrochemical plants — where the convergence of regulatory pressure (API 580/581/584), high unplanned downtime cost, and fragmented inspection workflows creates the strongest initial pull. The platform’s architecture is industry-agnostic, but the domain model, validation data, and go-to-market are focused here until traction is established. Expansion to adjacent asset-intensive industries (power generation, LNG, chemicals) follows.

Figure 1. Kav AI Platform — System Overview. All data flows from operator systems are read-only. Dashed arrows indicate recommendation outputs requiring operator confirmation. In v3.3, the same system boundary now also encompasses robot patrol ingestion, CAD-backed asset context, and fixed-infrastructure continuous coverage without changing Kav AI’s observe-reason-recommend posture.

| Business consequence | Kav AI response |
| --- | --- |
| Unplanned shutdowns at mid-sized refineries cost $500K–$2M per day¹ | Continuous anomaly detection surfaces emerging failures before shutdown is required |
| Less than 10% of captured inspection imagery is reviewed by a qualified engineer under current workflows² | AI-assisted triage reviews 100% of imagery, flagging anomalies for engineer confirmation |
| Facilities take 5–10 days to move from inspection capture to actionable integrity decision² | Kav AI reduces the triage-to-work-order cycle from 5–10 days to < 4 hours |
| Solomon Associates benchmarking shows a 4–8 percentage-point availability gap between top- and bottom-quartile facilities, worth tens of millions annually per site³ | Kav AI’s Solomon-benchmarked outputs provide directly comparable, defensible performance metrics |
| Robot inspection coverage capped by ~90-minute battery endurance and manual charging | Fixed plant infrastructure (docking, beacons, 5G) plus high-endurance platforms unlock ≥ 20 h/day continuous coverage at one site |

¹ Solomon Associates, Downstream Industry Report, 2023. ² Kav AI customer discovery interviews, 8 facilities, 2025. ³ Solomon Associates, RAM Study benchmarking data, 2023.

1.2 What Kav AI is — and is not

To anchor procurement and partner conversations on the same operating identity, the platform’s boundaries are stated explicitly:

| Kav AI is | Kav AI is not |
| --- | --- |
| A continuous integrity intelligence layer that observes, reasons, and recommends | A control system. It never actuates valves or writes commands to plant control systems. |
| A vertical platform for asset integrity and predictive maintenance in asset-intensive industries | A general-purpose industrial AI platform |
| Hardware-agnostic across drones, robots, fixed sensors, and SCADA | A robotics company. Locomotion is delegated to commodity legged, wheeled, and flying platforms. |
| A decision-support tool for qualified integrity engineers | A replacement for operators or engineers. Human oversight is a design requirement. |

1.3 Competitive Moat

Kav AI’s durable advantage rests on five reinforcing pillars that no single incumbent or combination of point solutions replicates:

| Pillar | What it means | Why it’s hard to replicate |
| --- | --- | --- |
| Closed-loop integrity intelligence | The only platform that connects visual inspection + SCADA context + damage mechanism reasoning + risk quantification + recommended action in a single automated chain (IOW → DMR → API 581 risk → inspection plan). | Requires the integrity domain model (API 571/581/584), the multimodal AI pipeline, and the 3D spatial model — all tightly integrated. No incumbent owns all three. |
| Cross-source correlation engine | Findings from drone campaigns, robot patrols, and SCADA are tagged, matched within a 2 m spatial radius, and rescored across independent sources. Multi-source confirmed findings target TPR > 98% / FPR < 2%. | Requires both the multimodal sensing layer and the spatial+temporal correlation primitive. Single-source competitors are structurally exposed to environmental noise. |
| Data flywheel | Every inspection campaign and recurring patrol ingested improves facility-specific detection models. A customer who has run 10 campaigns and 90 days of repeat patrols has a progressively harder-to-replicate model tuned to their equipment, corrosion patterns, route history, and operating conditions. | Model performance compounds with use. A competitor entering at campaign 1 faces the same cold-start problem Kav AI has already solved for that facility. This advantage widens with each campaign, patrol, and operator-confirmed outcome, and is reflected in the subscription model. |
| Deployment speed | 90-day pilot framework with measurable success criteria — vs. 6–18 months for Cognite Data Fusion or enterprise RBI platform implementations. | Purpose-built for inspection data operators already capture (drone imagery). No LiDAR, no engineering CAD, no 6-month data model mapping required. |
| End-to-end, hardware-agnostic, partner-integrated solution | Ingests data from any visual sensor (DJI, Skydio, Flyability, FLIR), autonomous robot patrols via KRSI, any SCADA vendor (Emerson, Siemens, AVEVA, Ignition via OPC UA), and any historian (PI, InfluxDB, TimescaleDB), while accepting multiple engineering-model formats. The Active Physical Intelligence loop is delivered jointly with qualified mechanical-integrity partners (e.g., OI.Expert) who own the human-in-the-loop validation seat. | No vendor lock-in at any layer. Operators keep their existing capture hardware, fleet vendors, control systems, and engineering systems. Kav AI adds the intelligence layer, partners add the engineering judgment, without requiring infrastructure replacement. |

The moat deepens with every deployment: each facility’s data trains better models, each integration validates the connector ecosystem, each partner DMR sharpens the integrity domain model, and each successful pilot becomes a reference customer. This is not a feature advantage — it is a compounding system advantage.

2 The Opportunity

2.1 Market context

Kav AI’s beachhead market is downstream oil and gas — refineries and petrochemical plants. These facilities face the sharpest version of a universal industrial challenge: inspection and operational data live in separate silos, and the people responsible for facility safety spend significant time manually correlating data across all of them. The regulatory framework (API 580/581/584), the cost of unplanned downtime ($500K–$2M/day), and the maturity of RBI adoption make downstream the market where Kav AI’s closed-loop integrity intelligence delivers the fastest, most measurable value.

| Market segment | 2024/2025 size | Projected 2030 |
| --- | --- | --- |
| Drone inspection & monitoring⁴ | $16.4B (2024) | $38B at 15% CAGR |
| Industrial AI⁵ | $47B (2024) | >14% CAGR |
| Global SCADA⁶ | $12.9B (2025) | $20B |
⁴ MarketsandMarkets, Drone Inspection and Monitoring Market Report, 2024. ⁵ Grand View Research, Industrial AI Market Analysis, 2024. ⁶ MarketsandMarkets, SCADA Market Global Forecast to 2030, 2025.

2.1.1 Expansion roadmap

Once Kav AI establishes reference customers and repeatable deployment in refinery/petrochemical, the platform expands to adjacent asset-intensive industries where similar inspection and integrity challenges exist:

| Phase | Target industry | Entry trigger |
| --- | --- | --- |
| Phase 1 (current) | Refinery & petrochemical | Active — 3rd inspection campaign at European Tier-1 refinery |
| Phase 2 | LNG terminals & gas processing | First refinery reference customer at production scale; autonomous robot coverage validated in large outdoor areas |
| Phase 3 | Power generation (gas/steam turbine) | Proven SCADA connector ecosystem; 5+ facility deployments; CAD / engineering context reusable outside downstream |
| Phase 4 | Chemicals & specialty processing | Domain model extension validated by Phase 2–3 learnings |
| Phase 5 | Nuclear decommissioning & high-regulation sites | Dose-aware robot workflow, lifetime retention model, and compliance package validated with a design partner |

2.2 Why now

Four enabling technologies reached production readiness between 2023 and 2024; together they make the Kav AI platform feasible at scale for the first time.

2.3 Competitive landscape

Kav AI’s most important competitor is the combination of tools the integrity team already pays for: an RBI platform (orKsoft, GE Vernova APM, or equivalent), a process historian (OSIsoft PI or AVEVA), and a CMMS (SAP PM or Maximo). Kav AI must demonstrate that it delivers more value than the integration budget that would otherwise connect these three systems.

| Platform | What they do well | What they lack | Kav AI position |
| --- | --- | --- | --- |
| orKsoft (Améthyste) | 21 years of deployment maturity; API 581, 584, and 579 fully embedded; on-premise live; enterprise-certified. The incumbent RBI platform at European downstream operators. | No visual inspection layer. No photorealistic 3D model. Cannot cross-reference an IOW exceedance with a physical image of the equipment. Remaining life is single-point; no P90 CI. | Kav AI is the physical-world visibility layer orKsoft lacks. The two platforms are complementary: orKsoft owns the engineering record; Kav AI closes the loop between process data and physical condition. |
| Cognite Data Fusion | Data aggregation across IT (Information Technology), OT (Operational Technology), and ET (Engineering Technology) silos, knowledge graphs, automated workflows. Strong partnership with NVIDIA (Omniverse) for industrial digital twins. | Historically focused on metadata/engineering data. Visual inspection layer and photorealistic 3D facility mapping are non-native. High implementation complexity (6–18 months for enterprise deployments). | Kav AI provides a lighter, vision-native alternative. The 3D model is built from drone imagery operators already capture — not from LiDAR or engineering CAD that Omniverse requires. 90-day pilot vs 6–18 month implementation. |
| Meridium (GE Vernova APM) | Live IOW with 12 damage mechanisms, proven API 581 engine, 300+ customers. Highest score (3.0/3.0) in Verdantix APM Green Quadrant 2024. | No visual inspection layer. No AI-linked cross-reference between IOW exceedance and physical imagery. No photorealistic 3D model from drone imagery. | Kav AI adds physical-world visibility — Stage 4 physical validation closure — that Meridium’s workflow-driven platform cannot provide. |
| Percepto | Autonomous capture, emissions compliance, real-time anomaly detection from flight data | Narrow AI layer. No cross-modal reasoning with operational data. No SCADA integration. Limited CAD / RBI context. | Kav AI adds SCADA context, CAD-backed asset identity, and multi-modal reasoning Percepto cannot provide. |
| Flyability | Confined-space inspection, strong NDT hardware integrations for close-contact inspection | No AI reasoning layer. No persistent 3D model. No operational data context. | Hardware-agnostic; Kav AI ingests Flyability data as a compatible source. |
| AVEVA / OSIsoft (PI) | Industry-standard process historian (PI System), real-time control, and extensive OT integration ecosystem. | Ecosystem is fragmented across legacy local clients. AI capabilities (Atlas AI) are emerging but lack 3D spatial context for physical assets. | Not competitive — foundational. PI provides the ‘When’ (time-series). Kav AI adds the ‘Where’ (3D space) and the ‘What’ (damage mechanism reasoning). Kav AI reads from PI via OPC UA/Web API. |
| Emerson Plantweb Optics + AMS | Deep historian integration, AMS Device Manager for rotating equipment health, Plantweb analytics suite — large installed base at downstream operators | Rotating equipment focus. No native IOW / API 584 module. No visual inspection layer. No photorealistic 3D model from drone imagery. | Different scope — Emerson owns rotating equipment condition monitoring; Kav AI owns fixed equipment / visual inspection. Coexistence, not displacement. |
| Hexagon Asset Lifecycle Intelligence (ALI) | Comprehensive asset lifecycle management, P&ID integration, inspection data management — strong at engineering documentation and compliance workflows. Being spun off as “Octave” with proposed US listing in 2026. | Workflow and documentation-centric. No real-time AI anomaly detection. No photorealistic 3D model from drone imagery. High implementation cost and timeline. | Kav AI deploys faster, provides AI-native anomaly detection, and can feed findings into Hexagon’s compliance workflows. This PRD lists Hexagon ALI as a planned output connector (see Certified Connector Priority). |
| Aucerna / Quorum Business Solutions | Upstream production operations, decline curve analytics, and field data capture — strong in upstream E&P financial and operational planning | Upstream-focused. No visual inspection capability. No integrity reasoning or damage mechanism analysis. | Different buyer; Kav AI targets inspection and integrity teams, not production planning. |
| Kav AI | Visual inspection + operational data context + 3D spatial model + conversational AI + cross-source correlation engine — all unified in one platform, hardware-agnostic and system-agnostic. | The only platform that closes the full loop: sensor data → damage mechanism → risk score → inspection plan → physical validation — continuously, as Active Physical Intelligence. Multi-source confirmed findings target TPR > 98% / FPR < 2%. | The only platform that reads visual inspection data and SCADA context in the same spatial model, runs cross-source confirmation across drone, robot, and SCADA in one engine, and provides physical validation closure (Stage 4) that no incumbent offers. |

Figure 2.1. System Architecture & Purdue Model Alignment. Kav AI connects to Level 3 Historians or OPC UA middleware, maintaining a strictly read-only relationship with the control network (Level 2). In v3.3, robot, CAD, and fixed-infrastructure sources continue to strengthen the integrity context while the OT boundary remains unchanged.

2.4 The Data Flywheel — Kav AI’s Compounding Advantage

Kav AI’s competitive moat is not static — it compounds with every inspection campaign a customer runs through the platform.

How it works:

  1. Campaign ingestion — Each inspection campaign (RGB, thermal, OGI) adds labelled examples of facility-specific defect patterns, corrosion signatures, and equipment conditions to the training corpus.
  2. Patrol accumulation — Repeated robot patrols add temporal history on the same assets, turning one-time detections into trendable condition signals.
  3. Model refinement — Detection models are retrained on the expanded dataset after each campaign and patrol cycle. Facility-specific patterns (e.g., CUI signatures on a specific insulation type, thermal profiles unique to a particular heat exchanger configuration) improve detection precision.
  4. OOD detector calibration — The Out-of-Distribution detector is retrained on the expanded input distribution, reducing false OOD flags and improving the signal-to-noise ratio for operators.
  5. Confidence calibration — Empirical accuracy data from operator confirmations, dismissals, and cross-source corroboration refines the confidence scoring model, tightening the calibration curve.

The compounding effect: A customer who has run 10 campaigns and built months of repeat patrol history has a detection model tuned to their specific equipment, corrosion patterns, route coverage, and operating conditions that a new entrant cannot replicate without running the same sequence. This advantage widens with each campaign and patrol — making Kav AI progressively harder to displace at each facility.

Pricing alignment: Campaign volume is reflected in the subscription model (see Appendix H). Higher-tier subscriptions include more campaigns per year, directly linking customer investment to model performance improvement.

**Competitive risk acknowledgement — Cognite consolidation scenario.** The most likely consolidation threat is Cognite partnering with an autonomous inspection platform (Percepto, Skydio, or a new entrant) and bundling visual inspection into Data Fusion. Timeline: 12–18 months if they move. Kav AI’s durable advantages if this happens: (1) Photorealistic 3D facility model built from drone imagery operators already capture — Cognite uses Omniverse (engineering CAD), which requires LiDAR or existing CAD data; (2) IOW/DMR closed-loop chain integrating visual evidence with SCADA — this is not a feature Cognite can acquire from a drone vendor; it requires the integrity domain model; (3) Deployment speed — Kav AI’s 90-day pilot framework vs Cognite’s known implementation complexity (6–18 months); (4) Data flywheel — each inspection campaign compounds Kav AI’s facility-specific model advantage, creating switching costs that grow with use.

3 The Journey So Far

Kav AI has been in development since May 2025. In less than a year, the platform has moved from a blank canvas to a live alpha with two major milestones fully delivered and a third in active shipment.

| Milestone | Name | What was delivered | Status |
| --- | --- | --- | --- |
| M0 | Platform Foundation — Jul 2025 | 3D viewer, authentication, image gallery, and operator dashboard — validated with a first real-world inspection dataset (RGB imagery) | Complete |
| M1 | AI Foundation — Dec 2025 | Multimodal AI pipeline, natural language interface, automated task coordination, and a machine vision engine prototype — validated with thermal imagery and gas sensor readings | Complete |
| M2 | App MVP — Mar 2026 | Production-ready 3D viewer and AI chat unified in a single operator interface | Complete |
| M3 | AI Q2 Delivery — Jun 2026 | Contextual data chat, sensor-native analysis, actionable insights, chat with 3D map, interactive overlays, and automated reports | In progress |
| v3.0/v3.1 foundation | Persistent sensing and engineering context | KRSI ingestion pattern, attachment module, CAD import pipeline, Digital Twin Sync, and fixed-infrastructure design inputs validated in parallel workstreams | In progress / integrated into v3.2 / v3.3 roadmap |

3.1 What M0 and M1 proved

The foundation milestones were validated against real inspection data — not synthetic benchmarks or demos. M0 was tested with a first inspection campaign producing RGB imagery from an industrial facility. M1 raised the bar: a second inspection campaign introduced thermal imagery alongside RGB, plus structured sensor readings — gas concentration, temperature, and humidity. The AI pipeline was tested against this richer, multimodal dataset, reasoning across modalities on real industrial data.

M1 also included a deliberate technical bet: a proof-of-concept machine vision engine, validating that defect detection models can be called on demand by the AI pipeline. The Q2 visual perception features build directly on that validated pattern.

**Third inspection campaign.** A third inspection campaign is planned at an oil refinery, with an expanded sensor suite including OGI imagery, calibrated thermal, additional gas measurements, and a path to repeat patrol comparison. This is the first real-world bridge between the campaign-based v2.9 workflow and the persistent robot-enabled v3.2 / v3.3 workflow.

3.2 MVP demonstration sequence

In parallel with the platform milestones, Kav AI runs a public-facing MVP demonstration sequence that turns the engineering work above into outcomes a non-technical buyer can evaluate. v3.3 makes that mapping explicit:

| MVP version | Theme | What it proves | Platform milestone |
| --- | --- | --- | --- |
| MVP v0.1 | Detection | Anomaly identification on a single inspection dataset — “we found something interesting” | M0 — Platform Foundation |
| MVP v0.2 | Repeatability + Scale + Decision Context | 1,200 assets scanned, 37 thermal anomalies detected, 8 high-priority items, 65% unit coverage in one day, with ranked inspection actions and $750K–$1.5M avoided-cost framing — “here are the 8 places you should inspect next, and why” | M1 / M2 |
| MVP v0.3 | Decision Impact | Quantified inspection prioritisation and explicit workflow integration into RBI / IDMS programmes | M3 — AI Q2 Delivery |
| MVP v0.4 | Closed-Loop Intelligence | Detection → validation → action → feedback measured end-to-end, with cross-source confirmation, robot patrol continuity, and CAD-anchored asset identity | Q3 / Q4 2026 platform extensions |
The Scale & Impact section reports the MVP v0.2 evidence in detail. The Operations and Integrity Analytical Chain sections describe the workflow that MVP v0.3 / v0.4 instrument.

4 Where We Are Going — Q3 and Q4 2026

With the core AI platform closing in Q2, Kav AI’s second half of 2026 focuses on two distinct ambitions: deepening integration across the full industrial data environment in Q3, and advancing the research capabilities that will define the platform’s long-term intelligence in Q4.

4.1 Q3 2026 — Deepening the platform

| Capability | What it means for operators |
| --- | --- |
| Anomaly detection | AI-powered detection across the expanded sensor suite from the third inspection campaign — thermal, OGI, and gas — surfacing anomalies automatically in the 3D model with severity scoring. |
| Expanded sensor ingestion | Structured ingestion from OGI imagery, calibrated thermal, and expanded gas measurements aligned with the planned third inspection campaign at an oil refinery. |
| Time series signals & SCADA connectors | Read-only ingestion of time series data from SCADA systems, vibration sensors, and process historians via OPC UA (IEC 62541) — the industrial middleware standard that decouples Kav AI from any specific SCADA vendor. Operational data feeds the IOW/DMR closed-loop analytical chain described in the Integrity Analytical Chain section. |
| Geo-tagged assets & images in 3D | Every asset anchored to its precise location in the 3D model. Inspection imagery displayed directly in 3D space. |
| Contextual data chat — Phase 1 | Compound queries across multiple data sources, human-in-the-loop confirmation for high-consequence actions, and a skills registry that learns facility-specific query patterns over time. |
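To make the read-only SCADA posture concrete, the sketch below polls current values from a historian over OPC UA. It is a minimal illustration, assuming the open-source asyncua Python client; the endpoint URL and tag identifiers are hypothetical, and a production connector would subscribe to data changes rather than poll.

```python
# Minimal read-only OPC UA poll, assuming the open-source asyncua client.
# Endpoint URL and node identifiers below are hypothetical examples.
import asyncio
from asyncua import Client

TAGS = [
    "ns=2;s=CDU1.TI-1042.PV",  # hypothetical skin-temperature tag
    "ns=2;s=CDU1.PI-0031.PV",  # hypothetical pressure tag
]

async def poll_once(endpoint: str) -> dict:
    """Read current values for a fixed tag list; the session never writes."""
    readings = {}
    async with Client(url=endpoint) as client:
        for tag in TAGS:
            node = client.get_node(tag)
            readings[tag] = await node.read_value()
    return readings

if __name__ == "__main__":
    print(asyncio.run(poll_once("opc.tcp://historian.plant.example:4840")))
```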

4.2 Q4 2026 — Completing the platform and advancing the intelligence

| Engineering capability | What it means for operators |
| --- | --- |
| 3D CAD model overlay | Engineering design models overlaid on the photorealistic 3D facility model — operators can compare the as-built facility against the engineering design to identify deviations. v3.2 extends this into version tracking and as-built vs as-designed comparison. |
| P&ID SQL connector | Direct read access to the plant’s piping and instrumentation database — the 3D model reflects engineering documentation without requiring parallel data entry. |
| Security certification | Enterprise security certification — SOC 2 Type II target, meeting the data governance requirements of major industrial operators and enabling procurement through enterprise security review processes. |
| Compliance management | End-to-end compliance workflows — tracking inspection coverage, flagged anomalies, and corrective actions taken into audit-ready records for regulatory submissions. |
| On-premise & air-gapped deployment | Container-based deployment package for operator-managed cloud tenants (Azure, AWS) or fully air-gapped on-premise environments. See the Deployment Architecture section for full details. |
| Full spatial navigation | 3D walkthrough, asset search in images, manual data entry, and view from any angle including confined spaces. The same spatial layer now supports repeatable robot patrol localisation via fixed infrastructure. |

| Research-dependent capability (Q4*) | Research question |
| --- | --- |
| Physical AI reasoning & remediation advice | Grounding LLM-generated remediation advice in the engineering domain model of the specific facility at safety-critical reliability thresholds. Requires Q2 spike to confirm feasibility before engineering commitment. |
| Facility-specific model training | Synthetic training data calibrated to the specific visual and thermal signatures of each facility. Research question: whether synthetic OGI/thermal data can improve rather than degrade detection performance. |

Q4* items are research-dependent, contingent on Q2 spike outcomes. Their deferral to 2027 does not affect the engineering workstream above.

4.3 AI Engine and Machine Vision

Kav AI’s AI engine coordinates specialized defect detection models — each trained for a specific modality (RGB, thermal, OGI) — and calls them on demand as part of the analytical pipeline. From v3.2 onward, that orchestration layer also becomes the convergence point for robot patrol findings, CAD-linked asset identity, and cross-source confirmation. v3.3 brings the cross-source confirmation engine to the front of the user-facing narrative: independent sources are the unit of confidence, and the engine is what converts a noisy single-modality signal into a multi-source confirmed finding.

Figure 3. AI Analytical Pipeline. Kav AI’s engine coordinates specialized machine vision models, calling each as needed for modular defect detection and rapid model iteration. From v3.2 onwards the same orchestration pattern supports campaign data, patrol data, and engineering context as a unified reasoning layer; v3.3 surfaces the cross-source correlation engine as the explicit confidence-reweighting step that downstream stages depend on.

5 Expanded Capture & Engineering Context

v3.2 kept the v2.9 product framing intact and extended the platform in the two areas that most materially strengthen the integrity loop: autonomous robot coverage and CAD / engineering context. v3.3 keeps that scope and adds two structural clarifications that have become important in external conversations: the architectural shift from onboard SLAM to fixed plant infrastructure, and the cross-source correlation engine as the explicit step that turns single-source detections into multi-source confirmed findings.

**Design principle.** These additions do not change what Kav AI is. They increase how often the platform sees the plant, how precisely it localises findings, and how reliably it ties those findings back to asset identity and engineering intent.

5.1 Autonomous robot coverage

Autonomous robots extend Kav AI from campaign-based inspection to persistent facility awareness. The platform preserves the v2.9 hardware-agnostic position: robot data is an input layer, not a product-category shift.

| Capability | v3.2 addition | Why it matters |
| --- | --- | --- |
| KRSI adapter extension | Supports standard 90-minute platforms and high-endurance 4–6 hour platforms | Broadens coverage from congested indoor units to tank farms, large outdoor areas, and long linear routes |
| Fixed plant infrastructure | Navigation beacons, communication backbone, docking integration, and coverage orchestration | Improves localisation repeatability and enables continuous patrol coverage without per-robot navigation lock-in |
| Fleet intelligence analytics | Coverage analytics, anomaly trend detection, and data-quality trend monitoring | Turns repeated patrol history into a learning signal rather than isolated mission logs |
| Dose-aware operations | Nuclear-package extension for radiation dosimetry and high-dose zone handling | Expands the platform to environments where human-entry avoidance is itself a core value driver |

5.1.1 Fixed infrastructure

Fixed plant infrastructure is the enabling layer for reliable, repeatable patrol coverage:

| Component | Specification |
| --- | --- |
| Navigation beacons | UWB or LiDAR reflector-based reference points, targeting localisation accuracy within 10 cm of onboard baseline |
| Communication backbone | Private 5G, industrial Wi-Fi mesh, or hybrid; minimum 50 Mbps uplink per patrol zone; 5–15 ms latency for real-time remote viewing |
| Docking stations | Charging plus wired data offload points at patrol-route endpoints |
| Coverage orchestration | Ranked inspection priorities from Kav AI to the vendor fleet software through API or operator handoff |

This architecture deliberately avoids turning Kav AI into a robot OEM stack. Kav AI specifies what needs to be seen next based on staleness and risk. Vendor fleet software or the operator still decides how to execute the patrol.

The combined effect is continuous coverage of ≥ 20 hours per day across the fleet at a fully equipped site — turning the historical “battery-bound 90-minute patrol” model into a continuous sensory web without paying the cost of a bespoke OEM stack.

5.1.2 Architectural shift: onboard SLAM vs. fixed infrastructure

Fixed plant infrastructure is not just an availability story — it is a different navigation and data-registration architecture. The promo-deck shift is captured below for procurement teams comparing Kav AI against onboard-SLAM-only competitors:

| Dimension | Onboard SLAM (single-vendor stack) | Fixed plant infrastructure (Kav AI) |
| --- | --- | --- |
| Navigation reliability | Degrades in featureless or highly repetitive piping environments | UWB / LiDAR reflectors provide absolute ground-truth coordinates regardless of environment |
| Fleet scaling cost | High — each robot carries its own expensive redundant navigation stack | Low — beacon and 5G cost amortised across the site; additional commodity robots add marginal cost only |
| Data registration | ~ 5 cm error with cumulative drift | ~ 2 cm beacon-anchored accuracy, no drift |
| Vendor lock-in | Effectively single-vendor — the locomotion vendor owns the localisation primitive | Hardware-agnostic — any fleet that emits compatible telemetry can be ingested via KRSI |

Fixed infrastructure is the architectural reason Kav AI can credibly claim fleet-agnostic, mission-specific coverage instead of taking on the cost of a proprietary robot OEM stack.

5.1.3 Robot sensing and mission intelligence

Robot ingestion in v3.2 extends the proven v3.0 KRSI adapter pattern to recurring ground patrols.

The consequence is strategic, not merely technical: the data flywheel now compounds not only across periodic drone campaigns, but also across recurring ground patrols that revisit the same assets with tighter spatial repeatability.

5.2 CAD and engineering data

v3.2 also carries forward the v3.1 expansion of the engineering context layer. v2.9 introduced CAD overlay as a roadmap item; v3.2 makes it a central extension to the integrity workflow rather than a standalone visual feature.

| Capability | v3.2 addition | Why it matters |
| --- | --- | --- |
| Additional CAD formats | IFC, RVT, and direct DGN alongside the proven Navisworks pathway | Reduces dependence on a single engineering-tool chain and improves fit across customer environments |
| CAD version tracking | Import history, file hashing, and diff visualisation | Makes engineering changes visible to integrity teams instead of remaining buried in document control |
| As-built vs. as-designed comparison | CAD overlay aligned to the photorealistic 3D model and LiDAR-derived geometry | Highlights geometric deviation that may create new integrity risk or invalidate inspection assumptions |
| P&ID linkage | Equipment IDs tied back to engineering line numbers and tags | Improves traceability for operator queries and work-order preparation |

5.2.1 Per-format scope

| Format | Priority | v3.2 scope |
| --- | --- | --- |
| IFC | High | Open-standard extraction of geometry plus engineering metadata into the KAP schema |
| RVT | Medium | Revit extraction where metadata is sufficient or can be supplemented |
| DGN (direct) | Medium | Direct extraction for Bentley / PDS-heavy sites to avoid the Navisworks intermediary |
| STEP / IGES | Low | Candidate for later vendor-model ingestion where full-plant context is limited |

5.2.2 Digital twin and change management

CAD only becomes operationally useful when tied to the live spatial record. v3.2 therefore combines:

  1. CAD-derived design intent from the engineering model.
  2. As-built geometry from 3DGS and LiDAR-derived scans.
  3. Inspection findings from drone, robot, and fixed/SCADA sources.

This allows the platform to surface engineering changes, geometric deviations, and repeated anomaly patterns as part of the same integrity conversation rather than as separate systems.

5.3 Cross-source confirmation engine

The robot and CAD additions are most valuable when they strengthen the core analytical chain rather than sitting beside it. v3.2 incorporated the v3.1 cross-source correlation model into the v2.9 integrity narrative; v3.3 promotes that primitive to a named engine:

| Sources agreeing | Correlation category | Operational treatment |
| --- | --- | --- |
| 1 source | Single-source detection | Standard severity-based triage |
| 2 sources | Corroborated finding | Elevated priority with explicit evidence linking |
| 3+ sources | Multi-source confirmed | Highest-confidence class for immediate engineer attention; bypasses the manual verification queue but never bypasses the deterministic Filter Skill rejection (see AI Safety) |

5.3.1 How the engine works

The cross-source correlation engine operates as a deterministic pre-stage to the AI confidence model (an illustrative sketch follows the four steps below):

  1. Tag — every detection from a drone campaign, robot patrol, fixed sensor, or SCADA IOW exceedance is timestamped, geo-located inside the 3D model, and tagged with its modality (RGB, thermal, OGI, acoustic, gas, process telemetry).
  2. Match — independent detections within a 2 m spatial radius and an aligned temporal window are grouped as candidate corroborations.
  3. Score — confidence is rescored from the per-source baseline. A representative example: a thermal anomaly at 0.70 confidence corroborated by an acoustic / thermal ground patrol and a SCADA insulation surface-temperature IOW exceedance is rescored to ~ 0.95.
  4. Surface — the rescored finding is presented to the operator with all source records attached. The Filter Skill and Stage 3.5 consistency gate (see Integrity Analytical Chain) still apply; cross-source uplift cannot override a hard rejection.
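A minimal sketch of the tag, match, and score steps, assuming hypothetical field names. The 2 m radius, the source categories, and the 0.70 → ~0.95 rescoring follow the text above; the 24-hour temporal window and the 60% per-source uplift factor are illustrative assumptions.

```python
# Sketch of cross-source correlation; thresholds and names as noted above.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import dist

@dataclass
class Detection:
    source: str                            # "drone", "robot", "scada", "fixed"
    modality: str                          # "thermal", "acoustic", "process", ...
    position: tuple[float, float, float]   # metres, in the 3D-model frame
    timestamp: datetime
    confidence: float                      # per-source baseline score

def correlate(seed: Detection, pool: list[Detection],
              radius_m: float = 2.0,
              window: timedelta = timedelta(hours=24)) -> tuple[str, float]:
    """Group independent detections near the seed and rescore confidence."""
    matches = [d for d in pool
               if d.source != seed.source
               and dist(d.position, seed.position) <= radius_m
               and abs(d.timestamp - seed.timestamp) <= window]
    sources = {seed.source} | {d.source for d in matches}
    category = ("multi-source confirmed" if len(sources) >= 3
                else "corroborated" if len(sources) == 2
                else "single-source")
    # Illustrative uplift: each extra independent source closes 60% of the
    # remaining gap to 1.0, so 0.70 with two extra sources -> ~0.95.
    conf = seed.confidence
    for _ in sources - {seed.source}:
        conf += (1.0 - conf) * 0.6
    return category, round(min(conf, 0.99), 2)
```

With a thermal seed at 0.70 confidence plus corroborating robot and SCADA detections, the function returns ("multi-source confirmed", 0.95), matching the representative example in step 3.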

5.3.2 Why this is the moat

A single-source detection is structurally exposed to environmental noise. A multi-source confirmed detection is the result of independent physical signals agreeing on a precise spatial-temporal coordinate — a much harder thing to fake. v3.3’s external claim follows directly: multi-source confirmed findings target TPR > 98% / FPR < 2%, while single-source modality-specific targets remain as documented in Operations.

Cross-source confirmation does not replace engineering validation. It improves prioritisation, confidence calibration, and inspection planning by showing when multiple independent signals are converging on the same asset condition.

6 Scale & Impact — From “We Found Something” to “Inspect These Eight Locations Next”

A common failure mode for industrial AI platforms is that they are demonstrated, not deployed: a single anomaly on a single asset, dressed up as a product. v3.3 promotes the MVP v0.2 evidence to a dedicated chapter so the procurement team can see, on one page, that Kav AI operates at facility scale, repeatably, and with a direct line to financial impact.

**Why this section exists.** v3.2 references “facility scale” in passing. v3.3 puts the numbers, the workflow, and the dollar framing on the table where customers, partners, and reference callers can find them without digging into the appendices.

6.1 What MVP v0.2 proves

MVP v0.2 is the second step in Kav AI’s public-facing demonstration sequence (see Journey — MVP demonstration sequence). It answers the three questions an operator will ask at the first procurement meeting:

| Operator question | MVP v0.2 answer |
| --- | --- |
| Can this system scale beyond a demo? | The platform processes thousands of assets and dozens of anomalies in a single run, not isolated findings. |
| Does it help me decide what to do? | Every high-priority anomaly includes recommended inspection actions tied to operational workflows. |
| Is the output meaningful to my business? | Findings are connected to risk and economic impact, not just temperature differences. |

6.2 Facility-scale evidence

The MVP v0.2 facility scan demonstrates Kav AI operating at a unit-level scale that is procurement-relevant rather than demo-relevant:

| Metric | MVP v0.2 result |
| --- | --- |
| Assets scanned | 1,200 |
| Thermal anomalies detected | 37 |
| High-priority anomalies | 8 |
| Coverage achieved | 65% of a process unit in a single deployment |
| Output format | Ranked inspection set with recommended actions, not a flat anomaly list |
| Recommended timing | Targeted at the next scheduled shutdown, not opportunistic |

Coverage is repeatable and extendable across the facility. This reframes the platform from an “interesting detection tool” to a practical inspection coverage solution.

6.3 Decision context — detection → diagnosis → action

The platform does not stop at flagging an anomaly. The MVP v0.2 representative finding shows the full chain a buyer expects to see:

| Stage | Representative output |
| --- | --- |
| Asset | Insulated process piping |
| Observed condition | Surface temperature ~ 120°F; ambient ~ 35°F; ΔT ~ 85°F |
| Secondary signal | Adjacent structural steel ΔT ~ 49°F |
| Interpretation | Thermal signature consistent with insulation breakdown; elevated likelihood of Corrosion Under Insulation (CUI) |
| Recommended action | Prioritise location for insulation removal and ultrasonic thickness (UT) inspection at next scheduled shutdown |
| Cross-source posture | Single-source thermal detection at the MVP stage; cross-source correlation against robot acoustic / SCADA insulation-surface-temperature data is the v3.3 evolution path for the same finding |

The selection basis for high-priority items combines thermal severity, pattern consistency across the dataset, and cross-asset comparison — not raw anomaly count.

6.4 Economic framing

Thermal anomalies are translated into business-relevant risk so that the procurement conversation can move from “interesting” to “fundable”:

| Variable | Range |
| --- | --- |
| Typical failure consequence (if unaddressed) | $500K – $2M per event (shutdown, repair, lost production) |
| MVP v0.2 scenario estimate | $750K – $1.5M avoided cost potential for the eight high-priority items |

This ties the platform output directly to the same economic levers that Solomon Associates RAM benchmarking uses to compare top- and bottom-quartile facilities (see Executive Summary).

6.5 Role of AI at this scale

A critical positioning shift between MVP v0.1 and MVP v0.2 is making the role of AI explicit:

The drone and thermal camera collect data — Kav AI is what makes that data usable at scale.

At facility scale, the AI layer is responsible for reviewing 100% of captured imagery, flagging and ranking anomalies, and producing the prioritised inspection set with recommended actions.

Without this layer the workflow does not scale beyond manual review — and manual review is the workflow that today reaches less than 10% of captured imagery.

6.6 Where Scale & Impact connects in the rest of the PRD

| Topic | Where it lives |
| --- | --- |
| Workflow that consumes the ranked inspection set | Operational Workflow — Anomaly-to-Action |
| Per-modality and cross-source detection / FPR targets | Operational Workflow — Performance and Accuracy Targets |
| Risk quantification and Solomon benchmarking behind the avoided-cost number | Integrity Analytical Chain — Stage 5 |
| Continuous coverage that compounds the v0.2 scan into trend data | Expanded Capture & Engineering Context — Autonomous robot coverage |
| Partner-led validation of MVP-style findings before they reach the CMMS | Partner-Integrated Delivery Model |

7 Operational Workflow — Anomaly-to-Action

To ensure Kav AI reduces cognitive load rather than creating “alarm fatigue,” the platform follows a structured escalation path for every identified anomaly. v3.2 preserved the v2.9 human-in-the-loop workflow and extended it to account for robot-sourced findings, cross-source confirmation, and richer engineering context at the point of triage. v3.3 keeps that workflow intact and adds a cross-source confirmed performance target alongside the per-modality targets.

Figure 5. Operator Workflow (Human-in-the-Loop). High-consequence findings require human validation before transition to work order systems, while low-severity items are logged directly to the asset’s digital twin. v3.3 retains this decision structure even when findings originate from autonomous patrols or multi-source correlation; multi-source confirmed findings can bypass the manual verification queue but never bypass the deterministic Filter Skill.

7.1 The Triage-to-Escalation Path

The workflow from AI detection to field action is governed by a four-stage process involving distinct operational roles (a sketch of the triage packet and routing rule follows the list):

  1. Stage A (Automated Triage): AI identifies an anomaly, runs the cross-source correlation engine, and assigns an initial severity score (Critical/Standard/Info). Low-confidence alerts (<0.7) are filtered from the primary ‘Action’ dashboard and moved to a ‘Review’ queue. From v3.2 onwards, the triage packet also includes source type (drone, robot, SCADA), patrol or campaign identifier, and CAD-linked asset identity. v3.3 adds the explicit correlation category (single-source / corroborated / multi-source confirmed) on the triage packet.
  2. Stage B (Human Verification): High-severity anomalies are surfaced to the On-call Integrity Engineer’s dashboard (desktop/mobile). The engineer must select: ‘Confirm’ (Escalate to Field), ‘Dismiss’ (False Positive - Model Feedback), or ‘Reclassify’ (Adjust Severity). Cross-source corroboration can raise priority and, in the multi-source confirmed class, can shorten the queue, but it does not bypass deterministic Filter Skill rejection or Stage 3.5 inconsistency flags.
  3. Stage C (Escalation & Audit): Confirmed anomalies generate an “Actionable Insight” record. Every ‘Dismiss’ or ‘Confirm’ decision is timestamped and logged in a shift-handover audit report visible to the Operations Supervisor. Coverage context and engineering-change context are attached where available.
  4. Stage D (CMMS Integration): Verified insights provide a structured data packet for the operator’s CMMS (e.g., SAP PM, Maximo) to initiate a work order. Kav AI reduces the ‘Triage-to-Work Order’ cycle from 5–10 days to < 4 hours. SAP PM remains the first certified connector priority because it opens the largest installed-base pathway.
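A compact sketch of the Stage A triage packet and routing rule described above. Field names are hypothetical but mirror the attributes listed in the four stages; the 0.7 confidence threshold comes from Stage A.

```python
# Illustrative Stage A triage packet and routing; names are hypothetical.
from dataclasses import dataclass
from typing import Literal

@dataclass
class TriagePacket:
    anomaly_id: str
    asset_id: str                  # CAD-linked asset identity
    severity: Literal["Critical", "Standard", "Info"]
    confidence: float
    source: Literal["drone", "robot", "scada"]
    capture_ref: str               # patrol or campaign identifier
    correlation: Literal["single-source", "corroborated",
                         "multi-source confirmed"]

def route(p: TriagePacket) -> str:
    """Stage A routing: low-confidence alerts leave the Action dashboard."""
    if p.confidence < 0.7:
        return "review-queue"
    if p.correlation == "multi-source confirmed":
        return "action-dashboard-expedited"   # still Filter-Skill gated
    return "action-dashboard"
```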

7.2 Performance and Accuracy Targets

To maintain operational trust, Kav AI commits to the following detection and false positive targets for Q3/Q4 2026 (benchmarked against M0/M1 datasets and validated by ground-truth NDT):

| Modality / class | True Positive Rate (TPR) | False Positive Rate (FPR) | Target |
| --- | --- | --- | --- |
| Visual (RGB) | > 85% | < 15% | Q3 2026 |
| Thermal (Point) | > 90% | < 10% | Q3 2026 |
| Gas Leak (OGI) | > 95% | < 5% | Q4 2026 |
| IOW (SCADA) | > 98% | < 2% | Q2 2026 |
| Cross-source corroborated (2 sources) | > 95% | < 5% | Q4 2026 |
| Multi-source confirmed (3+ sources) | > 98% | < 2% | Q4 2026 |

Detection rates are based on the Kav AI M0/M1 validation campaigns and require site-specific calibration during the 90-day pilot phase. The multi-source confirmed line is the externally-quoted “moat” claim and is the only class that may bypass the manual verification queue (still subject to Filter Skill and Stage 3.5 consistency).

7.3 Human-in-the-Loop (HITL) Validation

Kav AI is a decision-support tool, not an autonomous inspector. No “Critical” severity output or “Remaining Life” adjustment can be finalized without individual engineer sign-off in the platform. Every such action is captured in the version-controlled audit trail.

7.4 Business Readiness & Referenceability

Kav AI recognizes that enterprise procurement requires a clear path to value and financial predictability.

7.4.1 Cost Model & ROI Modelling

Indicative pricing components are set out in the subscription model (see Appendix H); higher tiers include more campaigns per year, directly linking customer investment to model performance improvement.

7.4.2 Availability & Payback Commitment

Based on Solomon RAM benchmarking, Kav AI targets a 1.0 percentage point improvement in facility availability within 18 months of full operational deployment. This is achieved through the elimination of data-latency-driven shutdown delays and the early detection of high-consequence failure modes.

For sites that adopt fixed infrastructure and repeat patrol coverage, the business case broadens from campaign-level avoided cost to the economics of continuous coverage and trend-based early detection.

7.5 IDMS and Ecosystem Integration

Kav AI is designed to sit inside existing inspection workflows, not alongside them.

7.5.1 Bidirectional Integration Architecture (FR-INT-03)

| Direction | What flows | Purpose |
| --- | --- | --- |
| Read (IDMS → Kav AI) | Asset register, CML locations and baseline readings, inspection history per CML, current inspection plan, open anomalies | Enables Stage 3–5 to leverage existing RBI data rather than starting from zero. Without the read direction, Kav AI cannot know what has already been inspected, what the baseline UT measurement was, or what the current inspection plan says. |
| Write (Kav AI → IDMS) | Asset ID, anomaly type, severity, confidence score, supporting evidence (image references, UT readings, IOW exceedance logs), recommended action, engineer sign-off timestamp | Confirmed anomaly findings generate inspection notifications in the operator’s IDMS/CMMS to trigger work order creation. |
| Conflict resolution | When Kav AI’s corrosion rate disagrees with the IDMS’s existing corrosion rate | Kav AI flags the discrepancy and surfaces both values to the engineer — does not silently override the existing record. |
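The conflict-resolution row above implies a simple deterministic rule. A minimal sketch, assuming a hypothetical 20% relative-difference threshold (the actual threshold is not specified in this document):

```python
# Deterministic corrosion-rate conflict rule: surface both values, never
# overwrite the IDMS record silently. The 20% threshold is illustrative.
def reconcile_corrosion_rate(kav_cr: float, idms_cr: float,
                             rel_tol: float = 0.20) -> dict:
    """Return a merge decision for the engineer; no silent override."""
    disagree = abs(kav_cr - idms_cr) > rel_tol * max(kav_cr, idms_cr)
    return {
        "kav_cr_mm_per_yr": kav_cr,
        "idms_cr_mm_per_yr": idms_cr,
        "status": ("DISCREPANCY - ENGINEER REVIEW" if disagree
                   else "CONSISTENT"),
        "auto_write_back": False,   # engineer sign-off always required
    }
```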

7.5.2 Certified Connector Priority (FR-INT-04)

| Priority | IDMS | Rationale | Target |
| --- | --- | --- | --- |
| 1 | SAP PM | Highest installed base at Tier-1 operators; opens the most procurement conversations | Q4 2026 |
| 2 | Meridium (GE Vernova APM) | Strong RBI module that already holds the API 581 model; bidirectional integration is most valuable here | H1 2027 |
| 3 | Hexagon ALI | Explicitly referenced in the competitive table as a Kav AI output target; certification closes that loop | H1 2027 |

A certified integration means: a documented data schema, a tested connector, a reference customer who has run it in production, and a support SLA.

7.5.3 orKsoft Coexistence Architecture

orKsoft and Kav AI are complementary, not competitive. The coexistence pattern:

| System | Owns | Data flow |
| --- | --- | --- |
| orKsoft | RBI model, CML history, API 581 inspection plan, compliance record | orKsoft CML baseline → Kav AI Stage 5 corrosion rate input |
| Kav AI | 3D spatial model, visual anomaly detection, SCADA-to-IOW chain, physical validation | Kav AI Stage 4/5 outputs → orKsoft inspection plan update → orKsoft compliance record |

7.5.4 Reference Customers

Kav AI is currently executing its third major inspection campaign at a European Tier-1 oil refinery (Crude/Vacuum unit). Reference calls with the lead Integrity Engineer can be facilitated upon request for qualified enterprise buyers.

7.5.5 Partner-led delivery channel

Where the customer prefers an integrated delivery model — platform plus mechanical-integrity engineering services in a single procurement vehicle — Kav AI engages with qualified partners (e.g., OI.Expert) who provide the human Integrity Engineer panel, DMR / IOW programme work, and HITL validation. The Partner-Integrated Delivery Model section describes this channel in full and reflects the framework set out in the April 2026 OI.Expert Letter of Intent.

8 The Integrity Analytical Chain — Architectural Ownership

The IOW/DMR closed-loop analytical chain is the core intelligence differentiator of the Kav AI platform from Q3 onwards. This section defines its architecture, its owner, and the data flow from SCADA ingestion to actionable risk output. From v3.2 the same chain is strengthened by robot patrol evidence, CAD-linked asset identity, and cross-source confirmation, while keeping the v2.9 architecture intact. v3.3 makes one further clarification: the cross-source correlation engine described in Expanded Capture & Engineering Context runs before Stage 3.5 and feeds the evidence-consistency check — it does not replace it.


Figure 2. Kav AI Platform — Internal Architecture. Shows how the AI analytical pipeline connects to operator data sources. SCADA/Historian (teal border, Q3) is a planned integration. From v3.2 onwards robot telemetry and CAD-derived engineering context are additional input layers into the same read-only intelligence architecture. v3.3 highlights the cross-source correlation engine as the explicit fan-in step ahead of Stage 3.5.

**Architectural owner.** The IOW/DMR chain is owned by Kav AI’s data retrieval engine. It is invoked automatically when SCADA or historian data is present in the query context. The platform sequences its execution across the six stages below; findings are formatted for the operator interface.

Figure 4. IOW/DMR 6-Stage Analytical Pipeline. The automated chain flows from raw SCADA telemetry to prioritised corrective action recommendations, with human-in-the-loop validation at Stage 4. v3.3 retains this structure and expands the evidence available at Stages 3.5–5, with cross-source correlation feeding the evidence-consistency check at Stage 3.5.


8.1 The six-stage analytical chain

The chain runs from raw SCADA telemetry through to prioritised corrective action recommendations, benchmarked against the Solomon Associates database at the risk quantification stage.

| # | Stage | What the agent does | Data consumed | Output |
| --- | --- | --- | --- | --- |
| 1 | Data ingestion & QC | Normalises multi-source sensor streams via OPC UA. Applies statistical outlier detection and handling for SCADA failure modes: stale values (frozen tags), engineering unit inconsistencies (base vs. scaled), and historian gaps (PI ‘shutdown’ vs. AVEVA ‘null’). Establishes timestamp synchronisation within a ±500 ms alignment window. | SCADA, OPC UA historian, IoT sensors | Validated sensor record |
| 2 | IOW classification | Compares validated readings against Integrity Operating Window limits (API 584). Categorises exceedances as critical, standard, or informational. Scores by duration × intensity to prevent alarm fatigue. | Validated sensor record, IOW limit database | Exceedance events with severity score |
| 3 | Damage mechanism mapping | Maps exceedance events to credible damage mechanisms per API 571. Establishes the ‘Boundary of Automation’: standard mechanisms (CUI, erosion) are auto-flagged, while complex ones (HIC, NH₄Cl underdeposit, creep) trigger a mandatory Integrity Engineer review. CAD-linked asset identity and material metadata are used where available to tighten mapping quality. | Exceedance events, API 571 knowledge base, asset materials database, CAD / engineering metadata | Damage mechanism map with predicted rate δ |
| 3.5 | Consistency gate | Three-way consistency check before physical validation: (1) Material-mechanism consistency — is the predicted mechanism chemically plausible given the asset’s material of construction and process fluid? (2) Rate-mechanism consistency — is the predicted corrosion rate consistent with the predicted damage mechanism? (3) Evidence consistency — if physical validation data is available, does the visual evidence match what the predicted mechanism would produce? From v3.2 onwards, evidence consistency also considers cross-source corroboration across campaign, patrol, and SCADA evidence; v3.3 makes the cross-source correlation engine the explicit feeder for that input. Failures produce an explicit “INCONSISTENT — ENGINEERING REVIEW REQUIRED” flag with the conflicting evidence surfaced to the engineer. | Damage mechanism map, asset materials database, cross-source correlation output, available Stage 4 evidence | Consistency-validated damage mechanism map, or INCONSISTENT flag with conflicting evidence |
| 4 | Physical validation | Cross-references the damage mechanism map against physical inspection evidence: UT thickness measurements, thermal scans, OGI imagery, and robot-sourced patrol evidence. Validates AI findings against ground-truth NDT measurements to calibrate model confidence. | Kav AI 3D inspection model, UT data, thermal scans, OGI imagery, robot thermal/acoustic/gas data | Validated damage assessment |
| 5 | Risk quantification & remaining life | Applies the standard remaining life formula: RL = (t_actual − t_min) / CR. Quantifies risk using API 581 Damage Factors and Consequence categories. Propagates input uncertainty to provide P90 Remaining Life. Extends to API 581 inspection interval calculation: recommended inspection interval = f(current PoF, target risk threshold, damage factor, inspection effectiveness grade). | Validated damage assessment, API 581 standards, Solomon Associates financial data | Risk score (API 581 aligned), RL estimate with 90% CI, recommended inspection interval |
| 6 | Corrective action surfacing | Generates prioritised inspection plans and recommended corrective actions (repair, replace, operational adjustment). All outputs are recommendations to a human operator — Kav AI never writes to SCADA or issues work orders autonomously. | Risk scores, inspection backlog, operator confirmation | Prioritised inspection plan (see schema below) |

The IOW/DMR chain becomes available from Q3 2026 when the SCADA/OPC UA connector is delivered. Stages 4 and 5 require physical inspection data from the Kav AI 3D model, which is available from Q2. v3.3 extends those stages with repeat patrol evidence, fixed-infrastructure continuous coverage, and engineering-context enrichment where present.
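As an illustration of the Stage 2 logic, the sketch below classifies a validated reading against IOW limits and scores it by duration × intensity. Limit values, units, and field names are hypothetical.

```python
# Illustrative Stage 2 classification against IOW limits (API 584);
# all limit values and names below are hypothetical.
from dataclasses import dataclass

@dataclass
class IOWLimit:
    tag: str
    standard_high: float   # standard IOW upper limit
    critical_high: float   # critical IOW upper limit

def classify_exceedance(reading: float, minutes_over: float,
                        limit: IOWLimit) -> dict:
    """Return exceedance category and severity score for one reading."""
    if reading >= limit.critical_high:
        category, excess = "critical", reading - limit.critical_high
    elif reading >= limit.standard_high:
        category, excess = "standard", reading - limit.standard_high
    else:
        return {"tag": limit.tag, "category": "informational", "score": 0.0}
    # duration x intensity keeps brief spikes from outranking sustained drift
    return {"tag": limit.tag, "category": category,
            "score": round(minutes_over * excess, 1)}

limit = IOWLimit("CDU1.TI-1042", standard_high=385.0, critical_high=400.0)
print(classify_exceedance(397.0, minutes_over=90.0, limit=limit))
# -> {'tag': 'CDU1.TI-1042', 'category': 'standard', 'score': 1080.0}
```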

8.1.1 Inspection Plan Output Schema (FR-RBI-01)

The IOW/DMR chain does not terminate at risk score generation. Stage 6 produces an inspection plan — a prioritised, time-bound schedule. The minimum schema for a Kav AI inspection plan record:

| Field | Description |
| --- | --- |
| Asset ID, asset class | Facility asset identifier and equipment type |
| Current risk score | PoF × CoF per API 581 |
| Recommended inspection date | Driven by API 581 inspection interval logic |
| Recommended technique & coverage | Grade A/B/C/D per API 581 table |
| Justification | Damage mechanism, IOW exceedance, or UT trend that drove the recommendation |
| Confidence level | Data provenance (Level A/B/C corrosion rate) |

This schema is agreed with the reference customer before Q3 delivery and governs the IDMS integration write interface.
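An illustrative record conforming to the minimum schema above; every value, including the asset tag, dates, and score encoding, is hypothetical.

```python
# Hypothetical inspection plan record; all values are illustrative only.
record = {
    "asset_id": "E-1207",                    # hypothetical exchanger tag
    "asset_class": "heat_exchanger",
    "risk_score": {"pof_damage_factor": 120, "cof_category": "C"},
    "recommended_inspection_date": "2026-11-15",
    "technique": "UT thickness survey after insulation removal",
    "coverage_grade": "B",                   # per API 581 effectiveness table
    "justification": "IOW exceedance plus thermal anomaly consistent with CUI",
    "confidence_level": "B",                 # modeled corrosion rate (Level B)
}
```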

8.1.2 Equipment Class Boundary of Automation (FR-RBI-02)

The Tier 1/Tier 2 automation boundary applies across equipment classes as well as damage mechanisms. Scope for v1:

| Equipment class | Kav AI scope | Rationale |
| --- | --- | --- |
| Pressure vessels & heat exchangers | Full API 581 support | Primary M0/M1 asset class; most common IOW-linked equipment |
| Piping circuits | Full API 581 support | Covered by OPC UA SCADA connector; CML-tracked |
| Pressure Relief Devices | Flag only — engineer required | PRD-specific methodology; different consequence logic |
| Atmospheric Storage Tanks | Flag only — engineer required | Tank floor inspection has unique methodology (API 653) |
| Pipelines | Out of scope — v1 | Requires inline inspection data (ILI); different data model |
| Rotating equipment | Out of scope — v1 | Vibration-based RBI; different physics |
**Equipment scope communication.** This boundary must be communicated proactively to customers in the pilot framework (Appendix G). An operator who assumes Kav AI covers their tank farm and discovers it doesn’t during a pilot will not proceed.

8.2 Analytical Rigour — Confidence and Provenance

To satisfy enterprise integrity standards (API 581/510), Kav AI does not treat the analytical chain as a “black box.” Every safety-critical output carries clear provenance and uncertainty bounds.

8.2.1 Corrosion Rate (CR) Provenance

The platform applies a weighted hierarchy to corrosion rate inputs, favouring measured data over theoretical models:

  1. Level A (Measured): Derived from localized Ultrasonic Testing (UT) trend data at specific Corrosion Monitoring Locations (CMLs).
  2. Level B (Modeled): Derived from process-specific corrosion models (e.g., pH, H2S, CO2 concentration), if UT data is stale (>12 months) or unavailable.
  3. Level C (Generic): Derived from the API 571/581 knowledge base for standard materials in nominal service.
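A minimal sketch of how this hierarchy might be resolved per CML. The record fields and the 12-month staleness rule mirror the list above; the function and field names are illustrative, not the platform's actual API:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=365)  # Level A is demoted if the UT trend is >12 months old

def resolve_corrosion_rate(cml: dict):
    """Return the highest-provenance corrosion rate available for a CML record.

    Hypothetical optional fields:
      ut_trend_rate, ut_last_reading -- measured UT trend at the CML (Level A)
      model_rate                     -- process-chemistry model output (Level B)
      generic_rate                   -- API 571/581 knowledge-base default (Level C)
    """
    if cml.get("ut_trend_rate") is not None:
        age = datetime.utcnow() - cml["ut_last_reading"]
        if age <= STALE_AFTER:
            return cml["ut_trend_rate"], "Level A (Measured)"
    if cml.get("model_rate") is not None:
        return cml["model_rate"], "Level B (Modeled)"
    return cml["generic_rate"], "Level C (Generic)"
```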

8.2.2 Uncertainty Quantification (UQ)

Remaining Life (RL) estimates are never presented as single-point figures. Kav AI propagates input uncertainty across the entire analytical chain and reports the P90 Remaining Life with a 90% confidence interval (see Stage 5 of the analytical chain).
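As one way to make the propagation concrete, a hedged Monte Carlo sketch over the Stage 5 formula RL = (t_actual − t_min) / CR. The input distributions are illustrative placeholders, not field data, and P90 is reported as the value the true remaining life exceeds with 90% probability (the 10th percentile of the simulated distribution):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative input distributions (placeholders, not field data):
t_actual = rng.normal(12.5, 0.3, N)        # measured wall thickness, mm (UT gauge uncertainty)
t_min = 6.0                                # code-required minimum thickness, mm
cr = rng.lognormal(np.log(0.25), 0.3, N)   # corrosion rate, mm/year (provenance-level uncertainty)

rl = (t_actual - t_min) / cr               # Stage 5: RL = (t_actual - t_min) / CR

p90 = np.percentile(rl, 10)                # exceeded with 90% probability
ci90 = np.percentile(rl, [5, 95])          # 90% confidence interval
print(f"P90 RL = {p90:.1f} years, 90% CI = [{ci90[0]:.1f}, {ci90[1]:.1f}] years")
```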

8.2.3 Timestamp Integrity and Windowing

Correlation across heterogeneous sources (scan-based SCADA reads vs. event-driven historian records) is governed by a ±500 ms synchronisation window.
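A sketch of the windowing rule (illustrative; the production alignment pipeline is not specified in this document). A SCADA reading is paired with a historian record only when their nearest timestamps differ by at most 500 ms:

```python
from bisect import bisect_left

WINDOW_MS = 500

def pair_within_window(scada_events, historian_events):
    """Pair each SCADA reading with its nearest historian record if and only
    if the timestamps differ by <= 500 ms. Inputs are time-sorted lists of
    (timestamp_ms, value) tuples."""
    hist_ts = [ts for ts, _ in historian_events]
    pairs = []
    for ts, value in scada_events:
        i = bisect_left(hist_ts, ts)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(hist_ts)]  # nearest neighbours
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(hist_ts[k] - ts))
        if abs(hist_ts[j] - ts) <= WINDOW_MS:
            pairs.append(((ts, value), historian_events[j]))
    return pairs
```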

8.2.4 Boundary of Automation — Expert-in-the-Loop

Not all damage mechanisms are equally detectable via process telemetry. Kav AI maintains a strict boundary: standard, well-characterised mechanisms are auto-flagged (Tier 1), while complex mechanisms such as HIC, NH₄Cl underdeposit corrosion, and Creep are routed to a qualified Integrity Engineer for review (Tier 2).

8.2.5 CML and PIL Integration — Lifecycle Management

The 3D facility model serves as the spatial system of record for Piping Inspection Locations (PILs) and Corrosion Monitoring Locations (CMLs).

8.3 Risk Quantification Methodology — API 581 Alignment

Kav AI’s risk engine is structured for alignment with the API 581 Risk-Based Inspection (RBI) standard to ensure regulatory and insurance defensibility.

8.3.1 Probability of Failure (PoF)

The platform calculates PoF using the API 581 Damage Factor approach: a generic failure frequency for the equipment type, adjusted by time-dependent damage factors and a management systems factor.
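For reference, the API 581 Part 1 structure this maps onto can be written as (notation per the standard: gff is the generic failure frequency for the equipment type, D_f(t) the time-dependent damage factor, F_MS the management systems factor):

```latex
P_f(t) = \mathrm{gff}_{\text{total}} \cdot D_f(t) \cdot F_{MS},
\qquad
\text{Risk}(t) = P_f(t) \times \text{CoF}
```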

8.3.2 Consequence of Failure (CoF)

CoF is categorised into four primary streams:

  1. Flammable/Explosive: Area-based consequence of fire/explosion.
  2. Toxic Release: Dispersion modelling for H₂S, HF, or other hazardous process fluids.
  3. Environmental: Volume-based spill consequence for soil/water.
  4. Financial: Production loss and repair costs, benchmarked against Solomon Associates CPA™ facility profiles.

8.3.3 Ranked-to-Calibrated Transition

During the initial deployment and pilot phase (90 days), risk outputs are presented as Relative Risk Rankings for prioritisation. Full Calibrated Probability of Failure (mapping scores to physical failure frequencies) is achieved after the first ground-truth inspection cycle is ingested and the model is site-validated. The transition is triggered by a minimum dataset: a defined number of UT readings at registered CMLs with known inspection history, agreed with the operator during pilot onboarding.

9 AI Safety — Hallucination Mitigation & Grounding

Given the safety-critical nature of asset integrity, the Kav AI analytical chain (IOW/DMR) includes specific safeguards to mitigate AI hallucinations and ensure engineering-grade reliability. v3.2 kept the v2.9 safety model intact and extended it to handle cross-source correlation, robot patrol evidence, and nuclear-package requirements without weakening the original guardrails. v3.3 keeps these safeguards unchanged and clarifies their priority order: Filter Skill > Stage 3.5 consistency gate > cross-source correlation uplift. A multi-source confirmed finding can shorten the operator queue, but it cannot promote a Filter-rejected mechanism or override an INCONSISTENT flag.

9.1 Grounding via Deterministic “Filter Skills”

To eliminate LLM hallucinations in the IOW/DMR chain, every AI output is passed through a deterministic validation layer before it reaches the operator.
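To make the deterministic layer concrete, a toy material-mechanism Filter Skill. The plausibility table is abbreviated and illustrative; the production knowledge base derives from API 571:

```python
# Abbreviated, illustrative plausibility table (the production version derives from API 571).
PLAUSIBLE_MATERIALS = {
    "Chloride SCC": {"austenitic stainless steel"},
    "CUI": {"carbon steel", "low-alloy steel", "austenitic stainless steel"},
    "HIC": {"carbon steel"},
    "Sulfidation": {"carbon steel", "low-alloy steel"},
}

def filter_skill(mechanism: str, material: str) -> tuple[bool, str]:
    """Deterministic gate: reject any predicted mechanism that is chemically
    implausible for the asset's material of construction."""
    allowed = PLAUSIBLE_MATERIALS.get(mechanism)
    if allowed is None:
        return False, f"UNCERTAIN - REVIEW REQUIRED: unknown mechanism '{mechanism}'"
    if material.lower() not in allowed:
        return False, f"REJECTED: {mechanism} implausible for {material} - route to engineer"
    return True, "PASS"

# The Section 9.3 example: the model suggests SCC on a carbon steel asset.
print(filter_skill("Chloride SCC", "Carbon Steel"))
# (False, 'REJECTED: Chloride SCC implausible for Carbon Steel - route to engineer')
```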

9.1.1 Filter Skill Performance Targets (FR-AI-01)

9.2 Confidence Scoring and Calibration

Kav AI provides a 0.0–1.0 confidence score for every AI-generated output (mapping, detection, recommendation).

9.2.1 Confidence Score Calibration Protocol (FR-AI-02)

Confidence scores are calibrated against empirical accuracy before they are used as a dashboard threshold:

  1. For each confidence bucket (0.5–0.6, 0.6–0.7, 0.7–0.8, 0.8–0.9, 0.9–1.0), measure the proportion of outputs in that bucket that are confirmed correct by the operator pilot lead.
  2. Plot the calibration curve. A perfectly calibrated model has a diagonal curve (0.7 confidence = 70% accuracy). Report the calibration error.
  3. Adjust the dashboard surfacing threshold based on the calibration curve, not the raw confidence score. If the 0.7 bucket has 50% empirical accuracy, the surfacing threshold should be raised to the bucket that achieves the target accuracy.
  4. Include the calibration curve as a deliverable in the 90-day pilot success evaluation (Appendix G), updated at Week 9–10.

v3.2 / v3.3 extension: calibration must be reported separately for single-source findings, corroborated findings, and multi-source confirmed findings. Cross-source uplift is disabled by default until empirical accuracy demonstrates that the uplift is warranted. v3.3 additionally requires that the externally-quoted multi-source confirmed TPR > 98% / FPR < 2% target be reported against the same calibration curve and re-evaluated after every campaign and patrol cycle.

The current thresholds (0.7 for Action dashboard surfacing, 0.6 for UNCERTAIN flagging) are provisional until calibration data from the third inspection campaign is available.
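A minimal sketch of steps 1 and 2 of the protocol above (bucket accuracy and calibration error); the data handling is illustrative:

```python
import numpy as np

BUCKETS = [(0.5, 0.6), (0.6, 0.7), (0.7, 0.8), (0.8, 0.9), (0.9, 1.0)]

def calibration_report(confidences, confirmed):
    """confidences: model scores in [0, 1]. confirmed: 1 where the pilot lead
    confirmed the output correct, else 0. Returns per-bucket rows of
    (range, count, empirical accuracy) plus the mean absolute gap between
    bucket midpoint and empirical accuracy (the reported calibration error)."""
    confidences = np.asarray(confidences, dtype=float)
    confirmed = np.asarray(confirmed, dtype=float)
    rows, gaps = [], []
    for lo, hi in BUCKETS:
        in_bucket = (confidences >= lo) & (
            (confidences <= hi) if hi == 1.0 else (confidences < hi)
        )
        if not in_bucket.any():
            continue
        accuracy = confirmed[in_bucket].mean()
        rows.append((f"{lo:.1f}-{hi:.1f}", int(in_bucket.sum()), accuracy))
        gaps.append(abs(accuracy - (lo + hi) / 2))
    return rows, float(np.mean(gaps))
```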

9.2.2 OOD Detection Update Cadence (FR-AI-04)

The M0/M1 training distribution covers two facilities, so an OOD detector trained on it will flag the majority of inputs from a third facility as OOD. New-facility onboarding therefore treats the first campaign as baseline data collection: OOD flags in Weeks 1–4 recalibrate the detector rather than surface UNCERTAIN alerts, and detection performance from Week 5 onward represents steady-state (see Appendix G).

9.3 Safety Fallbacks and HITL

If an AI output fails a Filter Skill or returns a confidence score below 0.6:

  1. The output is automatically marked as “UNCERTAIN — REVIEW REQUIRED.”
  2. The Integrity Engineer is presented with the conflicting evidence (e.g., “AI suggests SCC, but material is Carbon Steel — check required”).
  3. The system prevents the anomaly from propagating to the “Critical Action” dashboard until manually resolved.

9.4 Chain-Level Consistency Gate (FR-AI-03)

The IOW/DMR chain is a 6-stage pipeline. A hallucination in Stage 3 that passes the Filter Skill will propagate through Stages 4, 5, and 6, compounding at each step. The consistency gate (Stage 3.5 in the analytical chain table above) performs three checks to prevent this:

  1. Material-mechanism consistency: Is the predicted damage mechanism chemically plausible given the asset’s material of construction and process fluid? (This is the existing Filter Skill.)
  2. Rate-mechanism consistency: Is the predicted corrosion rate in Stage 5 consistent with the predicted damage mechanism in Stage 3? For example, if Stage 3 predicts CUI (external), the corrosion rate should not be derived from internal process chemistry data.
  3. Evidence consistency: If Stage 4 physical validation data is available, does the visual evidence (thermal anomaly pattern, OGI reading) match what the predicted damage mechanism would produce? A thermal anomaly consistent with insulation damage is consistent with CUI. A uniform wall-loss pattern from UT is not consistent with pitting corrosion from NH₄Cl underdeposit.

Failures produce an explicit “INCONSISTENT — ENGINEERING REVIEW REQUIRED” flag rather than propagating to Stage 5. The inconsistency reason and the conflicting evidence are surfaced to the engineer alongside the flag.

From v3.2 onwards, multi-source confirmation strengthens evidence consistency only when the corroborating sources are genuinely independent. A second source can support a finding; it cannot override a Filter Skill rejection or erase a material-mechanism inconsistency. v3.3 restates the priority order explicitly — Filter Skill > Stage 3.5 > cross-source uplift — so that the cross-source correlation engine cannot be misread as a Filter Skill bypass. Nuclear-package deployments additionally require mandatory engineer review before any downstream CMMS handoff.
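A compressed sketch of this gating order as plain logic. The signature, flag strings, and uplift handling are illustrative, not the platform's API; note that uplift defaults to off, matching 9.2.1:

```python
def disposition(finding: dict, filter_ok: bool, consistent: bool,
                corroborated: bool, uplift: float = 0.0):
    """Priority order per v3.3: Filter Skill, then Stage 3.5, then cross-source
    uplift. Corroboration can raise confidence only on a finding that has
    passed both hard gates; it never promotes a rejected or INCONSISTENT one.
    uplift=0.0 by default: disabled until calibration warrants it (9.2.1)."""
    if not filter_ok:
        return "REJECTED", finding            # hard gate 1: deterministic Filter Skill
    if not consistent:
        return "INCONSISTENT - ENGINEERING REVIEW REQUIRED", finding  # hard gate 2
    if corroborated:
        finding["confidence"] = min(1.0, finding["confidence"] + uplift)
    if finding["confidence"] >= 0.7:          # provisional surfacing threshold (9.2.1)
        return "QUEUED FOR ACTION DASHBOARD", finding
    return "UNCERTAIN - REVIEW REQUIRED", finding
```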

10 Deployment Architecture — On-Premise, Air-Gapped, and Fixed-Infrastructure Options

Enterprise procurement in the oil and gas, chemical, and power sectors requires defined answers to three infrastructure questions: where does the data go, who controls the model, and can the system operate without cloud connectivity. v3.2 kept the v2.9 answers intact and added a fourth: what changes when the site adopts persistent robot coverage and higher-regulation operating constraints. v3.3 keeps all four answers in place and surfaces an additional procurement-relevant detail in line with the OI.Expert LOI commitments: on-premise / air-gapped deployments require no heartbeat, telemetry, or licence callbacks of any kind.

Deployment tier Description Data residency Availability
Cloud SaaS Kav AI-managed cloud infrastructure on Google Cloud. Multi-tenant with row-level security. Fastest to deploy. Suitable for operators with standard IT security postures. Kav AI GCP tenant (configurable region) Now (alpha)
Customer cloud tenant Container-based deployment into the operator’s own Azure or AWS environment. Kav AI provides the container package and upgrade process; the operator controls the infrastructure. Satisfies data sovereignty and IT security requirements of major industrial operators. Operator’s own cloud tenant. No data leaves operator environment. Q4 2026
On-premise / air-gapped Fully containerised deployment on operator-managed hardware within the OT/IT DMZ or corporate LAN. Includes self-hosted LLM inference (Llama-3-70B or Mistral-Large-24.07). Hardware Requirements: Min 128GB RAM, 16 vCPU, 2x NVIDIA A100 (80GB) or equivalent. Operator-managed hardware. 100% air-gapped; no heartbeat or license callbacks required. H1 2027 (roadmap)

10.1 SCADA Compatibility Matrix (Qualifications)

Deployment complexity for OPC UA varies by vendor and version; validated server configurations and their qualification status are listed in Appendix F.1.

OT/IT boundary by design Kav AI’s SCADA integration is read-only via OPC UA, operating from the IT side of the OT/IT boundary. Read-only is enforced technically via: (1) OPC UA Server user access rights restricted to ‘Read’ and ‘Subscribe’ services only; (2) DMZ proxy/aggregator configured to reject all Write/Call requests; and (3) Firewall TCP/IP port restriction (4840/4843). Kav AI supports outbound connection initiation from the OT side to satisfy strict “no-inbound” firewall policies.

10.1.1 Secrets and Credential Management

Kav AI does not store plain-text credentials in container configurations.

10.1.2 Container Supply Chain Security

Supply chain controls on container images protect the integrity of on-premise and tenant deployments.

10.2 Security and compliance targets

Framework Kav AI position Target date
SOC 2 Type II Security certification audit in planning. Controls mapped; observation period expected to commence Q2 2026 (target completion Q4 2026, subject to a successful observation window). Q4 2026 (Target)
IEC 62443 (OT security) Read-only OPC UA integration, no writes to OT systems, Purdue Model-compatible network segmentation in on-premise tier. On-premise tier (H1 2027)
Data sovereignty Customer cloud tenant and on-premise tiers provide full data residency within operator’s own environment. No telemetry or imagery leaves the operator security perimeter in these modes. Customer cloud (Q4 2026)
Authentication JWT with short-lived tokens, multi-tenant row-level security (verified via independent pentest), SSO integration via SAML 2.0 / OIDC. Q2 2026 (delivered)

10.3 Fixed infrastructure deployment

The fixed-infrastructure additions are treated in v3.2 as deployment extensions rather than as a separate product:

Component Specification Responsibility
Navigation beacons UWB or LiDAR-reflector-based references; target within 10cm of onboard baseline Kav AI specifies; operator installs
Communication backbone Private 5G, industrial Wi-Fi mesh, or hybrid; >= 50 Mbps uplink per patrol zone Kav AI specifies; operator provisions
Docking stations Power and wired network drops for charging and bulk data offload Kav AI specifies; operator installs
Coverage orchestration Runs inside Kav AI and publishes ranked inspection priorities to fleet software or operators Kav AI deploys

These components do not alter the OT/IT boundary described above. In air-gapped deployments they operate entirely within the operator’s network perimeter.

10.4 Regulated-environment considerations

Higher-regulation deployments add operating constraints without changing the core architecture:

Requirement v3.2 treatment
Lifetime retention environments Support storage sizing and retention policies for facilities that require multi-decade data preservation
Dose-aware operations Where radiation dosimetry is in scope, robot mission records and anomaly packets carry dose context for auditability
Mandatory on-premise profile Nuclear-style deployments default to on-premise / air-gapped operation with no cloud exception path
Connector hardening SAP PM and similar enterprise connectors follow customer authentication and certificate standards rather than introducing proprietary trust models

11 Partner-Integrated Delivery Model

Kav AI’s customers do not buy a platform in isolation. They buy a working integrity programme — anomalies that have been surfaced, reviewed by a qualified engineer, embedded in a Risk-Based Inspection (RBI) plan, and pushed into a CMMS work order. v3.3 formalises a delivery model in which the Kav AI Platform (KAP) is the technology layer and a qualified mechanical-integrity engineering partner provides the human Integrity Engineer panel.

What changed in v3.3 The partner-integrated model has been used in pilot conversations since v3.0. v3.3 makes it a documented part of the PRD because the OI.Expert × Kav AI Letter of Intent (April 2026) commits both parties to a single, unified service delivery model — and that model is now a procurement option a customer can ask for by name.

11.1 Why a partner channel exists

The Kav AI position on Human-in-the-Loop (HITL) validation is non-negotiable: no Critical severity output, no Remaining Life adjustment, and no automatic CMMS work order can be finalised without a qualified Integrity Engineer’s sign-off. That requirement creates a recurring engineering workload that some customers prefer to outsource to a specialist firm rather than staff in-house.

Without a partner With a partner
Customer staffs and trains its own Integrity Engineer panel for the HITL seat Partner provides the panel as a managed service
Customer integrates KAP findings into its existing RBI / IDMS programme on its own Partner embeds KAP findings into the RBI / IDMS programme as part of its core scope
Customer is responsible for DMR / IOW production for in-scope assets Partner produces and maintains DMRs and IOW recommendations
Procurement requires two separate contracts (platform + services) Single integrated procurement vehicle

Both models are supported. The partner channel is an option for customers who want a unified procurement vehicle and faster time-to-RBI-impact.

11.2 Reference partnership — OI.Expert × Kav AI

The reference partnership documented in the April 2026 LOI defines the shape of every partner channel:

11.2.1 What KAP provides

KAP supplies the technology layer of the engagement: multi-source ingestion and QC, the persistent 3D facility model, the six-stage IOW/DMR analytical chain with its Filter Skill and Stage 3.5 consistency gate, API 581-aligned risk quantification, and the structured CMMS / IDMS write packets that carry confirmed findings downstream. The stage-by-stage responsibility split in 11.2.3 details this scope.

11.2.2 What the partner provides

Service area Scope of work
Reactive engineering Inspection result interpretation, Fitness for Service (FFS), Failure & Root Cause Analysis, day-to-day materials / corrosion / welding consultation, Fire Assessment
Inspection management Facility Inspection Program Management, Inspection Test Plans, Special Emphasis Programs, RBI Programming, AutoCAD Isometrics, IDMS Support, Mechanical Integrity Audits
Advanced NDE & monitoring Online Corrosion Monitoring guidance, Advanced NDE Technology Application, Inspection & Reliability Standards
Corrosion & materials engineering Asset Material Selection, Degradation Mitigation, IOW Recommendations, IOW Deviation Response & Risk Management, Damage Mechanism Reviews (DMRs), Welding Engineering
Engineering projects Project Design Support (materials selection, WPS reviews, mechanical datasheets), Project Field Support, Third-Party Fabrication Inspection, Engineering Specification Development
Training & auditing Mechanical Integrity training programmes, audit support, compliance documentation
HITL validation seat Partner engineers serve as the Integrity Engineer panel for Stage 3.5 / Stage 4 confirmation of Critical and Tier-2 damage mechanism findings (e.g., HIC, NH₄Cl underdeposit, Creep)
RBI / IDMS embedding Partner embeds KAP findings into the customer’s existing RBI programme, IDMS records, and compliance workflows (API 580/581/584, API 510, API 570)

11.2.3 The integrated workflow

The end-to-end operational workflow follows the IOW / DMR six-stage analytical chain documented in the Integrity Analytical Chain section, with the partner explicitly placed at the Stage 3.5 / Stage 4 review seat:

Stage KAP responsibility Partner responsibility
1–2: Ingestion & QC, IOW classification Multi-source data normalised and QC’d; ±500 ms timestamp alignment; SCADA readings compared against API 584 IOW limits and scored by duration × intensity to prevent alarm fatigue.
3: Damage mechanism mapping Mechanisms mapped per API 571; Filter Skill rejects implausible mechanisms; Boundary of Automation auto-flags standard mechanisms while complex ones (HIC, NH₄Cl underdeposit, Creep) are routed to partner review. Reviews complex / Tier-2 mechanisms; sign-off on auto-flagged outputs at agreed cadence.
3.5: Consistency gate Three-way check: material-mechanism, rate-mechanism, evidence consistency, with cross-source correlation engine output as input. Disposition for any “INCONSISTENT — ENGINEERING REVIEW REQUIRED” flag before chain continues.
4: Physical validation Cross-references AI findings with UT thickness, thermal scans, OGI imagery, robot-sourced thermal / acoustic / gas patrol evidence. Validates AI findings against ground-truth NDT; advises on revised inspection technique if needed.
5: Risk quantification & remaining life API 581 PoF × CoF; P90 Remaining Life with uncertainty bounds; Solomon-benchmarked financial CoF. Engineering review of final risk score and inspection interval before commitment.
6: Corrective action surfacing Prioritised inspection plan and structured CMMS / IDMS write packet. Confirms work-order content, escalation routing, and shutdown alignment.

All outputs are recommendations to a qualified human operator. KAP never writes to SCADA, never issues work orders autonomously, and never commands field equipment. Read-only enforcement is technical — OPC UA access rights restricted to Read / Subscribe; DMZ proxy rejects all Write / Call requests; firewall TCP/IP port restriction (4840 / 4843).

11.3 Commercial framework for partner-integrated engagements

The pilot framework, MSA framework, and subscription tiers in Appendices G and H apply unchanged in a partner-integrated engagement. The only commercial additions are:

Topic Treatment
Procurement vehicle Single integrated proposal covering KAP subscription and partner engineering services. Customer may still elect to contract separately by request.
Pilot fee Standard 90-day fixed-fee pilot, applied as credit against Year 1 subscription if the customer proceeds. Partner engineering hours scoped separately.
Liability KAP liability cap remains as documented in MSA H.1. Partner liability follows the partner’s own engineering services agreement.
Data ownership All facility data and inspection imagery remains the customer’s property. KAP and partner each hold limited processing licences for their respective scopes.
HITL seat Documented as a partner deliverable in the integrated proposal so that the customer’s procurement and audit teams can see who owns the engineer sign-off.
Path to production If pilot success criteria are met, the customer receives a production proposal — KAP plus partner — within 10 business days of the evaluation meeting.

11.4 Where this connects in the rest of the PRD

Topic Where it lives
Pilot framework — 90-day timeline, success criteria, responsibilities matrix Appendix G
MSA framework — liability cap, decision-support disclaimer, data ownership Appendix H
Subscription tiers Appendix H.2
Operational workflow — Triage-to-Escalation Path that the partner sits inside Operational Workflow — Anomaly-to-Action
Integrity Analytical Chain — six-stage chain that the partner co-owns from Stage 3.5 Integrity Analytical Chain
Boundary of Automation — what partners must review by design Integrity Analytical Chain — FR-RBI-02

12 Appendix A. Feature Summary

FR Feature Area Quarter Type Priority
FR-VIS-01 3D CAD model overlay App Q4 Engineering Medium
FR-VIS-02 Geo-tagged assets & images in 3D App Q3 Engineering High
FR-APP-01 3D viewer & AI chat (M2) App Q2 Engineering Critical
FR-APP-02 Contextual data chat Ph.0 AI Assistant Q2 Engineering Critical
FR-APP-03 Contextual data chat Ph.1 AI Assistant Q3 Engineering High
FR-APP-04 Chat with 3D map App Q3 Engineering High
FR-APP-05 Interactive overlays App Q3 Engineering Medium
FR-APP-06 Automated reports AI Assistant Q3 Engineering Medium
FR-SCN-01 OGI sensor ingestion AI Assistant Q3 Engineering High
FR-SCN-02 Calibrated thermal ingestion AI Assistant Q3 Engineering High
FR-SCN-03 Gas sensor ingestion AI Assistant Q3 Engineering High
FR-INT-01 OPC UA SCADA connector AI Assistant Q3 Engineering High
FR-INT-02 P&ID SQL connector App Q4 Engineering Medium
FR-INT-03 IDMS bidirectional integration specification Platform Q3 Product Critical
FR-INT-04 SAP PM certified connector Platform Q4 Engineering High
FR-ANO-01 Cross-modal anomaly detection AI Assistant Q3/Q4 Research High
FR-ANO-02 Physical AI reasoning & remediation AI Assistant Q4* Research Critical
FR-MDA-01 Solomon Associates benchmarking AI Assistant Q3 Engineering High
FR-MDA-02 Synthetic data generation AI Assistant Q4* Research High
FR-SEC-01 SOC 2 Type II certification Platform Q4 Engineering Critical
FR-SEC-02 Customer cloud tenant deployment Platform Q4 Engineering High
FR-SEC-03 Compliance management App Q4 Engineering High
FR-NAV-01 Full spatial navigation App Q4 Engineering Medium
FR-RBI-01 API 581 inspection interval calculation AI Assistant Q3/Q4 Engineering High
FR-RBI-02 Equipment class boundary of automation Platform Q3 Product High
FR-AI-01 Filter Skill calibration & FNR measurement AI Assistant Q2 Engineering Critical
FR-AI-02 Confidence score calibration protocol AI Assistant Q3 Engineering High
FR-AI-03 Chain-level consistency gate (Stage 3.5) AI Assistant Q3 Engineering High
FR-AI-04 OOD detector update cadence AI Assistant Q3 Engineering Medium
FR-ROB-01 KRSI robot ingestion adapter Platform Q3/Q4 Engineering High
FR-ROB-02 Fixed infrastructure: navigation beacons Infrastructure Q4 Engineering High
FR-ROB-03 Fixed infrastructure: communication backbone Infrastructure Q4 Engineering High
FR-ROB-04 Coverage orchestration Platform Q4 Engineering Medium
FR-ROB-05 Fleet intelligence analytics App / AI Assistant Q4 Engineering High
FR-CAD-01 Additional CAD formats (IFC / RVT / DGN) Platform Q4 Engineering High
FR-CAD-02 CAD version tracking and diff visualisation App Q4 Engineering High
FR-CAD-03 As-built vs as-designed comparison App Q4 Engineering High
FR-CAD-04 Engineering change notification Platform Q4 Engineering Medium
FR-CAD-05 Cross-source correlation engine AI Assistant Q4 Engineering High
FR-NUC-01 Dose-aware inspection workflow Platform Q4 Engineering Medium
FR-XSC-01 Cross-source correlation engine — promoted to named primitive (tag / match / score / surface) AI Assistant Q3/Q4 Engineering Critical
FR-XSC-02 Multi-source confirmed TPR > 98% / FPR < 2% target reporting AI Assistant Q4 Engineering High
FR-PRT-01 Partner-integrated delivery model — single procurement vehicle, partner-provided HITL seat Platform / Commercial Q3 Product High
FR-PRT-02 Reference partnership — OI.Expert × Kav AI integrated proposal template Commercial Q2 Product High
FR-MVP-01 MVP demonstration sequence — public-facing v0.1 → v0.4 mapped to M0–M3 milestones Product Q2/Q3 Product Medium

Q4* = research-dependent, contingent on Q2 spike outcomes.

13 Appendix B. Research Classification and Spike Objectives

Features classified as “research-dependent” follow a structured validation protocol before engineering commitment.

Feature ID Spike Objective Success Metric Fallback
FR-ANO-01 Test Vision Transformer (ViT) vs. CNN on sparse OGI/Thermal defect data. TPR > 85% at FPR < 15% on M0/M1 benchmark. Defer to Q1 2027; rely on manual triage.
FR-ANO-02 Prototype RAG (Retrieval-Augmented Generation) vs. Structured Output Schemas for API 571 remediation logic. > 90% agreement with human Integrity Engineer panel. Descriptive reporting only; no remediation advice.
FR-MDA-02 Evaluate physics-based gas plume simulation for OGI synthetic data generation. Fréchet Inception Distance (FID) < 50 on real vs. synthetic. Manual labelling of third-refinery campaign data.

14 Appendix C. Non-Functional Requirements

Category Requirement Target
Performance AI query response time (P95) < 3 seconds
Performance 3D model rendering (GTX 1660+) 60 fps sustained
Scalability Concurrent operators per facility ≥ 50 concurrent sessions
Security Authentication and data isolation Multi-tenant RLS; JWT with short-lived tokens; SOC 2 target
Reliability Platform availability ≥ 99.5% uptime (excluding planned maintenance)
Deployment On-premise / air-gapped operation Full offline operation in on-premise tier (H1 2027)
Infrastructure Navigation accuracy with fixed beacons Within 10cm of onboard baseline
Infrastructure Communication backbone uptime ≥ 99.5% across patrol zones
Performance Cross-source correlation latency < 60 seconds per finding
Scalability High-endurance telemetry session duration ≥ 6 hours continuous
Regulated environments Lifetime data retention profile Supported for facility-lifetime retention deployments

15 Appendix D. Version History

Version Date Summary
v1.0 Feb 2026 Initial PRD — product overview, personas, and feature list.
v2.0 Feb 2026 Full feature cards for all 17 FRs, 9 sections, priority and phasing.
v2.1 Mar 2026 Added Research vs. Engineering classification, spike recommendations.
v2.2 Mar 2026 Added disclaimer, complexity scoring, sorted summary table, document scope statement.
v2.3 Mar 2026 Full document restructure for executive and investor audience. Industry-agnostic reframing. Added SCADA competitor category. Roadmap extended to Q2–Q4 2026. Engineering detail moved to Technical Appendix.
v2.4 Mar 2026 Added cover taglines. Named OPC UA as the Q3 SCADA integration protocol. Added OPC UA licensing open question. Added OPC UA, Historian, and SCADA to glossary.
v2.5 Mar 2026 Incorporated industry expert integrity review feedback. Added SCADA-to-risk closed-loop analytical chain (IOWs, DMRs, physical validation, risk scoring, remaining-life estimates). Added Solomon Associates benchmarking. Added DMR, IOW, and Solomon Associates to glossary.
v2.6 Mar 2026 Enterprise procurement edition. Added Section 5 (IOW/DMR chain with named architectural owner — the Data Retrieval Agent). Added Section 6 (deployment architecture: cloud SaaS, customer cloud tenant, on-premise/air-gapped tiers with IEC 62443 and OT/IT boundary design). Expanded competitive analysis to include Emerson Plantweb Optics/AMS, Hexagon ALI, and Aucerna. Added primary source citations for all market statistics. Added FR-SEC-02 (customer cloud tenant deployment) and FR-SEC-03 (compliance management) to feature summary.
v2.7 Mar 2026 Expert Review Panel edition. Incorporated feedback from 8 expert personas. Added Section 5.3 (Analytical Rigour & Confidence). Added Section 7 (Operational Workflow). Detailed protocol-level read-only enforcement and SCADA failure modes. Updated competitive landscape with Percepto, Hexagon ALI, and Aucerna refinements. Added AI safety and confidence scoring requirements. Added L1 Context Diagram (Figure 1) inline in Section 1 and L2 Container Diagram (Figure 2) inline in Section 5, both using C4 model convention.
v2.8 Mar 2026 Integrity Chain Completion edition. Addresses four gaps from v2.7 expert review panel: (1) RBI — inspection plan output schema (FR-RBI-01), equipment class boundary of automation (FR-RBI-02), API 581 inspection interval calculation added to Stage 5, consistency gate added as Stage 3.5; (2) IDMS integration — bidirectional integration specification (FR-INT-03), SAP PM certified connector (FR-INT-04), orKsoft coexistence architecture; (3) Competitive positioning — orKsoft added to competitive table, Meridium added, Cognite risk response sharpened with concrete moat strategy, primary competitor benchmark reframed; (4) AI reliability — Filter Skill FPR/FNR targets (FR-AI-01), confidence calibration protocol (FR-AI-02), chain-level consistency gate (FR-AI-03), OOD update cadence (FR-AI-04). Calibrated PoF transition now defined by minimum dataset trigger.
v2.9 Apr 2026 Strategic Positioning edition. Addresses six gaps from the v2.8 executive review: (1) Category definition — Kav AI explicitly defined as a Real-Time Integrity Intelligence System, establishing a new product category; (2) Competitive moat — dedicated section in Executive Summary consolidating four pillars: closed-loop intelligence, data flywheel, deployment speed, end-to-end hardware-agnostic solution; (3) Data flywheel — elevated from buried competitive mention to standalone section with compounding model performance narrative and pricing alignment; (4) Differentiation language — closed-loop between process data, physics-based models, risk quantification, and recommended action made unmistakable throughout; (5) Industry focus — narrowed from industry-agnostic to refinery and petrochemical beachhead with phased expansion roadmap; (6) Naming standardisation — unified from “Kav AI” / “KAP” variants to “Kav AI” across all sections. Executive Summary rewritten with stronger category-defining positioning. Subtitle updated to “Real-Time Integrity Intelligence System™”.
v3.2 Apr 2026 Strategic integration edition. Uses v2.9 as the narrative and diagrammatic baseline, restores the full visual layer that was reduced in v3.1, and incorporates the most material platform additions from later work: autonomous robot coverage via KRSI and fixed infrastructure, expanded CAD / engineering-data ingestion, digital-twin-backed as-built comparison, cross-source correlation, and regulated-environment deployment considerations.
v3.3 Apr 2026 Active Physical Intelligence edition. Keeps the v3.2 backbone, integrity architecture, and safety model intact. Consolidates external-facing material that matured between v3.2 and late April 2026: (1) Active Physical Intelligence™ adopted alongside the Real-Time Integrity Intelligence System™ category definition; (2) Scale & Impact promoted to a dedicated section with MVP v0.2 facility-scale evidence (1,200 assets, 65% unit coverage, $750K–$1.5M avoided cost, ranked inspection set, CUI case study); (3) cross-source correlation engine promoted from buried mention to named primitive (tag / match / score / surface) with explicit multi-source confirmed TPR > 98% / FPR < 2% target; (4) Onboard SLAM vs Fixed Infrastructure architectural-shift table added to clarify navigation reliability, scaling cost, and registration accuracy; (5) Partner-Integrated Delivery Model added as a documented procurement option, formalising the OI.Expert × Kav AI delivery pattern from the April 2026 LOI; (6) MVP demonstration sequence explicitly mapped onto M0–M3 platform milestones; (7) priority order Filter Skill > Stage 3.5 > cross-source uplift restated to prevent misreading the correlation engine as a Filter Skill bypass. New FRs: FR-XSC-01, FR-XSC-02, FR-PRT-01, FR-PRT-02, FR-MVP-01.

16 Appendix E. Glossary

3DGS 3D Gaussian Splatting — a photorealistic 3D rendering technique used to create navigable facility models from drone imagery.

ADR Architecture Decision Record — a formal document capturing a technical decision, its rationale, and consequences.

AG-UI Agent-UI protocol used for streaming AI responses and structured events from the backend to the frontend.

CDC Contextual Data Chat — a feature set (FR-APP-02) that enables operators to query facility data in natural language.

CMMS Computerized Maintenance Management System — the systems operators use to track work orders and asset maintenance history. Kav AI does not write to CMMS autonomously; all outputs are operator-confirmed recommendations.

DMR Damage Mechanism Review — a structured assessment of the degradation modes that can affect specific equipment based on its service conditions, materials, and operating history.

IDMS Inspection Data Management System — the platform that holds CML history, inspection plans, compliance records, and RBI models. Kav AI integrates bidirectionally with IDMS platforms (SAP PM, Meridium, Hexagon ALI, orKsoft).

IEC 62443 The international standard series for industrial cybersecurity. Defines security levels and requirements for OT systems including SCADA and control networks.

IOW Integrity Operating Window — a set of process parameter limits within which equipment is expected to operate safely. Exceedances trigger the damage mechanism analytical chain.

KRSI Kav AI Robot Sensor Interface — the robot-ingestion and normalisation layer that accepts patrol telemetry and payload data from supported autonomous platforms.

MCP Model Context Protocol — the protocol that enables AI agents to call object detection models and other tools as structured, callable interfaces.

OGI Optical Gas Imaging — a camera technology that visualises gas emissions invisible to standard cameras.

OOD Out-of-Distribution — inputs that differ materially from the training distribution. Kav AI’s OOD detector flags these and routes them to a Review queue rather than surfacing them as production alerts.

OPC UA OPC Unified Architecture (IEC 62541) — the universal industrial middleware standard providing secure, platform-independent read access to SCADA systems, historians, and PLCs. Kav AI’s Q3 SCADA integration uses OPC UA.

P&ID Piping and Instrumentation Diagram — the engineering drawing that documents process equipment, piping, and instrumentation.

Purdue Model A hierarchical reference model for industrial control system network architecture, defining five levels from field devices to enterprise systems. Kav AI operates at Level 3.5 (IT/OT DMZ) and above.

RBI Risk-Based Inspection — the methodology (API 580/581) for prioritising inspection effort based on the probability and consequence of equipment failure. Kav AI’s IOW/DMR chain produces API 581-aligned risk scores and inspection plans.

RLS Row-Level Security — database access control ensuring each operator can only access their organisation’s data.

SCADA Supervisory Control and Data Acquisition — the operational backbone of industrial facilities, collecting real-time sensor data from field devices. Kav AI reads from SCADA via OPC UA; it never writes to or replaces SCADA.

Solomon Associates An industry benchmarking organisation maintaining comparative databases of refinery and petrochemical plant performance. Kav AI uses Solomon benchmarks as the validation baseline for corrosion rates, equipment life estimates, and damage-mechanism statistics.

SOC 2 Service Organization Control 2 — a security certification audit framework widely required by enterprise customers.

17 Appendix F. Technical Integration Specification

This appendix defines the technical requirements and validated configurations for integrating Kav AI with an operator’s existing OT and IT infrastructure. It is intended for the operator’s OT engineering team and IT security department during procurement technical review.

17.1 F.1 OPC UA connector — supported configurations

Kav AI’s SCADA integration uses OPC UA (IEC 62541) in read-only subscription mode. The connector has been tested against the following OPC UA server implementations:

OPC UA server Vendor Tested version Status
Kepware KEPServerEX PTC / Rockwell 6.14+ Validated
Matrikon OPC Server Honeywell / Matrikon 4.x Validated
Ignition OPC UA module Inductive Automation 8.1+ Validated
Prosys OPC UA Simulation Server Prosys OMS 5.x Validated (dev/test)
Emerson DeltaV OPC UA Emerson 14.x+ In qualification
OSIsoft PI OPC UA AVEVA / OSIsoft PI Server 2018+ In qualification
Siemens S7 OPC UA Siemens TIA Portal V16+ Roadmap

17.2 F.2 Data ingestion parameters

Parameter Specification
Connection mode OPC UA subscription (preferred) or polling. Subscription mode reduces network load and delivers change-of-value events in near real-time.
Polling frequency (polling mode) Configurable: 1 second minimum, 60 seconds default. Sub-second polling is not supported in v1 of the connector.
Subscription update rate 100ms minimum update rate supported by the OPC UA standard; Kav AI default is 1 second. Configurable per tag group.
Authentication Username/password or X.509 certificate authentication. Anonymous connections not permitted in production deployments.
Encryption OPC UA Security Mode: SignAndEncrypt required. Minimum policy: Basic256Sha256.
Network requirement Kav AI connector operates from IT side of OT/IT boundary. Operator must provision a read-only OPC UA endpoint accessible from the IT DMZ. Kav AI never initiates connections from OT side.
Tag capacity Up to 10,000 monitored tags per facility in initial release. Higher capacity available on request.
Historian backfill On initial connection, Kav AI requests up to 90 days of historical data where the historian supports OPC UA Historical Data Access (HDA). Configurable.
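For OT teams evaluating the F.2 parameters, a read-only subscription sketch written against the open-source asyncua Python library. The library choice, endpoint URL, node ID, and certificate paths are illustrative assumptions; the shipped connector's internals are not specified in this appendix:

```python
import asyncio
from asyncua import Client

class ChangeHandler:
    """Receives change-of-value events pushed by the server-side subscription."""
    def datachange_notification(self, node, value, data):
        print(f"{node} -> {value}")

async def main():
    client = Client(url="opc.tcp://ot-dmz-proxy:4840")  # placeholder endpoint (default port)
    # F.2: Security Mode SignAndEncrypt, minimum policy Basic256Sha256, X.509 client cert.
    await client.set_security_string(
        "Basic256Sha256,SignAndEncrypt,client_cert.pem,client_key.pem"  # placeholder paths
    )
    async with client:  # connect; the session only ever reads and subscribes
        tag = client.get_node("ns=2;s=Unit100.TI-1001.PV")  # placeholder tag
        sub = await client.create_subscription(1000, ChangeHandler())  # 1 s default rate
        await sub.subscribe_data_change(tag)
        await asyncio.sleep(60)  # consume change-of-value events for one minute

asyncio.run(main())
```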

17.3 F.3 Historian compatibility

Historian Interface Notes
OSIsoft PI / AVEVA PI OPC UA HDA or PI Web API PI Web API preferred for richer metadata. OPC UA HDA supported for air-gapped deployments where PI Web API is not exposed.
InfluxDB InfluxDB HTTP API v2 Direct connector. No OPC UA required. Supports time-range queries and continuous subscriptions.
Honeywell Uniformance PHD OPC UA HDA In qualification. PHD OPC UA server configuration required.
Wonderware / AVEVA Historian OPC UA HDA In qualification. AVEVA Historian 2020+ required.
TimescaleDB PostgreSQL wire protocol Direct connector. Suitable for operators using TimescaleDB as a modern historian.
Generic SQL historian JDBC / ODBC Available for historians with SQL read access. Schema mapping required during onboarding.

17.4 F.4 Visual sensor compatibility

Kav AI is hardware-agnostic. The platform ingests data from any visual sensor source that produces output in supported formats.

Sensor type Supported formats Notes
RGB drone imagery JPEG, PNG, RAW, MP4 Compatible with DJI, Skydio, Parrot, Flyability, and any drone producing standard image formats.
Thermal (infrared) RJPEG, TIFF, radiometric JPEG Radiometric data required for calibrated temperature mapping. Compatible with FLIR, Teledyne, DJI Zenmuse XT2/H20T.
Optical Gas Imaging (OGI) MP4, AVI, MPEG Compatible with FLIR GF-series and Rebellion Photonics cameras. OGI video processed frame-by-frame.
LiDAR point cloud LAS, LAZ, E57, PLY Used for spatial reference and CAD overlay alignment. Not required for core platform operation.
Still photography JPEG, PNG, HEIC Compatible with any digital camera. Used for confined-space inspection and close-up defect documentation.

17.5 F.5 Network and firewall requirements

Connection Protocol Port Direction
Kav AI platform → OPC UA server OPC UA (TCP) 4840 (default) IT DMZ → OT DMZ. Operator provisions firewall rule.
Kav AI platform → PI Web API HTTPS 443 IT → IT/OT DMZ. Read-only.
Operator browser → Kav AI platform HTTPS / WSS 443 Internet → Kav AI (cloud SaaS) or internal (on-prem).
Kav AI platform → LLM API HTTPS 443 Cloud SaaS only. Absent in on-premise deployment.
Kav AI platform → Kav AI update service HTTPS 443 Cloud/customer-cloud only. Not required air-gapped.
Air-gapped deployment note In on-premise / air-gapped deployments, the connection to the LLM API is eliminated. Kav AI runs a self-hosted language model within the operator’s environment. The Kav AI update service connection is replaced by a manual container image delivery process. All other connections remain identical.

17.6 F.6 Infrastructure requirements by deployment tier

Component Cloud SaaS Customer cloud tenant On-premise / air-gapped
Compute Kav AI-managed Operator Azure/AWS VM (min 8 vCPU, 32 GB RAM) Operator server (min 8 core, 32 GB RAM, GPU optional)
Storage Kav AI-managed Operator-managed blob storage (Azure Blob / S3) Operator NAS or SAN (min 2 TB per facility)
Database Kav AI-managed Supabase Operator-managed PostgreSQL or Azure Database Operator-managed PostgreSQL (on-site)
LLM inference Kav AI cloud API Operator-provisioned API endpoint or Azure OpenAI Self-hosted model (min 2x NVIDIA A100 80GB or equivalent; see the on-premise tier in Section 10)
Container runtime Kav AI-managed Docker / Kubernetes (AKS or EKS) Docker or Kubernetes on-prem
Backup & DR Kav AI SLA Operator responsibility Operator responsibility

18 Appendix G. Pilot Framework

This appendix defines the standard 90-day proof-of-concept framework Kav AI uses for new enterprise customers. The pilot is designed to deliver a defensible, data-backed success evaluation before any long-term contract commitment is required.

Pilot philosophy The pilot is not a demo. It runs against the operator’s real inspection data, their real facility, and their real operational questions. Success or failure is measured against criteria agreed in writing before the pilot begins — not assessed retrospectively by Kav AI.

18.1 G.1 Standard pilot scope

Parameter Standard definition
Asset class One asset class per pilot (recommended: pressure vessels, heat exchangers, or piping circuits, the classes with full API 581 support under FR-RBI-02). Scope expansion available in Phase 2. Equipment class boundary of automation (see Integrity Analytical Chain section) must be communicated to the operator before pilot onboarding.
Facility One facility or one defined area of a larger facility (e.g. a process unit, a tank farm, or a compressor station).
Data sources Visual inspection imagery (RGB, thermal, or OGI) from at least one completed inspection campaign. SCADA / historian data is optional in pilot phase; included if operator elects to connect.
Duration 90 days from data ingestion to success evaluation meeting.
Operator commitment Named pilot lead (integrity engineer or operations manager). Access to historical inspection reports for cross-validation. Availability for three structured review sessions.
Kav AI commitment Dedicated customer success engineer for the pilot duration. Weekly progress updates. Full data deletion on pilot conclusion if operator does not proceed.
New facility onboarding The first campaign at a new facility is treated as baseline data collection. OOD flags are expected in Weeks 1–4 and are used to calibrate the detector, not to surface UNCERTAIN alerts. Detection performance in Weeks 5–12 represents steady-state.

18.2 G.2 Pilot timeline

Week Phase Activities
1–2 Onboarding Data transfer and ingestion. OPC UA connector configuration (if SCADA elected). 3D facility model generation from inspection imagery. Operator orientation session. OOD baseline calibration begins.
3–4 Baseline Kav AI generates initial anomaly detection results. Operator pilot lead reviews findings against known historical defects. Baseline accuracy established.
5–8 Active use Operator uses Kav AI natural language interface for real integrity queries. AI response quality reviewed. IOW/DMR chain activated if SCADA connected. Mid-pilot review session at Week 6.
9–10 Validation Kav AI findings cross-validated against operator’s existing inspection reports and CMMS records. False positive and false negative rate measured against agreed threshold. Confidence calibration curve generated and reviewed.
11–12 Evaluation Success evaluation meeting. Structured debrief against all success criteria. Calibration curve included as deliverable. Operator decision: proceed, extend, or conclude. Data deletion executed if not proceeding.

18.3 G.3 Success criteria

Success criteria are agreed in writing between Kav AI and the operator before data ingestion begins. The standard criteria set is defined below. Operators may substitute or add criteria by agreement.

# Criterion Standard threshold Measurement method
SC-1 Anomaly detection recall ≥80% of known defects from historical inspection reports identified by Kav AI without operator prompt Cross-validation against operator’s existing inspection records
SC-2 False positive rate ≤20% of Kav AI-flagged anomalies assessed as false positives by the operator pilot lead Pilot lead review of all flagged items
SC-3 Query response quality ≥70% of natural language queries rated ‘useful’ or better by pilot lead on a 5-point scale Structured query log reviewed at Week 6 and Week 12
SC-4 Time to first finding Kav AI surfaces first anomaly finding within 48 hours of completed data ingestion Timestamp of first flagged anomaly vs. ingestion completion
SC-5 AI response latency (P95) ≤5 seconds for natural language query response (P95) Automated latency logging during pilot period
SC-6 SCADA correlation (if connected) ≥1 confirmed correlation between a SCADA IOW exceedance and a visual inspection anomaly at the same asset Operator pilot lead validation of correlated finding
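For transparency on how SC-1 and SC-2 would be computed from the pilot lead's review, a minimal sketch; the set-based representation is illustrative:

```python
def pilot_metrics(known_defects: set, flagged: set, confirmed_true: set):
    """known_defects: defect IDs from historical inspection reports.
    flagged: anomaly IDs Kav AI surfaced during the pilot.
    confirmed_true: the subset of `flagged` the pilot lead confirmed as real."""
    recall = len(known_defects & flagged) / len(known_defects)  # SC-1: target >= 0.80
    fp_rate = len(flagged - confirmed_true) / len(flagged)      # SC-2: target <= 0.20
    return recall, fp_rate

# Example: 10 known defects, 8 re-found; 12 flags, 2 judged false positives.
known = {f"D{i}" for i in range(10)}
flags = {f"D{i}" for i in range(8)} | {"X1", "X2", "Y1", "Y2"}
print(pilot_metrics(known, flags, flags - {"X1", "X2"}))  # -> (0.8, 0.1666...)
```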

18.4 G.4 Responsibilities matrix

Activity Kav AI Operator
Data transfer and ingestion setup Provides ingestion pipeline and documentation Provides inspection data in supported format
OPC UA connector configuration Provides connector software and configuration guide Provisions read-only OPC UA endpoint; configures firewall
3D facility model generation Processes imagery and generates 3D model Provides imagery from completed inspection campaign
Historical defect cross-validation Provides tooling for structured comparison Provides historical inspection reports and CMMS records
Success criteria agreement Proposes standard criteria set; negotiates amendments Reviews and approves criteria in writing before ingestion
Weekly progress updates Delivers written update every Friday Reviews and responds within 2 business days
Data security during pilot Encrypts data in transit and at rest; access logging Responsible for data transfer security on operator side
Data deletion on pilot conclusion (no-proceed) Executes deletion within 5 business days; provides certificate Confirms deletion certificate received
Commercial decision N/A Named decision-maker attends evaluation meeting

18.5 G.5 Pilot commercial terms

Term Standard position
Pilot fee Fixed fee agreed prior to commencement. Applied as credit against Year 1 subscription if operator proceeds.
Data ownership All inspection data remains the property of the operator throughout the pilot and after its conclusion.
Intellectual property Kav AI retains all rights to platform software. Operator retains all rights to their facility data and inspection imagery.
Confidentiality Mutual NDA in place prior to data transfer. Kav AI does not use pilot data for model training without explicit written consent.
Exit rights Operator may terminate pilot at any time with 5 business days’ notice. Kav AI executes data deletion and issues deletion certificate.
Liability cap during pilot Kav AI liability capped at pilot fee paid. Kav AI outputs are recommendations only; operator retains all responsibility for operational decisions.
Path to production If success criteria are met, operator receives a production contract proposal within 10 business days of evaluation meeting.

19 Appendix H. Master Services Agreement Framework

This appendix describes the key commercial and legal positions Kav AI takes in its Master Services Agreement (MSA). It is intended to accelerate legal review by identifying Kav AI’s standard positions and areas where negotiation is anticipated. The full MSA template is provided separately by Kav AI’s legal counsel upon request.

This appendix is a summary of Kav AI’s standard MSA positions for discussion purposes. It does not constitute legal advice and does not supersede the executed MSA. Operators should engage their own legal counsel to review the full agreement.

Core legal position Kav AI is a decision support platform, not a control system. All outputs — anomaly findings, risk scores, remaining life estimates, and corrective action recommendations — are provided to a qualified human operator for their review and judgement. The operator retains full responsibility for all operational decisions. This position is reflected throughout the MSA and is not negotiable.

19.1 H.1 Key MSA provisions

Provision Kav AI standard position Negotiation status
Liability cap Kav AI’s total aggregate liability is capped at fees paid in the 12 months preceding the claim. Excludes gross negligence and wilful misconduct. Standard. Not negotiable below 12-month fee cap.
Consequential damages exclusion Kav AI excludes liability for indirect, consequential, and incidental damages including lost production, business interruption, and third-party claims. Standard. Mutual exclusion negotiable.
Decision support disclaimer Kav AI outputs are recommendations to qualified operators. Operator retains full responsibility for all operational decisions made in reliance on Kav AI outputs. Non-negotiable. Core to product liability position.
Data ownership All facility data and inspection imagery remains the property of the operator. Kav AI holds a limited licence to process the data for the purpose of providing the service. Standard. Operator IP protections negotiable.
Model training consent Kav AI does not use operator data for model training without explicit written consent. Anonymised, aggregated performance metrics are excluded from this consent requirement. Standard. Explicit opt-in required for training use.
Data security Kav AI complies with SOC 2 Type II (target Q4 2026). Encryption at rest and in transit. Breach notification within 72 hours. Standard. Additional security schedules negotiable.
Data deletion On contract termination, Kav AI deletes all operator data within 30 days and provides a deletion certificate. Backups purged within 90 days. Standard.
Uptime SLA 99.5% monthly uptime for cloud SaaS tier. Excludes planned maintenance windows (notified 48 hours in advance) and force majeure. SLA credits negotiable. Cap at one month’s fees.
Audit rights Operator may audit Kav AI’s data handling practices annually with 30 days’ notice, or following a security incident. Standard.
Governing law Ontario, Canada (Kav AI standard). Negotiable to operator’s jurisdiction for enterprise contracts. Negotiable.
Dispute resolution Good-faith negotiation (30 days), then binding arbitration under ICC Rules. Litigation waived by both parties. Negotiable to operator preference.
Term and renewal Initial term 12 months. Auto-renews for 12-month terms unless either party gives 60 days’ notice. Standard. Multi-year terms available at discount.

19.2 H.2 Subscription tiers

Feature Starter Professional Enterprise
Deployment tier Cloud SaaS only Cloud SaaS or customer cloud tenant All tiers including on-premise
Facilities covered 1 facility Up to 5 facilities Unlimited
Concurrent operators Up to 10 Up to 50 Unlimited
Inspection campaigns / year Up to 4 Up to 12 Unlimited
SCADA / OPC UA connector Not included Included (Q3 2026) Included (Q3 2026)
IOW/DMR analytical chain Not included Included (Q3 2026) Included (Q3 2026)
Cross-source correlation engine Not included Included (Q4 2026) Included (Q4 2026)
Autonomous robot patrol ingestion (KRSI) Not included Included (Q4 2026) Included (Q4 2026)
Partner-integrated delivery (e.g., OI.Expert) Not available Available on request Available on request
Solomon benchmarking Not included Included Included
SSO / SAML integration Not included Included Included
Dedicated CSM Not included Not included Included
SLA uptime guarantee 99.5% 99.5% 99.9%
Security review / pen test support Not included Annual report shared Dedicated engagement
On-premise / air-gapped Not available Not available Available (H1 2027)
Pricing basis Per facility / per year Per facility bundle / per year Enterprise licence / custom

Pricing is indicative. Final pricing provided in a separate commercial proposal. Volume discounts available for multi-facility and multi-year commitments.

Based on Kav AI’s experience with enterprise procurement in the oil and gas sector, the provisions marked negotiable in H.1 typically require negotiation or additional schedules during legal review. Kav AI’s legal counsel is prepared to engage on all of these.

Next steps for legal review To initiate legal review, the operator’s legal counsel should contact Kav AI to request the full MSA template and any applicable security schedules. Kav AI targets a 10-business-day turnaround on redline responses. For enterprise contracts, Kav AI’s legal counsel is available for a direct call to discuss substantive issues before formal redline exchange.