
How Daqiq processes a claim.

A complete walkthrough of the claim pipeline — ingestion, tokenization, extraction, policy lookup, agent adjudication, trace emission, and human review. Named mechanisms, named integrations.

The seven steps.

  1. Ingestion.

    Claim evidence arrives as a mix of PDFs, images, and email attachments via a REST endpoint or SFTP drop. Document classification runs first, so Najm liability reports, Taqdeer assessments, and your carrier-authored forms pick up the right schema.
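A minimal sketch of what class-to-schema routing implies, assuming illustrative class names and schema paths (none of these identifiers are Daqiq's):

```typescript
// Hypothetical document classes and per-class extraction schemas.
type DocClass = "najm_liability" | "taqdeer_assessment" | "carrier_form";

const SCHEMA_BY_CLASS: Record<DocClass, string> = {
  najm_liability: "schemas/najm-liability.json",
  taqdeer_assessment: "schemas/taqdeer-assessment.json",
  carrier_form: "schemas/carrier-form.json",
};

// Classification runs first; extraction then loads the matching schema.
function schemaFor(docClass: DocClass): string {
  return SCHEMA_BY_CLASS[docClass];
}
```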

  2. Tokenization.

    Before any outbound inference call, national IDs, phone numbers, IBANs, Najm case numbers, and policy-holder names are replaced with format-preserving opaque tokens. The token-to-plaintext mapping lives in an in-Kingdom KMS-backed vault. Only the final report renderer detokenizes, and only inside the rendering context.
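A rough sketch of format-preserving tokenization, with a Map standing in for the KMS-backed vault (function names are illustrative, not Daqiq's API): digits map to random digits and letters to random letters, so the token keeps the shape downstream validators expect:

```typescript
// Token → plaintext mapping; in production this lives in the in-Kingdom vault.
const vault = new Map<string, string>();

// Replace each digit with a random digit and each letter with a random
// uppercase letter, preserving length and character classes.
function tokenize(plaintext: string): string {
  const token = plaintext
    .replace(/[0-9]/g, () => String(Math.floor(Math.random() * 10)))
    .replace(/[A-Za-z]/g, () => String.fromCharCode(65 + Math.floor(Math.random() * 26)));
  vault.set(token, plaintext);
  return token;
}

// Only the report renderer calls this, and only inside the rendering context.
function detokenize(token: string): string {
  const plaintext = vault.get(token);
  if (plaintext === undefined) throw new Error("unknown token");
  return plaintext;
}
```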

  3. Extraction.

    A frontier-LLM call against a per-document-class JSON schema pulls structured fields out of Arabic text natively — no translation layer. The call runs in europe-west4 because no frontier-LLM provider operates a Middle East region as of April 2026.
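For illustration only, a per-document-class schema for a Najm liability report might constrain the fields the extraction call must return. Every field name below is an assumption, not the actual schema:

```typescript
// Hypothetical JSON schema for the "najm_liability" document class.
const najmLiabilitySchema = {
  type: "object",
  required: ["case_number", "driver_at_fault", "fault_party", "accident_date"],
  properties: {
    case_number: { type: "string" },
    driver_at_fault: { type: "boolean" },
    fault_party: { type: "string", enum: ["insured", "third_party"] },
    accident_date: { type: "string", format: "date" },
  },
} as const;
```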

  4. Policy and history lookup.

    The agent calls your policy API and claim-history service, both read-only. Missing coverage fails the claim before adjudication, so no decision ever runs against an out-of-scope policy. Every lookup records a trace step with its response hash.
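What a trace step with a response hash amounts to, as a sketch (shape and names are illustrative): hashing the raw response body pins down the exact bytes the agent saw.

```typescript
import { createHash } from "node:crypto";

interface TraceStep {
  tool: string;
  responseHash: string; // hex SHA-256 of the raw response body
  at: string;           // ISO timestamp
}

// Record a read-only lookup as an auditable trace step.
function recordLookup(tool: string, responseBody: string): TraceStep {
  return {
    tool,
    responseHash: createHash("sha256").update(responseBody).digest("hex"),
    at: new Date().toISOString(),
  };
}
```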

  5. Agent adjudication.

    A tool-using agent — your choice of frontier LLM — runs your claims playbook. Triage, fraud rules, and the decision tree live as bilingual natural-language paragraphs that compile to versioned rules. The tool surface is locked down per workflow; nothing else is reachable.

  6. Decision and trace emission.

    The agent writes an append-only trace: every tool call, every retrieval, every rule evaluation, the final decision and its rationale. The trace is SHA-256 hash-chained to the prior trace for the same workflow, so tampering is detectable by construction.
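The chaining itself is simple, and that is the point: each trace hash covers the prior hash, so rewriting any historical trace invalidates every hash after it. A sketch under assumed names:

```typescript
import { createHash } from "node:crypto";

// Hash of this trace, covering the previous trace's hash.
function chainHash(prevHash: string, tracePayload: string): string {
  return createHash("sha256").update(prevHash + tracePayload).digest("hex");
}

// Verifying a workflow's chain is a single pass from the genesis value.
function verifyChain(genesis: string, payloads: string[], hashes: string[]): boolean {
  let prev = genesis;
  for (let i = 0; i < payloads.length; i++) {
    if (chainHash(prev, payloads[i]) !== hashes[i]) return false;
    prev = hashes[i];
  }
  return true;
}
```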

  7. Human on top of the loop.

    Your reviewer sees the decision, the rationale, and the trace in one view. They can approve, override, escalate, or reopen — every human action joins the same trace. Production traffic is gated behind an agreement-rate threshold your team sets.
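A minimal sketch of the agreement-rate gate, assuming the threshold is checked over a window of reviewed decisions (names and shapes are illustrative):

```typescript
interface ReviewedDecision {
  agent: string; // e.g. "approve" | "deny" | "escalate"
  human: string; // the reviewer's final action on the same claim
}

// Fraction of decisions where the human confirmed the agent.
function agreementRate(decisions: ReviewedDecision[]): number {
  if (decisions.length === 0) return 0;
  const agreed = decisions.filter((d) => d.agent === d.human).length;
  return agreed / decisions.length;
}

// Production traffic flows only while the rate clears the carrier-set bar.
function productionGateOpen(decisions: ReviewedDecision[], threshold: number): boolean {
  return agreementRate(decisions) >= threshold;
}
```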

Integrations.

Named partners, named status. Every integration records the regulation or control it satisfies.

Integration           Category                            Status    Regulation
Najm NAOS             Motor — liability report            shipping  SAMA 15-day adjudication
Taqdeer               Property, motor — loss assessment   shipping  Loss adjuster registration
NPHIES                Health — claim payload              pilot     CCHI data exchange
Absher                Identity verification               pilot     PDPL Art. 6 — lawful basis
Elm                   Vehicle history                     roadmap   Fraud-detection cross-reference
ZATCA Fatoora         E-invoicing — phase 1               shipping  VAT Implementing Regulation, Art. 53
Carrier PAS adapter   Policy system — read-only           shipping  Three Saudi carrier stacks supported

What your claims playbook looks like.

Your playbook is authored in plain Arabic or English. Your admin edits a paragraph; a versioned rule compiles. Your reviewer sees exactly why a claim went the way it did — the paragraph and the rule are the same object from two angles.

Playbook paragraph
Section 2.3 · Fraud triage

If the Najm liability report lists driver_at_fault and the fault party is not the insured, escalate to the senior adjuster queue. Otherwise, route to the auto-approve queue only when the extracted confidence is at or above 0.85.
Compiled rule
rule "fraud.triage" {
  when:
    police_report.driver_at_fault == true
    && fault_party != "insured"
  then:
    escalate("senior_adjuster")
  else:
    route("auto_approve", min_confidence: 0.85)
}
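The same rule, restated as a plain function to make the branch order concrete. The manual-review fallback for sub-threshold confidence is an assumption added here; the playbook paragraph leaves that case unspecified:

```typescript
interface Claim {
  policeReport: { driverAtFault: boolean };
  faultParty: string;          // "insured" | "third_party"
  extractedConfidence: number; // 0..1, from the extraction step
}

// Mirrors rule "fraud.triage": escalate first, then confidence-gate.
function triage(claim: Claim): string {
  if (claim.policeReport.driverAtFault && claim.faultParty !== "insured") {
    return "senior_adjuster";
  }
  return claim.extractedConfidence >= 0.85 ? "auto_approve" : "manual_review";
}
```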

What you see once live.

Live dashboards, immutable logs, and a cross-border transfer register — every surface inside the product. Reviewers browse the live system.

  • Per-claim trace, deep-linkable from any decision. Every tool call, every retrieval, every rule evaluation, the rationale.
  • Per-line KPI dashboard: auto-approve rate, escalation rate, denial rate, fraud-hit rate, TAT p50 and p95, average confidence.
  • Drift alerts when a workflow's KPIs exceed baseline ± 1.5σ for three consecutive snapshots, with the candidate root cause populated.
  • Immutable admin audit log of every staff action: force-save, invoice mark-paid, dispute recorded, password reset issued.
  • PII vault access log recording who detokenized what, when, and for which claim.
  • Cross-border transfer register with source region, destination region, purpose, byte count, and payload hash.
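The drift condition above is mechanical enough to sketch directly: a KPI drifts when it sits outside baseline ± 1.5σ for three consecutive snapshots (function name and signature are illustrative):

```typescript
// True once three consecutive snapshots fall outside baseline ± 1.5 sigma.
function isDrifting(snapshots: number[], baseline: number, sigma: number): boolean {
  let run = 0;
  for (const value of snapshots) {
    run = Math.abs(value - baseline) > 1.5 * sigma ? run + 1 : 0;
    if (run >= 3) return true;
  }
  return false;
}
```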

What runs where.

Every Saudi carrier byte stays in me-central1 (Dammam). The only outbound traffic is the inference call, tokenized in-Kingdom first, routed to europe-west4 because no frontier-LLM provider operates a Middle East region. Moving inference to a KSA-based HSM is a Phase-2 upgrade.

[Architecture diagram. me-central1 (Dammam), in-Kingdom: Cloud SQL database, KMS-backed PII vault, append-only audit log, BullMQ worker, admin console. europe-west4 (Netherlands), inference: frontier LLM, primary and fallback, receiving tokenized payloads with no plaintext PII and returning Arabic-native structured responses.]
The payload is tokenized in-Kingdom before any inference call; the token-to-plaintext map never leaves me-central1.
The same diagram lives in your DPA Cross-Border Annex once under contract, with your tokenization key identifiers pre-filled.

See it on your own claims.

Send us a note with one line of business and where the current process is breaking. We'll scope a diagnostic together.