

Overview

Completing a run isn’t just clicking Complete. The completion review is where the run’s accumulated state — step results, measurements, attachments, build requirements installed, redlines applied, issues filed — gets reconciled into the records that downstream consumers (auditors, customers, planners) will rely on. This page covers the completion checklist, the review surfaces, and what happens to the run’s data after completion.

Prerequisites for completion

ION enforces the structural prerequisites at the data layer:
  • Every required step is signed off (complete).
  • Every required field on every required step has a value.
  • Every build requirement is installed (or excepted via an issue).
  • Every required approval gate is satisfied.
If any of these is unmet, the Complete run button is disabled and ION shows what is blocking completion. There are also organizational prerequisites that ION can warn on but doesn’t always block:
  • Open issues filed against the run — should be dispositioned before completion in most workflows.
  • Open redlines — should be closed (accepted or rejected) before the run’s record is locked.
  • Out-of-range measurements without an issue — usually require either an issue + disposition or a deliberate deviation acknowledgement.
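The split between blocking and advisory checks can be expressed as a small sketch. The field names on `RunSnapshot` are illustrative, not ION’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical run snapshot; field names are illustrative, not ION's API.
@dataclass
class RunSnapshot:
    steps_complete: bool
    required_fields_filled: bool
    build_reqs_installed: bool
    approvals_satisfied: bool
    open_issues: int = 0
    open_redlines: int = 0
    unexcepted_out_of_range: int = 0

def completion_check(run: RunSnapshot) -> tuple[list[str], list[str]]:
    """Return (blockers, warnings): blockers disable Complete run,
    warnings surface in the review but don't block."""
    blockers, warnings = [], []
    if not run.steps_complete:
        blockers.append("required steps not signed off")
    if not run.required_fields_filled:
        blockers.append("required fields missing values")
    if not run.build_reqs_installed:
        blockers.append("build requirements not installed or excepted")
    if not run.approvals_satisfied:
        blockers.append("approval gates unsatisfied")
    if run.open_issues:
        warnings.append(f"{run.open_issues} open issue(s)")
    if run.open_redlines:
        warnings.append(f"{run.open_redlines} open redline(s)")
    if run.unexcepted_out_of_range:
        warnings.append(f"{run.unexcepted_out_of_range} out-of-range measurement(s) without an issue")
    return blockers, warnings
```

A run with every structural prerequisite met but one open redline yields no blockers and one warning, mirroring ION’s warn-but-don’t-block behavior.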

The completion review

When prereqs are met:
  1. Open the run.
  2. Click Complete run.
  3. ION presents the completion review dialog with a checklist of items that should be reviewed:
    • Steps — count of steps completed; any failed-then-resolved steps.
    • Build requirements — every required part installed; any exceptions.
    • Redlines — count of open vs closed redlines; warning if any are still open.
    • Issues — open / dispositioned / resolved counts.
    • Approvals — every required gate satisfied.
  4. Optionally add a completion note (a free-text summary).
  5. Confirm.
ION:
  • Sets completedAt and completedBy.
  • Transitions the run to Complete.
  • Updates the target part inventory’s state to reflect the build outcome (typically available for the produced unit).
  • Closes any run-level approval gates.
  • Locks all step records — no further edits without explicitly reopening the run.

Reviewing run data after completion

A completed run is the canonical record of what was built. ION provides several surfaces for reviewing it:

Run Summary

The top of the run page shows headline stats:
  • Steps completed.
  • Cycle time (startedAt → completedAt).
  • Operators involved.
  • Issues opened during the run + dispositions.
  • Redlines applied.
  • Build requirements installed (count + parts).

Step-by-step view

Scroll the steps in their executed order. Each step shows:
  • Operator who signed off, timestamp.
  • Field values + measurement evidence (in-range / out-of-range).
  • Files attached.
  • Build requirement installs (which units went where, with serials and lots).
  • Comments.

Transaction history

The transaction history surfaces every change to the run: status transitions, sign-offs, redline applications, issue links, approvals. It’s the audit trail.
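Filtering the audit trail down to a single change type might look like this sketch; the entry shape is an assumption, not ION’s actual transaction schema:

```python
# Hypothetical transaction entries; the real audit trail shape is ION's.
history = [
    {"type": "status",  "detail": "todo -> in_progress",       "at": "2024-05-01T08:00Z"},
    {"type": "signoff", "detail": "step 3 signed off",         "at": "2024-05-01T09:15Z"},
    {"type": "redline", "detail": "redline applied to step 5", "at": "2024-05-01T10:02Z"},
    {"type": "status",  "detail": "in_progress -> complete",   "at": "2024-05-01T16:40Z"},
]

def transitions(entries: list[dict]) -> list[str]:
    """Pull just the status transitions out of the full audit trail."""
    return [e["detail"] for e in entries if e["type"] == "status"]
```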

As-built BOM (aBOM)

The completed aBOM is the parent/child tree of part inventories that compose the finished assembly. See Editing Build Requirements for the structure. The aBOM is the input to traceability reports and downstream service / repair workflows.
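A traceability report is essentially a depth-first walk of that parent/child tree. The node shape below is a hypothetical sketch, not ION’s aBOM schema:

```python
# Hypothetical aBOM: part inventories as a parent/child tree.
abom = {
    "part": "ASSY-100", "serial": "SN-001",
    "children": [
        {"part": "PCB-20", "serial": "SN-774", "children": []},
        {"part": "BRKT-7", "lot": "LOT-331", "children": [
            {"part": "FSTNR-2", "lot": "LOT-912", "children": []},
        ]},
    ],
}

def flatten(node: dict, depth: int = 0):
    """Depth-first walk yielding (depth, part, serial-or-lot) rows,
    the raw material of a traceability report."""
    ident = node.get("serial") or node.get("lot", "")
    yield depth, node["part"], ident
    for child in node.get("children", []):
        yield from flatten(child, depth + 1)
```

Each row carries the serial or lot recorded at install time, which is why build requirement installs during execution matter so much downstream.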

Reopening a run

Sometimes you need to reopen a run after completion — to fix data, add a missed sign-off, or adjust an installed build requirement that was incorrect.
  1. Open the completed run.
  2. Click Reopen run (admin permission required).
  3. Provide a reason.
  4. The run transitions back to In Progress. Steps are unlocked for the duration.
Reopening is recorded in the transaction history. Re-completing the run captures a second completedAt — the run’s history shows both completion events.
Reopening should be rare. Frequent reopens usually mean the procedure or process is wrong somewhere upstream.
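The reopen-and-recomplete history can be sketched as a simple event log with two completion events; the event names here are illustrative:

```python
def record(history: list[dict], event: str, at: str) -> list[dict]:
    """Append an audit event; sketch of the transaction-history entries."""
    history.append({"event": event, "at": at})
    return history

history: list[dict] = []
record(history, "complete", "2024-05-01T16:40Z")
record(history, "reopen",   "2024-05-02T09:00Z")  # reopen reason logged separately
record(history, "complete", "2024-05-02T11:30Z")  # second completedAt

completions = [e for e in history if e["event"] == "complete"]
```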

What downstream systems get

After a run completes, several downstream surfaces consume its data:
  • Inventory — the produced unit’s state updates; consumed inputs are marked accordingly.
  • Traceability reports — the aBOM is part of the genealogy of the produced unit.
  • Analytics — cycle time, step times, issue rates, scrap rates aggregate into dashboards.
  • External integrations — webhooks fire on run completion (see Webhooks) for ETL pipelines, ERP sync, customer notification systems.
  • Customer documentation — for customer-witnessed builds, the run record is the source for end-of-line reports.

Cycle time analytics

The most-asked-for run-level metric is cycle time. ION surfaces three:
  • _created → startedAt — staging time (how long the run sat in Todo).
  • startedAt → completedAt — execution time (the actual build duration).
  • Per-step time — RunStep.startedAt → completedAt for each step.
These propagate to dashboards under Analytics & Dashboards. When troubleshooting “this run took too long,” step-time is usually where the answer lives.
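As a worked example, the first two windows can be computed directly from the timestamps. The field names mirror this page; the timestamp format is an assumption:

```python
from datetime import datetime

def hours_between(a: str, b: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps (no timezone suffix)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds() / 3600

run = {"_created":   "2024-05-01T08:00:00",
       "startedAt":  "2024-05-03T08:00:00",
       "completedAt": "2024-05-03T14:30:00"}

staging   = hours_between(run["_created"], run["startedAt"])     # time in Todo
execution = hours_between(run["startedAt"], run["completedAt"])  # build duration
```

Here the run sat in Todo for two days but took only 6.5 hours to execute, which is exactly the kind of split the three windows are meant to expose.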

Tips

  • Treat the completion review as a real step. It’s not a rubber-stamp — it’s the last chance to catch a missing sign-off or an undocumented deviation. Reviewers should actually open each section.
  • Close redlines and issues before completing. Open redlines on a completed run are an inconsistent state — they imply work that wasn’t ratified. Most workflows should require closure as part of completion.
  • Use the completion note for context, not data. Don’t put structured data (measurements, IDs) in the completion note — that data belongs on a step. The note is for narrative context that doesn’t fit elsewhere.
  • Run completion should fire your downstream integrations. If you have an ETL or ERP that depends on the run’s data, set up a webhook on run.complete rather than polling. See Webhooks.