Product

Steward is built around one rule: every answer carries a citation, or no answer goes out. Everything else — the member UI, the rep workspace, the retrieval pipeline, the audit log — flows from that.

For members

A mobile-first Q&A surface. A member types a question; Steward retrieves from the corpus the local has loaded (CBA, side letters, MOUs, bylaws, plus a shared base layer for the labor regime that governs that local's craft: federal statutes, regulator and board decisions, federal regulations); the model writes a short answer with inline citations to the source paragraphs. If the corpus does not support an answer, Steward says so and offers to flag for rep, which queues the question for the local's officers instead of pretending to have an answer.
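The citation-or-refuse flow above can be sketched in a few lines. This is a hypothetical illustration, not Steward's implementation: the `Chunk` shape, the `min_score` threshold, and the `corpus_search` / `generate` callables are all assumptions.

```python
# Hypothetical sketch of the citation-or-refuse answer flow.
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str      # e.g. "cba-2021" (illustrative identifier)
    paragraph: str   # paragraph reference for the inline citation
    text: str
    score: float     # retrieval confidence

def answer(question: str, corpus_search, generate, min_score: float = 0.6):
    """Return (answer_text, citations), or a flag-for-rep refusal."""
    chunks = [c for c in corpus_search(question) if c.score >= min_score]
    if not chunks:
        # No supporting paragraphs: refuse and queue for the local's officers.
        return ("I don't know — flag for rep.", [])
    text = generate(question, chunks)  # model writes from retrieved text only
    citations = [(c.doc_id, c.paragraph) for c in chunks]
    return (text, citations)
```

The point of the shape is that the refusal branch is structural: there is no path from an empty retrieval to a generated answer.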

Plain HTML, no live socket: it works over a bad cell connection in the field.

For reps and officers

A workspace, not a chat box. Each research thread is its own scratchpad:

  • Q&A loop with the same grounded-retrieval pipeline as members, plus matter-document attachments scoped to that thread.
  • Matter uploads — drop an employer denial letter or claim file into a thread. Steward chunks and indexes it for that thread only; it never bleeds into other threads or into member-side retrieval.
  • Flagged-question queue — what members asked that Steward couldn't answer, sorted by recency and member.
  • Draft grievance assistance — citation-slot-only suggestions; counsel-gated, never auto-filed.
  • Audit log — every retrieval, every answer, every chunk surfaced, reconstructable on demand.
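The thread-scoping guarantee for matter uploads can be sketched as a per-thread index namespace. This is a minimal illustration under assumed names (`ThreadScopedIndex`, substring matching standing in for real retrieval), not Steward's actual index.

```python
# Hypothetical sketch: matter uploads are indexed under a per-thread
# namespace, so they never surface outside that thread.
from collections import defaultdict

class ThreadScopedIndex:
    def __init__(self):
        self._by_thread = defaultdict(list)  # thread_id -> matter chunks

    def add_matter_doc(self, thread_id: str, chunks: list) -> None:
        # Chunks from an uploaded denial letter or claim file land only
        # in this thread's namespace.
        self._by_thread[thread_id].extend(chunks)

    def search(self, thread_id: str, query: str) -> list:
        # Only this thread's matter chunks are candidates; other threads'
        # uploads and member-side retrieval are untouched.
        return [c for c in self._by_thread[thread_id]
                if query.lower() in c.lower()]
```

Isolation falls out of the keying: a query carries its thread id, so a chunk indexed under one thread is unreachable from any other.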

Retrieval, plainly

Three things make Steward's retrieval different from a generic chatbot wired to documents:

  • Supersession-aware. Effective dates and supersession chains are loaded with the corpus, not bolted on. A clause that was rewritten in a 2018 side letter does not get retrieved as if the 2014 original were live.
  • Citation-or-refuse. If retrieval comes up empty or low-confidence, the answer is "I don't know — flag for rep." No silent fallback to model knowledge.
  • Local-models-only. No hosted LLM, no API to OpenAI / Anthropic / Google. The model runs on infrastructure we (or you) control.
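Supersession-aware filtering, as in the first bullet, can be sketched as metadata carried by each clause. The `Clause` fields and year-granularity effective dates are assumptions for illustration; the idea is only that superseded text is excluded before retrieval, not after.

```python
# Hypothetical sketch: supersession metadata excludes clauses that a
# later instrument (e.g. a side letter) has rewritten.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Clause:
    clause_id: str
    source: str                          # e.g. "CBA 2014", "Side letter 2018"
    effective: int                       # year the text took effect
    superseded_by: Optional[str] = None  # clause_id of the replacing text

def live_clauses(clauses):
    """Keep only clauses no later instrument has superseded."""
    return [c for c in clauses if c.superseded_by is None]
```

With this filter in front of retrieval, a 2014 original whose `superseded_by` points at a 2018 side-letter clause simply never enters the candidate set.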

More on data sovereignty and DFR posture →