PMOS
Aggregate

Cross-module synthesis into feature suggestions.

Labeled items from Discover, Listen and Compare feed an LLM that clusters them into opportunities. Every suggestion links back to the evidence that backs it.

What it is

Where three module feeds collapse into one list of bets.

Each module produces a stream of labeled items. Aggregate is where those streams meet: one LLM pass reads the combined corpus — scoped to the active product — and produces a batch of feature suggestions. Each suggestion has a title, a one-paragraph rationale, a signal strength, categories, and links back to the items that justify it.

Flow

Generate → inspect → write PRD.

Every run is a snapshot batch. You keep history, cycle between batches, and only the current one is editable. Writing a PRD from a card triggers the PRD writer with the evidence attached.

1. Generate batch

POST /aggregate/generate
The LLM reads labeled items from Discover, Listen and Compare, clusters related signals, and writes suggestion cards. Inspect the prompt first via prompt-preview.
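
The call can be sketched as a tiny request builder. The body shape below (scoping by a `product_id`) is an assumption for illustration; the real endpoint may infer the active product from session state.

```python
import json

# Hypothetical sketch: the request-body shape for POST /aggregate/generate
# is an assumption; the real API may derive the product scope implicitly.
def build_generate_request(product_id: str) -> tuple[str, str, str]:
    """Return (method, path, JSON body) for a generate-batch call."""
    body = {"product_id": product_id}  # scope the item corpus to one product
    return "POST", "/aggregate/generate", json.dumps(body)

method, path, body = build_generate_request("acme-app")
```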

2. Inspect evidence

GET /suggestions/{id}/evidence
Each card expands to reveal the items it was built from — source, title, module of origin, summary, and relevance score.
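
A minimal sketch of how a client might render that expansion, grouping evidence by module of origin with the most relevant items first. The field names and sample data are assumptions based on the card description above.

```python
from collections import defaultdict

# Sample items as GET /suggestions/{id}/evidence might return them;
# field names here follow the card description and are assumptions.
evidence = [
    {"module": "Discover", "title": "Forum thread on export pain", "relevance": 0.91},
    {"module": "Listen",   "title": "Support ticket: CSV export",  "relevance": 0.84},
    {"module": "Discover", "title": "Request for bulk export",     "relevance": 0.78},
]

def group_by_module(items):
    """Group evidence items by module of origin, highest relevance first."""
    groups = defaultdict(list)
    for item in sorted(items, key=lambda i: i["relevance"], reverse=True):
        groups[item["module"]].append(item)
    return dict(groups)

groups = group_by_module(evidence)
```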

3. Write PRD

POST /suggestions/{id}/write-prd
The PRD writer runs a competitor research loop, then drafts a markdown PRD scoped to the suggestion. It lands in /prds ready to edit.
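
The drafting step can be approximated as assembling markdown from the card plus its evidence. This is a sketch, not the PRD writer's actual template; the suggestion's field names are assumptions.

```python
def draft_prd(suggestion: dict) -> str:
    """Assemble a minimal markdown PRD from a suggestion card (assumed shape)."""
    lines = [
        f"# {suggestion['title']}",
        "",
        "## Rationale",
        suggestion["rationale"],
        "",
        "## Evidence",
    ]
    for item_id in suggestion["evidence"]:
        lines.append(f"- item {item_id}")
    return "\n".join(lines)

prd = draft_prd({
    "title": "Bulk CSV export",
    "rationale": "Users repeatedly ask to export data in bulk.",
    "evidence": [101, 214],
})
```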

4. Cycle batches

GET /aggregate/batches
Old batches never disappear. Scroll through them to see how the week changed what the system considered a bet.
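
Since only the newest snapshot is editable, a client can derive the current batch from the list. The batch fields below are assumptions for illustration.

```python
# Batches as GET /aggregate/batches might list them; field names are
# assumptions. Only the most recent snapshot is editable.
batches = [
    {"id": "b1", "created_at": "2024-05-06"},
    {"id": "b2", "created_at": "2024-05-13"},
    {"id": "b3", "created_at": "2024-05-20"},
]

def current_batch(batches):
    """The editable batch is the newest snapshot; older ones are read-only."""
    return max(batches, key=lambda b: b["created_at"])

editable = current_batch(batches)
```
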
What a suggestion card shows
  • Title: one-line feature name
  • Rationale: a short paragraph explaining the opportunity
  • Signal strength: how often / how loudly the items backed this
  • Categories: a handful of accent-tinted tags
  • Evidence links: expand on click, grouped by module
  • Module of origin: Discover · Listen · Compare
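
Mirrored as a data structure, a card looks roughly like this. The types and defaults are assumptions; the field list follows the description above.

```python
from dataclasses import dataclass, field

# A suggestion card's fields as a dataclass; exact types are assumptions.
@dataclass
class SuggestionCard:
    title: str                    # one-line feature name
    rationale: str                # short paragraph on the opportunity
    signal_strength: float        # how often / loudly items backed this
    categories: list[str] = field(default_factory=list)
    evidence_ids: list[int] = field(default_factory=list)  # links to items
    modules: list[str] = field(default_factory=list)       # Discover/Listen/Compare

card = SuggestionCard(
    title="Bulk CSV export",
    rationale="Users repeatedly ask to export data in bulk.",
    signal_strength=0.8,
    categories=["export", "data"],
    evidence_ids=[101, 214],
    modules=["Discover", "Listen"],
)
```
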
Under the hood

Snapshot batches kept in SQLite.

Table
feature_suggestions — one row per card, with prd_content and github_issue_url persisted back.
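
A sketch of that table in SQLite. Only `prd_content` and `github_issue_url` are named in the source; the other columns are assumptions based on the card fields.

```python
import sqlite3

# Hypothetical schema sketch; columns beyond prd_content and
# github_issue_url are assumptions inferred from the card description.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feature_suggestions (
        id INTEGER PRIMARY KEY,
        batch_id TEXT NOT NULL,     -- snapshot batch this card belongs to
        title TEXT NOT NULL,
        rationale TEXT,
        signal_strength REAL,
        prd_content TEXT,           -- persisted back after write-prd
        github_issue_url TEXT       -- persisted back after issue creation
    )
""")
conn.execute(
    "INSERT INTO feature_suggestions (batch_id, title) VALUES (?, ?)",
    ("b3", "Bulk CSV export"),
)
row = conn.execute("SELECT title FROM feature_suggestions").fetchone()
```
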
Batching
Every generate writes a batch identifier. Old batches stay readable; only the current one is editable.
Evidence
Evidence links are item IDs, so a signal can appear in multiple suggestions without duplication.
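
That normalization can be sketched directly: items live in one shared store, and suggestions hold only IDs, so a signal backing two suggestions is never copied. The sample data is illustrative.

```python
# Shared item store keyed by ID; suggestions reference IDs, not copies.
items = {
    101: {"title": "Forum thread on export pain"},
    214: {"title": "Support ticket: CSV export"},
}

suggestions = [
    {"title": "Bulk CSV export",   "evidence": [101, 214]},
    {"title": "Scheduled exports", "evidence": [101]},  # same item, no duplication
]

def resolve_evidence(suggestion):
    """Expand a suggestion's evidence IDs into the shared item records."""
    return [items[i] for i in suggestion["evidence"]]
```
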
Prompt preview
Before generating, you can inspect the exact system prompt + item corpus that will go to the agent.
Highlights

Why Aggregate is the heart of the pipeline.

01 · Evidence-linked

Every opportunity is grounded in specific items from specific sources. No free-floating LLM guessing.

02 · Batches are snapshots

Re-running never overwrites; you build a timeline of what was considered promising week over week.

03 · One-click PRD

A card becomes a spec in one action. No context loss between synthesis and handoff.

See PMOS end-to-end.

A 20-minute demo walks through Discover → Aggregate → PRD → GitHub issue, scoped to your product.