Ops · AI research

Stacking AI agents to run research sprints

The distributed research stack that feeds our weekly launch cadence without overloading the core team.

3 min read

Why we stack agents

Our launches only work when they’re grounded in real signal. Instead of assigning humans to endless desk research, we orchestrate a small roster of AI agents that surface the most relevant insights and escalate anything they can’t call confidently. The stack runs nightly so we wake up with briefs, not raw feeds.

The core agent roster

We treat each agent like a teammate with a job description:

  • Scout scans forums, TikTok transcripts, and academic feeds for fresh demand signals.
  • Analyst clusters those signals into opportunity themes using topic modeling and tags it can defend.
  • Synthetic interviewer runs short-form interviews with proxy personas to stress-test desirability.
  • Archivist writes the daily digest and updates Notion with sources, metrics, and confidence scores.

Each agent posts to a shared queue in Linear. Humans only step in when confidence drops below 70% or a new theme needs manual validation.
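A minimal sketch of that triage rule, assuming a simple `Finding` record and stub helpers in place of our actual Linear integration (the 70% floor is the same one mentioned above):

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.70  # below this, a human steps in

@dataclass
class Finding:
    agent: str          # e.g. "scout", "analyst", "interviewer"
    theme: str          # opportunity theme the finding supports
    confidence: float   # agent's self-reported confidence, 0.0-1.0
    sources: list[str]  # links backing the claim

def post_to_queue(finding: Finding) -> str:
    # Stand-in for the Linear integration: enqueue for the next digest.
    return f"queued:{finding.theme}"

def escalate_to_human(finding: Finding) -> str:
    # Stand-in for creating a Linear issue tagged for manual validation.
    return f"escalated:{finding.theme}"

def route(finding: Finding) -> str:
    """Confident findings go straight to the queue; the rest get escalated."""
    if finding.confidence < CONFIDENCE_FLOOR:
        return escalate_to_human(finding)
    return post_to_queue(finding)

print(route(Finding("scout", "ai-notetakers", 0.62, ["https://example.com/thread"])))
# -> escalated:ai-notetakers
```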

The orchestration layer

Temporal handles fan-out and retries so agents never block each other. We maintain a YAML playbook for each research sprint that defines the dataset targets, authentication requirements, and rate limits. When a sprint kicks off we update the playbook, run a smoke test, and watch the initial outputs before letting it rip.
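Roughly, the fan-out looks like the sketch below using Temporal’s Python SDK. The agent names, timeout, and retry count are illustrative stand-ins, not values from our playbooks (those would arrive as workflow input), and the worker/client wiring is omitted:

```python
import asyncio
from datetime import timedelta
from temporalio import activity, workflow
from temporalio.common import RetryPolicy

@activity.defn
async def run_agent(agent_name: str) -> str:
    # In production each agent call hits its own model, data sources,
    # and rate limits as defined in the sprint playbook.
    return f"{agent_name}: digest ready"

@workflow.defn
class ResearchSprint:
    @workflow.run
    async def run(self, agents: list[str]) -> list[str]:
        # Fan out one activity per agent; Temporal retries failures
        # independently, so a flaky agent never blocks the others.
        tasks = [
            workflow.execute_activity(
                run_agent,
                name,
                start_to_close_timeout=timedelta(minutes=30),
                retry_policy=RetryPolicy(maximum_attempts=3),
            )
            for name in agents
        ]
        return await asyncio.gather(*tasks)
```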

Scoring what matters

The point isn’t to collect more data—it’s to make a call fast. Every opportunity gets graded against five dimensions:

  • Problem severity
  • Audience urgency
  • Ease of prototyping
  • Distribution advantage
  • Strategic resonance with our thesis

Agents propose the score. Humans adjust after reviewing primary sources. Anything above 3.7/5 rolls into the Friday prioritization meeting.
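A small sketch of that grading step, assuming an unweighted average across the five dimensions (the post doesn’t specify weights) and hypothetical agent/human score maps:

```python
from statistics import mean

DIMENSIONS = (
    "problem_severity",
    "audience_urgency",
    "ease_of_prototyping",
    "distribution_advantage",
    "strategic_resonance",
)
PRIORITIZATION_BAR = 3.7  # out of 5

def grade(agent_scores: dict[str, float],
          human_overrides: dict[str, float] | None = None) -> tuple[float, bool]:
    """Average the five dimensions, letting human overrides win, and
    report whether the opportunity clears the Friday bar."""
    scores = {**agent_scores, **(human_overrides or {})}
    overall = mean(scores[d] for d in DIMENSIONS)
    return round(overall, 2), overall > PRIORITIZATION_BAR

proposed = {
    "problem_severity": 4.5,
    "audience_urgency": 4.0,
    "ease_of_prototyping": 3.5,
    "distribution_advantage": 3.0,
    "strategic_resonance": 4.5,
}
print(grade(proposed, human_overrides={"distribution_advantage": 3.5}))
# -> (4.0, True)
```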

Keeping humans in the loop

Automation without context creates thrash. We build rituals so humans stay in command:

  • A live dashboard that lets anyone drill into an agent’s reasoning chain.
  • Office hours twice a week where operators pair with the agents to refine prompts.
  • A red-team rotation where someone tries to break the system and files a post-mortem.

This prevents hallucinated insight from ever reaching a launch brief.

From research to launch

Every Friday the Archivist agent prepares a launch recommendation packet: five slides, three data pull quotes, and a list of humans to interview next. That packet seeds the Monday kickoff you can see in our one-week product checklist. The result is a research pipeline that scales as we stack more ventures on the calendar.
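For illustration, the packet boils down to a small record like the one below; the field names and completeness check are a hypothetical shape, not the Archivist’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class LaunchPacket:
    """Hypothetical shape of the Friday recommendation packet."""
    opportunity: str
    slides: list[str] = field(default_factory=list)        # exactly five
    pull_quotes: list[str] = field(default_factory=list)   # three, each with a source
    interviewees: list[str] = field(default_factory=list)  # humans to talk to next

    def ready_for_kickoff(self) -> bool:
        # Only complete packets seed Monday's kickoff.
        return len(self.slides) == 5 and len(self.pull_quotes) == 3 and bool(self.interviewees)
```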
