Use case 1: Feature delivery from spec to parallel execution
Start by creating or opening a spec that describes the feature scope, constraints, acceptance criteria, and rollout notes. Then generate tasks from that spec so each task is concrete, independently testable, and small enough for one focused session. Group tasks by domain, such as API, UI, and validation. Launch one agent per task, each on its own fresh worktree, so they can run in parallel without branch collisions.
From spec to task set
Follow this flow to convert a feature brief into executable tasks:
- Open the workspace and create a new spec file for the feature.
- Write clear success criteria and non-goals so task generation stays bounded.
- Use task generation from the spec to create repo-local task files.
- Review generated tasks and split any oversized task into smaller units before execution.
- Assign each task a clear definition of done and expected output, such as tests, docs, or migration notes.
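The task fields above can be sketched as a small schema. This is a minimal illustration, not a fixed Nora task-file format; the field names and the `is_well_formed` check are assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One spec-derived task, small enough for a single focused session."""
    task_id: str
    domain: str                 # grouping domain, e.g. "api", "ui", "validation"
    definition_of_done: str     # the clear done criterion from the list above
    expected_outputs: list[str] = field(default_factory=list)  # tests, docs, migration notes

def is_well_formed(task: Task) -> bool:
    """A task is ready for execution only when its done criterion
    and expected outputs are explicit, never implied."""
    return bool(task.task_id and task.definition_of_done and task.expected_outputs)
```

A task that passes this check still needs a size review: split it further if it cannot finish in one session.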
Parallel execution across agents
Use isolated execution per task:
- For each task, launch a new agent session with write access on a fresh worktree.
- Name each session by task ID or outcome so monitoring stays readable.
- Keep high-risk tasks in separate worktrees even if they touch similar areas.
- Watch live terminal output and focused diffs per session to verify progress.
- If a session blocks, pause that session and continue unrelated tasks in other sessions.
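The isolation step above amounts to one `git worktree` invocation per task. A minimal sketch of the command and session-naming conventions; the `../wt-` path prefix and `task/` branch prefix are illustrative assumptions, not required layouts.

```python
def worktree_setup(task_id: str, base: str = "main") -> list[str]:
    """Build the git command that creates an isolated worktree plus a fresh
    branch for one task, so parallel agents never collide on a branch."""
    # Assumed conventions: worktrees live as siblings of the repo,
    # branches are namespaced under task/.
    return ["git", "worktree", "add", f"../wt-{task_id}",
            "-b", f"task/{task_id}", base]

def session_name(task_id: str, outcome: str) -> str:
    """Readable session label: task ID plus a short outcome phrase,
    so monitoring output stays scannable."""
    return f"{task_id}: {outcome}"
```

Run the returned command with `subprocess.run(cmd, check=True)` from the repository root, once per task, before launching that task's agent.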
Structured handoff protocol
Use this handoff protocol to keep transitions reliable:
- When Agent A finishes an intermediate result, open the handoff flow instead of rewriting the context manually.
- Include what is complete, what remains, exact file paths changed, and the next expected action.
- Target Agent B on the correct branch or worktree and submit the handoff instruction.
- Verify that Agent B's context includes the handoff brief before continuing execution.
- Repeat until implementation, verification, and cleanup tasks are complete.
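The four handoff elements above can be captured in a small structure that renders to a brief Agent B can read directly. The field names and rendered layout are a sketch, not Nora's internal handoff format.

```python
from dataclasses import dataclass

@dataclass
class HandoffBrief:
    """Structured handoff from Agent A to Agent B: what is complete,
    what remains, exact files changed, and the next expected action."""
    completed: list[str]
    remaining: list[str]
    files_changed: list[str]
    next_action: str

    def render(self) -> str:
        """Render the brief as plain text for the receiving agent's context."""
        lines = ["Handoff brief"]
        lines += ["Complete:"] + [f"- {item}" for item in self.completed]
        lines += ["Remaining:"] + [f"- {item}" for item in self.remaining]
        lines += ["Files changed:"] + [f"- {path}" for path in self.files_changed]
        lines.append(f"Next action: {self.next_action}")
        return "\n".join(lines)
```

Keeping the brief structured, rather than free-form, is what makes the "verify Agent B's context" step checkable: the receiver can confirm each section is present.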
Use case 2: Refactor with staged review
For larger refactors, use a top-level spec and generate tasks by subsystem. Run exploratory read-only sessions first to map impact, then convert approved tasks to write-enabled sessions on isolated worktrees. Use repo tools after each stage to review diffs and confirm changes stayed inside task boundaries before merging or opening pull requests.
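The boundary check at the end of each stage can be automated with a small path filter. A minimal sketch, assuming each task brief lists the path prefixes it is allowed to touch:

```python
def out_of_bounds(changed: list[str], allowed_prefixes: list[str]) -> list[str]:
    """Return the changed paths that fall outside a task's allowed prefixes.
    An empty result means the session stayed inside its task boundary."""
    return [path for path in changed
            if not any(path.startswith(prefix) for prefix in allowed_prefixes)]
```

Feed it the output of `git diff --name-only <base-branch>` from each task's worktree; any non-empty result is a reason to hold that branch back from merge.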
Quality gates before review or deployment
Before promoting results out of Nora, verify each task has a matching diff, expected tests or checks were run, and no session drifted from its task brief. Then create pull or merge requests from the relevant branches. If the project is linked to Vercel, use the Vercel panel to confirm deployment state or trigger redeploys after code review.
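The three gates above reduce to a per-task check. A sketch of that gate, assuming you have collected per-task diffs and a record of which task IDs had their checks run; the data shapes are illustrative:

```python
def failing_gates(task_ids: list[str],
                  diffs: dict[str, list[str]],
                  checks_run: set[str]) -> list[str]:
    """Return task IDs that fail a quality gate before review:
    no matching diff was produced, or expected checks never ran."""
    failing = []
    for task_id in task_ids:
        if not diffs.get(task_id) or task_id not in checks_run:
            failing.append(task_id)
    return failing
```

Only branches whose task IDs pass this gate should become pull or merge requests; drift from a task brief is caught separately by the boundary review in each stage.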