Our Plan for Running 100 Parallel Coding Agents
By Satya Patel
Hoook currently manages five to seven coding agents reliably in parallel, with a goal of managing 100 agents by the end of 2026. The bottleneck isn't computational power (agent compute is already affordable) but human review capacity.
Mapping Out the Problem
The agent pipeline requires human involvement at nearly every step. Context-switching costs are steep: opening code, spinning up dev servers, verifying work through UI clicks, and providing feedback. Currently, agents spend more time awaiting review than performing work.
At scale, this model becomes untenable: manually reviewing diffs and context-switching across 100 concurrent work streams is impossible.
The solution: eliminate human involvement from unnecessary steps and accelerate the remaining ones.
How We'll Improve It
Have Agents Work Harder Before Reaching Out
Agents should complete extensive self-vetting before human review. This involves adding review layers between agent output and human attention.
Adversarial agents: Following patterns highlighted in recent research, dedicated "bouncer" agents can sit between coding agents and humans, ensuring work is either complete or genuinely stuck before surfacing it.
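The gate itself can be simple. Below is a minimal sketch of a bouncer's decision logic; the function name, inputs, and thresholds are illustrative assumptions, not Hoook's actual implementation:

```python
def bouncer(agent_claim: str, checks_passed: bool, attempts: int,
            max_attempts: int = 5) -> str:
    """Decide whether an agent's work reaches a human.

    Returns "surface" when the work is complete or the agent is
    genuinely stuck, and "bounce" when the agent should keep working.
    """
    if agent_claim == "done" and checks_passed:
        return "surface"   # complete: worth human attention
    if attempts >= max_attempts:
        return "surface"   # genuinely stuck: escalate with context
    return "bounce"        # incomplete: send back with feedback
```

The point is that the expensive resource (human attention) only sees the two states worth seeing, and everything in between loops back to the agent.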
Stacking review agents and automated testing: Running multiple specialized review agents—each examining different issue categories—increases detection odds before human involvement. Tools like BrowserUse and Maestro enable visual testing, catching UI regressions invisible in code review.
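A sketch of what stacking might look like in orchestration code, with hypothetical reviewer functions; real reviewers would call an LLM or a test harness rather than do string checks, but the aggregation shape is the same:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    reviewer: str
    severity: str   # "blocker" or "warning"
    message: str

# Toy stand-ins for specialized reviewers, each covering one issue category.
def security_review(diff: str) -> list[Finding]:
    if "eval(" in diff:
        return [Finding("security", "blocker", "eval() call in diff")]
    return []

def style_review(diff: str) -> list[Finding]:
    if "\t" in diff:
        return [Finding("style", "warning", "tab indentation")]
    return []

REVIEWERS: list[Callable[[str], list[Finding]]] = [security_review, style_review]

def stacked_review(diff: str) -> tuple[bool, list[Finding]]:
    """Run every reviewer; escalate to a human only if a blocker remains."""
    findings = [f for review in REVIEWERS for f in review(diff)]
    needs_human = any(f.severity == "blocker" for f in findings)
    return needs_human, findings
```

Because each reviewer is independent, adding a new issue category is one more entry in `REVIEWERS` rather than a change to the pipeline.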
Long-running agents: Rather than one-shot workflows, agents should iterate on their own output for longer, in the style of "Ralph loops", so that by the time work surfaces it requires fewer interruptions.
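A minimal sketch of such an iteration loop, with `run_agent` and `critique` as hypothetical stand-ins for the real agent and its self-review step:

```python
def refine_until_done(task, run_agent, critique, max_iterations=10):
    """Keep re-running the agent against its own critique instead of
    surfacing a one-shot draft; only the final result reaches a human."""
    draft = run_agent(task, feedback=None)
    for _ in range(max_iterations):
        issues = critique(draft)
        if not issues:
            break                                # self-review is clean
        draft = run_agent(task, feedback=issues)  # iterate with feedback
    return draft
```

The iteration budget (`max_iterations`) caps runaway loops, so a stuck agent still surfaces eventually rather than spinning forever.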
Make It Fast to Review Agents' Work
Developer tools remain human-centric. Shifting toward agent-orchestrated interfaces dramatically reduces review time.
Agent-driven UIs: Agents should prepare reviews with summaries, preview environments, navigation to relevant pages, and test result summaries—functioning like briefings rather than raw outputs.
Improving existing tools: Pull requests, CI dashboards, and IDEs need adapting for agent-first workflows. Agents should annotate PRs before a human opens them; CI results should arrive agent-triaged rather than as raw logs.
Reducing friction to zero: Interactions should be lightweight—binary choices, multiple-choice questions, pre-filled feedback drafts, and quick action buttons. The goal is enabling phone-based reviews between meetings.
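One way to model such a lightweight interaction; the `ReviewRequest` payload and its fields are illustrative assumptions about what a phone-sized briefing could carry:

```python
from dataclasses import dataclass

@dataclass
class ReviewRequest:
    """A phone-sized briefing: one question, a few taps to answer."""
    summary: str                  # one-paragraph description of the change
    preview_url: str              # link to a live preview environment
    question: str                 # the single decision being asked
    options: list[str]            # quick-action choices, e.g. approve/revise
    prefilled_feedback: str = ""  # draft the reviewer edits instead of typing

def render_push_notification(req: ReviewRequest) -> str:
    """Flatten a review request into a tap-to-answer notification."""
    choices = " / ".join(f"[{opt}]" for opt in req.options)
    return f"{req.summary}\n{req.question}\n{choices}\nPreview: {req.preview_url}"
```

For example, a dark-mode PR might render as a summary line, a "Ship it?" question, and two tappable options, with the preview link one tap away.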
Have Agents Be More Proactive
At 100 agents, task specification becomes a bottleneck itself.
Reusable workflows: Packaging repeatable tasks—deployment procedures, migrations, test patterns—as reusable bundles lets agents self-invoke when situations match.
Event-driven triggers: Agents automatically activate in response to events: build failures trigger investigation; Linear tickets spawn work initiation; recurring playbooks execute on schedules.
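Such triggers amount to an event bus that routes incoming events to agent playbooks without a human in the loop. A minimal illustrative sketch; the event names and handler below are assumptions, not Hoook's API:

```python
from collections import defaultdict

class TriggerBus:
    """Route events (CI webhooks, ticket creation, cron ticks) to handlers."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type):
        """Decorator registering a handler for an event type."""
        def register(handler):
            self._handlers[event_type].append(handler)
            return handler
        return register

    def emit(self, event_type, payload):
        """Fire an event and collect each handler's result."""
        return [handler(payload) for handler in self._handlers[event_type]]

bus = TriggerBus()

@bus.on("build.failed")
def investigate_failure(payload):
    # Hypothetical: spawn an agent to triage the failing build.
    return f"agent dispatched for build {payload['build_id']}"
```

A Linear webhook or CI callback would call `bus.emit(...)`, and the matching playbooks run with no human kickoff.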
Beyond code: Patterns like meeting transcription that auto-extract action items, create tickets, and update CRM systems extend automation beyond development work.
Conclusion
Parts of this plan are live at Hoook today, parts are on the roadmap, and parts are still taking shape. The throughput framing gives every feature a single test: does it reduce the human time spent per interaction?
Interested in managing agents at scale? Reach out at hi@hoook.io.