Brendan MacLean, Chief Developer at the University of Washington's Department of Genome Sciences, had spent nearly two decades as the sole continuous thread holding together Skyline, an open-source mass spectrometry application with 700,000 lines of C# code and 200,000 automated nightly tests. Over 17 years, undergraduates graduated, postdocs moved on, and interns left after the summer. Each departure left behind code that no one remaining understood.
One feature module, a file view panel, had sat unfinished for a year after its original developer left the lab. Historically, such orphaned projects simply gathered dust in the repository. This time, two weeks later, the panel was complete, and every final commit carried a new co-author name: Claude.
What changed? MacLean had a realization: the pain point of onboarding Claude mirrored exactly what he had done for decades, training newcomers unfamiliar with a massive legacy codebase. Every conversation with the browser-based Claude started from scratch; it knew nothing about Skyline's architecture, component relationships, or hard-won conventions. So he decided to apply his intern-training methodology to Claude Code.
He built a dedicated repository called pwiz-ai, completely separate from the main codebase. Inside it, a CLAUDE.md file served as the topographic map: project structure, build processes, testing workflows. But knowing where things are isn't the same as knowing how to work. That knowledge lived in purpose-built Skills. One example: a debugging skill that forced Claude to perform root-cause analysis before touching any code, pulling it out of "guess-and-check" mode. MCP integrations gave Claude access to real test data, exception reports, and user tickets.
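The article names the three layers but not their contents. As a hypothetical illustration of the first layer (the section names, paths, and commands below are invented for this sketch, not Skyline's actual file), a project-map CLAUDE.md might look like:

```markdown
# CLAUDE.md — project map (hypothetical example)

## Architecture
- src/ui/: application windows and panels
- src/model/: core document and data-processing classes
- tests/: automated tests run nightly

## Build
- Open the solution in Visual Studio and build the x64 Debug configuration.

## Testing
- Run the full test suite locally before committing; nightly tests must pass.

## Conventions
- Perform root-cause analysis before editing code (see the debugging skill).
- Follow existing naming and error-handling patterns in the surrounding file.
```

The point of such a file is that it is read automatically at the start of every Claude Code session, so the orientation a mentor would give a new hire no longer has to be repeated in each conversation.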
With these three layers of context in place, communication costs plummeted. Claude already understood what the code was doing before any new task began. The starting point was comprehension, not a blank slate.
The results were immediate. A test management module, written in Java rather than Skyline's primary C# stack, had been abandoned for three years after its maintainer left. MacLean had stopped adding features to it entirely. Using Claude Code, he generated a configuration document in under a day, then added features he had wanted for years—and even updated the page layout with CSS. Skyline's 2,000+ tutorial screenshots, once manually maintained, became nearly 100% automated and reproducible. Claude even wrote a C# MCP server enabling itself to visually detect differences between screenshots. Every morning, an AI-generated report summarizing overnight test failures, anomalies, and unresolved tickets waited in his inbox.
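MacLean's screenshot-comparison MCP server is written in C#; as a language-neutral sketch of the underlying idea, here is a minimal pixel-comparison routine in Python. The function name, tolerance value, and flat-list-of-RGB-tuples representation are all assumptions for illustration, not the actual implementation.

```python
def pixel_diff_ratio(pixels_a, pixels_b, tolerance=8):
    """Return the fraction of pixel positions whose channel-wise
    difference exceeds `tolerance` in any channel.

    Both inputs are flat lists of (R, G, B) tuples from screenshots
    of identical dimensions.
    """
    if len(pixels_a) != len(pixels_b):
        raise ValueError("screenshots must have the same dimensions")
    differing = sum(
        1
        for a, b in zip(pixels_a, pixels_b)
        if any(abs(ca - cb) > tolerance for ca, cb in zip(a, b))
    )
    return differing / len(pixels_a) if pixels_a else 0.0

# Example: two 4-pixel "images". The third pixel shifts within the
# tolerance (compression noise); the fourth changes substantially.
before = [(255, 255, 255), (0, 0, 0), (10, 10, 10), (200, 200, 200)]
after  = [(255, 255, 255), (0, 0, 0), (10, 12, 10), (90, 90, 90)]
print(pixel_diff_ratio(before, after))  # one of four pixels differs -> 0.25
```

A tolerance threshold like this is what separates "the tutorial screenshot genuinely changed" from rendering or compression noise, which is the decision an automated screenshot pipeline has to make on every run.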
The same week, OpenAI unveiled Symphony, an open-source project that turns Linear project boards into an AI coding command center. Each open issue gets an autonomous agent running in its own workspace. Agents restart on crash, pick up new tasks automatically, and leave humans with just one job: reviewing results. Some teams reported a 500% surge in merged PRs within three weeks.
The two approaches represent diverging philosophies. Anthropic bets on depth: a senior developer invests time building contextual layers, teaching AI to understand a specific codebase like a mentor training an apprentice. OpenAI bets on scale: an orchestration layer dispatches tireless agents, turning engineers from code-writers into board managers.
Yet both converge on one critical insight. MacLean wrote CLAUDE.md and Skills to codify project knowledge; OpenAI wrote WORKFLOW.md to codify development processes. Oral traditions and muscle memory no longer suffice. The bottleneck in AI-assisted programming is no longer whether models can write good code; it's whether humans have learned how to manage AI. As MacLean put it, "You wouldn't hand a new hire a 700,000-line codebase and expect them to deliver on day one. The same goes for AI."