Trust Comes from the Sherpa
From Code Monkey to Context Sherpa: how AI is reshaping the role of the software engineer from solo climber to group tour guide.
Most days, I don’t feel like I write code anymore.
I guide something bigger up the mountain.
These models are powerful, no question. They can spin up scaffolds, refactor code, and generate endpoints in seconds. But only if I chart the path.
My job? Feed the model just enough context to take the next step. Anticipate what it's missing. Nudge it back on track when it drifts. When I'm on it, when I stay ahead of its blind spots, we race to the summit together. It's like climbing with the world's fastest intern: never tired, never blocked, occasionally full of nonsense.
But when I misjudge and give it too much task context, too little instruction, or the wrong focus? The whole climb becomes a slog through mud. I burn an hour investigating a package dependency that should never have been added, missed in an avalanche of colorful git diffs.
I used to think of myself as a builder: the one shaping plans, pouring the foundation, laying down structure.
Now I feel like the Context Sherpa.
The one holding the ladder on the way up, belaying the rope on the way down. The one watching every snow drift and ice shelf, just to shepherd another group of overconfident tourists who think black ice is a cocktail at base camp.
And that shift? It’s subtle.
But it’s the beginning of something much bigger.
Trust Is Built on Specifics
Code at tech companies never dies.
It decays.
Version by version. Patch by patch. Held together by duct tape, tribal knowledge, and that one person who remembers why a cron restarts the logging service every Thursday at 2am.
Sometimes that mess slows you down. Other times it’s the only reason you’re still shipping.
These new AI tools know how to scaffold a backend service, explain what GraphQL is, and write clean, generic OAuth flows. They’ve seen 10,000 Next.js blogs and a thousand Stripe integrations. What they haven’t seen is the handwritten CSS your CTO swears keeps load times fast.
Your codebase didn’t spring into being fully formed. It grew as a layering of context, constraints, and chaos; some of it intentional, some of it ancient, and all of it invisible to a model trained on vibes. You don’t need help “integrating Stripe”. You need help debugging why your checkout form fails, but only on Safari, when Chinese characters show up in the billing name, and the user clicked “back” twice.
Productive software isn’t built on “best practices”. It’s built on specifics.
And when the model gets those specifics wrong? The Sherpa doesn’t get to shrug. They patch the commit, fix the broken code, re-deploy the services. Because AI tools don’t understand how your systems actually fit together. They don’t know which assumptions are safe to make and which ones can take prod down at 2am.
The Context Sherpa exists because someone has to translate between what the model knows and how your system actually works.
Someone has to know what the real goal is, and whether that sudden crack means a branch snapping underfoot or an ice shelf giving way. Not just to help the tools succeed, but to keep the whole team from going over the edge.
That’s the job now: not writing every line, but making sure every line still fits.
The Human Bottleneck
This is where the trust gap lives: not just in the model's misunderstandings, but in the invisible scaffolding that assumes a human will always be there to catch the fall.
Software wasn't built for machines. It was built for humans fluent in ambiguity. Who else but Mark could know why one feature is gated by a flag, and another is gated by a comment that just says:
DO NOT REMOVE. ask mark.
Our current code is written by humans, for other humans. The moment you pull the human out of the loop, or ask them to guide instead of write, the system starts to wobble.
Take documentation. If it exists at all, it was probably written by developers for developers. We speak fluent vibes. I’ve written run-books that say things like:
SSH to prod cmh1, find a host from the kube cluster and tail the logs
It looks like a clear set of instructions, and it is, but only if you already know:
What “cmh1” is
Which kube cluster this service is hosted on
What logs to tail… and why
That blend of concrete steps and unspoken assumptions is perfect for human developers, and terrible for coding agents. Now imagine a set of docs that instead had:
Precise, declarative instructions
CLI-first workflows
Structured, inspectable state
That's not just better documentation. That's a resource that lets your agentic tooling handle more mundane tasks.
That’s a handrail on the mountainside, the kind that lets the Sherpa breathe, look up, and plan the next move.
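To make that concrete, here is a sketch of how the run-book step above might look when rewritten for an agent: every name is spelled out, every step is a single command, and the success condition is checkable. (The schema itself, the kube context, namespace, and label names are all hypothetical, not a real standard.)

```yaml
# Run-book: tail production logs for the checkout service.
# Written for humans AND coding agents: no tribal knowledge assumed.
task: tail-prod-logs
environment:
  datacenter: cmh1            # the "prod cmh1" from the old run-book, spelled out
  kube_context: prod-cmh1     # hypothetical kubectl context for that datacenter
  namespace: checkout         # hypothetical namespace the service runs in
steps:
  - name: pick a pod
    run: kubectl --context prod-cmh1 -n checkout get pods -l app=checkout -o name
    expect: at least one pod listed, phase Running
  - name: tail the logs
    run: kubectl --context prod-cmh1 -n checkout logs --tail=100 -f <pod-from-step-1>
    why: payment failures surface in these logs first
```

The declarative shape matters more than the exact schema: an agent can read the `run` fields as CLI-first instructions and the `expect` fields as inspectable state, instead of guessing what "find a host" means.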
Guiding the Climb
I’ve worked with a lot of interns over the years. Some soared. Some flailed. And the difference was almost never talent — it was structure.
The ones who struggled usually weren’t lazy or incapable. They just didn’t have the right support. No scaffolding. No clarity. No one breaking the work into pieces they could act on. They were thrown into a sea of complexities and deadlines and expected to swim.
Working with today’s coding tools feels the same. They’re interns — fast, eager, and wildly inconsistent unless you give them structure.
And like any good mentor, I spend a lot of my day-to-day clearing the trail for them:
Stitching together missing context
Rephrasing tasks to get better results
Splitting work into clean, self-contained chunks
Catching hallucinations before they hit prod
Bridging assumptions the model never knew it was making
I now act as a Context Sherpa: an engineer whose job isn't to write the code, but to make the code possible. To gather chaos from every corner of the org: ancient tickets, Slack threads, vague product notes, broken Swagger docs. To shape that mess into something usable and feed it to the model at just the right moment.
The best engineers don’t just “prompt better”, they orchestrate better.
They know how to set the models up for success: when letting them run the tests will pinpoint an error, and when it will just burn a thousand tokens. They know when to retry a task, when to zoom out, and when to change tactics entirely. They move fast, not because they trust the model blindly, but because they've learned how to steer it.
It’s easy to see that models write fantastic code. And projects like SequentialThinking, Context7, and MCP servers are giving agents real capabilities — memory, error handling, terminal access, planning logic.
But for everything else — the vague specs, the brittle edge cases, the undocumented behaviors?
There’s still a Sherpa on the trail.
A human engineer.
Guiding the model. Reading the weather. Choosing the path that doesn’t end in an avalanche.
From Collaboration to Succession
Let’s rewind:
In Part 1, I argued that LLMs aren’t valuable because they’re smart — they’re valuable because they feel trustworthy.
In Part 2, we explored how good tools build that trust through a slider — from assist to autopilot — letting users gradually hand off control.
Now, in Part 3, we’re seeing the deeper truth: The current development ecosystem wasn’t built for agents.
It was built for you.
That’s the real shift underway.
Because the Context Sherpa isn't just guiding the climb to the summit. They're the critical infrastructure that keeps the project moving.
The one who knows where the trail gets narrow.
The one who keeps the rope slack, but safe.
The one who gets there early so someone else can make it to the top.
Working with LLMs is becoming about scouting terrain. Breaking down workstreams. Learning what the model understands and where it crumbles. Not solving every problem directly, but running one step ahead to clear them before they block the climb.
You’re not just the engineer anymore. You’re the reason the whole system works.
The more capable these tools become — the more they remember, plan, and act — the more they rely on the thing they still can’t hold: context.
And that’s where you come in.
Trust doesn’t come from the model. Trust comes from the Sherpa.