Shifting the Blocks: How AI is Changing Developer Abstractions
From tab-completion tweaks to entire features: four practical ways AI reshapes software engineering today.
Everyone’s talking about how AI speeds up coding. But that misses the bigger shift: the way we build software is changing.
A recent video from Cursor and Anthropic brought this into focus: the abstractions are moving up. This isn’t about what programming language you use or what kind of SaaS product you’re building. The “lego blocks” developers build with are growing in size. Instead of writing code line by line, we’re now dropping in entire functions or features at a time.
This isn’t just a faster way to type. It’s a fundamental retooling of how development happens.
Here are four ways AI is already changing developer workflows, and one place it still falls short.
The Model Layer
Line-Level Assist
Developers spend a lot of time making what I think of as bonsai edits—small, precise tweaks that gradually shape the code into something cleaner and more intentional. Maybe you’re renaming tests to better reflect their purpose, or rewriting comments to match updated behavior. AI tab completions are surprisingly good at understanding your intent in these moments. They don’t always get it right, but when they do, it’s a quick snip-snip and the code keeps growing in the right direction.
Bonsai edits are still the smallest building blocks, just a line or two at a time. But AI helps speed them up so you can stay focused on the bigger picture.
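To make that concrete, here’s a tiny, invented example of the kind of edit I mean (the function and test names are made up for illustration, not pulled from any real codebase):

```python
def snake_to_camel(name: str) -> str:
    """Convert a snake_case identifier to camelCase."""
    head, *rest = name.split("_")
    return head + "".join(part.title() for part in rest)


# The bonsai edit: the old test was named test_uppercase_conversion, which no
# longer described what it checks. A tab completion will often suggest the
# full new name as soon as you start typing the rename.
def test_snake_to_camel_keeps_first_word_lowercase():
    assert snake_to_camel("user_id") == "userId"
```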
Function-Level Handoff
AI tools are especially useful when you know what you want to build, but the implementation is gnarly. Maybe there are too many edge cases, messy config combinations, or obscure API quirks. The models can handle the grunt work: building test cases, filling in boilerplate, and translating your design into real code. That leaves you free to focus on validation instead of getting lost in the weeds. Instead of writing every line yourself, you hand off an entire block of logic. The abstraction shifts from code-as-instruction to code-as-intent.
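Here’s a rough sketch of what that handoff looks like. The helper and its cases are hypothetical, just to show the shape of the work you’d delegate: the intent fits in one sentence, but the edge cases are tedious to enumerate by hand.

```python
# Hypothetical example: the intent is "turn a human-entered duration into
# seconds." The model writes the fiddly parts; you review the behavior.
def parse_duration(value: str) -> float:
    """Parse strings like '90', '90s', '2m', or '1.5h' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    text = value.strip().lower()
    if not text:
        raise ValueError("empty duration")
    multiplier = 1
    if text[-1] in units:
        multiplier = units[text[-1]]
        text = text[:-1]
    return float(text) * multiplier


# Your half of the handoff is validation: do these cases match what you meant?
assert parse_duration("90") == 90.0
assert parse_duration(" 2m ") == 120.0
assert parse_duration("1.5h") == 5400.0
```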
And then there’s legacy code. Every team has a file or two from “the last guy” who vanished mid-project. Not always bad, just weird, dense, or a little too clever for its own good. Normally, you’d ask a longer-tenured engineer for some outside context or help unpacking… why?
But AI tools can fill that role surprisingly well. They can parse the tests, the structure of the code, and other scattered clues in comments and doc-strings. You don’t need to decode “the last guy’s” thought process. You just need to understand the shape of what they built. And AI is pretty good at that.
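For a flavor of what I mean (this snippet is invented, not from any real repo), here’s the sort of terse, unexplained helper a model can unpack faster than you could reverse-engineer it:

```python
from functools import reduce


# A typical "last guy" helper: short, correct, and unexplained. From the code
# alone, a model can tell you it merges config dicts left to right, with later
# entries overriding earlier keys.
def merge_configs(configs):
    return reduce(lambda acc, cfg: {**acc, **cfg}, configs, {})


assert merge_configs([{"retries": 3}, {"retries": 5, "debug": True}]) == {
    "retries": 5,
    "debug": True,
}
```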
System-Level Understanding
Even experienced developers get dropped into unfamiliar systems. Maybe it’s a service you’ve never touched, or a repo with sparse docs and cryptic naming. You didn’t even know it existed until ten minutes ago, when your boss sent a frantic Slack message. Now there’s a meeting on your calendar named “Quick Sync – Updates?”
AI tools shine in this situation. They can quickly surface high-level summaries, explain what a block of code is doing, or map out how data flows through the system. They might not tell you how to fix the problem, but they’re usually enough to get you unstuck.
Even without full context, the model helps you operate at a higher level. You’re not reading every line: you’re scanning for intent.
And while the model’s generating context, you’ve got just enough time to decline that meeting invite.
Project-Level Generation
Every new repo comes with a baseline amount of boilerplate just to make it run. Maybe it’s build YAML files, Spring XML configs, or a set of Gradle plugins that have to be configured just right before anything works.
AI models are great at generating this kind of scaffolding, tailored to your specific use case. Not just a blank starting point, but something that already understands what you’re building and how you expect to package it up.
This is where the building blocks get even larger. We can spin up whole projects from just a few cues.
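As a toy sketch of that idea (every file name and snippet below is invented), think of the output less as a blank template and more as a small tree of files already shaped around what you described:

```python
from pathlib import Path

# Invented example of project-level scaffolding: a README, an entry point, and
# a test skeleton that already reflect the service you asked for, rather than
# an empty starter kit.
SCAFFOLD = {
    "README.md": "# orders-service\nA small HTTP service for order lookups.\n",
    "src/app.py": "def handle(request):\n    return {'status': 'ok'}\n",
    "tests/test_app.py": (
        "from src.app import handle\n\n"
        "def test_handle_returns_ok():\n"
        "    assert handle(None) == {'status': 'ok'}\n"
    ),
}


def write_scaffold(root: str) -> None:
    """Materialize the generated files under the given project root."""
    for relative_path, contents in SCAFFOLD.items():
        path = Path(root) / relative_path
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(contents)


write_scaffold("orders-service")
```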
The Human Layer
Taste & Context at Scale
A lot of the current value in these AI tools comes from offloading mental overhead. You can hand off the mechanical parts of coding and spend more time thinking about what to build instead of how to type it.
But that’s also where the tools still fall short.
Generating code isn’t the goal. Solving the right problem is.
What counts as “good software” depends on a hundred subtle factors. Maybe it’s your company’s architecture strategy. Maybe it’s the latest buzzword your CTO brought back from a conference. Maybe it’s just the size and structure of your team.
What works in one repo might be a mistake in another—even within the same company. Tools like .cursor/rules can help enforce engineering patterns, but they can’t encode product strategy. And they definitely can’t define your team’s taste—the shared sense of what “good code” looks like, shaped over time through culture, PR reviews, and unspoken norms.
Think about the feedback on your last design doc. Or the conversation with your manager at lunch. Strategic goals, edge-case constraints, and shifting product priorities all live outside the code. But they shape what “good” looks like just as much as naming conventions or abstraction levels.
Where the Real Leverage Lives
The abstractions we work with are changing. Line-by-line edits are giving way to blocks, features, and systems that arrive already half-built. The job of the developer isn’t shrinking—it’s shifting. From typing to deciding. From building to directing.
That shift raises the bar on judgment. And for now, that’s still the part the machines can’t do for us.
Cursor x Anthropic
The original video that sparked this post is worth a watch!