From Skeptic to Orchestrator: My Reluctant Descent into Agentic Coding

By Allan
[Image: An ancient illuminated manuscript where traditional calligraphy transforms into glowing circuit patterns and code]

Or: How I Learned to Stop Worrying and Love the AI Apocalypse (Mostly)

Let's get something straight right up front: I thought this whole AI coding thing was complete bullshit.

There. The elephant in the room has been identified, tagged, and released back into the wild of my former skepticism.

The Beginning: Terminal Curmudgeon Mode

Picture this: it's 2022. Everyone and their venture-capitalist cousin is losing their minds over ChatGPT. "It's going to revolutionize everything!" they shout from every Medium post and LinkedIn thought-leadership confessional.

I read those proclamations with the same enthusiasm I reserve for root canal procedures and mandatory HR training.

"How can a chatbot help me with anything?" I muttered, probably while reviewing some gnarly legacy system that smelled like it was last touched during the dot-com boom by someone who thought XML was going to solve world hunger.

So I did what any self-respecting technologist would do: I ignored it. Completely. Thoroughly. With the dedication of a cat ignoring a $200 designer scratching post in favor of the cardboard box it came in.

The Middle: The Hype Meets the Real World

Then the industry got weird in a darker way.

The layoffs started. Not the "trim a team and reorganize" kind. The mass kind. The kind that reminds you how quickly businesses will treat people as line items when the PowerPoint math starts looking spicy.

I didn't feel smug about it. I felt angry. Still do.

What bothered me wasn't just the layoffs themselves—it was how fast "AI" became a convenient story executives could tell themselves: we're not cutting because we over-hired or mismanaged; we're "investing in the future." Meanwhile the people who understood the systems—the ones holding the spaghetti together with dental floss and institutional memory—were shown the door.

That made me dig in harder. I became the person at meetups who would corner you (politely, but relentlessly) and explain why AI coding was a fad, right up there with blockchain-enabled toasters and IoT-connected juicers.

I was wrong. But not for the reasons the hype merchants claimed.

The Turning Point: Corporate Mandates and Sheer Pragmatism

Then work started applying pressure. "Have you tried using AI for this?" became the new "Have you tried turning it off and on again?"

Every meeting. Every code review. Every architecture discussion. The AI question hung in the air like a persistent fart in an elevator—unwelcome, impossible to ignore, and somehow everyone pretended it wasn't happening.

I finally broke. Not because I saw the light, but because I had a presentation due and approximately zero desire to spend my weekend wrestling with PowerPoint transitions and Excel pivot tables.

"Fine," I grumbled. "I'll ask the robot."

Two hours later, I had a presentation that would have taken me a week to assemble. The analysis was clean. The charts were professional. The narrative actually flowed like a human had touched it on purpose.

I stared at my screen like it had just performed a magic trick.

The lightbulb didn't go off. It detonated.

The Experimentation Phase: Beautiful, Glorious Failure

So I started poking at code.

Full disclosure: I'd drifted away from daily coding years ago. Leadership roles have a way of doing that—your calendar fills with meetings, your IDE collects dust, and "staying technical" becomes reading blog posts and tinkering on weekends when you have the energy. Which isn't often.

So this wasn't a working programmer picking up a new tool. This was someone who missed building things, finally finding a way back in.

Not because I believed, but because I was curious in the way a cat is curious about whether the stove burner is still hot the forty-seventh time.

My first attempts were spectacular disasters. I got code that looked like it had been written by an eager intern who only learned programming from a 1998 "HTML for Dummies" book and had a shaky relationship with the concept of syntax.

But here's the thing that broke my brain: generation is cheap.

So cheap that starting over becomes a feature, not a bug.

I could throw away a bad approach, reframe the problem, and try again—without the usual emotional tax of "welp, there goes my entire afternoon." That changed the feedback loop. It made iteration feel less like pain and more like acceleration.

I started watching the good YouTube videos (not the "AI will make you a millionaire by Tuesday" genre). I read docs. I tried different models. Different prompting styles. Different workflows.

And slowly, uncomfortably, it started working.

The Obsession: Tools, Limits, and Reality Checks

Fast forward eight months. I've become the person I used to mock—just with better guardrails.

I've tried them all: self-hosted open models that taught me more about GPUs than I ever wanted to know, frontier models that feel like cheating when they're on and like betrayal when they're off, and everything in between.

I've tested a pile of "agentic coding" setups—some brilliant, some fragile, some absolute chaos. And here's the lesson that kept repeating:

A single model can be impressive. A system of models—with roles, constraints, feedback, and verification—can be transformative.

Also: yes, these systems fail. Sometimes dramatically. Sometimes hilariously. And sometimes in ways that would be expensive if you weren't paying attention.

Which is why "orchestration" matters more than raw capability.

The Realization: The Job Shifted Under Our Feet

This is what finally converted me: software engineering has already evolved, whether we like it or not.

The industry didn't politely ask permission. It moved like a tectonic plate. The ground changed. We can either adapt, or become the person insisting the map is wrong while falling into the canyon.

The future isn't about writing every line yourself.

It's about orchestrating.

About structuring work so autonomous agents can do the boring parts fast, and you can spend your time on the parts that actually require judgment: architecture, tradeoffs, correctness, safety, performance, clarity, and—yes—taste.

It feels less like "pair programming" and more like conducting. You're not playing every instrument. You're deciding what the orchestra is trying to say, who plays when, and whether the result is music or just enthusiastic noise.

The Now: Reluctant Convert, Enthusiastic Orchestrator

I see the future now, and it's both terrifying and exhilarating.

We're not replacing programmers; we're changing what programming is. The way IDEs changed coding without ending it, agentic coding isn't killing software engineering—it's turning it into something weirder, more complex, and potentially more powerful.

The curmudgeon in me still wants to complain about the loss of "real programming" and the good old days when you could hold the whole system in your head and a debugger could tell you the truth.

But the pragmatist in me—the one who just used a small pack of agents to spin up a service in the time it would have taken me to finish the requirements doc—knows that ship has sailed, sunk, and become an artificial reef for confused fish.

At home, I'm running multiple side projects in parallel—something that would have been laughable when every feature meant hours of uninterrupted focus I didn't have. At work, analysis that used to take weeks now takes days. Not because the thinking got faster, but because the scaffolding did.

So here I am: former skeptic, current orchestrator, future whatever-comes-next. Still picky. Still suspicious. Still insisting on tests and verification like a person who's been burned before.

The revolution happened. I missed the parade because I was too busy complaining about the noise.

But I'm here now. Fashionably late. Appropriately grumpy.

Welcome to the future. It's weird here… but sometimes the code compiles on the first try, and that's a kind of magic I'm willing to respect.

I didn't write this to convince anyone. I wrote it because I'm still working through what it means—for my own work, and for the engineers I'm responsible for helping figure this out too. The answers aren't clean yet. But the questions finally feel like the right ones.

P.S. — If you're still skeptical, good. Stay that way for a while. Skepticism keeps you honest. Just make sure it stays curious, not rigid. Healthy doubt asks better questions; stubborn doubt stops asking them.

About Allan

Allan Ditzel is an engineering leader who's spent two decades building software and teams across gaming, fintech, and security. He's lived in seven countries, speaks two languages fluently with a third in progress, and believes the best leaders never stop being beginners at something. He writes about technology, leadership, and the messy space where they overlap.