
The World's Best Rubber Duck
I've been mulling over an "AI" post for a while now, and I'm not sure how I want to approach it. Something grand and all-encompassing just isn't going to happen. I want to put a lot into it: weigh costs and benefits, pontificate broadly, cover all the bases. That ambition is getting in the way of my vague goal of maybe writing more blog posts.
And no, I'm not going to have ChatGPT write it for me.
So maybe more of a blog series. I'll talk a little bit about how I'm using these exciting new technologies at work.
I don't talk about work much here, so I'll elaborate a bit. I'm in a group tasked with improving developer productivity at a large software company. Developer productivity these days means trying to figure out how to make best use of coding assistant LLM agent things.
I've got more than a few reservations about these technologies and how they'll shape my profession and our craft. Maybe we'll get into all that as we go along. But first I wanted to talk about some ways I've found them to be helpful.
One of the things I've struggled with my entire career is that, in general, I'm very reluctant to bother people and ask for help. I've learned to get over myself and do it, of course, but I still hate it, and when I'm stuck I'll often spend that bit of extra time trying to figure a problem out myself before bothering anyone else about it.
Inevitably, what almost always happens as soon as I try to explain the problem to someone is I suddenly realise where I was going wrong and come up with the fix myself.
Okay, if you're not a developer, you've maybe never heard of rubber duck debugging. That's basically this process: you explain your problem, step by step, to a rubber duck on your desk, and the act of articulating it is what surfaces the answer.
Except I should probably have tried a rubber duck before roping another human into it.
Friends, a code assistant is an excellent rubber duck.
The real challenge is keeping it from editing half your codebase before you can fully articulate what you're trying to achieve. You can force it into "plan" mode or whatever, and you can massage your rules/context prompt to tell it you're just looking for feedback on your problem. Either way, I'm finding myself going to the chat box first any time I'm even a little bit stuck, and just the process of describing the problem so an LLM can understand it goes a long way towards actually solving it.
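As a rough sketch of what that rules massaging might look like, here's the kind of instruction I mean. The exact file name and format depend on your tool (AGENTS.md, .cursorrules, and so on), and this wording is mine, not any tool's official convention:

```
# Rubber-duck mode (example rules-file snippet; adapt to your tool)
When I describe a problem, act as a sounding board first:
- Do not edit any files until I explicitly ask for changes.
- Restate the problem in your own words and point out gaps or
  unstated assumptions in my description.
- Ask clarifying questions and offer feedback before proposing a fix.
```

The point isn't the specific wording so much as flipping the default from "make changes" to "help me think."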
I was going to say here (this post has been in draft for months) that whatever answer it gives is invariably wrong, but it's actually getting better lately. I might even let it prototype a fix for me.
It's important, tho, that I actually understand the solution. By my nature, I don't trust the thing. But I can ask it to explain itself. And, as I was saying, it has been getting better lately. If I understand the fix and can take ownership of it, we're good to go.
Perhaps you're thinking: is the rather tremendous amount of infrastructure required for this dubious benefit worth the cost? Especially when actual, real-life rubber ducks are cheap and plentiful? I will leave you to judge that for yourself, but I will also say that this is the first in a series of blog posts, and that I'm not done yet.
I'll leave you with a few links to things that are shaping what I'm thinking about these technologies, if you want to dig further: