
As a software engineer, I would be lying if I said AI hasn't affected my job. Over the past year, I've seen more change in my professional landscape than I did in the first nine years of my career combined. And it all comes down to technology's poster-child acronym: AI.
It started in 2022, when GitHub Copilot became generally available: an IDE integration focused on inline assistance, with auto-complete, micro-refactors, naming suggestions, and even spell check. It was pretty cool, but I had no idea what was coming next. If Copilot was a wave of developer productivity, the tools that followed were a tsunami.
In early 2025, I discovered Claude, an AI assistant created by Anthropic. At first, I used it to automate tasks in my personal life: updating my monthly budget, managing my task list, or getting grocery trip ideas. It gave my personal productivity a real boost. Mundane tasks were now quickly outsourced and completed with a fraction of the brainpower and time. What could possibly be wrong with that?
By mid-2025, I started using AI tools at my job, and my experience mirrored my personal one. The mundane became swift, and my output increased. After a few months, though, I started to look more closely at my relationship with these tools.
The appeal of AI tools in software is their apparent ability to automate your workflow: to solve problems and deliver streamlined output. Does that sound familiar? It should; that's what you used to get paid to do!
Of course, software engineers still do this, just in a different way. We no longer spend the same amount of time physically writing lines of code, toiling over a failing test or tricky business logic. Those layers are abstracted away from the flow now.
Does this feel unnerving to you? In some ways it should. Ask anyone who writes software what they enjoy about it and you'll get a mix of answers, but I suspect they'll share a common theme: building things that folks use, the feeling of solving a hard problem, seeing all the tests pass.
AI agents help with all of this, but in a way they also diminish that feeling. That's why I started to think of my relationship with an AI agent in terms of pushing and pulling.
Agents are biased. Plain and simple. Give a prompt with a hint of uncertainty, and the agent will tell you that you're right to feel that way. Prompt it with utmost confidence that you have the correct path forward, and it will proudly march you toward that end, regardless of whether it's true. This is dangerous for software engineers. This is where we should push.
Give these tools pushback. Give them context. Be direct and set guidelines. Many of these tools have a plan or read-only mode; use those first.
But when you're not writing complex algorithms or tricky business logic, it's time to pull. The agent is your ally here. It can free up brainpower and save time in your day. Set clear parameters and expectations, point it at similar patterns to follow, and let it get you there. It won't always get you 100% of the way, but it will get you close.
Like anything, this takes time. There are iterations. Feedback loops.
We need to uphold a sense of pride and ownership in our code. In a way, it is an extension of who we are as people. It’s how we offer direct value to the company we work for. We should be proud of that. Not just for the company’s sake, but for our own craftsmanship.
So the next time you use an AI tool, remember: you are ultimately in control of your work and the outcome you present. Know when to push and when to pull.