As a developer who still builds software for fun after a full workday, I’ve come to realize something slightly embarrassing: writing code isn’t actually the fun part. The joy comes from building things — that moment when an idea clicks into place, when an automated workflow hums along perfectly, or when a stubborn bug finally admits defeat. The typing? That’s just the ritual.

I’m always experimenting with new technologies and tools. And with the AI hype train rolling, I’ve been trying many ways to integrate AI into my daily workflows.

When GitHub Copilot was first released, I was in the middle of writing my bachelor’s thesis. Since I still had access to the GitHub Student Developer Pack, I got to try Copilot for free long before we could use it at work. Others dismissed it as a glorified autocomplete, but I was immediately hooked. Naturally, when ChatGPT was released, more and more developers in my circle started using it to write code, which also changed the sentiment around Copilot.

With the capabilities of ChatGPT, Copilot felt lacking, and I switched to ChatGPT for code generation. That worked, but pasting code snippets back and forth and losing the editor’s context made for a clunky experience. When Copilot Chat landed in early 2023, I was thrilled to finally have a more integrated experience.

The next big innovation from the VS Code + Copilot world took a while longer to arrive in the editor. GitHub Copilot agent mode was released earlier this year, and it was another game changer for my workflows. It gives Copilot the ability to act more autonomously and proactively, making it way more useful for complex tasks.

Enough talk about the history, here’s my current setup for agentic software development as of November 2025.

VS Code + GitHub Copilot Agent Mode

While there are numerous AI coding tools available, I currently stick with VS Code and GitHub Copilot in agent mode. Since I’m already using the same setup for my day job, it just makes sense to keep things consistent. However, I’m always open to exploring new tools and Claude Code as well as Cursor are on my to-do list for future experiments.

Contrary to my IDE choice, the model I use changes constantly. I’m always trying out the latest preview models to find one that fits my needs best. At the moment, I’m using GPT-5 Codex for most coding tasks, but I switch to GPT-5 for planning and design work. For some tasks, I also use Claude Sonnet 4.5 to review and iterate on plans from GPT-5.

What matters more than the specific model are the workflows and prompts you use to guide the AI to work effectively. I always add an AGENTS.md file with general instructions and context for the AI agent. I’d recommend keeping the file short and focused on the things that matter most for your project.

AGENTS.md structure

There is no one-size-fits-all structure for the AGENTS.md file, but I’ve found it useful to start with some general behavioral rules for the AI agent. For example, I tell the agent to favor clean, readable and maintainable code over clever one-liners or micro-optimizations. I also instruct it to be proactive and help me refactor and improve the codebase as we go along. This really helps mimic what I would do when implementing a feature myself.

After those general instructions, I add project-specific context. This includes:

  • The purpose and goals of the project
  • Tech stack, including package manager, important frameworks and libraries used
    • I always specify the commands to install dependencies, run typecheck, lint and tests and how to build the project
  • Coding conventions and style guides to follow
    • Like avoiding single-letter variable names (a thing that GPT-5 loves to do), avoiding `any` types in TypeScript, avoiding unnecessary comments that explain what the agent did, etc.
    • I don’t like exceptions, so I also add instructions on how to use Result-objects for error handling instead of throwing exceptions
  • Any important architectural decisions or patterns to follow

Then I add that every change must pass typechecking, linting and tests, and that the agent should add or update tests as needed. This really helps improve the quality of the output and reduces the number of additional ‘fix these tests/lint errors’ prompts I have to give.

And that’s basically it. In my experience, adding even more instructions doesn’t help much. Instead, focus on what’s important for your project and maybe link to further documentation in the repository to guide the agent if needed.
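Putting the structure above together, a trimmed-down AGENTS.md might look something like this. All the project specifics here are placeholders, not my actual file:

```markdown
# Agent Instructions

## Behavior
- Prefer clean, readable, maintainable code over clever one-liners.
- Be proactive: suggest refactorings and improvements as we go.

## Project
- Purpose: <one or two sentences on what the project does>
- Stack: TypeScript, pnpm, Vitest
- Commands: `pnpm install`, `pnpm typecheck`, `pnpm lint`, `pnpm test`, `pnpm build`

## Conventions
- No single-letter variable names, no `any`, no comments that narrate changes.
- Use Result objects for error handling instead of throwing exceptions.

## Quality gates
- Every change must pass typechecking, linting and tests.
- Add or update tests as needed.
```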

Tools

For tools, the best advice I can give is to start with as few tools as possible and only add more tools and MCP servers when they’re actually needed. Luckily, VS Code shows a warning if you select a lot of tools at once, but I believe the limit is way higher than what is actually useful. I’d recommend configuring a user-defined toolset with only the essential tools you need for a specific type of task. To do this, open the command palette (Ctrl+Shift+P), search for “Chat: Configure Tool Sets” and create a new toolsets file. You can name the toolsets however you like and configure multiple toolsets for different tasks in the same file.
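As a rough idea of what such a toolsets file looks like: it’s a JSON-with-comments file mapping each toolset name to its tools, description and icon. The toolset names and tool lists below are purely illustrative; the available tool names depend on your VS Code version and installed extensions:

```jsonc
{
  // Read-only toolset for exploring and planning.
  "research": {
    "tools": ["codebase", "search", "usages", "fetch"],
    "description": "Read-only exploration of the repository",
    "icon": "search"
  },
  // Toolset for implementation work with editing and test access.
  "implement": {
    "tools": ["codebase", "editFiles", "runTests", "runCommands"],
    "description": "Editing plus running tests and commands",
    "icon": "tools"
  }
}
```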

A feature that takes this a step further is the ability to define custom chat modes. These let you not only select a specific set of tools, but also add mode-specific instructions that are included in the prompt when starting a new chat. Click on the chat mode selector and select “Configure Modes…” to create your own chat modes. Each chat mode is stored as a markdown file in the .github/chatmodes folder of your repository. To get started, I’d recommend looking at some of the open-source chat modes in the awesome-copilot repository.
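For a sense of the file format: a chat mode is a markdown file with YAML front matter for the description and tool selection, followed by the mode instructions. This is a made-up minimal example, not one of the awesome-copilot modes:

```markdown
---
description: Explore the codebase and draft a plan without editing code
tools: ['codebase', 'search', 'fetch']
---
You are in planning mode. Research the relevant parts of the codebase
and produce a step-by-step implementation plan as a markdown file.
Do not make any code changes in this mode.
```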

Workflow

My general workflow for more complex agentic coding tasks is split into two phases and uses between 2 and 4 different chat sessions. For simpler tasks, less is more and a single sentence prompt might be enough.

For planning and design, I utilize a custom planning chat mode. The instructions for this mode are derived from the plan, planner and implementation-plan modes in the awesome-copilot repository. Plans are written as markdown files in a .plans directory in the repository. This way, reviewing the plan is much easier than scrolling through a long chat history. Additionally, I sometimes iterate on this plan with a different model in a new chat to get a fresh perspective and improve the plan further. Once the plan is solid, I start a new implementation chat, but usually just use the default Agent mode with a custom toolset depending on the language and the task.
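The plan files in .plans tend to follow a loose skeleton along these lines. This is a hypothetical template for illustration, not the exact output format of the awesome-copilot modes:

```markdown
# Plan: <feature name>

## Context
Why this change is needed, plus relevant constraints and affected modules.

## Steps
1. <first self-contained change>
2. <next change, building on step 1>

## Testing
Which tests to add or update, and how to verify each step.
```

Keeping the steps small and independently verifiable makes the later implementation chat much easier to supervise.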

Implementation is quite straightforward: tell the agent to read the plan from the markdown file and implement it step by step. I haven’t settled on a specific method for this. Sometimes I tell the agent to follow the plan and be proactive, only asking me for clarification when really needed. Other times, I tell it to go step by step and run tests after each step, asking for my confirmation before proceeding. Just try out different approaches and see what works best for you and the task at hand. For some tasks a hands-off approach works better, while others need more guidance to stay on track.

Depending on the complexity of the task, I might use additional chat sessions to review code changes. When I do, I usually pick a different model than the one used for implementation and simply give it the task of reviewing the changes. Sometimes adding the plan as context helps get better feedback, but sometimes it narrows the agent’s perspective too much. Review comments are either dealt with by me, or I copy them into the implementation chat and ask the agent to address them.

Asynchronous Agents

The ability to assign GitHub Issues to a remote, asynchronous coding agent is a powerful feature that I’m just starting to really utilize. Being able to kick off a more complex task, be it planning or implementation, and have the agent work on it in the background while I focus on other things is great for productivity. I’m still experimenting with the best ways to integrate this into my existing workflows, but I can already see the potential. Some of my experiments, like letting the agent work on general refactoring tasks or adding zod validation to an older codebase, have been quite successful. I usually assign the agent either at the end of the day, letting it work overnight so I can review the next morning, or at the beginning of a workday so it works while I’m at my day job. The new GitHub Agent Sessions UI makes this workflow really smooth and speeds up code reviews as well. If you haven’t tried it yet, I highly recommend giving it a go.

Final Thoughts

This setup has proven to be quite effective for me so far. It brought back the joy of building things while reducing the friction of writing code. Of course, some of it probably has to do with it being new and exciting, but I truly believe that agentic coding allows developers to focus more on the creative aspects as well as higher-level design and architecture, which for me is where the real fun lies. With more and more improvements in the models, tools and integrations, I’m excited to see how this space evolves and what new possibilities come up in the next few years.