AI Coding Agents and OpenMRS GSoC 2026

The rate at which the responsibilities of a Software Engineer are changing is getting crazier each day.

Should we prevent our GSoC students from using AI Coding Agents, even when we ourselves are increasingly using them? For instance, these days, 100% of the code that I push is written by these agents. I only review, suggest improvements, ask for alternative approaches, etc. All the outcomes of these conversations that I have with my agents are written and pushed without my manually writing a single line of code. And I am very sure our industry is moving to a point where a developer who does not maximise their productivity with these tools will become like someone who, back when IDEs were invented, insisted on developing in simple text editors like Notepad. :smiley:

From my experience with these agents, the quality of code that they produce, with good context engineering, is so good that even with my years of coding, I cannot compete. Of course they sometimes do stupid things, which need to be corrected with reviews and better context. But even with this hallucination problem, they have greatly improved in the last month or two. And they are not slowing down!

Much as GSoC is partly about students learning, it is also about getting value for our investment as mentors, and getting it faster. I am convinced that we shall get more value when these students are allowed to use these coding agents, as long as they review the output and commit it only after it makes sense to them, given their current level of experience. In other words, they need to learn how best to use these tools at a level beyond that of a vibe coder. That is how we shall end up with products that our implementers can use, while also giving these students practical real-world experience that is increasingly becoming a MUST for all employers.

Please take special note that I am not against having these students learn the basics. Those are a MUST. But it is not the role of our GSoC program to teach them these basics by requiring them to manually write their code. They should instead use those fundamentals to figure out how best to produce quality code, faster, with the help of these agents.

If I am evolving with the trends in the software industry, why shouldn't our OpenMRS GSoC program do the same? I therefore open up the debate. :smiley:

14 Likes

Great topic, Daniel! I think we definitely need to go beyond the question of code quality and ask what competencies we want our students to learn. Could you do the sort of intelligent review of AI-generated code, weigh alternatives, etc., if you were not already skilled in the data model, the existing codebase, OMRS best practices, and so on? It seems "learning the basics" is necessary. So too is learning how best to interact with AI without turning your brain into mush. Perhaps it is not an either/or question but one of timing, and rather than removing competencies from their GSoC time, it seems we are going to be adding more.

2 Likes

Thanks, Dan and Andrew.

Brooks’ insight from The Mythical Man-Month:

TL;DR

The bottleneck in software is coordination and conceptual integrity.

Jevons's insight from The Coal Question, or simply the Jevons Paradox applied to knowledge work:

TL;DR

As cognition becomes cheaper, more projects start, more analysis is performed, more automation is attempted, more software is written, and total cognitive consumption expands.

IMO:

The real design question isn't "Agents or no agents?" As people have said, it's now the norm, and the students will use them anyway. :grinning_face:

It's: how do you preserve conceptual integrity and a genuine sense of ownership when code generation becomes effectively "free", offloaded to the agents?

The solution is not prohibition.

Part of it is raising the coordination bar (I'm not sure how best OpenMRS can do this) in proportion to the execution multiplier.

That said, I would propose adding something like this to the GSoC student guidelines:

Use of AI in the programming phase is allowed, but failing to understand the context of the problem and the existing code will result in failing the program. You should therefore learn the codebase and the underlying technologies we are using, and not offload the learning entirely to AI tools.

my 2 cents.

1 Like

I think going forward, this is how software is going to be written. For the most part, software engineering has never really been about writing code. If the agents help with the code, it means we can focus on problem-solving.

5 Likes

I think that AI sets the expectations for GSoC students (and junior devs in general) higher than ever. Applicants need to demonstrate a very high level of code contributions, critical thinking, and testing skills to be considered. The times when I would evaluate a written proposal are gone… contributions matter the most. After all, students have unlimited access to a very proficient artificial coder, so it's easier than ever to contribute meaningfully, even without strong programming skills. As a mentor, I wouldn't want my student to simply pass my suggestions or review comments along to an AI agent, as I can do that myself, and probably more accurately and efficiently.

I would encourage students to get very proficient with AI agents, learn from them, question them and explore different approaches with their help… ask a ton of questions to fully understand everything.

It's still in the hands of a human coder to have AI do things the right way, which is rarely the first thing agents do when asked for non-trivial things… at least for now, in my experience.

1 Like