AI Coding Agents and OpenMRS GSoC 2026

The rate at which the responsibilities of a Software Engineer are changing gets crazier each day.

Should we prevent our GSoC students from using AI Coding Agents, even when we ourselves are increasingly using them? For instance, these days, 100% of the code that I push is written by these agents. I only review, suggest improvements, ask for alternative approaches, etc. The outcomes of these conversations with my agents are written and pushed without my manually writing a single line of code. And I am very sure our industry is moving to a point where a developer who does not maximise their productivity with these tools will become like someone who, back when IDEs were invented, insisted on developing in simple text editors like Notepad. :smiley:

From my experience with these agents, the quality of the code they produce, with good context engineering, is so good that even with my years of coding, I cannot compete. Of course they sometimes do stupid things, which need to be corrected with reviews and better context. But even with this hallucination problem, they have greatly improved in the last month or two. And they are not slowing down!

Much as GSoC is partly about these students learning, it is also about getting value for our investment as mentors, and faster. I am convinced that we shall get more value when these students are allowed to use these coding agents, as long as they review the output and commit only after it makes sense, according to their current level of experience. That is, they need to learn how best to use these tools at a level that is different from a vibe coder. That is how we shall end up with products that our implementers can use, while also giving these students the practical real-world experience that is increasingly becoming a MUST for all employers.

Please take special note that I am not against having these students learn the basics. These are a MUST. But it is not the role of our GSoC program to teach them these basics by requiring them to manually write their code. They should instead use those fundamentals to figure out how best to output quality code, and faster, with the help of these agents.

If I am evolving with the trends in the software industry, why shouldn't our OpenMRS GSoC program do the same? I therefore open up the debate. :smiley:


Great topic, Daniel! I think we definitely need to go beyond the question of code quality and ask what competencies we want our students to learn. Could you do the sort of intelligent review of AI code, weigh alternatives, etc. if you were not already skilled in the data model, existing codebase, OMRS best practices, etc.? It seems that "learning the basics" is necessary. So too is learning how best to interact with AI without turning your brain into mush. Perhaps it is not an either/or question but one of timing, and rather than removing competencies from their GSoC time, it seems like we are going to be adding more.


Thanks, Dan and Andrew.

Brooks’ insight from The Mythical Man-Month:

TL;DR

The bottleneck in software is coordination and conceptual integrity.

Jevons' insight from The Coal Question, or simply the Jevons Paradox for knowledge work:

TL;DR

As cognition becomes cheaper, more projects start, more analysis is performed, more automation is attempted, more software is written, and total cognitive consumption expands.

IMO:

The real design question isn't "Agents or no agents?" As people have said, it's now the norm, and the students will use them anyway. :grinning_face:

It's: how do you preserve conceptual integrity and real ownership when code generation becomes effectively "free", or offloaded to the agents?

The solution is not prohibition.

Partly, it's raising the coordination bar (I'm not sure how best OpenMRS can do this) in proportion to the execution multiplier.

That said, I would propose adding something like the following to the GSoC student guidelines:

Use of AI in the programming phase is allowed, but failing to understand the context of the problem and the existing code will result in failing the program. You should therefore learn the codebase and the underlying technologies we use, and not completely offload the learning to AI tools.

My 2 cents.


I think going forward, this is how software is going to be written. For the most part, software engineering has really never been about writing code. If the agents help with the code, it means we can focus on problem-solving.


I think that AI sets the expectations for GSoC students (and junior devs in general) higher than ever. Applicants need to demonstrate a very high level of code contributions, critical thinking, and testing skills to be considered. The times are gone when I would evaluate a written proposal… Contributions matter the most. After all, students have unlimited access to a very proficient artificial coder. It's easier than ever to contribute meaningfully, even without strong programming skills. As a mentor, I wouldn't like my student to just pass my suggestions or review comments on to an AI agent, as I can do that myself, and I may do it more accurately and efficiently.

I would encourage students to get very proficient with AI agents, learn from them, question them and explore different approaches with their help… ask a ton of questions to fully understand everything.

It's still in the hands of a human coder to have AI do things the right way, which is rarely the first thing agents do when asked for non-trivial things… at least for now, from my experience.


Thanks for that point of view Raff!

In my opinion, software engineering is finally starting to become a reality. In the past, it was a bad analogy for the artisanal craft of design and programming… but now creators of software will be able to act a lot more like engineers: using a well-structured plan and specifications to offload the building task to bricklayers, steel welders, and concrete pourers (AI agents, in the software case). When engineers inspect a building site or road construction, they need to know what materials are layered on what, and sometimes take a spirit level (or laser), or a hammer, to something to check it doesn't break.

BUT, just as engineers need to be educated in maths, physics, chemistry, materials science, etc., even though they will not manually calculate loads on structures, pour the concrete, or weld the beams, so software engineers will need to understand maths, logic, the von Neumann computer architecture (perhaps GPU architectures), orders of complexity, algorithms, machine language, RDBMSs, 3GL/OO programming languages (and maybe functional or logic languages), [& other stuff, too long to enumerate] AND THEN how to draft good plans and designs, be able to read the generated code, check whether the generated tests provide coverage, know what that embedded SQL query means, etc., etc.

This just became concrete (excuse the pun) in my head a few minutes ago, so I'll need to validate and extend the analogy in the future. Does it make sense to others?

Keith


Yes it makes lots of sense to me! :smiley: