GSoC 2026: Service Queues Improvements – Approach Clarification

Hi everyone,

I’m Mudassar Quraishi, an MCA student and GSoC 2026 applicant. I’ve been exploring the Service Queues app (esm-service-queues-app) to understand its current implementation.

While going through the code (especially the hook-based data fetching like useQueueEntries), I noticed that the frontend relies on the /ws/rest/v1/queue-entry endpoint with a custom representation. It seems to fetch a large amount of nested data (visits, encounters, observations), while the queue table only uses a small subset of fields.

My current approach is to introduce a dedicated endpoint (e.g., /queue-entry/summary) that returns only the required data for the queue table, and load detailed data separately when needed.
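To make this concrete, here is a rough sketch of what a flat summary row might look like, and how the current nested representation could be projected down to it. The field names (`patientName`, `waitTimeMinutes`, etc.) are my own guesses about what the table needs, not the actual API:

```typescript
// Hypothetical flat shape for the proposed /queue-entry/summary response.
// Field names are illustrative, based on what the queue table renders.
interface QueueEntrySummary {
  uuid: string;
  patientName: string;
  priority: string;
  status: string;
  waitTimeMinutes: number;
}

// Sketch of projecting the nested REST representation down to the flat
// summary; `fullEntry` mirrors only the parts of the payload used here.
function toSummary(
  fullEntry: {
    uuid: string;
    patient: { person: { display: string } };
    priority: { display: string };
    status: { display: string };
    startedAt: string; // ISO timestamp
  },
  now: Date,
): QueueEntrySummary {
  const started = new Date(fullEntry.startedAt);
  return {
    uuid: fullEntry.uuid,
    patientName: fullEntry.patient.person.display,
    priority: fullEntry.priority.display,
    status: fullEntry.status.display,
    waitTimeMinutes: Math.max(
      0,
      Math.round((now.getTime() - started.getTime()) / 60000),
    ),
  };
}
```

The idea is simply that the table binds to five or six scalar fields instead of walking visits/encounters/observations client-side.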

I wanted to confirm:

  1. Should the optimized endpoint return only flat data for the main queue table?

  2. Is the expectation to simplify frontend data handling after introducing this API?

I’d appreciate any feedback to ensure my approach aligns with the project direction.

Thanks!

Hi @mudassarquraishi , good observations — I’ve been digging into the same area. One thing worth noting is that @ibacher (the mentor) has already given some direction on this: the new endpoint should return flat fields for the main table listing, and he specifically mentioned that the other components like vitals and medication lists should remain relatively independent rather than being bundled into the same summary response. So your /queue-entry/summary idea is on the right track, just worth being careful not to over-consolidate.

He also mentioned something interesting about the polling side: since the queue table needs frequent updates, the endpoint should generate appropriate ETags so that repeated polls mostly return 304 Not Modified rather than full payloads. That’s worth factoring into the endpoint design early rather than retrofitting it.
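To show what the client side of that flow looks like, here’s a minimal sketch of ETag-aware polling with `If-None-Match`. The URL and response shape are assumptions, not the actual OpenMRS API; the fetch-like function is injected so the logic is easy to exercise without a network:

```typescript
// Minimal fetch-like abstraction so the polling logic is testable offline.
type FetchLike = (
  url: string,
  headers: Record<string, string>,
) => Promise<{ status: number; etag?: string; body?: unknown }>;

// One poll cycle: send the last ETag, and on a 304 keep the cached body
// instead of reprocessing a full payload.
async function pollQueue(
  doFetch: FetchLike,
  url: string,
  lastEtag: string | null,
  lastBody: unknown,
): Promise<{ etag: string | null; body: unknown; changed: boolean }> {
  const headers: Record<string, string> = {};
  if (lastEtag) headers["If-None-Match"] = lastEtag;
  const res = await doFetch(url, headers);
  if (res.status === 304) {
    // Unchanged: reuse the cached body, skip re-rendering.
    return { etag: lastEtag, body: lastBody, changed: false };
  }
  return { etag: res.etag ?? null, body: res.body, changed: true };
}
```

In practice the `changed: false` branch is where the savings come from: the UI only re-renders when the server actually has new state.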

Thanks @katoelvis — this is really helpful, I appreciate the detailed clarification.

That makes sense — I’ll keep the summary endpoint focused on flat fields for the main queue table and avoid bundling related data like vitals or medications, so those components can remain independent.

The point about ETags is especially useful. I hadn’t fully considered the polling impact, so I’ll make sure the endpoint is designed in a way that supports efficient updates (returning 304 where possible) instead of repeatedly sending full payloads.

Also, could you point me to where @ibacher shared these suggestions? I’d like to go through the original discussion to make sure I’m fully aligned before finalizing my approach.

Thanks again!

It came up in a direct discussion between him and me; that’s where I got some of that info.

Got it, thanks for sharing that. This gives me a much clearer direction to proceed.

This makes a lot of sense, especially using ETags to reduce unnecessary payloads during polling.

One thing I’m curious about is how updates to the queue will be tracked for ETag generation. For example, would it be based on a lastUpdated timestamp at the queue level, or derived from individual queue entries?

Also, in cases where multiple updates happen rapidly, do we anticipate any challenges ensuring that clients don’t miss intermediate state changes when relying on 304 responses?

In the longer term, could a push-based approach (e.g., WebSockets or event-driven updates) be considered alongside polling for high-frequency updates?

That’s a really good point — I was wondering about the same thing.

For ETag generation, using something like the latest “last updated” timestamp across the relevant queue entries feels like a simple and practical approach. That way, any change in the visible rows would invalidate the ETag and trigger a fresh response.
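As a concrete sketch of that idea (assuming each entry exposes a `lastUpdated` ISO timestamp, which is my assumption about the field name):

```typescript
// Entry shape assumed for illustration; only the timestamp matters here.
interface EntryStamp {
  uuid: string;
  lastUpdated: string; // ISO timestamp
}

// Derive a weak ETag from the newest lastUpdated among the visible entries.
function queueEtag(entries: EntryStamp[]): string {
  // An empty queue still needs a stable tag so 304s work before entries exist.
  if (entries.length === 0) return 'W/"empty"';
  const newest = entries
    .map((e) => Date.parse(e.lastUpdated))
    .reduce((a, b) => Math.max(a, b));
  return `W/"${newest}"`;
}
```

Any edit to a visible row bumps the max timestamp, so the ETag changes and the next poll gets a fresh 200.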

For rapid updates, I think some level of eventual consistency is probably fine for the queue view. Since it’s polling-based anyway, the UI should catch up on the next refresh — the bigger win here is avoiding unnecessary full payloads.

A push-based approach (like SSE/WebSockets) would definitely be more real-time, but that seems more like a longer-term improvement rather than something to tackle right now.

If anyone has seen similar patterns used elsewhere in OpenMRS, it would be great to learn from that as well.

That approach makes a lot of sense — using the latest lastUpdated across visible queue entries keeps the ETag simple and ensures any relevant change invalidates it.

I agree that for a polling-based UI, eventual consistency is a reasonable tradeoff, especially since the goal is to reflect the latest state rather than every intermediate transition.

One thing I was thinking about: do we need to account for cases where multiple updates happen within the same timestamp resolution (e.g., if lastUpdated granularity is limited)? In such scenarios, would combining the timestamp with something like a count or a lightweight hash of the visible rows make the ETag more robust, or would that be unnecessary complexity for our use case?

Happy to align with the simpler approach for now, especially if this keeps things efficient and easy to maintain.
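For reference, if we ever do hit the timestamp-granularity issue, the more robust variant could look something like this. Everything here is illustrative: the field names are assumptions, and FNV-1a is just one cheap hash choice, not a project decision:

```typescript
// Cheap 32-bit FNV-1a hash over a string; enough to distinguish row sets.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// ETag combining newest timestamp, entry count, and a hash of visible uuids,
// so two distinct states sharing one timestamp tick still get distinct tags.
function robustEtag(
  entries: { uuid: string; lastUpdated: string }[],
): string {
  const newest = entries.length
    ? Math.max(...entries.map((e) => Date.parse(e.lastUpdated)))
    : 0;
  const hash = fnv1a(entries.map((e) => e.uuid).join(","));
  return `W/"${newest}-${entries.length}-${hash.toString(16)}"`;
}
```

The count catches additions/removals within one tick, and the uuid hash catches substitutions; agreed that this stays on the shelf unless monitoring shows the plain timestamp misbehaving.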

That’s a really good point — I hadn’t thought much about the timestamp granularity issue.

I agree that in theory combining it with something like a count or lightweight hash could make it more robust, but it might also add unnecessary complexity for what is essentially a polling-based UI.

My inclination would be to start with the simpler timestamp-based approach and only add something extra if we actually run into consistency issues in practice.

Keeping it simple and predictable probably matters more here, especially given the expected usage pattern.

That approach makes sense. Starting with a timestamp-based strategy provides a good balance between simplicity and effectiveness.

I agree that it is better to avoid additional complexity unless we observe issues in practice. If needed, we can extend the ETag later by incorporating a count or a lightweight hash.

For now, using the latest lastUpdated across visible entries offers a clear and predictable mechanism for a polling-based UI. It may also be useful to include basic monitoring around cache misses or stale responses so that any limitations related to timestamp granularity can be identified early.

That makes sense — monitoring is a good addition here.

It would help validate whether the timestamp-based approach is holding up well in practice and give us an early signal if we need to refine the ETag strategy later.

Starting simple and observing real behavior feels like the right approach.

Great to see your contribution here!


Hello, I’m Samanvita Dharwadkar, an AIML student and freelancer working on the OpenMRS project and raising PRs.

Thanks, I appreciate that!

Nice to meet you, Samanvita — that’s great to hear. I’m also exploring contributions around the service queues area. Looking forward to learning and contributing more here.