Creating workflows for each automated test in the Dictionary Manager module

With the ongoing work on automated tests in the Dictionary Manager module, I suggest that we look for a way to give each test its own workflow configuration, rather than having one configuration that triggers all the automated tests within the module. We can retain the parent workflow that triggers all the existing tests, but also write a workflow for each test. This would make it easier to track the status of each test on the QA Dashboard and would simplify debugging any failing test within the module.

Would love to hear from the DM squad members what you think about the epic above!

cc: @hadijah315 @ibacher @jwnasambu @suruchi @grace @dkayiwa


I stand to be corrected, but I believe this was discussed some time back in your presence on the Dictionary Manager call, and I still remember the answer from Ian: "kindly let us first ensure the tests are running before we do that". To be honest, some tests are still failing, not because the step definitions or Gherkin references are poorly written, but because the API has to be fixed, which we are striving hard to resolve so that everything works as expected before embarking on that. What do you think @kdaud?

1 Like

Following the conversation here: @ibacher suggested the idea, and I am looking for ways to put it into play!

No problem at all; as soon as things are set up, we can embark on the idea.

You are right, and I like your assertiveness. I believe by next week we will be done with this work. Kindly mark my words.

I am available to support the team :seedling:

1 Like

I can’t thank you enough for the zeal you have for sharing your experience with others. We are really blessed to have you around.

1 Like

I guess @hadijah315 and @ibacher have had a look at the epic above and are probably gathering ideas on the topic to provide appropriate guidance!

In a nutshell: I have just recently read a message on the ocl-slack-channel reading: "It’s now very clear that concept cloning is our most important and most urgent work". I thought it wise to put the work in the above epic on hold so the squad members can focus their attention on the urgent project requirement; the epic can resume later when the team is ready to look into it again (not sure whether my thought is worth anything).

cc: @jwnasambu @ibacher @hadijah315 @grace @dkayiwa

@kdaud I would hesitate around calling this “each test” as having a workflow-per-test is going to quickly get a little ridiculous. The Dictionary Manager currently has 9 feature files consisting of 30 specs and having either 9 or 30 workflows feels like the wrong number. I would instead suggest that we break down tests (if it makes sense to do so) into the test personas we already have defined.

This can easily be done with tags and npm scripts. See the docs on the cypress-cucumber-preprocessor and note that as an npm script it would be something like:

"test:integration-basic-user": "cypress-tags run -e TAGS='@basic-user'"
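Extending that pattern, the persona-based scripts might look something like the following fragment of `package.json` (the `@admin-user` tag here is an illustrative assumption; the actual persona tags would come from whatever is already defined in the feature files):

```json
{
  "scripts": {
    "test:integration-basic-user": "cypress-tags run -e TAGS='@basic-user'",
    "test:integration-admin-user": "cypress-tags run -e TAGS='@admin-user'"
  }
}
```

Each persona-oriented workflow would then simply invoke its corresponding npm script.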

That said, there are some real challenges to running the current e2e tests against an actual instance, something we work around by spinning up a “clean” instance of the OCL backend when running the tests in CI. Getting the tests working against a “less clean” environment probably will require the greatest deal of work, but it would be valuable to be able to run these tests against, e.g., the QA version of the Dictionary Manager.

To that end, I would suggest that it’s best to run these workflow-oriented tests in Bamboo rather than GitHub Actions. That way, we can ensure that they only run after the QA instance has been updated to the latest copy of the code.

1 Like

Awesome, thanks @ibacher! Looking forward to digging into that in more detail.

Thanks for the shared resource.

I am looking into the idea to find a smart way to achieve the desired goal, and this suggests having 9 workflows with the specs embedded within them.

This sounds like a snag. For testing (simulation) purposes, is it possible to have a local QA instance of the Dictionary Manager? If yes, do we have a doc that can aid in achieving this?

Sounds cool; however, we might need to prioritize tasks and ensure they all get sorted in the long run.

@ibacher I’m of the thought that we run these workflow-oriented tests in both GitHub Actions and Bamboo. Yes, GitHub Actions has some issues I’m aware of, but it would be great to see a comparison of the test behavior from both builds, and perhaps we can later get rid of Actions and stick with Bamboo.

That’s what we do in our current setup on every commit. But I think that ideally we’d be running these tests against the QA environment.

I’m not necessarily advocating moving everything to either GitHub Actions or Bamboo. They serve slightly different but sometimes overlapping purposes.

The tricky point here is that we would ideally trigger the e2e test workflows after the QA environment has been updated. The QA environment gets updated on every commit, once it’s been run by Bamboo (we generate a Docker image and this Docker image gets updated on commit). This happens as part of this plan. So the reason for suggesting using Bamboo here is that we can have Bamboo trigger the “run e2e tests” job after the “deploy to QA” task is completed.

I suppose that could be set up with Bamboo sending a request to GitHub to kick off an Actions workflow, but I just thought this would be easier to do all in one place.
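For what it’s worth, the Bamboo-to-GitHub option described above could be a final Bamboo script task that calls GitHub’s `workflow_dispatch` REST endpoint. A rough sketch, where the owner, repo, workflow file name, and branch are all assumptions to be replaced with the real values:

```shell
#!/bin/sh
# Assumed values -- substitute the actual org/repo and workflow file name
OWNER="openmrs"            # assumption: repository owner
REPO="openmrs-ocl-client"  # assumption: repository name
WORKFLOW="e2e-qa.yml"      # assumption: the workflow file that runs the e2e suite

# GitHub's workflow_dispatch endpoint for manually triggering an Actions workflow
DISPATCH_URL="https://api.github.com/repos/${OWNER}/${REPO}/actions/workflows/${WORKFLOW}/dispatches"
echo "$DISPATCH_URL"

# The actual call (requires a token with permission to trigger workflows):
# curl -X POST "$DISPATCH_URL" \
#   -H "Accept: application/vnd.github+json" \
#   -H "Authorization: Bearer $GITHUB_TOKEN" \
#   -d '{"ref":"master"}'
```

Note that the target workflow would also need a `workflow_dispatch:` trigger declared in its `on:` section for this to work.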

1 Like

Running the Dictionary Manager locally should be easy, since that’s effectively what our infrastructure is doing.

Running the OCL platform locally is a different story, but it might be a good use case for a dockerized MockServer.
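For reference, a dockerized MockServer can be brought up with a minimal compose file like the sketch below (service name and tag are illustrative; 1080 is MockServer’s default port). Mock expectations for the OCL API would still need to be defined separately:

```yaml
version: "3"
services:
  mockserver:
    image: mockserver/mockserver:latest
    ports:
      - "1080:1080"
```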

1 Like

All our e2e tests actually run against a local version of the OCL platform, created from this docker-compose.yml file.

1 Like

Hey @ibacher, I have been a little unwell for the past few days but am now back to get my hands dirty on this task. Hopefully, before this week ends, we will have things set up for at least 2 workflows, and then the team will work around the idea to get the rest sorted out. cc: @jwnasambu @suruchi @hadijah315

1 Like