Handling Each End-to-End (E2E) Test Configuration in the BDD Approach

Hello Community.

As you may have seen, there is extensive work going on around writing new automated E2E tests in the BDD approach within qaframework, with the help of the QA support team. On the QA team's side, we are also trying to add more configurable dashboards for the automated E2E test workflows that have been written. However, I have some questions that I think we should settle first so that the dashboards on the qaframework board are clearer.

  1. How should we handle each E2E test when it is triggered both before and after the test is merged into master?
    • One reason we are adding dashboards is to confirm whether a test is passing or failing. Anyone can simply click the link and be redirected to the URL of the specific configured workflow test, as seen below. This is nice.

However, for a pull request that is still in progress/code review, it combines all the dashboards provided within the defined folder, which I don't think is the right design, as shown here.

I would expect each written test to have its own configuration so that it is handled cleanly. One pain point I see with too many configurations piling up is this: suppose we have more than 10 written E2E test workflows; all of them will also appear among the tracked test-workflow checks, which might not be the right move.

It would ease the work if each test simply checked its own build, and perhaps we could have one single configuration that handles all of them, as is done with the GitHub Actions config; then we can easily track these dashboards once the tests are merged.
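
For context, here is a minimal sketch of roughly what each per-test workflow looks like today, assuming a typical GitHub Actions layout (the file name, workflow name, and test command below are illustrative, not the actual qaframework files). Because every such file declares both a `push` and a `pull_request` trigger, every written test shows up as its own check both before and after merge:

```yaml
# .github/workflows/vitals-and-triaging.yml (illustrative file name)
name: Vitals and Triaging Test

on:
  push:
    branches: [master]        # runs after the test is merged into master
  pull_request:
    branches: [master]        # runs while the PR is still in review

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - name: Run this one E2E test
        # Placeholder command; the real step depends on the repo's test runner
        run: npm run e2e -- --spec "vitals-and-triaging"
```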

Looking forward to more ideas, thanks. cc @christine @grace @dkayiwa @ibacher @mozzy @kdaud @bistenes.

Thanks so much for raising this, Sharif. You're right that this is making the GitHub test list a bit long and confusing. We'll probably also want test names that make it clear which tests are for 3.x vs. which are for 2.x.

Curious to hear what others have to say.

I agree with the idea! One thing we need to keep in mind is that each test has its own workflow configuration that triggers it to run for every PR made against master. So when a PR targets the master branch, all the existing test workflows in master run independently, hence showing their respective runs as reflected in the shared screenshot.

I think we can instead have a configuration within our repo that combines all the test workflows present in master into one, so that when a PR is opened we see its own run plus a single combined run of all the existing workflows, something like All Firefox Tests/build(pull_request). I'm not sure how to do this, but I think there should be a way to configure it and hopefully we shall figure it out (see the sketch below). @sharif, do you mind creating a ticket to track this?
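
Something along these lines might work for the combined check. This is only a sketch: it assumes a Cypress-style runner and an npm setup, and the file name, workflow name, and test command are placeholders to be swapped for whatever the repository actually uses.

```yaml
# .github/workflows/all-firefox-tests.yml (illustrative file name)
name: All Firefox Tests

on:
  pull_request:
    branches: [master]
  push:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - name: Run every E2E spec on Firefox in a single job
        # Placeholder command; swap in the repository's actual test script
        run: npx cypress run --browser firefox
```

With a single `build` job like this, the PR check list would show just one entry of the form All Firefox Tests / build (pull_request).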

I agree! A ticket to track these changes is RATEST-176, and I am following it to see that it gets resolved.

@ibacher, do you have any ideas you can add to the epic?

This is at least what we want to leverage:

I would suggest that instead of having every workflow trigger on PR, we use GitHub Actions' ability to trigger a workflow based on the paths touched by the PR to limit which builds run on a PR. So, e.g., we might only trigger the Vitals and Triaging test on PRs that touch the relevant file.

Similarly, we could distinguish between PRs that are against the 3.x tests (and only need to trigger that workflow) and those that are for the current RefApp.
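
For what it's worth, this is typically done with a `paths` filter on the `pull_request` trigger. Only the trigger section is shown here, and the directory names are just guesses at the repo layout, not the actual structure:

```yaml
# Trigger section of one individual test workflow; the jobs section stays as it is.
name: Vitals and Triaging Test

on:
  pull_request:
    branches: [master]
    paths:
      - 'cypress/integration/vitalsAndTriaging/**'   # this test's own specs (path is an assumption)
      - 'cypress/support/**'                          # shared helpers it depends on (also an assumption)
```

A similar `paths` block pointing at the 3.x test directory on the 3.x workflows (and excluded from the 2.x ones) would keep 3.x-only PRs from triggering the current RefApp tests, and vice versa.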

Thanks @ibacher for the suggestion; I totally agree with you. I am also looking at this from a GitHub Actions perspective, and I hope it has no impact on the QA dashboards, i.e. the dashboards use case will not be affected.

Awesome, it's great that we can improve this; it will ease the workflow. Looking forward to sharing a working pull request.

@sharif, it's a good idea to share the ticket here so we can follow up on this epic!

Following up on this sub-epic, I have made a commit here to address the idea!

  Current QA Dashboard View

  Proposed QA Dashboard View

cc: @sharif @grace @dkayiwa @ibacher @christine

After investigating how we can handle the display of E2E tests on GitHub Actions, I noticed the following.
Each test workflow has its own configuration that triggers the test to run on every PR made against the master branch, which is why all the test workflows present in master are displayed whenever a commit is made against master. See the snapshot below:

However, GitHub Actions provides a mechanism to stop these test workflows from running on PRs, and the setting can be adjusted at any time, i.e. a workflow can be disabled or re-enabled. I have currently configured GitHub Actions not to trigger the individual test workflows on PRs, and to run only All Firefox Tests against the committed changes. The good part is that All Firefox Tests is a collection of all the workflows within master, so by default all the test workflows are included in the All Firefox Tests run.
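
For anyone reading along, one way to get this behaviour in the workflow files themselves (rather than via the Actions tab's disable/enable button) would be to drop the `pull_request` trigger from each individual test workflow and keep it only on All Firefox Tests. This is a sketch of what that change could look like, not necessarily the exact change that was made:

```yaml
# Trigger section of each individual test workflow: with no pull_request
# trigger it no longer appears as a PR check and only runs after a merge
# to master. The jobs section stays unchanged.
on:
  push:
    branches: [master]
```

All Firefox Tests would keep both the `push` and `pull_request` triggers, so it stays the single check shown on every PR.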

I am cautious about this approach, since ideally we cannot guess which specific test workflow will be touched by a contributor's changes. However, the run of the All Firefox Tests workflow will help us track this within its report in case the changes break something. I have not yet figured out how we can wake up a specific test workflow only on a commit that changes its respective part of the code base.

This is worth looking into so that we can have the 2.x RefApp and 3.x RefApp tests running independently and reflected on GitHub Actions. I want to suggest that on every commit, whatever the change may be, these two specific checks are shown, something like 2.x All Firefox Tests and 3.x All Firefox Tests. "All Firefox Tests" simply means the tests run on the Firefox engine in GitHub Actions; otherwise they could just as well be renamed to 2.x RefApp Tests and 3.x RefApp Tests respectively (see the sketch below).
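
For the renaming part, only the `name:` field of the two umbrella workflows would need to change. A sketch follows, with assumed file names and only the naming/trigger portion shown (each workflow lives in its own file under .github/workflows/, and the jobs sections stay as they are):

```yaml
# .github/workflows/refapp-2x-tests.yml (assumed file name)
name: 2.x RefApp Tests
on:
  pull_request:
    branches: [master]
  push:
    branches: [master]
---
# .github/workflows/refapp-3x-tests.yml (assumed file name)
name: 3.x RefApp Tests
on:
  pull_request:
    branches: [master]
  push:
    branches: [master]
```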

Below is how the GitHub Actions checks are now displayed on PRs:

I am open to ideas! cc: @ibacher @sharif @dkayiwa @christine @grace @bistenes @hadijah315

What exactly has changed within the configuration in master? We can't rely on a test appearing fixed after simply re-triggering it when no actual fix was made; I am also encountering the same thing after triggering a rebuild. I'm still skeptical, though, since it's not yet confirmed. Have you confirmed what changed on the master branch?