QA Chronicles, Part One: Building a QA foundation
This post is part of a series called “QA Chronicles”, where I share our experience improving Quality Assurance (QA) practices in software product development. Although I focus on real-world experience, I believe these practices can be reused in other endeavors. In this series I will present the different strategies we implemented to improve QA in a software development team building a digital product for an external client. This is a typical scenario for us, as we build engineering teams for external clients.
Three months back, I traded my manager’s hat for a tester’s headset, diving back into the trenches as a Lead QA. After two years leading the charge, I found myself on the front lines again, ready to tackle a new challenge. The team I was joining was battling both quality and velocity issues. Additionally, some individual performance concerns were simmering beneath the surface. It was clear this project wouldn’t be a walk in the park, but I thrive in the thick of things.
Eager to dive in, I began the onboarding process. Stepping into the project, I was met with a hurdle: the team’s process documentation was not up to par. Documents were incomplete, sections were missing, and in some cases the level of detail was insufficient, leaving me without the knowledge to do my job effectively. This produced a domino effect across the team. Tasks were completed haphazardly, and leaders couldn’t accurately assess who fit the project best. This broken system hindered everyone’s performance.
The causes of these issues were multiple, but one glaring problem was the lack of a test strategy. Despite the team’s enthusiasm, the testing process lacked clear direction. Instead, the entire quality process was based on a high-level document outlining a few testing points. This ambiguity opened the door to deviations from best practices and left everyone unclear about the QA engineers’ role in meetings, test case documentation, communication, and overall collaboration. The team also developed inconsistent practices, with testing becoming dependent on individual negotiations between developers and QA engineers. This created a system of favoritism that bypassed documented procedures and fostered a lack of transparency.
This was a challenging situation to tackle, as so many different things needed to be improved. However, I saw it as a chance to build a stronger, more transparent team and a foundation for other teams in the company. To do so, the team and I implemented the following strategies:
1. Embrace open communication
Being part of a team, these actions were defined and implemented by everyone, not just by me. We actively encouraged the team to hold all conversations in public channels where every team member and the client’s representatives had visibility.
To do that, we decided to lead by example, sometimes asking questions we already knew the answer to in order to spark discussions and bring hidden issues to the surface. Additionally, we pushed the team to move conversations out of internal channels and into the ones the client could see. This way we were able to identify issues proactively, provide the team with proper and timely solutions, and reassure the client by giving them visibility into how identified issues were being addressed. The Quality Assurance team also started posting a Slack notification every time a ticket was picked up for testing, updating the thread with any questions the QA engineer had and with the outcome of the ticket.
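To make the notification habit concrete, here is a minimal sketch of how such a Slack update could be built and posted. The ticket fields, statuses, and webhook URL are assumptions for illustration, not our actual setup; only the `{ text: ... }` payload shape follows Slack’s standard incoming-webhook format.

```typescript
// Hypothetical shape of a QA status update; field names are invented for illustration.
interface TicketUpdate {
  ticketId: string;
  tester: string;
  status: "picked_up" | "passed" | "failed";
}

// Builds the Slack message payload. The { text: string } shape is the
// standard body for Slack incoming webhooks.
function buildNotification(update: TicketUpdate): { text: string } {
  const verb = {
    picked_up: "was picked up for testing by",
    passed: "passed QA, tested by",
    failed: "failed QA, tested by",
  }[update.status];
  return { text: `Ticket ${update.ticketId} ${verb} ${update.tester}` };
}

// Posting would then be a single HTTP call to the team's webhook URL, e.g.:
// await fetch(SLACK_WEBHOOK_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildNotification(update)),
// });
```

Keeping the message builder separate from the HTTP call makes it easy to test and to reuse for the follow-up thread messages (questions, ticket outcome) mentioned above.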
2. Rebuild the test case suite
We embarked on a mission to meticulously re-document all existing test cases. This resulted in a concise, well-written set of test cases that we sent to the client for review. We asked for their review for two reasons: first, to ensure the clarity of the test cases, allowing the client to provide valuable observations and confirm coverage of critical functionality; second, their input served as a valuable sanity check, helping us identify potential gaps in our testing strategy.
3. Integrate Quality Assurance activities with UX
This strategy introduced a game-changing element: the Quality Assurance team’s direct participation in UX feature presentation meetings. In these meetings, the UX designers present each new feature and discuss the rationale behind the proposed design with the client’s representatives, software engineers and now, also, QA engineers. This proactive approach unlocked several benefits.
The first is the obvious one: early bug detection. By engaging with requirements during the design phase, QA engineers can now identify missing steps in new functionality as well as misconceptions about existing system behavior. The second is uncovering hidden issues: this early involvement also sheds light on bugs that might otherwise be missed due to a lack of understanding of specific design choices and their underlying rationale. The third is proactive test case design: we can start designing test cases early, with time to find the best approaches. Even though this practice is not yet fully implemented, we already see the potential for QA engineers to design test cases even before development estimates are finalized. This would give them a clearer grasp of the scope and empower them to plan their testing approach more effectively. This has been a very rich experience, directly interacting with such a diverse audience, and I will detail it further in a future post.
Stay tuned for this!
4. Migrate test automation from Selenium to Playwright
We needed a faster framework to achieve the goals we had set. Although we haven’t yet automated the full set of test cases we need, the suite already runs as part of an automated pipeline. We set up a smoke testing suite that runs after every deployment to the User Acceptance Testing (UAT) and Production environments. It also runs on every Pull Request (PR) the development team creates, as a check that must pass before the PR is even peer-reviewed. This, along with assigning merge and deployment responsibility to QA, has kept the main branch stable. Now that we have a live Production environment, we can release twice a week, spending only half a day on each release effort.
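The pipeline rules above can be sketched as a small piece of gating logic. This is a hypothetical illustration, not our actual CI configuration: the environment URLs are invented, and the convention of tagging smoke tests with “@smoke” in their titles and filtering them with Playwright’s `--grep` flag is one common approach we assume here.

```typescript
// Sketch of which Playwright smoke run each pipeline trigger kicks off.
type Trigger = "pull_request" | "uat_deploy" | "prod_deploy";

// Hypothetical base URLs: PRs run against a local build, deployments
// run against the environment that was just deployed.
const baseUrls: Record<Trigger, string> = {
  pull_request: "http://localhost:3000",
  uat_deploy: "https://uat.example.com",
  prod_deploy: "https://www.example.com",
};

// Builds the command a CI job would run for a given trigger.
// `--grep @smoke` selects only tests tagged "@smoke" in their titles.
function smokeCommand(trigger: Trigger): string {
  return `BASE_URL=${baseUrls[trigger]} npx playwright test --grep @smoke`;
}
```

Centralizing this mapping keeps the rule visible: the same smoke suite gates PRs before review and verifies UAT and Production right after each deployment, which is what keeps the main branch releasable twice a week.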
Conclusion and next steps
Although we have come a long way, with visible improvements, I believe this is only the beginning. Our goal is a full end-to-end process that lets us release every single ticket to production as soon as the Quality Assurance team clears it, and I’m sure we are on the right track. We will update you on this case in a couple of weeks to let you know how we are doing and what other changes we have applied to our process.
QA Chronicles, Part Two – Bridging QA and UX for Better Outcomes