It is very common for test automation code to receive less attention than feature code because it is not used directly by users/customers. However, test automation is a software product in its own right and ought to be implemented well to ensure good quality and maintainability.
Code review is one of the many effective ways to achieve this as it helps to uncover mistakes overlooked during the development phase, while also improving the developers’ skills.
The test automation code can be reviewed by other test automation engineers who are familiar with the project, instead of or in addition to static code analysis tools. In fact, peer code reviews are highly recommended because they have proven valuable to:
- The team: Reviewing the work of fellow QA engineers fosters a healthy culture by encouraging them to learn how to give and apply constructive criticism and feedback.
- The author: It is an opportunity for the author of the code to learn from others, improve their code and fine-tune their engineering skills.
- The reviewer: There is no single way to solve a problem, especially when writing code, so it’s an opportunity for the reviewer to pick up new things and learn from the author’s mistakes. The reviewer also gains a better knowledge of the business logic of the feature for which the tests are being written.
The role of the code reviewer should be taken seriously to maximise the benefits of the process: give valuable, actionable and constructive feedback.
If you are new to reviewing test automation code and unsure how to go about it, here is a list of things that can point you in the right direction:
First things first: Preliminary Checks
When assigned a Pull Request (a request for a team member’s changes to be merged into the project’s main repository), here are a number of checks that should come first:
- Confirm that all required status checks for the PR have passed. If such checks are not in place, a screenshot showing all tests passing, along with other related info, might suffice. You can also run the tests locally to be sure.
- Look out for any warnings from the code collaboration and version control tool in use. Examples include merge conflicts and the “This branch is out-of-date with the base branch” warning; be sure to call the author’s attention to them.
- Check through the test code for easy-to-spot poor coding practices and errors, e.g. hardcoded test data in test functions, confusing variable names, overly long test functions, forgotten debug lines, etc. You can also chip in short comments on bad code formatting and typographical errors.
- Check for violations of agreed-upon team rules on what tests should contain and how they should be written, for example a missing or mismatched test ID or test description.
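To make these checks concrete, here is a minimal pytest-style sketch (the names, values and conventions are all hypothetical) showing a test with several of the review findings listed above, followed by a cleaned-up version:

```python
# Before: a test with easy-to-spot review findings (hypothetical example).
def test_1():                               # confusing, non-descriptive name
    print("here!!!")                        # forgotten debug line
    total = 19.99 * 3 * 1.2                 # hardcoded, unexplained test data
    assert round(total, 2) == 71.96


# After: named constants, a descriptive test name, and no debug noise.
UNIT_PRICE = 19.99   # price per item
QUANTITY = 3         # items in the order
TAX_FACTOR = 1.2     # 20% tax applied as a multiplier

def expected_order_total(unit_price: float, quantity: int, tax_factor: float) -> float:
    """Compute the expected order total, including tax, rounded to cents."""
    return round(unit_price * quantity * tax_factor, 2)

def test_order_total_includes_tax():
    assert expected_order_total(UNIT_PRICE, QUANTITY, TAX_FACTOR) == 71.96
```

Pointing at a concrete diff like this (“rename test_1, extract the magic numbers, drop the print”) is far more actionable for the author than a general “clean this up” comment.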
Curious Cat: Being Inquisitive
As the reviewer, you should also seek clarity and understanding by asking questions. From experience, asking questions sometimes even helps uncover mistakes, omissions and wrong assumptions made by the code author.
- Be observant, take note of changes made to existing code, and ask the author to clarify why they were made, for example when assertion values are changed.
- Don’t hesitate to ask for an explanation of implementation concepts, and genuinely show interest in understanding not just the how but the why behind the implementation choices. Asking why helps you learn a thing or two about the engineering style and the business logic.
- Ask for clarity whenever something looks odd or doesn’t seem right. If no one asks questions, code review becomes reviewers blindly commenting “LGTM” and clicking the Approve button, which defeats the purpose of the process. Also, know when to ask the author to break a pull request into smaller chunks; large pull requests often lead to shallow reviews.
Deep Dive: Technical Checks
This is where you roll up your sleeves and take a meticulous, closer look.
- While reviewing the code, look for a better, faster or more effective way to get things done, for example moving functions that can be reused by other tests into utils, and looking for ways to optimize and remove repetitive code.
- Check whether the test coverage is adequate. Are the validations and assertions sufficient? Did we cater for edge cases?
- Ascertain that project/business rules and requirement-related checks are correctly applied in the tests.
- Check for false positives, i.e. ensure that each test fails when the expected condition is not met.
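One quick way to screen for false positives is a mutation-style spot check: feed the assertion a deliberately wrong expectation and confirm the test can actually fail. A minimal Python sketch, where the function under test and its values are hypothetical:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():
    assert apply_discount(100.0, 10) == 90.0

def assertion_can_fail() -> bool:
    """Spot check: the assertion must fail for a deliberately wrong expectation.

    If no AssertionError is raised, the test can never fail and is
    therefore a false positive.
    """
    try:
        assert apply_discount(100.0, 10) == 85.0  # deliberately wrong value
    except AssertionError:
        return True   # good: the test is capable of failing
    return False      # bad: the test would pass no matter what
```

The same idea applies at a larger scale as mutation testing, where a tool makes small deliberate changes to the code and checks that the test suite catches them.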
Tips when reviewing test automation code
- Always remember that it is the code being reviewed, not its writer/author.
- You can choose to phrase your comments as questions rather than statements, so the author feels more relaxed and does not take offence. For example: ‘What do you think if we…?’, ‘Did you consider the consequence of…?’, ‘Can you please clarify…?’
- Always chip in a word of encouragement or praise every now and then.
- Be clear and specific when suggesting improvements rather than leaving an ambiguous comment that leaves the author wondering and confused. For example, “This needs to be redone!” is too vague.