Automated Testing and Defects: Back to the Basics

Posted by Dana Edmondson in Automation | March 24, 2010

Test automation is the use of software to conduct testing that requires no operator input. Regardless of how the testing is executed, however, the foundation of testing remains the same. According to Glenford Myers in The Art of Software Testing, "Software Testing is the process of executing a program or system with the intent of finding errors". So we may ask ourselves: why do automated testers fail to deliver test scripts that find defects?

The answer to this question is multifaceted. The first aspect is the mind-set of many automated testers. Automated testers come from many backgrounds, such as former developers and manual testers who sought to increase their visibility in the testing arena. Regardless of background, however, many of these testers develop tunnel vision when creating test automation: they become so focused on coding the tests that they fail to see the big picture. Creating an automated test should be no different from writing a manual test case. The tester should walk through the requirement manually as they code the test, ensuring that the automated script produces the same outcome as a test executed manually. In this way the basics are followed, the expected result is verified, and the ROI is secured. Failing to perform these walkthroughs while coding can have adverse effects. Perhaps there is an outstanding defect and the program does not produce the expected result; a test coded against that behavior simply encodes the defect as the expectation. In any case, a newly created test script written without a proper walkthrough only tests the current release, not the actual requirement at hand, making the script fallible. Automated testers should therefore stick to the basics, resist the urge to code beyond what is currently "on-screen" and manually verified, and retain the mindset of a manual tester, keeping their focus on the requirement rather than falling into tunnel vision.
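As a minimal sketch of the difference (the discount rule, function names, and values below are hypothetical and not taken from the article), consider deriving the expected result from the requirement versus copying it from the current release:

```python
# Hypothetical pytest-style sketch; names and values are illustrative only.

def calculate_total(subtotal):
    """Stand-in for the application under test. Imagine the current
    release has an outstanding defect: it applies 5% instead of 10%."""
    return subtotal * 0.95 if subtotal >= 100 else subtotal


def test_discount_per_requirement():
    # Expected value derived from the requirement (10% off orders of $100
    # or more) and confirmed by a manual walkthrough. This test fails
    # against the defective stand-in above, which is the point: it finds
    # the defect.
    assert calculate_total(100) == 90.0


def test_discount_per_current_release():
    # Anti-pattern: the "expected" value was captured from the release
    # under test. It passes today and silently blesses the defect.
    assert calculate_total(100) == 95.0
```

The requirement-based test catches the outstanding defect; the test coded to the current release passes and hides it.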

The second aspect of the equation is non-reproducible tests. Automated testers must remember that when a defect is submitted, a developer will attempt to reproduce it. If the failure is not reproducible, the defect will simply be disregarded. To make automated tests robust, one must ensure not only that the tests are well maintained and run continuously, but also that a defect can be reproduced before it is submitted. A good rule of thumb is to reproduce the defect three times before submission. If the anomaly cannot be reproduced, a warning should be raised and development notified of the find, but it should not be submitted as a viable defect. A well-crafted automation framework will execute the test or set of tests (the regression), capture and validate data during execution, and provide the ability to return the data to its baseline state before a test is re-run. If tests are non-reproducible, their validations and checkpoints are also compromised. Remember, reproducible automated tests have a large ROI when it comes to re-creating defects: a developer can then see, in real time, the offending code and the execution steps that were taken.
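A rough sketch of that rule of thumb might look like the following; run_test and reset_to_baseline are assumed placeholders for whatever the framework actually provides, not functions from any particular tool:

```python
# Hypothetical "reproduce before you report" helper.

def classify_failure(run_test, reset_to_baseline, attempts=3):
    """Classify a suspected defect by re-running the test from a clean baseline."""
    failures = 0
    for _ in range(attempts):
        reset_to_baseline()   # return data to its baseline state before each re-run
        passed = run_test()   # framework captures and validates data during execution
        if not passed:
            failures += 1
    if failures == attempts:
        return "defect"       # reproducible every time: submit it
    if failures > 0:
        return "warning"      # intermittent: notify development, but do not file as a defect
    return "pass"             # could not reproduce at all
```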

The third way automated tests fail to find defects is improper debugging. In a perfect world, every automated script would run correctly the first time, but since testers are in the business of breaking things, there are days when even the simplest automated tests fail to execute properly. On those days, the most valuable attribute a tester can have is the ability to track down an error within dozens or hundreds of lines of code. Many testers will simply dismiss the failed test, adjust it to suit the new "change", and continue with their day. Others will automatically assume the failure is a defect and submit a defect report. Debugging, in essence, is not only the ability to fix one's own automation code, but also the ability to determine whether a test is failing because of a coding error or because of a change in the application. Once a tester has determined through debugging that the script failure is due to a program change, a few more steps must be taken before the "change" or unexpected behavior can be considered a defect. The first step is to check the system requirement the test is linked to and confirm that no updates to the requirement have occurred. Next, the tester should check with development to make sure a change has not been made that was never communicated to the testing department. If both of these steps confirm that no changes have occurred, then a defect should be entered. Again, all the steps a manual counterpart would take in defect submission must be followed, including making sure the defect can be recreated via the test script and verified manually.
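The first half of that decision, separating a failure in the automation code from a genuine mismatch with the application, can be sketched roughly as follows (the triage function and its categories are illustrative assumptions, not a prescribed API):

```python
# Hypothetical triage sketch: separate test-code errors from genuine
# application failures before treating anything as a defect.

def triage(run_test):
    """Return a rough disposition for a failed automated test."""
    try:
        run_test()
    except AssertionError as failure:
        # The script reached its checkpoint and the application did not match
        # the expected result: follow the manual steps before filing. Confirm
        # the requirement is unchanged, confirm with development that no
        # unannounced change occurred, then reproduce manually.
        return ("possible defect", str(failure))
    except Exception as script_error:
        # The test code itself broke (missing object, timing, bad locator):
        # debug and fix the automation rather than filing a defect.
        return ("script error", str(script_error))
    return ("pass", "")
```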

The last, and perhaps most important, obstacle preventing test scripts from locating defects relates directly to the automation framework itself. The inability to expand an automation framework often hinders the tester's ability to find defects. Many automated testers set themselves the short-term goal of creating a regression suite of positive tests that walk through the application and exercise each scenario and requirement. A few negative tests are often included for good measure; however, the baseline regression never evolves: no new validations, checkpoints, or alternate navigation paths are added to meet the requirement's expectations, and no new scripts are written to test the boundaries of the program. An automated tester must remember that eventually everyone will know exactly how the automation scripts work, and changes are often made to the program that never directly affect that code, making defect detection that much harder, or non-existent. A good automated tester must therefore maintain the current regression tests and build on them, testing different levels and previously untouched areas of the application. These scripts are usually harder to write, and many in the automation world find that defects are not as prevalent beyond the baseline; however, expanding the automation increases coverage and provides a larger framework to manipulate and new goals to achieve.
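As one illustration of how a baseline regression might grow, the following pytest-style sketch adds boundary and negative cases around a purely hypothetical order limit; submit_order and the limits used here are stand-ins, not from the article:

```python
import pytest

def submit_order(quantity):
    """Stand-in for the application behaviour under test."""
    if not 1 <= quantity <= 999:
        raise ValueError("quantity out of range")
    return "accepted"

# Baseline positive case that most regressions start with.
def test_typical_order():
    assert submit_order(10) == "accepted"

# Boundary values added as the framework expands.
@pytest.mark.parametrize("quantity", [1, 999])
def test_order_boundaries(quantity):
    assert submit_order(quantity) == "accepted"

# Negative cases probing just past the boundaries.
@pytest.mark.parametrize("quantity", [0, 1000, -5])
def test_order_rejects_out_of_range(quantity):
    with pytest.raises(ValueError):
        submit_order(quantity)
```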

Defect detection in automated script creation and execution is notably harder than manual defect detection. Automated testers must rely not only on the software that executes the tests, but also on their manual testing skills. More importantly, automated testers must know their testing audience to ensure that tests are not formulated and designed to always "pass" because others know their code. Defect detection is the unassuming art of automated testers who not only know how to manipulate software to test a program, but also never forget the basics of quality assurance.


Dana Edmondson is a certified technical lead for DeRisk IT Inc. She specializes in automated testing and project management and has worked on numerous platforms in both manual and automated testing. Her specialties include automated testing strategies, and she has extensive experience with SmartBear's TestComplete.

Since its founding in 1998, the primary role of DeRisk IT Inc. has been to help corporate organizations forecast and plan for the most efficient IT projects with respect to risk avoidance and to implement appropriate testing solutions to achieve this. DeRisk IT Inc. specializes in risk analysis at the corporate level; at the project level, this translates into the proactive use of testing tools and methodologies to ensure on-time completion to the right specification. Our main focus areas are Functional, Performance and System Integration testing using both manual and appropriate automation techniques. DeRisk IT Inc. has established a position at the forefront of application testing, providing unique testing solutions such as performance, compatibility, security, usability and monitoring. DeRisk IT Inc. provides a full portfolio of services for all testing needs.