Why Test Automation Fails
· Unrealistic Expectations
Whenever a tool or product fails to perform as expected, it is natural to blame the tool. However, we should first check whether our expectations of the tool were accurate. Here are some typical unrealistic expectations heaped on automated testing tools:
- Now that we have invested in this tool, we will get immediate ROI.
- We have purchased a leading pre-scripted testing tool that will do everything at the click of a button.
- With this software, we can automate the entire process.
It is very important to understand that implementing an automated testing tool is another software project in its own right, and it requires a great deal of planning, thought, and experimentation to make it work across various testing environments. You cannot run automated scripts without knowledge of software coding, so it is extremely important to let your team master and perfect their coding skills before relying on these scripts.
· “One size fits all” mindset
Another major reason for test automation failure is the assumption that one approach suits every situation. Test automation is not a “one size fits all” operation; it must be updated continually to address changing parameters. An eye for detail and patience are the two traits required to make test automation work as you expect, and it demands continuous improvement.
· No understanding of the manual testing process
Automated testing is often perceived as a magic bullet that will work even without an understanding of manual testing. In reality, automated testing is a continuous extension of manual testing. If you don’t know how the tool will fit into the grand scheme of testing, you cannot automate the testing, believes Mike Kelly, a leading software expert with a Fortune 100 company.
· Automated Testing is Easy and Doesn’t Require Inputs
The key misconception about automated testing is that it is extremely easy and doesn’t require any input. You cannot simply automate an existing test process; instead, you have to rethink and reconsider the whole approach. Which tests should stay manual? Which tests should be automated? Making this distinction is what allows you to reap the benefits of automated testing.
Automate tests when:
§ Business-critical paths – features or user flows that, if they fail, cause considerable damage to the business.
§ Tests that need to be run against every build/release of the application, such as smoke, sanity, and regression tests.
§ Tests that need to run against multiple configurations – different OS and browser combinations.
§ Tests that execute the same workflow but use different data as inputs for each test run, i.e. data-driven tests (see the sketch after this list).
§ Tests that involve inputting large volumes of data, such as filling in very long forms.
§ Tests that can be used for performance testing, like stress and load tests.
§ Tests that take a long time to perform and may need to be run during breaks or overnight.
§ Tests during which images must be captured to prove that the application behaved as expected, or to check that a multitude of web pages look the same on multiple browsers.
The more repetitive the test run, the better it is suited for automation.
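To make the data-driven case in the list above concrete, here is a minimal sketch using pytest’s parametrize marker. The validate_email function and its test data are hypothetical placeholders, not part of the original article; the point is that one test body runs once per data row.

```python
# Minimal data-driven test sketch using pytest.
# validate_email and its cases are hypothetical examples.
import re

import pytest


def validate_email(address: str) -> bool:
    """Toy validator standing in for real application logic."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}", address) is not None


# One test function, many input/expected pairs: the same workflow
# is executed once per row, which is the data-driven pattern.
@pytest.mark.parametrize(
    "address, expected",
    [
        ("user@example.com", True),
        ("user@localhost", False),  # no top-level domain
        ("not-an-email", False),
        ("a@b.co", True),
    ],
)
def test_validate_email(address, expected):
    assert validate_email(address) is expected
```

Running `pytest` on this file reports four separate test results, so a failure in one data row does not hide the others.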
Do not automate tests when:
§ Tests that you will run only once. The only exception to this rule is a test that must be executed with a very large set of data; even if it runs only once, it makes sense to automate it.
§ User experience tests for usability (tests that require a user to judge how easy the app is to use).
§ Tests that need to be run ASAP. A newly developed feature usually needs quick feedback, so it is faster to test it manually at first.
§ Tests that require ad hoc/random testing based on domain knowledge or expertise – exploratory testing.
§ Intermittent tests. Tests without predictable results cause more noise than value. To get the best value out of automation, tests must produce predictable and reliable results that yield clear pass and fail conditions.
§ Tests that require visual confirmation. However, we can capture page images during automated testing and then have a human check the images (see the sketch after this list).
§ Tests that cannot be 100% automated should not be automated at all, unless doing so saves a considerable amount of time.
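As a sketch of the screenshot-for-manual-review idea mentioned above, the following uses Selenium WebDriver for Python. The page list and output directory are hypothetical, and any supported browser driver could stand in for Chrome.

```python
# Sketch: capture page screenshots during an automated run so a
# human can review them later. The URLs and output directory are
# hypothetical examples.
from pathlib import Path

from selenium import webdriver

PAGES = [
    ("home", "https://example.com/"),
    ("login", "https://example.com/login"),
]

out_dir = Path("screenshots")
out_dir.mkdir(exist_ok=True)

driver = webdriver.Chrome()  # any supported browser driver works
try:
    for name, url in PAGES:
        driver.get(url)
        # save_screenshot writes a PNG of the current viewport
        driver.save_screenshot(str(out_dir / f"{name}.png"))
finally:
    driver.quit()
```

The automated run stays fast and unattended, while the visual judgment that cannot be automated is deferred to a person reviewing the saved images.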