Achieve Quality Code and ROI through Test Automation

19 Jul 2016

 

Software development companies are under constant pressure to deliver new applications that will satisfy every user’s needs across all devices and locations, preferably at high speed and low cost. Automated testing has become a way to ensure applications are error-free, cost-efficient, and quickly delivered.

Automated testing takes place on several levels of the project:

  1. Unit testing: Usually performed by developers, these tests verify that individual units of the app meet their functional requirements. For example, a unit test could cover a single OOP class or function (a minimal sketch follows this list).
  2. Component testing: Usually performed by developers, the “component” under test is a cohesive piece of tightly coupled logic. For example, authorization, payment processing, and order submission could all be components.
  3. Integration testing: Usually performed by developers and QA, the goal of these tests is to make sure the app’s components all interact properly.
  4. System testing: Performed by QA, these tests check that the application and the system it runs on are compatible.
  5. Acceptance testing: Performed by QA, these tests ensure that the specification requirements are met.
  6. Alpha testing: Performed by development and user experience teams, these tests are done at the end of the development process.
  7. Beta testing: Performed by user experience teams, these tests are done just before the product is launched.
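To make level 1 concrete, here is a minimal unit-test sketch in Python using the standard unittest module; the ShoppingCart class is hypothetical and exists only to show the shape of such a test.

    # Minimal unit-test sketch (hypothetical ShoppingCart class, for illustration only).
    import unittest

    class ShoppingCart:
        """Toy class under test: keeps a running total of item prices."""
        def __init__(self):
            self._items = []

        def add_item(self, price):
            if price < 0:
                raise ValueError("price must be non-negative")
            self._items.append(price)

        def total(self):
            return sum(self._items)

    class ShoppingCartTest(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add_item(10.0)
            cart.add_item(2.5)
            self.assertEqual(cart.total(), 12.5)

        def test_negative_price_is_rejected(self):
            cart = ShoppingCart()
            with self.assertRaises(ValueError):
                cart.add_item(-1)

    if __name__ == "__main__":
        unittest.main()

A component or integration test would look similar in form but exercise a larger slice of the application, typically together with real collaborators such as a database or an external service.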

Levels 4-7 are often associated with functional GUI testing. With each successive level, execution time increases, as do creation and support efforts and the possibility of false negatives (sporadic failures). Coverage and business relevance also increase with every consecutive level.

 

The Benefits of Test Automation

Shorter project life cycle

According to the 2016 State of DevOps Report, high-performing IT organizations deploy 200 times more frequently, with a three times lower change failure rate, than low-performing organizations.1

High IT performance increases productivity and profit. The report counts test automation as one of the top practices that significantly speed up delivery time.

Less manual testing

QA spends most of its time on manual regression testing, which ensures that the existing functionality continues after code has been changed, added, or removed. As the project grows, so does the time to manually complete these tests. This is where automated testing can really help keep the process on schedule. Because in most cases the existing functionality does not change between releases, a significant part of regression testing can be automated. The savings in QA time directly correlates to the number of automated test cases and test executions performed.

Faster expanding test suite

Well-designed architecture and automation framework documentation are key to expanding the test suite quickly. Keeping the test helper code DRY can reduce the effort of creating new automated tests by 40%-60%. Autotests also provide detailed error reporting, which eliminates the time-consuming task of manually collecting data for this purpose.
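To make the DRY point concrete, here is a hedged Python sketch of a shared test helper; the API base URL, the create_test_order helper, and the field names are hypothetical, invented only to illustrate reuse across tests.

    # Shared helper kept in one place and reused by many tests, so a change to the
    # order-creation endpoint is made once rather than in every test.
    import requests

    BASE_URL = "https://app.example.test/api"  # assumed, for illustration only

    def create_test_order(item_id, quantity=1):
        """Create an order via the (hypothetical) REST API and return its id."""
        response = requests.post(
            BASE_URL + "/orders",
            json={"item_id": item_id, "quantity": quantity},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["id"]

    def test_order_can_be_cancelled():
        order_id = create_test_order(item_id=42)
        response = requests.delete(BASE_URL + "/orders/" + str(order_id), timeout=10)
        assert response.status_code == 204

    def test_order_appears_in_history():
        order_id = create_test_order(item_id=42)
        response = requests.get(BASE_URL + "/orders/" + str(order_id), timeout=10)
        assert response.status_code == 200

Because every test calls the same helper, a change in how orders are created is absorbed in one place instead of in dozens of individual tests.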

Fewer mistakes during testing

In large test environments, spanning various applications, platforms, and configurations, human error can easily creep into manual test execution. Testers can make mistakes and/or accidentally overlook part of a test. Automated tests, however, tirelessly and consistently follow predefined instructions.

High test coverage, fewer defects

High test coverage leads to high defect detection in the existing functionality. Developers can continue to move forward with the project without having to reexamine and manually retest specific code and/or functionality. Autotests alert developers of defects, saving them significant time and effort.

Repeatability

Test automation allows 100% repeatability without the risk of human error during each test execution. Once the test script has been written and put in place, it can be executed any number of times and can check functionality over and over again.

Minimal maintenance

A properly designed automation framework will require minimal maintenance when the application changes. If the functionality an autotest covers changes, the autotest will start failing; the developer then needs to check whether the failure lies in the application code or in the test itself and update accordingly.

Increased non-functional testing

This type of testing checks the way the system operates, rather than its specific behaviors. The following categories of non-functional testing can all be automated to a certain extent:

Performance testing checks speed, stability, and responsiveness. This category contains several sub-categories:

  • Load testing determines how the system behaves under its expected load (a minimal sketch follows this list).
  • Stress testing determines the system’s maximum load.
  • Spike testing determines how the system will behave if its load spikes or drops.
  • Soak testing detects memory leaks and degradation in response time during prolonged use.
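As a rough illustration of the load-testing idea, here is a hedged Python sketch that fires concurrent requests at a hypothetical endpoint and reports response times; the URL and user counts are assumptions made purely for the example.

    # Minimal load-test sketch: N simulated users hitting one (hypothetical) endpoint.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://app.example.test/api/health"  # assumed endpoint, for illustration only
    USERS = 20               # simulated concurrent users
    REQUESTS_PER_USER = 10

    def one_user(_):
        """Each simulated user makes a series of requests and records latencies."""
        latencies = []
        for _request in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            response = requests.get(URL, timeout=10)
            response.raise_for_status()
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        all_latencies = [t for per_user in pool.map(one_user, range(USERS)) for t in per_user]

    print("requests:", len(all_latencies))
    print("median latency: %.3fs" % statistics.median(all_latencies))
    print("max latency:    %.3fs" % max(all_latencies))

On a real project a dedicated tool such as JMeter, Gatling, or Locust would normally be used instead, but the principle is the same: replay load programmatically and assert on response times.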

Recovery testing determines how quickly the system can be recovered after hardware failures, application crashes, or similar problems occur.

Scalability testing evaluates how well the system copes with growth, for example by measuring how performance changes as the number of users, the transaction rate, or the volume of data increases.

Various profiling tools can be added to analyze how the code performs. By incorporating these tools into the application while running automated tests, performance and scalability testing can be emulated. Recovery testing can be performed by monitoring the system after automated test failures and/or alterations in code.
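As one possible illustration, the standard-library cProfile module in Python can collect timing data while an automated test exercises the application; the process_large_order function below is a hypothetical stand-in for real application code.

    # Hedged sketch: profile a (hypothetical) workload while it runs under an automated test.
    import cProfile
    import pstats

    def process_large_order():
        # Stand-in for real application code driven by the test.
        return sum(i * i for i in range(1000000))

    profiler = cProfile.Profile()
    profiler.enable()
    process_large_order()    # in practice, the automated test would drive the app here
    profiler.disable()

    # Print the ten most expensive calls, sorted by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

The same approach works with commercial profilers; the point is that the profiler runs alongside the automated tests rather than in a separate manual session.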

Faster deployment

The persistent goal of most businesses is to reduce the time between development and deployment, and the bottleneck often occurs during the quality assurance phase. Test automation provides quick and accurate reports, which enables faster decision-making about product quality across multiple layers of the architecture.

Reports usually summarize the status of the project after the autotests were executed. They note if all tests were passed, if any failures occurred, and if coverage tools were added. They also provide information on the application configuration. In cases of failure, more information can be found in a detailed report, i.e. which tests failed, what result was expected and what was received, and on which line of a test script a failure occurred. Application log information may also be available.

Challenges

 

Technical complexity

The technology and platform stack of autotests is not limited to traditional web and desktop portfolios. It also includes mobile devices (Android and iOS), multiple browsers (Firefox, Safari, Chrome), and platforms (Windows, macOS, Linux). Only a skilled professional can manage such a large stack of technologies.

Automating each specific technology requires a different approach. For example, Selenium is often used for GUI testing in web browsers, but it does not work with native mobile applications for iOS or Android. Another example is the currently popular single-page JavaScript applications: their presentation and part of their logic run on the client side and communicate with the server through asynchronous requests, so the test developer must employ specialized techniques, such as explicit waits, for asynchronous actions to complete.
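For instance, with Selenium’s Python bindings an explicit wait can hold the test until an asynchronously rendered element appears; the URL and element locators below are hypothetical.

    # Hedged sketch: explicit wait for an element rendered by an asynchronous request.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.test/orders")   # assumed single-page app URL
        # Block for up to 10 seconds until the order list has been fetched and rendered.
        WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "order-list"))
        )
        rows = driver.find_elements(By.CSS_SELECTOR, "#order-list .order-row")
        assert rows, "expected at least one order row after the asynchronous load"
    finally:
        driver.quit()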

Infrastructure, licensing and training costs

The test environment needs to be set up properly, and certain technical skills need to be learned or improved upon. This involves the use of automation tools, many of which require licensing and training.

For example, an HP Unified Functional Testing license costs $3,500 per license, per year.2 A license for the Arena quality management system costs about $90 per month, per user, or approximately $1,080 per user annually.3

Purchasing licenses for all users is only the start. These tools will also need to be customized for particular applications. Users will have to be trained on how to configure these tools, which is an additional cost. Moreover, migrating to another product later will most likely be a significant expense.

Determining when to implement autotests

The cost of fixing an error grows with how late the error is found. To reduce this cost, both manual and automated testing need to be introduced as early as possible in the software development life cycle.

Very often, important parts of this cycle, such as quality assurance, are not a high priority at the start of development. This leads to potentially serious issues later in development: the deceptive speed-up at the beginning is paid for by a larger slowdown later in the project. For example, writing code without unit tests accelerates the process at first, but making changes to such code later most often increases the time needed to ensure the code still works as expected.

Solutions

 

Open-source software

To reduce commercial tool licensing and infrastructure costs, testers may be able to use open-source software and platforms.

  • Selenium is a free, open-source, widely-known web browser automation tool. It can be used for browser end-to-end testing.
  • Jenkins is a popular CI server with many plug-ins.
  • Appium is a popular open-source automation framework for native mobile applications.

De-skilling

Professional development teams do not always need to on-board dedicated automation QA specialists. At Sphere, software engineers specialize in framework development and domain-specific methods, so adding new automation scenarios, i.e. setting up test environments for a specific product, is a relatively trivial task. We also provide training in best practices, which further eases the testing process, even for manual testers.

Return on Investment

Based on the benefits and challenges outlined above, test automation should be seen as an overall quality and productivity improvement initiative, rather than merely a cost-saving exercise.  However, there is no denying that implementing test automation can significantly and positively impact a company’s ROI.

By reducing manual testing and increasing automated testing, a company’s investment can generally be recouped in one year, despite the cost of necessary hardware.

To illustrate, let’s say six manual testers are involved in a project before automation.  To begin the automated testing process, two automation engineers join the team. As parts of the test suite become automated, fewer manual testers will be needed for the project. Once the automated testing suite is fully implemented, the team could be reduced from six manual and two automation engineers to one manual and one automation engineer, resulting in significant monetary savings.
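In round numbers, and assuming for illustration that manual testers and automation engineers carry roughly comparable fully loaded costs, the testing team grows from six engineers to eight during the transition and then shrinks to two once the suite is in place. The ongoing testing payroll ends up at roughly a third of the original, and the temporary cost of the two additional engineers during the transition is what the first year of savings has to recoup.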

The time required to complete the automation phase depends on how far into the project the automation was implemented (the earlier the automation is begun, the shorter the automation phase), project mutability (the more changes in functionality, the longer the automation process), and project size. Once fully implemented, automated testing requires much less time than manual testing, which can significantly shorten the software development cycle.

Summary and Recommendations

Test automation is a very powerful tool to speed up the delivery of quality software, but it can also result in a bottleneck if implemented incorrectly. One of the key factors here is experience. Sphere’s software testing engineers specialize in developing and supporting automation frameworks for various types of applications, including web browser and mobile applications.

Here are some final recommendations to achieve optimal benefits from automated testing:

  • Cover code with unit tests: Good unit test coverage will detect about half the issues during the development phase. Sphere does not consider a story to be “code complete” without solid unit test coverage.
  • Start manual and automated testing as soon as possible: Bugs detected early are much cheaper to fix than those discovered later in production or deployment.
  • Decide which tests to automate carefully: It is impossible to automate all tests. Those that will be executed frequently and that require the manipulation of large amounts of data, like regression tests, are the best tests to automate.
  • Divide your testing efforts wisely: Creating automated tests requires solid knowledge of a scripting language. Delegate this task to the appropriate engineers, while assigning others to write test cases.
  • Limit the number of UI tests: Automated tests that drive the UI often start failing after changes are introduced to the user interface. Keep the number of UI tests as low as possible and make them resistant to change, for example by centralizing locators in page objects (see the sketch after this list).
  • Hire a consulting company to build an integration testing framework and infrastructure: Experienced QA automation engineers from a consulting company can build the initial integration test suite and train your QAs on automation test creation and management.
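As a hedged illustration of the page-object idea mentioned above, here is a minimal Python sketch with Selenium; the page URL, locators, and class names are hypothetical.

    # Minimal page-object sketch: all locators for the login page live in one class,
    # so a UI change is absorbed here instead of in every test that logs in.
    from selenium.webdriver.common.by import By

    class LoginPage:
        URL = "https://app.example.test/login"          # assumed, for illustration only
        USERNAME = (By.NAME, "username")
        PASSWORD = (By.NAME, "password")
        SUBMIT = (By.CSS_SELECTOR, "button[type=submit]")

        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get(self.URL)
            return self

        def log_in(self, username, password):
            self.driver.find_element(*self.USERNAME).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # A test now reads in terms of user actions rather than raw locators.
    def test_login_shows_dashboard(driver):
        LoginPage(driver).open().log_in("user", "secret")
        assert "Dashboard" in driver.title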

 


1 https://puppet.com/resources/white-paper/2016-state-of-devops-report