Automated Software Testing Tools

Author Name

Dept. Affiliation, School/Corp.

City, State, Country

[email protected]

Overview

Complex software systems are currently being developed; however, testing throughout the software development cycle is pertinent to the overall success of the software. Automated software testing is regarded as the best means of executing repetitive test cases using software tools that have the capacity to control test executions. It is a fundamentally important aspect of software engineering, yet it is an easily forgotten practice, particularly in the current fast-paced Web application development culture. To enable the comparison of automated software testing tools, this research paper developed a multi-partitioned metric suite to guide the tool evaluation process. The automated testing comprised the development of scripts, which not only save time and resources when applications are updated but also speed up the testing process, particularly when regression testing is necessary. The metric suite facilitated the comparison and selection of the desired testing tools for automated testing. The tools compared here were Ranorex, Rational Functional Tester (RFT), and Janova.

Test automation is essentially the replacement of repetitive manual tests with systematic programs built using automation tools (Qu, Cohen and Rothermel 2008). It is a series of software programs essential for validating test outputs against specified test conditions. In simple terms, automated software testing is regarded as the best means of executing repetitive test cases using software tools that have the capacity to control test executions (Antoniol, Di Penta and Harman, 2011). Automated testing has been found to shorten development cycles, eliminate cumbersome repetitive tasks and, more importantly, improve software quality. The success of the test automation process rests wholly on identifying automation tools rightly and appropriately. Software testing in software development is demanding and thus requires persistence and patience (Tappenden and Miller, 2009). The process is mainly aimed at assessing and determining software quality by exercising the software with applicable test cases to verify whether the proposed software requirements are being met.
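As a minimal illustration of executing a repetitive check and validating the output against a specified condition, consider the JUnit sketch below; `PriceCalculator` and `applyDiscount` are hypothetical stand-ins for an application under test, not anything from this study.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test: applies a percentage discount to a price.
class PriceCalculator {
    double applyDiscount(double price, double percent) {
        return price - (price * percent / 100.0);
    }
}

public class PriceCalculatorTest {
    // An automated test: the expected value acts as the specified test
    // condition, and the assertion validates the actual output against it.
    @Test
    public void tenPercentDiscountOnHundredYieldsNinety() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.applyDiscount(100.0, 10.0), 0.0001);
    }
}
```

Once such a test exists, a build tool can rerun it on every change, which is exactly the repetitive execution that manual testing would otherwise absorb.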

Automated testing is a fundamentally important aspect of software engineering; however, it is an easily forgotten practice, particularly due to the current fast-paced Web application development culture. Thoroughly and efficiently testing software applications is vitally important for any software development company, as it helps in retaining existing customers as well as attracting new ones (Qu, Cohen and Rothermel 2008). Software testing defines the reliability of the application, and some experts have established that approximately fifty percent of a software development budget is spent on testing. It is a labor intensive and expensive process, so reducing human testing is highly recommended. Automated software testing is virtually essential given the fact that errors are inadvertently introduced into the software during design and development (Papadakis and Malevris, 2010). Due to increased demands, software applications have become more sophisticated and complex, with ever more lines of code, which in turn require thorough and effective testing.

In the event that automated software testing is not completed effectively and thoroughly as required, the consequences can be devastating and alarming to the firm: the company will be prone to financial costs, particularly if users find bugs after the release of updates. Software reliability is seriously threatened each time bugs are released in the application. Given this understanding, software testing is an unavoidable part of any responsible effort to develop a software system. For any software development project, the System Development Life Cycle (SDLC) comprises project planning, analysis, design, implementation, testing and support (Hemmati, Arcuri and Briand 2010). Current SDLC approaches encourage developers to take an iterative approach to software testing, as compared to the traditional linear approach. In this regard, software testing is an ongoing activity that occurs throughout the project's life. Accordingly, it should be noted that both systems testing and integration testing are done prior to software deployment.

Another important aspect of software testing is that software is not only pervasive but also integrated within multiple systems (Antoniol, Di Penta and Harman, 2011). In the medical field, for example, notwithstanding technological advancements, medication errors have continually caused harm to patients, and thousands succumb to death annually. With this understanding, a hospital can integrate two or more standalone systems to improve medication administration, which in turn helps reduce medication errors, simplifies nursing workflows, and assists pharmacists in checking infusion rates for intravenous medications. From this perspective, testing standalone systems is important, but testing integrated systems is even more so (Antoniol, Di Penta and Harman, 2011).

Software testing is a repetitive and labor intensive process, and thus it is essential to find an appropriate automated software testing tool (Pasareanu et al., 2008). Currently, there are various software testing tools available, and hence one can assume that the quality, testability, maintainability, and stability of software are improved by using such tools (Zhang, Finkelstein and Harman, 2008). Software testing tools help software developers increase software quality through automation of the mechanical aspects of the testing process. However, in the event that modifications are made to applications and there are no testing processes in place to perform automatic testing, the process is prone to consuming a great deal of time and resources (McMinn et al., 2012). Using automatic scripts has been found to save time and resources. Developing test scripts that are readable, reliable and maintainable is vitally important but extremely challenging, since they have to stay in perfect harmony with the applications they are to test. After such scripts have been developed, together with associated test cases, they can be used repeatedly and thus save on both time and resources (Qu, Cohen and Rothermel 2008). There are numerous software testing tools, and thus it is difficult to determine the best automated software testing tool to use in order to achieve the objective of efficiently and effectively testing software.
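To make the reuse point concrete, the following JUnit 4 parameterized test is a sketch of a script that, once written, is replayed unchanged across regression runs; `ShippingRules` and its cases are hypothetical examples, not part of this study's application under test.

```java
import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Hypothetical subject: classifies an order total into a shipping band.
class ShippingRules {
    static String band(double total) {
        return total >= 50.0 ? "FREE" : "STANDARD";
    }
}

@RunWith(Parameterized.class)
public class ShippingRulesRegressionTest {
    @Parameters
    public static Collection<Object[]> cases() {
        // Each row is one regression case: input total and expected band.
        return Arrays.asList(new Object[][] {
            { 49.99, "STANDARD" },
            { 50.00, "FREE" },
            { 120.0, "FREE" },
        });
    }

    private final double total;
    private final String expected;

    public ShippingRulesRegressionTest(double total, String expected) {
        this.total = total;
        this.expected = expected;
    }

    // The same script is replayed for every row after each application update.
    @Test
    public void bandMatchesExpectation() {
        assertEquals(expected, ShippingRules.band(total));
    }
}
```

New regression cases are added as data rows rather than new code, which is one way such scripts stay maintainable as the application evolves.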

BACKGROUND/RELATED WORK

Importance of Software testing and Quality Assurance

Different authors in software testing and quality assurance indicate that the importance of software testing is massive, and thus the process cannot be neglected at any cost. Software testing reveals concealed software defects and hence helps in minimizing the risk related to residual defects. Successful systems software development largely depends on quality assurance (QA), which provides adequate assurance that the software, together with the associated processes in the product life cycle, is in line with the specified requirements (Antoniol, Di Penta and Harman, 2011). Automating software testing is a crucial process, and it has been found that software testing is not keeping pace with the software code currently being written. For instance, the number of lines of code per day per programmer is relatively fixed; however, the power of each code line has increased, thus allowing the development of more complex systems.

In software engineering, software testing is an essential activity (Calvagna and Gargantini, 2009). A roadmap has been laid out identifying destinations as a set of four ultimate but unachievable goals. However, this roadmap has challenges, including how much testing should be done (Qu, Cohen and Rothermel 2008). Accordingly, the human factor is a critically important resource for software testing. For instance, the software testers' commitment, skill, and motivation can significantly affect the success of the test process.

Software testing Strategies

Black box testing (specification-based/function testing).

Black box testing mainly focuses on the input/output behavior and/or functionality of a component. In black box testing, no knowledge of the program's implementation is assumed (Harman and McMinn, 2010). The functionality of the application is the main focus. In the black box strategy there are two issues of complexity that must be considered: the number of different execution states the application can go through, and software concurrency (Veanes, de Halleux, Tillmann, 2010).
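A minimal sketch of the black box strategy: the test below is derived from a stated specification alone and exercises only inputs and outputs. `CalendarRules.isLeapYear` is a hypothetical subject whose body a black box tester would never inspect; it is included here only so the example compiles.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class LeapYearBlackBoxTest {
    // Black box testing: each case comes from the specification
    // ("divisible by 4, except centuries not divisible by 400");
    // the implementation is treated as opaque.
    @Test
    public void specificationDrivenCases() {
        assertTrue(CalendarRules.isLeapYear(2004));   // divisible by 4
        assertFalse(CalendarRules.isLeapYear(1900));  // century, not divisible by 400
        assertTrue(CalendarRules.isLeapYear(2000));   // century, divisible by 400
        assertFalse(CalendarRules.isLeapYear(2003));  // not divisible by 4
    }
}

// Hypothetical implementation, shown only so the example is self-contained;
// in true black box testing this body would be unknown to the tester.
class CalendarRules {
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}
```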

Implementation-based (white box/code based testing)

Code-based testing, or white box testing, is used to generate a test suite based on the application's source code. The focus here is testing the code behind the software to determine whether the software requirements have been met (Antoniol, Di Penta and Harman, 2011). An automated software testing tool can be created by developing test cases for a specific three-variable function. This strategy allows for the creation of test cases, followed by test suites that help execute multiple test cases at once (Nie and Leung, 2011).
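The paper does not specify the three-variable function; the classic triangle classification problem is a common stand-in, and the white box sketch below derives one test case per branch of the source code so that every branch is exercised.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Three-variable function: classifies a triangle from its side lengths.
class Triangle {
    static String classify(int a, int b, int c) {
        if (a <= 0 || b <= 0 || c <= 0) return "invalid";
        if (a + b <= c || a + c <= b || b + c <= a) return "invalid";
        if (a == b && b == c) return "equilateral";
        if (a == b || b == c || a == c) return "isosceles";
        return "scalene";
    }
}

public class TriangleWhiteBoxTest {
    // White box testing: each test targets a distinct branch in the
    // source code above, so together the suite covers every branch.
    @Test public void nonPositiveSide()    { assertEquals("invalid", Triangle.classify(0, 2, 3)); }
    @Test public void inequalityViolated() { assertEquals("invalid", Triangle.classify(1, 2, 9)); }
    @Test public void allSidesEqual()      { assertEquals("equilateral", Triangle.classify(3, 3, 3)); }
    @Test public void twoSidesEqual()      { assertEquals("isosceles", Triangle.classify(3, 3, 5)); }
    @Test public void noSidesEqual()       { assertEquals("scalene", Triangle.classify(3, 4, 5)); }
}
```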

Other software testing strategies include object oriented testing techniques such as fault-based testing and scenario-based testing (Nguyen et al., 2009).

Automated Software Testing

This process has been found to speed up software testing. It is essential in ensuring the reliability of the software, particularly when an update is made. A software testing workbench, a set of integrated tools that support the process of software testing, is used here (Zhang, Elbaum, Dwyer, 2011). Accordingly, redundancy detection testing is also used to determine the reliability of the software; it is a good way of reducing test maintenance costs as well as ensuring test suite integrity (Majumdar and Saha 2009). After test cases have been written appropriately, it is important to update them whenever changes are made to the application. Test suite steps can be exhaustively automated using existing tools such as JUnit and MuJava to generate compatible test cases.
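As a minimal JUnit 4 sketch of suite-level automation, the class below bundles the hypothetical test classes from the earlier examples so that all their test cases run in one pass; it assumes those classes are on the classpath.

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A JUnit 4 suite: running this one class executes every test case in the
// listed test classes, automating repeated full-suite (regression) runs.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    LeapYearBlackBoxTest.class,
    TriangleWhiteBoxTest.class,
    ShippingRulesRegressionTest.class
})
public class AllTestsSuite {
    // Intentionally empty: the annotations above do all the work.
}
```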

Software Testing Tools

  1. Ranorex: A comprehensive and cost-effective automated testing tool built around standard programming techniques and common languages like C# and VB.NET. It does not require scripting (Fraser and Zeller 2011).

  2. Rational Functional Tester (RFT): Developed by IBM in 1999, this is an object oriented automated testing tool. It has regression and functional testing capabilities that capture the outcomes of black box tests in a script format. It is particularly used with Java, Microsoft Visual Studio .NET, Web-based, terminal-based, Web 2.0, and Siebel applications (Qu, Cohen and Rothermel 2008).

  3. Janova: Like the tools discussed above, this tool enables the user to automate software testing solutions; however, it is mainly used in cloud computing. It does not require scripts to be written; instead, English-based tests with the capacity to streamline software implementation efficiently are used.

RESEARCH APPROACHES

To establish the best automated software testing tools, the research methods used included: identifying a set of automated software testing tools to be evaluated, developing a metric suite to evaluate the tools, selecting the target application to be tested, manually testing it and recording results, performing a feature analysis of each tool and aggregating an ideal feature set, testing the target application using each selected tool and gathering data, and interpreting the results.

Automated software testing tools selected

RFT, Ranorex, and Janova were selected in the comparison for standalone application testing. Other test tools initially considered included SilkTest, Panorama, and QuickTest; however, due to the complexity of their setup and initialization, and because of their cumbersome installation instructions, they were left out.

RFT was chosen because it is among the most widely used tools (Antoniol, Di Penta and Harman, 2011). The use of automatic testing tools was essential for walking through end user steps in the application as well as comparing features between the tools. Janova was selected because there is no need to download any software or buy any equipment (Qu, Cohen and Rothermel 2008). It is a cloud based software testing tool and thus needed only an internet connection; the tests were created and queued in the tool. Ranorex was chosen because it is widely used with web based applications (Saxena et al., 2009).

Evaluation Metrics

Metrics are important mainly because they make it possible to compare different tools and efficiently select the relevant and appropriate software testing tool based on the testing needs at a specific time (Antoniol, Di Penta and Harman, 2011). The tools were compared based on features, usability, debugging help, automated progress, support for the testing process, and requirements. The tables below describe how these comparison criteria were used (Qu, Cohen and Rothermel 2008).

Table 1: Definition of the Tool Features Metric

| Feature | Definition |
| --- | --- |
| Install required | Does the tool require installation before it can be used? |
| Cloud based | Is the feature provided by the software cloud based, or is an installation required? |
| Knowledge of scripts required | Are tests done strictly based on lines of code? |
| List of features | The actual features provided by the software, such as debugging support |

Table 2: Definition of the Tool Usability Metric

| Attribute | Definition |
| --- | --- |
| Ease of installation | Documentation of how easy or difficult it is to install the software |
| User friendly interface | Display of how easy or difficult it is to use the software, as well as how easy the tool's features are to understand |
| Helpfulness of error messages | The ease or difficulty of understanding error messages received while using the software |
| Tutorials on how to use | Is it difficult or easy to get help when there is a problem with using the software? |
| Terminology | Determination of whether or not the terminology used in the application is easily understandable |

Table 3: Definition of the Debugging Help Metric

| Attribute | Definition |
| --- | --- |
| Ease of getting help when an error is encountered | Determines how easily and quickly help can be obtained from software support to resolve the issue |
| Helpfulness of where to get help on the website | Determines the convenience of finding help on the company's website (Zhou, Okamura and Dohi 2013) |
| Ease of using the tool while recording scripts | Description of whether or not the software is buggy while recording scripts |
| Documentation of error messages | Determination of how well documented and easy to find the solutions are to errors received when using the application |

Table 4: Definition of the Automated Progress Metric

| Attribute | Definition |
| --- | --- |
| Automated tools availability | Determines whether or not the software has easy-to-understand automated tools |
| Automated progress features | Describes whether or not the automated progress of tests is easily documented |
| Helpfulness in creating test cases | Determines how easy it is to start developing test cases |

Table 5: Definition of the Support for the Testing Process Metric

| Attribute | Definition |
| --- | --- |
| Ability to compare test results with an oracle | Determines whether the tool provides a mechanism for automatically comparing the results of the test against the expected results |
| Ability to document test cases | Determines whether the tool has the ability to provide documentation of test cases |
| Ability to perform regression testing | Determines whether or not the tool provides the ability for automated regression testing |

Table 6: Definition of the Requirements Metric

| Attribute | Definition |
| --- | --- |
| Programming language | Determines what programming languages the application will work with (Mohd 2010) |
| Commercial licensing | Determines what licensing options are available and at what cost |
| Type of testing environment | Determines whether the testing environment is a command prompt, a Windows GUI, or part of the development environment |
| System requirements | Determines the type of operating system the application will run on, what software requirements the tool necessitates, and what hardware requirements the tool imposes |

CONTRIBUTION AND RESULTS

The tests were completed and the results for each metric were recorded. The results for the three testing tools were extensive: all the software testing tools were tested, evaluated and compared against each other, as shown in the tables below.

Table 9: Features of Testing Tools

| Feature | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Install required | Yes | Yes | No |
| Cloud based | No | No | Yes |
| Knowledge of scripts required | No | Yes | No |
| Access to code required | Yes | No | No |
| List of features | Yes | No | No |

Table 10: Usability of Testing Tools, ranked 1–10 (1 = lowest, 10 = highest)

| Attribute | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Ease of installation | 8 | 7 | NA |
| User friendly interface | 6 | 7 | 7 |
| Helpfulness of error messages | 7 | 8 | 3 |
| Tutorials on 'How-to-Use' | 6 | 5 | 2 |
| Helpfulness of technical support | 4 | 7 | 8 |
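These ranked scores lend themselves to simple numeric summaries. The Java sketch below is purely illustrative and not the paper's evaluation method: it computes an unweighted mean usability score per tool from the Table 10 values, skipping Janova's 'NA' entry.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative aggregation of the ranked usability scores from Table 10:
// a simple unweighted mean per tool ("NA" entries are skipped).
public class UsabilityScoreAggregator {
    public static void main(String[] args) {
        Map<String, double[]> scores = new LinkedHashMap<>();
        // Order: installation, interface, error messages, tutorials, support.
        scores.put("Ranorex", new double[] {8, 6, 7, 6, 4});
        scores.put("RFT",     new double[] {7, 7, 8, 5, 7});
        scores.put("Janova",  new double[] {Double.NaN, 7, 3, 2, 8}); // NA -> NaN

        for (Map.Entry<String, double[]> e : scores.entrySet()) {
            double sum = 0;
            int n = 0;
            for (double s : e.getValue()) {
                if (!Double.isNaN(s)) { sum += s; n++; }
            }
            System.out.printf("%s: mean usability %.2f over %d criteria%n",
                    e.getKey(), sum / n, n);
        }
    }
}
```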

Table 11: Tool Usability Results

| Attribute | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Ease of installation | Very easy | A few steps must be followed to install; IBM Installation Manager must also be installed | No installation required |
| User friendly interface | Yes | Yes | Terminology in the tool is difficult to understand; test scripts must be written as Features with Scenarios, which are then queued in the test queue, and each Feature must be configured with a file path |
| Helpfulness of error messages | Documentation of error messages | Documentation of error messages | Error messages are documented on the website, but the links to additional help were broken; technical support is difficult to reach by email, with up to a 24-hour average wait for a response |
| Tutorials on 'How-to-Use' | Extremely easy to follow | There is a 'getting started with the product' link, which is essentially useful | Extremely easy to follow |
| Helpfulness of technical support | Very easy to find | Difficult to find | Extremely easy to find |
| Terminology | Software specific | Software specific | Software specific |

Table 12: Debugging Help Results

| Attribute | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Ease of getting assistance when an error is encountered | Response is quick | On playing back the first script, the log was viewable; however, the 'View Results' link is broken for each verification point, and the Help link produced no results | Emailed support responses take up to 24 hours and are only received between 8am and 5pm EST |
| Helpfulness of where to get help on the website | Very easy to find | The IBM technical site is not convenient to use | Not helpful: if logout was not used, the message 'the user has already logged in' appeared even when the user was not logged in at that moment, which may prevent the user from logging in for at least 15 minutes |
| Ease of using the tool while recording scripts | No issues | While recording, two thirds of the recording toolbar blacked out and was not visible; the toolbar also disappears when recording | Very hard to get started due to terminology specific to the tool |
| Documentation of error messages | Error messages are documented on the website | Numbers appear in error messages, particularly in the help section; the blackening of the toolbar was not documented on the website | When attempting to get help with 'missing location string', the error message 'link does not exist' was displayed; test queues were not successful |

PERSONAL CRITIQUE

When determining the usability of an automated software testing tool, it is vitally important to determine its ease of installation. Ranorex and RFT require installation, but Janova does not; Ranorex, however, is easy to install and thus easy to get started with. Similarly, the best tool should be one with which the user can easily get help, particularly when an error is encountered. The Ranorex and RFT platforms give the user easy access to help, while in Janova help is very difficult to obtain (Antoniol, Di Penta and Harman, 2011). Accordingly, the best tool is one that gives the user a quick response, especially when debugging the application. Ranorex was found effective and efficient with regard to debugging, while RFT and Janova were slow in delivering a response. When running batches, determining the automated progress is essential, and thus the tool's performance in relation to automated testing is critical; Ranorex scores the highest in automated progress based on the results above (Qu, Cohen and Rothermel 2008). The automated tool used for testing should also be one that fits into the testing process easily. Ranorex and RFT fit easily into the testing process; Janova does not, as it lacks the ability to compare test results with an oracle.

Hardware requirements are important in determining tool feasibility with regard to the hardware targeted to perform the tests. The RFT tool had issues on a Compaq Presario laptop with 2 GB of RAM and a 1.9 GHz processor (Antoniol, Di Penta and Harman, 2011). Automated software testing tools require newer machines with good specifications in order to successfully test the application. Each tool has its own hardware and software requirements, including programming languages, type of testing environment, system requirements and commercial licensing.

Software testing tools differ, and time, effort, and patience, together with a clear software testing goal, are needed to determine the best tool for a given type of software testing need. The test metrics studied in this research paper provide a clear comparison of the three tools Ranorex, RFT and Janova. From the results, RFT is the tool to use when conducting regression testing (Qu, Cohen and Rothermel 2008). Janova is the best tool for accessibility because it is cloud based and can be accessed from any internet enabled machine. Ranorex, on the other hand, is the best tool for web-based applications, as it has different automation tools built into the software package.

Cloud based tools that require no installation and are easy to learn are ideal for software developers to test their applications (Qu, Cohen and Rothermel 2008). Accordingly, such tools should be easy to navigate and should have essential tutorials on how to get started with the tool. The tool should also have minimal bugs, given that the bugs encountered during this research were numerous and time consuming to resolve.

REFERENCES

  1. Antoniol, G., Di Penta, M. and Harman, M., 2011. The use of search-based optimization techniques to schedule and staff software projects: An approach and an empirical study. Software: Practice and Experience 41(5), 495–519.

  2. Calvagna, A. and Gargantini, A., 2009. Combining satisfiability solving and heuristics to constrained combinatorial interaction testing. In: Proc. of the 3rd International Conference on Tests and Proofs (TAP’09), pp. 27–42.

  3. Fraser, G. and Zeller, A., 2011. Exploiting common object usage in test case generation. In: Proc. of the International Conference on Software Testing, Verification and Validation (ICST’11), pp. 80–89. IEEE.

  4. Harman, M. and McMinn, P., 2010. A theoretical and empirical study of search based testing: Local, global and hybrid search. IEEE Transactions on Software Engineering 36(2), 226–247.

  5. Hemmati, H., Arcuri, A., and Briand, L. 2010. Reducing the cost of model-based testing through test case diversity. In: Proc. of the 22nd IFIP International Conference on Testing Software and System (ICTSS’10), pp. 63–78.

  6. Majumdar, R., Saha, I., 2009. Symbolic robustness analysis. In: Proc. of the 30th IEEE Real-Time Systems Symposium (RTSS’09), pp. 355–363.

  7. McMinn, P., Harman, M., Hassoun, Y., Lakhotia, K. and Wegener, J., 2012. Input domain reduction through irrelevant variable removal and its effect on local, global and hybrid search-based structural test data generation. IEEE Transactions on Software Engineering 38(2), 453–477.

  8. Mohd, E., 2010. Different Forms of Software Testing Techniques for Finding Errors. IJCSI International Journal of Computer Science Issues 7(3), No. 1, May 2010.

  9. Nguyen, C., Perini, A., Tonella, P., Miles, S., Harman, M. and Luck, M., 2009. Evolutionary testing of autonomous software agents. In: Proc. of the 8th International Conference on Autonomous Agents and Multi-agent Systems (AAMAS’09), pp. 521–528.

  10. Nie, C., and Leung, H., 2011. A survey of combinatorial testing. ACM Computing Surveys 43 (2), 1–29.

  11. Papadakis, M. and Malevris, N., 2010. Automatic mutation test case generation via dynamic symbolic execution. In: Proc. of the 21st International Symposium on Software Reliability Engineering (ISSRE’10).

  12. Pasareanu, C. S., Mehlitz, P. C., Bushnell, D. H., Gundy-Burlet, K., Lowry, M. R., Person, S., Pape, M., 2008. Combining unit-level symbolic execution and system-level concrete execution for testing NASA software. In: International Symposium on Software Testing and Analysis (ISSTA’08), pp. 15–26.

  13. Qu, X., Cohen, M. B. and Rothermel, G., 2008. Configuration-aware regression testing: An empirical study of sampling and prioritization. In: Proc. of the 2008 International Symposium on Software Testing and Analysis (ISSTA'08), pp. 75–85.

  14. Saxena, P., Poosankam, P., McCamant, S., Song, D., 2009. Loop-extended symbolic execution on binary programs. In: Proc. of the 2009 International Symposium on Software Testing and Analysis (ISSTA'09), pp. 225–236.

  15. Tappenden, A. and Miller, J., 2009. A novel evolutionary approach for adaptive random testing. IEEE Transactions on Reliability 58(4), 619–633.

  16. Veanes, M., de Halleux, P., Tillmann, N., 2010. Rex: Symbolic regular expression explorer. In: Proc. of the 3rd International Conference on Software Testing, Verification and Validation (ICST’10), pp. 498–507.

  17. Zhang, P., Elbaum, S. G., Dwyer, M. B., 2011. Automatic generation of load tests. In: Proc. of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE'11), pp. 43–52.

  18. Zhang, Y., Finkelstein, A. and Harman, M., 2008. Search based requirements optimisation: Existing work and challenges. In: Proc. of the International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ'08), LNCS 5025, pp. 88–94. Springer.

  19. Zhou, B., Okamura, H. and Dohi, T., 2013. Enhancing performance of random testing through Markov chain Monte Carlo methods. IEEE Transactions on Computers 62(1), 186–192.

Appendix 1: Extended Methodology

General targeting approach

The tools were mainly evaluated for web-based testing support. The application under test (AUT) was first tested manually for a specific time period; the manual testing process required each feature in the AUT to be reviewed to make sure that all features were functional. Test scripts were then written according to what the AUT was designed to do (Antoniol, Di Penta and Harman, 2011). In the automatic testing phase, once each test script was written, the tests could easily be replayed as often as tests on the AUT needed to be run. The hardware specifications of the machines performing the tests are given in the table below:

Table 7: Specifications of Testing Machines

| Specification | First machine selected to perform tests | Final machine selected to perform tests |
| --- | --- | --- |
| Manufacturer | HP | HP |
| Model | Compaq Presario F700 Notebook PC | HP Pavilion g6 Notebook PC |
| Operating system | Windows 7, 32-bit | Windows 7, 64-bit |
| Processor | AMD Turion 64 X2 Mobile Technology TL-58, 1.90 GHz | Intel Core i3 CPU M330 @ 2.13 GHz |
| RAM | 2.0 GB | 4.0 GB |

Table 8: Definitions of Test Scripts

| Test Script | Definition |
| --- | --- |
| Verify All Services Appear | Verifies that after logging into the application, all current services appear on the page under the first tab, Equipment and Services |
| Verify Log in Works | Verifies the successful log in of a user |
| Verify Tabs Work | Verifies that after logging into the application, all tabs on the page work successfully |
| Verify Statements are Viewable | Verifies that after logging into the application, the current statement appears successfully under the Statements tab |
| Verify Logout Works | Verifies that the current user can successfully log out |
| Verify Pay Online Works | Verifies that an online payment can be made successfully |
| Verify Printer Friendly PDF Works | Verifies that when viewing statements, a PDF can be downloaded and/or viewed |
| Verify Toll Calls are Viewable | Verifies that after logging into the application, the Toll Calls tab can be used to view current or past toll calls, either unbilled or from previous months |
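For context, outside the three GUI-driven tools a script such as 'Verify Log in Works' could also be expressed directly in code. The sketch below uses Selenium WebDriver in Java; the URL, element IDs, and credentials are hypothetical placeholders, not the study's actual AUT.

```java
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VerifyLoginWorksTest {
    // Hypothetical re-creation of the "Verify Log in Works" script using
    // Selenium WebDriver; URL and element IDs are placeholders.
    @Test
    public void loginWithValidCredentialsReachesAccountPage() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");            // hypothetical AUT
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("testpass");
            driver.findElement(By.id("loginButton")).click();

            // Oracle: a successful login should land on the account page.
            Assert.assertTrue(driver.getCurrentUrl().contains("/account"));
        } finally {
            driver.quit();  // always release the browser session
        }
    }
}
```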

Target Application

The applications chosen for comparison were customer web based applications. Web based applications were selected because users needed to use the site to access the services to which they currently subscribe, view current billing statements and history, view toll calls, change account settings and pay their bills online.

Appendix 2: Extended Results

Table 13: Automated Progress Results

| Attribute | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Automated tools availability | The logs show details on each pass/fail | The logs show how many verification points pass or fail and why; they can be used to test automatically | The 'Batches' feature is only available with a premium membership |
| Automated progress features | Software specific features | Software specific features | Software specific features |
| Helpfulness in creating test cases | English test scripts | Easy to create and simplified; Java test scripts can be created, or simplified test scripts can be created in English | Test cases are not easy to create |

Table 14: Support for the Testing Process Results

| Attribute | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Ability to compare test results with an oracle | Yes | Yes | No |
| Ability to document test cases | Yes | Yes | Yes |
| Ability to perform regression testing | Yes | Yes | Yes |

Table 15: Requirements Results

| Attribute | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Programming language | Java, VB.NET, C# | Java or VB.NET | Java |
| Commercial licensing | Available on the website | Available on the website | Basic: $10 per month |
| Type of testing environment | Windows GUI | Windows GUI | Windows GUI |
| System requirements | Dependent on the .NET Framework version installed for the respective Ranorex package; requires at least the .NET Framework 2.0 for installation, and Ranorex Studio requires the .NET Framework 3.5 | Minimum processor: 1.5 GHz Intel Pentium 4; minimum memory: 1 GB RAM; disk space: minimum 750 MB for product package installation; display: 1024 x 768 minimum with 256 colors | Internet access |

Table 16: Test Script Results

| Test Script | Ranorex | RFT | Janova |
| --- | --- | --- | --- |
| Verify All Services Appear | Pass | Pass | Pass |
| Verify Login Works | Pass | Pass | Pass |
| Verify Statements are Viewable | Pass | Pass | Pass |
| Verify Logout Works | Pass | Pass | Pass |
| Verify Pay Online Works | Pass | Pass (this shows a false negative) | Pass |
| Verify Printer Friendly PDF Works | Pass | Pass | Pass |
| Verify Toll Calls are Viewable | Pass | Pass | Pass |