Automated Software Testing Tools

Author Name

Dept. Affiliation, School/Corp.

City, Country

email [email protected]

Overview

Complex software systems are being developed at an increasing pace; however, testing throughout the entire software development cycle remains pertinent to the overall effectiveness of the software. Automated software testing is regarded as the best means of executing repetitive test cases using software tools that can control test execution. It is a fundamentally important part of software engineering, yet it is an easily forgotten practice, particularly in the current fast-paced culture of Web application development. To enable the comparison of automated software testing tools, this paper develops a multi-partitioned metric guide for the tool-evaluation process. Automated testing involves the development of scripts that not only save resources and time but also speed up software testing, particularly when regression testing is required. The metric guide facilitated the comparison and selection of the desired tools for automated testing. The tools compared here were Ranorex, Rational Functional Tester (RFT), and Janova.

Test automation is essentially the replacement of repetitive manual tests with systematic programs built using automation tools (Qu, Cohen and Rothermel, 2008). It is a series of software programs for validating test outputs against specified test conditions. In simple terms, automated software testing is regarded as the best means of executing repetitive test cases using software tools that can control test execution (Antoniol, Di Penta and Harman, 2011). Automated testing has been found to shorten development cycles, eliminate cumbersome repetitive tasks and, more importantly, improve software quality. The success of test automation rests on identifying the right automation tools. Software testing in software development is demanding and thus requires persistence and patience (Tappenden and Miller, 2009). The process is mainly aimed at assessing the quality of a software product, which is accomplished by applying test cases that verify whether the proposed software requirements are met.
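As a minimal illustration of this idea, the JUnit 4 sketch below fixes an input, states the expected output as the test condition, and can be re-run by a test runner without manual intervention. The PriceCalculator class and its values are hypothetical, introduced here only for the example.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical component under test: a simple price calculator.
class PriceCalculator {
    // Applies a percentage discount to a base price.
    static double discountedPrice(double base, double discountPercent) {
        return base * (1 - discountPercent / 100.0);
    }
}

public class PriceCalculatorTest {
    // One automated test case: a fixed input, a specified expected
    // output, and an assertion a tool can execute repeatedly.
    @Test
    public void tenPercentDiscountIsApplied() {
        double actual = PriceCalculator.discountedPrice(200.0, 10.0);
        assertEquals(180.0, actual, 0.001); // expected value is the test condition
    }
}
```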

Automated testing is a fundamentally important part of software engineering; however, it is an easily forgotten practice, particularly in the current fast-paced culture of Web application development. Testing software applications thoroughly and efficiently is vitally important for any software development company, as it helps retain existing customers and attract new ones (Qu, Cohen and Rothermel, 2008). Software testing establishes the reliability of an application, and some experts estimate that approximately 50% of the software development budget is used to finance the testing process. Testing is labor-intensive and expensive, so reducing manual testing is highly recommended. Automated software testing is virtually essential given that errors are inadvertently introduced into software during the design and development processes (Papadakis and Malevris, 2010). As demand grows, software applications become more sophisticated and complex, with ever longer code bases that in turn require thorough and effective testing to establish their correctness.

If automated software testing is not completed thoroughly and effectively, the consequences can be distressing and alarming for the firm: the company is exposed to financial costs, particularly if users find bugs after the release of updates. Software reliability is threatened each time bugs are released in the application. Given this understanding, software testing is an inevitable part of any responsible software development effort. For any software development project, the Systems Development Life Cycle (SDLC) comprises project planning, analysis, design, testing, implementation, and support (Hemmati, Arcuri and Briand, 2010). Current SDLC approaches encourage developers to take an iterative approach to software testing rather than the traditional linear approach. In this regard, software testing is an ongoing activity that occurs throughout the project's life. Accordingly, systems testing as well as integration testing are performed prior to software deployment.

The integration of software testing across multiple systems is a significant feature of the testing process (Antoniol, Di Penta and Harman, 2011). In the medical field, for example, notwithstanding technological advancements, medication errors continually harm patients, and thousands die annually. A hospital can integrate two or more standalone systems to improve medication administration, which in turn reduces medication errors, simplifies nursing workflows, and helps pharmacists check infusion rates for intravenous medications. From this perspective, testing standalone systems is important, but testing integrated systems is even more so (Antoniol, Di Penta and Harman, 2011).

Software testing is a repetitive and labor-intensive process, so finding an appropriate automated software testing tool is essential (Pasareanu et al., 2008). Various software testing tools are currently available, and one can assume that the quality, testability, and maintainability of a system are enhanced by using such tools (Zhang, Finkelstein and Harman, 2008). Software testing tools help developers increase software quality by automating the mechanical aspects of the testing process. However, when system applications are modified and there is no testing process in place to run checks automatically, retesting consumes a great deal of time (McMinn et al., 2012). Using automated scripts has been found to save time and resources. Developing test scripts that are readable, reliable, and maintainable is vitally important but extremely challenging, since they must stay in step with the applications they test. Once such scripts and their related test cases have been developed, they can be reused repeatedly, saving resources and time (Qu, Cohen and Rothermel, 2008). There are numerous software testing tools, and it is therefore difficult to determine the best one for testing software efficiently and effectively.
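As a sketch of such a reusable, maintainable script, the JUnit 4 parameterized test below separates the test logic from a table of cases, so a regression run grows by adding rows rather than new code. It reuses the hypothetical PriceCalculator from the earlier sketch, and all values are illustrative.

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import java.util.Arrays;
import java.util.Collection;

import static org.junit.Assert.assertEquals;

// One maintainable script reused across many test cases: the data
// table can grow without touching the test logic below it.
@RunWith(Parameterized.class)
public class DiscountRegressionTest {
    private final double base, percent, expected;

    public DiscountRegressionTest(double base, double percent, double expected) {
        this.base = base;
        this.percent = percent;
        this.expected = expected;
    }

    @Parameters
    public static Collection<Object[]> cases() {
        return Arrays.asList(new Object[][] {
            { 200.0, 10.0, 180.0 },
            { 100.0,  0.0, 100.0 },
            {  50.0, 50.0,  25.0 },
        });
    }

    @Test
    public void discountMatchesExpected() {
        assertEquals(expected, PriceCalculator.discountedPrice(base, percent), 0.001);
    }
}
```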

BACKGROUND/RELATED WORK

Importance of Software testing and Quality Assurance

Authors in software testing and quality assurance suggest that the importance of software testing is massive and that the process cannot be neglected at any cost. Software testing reveals concealed software defects and hence helps minimize the risk posed by residual defects. Successful software development largely depends on quality assurance (QA), which provides adequate assurance that the software and its associated life-cycle processes are consistent with the specified requirements (Antoniol, Di Penta and Harman, 2011). Automating software testing is a crucial process, yet it has been found that software testing has not kept pace with the software code currently being written.

In software engineering, software testing is an essential activity (Calvagna and Gargantini, 2009). A roadmap has been laid out that frames the purpose of testing as a set of four ultimately attainable goals. This roadmap faces challenges, however, including deciding how much testing should be done (Qu, Cohen and Rothermel, 2008). The human factor is also essential in software testing: for instance, the testers' commitment, motivation, and skills can significantly affect the success of the test process.

Software testing Strategies

Black-box testing (specification-based/functional testing)

Black-box testing mainly emphasizes the input/output behavior and functionality of a component. In black-box testing, no knowledge of the program's implementation is assumed (Harman and McMinn, 2010); the functionality of the application is the main focus. A key source of complexity in this strategy is the number of execution states the application can pass through (Veanes, de Halleux and Tillmann, 2010).
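A minimal black-box sketch in JUnit 4 follows: each case is derived from the specification's input/output partitions, with no reference to how the component is implemented. The LeapYear class is an assumed stand-in for the component under test.

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical component; the tester sees only its specification:
// "a year is a leap year if divisible by 4, except century years
// not divisible by 400."
class LeapYear {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}

// Black-box tests: one case per input/output partition of the spec.
public class LeapYearBlackBoxTest {
    @Test public void typicalLeapYear()    { assertTrue(LeapYear.isLeap(2024)); }
    @Test public void typicalCommonYear()  { assertFalse(LeapYear.isLeap(2023)); }
    @Test public void centuryIsNotLeap()   { assertFalse(LeapYear.isLeap(1900)); }
    @Test public void everyFourHundredth() { assertTrue(LeapYear.isLeap(2000)); }
}
```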

Implementation-based (white-box) testing

Code-based or white-box testing generates a test suite based on the application's source code. The focus here is on testing the code behind the software to determine whether the software requirements have been met (Antoniol, Di Penta and Harman, 2011). For example, test cases can be developed for a specific three-variable function; this strategy allows test cases to be created and then combined into test suites that execute multiple test cases at once (Nie and Leung, 2011).
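The classic three-variable function in the testing literature is triangle classification, so the white-box sketch below assumes a small implementation and derives one test case per branch outcome in its source:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Assumed implementation of the classic three-variable function:
// classify a triangle from its side lengths.
class Triangle {
    static String classify(int a, int b, int c) {
        if (a <= 0 || b <= 0 || c <= 0
                || a + b <= c || b + c <= a || a + c <= b) return "invalid";
        if (a == b && b == c) return "equilateral";
        if (a == b || b == c || a == c) return "isosceles";
        return "scalene";
    }
}

// White-box tests: cases are chosen by inspecting the branches in
// the source code, one test per branch outcome.
public class TriangleWhiteBoxTest {
    @Test public void violatesTriangleInequality() { assertEquals("invalid", Triangle.classify(1, 2, 3)); }
    @Test public void allSidesEqual()              { assertEquals("equilateral", Triangle.classify(3, 3, 3)); }
    @Test public void exactlyTwoSidesEqual()       { assertEquals("isosceles", Triangle.classify(3, 3, 5)); }
    @Test public void noSidesEqual()               { assertEquals("scalene", Triangle.classify(3, 4, 5)); }
}
```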

Other software testing approaches include object-oriented techniques such as scenario-based testing and fault-based testing (Nguyen et al., 2009).

Automated Software Testing

Automation increases the speed of the software testing process and is essential for ensuring the reliability of the software, particularly when an update is made. A software testing workbench, a set of integrated tools that support the testing process, is used here (Zhang, Elbaum and Dwyer, 2011). Redundancy-detection testing is also used to gauge the reliability of the software; it is a good way of reducing test maintenance costs and preserving the integrity of test suites (Majumdar and Saha, 2009). After test cases have been written, it is important to update them whenever changes are made to the application. Test-suite steps can be exhaustively automated by applying existing tools such as MuJava and JUnit to generate compatible test cases.
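As a sketch of executing multiple test cases at once, the JUnit 4 suite below bundles the two illustrative test classes from the earlier sketches into a single run; the class names are this paper's assumptions, not tool-generated output.

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A suite bundles independent test classes so that a whole
// regression run can be triggered as one automated step.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    LeapYearBlackBoxTest.class,
    TriangleWhiteBoxTest.class
})
public class RegressionSuite {
    // Intentionally empty: the annotations tell the runner what to execute.
}
```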

Software Testing Tools

  1. Ranorex: a comprehensive and cost-effective automated testing tool that works with standard programming approaches and common languages such as C# and VB.NET. It does not require scripting (Fraser and Zeller, 2011).

  2. Rational Functional Tester (RFT): an object-oriented testing tool developed by IBM in 1999. It supports functional and regression testing and captures the outcomes of black-box tests in script form. It is used with Java, Microsoft Visual Studio, web-based, terminal-based, Web 2.0, and Siebel applications (Qu, Cohen and Rothermel, 2008).

  3. Janova: like the tools above, Janova enables the user to automate software testing; however, it is mainly used in cloud computing. It requires no scripts to be written: tests are expressed in plain English, which streamlines test implementation efficiently.

RESEARCH APPROACHES

To establish the best automated software testing tools, the research method comprised the following steps: identifying the automated testing tools to be evaluated; developing the metric guide for evaluating those tools; choosing the target application to be tested; carrying out a feature evaluation and analysis of all the tools; testing the target application using the selected tools; and gathering and interpreting the results.

Automated software testing tools selected

RFT, Ranorex, and Janova were selected for the comparison of standalone application testing. SilkTest, Panorama, and QuickTest were also considered; however, they were left out because of the complexity of their setup and initialization and their cumbersome installation instructions.

RFT was chosen because it is among the most widely used tools (Antoniol, Di Penta and Harman, 2011). The automated testing tools were used to step through end-user actions in the application and to compare features between the tools. Janova was selected because there was no need to download software or buy any tools (Qu, Cohen and Rothermel, 2008); it is a cloud-based testing tool and therefore requires only an internet connection. The tests were created and queued in the tool. Ranorex was chosen because it is widely used with web-based applications (Saxena et al., 2009).

Evaluation Metrics

Metrics are important because they make it possible to compare tools and to select the appropriate testing tool for the testing needs at hand (Antoniol, Di Penta and Harman, 2011). The tools were compared on features, usability, automation progress, testing-process support, debugging help, and requirements. The subsections that follow describe how these comparison criteria were applied (Qu, Cohen and Rothermel, 2008).

Features metric for all tools

Four major metrics were used to evaluate the tools. The first was whether the tool required installation before it could be used. Second, the researcher determined whether the tool was cloud-based. Third, the researcher considered how strictly scripting knowledge was required. Lastly, it was noted whether a list of tool features was provided.

Usability attributes

The present study considered five major attributes of the testing tools. The first was the ease with which the tool could be installed. Second, the researcher assessed how user-friendly the tool was. Third, the researcher determined whether the error messages given by the tool were helpful. Fourth, the availability and ease of obtaining technical help through online tutorials was determined. Lastly, the researcher assessed whether the terminology used in the tool was easy to comprehend.

Metrics for automation progress

Three major metrics were used to assess automation progress. First, whether the tool produced automation output that was easy to read; second, whether the progress of the automation was documented; and third, the ease with which test cases could be created.

Metrics for testing progress

Three metrics were used to assess the testing process. The first was the tester's capacity to compare test outcomes against an oracle. The second was the possibility of documenting the test cases performed. The last metric considered in the present study was the possibility of running a regression test.
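To make the oracle comparison concrete, the sketch below shows one common form of it: expected output captured from a known-good run is stored in a file, and every later run is compared against it. The file path and the ReportGenerator class are hypothetical.

```java
import org.junit.Test;

import java.nio.file.Files;
import java.nio.file.Paths;

import static org.junit.Assert.assertEquals;

// Hypothetical system under test.
class ReportGenerator {
    static String render() {
        return "total=3\nstatus=OK\n";
    }
}

public class ReportRegressionTest {
    // Regression test against a stored oracle: any drift from the
    // known-good output recorded in oracle/report-v1.txt fails the test.
    @Test
    public void reportMatchesGoldenOutput() throws Exception {
        String expected = new String(
                Files.readAllBytes(Paths.get("oracle/report-v1.txt")), "UTF-8");
        assertEquals(expected, ReportGenerator.render());
    }
}
```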

Requirement metrics

The present study considered four requirement metrics. First, the researcher determined the programming languages with which the tool works best. Second, it was necessary to determine the availability and cost of a commercial license for each tool. Third, the type of environment required for testing was identified, since it bears on the success of the testing process. Lastly, the operating systems on which a given application could run were determined.

CONTRIBUTION AND RESULTS

Characteristics of the testing tools

Of the three testing tools considered in this paper, only Janova was cloud-based; Ranorex and RFT required installation before use. The feature comparison is summarized below.

Characteristic               Ranorex   RFT   Janova
Installation required        Yes       Yes   No
Cloud-based                  No        No    Yes
Scripting knowledge needed   No        Yes   No
Access to system code        Yes       No    No
Feature list provided        Yes       No    No

Usability of the test tools

The usability of the testing tools was assessed on a scale of 1 to 10, where 1 represents the lowest score and 10 the highest; Janova required no installation, so it has no score for that attribute.

Usability attribute             Ranorex   RFT   Janova
Ease of installation            8         7     n/a
User friendliness               6         7     7
Helpfulness of error messages   7         8     3
Helpfulness of tutorials        6         5     2
Helpfulness of support          4         7     8

Debugging help results

It was easy to get assistance whenever an error occurred in Ranorex, relatively difficult with RFT, and with Janova support was limited to 8 a.m.-5 p.m. Getting help from the website was quite easy with Ranorex, quite difficult with RFT, and not possible with Janova, which offers no web-based assistance. No issues occurred when recording a script with Ranorex, whereas the toolbar blacked out with RFT and it was quite difficult to get started with Janova. Error messages were well documented on the website for Ranorex and in the help section for RFT, while no error documentation exists for Janova.

Contribution

The present study benefits the programming language community in three major ways. First, it creates awareness of the existence, utility, and challenges of automated testing tools. Second, by exploring the benefits of automated testing, the study motivates software developers, and the programming language community more broadly, to consider automation as an appropriate alternative; this will help them reduce the cost of testing software while increasing efficiency, quality, and speed. Third, by highlighting the drawbacks of the available testing tools, the study challenges the community to ensure that future tools are user-friendly. This calls for extensive research and dedication to improving the current tools, so that the testing tools of the future are state of the art.

PERSONAL CRITIQUE

When determining the usability of an automated software testing tool, it is vitally important to consider its ease of installation. Ranorex and RFT require installation, whereas Janova does not; of the two, Ranorex is easier to install and hence easier to adopt. Similarly, the best tool is one from which the user can easily get help when encountering an error: Ranorex and RFT make it easy to get help, while with Janova it is very difficult (Antoniol, Di Penta and Harman, 2011). Likewise, the best tool is one that responds quickly, especially when debugging the application; Ranorex was found effective and efficient in debugging, while RFT and Janova were slow to respond. When running batches, monitoring automation progress is essential, so the tool's performance on automated testing is critical, and Ranorex scores highest on automation progress in the results above (Qu, Cohen and Rothermel, 2008). The automated testing tool should also fit easily into the testing process. Ranorex and RFT fit easily; Janova does not, since it cannot compare test results against an oracle.

Hardware requirements are important in determining a tool's feasibility for the hardware on which the tests are to run. The RFT tool had issues on a Compaq Presario laptop with 2 GB of RAM and a 1.9 GHz processor (Antoniol, Di Penta and Harman, 2011). Automated software testing tools require machines that meet well-specified requirements if they are to test the application successfully. Each tool has its own hardware and software requirements, including programming languages, type of testing environment, system requirements, and commercial licensing.

Software testing tools differ, and determining the best tool for a given set of testing needs requires time, effort, patience, and a clear testing goal. The test metrics studied in this paper provide a clear comparison of the three tools Ranorex, RFT, and Janova. From the results, RFT is the tool to apply when conducting regression testing (Qu, Cohen and Rothermel, 2008). Janova is the best tool when access is needed from any internet-enabled machine. Ranorex, on the other hand, is the most appropriate testing tool for web-based applications, because it ships with various automation functions built into the package.

Cloud-based tools often need no installation and are quite easy to learn, making them ideal for software developers testing their applications (Qu, Cohen and Rothermel, 2008). Such tools should be easy to navigate and should provide the tutorials users need to operate them. A tool should also contain as few bugs as possible: many bugs were identified in the present study, and the time required to resolve them was considerable.

REFERENCES

  1. Antoniol, G., Di Penta, M. and Harman, M., 2011. The use of search-based optimization techniques to schedule and staff software projects: An approach and an empirical study. Software — Practice and Experience 41(5), 495–519.

  2. Calvagna, A. and Gargantini, A., 2009. Combining satisfiability solving and heuristics to constrained combinatorial interaction testing. In: Proc. of the 3rd International Conference on Tests and Proofs (TAP’09), pp. 27–42.

  3. Fraser, G. and Zeller, A., 2011. Exploiting common object usage in test case generation. In: Proc. of the International Conference on Software Testing, Verification and Validation (ICST’11), pp. 80–89. IEEE.

  4. Harman, M. and McMinn, P., 2010. A theoretical and empirical study of search based testing: Local, global and hybrid search. IEEE Transactions on Software Engineering 36(2), 226–247.

  5. Hemmati, H., Arcuri, A., and Briand, L. 2010. Reducing the cost of model-based testing through test case diversity. In: Proc. of the 22nd IFIP International Conference on Testing Software and System (ICTSS’10), pp. 63–78.

  6. Majumdar, R., Saha, I., 2009. Symbolic robustness analysis. In: Proc. of the 30th IEEE Real-Time Systems Symposium (RTSS’09), pp. 355–363.

  7. McMinn, P., Harman, M., Hassoun, Y., Lakhotia, K. and Wegener, J., 2012. Input domain reduction through irrelevant variable removal and its effect on local, global and hybrid search-based structural test data generation. IEEE Transactions on Software Engineering 38(2), 453–477.

  8. Mohd, E., 2010. Different forms of software testing techniques for finding errors. IJCSI International Journal of Computer Science Issues 7(3), No. 1, May 2010.

  9. Nguyen, C., Perini, A., Tonella, P., Miles, S., Harman, M. and Luck, M., 2009. Evolutionary testing of autonomous software agents. In: Proc. of the 8th International Conference on Autonomous Agents and Multi-agent Systems (AAMAS’09), pp. 521–528.

  10. Nie, C., and Leung, H., 2011. A survey of combinatorial testing. ACM Computing Surveys 43 (2), 1–29.

  11. Papadakis, M. and Malevris, N., 2010. Automatic mutation test case generation via dynamic symbolic execution. In: Proc. of the 21st International Symposium on Software Reliability Engineering (ISSRE’10).

  12. Pasareanu, C. S., Mehlitz, P. C., Bushnell, D. H., Gundy-Burlet, K., Lowry, M. R., Person, S., Pape, M., 2008. Combining unit-level symbolic execution and system-level concrete execution for testing NASA software. In: International Symposium on Software Testing and Analysis (ISSTA’08), pp. 15–26.

  13. Qu, X., Cohen, M. B. and Rothermel, G., 2008. Configuration-aware regression testing: An empirical study of sampling and prioritization. In: Proc. of the 2008 International Symposium on Software Testing and Analysis (ISSTA'08), pp. 75–85.

  14. Saxena, P., Poosankam, P., McCamant, S., Song, D., 2009. Loop-extended symbolic execution on binary programs. In: Proc. of the 2009 International Symposium on Software Testing and Analysis (ISSTA'09), pp. 225–236.

  15. Tappenden, A. and Miller, J., 2009. A novel evolutionary approach for adaptive random testing. IEEE Transactions on Reliability 58(4), 619–633.

  16. Veanes, M., de Halleux, P., Tillmann, N., 2010. Rex: Symbolic regular expression explorer. In: Proc. of the 3rd International Conference on Software Testing, Verification and Validation (ICST’10), pp. 498–507.

  17. Zhang, P., Elbaum, S. G., Dwyer, M. B., 2011. Automatic generation of load tests. In: Proc. of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE'11), pp. 43–52.

  18. Zhang, Y., Finkelstein, A. and Harman, M., 2008. Search based requirements optimization: Existing work and challenges. In: Proc. of the International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ'08), LNCS 5025, pp. 88–94. Springer.

  19. Zhou, B., Okamura, H. and Dohi, T., 2013. Enhancing performance of random testing through Markov chain Monte Carlo methods. IEEE Transactions on Computers 62(1), 186–192.