STLC - Software Testing Life Cycle
There is a systematic cycle to software testing, although it varies from organization to organization. The software testing life cycle (STLC) refers to a comprehensive set of testing-related activities, specifying the details of every activity along with the best time to perform it. There cannot be a single standardized testing process across organizations; however, every organization involved in software development defines and follows some form of testing life cycle.

By and large, the STLC comprises the following seven sequential phases:

1) Planning of Tests
2) Analysis of Tests
3) Designing of Tests
4) Creation & Verification of Tests
5) Execution of Testing Cycles
6) Performance Testing, Documentation
7) Actions after Implementation

Every company follows its own software testing life cycle to suit its requirements, culture and available resources. The software testing life cycle cannot be viewed in isolation; rather, it interacts with every phase of the software development life cycle (SDLC). The prime focus of the software testing life cycle is on managing and controlling all software testing activities. Testing may be manual or automated using a tool.

1) Planning of Tests: In this phase a senior person, such as the project manager, plans and identifies all the areas where testing effort needs to be applied, while operating within constraints such as resources and budget. Unless judicious planning is done at the beginning, the result can be catastrophic: a poor-quality product that dissatisfies the ultimate customer. Planning is not limited to the initial phase; it is a continuous exercise extending until the end of the project. During the planning stage, a team of senior people produces an outline of a high-level test plan. The high-level test plan comprehensively describes the following:
Scope of Testing: Defining the areas to be tested and identifying the features to be covered during testing.
Defining Risks: Identifying the different types of risks involved with the chosen plan.
Identification of resources: Identifying resources such as people, materials and machines which need to be deployed during testing.
Time schedule: A schedule for performing the planned testing, aimed at delivering the end product as per the commitment made to the customer.

Involvement of software testers begins in the planning phase of the software development life cycle. During the design phase, testers work with developers to determine which aspects of a design are testable and with what parameters those tests will work.
2) Analysis of Tests: Based upon the high-level test plan document, further details covering the following are worked out:

Identification of the types of testing to be performed during the various stages of the software development life cycle.
Identification of the extent to which automation needs to be done.
Identification of the time at which automation is to be carried out.
Identification of the documentation required for automated testing.
The software project cannot be successful unless there is frequent interaction among the various teams involved in coding and testing, with the active involvement of the project managers, business analysts and even the customer. Any deficiencies in the agreed test plans surface during such meetings of cross-functional teams. This provides an opportunity to rethink and refine the strategies decided for testing.
Based upon the customer requirements, a detailed matrix for functional validation is prepared to cover the following areas:

Ensuring that each and every business requirement is covered by some test case or the other.
Identification of the test cases best suited to automated testing.
Identification of the areas to be covered by performance testing and stress testing.
Carrying out a detailed review of documentation covering areas like customer requirements, product features and specifications, and functional design.
3) Designing of Tests: This phase involves the following:

Further polishing of the various test cases and test plans.
Revision and finalization of the matrix for functional validation.
Finalization of risk assessment methodologies.
In case automation is to be adopted, identification of the test cases suitable for automation.
Creation of scripts for the test cases decided for automation.
Preparation of test data.
Establishing unit testing standards, including defining acceptance criteria.
Revision and finalization of the testing environment.
4) Creation & Verification of Tests: This phase involves the following:

Finalization of test plans and test cases.
Completion of script creation for the test cases decided for automation.
Completion of test plans for performance testing and stress testing.
Providing technical support to the code developers in their unit testing effort.
Logging bugs in the bug repository and preparation of detailed bug reports.
Performing integration testing, followed by reporting of any defects detected.
5) Execution of Testing Cycles: This phase involves completing test cycles by executing all the test cases until a predefined stage is reached, or until no more errors are detected. This is an iterative process involving execution of test cases, detection of bugs, bug reporting, modification of test cases where necessary, fixing of bugs by the developers, and finally repeating the testing cycles.
6) Performance Testing, Documentation: This phase involves the following:

Execution of the test cases pertaining to performance testing and stress testing.
Revision and finalization of test documentation.
Performing acceptance testing and load testing, followed by recovery testing.
Verification of the software application by simulating conditions of actual usage.
7) Actions after Implementation: This phase involves the following:

Evaluation of the entire testing process.
Documentation of TGR (Things Gone Right) and TGW (Things Gone Wrong) reports.
Identification of approaches to be followed if similar defects and problems occur in the future.
Creation of comprehensive plans to refine the testing process.
Identification and fixing of newly discovered errors on a continuous basis.
Winding up of the test environment and restoration of all test equipment to the original baseline conditions.

Life Cycle of Software Testing (STLC): Phases, Activities and Outcomes

Phase: Planning of Tests
Activities: Creation of a high-level test plan
Outcome: Refined test plans and specifications

Phase: Analysis of Tests
Activities: Creation of a fully descriptive test plan; creation of the matrix for functional validation; creation of test cases
Outcome: Refined test plans, test cases and matrix for functional validation

Phase: Designing of Tests
Activities: Revision of test cases; selection of test cases fit for automation
Outcome: Refined test cases, input data sets and documents for assessment of risk

Phase: Creation & Verification of Tests
Activities: Creation of scripts for the test cases selected for automation
Outcome: Detailed procedures for testing, testing scripts, test reports and bug reports

Phase: Execution of Testing Cycles
Activities: Completion of testing cycles
Outcome: Detailed test reports and bug reports

Phase: Performance Testing, Documentation
Activities: Execution of test cases related to performance testing and stress testing; detailed documentation
Outcome: Test reports, documentation on the various metrics used during testing
-----------------------------------------------------------------------------------------------------------------------------------------
The software development life cycle (SDLC) and the software testing life cycle (STLC) run in parallel.
SDLC is the Software Development Life Cycle, a systematic approach to developing software. STLC is the Software Testing Life Cycle, the process of testing software in a well-planned and systematic way. The phases line up as follows:

SDLC phase: Requirements gathering. STLC: Requirements analysis is done in this phase; software requirements are reviewed by the test team.
SDLC phase: Design. STLC: Test planning, test analysis and test design are done in this phase; the test team reviews design documents and prepares the test plan.
SDLC phase: Coding or development. STLC: Test construction and verification are done in this phase; testers write test cases and finalize the test plan.
SDLC phase: Testing. STLC: Test execution and bug reporting; manual testing and automation testing are done, and defects found are reported. Retesting and regression testing are also done in this phase.
SDLC phase: Deployment. STLC: Final testing and implementation are done in this phase, and the final test report is prepared.
SDLC phase: Maintenance. STLC: Maintenance testing is done in this phase.
1. Requirements Analysis
In this phase testers analyze the customer requirements and work with developers during the design phase to see which requirements are testable and how they will test them. It is very important to start testing activities from the requirements phase itself, because the cost of fixing a defect is much lower if it is found in the requirements phase rather than in later phases.
2. Test Planning
In this phase all the planning about testing is done: what needs to be tested, how the testing will be done, the test strategy to be followed, the test environment, the test methodologies to be followed, hardware and software availability, resources, risks, etc. A high-level test plan document is created, which includes all the planning inputs mentioned above, and is circulated to the stakeholders. Usually the IEEE 829 test plan template is used for test planning.
3. Test Analysis
After the test planning phase is over, the test analysis phase starts. In this phase we dig deeper into the project and figure out what testing needs to be carried out in each SDLC phase. Automation activities are also decided in this phase: whether automation needs to be done for the software product, how the automation will be done, how much time it will take, and which features need to be automated.
Non-functional testing areas (stress and performance testing) are also analyzed and defined in this phase.
4. Test Design
In this phase various black-box and white-box test design techniques are used to design the test cases. Testers start writing test cases by following those design techniques; if automation testing is to be done, the automation scripts also need to be written in this phase.
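As an illustration of one black-box design technique, the sketch below applies boundary value analysis to a hypothetical validation function; the function, its name, and the 18-60 range are assumptions made up for this example.

```python
# Boundary value analysis: for an accepted input range [18, 60],
# test the values at and immediately around each boundary.
def is_valid_age(age):
    """Hypothetical function under test: accepts ages 18..60 inclusive."""
    return 18 <= age <= 60

# Boundary values derived from the range [18, 60]:
# just below, at, and just above each edge.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    59: True,   # just below the upper boundary
    60: True,   # upper boundary
    61: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    result = is_valid_age(age)
    assert result == expected, f"age={age}: expected {expected}, got {result}"
```

Designing cases around the boundaries catches the common off-by-one defects (for example `<` written where `<=` was intended) with far fewer cases than exhaustive testing.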
8. Post Implementation
In this phase the test environment is cleaned up and restored to its default state, process review meetings are held, and lessons learnt are documented. A document is prepared to help cope with similar problems in future releases.
What is Validation
Validation represents dynamic testing techniques. Validation ensures that the software operates as planned in the requirements phase by executing it, running predefined test cases and comparing the output with the expected results. Validation answers the question: Did we build software that is fit for purpose, and does it provide a solution to the problem? Validation is concerned with evaluating the software, component or system to determine whether it meets end-user requirements. Some important validation techniques are as follows:
1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing
Unit Testing
A unit is the smallest testable part of the software system. Unit testing is done to verify that the lowest independent entities in any software are working correctly. The smallest testable part is isolated from the remainder of the code and tested to determine whether it works correctly.
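A minimal sketch of a unit test using Python's unittest framework; the `word_count` function and its expected values are assumptions invented for the example.

```python
import unittest

def word_count(text):
    """The isolated unit under test: counts words in a string."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    # Each test checks one behavior of the unit in isolation.
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit testing is fun"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Running the file executes both tests and reports pass/fail for each, without touching any other part of the program.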
What is STUB?

Assume you have three modules: Module A, Module B and Module C. Module A is ready and needs to be tested, but Module A calls functions from Modules B and C, which are not ready. So the developer writes a dummy module which simulates B and C and returns values to Module A. This dummy module code is known as a stub.
What is DRIVER?
Now suppose you have Modules B and C ready, but Module A, which calls functions from Modules B and C, is not ready. The developer writes a dummy piece of code in place of Module A which calls Modules B and C and passes them the required values. This dummy piece of code is known as a driver.
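The Module A/B/C scenario above can be sketched in Python; all module and function names here are illustrative assumptions, not part of any real system.

```python
# STUB case: Module A is ready, Modules B and C are not.

def module_b_stub():
    """STUB: stands in for the unfinished Module B, returns a canned value."""
    return 10

def module_c_stub():
    """STUB: stands in for the unfinished Module C, returns a canned value."""
    return 20

def module_a(get_b=module_b_stub, get_c=module_c_stub):
    """Module A (ready): combines values obtained from Modules B and C."""
    return get_b() + get_c()

# Unit-testing Module A with the stubs standing in for B and C:
assert module_a() == 30

# DRIVER case: Module B is ready but its caller, Module A, is not,
# so a dummy caller exercises Module B and checks the result.
def module_b():
    return 10

def driver_for_b():
    """DRIVER: dummy caller that invokes the ready module and checks it."""
    return module_b() == 10

assert driver_for_b()
```

The stub sits below the module under test and fakes its dependencies; the driver sits above it and fakes its caller.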
Integration Testing
In integration testing the individually tested units are grouped as one and the interfaces between them are tested. Integration testing identifies the problems that occur when individual units are combined, i.e. it detects problems in the interface between two units. Integration testing is done after unit testing. There are mainly three approaches to integration testing.
1. Top-down Approach
The top-down approach tests the integration from top to bottom, following the architectural structure. Example: integration can start with the GUI, with the missing components substituted by stubs, and integration proceeds from there.
2. Bottom-up approach
In the bottom-up approach, testing starts from the bottom of the control flow, and the higher-level components are substituted with drivers.
System Testing
Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing; its main focus is to verify that the customer requirements are fulfilled.
System testing is done after integration testing is complete. System testing should cover both the functional and non-functional requirements of the software. The following types of testing should be considered during the system testing cycle. The test types followed in system testing differ from organization to organization; however, this list covers some of the main testing types which need to be covered.
1. Sanity Testing
2. Functional Testing
3. Usability Testing
4. Stress Testing
5. Load Testing
6. Performance Testing
7. Regression Testing
8. Maintenance Testing
9. Security Testing
10. Reliability Testing
11. Accessibility Testing
12. GUI Testing
Sanity Testing
1. When there are some minor issues with the software and a new build is obtained after fixing them, then instead of doing complete regression testing, a sanity test is performed on that build. You can say that sanity testing is a subset of regression testing.
2. Sanity testing is done after thorough regression testing is over; it is done to make sure that any defect fixes or changes made after regression testing do not break the core functionality of the product. It is done towards the end of the product release phase.
3. Sanity testing follows a narrow and deep approach, with detailed testing of some limited features.
4. Sanity testing is like doing specialized testing which is used to find problems in a particular functionality.
5. Sanity testing is done with the intent of verifying whether end-user requirements are met or not.
6. Sanity tests are mostly non-scripted.
Functional Testing
1. Functional testing is also known as component testing.
2. It tests the functioning of the system or software, i.e. what the software does. The functions of the software are described in the functional specification document or requirements specification document.
3. Functional testing considers the specified behavior of the software.
Usability Testing
Usability means the software's capability to be learned and understood easily, and how attractive it looks to the end user. Usability testing is a black-box testing technique. Usability testing tests the following features of the software:
1. How easy it is to use the software.
2. How easy it is to learn the software.
3. How convenient the software is to the end user.
Stress Testing
Stress testing tests the software with a focus on checking that the software does not crash if the hardware resources (like memory, CPU, disk space) are not sufficient. Stress testing puts the hardware resources under extensive levels of stress in order to verify that the software remains stable and fails gracefully. In stress testing we load the software with a large number of concurrent users/processes which cannot be handled by the system's hardware resources. Stress testing is a type of performance testing, and it is non-functional testing.
Load Testing
Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased, and the behavior of the system is examined to check what load the software can handle. The main objective of load testing is to determine the response time of the software for critical transactions and to make sure it is within the specified limit. It is a type of performance testing and is non-functional testing.
Performance Testing
Performance testing is done to determine software characteristics like the response time, throughput or MIPS (millions of instructions per second) at which the system/software operates. Performance testing is done by generating activity on the system/software using the available performance test tools. The tools are used to create different user profiles and inject different kinds of activity on the server, replicating end-user environments.
The purpose of performance testing is to ensure that the software meets the specified performance criteria, and to figure out which part of the software is causing performance to go down. Performance testing tools should have the following characteristics:
1. It should generate load on the system being tested.
2. It should measure the server response time.
3. It should measure the throughput.
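Those three characteristics can be sketched in miniature with Python threads; here a local function with a simulated 10 ms service time stands in for a real server (an assumption for the example), while the harness generates concurrent load and computes average response time and throughput.

```python
# Minimal sketch of what a performance test tool does: generate load,
# measure response times, and compute throughput.
import threading
import time

def fake_server_call():
    """Stand-in for a request to the system under test."""
    time.sleep(0.01)  # simulated 10 ms service time

def run_load(num_users, requests_per_user):
    """Drive concurrent 'users' and collect per-request response times."""
    response_times = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_server_call()
            elapsed = time.perf_counter() - start
            with lock:
                response_times.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(num_users)]
    wall_start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    wall_time = time.perf_counter() - wall_start

    avg_response = sum(response_times) / len(response_times)
    throughput = len(response_times) / wall_time  # requests per second
    return avg_response, throughput

avg, tps = run_load(num_users=5, requests_per_user=10)
print(f"avg response: {avg * 1000:.1f} ms, throughput: {tps:.0f} req/s")
```

Real tools such as those listed below add user profiles, ramp-up schedules, server-side monitors and reporting on top of this basic load-generate-and-measure loop.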
2. LoadRunner
LoadRunner is HP's (formerly Mercury's) load/stress testing tool for web and other applications; it supports a wide variety of application environments, platforms, and databases. It includes a large suite of network/app/server monitors to enable performance measurement of each tier/server/component and tracing of bottlenecks.
3. Apache JMeter
JMeter is a Java desktop application from the Apache Software Foundation designed to load-test functional behavior and measure performance. Originally designed for testing web applications, it has since expanded to other test functions; it may be used to test performance of both static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers and more). It can be used to simulate a heavy load on a server, network or object to test its strength, or to analyze overall performance under different load types; it can produce a graphical analysis of performance or test server/script/object behavior under heavy concurrent load.
4. DBUnit
DBUnit is an open-source JUnit extension (also usable with Ant) targeted at database-driven projects that, among other things, puts a database into a known state between test runs. This enables avoidance of problems that can occur when one test case corrupts the database and causes subsequent tests to fail or exacerbate the damage. It has the ability to export and import database data to and from XML datasets, can work with very large datasets when used in streaming mode, and can help verify that database data matches expected sets of values.
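DBUnit itself is a Java/JUnit extension, but its core idea, putting the database into a known state before every test, can be sketched in Python with sqlite3 and unittest's setUp hook; the table and data here are assumptions for the example.

```python
# Known-database-state idea (as in DBUnit), sketched with sqlite3:
# each test starts from the same seeded data set.
import sqlite3
import unittest

class AccountDbTest(unittest.TestCase):
    def setUp(self):
        # Fresh in-memory database in a known state before each test,
        # so one test cannot corrupt the data seen by the next.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
        self.db.executemany(
            "INSERT INTO accounts VALUES (?, ?)",
            [("alice", 100), ("bob", 50)],
        )

    def tearDown(self):
        self.db.close()

    def test_corrupting_update(self):
        # Even if this test wipes the whole table...
        self.db.execute("DELETE FROM accounts")
        count = self.db.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
        self.assertEqual(count, 0)

    def test_known_state_restored(self):
        # ...every other test still starts from the seeded data.
        count = self.db.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
        self.assertEqual(count, 2)
```

Because setUp rebuilds the database for every test, the destructive first test cannot break the second, which is exactly the failure mode DBUnit is designed to prevent.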
Regression Testing
Regression testing is done to find defects that arise due to code changes made to existing code, such as functional enhancements or configuration changes. The main intent behind regression testing is to ensure that code changes made for software enhancements or configuration changes have not introduced new defects into the software. Any time changes are made to existing working code, a suite of test cases is executed to ensure that the new changes have not introduced any bugs. It is necessary to have a regression test suite and to execute it after every new version of the software is available. The regression test suite is an ideal candidate for automation because it needs to be executed after every new version.
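An automated regression suite can be as simple as the sketch below: a fixed set of checks re-run after every code change. The `apply_discount` function, its cases, and the pipeline entry point are all assumptions invented for the example.

```python
# Sketch of an automated regression suite: the same checks are
# re-executed against every new build of the software.
import unittest

def apply_discount(price, percent):
    """Existing, working code; any enhancement must keep these behaviors."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # Cases accumulated from earlier releases, run on every new version.
    def test_standard_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_full_discount(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

def run_regression():
    """Entry point a build pipeline could invoke after each new version."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
    return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

If a future "enhancement" to `apply_discount` breaks any of these recorded behaviors, `run_regression()` returns False and the build can be rejected automatically.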
Maintenance Testing
Maintenance testing is done on already deployed software. Deployed software needs to be enhanced, changed or migrated to other hardware, and the testing done during this enhancement, change and migration cycle is known as maintenance testing. Once the software is deployed in an operational environment it needs maintenance from time to time in order to avoid system breakdown; for example, most banking software systems need to be operational 24x7x365, so it is very necessary to do maintenance testing of software applications. In maintenance testing, the tester should consider two parts:
1. Any changes made to the software should be tested thoroughly.
2. The changes made to the software should not affect its existing functionality, so regression testing is also done.
Security Testing
Security testing tests the ability of the system/software to prevent unauthorized access to resources and data. As per Wikipedia, security testing needs to cover six basic security concepts: confidentiality, integrity, authentication, authorization, availability and non-repudiation.
Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient. It is by no means the only way of ensuring the security of information.
Integrity
A measure intended to allow the receiver to determine that the information provided to it is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication to form the basis of an algorithmic check, rather than encoding all of the communication.
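The "added information forming an algorithmic check" can be sketched with an HMAC: a short tag computed over the message and attached to it, which the receiver recomputes to detect tampering. The key and messages below are assumptions for the example.

```python
# Integrity check sketch: an HMAC tag is appended to the message so the
# receiver can verify it has not been altered, without encrypting it.
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # assumed pre-shared between the two parties

def send(message: bytes):
    """Sender attaches an integrity tag computed over the message."""
    tag = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares it in constant time."""
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"transfer 100 to bob")
assert verify(msg, tag)                         # untampered message passes
assert not verify(b"transfer 900 to bob", tag)  # tampering is detected
```

Note how only the small tag is added to the communication; the message itself stays readable, which is exactly the contrast with confidentiality schemes drawn above.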
Authentication
The process of establishing the identity of the user. Authentication can take many forms including but not limited to: passwords, biometrics, radio frequency identification, etc.
Authorization
The process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.
Availability
Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.
Non-repudiation
A measure intended to prevent the later denial that an action happened or that a communication took place. In communication terms this often involves the interchange of authentication information combined with some form of provable timestamp.
User Acceptance Testing
Acceptance testing is performed after system testing is done and all or most of the major defects have been fixed. The goal of acceptance testing is to establish confidence that the delivered software/system meets the end user's/customer's requirements and is fit for use. Acceptance testing is done by the user/customer and some of the project stakeholders, in a production-like environment. For commercial off-the-shelf (COTS) software meant for the mass market, testing needs to be done by potential users; there are two types of acceptance testing for COTS software:
Alpha Testing
Alpha testing is mostly applicable for software developed for the mass market, i.e. commercial off-the-shelf (COTS) software, where feedback is needed from potential users. Alpha testing is conducted at the developer's site; potential users and members of the developer's organisation are invited to use the system and report defects.
Beta Testing
Beta testing is also known as field testing. It is done by potential or existing users/customers at an external site without the developers' involvement. This test is done to determine that the software satisfies the end users'/customers' needs, and to acquire feedback from the market.
What is Verification
Verification represents static testing techniques. Verification ensures that the software documents comply with the organisation's standards; it is a static analysis technique. Verification answers the question: Is the software built according to the specifications? Important verification techniques are as follows:
1. Feasibility reviews
2. Requirements reviews
3. Technical reviews
4. Walkthroughs
5. Inspections
6. Formal reviews
7. Informal reviews
8. Peer reviews
9. Static code analysis
Verification vs Validation
Verification:
1. Verification represents static testing techniques.
2. Verification ensures that the software documents comply with the organisation's standards; it is a static analysis technique.
3. Verification answers the question: Is the software built according to the specifications?

Validation:
1. Validation represents dynamic testing techniques.
2. Validation ensures that the software operates as planned in the requirements phase by executing it, running predefined test cases and comparing the output with the expected results.
3. Validation answers the question: Did we build software that is fit for purpose, and does it provide a solution to the problem?
Defect Life Cycle

1. A defect is in the open state when the tester finds a variation in the test results during testing; a peer tester reviews the defect report and the defect is opened.
2. The project team then decides whether to fix the defect in that release or to postpone it to a future release. If the defect is to be fixed, a developer is assigned the defect and the defect moves to the assigned state.
3. If the defect is to be fixed in a later release, it is moved to the deferred state.
4. Once the defect is assigned to the developer, it is fixed and moved to the fixed state; after this, an e-mail is generated by the defect tracking tool asking the tester who reported the defect to verify the fix.
5. The tester verifies the fix and closes the defect, after which the defect moves to the closed state.
6. If the defect fix does not solve the issue reported by the tester, the tester re-opens the defect and it moves to the re-opened state. It is then approved for re-repair and again assigned to a developer.
7. If the project team defers the defect, it is moved to the deferred state; the project team later decides when to fix it. It is re-opened in another development cycle, moved to the re-opened state, and then assigned to a developer to fix.
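The defect states described above form a small state machine; the sketch below models them in Python. The state names follow the text, while the exact transition rules are a simplification assumed for illustration.

```python
# Defect life cycle as a state machine: each state lists the states a
# defect is allowed to move to next (a simplified model).
ALLOWED_TRANSITIONS = {
    "open":     {"assigned", "deferred"},
    "assigned": {"fixed"},
    "fixed":    {"closed", "reopened"},
    "reopened": {"assigned"},
    "deferred": {"reopened"},
    "closed":   set(),
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.state = "open"        # every defect starts in the open state
        self.history = ["open"]

    def move_to(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

# Happy path: open -> assigned -> fixed -> closed
bug = Defect("Save button does nothing")
for state in ("assigned", "fixed", "closed"):
    bug.move_to(state)
assert bug.state == "closed"

# A fix that did not work: the tester re-opens the defect.
bug2 = Defect("Crash on startup")
for state in ("assigned", "fixed", "reopened", "assigned"):
    bug2.move_to(state)
assert bug2.state == "assigned"
```

Encoding the transitions this way makes illegal moves (for example closing a deferred defect directly) raise an error instead of silently corrupting the defect's history.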
Smoke Testing
Smoke testing is done on the software in order to verify that it is stable enough for further testing. It is a collection of written tests that are performed on the software before it is accepted for further testing. Smoke testing "touches" all areas of the application without getting too deep; the tester looks for answers to basic questions like "Does the application window open?" and "Can the tester launch the software?". The purpose is to determine whether the application is stable enough for more detailed testing to be performed. The test cases can be performed manually or using an automated tool.

A subset of the planned test cases is chosen which covers the main functionality of the software but does not bother with finer component details. A daily build and smoke test is among industry best practices. Smoke testing is done by testers before accepting a build for further testing. In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense a smoke test is the process of validating code changes before they are checked into the official source code collection or the main branch of source code.
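A smoke suite can be sketched as a handful of shallow checks that "touch" the main areas of an application without going deep; the application object here is a stand-in invented for the example.

```python
# Smoke test sketch: a few shallow checks run before a build is
# accepted for further, deeper testing.
class FakeApp:
    """Stand-in for the application under test (an assumption)."""
    def launch(self):
        return True            # "Can the tester launch the software?"

    def open_main_window(self):
        return "main window"   # "Does the application window open?"

    def login(self, user, password):
        return bool(user and password)

def smoke_test(app):
    """Returns True only if every basic check passes; no deep testing."""
    checks = [
        app.launch() is True,
        app.open_main_window() == "main window",
        app.login("tester", "secret"),
    ]
    return all(checks)

# A build is accepted for further testing only when the smoke suite passes.
assert smoke_test(FakeApp())
```

In a daily-build setup, a script like this runs right after the build; a False result rejects the build before any tester spends time on detailed test cycles.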
Non-functional testing focuses on the software's performance, i.e. how well it works.
3. Big Bang Approach

In the big bang approach, most or all of the developed modules are coupled together to form a complete system, which is then used for integration testing.
Globalization Testing
In the current scenario of the global marketplace, it is very important to make software products which are sensitive to the different locations and cultural expectations of users around the world. Most non-English-speaking customers have an operating system in their native language, and they expect that computer programs will not fail on their computers; they also want the software to be available in their native language. Software companies which ensure that their products are easily acceptable in different regions and cultures will definitely gain more market share than companies which do not focus on globalization.

Globalization is the term used to describe the process of producing software that can run independently of its geographical and cultural environment. Localization is the term used to describe the process of customizing globalized software for a specific environment. For simplicity, the term globalization will be used here to describe both concepts, for in the broadest sense of the term, software is not truly globalized unless it is localized as well. There are a lot of aspects that must be considered when producing globalized software. Some of them are as follows:
1. Sensitivity to the English vocabulary
2. Date and time formatting
3. Currency handling
4. Paper sizes for printing
5. Address and telephone number formatting
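Date and time formatting from the list above is a concrete example of why this matters: the same date renders differently per region, so software must not hard-code one pattern. The US and European format strings below are assumptions chosen to illustrate the ambiguity.

```python
# Globalization pitfall sketch: the same calendar date formatted three
# ways. "03/04" means March 4 in the US but April 3 in much of Europe.
from datetime import date

d = date(2021, 3, 4)

us_style = d.strftime("%m/%d/%Y")        # month first (US convention)
european_style = d.strftime("%d/%m/%Y")  # day first (common in Europe)
iso_style = d.isoformat()                # unambiguous ISO 8601

print(us_style, european_style, iso_style)
assert us_style == "03/04/2021"
assert european_style == "04/03/2021"
assert iso_style == "2021-03-04"
```

Globalized software typically stores and exchanges dates in an unambiguous form such as ISO 8601 and applies the region-specific pattern only at display time.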
Static Testing
Static testing is the form of software testing where you do not execute the code being examined; the technique could be called a non-execution technique. It primarily consists of syntax checking of the code, or manually reviewing the code, requirements documents, design documents, etc. to find errors. From the black-box testing point of view, static testing involves reviewing requirements and specifications, with an eye toward completeness and appropriateness for the task at hand. This is the verification portion of verification and validation. The fundamental objective of static testing is to improve the quality of software products by finding errors in the early stages of the software development life cycle. The following are the main static testing techniques used:
1. Informal Reviews
2. Walkthroughs
3. Technical Reviews
4. Inspections
5. Static Code Analysis
Dynamic Testing
Dynamic testing is used to test the software by executing it. Dynamic testing is also known as dynamic analysis; this technique is used to test the dynamic behavior of the code. In dynamic testing the software must be compiled and executed; this analyses variable quantities like memory usage, CPU usage, response time and the overall performance of the software. Dynamic testing involves working with the software: input values are given and the output values are checked against the expected output. Dynamic testing is the validation part of verification and validation. Some dynamic testing techniques are given below:
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
Beta Testing
1. Beta testing is done after alpha testing.
2. Testing done by potential or existing users, customers and end users at an external site, without the developers' involvement, is known as beta testing.
3. It is operational testing, i.e. it tests whether the software satisfies the business or operational needs of the customers and end users.
4. Beta testing is done for external acceptance testing of COTS (commercial off-the-shelf) software.
5. It is done to acquire feedback from the mass market; for example, the beta testing of Gmail.
Exploratory Testing
As the name suggests, exploratory testing is about exploring the software and finding out more about it. In exploratory testing the tester focuses on how the software actually works: testers do minimal planning and maximum execution, by which they get an in-depth idea of the software's functionality, and once the tester starts gaining insight into the software, he can decide what to test next. As per Cem Kaner, exploratory testing is "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project." Exploratory testing is mostly performed by skilled testers, and is mostly used when the requirements are incomplete and the time to release the software is short.
Install Testing
Install testing is done to ensure that the software and its components get installed successfully and function properly after installation. While doing installation testing, the test engineer should keep the following points in mind:
1. The product installer should check for the pre-requisites needed by the software.
2. The installer should offer the user a default install location for the software.
3. The installer should allow the user to change the install location.
4. Installation over the network should be supported.
5. Try installing the software without administrative privileges.
6. Installation should be successful on all the supported platforms.
7. The installer should give the option to repair or uninstall.
8. Un-installation should complete successfully: all the installed files should be cleaned up from the install location, and registry entries should be removed.
9. Silent installation should be successful.
10. Native installation should be successful.
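Check 8 in the list above (clean-up after un-installation) can be sketched as an automated verification. The install directory and file names below are illustrative only; a real test would point at the product's actual install location.

```python
# Hedged sketch of install-testing check 8: after un-installation, all
# installed files should be cleaned up from the install location.
# A temporary directory simulates the install location here.
import os
import shutil
import tempfile

# Simulate an installation into a temporary install location.
install_dir = tempfile.mkdtemp(prefix="myapp_")
for name in ("app.exe", "config.ini", "readme.txt"):
    open(os.path.join(install_dir, name), "w").close()

assert len(os.listdir(install_dir)) == 3  # files were "installed"

# Simulate un-installation: the whole install location is removed.
shutil.rmtree(install_dir)

# Verification: the install location should no longer exist.
print(os.path.exists(install_dir))  # False
```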
Silent Installation
A silent installation does not send messages to the console. Testing it verifies that messages and errors are stored properly in log files. Response files are used to provide input data for a silent installation.
Native Installation
A native installation installs the software application using the operating system's installation utilities. Testing it verifies that the installation of native packages (e.g. rpm files on Linux, Solaris pkg files, AIX installp files) is successful on the Linux and Unix platforms.
Interactive Installation
An interactive installation is the GUI installation of a software application: the user sees an installation screen and provides the installation parameters.
Interoperability Testing
Interoperability means the capability of the software to interact with other systems, applications or software components. Interoperability testing means testing the software to check whether it can inter-operate with other software components, applications or systems. As per the IEEE Glossary, interoperability is: "The ability of two or more systems or components to exchange information and to use the information that has been exchanged."
Error guessing
This is a test design technique where the experience of a tester is used to find the components of the software where defects might be present. It is mostly done by experienced testers who can use their past experience to find defects in software. Error guessing has no rules for testing; it relies only on the tester's previous skills. In error guessing, testers think of situations where the software is likely to fail. For example:
1. Division by zero.
2. Pressing the submit button on a form without filling in any entries.
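The two guesses above can be turned into concrete test cases. The functions `safe_divide` and `submit_form` below are hypothetical stand-ins for the code under test, invented for illustration.

```python
# Sketch of error guessing: test cases derived from the two guesses above.

def safe_divide(a, b):
    # Hypothetical function under test; rejects the divide-by-zero case.
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def submit_form(fields):
    # Hypothetical form handler; rejects a submission with no entries filled.
    if not any(fields.values()):
        return "error: empty form"
    return "submitted"

# Guess 1: division by zero should be handled, not crash uncontrolled.
try:
    safe_divide(10, 0)
    result1 = "no error"
except ValueError:
    result1 = "handled"

# Guess 2: submitting a form without filling in any entries.
result2 = submit_form({"name": "", "email": ""})

print(result1, result2)  # handled error: empty form
```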
Alpha Testing
Alpha testing is done to gain confidence in the product, or for internal acceptance testing. It is done at the developer's site by an independent test team, potential end users and stakeholders. Alpha testing is mostly done for COTS (Commercial Off the Shelf) software to ensure internal acceptance before moving the software on to beta testing.
WhiteBox Testing
1. WhiteBox testing tests the structure of the software or a software component. It checks what is going on inside the software.
2. Also known as clear box testing, glass box testing or structural testing.
3. Requires knowledge of the internal code structure and good programming skills.
4. It tests paths within a unit and also the flow between units during their integration.
1. READ X
2. READ Y
3. IF X > Y
4.    PRINT "X is greater than Y"
5. ENDIF
Let us see how we can achieve 100% code coverage for this pseudo-code. We can get 100% statement coverage with just one test set, in which variable X is always greater than variable Y. TEST SET 1: X=10, Y=5. A statement may be a single line, or it may be spread over several lines; a single line can also contain more than one statement. Some code coverage tools group statements that are always executed together into a block and consider them as one statement.
1. READ X
2. READ Y
3. IF X > Y
4.    PRINT "X is greater than Y"
5. ENDIF
To get 100% statement coverage, only one test case is sufficient for this pseudo-code: TEST CASE 1: X=10, Y=5. However, this test case won't give you 100% decision coverage, as the FALSE outcome of the IF statement is not exercised. To achieve 100% decision coverage we need to exercise the FALSE outcome of the IF statement, which is covered when X is less than Y. So the final test set for 100% decision coverage will be: TEST CASE 1: X=10, Y=5; TEST CASE 2: X=2, Y=10. Note: 100% decision coverage guarantees 100% statement coverage, but 100% statement coverage does not guarantee 100% decision coverage.
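The pseudo-code and the two test cases above can be rendered as a small runnable check. TEST CASE 1 (X=10, Y=5) executes every statement but only the TRUE branch of the IF; TEST CASE 2 (X=2, Y=10) exercises the FALSE branch, completing decision coverage.

```python
# Statement vs. decision coverage for the pseudo-code above.

def compare(x, y):
    took_true_branch = False
    if x > y:
        print("X is greater than Y")
        took_true_branch = True
    return took_true_branch

# Running both test cases exercises both outcomes of the decision.
branch_outcomes = {compare(10, 5), compare(2, 10)}

# Both TRUE and FALSE outcomes seen -> 100% decision coverage.
print(branch_outcomes == {True, False})  # True
```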
In MC/DC (Modified Condition/Decision Coverage), each condition must be shown, at least once, to independently affect the decision outcome.
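For a decision with two conditions, say (A AND B), a minimal MC/DC test set has three cases: each condition is toggled while the other is held fixed, and each toggle flips the outcome. A sketch:

```python
# Sketch of MC/DC for the decision (A and B). The three cases below show each
# condition independently affecting the outcome: toggling A with B held True
# flips the result, and toggling B with A held True flips it as well.

def decision(a, b):
    return a and b

# (A, B, expected outcome)
mcdc_tests = [
    (True,  True,  True),   # baseline: decision is True
    (False, True,  False),  # toggling A alone flips the outcome
    (True,  False, False),  # toggling B alone flips the outcome
]

results = [decision(a, b) == expected for a, b, expected in mcdc_tests]
print(all(results))  # True
```

Note that only three of the four possible (A, B) combinations are needed, which is what makes MC/DC cheaper than exhaustive multiple-condition coverage.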
functionality. These record-and-playback tools record test cases in the form of scripts in languages like TSL, VB Script, JavaScript or Perl.

Advantages of Black Box Testing
- The tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving some paths unexercised during this testing.

Methods of Black Box Testing:

Graph Based Testing Methods: Every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph each object relationship is identified, and test cases are written accordingly to discover the errors.

Error Guessing: This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases that cover the likely failure paths of the application.

Boundary Value Analysis: Many systems have a tendency to fail on the boundary, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique where the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside the boundaries, typical values and error values.
- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min+1, max, max-1 and typical (nominal) values.

BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.

Advantages of Boundary Value Analysis
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits (Min-1, Min, Min+1, Nom, Max-1, Max, Max+1).
2. Forces attention to exception handling.
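The value selection described above can be sketched as a small generator: standard BVA picks min, min+1, a nominal value, max-1 and max for each variable, while the robustness variant adds min-1 and max+1, which lie outside the valid range.

```python
# Sketch of BVA value selection for a variable with range [lo, hi].

def bva_values(lo, hi):
    """Standard BVA: min, min+1, nominal, max-1, max."""
    nominal = (lo + hi) // 2
    return [lo, lo + 1, nominal, hi - 1, hi]

def robustness_values(lo, hi):
    """Robustness testing: BVA plus values beyond the limits."""
    return [lo - 1] + bva_values(lo, hi) + [hi + 1]

# Example: an input field that accepts values from 1 to 100.
print(bva_values(1, 100))         # [1, 2, 50, 99, 100]
print(robustness_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```

With n variables, varying one variable over its five boundary values while the others stay nominal gives the 4n + 1 test cases mentioned above (the all-nominal case is shared).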
Limitations of Boundary Value Analysis: boundary value testing is efficient only for variables that have fixed boundary values.

Equivalence Partitioning: Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. How is this partitioning performed while testing?
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.

Comparison Testing: In this method, different independent versions of the same software are compared to each other during testing.

Reference - http://www.softrel.org/stgb.html
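The first equivalence-partitioning rule above can be illustrated with a range input. The age limits 18..60 below are an invented example, not from the text: the range gives one valid class and two invalid classes, and one representative value is drawn from each.

```python
# Sketch of equivalence partitioning rule 1: an input range (age 18..60)
# yields one valid and two invalid equivalence classes.

def classify_age(age):
    # Hypothetical input validation under test.
    if age < 18:
        return "invalid: below range"
    if age > 60:
        return "invalid: above range"
    return "valid"

# One representative test value per equivalence class.
representatives = {"below": 10, "valid": 35, "above": 70}

print(classify_age(representatives["below"]))  # invalid: below range
print(classify_age(representatives["valid"]))  # valid
print(classify_age(representatives["above"]))  # invalid: above range
```

Any other value from the same class (say, age 40 instead of 35) is expected to behave the same way, which is what lets one test case stand in for the whole class.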