Pre-Production Testing: Best Practices & Guidance
Change Record
Date Author Version Change reference
Reviewers
Name Version approved Position Date
Table of Contents
1 Introduction
1.1 Purpose
1.2 Scope
1.3 Overview
1 Introduction
The key to a successful deployment (short of writing a bug-free product) is careful planning and execution of the testing, while establishing environments and test cases that represent the real world.
1.1 Purpose
These pre-deployment testing guidelines are designed to give the QA team procedural and tactical insight into the need for, planning, execution, and analysis of the different stages on the road to a successful deployment, drawing on industry and MS tactics and best practices.
1.2 Scope
This document is intended to provide guidance to a QA test team responsible for the creation and
execution of automated tests.
The document assumes functional testing is already a working practice within XXXXX, and thus focuses on additional aspects of the stabilization effort, namely setting up the correct environments, procedures for transitioning between environments, and performance testing.
The guidelines are a set of recognized industry-standard practices and procedures intended to provide project-level guidance to testers.
1.3 Overview
The document is divided into two main parts. The first part deals with testing environments (and environment transitions), and the second part deals with the different aspects of performance testing.
The Testing Environments section provides an overview of the different environments for the complete development cycle (as well as recommended setups for different project sizes). It then follows with recommendations for setting up testing and staging environments.
The Performance Test Guidelines are organized into an overview of the “who, what, when, where, why” questions of performance testing, followed by methodologies and considerations for executing the various types of tests. A list of proposed standards, measures, and metrics is included after the test types, followed by do/don't rules that ease the successful execution of performance testing projects.
2 TESTING ENVIRONMENTS
2.1 Overview
The ways in which an application is exercised at the various stages of its life cycle and deployment
schedule require several different parallel instantiations of the application. These instantiations or
environments go by many names in different organizations, but the following names are commonly
used:
The development environment is where unit level development is done. Therefore,
software and data structures tend to be volatile in this environment. It is here that the
developer is at liberty to modify the application module under development and its test
environment to suit unit level development needs. In this environment, developers typically
work solely on development workstations where they often have full administrative rights.
The development environment is a “sandbox” environment where developers are free to
use various application infrastructure elements without the constraints, for instance, of the
security that will exist in other environments. This allows developers to focus on building
application logic and learning how best to use the various application infrastructure
elements available without limitations imposed by the environment.
The integration environment is where application units (software modules, data schemas,
and data content) are first assembled and then tested as an integrated suite. This
environment is also volatile but is much less so than the development environment. The
objective here is to force coherence among the independently developed modules or
schemas. This is typically an environment where developers do not have all the
permissions that they had in the development environment. As a result, this is often the first
time that issues such as installation, configuration, and security requirements for the
eventual target infrastructure are addressed.
The test environment is where a “release candidate” grade version of the application is
run through testing exercises. It is as tightly controlled as the production environment and
also substantially less volatile than integration. The objective here is to assume relative
stability of the integrated application and test its stability, correctness, and performance.
This environment is usually off limits to developers. Deployment and updates to
applications in this environment are controlled by the test team and are often done by a
member of the build or infrastructure teams.
The staging environment is where an application resides after it has been fully tested in
the test environment. It provides a convenient location from which to deploy to the final
production environment. Because the staging environment is often used to perform final
tests and checks on application functionality, it should resemble the production environment
as closely as possible. For example, the staging environment should not only have the
same operating systems and applications installed as the production computers, but it
should also have a similar network topology (which the testing environment might not
have). Usually, the staging network environment mimics the production environment in all
respects except that it is a scaled down version (for example, it may have fewer cluster
members or fewer processors than your production servers).
The production environment is where the application is actually used by an organization;
it is the least volatile and most tightly controlled.
[Figure: environment progression: Development Environment → Integration Environment → Test Environment → Staging Environment → Production Environment]
XXXXX already has the notion of most, if not all, of these environments. One point that should be noted is the difference in responsibility between the Testing and Staging environments. The Testing environment is the responsibility of the QA team, while the Staging environment is the responsibility of the Infrastructure team (whose responsibility, as mentioned above, is to make this environment as close as possible to the production environment).
[Figures: recommended environment setups by project size. Minimal environment: Development Environment / Test Environment (shared), Staging Environment, Production Environment. Medium environment: Build Machine, Release Server, Development Environment, Staging Environment, Production Environment.]
2.4.2 Hardware
Network – Not only is it important to use the same equipment that is used in the production environment; it is also important to have the same networking loads and behavior that the production environment has. This can be achieved by using appliances (such as Shunra's StormAppliance) and traffic generators (such as Ixia's Chariot). Such tools make it possible to simulate the loads generated by other applications that are also deployed on the users' machines (see the sketch after this list).
Workstations – It is recommended to have at least one of each type of target workstation. This allows building a profile of the application's usage on the target platforms and understanding the limitations of the system.
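As a rough illustration of the network-load point above, the following is a minimal sketch of a background-traffic generator written in Python. It is not a substitute for a dedicated appliance; the target host, port, packet size, and rate are all hypothetical placeholders to be tuned against observed production traffic.

```python
# Minimal sketch: generate steady UDP background traffic toward a test host
# to approximate the network load that other applications would add.
# The host, port, packet size, and rate are hypothetical placeholders.
import socket
import time

TARGET_HOST = "192.0.2.10"   # placeholder lab address (TEST-NET-1 range)
TARGET_PORT = 9999           # placeholder sink port on a lab machine
PACKET_SIZE = 1024           # bytes per datagram
PACKETS_PER_SECOND = 500     # tune to match observed production load

def generate_background_traffic(duration_seconds: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * PACKET_SIZE
    interval = 1.0 / PACKETS_PER_SECOND
    deadline = time.time() + duration_seconds
    while time.time() < deadline:
        sock.sendto(payload, (TARGET_HOST, TARGET_PORT))
        time.sleep(interval)
    sock.close()

if __name__ == "__main__":
    generate_background_traffic(duration_seconds=60)
```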
2.4.3 Software
The staging environment differs from the testing environment in several software aspects:
No "Debug"-mode code is used.
No development tools; additionally, the use of debugging tools is limited – see below.
Whenever possible, install any test-supporting tool (i.e., testing-related software that is not the application itself) on machines separate from those used to run the tests themselves.
Unfortunately, the system under test is most likely not the only application that will run on the end user's machine. It is therefore worthwhile to install common software used by end users and verify the way the system works with the additional load, as in the rough sketch below.
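By way of illustration only, the following sketch launches stand-in end-user applications in the background and then times requests against the system under test. The application commands and the test URL are hypothetical placeholders.

```python
# Rough sketch: start typical end-user applications, then measure how the
# system under test responds with that extra load present.
# Application commands and the test URL are hypothetical placeholders.
import subprocess
import time
import urllib.request

BACKGROUND_APPS = [
    ["notepad.exe"],   # stand-ins for whatever the real end-user
    ["calc.exe"],      # workstation typically runs
]
TEST_URL = "http://testserver.example/app/health"  # placeholder endpoint

def timed_request(url: str) -> float:
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=30).read()
    return time.perf_counter() - start

procs = [subprocess.Popen(cmd) for cmd in BACKGROUND_APPS]
try:
    samples = [timed_request(TEST_URL) for _ in range(20)]
    print(f"mean response with background load: {sum(samples) / len(samples):.3f}s")
finally:
    for p in procs:
        p.terminate()
```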
3.1 Overview
3.1.1 What is Performance Testing?
The purpose of performance testing a solution is to:
Optimize the configuration of hardware, software and network environments of a system
under anticipated user and/or transaction load simulations.
Optimize application performance by identifying system bottleneck symptoms and their root
causes.
Reduce the risk of deploying a low-performance system that does not meet user/system
requirements.
3.4.1.2 Don’ts
1. Don’t use more than one (1) client to access the system during the initial performance profile test (a single-client sketch follows this list).
2. Don’t execute the test from the same hardware that the Application resides on.
3. Don’t assume performance requirements are stated explicitly in the Design Packs or
anywhere else.
4. Don’t performance-test code that has not passed regression testing.
5. Don’t outsource performance profiling if any production-level hardware is available.
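As a concrete illustration of the single-client rule in item 1, the following minimal sketch profiles one transaction with one client and reports simple percentile statistics. The endpoint URL and sample count are hypothetical placeholders.

```python
# Minimal single-client performance profile: one client, one transaction,
# repeated, with simple percentile statistics over the observed latencies.
# The endpoint URL and sample count are hypothetical placeholders.
import statistics
import time
import urllib.request

URL = "http://testserver.example/app/transaction"  # placeholder endpoint
SAMPLES = 100

durations = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=30).read()
    durations.append(time.perf_counter() - start)

durations.sort()
print(f"min={durations[0]:.3f}s  "
      f"median={statistics.median(durations):.3f}s  "
      f"p95={durations[int(0.95 * len(durations))]:.3f}s  "
      f"max={durations[-1]:.3f}s")
```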
3.5.1.2 Don’ts
1. Don’t execute a load test on a functional piece until the performance profile test has been
completed.
2. Don’t allow the test system's resources to reach 85% CPU/memory utilization, as this will skew results (a watchdog sketch follows this list).
3. Don’t execute load tests on everything!
4. Don’t execute load tests in live production or environments that have other network traffic.
5. Don’t try to break the system in a load test.
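The 85% ceiling in item 2 is easy to police automatically. The following is a minimal watchdog sketch using the open-source psutil library; the threshold mirrors the rule above, and the alert action is left as a simple print.

```python
# Watchdog sketch: poll CPU and memory on the load-generating machine and
# flag the run as soon as either crosses the 85% ceiling, since results
# collected beyond that point are skewed. Run alongside the load test;
# stop with Ctrl+C. Requires the psutil package (pip install psutil).
import psutil

THRESHOLD_PERCENT = 85.0

def watch_utilization(poll_seconds: float = 1.0) -> None:
    while True:
        cpu = psutil.cpu_percent(interval=poll_seconds)  # blocks poll_seconds
        mem = psutil.virtual_memory().percent
        if cpu > THRESHOLD_PERCENT or mem > THRESHOLD_PERCENT:
            print(f"WARNING: cpu={cpu:.0f}% mem={mem:.0f}% -- "
                  "load-test results from this point on are suspect")

if __name__ == "__main__":
    watch_utilization()
```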
3.5.2 10-step Load Test Guide
A Load test scenario will be developed from the Performance Profile scripts and new recordings of
protocol or transactional users.
1. Record new test scripts with a test automation tool by playing successful GUI regression
test scripts or performance profile scripts while recording the protocol and server
communication at the same time.
2. Parameterize all hard-coded data, URLs, IP addresses, user names, passwords, counterparties, etc. (steps 2, 3, and 7 are illustrated in the sketch after this list).
3. Correlate all session IDs, database calls, certificate keys, and other dynamic user/system-specific data.
4. For a functional release (R1), wait until 80% of the functionality has passed regression
testing (week 5) and performance profiling before designing & executing the scenario, so
that you have a valuable spread of functionality to generate production-diverse load levels.
5. Execute the load test in an isolated environment while monitoring the resources of all involved devices.
6. Run at least one (1) GUI user during a load test to capture end-user response time under load.
7. Introduce users to the system 1 to 5 at a time and keep all users on the system until test completion.
8. Increase users to 125% of the anticipated maximum to verify the application's utilization clearance.
9. Present recommendations covering load statistics, resource utilization, throughput, and user response time.
10. Repeat as necessary.
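Steps 2, 3, and 7 translate directly into most load-testing tools. Below is a minimal sketch using the open-source Locust tool; the endpoints, the users.csv credentials file, and the session_token field name are hypothetical placeholders.

```python
# Minimal Locust sketch illustrating step 2 (parameterized credentials),
# step 3 (correlated session token), and step 7 (gradual ramp-up, which
# Locust controls from the command line via --spawn-rate).
# Endpoints, CSV layout, and the token field name are hypothetical.
# Run: locust -f loadtest.py --users 125 --spawn-rate 5 --host http://testserver.example
import csv
import itertools

from locust import HttpUser, task, between

# Step 2: parameterize hard-coded user names/passwords from a data file.
with open("users.csv", newline="") as f:
    CREDENTIALS = itertools.cycle(list(csv.DictReader(f)))

class AppUser(HttpUser):
    wait_time = between(1, 5)  # think time between transactions

    def on_start(self):
        creds = next(CREDENTIALS)
        response = self.client.post("/login", json={
            "username": creds["username"],
            "password": creds["password"],
        })
        # Step 3: correlate the dynamic session token rather than hard-coding it.
        self.token = response.json().get("session_token", "")

    @task
    def view_account(self):
        self.client.get("/account",
                        headers={"Authorization": f"Bearer {self.token}"})
```

Ramp-up (step 7) and the 125% ceiling (step 8) are controlled from the command line: --spawn-rate 5 adds users up to five at a time, and --users 125 would take the run to 125% of a hypothetical 100-user maximum.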
3.6 Stress Test Strategy
3.6.1 Do’s & Don’ts
3.6.1.1 Do’s
1. Do stress test in an isolated environment, ALWAYS.
2. Do stress test critical components of the system to assess their independent thresholds.
3. Do use a Protocol Client to simulate multiple users executing the same task.
4. Do limit resources during a stress test to simulate stressful production levels.
5. Do use the test cases from Performance and Load Testing to build the test scenario.
6. Do try to find the system’s breaking points, realistically.
7. Do use rendezvous points in tests to create stressful situations (100 users logging in simultaneously); a barrier-based sketch follows this list.
8. Do correlate/parameterize session IDs and SQL calls to alleviate data & database contention issues.
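Where the test tool has no built-in rendezvous feature, the effect can be approximated in plain Python with a barrier, as in this minimal sketch; the login URL and user count are hypothetical placeholders.

```python
# Rendezvous sketch: hold all virtual users at a barrier, then release them
# simultaneously to create a login spike. The login URL is a placeholder.
import threading
import urllib.request

USER_COUNT = 100
rendezvous = threading.Barrier(USER_COUNT)

def simulated_user(user_id: int) -> None:
    rendezvous.wait()  # every thread blocks here until the 100th arrives
    # All users fire the (hypothetical) login request at the same instant.
    urllib.request.urlopen("http://testserver.example/login", timeout=30)

threads = [threading.Thread(target=simulated_user, args=(i,))
           for i in range(USER_COUNT)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```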
3.6.1.2 Don’ts
1. Don’t stress test in production.
2. Don’t stress test code that is not successfully regression tested and complete (no
workarounds).
3. Don’t use stress test results to make financial decisions without the input of Load & Volume
testing.
4. Don’t stress test until Performance Profiling is complete.
5. Don’t stress test just to “stress test”.
6. Don’t stress test everything.
7. Don’t stress test unrealistic scenarios.
8. Don’t stress test in production, EVER.
3.7.1.2 Don’ts
1. Don’t run volume tests in a production environment.
2. Don’t volume-test code that is not successfully regression tested and complete (no
workarounds).
3. Don’t volume-test until all performance profiling is complete.
4. Don’t volume-test until all load & stress testing is complete.
5. Don’t volume-test without production grade system hardware available and implemented.
3.8.1.2 Don’ts
1. Don’t fault-tolerance-test in production, EVER.
2. Don’t stop the test to recover the system (end-users shouldn’t be affected, record entire
experience).
3. Don’t fault-tolerance-test code that is not successfully regression tested and complete.
4. Don’t fault-tolerance-test until performance profiling & load testing is complete.
5. Don’t fault-tolerance-test until production hardware & network environment is available &
complete.