
Interview Questions For Software Testers

This document contains questions related to test automation, load testing, WinRunner, and general testing topics. Some of the key questions covered include how to plan and implement test automation, choose automation tools, handle data-driven testing, perform load testing, use the WinRunner tool, and define basic testing concepts like test plans, cases, and results.

Uploaded by: api-3719303
Copyright: © Attribution Non-Commercial (BY-NC)

Test Automation interview questions:

1. What automated testing tools are you familiar with?


2. How did you use automated testing tools in your job?
3. Describe a problem you had with an automated testing tool.
4. How do you plan test automation?
5. Can test automation improve test effectiveness?
6. What is data-driven automation?
7. What are the main attributes of test automation?
8. Does automation replace manual testing?
9. How will you choose a tool for test automation?
10. How will you evaluate a tool for test automation?
11. What are the main benefits of test automation?
12. What could go wrong with test automation?
13. How would you describe testing activities?
14. What testing activities might you want to automate?
15. Describe common problems of test automation.
16. What types of scripting techniques for test automation do you know?
17. What are the principles of good testing scripts for automation?
18. What tools are available for support of testing during software development life
cycle?
19. Can the activities of test case design be automated?
20. What are the limitations of automating software testing?
21. What skills are needed to be a good test automator?
22. How do you determine whether a tool works well with your existing system?
23. What are some of the common misconceptions when implementing an automated
testing tool for the first time?
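A minimal sketch of the data-driven automation asked about in question 6. The `is_valid_username` function and the inline CSV data are hypothetical stand-ins for an application under test and an external data file; the point is that test logic and test data are kept separate:

```python
import csv
import io

# Hypothetical function under test: validates a login-field length.
def is_valid_username(name):
    return 3 <= len(name) <= 12

# In a real data-driven suite this table would live in an external
# CSV or spreadsheet; it is inlined here so the sketch is self-contained.
TEST_DATA = """username,expected
ab,False
abc,True
averylongusername,False
tester,True
"""

def run_data_driven_tests():
    """Run the same test logic once per data row."""
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        expected = row["expected"] == "True"
        actual = is_valid_username(row["username"])
        results.append((row["username"], actual == expected))
    return results

if __name__ == "__main__":
    for name, passed in run_data_driven_tests():
        print(name, "PASS" if passed else "FAIL")
```

Adding a new test case then means adding a data row, not writing new script code, which is the main attraction of this scripting technique.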

WinRunner interview questions


1. Give a one-line description of WinRunner.
2. How would you define WinRunner in your own words?
3. For which types of applications is WinRunner suitable?
4. What types of applications can WinRunner test?
5. What's the WinRunner version number you used for your applications?
6. What are all the different types of recordings available in WinRunner?
7. When do you go for Context Sensitive and Analog recordings? What's the difference
between them?
8. What are all the Limitations & Advantages of WinRunner?
9. Where have you found that you can't use WinRunner for automation?
10. What types of applications have you worked on using WinRunner?
11. What's your comfort level in using WinRunner?
12. What is meant by Synchronization? How do you implement it in WinRunner?
13. What is meant by checkpoints? What types of checkpoints are there? In what
situations would you use them?
14. What are all the different platforms on which WinRunner can be used?
15. Any knowledge of Test Director?
16. Difference between WinRunner and Test Director?
17. What databases can Test Director reside on?
18. Explain the project tree in Test Director.
19. Advantages of WinRunner over other market tools silk, robot etc.?
20. How does WinRunner identify GUI objects?
21. What's the use of the GUI Map Editor?
22. What are WinRunner's GUI map modes?
23. What are the two GUI map modes available in WinRunner?
24. What is the use of the RapidTest Script wizard?
25. How will you synchronize tests using WinRunner? When should you synchronize?
Synchronize settings?
26. How do you check GUI objects?
27. How do you check a bitmap?
28. What is meant by GUI Spy?
29. Besides Record and Replay what else can be done using WinRunner?
30. What are different types of running a test?
31. When do you use Verify/Debug/Update Modes?
32. When do you use breakpoints?
33. What is a toggle breakpoint? How does it differ from a breakpoint?
34. What are Step and Step Into?
35. What's the role of the GUI Map Editor? (It connects the logical name in the
script to the physical attributes of the object in the GUI map.)
36. What is meant by Function Generator? (F7).
37. How do you load GUI Map Editor?
38. What is an injector in LoadRunner?
39. What is TSL? What 4GL is it similar to?
40. How do you program tests with TSL?
41. How do you invoke an application using TSL?
42. What is Module? What's Compiled Module?
43. Explain data parameterization in WinRunner.
44. What are user-defined functions? What are the different types?
45. What is a function? What types of functions are there?
46. What user-defined functions are available in WinRunner?
47. Name a couple of standard web functions found in the Function Generator and
explain their purpose.
48. Where do you use Private/Public function in your script?
49. How do you forcibly capture an object using WinRunner (when WinRunner is not
able to identify it)?
50. Can you test DB using WinRunner?
51. What are all the different databases that WinRunner can support?
52. How do you set a Query to get/fetch data from the DB?
53. What does TSL look like?
54. What default code does WinRunner generate when you start an
application?
55. What other code can you write or call within TSL?
56. What different languages can be called using TSL in between the scripts?
57. How do you handle an Exception in WinRunner?
58. What types of exceptions are available in WinRunner?
59. How do you define an Exception for complete application or for a particular
function?
60. How do you run tests on a new version of WinRunner?
61. What are data-driven tests, and how do you create them in WinRunner?
62. What's the File Format used in Data Table?
63. How do you link a Data Table in your script?
64. How do you read text from an application?
65. What is a batch test? How do you program a batch test?
66. What happens when user interface changes?
67. Is load testing possible using WinRunner?
68. Does WinRunner help you in web testing?
69. How do you manage test data and test results?
70. Questions on TSL: How to generate Functions?
71. Running tests from the command line?
72. Explain WinRunner Testing Modes?
73. Have you completed the CPS exam? Which one?
74. Write a short compiled module which selects random numbers; and what function is
used to call your customized compiled module?
75. What's the purpose of the wrun.ini file?
All the answers to these questions are given in the WinRunner User Guide (PDF) and
WinRunner Tutorial (PDF), which come with the licensed version of WinRunner.

Test automation readiness checklist:

1. Is automation (or testing) a label for other problems?

No = 15 points

2. Are testers trying to use automation to prove their prowess?

No = 10 points

3. Can testability features be added to the product code?

Yes = 10 points

4. Do testers and developers work cooperatively and with mutual respect?

Yes = 15 points

5. Is automation developed on an iterative basis?

Yes = 10 points

6. Have you defined the requirements and success criteria for automation?

Yes = 15 points

7. Are you open to different concepts of what test automation can mean?

Yes = 10 points

8. Is test automation led by someone with an understanding of both programming
and testing?

Yes = 15 points
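The checklist above assigns points per answer, totalling 100. As a compact summary (the wording of each entry is paraphrased from the questions; the weights are as given), the scoring can be written as a small function:

```python
# Each entry: (desirable situation, points awarded when true).
# Weights sum to 100, matching the checklist above.
CHECKLIST = [
    ("automation is not a label for other problems", 15),
    ("testers are not using automation to prove prowess", 10),
    ("testability features can be added to product code", 10),
    ("testers and developers cooperate with mutual respect", 15),
    ("automation is developed iteratively", 10),
    ("requirements and success criteria are defined", 15),
    ("open to different concepts of test automation", 10),
    ("led by someone who understands programming and testing", 15),
]

def readiness_score(answers):
    """answers: list of 8 booleans, True = the desirable answer was given."""
    return sum(pts for (_, pts), ok in zip(CHECKLIST, answers) if ok)
```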

Load Testing interview questions:


1. What criteria would you use to select Web transactions for load testing?
2. For what purpose are virtual users created?
3. Why is it recommended to add verification checks to all your scenarios?
4. In what situation would you want to parameterize a text verification check?
5. Why do you need to parameterize fields in your virtual user script?
6. What are the reasons why parameterization is necessary when load testing the
Web server and the database server?
7. How can data caching have a negative effect on load testing results?
8. What usually indicates that your virtual user script has dynamic data that is
dependent on your parameterized fields?
9. What are the benefits of creating multiple actions within any virtual user
script?
10. What is a Load Test Results Summary Report?
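As a sketch of why parameterization matters (questions 5-7 above): if every virtual user submits identical hard-coded values, server and database caches answer repeated requests cheaply and the test understates real load. The transaction URL and field names below are hypothetical, not any particular tool's syntax:

```python
import itertools
import random

_user_ids = itertools.count(1)

def next_virtual_user():
    """Generate unique parameters for one virtual-user iteration."""
    uid = next(_user_ids)
    return {
        "username": f"vuser{uid:04d}",           # unique per iteration
        "account": random.randint(10000, 99999),  # varies the DB rows hit
    }

def build_request(params):
    # Hypothetical transaction; a real load tool would substitute these
    # parameters into the recorded request in place of hard-coded values.
    return f"GET /balance?user={params['username']}&acct={params['account']}"

if __name__ == "__main__":
    for _ in range(3):
        print(build_request(next_virtual_user()))
```

Because each virtual user now touches different data, caching effects are closer to production behavior.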

General interview questions:

1. What types of documents would you need for QA, QC, and Testing?
2. What did you include in a test plan?
3. Describe any bug you remember.
4. What is the purpose of the testing?
5. What do you like (not like) in this job?
6. What is QA (quality assurance)?
7. What is the difference between QA and testing?
8. How do you scope, organize, and execute a test project?
9. What is the role of QA in a development project?
10. What is the role of QA in a company that produces software?
11. Define quality for me as you understand it
12. Describe to me the difference between validation and verification.
13. Describe to me what you see as a process. Not a particular process, just the
basics of having a process.
14. Describe to me when you would consider employing a failure mode and effect
analysis.
15. Describe to me the Software Development Life Cycle as you would define it.
16. What are the properties of a good requirement?
17. How do you differentiate the roles of Quality Assurance Manager and Project
Manager?
18. Tell me about any quality efforts you have overseen or implemented. Describe
some of the challenges you faced and how you overcame them.
19. How do you deal with environments that are hostile to quality change efforts?
20. In general, how do you see automation fitting into the overall process of testing?
21. How do you promote the concept of phase containment and defect prevention?
22. If you come onboard, give me a general idea of what your first overall tasks will
be as far as starting a quality effort.
23. What kinds of testing have you done?
24. Have you ever created a test plan?
25. Have you ever written test cases or did you just execute those written by others?
26. What did you base your test cases on?
27. How do you determine what to test?
28. How do you decide when you have 'tested enough?'
29. How do you test if you have minimal or no documentation about the product?
30. Describe the basic elements you put in a defect report.
31. How do you perform regression testing?
32. At what stage of the life cycle does testing begin in your opinion?
33. How do you analyze your test results? What metrics do you try to provide?
34. Realising you won't be able to test everything - how do you decide what to test
first?
35. Where do you get your expected results?
36. If automating - what is your process for determining what to automate and in
what order?
37. In the past, I have been asked to verbally start mapping out a test plan for a
common situation, such as an ATM. The interviewer might say, "Just thinking out
loud, if you were tasked to test an ATM, what items might your test plan include?"
These types of questions are not meant to be answered conclusively, but they are a
good way for the interviewer to see how you approach the task.
38. If you're given a program that will average student grades, what kinds of inputs
would you use?
39. Tell me about the best bug you ever found.
40. What made you pick testing over another career?
41. What is the exact difference between integration and system testing? Give
examples from your project.
42. How did you go about testing a project?
43. When should testing start in a project? Why?
44. How do you go about testing a web application?
45. What is the difference between black-box and white-box testing?
46. What is Configuration management? Tools used?
47. What do you plan to become after, say, 2-5 years (e.g., QA Manager)? Why?
48. Would you like to work in a team or alone, why?
49. Give me 5 strong & weak points of yours
50. Why do you want to join our company?
51. When should testing be stopped?
52. What sort of things would you put down in a bug report?
53. Who in the company is responsible for Quality?
54. Who defines quality?
55. What is an equivalence class?
56. Is a "A fast database retrieval rate" a testable requirement?
57. Should we test every possible combination/scenario for a program?
58. What criteria do you use when determining when to automate a test or leave it
manual?
59. When do you start developing your automation tests?
60. Discuss what test metrics you feel are important to publish in an organization.
62. Describe the role that QA plays in the software lifecycle.
63. What should Development require of QA?
64. What should QA require of Development?
65. How would you define a "bug?"
66. Give me an example of the best and worst experiences you've had with QA.
67. How does unit testing play a role in the development / software lifecycle?
68. Explain some techniques for developing software components with respect to
testability.
69. Describe a past experience with implementing a test harness in the development
of software.
70. Have you ever worked with QA in developing test tools? Explain the
participation Development should have with QA in leveraging such test tools for QA
use.
71. Give me some examples of how you have participated in Integration Testing.
72. How would you describe the involvement you have had with the bug-fix cycle
between Development and QA?
73. What is unit testing?
74. Describe your personal software development process.
75. How do you know when your code has met specifications?
76. How do you know your code has met specifications when there are no
specifications?
77. Describe your experiences with code analyzers.
78. How do you feel about cyclomatic complexity?
79. Who should test your code?
80. How do you survive chaos?
81. What processes/methodologies are you familiar with?
82. What types of documents would you need for QA/QC/Testing?
83. How can you use technology to solve problems?
84. What types of metrics would you use?
85. How do you determine whether a tool works well with your existing system?
86. What automated tools are you familiar with?
87. How well do you work with a team?
88. How would you ensure 100% coverage of testing?
89. What problems do you have right now, or have you had in the past? How did you
solve them?
90. How would you build a test team?
91. What will you do during your first day on the job?
92. What would you like to do five years from now?
93. Tell me about the worst boss you've ever had.
94. What are your greatest weaknesses?
95. What are your strengths?
96. What is a successful product?
97. What do you like about Windows?
98. What is good code?
99. Who are Kent Beck, Dr. Grace Hopper, and Dennis Ritchie?
100. What are the basic, core practices for a QA specialist?
101. What do you like about QA?
102. What has not worked well in your previous QA experience, and what would you
change?
103. How will you begin to improve the QA process?
104. What is the difference between QA and QC?
105. What is UML, and how can it be used for testing?
106. What are CMM and CMMI? What is the difference?
107. What do you like about computers?
108. Do you have a favourite QA book? More than one? Which ones, and why?
109. What is the responsibility of programmers vs. QA?
110. What are the properties of a good requirement?
111. How do you test if you have minimal or no documentation about the product?
112. What are the basic elements in a defect report?
113. Is "a fast database retrieval rate" a testable requirement?
114. Why should you care about objects and object-oriented testing?
115. What does 100% statement coverage mean?
116. How do you perform configuration management with typical revision control
systems?
117. What is code coverage?
118. What types of code coverage do you know?
119. What tools can be used for code coverage analysis?
120. Are any graphs used for code coverage analysis?
121. At what stage of the development cycle are software errors least costly to
correct?
122. What can you tell about the project if during testing you found 80 bugs in it?
123. How do you monitor test progress?
124. Describe a few reasons why a bug might not be fixed.
125. What are the possible states of a software bug's life cycle?
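A sketch of what the statement-coverage questions above are getting at (the `classify` function is a made-up example): the tracer below records which statements of a function actually execute, and shows how a single input leaves some statements unexecuted while a small input set can reach 100% statement coverage:

```python
import sys

# Hypothetical function under test, with one conditional statement.
def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

def statement_coverage(func, inputs):
    """Return the set of executed line offsets of `func` over `inputs`."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        # Record 'line' events only for frames running the target function.
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        for x in inputs:
            func(x)
    finally:
        sys.settrace(None)  # always unhook the tracer
    return executed

if __name__ == "__main__":
    print(sorted(statement_coverage(classify, [5])))      # misses the branch body
    print(sorted(statement_coverage(classify, [5, -5])))  # covers every statement
```

Note that even 100% statement coverage says nothing about untested input combinations, which is why other coverage types (branch, condition, path) exist.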

From Cem Kaner's article "Recruiting Testers," December 1999:


1. What is software quality assurance?
2. What is the value of a testing group? How do you justify your work and budget?
3. What is the role of the test group vis-à-vis documentation, tech support, and so
forth?
4. How much interaction with users should testers have, and why?
5. How should you learn about problems discovered in the field, and what should
you learn from those problems?
6. What are the roles of glass-box and black-box testing tools?
7. What issues come up in test automation, and how do you manage them?
8. What development model should programmers and the test group use?
9. How do you get programmers to build testability support into their code?
10. What is the role of a bug tracking system?
11. What are the key challenges of testing?
12. Have you ever completely tested any part of a product? How?
13. Have you done exploratory or specification-driven testing?
14. Should every business test its software the same way?
15. Discuss the economics of automation and the role of metrics in testing.
16. Describe components of a typical test plan, such as tools for interactive products
and for database products, as well as cause-and-effect graphs and data-flow
diagrams.
17. When have you had to focus on data integrity?
18. What are some of the typical bugs you encountered in your last assignment?
19. How do you prioritize testing tasks within a project?
20. How do you develop a test plan and schedule? Describe bottom-up and top-down
approaches.
21. When should you begin test planning?
22. When should you begin testing?
23. Do you know of metrics that help you estimate the size of the testing effort?
24. How do you scope out the size of the testing effort?
25. How many hours a week should a tester work?
26. How should your staff be managed? How about your overtime?
27. How do you estimate staff requirements?
28. What do you do (with the project tasks) when the schedule fails?
29. How do you handle conflict with programmers?
30. How do you know when the product is tested well enough?
31. What characteristics would you seek in a candidate for test-group manager?
32. What do you think the role of test-group manager should be? Relative to senior
management?
Relative to other technical groups in the company? Relative to your staff?
33. How do your characteristics compare to the profile of the ideal manager that you
just described?
34. How does your preferred work style work with the ideal test-manager role that
you just described? What is different between the way you work and the role you
described?
35. Who should you hire in a testing group and why?
36. What is the role of metrics in comparing staff performance in human resources
management?
37. How do you estimate staff requirements?
38. What do you do (with the project staff) when the schedule fails?
39. Describe some staff conflicts you've handled.

1. Why did you ever become involved in QA/testing?


2. What is the testing lifecycle and explain each of its phases?
3. What is the difference between testing and Quality Assurance?
4. What is Negative testing?
5. What was a problem you had in your previous assignment (testing if
possible)? How did you resolve it?
6. What are two of your strengths that you will bring to our QA/testing team?
7. How would you define Quality Assurance?
8. What do you like most about Quality Assurance/Testing?
9. What do you like least about Quality Assurance/Testing?
10. What is the Waterfall Development Method and do you agree with all the
steps?
11. What is the V-Model Development Method and do you agree with this
model?
12. What is the Capability Maturity Model (CMM)? At what CMM level were
the last few companies you worked?
13. What is a "Good Tester"?
14. Could you tell me two things you did in your previous assignment
(QA/Testing related hopefully) that you are proud of?
15. List 5 words that best describe your strengths.
16. What are two of your weaknesses?
17. What methodologies have you used to develop test cases?
18. In an application currently in production, one module of code is being
modified. Is it necessary to re-test the whole application, or is it enough to
just test functionality associated with that module?
19. Define each of the following and explain how each relates to the other: Unit,
System, and Integration testing.
20. Define Verification and Validation. Explain the differences between the two.
21. Explain the differences between White-box, Gray-box, and Black-box testing.
22. How do you go about going into a new organization? How do you assimilate?
23. Define the following and explain their usefulness: Change Management,
Configuration Management, Version Control, and Defect Tracking.
24. What is ISO 9000? Have you ever been in an ISO shop?
25. When are you done testing?
26. What is the difference between a test strategy and a test plan?
27. What is ISO 9003? Why is it important?
28. What are ISO standards? Why are they important?
29. What is IEEE 829? (This standard is important for Software Test
Documentation-Why?)
30. What is IEEE? Why is it important?
31. Do you support automated testing? Why?
32. We have a testing assignment that is time-driven. Do you think automated
tests are the best solution?
33. What is your experience with change control? Our development team has
only 10 members. Do you think managing change is such a big deal for us?
34. Are reusable test cases a big plus of automated testing? Explain why.
35. Can you build a good audit trail using Compuware's QACenter products?
Explain why.
36. How important is Change Management in today's computing environments?
37. Do you think tools are required for managing change? Explain, and please list
some tools/practices that can help you manage change.
38. We believe in ad-hoc software processes for projects. Do you agree with this?
Please explain your answer.
39. When is a good time for system testing?
40. Are regression tests required or do you feel there is a better use for
resources?
41. Our software designers use UML for modeling applications. Based on their
use cases, we would like to plan a test strategy. Do you agree with this
approach, or would this mean more effort for the testers?
42. Tell me about a difficult time you had at work and how you worked through
it.
43. Give me an example of something you tried at work but did not work out so
you had to go at things another way.
44. How can one file-compare future-dated output files from a program that has
changed against the baseline run, which used the current date for input? The
client does not want to mask dates on the output files to allow compares.
Answer: Rerun the baseline and future-date the input files the same number of
days as the future-dated run of the changed program. Now run a file compare
against the baseline's future-dated output and the changed program's
future-dated output.
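The stated answer can be sketched in code. The 30-day shift and file names are hypothetical; the point is that when both runs' inputs are shifted forward by the same number of days, the output dates line up and the files can be compared directly with no masking:

```python
import filecmp
from datetime import date, timedelta

# Hypothetical: must match the number of days the changed program's
# future-dated run was shifted by.
SHIFT_DAYS = 30

def future_date(d, days=SHIFT_DAYS):
    """Shift an input date forward by the agreed number of days."""
    return d + timedelta(days=days)

def runs_match(baseline_output, changed_output):
    """Byte-for-byte compare of the two future-dated output files."""
    return filecmp.cmp(baseline_output, changed_output, shallow=False)
```

Usage: re-run the baseline program with `future_date`-shifted inputs, run the changed program the same way, then call `runs_match` on the two output files.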

Software Testing

1. What is the difference between Regression testing and retesting?


2. What errors are encountered while testing an application manually or using an
automated tool like TestDirector or WinRunner?
3. What are inspection and review?
4. What is the actual difference between re-testing and regression testing?
Explain briefly.
5. Explain Test Strategy
6. What is Traceability Matrix?
7. What are the major differences between WinRunner 6.0 and 7.0 (with internal
procedures)?
8. Which test do you perform most in your testing process: regression testing or retesting?
9. What did you do as a team leader?
10. How do we know about the build we are going to test? Where do you see this?
11. What is system testing and what are the different types of tests you perform in
system testing?
12. What is the difference between Return and return?
13. How do you test a web link which is changing dynamically?
14. What are the flaws in the waterfall model, and how do you overcome them?
15. What is defect leakage?
16. Did you ever have to deal with someone who doesn't believe in testing? What did
you do?
17. How will you write test cases for a code currently under development?
18. Describe the last project scenario and generate test cases for it?
19. If there are a lot of bugs to be fixed, which one would you resolve first?
20. How will you test a stapler?

Test Automation

1. Give me an example where you have customized an automated test script.


2. What steps have you followed while automating?
       a) Running the test manually and ensuring a "pass".
       b) Recording.
       c) Checkpoints/verification.
3. Can you automate context-sensitive help? If so, how do you do that?
4. What are the major differences between Stress testing, Load testing, Volume
testing?
5. What is the difference between quality assurance and testing?
6. What are the main attributes of test automation?
7. What is data-driven automation?
8. What is 'configuration management'?
9. What are memory leaks and buffer overflows?
10. Why does software have bugs?
11. How do you do usability testing, security testing, installation testing,
ad hoc testing, safety testing, and smoke testing?
12. Describe a problem you had with an automated testing tool.
13. How do we test for severe memory leakages?
14. What problems are encountered while testing application compatibility on
different browsers and different operating systems?
15. How will you evaluate the fields in the application under test using automation
tool?
16. How did you use automating testing tools in your job?
17. What skills needed to be a good test automator?
18. Can the activities of test case design be automated?
19. What types of scripting techniques for test automation do you know?
       What are scripting techniques? Could you describe the five techniques mentioned?

20. What tools are available for support of testing during software development life
cycle?

Test Cases

1. How will you prepare test cases?


2. Write the test cases on ATM Transactions?
3. What is meant by Hot Keys?
4. How is test case written?
5. How can we write a good test case?
6. How will you check that your test cases covered all the requirements?
7. For a triangle (sum of two sides is greater than or equal to the third side),
what is the minimal number of test cases required?
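A sketch of boundary-value test cases for question 7. It follows the question's own wording, which uses "greater than or equal to"; a strict, non-degenerate triangle would use `>` instead, and the boundary case below is exactly where the two rules disagree:

```python
# Validity rule as stated in the question: the sum of any two sides
# must be greater than or equal to the third.
def is_triangle(a, b, c):
    return a + b >= c and a + c >= b and b + c >= a

# (side lengths, expected result) — boundary values plus an ordinary case.
TEST_CASES = [
    ((3, 4, 5), True),    # ordinary valid triangle
    ((1, 2, 3), True),    # boundary: a + b == c (valid under the >= rule)
    ((1, 2, 4), False),   # just past the boundary
    ((2, 1, 4), False),   # same violation, different side ordering
    ((4, 2, 1), False),
]

def run():
    return [is_triangle(*sides) == expected for sides, expected in TEST_CASES]
```

A good answer to the question argues from these equivalence classes (valid, boundary, invalid, permuted orderings) rather than quoting a single number.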

Test Director

1. What is the difference between WebInspect/QAInspect and WinRunner/TestDirector?
2. How will you generate the defect ID in TestDirector? Is it generated
automatically or not?
3. What is the difference between TD 8.0 (TestDirector) and QC 8.0 (Quality Center)?
4. How do you ensure that there is no duplication of bugs in TestDirector?
5. Difference between WinRunner and Test Director?
6. How will you integrate your automated scripts with TestDirector?
7. What is the use of Test Director software?


Testing-Scenarios

1. How to find out the length of the edit box through WinRunner?
2. Is it compulsory that a tester should study a design document for writing
integration and system test cases?
3. What is Testing Scenario? What is scenario based testing? Can you explain with an
example?
4. Let's say we have a GUI map and scripts, and 5 new pages were included in the
application. How do we handle that?
5. How do you complete the testing when you have a time constraint?
6. Given a Yahoo-like application, how many test cases can you write?
7. A GUI contains 2 fields. Field 1 accepts the value of x, and Field 2 displays
the result of the formula a + b/c - d, where a = 0.4*x, b = 1.5*a, c = x,
d = 2.5*b. How many system test cases would you write?
8. How do you know that all the scenarios for testing are covered?
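For the formula question above, a little algebra helps pick test cases: substituting a = 0.4x, b = 1.5a, c = x, d = 2.5b into a + b/c - d reduces it to 0.6 - 1.1x for any x ≠ 0, and x = 0 is the critical boundary (division by zero, since c = x). A minimal sketch:

```python
def gui_formula(x):
    # Direct transcription of the question's definitions.
    a = 0.4 * x
    b = 1.5 * a
    c = x
    d = 2.5 * b
    return a + b / c - d  # raises ZeroDivisionError when x == 0

def run():
    """Check the x == 0 boundary and one ordinary value."""
    try:
        gui_formula(0)
        zero_is_boundary = False
    except ZeroDivisionError:
        zero_is_boundary = True
    return zero_is_boundary, gui_formula(2)
```

So a sensible system-test set covers at least x = 0 (the error case), one positive, one negative, and one non-integer value, plus any GUI-level input validation.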

Web Testing

1. What is the difference between testing client-server applications and web-based
applications?
2. Without using the GUI map editor, can we recognise the application in WinRunner?
3. What command is used to launch an application in WinRunner?
4. What is the difference in testing a client-server application and a web
application?
5. What bugs mainly come up in web testing? What severity and priority do we give them?

Wireless Testing

1. What is Wireless Testing? How do we do it? What are the concepts a test engineer
should have knowledge of? How do you classify testing of wireless products?
Testing General

1. What is a workaround?
2. What is a show stopper?
3. What is Traceability Matrix? Who prepares this document?
4. What is test log document in testing process?
5. What are the entry and exit criteria of a test plan? How do you automate your
test plan?
6. What is the role of QA in a company that produces software?
7. What is test terminology? Why is testing necessary? Describe the fundamental
test process and the psychology of testing.
8. What are the common bugs encountered while testing an application manually or
using a test tool?
9. What are the bug and testing metrics?
10. For a bug with high severity, can we also give the priority as high? If so,
why do we need both?
11. How would you differentiate between Bug, Defect, Failure, and Error?
12. What is the difference between Client Server Testing and Web Testing?
13. What is backward compatibility testing?
14. What certifications are available in testing?
15. What is release candidate?
16. What do you think the role of test-group manager should be?
17. What is a test data? Give examples
18. What is the difference between QA, QC and testing?
19. What are severity and priority? What is a test format? A test procedure?
20. What are the different types of manual database checking?

WinRunner Interview Questions

1. How do you recognise the objects during runtime in a new build version (test
suite), comparing with the old GUI map?
2. wait(20) - What is the minimum and maximum time the above mentioned
synchronization statements will wait given that the global default timeout is set to 15
seconds.
3. Where in the user-defined function library should a new error code be defined?
4. In a modular test tree, each test will receive the values for the parameters
passed from the main test. These parameters are defined in the Test Properties
dialog box of each test. Referring to the above, in which one of the following
files are changes made in the Test Properties dialog saved?
5. What is the scripting process in Winrunner?
6. How many scripts can we generate for one project?
7. What is the command in WinRunner to invoke the IE browser? And once I open the
IE browser, is there a unique way to identify that browser?
8. How do you load default comments into your new script, as IDEs do?
9. What new features were added in QTP 8.0 compared to QTP 6.0?
10. When will you go for automation?
11. How to test the stored procedure?
12. What is the use of GUI files in WinRunner?
13. Without using a data-driven test, how can we test the application with
different sets of inputs?
14. How do you load a compiled module inside a compiled module?
15. Can you tell me the bug life cycle?
16. How do you find the length of an edit box through WinRunner?
17. What is the file type of WinRunner test files? What is their extension?
18. What is a candidate release?
19. What types of variables can be used within a TSL function?

QTP

1. How can I add an action (external action) programmatically?


2. How can I call an external action that has not been added as an external action of an action?
3. What is meant by Source Control?
4. How and what kind of VBScript functions do you use in QTP?
5. How can you describe the basic flow of automation with conditional and
programmatic logic?
6. How can I implement error handling in QTP?
7. How to recall a function in QTP?
8. Give one example where you have used Regular Expression?
9. How can I implement error handling in QTP?
10. How to select particular value from the combo box in the current page which is
entered in the previous page edit box after parameterization?
11. If you have the same application screen with 7 drop down boxes and
approximately 70 values how do you test with QTP?
12. When there is a task that gets repeated in multiple scripts, what do you do in
QTP?
13. What is the descriptive programming?
14. What is the use of descriptive programming?
15. How to instruct QTP to display errors and other description in the test results
instead of halting execution by throwing error in the mid of execution due to an error
(for example Object not found)?
16. How do you write scripts in QTP? What is the main process in QTP? How do you run
scripts in QTP?
17. What is descriptive programming?
18. How to add run-time parameter to a datasheet?
19. How to load the *.vbs or test generating script in a new machine?
20. How can you write a script without using a GUI in QTP?

QA Testing

1. If the actual result doesn't match the expected result, what should we do?
2. What is the importance of requirements traceability in a product testing?
3. When is the best time for system testing?
4. What is use case? What is the difference between test cases and use cases?
5. What is the difference between the test case and a test script
6. Describe the basic elements you put in a defect report.
7. How do you test if you have minimal or no documentation about the product?
8. How do you decide when you have tested enough?
9. How do you determine what to test?
10. In general, how do you see automation fitting into the overall process of testing?
11. How do you deal with environments that are hostile to quality change efforts?
12. Describe to me the Software Development Life Cycle as you would define it?
13. Describe to me when you would consider employing a failure mode and defect
analysis?
14. What is the role of QA in a company that produces software?
15. How do you scope, organize, and execute a test project?
16. How can you test the white page?
17. What is the role of QA in a project development?
18. How have you used white box and black box techniques in your application?
19. 1) What are the demerits of WinRunner? 2) After we write the test data, what are
the principles for testing an application?
20. What is the job of a Quality Assurance Engineer? What is the difference between the Testing &
Quality Assurance jobs?

LoadRunner

1. What is load testing? Can we test a J2ME application with LoadRunner? What is
Performance testing?
2. Which protocol has to be selected for record/playback of an Oracle 9i application?
3. What are the enhancements included in LoadRunner 8.0 when
compared to LoadRunner 6.2?
4. Can we use LoadRunner for testing desktop applications or non-web-based
applications, and how do we use it?
5. How to call a WinRunner script in LoadRunner?
6. What are the types of parameterization in LoadRunner? List the steps to do stress
testing.
7. What are the steps for doing load and performance testing using Load Runner?
8. What are concurrent load and correlation? What is the process of LoadRunner?
9. What is planning for the test?
10. What enables the controller and the host to communicate with each other in Load
Runner?
11. Where is Load testing usually done?
12. What are the only means of measuring performance?
13. Testing requirement and design are not part of what?
14. According to Market analysis 70% of performance problem lies with what?
15. What is the level of system loading expected to occur during specific business
scenario?
16. What is a run-time setting?
17. When is LoadRunner used?
18. What protocols does LoadRunner support?
19. What do you mean by creating a Vuser script?
20. What is rendezvous point?

DataBase Testing

1. What SQL statements have you used in Database Testing?


2. How to test data loading in Database testing?
3. What is the way of writing test cases for database testing?
4. What is Database testing?
5. What we normally check for in the Database Testing?
6. How to test a database manually? Explain with an example.

Common Interview Questions

1. What Technical Environments have you worked with?


2. Have you ever converted Test Scenarios into Test Cases?
3. What is the ONE key element of 'test case'?
4. What is the ONE key element of a Test Plan?
5. What is SQA testing? tell us steps of SQA testing
6. How do you promote the concept of phase containment and defect prevention?
7. Which Methodology you follow in your Testcase?
8. Specify the tools used by MNC companies
9. What are the test cases prepared by the testing team
10. At the start of the project, how will the company come to a conclusion about whether a tool is
required for testing or not?
11. Define Bug Life Cycle? What is Metrics
12. What is a Test procedure?
13. What is the difference between SYSTEM TESTING and END-TO-END TESTING?
14. What is Traceability Matrix? Is there any interchangeable term for Traceability Matrix ?Are Traceability
Matrix and Test Matrix same or Different ?
15. What is the difference between an exception and an error?
16. Correct bug tracking process - Reporting, Re-testing, Debugging, ...?
17. What is the difference between bug and defect?
18. How much time is/should be allocated for Testing out of total Development time, based on industry
standards?
19. What are test bugs?
20. Define Quality - bug free, Functionality working or both?

Common Questions

1. If you have an application, but you do not have any requirements available, then how would you perform
the testing?
2. How can you know if a test case is necessary?
3. What is peer review in practical terms?
4. How do you know when you have enough test cases to adequately test a software system or module?
5. Who approved your test cases?
6. What will you do when you find a bug?
7. What test plans have you written?
8. What is QA? What is Testing? Are they both same or different?
9. How to write a negative test case? Give an example.
10. In an application currently in production, one module of code is being modified. Is it necessary to re-test
the whole application or is it enough to just test functionality associated with that module?
11. What is included in test strategy? What is overall process of testing step by step and what are various
documents used testing during process?
12. What is the most challenging situation you had during testing?
13. What are you going to do if there is no Functional Spec or any documents related to the system and
developer who wrote the code does not work in the company anymore, but you have system and need to
test?
14. What was the major problem you resolved during the testing process?
15. What are the types of functional testing?
16. 1) How will you write integration test cases? 2) How will you track bugs from WinRunner? 3) How will you
customize the bugs as pass/fail? 4) If you find a bug, how will you repair it? 5) Do your test cases have a bug or not?
6) What is a use case? What does it contain?
17. What is the difference between smoke testing and sanity testing
18. What is Random Testing?
19. What is smoke testing?
20. What is stage containment in testing?


Bug Tracking
1. What is the difference between a Bug and a Defect?
2. How do you post a bug?
3. How do we track a bug? What format of Excel sheet do we use to record bug details? How do we
assign severity and priority to bugs?
4. What are the different types of Bugs we normally see in any of the Project? Include the severity as well.
5. Top Ten Tips for Bug Tracking

Performance Testing is the process by which software is tested and tuned with the intent
of realizing the required performance.

The performance testing part of performance engineering encompasses what's commonly
referred to as load, spike, and stress testing, as well as validating system performance.
Performance can be classified into three main categories:

• Speed — Does the application respond quickly enough for the intended users?
• Scalability — Will the application handle the expected user load and beyond?
• Stability — Is the application stable under expected and unexpected user loads?

Why should you automate performance testing?

A well-constructed performance test answers questions such as:

• Does the application respond quickly enough for the intended users?
• Will the application handle the expected user load and beyond?
• Will the application handle the number of transactions required by the business?
• Is the application stable under expected and unexpected user loads?

By answering these questions, automated performance testing quantifies the impact of a
change in business terms. This in turn makes clear the risks of deployment. An effective
automated performance testing process helps you to make more informed release
decisions, and prevents system downtime and availability problems.

What are the LoadRunner components?

LoadRunner contains the following components:

• The Virtual User Generator captures end-user business processes and creates an
automated performance testing script, also known as a virtual user script.
• The Controller organizes, drives, manages, and monitors the load test.
• The Load Generators create the load by running virtual users.
• The Analysis helps you view, dissect, and compare the performance results.
• The Launcher provides a single point of access for all of the LoadRunner components.

LoadRunner Terminology

A scenario is a file that defines the events that occur during each testing session, based on
performance requirements.

In the scenario, LoadRunner replaces human users with virtual users or Vusers. Vusers
emulate the actions of human users working with your application. A scenario can contain
tens, hundreds, or even thousands of Vusers.

The actions that a Vuser performs during the scenario are described in a Vuser script. To
measure the performance of the server, you define transactions. A transaction represents
end-user business processes that you are interested in measuring.
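The idea of many Vusers timing a named transaction can be sketched in plain Python. This is an illustration of the concept only, not LoadRunner's actual scripting API; the `business_process` function and the "login" transaction name are hypothetical stand-ins:

```python
import threading
import time
from statistics import mean

results = []           # (vuser_id, transaction_name, elapsed) tuples
lock = threading.Lock()

def business_process():
    """Hypothetical end-user action being measured (e.g. a login)."""
    time.sleep(0.01)  # stand-in for real work against the server

def vuser(vuser_id, transaction_name):
    # Time the "transaction" the way a Vuser script brackets a business process.
    start = time.perf_counter()
    business_process()
    elapsed = time.perf_counter() - start
    with lock:
        results.append((vuser_id, transaction_name, elapsed))

# Ten concurrent "Vusers" emulating human users running the same process.
threads = [threading.Thread(target=vuser, args=(i, "login")) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} transactions, avg {mean(r[2] for r in results):.4f}s")
```

A real scenario would scale this to hundreds or thousands of Vusers distributed across load generators, but the measurement idea is the same.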

Load Testing Process

Load testing typically consists of five phases: planning, script creation, scenario
definition, scenario execution, and results analysis.

Plan Load Test: Define your performance testing requirements, for example, number of
concurrent users, typical business processes and required response times.

Create Vuser Scripts: Capture the end-user activities into automated scripts.

Define a Scenario: Use the LoadRunner Controller to set up the load test environment.

Run a Scenario: Drive, manage, and monitor the load test from the LoadRunner
Controller.

Analyze the Results: Use LoadRunner Analysis to create graphs and reports, and
evaluate the performance.

Conclusion:

LoadRunner has good reporting features with which the user can easily analyze the
performance test results.

What is User Acceptance Testing?

User Acceptance Testing is often the final step before rolling out the application.

Usually the end users who will be using the applications test the application before
‘accepting’ the application.

This type of testing gives the end users the confidence that the application being
delivered to them meets their requirements.

This testing also helps nail bugs related to usability of the application.

User Acceptance Testing – Prerequisites:

Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration and System) are already completed before
User Acceptance Testing is done. As these levels of testing have been completed, most
of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test?

To ensure effective User Acceptance Testing, Test Cases are created.

These Test Cases can be created using the various use cases identified during the
Requirements definition stage.
The Test Cases ensure proper coverage of all the scenarios during testing.

During this type of testing the specific focus is the exact real world usage of the
application. The Testing is done in an environment that simulates the production
environment.
The Test cases are written using real world scenarios for the application

User Acceptance Testing – How to Test?

The user acceptance testing is usually a black box type of testing. In other words, the
focus is on the functionality and the usability of the application rather than the technical
aspects. It is generally assumed that the application would have already undergone Unit,
Integration and System Level Testing.

However, it is useful if the User acceptance Testing is carried out in an environment that
closely resembles the real world or production environment.

The steps taken for User Acceptance Testing typically involve one or more of the
following:
.......1) User Acceptance Test (UAT) Planning
.......2) Designing UA Test Cases
.......3) Selecting a Team that would execute the (UAT) Test Cases
.......4) Executing Test Cases
.......5) Documenting the Defects found during UAT
.......6) Resolving the issues/Bug Fixing
.......7) Sign Off

User Acceptance Test (UAT) Planning:


As always the Planning Process is the most important of all the steps. This affects the
effectiveness of the Testing Process. The Planning process outlines the User Acceptance
Testing Strategy. It also describes the key focus areas, entry and exit criteria.

Designing UA Test Cases:


The User Acceptance Test Cases help the Test Execution Team to test the application
thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all
the scenarios.
The Use Cases created during the Requirements definition phase may be used as inputs
for creating Test Cases. The inputs from Business Analysts and Subject Matter Experts
are also used for creating Test Cases.

Each User Acceptance Test Case describes in a simple language the precise steps to be
taken to test something.

The Business Analysts and the Project Team review the User Acceptance Test Cases.

Selecting a Team that would execute the (UAT) Test Cases:


Selecting a Team that would execute the UAT Test Cases is an important step.
The UAT Team is generally a good representation of the real world end users.
The Team thus comprises the actual end users who will be using the application.

Executing Test Cases:


The Testing Team executes the Test Cases and may additionally perform random tests
relevant to them.

Documenting the Defects found during UAT:


The Team logs their comments and any defects or issues found during testing.

Resolving the issues/Bug Fixing:


The issues/defects found during Testing are discussed with the Project Team, Subject
Matter Experts and Business Analysts. The issues are resolved as per the mutual
consensus and to the satisfaction of the end users.

Sign Off:
Upon successful completion of the User Acceptance Testing and resolution of the issues,
the team generally indicates the acceptance of the application. This step is important in
commercial software sales. Once the users “accept” the software delivered, they indicate
that the software meets their requirements.

The users are now confident of the software solution delivered, and the vendor can be paid
for the same.

How does System Testing fit into the Software Development Life Cycle?

In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the
individual components are working OK. The ‘Integration testing’ focuses on successful
integration of all the individual pieces of software (components or units of code).

Once the components are integrated, the system as a whole needs to be rigorously tested
to ensure that it meets the Quality Standards.

Thus the System testing builds on the previous levels of testing namely unit testing and
Integration Testing.
Usually a dedicated testing team is responsible for doing ‘System Testing’.

Why System Testing is important?

System Testing is a crucial step in Quality Management Process.

........- In the Software Development Life cycle System Testing is the first level where
...........the System is tested as a whole
........- The System is tested to verify if it meets the functional and technical
...........requirements
........- The application/System is tested in an environment that closely resembles the
...........production environment where the application will be finally deployed
........- The System Testing enables us to test, verify and validate both the Business
...........requirements as well as the Application Architecture

Prerequisites for System Testing:

The prerequisites for System Testing are:


........- All the components should have been successfully Unit Tested
........- All the components should have been successfully integrated and Integration
..........Testing should be completed
........- An Environment closely resembling the production environment should be
...........created.

When necessary, several iterations of System Testing are done in multiple environments.

Steps needed to do System Testing:

The following steps are important to perform System Testing:


........Step 1: Create a System Test Plan
........Step 2: Create Test Cases
........Step 3: Carefully build data used as input for System Testing
........Step 4: If applicable, create scripts to
..................- Build the environment and
..................- Automate execution of test cases
........Step 5: Execute the test cases
........Step 6: Fix the bugs if any and re-test the code
........Step 7: Repeat the test cycle as necessary

What is a ‘System Test Plan’?

As you may have read in the other articles in the testing series, this document typically
describes the following:
.........- The Testing Goals
.........- The key areas to be focused on while testing
.........- The Testing Deliverables
.........- How the tests will be carried out
.........- The list of things to be Tested
.........- Roles and Responsibilities
.........- Prerequisites to begin Testing
.........- Test Environment
.........- Assumptions
.........- What to do after a test is successfully carried out
.........- What to do if test fails
.........- Glossary

How to write a System Test Case?

A Test Case describes exactly how the test should be carried out.

The System test cases help us verify and validate the system.
The System Test Cases are written such that:
........- They cover all the use cases and scenarios
........- The Test cases validate the technical Requirements and Specifications
........- The Test cases verify if the application/System meets the Business & Functional
...........Requirements specified
........- The Test cases may also verify if the System meets the performance standards

Since a dedicated test team may execute the test cases, it is necessary that the System Test
Cases be detailed. Detailed Test cases help the test executors do the testing as specified
without any ambiguity.

The format of the System Test Cases may be like all other Test cases as illustrated below:

• Test Case ID
• Test Case Description:
o What to Test?
o How to Test?
• Input Data
• Expected Result
• Actual Result

Sample Test Case Format:

Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
------------ | ------------- | ------------ | ---------- | --------------- | -------------- | ---------
.            | .             | .            | .          | .               | .              | .
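A test case record in this format can also be sketched as a small data structure. The class, field, and method names below are hypothetical, chosen only to mirror the columns above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    """One row of the sample test case format above (names are illustrative)."""
    case_id: str
    what_to_test: str
    how_to_test: str
    input_data: str
    expected_result: str
    actual_result: str = ""
    passed: Optional[bool] = None  # None until the case has been executed

    def record_result(self, actual: str) -> None:
        # Compare the observed behavior against the expected result.
        self.actual_result = actual
        self.passed = (actual == self.expected_result)

tc = TestCase("TC-001", "User login", "Submit valid credentials on the login page",
              "user1 / pass123", "Home page is displayed")
tc.record_result("Home page is displayed")
print(tc.case_id, "Pass" if tc.passed else "Fail")
```

The additional fields mentioned below (test suite name, tester, date, iteration) could be added as extra attributes on the same record.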

Additionally the following information may also be captured:


........a) Test Suite Name
........b) Tested By
........c) Date
........d) Test Iteration (The Test Cases may be executed one or more times)

Working towards Effective Systems Testing:

There are various factors that affect success of System Testing:

1) Test Coverage: System Testing will be effective only to the extent of the coverage of
Test Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered
by the test cases are sufficient. The Test cases should “cover” all scenarios, use cases,
Business Requirements, Technical Requirements, and Performance Requirements. The
test cases should enable us to verify and validate that the system/application meets the
project goals and specifications.

2) Defect Tracking: The defects found during the process of testing should be tracked.
Subsequent iterations of test cases verify if the defects have been fixed.

3) Test Execution: The Test cases should be executed in the manner specified. Failure to
do so results in improper Test Results.

4) Build Process Automation: A lot of errors occur due to an improper build. ‘Build’ is
a compilation of the various components that make the application deployed in the
appropriate environment. The Test results will not be accurate if the application is not
‘built’ correctly or if the environment is not set up as specified. Automating this process
may help reduce manual errors.
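A minimal sketch of automating such a build sequence: run each step in order and stop on the first failure, so a broken build never reaches the test environment. The commands below are placeholders, not a real build system:

```python
import subprocess
import sys

# Hypothetical build steps; in practice these would invoke a compiler,
# packager, and deployment tool for the target environment.
steps = [
    ["echo", "compiling components"],
    ["echo", "deploying to test environment"],
]

log = []
for cmd in steps:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Abort immediately so testers never receive a half-built application.
        sys.exit(f"Build step failed: {' '.join(cmd)}")
    log.append(result.stdout.strip())

print("\n".join(log))
```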

What is Regression Testing?

If a piece of Software is modified for any reason testing needs to be done to ensure that it
works as specified and that it has not negatively impacted any functionality that it offered
previously. This is known as Regression Testing.

Regression Testing attempts to verify:

- That the application works as specified even after the changes/additions/modifications
were made to it

- That the original functionality continues to work as specified even after
changes/additions/modifications to the software application

- That the changes/additions/modifications to the software application have not introduced
any new bugs

When is Regression testing necessary?

Regression Testing plays an important role in any Scenario where a change has been
made to a previously tested software code. Regression Testing is hence an important
aspect in various Software Methodologies where software changes enhancements occur
frequently.

Any Software Development Project is invariably faced with requests for changing
Design, code, features or all of them.

Some Development Methodologies embrace change.

For example, the ‘Extreme Programming’ Methodology advocates applying small incremental
changes to the system based on end-user feedback.

Each change implies more Regression Testing needs to be done to ensure that the System
meets the Project Goals.

Why is Regression Testing important?

Any Software change can cause existing functionality to break.


Changes to a Software component could impact dependent Components.

It is commonly observed that a Software fix could cause other bugs.

All this affects the quality and reliability of the system. Hence Regression Testing, since
it aims to verify all this, is very important.

Making Regression Testing Cost Effective:

Every time a change occurs one or more of the following scenarios may occur:
- More Functionality may be added to the system
- More complexity may be added to the system
- New bugs may be introduced
- New vulnerabilities may be introduced in the system
- System may tend to become more and more fragile with each change

After the change the new functionality may have to be tested along with all the original
functionality.

With each change Regression Testing could become more and more costly.

To make the Regression Testing Cost Effective and yet ensure good coverage one or more
of the following techniques may be applied:
- Test Automation: If the Test cases are automated, they may be executed using
scripts after each change is introduced in the system. Executing test cases in this
way helps eliminate oversights and human errors. It may also result in faster and cheaper
execution of Test cases. However, there is a cost involved in building the scripts.

- Selective Testing: Some Teams choose to execute the test cases selectively. They do not
execute all the Test Cases during the Regression Testing. They test only what they decide
is relevant. This helps reduce the Testing Time and Effort.
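One way selective testing is often organized can be sketched as follows: tag each test with the functional areas it covers and re-run only the tests that touch the changed area. The test names and area tags below are invented for illustration:

```python
# Each test is a plain function; the stand-in assertions represent real checks.
def test_login():
    assert 1 + 1 == 2

def test_profile_update():
    assert "a".upper() == "A"

def test_checkout():
    assert sum([1, 2]) == 3

# Hypothetical mapping of tests to the functional areas they exercise.
REGRESSION_SUITE = {
    test_login: {"auth"},
    test_profile_update: {"profile", "auth"},
    test_checkout: {"orders"},
}

def run_selected(changed_area):
    """Execute only the tests whose tagged areas include the changed area."""
    executed = []
    for test, areas in REGRESSION_SUITE.items():
        if changed_area in areas:
            test()
            executed.append(test.__name__)
    return executed

print(run_selected("auth"))
```

The trade-off is the usual one: the untagged (skipped) tests are exactly where an unexpected regression can hide, so the tagging itself must be maintained as the system changes.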

Regression Testing – What to Test?

Since Regression Testing tends to verify the software application after a change has been
made everything that may be impacted by the change should be tested during Regression
Testing. Generally the following areas are covered during Regression Testing:

- Any functionality that was addressed by the change

- Original Functionality of the system

- Performance of the System after the change was introduced

Testing & The Role of a Test Lead / Manager


By David W Johnson

The Role of Test Lead / Manager is to effectively lead the testing team. To fulfill this role
the Lead must understand the discipline of testing and how to effectively implement a
testing process while fulfilling the traditional leadership roles of a manager. What does
this mean? The manager must manage and implement or maintain an effective testing
process. This involves creating a test infrastructure that supports robust communication
and a cost effective testing framework.

The Test Lead / Manager is responsible for:

• Defining and implementing the role testing plays within the organizational structure.
• Defining the scope of testing within the context of each release / delivery.
• Deploying and managing the appropriate testing framework to meet the testing mandate.
• Implementing and evolving appropriate measurements and metrics.
o To be applied against the Product under test.
o To be applied against the Testing Team.
• Planning, deploying, and managing the testing effort for any given engagement / release.
• Managing and growing Testing assets required for meeting the testing mandate:
o Team Members
o Testing Tools
o Testing Process
• Retaining skilled testing personnel.

The Test Lead must understand how testing fits into the organizational structure; in other
words, clearly define its role within the organization. This is often accomplished by
crafting a Mission Statement or a defined Testing Mandate. For example:

"To prevent, detect, record, and manage defects within the context of a defined release."

Now it becomes the task of the Test Lead to communicate and implement effective
managerial and testing techniques to support this ‘simple’ mandate. Expectations of your
team, your peers (Development Lead, Deployment Lead, and other leads) and your
superior need to be set appropriately given the timeframe of the release and the maturity of
the development team and testing team. These expectations are usually defined in terms
of functional areas deemed to be in Scope or out of Scope. For example:

In Scope:

• Create New Customer Profile


• Update Customer Profile
• ...

Out of Scope:

• Security
• Backup and Recovery
• ...

The definition of Scope will change as you move through the various stages of testing but
the key is to ensure that your testing team and the organization as a whole clearly
understands what is and what is not being tested for the current release.

The Test Lead / Manager must employ the appropriate Testing Framework or Test
Architecture to meet the organization's testing needs. While the Testing Framework
requirements for any given organization are difficult to define, there are several questions
the Test Lead / Manager must ask themselves. The answers to these questions and others
will define the short-term and long-term goals of the Testing Framework.

What is the relationship between product maturity and testing?

• Acceptance - Product is ready for deployment.
• System - Product is ready to be tested as an integrated whole or system.
• Function - Functional testing can be performed against delivered components.
• Unit - Developer can test code as an un-integrated unit.
• Design Review - Product concept can be captured and reviewed.
• % Construction - How much more construction is required to complete the product.
• % Product - How much of the product has been constructed.

How can the Testing Organization help prevent defects from occurring?

There are really two sides to testing: Verification and Validation. Unfortunately, the
meaning of these terms has been defined differently by several governing / regulatory
bodies. To put it more succinctly, there is testing that can be performed before the product
is constructed / built, and there are types of testing that can be performed after the product
has been constructed / built.

Preventing defects from occurring involves testing before the product is constructed /
built. There are several methods for accomplishing this goal. The most powerful and cost
effective being Reviews. Reviews can be either formal / technical reviews or peer
reviews. Formal product development life cycles will provide the testing team with useful
materials / deliverables for the review process. When properly implemented any effective
development paradigm should supply these deliverables. For example:

• Cascade
o Requirements
o Functional Specifications
• Agile or Extreme
o High level Requirements
o Storyboards

Testing needs to be involved in this Review process and any defects need to be recorded
and managed.

How and when can the Testing Organization detect software defects?

The Testing Organization can detect software defects after the product or some
operational segment of it has been delivered. The type of testing to be performed depends
on the maturity of the product at the time. The classic hierarchy or sequence of testing is:

• Design Review
• Unit Testing
• Function Testing
• System Testing
• User Acceptance Testing
The Testing Team should be involved in at least three of these phases: Design Review,
Function Testing, and System Testing.

Functional Testing involves the design, implementation, and execution of test cases
against the functional specification and / or functional requirements for the product. This
is where the testing team measures the functional implementation against the product
intent using well-formulated test cases and notes any discrepancies as defects (faults). For
example, testing to ensure the web page allows the entry of a new forum member: in this
case we are testing to ensure the web page functions as an interface.

System Testing follows much the same course (design, implement, execute, and record defects)
but the intent or focus is very different. While Functional Testing focuses on discrete
functional requirements, System Testing focuses on the flow through the system and the
connectivity between related systems. For example, testing to ensure the application
allows the entry, activation, and recovery of a new forum member: in this case we are
testing to ensure the system supports the business. There are several types of System
Testing; what is required for any given release should be determined by the Scope:

• Security
• Performance
• Integration

What are the minimum set of measurements and metrics?

The single most important deliverable the testing team maintains is defects. Defects are
arguably the only product the testing team produces that is seen and understood by the
project as a whole. This is where the faults against the system are recorded and tracked --
at a bare minimum each defect should contain:

• Defect Name / Title
• Defect description: What requirement is not being met?
• Detailed instructions on how to replicate the defect.
• Defect severity.
• Impacted functional area.
• Defect Author.
• Status (Open, Work-in-Progress, Fixed, Closed)

This will then provide the data for a minimal set of metrics:

• Number of defects raised
• Distribution of defects in terms of severity
• Distribution of defects in terms of functional area

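This minimal metric set can be computed directly from such defect records; the defect entries below are made up for illustration:

```python
from collections import Counter

# Hypothetical defect log with a subset of the minimal fields listed above.
defects = [
    {"title": "Login fails on empty password", "severity": "High", "area": "Auth"},
    {"title": "Profile photo not saved", "severity": "Medium", "area": "Profile"},
    {"title": "Typo on signup page", "severity": "Low", "area": "Auth"},
]

total = len(defects)
by_severity = Counter(d["severity"] for d in defects)  # defects per severity
by_area = Counter(d["area"] for d in defects)          # defects per functional area

print(f"Defects raised: {total}")
print("By severity:", dict(by_severity))
print("By functional area:", dict(by_area))
```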
From this baseline the measurements and metrics a testing organization maintains are
dependent on its maturity and mission statement. The SEI (Software Engineering
Institute) Process Maturity Levels apply to testing as much as they do to any Software
Engineering discipline:

1. Initial: (Anarchy) Unpredictable and poorly controlled.
2. Repeatable: (Folklore) Repeat previously mastered tasks.
3. Defined: (Standards) Process characterized, fairly well understood.
4. Managed: (Measurement) Process measured and controlled.
5. Optimizing: (Optimization) Focus on process improvement.

How disciplined the testing organization needs to become, and which measurements and
metrics are required, depends on a cost-benefit analysis by the Test Lead / Manager.
What makes sense in terms of the stated goals and previous performance of the testing
organization?

How to grow and maintain a Testing Organization?


Managing or leading a testing team is probably one of the most challenging positions in
the IT industry. The team is usually understaffed and lacks appropriate tooling and
financing. Deadlines don't move, but the testing phase is continually being squeezed by
product delays. Motivation and retention of key testing personnel under these conditions
is critical. How do you accomplish this seemingly impossible task? I can only go by my
personal experience, both as a lead and as a team member:

• If the timelines are impacted, modify the Test Plan appropriately in terms of Scope.
• Clearly communicate the situation to the testing team and project management.
• Keep clear lines of communication to Development and project management.
• Whenever possible, sell, sell, sell the importance and contributions of the Testing Team.
• Ensure the testing organization has clearly defined roles for each member of the team and
a well-defined career path.
• Measure and communicate testing return on investment -- if a detected defect had
reached the field, what would it have cost?
• Explain testing expenditures in terms of investment (ROI), not cost.
• Finally, never lose your cool -- good luck.

David W. Johnson is a Senior Computer Systems Analyst with over 20 years of
experience in Information Technology across several industries, having played key roles
in business needs analysis, software design, software development, testing, training,
implementation, organizational assessments, and support of business solutions. Over the
past 10 years he has developed specific expertise in implementing "Testware",
including test strategies, test planning, test automation, and test management solutions.
He is experienced in deploying immediate solutions worldwide that improve software
quality, test efficiency, and test effectiveness. This has led to a unique combination of
technical skills, business knowledge, and the ability to apply the "right solution" to meet
customer needs. Contact David at DavidWJohnson@Eastlink.ca

LoadRunner interview questions


1. What is load testing? - Load testing verifies that the application works correctly under
the load that results from a large number of simultaneous users and transactions, and
determines whether it can handle peak usage periods.
2. What is Performance testing? - Timing for both read and update transactions should be
gathered to determine whether system functions are being performed in an acceptable
timeframe. This should be done standalone and then in a multi user environment to
determine the effect of multiple transactions on the timing of a single transaction.
3. Did you use LoadRunner? What version? - Yes, version 7.2.
4. Explain the Load testing process? -
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test
scenarios we develop will accomplish load-testing objectives. Step 2: Creating Vusers.
Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks
performed by Vusers as a whole, and tasks measured as transactions. Step 3: Creating
the scenario. A scenario describes the events that occur during a testing session. It
includes a list of machines, scripts, and Vusers that run during the scenario. We create
scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-
oriented scenarios. In manual scenarios, we define the number of Vusers, the load
generator machines, and percentage of Vusers to be assigned to each script. For web tests,
we may create a goal-oriented scenario where we define the goal that our test has to
achieve. LoadRunner automatically builds a scenario for us. Step 4: Running the
scenario.
We emulate load on the server by instructing multiple Vusers to perform tasks
simultaneously. Before the testing, we set the scenario configuration and scheduling. We
can run the entire scenario, Vuser groups, or individual Vusers. Step 5: Monitoring the
scenario.
We monitor scenario execution using the LoadRunner online runtime, transaction, system
resource, Web resource, Web server resource, Web application server resource, database
server resource, network delay, streaming media resource, firewall server resource, ERP
server resource, and Java performance monitors. Step 6: Analyzing test results. During
scenario execution, LoadRunner records the performance of the application under
different loads. We use LoadRunner's graphs and reports to analyze the application's
performance.
5. When do you do load and performance Testing? - We perform load testing once we are
done with interface (GUI) testing. Modern system architectures are large and complex.
Whereas single-user testing focuses primarily on the functionality and user interface of a
system component, application testing focuses on the performance and reliability of an
entire system. For example, a typical application-testing scenario might depict 1000 users
logging in simultaneously to a system. This gives rise to issues such as: what is the
response time of the system, does it crash, will it work with different software applications
and platforms, can it hold so many hundreds and thousands of users, etc. This is when we
do load and performance testing.
6. What are the components of LoadRunner? - The components of LoadRunner are The
Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and
Monitoring, LoadRunner Books Online.
7. What Component of LoadRunner would you use to record a Script? - The Virtual
User Generator (VuGen) component is used to record a script. It enables you to develop
Vuser scripts for a variety of application types and communication protocols.
8. What Component of LoadRunner would you use to play Back the script in multi
user mode? - The Controller component is used to playback the script in multi-user
mode. This is done during a scenario run where a vuser script is executed by a number of
vusers in a group.
9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to
emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during
test execution for multiple Vusers to arrive at a certain point, in order that they may
simultaneously perform a task. For example, to emulate peak load on the bank server, you
can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at
the same time.
10. What is a scenario? - A scenario defines the events that occur during each testing
session. For example, a scenario defines and controls the number of users to emulate, the
actions to be performed, and the machines on which the virtual users run their
emulations.
11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser
script by recording a user performing typical business processes on a client application.
VuGen creates the script by recording the activity between the client and the server. For
example, in web based applications, VuGen monitors the client end of the database and
traces all the requests sent to, and received from, the database server. We use VuGen to:
Monitor the communication between the application and the server; Generate the
required function calls; and Insert the generated function calls into a Vuser script.
12. Why do you create parameters? - Parameters are like script variables. They are used to
vary input to the server and to emulate real users. Different sets of data are sent to the
server each time the script is run. Better simulate the usage model for more accurate
testing from the Controller; one script can emulate many different users on the system.
13. What is correlation? Explain the difference between automatic correlation and
manual correlation? - Correlation is used to obtain data which are unique for each run
of the script and which are generated by nested queries. Correlation provides the value to
avoid errors arising out of duplicate values and also optimizing the code (to avoid nested
queries). Automatic correlation is where we set some rules for correlation. It can be
application server specific. Here values are replaced by data which are created by these
rules. In manual correlation, the value we want to correlate is scanned for manually, and
the Create Correlation command is used to correlate it.
14. How do you find out where correlation is required? Give few examples from your
projects? - Two ways: First we can scan for correlations, and see the list of values which
can be correlated. From this we can pick a value to be correlated. Secondly, we can
record two scripts and compare them. We can look up the difference file to see for the
values which needed to be correlated. In my project, there was a unique id developed for
each customer, it was nothing but Insurance Number, it was generated automatically and
it was sequential and this value was unique. I had to correlate this value, in order to avoid
errors while running my script. I did using scan for correlation.
15. Where do you set automatic correlation options? - Automatic correlation from web
point of view can be set in recording options and correlation tab. Here we can enable
correlation for the entire script and choose either issue online messages or offline actions,
where we can define rules for that correlation. Automatic correlation for database can be
done using show output window and scan for correlation and picking the correlate query
tab and choose which query value we want to correlate. If we know the specific value to
be correlated, we just do create correlation for the value and specify how the value to be
created.
16. What is a function to capture dynamic values in the web Vuser script? -
Web_reg_save_param function saves dynamic data information to a parameter.
17. When do you disable logging in the Virtual User Generator? When do you choose
standard and extended logs? - Once we debug our script and verify that it is
functional, we can enable logging for errors only. When we add a script to a scenario,
logging is automatically disabled. Standard Log option: when you select Standard log,
it creates a standard log of the functions and messages sent during script execution, to
use for debugging. Disable this option for large load-testing scenarios; when you copy
a script to a scenario, logging is automatically disabled. Extended Log option: select
Extended log to create an extended log, including warnings and other messages.
Disable this option as well for large load-testing scenarios. We can specify which
additional information should be added to the extended log using the Extended Log
options.
18. How do you debug a LoadRunner script? - VuGen contains two options to help debug
Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the
Options dialog box allow us to determine the extent of the trace to be performed during
scenario execution. The debug information is written to the Output window. We can
manually set the message class within your script using the lr_set_debug_message
function. This is useful if we want to receive debug information about a small section of
the script only.
19. How do you write user-defined functions in LR? Give me a few functions you wrote in
your previous project? - Before we create a user-defined function, we need to create an
external library (DLL) containing the function. We add this library to the VuGen bin
directory. Once the library is added, we assign the user-defined function as a parameter.
The function should have the following format:
__declspec(dllexport) char* <function name>(char*, char*)
Examples of user-defined functions: GetVersion, GetCurrentTime, and GetPlatform are
some of the user-defined functions used in my earlier project.
20. What are the changes you can make in run-time settings? - The Run Time Settings
that we make are: a) Pacing - It has iteration count. b) Log - Under this we have Disable
Logging Standard Log and c) Extended Think Time - In think time we have two options
like Ignore think time and Replay think time. d) General - Under general tab we can set
the vusers as process or as multithreading and whether each step as a transaction.
21. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time
Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set
number of iterations.
22. How do you perform functional testing under load? - Functionality under load can be
tested by running several Vusers concurrently. By increasing the amount of Vusers, we
can determine how much load the server can sustain.
23. What is Ramp Up? How do you set this? - This option is used to gradually increase the
number of Vusers, and hence the load, on the server. An initial value is set, and a wait
time between intervals can be specified. To set Ramp Up, go to 'Scenario Scheduling
Options'.
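
As an illustration of the idea (not of LoadRunner's actual scheduler), a ramp-up schedule
built from an initial Vuser count, an increment, and a wait interval might be sketched like
this; all numbers are invented for the example:

```python
# Toy sketch of a ramp-up schedule: start with a few Vusers and add more at
# fixed intervals until the target load is reached.
def ramp_up_schedule(initial, increment, interval_s, target):
    """Return (time_in_seconds, running_vusers) points up to the target load."""
    points = [(0, initial)]
    t, vusers = 0, initial
    while vusers < target:
        t += interval_s
        vusers = min(vusers + increment, target)  # never overshoot the target
        points.append((t, vusers))
    return points

# Start 5 Vusers, add 10 every 30 seconds, up to 50 Vusers total.
schedule = ramp_up_schedule(initial=5, increment=10, interval_s=30, target=50)
print(schedule)  # prints [(0, 5), (30, 15), (60, 25), (90, 35), (120, 45), (150, 50)]
```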
24. What is the advantage of running the Vuser as thread? - VuGen provides the facility
to use multithreading. This enables more Vusers to be run per
generator. If the Vuser is run as a process, the same driver program is loaded into memory
for each Vuser, thus taking up a large amount of memory. This limits the number of
Vusers that can be run on a single
generator. If the Vuser is run as a thread, only one instance of the driver program is
loaded into memory for the given number of
Vusers (say 100). Each thread shares the memory of the parent driver program, thus
enabling more Vusers to be run per generator.
25. If you want to stop the execution of your script on error, how do you do that? - The
lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop
executing the Actions section, execute the vuser_end section and end the execution. This
function is useful when you need to manually abort a script execution as a result of a
specific error condition. When you end a script using this function, the Vuser is assigned
the status "Stopped". For this to take effect, we have to first uncheck the "Continue on
error" option in Run-Time Settings.
26. What is the relation between Response Time and Throughput? - The Throughput
graph shows the amount of data in bytes that the Vusers received from the server in a
second. When we compare this with the transaction response time, we will notice that as
throughput decreased, the response time also decreased. Similarly, the peak throughput
and highest response time would occur approximately at the same time.
27. Explain the Configuration of your systems? - The configuration of our systems refers
to that of the client machines on which we run the Vusers. The configuration of any client
machine includes its hardware settings, memory, operating system, software applications,
development tools, etc. This system component configuration should match with the
overall system configuration that would include the network infrastructure, the web
server, the database server, and any other components that go with this larger system so
as to achieve the load testing objectives.
28. How do you identify the performance bottlenecks? - Performance Bottlenecks can be
detected by using monitors. These monitors might be application server monitors, web
server monitors, database server monitors and network monitors. They help in finding out
the troubled area in our scenario which causes increased response time. The
measurements made are usually performance response time, throughput, hits/sec, network
delay graphs, etc.
29. If web server, database and Network are all fine where could be the problem? - The
problem could be in the system itself or in the application server or in the code written for
the application.
30. How did you find web server related issues? - Using Web resource monitors we can
find the performance of web servers. Using these monitors we can analyze throughput on
the web server, number of hits per second that
occurred during scenario, the number of http responses per second, the number of
downloaded pages per second.
31. How did you find database related issues? - By running the Database monitor, with the
help of the Data Resource Graph, we can find database-related issues. E.g., you can
specify the resource you want to measure before running the Controller, and then you
can see database-related issues.
32. Explain all the web recording options?
33. What is the difference between Overlay graph and Correlate graph? - Overlay
Graph: it overlays the content of two graphs that share a common x-axis. The left Y-axis
on the merged graph shows the current graph's values, and the right Y-axis shows the
values of the Y-axis of the graph that was merged. Correlate Graph: it plots the Y-axes
of two graphs against each other. The active graph's Y-axis becomes the X-axis of the
merged graph, and the Y-axis of the graph that was merged becomes the merged
graph's Y-axis.
34. How did you plan the Load? What are the Criteria? - Load test is planned to decide
the number of users, what kind of machines we are going to use and from where they are
run. It is based on 2 important documents, Task Distribution Diagram and Transaction
profile. Task Distribution Diagram gives us the information on number of users for a
particular transaction and the time of the load. The peak usage and off-usage are decided
from this Diagram. Transaction profile gives us the information about the transactions
name and their priority levels with regard to the scenario we are deciding.
35. What does vuser_init action contain? - Vuser_init action contains procedures to login
to a server.
36. What does vuser_end action contain? - Vuser_end section contains log off
procedures.
37. What is think time? How do you change the threshold? - Think time is the time that
a real user waits between actions. Example: When a user receives data from a server, the
user may wait several seconds to review the data before responding. This delay is known
as the think time. Changing the Threshold: Threshold level is the level below which the
recorded think time will be ignored. The default value is five (5) seconds. We can change
the think time threshold in the Recording options of the Vugen.
38. What is the difference between standard log and extended log? - The standard log
sends a subset of functions and messages sent during script execution to a log. The subset
depends on the Vuser type Extended log sends a detailed script execution messages to the
output log. This is mainly used during debugging when we want information about:
Parameter substitution. Data returned by the server. Advanced trace.
39. Explain the following functions: - lr_debug_message - The lr_debug_message function
sends a debug message to the output log when the specified message class is set.
lr_output_message - The lr_output_message function sends notifications to the
Controller Output window and the Vuser log file. lr_error_message - The
lr_error_message function sends an error message to the LoadRunner Output window.
lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement)
with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The
lrd_fetch function fetches the next row from the result set.
40. Throughput - If the throughput scales upward as time progresses and the number
of Vusers increase, this indicates that the bandwidth is sufficient. If the graph were to
remain relatively flat as the number of Vusers increased, it would
be reasonable to conclude that the bandwidth is constraining the volume of
data delivered.
41. Types of Goals in Goal-Oriented Scenario - Load Runner provides you with five
different types of goals in a goal oriented scenario:
o The number of concurrent Vusers
o The number of hits per second
o The number of transactions per second
o The number of pages per minute
o The transaction response time that you want your scenario to achieve
42. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the
response time graph, you can see that as the number of Vusers increases, the average
response time of the check-itinerary transaction very gradually increases. In other
words, the average response time steadily increases as the load increases. At 56
Vusers there is a sudden, sharp increase in the average response time. We say that the
test broke the server; this breaking point is sometimes loosely referred to as the mean
time before failure (MTBF). The response time clearly began to degrade when there
were more than 56 Vusers running simultaneously.
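
The kind of "knee" described above -- response time creeping up with load and then
jumping sharply -- can be illustrated with a toy analysis. The sample numbers below are
invented, not real LoadRunner output:

```python
# Spot the point where average response time first jumps sharply relative to
# the previous measurement, as in the Running Vusers / response time graphs.
def find_knee(samples, factor=2.0):
    """Given (vusers, avg_response_time) pairs in increasing load order, return
    the Vuser count at which response time first grows by `factor` or more over
    the previous measurement, or None if no such jump occurs."""
    for (v_prev, rt_prev), (v, rt) in zip(samples, samples[1:]):
        if rt_prev > 0 and rt / rt_prev >= factor:
            return v
    return None

# Invented data: gradual degradation, then a sharp jump past 56 Vusers.
samples = [(10, 1.0), (20, 1.1), (30, 1.3), (40, 1.5), (56, 1.7), (60, 5.2)]
print(find_knee(samples))  # prints 60
```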

Interviewing at Microsoft
Over the years I've been collecting interview questions from Microsoft. I guess I started
this hobby with the intent of working there some day, although I still have never
interviewed there myself. However, I thought I'd give all of those young Microserf
wanna-bes a leg up and publish my collection so far. I've actually known people to study
for weeks for a Microsoft interview. Instead, kids this age should be out having a life. If
you're one of those -- go outside! Catch some rays and chase that greenish monitor glow
from your face!

If you've actually interviewed at Microsoft, please feel free to contribute your wacky
Microsoft interview stories.

Wasting the Prince of Darkness


Wed 11/23/2005 9:43 AM

From "Pete" (not his real name):

I walked into my first technical interview at Microsoft, and before I could say anything,
the woman says, "You're in an 8x8 stone corridor." I blink and sit down.

Interviewer: The prince of darkness appears before you.

Me: You mean, like, the devil?

Interviewer: Any prince of darkness will do.

Me: Ok.

Interviewer: What do you do?

Me: <pause> Can I run?

Interviewer: Do you want to run?

Me: Hmm... I guess not. Do I have a weapon?

Interviewer: What kind of weapon do you want?

Me: Um... something with range?

Interviewer: Like what?

Me: Uh... a crossbow?

Interviewer: What kind of ammo do you have?

Me: <long pause> Ice arrows?


Interviewer: Why?

Me: <floundering> Because the prince of darkness is a creature made of fire???

Interviewer: Fine... so what do you do next?

Me: I shoot him?

Interviewer: No, what do you do?

Me: <blank stare>

Interviewer: You WASTE him! You *WASTE* the prince of darkness!!

Me: <completely freaked out and off my game> Holy crap, what have I gotten myself
into?

She then tells me that she asks that question for two reasons: 1) because she wants to
know if the candidate is a gamer (which is apparently really important; please note: I'm
not a gamer), and 2) because she wants her question to show up on some website. I hate
to accommodate her, but this is definitely the weirdest interview question I've ever heard of.

Well, here you go, weird-prince-of-darkness-wasting-lady...

Stumped
Tue, 9/6/05, 2:29pm
Scott Hanselman's "Great .NET Developer" Questions
Tue, 2/22/05 12:30pm

Scott Hanselman has posted a set of questions that he thinks "great" .NET developers
should be able to answer in an interview. He even splits it up into various categories,
including:

• Everyone who writes code


• Mid-Level .NET Developer
• Senior Developers/Architects
• C# Component Developers
• ASP.NET (UI) Developers
• Developers using XML

Am I the only one that skipped ahead to "Senior Developers/Architects" to see if I could
cut Scott's mustard?

Jason Olson's Microsoft Interview Advice


Fri, 1/21/05 8:16pm

Jason Olson recently interviewed for an SDE/T position (Software Development


Engineer in Test) at Microsoft and although he didn't get it, he provides the following
words of advice for folks about to interview for the first time:

• Just Do It
• Remember, no matter how well you might know your interviewer, it is
important not to forget that it is still an interview
• Pseudocode! Pseudocode! Pseudocode!
• But, as long as you verbalize what you're thinking you should be in pretty
good shape
• Bring an energy bar or something to snack on between breaks in order to
keep your energy level up
• [B]ring a bottle of water and keep it filled up
• A lunch interview is still an interview!
• Know the position you're interviewing for [ed: unless you're interviewing
for an editing position, in which case you should know the position for
which you're interviewing]
• Your interview day is not only your opportunity to be interviewed, but also
your opportunity to interview the team/company

You can read the full story on his web site.


"Standing Out" When Submitting Your Resume
Sun, 8/22/04 8:22am

After seeing all of those pictures in Wired of the wacky letters that people send, I love the
idea of Michael Swanson opening the floodgates by sending his resume along with a life-
size cardboard figure. What's next?

Some of the MS Interview Process Filmed (Finally!)


Fri, 8/20/04 3:22am

Channel9 did what I was unable to ever get done: filmed some of the interview process
(part 1, part 2 and part 3). It's not an actual interview, but Gretchen Ledgard and Zoe
Goldring, both Central Sourcing Consultants at HR for MS, lead you through what to
expect at a Microsoft interview, providing a wealth of wonderful tips, e.g.

• MS is casual, so it doesn't matter so much what you wear (i.e. don't feel
you have to wear a suit, but don't show up in flip-flops and headphones
around your neck [still playing!]). Regardless of what you wear, it's what's
in your head that's important.
• Interact a lot with the interviewer. Ask questions, think out loud, etc. The
questions are meant to be vague and, again, it's about what's going on in
your head, so verbalize it.
• Bring water if you're thirsty, not coffee, as spilling coffee is going to leave a
much more lasting stain/impression.
• MS rarely asks logic/riddle questions anymore. They're not a good
indicator of a good employee.
• Expect coding questions if you're a dev, testing questions if you're a tester
and passion questions no matter what.
• If an MS recruiter calls, don't expect them to have a specific job in mind.
Instead, expect to be asked what you'd like to do at MS.
• If the first interview doesn't take, it may well be that you're right for MS but
not right for that job. It can literally take years to find the right job for you at
MS.

BTW, I have to say that I never got a ride on an HR shuttle. I guess they save that for the
"good" hires... : )

Discuss

Questions for Testers


Tue, 7/20/04 4:19pm

A friend of mine sent along some questions he was asked for a SDE/T position at
Microsoft (Software Design Engineer in Test):

1. "How would you deal with changes being made a week or so before
the ship date?
2. "How would you deal with a bug that no one wants to fix? Both the
SDE and his lead have said they won't fix it.
3. "Write a function that counts the number of primes in the range [1-
N]. Write the test cases for this function.
4. "Given a MAKEFILE (yeah a makefile), design the data structure
that a parser would create and then write code that iterates over
that data structure executing commands if needed.
5. "Write a function that inserts an integer into a linked list in
ascending order. Write the test cases for this function.
6. "Test the save dialog in Notepad. (This was the question I enjoyed
the most).
7. "Write the InStr function. Write the test cases for this function.
8. "Write a function that will return the number of days in a month (no
using System.DateTime).
9. "You have 3 jars. Each jar has a label on it: white, black, or
white&black. You have 3 sets of marbles: white, black, and
white&black. One set is stored in one jar. The labels on the jars are
guaranteed to be incorrect (i.e. white will not contain white). Which
jar would you choose from to give you the best chance of
identifying which set of marbles is in which jar?
10. "Why do you want to work for Microsoft.
11. "Write the test cases for a vending machine.
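
As one hedged sketch of how a candidate might answer question 3 above (count the
primes in the range [1-N], then write test cases for it) -- this is just one acceptable
approach, using simple trial division:

```python
# Count the primes p with 1 <= p <= n, via trial division up to sqrt(candidate).
def count_primes(n):
    if n < 2:
        return 0
    count = 0
    for candidate in range(2, n + 1):
        is_prime = True
        d = 2
        while d * d <= candidate:
            if candidate % d == 0:
                is_prime = False
                break
            d += 1
        if is_prime:
            count += 1
    return count

# Test cases: boundary values, the smallest prime, and known reference counts.
assert count_primes(0) == 0      # below the first prime
assert count_primes(1) == 0      # 1 is not prime
assert count_primes(2) == 1      # smallest prime
assert count_primes(10) == 4     # 2, 3, 5, 7
assert count_primes(100) == 25   # well-known count of primes below 100
print("all cases pass")
```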

"Those were the questions I was asked. I had a lot of discussions


about how to handle situations. Such as a tester is focused on one
part of an SDK. During triage it was determined that that portion of
the SDK was not on the critical path, and the tester was needed
elsewhere. But the tester continued to test that portion because it is
his baby. How would you get him to stop testing that portion and
work on what needs to be worked on?

"Other situations came up like arranging tests into the different


testing buckets (functional, stress, perf, etc.)."
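
Question 5 from the list above (insert an integer into a linked list in ascending order,
plus its test cases) might be sketched like this -- one possible answer, not the only one:

```python
# Minimal singly linked list node.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insert_sorted(head, value):
    """Insert value into an ascending linked list; return the (possibly new) head."""
    if head is None or value <= head.value:
        return Node(value, head)          # new head: empty list or smallest value
    cur = head
    while cur.next is not None and cur.next.value < value:
        cur = cur.next                    # walk to the insertion point
    cur.next = Node(value, cur.next)
    return head

def to_list(head):
    """Flatten the linked list into a Python list for easy checking."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

# Test cases: empty list, insert at head, middle, tail, and a duplicate value.
head = None
for v in (5, 1, 9, 5, 3):
    head = insert_sorted(head, v)
print(to_list(head))  # prints [1, 3, 5, 5, 9]
```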
