Interview Questions For Software Testers
6. Have you defined the requirements and success criteria for automation?
Yes = 15 points
7. Are you open to different concepts of what test automation can mean?
Yes = 10 points
And testing?
Yes = 15 points
1. What types of documents would you need for QA, QC, and Testing?
2. What did you include in a test plan?
3. Describe any bug you remember.
4. What is the purpose of the testing?
5. What do you like (not like) in this job?
6. What is QA (quality assurance)?
7. What is the difference between QA and testing?
8. How do you scope, organize, and execute a test project?
9. What is the role of QA in a development project?
10. What is the role of QA in a company that produces software?
11. Define quality for me as you understand it
12. Describe to me the difference between validation and verification.
13. Describe to me what you see as a process. Not a particular process, just the
basics of having a process.
14. Describe to me when you would consider employing a failure mode and effect
analysis.
15. Describe to me the Software Development Life Cycle as you would define it.
16. What are the properties of a good requirement?
17. How do you differentiate the roles of Quality Assurance Manager and Project
Manager?
18. Tell me about any quality efforts you have overseen or implemented. Describe
some of the challenges you faced and how you overcame them.
19. How do you deal with environments that are hostile to quality change efforts?
20. In general, how do you see automation fitting into the overall process of testing?
21. How do you promote the concept of phase containment and defect prevention?
22. If you come onboard, give me a general idea of what your first overall tasks will
be as far as starting a quality effort.
23. What kinds of testing have you done?
24. Have you ever created a test plan?
25. Have you ever written test cases or did you just execute those written by others?
26. What did you base your test cases on?
27. How do you determine what to test?
28. How do you decide when you have 'tested enough?'
29. How do you test if you have minimal or no documentation about the product?
30. Describe to me the basic elements you put in a defect report.
31. How do you perform regression testing?
32. At what stage of the life cycle does testing begin in your opinion?
33. How do you analyze your test results? What metrics do you try to provide?
34. Realising you won't be able to test everything - how do you decide what to test
first?
35. Where do you get your expected results?
36. If automating - what is your process for determining what to automate and in
what order?
37. In the past, I have been asked to verbally start mapping out a test plan for a
common situation, such as an ATM. The interviewer might say, "Just thinking out
loud, if you were tasked to test an ATM, what items might your test plan include?"
These types of questions are not meant to be answered conclusively, but they are a good
way for the interviewer to see how you approach the task.
38. If you're given a program that will average student grades, what kinds of inputs
would you use?
39. Tell me about the best bug you ever found.
40. What made you pick testing over another career?
41. What is the exact difference between Integration and System testing? Give me
examples from your project.
42. How did you go about testing a project?
43. When should testing start in a project? Why?
44. How do you go about testing a web application?
45. Difference between Black & White box testing
46. What is Configuration management? Tools used?
47. What do you plan to become after, say, 2-5 years (e.g., QA Manager)? Why?
48. Would you like to work in a team or alone, why?
49. Give me 5 strong & weak points of yours
50. Why do you want to join our company?
51. When should testing be stopped?
52. What sort of things would you put down in a bug report?
53. Who in the company is responsible for Quality?
54. Who defines quality?
55. What is an equivalence class?
56. Is "a fast database retrieval rate" a testable requirement?
57. Should we test every possible combination/scenario for a program?
58. What criteria do you use when determining when to automate a test or leave it
manual?
59. When do you start developing your automation tests?
60. Discuss what test metrics you feel are important to publish in an organization.
61. In case anybody cares, here are the questions that I will be asking:
62. Describe the role that QA plays in the software lifecycle.
63. What should Development require of QA?
64. What should QA require of Development?
65. How would you define a "bug?"
66. Give me an example of the best and worst experiences you've had with QA.
67. How does unit testing play a role in the development / software lifecycle?
68. Explain some techniques for developing software components with respect to
testability.
69. Describe a past experience with implementing a test harness in the development
of software.
70. Have you ever worked with QA in developing test tools? Explain the
participation Development should have with QA in leveraging such test tools for QA
use.
71. Give me some examples of how you have participated in Integration Testing.
72. How would you describe the involvement you have had with the bug-fix cycle
between Development and QA?
72. What is unit testing?
73. Describe your personal software development process.
74. How do you know when your code has met specifications?
75. How do you know your code has met specifications when there are no
specifications?
76. Describe your experiences with code analyzers.
77. How do you feel about cyclomatic complexity?
78. Who should test your code?
79. How do you survive chaos?
80. What processes/methodologies are you familiar with?
81. What type of documents would you need for QA/QC/Testing?
82. How can you use technology to solve a problem?
83. What type of metrics would you use?
84. How do you find out whether a tool works well with your existing system?
85. What automated tools are you familiar with?
86. How well do you work with a team?
87. How would you ensure 100% coverage of testing?
88. How would you build a test team?
89. What problems do you have right now, or have you had in the past? How did you solve them?
90. What will you do during your first day on the job?
91. What would you like to do five years from now?
92. Tell me about the worst boss you've ever had.
93. What are your greatest weaknesses?
94. What are your strengths?
95. What is a successful product?
96. What do you like about Windows?
97. What is good code?
98. Who is Kent Beck, Dr Grace Hopper, Dennis Ritchie?
99. What are basic, core practices for a QA specialist?
100. What do you like about QA?
101. What has not worked well in your previous QA experience and what would you
change?
102. How will you begin to improve the QA process?
103. What is the difference between QA and QC?
104. What is UML, and how do you use it for testing?
105. What is CMM and CMMI? What is the difference?
106. What do you like about computers?
107. Do you have a favourite QA book? More than one? Which ones? And why.
108. What is the responsibility of programmers vs QA?
109. What are the properties of a good requirement?
110. How do you test if you have minimal or no documentation about the product?
111. What are the basic elements in a defect report?
112. Is "a fast database retrieval rate" a testable requirement?
113.Why should you care about objects and object-oriented testing?
114. What does 100% statement coverage mean?
115. How do you perform configuration management with typical revision control
systems?
116. What is code coverage?
117. What types of code coverage do you know?
118. What tools can be used for code coverage analysis?
119. Is any graph used for code coverage analysis?
120. At what stage of the development cycle are software errors least costly to
correct?
121. What can you tell about the project if during testing you found 80 bugs in it?
122. How do you monitor test progress?
123. Describe a few reasons that a bug might not be fixed.
124. What are the possible states of a software bug's life cycle?
Test Automation
20. What tools are available to support testing during the software development life
cycle?
Testing Scenarios
1. How do you find out the length of an edit box through WinRunner?
2. Is it compulsory that a tester should study a Design Document for writing integration
and system test cases?
3. What is a Testing Scenario? What is scenario-based testing? Can you explain with an
example?
4. Let's say we have a GUI map and scripts, and 5 new pages are added to the
application. How do we handle that?
5. How do you complete the testing when you have a time constraint?
6. Given a Yahoo-like application, how many test cases can you write?
7. A GUI contains 2 fields. Field 1 accepts the value of x and Field 2 displays the result of
the formula a+b/c-d, where a=0.4*x, b=1.5*a, c=x, d=2.5*b. How many system test cases
would you write?
8. How do you know that all the scenarios for testing are covered?
Web Testing
1. What is the difference between testing in client-server applications and web based
applications?
2. Without using the GUI map editor, can we recognise the application in WinRunner?
3. What command is used to launch an application in WinRunner?
6. What bugs mainly come up in Web Testing? What severity and priority do we give them?
Wireless Testing
1. What is Wireless Testing? How do we do it? What are the concepts a test engineer
should have knowledge of? How do you classify testing of wireless products?
Testing General
1. What is a workaround?
2. What is a show stopper?
3. What is Traceability Matrix? Who prepares this document?
4. What is test log document in testing process?
5. What are the Entry and Exit Criteria of a test plan? How do you automate your test
plan?
6. What is the role of QA in a company that produces software?
7. What is test terminology? Why is testing necessary? Describe the fundamental test
process and the psychology of testing.
8. What are the common bugs encountered while testing an application manually or
with test tools?
9. What are the bug and testing metrics?
10. For a bug with high severity, can we also give it high priority? If so, why do we
need both?
11. How would you differentiate between Bug, Defect, Failure, and Error?
12. What is the difference between Client Server Testing and Web Testing?
13. What is backward compatibility testing ?
14. What certifications are available in testing?
15. What is release candidate?
16. What do you think the role of test-group manager should be?
17. What is test data? Give examples.
18. What is the difference between QA, QC and testing?
19. What are severity and priority? What is a test format? A test procedure?
20. What are the different types of manual database checking?
1. How do you recognise the objects during runtime in a new build version (test suite)
compared with the old GUI map?
2. wait(20) - What are the minimum and maximum times the above-mentioned
synchronization statement will wait, given that the global default timeout is set to 15
seconds?
3. Where in the user-defined function library should a new error code be defined?
4. In a modular test tree, each test will receive the values for the parameters passed
from the main test. These parameters are defined in the Test Properties dialog box of
each test. Referring to the above, in which one of the following files are changes made
in the Test Properties dialog saved?
5. What is the scripting process in Winrunner?
6. How many scripts can we generate for one project?
7. What is the command in Winrunner to invoke IE Browser? And once I open the IE
browser is there a unique way to identify that browser?
8. How do you load default comments into your new script, as IDEs do?
9. What new features were added in QTP 8.0 compared to QTP 6.0?
10. When will you go for automation?
11. How do you test a stored procedure?
13. What is the use of GUI files in WinRunner?
14. Without using a data-driven test, how can we test the application with different
sets of inputs?
15. How do you load a compiled module inside a compiled module?
16. Can you tell me the bug life cycle?
17. How to find the length of the edit box through WinRunner?
18. What is the file type of WinRunner test files, and what is its extension?
19. What is candidate release?
20. What types of variables can be used within a TSL function?
QA Testing
1. If the actual result doesn't match the expected result, what should we do?
2. What is the importance of requirements traceability in a product testing?
3. When is the best time for system testing?
4. What is use case? What is the difference between test cases and use cases?
5. What is the difference between a test case and a test script?
6. Describe the basic elements you put in a defect report.
7. How do you test if you have minimal or no documentation about the product?
8. How do you decide when you have tested enough?
9. How do you determine what to test?
10. In general, how do you see automation fitting into the overall process of testing?
11. How do you deal with environments that are hostile to quality change efforts?
12. Describe to me the Software Development Life Cycle as you would define it?
13. Describe to me when you would consider employing a failure mode and effects
analysis.
14. What is the role of QA in a company that produces software?
15. How do you scope, organize, and execute a test project?
16. How can you test the white page?
17. What is the role of QA in a project development?
18. How have you used white-box and black-box techniques in your application?
19. What are the demerits of WinRunner? After we write the test data, what principles
do we follow to test an application?
20. What is the job of Quality Assurance Engineer? Difference between the Testing &
Quality Assurance job.
LoadRunner
1. What is load testing? Can we test J2ME application with load runner ? What is
Performance testing?
2. Which protocol has to be selected for record/playback of an Oracle 9i application?
3. What are the enhancements which have been included in loadrunner 8.0 when
compared to loadrunner 6.2?
4. Can we use LoadRunner for testing desktop applications or non-web-based
applications, and how do we use it?
5. How do you call a WinRunner script in LoadRunner?
6. What are the types of parameterisation in LoadRunner? List the steps to do stress
testing.
7. What are the steps for doing load and performance testing using Load Runner?
8. What are concurrent load and correlation? What is the LoadRunner process?
9. What is planning for the test?
10. What enables the controller and the host to communicate with each other in Load
Runner?
11. Where is Load testing usually done?
12. What are the only means of measuring performance?
13. Testing requirement and design are not part of what?
14. According to Market analysis 70% of performance problem lies with what?
15. What is the level of system loading expected to occur during specific business
scenario?
16. What is a run-time setting?
17. When is LoadRunner used?
18. What protocols does LoadRunner support?
19. What do you mean by creating a Vuser script?
20. What is rendezvous point?
Common Questions
1. If you have an application, but you do not have any requirements available, then how would you perform
the testing?
2. How can you know if a test case is necessary?
3. What is peer review in practical terms?
4. How do you know when you have enough test cases to adequately test a software system or module?
5. Who approved your test cases?
6. What will you do when you find a bug?
7. What test plans have you written?
8. What is QA? What is Testing? Are they both same or different?
9. How do you write a negative test case? Give an example.
10. In an application currently in production, one module of code is being modified. Is it necessary to re-test
the whole application or is it enough to just test functionality associated with that module?
11. What is included in a test strategy? What is the overall process of testing, step by step, and what
documents are used during the testing process?
12. What was the most challenging situation you had during testing?
13. What are you going to do if there is no Functional Spec or any documents related to the system and
developer who wrote the code does not work in the company anymore, but you have system and need to
test?
14. What major problem did you resolve during the testing process?
15. What are the types of functional testing?
16. How will you write integration test cases? How will you track bugs from WinRunner? How will you
mark test cases as pass/fail? When you find a bug, how will you report it? How do you decide whether a
test case has found a bug? What is a use case, and what does it contain?
17. What is the difference between smoke testing and sanity testing?
18. What is Random Testing?
19. What is smoke testing?
20. What is stage containment in testing?
Bug Tracking
1. What is the difference between a Bug and a Defect?
2. How do you post a bug?
3. How do we track a bug? What spreadsheet format do we use to record bug details? How do we
assign severity and priority to bugs?
4. What are the different types of Bugs we normally see in any of the Project? Include the severity as well.
5. Top Ten Tips for Bug Tracking
Performance Testing is the process by which software is tested and tuned with the intent
of realizing the required performance.
• Speed — does the application respond quickly enough for the intended users?
• Scalability — will the application handle the expected user load and beyond?
• Throughput — will the application handle the number of transactions required by the business?
• Stability — is the application stable under expected and unexpected user loads?
• The Virtual User Generator captures end-user business processes and creates an
automated performance testing script, also known as a virtual user script.
• The Controller organizes, drives, manages, and monitors the load test.
• The Load Generators create the load by running virtual users.
• The Analysis helps you view, dissect, and compare the performance results.
• The Launcher provides a single point of access for all of the LoadRunner components.
LoadRunner Terminology
A scenario is a file that defines the events that occur during each testing session, based on
performance requirements.
In the scenario, LoadRunner replaces human users with virtual users or Vusers. Vusers
emulate the actions of human users working with your application. A scenario can contain
tens, hundreds, or even thousands of Vusers.
The actions that a Vuser performs during the scenario are described in a Vuser script. To
measure the performance of the server, you define transactions. A transaction represents
end-user business processes that you are interested in measuring.
Load testing typically consists of five phases: planning, script creation, scenario
definition, scenario execution, and results analysis.
Plan Load Test: Define your performance testing requirements, for example, number of
concurrent users, typical business processes and required response times.
Create Vuser Scripts: Capture the end-user activities into automated scripts.
Define a Scenario: Use the LoadRunner Controller to set up the load test environment.
Run a Scenario: Drive, manage, and monitor the load test from the LoadRunner
Controller.
Analyze the Results: Use LoadRunner Analysis to create graphs and reports, and
evaluate the performance.
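Real LoadRunner Vuser scripts are written in its own C-based scripting language, so purely as an illustration of the Vuser / transaction / scenario vocabulary above, here is a rough sketch in Python using threads. The workload function and all timings are invented for the example:

```python
import threading
import time
from statistics import mean

def business_transaction():
    """Stand-in for one end-user business process (hypothetical workload)."""
    time.sleep(0.01)  # simulates the server response time

results = []            # response times collected across all Vusers
lock = threading.Lock()

def vuser(iterations=5):
    """Emulates one virtual user: repeats the transaction, timing each run."""
    for _ in range(iterations):
        start = time.perf_counter()
        business_transaction()
        elapsed = time.perf_counter() - start
        with lock:
            results.append(elapsed)

# Scenario: 10 concurrent Vusers, then a simple analysis of the results.
vusers = [threading.Thread(target=vuser) for _ in range(10)]
for t in vusers:
    t.start()
for t in vusers:
    t.join()

print(f"transactions: {len(results)}, avg response: {mean(results):.4f}s")
```

The point is only the shape of a load test: concurrent users, a named transaction, and response-time analysis; a real tool adds protocol capture, think time, rendezvous points, and monitoring.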
Conclusion:
LoadRunner has good reporting features with which the user can easily analyze
performance test results.
User Acceptance Testing is often the final step before rolling out the application.
Usually the end users who will be using the applications test the application before
‘accepting’ the application.
This type of testing gives the end users the confidence that the application being
delivered to them meets their requirements.
This testing also helps nail bugs related to usability of the application.
Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration and System) are already completed before
User Acceptance Testing is done. As various levels of testing have been completed most
of the technical bugs have already been fixed before UAT.
During this type of testing the specific focus is the exact real world usage of the
application. The Testing is done in an environment that simulates the production
environment.
The Test cases are written using real world scenarios for the application
The user acceptance testing is usually a black box type of testing. In other words, the
focus is on the functionality and the usability of the application rather than the technical
aspects. It is generally assumed that the application would have already undergone Unit,
Integration and System Level Testing.
However, it is useful if the User acceptance Testing is carried out in an environment that
closely resembles the real world or production environment.
The steps taken for User Acceptance Testing typically involve one or more of the
following:
1) User Acceptance Test (UAT) Planning
2) Designing UA Test Cases
3) Selecting a Team that would execute the UAT Test Cases
4) Executing Test Cases
5) Documenting the Defects found during UAT
6) Resolving the issues / Bug Fixing
7) Sign Off
Each User Acceptance Test Case describes in a simple language the precise steps to be
taken to test something.
The Business Analysts and the Project Team review the User Acceptance Test Cases.
Sign Off:
Upon successful completion of the User Acceptance Testing and resolution of the issues
the team generally indicates the acceptance of the application. This step is important in
commercial software sales. Once the users "accept" the software delivered, they indicate
that the software meets their requirements.
The users are now confident in the software solution delivered, and the vendor can be
paid for it.
How does System Testing fit into the Software Development Life Cycle?
In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the
individual components are working OK. The ‘Integration testing’ focuses on successful
integration of all the individual pieces of software (components or units of code).
Once the components are integrated, the system as a whole needs to be rigorously tested
to ensure that it meets the Quality Standards.
Thus the System testing builds on the previous levels of testing namely unit testing and
Integration Testing.
Usually a dedicated testing team is responsible for doing ‘System Testing’.
- In the Software Development Life Cycle, System Testing is the first level where
the System is tested as a whole
- The System is tested to verify whether it meets the functional and technical
requirements
- The application/System is tested in an environment that closely resembles the
production environment where the application will finally be deployed
- System Testing enables us to test, verify, and validate both the Business
requirements and the Application Architecture
When necessary, several iterations of System Testing are done in multiple environments.
As you may have read in the other articles in the testing series, this document typically
describes the following:
- The Testing Goals
- The key areas to be focused on while testing
- The Testing Deliverables
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
A Test Case describes exactly how the test should be carried out.
The System test cases help us verify and validate the system.
The System Test Cases are written such that:
- They cover all the use cases and scenarios
- They validate the technical Requirements and Specifications
- They verify whether the application/System meets the Business and Functional
Requirements specified
- They may also verify whether the System meets the performance standards
Since a dedicated test team may execute the test cases, it is necessary that the System Test
Cases be sufficiently detailed. Detailed Test cases help the test executors do the testing as
specified without any ambiguity.
The format of the System Test Cases may be like all other Test cases as illustrated below:
• Test Case ID
• Test Case Description:
o What to Test?
o How to Test?
• Input Data
• Expected Result
• Actual Result
Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
... | ... | ... | ... | ... | ... | ...
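The columns above map naturally onto a record structure. A minimal sketch in Python, with field names taken from the table and a hypothetical sample row:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One row of the System Test Case table above."""
    case_id: str
    what_to_test: str
    how_to_test: str
    input_data: str
    expected_result: str
    actual_result: str = ""

    @property
    def status(self) -> str:
        """Pass/Fail is derived by comparing actual against expected."""
        if not self.actual_result:
            return "Not Run"
        return "Pass" if self.actual_result == self.expected_result else "Fail"

# Hypothetical example row
tc = TestCase(
    case_id="TC-001",
    what_to_test="Login page",
    how_to_test="Submit valid credentials",
    input_data="user=alice, password=secret",
    expected_result="Dashboard is displayed",
)
tc.actual_result = "Dashboard is displayed"
print(tc.case_id, tc.status)  # TC-001 Pass
```

Deriving Pass/Fail from the expected and actual columns, rather than recording it by hand, removes one source of transcription error.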
1) Test Coverage: System Testing will be effective only to the extent of the coverage of
Test Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered
by the test cases are sufficient. The Test cases should “cover” all scenarios, use cases,
Business Requirements, Technical Requirements, and Performance Requirements. The
test cases should enable us to verify and validate that the system/application meets the
project goals and specifications.
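One simple way to reason about adequacy is to compare the set of required scenarios against the scenarios the current test cases actually exercise. A sketch, with invented scenario names:

```python
# Scenarios the project requires to be tested (hypothetical names).
required = {"login", "logout", "transfer", "statement", "password_reset"}

# Scenarios actually exercised by the current test cases.
covered_by_tests = {"login", "logout", "transfer"}

missing = required - covered_by_tests
coverage = len(required & covered_by_tests) / len(required)

print(f"coverage: {coverage:.0%}, missing: {sorted(missing)}")
```

The gap set tells the team exactly which requirements still need test cases before coverage can be called adequate.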
2) Defect Tracking: The defects found during the process of testing should be tracked.
Subsequent iterations of test cases verify if the defects have been fixed.
3) Test Execution: The Test cases should be executed in the manner specified. Failure to
do so results in improper Test Results.
4) Build Process Automation: A lot of errors occur due to an improper build. A ‘Build’ is
a compilation of the various components that make up the application, deployed in the
appropriate environment. The Test results will not be accurate if the application is not
‘built’ correctly or if the environment is not set up as specified. Automating this process
may help reduce manual errors.
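As one way this automation might look, a minimal build driver can run each step and stop the build on the first failure. The step names and commands below are placeholders, not a real build:

```python
import subprocess
import sys

def run_step(name, cmd):
    """Run one build step; abort the whole build if it fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"build step '{name}' FAILED:\n{result.stderr}")
        sys.exit(1)
    print(f"build step '{name}' ok")
    return result

# Real steps would compile, package, and deploy the application; here a
# trivial command stands in for each step.
run_step("compile", [sys.executable, "-c", "print('compiled')"])
run_step("package", [sys.executable, "-c", "print('packaged')"])
```

Because the script either completes every step or exits non-zero, the testers always know whether the build they received was assembled as specified.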
If a piece of Software is modified for any reason testing needs to be done to ensure that it
works as specified and that it has not negatively impacted any functionality that it offered
previously. This is known as Regression Testing.
Regression Testing plays an important role in any scenario where a change has been
made to previously tested software code. Regression Testing is hence an important
aspect of various Software Methodologies where software changes and enhancements
occur frequently.
Any Software Development Project is invariably faced with requests for changing
Design, code, features or all of them.
Each change implies more Regression Testing needs to be done to ensure that the System
meets the Project Goals.
All this affects the quality and reliability of the system. Hence Regression Testing, since
it aims to verify all this, is very important.
Every time a change occurs one or more of the following scenarios may occur:
- More Functionality may be added to the system
- More complexity may be added to the system
- New bugs may be introduced
- New vulnerabilities may be introduced in the system
- System may tend to become more and more fragile with each change
After the change the new functionality may have to be tested along with all the original
functionality.
With each change Regression Testing could become more and more costly.
To make the Regression Testing Cost Effective and yet ensure good coverage one or more
of the following techniques may be applied:
- Test Automation: If the Test cases are automated, they may be executed using
scripts after each change is introduced in the system. Executing test cases in this
way helps eliminate oversight and human error. It may also result in faster and cheaper
execution of Test cases. However, there is a cost involved in building the scripts.
- Selective Testing: Some Teams choose to execute the test cases selectively. They do not
execute all the Test Cases during Regression Testing; they test only what they decide
is relevant. This helps reduce the Testing Time and Effort.
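The automation technique can be sketched as a baseline comparison: record expected outputs once, then re-run every case after each change. The function under test is invented for illustration:

```python
def discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# Baseline: inputs paired with their expected outputs, recorded before the change.
baseline = [
    ((100.0, 10), 90.0),
    ((59.99, 25), 44.99),
    ((0.0, 50), 0.0),
]

def run_regression():
    """Re-run every baseline case and collect any regressions."""
    failures = []
    for args, expected in baseline:
        actual = discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

failures = run_regression()
print("regressions:", failures)  # an empty list means no regressions
```

After each change, re-running the whole baseline costs nothing extra, which is exactly the economy the Test Automation technique above buys.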
Since Regression Testing verifies the software application after a change has been
made, everything that may be impacted by the change should be tested during Regression
Testing.
The Role of Test Lead / Manager is to effectively lead the testing team. To fulfill this role
the Lead must understand the discipline of testing and how to effectively implement a
testing process while fulfilling the traditional leadership roles of a manager. What does
this mean? The manager must manage and implement or maintain an effective testing
process. This involves creating a test infrastructure that supports robust communication
and a cost effective testing framework.
• Defining and implementing the role testing plays within the organizational structure.
• Defining the scope of testing within the context of each release / delivery.
• Deploying and managing the appropriate testing framework to meet the testing mandate.
• Implementing and evolving appropriate measurements and metrics.
o To be applied against the Product under test.
o To be applied against the Testing Team.
• Planning, deploying, and managing the testing effort for any given engagement / release.
• Managing and growing Testing assets required for meeting the testing mandate:
o Team Members
o Testing Tools
o Testing Process
• Retaining skilled testing personnel.
The Test Lead must understand how testing fits into the organizational structure; in other
words, he or she must clearly define its role within the organization. This is often
accomplished by crafting a Mission Statement or a defined Testing Mandate. For example:
"To prevent, detect, record, and manage defects within the context of a defined release."
It then becomes the task of the Test Lead to communicate and implement effective
managerial and testing techniques to support this "simple" mandate. Expectations of your
team, your peers (Development Lead, Deployment Lead, and other leads), and your
superior need to be set appropriately given the timeframe of the release and the maturity of
the development team and testing team. These expectations are usually defined in terms
of functional areas deemed to be in Scope or out of Scope. For example:
In Scope:
Out of Scope:
• Security
• Backup and Recovery
• ...
The definition of Scope will change as you move through the various stages of testing but
the key is to ensure that your testing team and the organization as a whole clearly
understands what is and what is not being tested for the current release.
The Test Lead / Manager must employ the appropriate Testing Framework or Test
Architecture to meet the organization's testing needs. While the Testing Framework
requirements for any given organization are difficult to define, there are several questions
the Test Lead / Manager must ask; the answers to these questions and others
will define the short-term and long-term goals of the Testing Framework.
Preventing defects from occurring involves testing before the product is constructed or
built. There are several methods for accomplishing this goal, the most powerful and
cost-effective being Reviews. Reviews can be either formal technical reviews or peer
reviews. Formal product development life cycles will provide the testing team with useful
materials and deliverables for the review process. When properly implemented, any
effective development paradigm should supply these deliverables. For example:
• Waterfall (cascade)
o Requirements
o Functional Specifications
• Agile or Extreme
o High level Requirements
o Storyboards
Testing needs to be involved in this Review process, and any defects found need to be
recorded and managed. Detecting defects then continues through the later phases of the
life cycle:
• Design Review
• Unit Testing
• Function Testing
• System Testing
• User Acceptance Testing
The Testing Team should be involved in at least three of these phases: Design Review,
Function Testing, and System Testing.
Functional Testing involves the design, implementation, and execution of test cases
against the functional specification and / or functional requirements for the product. This
is where the testing team measures the functional implementation against the product
intent using well-formulated test cases and notes any discrepancies as defects (faults). For
example, testing to ensure the web page allows the entry of a new forum member: in this
case we are testing to ensure the web page functions as an interface.
System Testing follows much the same course (design, implement, execute, and record
defects), but the intent or focus is very different. While Functional Testing focuses on
discrete functional requirements, System Testing focuses on the flow through the system
and the connectivity between related systems. For example, testing to ensure the
application allows the entry, activation, and recovery of a new forum member: in this case
we are testing to ensure the system supports the business. There are several types of
System Testing; what is required for any given release should be determined by the Scope:
• Security
• Performance
• Integration
This will then provide the data for a minimal set of metrics.
How disciplined the testing organization needs to become and what measurements and
metrics are required are dependent on a cost benefit analysis by the Test Lead / Manager.
What makes sense in terms of the stated goals and previous performance of the testing
organization?
• If the timelines are impacted, modify the Test Plan appropriately in terms of Scope.
• Clearly communicate the situation to the testing team and project management.
• Keep clear lines of communication to Development and project management.
• Whenever possible sell, sell, sell the importance and contributions of the Testing Team.
• Ensure the testing organization has clearly defined roles for each member of the team and
a well-defined career path.
• Measure and communicate testing return on investment -- if the detected defect would
have reached the field what would have been the cost.
• Explain testing expenditures in terms of investment (ROI) not cost.
• Finally, never lose your cool -- Good luck.
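The ROI point above is worth making concrete. A minimal sketch of the calculation, with entirely hypothetical figures (the defect counts and costs below are invented for illustration, not taken from the text):

```python
# Illustrative testing ROI: compare the field cost the caught defects would
# have incurred against what the testing effort itself cost.
# All figures are hypothetical assumptions.

defects_caught = 40          # defects found before release (assumed)
avg_field_fix_cost = 5000.0  # assumed cost per defect if it reached the field
testing_cost = 120000.0      # assumed cost of the testing effort

cost_avoided = defects_caught * avg_field_fix_cost
roi = (cost_avoided - testing_cost) / testing_cost
print(f"Cost avoided: ${cost_avoided:,.0f}, testing ROI: {roi:.0%}")
```

Framed this way, the same spreadsheet line that reads as a cost becomes an investment with a measurable return.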
Interviewing at Microsoft
Over the years I've been collecting interview questions from Microsoft. I guess I started
this hobby with the intent of working there some day, although I still have never
interviewed there myself. However, I thought I'd give all of those young Microserf
wanna-bes a leg up and publish my collection so far. I've actually known people to study
for weeks for a Microsoft interview. Instead, kids this age should be out having a life. If
you're one of those -- go outside! Catch some rays and chase that greenish monitor glow
from your face!
If you've actually interviewed at Microsoft, please feel free to contribute your wacky
Microsoft interview stories.
I walked into my first technical interview at Microsoft, and before I could say anything,
the woman says, "You're in an 8x8 stone corridor." I blink and sit down.
Me: Ok.
Me: Uh, a crossbow?
Me: <completely freaked out and off my game> Holy crap, what have I gotten myself
into.
She then tells me that she asks that question for two reasons: 1) because she wants to
know if the candidate is a gamer (which is apparently really important; please note: I'm
not a gamer) and 2) because she wants her question to show up on some website. I hate to
accommodate her, but this is definitely the weirdest interview question I've ever heard of.
Stumped
Tue, 9/6/05, 2:29pm
Scott Hanselman's "Great .NET Developer" Questions
Tue, 2/22/05 12:30pm
Scott Hanselman has posted a set of questions that he thinks "great" .NET developers
should be able to answer in an interview. He even splits it up into various categories,
including:
Am I the only one that skipped ahead to "Senior Developers/Architects" to see if I could
cut Scott's mustard?
• Just Do It
• Remember, no matter how well you might know your interviewer, it is
important not to forget that it is still an interview
• Pseudocode! Pseudocode! Pseudocode!
• But, as long as you verbalize what you're thinking you should be in pretty
good shape
• Bring an energy bar or something to snack on between breaks in order to
keep your energy level up
• [B]ring a bottle of water and keep it filled up
• A lunch interview is still an interview!
• Know the position you're interviewing for [ed: unless you're interviewing
for an editing position, in which case you should know the position for
which you're interviewing]
• Your interview day is not only your opportunity to be interviewed, but also
your opportunity to interview the team/company
After seeing all of those pictures in Wired of the wacky letters that people send, I love the
idea of Michael Swanson opening the floodgates by sending his resume along with a life-
size cardboard figure. What's next?
Channel9 did what I was unable to ever get done: filmed some of the interview process
(part 1, part 2 and part 3). It's not an actual interview, but Gretchen Ledgard and Zoe
Goldring, both Central Sourcing Consultants at HR for MS, lead you through what to
expect at a Microsoft interview, providing a wealth of wonderful tips, e.g.
• MS is casual, so it doesn't matter so much what you wear (i.e. don't feel
you have to wear a suit, but don't show up in flip-flops and headphones
around your neck [still playing!]). Regardless of what you wear, it's what's
in your head that's important.
• Interact a lot with the interviewer. Ask questions, think out loud, etc. The
questions are meant to be vague and again, it's about what's going on in
your head, so verbalize it.
• Bring water if you're thirsty, not coffee, as spilling coffee is going to leave a
much more lasting stain/impression.
• MS rarely asks logic/riddle questions anymore. They're not a good
indicator of a good employee.
• Expect coding questions if you're a dev, testing questions if you're a tester
and passion questions no matter what.
• If an MS recruiter calls, don't expect them to have a specific job in mind.
Instead, expect to be asked what you'd like to do at MS.
• If the first interview doesn't take, it may well be that you're right for MS but
not right for that job. It can literally take years to find the right job for you at
MS.
BTW, I have to say that I never got a ride on an HR shuttle. I guess they save that for the
"good" hires... : )
Discuss
A friend of mine sent along some questions he was asked for a SDE/T position at
Microsoft (Software Development Engineer in Test):
1. How would you deal with changes being made a week or so before
the ship date?
2. How would you deal with a bug that no one wants to fix? Both the
SDE and his lead have said they won't fix it.
3. Write a function that counts the number of primes in the range [1-
N]. Write the test cases for this function.
4. Given a MAKEFILE (yeah, a makefile), design the data structure
that a parser would create and then write code that iterates over
that data structure executing commands if needed.
5. Write a function that inserts an integer into a linked list in
ascending order. Write the test cases for this function.
6. Test the save dialog in Notepad. (This was the question I enjoyed
the most.)
7. Write the InStr function. Write the test cases for this function.
8. Write a function that will return the number of days in a month
(without using System.DateTime).
9. You have 3 jars. Each jar has a label on it: white, black, or
white&black. You have 3 sets of marbles: white, black, and
white&black. One set is stored in one jar. The labels on the jars are
guaranteed to be incorrect (i.e. white will not contain white). Which
jar would you choose from to give you the best chance of
identifying which set of marbles is in which jar?
10. Why do you want to work for Microsoft?
11. Write the test cases for a vending machine.
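Question 3 asks for both the function and its tests, which is the heart of the SDE/T role. One possible sketch in Python (the original role targets .NET, so take this as illustrative, not the expected answer):

```python
def count_primes(n):
    """Count the primes in the range [1, n] using a Sieve of Eratosthenes."""
    if n < 2:
        return 0
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Mark every multiple of p starting at p*p as composite.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return sum(is_prime)

# Test cases an interviewer would look for: boundaries and known counts.
assert count_primes(0) == 0    # below the smallest prime
assert count_primes(1) == 0    # 1 is not prime
assert count_primes(2) == 1    # smallest prime
assert count_primes(10) == 4   # 2, 3, 5, 7
assert count_primes(100) == 25 # well-known prime count
```

The test cases matter as much as the sieve: boundary values (0, 1, 2), a small hand-checkable range, and a known reference count.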
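For question 4, one plausible shape for the parser's output is a map from each target to its prerequisites and commands, walked depth-first. The rule names and commands below are invented for illustration, and the rebuild logic is deliberately simplified (no timestamp checks):

```python
# A minimal data structure a makefile parser might produce: each target maps
# to its prerequisites ("deps") and the shell commands ("cmds") that build it.
rules = {
    "app":    {"deps": ["main.o", "util.o"], "cmds": ["link main.o util.o -o app"]},
    "main.o": {"deps": [], "cmds": ["cc -c main.c"]},
    "util.o": {"deps": [], "cmds": ["cc -c util.c"]},
}

def build(target, executed):
    """Depth-first walk: build all dependencies, then run this target's commands."""
    for dep in rules[target]["deps"]:
        build(dep, executed)
    for cmd in rules[target]["cmds"]:
        if cmd not in executed:   # run each command at most once
            executed.append(cmd)

executed = []
build("app", executed)
# Dependencies are built before the targets that need them.
assert executed == ["cc -c main.c", "cc -c util.c", "link main.o util.o -o app"]
```

A real answer would also handle cycles and file timestamps, but the dict-of-rules shape is the core of the data-structure part of the question.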
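Question 5 again pairs implementation with tests. A Python sketch of sorted insertion into a singly linked list, with the edge cases a tester should name (empty list, new head, middle, tail, duplicates):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_sorted(head, value):
    """Insert value into an ascending linked list; return the (possibly new) head."""
    node = Node(value)
    if head is None or value <= head.value:
        node.next = head          # new node becomes the head
        return node
    current = head
    while current.next is not None and current.next.value < value:
        current = current.next
    node.next = current.next      # splice in after `current`
    current.next = node
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out

# Test cases: empty list, insert at head, middle, tail, and a duplicate value.
head = None
for v in [5, 1, 3, 9, 3]:
    head = insert_sorted(head, v)
assert to_list(head) == [1, 3, 3, 5, 9]
```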
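Question 7 refers to the classic VB `InStr`, which returns the 1-based position of a substring or 0 when absent. A Python sketch under that assumption (the empty-needle behavior is my reading of classic `InStr`, so treat it as an assumption):

```python
def in_str(haystack, needle, start=1):
    """VB-style InStr: 1-based position of needle in haystack, searching from
    position `start`; returns 0 if not found. An empty needle returns `start`
    (assumed to match classic InStr behavior)."""
    if needle == "":
        return start
    idx = haystack.find(needle, start - 1)  # convert to 0-based for find()
    return idx + 1 if idx >= 0 else 0

# Test cases: found, not found, start offset, empty needle, empty haystack.
assert in_str("hello world", "world") == 7
assert in_str("hello world", "xyz") == 0
assert in_str("abcabc", "abc", 2) == 4
assert in_str("abc", "") == 1
assert in_str("", "a") == 0
```

The off-by-one conversion between 1-based and 0-based indexing is exactly the kind of boundary an interviewer hopes your test cases will probe.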
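Question 8 hinges on the Gregorian leap-year rule (divisible by 4, except centuries, except multiples of 400). A Python sketch, since `System.DateTime` is off-limits anyway:

```python
def days_in_month(year, month):
    """Days in the given month, computed without any date library."""
    if month < 1 or month > 12:
        raise ValueError("month must be 1-12")
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    # Gregorian leap year: divisible by 4, but centuries only if divisible by 400.
    if month == 2 and year % 4 == 0 and (year % 100 != 0 or year % 400 == 0):
        return 29
    return days[month - 1]

# Test cases: ordinary months plus all three branches of the leap-year rule.
assert days_in_month(2023, 1) == 31
assert days_in_month(2023, 2) == 28
assert days_in_month(2024, 2) == 29   # divisible by 4
assert days_in_month(1900, 2) == 28   # century, not divisible by 400
assert days_in_month(2000, 2) == 29   # divisible by 400
```

The 1900/2000 pair is the test most candidates forget, and the one the question is really fishing for.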
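Question 9 has a classic answer: draw from the jar labeled "white&black". Since every label is wrong, that jar holds a pure color; one marble reveals which, and the other two jars follow by elimination. A short brute-force check of that reasoning over all fully mislabeled arrangements:

```python
from itertools import permutations

labels = ("white", "black", "mixed")

# All arrangements where every jar is mislabeled (a derangement of the labels).
valid = [p for p in permutations(labels)
         if all(content != label for content, label in zip(p, labels))]
assert len(valid) == 2  # only two fully-mislabeled arrangements exist

for contents in valid:
    drawn = contents[labels.index("mixed")]
    assert drawn in ("white", "black")  # the "mixed"-labeled jar is always pure
    # Seeing that one color uniquely identifies the entire arrangement:
    consistent = [p for p in valid if p[labels.index("mixed")] == drawn]
    assert len(consistent) == 1
```

A single marble from either other jar would not disambiguate, which is why the mislabeled "white&black" jar is the right choice.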