IT2032 Software Testing - Question Bank
Year: IV Sem: VIII “A” & “B”
Sub Name: Software Testing Sub Code: IT2032
UNIT-I
PART-A
- Define software testing.
- List out the difference between verification and
validation.
- List out the Quality factors of testing.
- What is the role of the test specialist in software
development process?
- What are the components of engineered process?
- Write down the ingredients of the test cases.
- Differentiate error, fault, and failure.
- Define a test oracle.
- Define Test bed.
- What is software quality?
- Define a Test.
- What is metric and quality metric?
- Mention the role of SQA Group.
- Why, according to Principle 5, is it important to develop
test cases for both valid and invalid input conditions?
- Give some challenges facing a tester.
- Define principle.
- Define Reviews.
- From your own personal experience, what are the major
sources of defects in the software you have developed?
- Write a short note on the defect model.
- Give the pre- and postconditions of the defect model
with an example.
- Give some sample specifications with examples.
- Define a defect repository.
PART-B
- Explain in detail how the testing principles
are important to test specialists.
- Suppose you were reviewing a
requirements document and noted that a feature was described incompletely. How would
you classify this defect? How would you ensure that it was corrected?
- Explain the testing process in
detail.
- Suppose you are a member of a team
that was designing a defect repository. What organizational approach would you
suggest and why? What information do you think should be associated with
each defect? Why is this information useful,
and who would use
it?
- Suppose you are testing a code
component and you discover a defect: it calculates an output variable
incorrectly. (a) How would you classify this defect? (b) What are the
likely causes of this defect? (c) What steps could have been taken to
prevent this type of defect from propagating to the code?
- Suppose you were reviewing a
design document and noted that a feature was described incompletely. How
would you classify this defect? How would you ensure that it was
corrected?
- Suppose you were reviewing work products from the
coding and testing phase and noted that a feature was described
incompletely. How would you classify this defect? How would you ensure
that it was corrected?
- Elaborate in detail on the difficulties and challenges for the
tester according to Principle 11.
- Why, according to Principle 5, is
it important to develop test cases for both valid and invalid input
conditions?
- With respect to Principle 3—‘‘test
results should be meticulously inspected’’— why do you think this is
important to the tester? Discuss any experiences you have had where poor
inspection of test results has led to delays in your testing
efforts.
UNIT-II
PART-A
1. What is the responsibility of the smart tester in designing tests?
2. List out the two basic testing strategies.
3. Define random testing with an example.
4. What are the advantages of equivalence class partitioning?
5. Describe the rule of thumb.
6. Give the sample cause-and-effect graph notations.
7. What is a finite state machine?
8. What is error guessing?
9. What is certification of COTS components?
10. Define COTS.
11. Differentiate between random testing and testing using error guessing.
12. Differentiate between the white box and black box testing strategies.
13. What is regression testing?
14. What is recovery testing?
15. Define alpha testing.
16. Define acceptance testing.
17. Define a use case with an example.
18. Under what circumstances may the planned degree of coverage be less than 100%?
19. What are the flows of control in a unit of code?
20. Define a path.
21. Define cyclomatic complexity.
22. How is the cyclomatic complexity value computed? Specify with an example.
23. Mention the data-flow-based test adequacy criteria identified by Rapps and Weyuker.
24. List down the two assumptions of mutation testing.
25. Define a test set with an example.
26. Mention some test cases for loop testing.
27. What is the monotonicity property?
28. Explain the computation of the mutation score.
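Question 28 asks for the computation of the mutation score. A minimal sketch, using the standard formula (killed mutants divided by non-equivalent mutants); the counts below are illustrative, not from any specific exercise:

```python
# Mutation score sketch: score = killed / (total - equivalent).
# All counts here are invented for illustration.
def mutation_score(killed: int, total: int, equivalent: int) -> float:
    """Fraction of non-equivalent mutants killed by the test set."""
    return killed / (total - equivalent)

# Example: 100 mutants generated, 10 judged equivalent, 72 killed.
print(mutation_score(72, 100, 10))  # 0.8
```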
PART-B
- Describe the difference between
the white box and black box testing strategies.
- Explain the differences between
random testing and testing using error guessing.
- Define the equivalence classes and
develop a set of test cases to cover them for the following module
description. The module accepts a color for a part. Color choices are
{RED, BLUE, YELLOW, VIOLET}.
- Define the equivalence classes and
develop a set of test cases to cover them for the computation of square
root module description.
- Briefly explain the following: (i) cause-and-effect
analysis, (ii) state transition testing, and (iii) error guessing.
- Suppose a tester believes a unit
contains a specification defect. Which testing strategy would be best to
uncover the defect and why?
- Draw a state transition diagram
for a simple stack machine. Assume the stack holds n data items where n is
a small positive number. It has operations ‘‘push’’ and ‘‘pop’’ that cause
the stack pointer to increment or decrement, respectively. The stack can
enter states such as ‘‘full’’ if it contains n items and, ‘‘empty’’ if it contains
no items. Popping an item from the empty stack, or pushing an item on the
full stack cause a transition to an error state. Based on your state
transition diagram, develop a set of black box test cases that cover the
key state transitions. Be sure to describe the exact sequence of inputs,
as well as the expected sequence of
state changes and actions.
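The stack-machine question above can be grounded in a small executable model. The sketch below is one possible encoding (the state labels and operation names are my own choices): states Empty, Partial, Full, and Error for a stack of capacity n, with a black-box test sequence driven through the key transitions:

```python
# Executable sketch of the stack machine state model.
# State labels ("Empty", "Partial", "Full", "Error") are illustrative.
class StackMachine:
    def __init__(self, n):
        self.n = n          # capacity
        self.count = 0      # current number of items
        self.error = False  # set on an illegal push/pop

    def state(self):
        if self.error:
            return "Error"
        if self.count == 0:
            return "Empty"
        if self.count == self.n:
            return "Full"
        return "Partial"

    def push(self):
        if self.count == self.n:
            self.error = True   # push on a full stack -> Error
        else:
            self.count += 1

    def pop(self):
        if self.count == 0:
            self.error = True   # pop from an empty stack -> Error
        else:
            self.count -= 1

# One black-box test sequence covering key transitions:
# Empty -> Partial -> Full -> Error.
m = StackMachine(2)
print(m.state())  # Empty
m.push()
print(m.state())  # Partial
m.push()
print(m.state())  # Full
m.push()          # push on a full stack
print(m.state())  # Error
```

Each test case in the exercise would pair such an input sequence with the expected sequence of states, as the question requires.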
UNIT-III
PART-A
- How would you define a software
test?
- Discuss the need for the levels of
testing.
- Mention some components suitable
for unit test.
- Define a unit test.
- What are several tasks to be
performed to prepare for unit test?
- What are the phases of unit test?
- Define class as a testable unit.
- What are the goals of integration
test?
- Show the top down integration of
modules.
- Define cluster with an example.
- What do you mean by system test?
- What are the types of system test?
- What are the goals of functional
testing?
- Give the examples of special
resources needed for a performance test.
- Mention the objectives of Beizer's
configuration testing.
- What are testing techniques?
- Define a cluster.
- What are the major requirements of performance
testing?
- Define stress testing.
- Define configuration testing.
- What are the various methods by
which damage can be done?
- List out the effects of security.
- Mention some areas of security
testing.
- Define a use case.
- How would you define a test
harness?
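The final question above asks how to define a test harness. A harness is typically a driver that exercises the unit under test plus stubs that stand in for missing collaborators; the sketch below is a minimal illustration, with all names (lookup_price, InventoryStub) invented for the example:

```python
# Minimal test-harness sketch: driver + stub. All identifiers here
# are hypothetical, not from any particular system.
def lookup_price(item_id, inventory):
    """Unit under test (hypothetical): base price plus 10% tax."""
    return inventory.price_of(item_id) * 1.1

class InventoryStub:
    """Stub: returns a canned price instead of querying real inventory."""
    def price_of(self, item_id):
        return 100.0

def driver():
    """Driver: supplies inputs and checks the expected output."""
    result = lookup_price("A1", InventoryStub())
    assert abs(result - 110.0) < 1e-9, "unexpected price"
    return "pass"

print(driver())  # pass
```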
PART-B
1. Summarize the issues that arise during class testing.
2. There are several types of system tests. Select from these types those you would perform for the software described below. For each category you choose, (i) specify the test objectives, and (ii) give a general description of the tests you would develop and the tools you would need. You may make any assumptions related to system characteristics that are needed to support your answers. An on-line fast food restaurant system: the system reads customer orders, relays orders to the kitchen, calculates the customer's bill, and gives change. It also maintains inventory information. Each wait-person has a terminal. Only authorized wait-persons and a system administrator can access the system.
3. Discuss the importance of regression testing when developing a new software release. What items from the previous release would be useful to the regression tester?
4. Discuss in detail the planning and designing of unit tests.
5. Explain in detail the integration strategies.
6. Discuss in detail the types of system test.
UNIT-IV
PART-A
- Mention some testing policy
statements.
- Define policy.
- Mention any two debugging policy
statements.
- Define plan for software statement.
- Define milestones for a company.
- Give the hierarchy of a test plan.
- What are the components of test
plan?
- Define WBS (Work Breakdown
Structure).
- Mention some approaches for test
cost estimation.
- Give the COCOMO equation with an
example.
- What are the test cost drivers?
- Give some WBS elements for
testing.
- What is the format of a requirements
traceability matrix?
- What are the elements of test case
specification?
- Define test procedure.
- List out the components of test
item transmittal report.
- What is a test log?
- When is the test incident report prepared,
and what are its components?
- What are the elements of test
summary report and when is it prepared?
- What is the purpose of test
transmittal report?
- What is the purpose of the test
log?
- What are the skills needed by a test
specialist?
- What are the characteristics of
technical level testers?
- Who is a test specialist?
- How to build a testing group?
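One of the questions above asks for the COCOMO equation with an example. In basic COCOMO, effort in person-months is a * KLOC^b; the sketch below assumes the published basic-COCOMO organic-mode coefficients (a = 2.4, b = 1.05):

```python
# Basic COCOMO effort sketch: Effort (person-months) = a * KLOC**b.
# Defaults are the basic-COCOMO organic-mode coefficients.
def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    return a * kloc ** b

# Example: a 10 KLOC organic-mode project.
print(round(cocomo_effort(10), 1))  # 26.9 person-months
```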
PART-B
1. Why is test planning so important for developing a repeatable and manageable testing process?
2. Explain the testing policy statement adopted by a software company for its products.
3. Explain in detail how a debugging policy is used in an organization to deliver high-quality software to customers.
4. Explain briefly the components of a test plan.
5. Explain the role of the three critical groups in test planning and policy development for a TMM level 3 software organization.
6. Explain in detail the skills needed by a test specialist.
7. Test-related documents are developed and used before, during, and after execution-based testing. The test plan is a test-related document that is prepared before execution-based testing takes place.
(a) What are some of the essential items a tester should include in a test plan?
(b) Describe the test-related documents that are developed during and after execution-based testing. Include in the description how these documents are used by managers, developers, and testers.
UNIT-V
PART-A
- Define project monitoring.
- Define project controlling.
- What is the role of a tester in
supporting the monitoring and controlling of testing?
- Describe the tasks suggested by
Thayer for project controlling.
- Define a milestone.
- Give some examples of testing
milestones.
- What are the questions the manager
asks at the status meeting?
- What are the types of status
measurements?
- What are the measures needed for
test execution?
- Give some measures for defect
tracking.
- Define DRL.
- Calculate the DRL for defects found
in unit test.
- Calculate the DRL for defects found
in integration test.
- What is the format of earned value
table?
- What are the criteria for test
completion?
- Define SCM.
- What are the three critical views
on controlling and monitoring?
- Define a review.
- What are the goals for reviewers?
- Difference between static analysis
and dynamic analysis.
- At what phases of software
development are reviews conducted?
- What are the benefits of review
program?
- List the types of reviews.
- Show the inspection process.
- Compare the walkthrough and
review.
- What are the components of review
plan?
- What are the items you select for
review in a software organization?
- Give the general preconditions for
a review.
- List down the review roles.
- Who can be the review team
members?
- What are the components of
inspection checklist?
- Give the general review checklist.
- What are the components of review
reports?
- What are the components of
inspection report?
- How are the defects ranked?
- What are the components of
walkthrough report?
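Two questions above ask for DRL calculations. A minimal sketch, assuming the common definition of Defect Removal Leverage as defects found in a given phase divided by defects found in a base phase (often unit test); the counts are illustrative:

```python
# DRL sketch, assuming DRL(phase/base) = defects found in the phase
# divided by defects found in the base phase. Counts are invented.
def drl(phase_defects: int, base_defects: int) -> float:
    return phase_defects / base_defects

# Example: 30 defects found in integration test, 120 in unit test (base).
print(drl(30, 120))  # 0.25
```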
PART-B
- Suppose you are a test manager.
What are the milestones you would select for a unit test plan, an integration test plan, and a system test
plan?
- Some of your customers have
suggested that the number of defects found in the software that was
delivered to them is unacceptable in future releases. You are concerned
that your test effectiveness is poor. What measurements could you use to evaluate
your current test effectiveness and any changes in effectiveness due to
improvements in your process? How could you evaluate the relative
effectiveness of each of your testing phases?
- What is the role of the tester in
supporting the monitoring and controlling of testing?
- What measurements would you
suggest for monitoring test status during system test for a large application with no mission- or
safety-critical features?
- For question 4, suggest
appropriate measurements for monitoring tester productivity.
- For question 4, suggest
appropriate measurements for monitoring testing costs.
- For question 4, suggest appropriate
measurements for monitoring defects/faults and failures.
- Suppose a test group was testing a
mission-critical software system. The group has found 85 out of the 100 seeded defects. If you were the
test manager, would you stop
testing at this point? Explain your decision. If, in addition, you found 70 actual
nonseeded defects, what would be your decision and why?
- Explain the SCM activities involved in
a software company.
- Which groups do you think should
contribute to membership of a configuration control board and why?
- A software engineering group is developing a
mission-critical software system that guides a commercial rocket to its
proper destination. This is a new product; the
group and its parent organization have never built such a product before.
There is a debate among the group
as to whether an inspection or walkthrough is the best way to evaluate the
quality of the code. The company standards are ambiguous as to which
review type should be used here. Which would you recommend, and why?
- Suppose you were a member of a
technical training team. Describe the topics that you would include for
discussion in training sessions for review leaders.
- Your organization has just begun a
review program. What are some of the metrics you would recommend for
collection to evaluate the effectiveness of the program?
- Discuss in detail the various
checklists.