Friday, December 28, 2007

V-Model to W-Model

V-Model:

The V-model promotes the idea that the dynamic test stages (on the right hand side of the model) use the documentation identified on the left hand side as baselines for testing. The V-Model further promotes the notion of early test preparation.

The V-Model of testing

Early test preparation finds faults in baselines and is an effective way of detecting faults early. This approach is fine in principle, and early test preparation works well in practice. However, there are two problems with the V-Model as normally presented.


The V-Model with early test preparation

There is rarely a perfect, one-to-one relationship between the documents on the left hand side and the test activities on the right. For example, functional specifications don’t usually provide enough information for a system test; system tests must often take account of aspects of the business requirements as well as physical design issues. To be planned thoroughly, system testing usually draws on several sources of requirements information.

The V-Model has little to say about static testing at all. It treats testing as a back-door activity on the right hand side of the model. There is no mention of the potentially greater value and effectiveness of static tests such as reviews, inspections, static code analysis and so on. This is a major omission, and the V-Model does not support the broader view of testing as a constantly prominent activity throughout the development lifecycle.

W Model:


The W-Model of testing

Paul Herzlich introduced the W-Model approach in 1993. The W-Model attempts to address the shortcomings of the V-Model. Rather than focus on specific dynamic test stages, as the V-Model does, the W-Model focuses on the development products themselves. Essentially, every development activity that produces a work product is shadowed by a test activity, whose specific purpose is to determine whether the objectives of the development activity have been met and the deliverable meets its requirements. In its most generic form, the W-Model presents a standard development lifecycle with every development stage mirrored by a test activity. On the left hand side, typically, the deliverable of a development activity (for example, "write requirements") is accompanied by a test activity ("test the requirements"), and so on. If your organization has a different set of development stages, the W-Model is easily adjusted to your situation. The important thing is this: the W-Model of testing focuses specifically on the product risks of concern at the point where testing can be most effective.


The W-Model and static test techniques.

If we focus on the static test techniques, you can see that there is a wide range of techniques available for evaluating the products on the left hand side. Inspections, reviews, walkthroughs, static analysis and requirements animation, as well as early test case preparation, can all be used.



The W-Model and dynamic test techniques.

If we consider the dynamic test techniques, you can see that there is also a wide range of techniques available for evaluating executable software and systems. The traditional unit, integration, system and acceptance tests can make use of functional test design and measurement techniques, as well as non-functional test techniques, to address specific test objectives.

The W-Model removes the rather artificial constraint of having the same number of dynamic test stages as development stages. If there are five development stages concerned with the definition, design and construction of code in your project, it might be sensible to have only three stages of dynamic testing. Component, system and acceptance testing might fit your normal way of working. The test objectives for the whole project would be distributed across three stages, not five. There may be practical reasons for doing this, and the decision is based on an evaluation of product risks and how best to address them. The W-Model does not enforce a project symmetry that does not (or cannot) exist in reality. The W-Model does not impose any rule that later dynamic tests must be based on documents created in specific stages (although earlier documentation products are nearly always used as baselines for dynamic testing). In some projects, requirements and designs might be documented in multiple models, so system testing might be based on several of these models (spread over several documents).

We use the W-Model in test strategy as follows. Having identified the specific risks of concern, we specify the products that need to be tested; we then select test techniques (static reviews or dynamic test stages) to be used on those products to address the risks; we then schedule test activities as close as practicable to the development activity that generated the products to be tested.

Equivalence partitioning

Equivalence partitioning:


Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur.

In this method, the tester identifies various equivalence classes for partitioning. A class is a set of input conditions that is likely to be handled the same way by the system. If the system were to handle one case in the class erroneously, it would handle all cases in the class erroneously.

Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate', to find the most errors with the smallest number of test cases.

To use equivalence partitioning, you will need to perform four steps:

  • Determining conditions to be tested
  • Defining tests
  • Defining test cases
  • Identifying the final set of test cases

Defining Tests

A number of items must be considered when determining the tests using the equivalence partitioning method, such as:
  • All valid input data for a given condition are likely to go through the same process.
  • Invalid data can go through various processes and needs to be evaluated more carefully. For example,
  • a blank entry may be treated differently than an incorrect entry,
  • a value that is less than a range of values may be treated differently than a value that is greater,
  • if there is more than one error condition within a particular function, one error may override the other, which means the subordinate error does not get tested unless the other value is valid.
Defining Test Cases

Create test cases that incorporate each of the tests. For valid input, include as many tests as possible in one test case. For invalid input, include only one test in a test case in order to isolate the error. Only the invalid input test condition needs to be evaluated in such tests, because the valid condition has already been tested.

EXAMPLE OF EQUIVALENCE PARTITIONING

1. Conditions to be Tested

The following input conditions will be tested:
  • For the first three digits of all social insurance (security) numbers, the minimum number is 111 and the maximum number is 222.
  • For the fourth and fifth digits of all social insurance (security) numbers, the minimum number is 11 and the maximum number is 99.
2. Defining Tests

Identify the input conditions and uniquely identify each test, keeping in mind the items to consider when defining tests for valid and invalid data.

The tests for these conditions are:
  • The first three digits of the social insurance (security) number are:
  1. >= 111 and <= 222, (valid input, within the range),
  2. < 111, (invalid input, below the range),
  3. > 222, (invalid input, above the range),
  4. blank, (invalid input, below the range, but may be treated differently).
  • The fourth and fifth digits of the social insurance (security) number are:
  5. >= 11 and <= 99, (valid input, within the range),
  6. < 11, (invalid input, below the range),
  7. > 99, (invalid input, above the range),
  8. blank, (invalid input, below the range, but may be treated differently).
Using equivalence partitioning, only one value that represents each of the eight equivalence classes needs to be tested.

3. Defining Test Cases

After identifying the tests, create test cases to test each equivalence class, (i.e., tests 1 through 8).

Create one test case for the valid input conditions, (i.e., tests 1 and 5), because the two conditions will not affect each other.

Identify separate test cases for each invalid input, (i.e., tests 2 through 4 and tests 6 through 8).

Both conditions specified, (i.e., condition 1 - first three digits, condition 2 - fourth and fifth digits), apply to the social insurance (security) number.

Since equivalence partitioning is a type of black-box testing, the tester does not look at the code and, therefore, the manner in which the programmer has coded the error handling for the social insurance (security) number is not known. Separate tests are used for each invalid input, to avoid masking the result in the event one error takes priority over another.

For example, if only one error message is displayed at a time, and the error message for the first three digits takes priority, then testing invalid inputs for the first three digits and the fourth and fifth digits together does not result in an error message for the fourth and fifth digits. In test cases B through G, only the results for the invalid input need to be evaluated, because the valid input was tested in test case A. A code sketch of these test cases follows the list below.

4. Suggested test cases:
  1. Test Case A - Tests 1 and 5, (both are valid, therefore there is no problem with errors),
  2. Test Case B - Tests 2 and 5, (only the first one is invalid, therefore the correct error should be produced),
  3. Test Case C - Tests 3 and 5, (only the first one is invalid, therefore the correct error should be produced),
  4. Test Case D - Tests 4 and 5, (only the first one is invalid, therefore the correct error should be produced),
  5. Test Case E - Tests 1 and 6, (only the second one is invalid, therefore the correct error should be produced),
  6. Test Case F - Tests 1 and 7, (only the second one is invalid, therefore the correct error should be produced),
  7. Test Case G - Tests 1 and 8, (only the second one is invalid, therefore the correct error should be produced).
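
Here is a small Python sketch (not part of the original example) that puts suggested test cases A through G into runnable form. The function validate_sin() and the representative values (150, 50, 110 and so on) are invented for illustration; they simply model the two conditions described above.

def validate_sin(first_three, fourth_fifth):
    """Hypothetical check for the two conditions under test."""
    if not first_three.isdigit() or not (111 <= int(first_three) <= 222):
        return False, "first three digits out of range"
    if not fourth_fifth.isdigit() or not (11 <= int(fourth_fifth) <= 99):
        return False, "fourth and fifth digits out of range"
    return True, ""

# Test case A: tests 1 and 5 (both valid), one representative value per class.
assert validate_sin("150", "50") == (True, "")

# Test cases B, C and D: tests 2, 3 and 4, each combined with the valid test 5.
for bad in ["110", "223", ""]:
    ok, error = validate_sin(bad, "50")
    assert not ok and "first three" in error

# Test cases E, F and G: the valid test 1 combined with tests 6, 7 and 8.
for bad in ["10", "100", ""]:
    ok, error = validate_sin("150", bad)
    assert not ok and "fourth and fifth" in error

print("all equivalence class test cases passed")

Note how each invalid class gets its own call, so that one error cannot mask another, exactly as argued above.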

Cyclomatic complexity

Cyclomatic complexity is a software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code. It is computed using a graph that describes the control flow of the program. The nodes of the graph correspond to the commands of a program. A directed edge connects two nodes if the second command might be executed immediately after the first command.


Definition


M = E − N + 2P

where

M = cyclomatic complexity
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components.

"M" is alternatively defined to be one larger than the number of decision points (if/case-statements, while-statements, etc) in a module (function, procedure, chart node, etc.), or more generally a system.

Separate subroutines are treated as being independent, disconnected components of the program's control flow graph.
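
As a small illustration of the formula, here is a Python sketch (my own, not from any standard tool) that computes M = E − N + 2P for a control flow graph given as a list of directed edges. The example graph is a single if-then-else: 4 nodes, 4 edges and 1 connected component, so M = 2.

def cyclomatic_complexity(nodes, edges):
    # P: count connected components via union-find on the undirected graph.
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)

    p = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + 2 * p

# Nodes: the decision, the two branch bodies, and the join/exit node.
nodes = ["decision", "then_part", "else_part", "end"]
edges = [("decision", "then_part"), ("decision", "else_part"),
         ("then_part", "end"), ("else_part", "end")]
print(cyclomatic_complexity(nodes, edges))  # 4 - 4 + 2*1 = 2

The same graph has one closed loop when the direction of the arrows is ignored, so the "closed loops + 1" shortcut described below gives the same answer, 1 + 1 = 2.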


Alternative definition


v(G) = e − n + p
G is a program's flowgraph
e is the number of edges (arcs) in the flowgraph
n is the number of nodes in the flowgraph
p is the number of connected components


Alternative way


There is another simple way to determine the cyclomatic number. This is done by counting the number of closed loops in the flow graph, and incrementing that number by one.

i.e.

M = Number of closed loops + 1

where

M = Cyclomatic number.


Implications for Software Testing:

  • M is a lower bound for the number of possible paths through the control flow graph.
  • M is an upper bound for the number of test cases that are necessary to achieve a complete branch coverage.

For example, consider a program that consists of two sequential if-then-else statements.

if (c1) {
    f1();
} else {
    f2();
}

if (c2) {
    f3();
} else {
    f4();
}

  • To achieve a complete branch coverage, two test cases are sufficient here.
  • For a complete path coverage, four test cases are necessary.
  • The cyclomatic number M is three, falling in the range between these two values, as it does for any program (a code sketch of both coverage levels follows).
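
To make these counts concrete, here is a Python transcription of the example above with actual test inputs (the names run, c1 and c2 are mine, not part of the original fragment).

def run(c1, c2):
    path = []
    if c1:
        path.append("f1")
    else:
        path.append("f2")
    if c2:
        path.append("f3")
    else:
        path.append("f4")
    return path

# Branch coverage: two test cases already exercise every branch once.
assert run(True, True) == ["f1", "f3"]
assert run(False, False) == ["f2", "f4"]

# Path coverage: all four combinations (2 x 2 paths) are needed.
assert run(True, False) == ["f1", "f4"]
assert run(False, True) == ["f2", "f3"]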

Cyclomatic complexity can be applied in several areas, including:
  • Code development risk analysis: While code is under development, it can be measured for complexity to assess inherent risk or risk buildup.
  • Change risk analysis in maintenance: Code complexity tends to increase as it is maintained over time. By measuring the complexity before and after a proposed change, this buildup can be monitored and used to help decide how to minimize the risk of the change.
  • Test Planning: Mathematical analysis has shown that cyclomatic complexity gives the exact number of tests needed to test every decision point in a program for each outcome. Thus, the analysis can be used for test planning. An excessively complex module will require a prohibitive number of test steps; that number can be reduced to a practical size by breaking the module into smaller, less-complex sub-modules.
  • Re-engineering: Cyclomatic complexity analysis provides knowledge of the structure of the operational code of a system. The risk involved in reengineering a piece of code is related to its complexity. Therefore, cost and risk analysis can benefit from proper application of such an analysis.

Test coverage matrix vs traceability matrix

Test coverage matrix:

A test coverage matrix is a checklist which ensures that the functionality of a given screen (unit) is checked in all possible combinations (positive and negative) which have not been covered in the test cases. A test coverage matrix is usually prepared for a screen having a large number of controls (textboxes, dropdowns, buttons, etc.). Usually it is prepared in a spreadsheet with all the controls in the columns and all possible entries in those fields in the rows, with a "yes" or "no" in each row against the controls listed in the columns. For example, consider a "login" screen with "username" and "password" text fields.

While preparing the test coverage matrix, the first column will be "s.no", the second will be "username", and "password" will be the third field, followed by the "OK" and "Cancel" buttons. Then, in the first row (s.no 1), enter "yes" for both the "username" and "password" columns, "yes" implying that a value is entered in that field. In the second row, enter "yes" and "no"; in the third row, "no" and "yes"; and so on.

The complexity increases with the number of controls on the screen. Each row is considered one condition and is executed while testing. This is how we prepare a test coverage matrix. (This is a black box testing technique.)
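
For a screen with many controls, the yes/no rows can be generated rather than typed by hand. Here is a small Python sketch (illustrative only) for the two-control login screen described above; it simply enumerates every combination with itertools.product.

import itertools

controls = ["username", "password"]
print("s.no", *controls)
for sno, combo in enumerate(itertools.product(["yes", "no"], repeat=len(controls)), start=1):
    print(sno, *combo)

# Output:
# s.no username password
# 1 yes yes
# 2 yes no
# 3 no yes
# 4 no no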

Traceability matrix:

A traceability matrix, on the other hand, maps the test cases to the requirements. It serves as a checklist wherein all the requirements (of the SRS) are listed, and the test cases covering each requirement are listed against it. Every company may have its own template for the RTM, but they all serve the same purpose described above.
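
A minimal sketch of such a matrix as a simple mapping is shown below; the requirement and test case IDs are invented for illustration, since a real RTM is driven by the SRS and the actual test suite.

rtm = {
    "REQ-001 Login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 Lock account after 3 failed attempts": ["TC-05"],
    "REQ-003 Password reset by email": [],  # gap: no test case covers this yet
}

for requirement, test_cases in rtm.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(requirement + ": " + status)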

Top 24 replies by programmers when their programs don't work:

24. "It works fine on MY computer"
23. "Who did you login as ?"
22. "It's a feature"
21. "It's WAD (Working As Designed)"
20. "That's weird..."
19. "It's never done that before."
18. "It worked yesterday."
17. "How is that possible?"
16. "It must be a hardware problem."
15. "What did you type in wrong to get it to crash?"
14. "There is something funky in your data."
13. "I haven't touched that module in weeks!"
12. "You must have the wrong version."
11. "It's just some unlucky coincidence."
10. "I can't test everything!"
9. "THIS can't be the source of THAT."
8. "It works, but it's not been tested."
7. "Somebody must have changed my code."
6. "Did you check for a virus on your system?"
5. "Even though it doesn't work, how does it feel?"
4. "You can't use that version on your system."
3. "Why do you want to do it that way?"
2. "Where were you when the program blew up?"
1. "I thought I fixed that."

Friday, November 30, 2007

Some famous quotes about Software Testing

"The last project generated a ton of paper and it was still a disaster, so this project will have to generate two tons." (Lister, DeMarco: "Peopleware")

"Testing is a skill. While this may come as a surprise to some people it is a simple fact." (Fewster, Graham: "Software Test Automation")


"To find the bugs that customers see - that are important to customers - you need to write tests that cross functional areas by mimicking typical user tasks. This type of testing is called scenario testing, task-based testing, or use-case testing." (Brian Marick)

"The more you improve the way you go about your work, the harder the work will be." (Lister, DeMarco: "Peopleware")

"Testing a product is a learning process." (Brian Marick)

"Most of us are pretty comfortable with the way we are, what we're doing and how we operate. But today the typical organization is telling the middle manager that he has to be a different kind of manager. These middle managers have been promoted throughout their careers and gotten bonuses based on their performance, but that's now history. ..." (Carr, Hard, Trahant: "Change Process")

"The projects most worth doing are the ones that will move you down one full level on your process scale." (Lister, DeMarco: "Peopleware")


"First law of Bad Management: If something isn't working, do more of it." (DeMarco: "Slack")

"The real reason for the use of pressure and overtime may be to make everyone look better when the project fails." (DeMarco, "The Deadline")

"Projects that set out to achieve 'aggressive' schedules probably take longer to complete than they would have if they have started with more reasonable schedules." (DeMarco, "The Deadline")


"The real complexity in our jobs is that all planning is done under conditions of uncertainty and ignorance. The code isn't the only thing that changes. Schedules slip. New milestones are added for new features. Features are cut from the release. During development, everyone - marketers, developers and testers - comes to understand better what the product is really for." (Brian Marick)

"Companies that downsize are frankly admitting that their upper management has blown it." (Lister, DeMarco: "Peopleware")

"Everything really interesting that happens in software projects eventually comes down to people." (James Bach)

"We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: Individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, responding to change over following a plan." (Agile Software Development Manifesto)


"But there's more to defining processes and coordinating people than assigning someone to dream up a checklist and get it blessed in a staff meeting." (James Bach)

"If the date is missed, the schedule was wrong. It doesn't matter why the date was missed. The purpose of the schedule was planning, not goal-setting." (DeMarco: "Slack")

"Exploratory testing can be described as a martial art of the mind. It's how you deal with a product that jumps out from the bushes and challenges you to a duel of testing. Well, you don't become a black belt by reading books. You have to work on it. Happy practising." (James Bach)

"Management involves heart, gut, soul and nose. So ... lead with the heart, trust your gut ..., build soul into the organisation, develop a nose for bullshit." (DeMarco, "Deadline")


"Process obsession is the problem. Process obsession is not just an anomaly that occurs now and again. It is an epidemic." (DeMarco: "Slack")


"... our basic ideas about what are better or worse practices are strongly influenced by people we perceive as knowing how to make software." (James Bach)


"The danger of standard process is that people will miss chances to take important shortcuts." (DeMarco, "Deadline")


"I see design standards that don't tell you how to come up with a good design (only how to write it down), employee evaluation standards that don't help you build meaningful long-term relationships with staff, testing standards that don't tell you how to invent a test that is worth running." (DeMarco: "Slack")

"It's more about good enough than it is about right or wrong." (James Bach)


"The ultimate management sin is wasting people's time." (Lister, DeMarco: "Peopleware")


"The major problems of our work are not so much technological as sociological in nature." (Lister, DeMarco: "Peopleware")

"We all tend to tie our self-esteem strongly to the quality of the product we produce - not the quantity of the product, but the quality." (Lister, DeMarco: "Peopleware")


"The only person who likes change is a wet baby." (Carr, Hard, Trahant: "Change Process")

"Documentation is not understanding, process is not discipline, formality is not skill." (Jim Highsmith)

"A fool with a tool is still a fool." (Grady Booch)


"We are still in the infancy of naming what is really happening on software development projects." (Alistair Cockburn, "Agile Software Development")


"Quality is free, but only to those who are willing to pay heavily for it." (Lister, DeMarco: "Peopleware")

"A good model guides your thinking, a bad one warps it." (Brian Marick)


“There is probably no job on earth for which an ability to believe six impossible things before breakfast is more of a requirement than software project management. .... The business of believing only what you have a right to believe is called risk management." (DeMarco, Lister: "Waltzing with Bears")

"Ever Tried. Ever failed. No matter.
Try again. Fail again. Fail better."
(Samuel Beckett, "Worstward Ho")

"People can't embrace change unless they feel safe. ... A lack of safety makes people risk-averse." (DeMarco, "The Deadline")

"A day lost at the beginning of a project hurts just as much as a day lost at the end. ... There are infinitely many ways to lose a day ... but not even one way to get one back." (DeMarco, "The Deadline")


"This is not the end of the world, although you sure can see it from here." (The Tangent, "The music that died alone")


"You can't get people to do anything different without caring for them and about them. To get them to change, you have to understand (appreciate) where they're coming from and why." (DeMarco, "The Deadline")


"If we fail, we fall. If we succeed - then we will face the next task." (Tolkien, "Lord of the Rings", Gandalf's comment on IT projects).

"It is ever so with the things that men begin: there is a frost in Spring, or a blight in Summer, and they fail of their promise." (Tolkien, "Lord of the Rings", Gimli's comment on IT projects).


"Any process that tries to reduce software development to a "no brainer" will eventually produce just that: a product developed by people without brains." (Andy Hunt, Dave Thomas, "Cook until done")

"Winners never talk about glorious victories. That's because they are the ones who see what the battlefield looks like afterwards. It's only the losers who have glorious victories." (Terry Pratchett, "Small Gods")


"No counsel have I to give to those that despair. Yet counsel I could give, and words I could speak to you. Will you hear them? They are not for all ears. ... Too long have you sat in shadows and trusted to twisted tales and crooked promptings." (Tolkien, "Lord of the Rings", the IT-Consultant Gandalf persuades a new customer)


"If I understand aright all that I have heard, I think that this task is appointed for you, ... and that if you don't find a way, no one will. This is the hour of the QA, when they arise from their quiet cubicles to shake the towers and counsels of the Great. Who of the Wise could have foreseen it? Or, if they are wise, why should they expect to know it, until the hour has struck?" (Tolkien, "Lord of the Rings", a motivational speech of CTO Elrond).

Monday, November 19, 2007

ISTQB, CSTE information and training centers in India

About ISTQB:
The ISTQB (International Software Testing Qualifications Board) was officially founded in Edinburgh in November 2002, and it is responsible for the "ISTQB Certified Tester", an international qualification scheme.

ISTQB is the parent body responsible for approving various national boards in addition to other tasks such as defining the syllabi for various certifications.
Website URL: www.indiantestingboard.com
FAQ: http://208.116.30.129/faq.htm

Sample question papers and examination preparation material, which will be helpful for the ISTQB, are available at the links below:

http://india.istqb.org/resources.htm

http://www.geekinterview.com/quiz/Testing

Join the Yahoo group:
In this group you can ask certified testers your queries about ISTQB examinations and certification-related doubts, and request sample papers. Foundation Level and Advanced Level questions are regularly raised by members.

ISTQB-India@yahoogroups.com

CSTE information:

QAI, India, the premier knowledge corporation in the software engineering and management domain, has recently conducted the first Certified Software Test Engineer Certification examination in India. The CSTE certification is a formal recognition of proficiency-level attained in IT software testing by business and professional associates. According to Navyug Mohnot, Executive Director of QAI, “A CSTE is acknowledged to be proficient in the domains that make up the Common Body of Knowledge for the Information Systems Software Testing Profession”.

CSTE certification is proof that you have mastered a basic skill set recognized worldwide in the Information Technology arena. It will also result in greater acceptance as an advisor to upper management. The candidates will be required to demonstrate proficiency in the following knowledge domains, which make up the Common Body of Knowledge for the Information Systems Software Testing profession. The CBOK consists of topics such as Test Tactics, Quality Principles and Concepts, Design, Methods for Software Development and Maintenance, Defect Tracking and Management, Verification and Validation Methods, Quantitative Measurement, Test Reporting, Risk Analysis, etc.

Held simultaneously at Delhi, Bangalore, and Mumbai, the participants of the examination were from leading software companies like Cognizant Technology Solutions, Melstar, Satyam GE Software Services, IBM Global Services, and Patni Computer Systems.

The Certification process
The CSTE brochure and exam application form can be obtained from QAI on any working day. The eligibility criteria to appear for the examination are a bachelor's degree, an associate degree with two years of testing work experience, or six years of testing experience.

URL for the site: http://www.qaiasia.com/News_room/News/qai_conducts_cert.htm

Top Companies hiring Certified Testers:


1. Accenture India

2. ITC Infotech, Bangalore

3. IBM India Labs

4. US Technologies

5. HCL

6. Tata Consultancy Services

7. Symantec

Thursday, April 19, 2007

Advanced testing interview questions


Gray Box Testing
Definition: Gray Box testing refers to the technique of testing a system with limited knowledge of the internals of the system. Gray Box testers have access to detailed design documents with information beyond requirement documents. Gray Box tests are generated based on information such as state-based models or architecture diagrams of the target system.

So, typically, we can say that gray box testing is partially a combination of both the black box and white box testing techniques.

Gray box testing is essentially a combination of black box and white box testing. For this, the tester should have knowledge of both the internal logic and the external behaviour of the system.

What are the essential documents required to write manual test cases?

-Requirement documents
-Use case documents
-Any previously written test cases on earlier builds
-User manuals, if any
-Walkthrough notes, etc.

What is CAR in testing?

CAR stands for Causal Analysis and Resolution. The Causal Analysis and Resolution tool is a defect tracking mechanism. It gives a view of the various defects or problems found in projects. Not only are defects analysed; even if there are no defects in a project, a CAR analysis can be done to check what it is that the project actually does such that it did not face a single defect.
Once the defects are analysed, a brainstorming session is conducted to arrive at the problem statement. The inputs are then discussed, and preventive action items are taken to ensure that the defects which occurred do not appear again. Generally, a Pareto chart and a fishbone analysis diagram are used for CAR.

Functionality testing?

Functionality testing is essentially requirements testing. During this testing, the test engineer validates the correctness of the functionality with respect to the customer requirements.

We can validate the correctness of functionality in terms of calculation coverage, behaviour coverage, error handling coverage and back-end coverage (a small code sketch follows the list below).

1. Calculation coverage is nothing but checking whether we get the proper output or not (for example, for the addition operation 10 + 5, displaying 12 instead of the correct result 15 means the calculation is not valid).

2. Behaviour coverage: we check the behaviour of the application.
3. Error handling: occurrence of error messages (popup messages) to prevent negative navigation.
4. Back-end: impact of front-end operations on back-end tables in terms of data validation and data integrity.
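
As a tiny sketch of points 1 and 3 above (the function add_field() is hypothetical and stands in for the application's front-end calculation):

def add_field(a, b):
    if not (isinstance(a, (int, float)) and isinstance(b, (int, float))):
        raise ValueError("numeric input required")  # error handling coverage
    return a + b

assert add_field(10, 5) == 15   # calculation coverage: 15, not 12
try:
    add_field(10, "x")          # error handling: invalid input must be rejected
except ValueError:
    pass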


System Testing:
System testing tests the overall functionality of the application. It is end-to-end testing, for example from login to logout: the tester is expected to test the application from logging in through to logging out. The requirements of the application are covered in system testing. It is based on the SRS.

System testing consists of usability, functional, performance and security testing.


Fuzz testing?
Fuzz testing is a technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.

What is the difference between system testing and integration testing? Can the test cases written for system testing be used for integration testing?

Integration testing concentrates on testing the interfaces and communication between different modules. Here the assumption is that, since component (module) testing has already been completed, we don't concentrate on the functionality of individual modules.

Coming to system testing, all the components (modules) should already be complete in terms of functionality and integration testing; now we test the functionality of the entire system as one component. Generally, system testing compares the system with the requirements documents and is performed from the end user's perspective.


Fuzz testing or fuzzing is a software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.

USES:

Fuzz testing is often used in large software development projects that perform black box testing. These usually have a budget to develop test tools, and fuzz testing is one of the techniques which offers a high benefit to cost ratio.

Fuzz testing is also used as a gross measurement of a large software system's quality. The advantage here is that the cost of generating the tests is relatively low. For example, third party testers have used fuzz testing to evaluate the relative merits of different operating systems and application programs.

Fuzz testing is thought to enhance software security and software safety because it often finds odd oversights and defects which human testers would fail to find, and which even careful human test designers would fail to create tests for.

However, fuzz testing is not a substitute for exhaustive testing or formal methods: it can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly. Thus, fuzz testing can only be regarded as a proxy for program correctness, rather than a direct measure, with fuzz test failures actually being more useful as a bug-finding tool than fuzz test passes as an assurance of quality.
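
A minimal fuzzing sketch in Python is shown below. The target function parse_age() is invented for illustration; a real fuzzer would feed random data to the program's actual inputs (files, network messages, API calls) and watch for crashes or failed assertions.

import random
import string

def parse_age(text):
    # Deliberately simple target: rejects non-numeric input with ValueError.
    return int(text)

random.seed(0)
for _ in range(1000):
    fuzz = "".join(random.choice(string.printable) for _ in range(random.randint(0, 8)))
    try:
        parse_age(fuzz)
    except ValueError:
        pass                    # handled gracefully: not a defect
    except Exception as exc:    # any other failure would be noted as a defect
        print("defect with input", repr(fuzz), "->", exc)
print("fuzzing run finished")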

What do bespoke and off-the-shelf mean?

Software customized for a particular user group or organization is known as bespoke software.

It is typically contrasted with "off-the-shelf" software that is run by thousands or even millions of users.

What are the characteristics of a good test case?
1. Accurate
2. Economical
3. Self standing
4. Appropriate
5. Traceable

Thursday, April 12, 2007

Advanced software testing terms and definitions

What is a test harness?

A test harness is a tool or set of tools used to perform the testing of a program unit in a fully automated scenario. All the testing parameters are set in the “test script repository” of the test harness tool(s) (which are simply testing software). In brief, a test harness includes:

* Predefined set of parameters, functions and inputs
* A standard way to specify setup (i.e., creating an artificial runtime environment) and cleanup.
* A method for selecting individual tests to run, or all tests.
* A means of analyzing output for expected (or unexpected) results.
* A standardized form of failure reporting.

The test harness engine runs in an automated manner to produce the different result scenarios based on the parameters, functions and inputs defined in the test script repository. This is mainly used for testing large applications, where manual testing would be very time consuming (or practically impossible).
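
To make the list above concrete, here is a minimal Python sketch of those ingredients: setup and cleanup, a test script repository, test selection, analysis of actual versus expected output, and a standardized failure report. All the names are invented for illustration; real harnesses (unittest, pytest, JUnit and so on) provide the same pieces.

import argparse

def setup():           # create the artificial runtime environment
    return {"connected": True}

def cleanup(env):      # tear the environment down again
    env["connected"] = False

def add(a, b):         # stand-in for the unit under test
    return a + b

# Test script repository: name -> (inputs, expected output)
TESTS = {
    "add_small": ((2, 3), 5),
    "add_zero": ((0, 7), 7),
}

def run(selected):
    failures = []
    for name in selected:
        args, expected = TESTS[name]
        env = setup()
        try:
            actual = add(*args)
            if actual != expected:
                failures.append(name + ": expected " + str(expected) + ", got " + str(actual))
        finally:
            cleanup(env)
    print(str(len(selected) - len(failures)) + " passed, " + str(len(failures)) + " failed")
    for line in failures:
        print("FAIL " + line)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("tests", nargs="*", default=list(TESTS), help="tests to run (default: all)")
    run(parser.parse_args().tests)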

2. Test bed: testing environment creation, i.e. all required software installation, etc.

3. Test plan: the document which describes the whole test planning, strategy, entry/exit criteria, etc.
QA plan: related to the process we will follow for testing (not sure).
QA: the person who is involved in standards and process for testing.
QC: quality control, which is the actual testing of the software.