Tuesday, September 30, 2008

Software Testing

Explain Test Plan, Test Strategy, Test Scenario, Test Case, Test Script, Test Environment, Test Procedure and Test Log.

A Test Plan is a document that describes the scope of the project, the testing approach, the schedule of testing activities, the resources or manpower required, risk issues, the features to be tested and not to be tested, and the test tool and environment requirements.

A Test Strategy is a document prepared by the Quality Assurance department that details the testing approach to be followed in order to meet the quality standards.

A Test Scenario is prepared based on the test cases and test scripts, along with their sequence of execution.

A Test Case is a document, normally prepared by the tester, that lists the sequence of steps used to test the behavior of a functional or non-functional feature of the application.

A Test Case document consists of the Test Case ID, Test Case Name, Conditions (pre- and post-conditions) or Actions, Environment, Expected Results, Actual Results, and Pass/Fail status.

The Test cases can be broadly classified as User Interface Test cases, Positive Test cases and Negative Test cases.

A Test Script is a program written to test the functionality of the application. It is a set of machine-readable instructions that automate the testing, with the advantage that repeated and regression testing can be done easily.
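
For illustration, a minimal automated test script might look like the Python sketch below. The myapp.ui module and its functions are hypothetical, made up only to show the idea of machine-readable, repeatable steps:

# Hypothetical automated test script: machine-readable steps that can be
# re-run on every build, instead of repeating them manually.
from myapp.ui import open_app, click, type_text, read_label  # hypothetical API

app = open_app("https://example.com")
click(app, "Login")
type_text(app, "username", "demo_user")
type_text(app, "password", "demo_pass")
click(app, "Submit")

assert read_label(app, "welcome_banner") == "Welcome, demo_user"
print("Login test script passed")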

The Test Environment is the hardware and software environment in which the testing is going to be done. It also states whether the software under test interacts with stubs and drivers.

A Test Procedure is a document with detailed instructions for the step-by-step execution of one or more test cases. Test procedures are used in test scenarios and test scripts.

A Test Log contains the details of which test cases were run and the output of their execution.

What are the major activities in Database Testing?
The major activities in Database testing include:
Checking the Data Validity
Checking the Data Integrity
Checking the Performance related to Database
Checking the Security Aspects

The aspects to be considered in Database Schema testing are (a small automated sketch follows the list):
Checking the Databases and Devices
Checking the Tables, Fields, Constraints, Defaults
Checking the Keys and Indexes
Checking the Stored procedures & Packages
Checking the Error messages
Checking the Triggers - Update, Insert, Delete
Checking the Schema comparisons
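
As a rough illustration of how a few of these checks can be automated, the snippet below uses Python's built-in sqlite3 module to verify that a table exists and that a simple referential-integrity rule holds. The database file, table and column names are assumptions invented for the example:

import sqlite3

conn = sqlite3.connect("app.db")  # assumed database file for the application
cur = conn.cursor()

# Schema check: the 'orders' table should exist.
cur.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='orders'")
assert cur.fetchone() is not None, "orders table is missing"

# Data integrity check: every order must reference an existing customer.
cur.execute("""
    SELECT COUNT(*) FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.id
    WHERE c.id IS NULL
""")
orphans = cur.fetchone()[0]
assert orphans == 0, "%d orders reference missing customers" % orphans

conn.close()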

What is Fuzz Testing ?
Fuzz testing is a black box testing technique that attacks a program with random, malformed data to see what breaks in the application.
The basic fuzz testing procedure is to,
• Prepare a correct file to feed into your program.
• Replace some part of the file with random data.
• Open the file with the program.
• Observe what breaks.
Fuzz testing can be automated for maximum effect on large applications. This testing improves the confidence that the application is safe and secure.
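
A very small sketch of the idea in Python; the input file name and the parse_file function are hypothetical stand-ins for the program under test:

import random
from myapp.parser import parse_file  # hypothetical function under test

# Start from a known-good input file.
with open("valid_input.dat", "rb") as f:
    data = bytearray(f.read())

# Replace a random slice of the file with random bytes.
start = random.randrange(len(data))
for i in range(start, min(start + random.randint(1, 32), len(data))):
    data[i] = random.randrange(256)

with open("fuzzed_input.dat", "wb") as f:
    f.write(data)

# Feed the corrupted file to the program and observe what breaks.
try:
    parse_file("fuzzed_input.dat")
except Exception as exc:
    print("Possible defect found:", exc)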


Explain Peer Review in Software Testing
It is an alternative form of testing, in which some colleagues are invited to examine your work products for defects and improvement opportunities.
Some peer review approaches are:

Inspection – It is a more systematic and rigorous type of peer review. Inspections are more effective at finding defects than are informal reviews.
Ex : In Motorola's Iridium project nearly 80% of the defects were detected through inspections, whereas only 60% of the defects were detected through informal reviews.

Team Reviews – It is a planned and structured approach, but less formal and less rigorous compared to Inspections.

Walkthrough – It is an informal review in which the work product's author describes it to some colleagues and asks for suggestions. Walkthroughs are informal because they typically do not follow a defined procedure, do not specify exit criteria, require no management reporting, and generate no metrics.

Pair Programming – In Pair Programming, two developers work together on the same program at a single workstation and continuously review each other's work.

Peer Deskcheck – In a Peer Deskcheck only one person besides the author examines the work product. It is an informal review, but the reviewer can use defect checklists and some analysis methods to increase its effectiveness.

Passaround – It is a multiple, concurrent peer deskcheck in which several people are invited to provide comments on the product.

Explain Compatibility Testing with an example.
Compatibility testing evaluates the application's compatibility with its computing environment, such as the Operating System, Database, Browser compatibility, backwards compatibility, the computing capacity of the hardware platform, and compatibility with peripherals.

Ex : For a game application, compatibility testing checks, before the game is installed on a computer, whether the game is compatible with that computer's specification.

What is Traceability Matrix ?
A Traceability Matrix is a document used for tracking the requirements, test cases and defects. This document is prepared to satisfy the client that the coverage is complete end to end. It consists of the Requirement/Baseline document reference number, the Test Case/Condition and the Defect/Bug ID. Using this document a person can trace back from a Defect ID to the Requirement it belongs to.
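
The matrix itself is just a small table; the Python sketch below shows the same idea with made-up requirement, test case and defect IDs, including the look-up from a defect ID back to its requirement:

# Each row links a requirement to the test case that covers it and to
# any defect raised against it (all ids here are illustrative).
matrix = [
    {"requirement": "REQ-001", "test_case": "TC-101", "defect": "BUG-9001"},
    {"requirement": "REQ-002", "test_case": "TC-102", "defect": None},
]

def requirements_for_defect(defect_id):
    return [row["requirement"] for row in matrix if row["defect"] == defect_id]

print(requirements_for_defect("BUG-9001"))  # -> ['REQ-001']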

Explain Load, Performance, Stress Testing with an example

Load Testing and Performance Testing are commonly described as positive testing, whereas Stress Testing is described as negative testing.

Say, for example, there is an application that can handle 25 simultaneous user logins. In load testing we test the application with 25 users and check how the application works at that load; in performance testing we concentrate on the time taken to perform the operations; whereas in stress testing we test with more than 25 users, keep increasing the number, and check at what point the application or its hardware resources crack.
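
A bare-bones illustration of the difference in Python, assuming a hypothetical login(user) call against the application under test and the 25-user rating from the example above:

import time
import concurrent.futures
from myapp.client import login  # hypothetical client call to the app under test

def measure(n_users):
    """Simulate n_users concurrent logins and return the elapsed seconds."""
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        list(pool.map(login, ["user%d" % i for i in range(n_users)]))
    return time.time() - start

print("Load/performance run at the rated 25 users:", measure(25), "seconds")
print("Stress run beyond the rated limit:", measure(100), "seconds")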


Explain Boundary value testing and Equivalence testing with some examples.

Boundary value testing is a technique to find out whether the application accepts the expected range of values and rejects the values that fall outside that range.

Ex. A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.

BVA is done like this: max = 10 pass; max - 1 = 9 pass; max + 1 = 11 fail; min = 4 pass; min + 1 = 5 pass; min - 1 = 3 fail.

Likewise we check the corner values and come to a conclusion on whether the application accepts the correct range of values.

Equivalence testing is normally used to divide the input values into valid and invalid classes and to check one representative value from each class.

Ex. A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.

For the positive (valid) condition we test the object by giving alphabetic characters only, i.e. a-z, and then check whether the object accepts the value; it should pass.

For the negative (invalid) condition we test by giving anything other than the alphabets a-z, i.e. A-Z, 0-9, blank etc.; it should fail (be rejected).
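
The same boundaries and partitions can be written down as a small table-driven test. The is_valid_user_id function below is a made-up stand-in for the field under test:

import re

def is_valid_user_id(value):
    """Stand-in validator: 4 to 10 lowercase alphabetic characters."""
    return bool(re.fullmatch(r"[a-z]{4,10}", value))

# Boundary value cases around the 4..10 length limits.
assert is_valid_user_id("abcd")             # min = 4      -> accepted
assert is_valid_user_id("abcde")            # min + 1 = 5  -> accepted
assert not is_valid_user_id("abc")          # min - 1 = 3  -> rejected
assert is_valid_user_id("abcdefghij")       # max = 10     -> accepted
assert is_valid_user_id("abcdefghi")        # max - 1 = 9  -> accepted
assert not is_valid_user_id("abcdefghijk")  # max + 1 = 11 -> rejected

# Equivalence classes: one valid partition (a-z) and a few invalid ones.
assert is_valid_user_id("tester")       # valid class
assert not is_valid_user_id("TESTER")   # invalid class: upper case
assert not is_valid_user_id("12345")    # invalid class: digits
assert not is_valid_user_id("")         # invalid class: blank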


What is Security testing?

It is a process used to find out whether the security features of a system are implemented as designed and whether they are adequate for the proposed application environment. This process involves functional testing, penetration testing and verification.


What is Installation testing?

Installation testing is done to verify that the hardware and software are installed and configured properly and that all the system components are exercised during the testing process. Installation testing can also cover testing with high volumes of data, error messages, and security aspects.


What is AUT ?

AUT is nothing but "Application Under Test". After the design and coding phases of the software development life cycle, when the application comes in for testing, it is referred to as the Application Under Test.


What is Defect Leakage ?

Defect leakage occurs at the customer or end-user side after the application is delivered. If, after the release of the application to the client, the end user finds any defects while using the application, those defects are said to have leaked. Defect leakage is also called a bug leak.


What are the contents in an effective Bug report?

Project, Subject, Description, Summary, Detected By (name of the tester), Assigned To (name of the developer who is supposed to fix the bug), Test Lead (name), Detected in Version, Closed in Version, Date Detected, Expected Date of Closure, Actual Date of Closure, Priority (Low, Medium, High, Urgent), Severity (ranges from 1 to 5), Status, Bug ID, Attachment, and Test Case Failed (the test case that failed for the bug).

What is Bug Life Cycle?

Bug Life Cycle is nothing but the various phases a Bug undergoes after it is raised or reported.

* New or Opened
* Assigned
* Fixed
* Tested
* Closed


What is Error guessing and Error seeding ?

Error Guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to expose them.

Error Seeding is the process of intentionally adding known faults to a program in order to monitor their rate of detection and removal, and also to estimate the number of faults remaining in the program.
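
Error seeding gives a rough estimate of the remaining faults: if testing finds a certain fraction of the seeded faults, it has probably found a similar fraction of the real ones. A small worked sketch in Python, with invented numbers:

seeded_total = 20   # faults intentionally planted in the program
seeded_found = 15   # planted faults that testing actually detected
real_found = 60     # genuine (non-seeded) faults detected by the same testing

# Testing caught 15/20 = 75% of the seeded faults, so assume it also
# caught about 75% of the real faults.
estimated_real_total = real_found * seeded_total / seeded_found
estimated_remaining = estimated_real_total - real_found

print(estimated_real_total)   # 80.0 estimated real faults in total
print(estimated_remaining)    # 20.0 estimated faults still remaining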


What is the difference between Bug, Error and Defect?

Error : It is the deviation between the actual and the expected value.

Bug : It is found in the development environment before the product is shipped to the respective customer.

Defect : It is found in the product itself after it is shipped to the respective customer.

What is Test bed and Test data ?

Test Bed is an execution environment configured for software testing. It consists of specific hardware, network topology, Operating System, the configuration of the product under test, system software and other applications. The Test Plan for a project should be developed from the test beds to be used.

Test Data is data that is run through a computer program to test the software. Test data can be used to check compliance with the effective controls in the software.

What is Negative testing?

Negative testing - Testing the system using invalid data is called negative testing, e.g. if the password should be a minimum of 8 characters, then testing it using 6 characters is negative testing.

What are SDLC and STLC ? Explain its different phases.


SDLC

* Requirement phase
* Designing phase (HLD, DLD (Program spec))
* Coding
* Testing
* Release
* Maintenance

STLC

* System Study
* Test planning
* Writing Test case or scripts
* Review the test case
* Executing test case
* Bug tracking
* Report the defect



What is Ad-hoc testing?

Ad hoc testing is testing the application without following any rules or test cases.

For Ad hoc testing one should have strong knowledge about the Application.


Describe bottom-up and top-down approaches in Integration Testing.

Bottom-up approach :
In this approach testing is conducted from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.

Top-down approach : In this approach testing is conducted from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.
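
A minimal sketch of both ideas in Python; the order-processing functions are invented purely to show where a driver and a stub fit:

# Sub-module under test: calculates an order total.
def order_total(items):
    return sum(price for _, price in items)

# STUB: stands in for a payment sub-module that is not yet developed,
# so the flow above it can still be exercised (top-down approach).
def charge_card_stub(amount):
    return {"status": "approved", "amount": amount}  # canned response

# DRIVER: stands in for the not-yet-developed main module and simply
# calls the sub-module with test inputs (bottom-up approach).
def driver():
    items = [("book", 10.0), ("pen", 2.5)]
    total = order_total(items)
    assert total == 12.5
    assert charge_card_stub(total)["status"] == "approved"
    print("sub-module behaves as expected")

driver()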

What is the difference between structural and functional testing?

Structural testing is a "white box" testing and it is based on the algorithm or code.

Functional testing is a "black box" (behavioral) testing where the tester verifies the functional specification.


What is the difference between Re-test and Regression Testing?
Re-test - Retesting means testing only a certain part of the application again, without considering how the change affects other parts or the whole application.

Regression Testing - Testing the application after a change in a module or part of the application, to check whether the code change affects the rest of the application.

What is UAT testing? When it is to be done?

UAT testing - UAT stands for 'User Acceptance Testing'. This testing is carried out from the user's perspective and it is usually done before the release.


What are the basic solutions for the software development problems?

* Basic requirements - clear, detailed, complete, achievable, testable requirements have to be developed. Use prototypes to help pin down requirements. In agile environments, continuous and close coordination with customers/end-users is needed.
* Realistic schedules - allow enough time to plan, design, test, fix bugs, re-test, change, and document within the given schedule.
* Adequate testing - testing should be started early, the software should be re-tested after bugs are fixed or changes are made, and enough time should be spent on testing and bug-fixing.
* Proper study of initial requirements - be ready to handle more changes after development has begun, and be ready to explain the changes made to others. Work closely with the customers and end-users to manage expectations. This avoids excessive changes in the later stages.
* Communication - conduct frequent inspections and walkthroughs at appropriate times; ensure that the information and the documentation are available and up to date, preferably in electronic form. Place more emphasis on promoting teamwork and cooperation inside the team; use prototypes and proper communication with the end-users to clarify their doubts and expectations.

What are the common problems in the software development process?

* Inadequate requirements from the client - if the requirements given by the client are not clear, are unfinished or are not testable, problems may arise.
* Unrealistic schedules - sometimes too much work is given to the developer with a demand to complete it in a short duration; then problems are unavoidable.
* Insufficient testing - problems can arise when the developed software is not tested properly.
* Additional work given under the existing process - a request from higher management to work on another project or task brings problems when the project is being tested as a team.
* Miscommunication - in some cases the developer is not informed about the client's requirements and expectations, so there can be deviations.


Why does software have bugs?

* Miscommunication or no communication - about the details of what an application should or shouldn't do.
* Programming errors - in some cases the programmers make mistakes.
* Changing requirements - the end-user may not understand the effects of changes, or may understand them and request them anyway; redesign, rescheduling of engineers, effects on other projects, and work already completed may have to be redone or thrown out.
* Time pressure - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.


What software testing types can be considered?


Black box testing –
This type of testing doesn’t require any knowledge of the internal design or coding. These Tests are based on the requirements and functionality.

White box testing –
This kind of testing is based on the knowledge of internal logic of a particular application code. The Testing is done based on the coverage of code statements, paths, conditions.

Unit testing – the 'micro' scale of testing; this is mostly used to test the particular functions or code modules. This is typically done by the programmer and not by testers; it requires detailed knowledge of the internal program design and code. It cannot be done easily unless the application has a well-designed architecture with tight code; this type may require developing test driver modules or test harnesses.
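
For example, a developer-level unit test in Python's unittest style exercises a single function in isolation; the add function here is a trivial stand-in for the unit under test:

import unittest

def add(a, b):
    """Trivial stand-in for the unit (function/module) under test."""
    return a + b

class AddTest(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()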

Sanity testing or Smoke testing – This type of testing is done initially to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing the system every 5 minutes or corrupting databases, the software may not be in a 'sound' condition to proceed with further testing in its current state.

Functional testing – This is commonly used black-box testing geared to check the functional requirements of an application; this type of testing should be done by testers.

Integration testing – This is testing the combined ‘parts’ of an application to determine whether they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Incremental Integration testing – This is continuous testing of an application as new functionality is added to the existing functionality; it checks that each part of the application works on its own before all parts of the program are completed. It may require developing test drivers, and it is done by programmers or by testers.

Regression testing – This is testing the whole application again after fixes or modifications have been made to the software. It is mostly done toward the end of the software development life cycle. Automated testing tools are often used for this type of testing.

System testing – This is black-box type testing that is based on the overall requirements specifications and covers all the combined parts of a system.

End-to-end testing – This is similar to system testing; this involves testing of a complete application environment such as interacting with a database, using network communications, or interacting with other hardware, applications and so on.

UAT ( User Acceptance Testing ) – This type of testing comes at the final stage and is mostly done against the specifications of the end-user or client.

Usability testing – This testing is done to check the 'user-friendliness' of the application. This depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Compatibility testing – Testing how well the software performs in a particular hardware, software, operating system, network etc.

Comparison testing – This is nothing but comparing the software's strengths and weaknesses with those of another competing product.

Mutation testing –
This is another method for determining if a set of test data or test cases is useful, by purposely introducing various code changes or bugs and retesting with the original test data or cases to determine whether the 'bugs' are detected.
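
A toy illustration of the idea in Python: a deliberate 'mutant' of a function is introduced and the existing test data is checked to see whether it detects (kills) the mutant. All names and values are invented for the example:

def max_of(a, b):         # original implementation
    return a if a > b else b

def max_of_mutant(a, b):  # mutant: '>' deliberately changed to '<'
    return a if a < b else b

test_data = [(1, 2, 2), (5, 3, 5), (4, 4, 4)]  # (a, b, expected max)

# A useful test suite passes on the original and fails on the mutant.
assert all(max_of(a, b) == expected for a, b, expected in test_data)
killed = any(max_of_mutant(a, b) != expected for a, b, expected in test_data)
print("mutant detected (killed):", killed)  # True -> the test data is useful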

How do you decide when you have 'tested enough’?

Common factors in deciding when to stop are:

* Deadlines (release deadlines, testing deadlines, etc.)
* Test cases completed with certain percentage passed
* Test budget depleted
* Coverage of code/functionality/requirements reaches a specified point
* Bug rate falls below a certain level
* Beta or alpha testing period ends

Describe the Software Development Life Cycle

It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

Describe the difference between validation and verification

Verification is done by frequent evaluations and meetings to appraise the documents, plans, code, requirements, and specifications. This is done with checklists, walkthroughs, and inspection meetings.

Validation is done during actual testing and it takes place after all the verifications have been done.

What is the difference between QA and testing?

Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented to 'detection'.

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is quality assurance?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.


What is the purpose of the testing?

Software testing is the process used to help identify the Correctness, Completeness, Security and Quality of the developed Computer Software.

Software Testing is the process of executing a program or system with the intent of finding errors.