Manual Testing Tutorial

[vc_row][vc_column][vc_column_text]

Manual Testing Tutorial – Complete Guide | Software Testing Tutorial

[/vc_column_text][vc_column_text]In this free online Software Testing Tutorial / Manual Testing Tutorial, we cover all manual testing concepts in detail with easy-to-understand examples. This Software Testing Tutorial / Manual Testing Tutorial helps learners from beginner to advanced level understand software testing concepts with practical examples.[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column width=”5/6″ css=”.vc_custom_1585002008288{background-color: #f9f9f9 !important;}”][vc_column_text]

Why This Manual Testing Tutorial?

[/vc_column_text][vc_column_text]This Software Testing Tutorial covers everything from the basics to advanced testing concepts.[/vc_column_text][vc_column_text]

What are the prerequisites for this Manual Testing Tutorial?

[/vc_column_text][vc_column_text]

  • Basic computer knowledge
  • An interest in learning Software Testing

[/vc_column_text][vc_column_text]

Who is the targeted audience of this Software Testing Tutorial?

[/vc_column_text][vc_column_text]Anyone with an interest in learning Software Testing.[/vc_column_text][vc_column_text]

What is Software Testing:

[/vc_column_text][vc_column_text]Software testing is a process of evaluating the functionality of a software application to check whether the developed software meets the specified requirements, and of identifying defects so that a defect-free, quality product can be delivered.[/vc_column_text][vc_column_text]Let’s see the standard definition, software testing types such as manual and automation testing, testing methods, testing approaches, and types of black box testing.

Definition:

Software Testing Definition according to ANSI/IEEE 1059 standard – A process of analyzing a software item to detect the differences between existing and required conditions (i.e., defects) and to evaluate the features of the software item.[/vc_column_text][vc_column_text]

Software Testing Types:

Manual Testing: Manual testing is the process of testing software by hand to learn more about it and to find what is and isn’t working. This usually includes verifying all the features specified in the requirements documents, but it often also involves testers trying the software from the perspective of its end users. Manual test plans vary from fully scripted test cases, giving testers detailed steps and expected results, through to high-level guides that steer exploratory testing sessions. There are lots of sophisticated tools on the market to help with manual testing, but if you want a simple and flexible place to start, take a look at Testpad.

Automation Testing: Automation testing is the process of testing the software using an automation tool to find the defects. In this process, testers execute the test scripts and generate the test results automatically using automation tools. Some of the popular automation testing tools for functional testing are QTP/UFT and Selenium.
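To illustrate the idea without tying it to any particular tool, here is a minimal sketch of what an automated test script does: it exercises the software and reports pass/fail with no human intervention. The `add_numbers` function is an invented stand-in for the real application under test.

```python
# Hypothetical system under test -- stands in for the real application.
def add_numbers(a, b):
    return a + b

# A tiny automated "test script": each case is ((inputs), expected output).
test_cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def run_tests():
    """Execute every case and record PASS/FAIL automatically."""
    results = []
    for (a, b), expected in test_cases:
        actual = add_numbers(a, b)
        results.append(("PASS" if actual == expected else "FAIL",
                        (a, b), expected, actual))
    return results

results = run_tests()
```

Real automation tools such as Selenium or QTP/UFT do the same thing at a much larger scale, driving a browser or desktop UI instead of a plain function.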

Testing Methods:

  1. Static Testing
  2. Dynamic Testing

Static Testing: It is also known as Verification in Software Testing. Verification is a static method of checking documents and files. It ensures that we are building the product right, i.e., it verifies the requirements we have and checks whether we are developing the product accordingly.

Activities involved here are Inspections, Reviews, Walkthroughs

Dynamic Testing: It is also known as Validation in Software Testing. Validation is a dynamic process of testing the real product. It checks whether we are building the right product, i.e., it validates that the product we have developed behaves correctly.

The activity involved here is testing the software application.

Read more on Static and Dynamic Testing.

Testing Approaches:

There are three types of software testing approaches.

  1. White Box Testing
  2. Black Box Testing
  3. Grey Box Testing

White Box Testing: It is also called Glass Box, Clear Box, or Structural Testing. White Box Testing is based on the application’s internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. This testing is usually done at the unit level.

Black Box Testing: It is also called Behavioral/Specification-Based/Input-Output Testing. Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at its internal code structure.

Grey Box Testing: Grey Box Testing is a combination of White Box and Black Box Testing. Testers who perform this type of testing need access to design documents, which helps them create better test cases.
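To make black box test design concrete, here is a small sketch using boundary value analysis. The `grade` function and its 0–100 scale with a pass mark of 50 are invented for illustration; a black box tester designs the cases purely from that specification, never from the function body.

```python
# Hypothetical implementation -- a black-box tester never looks at this body.
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Black-box test design: boundary values derived from the spec alone
# (edges of the valid range and both sides of the pass mark).
boundary_cases = {0: "fail", 49: "fail", 50: "pass", 100: "pass"}

for score, expected in boundary_cases.items():
    assert grade(score) == expected
```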

Read more on White Box and Black Box Testing

No matter whether you are a black box, white box, or grey box tester, testing plays a huge role in the success of a software project.

Testing Levels:

  1. Unit Testing
  2. Integration Testing
  3. System Testing
  4. Acceptance Testing

Unit Testing: Unit Testing checks whether the individual modules of the source code work properly, i.e., each unit of the application is tested separately by the developer in the developer’s environment. It is also known as Module Testing or Component Testing.
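As a sketch of unit testing in practice, here is a tiny unit test written with Python’s built-in `unittest` framework; `is_even` is a hypothetical unit under test.

```python
import unittest

# Hypothetical unit under test.
def is_even(n):
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

# Run the suite programmatically (normally you would run:
# python -m unittest <module>).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsEven)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```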

Integration Testing: Integration Testing is the process of testing the connectivity or data transfer between two or more unit-tested modules. It is also known as I&T Testing or String Testing. It is subdivided into the Top-Down Approach, the Bottom-Up Approach, and the Sandwich Approach (a combination of Top-Down and Bottom-Up).
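A minimal sketch of the idea, with two invented “modules”: each is assumed to be unit tested already, and the integration test checks the data handed from one to the other.

```python
# Two hypothetical unit-tested "modules".
def parse_order(raw):
    """Module A: turns 'item,qty' text into a structured order."""
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, unit_price=10):
    """Module B: computes the total for a parsed order."""
    return order["qty"] * unit_price

# Integration test: verify the data handed from A to B gives the right total.
def test_parse_then_price():
    total = price_order(parse_order("widget, 3"))
    assert total == 30

test_parse_then_price()
```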

System Testing (end-to-end testing): It is a black box testing technique in which the fully integrated application is tested; this is also called end-to-end scenario testing. It ensures that the software works on all intended target systems, verifies every input in the application to check for the desired outputs, and tests the user’s experience with the application.

Acceptance Testing: Acceptance testing is performed to obtain customer sign-off so that the software can be delivered and payments received. Types of Acceptance Testing are Alpha, Beta & Gamma Testing.

Read more on Levels of Testing.

Types of Black Box Testing:

  1. Functional Testing
  2. Non-functional Testing

Functional testing:

In simple words, functional testing checks what the system actually does. It verifies that each function of the software application behaves as specified in the requirement document: each functionality is exercised by providing appropriate input, and the actual output is compared with the expected output. It falls within the scope of black box testing, and the testers need not be concerned with the source code of the application.
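As a sketch, functional testing boils down to feeding specified inputs and comparing actual outputs against the expected outputs from the requirement document. The `is_valid_username` function and its 3–10-character alphanumeric rule are invented for illustration.

```python
# Hypothetical function under test: the spec says a username is valid
# when it is 3-10 characters long and alphanumeric.
def is_valid_username(name):
    return 3 <= len(name) <= 10 and name.isalnum()

# Functional tests: inputs paired with expected outputs from the spec.
spec_cases = [
    ("bob", True),      # minimum valid length
    ("ab", False),      # too short
    ("a" * 11, False),  # too long
    ("bob!", False),    # non-alphanumeric
]

for value, expected in spec_cases:
    assert is_valid_username(value) is expected
```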

Non-functional testing:

In simple words, non-functional testing checks how well the system performs. It covers various aspects of the software such as performance, load, stress, scalability, security, and compatibility. Its main focus is to improve the user experience, for example how fast the system responds to a request.
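As a minimal sketch of one non-functional check (performance), the snippet below times an operation against an assumed response-time budget; both the operation and the budget are invented for illustration.

```python
import time

# Hypothetical operation whose response time we want to bound.
def handle_request():
    return sum(range(10_000))

# A crude performance check: the request must finish within a budget.
BUDGET_SECONDS = 0.5  # assumed service-level target, for illustration only

start = time.perf_counter()
handle_request()
elapsed = time.perf_counter() - start

within_budget = elapsed < BUDGET_SECONDS
```

Dedicated tools (e.g. load-testing frameworks) repeat this kind of measurement under many concurrent users; the principle is the same.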

There are more than 100 types of testing. You can check this post where we have mentioned 100+ software testing types.

Testing Artifacts: 

Test Artifacts are the deliverables that are given to the stakeholders of a software project. A software project that follows the SDLC goes through different phases before it is delivered to the customer, and each phase produces some deliverables. Some deliverables are provided before the testing phase commences, some during the testing phase, and the rest after the testing phase is completed.

Some of the test deliverables are as follows:

  • Test plan
  • Traceability matrix
  • Test case
  • Test script
  • Test suite
  • Test data or Test Fixture
  • Test harness

Read more: Detailed explanation – Test Artifacts
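One of these deliverables, the traceability matrix, can be sketched as a simple mapping from requirement IDs to the test cases that cover them (all IDs here are invented):

```python
# A minimal requirements traceability matrix (RTM): requirement IDs
# mapped to the test cases that verify them.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # not yet covered by any test case
}

# Coverage check: every requirement should map to at least one test case.
uncovered = [req for req, cases in rtm.items() if not cases]
```

In practice the RTM also traces defects back to requirements, but the core idea is this requirement-to-test mapping.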

Why do we need Software Testing

Interviewers may phrase this question as “Why do we need Software Testing”, “Why is testing required”, or “Why Software Testing”.

When I started in software testing, I had no idea what software testing is or why it is required. I also had no clue where to start. Maybe you are in the same situation I was in long back. I would say software testing is an art of evaluating the functionality of a software application to check whether the developed software meets the specified requirements and to identify defects, so that a defect-free, quality product can be delivered.

What if there were no Software Testing in the Software Development process?

As per the current trend, due to constant change and development in digitization, our lives are improving in all areas, and the way we work has also changed. We access our banks online, we shop online, we order food online, and much more. We rely on software and systems. What if these systems turn out to be defective? We all know that one small bug can have a huge impact on a business in terms of financial loss and goodwill. To deliver a quality product, we need Software Testing in the Software Development process.

Some of the reasons why software testing has become a very significant and integral part of the field of information technology are as follows.

  1. Cost effectiveness
  2. Customer Satisfaction
  3. Security
  4. Product Quality

1. Cost effectiveness

As a matter of fact, design defects can never be completely ruled out for any complex system, not because developers are careless but because the complexity of such systems is intractable. If design issues go undetected, it becomes more difficult to trace defects back and rectify them, and more expensive to fix them. Sometimes, while fixing one bug, we may unknowingly introduce another one in some other module. If bugs are identified in the early stages of development, they cost much less to fix. That is why it is important to find defects early in the software development life cycle. Cost effectiveness is one of the key benefits of testing.

It is better to start testing early and introduce it in every phase of the software development life cycle; regular testing is needed to ensure that the application is developed as per the requirements.

2. Customer Satisfaction

In any business, the ultimate goal is the best possible customer satisfaction. Software testing improves the user experience of an application and thereby satisfies the customers. Happy customers mean more revenue for a business. Providing the best user experience is one of the reasons why software testing is necessary.

3. Security

This is probably the most sensitive and vulnerable area of software testing. Testing (penetration testing & security testing) helps secure the product. Hackers gain unauthorized access to data, steal user information, and use it for their own benefit. If your product is not secure, users won’t prefer it; users always look for trusted products. Testing helps remove vulnerabilities from the product.

4. Product Quality

Software Testing is an art that helps strengthen the market reputation of a company by delivering a quality product to the client, as specified in the requirement specification documents.

Due to these reasons, software testing has become a very significant and integral part of the Software Development process.

Now let’s move ahead and have a look at the Software Development Life Cycle.[/vc_column_text][vc_column_text]

Software Development Life Cycle – SDLC | Software Testing Material

[/vc_column_text][vc_column_text]Software Development Life Cycle (SDLC) aims to produce a high-quality system that meets or exceeds customer expectations, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

Detailed Explanation:

The SDLC is the process followed in software projects. Each phase of the SDLC produces deliverables required by the next phase in the life cycle: requirements are translated into design, code is produced according to the design, testing is done on the developed product based on the requirements, and deployment is done once testing is completed. The SDLC aims to produce a high-quality system that meets or exceeds customer expectations, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

SDLC Process:

The SDLC is a process followed in software projects to develop a product systematically and to deliver a high-quality product. By following a proper SDLC process, software companies can respond well to market pressure and release high-quality software. This process involves the different stages of the SDLC, right from the requirement stage to the deployment and maintenance phases. We will look at these SDLC phases in a later section of this post.

Why SDLC:

Some of the reasons why SDLC is important in Software Development are as follows.

  • It provides visibility of a project plan to all the involved stakeholders
  • It helps us to avoid project risks
  • It allows us to track and control the project
  • It doesn’t conclude until all the requirements have been achieved

Don’t miss: Difference between SDLC & STLC

SDLC Phases:

A typical Software Development Life Cycle (SDLC) consists of the following phases:

  1. Requirement Phase
  2. Analysis Phase
  3. Design Phase
  4. Development Phase
  5. Testing Phase
  6. Deployment & Maintenance Phase

Requirement Phase:

Requirement gathering and analysis is the most important phase in the software development life cycle. The Business Analyst collects the requirements from the Customer/Client as per the client’s business needs and documents them in the Business Requirement Specification (the document name varies depending upon the organization; some examples are Customer Requirement Specification (CRS), Business Specification (BS), etc.) and provides the same to the Development Team.

Analysis Phase:

Once requirement gathering and analysis are done, the next step is to define and document the product requirements and get them approved by the customer. This is done through the SRS (Software Requirement Specification) document, which consists of all the product requirements to be designed and developed during the project life cycle. Key people involved in this phase are the Project Manager, Business Analyst, and senior members of the team. The outcome of this phase is the Software Requirement Specification.

Design Phase:

It has two steps:

  • HLD – High-Level Design – It gives the architecture of the software product to be developed and is done by architects and senior developers.
  • LLD – Low-Level Design – It is done by senior developers. It describes how each and every feature and component in the product should work. Here, only the design is produced, not the code.

The outcome of this phase is the High-Level Document and the Low-Level Document, which work as inputs to the next phase.

Development Phase:

Developers of all levels (senior, junior, fresher) are involved in this phase. This is the phase where we start building the software and writing the code for the product. The outcome of this phase is the Source Code Document (SCD) and the developed product.

Testing Phase:

When the software is ready, it is sent to the testing department, where the test team tests it thoroughly for different defects. They test the software either manually or using automated testing tools, depending on the process defined in the STLC (Software Testing Life Cycle), and ensure that each and every component of the software works fine. Once QA makes sure that the software is error-free, it goes to the next stage, Implementation. The outcome of this phase is the quality product and the testing artifacts.

Deployment & Maintenance Phase:

After successful testing, the product is delivered/deployed to the customer for their use. Deployment is done by the deployment/implementation engineers. Once customers start using the developed system, the actual problems come up and need to be solved from time to time; fixing the issues found by the customer falls under the maintenance phase. 100% testing is not possible, because the way testers test the product differs from the way customers use it. Maintenance should be done as per the SLA (Service Level Agreement).

Types of Software Development Life Cycle Models:

There are various Software Development Life Cycle models in the industry which are followed during the software development process. These models are also referred to as Software Development Process Models.

Each SDLC model might take a different approach, but the software development phases and activities remain the same in all the models.

Some of the Software Development LifeCycle Models (SDLC Models) followed in the industry are as follows:

  1. Waterfall Model: The Waterfall Model is a traditional model. It is also known as the Sequential Design Process, often used in the SDLC, in which progress is seen as flowing steadily downwards like a waterfall through the different phases: Requirement Gathering, Feasibility Study/Analysis, Design, Coding, Testing, Installation, and Maintenance. Each phase begins only once the goal of the previous phase is completed. This methodology is preferred in projects where quality is more important than schedule or cost, and it is best suitable for short-term projects where the requirements will not change (e.g., a Calculator or an Attendance Management system). Learn in detail – Waterfall Model
  2. Spiral: The Spiral Model works in an iterative nature. It is a combination of the prototype development process and the linear development process (waterfall model). This model places more emphasis on risk analysis and is mostly adopted for large and complicated projects where risk is high. Every iteration starts with planning and ends with product evaluation by the client. Learn in detail – Spiral
  3. V Model: The V-model is also known as the Verification and Validation (V&V) model. In this model, each phase of the SDLC must be completed before the next phase starts. It follows a sequential design process similar to the waterfall model. Learn in detail – V Model
  4. Prototype: The Prototype Model is one of the most widely used Software Development Life Cycle models (SDLC models). A prototype of the end product is developed prior to the actual product. This SDLC model is usually used when the customers don’t know the project requirements beforehand. By developing a prototype of the end product, it gives the customers an opportunity to see the product early in the life cycle.
    It starts by getting the inputs (requirements) from the customers and developing the prototype. Based on customer feedback, the requirements are refined. Actual product development starts once the customer approves the prototype. The developed product is released for customer feedback, and the released product is refined as per the customers’ comments. This process goes on until the product is accepted by the customer.
  5. Agile: Agile Scrum Methodology is one of the popular Agile software development methods. There are other Agile software development methods, but the one that is widely used is the Agile Scrum Methodology. It is a combination of the incremental and iterative models for managing product development. Learn in detail – Agile

The other related Software Development Life Cycle models are the Agile Model, Rapid Application Development, the Rational Unified Model, the Hybrid Model, etc.[/vc_column_text][vc_column_text]

What is Software Testing Life Cycle (STLC)

[/vc_column_text][vc_column_text]

What Is Bug Life Cycle or Defect Life Cycle In Software Testing

[/vc_column_text][vc_column_text]The bug life cycle is also known as the defect life cycle. In the software development process, a bug has a life cycle, and it must go through this life cycle to be closed. The bug life cycle varies depending upon the tools used (QC, JIRA, etc.) and the process followed in the organization.

Before going further, I strongly recommend you go through both software life cycles, the SDLC and the STLC.

What is a Software Bug?

A software bug can be defined as abnormal behavior of the software. A bug’s life starts when the defect is found and ends when the defect is closed, after ensuring it is not reproduced.

The different states of a bug in the bug life cycle are as follows:

New: When a tester finds a new defect, he should provide a proper defect document to the development team so they can reproduce and fix it. In this state, the status of the defect posted by the tester is “New”.

Assigned: Defects in the “New” status will be approved (if valid) and assigned to the development team by the Test Lead/Project Lead/Project Manager. Once the defect is assigned, the status of the bug changes to “Assigned”.

Open: The development team starts analyzing and working on the defect fix.

Fixed: When a developer makes the necessary code change and verifies the change, the status of the bug is changed to “Fixed” and the bug is passed to the testing team.

Test: If the status is “Test”, it means the defect has been fixed and is ready to be tested to confirm whether it is indeed fixed.

Verified: The tester re-tests the bug after it has been fixed by the developer. If no bug is detected in the software, the fix is confirmed and the status assigned is “Verified”.

Closed: After verifying the fix, if the bug no longer exists, the status of the bug is changed to “Closed”.

Reopen: If the defect remains the same after the retest, the tester posts the defect using a defect retesting document and changes the status to “Reopen”. The bug then goes through the life cycle again to be fixed.

Duplicate: If the defect is reported twice or two defects correspond to the same underlying bug, the status of one is changed to “Duplicate” by the development team.

Deferred: In some cases, the Project Manager/Lead may set the bug status to “Deferred”:

  • If the bug is found at the end of the release and is minor or not important to fix immediately
  • If the bug is not related to the current build
  • If it is expected to get fixed in the next release
  • If the customer is thinking of changing the requirement

In such cases, the status is changed to “Deferred” and the bug is fixed in a later release.

Rejected: If the system is working according to the specifications and the bug is just due to some misinterpretation (such as referring to old requirements or extra features), the Team Lead or developers can mark such bugs as “Rejected”.
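The main statuses above can be sketched as a tiny state machine; the transition table below is a simplified reading of this life cycle, not a standard:

```python
# Allowed status transitions in the bug life cycle (a simplified subset).
TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Deferred", "Duplicate"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Test"},
    "Test": {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Reopen": {"Assigned"},
}

def move(status, new_status):
    """Return the new status if the transition is allowed, else raise."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# A defect that passes retesting on the first attempt:
path = ["New", "Assigned", "Open", "Fixed", "Test", "Verified", "Closed"]
status = path[0]
for nxt in path[1:]:
    status = move(status, nxt)
```

Defect-tracking tools such as JIRA enforce exactly this kind of workflow, with the transition table configured per project.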

Some other statuses are:

Cannot be fixed: The technology does not support a fix, the issue lies at the root of the product, or the cost of fixing the bug is too high.

Not Reproducible: Platform mismatch, improper defect document, data mismatch, build mismatch, or inconsistent defects.

Need more information: If a developer is unable to reproduce the bug as per the steps provided by the tester, the developer can change the status to “Need more information”. In this case, the tester needs to add detailed reproduction steps and assign the bug back to the development team for a fix. This won’t happen if the tester writes a good defect document.[/vc_column_text][vc_column_text]

Software Testing Life Cycle:

Software Testing Life Cycle (STLC) identifies what test activities to carry out and when to accomplish them. Even though testing differs between organizations, there is a common testing life cycle.

Don’t Miss: Manual Testing Complete Tutorial

The different phases of Software Testing Life Cycle are:

  1. Requirement Analysis
  2. Test Planning
  3. Test Design
  4. Test Environment Setup
  5. Test Execution
  6. Test Closure

Don’t miss: Difference between SDLC & STLC

Every phase of STLC (Software Testing Life Cycle) has a definite Entry and Exit Criteria.

Requirement Analysis:

The entry criterion for this phase is the BRS (Business Requirement Specification) document. During this phase, the test team studies and analyzes the requirements from a testing perspective. This phase helps to identify whether the requirements are testable or not. If any requirement is not testable, the test team can communicate with the various stakeholders (Client, Business Analyst, Technical Leads, System Architects, etc.) during this phase so that a mitigation strategy can be planned.

Entry Criteria: BRS (Business Requirement Specification)
Deliverables: List of all testable requirements, Automation feasibility report (if applicable)

Test Planning:

Test planning is the phase in which the overall approach to testing is defined. Typically the Test Manager/Test Lead determines the effort and cost estimates for the entire project in this phase. The Test Plan is prepared based on the requirement analysis. Activities like resource planning, determining roles and responsibilities, tool selection (if automation is used), and identifying training requirements are also carried out here. The deliverables of this phase are the Test Plan & effort estimation documents.

Must Read: Test Strategy In-Depth Explanation

Entry Criteria: Requirements Documents
Deliverables: Test Strategy, Test Plan, and Test Effort estimation document.

Must Read: How To Write A Good Test Plan

Test Design:

The test team starts the test case development activity in this phase. The team prepares test cases, test scripts (if automation is used), and test data. Once the test cases are ready, they are reviewed by peer members or the team lead. The test team also prepares the Requirement Traceability Matrix (RTM), which traces the requirements to the test cases needed to verify whether those requirements are fulfilled. The deliverables of this phase are the test cases, test scripts, test data, and the Requirements Traceability Matrix.

Entry Criteria: Requirements Documents (updated version covering unclear or missing requirements)
Deliverables: Test cases, Test Scripts (if automation), Test data.

Must Read: How To Write Test Cases
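As a sketch, a test case produced in this phase is essentially a structured record; the field names below are one common convention, not a standard:

```python
# A test case sketched as a structured record. The field names and IDs
# are illustrative, not a standard.
test_case = {
    "id": "TC-001",
    "title": "Valid login",
    "preconditions": "User account exists",
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "expected_result": "User lands on the dashboard",
    "requirement_id": "REQ-001",  # the link that feeds the RTM
}
```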

Test Environment Setup:

This phase can be started in parallel with the test design phase. The test environment is set up based on the hardware and software requirements list. In some cases, the test team may not be involved in this phase, and the development team or the customer provides the test environment. Meanwhile, the test team should prepare smoke test cases to check the readiness of the given test environment.

Entry Criteria: Test Plan, Smoke Test cases, Test Data
Deliverables: Test Environment, Smoke Test Results.

Test Execution:

The test team starts executing the test cases as planned. The result of each test case (Pass/Fail) should be updated in the test cases. A defect report should be prepared for failed test cases and reported to the development team through a bug tracking tool (e.g., Quality Center) so the defects can be fixed. Retesting is performed once a defect is fixed. Click here to see the Bug Life Cycle.

Entry Criteria: Test Plan document, Test cases, Test data, Test Environment.
Deliverables: Test case execution report, Defect report, RTM

Must Read: How To Write An Effective Defect Report
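The execution step can be sketched as follows: run each planned test case, record Pass/Fail, and raise a defect report entry for each failure (all names are invented for illustration):

```python
# Sketch of test execution: run each test case, record Pass/Fail, and
# collect a defect report entry for every failure.
def execute(test_cases):
    execution_report, defects = [], []
    for tc in test_cases:
        passed = tc["run"]()  # the test procedure returns True on success
        execution_report.append((tc["id"], "Pass" if passed else "Fail"))
        if not passed:
            defects.append({"test_case": tc["id"], "summary": tc["title"]})
    return execution_report, defects

report, defects = execute([
    {"id": "TC-001", "title": "Valid login", "run": lambda: True},
    {"id": "TC-002", "title": "Login with wrong password", "run": lambda: False},
])
```

In a real project the failing entries would be logged in a bug tracking tool rather than a list, but the flow is the same.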

Test Closure:

In the final stage, we prepare the Test Closure Report and Test Metrics.
The testing team is called for a meeting to evaluate the cycle completion criteria based on test coverage, quality, time, cost, software, and business objectives. The test team analyzes the test artifacts (such as test cases, defect reports, etc.) to identify strategies to implement in the future, which will help remove process bottlenecks in upcoming projects. The test metrics and the test closure report are prepared based on these criteria.

Entry Criteria: Test Case Execution report (make sure there are no high severity defects opened), Defect report
Deliverables: Test Closure report, Test metrics[/vc_column_text][vc_column_text]

Waterfall Model in Software Development Life Cycle

[/vc_column_text][vc_column_text]

Spiral Model in Software Development Life Cycle

[/vc_column_text][vc_column_text]Before starting the Spiral Model in the Software Development Life Cycle, I would suggest you check this post: “Software Development Life Cycle”.

You can see different types of Software Development Methodologies in that post; I have mentioned the Spiral Model as one of them.

I would also suggest you read about the “Software Testing Life Cycle”.

Let’s see what the Spiral Model in the SDLC is and its advantages and disadvantages in detail.

Spiral Model:

Spiral Model was first described by Barry W. Boehm (American Software Engineer) in 1986.

The Spiral Model works in an iterative nature. It is a combination of the prototype development process and the linear development process (waterfall model). This model places more emphasis on risk analysis and is mostly adopted for large and complicated projects where risk is high. Every iteration starts with planning and ends with product evaluation by the client.

Let’s take the example of a product development team (like Microsoft). They know that there will be high risk and that they will face lots of difficulties in the journey of developing and releasing the product, and they also know that they will release the next version of the product while the current version is still in use. They prefer the Spiral Model to develop the product in an iterative nature: they can release one version of the product to the end user and start developing the next version, which includes new enhancements and improvements on the previous version (based on the issues faced by users of the previous version). For example, Microsoft released Windows 8, improved it based on user feedback, and released the next version (Windows 8.1).

Spiral Model undergoes 4 phases.

  • Planning Phase – Requirement gathering, cost estimation, resource allocation
  • Risk Analysis Phase – Identifying the strengths and weaknesses of the project
  • Design Phase – Coding, internal testing, and deployment
  • Evaluation Phase – Client evaluation (client-side testing) to get feedback

Advantages:

  • It allows requirement changes
  • Suitable for large and complicated projects
  • It allows better risk analysis
  • Cost effective due to good risk management

Disadvantages:

  • Not suitable for small projects
  • Success of the project depends on risk analysis phase
  • Have to hire more experienced resource especially for risk analysis

[/vc_column_text][vc_column_text]Before starting to read about the Waterfall Model, I would suggest you check this post: “Software Development Life Cycle”.

You can see different types of Software Development Methodologies in that post; I have mentioned the Waterfall Model as one of them.

I would also suggest you read about the Software Testing Life Cycle too.

Let’s see what the Waterfall Model is and its advantages and disadvantages in detail.

Waterfall Model:

The Waterfall Model is a traditional model. It is also known as the Sequential Design Process, often used in the SDLC, in which progress is seen as flowing steadily downwards like a waterfall through the different phases: Requirement Gathering, Feasibility Study/Analysis, Design, Coding, Testing, Installation, and Maintenance. Each phase begins only once the goal of the previous phase is completed. This methodology is preferred in projects where quality is more important than schedule or cost, and it is best suitable for short-term projects where the requirements will not change (e.g., a Calculator or an Attendance Management system).

Advantages:

  • Requirements do not change, nor do the design and code, so we get a stable product.
  • This model is simple to implement. Requirements are finalized earlier in the life cycle. So there won’t be any chaos in the next phases.
  • Required resources to implement this model are minimal compared to other methodologies
  • Every phase has specific deliverables. This gives the project manager and clients high visibility into the progress of the project.

Disadvantages:

  • Backtracking is not possible, i.e., we cannot go back and change requirements once the design stage is reached.
  • A change in requirements leads to changes in design and code, which results in defects in the project due to the overlapping of phases.
  • The customer may not be satisfied if the changes they require are not incorporated into the product.
  • The end result of the waterfall model may not be a flexible product.
  • In terms of testing, testers are involved only in the testing phase. Requirements are not tested in the requirement phase, and if we identify a bug in a requirement during the testing phase, the requirement cannot be modified. The bug carries through to the end and leads to a lot of re-work.
  • It is not suitable for long-term projects where requirements may change from time to time.
  • The waterfall model can be used only when the requirements are very well known and fixed.

Final words: Testing is not just about finding bugs. As per the Waterfall Model, testers get involved only near the end of the SDLC. Ages ago, the mantra of testing was simply to find bugs in the software; things have changed a lot since then. Several other SDLC models are now in use, and I will cover them in upcoming posts in detail, with their advantages and disadvantages. It is up to your team to choose the SDLC model depending on the project you are working on.[/vc_column_text][vc_column_text]

V Model in Software Development Life Cycle

[/vc_column_text][vc_column_text]Before starting with the V Model, I would recommend checking the post "Software Development Life Cycle"

That post covers different types of Software Development Methodologies, such as the Waterfall Model, Agile, and so on. Here I am going to write about the V Model, which I mentioned in that post.

I would also recommend reading about the Software Testing Life Cycle.

Let's see what the V Model is, along with its advantages and disadvantages, in detail.

V Model:

V-model is also known as the Verification and Validation (V&V) model. In this model, each phase of the SDLC must be completed before the next phase starts. It follows a sequential design process, just like the waterfall model.

You might wonder why we use the V Model at all if it is the same as the Waterfall Model. 🙂

The next point explains why we need this Verification and Validation Model.

It overcomes a key disadvantage of the waterfall model: there, testers are involved in the project only in the last phase of the development process.

In the V Model, the test team is involved in the early phases of the SDLC. Testing starts in the early stages of product development, which avoids the downward flow of defects and in turn reduces a lot of rework. Both teams (test and development) work in parallel: the test team works on activities such as preparing the test strategy, test plan, and test cases/scripts, while the development team works on the SRS, design, and coding.

Once the requirements are received, both the development and test teams start their activities.

Deliverables are produced in parallel in this model. While the developers work on the SRS (System Requirement Specification), the testers work on the BRS (Business Requirement Specification) and prepare the ATP (Acceptance Test Plan), ATC (Acceptance Test Cases), and so on.

Testers will be ready with all the required artifacts (such as the Test Plan and Test Cases) by the time the developers release the finished product. This saves a lot of time.

Let's see how the development team and the test team are involved in each phase of the SDLC in the V Model.

1. Once the client sends the BRS, both teams (test and development) start their activities. The developers translate the BRS into the SRS. The test team reviews the BRS to find missing or wrong requirements and writes the acceptance test plan and acceptance test cases.

2. In the next stage, the development team sends the SRS to the testing team for review, and the developers start building the HLD (High Level Design Document) of the product. The test team reviews the SRS against the BRS and writes the system test plan and test cases.

3. In the next stage, the development team starts building the LLD (Low Level Design) of the product. The test team reviews the HLD (High Level Design) and writes the integration test plan and integration test cases.

4. In the next stage, the development team starts coding the product. The test team reviews the LLD and writes the functional test plan and functional test cases.

5. In the next stage, the development team releases the build to the test team once unit testing is done. The test team carries out functional testing, integration testing, system testing, and acceptance testing on the released build, step by step.

Advantages:

  • Testing starts in the early stages of product development, which avoids the downward flow of defects and helps to find defects early
  • The test team is ready with the test cases by the time the developers release the software, which in turn saves a lot of time
  • Testing is involved in every stage of product development, which results in a quality product
  • Total investment is lower due to little or no rework

Disadvantages:

  • Initial investment is higher because the test team is involved right from the early stages.
  • Whenever there is a change in requirements, the same procedure repeats, which leads to more documentation work.

Applications:

Long-term projects, complex applications, and situations where the customer expects a very high-quality product within a stipulated time frame, because every stage is tested and the developers and testers work in parallel[/vc_column_text][vc_column_text]

Types of Software Testing – The Ultimate List | SoftwareTestingMaterial

[/vc_column_text][vc_column_text]In this post 'Types of Software Testing', I would like to mention almost all the software testing types in one place. One challenge in learning about software testing is that the industry uses many terms, and these terms are often used inconsistently. While there are no universally accepted definitions for all the testing terms, I think a good source to refer to is the ISTQB Certified Tester Foundation Level Syllabus.

100+ Types of Software Testing

I would like to start with Software Testing before going to the actual post 100+ Software Test Types.

Software Testing: It is a process, to evaluate the functionality of a software application with an intent to find whether the developed software met the specified requirements or not and to identify the defects to ensure that the product is defect free in order to produce the quality product. Read more on Software Testing Definitions & Approaches.

The Ultimate list of Types of Testing:

Let’s see different Types of Software Testing one by one.

1. Functional testing: In simple words, functional testing verifies what the system actually does. It checks that each function of the software application behaves as specified in the requirement document, by providing appropriate input and verifying whether the actual output matches the expected output. It falls within the scope of black box testing, and testers need not be concerned with the source code of the application.

2. Non-functional testing: In simple words, non-functional testing verifies how well the system performs. It refers to various aspects of the software such as performance, load, stress, scalability, security, compatibility, etc. The main focus is to improve the user experience by ensuring the system responds quickly to requests.

3. Manual testing: Manual testing is the process of testing the software manually to find defects. A tester should take the perspective of an end user and ensure all the features work as mentioned in the requirement document. In this process, testers execute the test cases and generate the reports manually, without using any automation tools.

4. Automated testing: Automation testing is the process of testing the software using an automation tool to find defects. In this process, executing the test scripts and generating the results are performed automatically by automation tools. Some of the most popular automation testing tools are HP QTP/UFT, Selenium WebDriver, etc.

Learn the Difference between Manual & Automated Testing here…

5. Black box testing: Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. This can be applied to every level of software testing such as Unit, Integration, System and Acceptance Testing.

Read more on black box testing here…

6. Glass box testing – Refer white box testing

7. White box testing: White Box Testing is also called Glass Box, Clear Box, or Structural Testing. It is based on the application's internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. This testing is usually done at the unit level.

Click here for more details.

8. Specification-based testing: Refer black-box testing.

9. Structure-based testing: Refer white-box testing.

10. Gray box testing: Grey box is the combination of both White Box and Black Box Testing. The tester who works on this type of testing needs to have access to design documents. This helps to create better test cases in this process.

11. Unit testing: Unit Testing is also called Module Testing or Component Testing. It is done to check whether the individual unit or module of the source code is working properly. It is done by the developers in the developer’s environment.
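
To make this concrete, here is a minimal unit test sketch in Python, written as plain assert-style test functions (the style that test runners such as pytest collect); the `apply_discount` function and its rules are hypothetical.

```python
# Hypothetical unit under test: a simple discount calculation.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: each checks one behavior of the unit in isolation.
def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_no_discount():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass  # expected: invalid input is rejected
    else:
        raise AssertionError("expected a ValueError")

# Run the unit tests directly.
test_typical_discount()
test_no_discount()
test_invalid_percent_rejected()
```

Because each test isolates one behavior of one unit, a failure points directly at the broken module.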

12. Component testing: Refer Unit Testing

13. Module testing: Refer Unit Testing

14. Integration testing: Integration Testing is the process of testing the interface between two software units. It is done using multiple approaches such as the Big Bang Approach, Top-Down Approach, Bottom-Up Approach, and Hybrid Integration Approach.

Integration Testing Complete Guide

15. System testing: Testing the fully integrated application to evaluate the system's compliance with its specified requirements is called System Testing, also known as End to End Testing. It verifies the completed system to ensure that the application works as intended.

16. Acceptance testing: It is also known as pre-production testing. It is done by the end users along with the testers to validate the functionality of the application. It is formal testing conducted to determine whether the application was developed as per the requirements, and it allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta, and Gamma.

17. Big bang Integration Testing: Combining all the modules once and verifying the functionality after completion of individual module testing.

Top-down and bottom-up integration are carried out using dummy modules known as Stubs and Drivers. These Stubs and Drivers stand in for missing components to simulate data communication between modules.

18. Top-down Integration Testing: Testing takes place from top to bottom. High-level modules are tested first and then low-level modules and finally integrating the low-level modules to a high level to ensure the system is working as intended. Stubs are used as a temporary module if a module is not ready for integration testing.

19. Bottom-up Integration Testing: It is the reverse of the Top-Down Approach. Testing takes place from the bottom up: the lowest-level modules are tested first, then the high-level modules, and finally the high-level modules are integrated with the low-level ones to ensure the system works as intended. Drivers are used as temporary modules for integration testing.
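
A tiny Python sketch of the stub and driver roles described above; the checkout and payment-gateway names are hypothetical.

```python
# Stub: stands in for an unfinished payment gateway module and returns
# a canned response instead of performing real communication.
def payment_gateway_stub(amount):
    return {"status": "approved", "amount": amount}

# Module under test: depends on a gateway component it calls.
def checkout(amount, gateway):
    response = gateway(amount)
    return response["status"] == "approved"

# Driver: invokes the module under test with sample data, standing in
# for the higher-level module that would normally call it.
def run_driver():
    return checkout(49.99, payment_gateway_stub)

print(run_driver())  # → True
```

The stub is *called by* the module under test; the driver *calls* the module under test.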

20. Hybrid Integration Testing: Hybrid integration testing is the combination of both Top-down and bottom-up integration testing.

21. Alpha testing: Alpha testing is done by the in-house developers (who developed the software) and testers. Sometimes alpha testing is done by the client or outsourcing team with the presence of developers or testers.

22. Beta testing: Beta testing is done by a limited number of end users before delivery. Usually, it is done in the client place.

23. Gamma Testing: Gamma testing is done when the software is ready for release with specified requirements. It is done at the client place. It is done directly by skipping all the in-house testing activities.

24. Equivalence partitioning testing: Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Hence, selecting one input from each group is enough to design the test cases.

Read more on Equivalence Partitioning Testing Technique…
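
As a sketch of the idea, the snippet below partitions a hypothetical age input (valid range 18–60) into three equivalence classes and tests one representative value from each.

```python
# Hypothetical rule under test: ages 18 to 60 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 60

# One representative value per equivalence class.
partitions = {
    "below valid range": (10, False),
    "inside valid range": (35, True),
    "above valid range": (70, False),
}

for name, (value, expected) in partitions.items():
    assert is_eligible(value) == expected, f"failed for partition: {name}"
```

Three test cases stand in for the whole input space, on the assumption that every value in a class behaves the same way.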

25. Boundary value analysis testing: Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The Behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects. Every partition has its maximum and minimum values and these maximum and minimum values are the boundary values of a partition. A boundary value for a valid partition is a valid boundary value. Similarly, a boundary value for an invalid partition is an invalid boundary value.

Read more on Boundary Value Analysis Testing Technique…
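
Continuing the hypothetical 18–60 eligibility example, a boundary value analysis sketch picks values on and just around each partition edge:

```python
# Same hypothetical rule: ages 18 to 60 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 60

boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary itself
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary itself
    (61, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_eligible(value) == expected, f"failed at boundary value {value}"
```

An off-by-one mistake such as writing `18 < age` instead of `18 <= age` would be caught immediately by the value 18.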

26. Decision tables testing: Decision Table is also known as a Cause-Effect Table. This test technique is appropriate for functionalities that have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs. To identify the test cases with a decision table, we consider conditions and actions: we take conditions as inputs and actions as outputs.

Read more on Decision Table Testing Technique…
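
A minimal sketch of a decision table for a hypothetical login check, with conditions as inputs and actions as outputs:

```python
# Hypothetical behavior under test.
def login(valid_user, valid_password):
    if valid_user and valid_password:
        return "home page"
    return "error message"

# Decision table: each row is (condition combination) -> expected action.
decision_table = [
    ((True,  True),  "home page"),
    ((True,  False), "error message"),
    ((False, True),  "error message"),
    ((False, False), "error message"),
]

for (user, pwd), action in decision_table:
    assert login(user, pwd) == action
```

Every combination of the two conditions becomes one test case, so no input pairing is missed.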

27. Cause-effect graph testing– Refer Decision Table Testing

28. State transition testing: Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what has happened in the earlier state.

Read more on State Transition Test Design Technique…
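
A small Python sketch of the idea, using a hypothetical ATM PIN check that locks the card after three wrong attempts; the same input (a wrong PIN) yields a different output depending on the current state.

```python
class PinCheck:
    """Hypothetical state machine: active -> locked after 3 wrong attempts."""
    def __init__(self):
        self.attempts = 0
        self.state = "active"

    def enter_pin(self, correct):
        if self.state == "locked":
            return "locked"
        if correct:
            self.attempts = 0
            return "granted"
        self.attempts += 1
        if self.attempts >= 3:
            self.state = "locked"
            return "locked"
        return "retry"

atm = PinCheck()
assert atm.enter_pin(False) == "retry"   # 1st wrong attempt
assert atm.enter_pin(False) == "retry"   # 2nd wrong attempt: same input, same state
assert atm.enter_pin(False) == "locked"  # 3rd wrong attempt triggers a transition
assert atm.enter_pin(True) == "locked"   # even a correct PIN is now rejected
```

The test cases walk the machine through each transition, including the one where identical input produces a different result.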

29. Exhaustive Testing: Testing all the functionalities using all valid and invalid inputs and preconditions is known as Exhaustive testing.

30. Early Testing: Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing reduces the cost of fixing defects.

31. Use case testing: Use case testing is carried out with the help of a use case document. It is done to identify test scenarios for end-to-end testing.

32. Scenario testing: Scenario testing is a software testing technique based on scenarios. It involves converting business requirements into test scenarios for better understanding and to achieve end-to-end testing. A well-designed scenario should be motivating, credible, complex, and easy to evaluate.

33. Documentation testing: Documentation testing is done to validate the documented artifacts such as requirements, test plan, traceability matrix, test cases.

34. Statement coverage testing: Statement coverage testing is a white box testing technique that validates whether each and every statement in the code is executed at least once.

35. Decision coverage testing/branch coverage testing: Decision coverage or branch coverage testing is a white box testing technique that validates whether every possible branch is executed at least once.

36. Path testing: Path coverage testing is a white box testing technique that validates that all the paths of the program are executed at least once.
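
The difference between statement and branch coverage can be sketched with a hypothetical grading function: a single test executes every statement, yet branch coverage still requires a second test for the untaken branch.

```python
def grade(score):
    result = "fail"
    if score >= 50:
        result = "pass"
    return result

# One passing-score test executes every statement above
# (100% statement coverage)...
assert grade(80) == "pass"

# ...but branch coverage also demands the case where the "if"
# condition is false, which the first test never exercised.
assert grade(30) == "fail"
```

This is why branch coverage is a strictly stronger criterion than statement coverage.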

37. Mutation testing: Mutation testing is a type of white box testing in which certain statements in the source code are changed (mutated) to verify whether the tests are able to detect the errors.

38. Loop testing: Loop testing is a white box testing technique which is to validate the different kind of loops such as simple loops, nested loops, concatenated loops and unstructured loops.

39. Performance testing: This type of testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

40. Load testing: It is to verify that the system/application can handle the expected number of transactions and to verify the system/application behavior under both normal and peak load conditions.

41. Stress testing: It is to verify the behavior of the system once the load increases more than its design expectations.

42. Soak testing: Running a system at high load for a prolonged period of time to identify the performance problems is called Soak Testing.

43. Endurance testing: Refer Soak testing

44. Stability testing: Refer Soak testing

45. Scalability Testing: Scalability testing is a type of non-functional testing. It is to determine how the application under test scales with increasing workload.

46. Volume testing: It is to verify that the system/application can handle a large amount of data

47. Robustness testing: Robustness testing is a type of testing that is performed to validate the robustness of the application.

48. Vulnerability testing: Vulnerability testing is the process of identifying the vulnerabilities or weakness in the application.

49. Adhoc testing: Ad-hoc testing is quite the opposite of formal testing. It is an informal testing type. In ad-hoc testing, testers randomly test the application without following any documents or test design techniques. This testing is primarily performed when the testers' knowledge of the application under test is very high, and it is done without any test cases or business requirement document.

50. Exploratory testing: Usually, this process will be carried out by domain experts. They perform testing just by exploring the functionalities of the application without having the knowledge of the requirements.

51. Retesting: It is to ensure that the defects which were found and reported in an earlier build are fixed in the current build. Say Build 1.0 was released, and the test team found and reported some defects (Defect IDs 1.0.1 and 1.0.2). When Build 1.1 is released, testing defects 1.0.1 and 1.0.2 in this build is retesting.

52. Regression testing: Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software components.

53. Smoke testing: Smoke Testing is done to make sure the build we received from the development team is testable. It is also called a "Day 0" check and is done at the "build level". It helps us avoid wasting testing time on the whole application when the key features don't work or the key bugs have not been fixed yet.

54. Sanity testing: Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper. It is also called a subset of regression testing and is done at the "release level". At times, due to release time constraints, rigorous regression testing can't be done on the build; sanity testing covers that part by checking the main functionalities.

55. Dynamic testing: Dynamic testing involves the execution of code. It validates the output against the expected outcome.

56. Static testing: Static Testing involves reviewing the documents to identify defects in the early stages of the SDLC.

57. Monkey testing: Performing abnormal actions on the application deliberately in order to verify the stability of the application.

58. Gorilla testing: Gorilla testing is done by testers, sometimes developers also join hands with testers. It involves testing a system repeatedly to test the robustness of the system.

59. Usability testing: It is to verify whether the application is user-friendly and can be used comfortably by an end user. The main focus of this testing is to check whether the end user can understand and operate the application easily. An application should be self-explanatory and must not require training to operate.

60. Accessibility testing: Accessibility testing is a subset of usability testing. It aims to discover how easily people with disabilities (such as visual Impairments, Physical Impairment, Hearing Impairment, Cognitive Impairment, Learning Impairment) can use a system.

61. Compatibility testing: It is to deploy and check whether the application is working as expected in a different combination of environmental components.

62. Configuration testing: Configuration testing is the process of testing an application with each one of the supported hardware and software configurations to find out whether the application can work without any issues.

63. Localization testing: Localization is the process of adapting globalized software for a specific region or language by adding locale-specific components.

64. Globalization testing: Globalization is the process of designing a software application so that it can be adapted to various languages and regions without code changes.

65. Internationalization testing– Refer Globalization testing

66. Positive Testing: It is to determine what the system is supposed to do. It helps to check whether the application meets the requirements.

67. Negative testing: It is to determine what the system is not supposed to do. It helps to find defects in the software.

68. Security testing: Security testing is a process to determine whether the system protects data and maintains functionality as intended.

Security Testing Complete Guide

69. Penetration testing: Penetration testing is also known as pen testing. It is a type of security testing. It is performed to evaluate the security of the system.

Penetration Testing Complete Guide

70. Database testing: Database testing is done to validate that the data shown in the UI matches the data stored in the database. It involves checking the schema, tables, triggers, etc., of the database.

71. Bucket Testing: Bucket testing is a method to compare two versions of an application against each other to determine which one performs better.

72. A/B testing: Refer Bucket Testing…

73. Split testing– Refer bucket testing…

74. Reliability Testing: Perform testing on the application continuously for a long period of time in order to verify the stability of the application

75. Interface Testing: Interface testing is performed to evaluate whether two intended modules pass data and communicate correctly to one another.

76. Concurrency testing: Concurrency testing means accessing the application at the same time by multiple users to ensure the stability of the system. This is mainly used to identify deadlock issues.

77. Fuzz testing: Fuzz testing is used to identify coding errors and security loopholes in an application. It works by feeding a massive amount of random data into the system in an attempt to make it crash, identifying whether anything breaks in the application.
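
A crude fuzz loop sketch in Python: random byte strings are fed to a hypothetical header parser, and any unhandled exception is recorded as a finding rather than being allowed to crash the run.

```python
import random

# Hypothetical parser under test: expects input shaped like b"KEY:VALUE".
def parse_header(data):
    key, value = data.split(b":", 1)
    return key.decode("ascii"), value.decode("ascii")

random.seed(42)  # make the fuzz run reproducible
crashes = []
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 20)))
    try:
        parse_header(blob)
    except Exception as exc:  # any unhandled error is a potential bug
        crashes.append((blob, type(exc).__name__))

print(f"{len(crashes)} crashing inputs out of 1000")
```

Production fuzzers (AFL, libFuzzer, and similar) add coverage feedback and input mutation, but the crash-collecting loop is the core idea.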

78. GUI Testing: Graphical User Interface Testing is to test the interface between the application and the end user. Mainly, testers check that the appearance of elements such as fonts and colors conforms to the design specifications.

79. API testing: API stands for Application Programming Interface. API testing is a type of software testing that involves testing APIs using some tools like SOAPUI, PostMan.

80. Agile testing: Agile testing is a type of testing that follows the principles of the agile software development methodology. In agile testing, testing is conducted throughout the lifecycle of the continuously evolving project instead of being confined to a particular phase.

81. End to end testing– Refer system testing…

82. Recovery testing: Recovery testing is performed in order to determine how quickly the system can recover after the system crash or hardware failure. It comes under the type of non-functional testing.

83. Risk-based testing: Identifying the modules or functionalities that are most likely to cause failures and then testing those functionalities first.

84. Installation testing: It is to check whether the application is successfully installed and it is working as expected after installation.

85. Formal Testing: It is a process where the testers test the application by having pre-planned procedures and proper documentation.

86. Pilot testing: Pilot testing is carried out under real-time operating conditions by the company in order to gain the confidence of the client.

87. Backend testing: Refer Database testing…

88. Cross-browser testing: Cross Browser Testing is a type of non-functional test which helps us to ensure that our website or web application works as expected in various web browsers.

Read more on Cross Browser Testing…

89. Browser compatibility testing: Refer Cross-browser testing…

90. Forward compatibility testing: Forward compatibility testing validates that the application under test works as intended with later versions of the software it depends on.

91. Backward compatibility testing: Backward compatibility testing validates that the application under test works as intended with earlier versions of the software it depends on.

92. Downward compatibility testing: Refer Backward compatibility testing…

93. Compliance testing: Compliance testing is non-functional testing which is done to validate whether the software meets a defined set of standards.

94. Conformance testing: Refer compliance testing…

95. UI testing: In UI testing, testers aim to test both GUI and Command Line Interfaces (CLIs)

Also, refer GUI Testing…

96. Destructive testing: Destructive testing is a testing technique that aims to validate the robustness of the application by continuing to test until the application breaks.

97. Dependency testing: Dependency testing is a testing technique which examines the requirements of an application for pre-conditions, initial states, and configuration for the proper functioning of the application.

98. Crowdsourced testing: Crowdsourced testing is carried out by a community of expert quality assurance testers through an online platform.

99. ETL testing: ETL (Extract, Transform and Load) testing involves validating the data movement from source to destination, verifying the data count in both source and destination, verifying the data extraction and transformation, and also verifying the table relations.
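
A toy Python sketch of the row-count and transformation checks described above; the source rows and the age-conversion rule are hypothetical.

```python
# Hypothetical source extract: (name, age-as-string) rows.
source = [("alice", "25"), ("bob", "31"), ("carol", "19")]

# Transform + load: age strings become integers (the rule under test).
destination = [(name, int(age)) for name, age in source]

# ETL check 1: the row count matches between source and destination.
assert len(source) == len(destination)

# ETL check 2: each transformed value follows the transformation rule.
for (s_name, s_age), (d_name, d_age) in zip(source, destination):
    assert s_name == d_name
    assert d_age == int(s_age)
```

Real ETL testing runs the same kinds of checks with SQL against the actual source and warehouse tables.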

100. Data warehouse testing: Refer ETL testing…

101. Fault injection testing: Fault injection testing is a testing technique in which faults are intentionally introduced into the code in order to exercise error-handling paths and improve test coverage.

102. Failover testing: Failover testing is a testing technique that validates a system's ability to allocate extra resources during a server failure and transfer processing to back-up systems.

103. All pair testing: The all-pairs approach tests the application with test cases that cover every possible pair of input parameter values, rather than every full combination, which keeps the number of test cases manageable.

104. Pairwise Testing: Refer All pair testing

Here I conclude this list of different types of software testing. If you like this post, please share it with your friends.[/vc_column_text][vc_column_text]

Performance Testing And Types of Performance Testing

[/vc_column_text][vc_column_text]

What is Performance Testing?

Performance testing and types of performance testing such as Load Testing, Volume Testing, Stress Testing, Capacity Testing, Soak/Endurance Testing and Spike Testing come under Non-functional Testing

In the field of Software Testing, testers mainly concentrate on Black Box and White Box Testing. Under Black Box testing there are again different types of testing; the major categories are functional testing and non-functional testing. As I mentioned in the first paragraph of this article, performance testing and the testing types related to it fall under non-functional testing.

In the current market, the performance and responsiveness of applications play an important role. We conduct performance testing to address the bottlenecks of the system and to fine-tune the system by finding the root cause of performance issues. Performance testing answers questions like: How many users can the system handle? How well does the system recover when the number of users crosses the maximum? What is the response time of the system under normal and peak loads?

We use tools such as HP LoadRunner, Apache JMeter, etc., to measure the performance of any System or Application Under Test (AUT). Let’s see what are these terms in detail below.

Types of Performance Testing:

Performance Testing: 

Performance testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

Capacity Testing:

Capacity Testing is to determine how many users the system/application can handle successfully before the performance goals become unacceptable. This allows us to avoid the potential problems in future such as increased user base or increased volume of data.

Load Testing:

Load Testing is to verify that the system/application can handle the expected number of transactions and to verify the system/application behaviour under both normal and peak load conditions (no. of users).

Volume Testing:

Volume Testing is to verify that the system/application can handle a large amount of data. This testing focuses on the database.

Stress Testing:

Stress Testing is to verify the behaviour of the system once the load increases beyond the system's design expectations. This testing identifies which components fail first when we stress the system by applying load beyond the design expectations, so that we can design a more robust system.

Soak Testing:

Soak Testing is also known as Endurance Testing. Running a system at high load for a prolonged period of time to identify performance problems is called Soak Testing. It ensures the software can handle the expected load over a long period of time.

Spike Testing:

Spike Testing is to determine the behaviour of the system under a sudden increase of load (a large number of users) on the system.[/vc_column_text][vc_column_text]

Levels of Testing | Software Testing Material

[/vc_column_text][vc_column_text]Levels of Testing!! Before starting the post on Levels of Testing, let’s see what is Software Testing.

In software development, both developers and testers work together to release a high-quality product, and every product goes through various testing processes before release. There are different levels of software testing, and each level has a specific purpose. We will look at each software testing level in detail.

What is Software Testing?

Software testing is a process, to evaluate the functionality of a software application with an intent to find whether the developed software met the specified requirements or not and to identify the defects to ensure that the product is defect free in order to produce the quality product.

Learn more:

Software Testing – Definition, Types, Methods & Approach

Levels of Software Testing:

Let’s see what are the levels of software testing:

Different levels of software testing are as follows.

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing

UNIT TESTING:

Unit Testing is done to check whether the individual modules of the source code are working properly, i.e., testing each and every unit of the application separately, by the developer in the developer’s environment. It is also known as Module Testing or Component Testing.
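As a minimal illustration (the function name and its behaviour are invented for this example), a developer-level unit test using Python’s built-in unittest framework might look like this:

```python
import unittest

# Hypothetical unit under test: a small pricing function.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # 10% off 100.0 should give 90.0
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_invalid_percent_is_rejected(self):
        # Inputs outside 0-100 must raise an error
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

The developer runs such tests locally (e.g., with `python -m unittest`) before the module ever reaches integration testing.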

INTEGRATION TESTING:

Integration Testing is the process of testing the connectivity or data transfer between unit-tested modules. It is also known as I&T Testing or String Testing.

It is subdivided into the Top-Down Approach, Bottom-Up Approach, and Sandwich Approach (a combination of Top-Down and Bottom-Up). This process is carried out using dummy programs called Stubs and Drivers. Stubs and Drivers do not implement the entire programming logic of the software module; they just simulate data communication with the calling module.

Big Bang Integration Testing:

In Big Bang Integration Testing, the individual modules are not integrated until all the modules are ready. Then they are run together to check whether the integrated system performs well. This approach has disadvantages: defects are found only at a later stage, and it is difficult to determine whether a defect arose in an interface or inside a module.

Top-Down Integration Testing

In Top-Down Integration Testing, the high-level modules are integrated and tested first, i.e., testing from the main module to the submodules. In this type of testing, Stubs are used as temporary modules when a submodule is not ready for integration testing.

Bottom-Up Integration Testing

In Bottom-Up Integration Testing, the low-level modules are integrated and tested first, i.e., testing from the submodules to the main module. Just like Stubs, Drivers are used here as temporary modules for integration testing.

Stub:

It is called by the Module under Test.

Driver:

It calls the Module to be tested.
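A small sketch can make the stub/driver distinction concrete. All class and method names below are hypothetical, invented purely for illustration:

```python
class PaymentServiceStub:
    """Stub: called BY the module under test; returns a canned response
    in place of a payment service that is not ready yet."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderModule:
    """The module under test; it depends on a payment service."""
    def __init__(self, payment_service):
        self.payment_service = payment_service

    def place_order(self, amount):
        result = self.payment_service.charge(amount)
        return result["status"] == "approved"

def driver():
    """Driver: CALLS the module under test with test inputs and
    checks the outcome."""
    order = OrderModule(PaymentServiceStub())
    assert order.place_order(49.99) is True
    return "integration check passed"
```

The stub stands in for a lower-level module that is called, while the driver stands in for a higher-level module that does the calling.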

Learn more on Integration Testing here

SYSTEM TESTING (END TO END TESTING):

System Testing is a black box testing type performed on the fully integrated application; it is also called end-to-end scenario testing. It ensures that the software works on all intended target systems, verifies every input in the application to check for the desired outputs, and tests the users’ experience with the application.

ACCEPTANCE TESTING:

It is done to obtain customer sign-off so that the software can be delivered and payments received.

Types of Acceptance Testing are Alpha, Beta & Gamma Testing.

Alpha Testing:

Alpha testing is mostly like performing usability testing, and it is done by the in-house developers who developed the software. Sometimes alpha testing is done by the client or outsiders in the presence of developers or testers.

Beta Testing:

Beta testing is done by a limited number of end users before delivery; change requests are fixed based on user feedback and reported defects.

Gamma Testing:

Gamma testing is done when the software is ready for release with the specified requirements; this testing is done directly, skipping all the in-house testing activities.[/vc_column_text][vc_column_text]

Automation Testing Vs Manual Testing | SoftwareTestingMaterial

[/vc_column_text][vc_column_text]

Automation Testing Vs Manual Testing

In this article, we are going to see Automation Testing vs Manual Testing.

We know that every project has three important aspects such as Quality, Cost & Time. The objective of any project is to get a high-quality output while controlling the cost and the time required for completing the project.

What is Software Testing?

But first, let’s clarify the term ‘Software Testing’.

Software testing is a process of evaluating the functionality of a software application, with the intent to check whether the developed software meets the specified requirements and to identify defects, so that a defect-free, quality product can be delivered.

Check this ANSI/IEEE 1059 Standard Definition of Software Testing.

Software Testing is an integral part of any project.

Software testing is categorized into two areas, namely Manual Testing & Automation Testing. Both manual and automation testing have their own advantages and disadvantages, so it’s worth knowing the difference between them and when to use manual testing versus automation testing.

What is Manual Testing?

Manual testing is the process of testing the software manually to find defects. The tester should take the perspective of an end user and ensure all the features work as mentioned in the requirement document. In this process, testers execute the test cases and generate the reports manually without using any automation tools.

Types of Manual Testing:

  1. Black Box Testing
  2. White Box Testing
  3. Unit Testing
  4. System Testing
  5. Integration Testing
  6. Acceptance Testing

Black Box Testing: Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. This can be applied to every level of software testing such as Unit, Integration, System and Acceptance Testing.

White Box Testing: White Box Testing is also called Glass Box, Clear Box, or Structural Testing. It is based on the application’s internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. This testing is usually done at the unit level.

Unit Testing: Unit Testing is also called Module Testing or Component Testing. It is done to check whether the individual units or modules of the source code work properly. It is done by the developers in the developer’s environment.

System Testing: Testing the fully integrated application to evaluate the system’s compliance with its specified requirements is called System Testing, also known as End-to-End Testing. It verifies the completed system to ensure that the application works as intended.

Integration Testing: Integration Testing is the process of testing the interface between two software units. Integration testing is done in three ways: the Big Bang Approach, Top-Down Approach, and Bottom-Up Approach.

Acceptance Testing: It is also known as pre-production testing. This is formal testing done by the end users along with the testers to validate the functionality of the application and to determine whether it is developed as per the requirements. It allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta & Gamma.

There are many types of software testing, but here we deal mainly with Manual and Automation Testing. Here you can read the complete list of software testing types.

When to use Manual Testing?

Exploratory Testing: Exploratory testing is carried out by domain experts. They perform testing just by exploring the functionalities of the application, without prior knowledge of the requirements.

Usability Testing: It verifies whether the application is user-friendly and can be used comfortably by an end user. The main focus of this testing is to check whether the end user can understand and operate the application easily. An application should be self-explanatory and must not require training to operate.

Ad-hoc Testing: Ad-hoc testing is the opposite of formal testing; it is an informal testing type. In ad-hoc testing, testers randomly test the application without following any documents or test design techniques. This testing is primarily performed when the testers’ knowledge of the application under test is very high.

When do you prefer Manual Testing over Automation Testing?

We prefer Manual Testing over Automation Testing in the following scenarios

  1. When the project is in the initial development stage.
  2. When testing the user interface, especially its visual aspects.
  3. When exploratory or ad-hoc testing needs to be performed.
  4. When the project is short-term and writing scripts would be time-consuming compared to manual testing.
  5. When the test case is not automatable, e.g., CAPTCHA.

Manual Testing Pros and Cons

Advantages of Manual Testing:

  • Manual testing can be done on all kinds of applications
  • It is preferable for short life cycle products
  • Newly designed test cases should be executed manually first
  • An application must be tested manually before it is automated
  • It is preferred for projects where the requirements change frequently and for products where the GUI changes constantly
  • It is cheaper in terms of initial investment compared to automation testing
  • It requires less time and expense to begin productive manual testing
  • It allows the tester to perform ad-hoc testing
  • The tester does not need knowledge of automation tools

Disadvantages of Manual Testing:

  • Manual testing is time-consuming, mainly while doing regression testing.
  • Manual testing is less reliable than automation testing because it is conducted by humans, so it is always prone to errors and mistakes.
  • It is more expensive than automation testing in the long run.
  • Tests are not reusable because the manual process can’t be recorded.

What is Automation Testing?

Automation testing is the process of testing the software using automation tools to find defects. In this process, test scripts are executed and results are generated automatically by the tools. Some popular automation testing tools are HP QTP/UFT, Selenium WebDriver, etc.

Some of the popular automation testing tools

  1. HP QTP(Quick Test Professional)/UFT(Unified Functional Testing)
  2. Selenium
  3. LoadRunner
  4. IBM Rational Functional Tester
  5. SilkTest
  6. TestComplete
  7. WinRunner
  8. WATIR

When to use Automation Testing?

We do Automation testing in the following areas:

Regression Testing: Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of changes in the software being tested or in other related or unrelated software components. Regression testing is best suited for automation because of frequent code changes; executing the tests manually in a timely manner is beyond human capacity.

Read more about regression testing here

Load Testing: It verifies that the system/application can handle the expected number of transactions and checks the system/application behavior under both normal and peak load conditions. Automation is the best way to complete load testing efficiently, so it is best suited for automation testing.

Read more about load testing here

Performance Testing – This type of testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product. It is best suited for automation testing.

Read more about performance testing here

The tests which can be done through either an automated or a manual approach:

Integration Testing – Integration Testing is the process of testing the interface between the two software units. Integration testing is done by multiple approaches such as Big Bang Approach, Top-Down Approach, Bottom-Up Approach, and Hybrid Integration approach.

Integration Testing Complete Guide

System Testing – Testing the fully integrated application to evaluate the system’s compliance with its specified requirements is called System Testing, also known as End-to-End Testing. It verifies the completed system to ensure that the application works as intended.

Unit Testing: Unit Testing is also called Module Testing or Component Testing. It is done to check whether the individual unit or module of the source code is working properly. It is done by the developers in the developer’s environment.

Acceptance Testing: It is also known as pre-production testing. This is formal testing done by the end users along with the testers to validate the functionality of the application and to determine whether it is developed as per the requirements. It allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta & Gamma.

In interviews, you may be asked to answer the following question

Which tests cannot be automated?

Let’s see which tests cannot be automated. Tests which take too much effort to automate are:

  1. Exploratory Testing
  2. User interface testing
  3. Adhoc Testing

When do you prefer Automation Testing over Manual Testing?

We prefer Automation Testing over Manual Testing in the following scenarios

  1. To handle repetitive and time-consuming tasks
  2. To do parallel testing
  3. To do non-functional testing like load, performance, and stress testing
  4. To avoid human errors

Automated Testing Pros and Cons

Advantages of automated testing:

  • Automation testing is faster in execution
  • It is cheaper than manual testing in the long run
  • Automated testing is more reliable
  • Automated testing is more powerful and versatile
  • It is mostly used for regression testing
  • It is reusable because the automation process can be recorded
  • It does not require human intervention; test scripts can be run unattended
  • It helps to increase the test coverage

Disadvantages of Automated Testing:

  • It is recommended only for stable products
  • Automation testing is expensive initially
  • Most of the automation tools are expensive
  • It has some limitations, such as handling CAPTCHA and verifying visual aspects of the UI such as fonts, colors, and sizes
  • It requires huge maintenance in case of repeated changes in the requirements
  • Not all tools support all kinds of testing, such as Windows, web, mobile, and performance/load testing

Difference between Manual Testing & Automation Testing (Automation Testing Vs Manual Testing)?

Let’s see the difference between Manual Testing and Automation Testing.

Automation Testing Vs. Manual Testing:

Automation Testing: Automated testing is more reliable. It performs the same operation each time and eliminates the risk of human error.
Manual Testing: Manual testing is less reliable. Due to human error, manual testing is not accurate all the time.

Automation Testing: The initial investment is higher, since investment is required for testing tools; in the long run it is less expensive than manual testing, and the ROI is higher.
Manual Testing: The initial investment is lower than automation; investment is required for human resources. The ROI is lower in the long run compared to automation testing.

Automation Testing: A practical option when we do regression testing.
Manual Testing: A practical option when the test cases are not run repeatedly and only need to run once or twice.

Automation Testing: Execution is done through software tools, so it is faster than manual testing and needs fewer human resources.
Manual Testing: Execution of test cases is time-consuming and needs more human resources.

Automation Testing: Exploratory testing is not possible.
Manual Testing: Exploratory testing is possible.

Automation Testing: Performance testing like load testing, stress testing, etc. is a practical option.
Manual Testing: Performance testing is not a practical option.

Automation Testing: Tests can be run in parallel, reducing test execution time.
Manual Testing: It is not easy to execute test cases in parallel; doing so needs more human resources and becomes more expensive.

Automation Testing: Programming knowledge is a must.
Manual Testing: Programming knowledge is not required.

Automation Testing: Build verification testing (BVT) is highly recommended.
Manual Testing: Build verification testing (BVT) is not recommended.

Automation Testing: Human intervention is minimal, so it is not effective for user interface testing.
Manual Testing: It involves human intervention, so it is highly effective for user interface testing.

Conclusion:

Here I conclude this Manual Testing vs Automation Testing post. The real value of manual and automation testing comes when the right type of testing is applied in the right environment. Hopefully you now understand the difference between manual testing and automation testing, as well as the advantages and disadvantages of both. If you find any points we overlooked, put them in the comments and we will update this “Manual Testing Vs Automation Testing” post.[/vc_column_text][vc_column_text]

Black Box And White Box Testing | Definition And Types

[/vc_column_text][vc_column_text]

Black Box and White Box Testing and their types:

BLACK BOX TESTING:

It is also called Behavioral, Specification-Based, or Input-Output Testing.

Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. This can be applied to every level of software testing such as Unit, Integration, System and Acceptance Testing.

Testers create test scenarios/cases based on software requirements and specifications, so it is also known as Specification-Based Testing.

The tester performs testing only on the functional part of an application to make sure the behavior of the software is as expected, so it is also known as Behavioral Testing.

The tester passes input data and checks whether the actual output matches the expected output, so it is also known as Input-Output Testing.

Black Box Testing Techniques:

  1. Equivalence Partitioning
  2. Boundary Value Analysis
  3. Decision Table
  4. State Transition

Equivalence Partitioning: Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. We then select one input from each group to design the test cases. Click here to see a detailed post on equivalence partitioning.
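As a sketch, consider a hypothetical age field that accepts values from 18 to 60 (the field and its range are invented for this example). The input space splits into three partitions, and one representative value is picked from each:

```python
# Hypothetical validation rule: an age field accepting 18-60.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitions with one representative value from each.
partitions = {
    "invalid_low": 10,    # any value below 18
    "valid": 35,          # any value in 18..60
    "invalid_high": 70,   # any value above 60
}

expected = {"invalid_low": False, "valid": True, "invalid_high": False}

# Three test cases cover the whole input space, one per partition.
for name, value in partitions.items():
    assert is_valid_age(value) == expected[name]
```

Any other value from the same partition (say 25 instead of 35) is expected to behave the same way, which is why one representative per group suffices.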

Boundary Value Analysis: Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects. Click here to see a detailed post on boundary value analysis.
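Continuing the same hypothetical age field (valid range 18-60), boundary value analysis tests just below, on, and just above each edge of the valid partition:

```python
# Hypothetical validation rule: an age field accepting 18-60.
def is_valid_age(age):
    return 18 <= age <= 60

# Boundary values for the valid partition 18..60.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # on the lower boundary
    19: True,   # just above the lower boundary
    59: True,   # just below the upper boundary
    60: True,   # on the upper boundary
    61: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected
```

A common defect this catches is an off-by-one comparison, e.g., writing `18 < age` instead of `18 <= age`, which only the boundary value 18 would expose.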

Decision Table: Decision Table is also known as Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs. To identify the test cases with a decision table, we consider conditions and actions: conditions are taken as inputs and actions as outputs. Click here to see a detailed post on decision tables.
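A hypothetical login form illustrates the idea: two conditions (valid username, valid password) produce one action each, and every combination of condition values is one rule of the decision table:

```python
from itertools import product

# Hypothetical behaviour under test: which action a login form takes.
def login_action(valid_username, valid_password):
    if valid_username and valid_password:
        return "show home page"
    return "show error message"

# Each combination of condition values is one rule of the decision
# table, so a 2-condition table yields 2^2 = 4 test cases.
rules = []
for valid_user, valid_pw in product([True, False], repeat=2):
    rules.append(((valid_user, valid_pw), login_action(valid_user, valid_pw)))

for (valid_user, valid_pw), action in rules:
    expected = "show home page" if (valid_user and valid_pw) else "show error message"
    assert action == expected
```

Enumerating the rules this way guarantees no combination of conditions is forgotten, which is the main strength of the technique.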

State Transition: Using state transition testing, we pick test cases from an application where we need to test different system transitions. We apply this when an application gives a different output for the same input, depending on what happened in an earlier state. Click here to see a detailed post on the state transition technique.
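A classic illustration is an ATM-style PIN check that locks the account after three wrong attempts (the class and its rules are hypothetical, invented for this sketch). Note how the same input, the correct PIN, gives a different output depending on the current state:

```python
class AtmSession:
    """Hypothetical state machine: locks after 3 wrong PIN entries."""
    def __init__(self, correct_pin="1234"):
        self.correct_pin = correct_pin
        self.failed_attempts = 0
        self.state = "awaiting_pin"

    def enter_pin(self, pin):
        if self.state == "locked":
            return self.state  # locked is a terminal state here
        if pin == self.correct_pin:
            self.state = "authenticated"
        else:
            self.failed_attempts += 1
            self.state = "locked" if self.failed_attempts >= 3 else "awaiting_pin"
        return self.state
```

Test cases are chosen to walk the transitions: three wrong PINs drive the session into "locked", after which even the correct PIN no longer authenticates.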

Types of Black Box Testing:

Functional Testing: In simple words, what the system actually does is functional testing.
Non-functional Testing: In simple words, how well the system performs is non-functional testing.

WHITE BOX TESTING:

It is also called Glass Box, Clear Box, or Structural Testing.

White Box Testing is based on the application’s internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. This testing is usually done at the unit level.

White Box Testing Techniques:

  1. Statement Coverage
  2. Branch Coverage
  3. Path Coverage
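A small sketch shows how statement and branch coverage differ (the function is hypothetical, chosen only because it has a single branch):

```python
# Hypothetical function used to illustrate the coverage criteria.
def absolute(n):
    result = n
    if n < 0:
        result = -n
    return result

# Statement coverage: absolute(-5) alone executes every statement
# (the assignment, the if-body, and the return) -> 100% statement coverage.
assert absolute(-5) == 5

# Branch coverage: the call above never takes the False outcome of the
# if-condition, so a second test with a non-negative input is needed
# to reach 100% branch coverage.
assert absolute(5) == 5
```

Path coverage generalizes this further: every distinct route through the code must be exercised, which grows quickly as branches multiply.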

[/vc_column_text][vc_column_text]

What is Smoke Testing And Sanity Testing? Smoke Testing Vs Sanity Testing with Examples

[/vc_column_text][vc_column_text]In this article, we see what Smoke Testing and Sanity Testing are, and also the difference between Smoke and Sanity Testing. Both smoke tests and sanity tests have their own objectives and priorities. These two types of testing play a key role in the success of a project.

Smoke and Sanity Testing come into the picture after a build release. There is confusion among novice testers when it comes to the difference between smoke and sanity testing. In this article, let’s see what Smoke and Sanity Testing are and the difference between them in detail, with practical examples for easy understanding. Hopefully, by the end of this article, you will have a clear idea of Sanity and Smoke Testing.

What is Smoke Testing in Software Testing?

Smoke Testing is done to make sure the build we received from the development team is testable. It is also called a “Day 0” check. It is done at the “build level”.

It helps avoid wasting testing time on the whole application when the key features don’t work or key bugs have not been fixed yet. Here our focus is on the primary, core application workflow.

How to Conduct Smoke Testing?

To conduct smoke testing, we don’t write new test cases. We just pick the necessary test cases from the ones already written.

Do we really write test cases for all testing types? Here in this article, we have given clear idea on choosing testing types to write test cases.

As mentioned earlier, in Smoke Testing our main focus is on the core application workflow. So we pick the test cases from our test suite which cover the major functionality of the application. In general, we pick a minimal number of test cases that won’t take more than half an hour to execute.
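As a sketch of how such a smoke subset can be picked out of a larger suite automatically (all the test-case names below are hypothetical placeholders), Python’s unittest loader can select cases by a naming convention:

```python
import unittest

class ShopTests(unittest.TestCase):
    # Core-flow cases carry a "test_smoke_" prefix.
    def test_smoke_homepage_opens(self):
        self.assertTrue(True)  # placeholder for the real check

    def test_smoke_add_to_cart(self):
        self.assertTrue(True)  # placeholder for the real check

    # Deeper cases keep the plain "test_" prefix and are skipped
    # during the smoke run.
    def test_full_refund_workflow(self):
        self.assertTrue(True)

def smoke_suite():
    loader = unittest.TestLoader()
    loader.testMethodPrefix = "test_smoke"  # load only smoke cases
    return loader.loadTestsFromTestCase(ShopTests)

result = unittest.TextTestRunner(verbosity=0).run(smoke_suite())
```

Only the two smoke-prefixed cases run, keeping the build check fast while the full suite stays available for later functional testing.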

Real-Time Example: Assume you are working on an eCommerce site. When a new build is released for testing, as a software QA you have to make sure the core functionalities are working. So you access the eCommerce site and add an item to your cart to place an order; that’s the major flow on most eCommerce sites. If this flow works, you can say this build has passed, and you can move on to functional testing of the same build.

What is Sanity Testing in Software Testing?

Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper. It is also called a subset of Regression Testing. It is done at the “release level”.

At times, due to release time constraints, rigorous regression testing can’t be done on the build; sanity testing covers that part by checking the main functionalities.

Most of the time, we don’t get enough time to complete the whole testing cycle. Especially in Agile methodology, we get pressure from the Product Owners to complete testing within a few hours or by the end of the day. In these scenarios, we choose Sanity Testing; it plays a key role in this kind of situation.

Earlier I have posted a detailed post on “Difference between Regression and Retesting”. If you haven’t gone through it, you can browse by clicking on the link.

How to Conduct Sanity Testing?

As with Smoke Testing, we don’t write separate test cases for Sanity Testing. We just pick the necessary test cases from the ones already written.

As mentioned earlier, it is a subset of regression testing. When it comes to Sanity testing, the main focus is to make sure whether the planned functionality is working as expected.

Real-Time Example: Let’s take the same example as above. Assume you are working on an eCommerce site. A new feature related to the Search functionality is released. Here your main focus should be on the Search functionality. Once you make sure that Search is working fine, you move on to other major functionality such as the payment flow.

Smoke Testing Vs Sanity Testing

Example to showcase the difference between Smoke and Sanity Testing:

For example, for the first release of a project, the development team releases the build for testing and the test team tests it. Testing the build for the very first time, to accept or reject it, is what we call Smoke Testing. If the test team accepts the build, it goes for further testing. Imagine the build has three modules, namely Login, Admin, and Employee. The test team tests the main functionalities of the application without going deeper; this is what we call Sanity Testing.

Some more differences between Smoke Testing and Sanity Testing:

Smoke Testing: Done to make sure the build we received from the development team is testable.
Sanity Testing: Done during the release phase to check the main functionalities of the application without going deeper.

Smoke Testing: Performed by both developers and testers.
Sanity Testing: Performed by testers alone.

Smoke Testing: Exercises the entire application from end to end.
Sanity Testing: Exercises only a particular component of the entire application.

Smoke Testing: The build may be either stable or unstable.
Sanity Testing: The build is relatively stable.

Do we automate smoke tests?

I have received many queries from my Facebook and Twitter followers on this.

Yes, we do automate smoke test cases; it saves a lot of testing time. Assume you have 50-100 smoke test cases. Executing them manually may take around 4-6 hours. If you have automation scripts for these test cases, you can execute them as soon as the build is released and confirm whether the build has passed in much less time than manual execution would take. So most teams automate their smoke test cases.[/vc_column_text][vc_column_text]

Functional Testing | Software Testing Material

[/vc_column_text][vc_column_text]In this article, we are going to see Functional Testing in detail.

We know that every project has three important aspects such as Quality, Cost & Time. The objective of any project is to get a high-quality output while controlling the cost and the time required for completing the project.

After you finished reading this blog post, you will learn the following.

  • What is Functional Testing
  • Functional Testing Types
  • Functional Testing vs Non functional Testing
  • Functional Testing Tools

What is Functional Testing?

In simple words, what the system actually does is functional testing. It verifies that each function of the software application behaves as specified in the requirement document: we test all the functionalities by providing appropriate input and verifying whether the actual output matches the expected output. It falls within the scope of black box testing, and the testers need not concern themselves with the source code of the application.

Functional Testing Types

There are several types of testing that are performed to ensure the quality of a software. Some of the most important types of functional testing are as follows:

Unit Testing

Unit Testing is also called Module Testing or Component Testing. It is done to check whether the individual unit or module of the source code is working properly. It is done by the developers in the developer’s environment.

Integration Testing

Integration Testing is the process of testing the interface between two software units. Integration testing is done by multiple approaches such as the Big Bang Approach, Top-Down Approach, Bottom-Up Approach, and Hybrid Integration Approach.

System Testing

Testing the fully integrated application to evaluate the system’s compliance with its specified requirements is called System Testing, also known as End-to-End Testing. It verifies the completed system to ensure that the application works as intended.

Regression Testing

Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software components.

User Acceptance Testing

It is also known as pre-production testing. This is formal testing done by the end users along with the testers to validate the functionality of the application and to determine whether it is developed as per the requirements. It allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta & Gamma.

Black Box Testing

Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. This can be applied to every level of software testing such as Unit, Integration, System and Acceptance Testing.

Functional Testing Techniques

Functional testing can be done in the following methods.

Requirements Based Testing

Requirements-Based Testing covers all the functional specifications, which form the basis for all the test cases written.

Business Scenarios Based Testing

Business-Scenarios-Based Testing contains information about how the system will be perceived from the perspective of business processes.

Functional Testing vs Non-Functional Testing

Earlier we have discussed the difference between functional testing and non functional testing. Check this post for detailed explanation.

Functional Testing Tools

Selenium

Selenium is possibly the most popular open-source test automation framework for web applications. Originating in the 2000s and evolving over a decade, Selenium has been the automation framework of choice for web automation testers, especially those with advanced programming and scripting skills. Selenium has become a core framework for other open-source test automation tools such as Katalon Studio, Watir, Protractor, and Robot Framework.

Selenium supports multiple operating systems (Windows, Mac, Linux) and browsers (Chrome, Firefox, IE, and headless browsers). Its scripts can be written in various programming languages such as Java, Groovy, Python, C#, PHP, Ruby, and Perl. While testers have flexibility with Selenium and can write complex and advanced test scripts to meet various levels of complexity, it requires advanced programming skills and effort to build automation frameworks and libraries for specific testing needs.

UFT/QTP

A very user-friendly functional testing tool by HP.

Watir

Watir is an open-source testing tool for web automation testing based on Ruby libraries. Watir supports cross-browser testing including Firefox, Opera, headless browsers, and IE. It also supports data-driven testing and integrates with BDD tools like RSpec, Cucumber, and Test::Unit.

Here I have hand-picked a few posts which will help you to learn more interview related stuff:[/vc_column_text][vc_column_text]

Unit Testing Guide| Software Testing Material

[/vc_column_text][vc_column_text]Unit testing is the first level of testing in software testing where individual components of a software are tested. If you are new to software testing, be sure to read this Beginners’ Guide for Software Testing. Also don’t miss our detailed list of 100+ types of software testing.

Let’s explore what Unit Testing is in detail.

In this article, we are going to see the following:

  • What is Unit Testing
  • Levels of Software Testing
  • Difference between Unit Testing & Integration Testing
  • Goals of Unit Testing
  • Advantages of Unit Testing
  • Unit Testing Method
  • Types of Unit Testing
  • When Unit Testing is performed
  • Who performs Unit Testing
  • Tasks of Unit Testing
  • Unit Testing Tools

What is Unit Testing:

Unit Testing is also called Module Testing or Component Testing. It is done during the development of an application to check whether an individual unit or module of the application works properly. It is done by the developers in the developer’s environment.

Levels of Software Testing:

There are four levels of software testing. Each level of software testing verifies the software functionality for correctness, quality, & performance. The four levels of software testing are as follows:

  1. Unit Testing
  2. Integration Testing
  3. System Testing
  4. Acceptance Testing

Difference between Unit Testing & Integration Testing:

UNIT TESTING | INTEGRATION TESTING
Unit testing is the first level of testing in software testing | Integration testing is the second level of testing in software testing
Considers each component as a single system | Integrated components are seen as a single system
The purpose is to test the working of each individual unit | The purpose is to test the integration of multiple unit modules
It evaluates each component or unit of the software product | It examines the proper working, interface, and reliability of the modules after integration, along with the external interfaces and the system
The scope of unit testing is limited to the particular unit under test | The scope of integration testing is wider than that of unit testing; it covers two or more modules
It has no further types | It is divided into the following approaches: Bottom-up integration, Top-down integration, Big Bang, and Hybrid
It is also known as Component Testing | It is also known as I&T or String Testing
It is performed at the code level | It is performed at the communication level
It is carried out with the help of reusable test cases | It is carried out with the help of stubs and drivers
It comes under White Box Testing | It comes under both Black Box and White Box Testing
It is performed by developers | It is performed by either testers or developers

Goals of Unit Testing:

The goals of Unit Testing are:

  • To isolate every section of code.
  • To make sure individual parts are correct.
  • To find bugs early in the development cycle.
  • To save testing cost.
  • To allow developers to refactor or upgrade code at a later date.

Advantages of Unit Testing:

  • It finds problems early in the development cycle, which reduces the cost of testing. The cost of finding a bug earlier is considerably lower than the cost of finding it later.
  • It reduces defects when changing existing functionality (Regression Testing).
  • It simplifies the debugging process. Debugging is the process of finding and resolving defects within a program that prevent the software from operating correctly. When unit testing is implemented, only the latest changes made in the code need to be debugged when a test fails.
  • It provides code documentation, due to better coding standards and practices.

Unit Testing Method:

It is performed by using White Box Testing method.

Types of Unit Testing:

There are two types of Unit Testing – Manual & Automated.

When Unit Testing is performed:

Unit Testing is the first level of Software Testing. It is performed prior to Integration Testing.

Who performs Unit Testing:

It is normally performed by Software Developers or White box testers.

What are the tasks of Unit Testing:

Unit Test Plan:

  • Prepare
  • Review
  • Rework
  • Baseline

Unit Test Cases/Scripts:

  • Prepare
  • Review
  • Rework
  • Baseline

Unit Test:

  • Perform

Unit Testing Tools:

There are several automated tools available to assist with unit testing. We will provide a few examples below:

Junit:

JUnit 5 is the next generation of JUnit. The goal is to create an up-to-date foundation for developer-side testing on the JVM. This includes focusing on Java 8 and above, as well as enabling many different styles of testing.

NUnit:

NUnit is a unit-testing framework for all .NET languages. Initially ported from JUnit, the current production release, version 3, has been completely rewritten with many new features and support for a wide range of .NET platforms.

JMockit: 

JMockit is an open-source unit testing tool. It is a Java toolkit for developer testing, including mocking APIs and a code coverage tool.

EMMA:

EMMA is an open-source toolkit for measuring and reporting Java code coverage. EMMA distinguishes itself from other tools by going after a unique feature combination: support for large-scale enterprise software development while keeping individual developer’s work fast and iterative. Every developer on your team can now get code coverage for free and they can get it fast!

PHPUnit:

PHPUnit is a programmer-oriented testing framework for PHP. It is an instance of the xUnit architecture for unit testing frameworks.[/vc_column_text][vc_column_text]

Integration Testing – Big Bang, Top Down, Bottom Up & Hybrid Integration

[/vc_column_text][vc_column_text]Every software application contains multiple modules that converse with each other through an interface. Integrating these individual software modules and testing them together is known as Software Integration Testing. It is an extension to Unit testing. In this tutorial, we are going to see the following.

  1. What is Integration Testing
  2. Objectives of integration testing
  3. How To Write Integration Test Cases
  4. What is Big Bang Approach
  5. What is Top-Down Approach
  6. What is Bottom-Up Approach
  7. What is the difference between Stubs and Drivers in Software Testing
  8. What is a Stub
  9. What is a Driver
  10. What is Hybrid Integration Testing
  11. What is the difference between Unit Testing and Integration Testing
  12. What is the difference between Integration Testing and System Testing
  13. Integration Testing Tools

What is Integration Testing?

Integration Testing is the process of testing the connectivity or data transfer between a couple of unit-tested modules. It is also known as I&T (Integration & Testing) or String Testing.

It is subdivided into the Big Bang Approach, Top-Down Approach, Bottom-Up Approach, and Sandwich or Hybrid Integration Approach (a combination of Top-Down and Bottom-Up). This process is carried out by using dummy programs called Stubs and Drivers. Stubs and Drivers do not implement the entire programming logic of the software module; they just simulate data communication with the calling module.

It is done after Unit Testing. Each and every module involved in integration testing should be unit tested prior to integration testing. Doing unit testing first gives confidence in performing software integration testing.

It is done as per the Test Plan. Following the test plan mitigates chaos and gives a clear path for performing integration testing effectively.

Objectives of Integration Testing:

Objectives of integration testing include:

  • To reduce risk
  • To verify whether the functional and non-functional behaviors of the interfaces are as designed and specified
  • To build confidence in the quality of the interfaces
  • To find defects (which may be in the interfaces themselves or within the components or systems)
  • To prevent defects from escaping to higher test levels

How To Write Integration Test Cases?

Assume there are three modules in an application, such as ‘Login Page’, ‘Inbox’, and ‘Delete Mails’.

While writing integration test cases, we don’t focus on the functionality of the individual modules, because the individual modules should already have been covered during Unit Testing. Here we focus mainly on the communication between the modules. As per the assumption above, we have to focus on “how the Login Page is linked to the Inbox Page” and “how the Inbox Page is linked to the Delete Mails module”.
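The idea can be sketched in Python. The `LoginPage` and `InboxPage` classes below are hypothetical stand-ins for the modules described above; the test exercises only the hand-off between them, not each module’s internals:

```python
# Hypothetical modules wired together; the names and the session format
# are illustrative only, not a real API.
class LoginPage:
    def login(self, user, password):
        # Pretend authentication; produces a session the next module consumes.
        if user == "alice" and password == "secret":
            return {"user": user, "authenticated": True}
        return {"user": user, "authenticated": False}

class InboxPage:
    def open(self, session):
        # Integration point: Inbox trusts the session produced by LoginPage.
        if not session.get("authenticated"):
            raise PermissionError("login required")
        return ["mail-1", "mail-2"]

def test_login_links_to_inbox():
    # The integration test exercises the hand-off, not each module alone.
    session = LoginPage().login("alice", "secret")
    mails = InboxPage().open(session)
    assert mails == ["mail-1", "mail-2"]

test_login_links_to_inbox()
```

Note that the assertion is about the data flowing across the interface; the internals of either page would already be covered by their unit tests.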

What is Big Bang Approach?

Combining all the modules at once and verifying the functionality after completion of individual module testing. In Big Bang Integration Testing, the individual modules are not integrated until all of them are ready. Then they are run together to check whether everything performs well. A disadvantage of this approach is that defects are found at a later stage, and it is difficult to tell whether a defect arose in an interface or in a module.

What is Top-Down Approach?

In Top-Down Integration Testing, testing takes place from top to bottom. High-level modules are tested first, then low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system works as intended.

In this type of testing, Stubs are used as temporary modules when a module is not ready for integration testing.

What is Bottom-Up Approach?

It is the reciprocal of the Top-Down Approach. In Bottom-Up Integration Testing, testing takes place from bottom to top. The lowest-level modules are tested first, then the high-level modules, and finally the high-level modules are integrated with the low-level ones to ensure the system works as intended. Drivers are used as temporary modules for integration testing.

What is the difference between Stubs and Drivers in Software Testing?

Stubs and drivers are used in component-level testing. Check the image below.

Assume we have two modules in an application, say ‘Module A’ and ‘Module B’. The developers have built only ‘Module A’. Before they finish developing ‘Module B’, we (testers) receive a requirement to test ‘Module A’. We can test ‘Module A’ on its own if it has no dependency on ‘Module B’. But assume ‘Module A’ depends on ‘Module B’. In order to test ‘Module A’ in this case, the developers create a dummy module, called a Stub, to replace ‘Module B’. In the same way, if ‘Module B’ depends on ‘Module A’ but ‘Module A’ is not ready yet, we use a Driver to replace ‘Module A’.

What is a Stub?

A stub is a dummy program that is called by the Module under Test.

What is a Driver?

These terms (stub & driver) come into the picture while doing Integration Testing. While working on integration, sometimes we face a situation where some of the functionalities are still under development. The functionalities which are under development are replaced with dummy programs. These dummy programs are called Stubs or Drivers.

Imagine, we have two pages i.e., Login page and Admin page.

You have to test the Login page (assume the Admin page is under development). The login page calls the Admin page after login, but the Admin page is not ready yet. To overcome this, the developers write a dummy program which acts as the Admin page. This dummy program is known as a Stub.
Stubs are ‘called programs’. If a ‘called program’ is incomplete, it is replaced with a Stub. (This happens in the Top-Down approach.)

Coming to Drivers: in the above example, this time assume that the Admin page is ready to test but the Login page is not ready yet. To overcome this, the developers write a dummy program which acts as the Login page. This dummy program is known as a Driver. Drivers are ‘calling programs’. If a ‘calling program’ is incomplete, it is replaced with a Driver. (This happens in the Bottom-Up approach.)
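The stub/driver idea can be sketched in Python. All module names and behaviors below are hypothetical; the point is only that a stub replaces the called program while a driver replaces the calling program:

```python
# 'Module A' (the caller) is ready; 'Module B' (the called program) is not,
# so we stand in a stub for it (top-down integration).
class ModuleBStub:
    """Stub: replaces the CALLED program with a canned response."""
    def fetch_balance(self, account_id):
        return 100.0  # no real logic, just simulated data communication

class ModuleA:
    def __init__(self, module_b):
        self.module_b = module_b
    def report(self, account_id):
        return f"Balance for {account_id}: {self.module_b.fetch_balance(account_id)}"

# Conversely, if Module B were ready but Module A were not, a driver
# would replace the CALLING program (bottom-up integration).
def driver_for_module_b(module_b):
    """Driver: invokes Module B the way the unfinished Module A eventually would."""
    return module_b.fetch_balance("acct-1")

report = ModuleA(ModuleBStub()).report("acct-1")
```

Neither dummy implements real logic; each one only simulates the data exchange so the other side of the interface can be exercised early.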

STUBS | DRIVERS
Stubs are used in Top-Down Integration Testing | Drivers are used in Bottom-Up Integration Testing
Stubs are used when sub-programs are under development | Drivers are used when main programs are under development
The topmost module is tested first | The lowest module is tested first
A stub simulates the behavior of lower-level modules that are not yet integrated | A driver simulates the behavior of upper-level modules that are not yet integrated
Stubs are the called programs | Drivers are the calling programs

What is Hybrid Integration Testing?

Hybrid integration testing is also known as Sandwich integration testing. It is the combination of both Top-down and Bottom-up integration testing.

What is the difference between Unit Testing and Integration Testing?

UNIT TESTING | INTEGRATION TESTING
Unit testing is the first level of testing in software testing | Integration testing is the second level of testing in software testing
Considers each component as a single system | Integrated components are seen as a single system
The purpose is to test the working of each individual unit | The purpose is to test the integration of multiple unit modules
It evaluates each component or unit of the software product | It examines the proper working, interface, and reliability of the modules after integration, along with the external interfaces and the system
The scope of unit testing is limited to the particular unit under test | The scope of integration testing is wider than that of unit testing; it covers two or more modules
It has no further types | It is divided into the following approaches: Bottom-up integration, Top-down integration, Big Bang, and Hybrid
It is also known as Component Testing | It is also known as I&T or String Testing
It is performed at the code level | It is performed at the communication level
It is carried out with the help of reusable test cases | It is carried out with the help of stubs and drivers
It comes under White Box Testing | It comes under both Black Box and White Box Testing
It is performed by developers | It is performed by either testers or developers

What is the difference between Integration Testing and System Testing?

INTEGRATION TESTING | SYSTEM TESTING
It is low-level testing | It is high-level testing
It is followed by System Testing | It is followed by Acceptance Testing
It is performed after unit testing | It is performed after integration testing
Different types of integration testing are: Top-down, Bottom-up, Big Bang, and Sandwich integration testing | Different types of system testing are: Regression, Sanity, Usability, Retesting, Load, Performance, and Maintenance testing
Testers perform functional testing to validate the interaction of two modules | Testers perform both functional and non-functional testing to evaluate functionality, usability, performance, etc.
Performed to test whether two different modules interact effectively with each other | Performed to test whether the product performs as per user expectations and the required specifications
It can be performed by both testers and developers | It is performed by testers
Testing takes place on the interface of two individual modules | Testing takes place on the complete software application

Integration Testing Tools:

Some of the integration testing tools are as follows:

  1. Citrus Integration Testing
  2. VectorCAST/C++
  3. FitNesse
  4. Validata

[/vc_column_text][vc_column_text]

What is Regression Testing? When and How We Do It?

[/vc_column_text][vc_column_text]

What is Regression Testing with example?

From an interview perspective, you will be asked to define it. Let’s see what Regression Testing is in software testing.

Definition:

Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in other related or unrelated software components.

In simple words, we do regression testing by re-executing the tests against the modified application to evaluate whether the modified code breaks anything that was working earlier. Any time we modify an application, we should do regression testing (we run the regression tests).

Also Read: What is Retesting? When Do We Do Retesting?

Regression testing gives developers confidence that there is no broken functionality after modifying the production code. It makes sure that there are no unexpected side effects. Hope you have understood what regression testing is. Now let’s see when we do this type of testing.


When do we do Regression Testing?

Interviewers may ask why you do regression testing. We do software regression testing whenever the production code is modified. Usually, we execute regression tests in the following cases:

Here are some regression testing examples.

1. When new functionalities are added to the application.
Example: A website has a login functionality which allows users to log in only with Email. A new feature is added to allow login using Facebook as well.

2. When there is a Change in Requirement (in organizations, we call it a CR).
Example: The ‘Remember password’ option, which was available earlier, should be removed from the login page.

3. When there is a Defect Fix.
Example: Imagine the Login button is not working on a login page, and a tester reports a bug stating that the Login button is broken. Once the bug is fixed by the developers, testers test it to make sure the Login button works as per the expected result. At the same time, testers also test the other functionalities related to the Login button.

4. When there is a Performance Issue Fix.
Example: Loading the home page takes 5 seconds; the load time is reduced to 2 seconds.

5. When there is an Environment change.
Example: Updating the database from MySQL to Oracle.
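The first case above (adding a new login option) can be sketched as a small regression check in Python. The `login` and `login_with_facebook` functions are hypothetical; the regression suite simply re-runs the old, previously passing checks after the new feature lands:

```python
def login(user, password):
    # Existing, already-tested behavior (hypothetical email login).
    return user == "alice" and password == "secret"

def login_with_facebook(token):
    # Newly added feature: the modification that triggers regression testing.
    return token == "fb-valid-token"

def run_regression_suite():
    """Re-execute the old, previously passing tests after the new feature lands."""
    return {
        "email_login_still_works": login("alice", "secret") is True,
        "wrong_password_still_rejected": login("alice", "oops") is False,
    }
```

If the Facebook change had accidentally broken email login, one of the re-run checks would flip to False, which is exactly what regression testing is meant to catch.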

So far we have seen what regression testing is and when we do it. Now let’s see how we do it.

Regression Testing – Manual or Automation?

Regression tests are generally extremely tedious and time-consuming. We do regression testing after every deployment, so it makes life easier to automate the test cases instead of running them manually each and every time. If we have thousands of test cases, it is better to create automation test scripts for the test cases which we run on every build (i.e., regression testing).

Automated regression testing is the best practice, and it is the choice of organizations to save a lot of time and to run nightly builds.

Regression Test Tools:

Some of the Regression Test Tools are:

1. Selenium

2. QTP/UFT

3. SahiPro

4. TestComplete

5. Watir

6. IBM Rational Functional Tester

If you think, we missed some of the popular Regression Test Tools, please comment below and we will try to include it in this Regression Test Tools List.[/vc_column_text][vc_column_text]

What is Retesting? When We Do Retesting in Software Development?

[/vc_column_text][vc_column_text]

What is Retesting?

Retesting: Verifying whether the defects which were found and posted in an earlier build are fixed in the current build.

Retesting is running the previously failed test cases again on the new software to verify whether the defects posted earlier are fixed or not.

In simple words, Retesting is testing a specific bug after it was fixed.

Example: Say Build 1.0 was released. While testing Build 1.0, the test team found some defects (for example, Defect Id 1.0.1 and Defect Id 1.0.2) and posted them. The test team retests defects 1.0.1 and 1.0.2 in Build 1.1 (only if these two defects are mentioned in the Release Note of Build 1.1) to make sure the defects are fixed.

Process: As per the Bug Life Cycle, once a tester finds a bug, it is reported to the development team with the status “New”. The development team may accept or reject the bug. If they accept the bug, they fix it and release the fix in the next build, changing the status to “Ready For QA”. Now the tester verifies the bug to find out whether it is resolved; this testing is known as retesting. Retesting is planned testing: we use the same test cases with the same test data that we used in the earlier build. If the bug is no longer found, we change its status to “Fixed”; otherwise we change it to “Not Fixed” and send a Defect Retesting Document to the development team.

When Do We Do Re-testing:

1. When there is a particular bug fix specified in the Release Note:
Once the development team releases the new build, the test team has to retest the already posted bugs to make sure that they are fixed.

2. When a Bug is rejected:
At times, the development team rejects a few bugs raised by the testers and marks their status as Not Reproducible. In this case, the testers need to retest the same issue to let the developers know that the issue is valid and reproducible.

To avoid this scenario, we need to write a good bug report. Here is a post on how to write a good bug report.

3. When a Client calls for retesting:
At times, the Client may request us to test again to gain confidence in the quality of the product. In this case, the test team tests the product again.[/vc_column_text][vc_column_text]

What is the difference between Regression And Retesting

[/vc_column_text][vc_column_text]Let’s see the difference between Regression Testing and Retesting. This might be one of the top 5 interview questions for freshers, and many testers confuse the two. Here in this post, we will showcase the difference between regression testing and retesting with a practical example to understand it clearly.

REGRESSION TESTING:

Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in other related or unrelated software components.

Usually, we do regression testing in the following cases:

  1. New functionalities are added to the application
  2. Change in Requirement (in organizations, we call it a CR)
  3. Defect fixing
  4. Performance issue fixing
  5. Environment change (e.g., updating the DB from MySQL to Oracle)

RETESTING:

Verifying whether the defects which were found and posted in an earlier build are fixed in the current build.

Say Build 1.0 was released. The test team found some defects (Defect Id 1.0.1 and 1.0.2) and posted them.

Build 1.1 was released, now testing the defects 1.0.1 and 1.0.2 in this build is retesting.

Example to showcase the difference between Regression and Retesting:

Let’s take two scenarios.

Case 1: Login Page – Login button not working (Bug)

Case 2: Login Page – Added “Stay signed in” checkbox (New feature)

In Case 1, the Login button is not working, so the tester reports a bug. Once the bug is fixed, testers test it to make sure the Login button works as per the expected result.

Earlier I have posted a detailed post on “Bug Report Template”. If you haven’t gone through it, you can browse by clicking here. Also, you could download the Sample Bug Report Template / Defect Report Template from here.

In Case 2, the tester tests the new feature to ensure that “Stay signed in” works as intended.

Case 1 comes under Retesting. Here the tester retests the bug found in the earlier build, using the steps to reproduce mentioned in the bug report.

Also, in Case 1 the tester tests the other functionalities related to the Login button, which we call Regression Testing.

Case 2 comes under Regression Testing. Here the tester tests the new feature (Stay signed in) and also the relevant functionalities. Testing the relevant functionalities while testing a new feature comes under Regression Testing.

Another Example:

Imagine an Application Under Test has three modules, namely Admin, Purchase, and Finance, where the Finance module depends on the Purchase module. A tester finds a bug in the Purchase module and posts it. Once the bug is fixed, the tester needs to do Retesting to verify whether the Purchase bug is fixed, and also needs to do Regression Testing on the Finance module, which depends on the Purchase module.

Some other Differences between Regression and Retesting:

Retesting is done on failed test cases, whereas Regression Testing is done on passed test cases.

Retesting makes sure that the original defect has been corrected whereas Regression Testing makes sure that there are no unexpected side effects.[/vc_column_text][vc_column_text]

Entry and Exit Criteria in the Process of STLC

[/vc_column_text][vc_column_text]Entry and Exit Criteria in the Process of the Software Testing Life Cycle – In this post we are going to see what Entry Criteria and Exit Criteria are, and how we apply them in each phase of the STLC.

Entry Criteria:

The prerequisites that must be achieved before commencing the testing process.

Exit Criteria:

The conditions that must be met before testing should be concluded.

Entry and Exit Criteria:

As mentioned earlier, each and every phase of STLC has Entry and Exit criteria. Let’s see phase wise entry and exit criteria:

Requirement Analysis:

A quality assurance professional has to verify the requirement documents prior to starting phases like Planning, Design, Environment Setup, Execution, Reporting, and Closure. We prepare test artifacts such as the Test Strategy, Test Plan, and others based on the analysis of the requirement documents.

Test Planning:

The Test Manager/Test Lead prepares the Test Strategy and Test Plan documents, and testers may get a chance to be involved in the preparation process. This varies from company to company.

Entry Criteria: Requirements Documents

Exit Criteria: Test Strategy, Test Plan and Test Effort estimation document.

Test Design:

In the test design phase, testers prepare test scenarios, test cases/test scripts, and test data based on the Requirement Documents and the Test Plan. Here testers are involved in preparing the test cases, reviewing peers’ test cases, getting the test cases approved, and storing them in a repository.

Entry Criteria: Requirements Documents, Test Plan

Exit Criteria: Test cases, Test Scripts (if automation), Test data.

Test Environment Setup:

In this phase, in most companies, testers are not involved in preparing the test environment setup. Usually the Dev Team or the Implementation Team prepares the test environment; this varies from company to company. As a tester, you will be given an installation document to set up the test environment and to verify its readiness.

Entry Criteria: Test Plan, Smoke Test cases, Test Data

Exit Criteria: Test Environment, Smoke Test Results

Test Execution:

In this phase, testers are involved in executing test cases, reporting defects, and updating the requirement traceability matrix.

Entry Criteria: Test Plan document, Test cases, Test data, Test Environment

Exit Criteria: Test case execution report, Defect report, RTM

Test Closure:

Traditionally this is the final phase of testing. The test lead is involved in preparing the Test Metrics and the Test Closure Report, and sends out the test closure report once the testing process is completed.

Entry Criteria: Test case execution report (make sure there are no high-severity defects open), Defect report

Exit Criteria: Test Closure Report, Test Metrics[/vc_column_text][vc_column_text]

Difference Between Test Scenario Vs Test Case

[/vc_column_text][vc_column_text]In this post, we will see the difference between Test Case and Test Scenario. In most of the interviews you will face this question i.e., Test Scenario Vs Test Case. Here in this post, we will show 10 differences between Test Scenario and Test Case. Both these Test Scenario and Test Case templates come under Test Artifacts. As a tester, these two templates are very useful in Test Design and Test Execution phases of Software Test Life Cycle (STLC).

Test Scenario Vs Test Case

What is a Test Scenario?

Test Scenario gives the idea of what we have to test. Test Scenario is like a high-level test case.

Test Scenario answers “What is to be tested?”

Assume that we need to test the functionality of the login page of a Gmail application. The test scenario for the Gmail login page functionality is as follows:

Test Scenario Example: Verify the login functionality

What is a Test Case?

Test cases are the set of positive and negative executable steps of a test scenario which has a set of pre-conditions, test data, expected result, post-conditions and actual results.

Test Case answers “How is it to be tested?”

Assume that we need to test the functionality of the login page of a Gmail application. Test cases for the above login page functionality are as follows:

Test Case Examples:

Test Case 1: Enter valid User Name and valid Password
Test Case 2: Enter valid User Name and invalid Password
Test Case 3: Enter invalid User Name and valid Password
Test Case 4: Enter invalid User Name and invalid Password
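The four test cases above can be expressed in table-driven form. Here is a minimal Python sketch, where the `login` function is a hypothetical stand-in for the Gmail login page:

```python
def login(user, password):
    # Hypothetical authentication rule standing in for the login page.
    return user == "alice" and password == "secret"

# The four test cases, one row per (user, password, expected result):
CASES = [
    ("alice",   "secret", True),   # Test Case 1: valid user name, valid password
    ("alice",   "wrong",  False),  # Test Case 2: valid user name, invalid password
    ("mallory", "secret", False),  # Test Case 3: invalid user name, valid password
    ("mallory", "wrong",  False),  # Test Case 4: invalid user name, invalid password
]

results = [login(user, password) == expected for user, password, expected in CASES]
```

Each row carries its own test data and expected result, which is exactly the structure a written test case captures.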

Difference between Test Case and Test Scenario

TEST CASE | TEST SCENARIO
A test case consists of a test case name, precondition, test steps, expected result, and postcondition | A test scenario is a one-liner, but it is associated with multiple test cases
A test case guides a user on “how to test” | A test scenario guides a user on “what to test”
The purpose of a test case is to validate the test scenario by executing a set of steps | The purpose of a test scenario is to test the end-to-end functionality of a software application
Creating test cases is important when working with testers off-site | Creating test scenarios helps in a time-sensitive situation (especially when working in Agile)
Software applications change often, leading to redesigned pages and new functionalities, so test cases are hard to maintain | Test scenarios are easy to maintain due to their high-level design
More time-consuming compared to test scenarios | Less time-consuming compared to test cases
More resources are required to create and execute test cases | Relatively fewer resources are enough to create and test using test scenarios
Helps in exhaustive testing of the application | Helps in an agile way of testing end-to-end functionality
Test cases are derived from test scenarios | Test scenarios are derived from use cases
Test cases are low-level actions | Test scenarios are high-level actions

Note: Using both Test Scenarios and Test Cases together will ensure a robust, high-coverage testing initiative. It’s a best practice to write Test Scenarios first and then move on to Test Cases. Even so, in today’s Agile era most companies prefer Test Scenarios; test cases are being replaced with test scenarios to save time. Check the below post for a detailed explanation of the Test Case Template.[/vc_column_text][vc_column_text]

Manual Testing Methods

[/vc_column_text][vc_column_text]

BLACK BOX TESTING:

Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. This can be applied to every level of software testing such as Unit, Integration, System and Acceptance Testing.

Testers create test scenarios/cases based on software requirements and specifications, so it is also known as Specification-Based Testing, Behavioral Testing, and Input-Output Testing.

The tester performs testing only on the functional part of an application to make sure the behavior of the software is as expected, hence the name Behavioral Testing.

The tester passes input data and checks whether the actual output matches the expected output, hence the name Input-Output Testing.

Testers are not obliged to have knowledge of the source code in this process.

Black Box Testing Techniques:

  • Equivalence Partitioning
  • Boundary Value Analysis
  • Decision Table
  • State Transition Testing
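As a quick illustration of the first two techniques, here is a hedged Python sketch of equivalence partitioning and boundary value analysis, applied to a hypothetical age field that accepts values from 18 to 60 inclusive:

```python
# Hypothetical requirement: an age field accepts 18-60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition
# (invalid-low, valid, invalid-high) instead of testing every age.
partitions = [10, 35, 70]
partition_results = [is_valid_age(a) for a in partitions]   # [False, True, False]

# Boundary value analysis: values just around each edge of the valid range,
# where off-by-one defects are most likely.
boundaries = [17, 18, 19, 59, 60, 61]
boundary_results = [is_valid_age(a) for a in boundaries]    # [False, True, True, True, True, False]
```

Six boundary checks plus three partition representatives cover the field far more economically than testing dozens of arbitrary ages would.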

Types of Black Box Testing:

Functional Testing: In simple words, functional testing verifies what the system actually does: that each function of the software application behaves as specified in the requirement document. We test all the functionalities by providing appropriate input and verifying whether the actual output matches the expected output. It falls within the scope of black box testing, and the testers need not be concerned with the source code of the application.

Non-functional Testing: In simple words, non-functional testing verifies how well the system performs. It refers to various aspects of the software such as performance, load, stress, scalability, security, and compatibility. The main focus is to improve the user experience, e.g., how fast the system responds to a request.

Must Read: 100+ Types of Software Testing

WHITE BOX TESTING:

White Box Testing is based on an application’s internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. This testing is usually done at the unit level. It is also known as Glass Box, Clear Box, Structural, Open Box, or Transparent Box Testing.

White Box Testing Techniques:

  • Statement Coverage
  • Branch Coverage
  • Path Coverage
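The difference between statement and branch coverage can be seen in a minimal sketch, using a hypothetical discount function (the function and the 10% rate are illustrative assumptions):

```python
# Sketch: why branch coverage is stronger than statement coverage,
# using a hypothetical member-discount function.

def discount(price: float, is_member: bool) -> float:
    rate = 0.0
    if is_member:          # branch point
        rate = 0.1
    return price * (1 - rate)

# A single test, discount(100.0, True), executes every statement
# (100% statement coverage) but never takes the False branch.
# Branch coverage requires exercising both outcomes of the "if":
assert discount(100.0, True) == 90.0    # True branch taken
assert discount(100.0, False) == 100.0  # False branch taken
```

Path coverage goes further still: with several independent branch points, every combination of branches must be exercised.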

GREY BOX TESTING:

Grey Box Testing is a combination of White Box and Black Box Testing. The tester who performs this type of testing needs access to design documents, which helps to create better test cases.[/vc_column_text][vc_column_text]

What is Verification And Validation In Software Testing

[/vc_column_text][vc_column_text]

Verification And Validation:

In software testing, verification and validation are the processes of checking whether a software system meets its specifications and fulfills its intended purpose. Verification and validation is also known as V&V, and may also be referred to as software quality control. It is normally the responsibility of software testers as part of the Software Development Life Cycle.

Must Read: Regression & Retesting

VERIFICATION: (Static Testing)

Verification is the process of ensuring that we are building the product right, i.e., checking that the product is being developed according to the requirements we have.

The activities involved here are inspections, reviews and walkthroughs.

In simple words, verification is verifying the documents.

As per IEEE-STD-610:

The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the beginning of that phase.

Am I building the product right? It's a low-level activity. Verification is a static method of checking documents and files.

Must Read: Manual Testing Interview Questions

VALIDATION: (Dynamic Testing)

Validation is the process of checking whether we are building the right product, i.e., validating that the product we have developed is right.

The activity involved here is testing the software application.

In simple words, validation is validating the actual output of the software against the expected output.

As per IEEE-STD-610:

The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements [IEEE-STD-610]

Am I building the right product? It's a high-level activity. Validation is a dynamic process of testing the real product.

This is a brief explanation of Verification And Validation in Software Testing.

Also Go Through: Manual Testing Complete Tutorial

[/vc_column_text][vc_column_text]

The Complete Guide To Writing Test Strategy [Sample Test Strategy Document]

[/vc_column_text][vc_column_text]

The Complete Guide To Writing Test Strategy

Test Strategy is a high-level, static document, usually developed by the project manager. It captures the approach for how we go about testing the product and achieving the goals. It is normally derived from the Business Requirement Specification (BRS), and documents like the Test Plan are prepared with this document as the base.

Even though testing practices differ between organizations, almost all software development organizations follow a Test Strategy document to achieve their goals and follow best practice.

Usually, the test team starts writing the detailed Test Plan and continues with the further phases of testing once the test strategy is ready. In the Agile world, some companies do not spend time on test plan preparation because of the minimal time available for each release, but they still maintain a test strategy document. Maintaining this document for the entire project helps to mitigate unforeseen risks.

This is one of the important documents in test deliverables. Like other test deliverables, test team shares this with the stakeholders for better understanding about the scope of the project, test approaches and other important aspects.

  • 1. Sections of Test Strategy Document
  • 2. Scope and overview
  • 3. Test Approach
  • 4. Test Levels
  • 5. Test Types
  • 6. Roles and responsibilities
  • 7. Environment requirements
  • 8. Testing tools
  • 9. Industry standards to follow
  • 10. Test deliverables
  • 11. Testing metrics
  • 12. Requirement Traceability Matrix
  • 13. Risk and mitigation
  • 14. Reporting tool
  • 15. Test Summary
  • 16. Download Sample Test Strategy Document

If you are a beginner, you may not get an opportunity to create a test strategy document, but it's good to know how to create one. It will be helpful when you are handling a QA team: once you become a Project Lead or Project Manager, you will have to develop the test strategy document. Creating an effective test strategy document is a skill you must acquire. By writing a test strategy you define the testing approach of your project. The test strategy document should be circulated to all the team members so that every team member is consistent with the testing approach. Remember, there is no rule that you must maintain all of these sections in your test strategy document; it varies from company to company. This list gives a fair idea of how to write a good test strategy.

Sections of Test Strategy Document:

Following are the sections of test strategy document:

  1. Scope and overview
  2. Test Approach
  3. Testing tools
  4. Industry standards to follow
  5. Test deliverables
  6. Testing metrics
  7. Requirement Traceability Matrix
  8. Risk and mitigation
  9. Reporting tool
  10. Test summary

We have seen what a test strategy document is and what it contains. Let's briefly discuss each section of the Test Strategy in the STLC.

Scope and overview:

In this section, we mention the scope of the testing activities (what to test and why to test) and give an overview of the AUT (Application Under Test).

Example: Creating a new Application (Say Google Mail) which offers email services. Test the functionality of emails and make sure it gives value to the customer.

Test Approach:

In this section, we usually define the following

  • Test levels
  • Test types
  • Roles and responsibilities
  • Environment requirements

Test Levels:

This section lists out the levels of testing that will be performed during QA testing, such as unit testing, integration testing, system testing and user acceptance testing. Testers are responsible for integration testing, system testing and user acceptance testing.

Test Types:

This section lists out the testing types that will be performed during QA Testing.

Roles and responsibilities:

This section describes the roles and responsibilities of Project Manager, Project Lead, individual testers.

Environment requirements:

This section lists out the hardware and software for the test environment in order to commence the testing activities.

Testing tools:

This section will describe the testing tools necessary to conduct the tests

Example: Name of Test Management Tool, Name of Bug Tracking Tool, Name of Automation Tool

Industry standards to follow:

This section describes the industry standards to follow in order to produce a high-quality system that meets or exceeds customer expectations. Usually, the project manager decides the testing models and procedures that need to be followed to achieve the goals of the project.

Test deliverables:

This section lists out the deliverables that need to be produced before, during and at the end of testing.

Read more on Test Deliverables here..

Testing metrics:

This section describes the metrics that should be used in the project to analyze the project status.

Read more on Test Metrics here..

Requirement Traceability Matrix:

Requirement traceability matrix is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled.

Read more on RTM here..
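A traceability matrix can be sketched as a simple mapping from requirement IDs to the test case IDs that verify them (the IDs below are illustrative, not from any real project):

```python
# Sketch: a minimal Requirement Traceability Matrix as a mapping from
# requirement IDs to the test case IDs that verify them.

rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. login requirement
    "REQ-002": ["TC-003"],             # e.g. password reset
    "REQ-003": [],                     # e.g. reporting: not yet covered
}

# Any requirement with no linked test case is a coverage gap.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Coverage gaps:", uncovered)
```

Walking the matrix like this is exactly how an RTM exposes requirements that no test verifies.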

Risk and mitigation:

Identify all the testing risks that will affect the testing process and specify a plan to mitigate the risk.

Reporting tool:

This section outlines how defects and issues will be tracked using a reporting tool.

Must read: Popular defect tracking tools

Test Summary:

This section lists out what kind of test summary reports will be produced and how frequently. Test summary reports may be generated on a daily, weekly or monthly basis, depending on how critical the project is.

Download Sample Test Strategy Document:

Conclusion:

Test strategy document gives a clear vision of what the test team will do for the whole project. It is a static document, which means it won't change throughout the project life cycle. The one who prepares this document must have good experience in the product domain, as this is the document that is going to drive the entire team. The test strategy document should be circulated to the entire testing team before the testing activities begin. Writing a good test strategy improves the complete testing process and leads to a high-quality system.[/vc_column_text][vc_column_text]

Software Test Plan Template with Detailed Explanation [Sample Test Plan Document]

[/vc_column_text][vc_column_text]

Software Test Plan Template with Detailed Explanation

In this post, we will learn how to write a Software Test Plan Template. Before that, let's see what a test plan is. A test plan document contains the plan for all the testing activities to be done to deliver a quality product. It is derived from the Product Description, SRS, or Use Case documents for all future activities of the project. It is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, what not to test, how to test, when to test and who will do which test. It also includes the environment and tools needed, resource allocation, the test techniques to be followed, and the risks and contingency plans. A test plan is a dynamic document and we should always keep it up to date. The test plan document guides us in how the testing activity should go on; the success of the testing project depends heavily on the test plan.

  • 1. How To Prepare Effective Test Plan
  • 2. Who Prepare Test Plan Template
  • 3. Sections of Test Plan Template
  • 4. Test Plan Identifier
  • 5. References
  • 6. Introduction
  • 7. Test Items
  • 8. Features To Be Tested
  • 9. Features Not To Be Tested
  • 10. Approach
  • 11. Pass/Fail Criteria
  • 12. Suspension Criteria
  • 13. Test Deliverables
  • 14. Testing Tasks
  • 15. Environmental Needs
  • 16. Responsibilities
  • 17. Staffing and Training Needs
  • 18. Schedule
  • 19. Risks and Contingencies
  • 20. Approvals
  • 21. Download Sample Test Plan Document

Test plan is one of the documents in test deliverables. Like other test deliverables, the test plan document is also shared with the stakeholders. The stakeholders get to know the scope, approach, objectives, and schedule of software testing to be done.

How To Prepare Effective Test Plan?

Some of the measures are to start preparing the test plan early in the STLC, keep the test plan short and simple to understand, and keep the test plan up-to-date

 

Who Prepares the Test Plan Template?

Usually, the Test Lead prepares the test plan and the testers are involved in the process of preparing the test plan document. Once the test plan is well prepared, the testers write test scenarios and test cases based on it.

Sections of Test Plan Template:

Following are the sections of test plan document as per IEEE 829 standards.

  1. Test Plan Identifier
  2. References
  3. Introduction
  4. Test Items
  5. Features To Be Tested
  6. Features Not To Be Tested
  7. Approach
  8. Pass/Fail Criteria
  9. Suspension Criteria
  10. Test Deliverables
  11. Testing Tasks
  12. Environmental Needs
  13. Responsibilities
  14. Staffing and Training Needs
  15. Schedule
  16. Risks and Contingencies
  17. Approvals

Let’s see each component of the Test Plan Document. We are going to present the Test Plan Document as per IEEE 829 Standards.

Test Plan Identifier:

Test Plan Identifier is a unique number to identify the test plan.

Example: ProjectName_0001

References:

This section specifies the list of all documents that support the test plan you are currently creating.

Example: SRS (System Requirement Specification), Use Case Documents, Test Strategy, Project Plan, Project Guidelines etc.,

Introduction:

Introduction or summary includes the purpose and scope of the project

Example: The objective of this document is to test the functionality of the ‘ProjectName’

Test Items:

A list of test items which will be tested

Example: Testing should be done on both front end and back end of the application on the Windows/Linux environments.

Features To Be Tested:

In this section, we list out all the features that will be tested within the project.

Example: The features which are to be tested are Login Page, Dashboard, Reports.

Features Not To Be Tested:

In this section, we list out the features which are not included in the project.

Example: The payment-via-PayPal feature is about to be removed from the application, so there is no need to test this feature.

Approach:

The overall strategy of how testing will be performed. It contains details such as Methodology, Test types, Test techniques etc.,

Example: We follow Agile Methodology in this project

Pass/Fail Criteria:

In this section, we specify the criteria that will be used to determine pass or fail percentage of test items.

Example: All the major functionality of the application should work as intended and the pass percentage of test cases should be more than 95% and there should not be any critical bugs.
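The exit criterion above can be checked mechanically. A minimal sketch, assuming illustrative execution numbers (200 executed, 192 passed, no critical bugs):

```python
# Sketch: evaluating the assumed exit criterion "pass rate above 95%
# and no critical bugs" from test execution results.

executed, passed, critical_bugs = 200, 192, 0  # illustrative figures

pass_rate = passed / executed * 100
criteria_met = pass_rate > 95 and critical_bugs == 0
print(f"Pass rate: {pass_rate:.1f}% -> criteria met: {criteria_met}")
```

With these figures the pass rate is 96%, so the criterion is met; one critical bug or a pass rate of 95% or below would fail it.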

Suspension Criteria:

In this section, we specify when to stop the testing.

Example: If any of the major functionalities is not working or the system experiences login issues, then testing should be suspended.

Test Deliverables:

The list of documents that need to be delivered at each phase of the testing life cycle, i.e., the list of all test artifacts.

Examples: Test Cases, Bug Report

Read more on “Test Deliverables”..

Testing Tasks:

In this section, we specify the list of testing tasks we need to complete in the current project.

Example: Test environment should be ready prior to test execution phase. Test summary report needs to be prepared.  

Environmental Needs:

List of hardware, software and any other tools that are needed for a test environment.

Responsibilities:

We specify the roles and responsibilities for each test task.

Example: Test plan should be prepared by Test Lead. Preparation and execution of tests should be carried out by testers.

Staffing and Training Needs:

Plan training course to improve the skills of resources in the project to achieve the desired goals.

Schedule:

Complete details on when each task should start and finish, and how long it should take.

Example: Perform test execution – 120 man-hours, Test Reporting – 30 man-hours

Risks and Contingencies:

In this section, we specify the probable risks and the contingency plans to overcome those risks.

Example: Risk –  In case of a wrong budget estimation, the cost may overrun.  Contingency Plan – Establish the scope before beginning the testing tasks and pay attention in the project planning and also track the budget estimates constantly.

Approvals:

Who should sign off and approve the testing project

Example: Project manager should agree on completion of the project and determine the steps to proceed further.[/vc_column_text][vc_column_text]

Test Case Template With Explanation | Software Testing Material

[/vc_column_text][vc_column_text]

Detailed Explanation – Test Case Template

A test case template is a document that comes under the test artifacts; it allows testers to develop test cases for a particular test scenario in order to verify whether the features of an application are working as intended. Test cases are the set of positive and negative executable steps of a test scenario, with a set of pre-conditions, test data, expected results, post-conditions and actual results.

Most companies use test case management tools such as Quality Center (HP QC), JIRA etc., while some companies still use Excel sheets to write test cases.

Assume we need to write test cases for a scenario (Verify the login of Gmail account).

Here are some test cases.

1. Enter valid User Name and valid Password
2. Enter valid User Name and invalid Password
3. Enter invalid User Name and valid Password
4. Enter invalid User Name and invalid Password
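The four combinations above can also be expressed as automated checks. A minimal sketch using Python's unittest, where `authenticate()` and the credentials are stand-ins (the real login service and test data would differ):

```python
# Sketch: the four login test cases as automated checks against a
# stand-in authenticate() function (credentials are hypothetical).
import unittest

VALID_USER, VALID_PASS = "rajkumar", "s3cret"  # assumed test data

def authenticate(username: str, password: str) -> bool:
    """Stand-in for the real login service."""
    return username == VALID_USER and password == VALID_PASS

class LoginTests(unittest.TestCase):
    def test_valid_user_valid_password(self):
        self.assertTrue(authenticate(VALID_USER, VALID_PASS))

    def test_valid_user_invalid_password(self):
        self.assertFalse(authenticate(VALID_USER, "wrong"))

    def test_invalid_user_valid_password(self):
        self.assertFalse(authenticate("intruder", VALID_PASS))

    def test_invalid_user_invalid_password(self):
        self.assertFalse(authenticate("intruder", "wrong"))

if __name__ == "__main__":
    unittest.main(argv=["login_tests"], exit=False)
```

Only the first case expects a successful login; the other three are negative cases and must be rejected.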

Find the test case template screenshot below:

Let’s discuss the main fields of a test case:

PROJECT NAME: Name of the project the test cases belong to
MODULE NAME: Name of the module the test cases belong to
REFERENCE DOCUMENT: Mention the path of the reference documents (if any such as Requirement Document, Test Plan, Test Scenarios etc.,)
CREATED BY: Name of the Tester who created the test cases
DATE OF CREATION: When the test cases were created
REVIEWED BY: Name of the Tester who reviewed the test cases
DATE OF REVIEW: When the test cases were reviewed
EXECUTED BY: Name of the Tester who executed the test case
DATE OF EXECUTION: When the test case was executed
TEST CASE ID: Each test case should be represented by a unique ID. It’s good practice to follow some naming convention for better understanding and discrimination purpose.
TEST SCENARIO: Test Scenario ID or title of the test scenario.
TEST CASE: Title of the test case
PRE-CONDITION: Conditions which need to be met before executing the test case.
TEST STEPS: Mention all the test steps in detail, in the order in which they should be executed.
TEST DATA: The data which is used as input for the test case.
EXPECTED RESULT: The result we expect once the test case is executed. It might be anything such as the Home Page, a relevant screen, an error message etc.
POST-CONDITION: Conditions which need to be achieved when the test case has been successfully executed.
ACTUAL RESULT: The result which the system shows once the test case is executed.
STATUS: If the actual and expected results are the same, mark it as Passed; otherwise mark it as Failed. If a test fails, it has to go through the bug life cycle to be fixed.[/vc_column_text][vc_column_text]

Test Scenarios Login Page [How To Write Test Scenarios of a Login Page] | SoftwareTestingMaterial

[/vc_column_text][/vc_column][vc_column width=”1/6″][/vc_column][/vc_row][vc_row][vc_column][vc_column_text]

Test Scenarios Login Page

In any application, logging in is the process by which an individual with valid user credentials gains access to the application. A login page is usually used to protect specific pages that unauthorized users should not see. In this post, we will see test scenarios for a login page. Testing the login page is very important for any application from a security point of view. We will try to cover the most widely used login page scenarios here.

Must Read: Test Case Template With Detailed Explanation

We usually write test cases for login page for every application we test. Every login page should have the following elements.

  1. ‘Email/Phone Number/Username’ Textbox
  2. ‘Password’ Textbox
  3. Login Button
  4. ‘Remember Me’ Checkbox
  5. ‘Keep Me Signed In’ Checkbox
  6. ‘Forgot Password’ Link
  7. ‘Sign up/Create an account’ Link
  8. CAPTCHA

Following are the test cases for User Login Page. The list consists of both Positive and Negative test scenarios login page.

Must Read: Test Plan Template With Detailed Explanation

Test Cases of a Login Page (Test Scenarios Login Page):

  1. Verify that cursor is focused on “Username” text box on the page load (login page)
  2. Verify that the login screen contains elements such as Username, Password, Sign in button, Remember password check box, Forgot password link, and Create an account link.
  3. Verify that tab functionality is working properly or not
  4. Verify that Enter/Tab key works as a substitute for the Sign in button
  5. Verify that all the fields such as Username, Password has a valid placeholder
  6. Verify that the labels float upward when the text field is in focus or filled (In case of floating label)
  7. Verify that User is able to Login with Valid Credentials
  8. Verify that User is not able to Login with invalid Username and invalid Password
  9. Verify that User is not able to Login with Valid Username and invalid Password
  10. Verify that User is not able to Login with invalid Username and Valid Password
  11. Verify that User is not able to Login with blank Username or Password
  12. Verify that User is not able to Login with inactive credentials
  13. Verify that clicking on browser back button after successful login should not take User to log out mode
  14. Verify that clicking on browser back button after successful logout should not take User to logged in mode
  15. Verify that there is a limit on the total number of unsuccessful login attempts (No. of invalid attempts should be based on business logic. Based on the business logic, User will be asked to enter captcha and try again or user will be blocked)
  16. Verify that the password is masked (displayed as asterisks or dots) when entered
  17. Verify the password can be copy-pasted
  18. Verify that encrypted characters in “Password” field should not allow deciphering if copied
  19. Verify that User should be able to login with the new password after changing the password
  20. Verify that User should not be able to login with the old password after changing the password
  21. Verify that spaces should not be allowed before any password characters attempted
  22. Verify that whether User is still logged in after series of actions such as sign in, close browser and reopen the application.
  23. Verify that the ways to retrieve the password if the User forgets the password
  24. Verify that “Remember password” checkbox is unselected by default (depends on business logic, it may be selected or unselected)
  25. Verify that “Keep me logged in” checkbox is unselected by default (depends on business logic, it may be selected or unselected)
  26. Verify that the timeout of the login session (Session Timeout)
  27. Verify that the logout link is redirected to login/home page
  28. Verify that User is redirected to appropriate page after successful login
  29. Verify that User is redirected to Forgot password page when clicking on Forgot Password link
  30. Verify that User is redirected to Create an account page when clicking on Sign up / Create an account link
  31. Verify that validation message is displayed in case when User leaves Username or Password as blank
  32. Verify that validation message is displayed in case of exceeding the character limit of the Username and Password fields
  33. Verify that validation message is displayed in case of entering special character in the Username and password fields
  34. Verify whether the login form is revealing any security information by viewing page source
  35. Verify that the login page is not vulnerable to SQL injection
  36. Verify that the login page is not vulnerable to Cross-site scripting (XSS). XSS vulnerabilities may be used by hackers to bypass access controls.
    If there is a captcha on the login page (Test Cases for CAPTCHA):
  37. Verify that whether there is a client-side validation when User doesn’t enter CAPTCHA
  38. Verify that the refresh link of CAPTCHA is generating new CAPTCHA
  39. Verify that the CAPTCHA is case sensitive
  40. Verify whether the CAPTCHA has audio support to listen
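Scenario 35 above checks for SQL injection. As a quick illustration of the standard defense, here is a minimal sketch using a parameterized query with Python's sqlite3 module; the table, credentials and payload are all illustrative:

```python
# Sketch: a parameterized query keeps an injection payload as data,
# not executable SQL (table and values are illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'pw123')")

payload = "' OR '1'='1"  # classic injection attempt

# Placeholders (?) bind the payload as a plain string value:
row = conn.execute(
    "SELECT 1 FROM users WHERE username = ? AND password = ?",
    ("alice", payload),
).fetchone()
assert row is None  # the injection attempt does not log in
```

Had the query been built by string concatenation, the payload would rewrite the WHERE clause and the login check could be bypassed.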

Must Read: Test Scenarios of a Signup form

Writing test cases for an application takes a little practice. A well-written test case should allow any tester to understand and execute the tests, making the testing process smoother and saving a lot of time in the long run. Earlier we have posted a video on How To Write Test Cases. This concludes the post “Test Scenarios Login Page / Test Scenarios of Login form”.

Like this post? Don’t forget to share it! If you have queries, please comment below.[/vc_column_text][vc_column_text]

Test Scenarios Registration Form [Write Test Cases of Signup Form]

[/vc_column_text][vc_column_text]

Test Scenarios Registration Form

Registration form varies based on business requirement. In this post, we will see the Test Scenarios Registration form. We will list all the possible test scenarios of a registration form (Test Scenarios Registration Page/Test Scenarios Signup form). We usually write test cases for Registration Form/Signup form/Signup page for every application we test. Here are the fields usually used in a registration form.

Must Read: Test Case Template With Detailed Explanation

  • User Name
  • First Name
  • Last Name
  • Password
  • Confirm Password
  • Email Id
  • Phone number
  • Date of birth
  • Gender
  • Location
  • Terms of use
  • Submit
  • Login (If you already have an account)

Test Scenarios of a Registration Form:

  1. Verify that the Registration form contains Username, First Name, Last Name, Password, Confirm Password, Email Id, Phone number, Date of birth, Gender, Location, Terms of use, Submit, Login (If you already have an account)
  2. Verify that tab functionality is working properly or not
  3. Verify that Enter/Tab key works as a substitute for the Submit button
  4. Verify that all the fields such as Username, First Name, Last Name, Password and other fields have a valid placeholder
  5. Verify that the labels float upward when the text field is in focus or filled (In case of floating label)
  6. Verify that all the required/mandatory fields are marked with * against the field
  7. Verify that clicking on submit button after entering all the mandatory fields, submits the data to the server
  8. Verify that system generates a validation message when clicking on submit button without filling all the mandatory fields.
  9. Verify that entering blank spaces on mandatory fields lead to validation error
  10. Verify that clicking on submit button by leaving optional fields, submits the data to the server without any validation error
  11. Verify that case sensitivity of Username (usually Username field should not follow case sensitivity – ‘rajkumar’ & ‘RAJKUMAR’ acts same)
  12. Verify that system generates a validation message when entering existing username
  13. Verify that the character limit in all the fields (mainly username and password) based on business requirement
  14. Verify that the username validation as per business requirement (in some application, username should not allow numeric and special characters)
  15. Verify that the validation of all the fields are as per business requirement
  16. Verify that the date of birth field should not allow the dates greater than current date (some applications have age limit of 18 in that case you have to validate whether the age is greater than or equal to 18 or not)
  17. Verify that the validation of email field by entering incorrect email id
  18. Verify that the validation of numeric fields by entering alphabets and characters
  19. Verify that leading and trailing spaces are trimmed after clicking on submit button
  20. Verify that the “terms and conditions” checkbox is unselected by default (depends on business logic, it may be selected or unselected)
  21. Verify that the validation message is displayed when clicking on submit button without selecting “terms and conditions” checkbox
  22. Verify that the password is masked (displayed as asterisks or dots) when entered
  23. Verify whether the password and confirm password are same or not
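Scenarios 17 and 18 above (email format and numeric-field validation) can be sketched as simple checks. The regex below is a deliberately minimal illustration, not a full RFC 5322 validator, and the 10-digit phone rule is an assumption:

```python
# Sketch: minimal field validators for registration-form checks.
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")  # illustrative pattern

def is_valid_email(value: str) -> bool:
    return bool(EMAIL_RE.match(value))

def is_valid_phone(value: str) -> bool:
    return value.isdigit() and len(value) == 10  # assumed 10-digit rule

assert is_valid_email("user@example.com")
assert not is_valid_email("user@@example")      # scenario 17: bad email
assert is_valid_phone("9876543210")
assert not is_valid_phone("98765abc10")         # scenario 18: alphabets
```

In a real project these rules come from the business requirement, so the validators (and the tests against them) must match whatever the requirement specifies.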

Must Read: Test Scenarios of a Login form

Writing test cases for an application takes a little practice. A well-written test case should allow any tester to understand and execute the tests, making the testing process smoother and saving a lot of time in the long run. Earlier we have posted a video on How To Write Test Cases. This concludes the post “Test Scenarios Registration form / Test Scenarios of Signup form”.[/vc_column_text][vc_column_text]

Bug Report Template With Detailed Explanation | Software Testing Material

[/vc_column_text][vc_column_text]

Bug Report Template – Detailed Explanation

A defect report template or bug report template is one of the test artifacts. It comes into the picture when the test execution phase starts.

Earlier I have posted a detailed post on “Software Testing Life Cycle (STLC)”, if you haven’t gone through it, you can browse “Software Testing Life Cycle (STLC)” here

The purpose of using Defect report template or Bug report template is to convey the detailed information (like environment details, steps to reproduce etc.,) about the bug to the developers. It allows developers to replicate the bug easily.

See the difference between Error, Bug, Defect and Failure here

Check below video to see “How To Write Good Bug Report With Clear Explanation of each field”


Components of Bug Report Template:

Let’s discuss the main fields of a defect report and in the next post, we learn how to write a good bug report.

Defect ID: Add a defect ID using the naming convention followed by your team. The defect ID is generated automatically when using a defect management tool.

Title/Summary: Title should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.

Assume you have found a bug in the registration page while uploading a profile picture in a particular file format (a JPEG file): the system crashes while uploading a JPEG file.

Note: I use this example, throughout this post.

Good: “Uploading a JPEG file (Profile Picture) in the Registration Page crashes the system”.

Bad: “System crashes”.

Reporter Name: Name of the one who found the defect (Usually tester’s name but sometimes it might be Developer, Business Analyst, Subject Matter Expert (SME), Customer)

Defect Reported Date: Mention the date on which you have found the bug.

Who Detected: Specify the designation of the one who found the defect. E.g. QA, Developer, Business Analyst, SME, Customer

How Detected: In this field, you must specify how the defect was detected, e.g., while testing, during a review, or during a walkthrough.

Project Name: Sometimes, we may work on multiple projects simultaneously. So, choose the project name correctly. Specify the name of the project (If it’s a product, specify the product name)

Release/Build Version: On which release this issue occurs. Mention the build version details clearly.

Defect/Enhancement: If the system is not behaving as intended, then specify it as a Defect. If it's just a request for a new feature, then specify it as an Enhancement.

Environment: You must mention the details of the Operating System, browser and anything else relevant to the test environment in which you encountered the bug.

Example: Windows 8/Chrome 48.0.2564.103

Priority: Priority defines how soon the bug should be fixed. Usually, the priority of the bug is set by the Managers. Based on the priority, developers could understand how soon it must be fixed and set the order in which a bug should be resolved.

Categories of Priority:

  • High
  • Medium
  • Low

Read more on Priority & Severity of Bug.

Severity: Severity talks about the impact of the bug on the customer’s business. Usually, the severity of the bug is set by the Managers. Sometimes, testers choose the severity of the bug but in most cases, it will be selected by Managers/Leads.

Categories of Severity:

  • Blocker
  • Critical
  • Major
  • Minor
  • Trivial

Status: Specify the status of the bug. If you just found a bug and about to post it then the status will be “New”. In the course of bug fixing, the status of the bug will change.

(E.g. New/ Assigned/ Open/ Fixed/ Test/ Verified/ Closed/ Reopen/ Duplicate/ Deferred/ Rejected/ cannot be fixed/ Not Reproducible/ Need more information)

Must Read: Bug Life Cycle – Explained in detail

Description: In the description section, you must briefly explain what you have done before facing the bug.

Steps to reproduce: In this section, you should describe how to reproduce the bug step by step. Easy-to-follow steps make it easier for the developers to fix the issue. These steps should describe the bug well enough that developers can understand and act on it without consulting the person who wrote the bug report. Start with “opening the application”, include “prerequisites” if any, and write up to the step which “causes the bug”.

Good:

i. Open URL “Your URL”
ii. Click on “Registration Page”
iii. Upload “JPEG” file in the profile photo field

Bad:

Upload a file in the registration page.

URL: Mention the URL of the application (If available)

Expected Result: What is the expected output from the application when you perform the action which causes the failure.

Good: A message should display “Profile picture uploaded successfully”

Bad: System should accept the profile picture.

Earlier I have posted a detailed post on “Test Case Template With Explanation”, if you haven’t gone through it, you can browse “Test Case Template With Explanation” here.

Actual Result: What is the actual output from the application when you perform the action which causes the failure.

Good: “Uploading a JPEG file (Profile Picture) in the Registration Page crashes the system”.

Bad: System is not accepting profile picture.

Attachments: Attach the screenshots which you captured when you faced the bug. They help the developers see the bug which you faced.

Defect Close Date: The ‘Defect Close Date’ is the date which needs to be updated once you ensure that the defect is not reproducible.
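To make the template concrete, here is a minimal sketch of the fields above captured as a Python dict, with a simple completeness check before posting. The field names and the choice of required fields are illustrative, not a standard:

```python
# A minimal sketch (not a standard): the bug report fields described above,
# captured as a Python dict, with a check that mandatory fields are filled in
# before the report is posted. The choice of required fields is illustrative.

REQUIRED_FIELDS = [
    "defect_id", "description", "steps_to_reproduce",
    "expected_result", "actual_result", "environment",
    "priority", "severity", "status",
]

def missing_fields(report):
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

bug = {
    "defect_id": "BUG-101",
    "description": "Uploading a JPEG profile picture crashes the Registration Page",
    "steps_to_reproduce": [
        'Open URL "Your URL"',
        'Click on "Registration Page"',
        'Upload "JPEG" file in the profile photo field',
    ],
    "expected_result": 'A message should display "Profile picture uploaded successfully"',
    "actual_result": "Uploading a JPEG file (Profile Picture) crashes the system",
    "environment": "Windows 8 / Chrome 48.0.2564.103",
    "priority": "High",
    "severity": "Critical",
    "status": "New",
}

print(missing_fields(bug))  # [] -> the report is complete and can be posted
```

A bug-tracking tool enforces the same idea with mandatory form fields; the sketch just shows the check in code.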

This is all about Bug Report Template. Download a sample Bug Report / Defect Report Template for your reference.[/vc_column_text][vc_column_text]

Software Test Metrics – Product Metrics & Process Metrics

[/vc_column_text][vc_column_text]

Software Test Metrics:

Before starting with what Software Test Metrics are and their types, I would like to start with a famous quote about metrics.

You can’t control what you can’t measure – Tom DeMarco (an American software engineer, author, and consultant on software engineering topics).

Software test metrics are used to monitor and control the test process and the product. They help to drive the project towards our planned goals without deviation.

Metrics answer different questions. It’s important to decide what questions you want answers to.

Software test metrics are classified into two types

  1. Process metrics
  2. Product metrics

Process Metrics:

Process metrics are the Software Test Metrics used during the test preparation and test execution phases of the STLC.

The following are generated during the Test Preparation phase of STLC:

Test Case Preparation Productivity:

It is used to calculate the number of Test Cases prepared per hour of effort spent on the preparation of Test Cases.

Formula: Test Case Preparation Productivity = No. of Test Cases Prepared / Effort Spent for Test Case Preparation (in hours)

E.g.:

No. of Test cases = 240

Effort spent for Test case preparation (in hours) = 10

Test Case preparation productivity = 240/10 = 24 test cases/hour

Test Design Coverage:

It helps to measure the percentage of test case coverage against the number of requirements

Formula: Test Design Coverage = (Total no. of requirements mapped to test cases / Total no. of requirements) * 100

E.g.:

Total number of requirements: 100

Total number of requirements mapped to test cases: 98

Test Design Coverage = (98/100)*100 = 98%

The following are generated during the Test Execution phase of STLC:

Test Execution Productivity:

It determines the number of Test Cases that can be executed per hour

Formula: Test Execution Productivity = No. of Test Cases Executed / Effort Spent for Execution of Test Cases (in hours)

E.g.:

No of Test cases executed = 180

Effort spent for execution of test cases = 10

Test Execution Productivity = 180/10 = 18 test cases/hour

Test Execution Coverage:

It is to measure the number of test cases executed against the number of test cases planned.

Formula: Test Execution Coverage = (Total no. of test cases executed / Total no. of test cases planned to execute) * 100

E.g.:

Total no. of test cases planned to execute = 240

Total no. of test cases executed = 180

Test Execution Coverage = (180/240)*100 = 75%

Test Cases Passed:

It is to measure the percentage no. of test cases passed

Formula: Test Cases Passed = (No. of test cases passed / Total no. of test cases executed) * 100

E.g.:

No. of test cases passed = 80

Total no. of test cases executed = 90

Test Cases Passed = (80/90)*100 = 88.8 = 89%

Test Cases Failed:

It is to measure the percentage no. of test cases failed

Formula: Test Cases Failed = (No. of test cases failed / Total no. of test cases executed) * 100

E.g.:

No. of test cases failed = 10

Total no. of test cases executed = 90

Test Cases Failed = (10/90)*100 = 11.1 = 11%

Test Cases Blocked:

It is to measure the percentage no. of test cases blocked

Formula: Test Cases Blocked = (No. of test cases blocked / Total no. of test cases executed) * 100

E.g.:

No. of test cases blocked = 5

Total no. of test cases executed = 90

Test Cases Blocked = (5/90)*100 = 5.5 = 6%
Check below video to see “Test Metrics”
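As a quick cross-check, the process-metric formulas above can be computed in a few lines of Python, reusing the numbers from the worked examples (the helper names are ours, not part of any standard):

```python
# The process-metric formulas above, computed with the numbers from the
# worked examples. The helper names are ours, not part of any standard.

def productivity(count, effort_hours):
    return count / effort_hours

def percentage(part, whole):
    return round(part / whole * 100)

# Test Preparation phase
print(productivity(240, 10))  # 24.0 test cases/hour (preparation productivity)
print(percentage(98, 100))    # 98  (Test Design Coverage %)

# Test Execution phase
print(productivity(180, 10))  # 18.0 test cases/hour (execution productivity)
print(percentage(180, 240))   # 75  (Test Execution Coverage %)
print(percentage(80, 90))     # 89  (Test Cases Passed %)
print(percentage(10, 90))     # 11  (Test Cases Failed %)
print(percentage(5, 90))      # 6   (Test Cases Blocked %)
```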

Product Metrics:

Product metrics are the Software Test Metrics used during the defect analysis phase of the STLC.

Error Discovery Rate:

It is to determine the effectiveness of the test cases.

Formula: Error Discovery Rate = (Total no. of defects found / Total no. of test cases executed) * 100

E.g.:

Total no. of test cases executed = 240

Total number of defects found = 60

Error Discovery Rate = (60/240)*100 = 25%

Defect Fix Rate:

It helps to know the quality of a build in terms of defect fixing.

Formula: Defect Fix Rate = ((Total no. of defects reported as fixed – Total no. of defects reopened) / (Total no. of defects reported as fixed + Total no. of new bugs due to fix)) * 100

E.g.:

Total no of defects reported as fixed = 10

Total no. of defects reopened = 2

Total no. of new Bugs due to fix = 1

Defect Fix Rate = ((10 – 2)/(10 + 1))*100 = (8/11)*100 = 72.7 = 73%

Defect Density:

It is defined as the ratio of the number of defects to the size of the software (e.g., in KLOC or function points).

Defect density determines the stability of the application.

Formula: Defect Density = Total no. of defects identified / Actual size

E.g.:

Total no. of defects identified = 80

Actual Size= 10

Defect Density = 80/10 = 8

Defect Leakage:

It is used to review the efficiency of the testing process before UAT.

Formula: Defect Leakage = (No. of defects found in UAT / No. of defects found before UAT) * 100

E.g.:

No. of defects found in UAT = 20

No. of Defects found before UAT = 120

Defect Leakage = (20 /120) * 100 = 16.6 = 17%

Defect Removal Efficiency:

It allows us to compare the overall (defects found pre and post-delivery) defect removal efficiency

Formula: Defect Removal Efficiency = (Total no. of defects found pre-delivery / (Total no. of defects found pre-delivery + Total no. of defects found post-delivery)) * 100

E.g.:

Total no. of defects found pre-delivery = 80

Total no. of defects found post-delivery = 10

Defect Removal Efficiency = ((80) / ((80) + (10)))*100 = (80/90)*100 = 88.8 = 89%
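Likewise, the product-metric formulas can be cross-checked in Python with the numbers from the worked examples (the helper names are ours, not part of any standard):

```python
# The product-metric formulas above, computed with the numbers from the
# worked examples. The helper names are ours, not part of any standard.

def error_discovery_rate(defects_found, tests_executed):
    return round(defects_found / tests_executed * 100)

def defect_fix_rate(fixed, reopened, new_due_to_fix):
    return round((fixed - reopened) / (fixed + new_due_to_fix) * 100)

def defect_density(defects, size):
    return defects / size

def defect_leakage(found_in_uat, found_before_uat):
    return round(found_in_uat / found_before_uat * 100)

def defect_removal_efficiency(pre_delivery, post_delivery):
    return round(pre_delivery / (pre_delivery + post_delivery) * 100)

print(error_discovery_rate(60, 240))      # 25 (%)
print(defect_fix_rate(10, 2, 1))          # 73 (%)
print(defect_density(80, 10))             # 8.0 (defects per size unit)
print(defect_leakage(20, 120))            # 17 (%)
print(defect_removal_efficiency(80, 10))  # 89 (%)
```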

Here I have hand-picked a few posts which will help you to learn more interview related stuff:[/vc_column_text][vc_column_text]

Requirements Traceability Matrix (RTM) | SoftwareTestingMaterial

[/vc_column_text][vc_column_text]

What is Requirement Traceability Matrix?

Requirements Traceability Matrix (RTM) is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled.

Requirement Traceability Matrix is also known as Traceability Matrix or Cross Reference Matrix.

Advantage of Requirements Traceability Matrix (RTM):

  1. It helps to ensure 100% test coverage
  2. It allows us to identify missing functionality easily
  3. It allows us to identify the test cases which need to be updated in case of a change in requirements
  4. It makes it easy to track the overall test execution status

How to prepare Requirement Traceability Matrix (RTM):

  • Collect all the available requirement documents.
  • Allot a unique Requirement ID to each and every Requirement.
  • Create Test Cases for each and every requirement and link Test Case IDs to the respective Requirement ID.

Like all other test artifacts, the RTM too varies between organizations. Most organizations use just the Requirement IDs and Test Case IDs in the RTM. You can also include other fields such as Requirement Description, Test Phase, Test Case Result, Document Owner, etc. It is necessary to update the RTM whenever there is a change in a requirement.

The following illustration gives you a basic idea about Requirement Traceability Matrix (RTM).

Assume we have a set of requirements, each with a unique Requirement ID.

Assume the total test cases identified are 10, each mapped to a Requirement ID.

Whenever we write new test cases, the same need to be updated in the RTM.

For example, we add a new test case ID TID011 and map it to the requirement ID BID005.

Types of Requirements Traceability Matrix (RTM):

Let’s see different types of Traceability Matrix:

  • Forward Traceability: Mapping requirements to test cases is called Forward Traceability. It is used to ensure that the project progresses in the desired direction and that each requirement is tested thoroughly.
  • Backward or Reverse Traceability: Mapping test cases to requirements is called Backward Traceability. It is used to ensure that the current product remains on the right track and that we are not expanding the scope of the project by adding functionality that is not specified in the requirements.
  • Bi-directional Traceability (Forward + Backward): Mapping requirements to test cases (forward traceability) and test cases to requirements (backward traceability) is called Bi-directional Traceability. It is used to ensure that all the specified requirements have appropriate test cases and vice versa.
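The idea above can be sketched in code. Here is a minimal, hypothetical RTM held as a Python mapping from Requirement IDs to Test Case IDs (the BIDxxx/TIDxxx identifiers are illustrative), showing forward traceability, backward traceability, and how a new test case is added:

```python
# A hypothetical RTM as a mapping from Requirement IDs to Test Case IDs.
# The BIDxxx/TIDxxx identifiers follow the style used above and are
# illustrative.

rtm = {
    "BID001": ["TID001", "TID002"],
    "BID002": ["TID003"],
    "BID003": ["TID004", "TID005"],
    "BID004": [],  # a requirement with no test case yet
    "BID005": ["TID006"],
}

# Forward traceability: every requirement should map to at least one test case.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test cases:", uncovered)  # ['BID004']

# Backward traceability: map each test case back to its requirement(s).
backward = {}
for req, tests in rtm.items():
    for tid in tests:
        backward.setdefault(tid, []).append(req)
print(backward["TID003"])  # ['BID002']

# Whenever a new test case is written, the RTM is updated: here we add the
# new test case ID TID011 and map it to the requirement ID BID005.
rtm["BID005"].append("TID011")
print(rtm["BID005"])  # ['TID006', 'TID011']
```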

[/vc_column_text][vc_column_text]

Test Deliverables in Software Testing – Detailed Explanation

[/vc_column_text][vc_column_text]Test Deliverables are the test artifacts which are given to the stakeholders of a software project during the SDLC (Software Development Life Cycle). A software project which follows the SDLC undergoes different phases before delivery to the customer. In this process, there will be some deliverables in every phase. Some of the deliverables are provided before the testing phase commences, some are provided during the testing phase, and the rest after the testing phase is completed.

Don’t miss: Manual Testing Complete Tutorial

Every software application goes through different phases of SDLC and STLC. In the process of software application development, test teams prepare different documents to improve communication among the team members and other stakeholders. These documents are also known as Test Deliverables, as they are delivered to the client along with the final product of software application.

Test Deliverables Infographics:

Interview Question: What is test deliverables and list out the test deliverables you have come across in the process of STLC?
This is one of the most important QA interview questions for freshers.

The test deliverables prepared during the process of software testing are as follows:

1. Test Strategy: Test Strategy is a high-level document (static document) and is usually developed by a project manager. It is a document which captures the approach on how we go about testing the product and achieving the goals. It is normally derived from the Business Requirement Specification (BRS). Documents like the Test Plan are prepared by keeping this document as a base. Click here for more details.

2. Test Plan: The Test Plan is a document which contains the plan for all the testing activities to be done to deliver a quality product. The Test Plan document is derived from the Product Description, SRS, or Use Case documents and covers all future testing activities of the project. It is usually prepared by the Test Lead or Test Manager. Click here for more details.

Difference between Test Strategy and Test Plan

3. Effort Estimation Report: In this report, the test team records the effort estimated and the effort actually spent to complete the testing process.

4. Test Scenarios: Test Scenario gives the idea of what we have to test. Test Scenario is like a high-level test case.

Difference between Test Scenario and Test Case

5. Test Cases/Scripts: Test cases are the set of positive and negative executable steps of a test scenario which has a set of pre-conditions, test data, expected result, post-conditions and actual results. Click here for more details.

6. Test Data: Test data is the data that is used by the testers to run the test cases. Whilst running the test cases, testers need to enter some input data. To do so, testers prepare test data. It can be prepared manually and also by using tools.

For example, to test a basic login functionality having user id and password fields, we need to enter some data in the user id and password fields. So we need to collect some test data.

7. Requirement Traceability Matrix (RTM): Requirements Traceability Matrix (RTM) is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled. It is also known as Traceability Matrix or Cross Reference Matrix. Click here for more details.

8. Defect Report/Bug Report: The purpose of using Defect report template or Bug report template is to convey the detailed information (like environment details, steps to reproduce etc.,) about the bug to the developers. It allows developers to replicate the bug easily. Click here for more details.

9. Test Execution Report: It contains the test results and the summary of test execution activities.

10. Graphs and Metrics: Software test metrics is to monitor and control process and product. It helps to drive the project towards our planned goals without deviation. Metrics answer different questions. It’s important to decide what questions you want answers to. Click here for more details.

11. Test summary report: It contains the summary of test activities and final test results.

12. Test incident report: It contains all the incidents such as resolved or unresolved incidents which are found while testing the software.

13. Test closure report: It gives a detailed analysis of the bugs found, bugs removed and discrepancies found in the software.

14. Release Note: Release notes will be sent to the client, customer or stakeholders along with the build. They contain a list of new features and bug fixes in the release.

15. Installation/configuration guide: This guide helps to install or configure the components that make up the system and its hardware and software requirements.

16. User guide: This guide gives assistance to the end user on accessing the software application.

17. Test status report: It is used to track the testing status. It is prepared on a periodic or weekly basis. It contains the work done till date and the work remaining.

18. Weekly status report (Project manager to client): It is similar to the Test status report but is generated weekly.[/vc_column_text][vc_column_text]

Difference between defect, bug, error and failure

[/vc_column_text][vc_column_text]Let’s see the difference between defect, bug, error and failure. In general, we use these terms whenever the system/application acts abnormally. Sometimes we call it an error, sometimes a bug, and so on. Many newcomers in the Software Testing industry are confused about using these terms.

“What is the difference between defect, bug, error and failure?” is one of the interview questions asked while recruiting a fresher.

Generally, there is some contradiction in the usage of these terminologies. Usually, in the Software Development Life Cycle, we use these terms based on the phase.

Note: Both Defect and Bug are issues in an application, but the phase of the SDLC in which the issue is found makes the overall difference.

What is a defect?

The variation between the actual results and expected results is known as defect.

If a developer finds an issue and corrects it by himself in the development phase then it’s called a defect.

What is a bug?

If testers find any mismatch in the application/system in the testing phase then they call it a Bug.

As I mentioned earlier, there is a contradiction in the usage of Bug and Defect. People widely say that “bug” is an informal name for “defect”.

What is an error?

We can’t compile or run a program due to a coding mistake in the program. If a developer is unable to successfully compile or run a program then they call it an error.

What is a failure?

Once the product is deployed and customers find any issues then they call the product a failure product. After release, if an end user finds an issue then that particular issue is called a failure.

Points to know:

If a Quality Analyst (QA) finds a bug, he has to reproduce and record it using the bug report template.

Earlier I have posted a detailed post on “Bug Report Template”. If you haven’t gone through it, you can browse by clicking here.

Also, you could download the Sample Bug Report Template / Defect Report Template from here.

Remember to share this post with anyone who might benefit from this information, including your Facebook friends, Twitter followers, LinkedIn followers and members of your Google+ group![/vc_column_text][vc_column_text]

How To Write Good Bug Report | Software Testing Material

[/vc_column_text][vc_column_text]

How to Write Good Bug Report!!

In this post, we show you how to write a good bug report. At the end, we will give a link to download a sample defect report template. So, let’s get started.

Have you ever seen a rejected bug with the comment “it is not reproducible”? Sometimes the Dev Team rejects bugs due to a bad bug report.

Bad Bug Report?

Imagine you are using Mozilla Firefox for testing (I am mentioning a sample case here). You found an issue that the login button is not working. You posted the issue with all the steps except the name and version of the browser. One of the developers opens that report and tries to reproduce it based on the steps you mentioned in the report. Here, in this case, the developer is using Internet Explorer. The login button works properly in that environment. So the developer rejects the bug with the comment that it is not reproducible. You will find the same issue again when you retest. Again you will report the same issue and get the same comments from the Dev Team.

You forgot to mention the name and version of the browser in your bug report. If you omit key information needed to reproduce the bug in the developer’s environment, you will face consequences like this.

It creates a bad impression of you. Depending on the company, action may even be taken against you for wasting time and effort.

There is an old saying: “You never get a second chance to make a first impression.”

Writing a good bug report is a skill every tester should have. You have to give all the necessary details to the Dev Team to get your issue fixed.

Earlier I have posted a detailed post on “Bug Life Cycle”, if you haven’t gone through it, you can browse “Bug Life Cycle” here

Do you want to get the bug fixed without rejection? Then you have to report it using a good bug report.


If you liked this video, then please subscribe to our YouTube Channel for more video tutorials.

How To Write Good Defect Report?

Let me first mention the fields needed in a good bug report.

Defect ID, Reporter Name, Defect Reported Date, Who Detected, How Detected, Project Name, Release/Build Version, Defect/Enhancement, Environment, Priority, Severity, Status, Description, Steps To Reproduce, URL, Expected Result, Actual Result.

Earlier I have posted a detailed post on “Bug Report Template With Detailed Explanation”, click here to get the detailed explanation on each field and download a sample bug report.

The first thing we should do before writing a bug report is to reproduce the bug two to three times.

If you are sure that the bug exists, then ascertain whether the same bug was already posted by someone else. Use some keywords related to your bug and search in the Defect Tracking Tool. If you don’t find an existing issue related to the bug you found, then you can start writing a bug report.

Hold on!!

Why not also ascertain whether the same issue exists in the related modules? If you find that the same issue exists in the related modules, then you can address those issues in the same bug report. It saves a lot of time in fixing the issues and avoids writing repeated bug reports for the same kind of issue.

Start writing the bug report by filling in all the fields mentioned above and write detailed steps to reproduce.

Make a checklist and ensure you have covered all the following points before reporting a bug:

i. Have I reproduced the bug 2-3 times?
ii. Have I verified in the Defect Tracking Tool (using keywords) whether someone else already posted the same issue?
iii. Have I verified the similar issue in the related modules?
iv. Have I written the detailed steps to reproduce the bug?
v. Have I written a proper defect summary?
vi. Have I attached relevant screenshots?
vii. Have I missed any necessary fields in the bug report?

Consolidating all the points on how to write a good bug report:

i. Reproduce the bug 2-3 times.
ii. Use some keywords related to your bug and search in the Defect Tracking Tool.
iii. Check in similar modules.
iv. Report the problem immediately.
v. Write detailed steps to reproduce the bug.
vi. Write a good defect summary. Watch your language in the process of writing the bug report; your words should not offend people. Never use all capital letters whilst explaining the issue.
vii. It is advisable to illustrate the issue using proper screenshots.
viii. Proofread your bug report twice or thrice before posting it.

This is all about “writing a good bug report”. If you have any thoughts, please comment below. As promised, here is the sample defect report template.[/vc_column_text][vc_column_text]

Software Architecture: One-Tier, Two-Tier, Three Tier, N Tier

[/vc_column_text][vc_column_text]Software Architecture: Software Architecture consists of One Tier, Two Tier, Three Tier and N-Tier architectures.

A “tier” can also be referred to as a “layer”.

There are three layers involved in an application, namely the Presentation Layer, Business Layer and Data Layer. Let’s see each layer in detail:

Presentation Layer: It is also known as the Client layer. It is the topmost layer of an application. This is the layer we see when we use a software. By using this layer we can access webpages. The main functionality of this layer is to communicate with the Application layer. This layer passes the information given by the user in terms of keyboard actions and mouse clicks to the Application layer.
For example, the login page of Gmail, where an end user can see text boxes and buttons to enter a user id and password and to click on sign-in.

In simple words, it is to view the application.

Business Layer: It is also known as the Application layer. This layer contains the business logic. It acts as a mediator between the Presentation layer and the Data layer: it processes the user’s input received from the Presentation layer and communicates with the Data layer to retrieve or store data.

In simple words, it is to perform operations on the application.

Data Layer: The data is stored in this layer. The Application layer communicates with the Database layer to retrieve the data. It contains methods that connect to the database and perform required actions, e.g. insert, update, delete, etc.

In simple words, it is to share and retrieve the data.

Must Read: Manual Testing Complete Tutorial

Types of Software Architecture:

One Tier Architecture:

A One-tier application is also known as a Standalone application.

One-tier architecture has all the layers such as Presentation, Business and Data Access in a single software package. Applications which handle all three layers, such as an MP3 player or MS Office, come under one-tier applications. The data is stored in the local system or on a shared drive.

Must Read: Most Popular Software Testing Interview Questions

Two-Tier Architecture:

A Two-tier application is also known as a Client-Server application.

The Two-tier architecture is divided into two parts:

1. Client Application (Client Tier)
2. Database (Data Tier)

The client system handles both the Presentation and Application layers and the server system handles the Database layer. The communication takes place between the Client and the Server: the client system sends a request to the server system, and the server system processes the request and sends the data back to the client system.

Must Read: SQL for Software Testers Complete Tutorial

Three-Tier Architecture:

A Three-tier application is also known as a Web-based application.

The Three-tier architecture is divided into three parts:

1. Presentation layer (Client Tier)
2. Application layer (Business Tier)
3. Database layer (Data Tier)

The client system handles the Presentation layer, the application server handles the Application layer and the server system handles the Database layer.

Note: There is one more architecture, the N-Tier application, also known as a Distributed application. It is similar to the three-tier architecture, but the number of application servers is increased and represented in individual tiers in order to distribute the business logic.[/vc_column_text][vc_column_text]

Seven Principles of Software Testing | Software Testing Material

[/vc_column_text][vc_column_text]There are seven principles of Software Testing.

  1. Testing shows presence of defects
  2. Exhaustive testing is impossible
  3. Early testing
  4. Defect clustering
  5. Pesticide paradox
  6. Testing is context dependent
  7. Absence of error – fallacy

 

If you liked this video, then please subscribe to our YouTube Channel for more video tutorials.
Let’s see principles of Software Testing in detail.

Seven Principles of Software Testing:

1. Testing Shows Presence of Defects:

Testing shows the presence of defects in the software. The goal of testing is to make the software fail. Sufficient testing reduces the number of undetected defects, but even if testers are unable to find defects after repeated regression testing, it doesn’t mean that the software is bug-free.

Testing talks about the presence of defects; it cannot prove the absence of defects.

2. Exhaustive Testing is Impossible:

What is Exhaustive Testing?

Testing all the functionalities using all valid and invalid inputs and preconditions is known as Exhaustive testing.

Why it’s impossible to achieve Exhaustive Testing?

Assume we have to test an input field which accepts an age between 18 and 20, so we test the field using 18, 19 and 20. If the same input field accepts the range 18 to 100, then we have to test using inputs such as 18, 19, 20, 21, …, 99, 100. It’s a basic example; you may think that you could achieve it using an automation tool. But imagine the same field accepting some billion values. It’s impossible to test all possible values due to release time constraints.

If we keep on testing all possible test conditions then the test execution time and costs will rise. So instead of doing exhaustive testing, risks and priorities are taken into consideration whilst testing and estimating testing efforts.

3. Early Testing:

Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing reduces the cost of fixing defects.

Assume two scenarios: in the first, you identify an incorrect requirement in the requirement gathering phase; in the second, you identify a bug in fully developed functionality. It is cheaper to change the incorrect requirement than to fix the fully developed functionality which is not working as intended.

4. Defect Clustering:

Defect Clustering in software testing means that a small module or functionality contains most of the bugs or it has the most operational failures.

As per the Pareto Principle (80-20 Rule), 80% of issues come from 20% of modules and the remaining 20% of issues from the remaining 80% of modules. So we emphasize testing on the 20% of modules where we face 80% of the bugs.

5. Pesticide Paradox:

Pesticide Paradox in software testing refers to repeating the same test cases again and again; eventually, the same test cases will no longer find new bugs. To overcome the Pesticide Paradox, it is necessary to review the test cases regularly and add or update them to find more defects.

6. Testing is Context Dependent:

The testing approach depends on the context of the software we develop. We test the software differently in different contexts. For example, an online banking application requires a different testing approach compared to an e-commerce site.

7. Absence of Error – Fallacy:

Software that is 99% bug-free may still be unusable if the wrong requirements were incorporated into it and it does not address the business needs.

The software we build must not only be (close to) bug-free but must also fulfill the business needs; otherwise it becomes unusable software.[/vc_column_text][vc_column_text]

Black Box Test Design Techniques | Software Testing Material

[/vc_column_text][vc_column_text]Black Box Test Design Techniques are widely used as a best practice in the industry. Black box test design techniques are used to pick test cases in a systematic manner. By using these techniques we can save lots of testing time and get good test coverage.

Note: Knowledge on the internal structure (code) of the AUT (Application Under Test) is not necessary to use these black box test design techniques.

The following is the list of Black Box Test Design Techniques:

These test design techniques are used to derive test cases from the Requirement Specification document and also based on the tester’s expertise:

  1. Equivalence Partitioning
  2. Boundary Value Analysis
  3. Decision Table
  4. State Transition
  5. Exploratory Testing
  6. Error Guessing

 

Let’s see each technique in detail.

Equivalence Partitioning:

It is also known as Equivalence Class Partitioning (ECP).

Using the equivalence partitioning test design technique, we divide the test conditions into classes (groups). From each group we test only one condition. The assumption is that all the conditions in one group work in the same manner: if a condition from a group works, all conditions from that group work, and vice versa. It reduces a lot of rework and also gives good test coverage. We save lots of time by reducing the total number of test cases that must be developed.

For example: A field should accept numeric values. In this case, we split the test conditions into classes such as “enter a numeric value”, “enter an alphanumeric value”, “enter alphabets”, and so on, instead of testing lots of numeric values such as 0, 1, 2, 3, and so on.

Click here to see a detailed post on Equivalence Class Partitioning.

Boundary Value Analysis:

Using Boundary Value Analysis (BVA), we take the test conditions as partitions and design the test cases using the boundary values of the partitions. The boundary between two partitions is the place where the behavior of the application changes. The test conditions on either side of a boundary are called boundary values. We take both valid boundaries (from the valid partitions) and invalid boundaries (from the invalid partitions).

For example: If we want to test a field which should accept only amount more than 10 and less than 20 then we take the boundaries as 10-1, 10, 10+1, 20-1, 20, 20+1. Instead of using lots of test data, we just use 9, 10, 11, 19, 20 and 21.
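The example above can be sketched in Python. `accepts` is a hypothetical implementation of the field under test (it accepts amounts greater than 10 and less than 20), checked against the six boundary values:

```python
# Boundary value analysis for the example above. `accepts` is a hypothetical
# implementation of the field under test (accepts amounts greater than 10 and
# less than 20); we check it against the six boundary values.

def accepts(amount):
    return 10 < amount < 20

for value in (9, 10, 11, 19, 20, 21):
    print(value, accepts(value))
# 9 and 10 are rejected, 11 and 19 are accepted, 20 and 21 are rejected
```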

Click here to see a detailed post on Boundary Value Analysis.

Decision Table:

The Decision Table technique is also known as the Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic). In the Decision Table technique, we deal with combinations of inputs. To identify the test cases with a decision table, we consider conditions and actions: we take conditions as inputs and actions as outputs.
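As a sketch, the conditions and actions of a decision table can be held as a simple mapping. The login-form rules below are hypothetical example data, not taken from any real specification:

```python
# The conditions (inputs) and actions (outputs) of a decision table held as a
# mapping. The login-form rules below are hypothetical example data.

decision_table = {
    # (username_valid, password_valid): expected action
    (True,  True):  "Show home page",
    (True,  False): "Show 'incorrect password' error",
    (False, True):  "Show 'unknown user' error",
    (False, False): "Show 'unknown user' error",
}

def expected_action(username_valid, password_valid):
    return decision_table[(username_valid, password_valid)]

# Each rule (column) of the decision table becomes one test case:
for conditions, action in decision_table.items():
    print(conditions, "->", action)
```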

Click here to see a detailed post on Decision Table.

State Transition Testing:

Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what has happened in the earlier state.

Some examples are Vending Machine, Traffic Lights.

Vending machine dispenses products when the proper combination of coins is deposited.

Traffic Lights will change sequence when cars are moving / waiting
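The vending machine example can be sketched as a small state machine; the states, events and the two-coin price below are illustrative assumptions:

```python
# The vending machine example above as a small state machine: the same event
# ("insert_coin") leads to different states depending on the current state.
# States, events and the two-coin price are illustrative assumptions.

transitions = {
    ("idle",     "insert_coin"): "one_coin",
    ("one_coin", "insert_coin"): "paid",
    ("paid",     "dispense"):    "idle",
}

def step(state, event):
    # An event that is not allowed in the current state leaves it unchanged.
    return transitions.get((state, event), state)

state = "idle"
state = step(state, "dispense")     # 'dispense' is invalid while idle
print(state)                        # idle
state = step(state, "insert_coin")  # idle -> one_coin
state = step(state, "insert_coin")  # one_coin -> paid
state = step(state, "dispense")     # paid -> idle (product dispensed)
print(state)                        # idle
```

State transition test cases then cover both the valid transitions and the events that should be ignored in each state.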

Click here to see a detailed post on State Transition Testing.

Exploratory Testing:

Usually this process is carried out by domain experts. They perform testing just by exploring the functionalities of the application without knowledge of the requirements.

Whilst using this technique, testers could explore and learn the system. High severity bugs are found very quickly in this type of testing.

Error Guessing:

Error guessing is one of the testing techniques used to find bugs in a software application based on the tester’s prior experience. In error guessing, we don’t follow any specific rules. Some examples:

  • Submitting a form without entering values.
  • Entering invalid values, such as entering alphabets in a numeric field.

[/vc_column_text][vc_column_text]

Equivalence Partitioning Test Case Design Technique

[/vc_column_text][vc_column_text]Equivalence Partitioning Test case design technique is one of the testing techniques. You could find other testing techniques such as Boundary Value Analysis, Decision Table and State Transition Techniques by clicking on appropriate links.

Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Hence we select one input from each group to design the test cases.


Every condition in a particular partition (group) behaves the same as the others. If one condition in a partition is valid, the other conditions are valid too; if one condition in a partition is invalid, the other conditions are invalid too.

It helps reduce the total number of test cases from infinite to finite. The test cases selected from these groups ensure coverage of all possible scenarios.

Equivalence partitioning is applicable at all levels of testing.

Example on Equivalence Partitioning Test Case Design Technique:

Example 1:

Assume, we have to test a field which accepts Age 18 – 56

Valid Input: 18 – 56

Invalid Input: less than or equal to 17 (<=17), greater than or equal to 57 (>=57)

Valid Class: 18 – 56 = Pick any one input test data from 18 – 56

Invalid Class 1: <=17 = Pick any one input test data less than or equal to 17

Invalid Class 2: >=57 = Pick any one input test data greater than or equal to 57

We have one valid and two invalid conditions here.
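The three classes above can be sketched in a few lines of Python. Here `is_valid_age` is a hypothetical validator standing in for the field under test, and the representative values are arbitrary picks from each partition:

```python
# A small sketch of equivalence partitioning for the Age field above.
# `is_valid_age` is a hypothetical validator standing in for the field under test.

def is_valid_age(age: int) -> bool:
    """Valid partition: 18-56 inclusive."""
    return 18 <= age <= 56

# One representative value per partition is enough to cover its class
representatives = {
    "invalid class 1 (<= 17)": 10,
    "valid class (18-56)": 35,
    "invalid class 2 (>= 57)": 70,
}

for partition, value in representatives.items():
    print(f"{partition}: {value} -> {'valid' if is_valid_age(value) else 'invalid'}")
```

Any other pick from the same partition (say, 5 instead of 10) should produce the same result, which is exactly why one value per class suffices.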

Example 2:

Assume we have to test a field which accepts a Mobile Number of ten digits.

Valid input: 10 digits

Invalid Input: 9 digits, 11 digits

Valid Class: Enter a mobile number of exactly 10 digits = 9876543210

Invalid Class 1: Enter a mobile number which has less than 10 digits = 987654321

Invalid Class 2: Enter a mobile number which has more than 10 digits = 98765432109[/vc_column_text][vc_column_text]

Boundary Value Analysis Test Case Design Technique

[/vc_column_text][vc_column_text]Boundary Value Analysis Test case design technique is one of the testing techniques. You could find other testing techniques such as Equivalence Partitioning, Decision Table and State Transition Techniques by clicking on appropriate links.

Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects.

Every partition has its maximum and minimum values and these maximum and minimum values are the boundary values of a partition.

A boundary value for a valid partition is a valid boundary value. Similarly a boundary value for an invalid partition is an invalid boundary value.

Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.

For each boundary, we test +/-1 in the least significant digit of either side of the boundary.

Boundary value analysis can be applied at all test levels.

Example on Boundary Value Analysis Test Case Design Technique:

Assume, we have to test a field which accepts Age 18 – 56

Minimum boundary value is 18

Maximum boundary value is 56

Valid Inputs: 18,19,55,56

Invalid Inputs: 17 and 57

Test case 1: Enter the value 17 (18-1) = Invalid

Test case 2: Enter the value 18 = Valid

Test case 3: Enter the value 19 (18+1) = Valid

Test case 4: Enter the value 55 (56-1) = Valid

Test case 5: Enter the value 56 = Valid

Test case 6: Enter the value 57 (56+1) =Invalid
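The six test cases above can be derived mechanically. The sketch below generates min-1, min, min+1, max-1, max and max+1 for the Age partition; the validator is hypothetical, standing in for the real field:

```python
# A sketch of deriving the boundary values for the Age field (18-56).
# The validator is hypothetical; in practice each value is fed to the real field.

def is_valid_age(age: int) -> bool:
    """Valid partition: 18-56 inclusive."""
    return 18 <= age <= 56

def boundary_values(minimum: int, maximum: int) -> list[int]:
    """Return min-1, min, min+1, max-1, max, max+1 for a valid partition."""
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

for value in boundary_values(18, 56):
    print(value, "valid" if is_valid_age(value) else "invalid")
```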

Example 2:

Assume we have to test a text field (Name) which accepts the length between 6-12 characters.

Minimum boundary value is 6

Maximum boundary value is 12

Valid text length is 6, 7, 11, 12

Invalid text length is 5, 13

Test case 1: Text length of 5 (min-1) = Invalid

Test case 2: Text length of exactly 6 (min) = Valid

Test case 3: Text length of 7 (min+1) = Valid

Test case 4: Text length of 11 (max-1) = Valid

Test case 5: Text length of exactly 12 (max) = Valid

Test case 6: Text length of 13 (max+1) = Invalid[/vc_column_text][vc_column_text]

Decision Table Test Case Design Technique

[/vc_column_text][vc_column_text]Decision Table Test case design technique is one of the testing techniques. You could find other testing techniques such as Equivalence Partitioning, Boundary Value Analysis and State Transition Techniques by clicking on appropriate links.

Decision Table is also known as Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic). In the Decision Table technique, we deal with combinations of inputs. To identify the test cases with a decision table, we consider conditions and actions: we take conditions as inputs and actions as outputs.

Examples on Decision Table Test Case Design Technique:

Take an example of transferring money online to an account which is already added and approved.

Here the conditions to transfer money are ACCOUNT ALREADY APPROVED, OTP (One Time Password) MATCHED, SUFFICIENT MONEY IN THE ACCOUNT.

And the actions performed are TRANSFER MONEY, SHOW A MESSAGE AS INSUFFICIENT AMOUNT, BLOCK THE TRANSACTION INCASE OF SUSPICIOUS TRANSACTION.

Here we decide under which conditions each action should be performed. Now let’s see the tabular column below.

In the first column I took all the conditions and actions related to the requirement. All the other columns represent Test Cases.

T = True, F = False, X = Not possible

From case 3 and case 4, we can identify that if condition 2 fails, the system executes Action 3. So we can take either case 3 or case 4.

So finally concluding with the below tabular column.

We write 4 test cases for this requirement.
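The collapsed table can be sketched as code. The rule below follows the text for a failed OTP check (Action 3, block the transaction); treating an unapproved account the same way is an assumption on my part, since those cells are marked “X = Not possible” in the table:

```python
# Hedged sketch of the money-transfer decision table described above.
# Mapping of failed conditions to actions: a failed OTP check triggers Action 3
# (per the text); blocking when the account is not approved is an assumption.
from itertools import product

def decide(account_approved: bool, otp_matched: bool, sufficient_money: bool) -> str:
    if not account_approved or not otp_matched:
        return "BLOCK THE TRANSACTION"                  # Action 3
    if not sufficient_money:
        return "SHOW A MESSAGE AS INSUFFICIENT AMOUNT"  # Action 2
    return "TRANSFER MONEY"                             # Action 1

# Enumerate all 2^3 = 8 condition combinations - the raw decision table -
# before collapsing equivalent columns into the final 4 test cases.
for combo in product([True, False], repeat=3):
    print(combo, "->", decide(*combo))
```

Printing all eight combinations makes it easy to see which columns share an outcome and can therefore be merged.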

Take another example – login page validation. Allow the user to log in only when both the ‘User ID’ and ‘Password’ are entered correctly.

Here the Conditions to allow user to login are Enter Valid User Name and Enter Valid Password.

The Actions performed are Displaying home page and Displaying an error message that User ID or Password is wrong.

So I am eliminating one of the test cases from case 2 and case 3 and concluding with the below tabular column.[/vc_column_text][vc_column_text]

State Transition Test Case Design Technique

[/vc_column_text][vc_column_text]State Transition Test case design technique is one of the testing techniques. You could find other testing techniques such as Equivalence Partitioning, Boundary Value Analysis and Decision Table Techniques by clicking on appropriate links.

Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what has happened in the earlier state.

 

Some examples are a Vending Machine and Traffic Lights. A vending machine dispenses products when the proper combination of coins is deposited.

Traffic Lights will change sequence when cars are moving / waiting

Example on State Transition Test Case Design Technique:

Take an example of login page of an application which locks the user name after three wrong attempts of password.

A finite state system is often shown as a state diagram

It works like a truth table. First determine the states, input data and output data.

Entering the correct password in the first, second, or third attempt, the user is redirected to the home page (i.e., State S4).

Entering an incorrect password in the first attempt, a message is displayed as “try again” and the user is moved to state S2 for the second attempt.

Entering an incorrect password in the second attempt, a message is displayed as “try again” and the user is moved to state S3 for the third attempt.

Entering an incorrect password in the third attempt, the user is moved to state S5 and a message is displayed as “Account locked. Consult Administrator”.
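The login state machine above can be sketched in a few lines. The state names follow the text (S1–S3 for the three attempts, S4 for the home page, S5 for the locked account); the password "secret" is purely illustrative:

```python
# Minimal sketch of the three-attempt login state machine described above.
# S1-S3 = password attempts, S4 = home page, S5 = account locked.
# The correct password "secret" is a made-up illustrative value.

def login_attempts(passwords: list[str], correct: str = "secret") -> str:
    state = "S1"
    for attempt, password in enumerate(passwords[:3], start=1):
        if password == correct:
            return "S4"  # correct password on any attempt -> home page
        # wrong password: move to the next attempt, or lock on the third
        state = {1: "S2", 2: "S3", 3: "S5"}[attempt]
    return state

print(login_attempts(["wrong", "secret"]))    # reaches S4 (home page)
print(login_attempts(["bad", "bad", "bad"]))  # reaches S5 (account locked)
```

Note that the same input (a wrong password) produces a different outcome depending on the current state, which is exactly what state transition testing targets.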

Likewise, let’s see another example.

Withdrawal of money from an ATM. ‘User A’ wants to withdraw 30,000 from an ATM. Imagine he can take 10,000 per transaction and the total balance available in the account is 25,000. In the first two attempts, he can withdraw money, whereas in the third attempt the ATM shows a message as “Insufficient balance, contact Bank”. The action is the same, but due to the change in state, he couldn’t withdraw money in the third transaction.[/vc_column_text][vc_column_text]

Difference Between Defect Severity And Priority In Software Testing

[/vc_column_text][vc_column_text]In this post, we see the difference between Severity and Priority. Whenever we find a bug, we select its severity and its priority. Usually, testers select the severity of the bug, and the Project Manager or Project Lead selects the priority. Let’s see bug severity and bug priority in detail.

  • 1. Severity and Priority Infographic
  • 2. What is Severity?
  • 3. What are the types of Severity?
  • 4. What is Priority?
  • 5. What are the types of Priority?
  • 6. High Priority & High Severity
  • 7. Low Priority & High Severity
  • 8. High Priority & Low Severity
  • 9. Low Priority & Low Severity

Severity and Priority Infographic:

What is Severity?

Bug/Defect severity can be defined as the impact of the bug on the application. It can be Critical, Major, or Minor. In simple words, it describes how much effect a particular defect has on the system.

Must Read: How To Write A Good Bug Report

What are the types of Severity?

Severity can be categorized into four types:

As mentioned above, the types of severity are categorized as Critical, Major, Minor, and Trivial.

Let’s see how we can segregate a bug into these types:

Critical:

A critical severity issue is one where a large piece of functionality or a major system component is completely broken and there is no workaround to move further.
For example, due to a bug in one module, we cannot test the other modules because that blocker bug has blocked them. Bugs which affect the customer’s business are considered critical.

Major:

A major severity issue is one where a large piece of functionality or a major system component is completely broken, but there is a workaround to move further.

Minor:

A minor severity issue is an issue that imposes some loss of functionality, but for which there is an acceptable & easily reproducible workaround.

For example, a font family, font size, color, or spelling issue.

Trivial:

A trivial severity defect is a defect related to an enhancement of the system.

What is Priority?

Defect priority can be defined as the impact of the bug on the customer’s business, with the main focus on how soon the defect should be fixed. It gives the order in which defects should be resolved; developers decide which defect to take up next based on priority. It can be High, Medium, or Low.

Most of the time, the priority status is set based on the customer requirement.

Must Read: Difference between Defect, Bug, Error, And Failure

What are the types of Priority?

Priority can be categorized into three types:

As mentioned above, the types of priority are categorized as High, Medium, and Low.

Let’s see how we can segregate a bug into these types:

High:

A high priority issue is an issue which has a high impact on the customer’s business, or which affects the system so severely that it cannot be used until the issue is fixed. These kinds of issues must be fixed immediately. In many cases, from the user’s perspective, the priority of an issue is set to high even though its severity is minor.

Medium:

Issues which can be released in the next build come under medium priority. Such issues can be resolved along with other development activities.

Low:

An issue which has no impact on the customer’s business comes under low priority.

Some important scenarios which are asked in the interviews on Severity and Priority:

High Priority & High Severity:

A critical issue where a large piece of functionality or a major system component is completely broken.
For example,
1. The Submit button is not working on a login page and customers are unable to log in to the application.
2. On a bank website, an error message pops up when a customer clicks on the transfer money button.
3. The application throws a 500 error response when a user tries to perform some action.
500 Status Codes:
The server has problems processing the request; these are mainly server errors, not problems with the request itself.

These kinds of showstoppers come under High Priority and High Severity.
There is no workaround, and the user cannot proceed further.

Low Priority & High Severity:

An issue which won’t affect the customer’s business but has a big impact in terms of functionality.
For example,
1. A crash in some functionality which is going to be delivered after a couple of releases.
2. A crash in the application whenever a user enters 4 digits in the age field, which accepts a maximum of 3 digits.

High Priority & Low Severity:

A minor issue that imposes some loss of functionality, but for which there is an acceptable & easily reproducible workaround. Testing can proceed without interruption, but the issue affects the customer’s reputation.

For example,
1. Spelling mistake of a company name on the homepage
2. Company logo or tagline issues

It is important to fix the issue as soon as possible, although it may not cause a lot of damage.

Low Priority & Low Severity:

A minor issue that imposes some loss of functionality, but for which there is an acceptable & easily reproducible workaround. Testing can proceed without interruption.
For example,
1. The FAQ page takes a long time to load.
2. A font family, font size, color, or spelling issue in the application or reports (a spelling mistake in the company name on the home page won’t come under Low Priority and Low Severity).

These kinds of issues won’t bother the customers much.

Some more points:

  1. The development team takes up high priority defects first, rather than those of high severity.
  2. Generally, severity is assigned by the Tester / Test Lead and priority is assigned by the Developer / Team Lead / Project Lead.

The above are just examples. Selection of severity and priority may vary depending on the project and organization. In Gmail, composing an email is the main functionality, whereas the compose-email feature in a banking application (an email option to send emails internally) is not the main functionality.[/vc_column_text][vc_column_text]

Bug Severity And Priority In Software Testing – Infographic

[/vc_column_text][vc_column_text]

Bug severity and priority in Software Testing – Infographic:

Earlier, I posted a detailed article on bug severity and priority and their types. If you have missed it, you can check the detailed post on defect severity and priority here.

You could also find the articles related to this topic such as Bug Life Cycle and Bug Report Template with Detailed Explanation[/vc_column_text][vc_column_text]

Performance Testing Tutorial | Software Testing Material

[/vc_column_text][vc_column_text]In the field of Software Testing, testers mainly concentrate on Black Box and White Box Testing. Under Black Box testing, there are again different types of testing; the major types are functional testing and non-functional testing. Performance testing and its subtypes fall under non-functional testing.

In this article, we will be looking into the following

  • What is Performance Testing
  • Why Performance Testing
  • Types of Performance Testing
  • Difference between Functional Testing and Non-functional Testing
  • Difference between Performance Testing, Load Testing & Stress Testing
  • Difference between Performance Engineering & Performance Testing
  • Performance Testing Process
  • Example of Performance Test Cases
  • How to choose the right Performance Testing Tool
  • What are some popular Performance Testing Tools to do Performance Testing

What is Performance Testing?

In software, performance testing (also called Perf Testing) determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

Web application performance testing is conducted to mitigate the risk of availability, reliability, scalability, responsiveness, stability etc. of a system.

Performance testing encompasses a number of different types of testing like load testing, volume testing, stress testing, capacity testing, soak/endurance testing and spike testing each of which is designed to uncover or solve performance problems in a system.

Why Performance Testing?

In the current market, the performance and responsiveness of applications play an important role in the success of a business. We conduct performance testing to address the bottlenecks of the system and to fine-tune it by finding the root cause of performance issues. Performance testing answers questions such as: how many users can the system handle, how well does the system recover when the number of users crosses the maximum, and what is the response time of the system under normal and peak loads?

We use performance testing tools to measure the performance of a system or application under test (AUT) and to help release high-quality software; however, performance testing is not done to find functional defects in an application.

Load and performance testing will determine whether an application meets Speed, Scalability, and Stability requirements under expected workloads.

Speed: It determines whether an application responds quickly

Scalability: It determines maximum load an application can handle

Stability: It determines whether an application is stable under varying loads

Poorly performing applications gain a bad reputation and fail to meet the expected goals. So performance testing of an application is very important.

Types of Performance Testing

Capacity Testing:

Capacity Testing determines how many users a system/application can handle successfully before the performance goals become unacceptable. This allows us to avoid potential problems in the future, such as an increased user base or an increased volume of data. It helps identify a scaling strategy, i.e., whether a system should scale up or scale out. It is done mainly for eCommerce and banking sites. This testing is sometimes called Scalability Testing.

Load Testing:

Load Testing is to verify that a system/application can handle the expected number of transactions and to verify the system/application behavior under both normal and peak load conditions (no. of users).

Volume Testing:

Volume Testing verifies whether a system/application can handle a large amount of data. This testing focuses on the database. A performance tester doing volume testing populates a huge volume of data in a database and monitors the behavior of the system.

Stress Testing:

Stress Testing verifies the behavior of the system once the load increases beyond the system’s design expectations. It addresses which components fail first when we stress the system by applying load beyond the design expectations, so that we can design a more robust system.

Soak/Endurance Testing:

Soak Testing is also known as Endurance Testing. Running a system at high load for a prolonged period of time to identify performance problems is called Soak Testing. It ensures the software can handle the expected load over a long period of time.

Spike Testing:

Spike Testing is to determine the behavior of the system under a sudden increase of load (a large number of users) on the system.

Read more: Types of Performance Testing & 100+ Types of Software Testing

Difference between Functional Testing and Non-functional Testing

  • What the system actually does is checked by functional testing; how well the system performs is checked by non-functional testing.
  • Functional testing ensures the product meets customer and business requirements and has no major bugs; non-functional testing ensures the product stands up to customer expectations.
  • Functional testing verifies the accuracy of the software against the expected output; non-functional testing verifies the behavior of the software under various load conditions.
  • Functional testing is performed before non-functional testing; non-functional testing is performed after functional testing.
  • An example of a functional test case is verifying the login functionality; an example of a non-functional test case is checking whether the homepage loads in less than 2 seconds.
  • Functional testing types include Unit Testing, Smoke Testing, User Acceptance Testing, Integration Testing, Regression Testing, Localization, Globalization, and Interoperability Testing; non-functional testing types include Performance, Volume, Scalability, Usability, Load, Stress, Compliance, Portability, and Disaster Recovery Testing.
  • Functional testing can be performed either manually or in an automated way; non-functional testing can be performed efficiently only if automated.

Difference between Performance Testing, Load Testing & Stress Testing

Performance Testing:

In software, performance testing (also called Perf Testing) determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

Performance testing is conducted to mitigate the risk of availability, reliability, scalability, responsiveness, stability etc. of a system.

Performance testing encompasses a number of different types of testing like load testing, volume testing, stress testing, capacity testing, soak/endurance testing and spike testing each of which is designed to uncover or solve performance problems in a system.

Load Testing:

Load Testing is to verify that a system/application can handle the expected number of transactions and to verify the system/application behavior under both normal and peak load conditions (no. of users).

Stress Testing:

Stress Testing verifies the behavior of the system once the load increases beyond the system’s design expectations. It addresses which components fail first when we stress the system by applying load beyond the design expectations, so that we can design a more robust system.

  • Scope: Performance testing is a superset of load and stress testing; load testing and stress testing are both subsets of performance testing.
  • Goal: Performance testing sets the benchmark and standards for the application; load testing identifies the upper limit of the system, sets the SLA of the app, and checks how the system handles heavy load; stress testing finds how the system behaves under extreme loads and how it recovers from failure.
  • Load limit: In performance testing, the load applied is both below and above the threshold of a break; in load testing, it is at the threshold of a break; in stress testing, it is above the threshold of a break.
  • Attributes checked: Performance testing checks speed, response time, resource usage, stability, reliability, and throughput; load testing checks peak performance, server throughput, response time under various load levels, and load balancing requirements; stress testing checks stability, response time, and bandwidth capacity.

Difference between Performance Engineering & Performance Testing

Performance engineering is a discipline that includes best practices and activities during every phase of the software development life cycle (SDLC) in order to test and tune the application with the intent of realizing the required performance.

Performance testing simulates realistic end-user load to determine the speed, responsiveness, and stability of the system. It is concerned with testing and reporting the current performance of an application under various parameters such as response time, concurrent user load, and server throughput.

Performance Testing Process

Identify the test environment:

Identify the physical test environment and the production environment, and know what testing tools are available. Before beginning the testing process, understand the details of the hardware, software, and network configurations. This process must be revisited periodically throughout the project’s life cycle.

Identify performance acceptance criteria:

This includes goals and constraints for response time, throughput and resource utilization. Response time is a user concern, throughput is a business concern, and resource utilization is a system concern. It is also necessary to identify project success criteria that may not be captured by those goals and constraints.

Plan & Design performance tests:

Identify key scenarios to test for all possible use cases. Determine how to simulate that variability, define test data, and establish metrics to be gathered.

Configure the Test Environment:

Prepare the test environment, arrange tools and other resources before execution

Implement the Test Design:

Develop the performance tests in accordance with the test design

Execute the Test:

Execute and monitor the tests

Analyze Results, Report, and Retest:

Consolidate, analyze and share test results. Fine tune and retest to see if there is an improvement in performance. When all of the metric values are within acceptable limits then you have finished testing that particular scenario on that particular configuration.

Example of Performance Test Cases

Writing test cases for performance testing requires a different mindset compared to writing functional test cases.

Read more: How To Write Functional Test Cases.

  • To verify whether an application is capable of handling a certain number of simultaneous users
  • To verify whether the response time of an application under load is within an acceptable range when the network connectivity is slow
  • To verify the response time of an application under low, normal, moderate and heavy load conditions
  • To check whether the server remains functional without crashing under high load
  • To verify whether an application reverts to normal behavior after a peak load
  • To verify database server and CPU and memory usage of the application under peak load
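As a toy illustration of the first two test cases, the sketch below times a request handler under N simulated concurrent users. `handle_request` is a hypothetical stand-in for a real HTTP call; real performance tests use dedicated tools such as JMeter or LoadRunner rather than hand-rolled scripts like this:

```python
# Hedged sketch: measuring response times under simulated concurrent users.
# `handle_request` is a made-up stand-in for a real network call.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Time one simulated request (the sleep stands in for real work)."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def run_load(users: int) -> list[float]:
    """Fire `users` concurrent requests and collect their response times."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(handle_request) for _ in range(users)]
        return [f.result() for f in futures]

timings = run_load(20)
print(f"avg: {sum(timings) / len(timings):.3f}s, max: {max(timings):.3f}s")
```

Comparing the average and maximum response times across different user counts is the simplest way to see where the acceptable range is exceeded.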

How to choose the right Performance Testing Tool

There are many tools in the market to do performance testing. It is impossible to mention the best performance testing tool out of all the tools available. It is because every company has its own needs. What’s perfect for one company may not be suitable for another company. We have to do some analysis before choosing the right tool. Here are some factors we have to consider when choosing the best performance testing tool.


  • Budget (License cost)
  • Types of license
  • Protocol support
  • Customer preference of load testing tool
  • The cost involved in training employees on the selected tool
  • Hardware/Software requirements of a loading tool
  • Tool Vendor support and update policy

There are a lot of performance testing tools in the market. There are free website load testing tools, paid tools and freemium tools. Almost all the commercial performance testing tools have a free trial. You can get a chance to work hands-on before deciding which is the best tool for your needs.

Some of the popular performance testing tools are LoadRunner, Apache JMeter, NeoLoad, StresStimulus, LoadUI Pro, WebLOAD, Rational Performance Tester, AppLoader, SmartMeter.io, Silk Performer, StormRunner Load, LoadView.

In this article, we have covered most of the information required to understand Performance testing. If you have any queries, please comment in the comment section below.[/vc_column_text][vc_column_text]

Penetration Testing Tutorial | Software Testing Material

[/vc_column_text][vc_column_text]In this penetration testing tutorial (pen test tutorial), we are going to learn the following:

  • 1. What is Penetration Testing
  • 2. Why is Penetration Testing necessary
  • 3. How often to conduct pen testing
  • 4. What are the phases of Penetration Testing
  • 5. What are the root causes of Security Vulnerabilities
  • 6. What is the difference between Penetration Testing & Vulnerability Scanning
  • 7. Who Performs Pen-testing
  • 8. Role and Responsibilities of Penetration Testers
  • 9. Types of Penetration Testers
  • 10. What is the difference between Black, White and Grey hat hackers
  • 11. What are the types of Penetration Tests
  • 12. What are the Types of Pen Testing
  • 13. Limitations of Penetration Testing
  • 14. Penetration Testing Tools
  • 15. How to choose a Penetration Testing Tools

Let’s dive in further to learn Penetration testing without further delay.

What is Penetration Testing?

Penetration testing is a type of security testing performed to evaluate the security of a system (hardware, software, networks, or an information system environment). The goal of this testing is to find all the security vulnerabilities present in an application by evaluating the security of the system with malicious techniques, to protect the data from hackers, and to maintain the functionality of the system. It is a type of non-functional testing in which authorized attempts are made to violate the security of the system. It is also known as Pen Testing or a Pen Test, and the tester who does this testing is a penetration tester, aka an ethical hacker.

Why is Penetration Testing necessary?

If a system is not secured, an attacker can gain unauthorized access to it. Penetration testing evaluates the security of a system and protects it against internal and external threats. It identifies the vulnerabilities and determines whether unauthorized access or other malicious activity is possible.

Organizations conduct penetration testing for a number of reasons:

  1. To prevent data breaches
  2. To check security controls
  3. To meet compliance requirements
  4. To ensure the security of new applications
  5. To assess incident detection and response effectiveness

How often to conduct pen testing?

Cyber-attacks are quite common these days. It is very important to conduct regular vulnerability scans and penetration tests to detect recently discovered and previously unknown vulnerabilities.

The frequency of pen testing should depend on your organization’s security policy. However, conducting pen testing regularly can reveal the weaknesses of your system and keep it safe from security breaches.

Usually, pen testing is done after the deployment of new infrastructure and applications. Also, it is done after major changes to infrastructure and applications.

Vulnerability scanning examines servers for vulnerabilities. We have to make sure the vulnerabilities we find are not false positives; in fact, reporting false positives is a downside of vulnerability scanning.

Penetration testing examines the servers for vulnerabilities and exploits them.

Both vulnerability scanning and penetration testing can test an organization’s ability to detect security breaches. Organizations need to scan both external and internal infrastructure and applications to protect against external and internal threats.

Organizations have to conduct regular penetration testing for the following reasons:

  • To find security vulnerabilities in a system
  • To secure user data
  • To test applications that are often the avenues of attack
  • To identify new bugs in an existing system after deployment or after major changes done in the system
  • To prevent black hat attacks and safeguard user data
  • To control revenue loss
  • To improve the existing security standards

What are the phases of Penetration Testing?

The process of penetration testing can be divided into five phases, which are as follows:

1. Planning phase

In this phase, we define the scope (which system to test and the goals and objectives to achieve with the penetration test) and the resources and tools (vulnerability scanners or penetration testing tools) to employ for test execution.

2. Discovery phase

In this phase, we collect as much information as possible about the systems that are in the scope of the penetration test.
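
During discovery, pen testers typically map which network services are reachable on the target. As a rough illustration of the idea (real engagements use dedicated tools such as Nmap, and the host and port list below are placeholders), a minimal TCP connect check can be sketched in Python:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_ports(host: str, ports) -> list:
    """Return the subset of `ports` that accept a TCP connection."""
    return [p for p in ports if is_port_open(host, p)]
```

For example, `scan_ports("127.0.0.1", [22, 80, 443])` would list which of those ports are open on the local machine. Only run scans like this against systems you are authorized to test.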

3. Vulnerability Assessment Phase

In this phase, we identify and report the vulnerabilities using vulnerability scanning tools.

4. Exploitation Phase

In this phase, we try to exploit the vulnerabilities identified in the previous phase (i.e., the vulnerability assessment phase) to gain access to the target system.

5. Reporting Phase

In this phase, we document all the results and findings in a clear, effective manner. The report serves as a reference document during mitigation activities to address the identified vulnerabilities.

What are the root causes of Security Vulnerabilities?

Some of the root causes of security vulnerabilities are as follows:

Complexity:

Security vulnerabilities rise in proportion to the complexity of a system. Complexity in software, hardware, information, businesses, and processes introduces more security vulnerabilities.

Connectivity:

Every unsecured connection is a potential avenue for exploitation.

Design Flaws:

Design bugs in software and hardware can expose businesses to significant risks, so they should be caught and eliminated at the design stage.

Configuration:

Poor system configuration introduces security vulnerabilities.

User Input:

User input can be crafted to attack the receiving system, for example through SQL injection or buffer overflows.
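
To make the SQL injection risk concrete, here is a small illustrative sketch using Python’s built-in sqlite3 module with a hypothetical users table. The vulnerable login builds the query by string concatenation, while the safe version uses a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name: str, password: str) -> bool:
    # DANGEROUS: user input is pasted straight into the SQL string.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    # Parameterized query: input is treated as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None
```

With the classic payload `' OR '1'='1`, `login_vulnerable("alice", "' OR '1'='1")` succeeds without the real password, while `login_safe` correctly rejects it.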

Management:

Management should have a proper risk management plan to avoid security vulnerabilities in the system.

Passwords:

Passwords exist to prevent unauthorized access and secure your personal data. Insecure password practices (sharing passwords, writing them down somewhere, or choosing easy-to-guess passwords) allow hackers to guess your password easily.
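
Applications can reduce this risk by enforcing a minimal password policy at registration time. The following sketch is illustrative only; the rules and the tiny common-password blocklist are assumptions, not a standard:

```python
import re

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def is_weak_password(password: str) -> bool:
    """Return True if the password fails this (illustrative) policy."""
    if password.lower() in COMMON_PASSWORDS:
        return True
    if len(password) < 10:
        return True
    # Require at least three of four character classes.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) < 3
```

A real deployment would use a much larger blocklist and pair this with rate limiting and secure password hashing.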

Lack of training:

Lack of training leads to human errors. Human errors can be prevented by giving proper training to the employees.

Human errors:

Human errors such as improper disposal of documents, coding errors, giving out passwords to phishing sites are a significant source of security vulnerabilities.

Communication:

Communication channels such as telephone, mobile, and the internet open the door to security vulnerabilities.

Social:

Employees disclosing sensitive information to outsiders is one of the most common causes of security threats.

What is the difference between Penetration Testing & Vulnerability Scanning?

Before looking into the difference between penetration testing and vulnerability scanning, let’s define two frequently used terms: vulnerability and exploit.

What is a Vulnerability?

A vulnerability is a security weakness or flaw which can be exploited by an attacker, to perform unauthorized actions within a system.

What Is An Exploit?

An exploit is a piece of software that takes advantage of a vulnerability to cause unintended behavior on a system, typically in order to gain control of that system and attack it.

Now let’s see the difference between Penetration testing and vulnerability assessment.

There is some confusion in the industry about the difference between penetration testing and vulnerability scanning. Although the two terms are often used interchangeably, there are differences between them: penetration testing is not the same as vulnerability scanning.

Vulnerability Scanning:

In vulnerability scanning (aka vulnerability assessment), we just identify and report the vulnerability using vulnerability scanning tools.

It’s the first step to improve the security of a system.

A vulnerability assessment report should contain the title, the description, and the severity of each vulnerability.

Penetration Testing:

In penetration testing (aka a pen test), we identify the vulnerabilities and attempt to exploit them using penetration testing tools. We repeat the same penetration tests until the system no longer fails any of them.

A penetration testing report lists the vulnerabilities that were exploited successfully.

If an organization wants to protect its systems from security issues, it should carry out vulnerability assessments and penetration tests on a regular basis.

Pen testing can be divided into three techniques: manual penetration testing, automated penetration testing, and a combination of the two.

Automated penetration testing tools cannot find every vulnerability; some vulnerabilities can only be identified by a manual scan. That is why experienced pen testers use their skills and experience to attack a system with manual penetration testing.

Who Performs Pen-testing?

Pen testing can be performed by testers, network specialists, or security consultants.

Role and Responsibilities of Penetration Testers:

The responsibilities of penetration testers vary from company to company. However, several core responsibilities are common to all pen testers:

  • Understand complex computer systems and technical cybersecurity terms
  • Collect the required information from the organization to enable penetration tests
  • Plan and create penetration methods, scripts, and tests
  • Carry out onsite testing of clients’ infrastructure and remote testing of clients’ networks to expose weaknesses in security
  • Work with clients to determine their requirements for the test
  • Conduct penetration testing and vulnerability scanning using automated tools, ad-hoc tools, and manual testing
  • Analyze root causes and deliver strategic recommendations during security reviews
  • Create reports and recommendations from the findings
  • Understand how the flaws identified could affect the business, or a business function, if they are not fixed
  • Make sure the flaws identified are reproducible, so that developers can fix them easily
  • Keep all data and information confidential

Types of Penetration Testers

  1. Black hat penetration testers
  2. White hat penetration testers
  3. Grey hat penetration testers

What is the difference between Black, White and Grey hat hackers?

Black Hat Hackers:

Black hat hackers (aka black hats) are considered cybercriminals. They use their skills with evil motives for personal or financial gain. They break into computer networks to destroy data or make systems unusable for those who are authorized to use them. They are typically involved in hacking banks, stealing credit card information, and creating and using botnets to perform DDoS (Distributed Denial of Service) attacks.

White Hat Hackers:

White hat hackers (aka white hats) are usually called ethical hackers, because they work for good reasons rather than evil. Companies recruit white hat hackers as full-time employees, and some companies work with them on a contract basis as security specialists to find the security loopholes in their systems. White hat hackers attack a system only after getting permission from its owner.

Grey Hat Hackers:

Grey hat hackers (aka grey hats) may hack a system for ethical or unethical reasons; their activities fall somewhere between those of black hat and white hat hackers. Grey hat hackers look for vulnerabilities in a system without the owner’s permission, which makes this type of hacking illegal even when the intent is not malicious. After finding security vulnerabilities, they report them to the owner of the system, sometimes requesting a fee to fix the issue. If the owner doesn’t respond, the hacker may disclose the security flaw to the public.

What are the types of Penetration Tests?

The different types of pen tests are as follows.

1. Network Services Tests

Network services pen test aims to identify security weaknesses and vulnerabilities in the network. These tests can be run both locally and remotely.

Pen testers should target the following network areas:

  • Firewall config test
  • Stateful analysis test
  • Firewall bypass test
  • IPS deception
  • DNS-level attacks such as zone transfer testing, switching or routing issues, and other required network tests

Pen testers also cover some of the most common software packages:

  • Secure Shell (SSH)
  • SQL Server or MySQL
  • Simple Mail Transfer Protocol (SMTP)
  • File Transfer Protocol (FTP)

2. Web Application Tests

Web application pen tests (web application penetration testing) aim to identify the security vulnerabilities of web applications, web browsers, and their components like ActiveX, Applets, Silverlight and APIs.

3. Client Side Tests

Client-side pen tests aim to find security vulnerabilities and exploit them in client-side software applications.

4. Wireless Tests

Wireless pen tests involve analyzing the Wi-Fi networks and wireless devices deployed at the client site, such as laptops, netbooks, tablets, and smartphones.

5. Social Engineering Tests

Employees disclosing sensitive information to outsiders is one of the most common causes of security threats. All employees should follow security standards and policies to avoid falling for a social engineering penetration attempt. These tests are mostly carried out through communication channels such as telephone, mobile, and the internet, and they target employees, helpdesks, and processes.

Social engineering pen tests can be subcategorized into two types:

Remote Tests:

Remote tests aim to trick an employee into disclosing sensitive information via electronic means (e.g., a phishing email campaign).

Physical Tests:

Physical tests check whether strong physical security methods are in place to protect sensitive information. They involve human-handling tactics, such as convincing an employee over phone calls, and are generally used at high-security facilities such as military installations.

What are the Pen Testing Approaches?

There are three pen testing approaches, which are as follows:

  1. Black Box Penetration Testing
  2. White Box Penetration Testing
  3. Grey Box Penetration Testing

Black Box Penetration Testing

In the black box penetration testing approach, pen testers assess the target system without any knowledge of its internals. They are given only high-level details, such as a URL or company name, and they don’t examine any programming code. Because they cannot gather detailed information about the target system up front, they launch an all-out, brute-force attack against the system to find its weaknesses or vulnerabilities, simulating the perspective of an external attacker. This approach is also referred to as the “trial and error” approach.

White Box Penetration Testing

In the white box penetration testing approach, pen testers assess the target system with complete details about it. Since they have complete details about the system, a white box test can be accomplished much more quickly than a black box test. This approach is also known as clear box, glass box, open box, and structural testing.

Grey Box Penetration Testing

In the grey box penetration testing approach, pen testers have partial knowledge of the target system, such as limited credentials or design documentation. It is a combination of the black box and white box penetration testing techniques, and testers typically utilize both manual and automated testing processes.

Limitations of Penetration Testing

Penetration testing cannot find all vulnerabilities in a target system. There are some limitations based on the resources and restrictions of a test, such as:

  • Limitations of skills – it’s hard to find professional pen testers
  • Limitations of scope – most organizations skip some tests due to resource or security constraints
  • Limitations of time
  • Limitations of budget
  • Limitations of tools used

Penetration Testing Tools:

Pen Testing Tools are classified into Vulnerability Scanners and Vulnerability Attackers. Vulnerability Scanners show you the weak spots of the system whereas Vulnerability Attackers show you the weak spots of the system and attack them.

Free Penetration Testing Tools:

Some of the free penetration testing tools (network/web vulnerability assessment tools) are Nmap, Nessus, Metasploit, Wireshark, OpenSSL, Cain & Abel, and W3af.

Commercial Penetration Testing Tools:

Some of the commercial penetration testing tools and services are Pure Hacking, Torrid Networks, SecPoint, and Veracode.

How to Choose a Penetration Testing Tool?

Choose a penetration testing tool based on the following points:

  • It should be easy to use
  • It should be easy to configure & deploy
  • It should scan vulnerabilities easily
  • It should categorize vulnerabilities based on severity
  • It should generate detailed reports and logs
  • It should be cost-effective in terms of budget
  • It should come with a good support team & technical documentation

Conclusion:

We’ve prepared this tutorial with software testers in mind and covered everything they need to learn and apply penetration testing at work. Even though it’s a beginner’s guide to penetration testing, we hope it helps you improve your knowledge of the subject.

If you have any queries, please comment below. If you are a penetration tester, please share your experience in the comment section as well.[/vc_column_text][vc_column_text]

Security Testing Tutorial | Software Testing Material

[/vc_column_text][vc_column_text]

What is Security Testing?

Security testing is a process to determine whether the system protects data and maintains functionality as intended.

Security testing aims to find all possible loopholes and weaknesses of the system at an early stage, in order to avoid inconsistent system performance, unexpected breakdowns, loss of information, loss of revenue, and loss of customer trust.

It comes under Non-functional Testing.

We can do security testing using both manual and automated security testing tools and techniques. Security testing reviews the existing system to find vulnerabilities.

Most companies perform security testing on newly deployed or developed software, hardware, and network or information system environments. However, experts highly recommend making security testing part of the information system audit process for existing environments as well, to detect all possible security risks and help developers fix them.

Security testing aims to cover the following basic security components:

  1. Authentication
  2. Authorization
  3. Availability
  4. Confidentiality
  5. Integrity
  6. Non-repudiation
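
As a small illustration of the integrity component above, a message can be signed with an HMAC so the receiver detects tampering. This is only a sketch using Python’s standard library; the shared key is a made-up placeholder:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-shared-secret"  # placeholder key, for illustration only

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag for the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign(message), tag)
```

If an attacker changes the message in transit, the tag no longer matches and `verify` returns False.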

Why Security Testing is Important?

Security testing is important due to the increase in the number of privacy breaches that websites are facing today. In order to avoid these privacy breaches, software development organizations have to adopt security testing in their development strategy based on testing methodologies and latest industry standards.

It is important to adopt Security Process in each and every phase of SDLC.

Requirement Phase: Security analysis of all the requirements
Design Phase: Implementation of Test Plan including Security tests.
Code & Unit Testing: Security White Box Testing
Integration Testing: Black Box Testing
System Testing: Black Box Testing & Vulnerability Scanning
Implementation of System Testing: Penetration Testing & Vulnerability Scanning
Support: Impact Analysis

Top Vulnerabilities:

Security tests include testing for vulnerabilities such as

  • SQL Injection
  • Cross-Site Scripting (XSS)
  • Session Management
  • Broken Authentication
  • Cross-Site Request Forgery (CSRF)
  • Security Misconfiguration
  • Failure to Restrict URL Access
  • Sensitive Data Exposure
  • Insecure Direct Object Reference
  • Missing Function Level Access Control
  • Using Components with Known Vulnerabilities
  • Unvalidated Redirects and Forwards
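
For example, cross-site scripting (XSS) is usually prevented by escaping user input before rendering it in HTML. A minimal sketch using Python’s standard library (the `render_comment` helper is hypothetical):

```python
from html import escape

def render_comment(user_input: str) -> str:
    """Escape user input before embedding it in an HTML page."""
    return "<p>" + escape(user_input, quote=True) + "</p>"
```

A payload such as `<script>alert(1)</script>` is rendered as inert text instead of executing in the victim’s browser.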

Types of Security Testing:

There are seven main types of security testing which are presented below.

Vulnerability Scanning:

In vulnerability scanning (aka vulnerability assessment), we just identify and report the vulnerability using vulnerability scanning tools.

It’s the first step to improve the security of a system.

A vulnerability assessment report should contain the title, the description, and the severity of each vulnerability.

Security Scanning:

Security scanning is done to find weak points in the security of network and system and also provides solutions to reduce these risks.

Penetration Testing:

In penetration testing (aka a pen test), we identify the vulnerabilities and attempt to exploit them using penetration testing tools. We repeat the same penetration tests until the system no longer fails any of them.

Pen testing can be divided into three techniques such as manual penetration testing, automated penetration testing and a combination of both manual & automated penetration testing.

Read more on Pen Testing Techniques

Risk Assessment:

Risk assessment involves reviewing and analyzing security risks, which are then prioritized as Low, Medium, or High. It also recommends possible ways to mitigate the risks.
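
One common way to arrive at those priorities is a simple likelihood-times-impact matrix. The thresholds below are illustrative assumptions, not a standard:

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Classify a risk from 1-5 likelihood and 1-5 impact scores."""
    score = likelihood * impact
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"
```

So a likely, damaging risk (5 × 4) lands in High, while a rare, minor one (1 × 2) lands in Low.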

Security Auditing:

Security auditing is an internal inspection of applications and operating systems to find security flaws. In some cases, an audit is done via line-by-line inspection of code.

Ethical Hacking:

Ethical hacking is performed on a system with the intent to find and expose its security issues. It is done by a white hat hacker: a security professional who uses their skills in a legitimate manner to reveal the defects of a system.

Read more: Types of Hackers

Posture Assessment:

Posture assessment is a combination of security scanning, ethical hacking, and risk assessment to present the security posture of a system or organization.

Security Testing Tools:

To find the flaws and vulnerabilities in a web application, there are many free, paid, and open source security testing tools available in the market. An advantage of open source tools is that we can easily customize them to match our requirements. Here we showcase some of the top open source security testing tools.

We use security testing tools for checking how secure a website or web application is.

Open Source Security Testing Tools:

Some of the open source security testing tools are Zed Attack Proxy, Wfuzz, and Wapiti.

Commercial Security Testing Tools:

Some of the commercial security testing tools are GrammaTech, AppScan, and Veracode.

Conclusion:

We know how important security testing is these days. It aims to find all possible loopholes and weaknesses of the system. Testers play the role of an attacker to find security-related bugs in the system.[/vc_column_text][vc_column_text]

PDCA Cycle (Plan Do Check Act) in Software Development Life Cycle

[/vc_column_text][vc_column_text]

What is PDCA Cycle?

PDCA Cycle is an iterative four-step management method used in business to focus on continuous improvement of processes. The PDCA cycle consists of four steps namely Plan, Do, Check, and Act. It is one of the key concepts of quality and it is also called the Deming circle/cycle/wheel.

Some of the cases where we use the PDCA cycle: when implementing any change, when starting a new improvement project, or when defining a repetitive process.

Let’s see what we do in each stage of PDCA Cycle in SDLC.

You could also check this video

PLAN: 

Plan a change (either to solve a problem or to improve some areas) and decide what goal to achieve.

Here we define the objective, strategy and supporting methods to achieve the goal of our plan.

DO:

Design or revise the business process as planned.

Here we implement the plan (put the plan into action) and test its performance.

CHECK: 

Evaluate the results to check whether we reached the goals as planned.

Here we make a checklist to record what went well and what did not work (lessons learned).

ACT: 

If the results are not as planned, continue the cycle to achieve the goal with a different plan.

Here we act on what is not working as planned. The task is to keep trying to improve the process with a different plan.

The PDCA cycle is a continuous process that repeats until we achieve the goals we planned.[/vc_column_text][vc_column_text]

Why did you choose Software Testing as a career?

[/vc_column_text][vc_column_text]“Why did you choose Software Testing as a career?” is one of the most common questions in the interview process, so get ready with an answer that impresses the interviewer. Jot down some points that relate to your own strengths and experience, and build your answer from them. Don’t just memorize an answer and recite it in the interview.

We usually receive emails from our readers as mentioned below.

Is software testing a good career choice?
How can I achieve more growth in a software testing career?
Which job position is better: software testing or software development?

Each career path is unique; we can’t deny that. If you are looking to become a software tester, or you already are one, you need a good answer that impresses the interviewer.

After graduation, many of us struggle to choose a career path. Some industry myths related to choosing software testing as a career are:

  1. Anyone can test. Development is better than testing.
  2. Salaries will be less compared to Developers in the industry
  3. Only the people who can’t code choose Software Testing as a career.
  4. There won’t be any growth in Software Testing.

Must Read: Manual Testing Complete Tutorial

Those days are gone; see the points below that address the myths mentioned above. If you are unsure about choosing software testing as a career, these points will strengthen your decision. And if you are already working as a software tester and worrying about your career growth, they prove that you have chosen the right career path.


  1. Not everyone can test. One needs good analytical skills to become a software tester, as well as good communication skills for reporting and convincing others.
  2. Salary may be lower when you start your career, but experienced testers earn packages on par with developers, and many companies offer automation testers even more than developers.
  3. It’s an old industry myth that only those who can’t code become software testers. The record-and-playback days are gone; this is the automation age, and an automation tester writes code to automate test scripts.
  4. Growth – a tester can become a test lead, project lead, automation architect, test manager, and so on. Ultimately everyone can reach the manager level.

Must Read: SQL Tutorial for Software Testers

Software Testing as a career – why I chose?

A simple answer is I love to be a Software Tester. So, I chose Software Testing as a career. I would like to mention a few more points on why I love to be a Software tester and chose Software Testing as a career.

I love solving logical puzzles, and testing is a kind of logical puzzle. We are given software that will go straight to the market once we confirm there are no bugs and it is ready to release. We, the testers, are the protectors at the gateway. We not only find bugs; we break the system too, in terms of stress testing.

I love helping others. 🙂 I proudly say that as a software tester, I help release a quality product to the market by finding bugs hidden in the software. Even though developers do their best to release a good product, there will always be some mistakes.

I love taking on challenges. In many projects, we need to test without specification documents, and it’s a big challenge to explore the system and find the bugs. Domain knowledge is also one of the biggest challenges a tester faces. We, the testers, explore the system, work to understand it, find bugs, and report them so they can be fixed and a quality product delivered to the market.

I love to write code too. Yes, I am an automation tester. Who said that only those who can’t code choose a career in software testing? As an automation tester, I write code to find bugs in the system and am involved in delivering a quality product.

Must Read: Top 100 Selenium Interview Questions

I love interacting with people. As a software tester, I get many opportunities to interact with people (not only peers but also stakeholders). Testers need to know all parts of the application they are going to test, so we also discuss with clients to gain more domain knowledge. This way we meet many people and share knowledge.

I love being on a team that delivers quality products. Customers spend a lot of money to buy a product, and no customer will be happy if the product doesn’t work as intended. I play a role in delivering a quality product that doesn’t just make customers happy, it delights them.

Must Read: Agile Scrum Methodology in Software Development

Do you want to choose software testing as a career? If yes, start preparing your resume. I have posted some sample resumes for both freshers and experienced professionals. Click here: Sample Resume for Software Testers to get sample resumes and modify them as per your requirements.[/vc_column_text][vc_column_text]

Software Testing Interview Questions Free eBook

[/vc_column_text][vc_column_text]Software Testing Interview Questions Free eBook – a free eBook from SoftwareTestingMaterial. We have covered almost all the important interview questions related to software testing, and additionally we have covered some of the common interview questions asked by interview panels.

Go through the eBook and leave your valuable comments below to improve the next version of this eBook.

Enter your name and email in the form below and click Subscribe. We will send a confirmation email to the address you submit; please confirm the subscription to get the download link of our “Software Testing Interview Questions Free eBook”.[/vc_column_text][vc_column_text]

Principles of Agile Software Development | Software Testing Material

[/vc_column_text][vc_column_text]Principles of Agile Software Development – here we list the 12 principles of Agile software development.

Before going ahead, let’s see what is Agile Testing.

Agile testing is a software testing practice that follows the principles of agile software development. Agile is an iterative development methodology in which requirements keep changing as per customer needs. Testing is done in parallel with development in an iterative model, and the test team receives frequent code changes from the development team for testing.

In the below video, we have explained each point in detail.

 

If you liked this video, then please subscribe to our YouTube Channel for more video tutorials.

The 12 Principles of Agile Software Development:

1. The highest priority is to satisfy the customer through early and continuous delivery of valuable software
2. Welcome changing requirements, even late in development
3. Deliver working software frequently
4. Business people and developers must work together daily, with transparency, throughout the project
5. Build projects around motivated individuals
6. The most effective form of communication is face-to-face conversation
7. Working software is the primary measure of progress
8. Maintain a constant, sustainable pace
9. Continuous attention to technical excellence
10. Simplicity – the art of maximizing the amount of work not done – is essential
11. Self-organizing teams produce the best architectures, requirements, and designs
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly

Earlier, I posted detailed articles on “Agile Scrum Methodology” and “Agile Testing Interview Questions”. If you haven’t gone through them yet, you can browse them by clicking those links.[/vc_column_text][/vc_column][/vc_row]
