Functional Testing

Functional testing is a type of black-box testing performed to confirm that the functionality of an application or system behaves as expected.

It is done to verify all the functionality of an application.

There must be something that defines what is acceptable behavior and what is not.

This is specified in a functional or requirement specification: a document that describes what a user is permitted to do, so that the conformance of the application or system to it can be determined. Additionally, this sometimes includes the actual business-side scenarios to be validated.

Therefore, functional testing can be carried out via two popular techniques:

  • Testing based on Requirements: Contains all the functional specifications which form a basis for all the tests to be conducted.
  • Testing based on Business scenarios: Contains the information about how the system will be perceived from a business process perspective.

Testing and Quality Assurance are a huge part of the SDLC process. As testers, we need to be aware of all the types of testing, even if we are not directly involved with them on a daily basis.

Testing is a vast field, and dedicated testers often specialize in different kinds of testing. Most of us are probably familiar with most of these concepts already, but it will not hurt to organize them all here.

Functional testing has many categories, and these can be applied depending on the scenario.

The most prominent types are briefly discussed below:

Unit Testing:

Unit testing is usually performed by the developer, who writes different code units (related or unrelated) to achieve a particular functionality. This usually entails writing unit tests that call the methods in each unit and validate that, when the required parameters are passed, the return value is as expected.

Code coverage is an important part of unit testing; test cases need to exist to cover the following three areas (a minimal unit test sketch follows the list):

i) Line coverage
ii) Code path coverage
iii) Method coverage
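
For illustration, here is a minimal unit test sketch in Python using the built-in unittest module. The apply_discount function and its rules are hypothetical, made up for this example, not taken from any real application.

```python
# A minimal unit test sketch using Python's built-in unittest module.
# apply_discount is a hypothetical function used only for illustration.
import unittest


def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class ApplyDiscountTest(unittest.TestCase):
    def test_valid_discount(self):
        # Exercises the normal path (line and method coverage).
        self.assertEqual(apply_discount(200, 10), 180)

    def test_invalid_discount_raises(self):
        # Exercises the error-handling branch (code path coverage).
        with self.assertRaises(ValueError):
            apply_discount(200, 150)


if __name__ == "__main__":
    unittest.main()
```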

Sanity Testing: Testing that is done to ensure that all the major and vital functionalities of the application/system are working correctly. This is generally done after a smoke test.

Smoke testing: Testing that is done after each build is released to test, in order to ensure build stability. It is also called build verification testing.

Regression tests: Testing performed to ensure that new code, enhancements, or bug fixes do not break existing functionality or cause instability, and that the system still works according to the specifications.

Regression tests need not be as extensive as the actual functional tests, but should provide just enough coverage to certify that the functionality is stable.

Integration tests: When the system relies on multiple functional modules that might individually work perfectly but have to work coherently when combined to achieve an end-to-end scenario, validation of such scenarios is called integration testing.

Beta/Usability testing: The product is exposed to actual customers in a production-like environment, and they test it. The user's level of comfort is assessed and feedback is collected. This is similar to User Acceptance Testing.

Functional System Testing:

System testing is performed on the complete system to verify whether it works as expected once all the modules or components are integrated.

End-to-end testing is performed to verify the functionality of the product as a whole. It is performed only after system integration testing is complete, covering both the functional and non-functional requirements.

Entry criteria:

  • Requirement Specification document is defined and approved.
  • Test Cases have been prepared.
  • Test data has been created.
  • The environment for testing is ready, all the tools that are required are available and ready.
  • The complete or partial application is developed, unit tested, and ready for testing.

Exit Criteria:

  • Execution of all the functional test cases has been completed.
  • No critical or P1, P2 bugs are open.
  • Reported bugs have been acknowledged.

The various steps involved in this testing are mentioned below:

  • The very first step is to determine the functionality of the product that needs to be tested. This includes testing the main functionalities, error conditions and messages, and usability, i.e. whether the product is user-friendly or not.
  • The next step is to create the input data for the functionality to be tested, as per the requirement specification.
  • Then the expected output for the functionality under test is determined from the requirement specification.
  • Prepared test cases are executed.
  • The actual output (the output after executing the test case) and the expected output (determined from the requirement specification) are compared to find out whether the functionality works as expected, as sketched below.
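
As an illustration of this execute-and-compare step, here is a minimal Python sketch. The login function and its behavior are hypothetical stand-ins, not part of any real application.

```python
# A minimal sketch of the "execute and compare" step, assuming a hypothetical
# login(user, password) function that returns the page the user lands on.
def login(user, password):
    # Stand-in implementation used only for illustration.
    valid_credentials = {"alice": "secret"}
    return "home" if valid_credentials.get(user) == password else "login"


def test_valid_login_lands_on_home_page():
    expected = "home"                  # determined from the requirement specification
    actual = login("alice", "secret")  # output after executing the test step
    assert actual == expected, f"expected {expected!r}, got {actual!r}"


test_valid_login_lands_on_home_page()
```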

Different kinds of scenarios can be thought of and authored in the form of “test cases”. As QA folks, we all know what the skeleton of a test case looks like.

It mostly has four parts (a simple sketch as a data structure follows the list):

  • Test summary
  • Pre-requisites
  • Test Steps and
  • Expected results.
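
For illustration only, the same four-part skeleton could be captured as a simple data structure. The field names and sample content below are assumptions made for this sketch, not a standard format.

```python
# A sketch of the four-part test case skeleton as a simple data structure.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FunctionalTestCase:
    summary: str
    prerequisites: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)
    expected_results: List[str] = field(default_factory=list)


tc = FunctionalTestCase(
    summary="Fully privileged user can make account changes",
    prerequisites=["User account exists", "User has the required privileges"],
    steps=["Enter user id and password", "Edit account information and save", "Log out"],
    expected_results=["User lands on the home page", "Changes are saved",
                      "User returns to the login page"],
)
print(tc.summary)
```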

Obviously, attempting to author each and every kind of test is not only impossible but also time-consuming and expensive.

Typically, we want to uncover the maximum number of bugs, with no escapes, using the existing tests. Therefore, QA needs to use optimization techniques and strategize how to approach the testing.

Functional Testing Techniques

The system under test may have many components which, when coupled together, achieve the user scenario.

For example, a customer scenario for an HRMS application would include loading the application, entering the correct credentials, landing on the home page, performing some actions, and logging out of the system. This particular flow has to work without any errors for a basic business scenario.

Some samples are given below:

Test Case 1: Fully privileged user can make account changes

Pre-requisites:
  1) User account must exist.
  2) User needs to have the required privileges.

Test steps:
  1) User enters the user id and password.
  2) User sees edit permissions to modify the account itself.
  3) User modifies the account information and saves.
  4) User logs out.

Expected results:
  1) User is logged into the home page.
  2) The edit screen is presented to the user.
  3) The account information is saved.
  4) The user is taken back to the login page.

Test Case 2: A valid user without full privileges

Pre-requisites:
  1) User account must exist.
  2) User needs to have only the minimum privileges.

Test steps:
  1) User enters the user id and password.
  2) User sees edit permissions to modify only certain fields.
  3) User modifies only those fields and saves.
  4) User logs out.

Expected results:
  1) User is logged into the home page.
  2) The edit screen is presented to the user with only certain fields editable; the account fields are grayed out.
  3) The modified fields are saved.
  4) The user is taken back to the login page.

In equivalence partitioning, the test data are segregated into various partitions called equivalence data classes. Data in each partition must behave in the same way, so only one condition from each partition needs to be tested. Similarly, if one condition in a partition does not work, then none of the others will work.

For example, in the above scenario, the user id field can have a maximum of 10 characters, so any input longer than 10 characters should behave the same way.
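
A minimal sketch of this idea in Python with pytest is shown below. The is_valid_user_id function and the 6-to-10-character rule are assumptions made for the example.

```python
# A sketch of equivalence partitioning with pytest, assuming the hypothetical
# rule that a valid user id is 6 to 10 characters long.
import pytest


def is_valid_user_id(user_id):
    # Stand-in validation rule used only for this example.
    return 6 <= len(user_id) <= 10


# One representative value per equivalence class is enough.
@pytest.mark.parametrize("user_id, expected", [
    ("abc", False),       # partition: fewer than 6 characters
    ("user1234", True),   # partition: 6 to 10 characters
    ("a" * 15, False),    # partition: more than 10 characters
])
def test_user_id_partitions(user_id, expected):
    assert is_valid_user_id(user_id) is expected
```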

Boundary tests supply data at the limits of the application and validate how it behaves.

If inputs are supplied beyond the boundary values, it is considered negative testing. For instance, a minimum of 6 characters for the user id sets the lower boundary; tests written with a user id of fewer than 6 characters are boundary analysis tests.
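
Continuing with the same assumed 6-to-10-character rule, a minimal boundary value sketch could look like this:

```python
# A sketch of boundary value tests around the assumed limits: a minimum of 6
# and a maximum of 10 characters for the user id.
import pytest


def is_valid_user_id(user_id):
    # Same hypothetical validation rule as in the partitioning example.
    return 6 <= len(user_id) <= 10


@pytest.mark.parametrize("length, expected", [
    (5, False),    # just below the lower boundary (negative test)
    (6, True),     # on the lower boundary
    (10, True),    # on the upper boundary
    (11, False),   # just above the upper boundary (negative test)
])
def test_user_id_boundaries(length, expected):
    assert is_valid_user_id("x" * length) is expected
```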

Decision-based tests are centered on the possible outcomes of the system when a particular condition is met.

In the above scenario, the following decision-based tests can be immediately derived (a sketch in code follows the list):

  1. If the wrong credentials are entered, it should indicate that to the user and reload the login page.
  2. If the user enters the correct credentials, it should take the user to the next UI.
  3. If the user enters the correct credentials but wishes to cancel login, then it should not take the user to the next UI and should reload the login page.
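
A minimal sketch of these three decisions as automated checks is shown below. The handle_login function and its return values are hypothetical stand-ins.

```python
# A sketch of the three decision-based tests above, assuming a hypothetical
# handle_login(credentials_ok, cancelled) function that returns the next page.
def handle_login(credentials_ok, cancelled):
    # Stand-in decision logic used only for illustration.
    if cancelled or not credentials_ok:
        return "login"   # reload the login page
    return "home"        # take the user to the next UI


def test_wrong_credentials_reload_login_page():
    assert handle_login(credentials_ok=False, cancelled=False) == "login"


def test_correct_credentials_go_to_next_ui():
    assert handle_login(credentials_ok=True, cancelled=False) == "home"


def test_cancelled_login_stays_on_login_page():
    assert handle_login(credentials_ok=True, cancelled=True) == "login"


for test in (test_wrong_credentials_reload_login_page,
             test_correct_credentials_go_to_next_ui,
             test_cancelled_login_stays_on_login_page):
    test()
```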

Alternate path tests are run to validate all the possible ways, other than the main flow, of accomplishing a function.

Once most of the bugs have been uncovered through the above techniques, ad-hoc tests are a great way to uncover any discrepancies not observed earlier. These are performed with the mindset of breaking the system to see whether it responds gracefully.

For Example, a sample test case would be:

  • A user is logged in, but the admin deletes the user account while the user is performing some operations. It would be interesting to see whether the application handles this gracefully.

Functional vs Non-Functional Testing:

Non-functional tests focus on the quality of the application/system as a whole. They try to determine how well the system performs as per the customer requirements, in contrast to the functions it performs.

Advantages

Enlisted below are the various advantages of Functional Testing:

  • This testing replicates the actual system, i.e. it is a replica of what the product is in the live environment. Testing is focused on the specifications as per customer usage, i.e. system specifications, operating system, browsers, etc.
  • It does not rely on any assumptions about the structure of the system.
  • This testing helps deliver a high-quality product that meets the customer requirements and ensures the customer is satisfied with the end results.
  • It helps deliver a bug-free product in which all the functionalities work as per the customer requirements.
  • Risk-based testing is done to decrease the chance of any kind of risk in the product.

Limitations

This testing is done to make sure that the product works as expected, that every requirement is implemented, and that the product is exactly as per the customer requirements.

However, it does not consider other factors such as the performance of the product, i.e. responsiveness, throughput, etc., which are important and very much required to be part of testing before releasing the product.

Disadvantages

  • There are many chances of performing redundant testing.
  • Logical errors in the product can be missed.
  • This testing is based on the requirements; if the requirements are incomplete, complicated, or unclear, performing this testing becomes difficult and can be time-consuming.

Hence, both these types of testing are needed for a quality product.
