Basic Manual Testing Interview Questions and Answers

1. What is Manual Testing?

Manual Testing is a software testing process in which test cases are executed by human testers without the use of automation tools or scripts. The tester interacts with the application directly to find defects, checking its functionality, performance, and usability by simulating real-world scenarios through predefined test cases or exploratory testing. It is ideal for complex and subjective test scenarios (e.g., UI/UX testing) and provides human judgment that can identify issues automated tests might miss.

Its main challenges are that it is time-consuming and repetitive, and less efficient than automation for large-scale or long-term testing efforts.

2. What is the difference between a test case and a test scenario?

A test case is a specific, detailed set of conditions, inputs, actions, and expected results used to verify a particular feature or functionality of an application. It includes the test steps, preconditions, expected output, and postconditions.

A test scenario, on the other hand, is a high-level description of a functionality or feature to be tested. It outlines a broad testing objective without detailing the specific steps, inputs, or expected results. Test scenarios are generally used to identify areas to test, while test cases are derived from those scenarios for more granular execution.
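The relationship above can be sketched in code. This is a hypothetical example, not a standard format: one high-level login scenario is expanded into concrete test cases, each represented as a Python dictionary with steps, inputs, and an expected result.

```python
# Hypothetical sketch: one scenario (WHAT to test) expanded into
# concrete test cases (exactly HOW to test it). Field names and the
# login example are illustrative, not a standard template.
scenario = "Verify that the login functionality works correctly"

test_cases = [
    {
        "id": "TC-01",
        "precondition": "User account exists and is active",
        "steps": ["Open login page", "Enter valid credentials", "Click Login"],
        "input": {"username": "alice", "password": "correct-pass"},
        "expected": "User is redirected to the dashboard",
    },
    {
        "id": "TC-02",
        "precondition": "User account exists and is active",
        "steps": ["Open login page", "Enter wrong password", "Click Login"],
        "input": {"username": "alice", "password": "wrong-pass"},
        "expected": "An 'invalid credentials' error message is shown",
    },
]

# One scenario typically yields several test cases.
for tc in test_cases:
    print(tc["id"], "->", tc["expected"])
```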

3. What are the different levels of testing?

  • The different levels of testing are:
    • Unit Testing: Focuses on individual components or functions of the code to ensure they work as expected. Usually done by developers.
    • Integration Testing: Verifies that different modules or components of the system work together as expected.
    • System Testing: Tests the entire system as a whole, ensuring all components function correctly in a complete environment.
    • Acceptance Testing: Ensures the system meets the business requirements and is ready for deployment. It often involves end users.
    • Regression Testing: Ensures new changes haven’t affected existing functionality.

4. What is the difference between functional and non-functional testing?

Functional Testing focuses on verifying that the system works according to specified requirements. It checks the functionality of features, ensuring that the software performs as expected, such as user login, data input validation, or transactions. Examples include unit testing, integration testing, and system testing.

Non-Functional Testing, on the other hand, evaluates aspects of the system that don't relate directly to specific functions, such as performance, scalability, usability, and security. It ensures the system operates under load, is user-friendly, and adheres to security standards. Examples include load testing, stress testing, and usability testing.

5. What is a bug life cycle?

The bug life cycle is the process that a software bug follows from its discovery to its resolution. It typically includes the following stages:

  • New: The bug is reported but not yet reviewed.
  • Assigned: The bug is assigned to a developer for investigation.
  • Open: The developer starts working on the issue.
  • Fixed: The bug is fixed by the developer and marked as resolved.
  • Verified: The tester verifies if the fix works as expected.
  • Closed: The bug is successfully resolved and closed.
  • Reopened: If the issue persists, the bug is reopened for further investigation.
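The stages above can be modeled as a small state machine. This is a minimal sketch of one common workflow; real bug trackers define their own states and transitions, so treat the transition table as illustrative only.

```python
# Minimal sketch of the bug life cycle as a state machine, following the
# stages listed above. The allowed transitions are illustrative; real
# trackers add more states (e.g. Deferred, Duplicate, Rejected).
ALLOWED = {
    "New":      {"Assigned"},
    "Assigned": {"Open"},
    "Open":     {"Fixed"},
    "Fixed":    {"Verified"},
    "Verified": {"Closed", "Reopened"},  # verification passes or fails
    "Reopened": {"Assigned"},            # cycle back for another fix
    "Closed":   set(),
}

def move(state, new_state):
    """Return the new state, or raise if the transition is invalid."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"Cannot move bug from {state} to {new_state}")
    return new_state

# Walk one bug through the happy path.
state = "New"
for step in ["Assigned", "Open", "Fixed", "Verified", "Closed"]:
    state = move(state, step)
print(state)  # Closed
```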

6. What is Regression Testing?

Regression Testing is a type of software testing that ensures new code changes, such as bug fixes, enhancements, or new features, do not negatively impact the existing functionality of the application. It is performed after modifications to confirm that previously working features still function as expected and that no new issues have been introduced.

The process involves rerunning previously executed test cases, both functional and non-functional, to check for any unexpected side effects. Regression testing is crucial for maintaining software quality over time, especially during frequent updates or releases. Often, automated testing is used for efficiency and to cover large test suites consistently.
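A regression run can be sketched as rerunning stored input/expected pairs after a code change. The pricing function and its discount rule below are invented purely for the example.

```python
# Illustrative regression check: after modifying a function, rerun the
# previously passing cases to confirm nothing broke. The discount logic
# is a made-up example, not from any real system.
def price_with_discount(amount, is_member):
    discount = 0.10 if is_member else 0.0
    return round(amount * (1 - discount), 2)

# Regression suite: (inputs, expected output) pairs that passed before.
regression_suite = [
    ((100.0, True), 90.0),
    ((100.0, False), 100.0),
    ((0.0, True), 0.0),
]

failures = [(args, expected, price_with_discount(*args))
            for args, expected in regression_suite
            if price_with_discount(*args) != expected]
print("regression failures:", failures)  # regression failures: []
```

An empty failure list means the change did not break previously working behavior; any entry in it is a regression to investigate.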

7. What is the difference between Verification and Validation?

Verification is the process of evaluating whether a system or component meets the specified requirements and design specifications. It ensures that the product is being built correctly and focuses on consistency, completeness, and correctness of the software against the design documents.

Validation, on the other hand, ensures that the product meets the user's needs and requirements. It checks if the system is the right product and performs as expected in real-world scenarios.
In short, verification answers "Are we building the product right?" while validation answers "Are we building the right product?"

8. What is a test plan?

A test plan is a comprehensive document that outlines the strategy and approach for testing a software application. It defines the scope, objectives, and goals of testing, including which features, functionalities, or components will be tested. The plan specifies the testing types to be used (e.g., unit testing, integration testing, system testing), as well as the criteria for success (pass/fail conditions).

It also outlines the resources required, including the testing team, tools, environments, and schedules. Additionally, the test plan details the roles and responsibilities of team members, risk assessments, and deliverables. A well-structured test plan ensures efficient, effective, and organized testing to meet project deadlines and quality standards.

9. What is Black Box Testing?

Black Box Testing is a software testing technique where the internal structure, design, or implementation of the system is not known to the tester. The tester focuses on verifying the functionality of the software by providing inputs and evaluating the output against expected results.

It is called "black box" because the tester cannot see the internal workings of the system, much like a sealed box. This method is used to check if the software behaves as expected under various conditions, and it includes functional, non-functional, and system testing. Black Box Testing is often used for validation purposes and helps identify issues related to user requirements, input validation, and output correctness.

10. What is White Box Testing?

White Box Testing, also known as Clear Box or Glass Box Testing, is a software testing method where the tester has full knowledge of the internal structure and code of the application. The tester examines the internal logic, flow, and design of the software to ensure that all parts of the code are functioning as expected. This testing focuses on the internal workings, such as checking for code coverage, paths, branches, and conditions.

White Box Testing is useful for identifying hidden errors, optimizing performance, and verifying security vulnerabilities. It includes techniques like unit testing, integration testing, and code reviews. The primary goal is to ensure that the code behaves correctly and meets the specified requirements.

11. What is User Acceptance Testing (UAT)?

User Acceptance Testing (UAT) is the final phase of software testing, where the software is tested by the end users to ensure it meets their needs and requirements. This testing is performed in a real-world environment to validate that the system behaves as expected from the user’s perspective.

UAT focuses on confirming that the software solves the intended business problem and is user-friendly. Users test the application with real scenarios and data to verify its functionality, usability, and performance. Any issues or discrepancies identified during UAT are reported and addressed before the software is released for production. Successful UAT signifies that the software is ready for deployment, ensuring it meets user expectations and business objectives.

12. What is Smoke Testing?

Smoke Testing is a preliminary software testing process that aims to determine whether the most critical functions of a software application work correctly after a new build or update. It involves running a set of basic tests to identify major issues early on, such as crashes, broken features, or basic functionality failures. The goal is not to perform exhaustive testing but to quickly assess whether the build is stable enough for more detailed testing.

If the software passes the smoke test, it is considered "stable" for further testing. Smoke Testing helps save time by catching critical errors early in the development cycle, allowing teams to address serious issues before proceeding with more in-depth testing.
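A smoke suite can be sketched as a short list of critical checks run in order, failing fast on the first breakage. The three checks below are hypothetical stand-ins for real checks like "app starts" or "home page loads".

```python
# Sketch of a smoke suite: a handful of fast, critical checks that gate
# deeper testing. These checks are hypothetical placeholders.
def app_starts():
    return True

def home_page_loads():
    return True

def login_form_present():
    return True

SMOKE_CHECKS = [app_starts, home_page_loads, login_form_present]

def run_smoke_suite(checks):
    """Fail fast: reject the build on the first broken critical check."""
    for check in checks:
        if not check():
            print(f"SMOKE FAIL: {check.__name__} - build rejected")
            return False
    print("Smoke suite passed - build is stable enough for full testing")
    return True

run_smoke_suite(SMOKE_CHECKS)
```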

13. What is Sanity Testing?

Sanity Testing is a type of software testing performed after receiving a new build or bug fix to ensure that specific functionalities are working as expected. Unlike Smoke Testing, which verifies the overall stability of the application, Sanity Testing focuses on validating particular areas or features that have been modified or added.

It is usually a shallow and quick test to confirm that the changes haven't broken any existing functionality. Sanity Testing is performed when there is limited time for testing or when only a small part of the software has been modified. If the application passes the sanity tests, more detailed testing can proceed. If issues are found, further investigation is needed before continuing with additional testing.

14. What are Test Case Design Techniques?

  • Common test case design techniques include:

    • Equivalence Partitioning: Divides input data into valid and invalid partitions, ensuring that test cases cover each partition.
    • Boundary Value Analysis: Focuses on testing the boundaries of input ranges, where errors are most likely to occur.
    • Decision Table Testing: Uses a table to represent different combinations of inputs and their expected outputs, ensuring all scenarios are tested.
    • State Transition Testing: Validates transitions between different states of the application.
    • Error Guessing: Relies on the tester’s experience to predict where errors might occur and create test cases around those areas.
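The first two techniques above can be shown concretely. Assuming, for illustration, a field that accepts ages 18–65, equivalence partitioning picks one representative per partition, while boundary value analysis tests just below, on, and just above each boundary.

```python
# Sketch of equivalence partitioning and boundary value analysis for a
# hypothetical age field that accepts values 18-65 inclusive.
LOW, HIGH = 18, 65

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence partitions: one representative value per partition.
partitions = {
    "below range (invalid)": 10,
    "inside range (valid)":  40,
    "above range (invalid)": 70,
}

# Boundary values: just below, on, and just above each boundary,
# where off-by-one errors are most likely.
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for label, value in partitions.items():
    print(f"{label}: {value} -> valid={is_valid_age(value)}")
for value in boundaries:
    print(f"boundary {value} -> valid={is_valid_age(value)}")
```

Three partition values plus six boundary values give nine targeted tests instead of exhaustively testing every possible age.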

15. What is Exploratory Testing?

Exploratory Testing is an approach where testers actively explore the software to identify defects without predefined test cases. It involves simultaneous learning, test design, and execution, allowing testers to adapt and respond to findings during the process. Testers use their experience, intuition, and creativity to investigate the software, often focusing on areas that seem most prone to errors or are complex.

The goal is to uncover unexpected issues that might not be covered by scripted testing. Exploratory testing is especially useful in agile environments where quick feedback is needed, and it encourages tester curiosity and critical thinking. This approach complements formal testing techniques and is valuable for discovering hidden bugs, usability issues, or unanticipated software behaviors.

16. What is Ad-Hoc Testing?

Ad-Hoc Testing is an informal and unstructured testing technique where testers execute random tests without planning or documentation. The primary focus is to find defects by exploring the software in an unsystematic way. Testers rely on their intuition, experience, and knowledge of the application to test areas that seem suspicious or under-tested. Unlike other testing methods, Ad-Hoc Testing doesn’t follow predefined test cases or scripts, making it flexible and adaptable.

While it may not guarantee exhaustive coverage, it helps uncover defects that might not be detected through traditional testing methods. Ad-Hoc Testing is typically performed when there is limited time, or when testers want to quickly identify potential issues without formal preparation or planning.

17. What is Load Testing?

Load Testing is a type of performance testing that evaluates how a software application behaves under a specific expected load, such as a certain number of users or transactions. The goal is to determine if the system can handle the required workload efficiently without degrading performance. During load testing, the system is subjected to gradual increases in load, and its response times, resource utilization, and stability are monitored.

It helps identify potential bottlenecks, performance degradation, or failure points before the software is deployed in a real-world environment. Load Testing ensures that the application can handle expected traffic levels and function smoothly under normal usage conditions. It is crucial for ensuring the reliability and scalability of the system.
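The idea can be simulated in a few lines: several concurrent workers hit a stand-in request handler while response times are recorded. A real load test would target the deployed system with a dedicated tool such as JMeter or Locust; this sketch only illustrates the concept.

```python
# Simplified load-test sketch: concurrent "users" call a stand-in
# request handler and latencies are collected. The 10 ms sleep simulates
# server work; real load tests target the actual deployed system.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    start = time.perf_counter()
    time.sleep(0.01)  # simulated 10 ms of server-side work
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:      # 10 concurrent users
    latencies = list(pool.map(handle_request, range(50)))  # 50 requests

avg_ms = sum(latencies) / len(latencies) * 1000
print(f"requests: {len(latencies)}, avg latency: {avg_ms:.1f} ms")
```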

18. What is Stress Testing?

Stress Testing is a type of performance testing that evaluates how a software application behaves under extreme conditions, beyond its specified limits. The goal is to determine the system's breaking point by pushing it to handle more users, transactions, or data than it is expected to support. Stress Testing helps identify vulnerabilities, resource exhaustion, or critical failures that could occur under high-stress situations, such as sudden traffic spikes or system overloads.

The application’s stability, recovery mechanisms, and error handling are closely examined. This testing helps ensure that the system can gracefully recover or fail without catastrophic consequences when it faces unexpected stress or load. It is essential for understanding the system's robustness and overall resilience in extreme conditions.

19. What is Compatibility Testing?

Compatibility Testing is a type of software testing that ensures a software application works as intended across different environments, devices, operating systems, browsers, network configurations, and hardware. The goal is to verify that the software is compatible with various platforms and works consistently for all users, regardless of their system setup. This testing helps identify issues related to system compatibility, such as layout inconsistencies, functionality failures, or performance problems when the software runs on different combinations of devices or configurations.

Compatibility Testing is crucial for ensuring a seamless user experience across diverse environments, particularly in cases where the software needs to operate in heterogeneous systems or with various versions of third-party software like web browsers.

20. What is the difference between Alpha and Beta Testing?

Alpha Testing is the initial phase of testing, typically conducted by the development team or a dedicated testing team within the organization. It is performed in a controlled environment to identify bugs, issues, and usability problems before releasing the software to external users. The goal is to catch as many issues as possible in the early stages.

Beta Testing, on the other hand, occurs after Alpha Testing, where the software is released to a limited number of external users (beta testers). These testers use the software in real-world environments and provide feedback on its functionality, usability, and performance. Beta Testing helps uncover any remaining issues before the final release.

21. What is the difference between a defect and a bug?

Defect refers to any flaw or issue in the software that deviates from the expected behavior or requirements. It could be a design issue, a mismatch in functionality, or an error introduced during development. A defect can exist even before testing begins.

Bug refers specifically to a problem found during the testing phase that causes the software to malfunction or produce incorrect results. It’s often seen as a symptom of a defect in the code or design.

22. What is Performance Testing?

Performance Testing is a type of software testing aimed at evaluating how a software application performs under various conditions. It measures the application's responsiveness, stability, scalability, and resource usage when subjected to different workloads. The primary goal is to ensure the system meets performance requirements and performs efficiently under expected load conditions.

Performance testing includes several subtypes, such as Load Testing (to check how the system handles expected load), Stress Testing (to assess performance under extreme conditions), Scalability Testing (to test how well the system can scale with increased load), and Spike Testing (to evaluate system behavior under sudden, intense traffic spikes). This testing is crucial for identifying performance bottlenecks, ensuring fast response times, and enhancing user experience.

23. What is the role of a tester in Agile?

In Agile, the role of a tester is collaborative and integral throughout the software development lifecycle. Testers work closely with developers, product owners, and other team members to ensure the product meets quality standards and user expectations. They participate in Agile ceremonies such as sprint planning, daily stand-ups, and sprint reviews. Testers are responsible for creating test cases, performing manual and automated testing, and identifying bugs early in the development process. They also contribute to test-driven development (TDD) and continuous integration (CI) practices by writing and running tests during each iteration.

Additionally, testers provide feedback on user stories, help ensure acceptance criteria are met, and validate that features function as expected in real-world scenarios. Their role focuses on delivering high-quality software continuously.

24. What is a Test Environment?

A Test Environment is a setup where software testing is conducted. It includes all the hardware, software, network configurations, tools, and resources needed to execute test cases and verify the functionality, performance, and behavior of the application. The test environment replicates the production environment as closely as possible to ensure accurate testing results. It typically involves servers, databases, operating systems, browsers, and any third-party services or applications that the software interacts with.

Test environments can be either dedicated (isolated from production) or shared, and they help ensure that tests are executed under controlled and consistent conditions. Proper configuration of the test environment is crucial for identifying defects, validating new features, and ensuring the application works as expected across different platforms.

25. What is Defect Density?

Defect Density is a software quality metric that measures the number of defects (bugs) found in a software application relative to its size or complexity. It is calculated by dividing the number of defects identified during testing by the size of the software, typically measured in lines of code (LOC), function points, or any other suitable metric.

Defect Density helps assess the quality of the software by providing insight into how many defects exist in relation to its size. A higher defect density indicates poor quality, while a lower density suggests better quality. This metric helps in evaluating the effectiveness of the development process and identifying areas needing improvement.
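The calculation is straightforward. With illustrative numbers (30 defects found in 15,000 lines of code), defect density per thousand lines of code (KLOC) works out as:

```python
# Defect density = defects found / size of the codebase.
# Here size is expressed in KLOC (thousands of lines of code).
# The numbers are purely illustrative.
defects_found = 30
lines_of_code = 15_000

defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
print(f"Defect density: {defect_density} defects/KLOC")  # 2.0
```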


Copyrights © 2024 letsupdateskills All rights reserved