Fundamentals of Software Testing and Quality Models


Fundamentals of Software Testing

Imagine you buy a new remote-control car. Before playing with it, you check if it moves forward, turns left or right, and stops when you press stop. If something doesn’t work, you fix it before giving it to someone. Software Testing is exactly this, but instead of a toy, we check software (apps, websites, programs). It is the process of checking software to find errors (bugs) and to make sure it works correctly according to user requirements.

Objectives of Software Testing

  • To find bugs: Like finding holes in a bucket before filling it with water.
  • To check correctness: Making sure the output is right, not wrong.
  • To improve quality: Better software leads to happy users.
  • To ensure reliability: Software should not crash repeatedly.
  • To verify requirements: Check if the software does what the user asked for.

Goals of Software Testing

  • Deliver defect-free software: Software with minimum errors.
  • Increase customer satisfaction: The user should feel, “Yes! This works!”
  • Reduce future cost: Fixing bugs early is cheaper than later.
  • Ensure safety and performance: Software should be fast and safe.
  • Build confidence in the product: Developers and users trust the software.

Why Exhaustive Testing is Impossible

Imagine you have a lock with digits from 0 to 9. If the lock has 4 digits, there are 10,000 combinations (0000 to 9999). Now imagine software with many buttons, inputs, users, and devices. Testing the software with every possible input, path, and condition is called exhaustive testing, and for any realistic program it is impossible.
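The lock analogy scales up very quickly. A back-of-the-envelope calculation (the field sizes below are illustrative assumptions, not from any real system) shows why checking every input is hopeless:

```python
# A 4-digit lock has 10**4 = 10,000 combinations; software is far worse.
lock_combinations = 10 ** 4

# Hypothetical login form: an 8-character alphanumeric username,
# an 8-character alphanumeric password, and a 6-digit OTP.
username_inputs = 62 ** 8   # 26 lowercase + 26 uppercase + 10 digits
password_inputs = 62 ** 8
otp_inputs = 10 ** 6

total = username_inputs * password_inputs * otp_inputs
print(f"Lock: {lock_combinations:,} combinations")
print(f"Login form: {total:.2e} combinations")  # astronomically large
```

Even at a billion test executions per second, covering every combination of this one small form would take longer than the age of the universe, which is why testers sample important cases instead.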

Reasons for Impossibility

  • Too many input combinations: Even a small program can have millions of inputs (e.g., username + password + OTP).
  • Limited time: Projects have deadlines; no company has years to test one piece of software.
  • Limited cost: More testing equals more money, and companies cannot afford infinite testing.
  • Complex program paths: Programs have loops, conditions, and branches, making the number of execution paths very large.
  • Human limitations: Testers are humans, not robots; mistakes and fatigue can happen.

Real-Life Example: Think of checking all roads in India. You can’t travel every road—you only check important highways. Software testing also checks important paths, not all paths.

Key Testing Terminology

| Term | Meaning | Nature | Occurs When | Who Causes It | Impact | Example |
|---|---|---|---|---|---|---|
| Error | A mistake made by a human | Human-related | During requirement, design, or coding | Developer / Tester | May introduce defects | Wrong formula written |
| Defect | Deviation from expected behavior | Software-related | After an error is introduced in code | Software | Can lead to failure | Login button not working |
| Bug | Informal term for defect | Software-related | When defect is identified | Software | Can cause incorrect results | App freezes on submit |
| Fault | Incorrect logic in code | Code-related | When wrong statement exists | Developer | Causes software failure | Wrong loop condition |
| Failure | Software does not work as expected | System-related | When software is executed | System | User-visible malfunction | App crashes during payment |
| Incident | Reported problem | Process-related | When failure is observed | User / Tester | Triggers investigation | User reports "Payment failed" |
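The chain from error to failure can be seen in a few lines of Python. The area function below is a hypothetical example: a human error (the wrong formula) becomes a defect in the code, which only surfaces as a failure when the code is executed.

```python
def circle_area(radius):
    # Defect: the developer wrote 2 * radius instead of radius ** 2
    # (a human error made while coding the formula pi * r**2).
    return 3.14159 * 2 * radius

# The defect stays latent until execution. Running the code with
# radius = 3 produces a wrong result -- an observable failure.
result = circle_area(3)
expected = 3.14159 * 3 ** 2   # correct value, about 28.27
print(result, expected)       # the mismatch is the visible failure
```

When a user or tester notices and reports the wrong output, that report is the incident.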

The Bug Life Cycle

Imagine you find a hole in your school bag. You notice it, tell your parents, they give it to the tailor, the tailor fixes it, you check it again, and the bag is finally usable. A bug in software goes through the same steps. The Bug Life Cycle is the journey of a bug from the time it is found until it is fixed and closed.

Stages of the Bug Life Cycle

  • New: Bug is found and recorded for the first time.
  • Open: Bug is accepted and ready to be fixed.
  • Assigned: Bug is assigned to a developer to start working on it.
  • Fixed: Developer fixes the bug and changes the code.
  • Test: Fixed bug is sent back to the tester to re-check.
  • Verified: Tester confirms the bug is fixed and no issue is found.
  • Closed: Bug is officially closed; no more action is needed.

Other Possible States: Reopened (bug still exists), Rejected (invalid bug), or Deferred (fix postponed to a future release).
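These stages form a small state machine: a bug may only move between certain states. The sketch below encodes the transitions described above (the exact transition set varies between bug trackers; this is one reasonable reading):

```python
# Bug life cycle as a state machine: each state lists where it may go next.
ALLOWED_TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed":    {"Test"},
    "Test":     {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Reopened": {"Assigned"},   # bug still exists, back to a developer
}

def move(current, new):
    """Advance a bug to a new state, rejecting invalid jumps."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current} to {new}")
    return new

# Walk one bug through the happy path described in the list above.
state = "New"
for step in ["Open", "Assigned", "Fixed", "Test", "Verified", "Closed"]:
    state = move(state, step)
print(state)  # Closed
```

Trying to jump straight from "New" to "Fixed" raises an error, which mirrors why real bug trackers enforce a workflow.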

Bugs Across the SDLC Stages

If a house plan is wrong, the house will be wrong. If construction is careless, walls may crack. Bugs can occur at any stage of the Software Development Life Cycle (SDLC).

  • Requirement Bugs: Caused by unclear requirements. Example: User wants OTP login, but the document misses it.
  • Design Bugs: Errors in architecture. Example: Database not designed to handle many users.
  • Coding Bugs: Mistakes while writing code. Example: Using '=' instead of '==' in a condition.
  • Integration Bugs: Occur when different modules are combined. Example: Payment module not updating order status.
  • System Bugs: Found during full system testing. Example: App crashes when 1,000 users log in together.
  • Maintenance Bugs: Introduced after software release during updates. Example: An update fixes one bug but breaks the login.

Seven Principles of Software Testing

Testing Principles are basic rules that help avoid mistakes and ensure effective testing.

  1. Testing shows the presence of defects, not their absence: Testing can find bugs but never prove there are zero bugs.
  2. Exhaustive testing is impossible: We test only important cases, not every possible input.
  3. Early testing is the best policy: Start testing from requirements to save time and money.
  4. Defect clustering: Most bugs are usually found in a few specific modules.
  5. Pesticide paradox: If the same tests are repeated, they will eventually stop finding new bugs.
  6. Testing is context-dependent: A banking app is tested differently than a game app.
  7. Absence-of-errors fallacy: Bug-free software is still useless if it doesn’t meet user needs.

Verification vs. Validation

Verification asks: "Are we building the product right?" while Validation asks: "Are we building the right product?"

  • Verification: Checking documents (requirements, design, code) without running the software. It is a static process.
  • Validation: Running the software to check if it meets user requirements. It is a dynamic process.

| Aspect | Verification | Validation |
|---|---|---|
| Meaning | Checking work-products | Checking final product |
| Question | Are we building it right? | Are we building the right product? |
| Nature | Static | Dynamic |
| Execution | No | Yes |
| Performed By | Developers, reviewers | Testers |

The V-Model: Parallel Development

The V-Model (Verification and Validation Model) is a framework where testing is planned in parallel with development. The left side represents Verification, and the right side represents Validation.

Structure of the V-Model

  • Left Side (Verification): Requirement Analysis (Acceptance Test Plan), System Design (System Test Plan), HLD (Integration Test Plan), and LLD (Unit Test Plan).
  • Bottom: Coding (Actual development occurs here).
  • Right Side (Validation): Unit Testing, Integration Testing, System Testing, and Acceptance Testing.
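The pairing of left-side phases with right-side test levels (as listed above) can be captured as a simple mapping:

```python
# Each verification phase on the left side of the V plans the test
# level that will validate it on the right side.
V_MODEL = {
    "Requirement Analysis": "Acceptance Testing",
    "System Design":        "System Testing",
    "High-Level Design":    "Integration Testing",
    "Low-Level Design":     "Unit Testing",
}

# Coding sits at the point of the V; test execution then mirrors the
# design phases in reverse order (unit tests first, acceptance last).
for phase, test_level in V_MODEL.items():
    print(f"{phase} -> {test_level}")
```

This is why a defect in a requirement, if missed, is typically only caught at the very end, during acceptance testing.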

Test Management and Organization

Test Management is the process of planning, organizing, and monitoring testing activities to ensure software quality. It includes resource management, environment management, and team organization.

Testing Group Hierarchy

  • Test Manager: Responsible for overall planning, policies, and resource allocation.
  • Test Leader: Prepares detailed plans and coordinates activities.
  • Test Engineers: Design and execute test cases and report defects.
  • Junior Test Engineers: Assist senior testers and execute basic cases.

Major Activities of a Testing Group

  • Maintenance of test policies and standards.
  • Participation in requirement, design, and code reviews.
  • Test planning, execution, and monitoring.
  • Defect tracking and acquisition of testing tools.
  • Test reporting and measurement.

Test Strategy and Planning

A Test Strategy is a high-level plan defining the overall approach, levels, and techniques. A Test Strategy Matrix maps test levels with types and techniques to ensure complete coverage.

Test Planning defines the scope, resources, and schedule. Components include the test plan identifier, features to be tested, pass/fail criteria, deliverables, and risks.

Agile Testing and Scrum

In Agile Testing, testing is continuous and starts from Day 1. Software is developed in small cycles called iterations or sprints.

Features of Agile Testing

  • Continuous testing and regular customer feedback.
  • Testers and developers work together closely.
  • Less documentation, more working software.

The Scrum Framework

  • Product Owner: Maintains the product backlog and represents customer needs.
  • Scrum Master: Facilitates the process and removes obstacles.
  • Development Team: Develops and tests the product.

Progressive vs. Regression Testing

| Basis | Progressive Testing | Regression Testing |
|---|---|---|
| Meaning | Testing new features | Testing existing features |
| Focus | New functionality | Old functionality |
| Performed when | New code is added | Code is modified or fixed |
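The difference is easy to see with a toy example. The `apply_discount` function below is hypothetical; the first assertion exercises the newly added feature (progressive), while the later ones re-check behaviour that was already verified (regression):

```python
def apply_discount(price, pct):
    """Return the price after a percentage discount (hypothetical feature)."""
    return price * (1 - pct / 100)

# Progressive testing: exercising the newly added discount feature.
assert apply_discount(200, 50) == 100

# Regression testing: after any later change to apply_discount, the
# previously verified cases below must still pass unchanged.
assert apply_discount(100, 0) == 100   # no discount leaves price intact
assert apply_discount(80, 25) == 60    # earlier verified case
print("all discount tests passed")
```

Because the regression suite must be re-run after every modification, it is the part of testing that benefits most from automation.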

Automation Testing and Selenium

Automation testing uses tools and scripts to test software automatically, saving time and increasing accuracy. It is essential for Regression Testing and Continuous Integration.

Selenium Architecture

Selenium is a popular tool for automating web applications. Its architecture consists of:

  • Selenium Client Library: Where scripts are written (Java, Python, etc.).
  • JSON Wire Protocol / W3C Protocol: The translator between code and browser.
  • Browser Drivers: Programs like ChromeDriver or GeckoDriver that control the browser.
  • Real Browser: Where the actual testing happens (Chrome, Firefox).
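A minimal sketch of how these layers are exercised from the client library, using the Python bindings (requires `pip install selenium` and a matching ChromeDriver on the PATH; the URL and title word are placeholders):

```python
def check_page_title(url, expected_word):
    """Open a page in Chrome and check its title (illustrative sketch).

    The script (client library) sends commands to ChromeDriver over the
    W3C protocol; ChromeDriver in turn controls the real browser.
    """
    from selenium import webdriver  # lazy import: needs selenium installed

    driver = webdriver.Chrome()     # launches ChromeDriver + a real Chrome
    try:
        driver.get(url)                       # command sent over the wire
        return expected_word in driver.title  # response read back from browser
    finally:
        driver.quit()                         # always release the browser
```

Calling `check_page_title("https://example.com", "Example")` would launch a real browser window, which is exactly the kind of repetitive check automation is meant to take over from humans.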

Software Quality Models

McCall’s Quality Factors Model

  • Product Operation: Correctness, Reliability, Efficiency, Integrity, Usability.
  • Product Revision: Maintainability, Testability, Flexibility.
  • Product Transition: Portability, Reusability, Interoperability.

Software Quality Views

  • Transcendent View: Quality is a felt, premium experience.
  • User-Based View: Quality is defined by user satisfaction.
  • Manufacturing-Based View: Quality is following specifications correctly.
  • Product-Based View: Quality depends on the number of useful features.
  • Value-Based View: Quality is judged by price versus performance.

Six Sigma and DMAIC

Six Sigma aims to reduce defects to 3.4 per million opportunities (DPMO). The DMAIC methodology includes:

  • Define: Identify the problem and customer requirements.
  • Measure: Collect data on current process performance.
  • Analyze: Find the root causes of defects.
  • Improve: Implement and verify solutions.
  • Control: Monitor the process to maintain improvements.
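The DPMO figure itself is simple arithmetic, and the batch numbers below are illustrative assumptions chosen to land exactly on the Six Sigma target:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical batch: 17 defects found across 10,000 units, each unit
# having 500 opportunities for a defect.
rate = dpmo(17, 10_000, 500)
print(round(rate, 2))  # 3.4

# Six Sigma target: at most 3.4 defects per million opportunities.
meets_six_sigma = rate <= 3.4
```

A process measured this way during the Measure phase gives DMAIC a concrete baseline to improve against.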
