Software Quality Assurance: Strategies, Testing, and Design
Strategic Approach: Begins with technical reviews to identify errors early. Moves from component-level (unit testing) to system-level integration. Different strategies suit conventional software, object-oriented software, and web applications.
Strategies for Different Systems
- Conventional Software: Focus on module testing and integration.
- Object-Oriented Software: Emphasis shifts to classes, attributes, and their collaborations.
- Web Applications: Covers usability, interface, security, and environmental compatibility.
Key Strategic Issues: Define requirements quantitatively before testing. Develop robust software with self-testing capabilities. Use iterative testing cycles to refine quality. Employ independent testers alongside developers.
Regression Testing: Verifies that recent changes haven’t introduced errors in existing functionality. Ensures software stability through retesting of previously successful test cases. May be manual or automated, covering functional and non-functional aspects.
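A minimal sketch of an automated regression suite using Python's unittest; the `discount` function and its cases are hypothetical stand-ins for code under change and its previously passing tests:

```python
import unittest

def discount(price, rate):
    """Hypothetical function under change; its tests form the regression suite."""
    return round(price * (1 - rate), 2)

class DiscountRegressionSuite(unittest.TestCase):
    # Previously successful test cases, re-executed after every change
    def test_basic_discount(self):
        self.assertEqual(discount(100.0, 0.20), 80.0)

    def test_zero_rate_is_identity(self):
        self.assertEqual(discount(50.0, 0.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```

If any previously green case now fails, the latest change has broken existing functionality.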
Smoke Testing: Preliminary testing to check the stability of a software build. Aims to identify critical issues ("showstoppers") that prevent further testing. Ensures all major functions operate without failure before detailed testing.
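A minimal smoke-test sketch, assuming hypothetical `start_service` and `open_database` stand-ins for a build's critical functions; any failure is treated as a showstopper that blocks further testing:

```python
# Hypothetical critical functions of the build under test
def start_service():
    return True

def open_database():
    return True

SMOKE_CHECKS = [("service starts", start_service), ("database opens", open_database)]

def run_smoke_tests():
    for name, check in SMOKE_CHECKS:
        if not check():
            raise SystemExit(f"showstopper: {name} failed, build rejected")
    print("smoke tests passed; build is stable enough for detailed testing")

run_smoke_tests()
```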
System Testing: Evaluates the complete software system's functionality and performance. Tests include:
- Recovery Testing: Validates fault recovery mechanisms.
- Stress Testing: Exercises the system under extreme loads.
- Security Testing: Ensures protection mechanisms work as intended.
- Performance Testing: Checks system performance metrics such as response time and throughput (stress and performance checks are sketched after this list).
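A minimal stress/performance sketch in Python, assuming a hypothetical `process_request` as the operation under load; it drives a burst of back-to-back calls and reports latency:

```python
import time

def process_request(n):
    """Hypothetical stand-in for the operation under test."""
    return sum(i * i for i in range(n))

# Drive the operation with a burst of back-to-back requests and record latency.
latencies = []
for _ in range(1_000):
    start = time.perf_counter()
    process_request(10_000)
    latencies.append(time.perf_counter() - start)

print(f"avg latency: {sum(latencies) / len(latencies) * 1000:.3f} ms")
print(f"max latency: {max(latencies) * 1000:.3f} ms")
```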
Debugging involves identifying and resolving defects discovered during testing. It’s critical for software stability and quality.
Debugging Process
- Identify the symptom through testing outputs.
- Trace the cause using systematic analysis.
- Apply corrective actions to resolve the issue.
Debugging Techniques
- Brute Force: Analyze logs and outputs for clues (see the logging sketch after this list).
- Backtracking: Trace steps backward to identify the origin of the error.
- Cause Elimination: Hypothesis-driven approach to isolate faults.
- Automated Debugging: Tools to simplify error detection and correction.
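As one illustration, brute-force debugging often amounts to instrumenting the suspect code with logging until the output reveals the fault. A minimal sketch, assuming a hypothetical `average` function as the suspect:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def average(values):
    """Hypothetical suspect function, instrumented so its logs reveal the fault."""
    log.debug("input: %r", values)
    total = sum(values)
    log.debug("total=%s count=%s", total, len(values))
    return total / len(values)

average([2, 4, 6])   # healthy run for comparison
try:
    average([])      # the last DEBUG line before the traceback pinpoints the cause
except ZeroDivisionError:
    log.exception("failure reproduced: count=0 explains the crash")
```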
Debugging Challenges
- The symptom and the cause may be far apart: the failure can surface in a part of the program remote from the defect that produced it.
- Some bugs may not manifest consistently, making debugging complex.
Basis Path Testing, a white-box testing technique, ensures every path in the control flow is executed at least once.
Basis Path Testing Concept
Relies on constructing a flow graph to represent control flow. Uses cyclomatic complexity (V(G)) to determine the number of independent paths.
Basis Path Testing Steps
- Derive a flow chart & convert it into a flow graph.
- Compute cyclomatic complexity using either formula:
- V(G) = E − N + 2, where E is the number of edges and N the number of nodes.
- V(G) = P + 1, where P is the number of predicate (decision) nodes.
- Identify independent paths & design test cases for each.
Basis Path Testing Example
For a flow graph with V(G)=4, the independent paths are:
- Path 1: 1 → 2 → 3 → 6 → 7 → 8.
- Path 2: 1 → 2 → 3 → 5 → 7 → 8.
- Path 3: 1 → 2 → 4 → 7 → 8.
- Path 4: 1 → 2 → 4 → 7 → 2 → 4 → 7 → 8.
Test cases are designed to cover all these paths, ensuring all conditions are tested.
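A minimal code-level illustration of the technique, using a hypothetical `classify` function: two predicate nodes give V(G) = P + 1 = 3, so three test cases cover the basis set:

```python
def classify(x):
    """Hypothetical example: two predicate nodes give V(G) = 2 + 1 = 3."""
    if x < 0:        # predicate node 1
        return "negative"
    if x == 0:       # predicate node 2
        return "zero"
    return "positive"

# One test case per independent path:
assert classify(-5) == "negative"   # path: predicate 1 true
assert classify(0) == "zero"        # path: predicate 1 false, predicate 2 true
assert classify(7) == "positive"    # path: both predicates false
```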
Risk Assessment: A process to systematically identify & analyze risks affecting a software project.
Risk Assessment Steps
- Identify potential risks using brainstorming & past project data.
- Categorize risks (e.g., technical, financial, business).
- Analyze risks using tools like risk matrices.
Risk Projection
Definition: Estimating the likelihood of risk occurrence & its impact.
Formula: RE = P × C, where P is the probability of occurrence and C is the cost (impact) if the risk materializes.
Example: If a risk has an 80% probability & a $25,000 impact, RE = 0.8 × 25,000 = $20,000.
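A minimal sketch of the same calculation in Python, using hypothetical risk entries; risks are ranked by exposure so mitigation effort goes to the largest RE first:

```python
# Compute risk exposure RE = P * C for a hypothetical risk list and rank by exposure.
risks = [
    {"name": "API integration fails", "p": 0.8, "cost": 25_000},
    {"name": "Team onboarding delay", "p": 0.3, "cost": 10_000},
]

for risk in risks:
    risk["re"] = risk["p"] * risk["cost"]

for risk in sorted(risks, key=lambda r: r["re"], reverse=True):
    print(f'{risk["name"]}: RE = ${risk["re"]:,.0f}')   # $20,000 then $3,000
```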
Types of Risks
- Project Risks: Affect schedules & budgets. Example: Delays in team onboarding.
- Technical Risks: Affect quality & performance. Example: Failure in integrating a critical API.
- Business Risks: Affect organizational goals. Example: Building a product no one wants.
Examples of Risk Mitigation
- Project risks: Conducting thorough planning.
- Technical risks: Performing feasibility analysis.
- Business risks: Engaging stakeholders early.
Design Principles include:
- Open-Closed Principle (OCP): Components should be open for extension but closed for modification (see the sketch after this list).
- Liskov Substitution Principle (LSP): Subclasses must be substitutable for their base classes.
- Dependency Inversion Principle (DIP): Depend on abstractions rather than concretions.
- Interface Segregation Principle (ISP): Prefer multiple specific interfaces over a generic one.
- Common Closure Principle (CCP): Group classes that change together.
- Common Reuse Principle (CRP): Avoid grouping unrelated classes to prevent unnecessary dependencies.
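A minimal sketch of OCP (and, incidentally, LSP and DIP) in Python; the `Exporter` hierarchy is hypothetical: new formats are added by extension, and the `save` function never changes:

```python
from abc import ABC, abstractmethod
import json

class Exporter(ABC):
    """Abstraction that callers depend on (DIP) and extend without modification (OCP)."""
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in data.items())

def save(exporter: Exporter, data: dict) -> str:
    # New formats are added by subclassing, never by editing this function.
    return exporter.export(data)

print(save(JsonExporter(), {"id": 1}))
print(save(CsvExporter(), {"id": 1}))
```

Because any subclass works wherever an `Exporter` is expected, the example also respects LSP.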
Formal Technical Reviews (FTRs)
FTR Definition
A systematic activity where peers review project deliverables to ensure quality, identify defects, & verify adherence to standards.
FTR Objectives
- Early detection of defects to minimize rework.
- Verification of compliance with project requirements.
- Enhance collaboration & knowledge sharing among team members.
Types of FTRs
- Code Reviews: Validate source code against standards.
- Design Reviews: Assess design feasibility & alignment.
- Walkthroughs: Conduct informal discussions to understand deliverables.
FTR Process
- Planning: Define scope & participants.
- Preparation: Distribute materials for review.
- Conducting Review: Discuss findings & document issues.
- Rework: Address identified issues.
- Follow-up: Verify implementation of recommendations.
FTR Benefits
- Reduces downstream costs by detecting defects early.
- Enhances team understanding of the product.
- Promotes a culture of continuous improvement.
SCM Process: A systematic approach to manage changes in software products.
SCM Key Tasks
- Identification: Label & define software configuration items (SCIs).
- Version Control: Track & manage versions of SCIs.
- Change Control: Approve & implement changes systematically.
- Configuration Auditing: Ensure changes are implemented correctly.
- Status Reporting: Track & communicate change details.
SCM Repositories
Serve as centralized storage for SCIs. Features include Versioning, Dependency Tracking, Audit Trails, and Collaboration Tools.
Examples of SCM Tools
Git, Subversion, & Mercurial.
Benefits of SCM
- Improved collaboration among teams.
- Enhanced traceability & accountability.
- Reduced errors through version control & audit trails.
Conclusion: SCM ensures that software projects remain organized & adaptable to changes.
Change Control in SCM
Definition:
A structured process to handle changes in software projects systematically and cost-effectively.
Steps in Change Control:
- Submit Change Request: Document details like rationale, urgency, and impact.
- Evaluation: Assess feasibility, risks, and costs.
- Generate Change Report: Approved changes are formalized with an Engineering Change Order (ECO).
- Implementation: Changes are made using SCM tools and techniques like branching and merging.
- Verification and Audit: Ensure the changes are correctly implemented through technical reviews and configuration audits.
- Reporting: Document and share change details with stakeholders.
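A minimal sketch of the workflow as a state machine in Python; the `ChangeRequest` class and its states are hypothetical, but the transitions mirror the steps above:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """Hypothetical sketch of a change request moving through control states."""
    title: str
    status: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self, new_status):
        allowed = {"submitted": "evaluated", "evaluated": "approved (ECO)",
                   "approved (ECO)": "implemented", "implemented": "verified"}
        expected = allowed.get(self.status)
        if new_status != expected:
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

cr = ChangeRequest("Rename login endpoint")
for step in ["evaluated", "approved (ECO)", "implemented", "verified"]:
    cr.advance(step)
print(cr.status, cr.history)
```

Enforcing the transition order is what keeps changes from skipping evaluation or verification.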
Examples of Tools for Change Control:
- Jira: For tracking change requests.
- Git: For version control.
- Azure DevOps: For integrated change and release management.
Challenges:
- Managing conflicting changes.
- Ensuring timely approvals without delaying project timelines.
Benefits:
- Maintains the integrity of the software baseline.
- Enhances team accountability.
A design pattern is a reusable and proven solution to recurring problems in software design. It provides a structured framework for solving design challenges, helping developers avoid reinventing the wheel. Patterns are abstract and adaptable to various contexts without specifying exact implementation.
Types of Design Patterns
- Architectural Patterns: Focus on the system's high-level structure and organization. Example: The Broker Pattern.
- Data Patterns: Address recurring data-related problems. Example: Database Management Systems Pattern.
- Component Patterns: Solve issues in the development of subsystems or components. Example: The Help Wizard Pattern.
- Interface Design Patterns: Target common user interface problems. Example: Shopping Cart Pattern.
- Creational Patterns: Focus on object creation. Example: Singleton Pattern.
- Structural Patterns: Address the composition of classes and objects. Example: Adapter Pattern.
- Behavioral Patterns: Deal with object interaction and delegation. Example: Observer Pattern (sketched below).
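A minimal sketch of the Observer pattern in Python, with hypothetical observers registered as callables:

```python
class Subject:
    """Minimal Observer pattern sketch: observers register callbacks with a subject."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

subject = Subject()
subject.attach(lambda e: print(f"logger saw: {e}"))
subject.attach(lambda e: print(f"ui refresh on: {e}"))
subject.notify("order_placed")   # both observers react; the subject knows none of them
```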
Component-Level Design ensures the software is modular, maintainable, and easy to scale.
Component-Level Design Principles
- Open-Closed Principle (OCP): Extendable without modification.
- Liskov Substitution Principle (LSP): Subclasses substitutable for base classes.
- Dependency Inversion Principle (DIP): Depend on abstractions, not concretions.
- Interface Segregation Principle (ISP): Prefer specific interfaces over generic ones.
- Common Closure Principle (CCP): Group classes that change together.
- Common Reuse Principle (CRP): Package together classes reused together.
Cohesion and Coupling are essential metrics for evaluating software quality.
Cohesion
The degree to which elements within a module are related. High cohesion is desirable.
Levels of Cohesion (listed weakest to strongest):
- Coincidental (weak): Random grouping of functions.
- Logical (weak): Grouped by type of functionality.
- Temporal (moderate): Grouped by execution time.
- Procedural (moderate): Sequentially executed tasks.
- Communicational (medium): Operate on the same data.
- Sequential (medium): Output of one is input to another.
- Functional (strong): Every element contributes to a single task.
- Object (strong): Operations relate directly to an object’s attributes.
Coupling
Measures the interdependence between modules. Lower coupling is preferred.
Levels of Coupling (listed tightest to loosest):
- Content Coupling (highest): One module modifies another.
- Common Coupling (high): Modules share global variables.
- Control Coupling (moderate): Modules control each other’s behavior.
- Stamp Coupling (low): Passes structured data.
- Data Coupling (lowest): Passes simple data variables (contrasted with common coupling in the sketch below).
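A minimal sketch contrasting the two extremes; the pricing functions are hypothetical:

```python
# Common coupling (high): the function depends on a shared global.
tax_rate = 0.21

def gross_price_common(net):
    return net * (1 + tax_rate)      # hidden dependency on the global

# Data coupling (lowest): everything needed is passed as simple parameters.
def gross_price_data(net, rate):
    return net * (1 + rate)          # dependency is explicit and local

print(gross_price_common(100))       # behavior changes if anyone mutates tax_rate
print(gross_price_data(100, 0.21))   # behavior is fully determined by its arguments
```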
The Golden Rules ensure user-friendly interfaces by focusing on usability and simplicity:
- Place the User in Control: Avoid forcing actions; provide flexibility, allow undo/redo, and let users customize the interface.
- Reduce User’s Memory Load: Use familiar icons/terminology, display information clearly, minimize memorization.
- Make the Interface Consistent: Maintain uniform design elements across screens and modules.
Interface Design Principles and the Design Evaluation Cycle
Interface Design Principles:
- Consistency: Uniformity in interface elements.
- Simplicity: Easy to learn and use.
- Error Handling: Clear feedback and error messages.
- Feedback: Keep users informed about system actions.
- Flexibility: Adapt to user preferences and skill levels.
Design Evaluation Cycle:
- Requirements Analysis: Understand user needs.
- Prototype Design: Develop initial UI mockups.
- Usability Testing: Gather user feedback.
- Feedback Analysis: Identify and prioritize issues.
- Redesign: Implement improvements.
- Reevaluate: Test updated design until optimal usability is achieved.
Pattern-Based Design leverages proven solutions to address design challenges.
Pattern-Based Design Process:
- Analyze the Requirements Model: Identify problems and contexts.
- Identify Applicable Patterns: Choose high-level architectural patterns.
- Refine with Component Patterns: Use design and interface patterns for lower-level issues.
- Adapt Patterns to the Problem Context: Customize patterns to project requirements.
- Iterate: Repeat for every abstraction layer.
- Evaluate the Design: Ensure quality standards and user needs are met.
This process leads to robust, reusable, and maintainable designs.
UI Design Errors and Issues
Common errors in user interface design hinder usability and user satisfaction:
- Lack of Consistency: Confusing design elements due to varied styles and layouts.
- Excessive Memorization Requirements: Forcing users to remember complex commands or steps.
- Poor Guidance: Insufficient help options or unclear instructions.
- No Context Sensitivity: Failing to adapt to user actions or scenarios.
- Slow Responses: Delayed system feedback leading to user frustration.
- Unfriendly Design: Overly complex or technical interfaces.
Avoiding these issues improves usability, accessibility, and overall user experience.
Key Software Metrics:
- Defect Density = Number of Defects / Size (LOC or FP). Example: 50 defects in 10,000 LOC = 0.005 defects/LOC.
- Availability = MTTF / (MTTF + MTTR) × 100%. Example: If MTTF = 480 hrs and MTTR = 20 hrs, Availability = 480 / 500 × 100% = 96%.
- Reliability (MTBF): Mean Time Between Failures, computed as MTBF = MTTF + MTTR (Mean Time To Failure plus Mean Time To Repair). A higher MTBF indicates better reliability.
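A minimal sketch computing all three metrics, using the figures from the examples above:

```python
def defect_density(defects, size_loc):
    return defects / size_loc

def availability(mttf_hrs, mttr_hrs):
    return mttf_hrs / (mttf_hrs + mttr_hrs) * 100

def mtbf(mttf_hrs, mttr_hrs):
    return mttf_hrs + mttr_hrs

print(defect_density(50, 10_000))   # 0.005 defects/LOC
print(availability(480, 20))        # 96.0 (%)
print(mtbf(480, 20))                # 500 hrs between failures
```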