Index

Complete list of all articles organised by category.

Science

Fundamental definitions and assessments of software testing and quality as a science and as scientific research

  • Science & Testing - Testing is applied research into software quality using scientific approaches and evidence to help stakeholders make informed decisions
  • Computer Science & Software Engineering - Testing isn't algorithms, problem solving, solution building or process management, but knowledge of both disciplines is important for testing
  • Quality - Defined as value to some person(s), which can include expectations met and positive feelings
  • Value - Primarily an ability to solve a problem, complete a task or achieve a goal, making software a social or cognitive prosthesis or tool
  • Feelings & Emotions - Focus on people's feelings as the indicator of quality, but be careful to avoid being fooled by illusions
  • Expectations - Expectations come from mental models that stakeholders have about the software
  • Objective vs Subjective vs Relative - Quality is a subjective, relative truth with an objective, relative analysis, formed from the relationships between people and software
  • Quantitative vs Qualitative - Testing is journaling, note-taking and immersion, resulting in a written evaluation of quality carefully supported by metrics where appropriate
  • Reliability vs Validity - Avoid misleading information by ensuring consistent and accurate testing through a diversity of testing methods and approaches
  • Confirmation vs Falsification - Testing is scepticism, falsification and invalidation not confirmation, verification or demonstration
  • Deduction vs Induction - Testing is constantly building and falsifying hypotheses of quality using logical reasoning to uncover information
  • Method - Testing's scientific model and process: research question, background investigation, analysis, hypothesis, experiment, evaluation and action (reporting)
  • Story - Quality is evaluated and communicated as stories of context, known as events, that contain the who, what, where, when, why and how of value to someone

Approaches

High-level approaches to software quality management and testing that apply to every project

  • Black vs Clear Box - Testing with and without access to, or knowledge of, the software's internals to find different problems
  • Static vs Dynamic - Testing with and without executing the software's code to find different types of problems
  • Loss vs Harm - Risk/threat modelling 1: Problems occur when value isn't delivered or when additional harm occurs
  • Something vs Nothing - Risk/threat modelling 2: Problems occur when nothing happens when something is expected, but also when something happens when nothing is expected
  • Test Plan Mind Map - Coverage mind maps of requirements/risks, charters/questions, designs, environment/personnel, and journaling/noting of results

Dimensions

Different ways in which the value of a software product or project can be delivered or threatened

  • Capability (Functionality) - Ability to complete tasks or user functions in reasonable and expected ways
  • Performance - Ability to complete tasks or user functions in reasonable time and with reasonable responsiveness
  • Usability - Ability to complete tasks or user functions reasonably easily, simply and intuitively
  • Accessibility - Ability to complete tasks or user functions reasonably well for users with accessibility needs
  • Concurrency - Ability to complete tasks or user functions reasonably simultaneously with similar tasks and with other tasks
  • Efficiency - Ability to complete tasks or user functions with reasonable system resources and running cost
  • Reliability - Ability to complete tasks or user functions without unreasonable interruption and with reasonable recovery from interruptions
  • Integrity - Ability to complete tasks or user functions with reasonable accuracy
  • Security - Ability to reasonably guard against unauthorised access to complete or alter tasks, and against attempts to prevent authorised access from doing so
  • Portability - Ability to complete tasks or user functions on different devices and in different environments
  • Localisation - Ability to complete tasks or user functions reasonably well in different countries or cultures
  • Scalability - Ability of all Quality Criteria to reasonably scale with increased usage, size, complexity, market, time, etc.
  • Operability - Ability to reasonably deploy, install, configure, update and resolve problems in production with reasonable disruption and within a reasonable time scale
  • Testability - Ability to reasonably observe and configure the product for testing purposes
  • Maintainability - Ability to reasonably maintain the product for future development, including bug fixing, expansion and repurposing
  • Aesthetics - Ability to otherwise give a good impression of quality and be reasonably marketable

Consistencies

Different ways in which threats to value can be identified and described

  • Specification - Consistency with explicit claims made in project documentation
  • Reference - Consistency with software programs that share similarities in purpose or function
  • Regression - Consistency with past versions of the same software program
  • Desire - Consistency with implicit desires, wants and needs of stakeholders
  • Purpose - Consistency with the intended purpose of software
  • Self - Consistency within the software program itself
  • Legal - Consistency with governmental or judicial statutes, precedents or regulations
  • Standards - Consistency with standards, recommendations or best practices within the company or external authoritative bodies
  • Image - Consistency with the image that stakeholders want to project
  • Problem - Inconsistency with previously reported and resolved problems

Designs

Different techniques used to design tests by modelling the software to some degree

Uncategorised

Other posts