Science & Testing

Testing is applied research into software quality, using scientific approaches and evidence to help stakeholders make informed decisions


Defined as value to some person(s), which can include expectations met and positive feelings


Primarily an ability to solve a problem, complete a task or achieve a goal, making software a social or cognitive prosthesis or tool

Feelings & Emotions

Focus on people’s feelings as an indicator of quality, but be careful to avoid being fooled by illusions

Expectations & Oracles

Expectations are mental models of how software works or delivers value; built from references which can be used as oracles to evaluate and identify problems
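A comparison oracle can be sketched as code: a trusted reference provides the expectation against which the system under test is judged. The function names and the discount rule here are hypothetical, purely for illustration.

```python
# Minimal sketch of a comparison oracle: a trusted reference model of
# expected behaviour, used to evaluate the system under test.
# All names and rules here are hypothetical.

def reference_discount(price: float) -> float:
    # Trusted model of the expected 10% discount (the oracle).
    return round(price * 0.9, 2)

def system_under_test_discount(price: float) -> float:
    # Stand-in for the real implementation being tested.
    return round(price - price / 10, 2)

def check(price: float) -> bool:
    # A problem is identified when behaviour diverges from the oracle.
    return system_under_test_discount(price) == reference_discount(price)

print(all(check(p) for p in [0.0, 9.99, 100.0]))  # True when consistent with the oracle
```

Any reference (a specification, a competitor product, a previous version) can play the oracle role; the code above just makes the comparison explicit.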

Quantitative vs Qualitative

Testing is journaling, note-taking and immersion, resulting in a written evaluation of quality, carefully supported by metrics where appropriate

Reliability vs Validity

Avoid misleading information by ensuring consistent and accurate testing through diversity of testing methods and approaches

Deduction vs Induction

Testing is constantly building and falsifying hypotheses of quality using logical reasoning to uncover information


Testing follows a scientific model and process: research question, background investigation, analysis, hypothesis, experiment, evaluation, action (reporting)


Quality is evaluated and communicated as stories of context known as events that contain the who, what, where, when, why and how of value to someone
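One way to make such an event concrete is as a simple record capturing those six dimensions; the field values below are invented examples, not from the source.

```python
from dataclasses import dataclass

# Hypothetical record for reporting a quality "event": the who, what,
# where, when, why and how of value (or its loss) to someone.
@dataclass
class QualityEvent:
    who: str
    what: str
    where: str
    when: str
    why: str
    how: str

e = QualityEvent(who="support team", what="report export fails",
                 where="reports page", when="end of month",
                 why="blocks invoicing", how="request times out after 30s")
print(f"{e.who}: {e.what} ({e.why})")
```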

Black vs Clear Box

Testing with and without access or knowledge of software internals to find different problems

Static vs Dynamic

Testing with and without executing the software’s code to find different types of problems
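The distinction can be sketched in a few lines: static testing inspects the code as text or syntax without running it, while dynamic testing executes it and observes behaviour. The snippet and the problem it finds are illustrative only.

```python
import ast

# A tiny piece of source code under test (hypothetical example).
source = "def divide(a, b):\n    return a / b\n"

# Static testing: inspect the code's structure without executing it.
tree = ast.parse(source)
has_division = any(isinstance(node, ast.Div) for node in ast.walk(tree))
print("static: division operator found:", has_division)

# Dynamic testing: execute the code and observe what actually happens.
namespace = {}
exec(source, namespace)
try:
    namespace["divide"](1, 0)
    print("dynamic: no error observed")
except ZeroDivisionError:
    print("dynamic: division-by-zero problem observed")
```

Each approach surfaces different problems: the static pass flags risky constructs, the dynamic pass reveals the actual failure.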

Loss vs Harm

Risk/threat modelling 1: Problems occur when value isn’t delivered or when additional harm occurs

Something vs Nothing

Risk/threat modelling 2: Problems occur when nothing happens when something is expected, and also when something happens when nothing is expected
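These two failure modes can be sketched as checks over an observed event log: expected events that never occurred, and observed events that should never occur. The event names are hypothetical.

```python
# Hypothetical event log observed during a test run.
events = {"app_started", "unexpected_crash_report"}

expected = {"app_started", "file_saved"}    # value we expect to see
forbidden = {"unexpected_crash_report"}     # harm we expect NOT to see

# "Nothing happened when expected": expected events that never occurred.
missing = expected - events
# "Something happened when unexpected": observed events that should not occur.
surprises = forbidden & events

print(sorted(missing))    # ['file_saved']
print(sorted(surprises))  # ['unexpected_crash_report']
```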

Left vs Right

Continuous, parallel testing throughout the software DevOps lifecycle

Idea & Artefact

Software starts with an idea which is communicated and refined through artefacts, tested with heuristics and imagination


Coverage mindmaps, requirements/risk, charters/questions, designs, environmental/personnel, journaling/noting results


Ability to complete tasks in reasonable and expected ways, possibly in multiple different ways (flexibility).


Ability to complete tasks in reasonable time and with reasonable responsiveness


Ability to complete those tasks reasonably easily, simply and intuitively.


Ability for users with accessibility needs to complete those tasks reasonably well


Ability to reasonably guard against unauthorised users completing or altering tasks, and against authorised users being prevented from doing so


Ability to complete tasks reasonably accurately and when needed


Ideas or artefacts that form the basis of expectations and are utilised as comparison oracles


Authentication vs Authorisation: Confirming users are who they say they are and assigning what users can do and access


For authentication and authorisation, logical access control can be mandatory, discretionary or role-based
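As one illustration, the role-based variant can be sketched as a mapping from users to roles and from roles to permissions; the users, roles and permissions below are invented for the example.

```python
# Minimal role-based access control (RBAC) sketch.
# Roles, users and permissions are hypothetical, not a real policy.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

USER_ROLES = {"alice": "admin", "bob": "viewer"}

def is_authorised(user: str, action: str) -> bool:
    # Authentication (who the user is) is assumed to have already
    # happened; this checks authorisation (what the user may do).
    role = USER_ROLES.get(user)
    return role is not None and action in ROLE_PERMISSIONS.get(role, set())

print(is_authorised("alice", "delete"))  # True
print(is_authorised("bob", "write"))     # False
```

Mandatory and discretionary access control differ in who sets the policy (a central authority vs the resource owner), but the check at the point of use has the same shape.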


Authentication via multiple method types of knowledge, possession (token) and inherence (biometric) where needed
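A multi-factor check combines those factor types, requiring more than one independent type to pass. The verification functions below are simulated stand-ins, not real credential checks.

```python
# Sketch of multi-factor authentication across the three factor types:
# knowledge, possession and inherence. All checks are simulated.

def verify_knowledge(password: str) -> bool:
    return password == "correct horse"   # something you know (hypothetical secret)

def verify_possession(token_code: str) -> bool:
    return token_code == "123456"        # something you have (e.g. a token code)

def verify_inherence(biometric_match: bool) -> bool:
    return biometric_match               # something you are (e.g. a fingerprint)

factors_passed = [
    verify_knowledge("correct horse"),
    verify_possession("123456"),
    verify_inherence(True),
]

# MFA requires at least two independent factor types to succeed.
print(sum(factors_passed) >= 2)  # True
```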


Individual contributors: apprentice, associate, mid, senior, principal, architect; administrative officers: lead, manager, director


Processes are repeatable tasks to achieve something, modellable as flow diagrams


Measurement-based management via Theory X (external motivation) vs delegation-based management via Theory Y (internal motivation), plus values and relationships

Assurance vs Control

Proactive vs reactive approaches to putting good things in and taking bad things out of testing processes to assure and control testing quality


Process where seniors help/assist/teach/advise/guide juniors/interns/apprentices to onboard, upskill, achieve goals and get promoted, either informally/ad-hoc or as part of a programme


(One-to-one) Regular, private, non-status-update meetings with a tester-driven agenda, to build human connections, trust and rapport with their manager