Narrative-Based Assessment of Quality in Context

The goal of software testing is to qualitatively research, evaluate and describe quality, and to communicate it as information in the form of stories: the who, what, where, when, why and how of value and risk. Stories can also serve as hypotheses for testing, and are communicated to stakeholders so they can make the best business decisions. There are an infinite number of ways to evaluate and describe quality, so there are an infinite number of stories that can be told; the goal of software testing is to identify and tell the stories that matter. All stories are time and context dependent, subjective and ever changing, which is why quality must be assessed by skilled and motivated individuals who can shine a light on it in the most truthful ways possible.
The Testing and Quality Story is the result of testing. You can never know the “actual” quality of a software product— you can’t “verify” quality, as such— but by performing tests, you can make an assessment, and that takes the form of a story you tell (including bugs, curios, etc.) (Bach, 2020)
Like any story, it’s made up of various aspects that represent important contextual factors, both objective and subjective. Below is what that story structure may look like based on previous definitions, and items from the Heuristic Test Strategy Model (HTSM) (Bach, 2019):
Quality in a software product, system or service can be evaluated, described and communicated as instances of:
- Value (or harm) in some way: primarily the different dimensions of quality and their associated benefit or risk, such as the software’s ability (or inability) to act as a social prosthesis or tool. Value and harm can also be evaluated as time, cost (monetary worth) or perceived importance (or insignificance).
- Someone who matters: the stakeholders, the subjects of the subjective quality, or those affected by quality in a way that someone else cares about.
- Expectations met or exceeded: consistencies (or inconsistencies) between related things, and actual experiences of the software compared to the experiences expected from mental models.
- At some point in time: all of this can change from moment to moment and is not fixed.
- Some method of interaction: the different things a user or other stakeholder does (or doesn’t do) with the software to gain that value, or suffer that loss or harm.
- Strong/weak product element: the part(s) of the software or project that the method of interaction involves.
- Experienced emotion (or lack of emotion): the feeling that the stakeholder experiences, especially if it is born from a cognitive dissonance.
- Surrogate measure: a quantitative stand-in for value to someone, such as how much they would pay for a software product, or how they would score/rate one that has (or lacks) it.
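Taken together, these elements form a structured record. As a minimal sketch, assuming Python and with class and field names invented purely for illustration (they are not part of the HTSM), the story structure might look like this:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class QualityStory:
    """One instance of delivered or threatened value (names are illustrative)."""
    value_or_harm: str        # dimension of quality and its benefit or risk
    stakeholder: str          # someone who matters
    expectation: str          # consistency (or inconsistency) with a mental model
    observed_at: datetime     # at some point in time; can change moment to moment
    interaction: str          # what the stakeholder does (or doesn't do) with the product
    product_element: str      # the strong or weak part of the product involved
    emotion: str              # the feeling the stakeholder experiences
    surrogate_measure: Optional[float] = None  # e.g. price paid or a rating, if quantified

story = QualityStory(
    value_or_harm="completing a client's tax return on time (functional value at risk)",
    stakeholder="accountant",
    expectation="calculations complete without errors",
    observed_at=datetime(2023, 2, 8, 14, 30),
    interaction="entering figures for a tax calculation",
    product_element="tax calculation routine",
    emotion="frustration",
)
```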
As project quality is linked to product quality, the story can be expanded further to identify the root cause of any delivered or threatened value. This consequence is due to a decision (or lack of a decision) made by a project team member, driven by a strong or weak project process. In a story of risk, the latter is referred to as a “cause”, the likelihood of which increases with certain problem drivers (van Daele, 2017) (Kaner, Bach & Pettichord, 2002). This not only allows stakeholders to make decisions on the quality of the software, but also allows project team members and managers to make decisions on the quality of the project processes linked to it. For testers, this means also reporting on the quality of the testing itself and of the software project (Bolton, 2012).
Examples
- A tax calculation program could have an oversight that allows division by zero (vulnerability) that, if encountered (exploited) by an accountant (someone who matters) inputting numbers (method of interaction) for a tax calculation (value benefit), could cause an error (empirical observation). This is a functional bug (dimension) because it risks someone becoming frustrated (feeling) at being unable to complete their tax calculations for a client on time (lost value or harm); a code sketch of this bug follows these examples.
- A shopping website could be designed in such a way that it causes a bottleneck during peak usage (vulnerability) that, if encountered (exploited) by a customer (someone who matters) trying to click on a link (method of interaction) to buy some clothes (value benefit), prevents them from doing so through a persistent busy circle (empirical observation). This is a performance bug (dimension) because it risks someone becoming impatient (feeling) and leaving to buy their clothes from a competitor (lost value or harm).
- A government’s management information system (MIS) doesn’t sanitise one of its data input fields (vulnerability), which, if encountered (exploited) by a malicious user (a disfavoured stakeholder), could allow a database injection attack (method of interaction) resulting in unauthorised access (empirical observation). This is a security bug (dimension) because it risks personal or financial data being stolen (loss or harm).
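As a minimal sketch of the first example above, assuming Python and with the function name and figures invented for illustration, the oversight and an observation of it might look like this:

```python
def effective_tax_rate(tax_paid: float, taxable_income: float) -> float:
    """Naive calculation that overlooks a zero taxable income (the vulnerability)."""
    return tax_paid / taxable_income  # raises ZeroDivisionError when income is 0

# An accountant (someone who matters) inputs numbers (method of interaction)
# for a client with no taxable income this year.
try:
    effective_tax_rate(0.0, 0.0)
except ZeroDivisionError:
    # Empirical observation: the error that threatens on-time completion of
    # the client's tax calculation (lost value or harm).
    print("Bug reproduced: division by zero for a zero-income client")
```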
User Stories
In agile software development, user stories are a type of initial claim that contains some context that can feed into the quality story, and so they are a valuable asset to testers. User stories were coined as part of Extreme Programming to allow discussions around value, described in such a way that they are simple to understand and communicate (Beck, 1999). The user story was later refined into three simple aspects that represent the who, what and why of the quality story respectively (Davis, 2001) (Agile Alliance). A popular user story template contains a type of user (someone who matters) who wants to achieve a goal (method of interaction) so that they can achieve a benefit (value in some way, particularly capability):
As a <type of user>, I want <some goal>, so that <some reason>
(North, 2007), (Cohn, 2008)
While everyone else on a project is focused on stories of value and benefit, testers also consider the opposite: how the value in the user story may fail to be delivered. To achieve this, testers create “anti-user stories” that prompt discussion around risk and serve as counter-claims of lost value and harm. In the same way that user stories of value play an important part in software development, user stories of risk play an important part in software testing.
As a <type of user>, I encounter <some threat>, which leads to <lost value/harm>
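As a hedged sketch, again in Python and with invented names rather than any established schema, the two templates can be rendered as paired claim and counter-claim records:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """As a <user>, I want <goal>, so that <reason> (a claim of value)."""
    user: str
    goal: str
    reason: str

@dataclass
class AntiUserStory:
    """As a <user>, I encounter <threat>, which leads to <harm> (a counter-claim)."""
    user: str
    threat: str
    harm: str

value_claim = UserStory(
    user="customer",
    goal="to buy some clothes during a sale",
    reason="I get them at the best price",
)

# The counter-claim a tester raises for risk discussion.
risk_claim = AntiUserStory(
    user="customer",
    threat="a bottleneck during peak usage",
    harm="I give up and buy from a competitor",
)

print(f"As a {value_claim.user}, I want {value_claim.goal}, "
      f"so that {value_claim.reason}")
print(f"As a {risk_claim.user}, I encounter {risk_claim.threat}, "
      f"which leads to {risk_claim.harm}")
```

Keeping the two shapes parallel makes it easy to raise an anti-user story for every user story discussed.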
Events
While user stories of risk and value are claims of what could happen, events are when something does happen.
An event is an observed or experienced situation of change in a software product or project in response to an actor’s interaction (or lack of interaction) that has the potential to deliver or threaten value. A threat event is an observed or experienced situation that exploits a weak product or project element that leads to lost value or harm (ISO, 2009). For testers, events are the instances that provide empirical evidence to the quality story, and therefore, all quality stories contain events.
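One way to picture an event log, as a hypothetical sketch in Python rather than a formal model, is a record of who did what, when, what was observed, and whether value was delivered or threatened:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    """One observed change in the product or project (names are illustrative)."""
    when: datetime
    actor: str
    interaction: str
    observation: str
    threatens_value: bool   # True for a threat event, False for delivered value

log = [
    Event(datetime(2023, 2, 8, 18, 5), "customer",
          "clicked a link to buy clothes at peak time",
          "persistent busy circle; purchase never completed", True),
    Event(datetime(2023, 2, 8, 18, 7), "customer",
          "retried the purchase off-peak",
          "order confirmed within two seconds", False),
]

# Threat events are the instances that feed the risk side of the quality story.
for e in (e for e in log if e.threatens_value):
    print(f"{e.when:%H:%M} {e.actor}: {e.observation}")
```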
In the same way that calendar events involve a time, a location and people gathered for a particular purpose (value), software quality is gained or lost in events of time, value and people too.
Summary
Quality is evaluated and communicated as a series of stories made up of events. These events contain context on the who, what, where, when, why and how of delivered or threatened value. Testers identify, evaluate, describe and communicate the most important stories to stakeholders in a way they can easily understand and make business decisions on.
Citations
- Agile Alliance, User Story Template. [online] Glossary. Available at: Link
- Bach, J., 2019. Heuristic Test Strategy Model. Version 5.9 [online] Satisfice. Available at: Link
- Beck, K., 1999. Extreme Programming Explained: Embrace Change. 1st ed. Boston: Addison-Wesley
- Bolton, M., 2012. Braiding The Stories (Test Reporting Part 2) [online] DevelopSense. Available at: Link
- Cohn, M., 2008. Advantages of the “As a user, I want” user story template. [online] Mountain Goat Software. Available at: Link, see also: Link
- Davis, R., 2001. XP2001 Conference (see Agile Alliance)
- ISO, 2009. ISO Guide 73:2009 – Risk management – Vocabulary. Available at: Link
- Kaner, C., Bach, J. and Pettichord, B., 2002. Lessons Learned In Software Testing: A Context Driven Approach. 1st ed. New York: Wiley, p.61
- North, D., 2007. What’s in a Story? [online] Dan North & Associates Ltd. Available at: Link
- van Daele, B., 2017. RiskStorming …A cheesy name for a cool Risk focused format! [online] Slides: RiskStorming shared. p.8. Available at: Link
2021-10-16: Revised and updated
2023-02-08: Revised events