Another way value can be empirically observed, assessed and evaluated is through feelings and emotions within the people who matter. Software testing is in part identifying and describing the positive or negative feelings in any given situation involving the software-under-test. So when testing, we need to be constantly asking:
How does this make someone who matters feel?
Positive feelings are signs of a quality product, system or service; negative feelings are signs of a poor quality one. However, this presents some big problems:
- The quality of the product can only be known when it’s out there being used
- It can’t be known for sure what people think and feel about it (testers aren’t psychic)
- Different people will feel differently about it at different times (often contradictory)
The first point is why testing happens: to try to get an idea of quality before software is released. Points two and three, however, present huge challenges to testing in achieving this.
People experience feelings of unease when they hold inconsistent beliefs, values or knowledge (Gray and Bjorklund, 2014). People have expectations around software, and when they experience an inconsistency with these expectations during a quality event, cognitive dissonance arises. This elicits a negative emotional response.
Cognitive dissonance theory in psychology explains how people react when their observation of an event differs from what they expected of that event (Bhattacherjee, 2012).
How people may react is important, as this is where the risks to the software project lie. Users may try to resolve cognitive dissonance in undesirable ways, such as abandoning the software or declaring it low quality. These risk lost sales and reputation damage respectively. Thinking about cognitive dissonance is important when researching quality, and it forms important information supplied to stakeholders. By resolving bugs, developers bring the product back in line with expectations, resolve the dissonance and restore harmony to the quality relationship.
Illusions of Quality
“Is this good quality?”
The less capable tester might do a quick validation check of a feature to see if it works, say “yes”, give the rubber stamp of approval and move on. The better tester would additionally ask:
“Ah yes, but is it though? And is it really delivering the value that it’s meant to? What ways is it not delivering value? Like the person in the photo, is this a fake illusion of quality? Are we being tricked here?…”
Two people are using software. What are these two people feeling? Which one is using the high quality software and which one is using the low quality software?
Based on feelings, the person on the right is using the high quality product and the person on the left is using the low quality product. It’s also a semi-trick question, because they are both using exactly the same software in exactly the same way to achieve exactly the same goal. What one user loves or finds helpful, another may hate or find hindering. They could even be the same person at different times: something that didn’t bother them before suddenly bothers them a lot when they’re tired, in a rush, or trying to achieve something slightly different.
However, customers must speak with one voice so a product, system or service can be made. It must also satisfy the organisation creating it (customers can’t get everything they want if it bankrupts the organisation), as well as governmental and societal needs (it can’t be illegal or cause harm to others). A researcher’s job is to piece together different truths to find some underlying knowledge and understanding. A tester’s job is to piece together different truths of quality to find underlying risks (or opportunities and benefits) that can be reported to business stakeholders. This means considering all the people who may be affected by the software, and how they may be affected at different times.
Surveys such as CEIP (Customer Experience Improvement Programme) can tell the project team a lot about what end-users are doing, such as which functionality is used, but they rarely capture how customers and end-users actually feel about the software. Site visits and focus groups are good approaches, but even these rely on participants recalling, after the fact, how they felt while using the software. As mentioned with Pirsig’s definition of quality, it’s important to capture the feelings in the moment.
A simple and effective starting point is “send a smile” where users can click a smiley face or frowny face at any given moment when interacting with the software. If they have a nice moment, they click smile; if not, they click a frown. The form that pops up can ask for more details if they want to write something, otherwise positives and negatives can be tracked over time as a simple but useful measure of end-user feelings (Page, Johnston and Rollison, 2009).
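To make the idea concrete, here is a minimal sketch of what a "send a smile" tracker might look like behind the scenes. The class and method names are illustrative assumptions, not the mechanism Page, Johnston and Rollison describe; the point is simply that each click is an in-the-moment sentiment event that can be tallied over time.

```python
from collections import Counter
from datetime import datetime, timezone

# Illustrative sketch only: names and structure are assumptions,
# not the actual "send a smile" implementation.
class FeedbackLog:
    def __init__(self):
        # Each event: (timestamp, "smile" or "frown", optional comment)
        self.events = []

    def record(self, sentiment, comment=""):
        # A user clicked the smiley or frowny face at this moment;
        # the pop-up form may optionally have collected a comment.
        if sentiment not in ("smile", "frown"):
            raise ValueError("sentiment must be 'smile' or 'frown'")
        self.events.append((datetime.now(timezone.utc), sentiment, comment))

    def tally(self):
        # Positives and negatives tracked as a simple measure of
        # end-user feelings over time.
        return Counter(sentiment for _, sentiment, _ in self.events)


log = FeedbackLog()
log.record("smile")
log.record("frown", comment="Export button froze the app")
log.record("smile")
print(log.tally())  # Counter({'smile': 2, 'frown': 1})
```

Even a tally this crude gives stakeholders a trend line of in-the-moment feelings, which the surveys and focus groups above struggle to recover after the fact.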
When testing, focus on the feelings of the people who matter as an indicator of a troubled quality relationship. Always ask how the software would make the people affected by it feel. Follow the thread to find the root cause, but think carefully and don’t allow yourself to be tricked by illusions.
- Bach, J., 2009. Quality Is Dead #2: The Quality Creation Myth. [online] Satisfice, Inc. Available at: Link
- Bhattacherjee, A., 2012. Science and Scientific Research. Social Science Research: Principles, Methods, and Practices, 2nd Edition. University of South Florida, Tampa. p. 3
- Gray, P. and Bjorklund, D., 2014. Attitudes and rationalizations to achieve cognitive consistency. Psychology, 7th Edition. Worth, New York. p. 533
- Oxford Languages, 2011. In Concise Oxford English dictionary. Oxford: Oxford University Press
- Page, A., Johnston, K. and Rollison, B., 2009. How We Test Software at Microsoft. 1st ed. Redmond, WA: Microsoft, p. 310
Updated 01/01/2022: Added “moments”