Idea and artefact testing is an expand-left approach to continuous testing in which a project's ideas and artefacts are themselves tested for quality.
All software starts with an idea, so this is where testing begins. Ideas are then communicated, refined and documented via various mediums, or artefacts, which convey the idea and enable further discussion and refinement. The problem is that many of these artefacts carry their own risks and problems, which can increase the cost of the project or reduce the quality of the software product, system or service. Testing these ideas and artefacts helps identify problems at the source, keeping feedback loops small and reducing the cost of resolving them before any software has been written.
When an idea is first proposed or pitched, the first step is to ask “is this actually a good idea?”. To help answer this, additional questions can be asked based on the 5W+H, starting with why. The goal is both to learn about the idea and to help the person pitching it identify potential problems:
- Why is this idea being proposed? (value to customers and profitable to business?)
- What is the idea being proposed? What are the main risks?
- Who is this idea for?
- Where will it work?
- When will it work?
- How will it work and how will we build, test and support it? (do we have enough people, skills and funding?)
Specific questions that help clarify the idea or identify problems:
- What if…
- Yes, but…
- To clarify… (read the idea back in your own words)
It’s important that software project artefacts are tested, as any miscommunication, misunderstanding or misinterpretation (known as the “three misses”) presents a serious threat to the quality of the software project and product, and should be mitigated whenever possible. Testers therefore treat artefacts the same as the software product, applying heuristics to help identify bugs and reporting these to stakeholders. (Ashby, 2021) (Edgren, 2011) (Wiio, 1978)
- Ambiguity: What’s unclear or can be interpreted in multiple ways, especially the worst way possible?
- Opposite: If a user wants something, what do they not want, or what should never happen?
- Missing: What’s missing from the user story altogether? E.g. persona, action, value, acceptance criteria, discussion
- Complex: What complex ideas are being communicated that may need clarifying or breaking down?
- Unnecessary: What’s included that’s unnecessary and can be removed for simplicity of communication?
- Assumption: What assumptions are being made that need to be questioned and clarified?
- Implicit: What’s being left unsaid that needs to be made more explicit?
- Emphasis: How does putting emphasis on different words change the meaning of the communication? (Mary Had A Little Lamb heuristic)
- Future: What may not seem important now but could become important in the future?
- Readback: Is my understanding correct when I restate it in my own words?
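As a lightweight aid, the heuristics above can be kept as a reusable checklist. The sketch below is a hypothetical illustration (the `review_checklist` helper is my own, not part of any established tool): it pairs each heuristic with a shortened prompt so a reviewer can generate a checklist for any artefact.

```python
# Illustrative sketch only: heuristic names come from the list above;
# the prompts are abbreviated and the helper function is hypothetical.
HEURISTICS = {
    "Ambiguity": "What's unclear or can be interpreted in multiple ways?",
    "Opposite": "If a user wants something, what should never happen?",
    "Missing": "What's missing from the artefact altogether?",
    "Complex": "What complex ideas may need clarifying or breaking down?",
    "Unnecessary": "What's included that can be removed for simplicity?",
    "Assumption": "What assumptions need to be questioned and clarified?",
    "Implicit": "What's left unsaid that needs to be made explicit?",
    "Emphasis": "How does emphasising different words change the meaning?",
    "Future": "What may not seem important now but could be later?",
    "Readback": "Is my understanding correct when I restate it?",
}

def review_checklist(artefact_name: str) -> list[str]:
    """Generate one review prompt per heuristic for the named artefact."""
    return [f"[{name}] {prompt} ({artefact_name})"
            for name, prompt in HEURISTICS.items()]

# Usage: print a checklist for reviewing a specific artefact.
for item in review_checklist("login user story"):
    print(item)
```

The same checklist can be reused across any artefact type, from user stories to system diagrams, since the heuristics apply to the communication itself rather than to any particular format.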
6C (User Stories)
Components of a user story that help identify where it may be missing or lacking:
- Card: the story itself, including the value/why
- Comment: discussions, or recorded notes of discussions, in the comments
- Criteria: the acceptance criteria
- Context: where and when the user story applies, for example, use cases and examples
- Child: links to parent features, epics or child tasks
- Capacity: effort or sizing estimates, plus discussion on whether the story can be split into smaller user stories (acceptance criteria can be a great indicator)
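To make the 6Cs concrete, the components can be sketched as a simple data structure with a completeness check. This is a hypothetical illustration (the class and field names are my own, not an established tool): any empty component is flagged as potentially missing.

```python
from dataclasses import dataclass, field

# Illustrative sketch: modelling the 6Cs of a user story as a checklist
# so that absent components can be flagged during an artefact review.
@dataclass
class UserStory:
    card: str = ""                                 # Card: the story itself, including the value/why
    comments: list = field(default_factory=list)   # Comment: recorded discussion notes
    criteria: list = field(default_factory=list)   # Criteria: acceptance criteria
    context: str = ""                              # Context: where/when the story applies
    children: list = field(default_factory=list)   # Child: links to features, epics or tasks
    capacity: str = ""                             # Capacity: effort or sizing estimate

    def missing_components(self) -> list:
        """Return the names of 6C components that appear to be absent."""
        checks = {
            "Card": bool(self.card),
            "Comment": bool(self.comments),
            "Criteria": bool(self.criteria),
            "Context": bool(self.context),
            "Child": bool(self.children),
            "Capacity": bool(self.capacity),
        }
        return [name for name, present in checks.items() if not present]

# Usage: a story with only a Card and one acceptance criterion.
story = UserStory(
    card="As a shopper, I want to save my basket so I can return later.",
    criteria=["Basket persists after logout"],
)
print(story.missing_components())  # flags Comment, Context, Child, Capacity
```

An empty component is not automatically a bug, but it prompts the conversation the 6Cs are meant to trigger: is the discussion recorded, is the context understood, and has sizing been considered?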
In addition to testing the artefact itself, this also allows the tester to start thinking about how they would test the actual application, including what they need to plan in terms of approaches, techniques and logistics. One good approach testers use is imagination: testing artefacts as if the actual application were in front of them. This can be applied to any artefact, but works best with visual ones such as designs, models and prototypes.
For example, with a system diagram, testers can start to think about how and where to test the system at various points. With a user interface, testers can imagine interacting with the on-screen elements and anticipate what should happen (an example of using a reference as an oracle to form an expectation).
Imagination testing helps the tester assess the quality of the artefact using the heuristics above; for example, wherever they have to make assumptions, this helps identify missing or problematic assumptions from the author. The same goes for the rest of the heuristics: anything ambiguous becomes clearer once the tester tries to use the application in their mind.
- Ashby, D., 2021. Continuous testing throughout the SDLC. [online] Ministry of Testing. Available at: Link
- Edgren, R., 2011. The Little Black Book on Test Design. [online] thoughts from the test eye. Available at: Link, pp. 12–14.
- Wiio, O., 1978. Wiion lait – ja vähän muidenkin (Wiio’s Laws – And Some Others). Weilin+Göös: Espoo.