Testing The Implicit!
Testing Implicit Requirements
With the wider adoption of Agile software development, the quality of system requirements has improved significantly, especially within teams that do proactive backlog grooming and sprint planning. But requirements remain the main source of defects, mainly because we write software to satisfy a set of expectations, and it becomes a problem if any expectation is missing, incomplete or ambiguous.
There is also the factor of testing implicit requirements, which we will discuss in this article.
What Are Implicit Requirements?
There is really no such established terminology in the software industry; I coined the term myself, and I hope it will become clear over the next few paragraphs.
As humans we take a lot of things for granted. For example, when you press “Tab” on your keyboard you expect the focus to move from left to right and then down. Another example: a user expects a mobile app to mute, go to the background and display an incoming call when one arrives. Sometimes the operating system enforces certain behaviors on the application to meet these requirements.
There are expectations of products that were core requirements in the past; with time they became so fundamental that we no longer specify them. They become implicit requirements because, intentionally or not, we leave them unstated. Specifying all requirements of this type may also add too much noise and even distract from the core features of the application we are trying to implement. This is where Agile teams excel, because such details are spelled out during discussions rather than in complicated documentation.
The above examples are trivial and mostly constitute positive scenarios. There are more serious software defects that result from not testing implicit requirements. Let’s look at three such cases:
Case study 1 – Uber Implicit Requirements
A lawyer in France is suing Uber, the ride-hailing service, for damages in the millions of euros because his wife divorced him after the journey notifications sent by Uber revealed his travel routes to a lover. The issue, apparently, was that he once booked a journey from his wife’s phone, and the system sends notifications to ALL phones used for booking under the same account. So his wife was also receiving all his journey notifications on her phone and became suspicious of his regular travels to a secret location.
Case study 2 – Supermarket Promotion Implicit Requirements
A supermarket promotion in one store got propagated to several other stores across the country, and the chain only realized after suffering financial losses, because those other stores were not aware of the promotion and therefore did not stop it.
Case study 3 – Vodafone Implicit Requirements
I had held a Vodafone small business account for about 10 years, but last November I decided to port a newly added mobile number to another Vodafone mobile number, which happened to be a consumer number. My whole account was turned from business to consumer, a credit check was run on me, and a credit reference account was created in my name with the major credit reference agencies. All my lines were deactivated except the new one, and they were restored only after I formally lodged a complaint. Vodafone is still working on restoring my account to business.
These cases are serious because of their impact on those affected, and the majority of such cases go unreported. There is a pattern here: the software was working according to the written requirements; it was doing what it was expected to do, and presumably all acceptance criteria were met. But it was also doing more than it was expected to do, because it did not satisfy some implicit requirements. Let’s try to identify the implicit requirements in the above examples:
Case 1: The implicit expectation of the user is that they have booked a journey from a specific phone and that is where they are reachable at that particular moment. They are not expecting notifications on any other phone, because it just does not make sense.
Case 2: A store manager is responsible for one particular store and has no visibility into, or direct interest in, other stores. They want to get rid of a few products in their store, probably because they want new stock or an expiry date is approaching. Their implicit requirement is that their authority will be exercised locally, not regionally or nationally.
Case 3: Number portability is a legal requirement with which every network provider must comply; it has nothing to do with the type of service account. My expectation was that after the operation the source number would be deactivated and all services transferred to the target number (at least that is how I tested it in the early days of my career), nothing more.
Explore Implicit Requirements Through Negative Testing!
The list of things the software should not do can be long, and it is mostly undocumented or documented only minimally. This is partly because we cannot predict every possible course of action, especially when behavior is driven by events and user actions. Because of this, the quality of negative testing depends heavily on the experience and imagination of the QA involved. This is where automation cannot replace humans, because creativity and critical thinking are not something that can be programmed.
While some negative testing can and should be specified, many more scenarios come to life when we do exploratory testing, asking “what if” questions and exploring the different threads. Having already confirmed the software can do what it is expected to do, and with the working product in their hands, QA will explore in depth the behavior of the software under various conditions, chains of actions, combinations of input data, etc.
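As a concrete sketch of this kind of “what if” exploration, the snippet below crosses valid, boundary and hostile inputs against a system under test and records the combinations that violate an implicit requirement. The discount calculator, its inputs and the promo code are all invented here purely for illustration; a real session would hammer the actual product.

```python
from itertools import product

# Hypothetical discount calculator standing in for a system under test;
# the function, inputs and promo code are invented for illustration.
def apply_discount(price, quantity, code):
    if code == "SAVE10":
        price = price * 0.9
    return round(price * quantity, 2)

# Exploratory "what if" grid: valid, boundary and hostile inputs combined
prices = [0.0, 9.99, -1.0]
quantities = [0, 1, 1_000_000_000]
codes = ["", "SAVE10", "save10"]

surprises = []
for price, qty, code in product(prices, quantities, codes):
    total = apply_discount(price, qty, code)
    # Implicit requirement: a till should never produce a negative total
    if total < 0:
        surprises.append((price, qty, code, total))

print(f"{len(surprises)} combinations violated the implicit requirement")
```

No single combination here is exotic on its own; it is the systematic crossing of inputs that surfaces the behavior nobody wrote down.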
QA Challenges for Testing Implicit Requirements
To be effective in their work, QA, like any other team member, need space and support, especially from management. But there are other challenges too. These two are the most common ones I have come across:
- Most defects found through exploratory testing cannot be tied to documented requirements. From the developer’s point of view, each is an additional requirement. To make things worse, project managers and product owners tend to give these defects low priority, arguing that they are edge cases that will never happen in production. True, they may not happen in production. What we fail to realise is that a defect may be small but may be hiding one or more serious ones behind it. The more defects we leave unresolved, regardless of their impact, the more risk builds up. Furthermore, someone with malicious intent, or just for fun, will explore those extreme conditions (I am always tempted to enter 1 billion carrier bags when prompted at a supermarket self-checkout, just to see what happens).
- There has always been time pressure on QA to deliver on time; development delays squeeze out testing time. With Agile the situation may be even worse, especially in teams that follow Scrum. Many times I have seen stories completed in the last day or hours of the sprint and rushed through QA in order to hit the velocity target. There is barely any time left for proper exploratory testing; in fact, exploratory testing is replaced with ad-hoc testing confined to confirming the acceptance criteria.
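The “1 billion carrier bags” probe above is really a boundary-value test in disguise. A minimal sketch, assuming a hypothetical self-checkout validator with an invented cap of 99 bags (the function name and limit are mine, not any real system’s):

```python
# Hypothetical self-checkout validator; MAX_BAGS = 99 is an assumed
# business rule, not something a real system documents.
MAX_BAGS = 99

def accept_bag_count(n):
    """Reject counts outside a sane range instead of trusting the keypad."""
    return isinstance(n, int) and 0 <= n <= MAX_BAGS

# The "1 billion carrier bags" probe, plus the boundaries around the cap
cases = {0: True, 99: True, 100: False, -1: False, 1_000_000_000: False}
for value, expected in cases.items():
    assert accept_bag_count(value) == expected
print("all boundary probes behaved as expected")
```

The point is not the validator itself but the habit: for every numeric input, probe zero, the stated limit, one past it, a negative value, and something absurdly large.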
I don’t think there are golden rules in the software industry; we can only talk about best practices. Each team will need to find out what works best for them and keep optimizing. There are a few things that have worked for me and are worth trying:
- Ensure functional testing and automation are part of development so that both developers and QA can work on them; this will free up time for exploratory testing when the story is completed. I have seen some brilliant QA people who actually test at the developer’s desk, and together they fix issues right there unless the fix is more complicated. When the story moves to QA, it is not about validating the acceptance criteria but about exploring the functionality under different conditions and data.
- QA need to be pragmatic and able to look beyond their desk for the benefit of the team and company as a whole. However, we also need to challenge decisions and strive for a lower number of unresolved defects, regardless of their priorities. Other than prioritisation, “broken window” fixing and refactoring are two other ways of getting defects resolved, especially the minor ones.
- Risk analysis of defects facilitates better decision making. On one occasion we found that a store supervisor could delete their own account in a POS system, which locked out all cashiers and other staff under their supervision (this was an irreversible operation). The project manager said “no one is so stupid as to delete their own account” and did not want to prioritise the defect. I argued that if an employee who has been laid off can come back with a gun to kill their co-workers, I see no reason why they would not take advantage of this easy target and delete their account while clearing their desk. The message got across and the defect was fixed.
- Reduce waste – your time is precious, so use it wisely; push back against any bureaucracy. I still see a lot of teams that document complicated test results, some spending considerable time on this. Sure, there are exceptions like compliance testing, where you need this evidence, but my view is that rules and regulations are created by and for the benefit of human beings, and so they need to change, adapt or sometimes even be dropped if the benefit is diminished.
- Build sound product knowledge – the better QA know the domain, the application of the system and the business context, the more effective they become. Case studies 2 and 3 are classic examples where good product and domain knowledge would alert QA to which scenarios to explore. Someone with less product knowledge may not know about store and regional managers (and the scope of their responsibilities), or about small business, corporate and consumer accounts and their impact on number portability.
- Technical knowledge – understanding the underlying technology and architecture is critical, especially in complex systems. With good technical knowledge, QA will not forget to test things like flooding and draining queues, log rotations, data integrity, race conditions and deadlocks, circuit breakers, application properties, etc.
- The bigger picture – this is one of the challenges I personally struggle with most in Agile teams, especially when I join a project a bit late. Because of the iterative nature of Agile, we tend to lose the bigger picture and focus on narrow vertical functionalities, sometimes unknowingly forgetting about the impacts and implications of a change. Knowledge sharing, for example via QA forums, is one way to help with this. Checking the architectural diagrams, reminding ourselves of the personas of the system, and sneaking into dev forums and code reviews can all help (yes, it sounds strange, and some might try to kick you out, but try to keep a low profile). I would really love to hear how others cope with this.
Note: By technical knowledge I don’t mean the ability to write a Java/Scala program, but the ability to understand the architectural landscape of the system: the components and how each works (individually and together), the operating systems, databases and data structures, queues and topics, integration points and mechanisms, etc.
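To illustrate the kind of probe this technical knowledge enables, here is a minimal race-condition sketch. The Counter class is invented for illustration; a real probe would hammer the actual shared resource (a queue, a balance, a session store) from concurrent clients.

```python
import threading

# Minimal race-condition probe. The Counter class is invented for
# illustration; a real probe would hammer the actual shared resource.
class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self):
        with self.lock:      # removing the lock may let updates be lost,
            self.value += 1  # since read-modify-write is not atomic

def hammer(counter, times):
    for _ in range(times):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=hammer, args=(counter, 100_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock in place the count is exact; without it, it often is not.
assert counter.value == 400_000
print(counter.value)
```

A QA without this background would never think to run four writers against one counter; one with it will reach for exactly this kind of test when the architecture diagram shows shared state.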
We may not be able to spell out all the things a piece of software should not do, and there is no defect-free software, but we can reduce the risk by doing more, and proper, exploratory testing. We should always ask and look out for what else the software can do, to discover as many implicit requirements as possible.