To make most effective use of a model, it needs to pervade every medium of communication.
― Eric Evans, DDD Blue Book
The more extensively Ubiquitous Language (UL) is utilized, the more effortless the flow of understanding becomes. Translation fades away, and the language becomes integral to every activity the developers and domain experts engage in.
Tests are one of the most valuable artifacts in a software project. They contain deep domain knowledge. Moreover, they are an excellent way to communicate about the model. This article explores a set of heuristics for effective use of Ubiquitous Language in tests and emphasizes their role as an effective means for communication and enhancing the understanding of the model and problem domain.
Please note that the heuristics presented here are not a comprehensive list, but rather some among many. They are also context-sensitive, like any other heuristic.
#1 Don’t Write Unit Tests, Capture Conversations and Write Specifications
Developers and domain experts discuss the domain, the model, and how it can be used in that particular context. They analyze different scenarios and behaviors to see how the model fits the situation. These conversations give us a lot of insight into what the problem is and how our software should respond and behave. Conversations of this type can be used to specify software in a common language, and those specifications can then be turned into executable tests to ensure the software works.
Similar concepts have been discussed in terms of BDD, Specification by Example, or Example-guided development. They all have the same core practice, but are explained differently or from a slightly different perspective or level of abstraction. The approaches cover a lot more than this, and we won't be discussing them here. The focus here is on shifting from writing tests to specifying software, or "writing specifications". And since we are discussing unit testing, it would be more accurate to call them "Low-Level Specifications1".
Let’s look at an example:
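The original example is not reproduced here; as an illustration of the kind of test being criticized, here is a hypothetical sketch in Python (the surrounding tooling suggests the original was C#, and every name below is assumed):

```python
# Hypothetical model code, for illustration only.
class CancellationPolicy:
    def __init__(self, rules):
        if rules is None or len(rules) == 0:
            raise ValueError("rules")
        self.rules = list(rules)

# Named after the constructor under test, not the business rule it protects.
def test_ctor_raises_valueerror_when_rules_arg_is_none_or_empty_list():
    try:
        CancellationPolicy(rules=[])
        assert False, "expected ValueError"
    except ValueError:
        pass
```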
As you can see here, this test emphasizes the function being tested instead of what the code is supposed to do in business terms. It has no real meaning from the perspective of a domain expert. It's filled with technical stuff and is not the kind of conversation that people might have around a model.
Is there a better way to write it?
The test is built around the concept of "cancellation policy", which deals with rules and penalties for cancelling a ticket. The cancellation policy for a ticket cannot be "empty", meaning it must include at least one rule, or else it is useless. Here's an example to illustrate this insight:
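A specification-style version might read like this (a hypothetical Python sketch; names are assumed, not taken from the article):

```python
# The same rule, written as a low-level specification (hypothetical names).
class CancellationPolicy:
    def __init__(self, rules):
        if not rules:
            raise ValueError("A cancellation policy must include at least one rule")
        self.rules = list(rules)

def cancellation_policy_must_include_at_least_one_rule():
    try:
        CancellationPolicy(rules=[])
        assert False, "a cancellation policy with no rules should be rejected"
    except ValueError:
        pass
```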
Or, maybe:
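For instance, sticking even closer to how a domain expert might phrase the rule (a hypothetical Python sketch; names assumed):

```python
# Alternative wording of the same specification (hypothetical names).
class CancellationPolicy:
    def __init__(self, rules):
        if not rules:
            raise ValueError("An empty cancellation policy is useless")
        self.rules = list(rules)

def a_ticket_cannot_have_an_empty_cancellation_policy():
    try:
        CancellationPolicy(rules=[])
        assert False, "expected the empty policy to be rejected"
    except ValueError:
        pass
```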
I don’t necessarily want to prescribe a “way to write unit tests” or any kind of “template” or something like that. Basically, I'm suggesting that we should focus on specifying the model rather than testing it, and mimic real conversations by sticking close to the ubiquitous language.
#2 Keep Your Technical Jargon To a Minimum
As we're specifically discussing unit tests, we shouldn't be surprised to see technical stuff in tests, due to their lower abstraction level. The technical details can be tolerated as long as the behavior is somehow dependent on the technical concept, such as an algorithm. However, what we often see in projects is a bloat of technical jargon that makes the unit test so technical and low level that only a developer can understand it (hopefully).
In some cases, technical jargon appears in the definition. Consider this example:
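A sketch of such a technically worded test (hypothetical model and names, in Python rather than the original's likely C#):

```python
# Hypothetical model, for illustration only.
class Order:
    def __init__(self, total):
        self.total = total
        self.discount = 0

    def apply_loyalty_discount(self):
        if self.total > 100:
            self.discount = self.total / 10

# Describes fields, flags, and raw numbers instead of the business rule.
def test_apply_loyalty_discount_sets_discount_field_to_total_div_10_when_total_gt_100():
    sut = Order(total=120)
    sut.apply_loyalty_discount()
    assert sut.discount == 12
```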
The scenario is very simple, but described in a very technical way. Aside from the fact that such tests are coupled to the code structure (as you will see in #3), it is all but unreadable from a business perspective. Every one of these terms has to be translated for a domain expert in order to make sense.
As an alternative, you can write something like this:
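One possible rewrite, sketched in Python with assumed names:

```python
# The same behavior, specified in business terms (hypothetical names).
class Order:
    def __init__(self, total):
        self.total = total
        self.discount = 0

    def apply_loyalty_discount(self):
        if self.total > 100:
            self.discount = self.total / 10

def orders_over_100_dollars_earn_a_10_percent_loyalty_discount():
    order = Order(total=120)
    order.apply_loyalty_discount()
    assert order.discount == 12
```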
Besides the definition, technical terms also appear in the body of the test, particularly during the "Fixture Setup" or "Arrangement" phase.
This is a sample unit test from the eShop project (Link):
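The original C# test is not quoted verbatim here; the following is a rough Python paraphrase of its shape, with every name assumed for illustration rather than taken from the eShop repository:

```python
# Rough paraphrase of the shape of such a test; all names are assumed.
class BasketItem:
    def __init__(self, product_id, quantity):
        self.product_id, self.quantity = product_id, quantity

class CustomerBasket:
    def __init__(self, buyer_id):
        self.buyer_id, self.items = buyer_id, []

class InMemoryBasketRepository:
    def __init__(self):
        self._baskets = {}
    def save(self, basket):
        self._baskets[basket.buyer_id] = basket
    def get_by_buyer_id(self, buyer_id):
        return self._baskets.get(buyer_id)

class FakeIdentityService:
    def __init__(self, user_id):
        self.user_id = user_id

class BasketService:
    def __init__(self, repository, identity):
        self._repository, self._identity = repository, identity
    def get_basket_for_current_user(self):
        return self._repository.get_by_buyer_id(self._identity.user_id)

def test_get_basket_returns_basket_of_authenticated_user():
    # Arrangement, first section: user with ID 1 is given a basket with items
    basket = CustomerBasket(buyer_id="1")
    basket.items.append(BasketItem(product_id=7, quantity=2))
    repository = InMemoryBasketRepository()
    repository.save(basket)
    # Arrangement, second section: user with ID 1 is set as the logged-in user
    identity = FakeIdentityService(user_id="1")
    service = BasketService(repository, identity)

    result = service.get_basket_for_current_user()

    assert result is basket
```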
The name and definition of the test are nothing special, and the body is packed with technical details. What the body actually says is simply: “A user can see his own basket (not other users’).”
There are two sections in the arrangement phase:
Lines 1-4 → A user with ID 1 is given a basket with some items
Lines 5-8 → The user with ID 1 is set as the logged-in user
You can think of an alternative like this:
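A sketch of that alternative, using hypothetical builder helpers (Python; all names assumed):

```python
# Hypothetical helpers that hide the technical wiring behind the scenario.
class Basket:
    def __init__(self, user_id, items):
        self.user_id, self.items = user_id, items

class _BasketBuilder:
    def __init__(self):
        self._user_id, self._items = "any-user", []
    def for_user(self, user_id):
        self._user_id = user_id
        return self
    def containing(self, item):
        self._items.append(item)
        return self
    def build(self):
        return Basket(self._user_id, self._items)

def a_basket():
    return _BasketBuilder()

def an_item():
    return {"product_id": 7, "quantity": 1}

class _BasketService:
    def __init__(self, logged_in_user, baskets):
        self._user = logged_in_user
        self._baskets = {b.user_id: b for b in baskets}
    def get_basket_for_current_user(self):
        return self._baskets.get(self._user)

def basket_service(with_logged_in_user, known_baskets):
    return _BasketService(with_logged_in_user, known_baskets)

def a_user_can_see_his_own_basket():
    basket = a_basket().for_user("1").containing(an_item()).build()
    service = basket_service(with_logged_in_user="1", known_baskets=[basket])
    assert service.get_basket_for_current_user() is basket
```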
The original test uses technical terms that can simply be eliminated with some helpers (like builders for example) in order to keep the focus on what matters and to better express ubiquitous language.
These low-level specifications can also be specified using lightweight BDD frameworks. Here is a sample using BDDfy tool:
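BDDfy itself is a .NET library, so the following is only a rough Python analogue of the same idea: step methods named in Given/When/Then style, discovered and run in order to produce a readable report (all names assumed, and the tiny runner is not BDDfy's actual API):

```python
# A plain-Python sketch of the Given/When/Then style BDDfy encourages.
class UserSeesHisOwnBasket:
    def given_user_1_has_a_basket_with_items(self):
        self.baskets = {"1": ["ticket"]}

    def when_user_1_is_logged_in_and_requests_his_basket(self):
        self.result = self.baskets.get("1")

    def then_he_sees_his_own_basket(self):
        assert self.result == ["ticket"]

def bddfy(scenario):
    """Run given/when/then steps in order and report them in plain language."""
    for prefix in ("given", "when", "then"):
        for name in [n for n in vars(type(scenario)) if n.startswith(prefix)]:
            getattr(scenario, name)()
            print(name.replace("_", " ").capitalize())

bddfy(UserSeesHisOwnBasket())
```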
#3 Focus on Behavior, Not Structure
Many of us tend to write and structure our unit tests according to the structure of our code. This can take various forms, and in the extreme case the tests mirror the code's structure (e.g. an XTests class for every X class). The previous examples also demonstrate this dependence on structure.
The more structure-aware you make your unit tests, the more tightly coupled they become with your code. Consequently, changes in the production code can affect the tests in a big way, which is terrible. Have you seen teams who go through "unloading tests" because it costs a lot to make them work with the new structure, even though the behavior hasn't changed? That's probably due to high coupling. Tests should have their own design instead of blindly following production code structure2.
Besides resulting in highly-coupled tests, this also severely harms the language of tests. Let’s look at an example:
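A sketch of such a structure-driven test (hypothetical Passenger model, in Python; the original was likely C#):

```python
# Hypothetical model, for illustration only.
class Passenger:
    def __init__(self, full_name, has_wheelchair, wheelchair_type=""):
        self.full_name = full_name
        self.has_wheelchair = has_wheelchair
        self.wheelchair_type = wheelchair_type if has_wheelchair else ""

# Mirrors constructor parameters and properties, not any domain behavior.
def test_wheelchair_type_property_is_empty_when_has_wheelchair_param_is_false():
    passenger = Passenger("Jane Doe", has_wheelchair=False,
                          wheelchair_type="manual")
    assert passenger.wheelchair_type == ""
```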
The test is obviously flawed since it relies heavily on the structure of the class "Passenger". Moreover, the language doesn't explain what is going on here; we just know that when one parameter is false, another property should be empty.
Tests of this type should always be avoided since they are highly-coupled to code, resist changes, and say nothing about the domain. As we mentioned before: specify the behavior instead of testing the code and structure.
#4 Pay Attention To The Story
Storytelling reveals meaning without committing the error of defining it.
― Hannah Arendt
Storytelling seems to be evolutionary hard-wired into our brains3. A story allows us to experience everything as if we were a part of it. Listening to great stories is not just passive listening, it's an active experience.
Each test has a story to tell about the domain. For the storyteller to communicate a meaning through the story, he or she must develop a consistent, relevant, and meaningful story. Let’s take a look at an example (Link):
It seems that the user paid for one order with three gift cards: $30, $20, and $5 ($55 in total). Going further, the remaining gift-card balance in his/her account is expected to be $45. Wait, what makes that happen? This is an inconsistent story, and as a communication medium it fails to deliver its message.
At the top of this test class, there is a method for setting up the test:
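A sketch of what such a shared setup method might look like (hypothetical; pytest's `setup_method` convention plays the role of the original's setup):

```python
# Hypothetical shared fixture; all names assumed.
class Account:
    def __init__(self, gift_card_balance):
        self.gift_card_balance = gift_card_balance

class GiftCardTests:
    def setup_method(self):
        # Every test in the class silently starts from a $100 balance.
        self.account = Account(gift_card_balance=100)
```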
Only after reading this does the story make sense and seem coherent. If this data is such an important element of the story, why should I have to go looking for it somewhere else?
This is probably motivated by reusability. When it comes to code reuse, tests should not follow the same guidelines as production code. When writing tests with a heavy focus on code reuse, we often produce incomplete, inconsistent, and unclear stories4.
There's a lot to cover here, so I'll probably write a separate post about this. We can, however, list some of the properties that make a test a good story:
Clear Context
Providing the reader with accurate and precise context for the story.
Relevant Information
There should be purpose and impact behind each piece of information in the test. Data that is useless or irrelevant to the story only confuses the reader.
Self-Sufficiency & Independence
All necessary elements should be in the story in order to convey the story's meaning.
Consistent Abstraction Level
Mixing abstraction levels makes a story difficult to follow and understand.
#5 “Punchline” Is Important
A punchline is the climax or conclusion of a joke, story, or humorous narrative. It's the final part that delivers an unexpected, clever, or funny twist, generating laughter or amusement. The punchline is probably the most significant part of a joke. Without a strong punchline, a joke is likely to leave you feeling unsatisfied.
Meaningful Assertions
To me, an assertion or verification in a test is like the punchline of a joke. A test without a meaningful assertion cannot communicate its intent and you may end up asking "So what?".
Let’s look at this example:
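A hypothetical sketch of such a test (Python; the seat model and its states are assumed):

```python
# Hypothetical seat model, for illustration only.
class Seat:
    FREE, RESERVED, LOCKED = "Free", "Reserved", "Locked"

    def __init__(self):
        self.state = Seat.FREE

    def reserve(self):
        self.state = Seat.RESERVED

    def lock(self):
        self.state = Seat.LOCKED

def test_locking_a_reserved_seat():
    seat = Seat()
    seat.reserve()
    seat.lock()
    assert seat.state == Seat.LOCKED  # so what?
```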
It basically tests whether the feature of locking a reserved seat works, by checking whether the seat's state changes to "Locked" after locking it. But what does locking mean? Apparently, a significant part of the conversation is not captured here: what actually happens when a seat is locked.
According to the code, "Locked" means you can't sell it to anyone and it's temporarily unavailable to buy. In this case, instead of just checking the state, we can do something like this:
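A Python sketch of the improved specification (the model, the error type, and the custom assertion name are all assumed):

```python
# Hypothetical model; locking makes a seat temporarily unavailable to buy.
class SeatTemporarilyUnavailableError(Exception):
    pass

class Seat:
    def __init__(self):
        self.state = "Reserved"

    def lock(self):
        self.state = "Locked"

    def sell(self, buyer):
        if self.state == "Locked":
            raise SeatTemporarilyUnavailableError("seat is locked")

def assert_is_temporarily_unavailable_to_buy(seat):
    """Custom assertion: a locked seat cannot be sold to anyone."""
    assert seat.state == "Locked"
    try:
        seat.sell(buyer="anyone")
        assert False, "selling a locked seat should be rejected"
    except SeatTemporarilyUnavailableError:
        pass

def a_reserved_seat_becomes_temporarily_unavailable_when_locked():
    seat = Seat()
    seat.lock()
    assert_is_temporarily_unavailable_to_buy(seat)
```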
We have added an assertion to step 1 to capture the concept of preventing sales from happening. In Step 2, both assertions were moved to a custom assertion to make the test language more expressive and more closely aligned with the domain language.
Explanations To Clarify “Why”
You can sometimes clarify the assertion by explaining why it is that way. A short text can help reduce the effort needed to understand the story by putting the right explanation in the right context.
Consider this example:
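The "Because" parameter suggests FluentAssertions in .NET; in Python, the assertion message can play the same role. A sketch with an assumed pricing rule:

```python
# Hypothetical pricing rule; the "because" text carries the explanation.
def price_for(age, base_price=100):
    return 0 if age < 2 else base_price

def an_infant_travels_free_of_charge():
    ticket_price = price_for(age=1)
    assert ticket_price == 0, (
        "because passengers under 2 years old are not charged for a seat"
    )
```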
Even though the title of the test expresses the general scenario, the string provided as the "Because" parameter of the assertion clarifies this particular case.
Final Thoughts
Testing is more than just technical checking or validation; testing is a way to communicate how software should behave. By using the ubiquitous language and telling the story of the domain, as well as specifying rather than testing, we're on the path to using tests as a great communication tool.
I first came across this phrase in the book “BDD in Action” by John Ferguson Smart. It’s an interesting resource for anyone interested in BDD.
The blog post titled "DRY vs DAMP in Unit Tests" by Vladimir Khorikov is a good take on this topic.