Liskov substitution principle (LSP) – Architectural Principles

The Liskov Substitution Principle (LSP) states that in a program, if we replace an instance of a superclass (supertype) with an instance of a subclass (subtype), the program should not break or behave unexpectedly.

Imagine we have a base class called Bird with a Fly method, and we add the Eagle and Penguin subclasses. Since a penguin can’t fly, replacing an instance of the Bird class with an instance of the Penguin subclass might cause problems because the program expects all birds to be able to fly.

So, according to the LSP, our subclasses should behave so that the program still works correctly, even if it doesn’t know which subclass it’s using, preserving system stability.

Before moving on with the LSP, let’s look at covariance and contravariance.
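To make the bird example concrete, here is a minimal sketch of what such a hierarchy could look like (the Bird, Eagle, and Penguin classes are illustrative, not a listing from the book):

public class Bird
{
    // The base type promises that every bird can fly.
    public virtual string Fly() => "Flying!";
}
public class Eagle : Bird
{
    // Eagles honor the contract; substituting an Eagle for a Bird is safe.
    public override string Fly() => "Soaring!";
}
public class Penguin : Bird
{
    // Penguins break the contract: callers expecting any Bird to fly
    // now get an exception, which violates the LSP.
    public override string Fly()
        => throw new NotSupportedException("Penguins cannot fly.");
}

A method that accepts a Bird and calls Fly works with an Eagle but blows up with a Penguin, which is precisely the unexpected behavior the LSP warns us about.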

Covariance and contravariance

We won’t go too deep into this, so we don’t move too far away from the LSP, but since the formal definition mentions them, we need at least a minimal understanding of covariance and contravariance.

Covariance and contravariance represent specific polymorphic scenarios. They allow reference types to be converted into other types implicitly. They apply to generic type arguments, delegates, and array types. Chances are you will never need to remember this, as most of it is implicit. Nevertheless, here’s an overview:

  • Covariance (out) enables us to use a more derived type (a subtype) instead of the supertype. Covariance is usually applicable to method return types. For instance, if a base class method returns an instance of a class, the equivalent method of a derived class can return an instance of a subclass.
  • Contravariance (in) is the reverse situation. It allows a less derived type (a supertype) to be used instead of the subtype. Contravariance is usually applicable to method argument types. If a method of a base class accepts a parameter of a particular class, the equivalent method of a derived class can accept a parameter of a superclass.

Let’s use some code to understand this more, starting with the model we are using:

public record class Weapon { }
public record class Sword : Weapon { }
public record class TwoHandedSword : Sword { }

This is a simple class hierarchy: the TwoHandedSword class inherits from the Sword class, and the Sword class inherits from the Weapon class.

Covariance

To demo covariance, we leverage the following generic interface:

public interface ICovariant<out T>
{
    T Get();
}

In C#, the out modifier explicitly specifies that the generic parameter T is covariant. Covariance applies to return types, hence the Get method that returns the generic type T.

Before testing this out, we need an implementation. Here’s a barebone one:

public class SwordGetter : ICovariant<Sword>
{
    private static readonly Sword _instance = new();
    public Sword Get() => _instance;
}

Here, the generic T parameter is of type Sword, a subclass of Weapon. Since covariance means you can return (output) the instance of a subtype as its supertype, using the Sword subtype allows exploring this with the Weapon supertype. Here’s the xUnit fact that demonstrates covariance:

[Fact]
public void Generic_Covariance_tests()
{
    ICovariant<Sword> swordGetter = new SwordGetter();
    ICovariant<Weapon> weaponGetter = swordGetter;
    Assert.Same(swordGetter, weaponGetter);
    Sword sword = swordGetter.Get();
    Weapon weapon = weaponGetter.Get();
    var isSwordASword = Assert.IsType<Sword>(sword);
    var isWeaponASword = Assert.IsType<Sword>(weapon);
    Assert.NotNull(isSwordASword);
    Assert.NotNull(isWeaponASword);
}

The assignment of swordGetter to the ICovariant<Weapon> variable represents covariance, showing that we can implicitly convert the ICovariant<Sword> subtype to the ICovariant<Weapon> supertype.

The code after that showcases what happens with that polymorphic change. For example, the Get method of the weaponGetter object returns a Weapon type, not a Sword, even if the underlying instance is a SwordGetter object. However, that Weapon is, in fact, a Sword, as the assertions demonstrate.

Next, let’s explore contravariance.
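While the contravariance listing itself is not part of this excerpt, here is a minimal sketch of what it could look like, mirroring the covariance example (the IContravariant interface, WeaponSetter class, and test are illustrative):

public interface IContravariant<in T>
{
    void Set(T value);
}

public class WeaponSetter : IContravariant<Weapon>
{
    private Weapon? _weapon;
    public void Set(Weapon weapon) => _weapon = weapon;
}

[Fact]
public void Generic_Contravariance_tests()
{
    // The in modifier allows assigning an IContravariant<Weapon>
    // (closed over the supertype) to an IContravariant<Sword> variable
    // (closed over the subtype).
    IContravariant<Weapon> weaponSetter = new WeaponSetter();
    IContravariant<Sword> swordSetter = weaponSetter;
    Assert.Same(weaponSetter, swordSetter);
    // The underlying setter accepts any Weapon, so passing a Sword is safe.
    swordSetter.Set(new Sword());
}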

Don’t repeat yourself (DRY) – Architectural Principles

The DRY principle supports the separation of concerns principle and aims to eliminate redundancy in code. It promotes the idea that each piece of knowledge or logic should have a single, unambiguous representation within a system.

So, when you have duplicated logic in your system, encapsulate it and reuse that new encapsulation in multiple places instead. If you find yourself writing the same or similar code in multiple places, refactor that code into a reusable component. Leverage functions, classes, modules, or other abstractions to refactor the code.

Adhering to the DRY principle makes your code more maintainable, less error-prone, and easier to modify because a change in logic or a bug fix needs to be made in only one place, reducing the likelihood of introducing errors or inconsistencies.

However, it is imperative to regroup duplicated logic by concern, not only by the similarity of the code itself. Let’s look at these two classes:

public class AdminApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        foreach (var book in books)
        {
            await presenter.DisplayAsync(book);
        }
    }
}
public class PublicApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        foreach (var book in books)
        {
            await presenter.DisplayAsync(book);
        }
    }
}

The code is very similar, but encapsulating it into a single class or method could very well be a mistake. Why? Keeping two separate classes is more logical because the admin program can have different reasons for modification compared to the public program.

However, encapsulating the list logic into the IBookPresenter interface could make sense. It would allow us to react differently to both types of users if needed, like filtering the admin panel list but doing something different in the public section. One way to do this is by replacing the foreach loop with a DisplayListAsync(books) call on the presenter, as in the following code:

public class AdminApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        // We could filter the list here
        await presenter.DisplayListAsync(books);
    }
}
public class PublicApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        await presenter.DisplayListAsync(books);
    }
}
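For reference, the IBookPresenter interface supporting both versions could look like this (a hypothetical sketch, including an assumed Book model; the actual definitions are not shown in this excerpt):

public interface IBookPresenter
{
    Task DisplayAsync(Book book);
    Task DisplayListAsync(IEnumerable<Book> books);
}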

There is more to those simple implementations to discuss, like the possibility of supporting multiple implementations of the interfaces for added flexibility, but let’s keep some subjects for further down the book.

When you don’t know how to name a class or a method, you may have identified a problem with your separation of concerns. This is a good indicator that you should go back to the drawing board. Nevertheless, naming is hard, so sometimes, that’s just it.

Keeping our code DRY while following the separation of concerns principles is imperative. Otherwise, what may seem like a good move could become a nightmare.

Important testing principles – Automated Testing

Finally, we can create a dedicated class that instantiates WebApplicationFactory manually. It leverages the other workarounds but makes the test cases more readable. By encapsulating the setup of the test application in a class, you improve reusability and reduce the maintenance cost in most cases.

First, we need to change the Program class visibility by adding the following line to the Program.cs file:

public partial class Program { }

Now that we can access the Program class without the need to allow internal visibility to our test project, we can create our test application like this:

namespace MyMinimalApiApp;
public class MyTestApplication : WebApplicationFactory<Program> {}

Finally, we can reuse the same code to test our program but instantiate MyTestApplication instead of WebApplicationFactory<Program>, as shown in the following code:

namespace MyMinimalApiApp;
public class MyTestApplicationTest
{
    public class Get : ProgramTestWithoutFixture
    {
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Arrange
            await using var app = new MyTestApplication();
            var httpClient = app.CreateClient();
            // Act
            var result = await httpClient.GetAsync("/");
            // Assert
            Assert.Equal(HttpStatusCode.OK, result.StatusCode);
        }
    }
}

You can also leverage fixtures, but for the sake of simplicity, I decided to show you how to instantiate our new test application manually.

And that’s it. We have covered multiple ways to work around integration testing minimal APIs in a simple and elegant way. Next, we explore a few testing principles before moving to architectural principles in the next chapter.

Important testing principles

One essential thing to remember when writing tests is to test use cases, not the code itself; we are testing features’ correctness, not code correctness. Of course, if the expected outcome of a feature is correct, that also means the codebase is correct. However, it is not always true the other way around; correct code may yield an incorrect outcome. Also, remember that code costs money to write, while features deliver value.

To help with that, test requirements should revolve around inputs and outputs. When specific values go into your subject under test, you expect particular values to come out. Whether you are testing a simple Add method where the ins are two or more numbers and the out is the sum of those numbers, or a more complex feature where the ins come from a form and the out is the record getting persisted in a database, most of the time, we are testing that inputs produce an output or an outcome.

Another concept is to divide those units into queries and commands. No matter how you organize your code, from a simple single-file application to a microservices architecture-based Netflix clone, all simple or compounded operations are queries or commands. Thinking about a system this way should help you test the ins and outs. We discuss queries and commands in several chapters, so keep reading to learn more.

Now that we have laid this out, what if a unit must perform multiple operations, such as reading from a database and then sending multiple commands? You can create and test multiple smaller units (individual operations) and another unit that orchestrates those building blocks, allowing you to test each piece in isolation. We explore how to achieve this throughout the book.

In a nutshell, when writing automated tests (a code sketch follows the list):

  • In case of a query, we assert the output of the unit undergoing testing based on its input parameters.
  • In case of a command, we assert the outcome of the unit undergoing testing based on its input parameters.
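Here is a minimal sketch of both cases (the Calculator, UserRepository, and User types are hypothetical stand-ins for a query and a command):

public record class User(string Name);

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class UserRepository
{
    private readonly List<User> _users = new();
    public void Create(User user) => _users.Add(user);
    public IEnumerable<User> All() => _users;
}

public class QueryAndCommandTest
{
    [Fact]
    public void Query_assert_the_output_based_on_the_inputs()
    {
        // Query: inputs go in, we assert the value that comes out.
        var sut = new Calculator();
        var result = sut.Add(2, 3);
        Assert.Equal(5, result);
    }

    [Fact]
    public void Command_assert_the_outcome_based_on_the_inputs()
    {
        // Command: inputs go in, we assert the resulting state change (outcome).
        var sut = new UserRepository();
        sut.Create(new User("Alice"));
        Assert.Contains(sut.All(), user => user.Name == "Alice");
    }
}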

We explore numerous techniques throughout the book to help you achieve that level of separation, starting with architectural principles in the next chapter.

Summary

This chapter covered automated testing, such as unit and integration tests. We also briefly covered end-to-end tests, but covering those in only a few pages is impossible. Nonetheless, the way we write integration tests can also be used for end-to-end testing, especially in the REST API space.

We explored different testing approaches from a bird’s-eye view, tackled technical debt, and explored multiple testing techniques like black-box, white-box, and grey-box testing. We also peeked at a few formal ways to choose the values to test, like equivalence partitioning and boundary value analysis.

We then looked at xUnit, the testing framework used throughout the book, and a way of organizing tests. We explored ways to pick the correct type of test and some guidelines about choosing the right quantity for each kind of test. Then we saw how easy it is to test our ASP.NET Core web applications by running them in memory. Finally, we explored high-level concepts that should guide you in writing testable, flexible, and reliable programs.

Now that we have talked about testing, we are ready to explore a few architectural principles to help us increase programs’ testability. Those are a crucial part of modern software engineering and go hand in hand with automated testing.

Writing ASP.NET Core integration tests – Automated Testing

When Microsoft built ASP.NET Core from the ground up, they fixed and improved so many things that I cannot enumerate them all here, including testability. Nowadays, there are two ways to structure a .NET program:

  • The classic ASP.NET Core Program and the Startup classes. This model might be found in existing projects (created before .NET 6).
  • The minimal hosting model introduced in .NET 6. This may look familiar to you if you know Node.js, as this model encourages you to write the start-up code in the Program.cs file by leveraging top-level statements. You will most likely find this model in new projects (created after the release of .NET 6).

No matter how you write your program, that’s the place to define the application’s composition and how it boots. Moreover, we can leverage the same testing tools more or less seamlessly.

In the case of a web application, the scope of our integration tests is often to call the endpoint of a controller over HTTP and assert the response. Luckily, in .NET Core 2.1, the .NET team added the WebApplicationFactory<TEntry> class to make the integration testing of web applications easier. With that class, we can boot up an ASP.NET Core application in memory and query it using the supplied HttpClient in a few lines of code. The class also provides extension points to configure the server, such as replacing implementations with mocks, stubs, or other test-specific elements.

Let’s start by booting up a classic web application test.

Classic web application

In a classic ASP.NET Core application, the TEntry generic parameter of the WebApplicationFactory<TEntry> class is usually the Startup or Program class of your project under test.

The test cases are in the Automated Testing solution under the MyApp.IntegrationTests project.

Let’s start by looking at the test code structure before breaking it down:

namespace MyApp.IntegrationTests.Controllers;
public class ValuesControllerTest : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _httpClient;
    public ValuesControllerTest(
        WebApplicationFactory<Startup> webApplicationFactory)
    {
        _httpClient = webApplicationFactory.CreateClient();
    }
    public class Get : ValuesControllerTest
    {
        public Get(WebApplicationFactory<Startup> webApplicationFactory)
            : base(webApplicationFactory) { }
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Omitted Test Case 1
        }
        [Fact]
        public async Task Should_respond_the_expected_strings()
        {
            // Omitted Test Case 2
        }
    }
}

The first piece of the preceding code that is relevant to us is how we get an instance of the WebApplicationFactory<Startup> class. We inject a WebApplicationFactory<Startup> object into the constructor by implementing the IClassFixture<T> interface (an xUnit feature). We could also use the factory to configure the test server, but since we don’t need to here, we only keep a reference to the HttpClient, preconfigured to connect to the in-memory test server.

Then, you may have noticed the nested Get class that inherits from the ValuesControllerTest class. The Get class contains the test cases. By inheriting the ValuesControllerTest class, we can leverage the _httpClient field from the test cases we are about to see.

In the first test case, we use HttpClient to query the http://localhost/api/values URI, accessible through the in-memory server. Then, we assert that the status code of the HTTP response was a success (200 OK):

[Fact]
public async Task Should_respond_a_status_200_OK()
{
    // Act
    var result = await _httpClient
        .GetAsync("/api/values");
    // Assert
    Assert.Equal(HttpStatusCode.OK, result.StatusCode);
}

The second test case also sends an HTTP request to the in-memory server but deserializes the body’s content as a string[] to ensure the values are the same as expected instead of validating the status code:

[Fact]
public async Task Should_respond_the_expected_strings()
{
    // Act
    var result = await _httpClient
        .GetFromJsonAsync<string[]>("/api/values");
    // Assert
    Assert.Collection(result,
        x => Assert.Equal("value1", x),
        x => Assert.Equal("value2", x)
    );
}

As you may have noticed from the test cases, the WebApplicationFactory preconfigured the BaseAddress property for us, so we don’t need to prefix our requests with http://localhost.

When running those tests, an in-memory web server starts. Then, HTTP requests are sent to that server, testing the complete application. The tests are simple in this case, but you can create more complex test cases in more complex programs.

Next, we explore how to do the same for minimal APIs.

Test class name – Automated Testing

By convention, I name test classes [class under test]Test.cs and create them in the same directory as in the original project. Finding tests is easy when following that simple rule since the test code is in the same location of the file tree as the code under test but in two distinct projects.

Figure 2.8: The Automated Testing Solution Explorer, displaying how tests are organized

Test code inside the test class

For the test code itself, I follow a multi-level structure similar to the following:

  • One test class is named the same as the class under test.
  • One nested test class per method to test from the class under test.
  • One test method per test case of the method under test.

This technique helps organize tests by test case while keeping a clear hierarchy, leading to the following structure:

  • Class under test
  • Method under test
  • Test case using that method

In code, that translates to the following:

namespace MyApp.IntegrationTests.Controllers;
public class ValuesControllerTest
{
    public class Get : ValuesControllerTest
    {
        [Fact]
        public void Should_return_the_expected_strings()
        {
            // Arrange
            var sut = new ValuesController();
            // Act
            var result = sut.Get();
            // Assert
            Assert.Collection(result.Value,
                x => Assert.Equal(“value1”, x),
                x => Assert.Equal(“value2”, x)
            );
        }
    }
}

This convention allows you to set up tests step by step. For example, by having the inner class (the Get nested class here) inherit the outer class (the ValuesControllerTest class), you can create top-level private mocks or classes shared by all nested classes and test methods. Then, for each method to test, you can modify the setup or create other private test elements in the nested classes. Finally, you can do more configuration per test case inside the test method (the Should_return_the_expected_strings method here).

Don’t go too hard on reusability inside your test classes, as it can make tests harder to read for an external eye, such as a reviewer or another developer who needs to work in that code. Unit tests should remain focused, small, and easy to read: a unit of code testing another unit of code. Too much reusability may lead to a brittle test suite.

Now that we have explored organizing unit tests, let’s look at integration tests.

Integration tests

Integration tests are harder to organize because they depend on multiple units, can cross project boundaries, and interact with various dependencies.

We can create one integration test project for most simple solutions or many for more complex scenarios.

When creating one, you can name the project IntegrationTests or start with the entry point of your tests, like a REST API project, and name the project [Name of the API project].IntegrationTests. At this point, how to name the integration test project depends on your solution structure and intent.

When you need multiple integration projects, you can follow a convention similar to unit tests and associate your integration projects one-to-one: [Project under test].IntegrationTests.

Inside those projects, it depends on how you want to attack the problem and the structure of the solution itself. Start by identifying the features under test. Name the test classes in a way that mimics your requirements, organize those into sub-folders (maybe a category or group of requirements), and code test cases as methods. You can also leverage nested classes, as we did with unit tests.

We write tests throughout the book, so you will have plenty of examples to make sense of all this if it’s not clear now.

Next, we implement an integration test by leveraging ASP.NET Core features.

Theories – Automated Testing-2

The third [MemberData] attribute feeds three more sets of data to the test method. However, that data originates from the GetData method of the ExternalData class, sending 10 as an argument during the execution (the start parameter). To do that, we must set the MemberType property to the type where the method is located so xUnit knows where to look. In this case, we pass the argument 10 as the second parameter of the MemberData constructor. However, in other cases, you can pass zero or more arguments there.

Finally, we are doing the same for the ExternalData.TypedData property, which is represented by the [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))] attribute. Once again, the only difference is that the property is defined using TheoryData instead of IEnumerable<object[]>, which makes its intent clearer.

When running the tests, the data provided by the [MemberData] attributes is combined, yielding the following result in the Test Explorer:

Figure 2.5: Member data theory test results

These are only a few examples of what we can do with the [MemberData] attribute.
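For reference, the ExternalData class described above could look something like the following (an illustrative reconstruction; the book’s actual listing is not part of this excerpt):

public class ExternalData
{
    // Feeds three sets of data; the start parameter comes from the
    // second argument of the [MemberData] attribute (10 in the text above).
    public static IEnumerable<object[]> GetData(int start) => new[]
    {
        new object[] { start, start, true },
        new object[] { start, start + 1, false },
        new object[] { start + 1, start + 1, true },
    };

    // The typed equivalent, which makes the intent clearer.
    public static TheoryData<int, int, bool> TypedData => new()
    {
        { 20, 30, false },
        { 40, 40, true },
    };
}

The [MemberData] attributes on the test method then reference these members, with MemberType = typeof(ExternalData) telling xUnit where to find them.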

I understand that’s a lot of condensed information, but the goal is to cover just enough to get you started. I don’t expect you to become an expert in xUnit by reading this chapter.

Last but not least, the [ClassData] attribute gets its data from a class implementing IEnumerable<object[]> or inheriting from TheoryData<…>. The concept is the same as the other two. Here is an example:

public class ClassDataTest
{
    [Theory]
    [ClassData(typeof(TheoryDataClass))]
    [ClassData(typeof(TheoryTypedDataClass))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual)
        {
            Assert.Equal(value1, value2);
        }
        else
        {
            Assert.NotEqual(value1, value2);
        }
    }
    public class TheoryDataClass : IEnumerable<object[]>
    {
        public IEnumerator<object[]> GetEnumerator()
        {
            yield return new object[] { 1, 2, false };
            yield return new object[] { 2, 2, true };
            yield return new object[] { 3, 3, true };
        }
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }
    public class TheoryTypedDataClass : TheoryData<int, int, bool>
    {
        public TheoryTypedDataClass()
        {
            Add(102, 104, false);
        }
    }
}

These are very similar to [MemberData], but we point to a type instead of pointing to a member. In TheoryDataClass, implementing the IEnumerable<object[]> interface makes it easy to yield return the results. On the other hand, in the TheoryTypedDataClass class, by inheriting TheoryData, we can leverage a list-like Add method. Once again, I find inheriting from TheoryData more explicit, but either way works with xUnit. You have many options, so choose the best one for your use case.

Here is the result in the Test Explorer, which is very similar to the other attributes:

Figure 2.6: Test Explorer

That’s it for the theories—next, a few last words before organizing our tests.

Assertions – Automated Testing

An assertion is a statement that checks whether a particular condition is true or false. If the condition is true, the test passes. If the condition is false, the test fails, indicating a problem with the subject under test.

Let’s visit a few ways to assert correctness. We use barebone xUnit functionality in this section, but you can bring in the assertion library of your choice if you have one.

In xUnit, an assertion throws an exception when it fails, but you may never even realize that. You do not have to handle those exceptions; throwing is the mechanism that propagates the failure result to the test runner.

We won’t explore all possibilities, but let’s start with the following shared pieces:

public class AssertionTest
{
    [Fact]
    public void Exploring_xUnit_assertions()
    {
        object obj1 = new MyClass { Name = "Object 1" };
        object obj2 = new MyClass { Name = "Object 1" };
        object obj3 = obj1;
        object? obj4 = default(MyClass);
        //
        // Omitted assertions
        //
        static void OperationThatThrows(string name)
        {
            throw new SomeCustomException { Name = name };
        }
    }
    private record class MyClass
    {
        public string? Name { get; set; }
    }
    private class SomeCustomException : Exception
    {
        public string? Name { get; set; }
    }
}

The preceding MyClass record, the SomeCustomException class, the OperationThatThrows local function, and the variables are utilities used in the test to help us play with xUnit assertions. The variables are of type object for exploration purposes, but you can use any type in your test cases. I omitted the assertion code that we are about to see to keep the code leaner.

The following two assertions are very explicit:

Assert.Equal(expected: 2, actual: 2);
Assert.NotEqual(expected: 2, actual: 1);

The first asserts that the actual value equals the expected value, while the second asserts that the two values are different. Assert.Equal is probably the most commonly used assertion method.

As a rule of thumb, it is better to assert equality (Equal) than assert that the values are different (NotEqual). Except in a few rare cases, asserting equality will yield more consistent results and close the door to missing defects.

The next two assertions are very similar to the equality ones but assert that the objects are the same instance or not (the same instance means the same reference):

Assert.Same(obj1, obj3);
Assert.NotSame(obj2, obj3);

The next one validates that the two objects are equal. Since we are using record classes, it makes it super easy for us; obj1 and obj2 are not the same (two instances) but are equal (see Appendix A for more information on record classes):

Assert.Equal(obj1, obj2);

The next two are very similar and assert that the value is null or not:

Assert.Null(obj4);
Assert.NotNull(obj3);

The next line asserts that obj1 is of the MyClass type and then returns the argument (obj1) converted to the asserted type (MyClass). If the type is incorrect, the IsType method will throw an exception:

var instanceOfMyClass = Assert.IsType<MyClass>(obj1);

Then we reuse the Assert.Equal method to validate that the value of the Name property is what we expect:

Assert.Equal(expected: "Object 1", actual: instanceOfMyClass.Name);

The following code block asserts that the testCode argument throws an exception of the SomeCustomException type:

var exception = Assert.Throws<SomeCustomException>(
    testCode: () => OperationThatThrows("Toto")
);

The testCode argument executes the OperationThatThrows local function we saw initially. The Throws method allows us to test some exception properties by returning the exception in the specified type. The same behavior as the IsType method happens here; if the exception is of the wrong type or no exception is thrown, the Throws method will fail the test.

It is a good idea to ensure that not only the proper exception type is thrown, but the exception carries the correct values as well.

The following line asserts that the value of the Name property is what we expect it to be, ensuring our program would propagate the proper exception:

Assert.Equal(expected: "Toto", actual: exception.Name);

We covered a few assertion methods, but many others are part of xUnit, like the Collection, Contains, False, and True methods. We use many assertions throughout the book, so if these are still unclear, you will learn more about them.

Next, let’s look at data-driven test cases using theories.

How to create an xUnit test project – Automated Testing

To create a new xUnit test project, you can run the dotnet new xunit command, and the CLI does the job for you by creating a project containing a UnitTest1 class. That command does the same as creating a new xUnit project from Visual Studio.

For unit testing projects, name the project the same as the project you want to test and append .Tests to it. For example, MyProject would have a MyProject.Tests project associated with it. We explore more details in the Organizing your tests section below. The template already defines all the required NuGet packages, so you can start testing immediately after adding a reference to your project under test.

You can also add project references using the CLI with the dotnet add reference command. Assuming we are in the ./test/MyProject.Tests directory and the project file we want to reference is in the ./src/MyProject directory, we can execute the following command to add a reference:

dotnet add reference ../../src/MyProject/MyProject.csproj

Next, we explore some xUnit features that will allow us to write test cases.

Key xUnit features

In xUnit, the [Fact] attribute is the way to create unique test cases, while the [Theory] attribute is the way to make data-driven test cases. Let’s start with facts, the simplest way to write a test case.

Facts

Any method with no parameter can become a test method by decorating it with a [Fact] attribute, like this:

public class FactTest
{
    [Fact]
    public void Should_be_equal()
    {
        var expectedValue = 2;
        var actualValue = 2;
        Assert.Equal(expectedValue, actualValue);
    }
}

You can also decorate asynchronous methods with the fact attribute when the code under test needs it:

public class AsyncFactTest
{
    [Fact]
    public async Task Should_be_equal()
    {
        var expectedValue = 2;
        var actualValue = 2;
        await Task.Yield();
        Assert.Equal(expectedValue, actualValue);
    }
}

In the preceding code, the await Task.Yield(); line conceptually represents an asynchronous operation and does nothing more than allow us to use the async/await keywords.

When we run the tests from Visual Studio’s Test Explorer, the test run result looks like this:

Figure 2.3: Test results in Visual Studio

You may have noticed from the screenshot that the test classes are nested in the xUnitFeaturesTest class, part of the MyApp namespace, and under the MyApp.Tests project. We explore those details later in the chapter. Running the dotnet test CLI command should yield a result similar to the following:

Passed! 
– Failed:     0, Passed:    23, Skipped:     0, Total:    23, Duration: 22 ms – MyApp.Tests.dll (net8.0)

As we can read from the preceding output, all tests are passing, none have failed, and none were skipped. It is as simple as that to create test cases using xUnit.

Learning the CLI can be very helpful in creating and debugging CI/CD pipelines, and you can use CLI commands, like dotnet test, in any script (like bash and PowerShell).

Have you noticed the Assert keyword in the test code? If you are not familiar with it, we will explore assertions next.

Equivalence Partitioning – Automated Testing

This technique divides the input data of the software into different equivalence data classes and then tests these classes rather than individual inputs. An equivalence data class means that all values in that partition set should lead to the same outcome or yield the same result. Doing this limits the number of tests considerably.

For example, consider an application that accepts an integer value between 1 and 100 (inclusive). Using equivalence partitioning, we can divide the input data into two equivalence classes:

  • Valid
  • Invalid

To be more precise, we could further divide it into three equivalence classes:

  • Class 1: Less than 1 (Invalid)
  • Class 2: Between 1 and 100 (Valid)
  • Class 3: Greater than 100 (Invalid)

Then we can write three tests, picking one representative from each class (e.g., 0, 50, and 101) to create our test cases. Doing so ensures a broad coverage with minimal test cases, making our testing process more efficient.
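As an xUnit sketch, assuming a hypothetical IsValid method that returns true only for integers from 1 to 100:

public class EquivalencePartitioningTest
{
    [Theory]
    [InlineData(0, false)]   // Class 1: less than 1 (invalid)
    [InlineData(50, true)]   // Class 2: between 1 and 100 (valid)
    [InlineData(101, false)] // Class 3: greater than 100 (invalid)
    public void Should_validate_one_representative_per_class(int input, bool expected)
    {
        var actual = IsValid(input);
        Assert.Equal(expected, actual);
    }

    // Hypothetical subject under test.
    private static bool IsValid(int value) => value is >= 1 and <= 100;
}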

Boundary Value Analysis

This technique focuses on the values at the boundary of the input domain rather than the center. It is based on the principle that errors are most likely to occur at the boundaries of the input domain. The input domain represents the set of all possible inputs for a system. The boundaries are the edges of the input domain, representing its minimum and maximum values.

For example, if we expect a function to accept an integer between 1 and 100 (inclusive), the boundary values would be 1 and 100. With Boundary Value Analysis, we would create test cases for these values, values just outside the boundaries (like 0 and 101), and values just inside the boundaries (like 2 and 99).

Boundary Value Analysis is a very efficient testing technique that provides good coverage with a relatively small number of test cases. However, it’s unsuitable for finding errors within the boundaries or for complex logic errors. Boundary Value Analysis should be used on top of other testing methods, such as equivalence partitioning and decision table testing, to ensure the software is as defect-free as possible.
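Reusing the hypothetical IsValid method from the previous sketch, the boundary values for the 1-to-100 example translate to:

[Theory]
[InlineData(0, false)]   // just outside the lower boundary
[InlineData(1, true)]    // on the lower boundary
[InlineData(2, true)]    // just inside the lower boundary
[InlineData(99, true)]   // just inside the upper boundary
[InlineData(100, true)]  // on the upper boundary
[InlineData(101, false)] // just outside the upper boundary
public void Should_validate_the_boundaries(int input, bool expected)
{
    Assert.Equal(expected, IsValid(input));
}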

Decision Table Testing

This technique uses a decision table to design test cases. A decision table shows all possible combinations of input values and their corresponding outputs. It’s handy for complex business rules that can be expressed in a table format, enabling testers to identify missing and extraneous test cases.

For example, say our system only allows access to a user with a valid username and password, and denies access to users when it is under maintenance. The decision table would have three conditions (username, password, and maintenance) and one action (allow access). The table lists all possible combinations of these conditions and the expected action for each combination. Here is an example:

Valid Username | Valid Password | System under Maintenance | Allow Access
True           | True           | False                    | Yes
True           | True           | True                     | No
True           | False          | False                    | No
True           | False          | True                     | No
False          | True           | False                    | No
False          | True           | True                     | No
False          | False          | False                    | No
False          | False          | True                     | No
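Such a table maps naturally to a data-driven test; here is a sketch with a hypothetical AllowAccess method implementing the rules:

public class DecisionTableTest
{
    [Theory]
    [InlineData(true, true, false, true)] // the only combination granting access
    [InlineData(true, true, true, false)]
    [InlineData(true, false, false, false)]
    [InlineData(true, false, true, false)]
    [InlineData(false, true, false, false)]
    [InlineData(false, true, true, false)]
    [InlineData(false, false, false, false)]
    [InlineData(false, false, true, false)]
    public void Should_follow_the_decision_table(
        bool validUsername, bool validPassword, bool underMaintenance, bool expected)
    {
        var actual = AllowAccess(validUsername, validPassword, underMaintenance);
        Assert.Equal(expected, actual);
    }

    // Hypothetical subject under test.
    private static bool AllowAccess(bool validUsername, bool validPassword, bool underMaintenance)
        => validUsername && validPassword && !underMaintenance;
}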

The main advantage of Decision Table Testing is that it ensures we test all possible input combinations. However, it can become complex and challenging to manage when systems have many input conditions, as the number of rules (and therefore test cases) increases exponentially with the number of conditions.

Conclusion – Automated Testing

White-box testing includes unit and integration tests. Those tests run fast, and developers use them to improve the code and test complex algorithms. However, writing a large quantity of those tests takes time. Writing brittle tests that are tightly coupled with the code itself is easier due to the proximity to the code, increasing the maintenance cost of such test suites. It also makes it prone to overengineering your application in the name of testability.

Black-box testing encompasses different types of tests that tend towards end-to-end testing. Since the tests target the external surface of the system, they are less likely to break when the system changes. Moreover, they are excellent at testing behaviors, and since each test covers an end-to-end use case, we need fewer of them, leading to a decrease in writing time and maintenance costs. Testing the whole system has drawbacks, including the slowness of executing each test, so combining black-box testing with other types of tests is very important to find the right balance between the number of tests, test case coverage, and speed of execution of the tests.

Grey-box testing is a fantastic mix between the two others; you can treat any part of the software as a black box, leverage your inner-working knowledge to mock or stub parts of the test case (like asserting whether the system persisted a record in the database), and test end-to-end scenarios more efficiently. It brings the best of both worlds, significantly reducing the number of tests while increasing the test surface considerably for each test case. However, doing grey-box testing on smaller units or heavily mocking the system may yield the same drawbacks as white-box testing. Integration tests or almost-E2E tests are good candidates for grey-box testing. We implement grey-box testing use cases in Chapter 16, Request-Endpoint-Response (REPR).

Meanwhile, let’s explore a few techniques to help optimize our test case creation, like testing a small subset of values to assert the correctness of our programs by writing an optimal number of tests.

Test case creation

Multiple ways exist to break down and create test cases to help find software defects with a minimal test count. Here are some techniques to help minimize the number of tests while maximizing the test coverage:

  • Equivalence Partitioning
  • Boundary Value Analysis
  • Decision Table Testing
  • State Transition Testing
  • Use Case Testing

I present the techniques theoretically. They apply to all sorts of tests and should help you write better test suites. Let’s have a quick look at each.