
Liskov substitution principle (LSP) – Architectural Principles

The Liskov Substitution Principle (LSP) states that in a program, if we replace an instance of a superclass (supertype) with an instance of a subclass (subtype), the program should not break or behave unexpectedly.

Imagine we have a base class called Bird with a function called Fly, and we add the Eagle and Penguin subclasses. Since a penguin can’t fly, replacing an instance of the Bird class with an instance of the Penguin subclass might cause problems because the program expects all birds to be able to fly.

So, according to the LSP, our subclasses should behave so the program can still work correctly, even if it doesn’t know which subclass it’s using, preserving system stability.

Before moving on with the LSP, let’s look at covariance and contravariance.

Covariance and contravariance

We won’t go too deep into this, so we don’t stray too far from the LSP, but since the formal definition mentions them, we must understand these concepts at least minimally.

Covariance and contravariance represent specific polymorphic scenarios. They allow reference types to be converted into other types implicitly. They apply to generic type arguments, delegates, and array types. Chances are, you will never need to remember this, as most of it is implicit, yet here’s an overview:

  • Covariance (out) enables us to use a more derived type (a subtype) instead of the supertype. Covariance is usually applicable to method return types. For instance, if a base class method returns an instance of a class, the equivalent method of a derived class can return an instance of a subclass.
  • Contravariance (in) is the reverse situation. It allows a less derived type (a supertype) to be used instead of the subtype. Contravariance is usually applicable to method argument types. If a method of a base class accepts a parameter of a particular class, the equivalent method of a derived class can accept a parameter of a superclass.

Let’s use some code to understand this more, starting with the model we are using:

public record class Weapon { }
public record class Sword : Weapon { }
public record class TwoHandedSword : Sword { }

This is a simple class hierarchy: the TwoHandedSword class inherits from the Sword class, and the Sword class inherits from the Weapon class.

Covariance

To demo covariance, we leverage the following generic interface:

public interface ICovariant<out T>
{
    T Get();
}

In C#, the out modifier explicitly specifies that the generic parameter T is covariant. Covariance applies to return types, hence the Get method that returns the generic type T.

Before testing this out, we need an implementation. Here’s a barebones one:

public class SwordGetter : ICovariant<Sword>
{
    private static readonly Sword _instance = new();
    public Sword Get() => _instance;
}

The T parameter here is of type Sword, a subclass of Weapon. Since covariance means you can return (output) an instance of a subtype as its supertype, using the Sword subtype allows exploring this with the Weapon supertype. Here’s the xUnit fact that demonstrates covariance:

[Fact]
public void Generic_Covariance_tests()
{
    ICovariant<Sword> swordGetter = new SwordGetter();
    ICovariant<Weapon> weaponGetter = swordGetter;
    Assert.Same(swordGetter, weaponGetter);
    Sword sword = swordGetter.Get();
    Weapon weapon = weaponGetter.Get();
    var isSwordASword = Assert.IsType<Sword>(sword);
    var isWeaponASword = Assert.IsType<Sword>(weapon);
    Assert.NotNull(isSwordASword);
    Assert.NotNull(isWeaponASword);
}

The line that assigns swordGetter to the ICovariant&lt;Weapon&gt; variable represents covariance, showing that we can implicitly convert the ICovariant&lt;Sword&gt; subtype to the ICovariant&lt;Weapon&gt; supertype.

The code after that showcases what happens with that polymorphic change. For example, the Get method of the weaponGetter object returns a Weapon type, not a Sword, even if the underlying instance is a SwordGetter object. However, that Weapon is, in fact, a Sword, as the assertions demonstrate.

Next, let’s explore contravariance.
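As a counterpart to the covariance demo, here is a sketch of what contravariance could look like; the IContravariant&lt;T&gt; interface and WeaponSetter class below are illustrative names, not the book’s sample code:

```csharp
// Contravariance (in): an IContravariant<Weapon> converts implicitly to an
// IContravariant<Sword>, because anything that accepts (inputs) a Weapon
// can safely accept a Sword.
IContravariant<Weapon> weaponSetter = new WeaponSetter();
IContravariant<Sword> swordSetter = weaponSetter; // implicit conversion
swordSetter.Set(new Sword()); // a Sword is a Weapon, so this is safe
Console.WriteLine(ReferenceEquals(weaponSetter, swordSetter)); // True

public record class Weapon { }
public record class Sword : Weapon { }

public interface IContravariant<in T>
{
    void Set(T value);
}

public class WeaponSetter : IContravariant<Weapon>
{
    public Weapon? LastSet { get; private set; }
    public void Set(Weapon weapon) => LastSet = weapon;
}
```

Note how the conversion runs in the opposite direction from covariance: the supertype-parameterized interface stands in for the subtype-parameterized one.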

Open/Closed principle (OCP) – Architectural Principles-2

Now the EntityService is composed of an EntityRepository instance, and there is no more inheritance. However, the two classes are still tightly coupled, and it is impossible to change the behavior of the EntityService without changing its code.

To fix these last issues, we can inject an EntityRepository instance into the class constructor, where we set our private field like this:

namespace OCP.DependencyInjection;
public class EntityService
{
    private readonly EntityRepository _repository;
    public EntityService(EntityRepository repository)
    {
        _repository = repository;
    }
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

With the preceding change, we broke the tight coupling between the EntityService and EntityRepository classes. We can also control the behavior of the EntityService class from the outside by deciding what instance of the EntityRepository class we inject into the EntityService constructor. We could even go further by leveraging an abstraction instead of a concrete class; we explore this while covering the DIP.

As we just explored, the OCP is a simple yet powerful principle that allows controlling an object from the outside. For example, we could create two instances of the EntityService class with different EntityRepository instances that connect to different databases. Here’s a rough example:

using OCP;
using OCP.DependencyInjection;
// Create the entity in database 1
var repository1 = new EntityRepository(/* connection string 1 */);
var service1 = new EntityService(repository1);
// Create the entity in database 2
var repository2 = new EntityRepository(/* connection string 2 */);
var service2 = new EntityService(repository2);
// Save an entity in two different databases
var entity = new Entity();
await service1.ComplexBusinessProcessAsync(entity);
await service2.ComplexBusinessProcessAsync(entity);

In the preceding code, assuming we implemented the EntityRepository class and configured repository1 and repository2 differently, the result of executing the ComplexBusinessProcessAsync method on service1 and service2 would create the entity in two different databases. The behavior change between the two instances happened without changing the code of the EntityService class; composition: 1, inheritance: 0.
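As a small teaser of the abstraction mentioned above (covered properly with the DIP), the repository could hide behind an interface; the IEntityRepository and FakeEntityRepository names below are assumptions for illustration:

```csharp
// Demo: inject a fake implementation; EntityService's code never changes.
var saved = new List<Entity>();
var service = new EntityService(new FakeEntityRepository(saved));
await service.ComplexBusinessProcessAsync(new Entity());
Console.WriteLine(saved.Count); // 1

public record class Entity();

// The abstraction the service depends on (an assumed name).
public interface IEntityRepository
{
    Task CreateAsync(Entity entity);
}

public class EntityService
{
    private readonly IEntityRepository _repository;
    public EntityService(IEntityRepository repository)
        => _repository = repository;

    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

// A test double; any implementation can be swapped in from the outside.
public class FakeEntityRepository : IEntityRepository
{
    private readonly List<Entity> _saved;
    public FakeEntityRepository(List<Entity> saved) => _saved = saved;
    public Task CreateAsync(Entity entity)
    {
        _saved.Add(entity);
        return Task.CompletedTask;
    }
}
```

With the interface in place, even a fake repository can be injected, which also makes the service trivial to unit test.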

We explore the Strategy pattern—the best way of implementing the OCP—in Chapter 5, Strategy, Abstract Factory, and Singleton. We revisit that pattern and also learn to assemble our program’s well-designed pieces and sew them together using dependency injection in Chapter 6, Dependency Injection.

Next, we explore the principle we can perceive as the most complex of the five, yet the one we will use the least.

Open/Closed principle (OCP) – Architectural Principles-1

Let’s start this section with a quote from Bertrand Meyer, the person who first wrote the term open/closed principle in 1988:

“Software entities (classes, modules, functions, and so on) should be open for extension but closed for modification.”

OK, but what does that mean? It means you should be able to change the class behaviors from the outside without altering the code.

As a bit of history, the first appearance of the OCP in 1988 referred to inheritance, and OOP has evolved a lot since then. Inheritance is still useful, but you should be careful, as it is easily misused. Inheritance creates direct coupling between classes. You should, most of the time, opt for composition over inheritance.

“Composition over inheritance” is a principle that suggests it’s better to build objects by combining simple, flexible parts (composition) rather than by inheriting properties from a larger, more complex object (inheritance).

Think of it like building with LEGO® blocks. It’s easier to build and adjust your creation if you put together small blocks (composition) rather than trying to alter a big, single block that already has a fixed shape (inheritance).

Next, we explore three versions of a business process to illustrate the OCP.

Project – Open Close

First, we look at the Entity and EntityRepository classes used in the code samples:

public record class Entity();
public class EntityRepository
{
    public virtual Task CreateAsync(Entity entity)
        => throw new NotImplementedException();
}

The Entity class represents a simple fictive entity with no properties; consider it anything you’d like. The EntityRepository class has a single CreateAsync method that inserts an instance of an Entity in a database (if it was implemented).

The code sample has few implementation details because it is irrelevant to understanding the OCP. Please assume we implemented the CreateAsync logic using your favorite database.

For the rest of the sample, we refactor the EntityService class, beginning with a version that inherits the EntityRepository class, breaking the OCP:

namespace OCP.NoComposability;
public class EntityService : EntityRepository
{
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await CreateAsync(entity);
        // Do more complex things here
    }
}

As the namespace implies, the preceding EntityService class offers no composability. Moreover, we tightly coupled it with the EntityRepository class. Since we just covered the composition over inheritance principle, we can quickly isolate the problem: inheritance.

As the next step to fix this mess, let’s extract a private _repository field to hold an EntityRepository instance instead:

namespace OCP.Composability;
public class EntityService
{
    private readonly EntityRepository _repository
        = new EntityRepository();
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

Don’t repeat yourself (DRY) – Architectural Principles

The DRY principle complements the separation of concerns principle and aims to eliminate redundancy in code. It promotes the idea that each piece of knowledge or logic should have a single, unambiguous representation within a system.

So, when you have duplicated logic in your system, encapsulate it and reuse that new encapsulation in multiple places instead. If you find yourself writing the same or similar code in multiple places, refactor that code into a reusable component. Leverage functions, classes, modules, or other abstractions to do so.

Adhering to the DRY principle makes your code more maintainable, less error-prone, and easier to modify, because a change in logic or a bug fix needs to be made in only one place, reducing the likelihood of introducing errors or inconsistencies.

However, it is imperative to regroup duplicated logic by concern, not only by the similarity of the code itself. Let’s look at these two classes:

public class AdminApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        foreach (var book in books)
        {
            await presenter.DisplayAsync(book);
        }
    }
}
public class PublicApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        foreach (var book in books)
        {
            await presenter.DisplayAsync(book);
        }
    }
}

The code is very similar, but encapsulating it into a single class or method could very well be a mistake. Why? Keeping two separate classes is more logical because the admin program can have different reasons to change than the public program.

However, encapsulating the list logic into the IBookPresenter interface could make sense. It would allow us to react differently to both types of users if needed, like filtering the admin panel list but doing something different in the public section. One way to do this is by replacing the foreach loop with a call to a presenter DisplayListAsync(books) method, like this:

public class AdminApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        // We could filter the list here
        await presenter.DisplayListAsync(books);
    }
}
public class PublicApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        await presenter.DisplayListAsync(books);
    }
}
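The presenter interface itself is not shown at this point, but based on the calls above it could look like the following sketch; the Book record’s shape and the ConsoleBookPresenter class are assumptions:

```csharp
// Demo: display a couple of books through the presenter abstraction.
var presenter = new ConsoleBookPresenter();
await presenter.DisplayListAsync(new[] { new Book("Book 1"), new Book("Book 2") });

public record class Book(string Title); // assumed shape

public interface IBookPresenter
{
    Task DisplayAsync(Book book);
    Task DisplayListAsync(IEnumerable<Book> books);
}

// A minimal implementation that writes titles to the console.
public class ConsoleBookPresenter : IBookPresenter
{
    public Task DisplayAsync(Book book)
    {
        Console.WriteLine(book.Title);
        return Task.CompletedTask;
    }

    public Task DisplayListAsync(IEnumerable<Book> books)
    {
        foreach (var book in books)
        {
            Console.WriteLine(book.Title);
        }
        return Task.CompletedTask;
    }
}
```

Each app could receive a different implementation, such as one that filters the list for the admin section, without either app changing.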

There is more to those simple implementations to discuss, like the possibility of supporting multiple implementations of the interfaces for added flexibility, but let’s keep some subjects for further down the book.

When you don’t know how to name a class or a method, you may have identified a problem with your separation of concerns. This is a good indicator that you should go back to the drawing board. Nevertheless, naming is hard, so sometimes, that’s just it.

Keeping our code DRY while following the separation of concerns principles is imperative. Otherwise, what may seem like a good move could become a nightmare.

Separation of concerns (SoC) – Architectural Principles

Before you begin: Join our book community on Discord

Give your feedback straight to the author himself and chat to other early readers on our Discord server (find the “architecting-aspnet-core-apps-3e” channel under EARLY ACCESS SUBSCRIPTION).

https://packt.link/EarlyAccess

This chapter delves into fundamental architectural principles: pillars of contemporary software development practices. These principles help us create flexible, resilient, testable, and maintainable code. We can use these principles to stimulate critical thinking, fostering our ability to evaluate trade-offs, anticipate potential issues, and create solutions that stand the test of time by influencing our decision-making process and helping our design choices.

As we embark on this journey, we constantly refer to those principles throughout the book, particularly the SOLID principles, which improve our ability to build flexible and robust software systems.

In this chapter, we cover the following topics:

  • The separation of concerns (SoC) principle
  • The DRY principle
  • The KISS principle
  • The SOLID principles

We also revise the following notions:

  • Covariance
  • Contravariance
  • Interfaces

Separation of concerns (SoC)

As its name implies, the idea is to separate our software into logical blocks, each representing a concern. A “concern” refers to a specific aspect of a program. It’s a particular interest or focus within a system that serves a distinct purpose. Concerns could be as broad as data management, as specific as user authentication, or even more specific, like copying an object into another. The Separation of Concerns principle suggests that each concern should be isolated and managed separately to improve the system’s maintainability, modularity, and understandability.

The Separation of Concerns principle applies to all programming paradigms. In a nutshell, this principle means factoring a program into the correct pieces. For example, modules, subsystems, and microservices are macro-pieces, while classes and methods are smaller pieces.

By correctly separating concerns, we can prevent changes in one area from affecting others, allow for more efficient code reuse, and make it easier to understand and manage different parts of a system independently.Here are a few examples:

  • Security and logging are cross-cutting concerns.
  • Rendering a user interface is a concern.
  • Handling an HTTP request is a concern.
  • Copying an object into another is a concern.
  • Orchestrating a distributed workflow is a concern.
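As a tiny illustration of the idea (all names below are hypothetical), separating the data access concern from the presentation concern could look like this:

```csharp
// Demo: each class owns exactly one concern.
var repository = new UserRepository();
var user = repository.FindById(1);
Console.WriteLine(new UserFormatter().Format(user!)); // Alice (#1)

public record class User(int Id, string Name);

// Data access concern: only knows how to fetch users.
public class UserRepository
{
    private readonly Dictionary<int, User> _users = new()
    {
        [1] = new User(1, "Alice"),
    };

    public User? FindById(int id)
        => _users.TryGetValue(id, out var found) ? found : null;
}

// Presentation concern: only knows how to format a user for display.
public class UserFormatter
{
    public string Format(User user) => $"{user.Name} (#{user.Id})";
}
```

A change to the display format no longer risks breaking the data access code, and vice versa.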

Before moving on to the DRY principle, remember that it is imperative to consider concerns when dividing software into pieces to create cohesive units. A good separation of concerns helps create modular designs and face design dilemmas more effectively, leading to a maintainable application.

Writing ASP.NET Core integration tests – Automated Testing

When Microsoft built ASP.NET Core from the ground up, they fixed and improved so many things that I cannot enumerate them all here, including testability. Nowadays, there are two ways to structure a .NET program:

  • The classic ASP.NET Core Program and the Startup classes. This model might be found in existing projects (created before .NET 6).
  • The minimal hosting model introduced in .NET 6. This may look familiar to you if you know Node.js, as this model encourages you to write the start-up code in the Program.cs file by leveraging top-level statements. You will most likely find this model in new projects (created after the release of .NET 6).

No matter how you write your program, that’s the place to define the application’s composition and how it boots. Moreover, we can leverage the same testing tools more or less seamlessly.

In the case of a web application, the scope of our integration tests is often to call the endpoint of a controller over HTTP and assert the response. Luckily, in .NET Core 2.1, the .NET team added the WebApplicationFactory&lt;TEntry&gt; class to make the integration testing of web applications easier. With that class, we can boot up an ASP.NET Core application in memory and query it using the supplied HttpClient in a few lines of code. The class also provides extension points to configure the server, such as replacing implementations with mocks, stubs, or other test-specific elements.

Let’s start by booting up a classic web application test.

Classic web application

In a classic ASP.NET Core application, the TEntry generic parameter of the WebApplicationFactory<TEntry> class is usually the Startup or Program class of your project under test.

The test cases are in the Automated Testing solution under the MyApp.IntegrationTests project.

Let’s start by looking at the test code structure before breaking it down:

namespace MyApp.IntegrationTests.Controllers;
public class ValuesControllerTest : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _httpClient;
    public ValuesControllerTest(
        WebApplicationFactory<Startup> webApplicationFactory)
    {
        _httpClient = webApplicationFactory.CreateClient();
    }
    public class Get : ValuesControllerTest
    {
        public Get(WebApplicationFactory<Startup> webApplicationFactory)
            : base(webApplicationFactory) { }
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Omitted Test Case 1
        }
        [Fact]
        public async Task Should_respond_the_expected_strings()
        {
            // Omitted Test Case 2
        }
    }
}

The first piece of the preceding code that is relevant to us is how we get an instance of the WebApplicationFactory&lt;Startup&gt; class. By implementing the IClassFixture&lt;T&gt; interface (an xUnit feature), we get a WebApplicationFactory&lt;Startup&gt; object injected into the constructor. We could also use the factory to configure the test server, but since we don’t need to here, we only keep a reference to the HttpClient, preconfigured to connect to the in-memory test server.

Then, we have the nested Get class that inherits the ValuesControllerTest class. The Get class contains the test cases. By inheriting the ValuesControllerTest class, we can leverage the _httpClient field from the test cases we are about to see.

In the first test case, we use HttpClient to query the http://localhost/api/values URI, accessible through the in-memory server. Then, we assert that the status code of the HTTP response was a success (200 OK):

[Fact]
public async Task Should_respond_a_status_200_OK()
{
    // Act
    var result = await _httpClient
        .GetAsync("/api/values");
    // Assert
    Assert.Equal(HttpStatusCode.OK, result.StatusCode);
}

The second test case also sends an HTTP request to the in-memory server but deserializes the body’s content as a string[] to ensure the values are the same as expected instead of validating the status code:

[Fact]
public async Task Should_respond_the_expected_strings()
{
    // Act
    var result = await _httpClient
        .GetFromJsonAsync<string[]>("/api/values");
    // Assert
    Assert.Collection(result,
        x => Assert.Equal("value1", x),
        x => Assert.Equal("value2", x)
    );
}

As you may have noticed from the test cases, the WebApplicationFactory preconfigured the BaseAddress property for us, so we don’t need to prefix our requests with http://localhost.

When running those tests, an in-memory web server starts. Then, HTTP requests are sent to that server, testing the complete application. The tests are simple in this case, but you can create more complex test cases in more complex programs. Next, we explore how to do the same for minimal APIs.
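The minimal API walkthrough is not part of this excerpt, but the general shape is worth sketching. With the minimal hosting model there is no Startup class, so a common approach is to expose the compiler-generated Program class to the test project and use it as the TEntry parameter; the endpoint below is an assumption mirroring the classic sample:

```csharp
// Program.cs (minimal hosting model) — a sketch, not the book's exact sample.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/api/values", () => new[] { "value1", "value2" });

app.Run();

// Top-level statements generate an internal Program class; declaring this
// public partial class lets WebApplicationFactory<Program> reference it.
public partial class Program { }
```

The test class would then implement IClassFixture&lt;WebApplicationFactory&lt;Program&gt;&gt;, just like the classic version above uses IClassFixture&lt;WebApplicationFactory&lt;Startup&gt;&gt;.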

Organizing your tests – Automated Testing

There are many ways of organizing test projects inside a solution. I tend to create a unit test project for each project in the solution and one or more integration test projects.

A unit test is directly related to a single unit of code, whether it’s a method or a class. It is straightforward to associate a unit test project with its respective code project (assembly), leading to a one-to-one relationship. One unit test project per assembly makes them portable and easier to navigate, even more so when the solution grows.

If you have a preferred way to organize yours that differs from what we are doing in the book, by all means, use that approach instead.

Integration tests, on the other hand, can span multiple projects, so having a single rule that fits all scenarios is challenging. One integration test project per solution is often enough, but sometimes we may need more than one, depending on the context.

I recommend starting with one integration test project and adding more as needed during development instead of overthinking it before getting started. Trust your judgment; you can always change the structure as your project evolves.

Folder-wise, at the solution level, creating the application and its related libraries in an src directory helps isolate the actual solution code from the test projects created under a test directory, like this:

Figure 2.7: The Automated Testing Solution Explorer, displaying how the projects are organized

That’s a well-known and effective way of organizing a solution in the .NET world.

Sometimes, it is not possible or unwanted to do that. One such use case would be multiple microservices written under a single solution. In that case, you might want the tests to live closer to your microservices and not split them between src and test folders. So you could organize your solution by microservice instead, like one directory per microservice that contains all the projects, including tests.

Let’s now dig deeper into organizing unit tests.

Unit tests

How you organize your test projects may make a big difference between searching for your tests or making it easy to find them. Let’s look at the different aspects, from the namespace to the test code itself.

Namespace

I find it convenient to create unit tests in the same namespace as the subject under test. That helps keep tests and code aligned without adding any additional using statements. To make this easier when creating files, you can change the default namespace used by Visual Studio when creating a new class in your test project by adding <RootNamespace>[Project under test namespace]</RootNamespace> to a PropertyGroup of the test project file (*.csproj), like this:

<PropertyGroup>
  …
  <RootNamespace>MyApp</RootNamespace>
</PropertyGroup>

Closing words – Automated Testing

Now that facts, theories, and assertions are out of the way, xUnit offers other mechanics to allow developers to inject dependencies into their test classes. These are named fixtures. Fixtures allow dependencies to be reused by all test methods of a test class by implementing the IClassFixture&lt;T&gt; interface. Fixtures are very helpful for costly dependencies, like creating an in-memory database. With fixtures, you can create the dependency once and use it multiple times. The ValuesControllerTest class in the MyApp.IntegrationTests project shows that in action.

It is important to note that xUnit creates an instance of the test class for every test run, so your dependencies are recreated every time if you are not using fixtures.

You can also share the dependency provided by the fixture between multiple test classes by using ICollectionFixture&lt;T&gt;, [Collection], and [CollectionDefinition] instead. We won’t get into the details here, but at least you know it’s possible and know what types to look for when you need something similar.

Finally, if you have worked with other testing frameworks, you might have encountered setup and teardown methods. In xUnit, there are no particular attributes or mechanisms for handling setup and teardown code. Instead, xUnit uses existing OOP concepts:

  • To set up your tests, use the class constructor.
  • To tear down (clean up) your tests, implement IDisposable or IAsyncDisposable and dispose of your resources there.
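For example, a test class with setup and teardown could look like the following sketch; the InMemoryDatabase class is a hypothetical costly dependency standing in for a real resource:

```csharp
public class UserRepositoryTest : IDisposable
{
    private readonly InMemoryDatabase _database;

    // Setup: xUnit instantiates the test class before each test,
    // so the constructor runs once per test method.
    public UserRepositoryTest()
    {
        _database = new InMemoryDatabase();
    }

    [Fact]
    public void Should_use_the_database()
    {
        Assert.False(_database.IsDisposed);
    }

    // Teardown: xUnit disposes the test class after each test.
    public void Dispose()
    {
        _database.Dispose();
    }
}

// Hypothetical dependency; tracks disposal for demonstration purposes.
public class InMemoryDatabase : IDisposable
{
    public bool IsDisposed { get; private set; }
    public void Dispose() => IsDisposed = true;
}
```

For asynchronous cleanup, implementing IAsyncDisposable works the same way.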

That’s it: xUnit is simple and powerful, which is why I adopted it as my main testing framework several years ago and chose it for this book.

Next, we learn to write readable test methods.

Arrange, Act, Assert

Arrange, Act, Assert (AAA or 3A) is a well-known method for writing readable tests. This technique allows you to clearly define your setup (arrange), the operation under test (act), and your assertions (assert). One efficient way to use this technique is to start by writing the 3A as comments in your test case and then write the test code in between. Here is an example:

[Fact]
public void Should_be_equals()
{
    // Arrange
    var a = 1;
    var b = 2;
    var expectedResult = 3;
    // Act
    var result = a + b;
    // Assert
    Assert.Equal(expectedResult, result);
}

Of course, that test case cannot fail, but the three blocks are easily identifiable with the 3A comments. In general, you want the Act block of your unit tests to be a single line, making the test focus clear. If you need more than one line, the chances are that something is wrong in the test or the design.

When the tests are very small (only a few lines), removing the comments might help readability. Furthermore, when you have nothing to set up in your test case, delete the Arrange comment to improve its readability further.

Next, we learn how to organize tests into projects, directories, and files.

Assertions – Automated Testing

An assertion is a statement that checks whether a particular condition is true or false. If the condition is true, the test passes. If the condition is false, the test fails, indicating a problem with the subject under test.

Let’s visit a few ways to assert correctness. We use barebone xUnit functionality in this section, but you can bring in the assertion library of your choice if you have one.

In xUnit, the assertion throws an exception when it fails, but you may never even realize that. You do not have to handle those; that’s the mechanism to propagate the failure result to the test runner.

We won’t explore all possibilities, but let’s start with the following shared pieces:

public class AssertionTest
{
    [Fact]
    public void Exploring_xUnit_assertions()
    {
        object obj1 = new MyClass { Name = "Object 1" };
        object obj2 = new MyClass { Name = "Object 1" };
        object obj3 = obj1;
        object? obj4 = default(MyClass);
        //
        // Omitted assertions
        //
        static void OperationThatThrows(string name)
        {
            throw new SomeCustomException { Name = name };
        }
    }
    private record class MyClass
    {
        public string? Name { get; set; }
    }
    private class SomeCustomException : Exception
    {
        public string? Name { get; set; }
    }
}

The preceding MyClass record, the SomeCustomException class, the OperationThatThrows method, and the variables are utilities used in the test to help us play with xUnit assertions. The variables are of type object for exploration purposes, but you can use any type in your test cases. I omitted the assertion code that we are about to see to keep the code leaner.

The following two assertions are very explicit:

Assert.Equal(expected: 2, actual: 2);
Assert.NotEqual(expected: 2, actual: 1);

The first compares whether the actual value equals the expected value, while the second compares if the two values are different. Assert.Equal is probably the most commonly used assertion method.

As a rule of thumb, it is better to assert equality (Equal) than assert that the values are different (NotEqual). Except in a few rare cases, asserting equality will yield more consistent results and close the door to missing defects.

The next two assertions are very similar to the equality ones but assert that the objects are the same instance or not (the same instance means the same reference):

Assert.Same(obj1, obj3);
Assert.NotSame(obj2, obj3);

The next one validates that the two objects are equal. Since we are using record classes, it makes it super easy for us; obj1 and obj2 are not the same (two instances) but are equal (see Appendix A for more information on record classes):

Assert.Equal(obj1, obj2);

The next two are very similar and assert that the value is null or not:

Assert.Null(obj4);
Assert.NotNull(obj3);

The next line asserts that obj1 is of the MyClass type and then returns the argument (obj1) converted to the asserted type (MyClass). If the type is incorrect, the IsType method will throw an exception:

var instanceOfMyClass = Assert.IsType<MyClass>(obj1);

Then we reuse the Assert.Equal method to validate that the value of the Name property is what we expect:

Assert.Equal(expected: "Object 1", actual: instanceOfMyClass.Name);

The following code block asserts that the testCode argument throws an exception of the SomeCustomException type:

var exception = Assert.Throws<SomeCustomException>(
    testCode: () => OperationThatThrows("Toto")
);

The testCode argument executes the OperationThatThrows inline function we saw initially. The Throws method allows us to test some exception properties by returning the exception in the specified type. The same behavior as the IsType method happens here; if the exception is of the wrong type or no exception is thrown, the Throws method will fail the test.

It is a good idea to ensure that not only the proper exception type is thrown, but the exception carries the correct values as well.

The following line asserts that the value of the Name property is what we expect it to be, ensuring our program would propagate the proper exception:

Assert.Equal(expected: "Toto", actual: exception.Name);

We covered a few assertion methods, but many others are part of xUnit, like the Collection, Contains, False, and True methods. We use many assertions throughout the book, so if these are still unclear, you will learn more about them.

Next, let’s look at data-driven test cases using theories.
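As a quick preview (based on standard xUnit features, not the upcoming sample), a theory is a data-driven test: the [Theory] attribute pairs with data attributes such as [InlineData], and xUnit runs the method once per data row:

```csharp
[Theory]
[InlineData(1, 2, 3)]
[InlineData(2, 3, 5)]
[InlineData(-1, 1, 0)]
public void Should_add_the_two_operands(int a, int b, int expectedResult)
{
    // Act
    var result = a + b;
    // Assert
    Assert.Equal(expectedResult, result);
}
```

Each [InlineData] row shows up as its own test case in the test runner.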

State Transition Testing – Automated Testing

We usually use State Transition Testing to test software with a state machine since it tests the different system states and their transitions. It’s handy for systems where the behavior can change based on the current state, for example, a program with states like “logged in” or “logged out”.

To perform State Transition Testing, we need to identify the states of the system and then the possible transitions between the states. For each transition, we need to create a test case. The test case should exercise the software with the specified input values and verify that the software transitions to the correct state. For example, a user in the “logged in” state must transition to the “logged out” state after signing out.

The main advantage of State Transition Testing is that it tests sequences of events, not just individual events, which could reveal defects not found by testing each event in isolation. However, State Transition Testing can become complex and time-consuming for systems with many states and transitions.
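The “logged in”/“logged out” example could translate into a test like this; the Session class below is a hypothetical two-state machine used for illustration:

```csharp
// Hypothetical state machine with two states: logged in and logged out.
public class Session
{
    public bool IsLoggedIn { get; private set; }
    public void SignIn() => IsLoggedIn = true;
    public void SignOut() => IsLoggedIn = false;
}

public class SessionTest
{
    [Fact]
    public void SignOut_transitions_from_logged_in_to_logged_out()
    {
        // Arrange: reach the "logged in" state
        var session = new Session();
        session.SignIn();
        // Act: trigger the transition under test
        session.SignOut();
        // Assert: verify the resulting state
        Assert.False(session.IsLoggedIn);
    }
}
```

A full state transition suite would cover every valid transition (and, ideally, assert that invalid transitions are rejected).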

Use Case Testing

This technique validates that the system behaves as expected when used in a particular way by a user. Use cases could have formal descriptions, be user stories, or take any other form that fits your needs.

A use case involves one or more actors executing steps or taking actions that should yield a particular result. A use case can include inputs and expected outputs. For example, when a user (actor) who is “signed in” (precondition) clicks the “sign out” button (action), then navigates to the profile page (action), the system denies access to the page and redirects the user to the sign-in page, displaying an error message (expected behaviors).

Use case testing is a systematic and structured approach to testing that helps identify defects in the software’s functionality. It is very user-centric, ensuring the software meets the users’ needs. However, creating test cases for complex use cases can be difficult. In the case of a user interface, end-to-end tests of use cases can take a long time to execute, especially as the number of tests grows.

It is an excellent approach to think of your test cases in terms of functionality to test, whether using a formal use case or just a line written on a napkin. The key is to test behaviors, not code.

Now that we have explored these techniques, it is time to introduce the xUnit library, ways to write tests, and how tests are written in the book. Let’s start by creating a test project.