Single responsibility principle (SRP) – Architectural Principles-2

The ProductRepository class mixes public and private product logic. From that API alone, many possible mistakes could lead to leaking restricted data to public users, not least because the class exposes the private logic to public-facing consumers, where someone else could misuse it. Now that we have identified the responsibilities, we are ready to rethink the class. We know it has two responsibilities, so breaking the class into two sounds like an excellent first step. Let’s start by extracting a public API:

namespace AfterSRP;
public class PublicProductReader
{
    public ValueTask<IEnumerable<Product>> GetAllAsync()
        => throw new NotImplementedException();
    public ValueTask<Product> GetOneAsync(int productId)
        => throw new NotImplementedException();
}

The PublicProductReader class now contains only two methods: GetAllAsync and GetOneAsync. When reading the name of the class and its methods, it is clear that the class handles only public product data. By lowering the complexity of the class, we made it easier to understand. Next, let’s do the same for the private products:

namespace AfterSRP;
public class PrivateProductRepository
{
    public ValueTask<IEnumerable<Product>> GetAllAsync()
        => throw new NotImplementedException();
    public ValueTask<Product> GetOneAsync(int productId)
        => throw new NotImplementedException();
    public ValueTask CreateAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask DeleteAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask UpdateAsync(Product product)
        => throw new NotImplementedException();
}

The PrivateProductRepository class follows the same pattern. It includes the read methods, named the same as in the PublicProductReader class, and the mutation methods that only users with private access can use. We improved our code’s readability, flexibility, and security by splitting the initial class into two.

However, one thing to be careful about with the SRP is not to over-separate classes. The more classes in a system, the more complex assembling the system can become, and the harder it can be to debug and follow the execution paths. On the other hand, many well-separated responsibilities should lead to a better, more testable system. It is tough to define one hard rule for what constitutes “one reason” or “a single responsibility”. As a rule of thumb, aim at packing a cohesive set of functionalities into a single class that revolves around its responsibility. You should strip out any excess logic and add missing pieces.

A good indicator of an SRP violation is when you don’t know how to name an element; that points towards the fact that the element should not reside there, that you should extract it, or that you should split it into multiple smaller pieces.

Using precise names for variables, methods, classes, and other elements is very important and should not be overlooked.

Another good indicator is when a method becomes too big, maybe containing many if statements or loops. In that case, you can split that method into multiple smaller methods, classes, or any other construct that suits your requirements. That should make the code easier to read and make the initial method’s body cleaner. It often also helps you get rid of useless comments and improve testability. Next, we explore how to change behaviors without modifying code, but before that, let’s look at interfaces.
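As a hedged illustration of that splitting, here is a minimal sketch; the PriceCalculator class and its discount rules are invented for this example and are not part of the book’s code samples:

```csharp
// Hypothetical example: a method that grew conditionals, refactored into
// smaller, intent-revealing methods instead of one long body.
public class PriceCalculator
{
    public decimal CalculatePrice(decimal basePrice, int quantity, bool isMember)
    {
        var subtotal = basePrice * quantity;
        return IsEligibleForDiscount(quantity, isMember)
            ? ApplyDiscount(subtotal)
            : subtotal;
    }

    // Each extracted method has an obvious name and can be tested in isolation.
    private static bool IsEligibleForDiscount(int quantity, bool isMember)
        => isMember || quantity >= 10;

    private static decimal ApplyDiscount(decimal subtotal)
        => subtotal * 0.9m;
}
```

The conditional logic now reads almost like the requirement it implements, and the extracted helpers need no explanatory comments.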

Single responsibility principle (SRP) – Architectural Principles-1

Essentially, the SRP means that a single class should hold one, and only one, responsibility, leading me to the following quote:

“There should never be more than one reason for a class to change.”— Robert C. Martin, originator of the single responsibility principle

OK, but why? Before answering, take a moment to remember a project you’ve worked on where someone changed one or more requirements along the way. I recall several projects that would have benefited from this principle. Now, imagine how much simpler it would have been if each part of your system had just one job: one reason to change.

Software maintainability problems can be due to both tech and non-tech people. Nothing is purely black or white—most things are a shade of gray. The same applies to software design: always do your best, learn from your mistakes, and stay humble (a.k.a. continuous improvement).

By understanding that applications are born to change, you will feel better when that happens, and the SRP helps mitigate the impact of those changes. For example, it helps make our classes more readable and reusable and leads to more flexible and maintainable systems. Moreover, when a class does only one thing, it is easier to see how a change will affect the system, which is more challenging with complex classes, since one change might break other parts. Furthermore, fewer responsibilities mean less code, and less code is easier to understand, helping you grasp that part of the software more quickly. Let’s try this out in action.

Project – Single Responsibility

First, we look at the Product class used in both code samples. That class represents a simple fictive product:

public record class Product(int Id, string Name);

The code sample has no implementation because it is irrelevant to understanding the SRP. We focus on the class API instead. Please assume we implemented the data-access logic using your favorite database.

The following class breaks the SRP:

namespace BeforeSRP;
public class ProductRepository
{
    public ValueTask<Product> GetOnePublicProductAsync(int productId)
        => throw new NotImplementedException();
    public ValueTask<Product> GetOnePrivateProductAsync(int productId)
        => throw new NotImplementedException();
    public ValueTask<IEnumerable<Product>> GetAllPublicProductsAsync()
        => throw new NotImplementedException();
    public ValueTask<IEnumerable<Product>> GetAllPrivateProductsAsync()
        => throw new NotImplementedException();
    public ValueTask CreateAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask UpdateAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask DeleteAsync(Product product)
        => throw new NotImplementedException();
}

What does not conform to the SRP in the preceding class? By reading the name of the methods, we can extract two responsibilities:

  • Handling public products.
  • Handling private products.

Don’t repeat yourself (DRY) – Architectural Principles

The DRY principle advocates the separation of concerns principle and aims to eliminate redundancy in code as well. It promotes the idea that each piece of knowledge or logic should have a single, unambiguous representation within a system. So, when you have duplicated logic in your system, encapsulate it and reuse that new encapsulation in multiple places instead. If you find yourself writing the same or similar code in multiple places, refactor that code into a reusable component. Leverage functions, classes, modules, or other abstractions to do so.

Adhering to the DRY principle makes your code more maintainable, less error-prone, and easier to modify because a change in logic or a bug fix needs to be made in only one place, reducing the likelihood of introducing errors or inconsistencies. However, it is imperative to regroup duplicated logic by concern, not only by the similarity of the code itself. Let’s look at these two classes:

public class AdminApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        foreach (var book in books)
        {
            await presenter.DisplayAsync(book);
        }
    }
}
public class PublicApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        foreach (var book in books)
        {
            await presenter.DisplayAsync(book);
        }
    }
}

The code is very similar, but encapsulating it into a single class or method could very well be a mistake. Why? Keeping two separate classes is more logical because the admin program can have different reasons to change than the public program. However, encapsulating the list-display logic into the IBookPresenter interface could make sense. It would allow us to react differently to each type of user if needed, like filtering the list in the admin panel but doing something different in the public section. One way to do this is by replacing the foreach loop with a DisplayListAsync(books) call on the presenter, like the following code:

public class AdminApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        // We could filter the list here
        await presenter.DisplayListAsync(books);
    }
}
public class PublicApp
{
    public async Task DisplayListAsync(
        IBookService bookService,
        IBookPresenter presenter)
    {
        var books = await bookService.FindAllAsync();
        await presenter.DisplayListAsync(books);
    }
}
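The IBookPresenter interface itself is not shown here, so the following is a minimal sketch of what it could look like to support that DisplayListAsync call; the member signatures, the ConsoleBookPresenter class, and the Book record are assumptions made for this illustration:

```csharp
// Hypothetical sketch of the presenter abstraction; the book's actual
// interface may differ.
public record class Book(string Title);

public interface IBookPresenter
{
    Task DisplayAsync(Book book);
    Task DisplayListAsync(IEnumerable<Book> books);
}

// One possible implementation that simply displays every book; an admin
// presenter could filter or decorate the list instead.
public class ConsoleBookPresenter : IBookPresenter
{
    public Task DisplayAsync(Book book)
    {
        Console.WriteLine(book.Title);
        return Task.CompletedTask;
    }

    public async Task DisplayListAsync(IEnumerable<Book> books)
    {
        foreach (var book in books)
        {
            await DisplayAsync(book);
        }
    }
}
```

Because the list-display logic lives behind the interface, the admin and public applications can plug in different implementations without changing their own code.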

There is more to discuss about those simple implementations, like the possibility of supporting multiple implementations of the interfaces for added flexibility, but let’s keep some subjects for later in the book.

When you don’t know how to name a class or a method, you may have identified a problem with your separation of concerns. This is a good indicator that you should go back to the drawing board. Nevertheless, naming is hard, so sometimes, that’s just it.

Keeping our code DRY while following the separation of concerns principles is imperative. Otherwise, what may seem like a good move could become a nightmare.

Separation of concerns (SoC) – Architectural Principles

Before you begin: Join our book community on Discord

Give your feedback straight to the author himself and chat to other early readers on our Discord server (find the “architecting-aspnet-core-apps-3e” channel under EARLY ACCESS SUBSCRIPTION).

https://packt.link/EarlyAccess

This chapter delves into fundamental architectural principles: pillars of contemporary software development practices. These principles help us create flexible, resilient, testable, and maintainable code. We can use them to stimulate critical thinking, fostering our ability to evaluate trade-offs, anticipate potential issues, and create solutions that stand the test of time by influencing our decision-making process and guiding our design choices. As we embark on this journey, we will constantly refer to these principles throughout the book, particularly the SOLID principles, which improve our ability to build flexible and robust software systems. In this chapter, we cover the following topics:

  • The separation of concerns (SoC) principle
  • The DRY principle
  • The KISS principle
  • The SOLID principles

We also revise the following notions:

  • Covariance
  • Contravariance
  • Interfaces

Separation of concerns (SoC)

As its name implies, the idea is to separate our software into logical blocks, each representing a concern. A “concern” refers to a specific aspect of a program. It’s a particular interest or focus within a system that serves a distinct purpose. Concerns could be as broad as data management, as specific as user authentication, or even more specific, like copying an object into another. The Separation of Concerns principle suggests that each concern should be isolated and managed separately to improve the system’s maintainability, modularity, and understandability.

The Separation of Concerns principle applies to all programming paradigms. In a nutshell, this principle means factoring a program into the correct pieces. For example, modules, subsystems, and microservices are macro-pieces, while classes and methods are smaller pieces.

By correctly separating concerns, we can prevent changes in one area from affecting others, allow for more efficient code reuse, and make it easier to understand and manage different parts of a system independently. Here are a few examples:

  • Security and logging are cross-cutting concerns.
  • Rendering a user interface is a concern.
  • Handling an HTTP request is a concern.
  • Copying an object into another is a concern.
  • Orchestrating a distributed workflow is a concern.

Before moving to the DRY principle, remember that it is imperative to consider concerns when dividing software into pieces so that we create cohesive units. A good separation of concerns helps create modular designs and face design dilemmas more effectively, leading to maintainable applications.
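As a tiny, hypothetical sketch of the last example in the list above, the concern of copying an object into another can live in its own class; the UserEntity, UserDto, and UserMapper names are invented here:

```csharp
// Hypothetical sketch: the "copy an object into another" concern isolated
// into a single class instead of being repeated at every call site.
public record class UserEntity(int Id, string Name, string PasswordHash);
public record class UserDto(int Id, string Name);

public class UserMapper
{
    // The mapping concern lives here and nowhere else; the password hash
    // never leaks into the DTO.
    public UserDto ToDto(UserEntity entity)
        => new(entity.Id, entity.Name);
}
```

Centralizing the copy logic means a change to the mapping happens in one place, and callers stay focused on their own concern.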

Minimal hosting – Automated Testing

Unfortunately, we must use a workaround to make the Program class discoverable when using minimal hosting. Let’s explore a few workarounds that leverage minimal APIs, allowing you to pick the one you prefer.

First workaround

The first workaround is to use any other class in the assembly as the TEntryPoint of WebApplicationFactory<TEntryPoint> instead of the Program or Startup class. This makes what WebApplicationFactory does a little less explicit, but that’s all. Since I tend to prefer readable code, I do not recommend this.

Second workaround

The second workaround is to add a line at the bottom of the Program.cs file (or anywhere else in the project) to change the autogenerated Program class visibility from internal to public. Here is the complete Program.cs file with that added line (the last line):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/", () => "Hello World!");
app.Run();
public partial class Program { }

Then, the test cases are very similar to those of the classic web application explored previously. The only difference is the program itself; the two programs don’t do the same thing.

namespace MyMinimalApiApp;
public class ProgramTest : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _httpClient;
    public ProgramTest(
        WebApplicationFactory<Program> webApplicationFactory)
    {
        _httpClient = webApplicationFactory.CreateClient();
    }
    public class Get : ProgramTest
    {
        public Get(WebApplicationFactory<Program> webApplicationFactory)
            : base(webApplicationFactory) { }
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Act
            var result = await _httpClient.GetAsync("/");
            // Assert
            Assert.Equal(HttpStatusCode.OK, result.StatusCode);
        }
        [Fact]
        public async Task Should_respond_hello_world()
        {
            // Act
            var result = await _httpClient.GetAsync("/");
            // Assert
            var contentText = await result.Content.ReadAsStringAsync();
            Assert.Equal("Hello World!", contentText);
        }
    }
}

The only change is the expected result as the endpoint returns the text/plain string Hello World! instead of a collection of strings serialized as JSON. The test cases would be identical if the two endpoints produced the same result.

Third workaround

The third workaround is to instantiate WebApplicationFactory manually instead of leveraging a fixture. We can use the Program class, which requires changing its visibility by adding the following line to the Program.cs file:

public partial class Program { }

However, instead of injecting the instance using the IClassFixture interface, we instantiate the factory manually. To ensure we dispose of the WebApplicationFactory instance, we also implement the IAsyncDisposable interface. Here’s the complete example, which is very similar to the previous workaround:

namespace MyMinimalApiApp;
public class ProgramTestWithoutFixture : IAsyncDisposable
{
    private readonly WebApplicationFactory<Program> _webApplicationFactory;
    private readonly HttpClient _httpClient;
    public ProgramTestWithoutFixture()
    {
        _webApplicationFactory = new WebApplicationFactory<Program>();
        _httpClient = _webApplicationFactory.CreateClient();
    }
    public ValueTask DisposeAsync()
    {
        return ((IAsyncDisposable)_webApplicationFactory)
            .DisposeAsync();
    }
    // Omitted nested Get class
}

I omitted the test cases in the preceding code block because they are the same as the previous workarounds. The full source code is available on GitHub: https://adpg.link/vzkr.

Using class fixtures is more performant since the factory and the server get created only once per test run instead of recreated for every test method.

Test class name – Automated Testing

By convention, I name test classes [class under test]Test.cs and create them in the same directory as in the original project. Finding tests is easy when following that simple rule since the test code sits at the same location in the file tree as the code under test, but in two distinct projects.

Figure 2.8: The Automated Testing Solution Explorer, displaying how tests are organized

Test code inside the test class

For the test code itself, I follow a multi-level structure similar to the following:

  • One test class is named the same as the class under test.
  • One nested test class per method to test from the class under test.
  • One test method per test case of the method under test.

This technique helps organize tests by test case while keeping a clear structure, leading to the following hierarchy:

  • Class under test
  • Method under test
  • Test case using that method

In code, that translates to the following:

namespace MyApp.IntegrationTests.Controllers;
public class ValuesControllerTest
{
    public class Get : ValuesControllerTest
    {
        [Fact]
        public void Should_return_the_expected_strings()
        {
            // Arrange
            var sut = new ValuesController();
            // Act
            var result = sut.Get();
            // Assert
            Assert.Collection(result.Value,
                x => Assert.Equal("value1", x),
                x => Assert.Equal("value2", x)
            );
        }
    }
}

This convention allows you to set up tests step by step. For example, by inheriting the outer class (the ValuesControllerTest class here) from the inner class (the Get nested class), you can create top-level private mocks or classes shared by all nested classes and test methods. Then, for each method to test, you can modify the setup or create other private test elements in the nested classes. Finally, you can do more configuration per test case inside the test method (the Should_return_the_expected_strings method here).
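Here is a hedged sketch of that sharing mechanism with the xUnit attributes and assertions elided so it stands alone; OrderServiceTest and FakeOrderStore are names invented for this illustration:

```csharp
// Sketch only: the outer class holds setup shared by all nested classes,
// which inherit it; real test methods would be decorated with [Fact].
public class FakeOrderStore
{
    private readonly HashSet<string> _orders = new();
    public void Add(string id) => _orders.Add(id);
    public bool Contains(string id) => _orders.Contains(id);
}

public class OrderServiceTest
{
    // Shared by every nested class and test method.
    protected readonly FakeOrderStore Store = new();

    public class PlaceOrder : OrderServiceTest
    {
        // An xUnit [Fact] would assert instead of returning a bool.
        public bool Should_persist_the_order()
        {
            Store.Add("order-1");
            return Store.Contains("order-1");
        }
    }
}
```

Because the nested class inherits the outer one, it gets the shared Store field for free, and each nested class can add or override setup for its own method under test.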

Don’t go too hard on reusability inside your test classes, as it can make tests harder to read for an external eye, such as a reviewer or another developer who needs to work there. Unit tests should remain focused, small, and easy to read: a unit of code testing another unit of code. Too much reusability may lead to a brittle test suite.

Now that we have explored organizing unit tests, let’s look at integration tests.

Integration tests

Integration tests are harder to organize because they depend on multiple units, can cross project boundaries, and interact with various dependencies. We can create one integration test project for most simple solutions or many for more complex scenarios.

When creating one, you can name the project IntegrationTests or start with the entry point of your tests, like a REST API project, and name the project [Name of the API project].IntegrationTests. At this point, how to name the integration test project depends on your solution structure and intent. When you need multiple integration projects, you can follow a convention similar to unit tests and associate your integration projects one-to-one: [Project under test].IntegrationTests.

Inside those projects, it depends on how you want to attack the problem and on the structure of the solution itself. Start by identifying the features under test. Name the test classes in a way that mimics your requirements, organize those into sub-folders (maybe a category or group of requirements), and code test cases as methods. You can also leverage nested classes, as we did with unit tests.

We write tests throughout the book, so you will have plenty of examples to make sense of all this if it’s not clear now.

Next, we implement an integration test by leveraging ASP.NET Core features.

Closing words – Automated Testing

Now that facts, theories, and assertions are out of the way, xUnit offers other mechanics to allow developers to inject dependencies into their test classes. These are named fixtures. Fixtures allow dependencies to be reused by all test methods of a test class by implementing the IClassFixture<T> interface. They are very helpful for costly dependencies, like creating an in-memory database. With fixtures, you can create the dependency once and use it multiple times. The ValuesControllerTest class in the MyApp.IntegrationTests project shows that in action.

It is important to note that xUnit creates a new instance of the test class for every test it runs, so your dependencies are recreated every time if you are not using fixtures. You can also share the dependency provided by the fixture between multiple test classes by using ICollectionFixture<T>, [Collection], and [CollectionDefinition] instead. We won’t get into the details here, but at least you know it’s possible and what types to look for when you need something similar.

Finally, if you have worked with other testing frameworks, you might have encountered setup and teardown methods. In xUnit, there are no particular attributes or mechanisms for handling setup and teardown code. Instead, xUnit uses existing OOP concepts:

  • To set up your tests, use the class constructor.
  • To tear down (clean up) your tests, implement IDisposable or IAsyncDisposable and dispose of your resources there.

That’s it; xUnit is very simple yet powerful, which is why I adopted it as my main testing framework several years ago and chose it for this book. Next, we learn to write readable test methods.

Arrange, Act, Assert

Arrange, Act, Assert (AAA or 3A) is a well-known method for writing readable tests. This technique allows you to clearly define your setup (arrange), the operation under test (act), and your assertions (assert). One efficient way to use this technique is to start by writing the 3A as comments in your test case and then write the test code in between. Here is an example:

[Fact]
public void Should_be_equals()
{
    // Arrange
    var a = 1;
    var b = 2;
    var expectedResult = 3;
    // Act
    var result = a + b;
    // Assert
    Assert.Equal(expectedResult, result);
}

Of course, that test case cannot fail, but the three blocks are easily identifiable with the 3A comments. In general, you want the Act block of your unit tests to be a single line, making the focus of the test clear. If you need more than one line, chances are that something is wrong in the test or the design.

When the tests are very small (only a few lines), removing the comments might help readability. Furthermore, when you have nothing to set up in your test case, delete the Arrange comment to improve its readability further.

Next, we learn how to organize tests into projects, directories, and files.

Assertions – Automated Testing

An assertion is a statement that checks whether a particular condition is true or false. If the condition is true, the test passes. If the condition is false, the test fails, indicating a problem with the subject under test. Let’s visit a few ways to assert correctness. We use barebone xUnit functionality in this section, but you can bring in the assertion library of your choice if you have one.

In xUnit, the assertion throws an exception when it fails, but you may never even realize that. You do not have to handle those; that’s the mechanism to propagate the failure result to the test runner.

We won’t explore all possibilities, but let’s start with the following shared pieces:

public class AssertionTest
{
    [Fact]
    public void Exploring_xUnit_assertions()
    {
        object obj1 = new MyClass { Name = "Object 1" };
        object obj2 = new MyClass { Name = "Object 1" };
        object obj3 = obj1;
        object? obj4 = default(MyClass);
        //
        // Omitted assertions
        //
        static void OperationThatThrows(string name)
        {
            throw new SomeCustomException { Name = name };
        }
    }
    private record class MyClass
    {
        public string? Name { get; set; }
    }
    private class SomeCustomException : Exception
    {
        public string? Name { get; set; }
    }
}

The preceding MyClass record, the SomeCustomException class, the OperationThatThrows local function, and the variables are utilities used in the test to help us play with xUnit assertions. The variables are of type object for exploration purposes, but you can use any type in your test cases. I omitted the assertion code that we are about to see to keep the code leaner. The following two assertions are very explicit:

Assert.Equal(expected: 2, actual: 2);
Assert.NotEqual(expected: 2, actual: 1);

The first compares whether the actual value equals the expected value, while the second compares if the two values are different. Assert.Equal is probably the most commonly used assertion method.

As a rule of thumb, it is better to assert equality (Equal) than assert that the values are different (NotEqual). Except in a few rare cases, asserting equality will yield more consistent results and close the door to missing defects.

The next two assertions are very similar to the equality ones but assert that the objects are the same instance or not (the same instance means the same reference):

Assert.Same(obj1, obj3);
Assert.NotSame(obj2, obj3);

The next one validates that the two objects are equal. Since we are using record classes, it makes it super easy for us; obj1 and obj2 are not the same (two instances) but are equal (see Appendix A for more information on record classes):

Assert.Equal(obj1, obj2);

The next two are very similar and assert that the value is null or not:

Assert.Null(obj4);
Assert.NotNull(obj3);

The next line asserts that obj1 is of the MyClass type and then returns the argument (obj1) converted to the asserted type (MyClass). If the type is incorrect, the IsType method will throw an exception:

var instanceOfMyClass = Assert.IsType<MyClass>(obj1);

Then we reuse the Assert.Equal method to validate that the value of the Name property is what we expect:

Assert.Equal(expected: "Object 1", actual: instanceOfMyClass.Name);

The following code block asserts that the testCode argument throws an exception of the SomeCustomException type:

var exception = Assert.Throws<SomeCustomException>(
    testCode: () => OperationThatThrows("Toto")
);

The testCode argument executes the OperationThatThrows local function we saw initially. The Throws method allows us to test some exception properties by returning the exception in the specified type. The same behavior as with the IsType method applies here; if the exception is of the wrong type or no exception is thrown, the Throws method fails the test.

It is a good idea to ensure that not only the proper exception type is thrown, but the exception carries the correct values as well.

The following line asserts that the value of the Name property is what we expect it to be, ensuring our program would propagate the proper exception:

Assert.Equal(expected: "Toto", actual: exception.Name);

We covered a few assertion methods, but many others are part of xUnit, like the Collection, Contains, False, and True methods. We use many assertions throughout the book, so if these are still unclear, you will learn more about them. Next, let’s look at data-driven test cases using theories.

How to create an xUnit test project – Automated Testing

To create a new xUnit test project, you can run the dotnet new xunit command, and the CLI does the job for you by creating a project containing a UnitTest1 class. That command does the same as creating a new xUnit project from Visual Studio. For unit testing projects, name the project the same as the project you want to test and append .Tests to it. For example, MyProject would have a MyProject.Tests project associated with it. We explore more details in the Organizing your tests section below. The template already defines all the required NuGet packages, so you can start testing immediately after adding a reference to your project under test.

You can also add project references using the CLI with the dotnet add reference command. Assuming we are in the ./test/MyProject.Tests directory and the project file we want to reference is in the ./src/MyProject directory, we can execute the following command to add the reference:

dotnet add reference ../../src/MyProject/MyProject.csproj

Next, we explore some xUnit features that will allow us to write test cases.

Key xUnit features

In xUnit, the [Fact] attribute is the way to create unique test cases, while the [Theory] attribute is the way to make data-driven test cases. Let’s start with facts, the simplest way to write a test case.

Facts

Any method with no parameter can become a test method by decorating it with a [Fact] attribute, like this:

public class FactTest
{
    [Fact]
    public void Should_be_equal()
    {
        var expectedValue = 2;
        var actualValue = 2;
        Assert.Equal(expectedValue, actualValue);
    }
}

You can also decorate asynchronous methods with the fact attribute when the code under test needs it:

public class AsyncFactTest
{
    [Fact]
    public async Task Should_be_equal()
    {
        var expectedValue = 2;
        var actualValue = 2;
        await Task.Yield();
        Assert.Equal(expectedValue, actualValue);
    }
}

In the preceding code, the await Task.Yield() line conceptually represents an asynchronous operation and does nothing more than allow the use of the async/await keywords. When we run the tests from Visual Studio’s Test Explorer, the test run result looks like this:

Figure 2.3: Test results in Visual Studio

You may have noticed from the screenshot that the test classes are nested in the xUnitFeaturesTest class, part of the MyApp namespace, and under the MyApp.Tests project. We explore those details later in the chapter. Running the dotnet test CLI command should yield a result similar to the following:

Passed! 
– Failed:     0, Passed:    23, Skipped:     0, Total:    23, Duration: 22 ms – MyApp.Tests.dll (net8.0)

As we can read from the preceding output, all tests are passing, none have failed, and none were skipped. It is as simple as that to create test cases using xUnit.

Learning the CLI can be very helpful for creating and debugging CI/CD pipelines, and you can use its commands, like dotnet test, in any script (bash, PowerShell, and so on).

Have you noticed the Assert keyword in the test code? If you are not familiar with it, we will explore assertions next.

Conclusion – Automated Testing

White-box testing includes unit and integration tests. Those tests run fast, and developers use them to improve the code and test complex algorithms. However, writing a large quantity of those tests takes time. Due to the proximity to the code, it is easier to write brittle tests that are tightly coupled with the code itself, increasing the maintenance cost of such test suites. It also makes it prone to overengineering your application in the name of testability.

Black-box testing encompasses different types of tests that tend towards end-to-end testing. Since the tests target the external surface of the system, they are less likely to break when the system changes. Moreover, they are excellent at testing behaviors, and since each test covers an end-to-end use case, we need fewer of them, leading to lower writing and maintenance costs. Testing the whole system has drawbacks, including the slowness of executing each test, so combining black-box testing with other types of tests is very important to find the right balance between the number of tests, test case coverage, and speed of execution.

Grey-box testing is a fantastic mix of the two others; you can treat any part of the software as a black box, leverage your knowledge of its inner workings to mock or stub parts of the test case (for example, to assert whether the system persisted a record in the database), and test end-to-end scenarios more efficiently. It brings the best of both worlds, significantly reducing the number of tests while considerably increasing the test surface of each test case. However, doing grey-box testing on smaller units or heavily mocking the system may yield the same drawbacks as white-box testing. Integration tests or almost-E2E tests are good candidates for grey-box testing. We implement grey-box testing use cases in Chapter 16, Request-Endpoint-Response (REPR).

Meanwhile, let’s explore a few techniques to help optimize our test case creation, like testing a small subset of values to assert the correctness of our programs while writing an optimal number of tests.

Test case creation

Multiple ways exist to break down and create test cases to help find software defects with a minimal test count. Here are some techniques to help minimize the number of tests while maximizing the test coverage:

  • Equivalence Partitioning
  • Boundary Value Analysis
  • Decision Table Testing
  • State Transition Testing
  • Use Case Testing

I present the techniques theoretically. They apply to all sorts of tests and should help you write better test suites. Let’s have a quick look at each.
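As a quick taste of the first two techniques, assume a hypothetical rule stating that orders of $50 or more ship for free. Equivalence partitioning yields two partitions (below 50, and 50 or above), and boundary value analysis picks the values at and around the boundary:

```csharp
// Hypothetical rule under test; the 50.00 threshold is invented for
// this illustration.
public static class ShippingPolicy
{
    public static bool QualifiesForFreeShipping(decimal orderTotal)
        => orderTotal >= 50m;
}
```

The interesting inputs become 49.99 (just below the boundary), 50.00 (on it), and 50.01 (just above it), plus one representative value per partition, instead of a pile of arbitrary amounts.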